Search Results

Search found 60391 results on 2416 pages for 'data generation'.

  • Issue with dynamic array Queue data structure with void pointer

    - by Nazgulled
    Hi. Maybe there's no way to solve this the way I'd like, but I don't know everything, so I'd better ask... I've implemented a simple Queue with a dynamic array so the user can initialize it with whatever number of items they want. I'm also trying to use a void pointer so as to allow any data type, but that's the problem. Here's my code: typedef void * QueueValue; typedef struct sQueueItem { QueueValue value; } QueueItem; typedef struct sQueue { QueueItem *items; int first; int last; int size; int count; } Queue; void queueInitialize(Queue **queue, size_t size) { *queue = xmalloc(sizeof(Queue)); QueueItem *items = xmalloc(sizeof(QueueItem) * size); (*queue)->items = items; (*queue)->first = 0; (*queue)->last = 0; (*queue)->size = size; (*queue)->count = 0; } Bool queuePush(Queue * const queue, QueueValue value, size_t val_sz) { if(isNull(queue) || isFull(queue)) return FALSE; queue->items[queue->last].value = xmalloc(val_sz); memcpy(queue->items[queue->last].value, value, val_sz); queue->last = (queue->last+1) % queue->size; queue->count += 1; return TRUE; } Bool queuePop(Queue * const queue, QueueValue *value) { if(isEmpty(queue)) return FALSE; *value = queue->items[queue->first].value; free(queue->items[queue->first].value); queue->first = (queue->first+1) % queue->size; queue->count -= 1; return TRUE; } The problem lies in the queuePop function. When I call it, I lose the value because I free it right away. I can't seem to solve this dilemma. I want my library to be generic and modular. The user should not have to care about allocating and freeing memory; that's the library's job. How can the user still get the value from queuePop while the library handles all memory allocations and frees?

    Read the article

  • Migration Core Data error code 134130

    - by magichero
    I want to migrate two Core Data databases, and I have read the Apple developer documentation. For the first database, I added some attributes (string, integer, and date properties) to the new model version, followed all the steps, and the migration succeeded. For the second database, I also added attributes (string, integer, date, transformable, and binary-data properties) to the new model version and followed the same steps as for the first database, but the system returns an error (134130). Here is the code: if (persistentStoreCoordinator_) { PMReleaseSafely(persistentStoreCoordinator_); } // Notify NSNotificationCenter *nc = [NSNotificationCenter defaultCenter]; [nc postNotificationName:GCalWillMigrationNotification object:self]; // NSString *sourceStoreType = NSSQLiteStoreType; NSString *dataStorePath = [PMUtility dataStorePathForName:GCalDBWarehousePersistentStoreName]; NSURL *storeURL = [NSURL fileURLWithPath:dataStorePath]; BOOL storeExists = [[NSFileManager defaultManager] fileExistsAtPath:dataStorePath]; // NSError *error = nil; NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption, [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil]; persistentStoreCoordinator_ = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:self.managedObjectModel]; [persistentStoreCoordinator_ addPersistentStoreWithType:sourceStoreType configuration:nil URL:storeURL options:options error:&error]; if (error != nil) { abort(); } error is not nil, and below is the log: Error Domain=NSCocoaErrorDomain Code=134130 "The operation couldn\u2019t be completed. (Cocoa error 134130.)" UserInfo=0x856f790 {URL=file://localhost/Users/greensun/Library/Application%20Support/iPhone%20Simulator/5.0/Applications/D10712DE-D9FE-411A-8182-C4F58C60EC6D/Library/Application%20Support/Promise%20Mail/GCalDBWarehouse.sqlite, metadata={type = immutable dict, count = 7, entries = 2 : {contents = "NSStoreModelVersionIdentifiers"} = {type = immutable, count = 1, values = ( 0 : {contents = ""} )} 4 : {contents = "NSPersistenceFrameworkVersion"} = {value = +386, type = kCFNumberSInt64Type} 6 : {contents = "NSStoreModelVersionHashes"} = {type = immutable dict, count = 2, entries = 0 : {contents = "SyncEvent"} = {length = 32, capacity = 32, bytes = 0xfdae355f55c13fbd0344415fea26c8bb ... 4c1721aadd4122aa} 1 : {contents = "ImportEvent"} = {length = 32, capacity = 32, bytes = 0x7676888f0d7eaff4d1f844343028ce02 ... 040af6cbe8c5fd01} } 7 : {contents = "NSStoreUUID"} = {contents = "51678BAC-CCFB-4D00-AF5C-8FA1BEDA6440"} 8 : {contents = "NSStoreType"} = {contents = "SQLite"} 9 : {contents = "_NSAutoVacuumLevel"} = {contents = "2"} 10 : {contents = "NSStoreModelVersionHashesVersion"} = {value = +3, type = kCFNumberSInt32Type} }, reason=Can't find model for source store} I have tried a lot of solutions, but nothing works. I only added more attributes to the two new model versions, and only the first migration succeeds. Thanks for reading and for your help.

    Read the article

  • Multiple views and source list in a Core Data app

    - by Ellie P.
    I'm working on my first major Cocoa app for an undergraduate research project. The application is document-based and uses Core Data. One of the entities is an abstract entity, Page. Page is parent of several types of pages: ie PageWithHeaderAndFooter, PageWithTwoColumns, BasicPage etc. Page has attributes, such as title and author, that all pages have in common. Each specific type of page has a certain number of layout blocks (PageWithHeaderAndFooter has three: header, footer, body. BasicPage has one: body. etc.) Additionally, all Page subclasses define layout-specific implementations of certain methods. The other relevant entity is Style, which defines the visual look of a Page. (Think of Pages as HTML and Style as CSS.) I would like my app to have an iTunes/Mail-like source list with sections. (One section would be Pages, the other would be Styles.) I have a pretty good idea how to do the sectioned source list (this was a great help). However, after hours of headbanging and fruitless googling, here's what I can't figure out: Pages and Styles listed in the source list, and when you select one of them, all of the relevant fields for that object appear at the right (mostly NSTextViews, pop up menus, etc). I laid that out and did all of the bindings in Interface Builder. The problem is, if my source list contains different types of pages, how do I get a different view to display at the right depending on the type of page selected? For example, if a BasicPage is selected, I want just what you see above: the general page stuff and one NSTextView that corresponds to the one field body of BasicPage. But if I select a PageWithHeaderAndFooter, I want to display the general page stuff plus three NSTextViews (one for header, body, and footer.) If I have a Style selected, I want to display various pop up menus, color wells, etc. For the pages at least, we're only talking about one or more NSTextViews, each of which corresponds to a String attribute of the respective entity. How would you do this? Thank you for your help!

    Read the article

  • Crash in OS X Core Data Utility Tutorial

    - by vinogradov
    I'm trying to follow Apple's Core Data utility Tutorial. It was all going nicely, until... The tutorial uses a custom sub-class of NSManagedObject, called 'Run'. Run.h looks like this: #import <Foundation/Foundation.h> #import <CoreData/CoreData.h> @interface Run : NSManagedObject { NSInteger processID; } @property (retain) NSDate *date; @property (retain) NSDate *primitiveDate; @property NSInteger processID; @end Now, in Run.m we have an accessor method for the processID variable: - (void)setProcessID:(int)newProcessID { [self willChangeValueForKey:@"processID"]; processID = newProcessID; [self didChangeValueForKey:@"processID"]; } In main.m, we use functions to set up a managed object model and context, instantiate an entity called run, and add it to the context. We then get the current NSprocessInfo, in preparation for setting the processID of the run object. NSManagedObjectContext *moc = managedObjectContext(); NSEntityDescription *runEntity = [[mom entitiesByName] objectForKey:@"Run"]; Run *run = [[Run alloc] initWithEntity:runEntity insertIntoManagedObjectContext:moc]; NSProcessInfo *processInfo = [NSProcessInfo processInfo]; Next, we try to call the accessor method defined in Run.m to set the value of processID: [run setProcessID:[processInfo processIdentifier]]; And that's where it's crashing. The object run seems to exist (I can see it in the debugger), so I don't think I'm messaging nil; on the other hand, it doesn't look like the setProcessID: message is actually being received. I'm obviously still learning this stuff (that's what tutorials are for, right?), and I'm probably doing something really stupid. However, any help or suggestions would be gratefully received! ===MORE INFORMATION=== Following up on Jeremy's suggestions: The processID attribute in the model is set up like this: NSAttributeDescription *idAttribute = [[NSAttributeDescription alloc]init]; [idAttribute setName:@"processID"]; [idAttribute setAttributeType:NSInteger32AttributeType]; [idAttribute setOptional:NO]; [idAttribute setDefaultValue:[NSNumber numberWithInteger:-1]]; which seems a little odd; we are defining it as a scalar type, and then giving it an NSNumber object as its default value. In the associated class, Run, processID is defined as an NSInteger. Still, this should be OK - it's all copied directly from the tutorial. It seems to me that the problem is probably in there somewhere. By the way, the getter method for processID is defined like this: - (int)processID { [self willAccessValueForKey:@"processID"]; NSInteger pid = processID; [self didAccessValueForKey:@"processID"]; return pid; } and this method works fine; it accesses and unpacks the default int value of processID (-1). Thanks for the help so far!

    Read the article

  • Solr Facet Search spring-data-solr

    - by sv1
    I am new to Solr and we are using Spring Data for Solr. I have a question; maybe it's too simple, but I am unable to work it out. Basically I need to search on any field, but I have a "zip" field as one of the facet fields. I have a Solr repository. (I am not sure the annotations on the repository are correct.) public interface MyRepository extends SolrCrudRepository<MyPOJO, String> { @Query(value = "*:*") @Facet(fields={"zip"}) public FacetPage<MyPOJO> findByQueryandAnno(String searchTerm,Pageable page); } In my service class I am trying to call these methods by injecting MyRepository, like below: public class MyService { @Inject MyRepository solrRepo; @Inject SolrTemplate solrTemplate; public FacetPage<MyPOJO> tryFacets(String searchString){ //Here is where I am struggling SimpleFacetQuery query = new SimpleQuery(new SimpleStringCriteria(searchString)); query.setFacetOptions(new FacetOptions("zip")); //Not sure how to get the Pageable object. //But the repository doesnt accept without it. return solrTemplate.queryForPage(query,{Pageable Instance to be passed here}) } From my JUnit test I am loading the context files needed for Solr and Spring. //In jUnit the test method looks like this service.tryFacets("some value"); and it fails with "method not found" in the service class at the call to the repository method. *********EDIT**************** As per ChristophStrobl's advice I created a copyField called multivaluedCopyField, and the searchString argument now works. But the facet still isn't working... Now my code looks like this: I get the MyPOJO object as a response, but the facet count and the faceted values are missing. public interface MyRepository extends SolrCrudRepository<MyPOJO, String> { @Query(value = "*:*") @Facet(fields={"zip"}) public FacetPage<MyPOJO> findByQueryandAnno(String searchTerm,Pageable page); } My service class looks like public class MyService { @Inject MyRepository solrRepo; public FacetPage<MyPOJO> tryFacets(String searchString){ //Here is where I am struggling SimpleFacetQuery query = new SimpleQuery(new SimpleStringCriteria(searchString)).setPageRequest(new PageRequest(0,5)); query.setFacetOptions(new FacetOptions("zip")); FacetPage<MyPOJO> facetedPOJO= solrRepo.findByQueryandAnno(searchString,query.getPageRequest()); return facetedPOJO; } My jUnit method call is like service.tryFacets("some value");
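
    For what it's worth, here is a rough sketch (assuming the spring-data-solr 1.x SolrTemplate API and reusing MyPOJO from above; the FacetLookup class name is made up) of running the same FacetQuery through SolrTemplate and then reading the zip facet entries off the returned FacetPage, which is where the counts live rather than on the entity results themselves:

        import org.springframework.data.domain.Page;
        import org.springframework.data.domain.PageRequest;
        import org.springframework.data.solr.core.SolrTemplate;
        import org.springframework.data.solr.core.query.FacetOptions;
        import org.springframework.data.solr.core.query.SimpleFacetQuery;
        import org.springframework.data.solr.core.query.SimpleStringCriteria;
        import org.springframework.data.solr.core.query.result.FacetFieldEntry;
        import org.springframework.data.solr.core.query.result.FacetPage;

        public class FacetLookup {
            private final SolrTemplate solrTemplate;

            public FacetLookup(SolrTemplate solrTemplate) {
                this.solrTemplate = solrTemplate;
            }

            // Runs the search and prints each zip value with its document count.
            public FacetPage<MyPOJO> facetByZip(String searchString) {
                SimpleFacetQuery query = new SimpleFacetQuery(new SimpleStringCriteria(searchString));
                query.setPageRequest(new PageRequest(0, 5));
                query.setFacetOptions(new FacetOptions("zip"));

                FacetPage<MyPOJO> page = solrTemplate.queryForFacetPage(query, MyPOJO.class);
                Page<FacetFieldEntry> zipFacets = page.getFacetResultPage("zip");
                for (FacetFieldEntry entry : zipFacets) {
                    System.out.println(entry.getValue() + " -> " + entry.getValueCount());
                }
                return page;
            }
        }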

    Read the article

  • Principles of Big Data By Jules J Berman, O’Reilly Media Book Review

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/11/04/principles-of-big-data-by-jules-j-berman-orsquoreilly-media.aspx A fantastic book! It should be part, if it is not already, of the fundamentals of Big Data as a field of science, and I highly recommend it to those who practice Big Data. I confess this book is one of my best reads this year, for a number of reasons: it is full of wisdom, intimate insight, historical facts, and real-life examples of how Big Data projects get conceived, operate, and, sadly, sometimes die. Not only that, the book is, most importantly, filled with valuable advice and an accurate, even overwhelming (in the positive sense) amount of references, and the author does not even stop there: there are numerous technical excerpts, links, and examples that let you quickly accomplish many daunting tasks, or that make you aware of what you need to do as a data practitioner (excuse my use of the word practitioner; I just did not find a better substitute when trying to refer to everyone who faces Big Data). Be aware that Jules Berman's background is in medicine, so naturally the book discusses that subject a lot, as it is very dear to the author's heart, I believe. This does not make the book any less significant; quite the opposite. I trust that if there is an area of science or practice where the biggest benefits can be reaped from Big Data projects, it is indeed medical science: let's make cancer history! On a personal note, as a database and BI professional, it helped me better understand the motives behind Big Data initiatives, the underwater rivers and high-altitude winds that divert or propel them forward. Additionally, I was impressed by the depth and number of mining algorithms covered. This made me very curious and tempted to learn more about these indispensable attributes of Big Data, so I will surely be stretching my wallet to acquire several books that go into more depth on the most popular of them. My favorite parts of the book: well, all of them actually, but especially chapter 9, Analysis; it is just very close to my heart, and the real reason is that it let me see what I do with data from a different angle. And then the next one, "Special Considerations"; they are simply two logical parts. The writing is accessible at all levels, and I had no technical problem reading it in ebook format on my 8" tablet or on a large-screen monitor. If I were asked to say at least something negative, I have to state that I initially had the feeling that the book's first part reads like academic material, though it relaxes the reader as it progresses. I admit I am impressed with Jules' ability to use several programming languages and OSS tools; bravo! And I agree, it is not too, too hard to grasp at least the principles of a modern programming language, which seems to have become a de facto standard item of knowledge for any modern human being. So grab a copy of this book and read it end to end to shield yourself from making mistakes at any stage of your Big Data initiative; by the way, this book also helps you build better future Big Data projects. Disclaimer: I received a free electronic copy of this book as part of the O'Reilly Blogger Program.

    Read the article

  • Optimal storage of data structure for fast lookup and persistence

    - by Mikael Svenson
    Scenario I have the following methods: public void AddItemSecurity(int itemId, int[] userIds) public int[] GetValidItemIds(int userId) Initially I'm thinking of storage of the form: itemId -> userId, userId, userId and userId -> itemId, itemId, itemId AddItemSecurity is based on how I get data from a third-party API; GetValidItemIds is how I want to use it at runtime. There are potentially 2000 users and 10 million items. Item ids are of the form 2007123456, 2010001234 (10 digits where the first four represent the year). AddItemSecurity does not have to perform super fast, but GetValidIds needs to be subsecond. Also, if there is an update on an existing itemId, I need to remove that itemId for users no longer in the list. I'm trying to think about how I should store this in an optimal fashion. Preferably on disk (with caching), but I want the code maintainable and clean. If the item ids had started at 0, I thought about creating a byte array of length MaxItemId / 8 for each user, and setting a true/false bit if the item was present or not. That would limit the array length to a little over 1 MB per user and give fast lookups as well as an easy way to update the list per user. By persisting this as Memory Mapped Files with the .Net 4 framework, I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the id, stripping out the year, and storing an array per year could be a solution. The ItemId - UserId[] list can be serialized directly to disk and read/written with a normal FileStream in order to persist the list and diff it when there are changes. Each time a new user is added, all the lists have to be updated as well, but this can be done nightly. Question Should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL Server will not perform fast enough, and it would add overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thoughts or insights on the matter are appreciated. And I want to try to solve it without adding too much hardware :) [Update 2010-03-31] I have now tested with SQL Server 2008 under the following conditions. Table with two columns (userid, itemid), both Int Clustered index on the two columns Added ~800,000 items for 180 users - Total of 144 million rows Allocated 4 GB RAM for SQL Server Dual Core 2.66 GHz laptop SSD disk Use a SqlDataReader to read all itemids into a List Loop over all users If I run one thread it averages 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still OK. From there on the results get worse. Adding a third thread brings a lot of the queries up to 2 seconds. A fourth thread, up to 4 seconds; a fifth spikes some of the queries up to 50 seconds. The CPU is maxed out while this is going on, even on one thread. My test app takes some of the time due to the speedy loop, and SQL takes the rest. Which leads me to the conclusion that it won't scale very well. At least not on my tested hardware. Are there ways to optimize the database, say storing an array of ints per user instead of one record per item? But this makes it harder to remove items.
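
    To make the bitmap idea above concrete, here is a minimal in-memory sketch, written in Java rather than .NET and without the memory-mapped persistence layer (class and method names are made up): one bit set per user per year, indexed by the item id with the four-digit year prefix stripped off, which is what keeps lookups and per-user updates cheap.

        import java.util.ArrayList;
        import java.util.BitSet;
        import java.util.Collections;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class ItemSecurityIndex {

            // userId -> (year -> bit set of item sequence numbers granted for that year)
            private final Map<Integer, Map<Integer, BitSet>> index = new HashMap<>();

            // Grant access: called when data arrives from the third-party API.
            public void addItemSecurity(int itemId, int[] userIds) {
                int year = itemId / 1_000_000;   // first four digits of the 10-digit id
                int seq = itemId % 1_000_000;    // remaining six digits
                for (int userId : userIds) {
                    index.computeIfAbsent(userId, u -> new HashMap<>())
                         .computeIfAbsent(year, y -> new BitSet())
                         .set(seq);
                }
            }

            // Runtime lookup: rebuild the full item ids from the per-year bit sets.
            public int[] getValidItemIds(int userId) {
                Map<Integer, BitSet> byYear = index.getOrDefault(userId, Collections.emptyMap());
                List<Integer> result = new ArrayList<>();
                for (Map.Entry<Integer, BitSet> entry : byYear.entrySet()) {
                    BitSet bits = entry.getValue();
                    for (int seq = bits.nextSetBit(0); seq >= 0; seq = bits.nextSetBit(seq + 1)) {
                        result.add(entry.getKey() * 1_000_000 + seq);
                    }
                }
                return result.stream().mapToInt(Integer::intValue).toArray();
            }
        }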

    Read the article

  • No data when attempting to get JSONP data from cross domain PHP script

    - by Alex
    I am trying to pull latitude and longitude values from another server on a different domain using a singe id string. I am not very familiar with JQuery, but it seems to be the easiest way to go about getting around the same origin problem. I am unable to use iframes and I cannot install PHP on the server running this javascript, which is forcing my hand on this. My queries appear to be going through properly, but I am not getting any results back. I was hoping someone here might have an idea that could help, seeing as I probably wouldn't recognize most obvious errors here. My javascript function is: var surl = "http://...omitted.../pull.php"; var idnum = 5a; //in practice this is defined above alert("BEFORE"); $.ajax({ url: surl, data: {id: idnum}, dataType: "jsonp", jsonp : "callback", jsonp: "jsonpcallback", success: function (rdata) { alert(rdata.lat + ", " + rdata.lon); } }); alert("BETWEEN"); function jsonpcallback(rtndata) { alert("CALLED"); alert(rtndata.lat + ", " + rtndata.lon); } alert("AFTER"); When my javascript is run, the BEFORE, BETWEEN and AFTER alerts are displayed. The CALLED and other jsonpcallback alerts are not shown. Is there another way to tell if the jsoncallback function has been called? Below is the PHP code I have running on the second server. I added the count table to my database just so that I can tell when this script is run. Every time I call the javascript, count has had an extra item inserted and the id number is correct. <?php header("content-type: application/json"); if (isset($_GET['id']) || isset($_POST['id'])){ $db_handle = mysql_connect($server, $username, $password); if (!$db_handle) { die('Could not connect: ' . mysql_error()); } $db_found = mysql_select_db($database, $db_handle); if ($db_found) { if (isset($_POST['id'])){ $SQL = sprintf("SELECT * FROM %s WHERE loc_id='%s'", $loctable, mysql_real_escape_string($_POST['id'])); } if (isset($_GET['id'])){ $SQL = sprintf("SELECT * FROM %s WHERE loc_id='%s'", $loctable, mysql_real_escape_string($_GET['id'])); } $result = mysql_query($SQL, $db_handle); $db_field = mysql_fetch_assoc($result); $rtnjsonobj -> lat = $db_field["lat"]; $rtnjsonobj -> lon = $db_field["lon"]; if (isset($_POST['id'])){ echo $_POST['jsonpcallback']. '('. json_encode($rtnjsonobj) . ')'; } if (isset($_GET['id'])){ echo $_GET['jsonpcallback']. '('. json_encode($rtnjsonobj) . ')'; } $SQL = sprintf("INSERT INTO count (bullshit) VALUES ('%s')", $_GET['id']); $result = mysql_query($SQL, $db_handle); $db_field = mysql_fetch_assoc($result); } mysql_close($db_handle); } else { $rtnjsonobj -> lat = 404; $rtnjsonobj -> lon = 404; echo $_GET['jsonpcallback']. '('. json_encode($rtnjsonobj) . ')'; }?> I am not entirely sure if the jsonp returned by this PHP is correct. When I go directly to the PHP script without including any parameters, I do get the following. ({"lat":404,"lon":404}) The callback function is not included, but that much can be expected when it isn't included in the original call. Does anyone have any idea what might be going wrong here? Thanks in advance!

    Read the article

  • How do audio based games such as Audiosurf and Beat Hazard work?

    - by The Communist Duck
    Note: I am not asking how to make a clone of one of these. I am asking about how they work. I'm sure everyone's seen the games where you use your own music files (or provided ones) and the game produces levels based on them, such as Audiosurf and Beat Hazard. Here is a video of Audiosurf in action, to show what I mean. If you provide a heavy metal song, you get a completely different set of obstacles, enemies, and game experience from something like Vivaldi. What does interest me is how these games work. I do not know much about audio (well, on the data side), but how do they process the song to understand when it is settling down or when it's speeding up? I guess they could just feed the pitch values (assuming those sorts of things exist in audio files) into forming a level, but that wouldn't fully explain it. I'm looking for an explanation, some links to articles about this sort of thing (I'm sure there's a term or terms for it), or even an open-source implementation of this kind of thing ;-) EDIT: After some searching and a little help, I found out about FFT (Fast Fourier Transform). This may be a step in the right direction, but it is something that does not make any sense to me... or fit with my physics knowledge of waves.
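
    As a toy illustration of the kind of processing involved (not how Audiosurf itself works, which isn't public), the sketch below splits decoded PCM samples into short windows, computes each window's energy, and flags windows whose energy jumps well above the recent average; a game could spawn obstacles or tempo changes at those points. It assumes the track has already been decoded to mono floating-point samples.

        import java.util.ArrayList;
        import java.util.List;

        public class BeatDetector {

            /**
             * Returns the indices of windows whose energy is noticeably higher than
             * the average of the windows just before them ("onsets", roughly beats).
             *
             * samples    decoded mono PCM samples in the range [-1, 1]
             * windowSize samples per window (e.g. 1024 at 44.1 kHz, about 23 ms)
             */
            public static List<Integer> detectOnsets(float[] samples, int windowSize) {
                int windowCount = samples.length / windowSize;
                double[] energy = new double[windowCount];
                for (int w = 0; w < windowCount; w++) {
                    double sum = 0;
                    for (int i = 0; i < windowSize; i++) {
                        float s = samples[w * windowSize + i];
                        sum += s * s;                      // energy = sum of squared amplitudes
                    }
                    energy[w] = sum / windowSize;
                }

                List<Integer> onsets = new ArrayList<>();
                int history = 43;                          // about 1 second of 1024-sample windows
                for (int w = history; w < windowCount; w++) {
                    double avg = 0;
                    for (int h = w - history; h < w; h++) {
                        avg += energy[h];
                    }
                    avg /= history;
                    if (energy[w] > 1.4 * avg) {           // threshold is a tunable constant
                        onsets.add(w);
                    }
                }
                return onsets;
            }
        }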

    Read the article

  • JAVA-SQL- Data Migration - ResultSets comparing Failing JUnit test

    - by user1865053
    I CANNOT get this JUnit test to pass for the life of me. Can somebody point out where this has gone wrong? I am doing a data migration (MSSQL SERVER 2005), but I have set the sourceDBUrl and the targetDBUrl to the same URL so as to narrow it down to syntax errors. So that is what I have: a syntax error. I am comparing the results of a table for the query SELECT programmeapproval, resourceapproval FROM tr_timesheet WHERE timesheetid = ? and the test always fails, though the same approach passes in other JUnit tests I have developed. I created 3 different resultSetsEqual methods and none of them works. Yet some other JUnit tests I have developed have PASSED. THE QUERY: SELECT timesheetid, programmeapproval, resourceapproval FROM tr_timesheet Returns three columns timesheetid (PK, int, not null) (populated with a range of numbers 2240 - 2282) programmeapproval (smallint, not null) (populated with the number 1 in every field) resourceapproval (smallint, not null) (populated with the number 1 in every field) When I run the query that is embedded in the code, it only returns one row with the programmeapproval and resourceapproval columns and both fields populated with the number 1. I have all JDBC drivers correctly installed and tested for connectivity. The JUnit test is failing at this point according to the IDE: assertTrue(helper.resultSetsEqual2(sourceVal,targetVal)); This is the code: /*THIS IS A JUNIT CLASS****/ package a7.unittests.dao; import static org.junit.Assert.assertTrue; import java.sql.Connection; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.Types; import org.junit.Test; import artemispm.tritonalerts.TimesheetAlert; public class UnitTestTimesheetAlert { @Test public void testQUERY_CHECKALERT() throws Exception{ UnitTestHelper helper = new UnitTestHelper(); Connection con = helper.getConnection(helper.sourceDBUrl); Connection conTarget = helper.getConnection(helper.targetDBUrl); PreparedStatement stmt = con.prepareStatement("select programmeapproval, resourceapproval from tr_timesheet where timesheetid = ?"); stmt.setInt(1, 2240); ResultSet sourceVal = stmt.executeQuery(); stmt = conTarget.prepareStatement("select programmeapproval, resourceapproval from tr_timesheet where timesheetid = ?"); stmt.setInt(1,2240); ResultSet targetVal = stmt.executeQuery(); assertTrue(helper.resultSetsEqual2(sourceVal,targetVal)); }} /*END**/ /*THIS IS A REGULAR CLASS**/ package a7.unittests.dao; import java.sql.Connection; import java.sql.DriverManager; import java.sql.ResultSet; import java.sql.ResultSetMetaData; import java.sql.SQLException; public class UnitTestHelper { static String sourceDBUrl = "jdbc:sqlserver://127.0.0.1:1433;databaseName=a7itm;user=a7user;password=a7user"; static String targetDBUrl = "jdbc:sqlserver://127.0.0.1:1433;databaseName=a7itm;user=a7user;password=a7user"; public Connection getConnection(String url)throws Exception{ return DriverManager.getConnection(url); } public boolean resultSetsEqual3 (ResultSet rs1, ResultSet rs2) throws SQLException { int col = 1; //ResultSetMetaData metadata = rs1.getMetaData(); //int count = metadata.getColumnCount(); while (rs1.next() && rs2.next()) { final Object res1 = rs1.getObject(col); final Object res2 = rs2.getObject(col); // Check values if (!res1.equals(res2)) { throw new RuntimeException(String.format("%s and %s aren't equal at common position %d", res1, res2, col)); } // rs1 and rs2 must reach last row in the same iteration if ((rs1.isLast() != rs2.isLast())) { throw new RuntimeException("The two ResultSets contains different number of columns!"); } } return true; } public boolean resultSetsEqual (ResultSet source, ResultSet target) throws SQLException{ while(source.next()) { target.next(); ResultSetMetaData metadata = source.getMetaData(); int count = metadata.getColumnCount(); for (int i =1; i<=count; i++) { if(source.getObject(i) != target.getObject(i)) { return false; } } } return true; } public boolean resultSetsEqual2 (ResultSet source, ResultSet target) throws SQLException{ while(source.next()) { target.next(); ResultSetMetaData metadata = source.getMetaData(); int count = metadata.getColumnCount(); for (int i =1; i<=count; i++) { if(source.getObject(i).equals(target.getObject(i))) { return false; } } } return true; } } /*END***/ /*PASTED NEW CLASS - THIS IS A JUNIT TEST CLASS*/ package a7.unittests.dao; import static org.junit.Assert.*; import java.sql.Connection; import java.sql.DriverManager; import org.junit.Test; public class TestDatabaseConnection { @Test public void testConnection() throws Exception{ UnitTestHelper helper = new UnitTestHelper(); Connection con = helper.getConnection(helper.sourceDBUrl); Connection conTarget = helper.getConnection(helper.targetDBUrl); assertTrue(con != null && conTarget != null); } } /**END***/

    Read the article

  • During Spring unit test, data written to db but test not seeing the data

    - by richever
    I wrote a test case that extends AbstractTransactionalJUnit4SpringContextTests. The single test case I've written creates an instance of class User and attempts to write it to the database using Hibernate. The test code then uses SimpleJdbcTemplate to execute a simple select count(*) from the user table to determine if the user was persisted to the database or not. The test always fails though. I was suspect because in the Spring controller I wrote, the ability to save an instance of User to the db is successful. So I added the Rollback annotation to the unit test and sure enough, the data is written to the database since I can even see it in the appropriate table -- the transaction isn't rolled back when the test case is finished. Here's my test case: @ContextConfiguration(locations = { "classpath:context-daos.xml", "classpath:context-dataSource.xml", "classpath:context-hibernate.xml"}) public class UserDaoTest extends AbstractTransactionalJUnit4SpringContextTests { @Autowired private UserDao userDao; @Test @Rollback(false) public void teseCreateUser() { try { UserModel user = randomUser(); String username = user.getUserName(); long id = userDao.create(user); String query = "select count(*) from public.usr where usr_name = '%s'"; long count = simpleJdbcTemplate.queryForLong(String.format(query, username)); Assert.assertEquals("User with username should be in the db", 1, count); } catch (Exception e) { e.printStackTrace(); Assert.assertNull("testCreateUser: " + e.getMessage()); } } } I think I was remiss by not adding the configuration files. context-hibernate.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd> <bean id="namingStrategy" class="org.springframework.beans.factory.config.FieldRetrievingFactoryBean"> <property name="staticField"> <value>org.hibernate.cfg.ImprovedNamingStrategy.INSTANCE</value> </property> </bean> <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean" destroy-method="destroy" scope="singleton"> <property name="namingStrategy"> <ref bean="namingStrategy"/> </property> <property name="dataSource" ref="dataSource"/> <property name="mappingResources"> <list> <value>com/company/model/usr.hbm.xml</value> </list> </property> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</prop> <prop key="hibernate.show_sql">true</prop> <prop key="hibernate.use_sql_comments">true</prop> <prop key="hibernate.query.substitutions">yes 'Y', no 'N'</prop> <prop key="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</prop> <prop key="hibernate.cache.use_query_cache">true</prop> <prop key="hibernate.cache.use_minimal_puts">false</prop> <prop key="hibernate.cache.use_second_level_cache">true</prop> <prop key="hibernate.current_session_context_class">thread</prop> </props> </property> </bean> <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"> <property name="sessionFactory" ref="sessionFactory"/> <property name="nestedTransactionAllowed" value="false" /> </bean> <bean id="transactionInterceptor" class="org.springframework.transaction.interceptor.TransactionInterceptor"> <property name="transactionManager"> <ref local="transactionManager"/> </property> <property 
name="transactionAttributes"> <props> <prop key="create">PROPAGATION_REQUIRED</prop> <prop key="delete">PROPAGATION_REQUIRED</prop> <prop key="update">PROPAGATION_REQUIRED</prop> <prop key="*">PROPAGATION_SUPPORTS,readOnly</prop> </props> </property> </bean> </beans> context-dataSource.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"> <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close"> <property name="driverClass" value="org.postgresql.Driver" /> <property name="jdbcUrl" value="jdbc\:postgresql\://localhost:5432/company_dev" /> <property name="user" value="postgres" /> <property name="password" value="postgres" /> </bean> </beans> context-daos.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"> <bean id="extendedFinderNamingStrategy" class="com.company.dao.finder.impl.ExtendedFinderNamingStrategy"/> <bean id="finderIntroductionAdvisor" class="com.company.dao.finder.impl.FinderIntroductionAdvisor"/> <bean id="abstractDaoTarget" class="com.company.dao.impl.GenericDaoHibernateImpl" abstract="true" depends-on="sessionFactory"> <property name="sessionFactory"> <ref bean="sessionFactory"/> </property> <property name="namingStrategy"> <ref bean="extendedFinderNamingStrategy"/> </property> </bean> <bean id="abstractDao" class="org.springframework.aop.framework.ProxyFactoryBean" abstract="true"> <property name="interceptorNames"> <list> <value>transactionInterceptor</value> <value>finderIntroductionAdvisor</value> </list> </property> </bean> <bean id="userDao" parent="abstractDao"> <property name="proxyInterfaces"> <value>com.company.dao.UserDao</value> </property> <property name="target"> <bean parent="abstractDaoTarget"> <constructor-arg> <value>com.company.model.UserModel</value> </constructor-arg> </bean> </property> </bean> </beans> Some of this I've inherited from someone else. I wouldn't have used the proxying that is going on here because I'm not sure it's needed but this is what I'm working with. Any help much appreciated.

    Read the article

  • Performance impact: What is the optimal payload for SqlBulkCopy.WriteToServer()?

    - by Linchi Shea
    For many years, I have been using a C# program to generate the TPC-C compliant data for testing. The program relies on the SqlBulkCopy class to load the data generated by the program into the SQL Server tables. In general, the performance of this C# data loader is satisfactory. Lately however, I found myself in a situation where I needed to generate a much larger amount of data than I typically do and the data needed to be loaded within a confined time frame. So I was driven to look into the code...(read more)

    Read the article

  • Oracle Announces Oracle Data Integrator 12c and Oracle GoldenGate 12c

    - by Roxana Babiciu
    In today’s data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall and address emerging deployment styles like cloud, big data analytics, and real-time replication. To help customers succeed, Oracle is enhancing its data integration offering with Oracle Data Integrator 12c and Oracle GoldenGate 12c. These flexible and comprehensive solutions help customers capitalize on their data to reduce costs and drive business growth. Read more here

    Read the article

  • Instruction vs data cache usage

    - by Nick Rosencrantz
    Say I've got a cache memory where instruction and data have different cache memories ("Harvard architecture"). Which cache, instruction or data, is used most often? I mean "most often" as in time, not amount of data since data memory might be used "more" in terms of amount of data while instruction cache might be used "more often" especially depending on the program. Are there different answers a) in general and b) for a specific program?

    Read the article

  • Transparent Data Encryption

    Transparent Data Encryption is designed to protect data by encrypting the physical files of the database, rather than the data itself. Its main purpose is to prevent unauthorized access to the data by restoring the files to another server. With Transparent Data Encryption in place, this requires the original encryption certificate and master key. It was introduced in the Enterprise edition of SQL Server 2008. John Magnabosco explains fully, and guides you through the process of setting it up.

    Read the article

  • Google I/O 2012 - Spatial Data Visualization

    Google I/O 2012 - Spatial Data Visualization Brendan Kenny, Enoch Lau Maps were among the first data visualizations, but they can also provide the backdrop for visualizing your own spatial data. In this session, we'll take a voyage through the world of map based data visualization, arming you with the tools you need to most effectively bring your data to life on a map using the Maps API v3. For all I/O 2012 sessions, go to developers.google.com

    Read the article

  • OTN Virtual Technology Summit - July 9 - Middleware Track

    - by OTN ArchBeat
    The Architecture of Analytics: Big Time Big Data and Business Intelligence This four-session track, part of the free OTN Virtual Technology Summit on July 9, will present a solution architect's perspective on how business intelligence products in Oracle's Fusion Middleware family and beyond fit into an effective big data architecture, offering insight and expertise from Oracle ACE Directors and product team experts specializing in business Intelligence to help you meet your big data business intelligence challenges. Register now! Sessions Oracle Big Data Appliance Case Study: Using Big Data to Analyze Cancer-Genome Relationships Tom Plunkett, Lead Author of the Oracle Big Data Handbook What does it take to build an award winning Big Data solution? This presentation takes a deep technical dive into the use of the Oracle Big Data Appliance in a project for the National Cancer Institute's Frederick National Laboratory for Cancer Research. The Frederick National Laboratory and the Oracle team won several awards for analyzing relationships between genomes and cancer subtypes with big data, including the 2012 Government Big Data Solutions Award, the 2013 Excellence.Gov Finalist for Innovation, and the 2013 ComputerWorld Honors Laureate for Innovation. [30 mins] Getting Value from Big Data Variety Richard Tomlinson, Director, Product Management, Oracle Big data variety implies big data complexity. Performing analytics on diverse data typically involves mashing up structured, semi-structured and unstructured content. So how can we do this effectively to get real value? How do we relate diverse content so we can start to analyze it? This session looks at how we approach this tricky problem using Endeca Information Discovery. [30 mins] How To Leverage Your Investment In Oracle Business Intelligence Enterprise Edition Within a Big Data Architecture Oracle ACE Director Kevin McGinley More and more organizations are realizing the value Big Data technologies contribute to the return on investment in Analytics. But as an increasing variety of data types reside in different data stores, organizations are finding that a unified Analytics layer can help bridge the divide in modern data architectures. This session will examine how you can enable Oracle Business Intelligence Enterprise Edition (OBIEE) to play a role in a unified Analytics layer and the benefits and use cases for doing so. [30 mins] Oracle Data Integrator 12c As Your Big Data Data Integration Hub Oracle ACE Director Mark Rittman Oracle Data Integrator 12c (ODI12c), as well as being able to integrate and transform data from application and database data sources, also has the ability to load, transform and orchestrate data loads to and from Big Data sources. In this session, we'll look at ODI12c's ability to load data from Hadoop, Hive, NoSQL and file sources, transform that data using Hive and MapReduce processing across the Hadoop cluster, and then bulk-load that data into an Oracle Data Warehouse using Oracle Big Data Connectors. We will also look at how ODI12c enables ETL-offloading to a Hadoop cluster, with some tips and techniques on real-time capture into a Hadoop data reservoir and techniques and limitations when performing ETL on big data sources. [90 mins] Register now!

    Read the article

  • Separating logic and data in browser game

    - by Tesserex
    I've been thinking this over for days and I'm still not sure what to do. I'm trying to refactor a combat system in PHP (...sorry.) Here's what exists so far: There are two (so far) types of entities that can participate in combat. Let's just call them players and NPCs. Their data is already written pretty well. When involved in combat, these entities are wrapped with another object in the DB called a Combatant, which gives them information about the particular fight. They can be involved in multiple combats at once. I'm trying to write the logic engine for combat by having combatants injected into it. I want to be able to mock everything for testing. In order to separate logic and data, I want to have two interfaces / base classes, one being ICombatantData and the other ICombatantLogic. The two implementers of data will be one for the real objects stored in the database, and the other for my mock objects. I'm now running into uncertainties with designing the logic side of things. I can have one implementer for each of players and NPCs, but then I have an issue. A combatant needs to be able to return the entity that it wraps. Should this getter method be part of logic or data? I feel strongly that it should be in data, because the logic part is used for executing combat, and won't be available if someone is just looking up information about an upcoming fight. But the data classes only separate mock from DB, not player from NPC. If I try having two child classes of the DB data implementer, one for each entity type, then how do I architect that while keeping my mocks in the loop? Do I need some third interface like IEntityProvider that I inject into the data classes? Also with some of the ideas I've been considering, I feel like I'll have to put checks in place to make sure you don't mismatch things, like making the logic for an NPC accidentally wrap the data for a player. Does that make any sense? Is that a situation that would even be possible if the architecture is correct, or would the right design prohibit that completely so I don't need to check for it? If someone could help me just layout a class diagram or something for this it would help me a lot. Thanks. edit Also useful to note, the mock data class doesn't really need the Entity, since I'll just be specifying all the parameters like combat stats directly instead. So maybe that will affect the correct design.
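
    For what it's worth, one possible shape for the split being described, sketched in Java rather than PHP with made-up names: getEntity() lives on the data interface, each storage mechanism (DB or mock) gets its own data implementation, and each entity type gets its own logic implementation that only ever talks to the data interface.

        // Entities (players and NPCs) already exist elsewhere in the game.
        interface Entity { String getName(); }

        // Data side: where a combatant's per-fight state comes from (DB row or mock).
        interface CombatantData {
            Entity getEntity();              // lookup-only information lives on the data side
            int getHitPoints();
            void setHitPoints(int hp);
        }

        // Logic side: how a combatant fights; it only talks to the data interface,
        // so the same logic runs against DB-backed data in production and mocks in tests.
        interface CombatantLogic {
            void takeTurn(CombatantData self, CombatantData target);
        }

        // A mock data implementation for tests: plain fields, no database required.
        class MockCombatantData implements CombatantData {
            private final Entity entity;
            private int hitPoints;

            MockCombatantData(Entity entity, int hitPoints) {
                this.entity = entity;
                this.hitPoints = hitPoints;
            }
            public Entity getEntity() { return entity; }
            public int getHitPoints() { return hitPoints; }
            public void setHitPoints(int hp) { hitPoints = hp; }
        }

        // Logic per entity type, independent of where the data lives.
        class NpcLogic implements CombatantLogic {
            public void takeTurn(CombatantData self, CombatantData target) {
                target.setHitPoints(target.getHitPoints() - 1);   // placeholder attack
            }
        }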

    Read the article

  • Data Source Use of Oracle Edition Based Redefinition (EBR)

    - by Steve Felts
    Edition-based redefinition is a new feature in the 11gR2 release of the Oracle database. It enables you to upgrade the database component of an application while it is in use, thereby minimizing or eliminating down time. It works by allowing for a pre-upgrade and post-upgrade view of the data to exist at the same time, providing a hot upgrade capability. You can then specify which view you want for a particular session.  See the Oracle Database Advanced Application Developer's Guide for further information. There is also a good white paper at Edition Based Definition. Using this feature of the Oracle database does not require any new WebLogic Server functionality. It is set for each connection in the pool automatically by simply specifying SQL ALTER SESSION SET EDITION = edition_name in the Init SQL parameter in the data source configuration. This can be configured either via the console or via WLST (setInitSQL on the JDBCConnectionPoolParams). This SQL statement is executed for each newly created physical database connection.Note that we are assuming that a data source references only one edition of the database. To make use of this feature, you would have an earlier version of the application with a data source that references the earlier EDITION and a later version of the application with a data source that references the later EDITION.   Once you start talking about multiple versions of a WLS application, you should be using the WLS "side-by-side" or "versioned" deployment feature.  See Developing Applications for Production Redeployment for more information.  By combining Oracle database EBR and WLS versioned deployment, the application can be failed over with no downtime, making the combination of features more powerful than either independently. There is a catch - you need to be running with a versioned database and a versioned application initially so then you can switch versions.  The recommended way to version a WLS application is to simply add the "Weblogic-Application-Version" property in the MANIFEST.MF file(you can also specify it at deployment time). The recommended way to configure the data source is to use a packaged data source descriptor that's stored in the ear or war so that everything is self-contained.  There are some restrictions.  You can't use a packaged data source with Logging Last Resource (LLR) - you need to use a system resource.  You can't use an application-scoped packaged data source with EmulateTwoPhaseCommit for the global-transactions-protocol with a versioned application - use a global scope.  See Configuring JDBC Application Modules for Deployment for more details. There's one known problem - it doesn't work correctly with an XA data source (patch available with bug 14075837).

    Read the article

  • I have data that sends in "bursts" of 100 records with a significant delay. How do I structure my classes for multithreading?

    - by makerofthings7
    My datasource sends information in 100 batches of 100 records with a delay of 1 to 3 seconds between batches. I would like to start processing data as soon as it's received, but I'm not sure how to best approach this. Some ideas I've been playing with include: yield Concurrent Dictionary ConcurrentDictionary with INotifyProperyChanged Events etc. As you can see I'm all over the place, and would appreciate some tested guidance on how to approach this
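
    One common shape for this, sketched below in Java with a BlockingQueue (the .NET analogue would be a BlockingCollection or ConcurrentQueue with worker tasks; class names here are made up): the receiving side enqueues records the moment a batch arrives, while worker threads drain the queue continuously, so processing starts on the first batch instead of waiting for all 100.

        import java.util.List;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.function.Consumer;

        public class BurstProcessor<T> {

            private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();
            private final ExecutorService workers;

            public BurstProcessor(int workerCount, Consumer<T> handler) {
                workers = Executors.newFixedThreadPool(workerCount);
                for (int i = 0; i < workerCount; i++) {
                    workers.submit(() -> {
                        try {
                            while (true) {
                                handler.accept(queue.take());   // blocks until a record is available
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt(); // shut down quietly
                        }
                    });
                }
            }

            // Called by the receiving side as soon as a batch of ~100 records arrives.
            public void onBatchReceived(List<T> batch) {
                queue.addAll(batch);
            }

            public void shutdown() {
                workers.shutdownNow();
            }
        }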

    Read the article

  • Windows Azure from a Data Perspective

    Before creating a data application in Windows Azure, it is important to make choices based on the type of data you have, as well as the security and the business requirements. There are a wide range of options, because Windows Azure has intrinsic data storage, completely separate from SQL Azure, that is highly available and replicated. Your data requirements are likely to dictate the type of data storage options you choose.

    Read the article

  • Oracle Data Integrator webcast (Japanese)

    - by user788995
    Japanese-language webcast recorded 2009/12/17 covering Oracle Data Integrator. Links: http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/ODI_1217_1330.wmv http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/mp4/ODI_1217_1330.mp4 http://www.oracle.com/technetwork/jp/ondemand/db-technique/20091217-odi-knowhow-254853-ja.pdf

    Read the article

  • Experience Oracle Enterprise Data Quality at OpenWorld 2012

    - by Mala Narasimharajan
    This year Enterprise Data Quality features sessions designed to cover topics above and beyond product strategy and roadmap. Specifically, we have planned to have sessions around data governance, how does EDQ enable data acquisition, migration as well as integration. In addition, check out our hands-on-lab to see for yourself the new enhancements and integration to EDQ. Finally, no OpenWorld is complete without fully exploring the DemoGrounds – come and learn how enterprise data quality is critical to enterprise success and fit-for-purpose data. SESSIONS Mon Oct 1   1:45 PM - 2:45 PM Oracle Enterprise Data Quality: Product Overview and Roadmap Moscone West – 3006 Wed Oct 3   1:15 PM - 2:15 PM Data Preparation and Ongoing Governance with the Oracle Enterprise Data Quality Platform                                                   Moscone West – 3000 Thurs – Oct 4   12:45 PM - 1:45 PM Data Acquisition, Migration, and Integration with the Oracle Enterprise Data Quality Platform                                                 Moscone West – 3005     HANDS ON LAB – Mon Oct 1   4:45 PM - 5:45 PM Introduction to Oracle Enterprise Data Quality Application                                                                                                                   Marriott Marquis - Salon ½ DEMO   Trusted Data with Oracle Enterprise Data Quality Moscone South, Right - S-243

    Read the article
