Search Results

Search found 60471 results on 2419 pages for 'spring data neo4j'.

Page 29/2419 | < Previous Page | 25 26 27 28 29 30 31 32 33 34 35 36  | Next Page >

  • Looking for Cutting-Edge Data Integration: 2014 Excellence Awards

    - by Sandrine Riley
    It is nomination time!!! This year's Oracle Fusion Middleware Excellence Awards will honor customers and partners who are creatively using various products across Oracle Fusion Middleware. Think you have something unique and innovative with one or a few of our Oracle Data Integration products? We would love to hear from you! Please submit today. The deadline for the nomination is June 20, 2014. What you win: an Oracle Fusion Middleware Innovation trophy; one free pass to Oracle OpenWorld 2014; priority consideration for placement in Profit magazine, Oracle Magazine, or other Oracle publications and press releases; and the Oracle Fusion Middleware Innovation logo for inclusion on your own website and/or press release. Let us reminisce a little… For details on the 2013 Data Integration winners, Royal Bank of Scotland’s Market and International Banking and The Yalumba Wine Company, check out this blog post: 2013 Oracle Excellence Awards for Fusion Middleware Innovation… and the Winners for Data Integration are… And for details on the 2012 Data Integration winners, Raymond James and Morrisons, check out this blog post: And the Winners of Fusion Middleware Innovation Awards in Data Integration are… You can also view the 2013 winners for all categories. We hope to honor you! Here's what you need to do: click here to submit your nomination today. And just a reminder: the deadline to submit a nomination is 5pm Pacific Time on June 20, 2014.

    Read the article

  • How to add custom SOAP-Header element to the generated WSDL in Spring-WS

    - by Petr Macek
    Hi, we are migrating from WebLogic web services to Spring-WS (1.5.X). There is currently one issue we are facing: we need to pass a context object (on WLS it is passed as a SOAP-Header element) from the Spring-WS powered service to other services that are still running on WLS. The header element is still formulated on the client side, and the newly created WS (Spring-WS) should just pass it on to the other services. I can imagine how the custom element would be passed: override the doWithMessage(WebServiceMessage message) method... Is there a way to generate the WSDL with the help of DefaultWsdl11Definition so that it contains that custom header element? See the example: <wsdl:operation name="GetSomeInformation"> <soap:operation soapAction="http://www.dummyservice.com/InformationService/GetSomeInformation" /> <wsdl:input> <soap:body use="literal" /> <soap:header message="ctx:ServiceContextMessage" part="serviceContext" use="literal" /> </wsdl:input> <wsdl:output> <soap:body use="literal" /> </wsdl:output> <wsdl:fault name="Error"> <soap:fault name="Error" use="literal" /> </wsdl:fault> </wsdl:operation> Thanks for help
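
    A minimal sketch of the pass-through described above, assuming the ctx namespace URI and the serialized context value are placeholders: a WebServiceMessageCallback whose doWithMessage adds the header element to the outgoing message. Note this only covers passing the header on outgoing calls; it does not by itself change the WSDL that DefaultWsdl11Definition generates.

        import javax.xml.namespace.QName;

        import org.springframework.ws.WebServiceMessage;
        import org.springframework.ws.client.core.WebServiceMessageCallback;
        import org.springframework.ws.soap.SoapHeaderElement;
        import org.springframework.ws.soap.SoapMessage;

        public class ServiceContextCallback implements WebServiceMessageCallback {

            private final String serializedContext; // the context payload received from the client

            public ServiceContextCallback(String serializedContext) {
                this.serializedContext = serializedContext;
            }

            public void doWithMessage(WebServiceMessage message) {
                SoapMessage soapMessage = (SoapMessage) message;
                // namespace URI and element name are placeholders for the real ctx schema
                QName headerName = new QName("http://www.dummyservice.com/context", "serviceContext", "ctx");
                SoapHeaderElement header = soapMessage.getSoapHeader().addHeaderElement(headerName);
                header.setText(serializedContext);
            }
        }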

    Read the article

  • How to populate Java (web) application with initial data using Spring/JPA/Hibernate

    - by Tuukka Mustonen
    I want to set up my database with initial data programmatically. I want to populate the database for development runs, not for testing runs (that part is easy). The product is built on top of Spring and JPA/Hibernate. The workflow: (1) the developer checks out the project; (2) the developer runs a command/script to set up the database with initial data; (3) the developer starts the application (server) and begins developing/testing; then (4) the developer runs a command/script to flush the database and set it up with new initial data, because the database structures or the initial data bundle changed. What I want is to set up my environment, by the required parts, in order to call my DAOs and insert new objects into the database. I do not want to create initial data sets in raw SQL or XML, take dumps of the database, or whatever. I want to programmatically create objects and persist them to the database as I would in normal application logic. One way to accomplish this would be to start up my application normally and run a special servlet that does the initialization. But is that really the way to go? I would love to execute the initial data setup as a Maven task, and I don't know how to do that if I take the servlet approach. There is a somewhat similar question. I took a quick glance at the suggested DBUnit and Unitils, but they seem to be heavily focused on setting up testing environments, which is not what I want here. DBUnit does initial data population, but only using XML/CSV fixtures, which is not what I'm after. Then, Maven has an SQL plugin, but I don't want to handle raw SQL. Maven also has a Hibernate plugin, but it seems to help only with Hibernate configuration and table schema creation (not with populating the db with data). How do I do this?
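
    If the goal is a Maven-runnable setup step, one option is a plain bootstrap class that loads the same Spring context and drives the DAOs, launched via something like the exec-maven-plugin. A rough sketch, in which StudentDao, Student, and the bean name are hypothetical stand-ins for the real DAOs:

        import org.springframework.context.support.ClassPathXmlApplicationContext;

        public class InitialDataLoader {

            public static void main(String[] args) {
                ClassPathXmlApplicationContext ctx =
                        new ClassPathXmlApplicationContext("applicationContext.xml");
                try {
                    // 'studentDao' is an assumed bean name; in a real setup the writes
                    // would run inside a transaction (e.g. via TransactionTemplate)
                    StudentDao dao = (StudentDao) ctx.getBean("studentDao");
                    dao.save(new Student("Alice"));
                } finally {
                    ctx.close();
                }
            }
        }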

    Read the article

  • Spring Security: Multiple Logins to the same resources: Form Login + Facebook Connect (uid, sessionK

    - by Daxon
    To begin, I know about http://blog.kadirpekel.com/2009/11/09/facebook-connect-integration-with-spring-security/ The only problem is that it completely replaces the form login with Facebook Connect. I have the native form login in place, and I also have Facebook Connect in place. Upon gathering user information, I link it to a native account, but without a password. At that point I would like to call a link or method to start the process of going into the Spring Security filter chain. Here is the source code that works, but that I am trying to modify; it contains all the files I'm talking about. Now, from what I understand, I need to add a custom FacebookAuthenticationProvider so that my AuthenticationManager knows about it. <bean id="facebookAuthenticationProvider" class="org.springframework.security.facebook.FacebookAuthenticationProvider"> </bean> <security:authentication-manager alias="authenticationManager"> <security:authentication-provider ref="facebookAuthenticationProvider" /> </security:authentication-manager> Then within the FacebookAuthenticationProvider I would have a FacebookAuthenticationToken that would take the current Facebook uid and sessionKey of the user, and then try to authenticate this token. So where does the FacebookAuthenticationFilter come into it? I'm just trying to understand the order in which these three files are called, as if I were trying to implement any other custom authentication: FacebookAuthenticationFilter.java, FacebookAuthenticationProvider.java, FacebookAuthenticationToken.java. I have also posted this on the Spring Security forum.
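
    As far as the order goes: a filter typically extracts the uid/sessionKey from the request, wraps them in the token, and hands the token to the AuthenticationManager, which delegates to whichever provider supports() that token type; the provider verifies the credentials and returns an authenticated token. A bare-bones sketch of the token, with Spring Security 3 package names assumed:

        import org.springframework.security.authentication.AbstractAuthenticationToken;

        public class FacebookAuthenticationToken extends AbstractAuthenticationToken {

            private final String uid;
            private final String sessionKey;

            public FacebookAuthenticationToken(String uid, String sessionKey) {
                super(null); // no authorities yet; the provider grants them on success
                this.uid = uid;
                this.sessionKey = sessionKey;
                setAuthenticated(false);
            }

            public Object getPrincipal() { return uid; }

            public Object getCredentials() { return sessionKey; }
        }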

    Read the article

  • google appengine local datastore integration testing with spring

    - by mirror303
    Hi all, I want to write some integration tests to see how my Spring-managed DAOs behave when talking to the App Engine datastore. Following the Spring manual, I will be providing my test classes with the proper annotations: @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations = { "classpath:applicationContext.xml" }) After a lot of browsing I found this blog post, dating back to August '09, from somebody doing exactly what I want to achieve. It involves writing a TestEnvironment class that implements ApiProxy.Environment, plus talking to ApiProxyLocalImpl. However, if I look at the current docs (for version 1.3.1), it seems that this has been replaced by newing up an instance of the framework-provided LocalDatastoreServiceTestConfig, which is passed to a LocalServiceTestHelper. It is too bad that the App Engine docs don't show an example of how to do this with JPA, because then the Spring wiring would be trivial. Trying to follow the route outlined in the blog post has me running into compiler messages telling me that classes such as ApiProxyLocalImpl are not visible to me. Hence, there must be a new way of doing it, which probably involves the LocalServiceTestHelper. My question: does anybody know how? I know I will need to configure an EntityManagerFactory and provide it with the datastore connection somehow... but how? :)
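
    For reference, the 1.3.x-era testing API the docs point to boils down to a helper that installs the local environment around each test; a minimal sketch (JUnit 4 assumed), on top of which the Spring context and EntityManagerFactory are wired as usual:

        import com.google.appengine.tools.development.testing.LocalDatastoreServiceTestConfig;
        import com.google.appengine.tools.development.testing.LocalServiceTestHelper;
        import org.junit.After;
        import org.junit.Before;

        public class LocalDatastoreTestBase {

            private final LocalServiceTestHelper helper =
                    new LocalServiceTestHelper(new LocalDatastoreServiceTestConfig());

            @Before
            public void setUpLocalDatastore() { helper.setUp(); }

            @After
            public void tearDownLocalDatastore() { helper.tearDown(); }
        }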

    Read the article

  • How to marshall non-string objects with JAXB and Spring

    - by lesula
    I was trying to follow this tutorial in order to create my own RESTful web service using the Spring framework. The client does a GET request to, let's say, http://api.myapp/app/students and the server returns an XML version of the object Classroom: @XmlRootElement(name = "class") public class Classroom { private String classId = null; private ArrayList<Student> students = null; public Classroom() { } public String getClassId() { return classId; } public void setClassId(String classId) { this.classId = classId; } @XmlElement(name="student") public ArrayList<Student> getStudents() { return students; } public void setStudents(ArrayList<Student> students) { this.students = students; } } The object Student is another bean containing only Strings. In my app-servlet.xml I copied these lines: <bean id="studentsView" class="org.springframework.web.servlet.view.xml.MarshallingView"> <constructor-arg ref="jaxbMarshaller" /> </bean> <!-- JAXB2 marshaller. Automagically turns beans into xml --> <bean id="jaxbMarshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller"> <property name="classesToBeBound"> <list> <value>com.spring.datasource.Classroom</value> <value>com.spring.datasource.Student</value> </list> </property> </bean> Now my question is: what if I wanted to insert some non-String objects as class variables? Let's say I want a tag containing the String version of an InetAddress, such as <inetAddress>192.168.1.1</inetAddress> How can I force JAXB to call the method inetAddress.toString() in such a way that it appears as a String in the XML? In the returned XML, non-String objects are ignored!
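
    One way JAXB handles exactly this kind of string conversion is a type adapter; a minimal sketch for the InetAddress case:

        import java.net.InetAddress;
        import javax.xml.bind.annotation.adapters.XmlAdapter;

        public class InetAddressAdapter extends XmlAdapter<String, InetAddress> {

            @Override
            public String marshal(InetAddress address) {
                return address == null ? null : address.getHostAddress();
            }

            @Override
            public InetAddress unmarshal(String value) throws Exception {
                return InetAddress.getByName(value);
            }
        }

    The getter is then annotated with @XmlJavaTypeAdapter(InetAddressAdapter.class), and the property marshals as plain text such as <inetAddress>192.168.1.1</inetAddress>.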

    Read the article

  • Help with Perl persistent data storage using Data::Dumper

    - by stephenmm
    I have been trying to figure this out for way too long tonight. I have googled it to death, and none of the examples, or my hacks of the examples, are getting it done. It seems like this should be pretty easy, but I just cannot get it. Here is the code: #!/usr/bin/perl -w use strict; use Data::Dumper; my $complex_variable = {}; my $MEMORY = "$ENV{HOME}/data/memory-file"; $complex_variable->{ 'key' } = 'value'; $complex_variable->{ 'key1' } = 'value1'; $complex_variable->{ 'key2' } = 'value2'; $complex_variable->{ 'key3' } = 'value3'; print Dumper($complex_variable)."TEST001\n"; open M, ">$MEMORY" or die; print M Data::Dumper->Dump([$complex_variable], ['$complex_variable']); close M; $complex_variable = {}; print Dumper($complex_variable)."TEST002\n"; # Then later to restore the value, it's simply: do $MEMORY; #eval $MEMORY; print Dumper($complex_variable)."TEST003\n"; And here is my output: $VAR1 = { 'key2' => 'value2', 'key1' => 'value1', 'key3' => 'value3', 'key' => 'value' }; TEST001 $VAR1 = {}; TEST002 $VAR1 = {}; TEST003 Everything that I read says that the TEST003 output should look identical to the TEST001 output, which is exactly what I am trying to achieve. What am I missing here? Should I be "do"ing differently, or should I be "eval"ing instead, and if so, how? Thanks for any help...
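
    For what it's worth, the usual explanation is that do runs the file in its own scope, so the assignment inside the dump never reaches the lexical my $complex_variable in the outer script. Since the last expression in the dumped file is that assignment, capturing the return value of do sidesteps the scoping problem; a sketch:

        # the dumped file's final statement evaluates to the hashref, so capture it
        $complex_variable = do $MEMORY;
        print Dumper($complex_variable)."TEST003\n";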

    Read the article

  • Relational database data explorer / visualization?

    - by Ian Boyd
    Is there a tool that can let one browse relational data as a graph of connected nodes? For example, I'm faced with trying to cleanse some anomalous data. I can start with two offending rows. In this particular example, the TransactionID should, by business rules, be unique to the table, but I find a transaction that violates that rule: SELECT * FROM LCTTrans WHERE TransactionID = 1075048 LCTID TransactionID ========= ============= 4358 1075048 4359 1075048 2 row(s) affected But really what I want is to begin to hunt down all the related data, to try to see which is right. So this hypothetical software would start by showing me these two rows. Next, I want to see the transaction that is linked into this table. That transaction points to an MAL, so show me that. Now let's add the two LCTs that the transaction is "on". A transaction can be on only one LCT, yet this one is pointing to two. Okay computer, both of those LCTs point to an MAL and the transaction that created them; show me those. Those last two transactions also point at an MAL, and they themselves point to an LCT; show me those. Okay, now are there any entries in LCTTrans that point to LCTs 4358 or 4359?... And so on, and so on. Now, I did all this manually, running single selects, copying and pasting uniqueidentifier keys, and converting them into friendly ID numbers so I could easily see the relationships. Is there software that can do this?

    Read the article

  • Spring Stripes framework problem

    - by ali
    I am new to Stripes and am attempting to integrate Spring into Stripes. In the following code: public class ContactFormActionBeanTest { private static MockServletContext mockServletContext; private static MockHttpSession mockSession; @BeforeClass public static void setup() throws Exception { mockServletContext = new MockServletContext("webmail"); Map<String,String> params = new HashMap<String,String>(); params.put("ActionResolver.Packages", "stripesbook.action"); params.put("Extension.Packages", "stripesbook.ext," + "net.sourceforge.stripes.integration.spring"); mockServletContext.addFilter(StripesFilter.class, "StripesFilter", params); mockServletContext.setServlet(DispatcherServlet.class, "DispatcherServlet", null); mockSession = new MockHttpSession(mockServletContext); mockServletContext.addInitParameter("contextConfigLocation", "/WEB-INF/applicationContext-test.xml"); ContextLoaderListener springContextLoader = new ContextLoaderListener(); springContextLoader.contextInitialized( new ServletContextEvent(mockServletContext)); // Load mock user MockRoundtrip trip = new MockRoundtrip(mockServletContext, MockDataLoaderActionBean.class, mockSession); trip.execute(); // Login mock user trip = new MockRoundtrip(mockServletContext, LoginActionBean.class, mockSession); trip.setParameter("username", "freddy"); trip.setParameter("password", "nadia"); trip.execute("login"); } the springContextLoader comes up null and the test fails. Am I missing something? I am using Eclipse with Maven. Also, when I try to deploy it to Tomcat 6.0, I get the following warning: WARN net.sourceforge.stripes.util.ResolverUtil - Could not examine class 'stripesbook/ext/ContactFormatter.class' due to a java.lang.UnsupportedClassVersionError with message: Bad version number in .class file (unable to load class stripesbook.ext.ContactFormatter) I have checked to be sure that I am compiling with Java 5 (the JDK compiler set to 1.5) instead of 1.6 (Java 6), but that didn't work out for me, and I still have problems running the Spring-Stripes integrated project.
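
    On the UnsupportedClassVersionError: that message generally means the classes were compiled by a newer JDK than the one Tomcat runs on, so it is worth pinning the level in the Maven build itself rather than only in the IDE settings; a sketch for pom.xml:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-compiler-plugin</artifactId>
          <configuration>
            <source>1.5</source>
            <target>1.5</target>
          </configuration>
        </plugin>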

    Read the article

  • Spring configuration of C3P0 with Hibernate?

    - by HDave
    I have a Spring/JPA application with Hibernate as the JPA provider. I've configured a C3P0 data source in Spring via: <bean id="myJdbcDataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close"> <!-- Connection properties --> <property name="driverClass" value="$DS{database.class}" /> <property name="jdbcUrl" value="$DS{database.url}" /> <property name="user" value="$DS{database.username}" /> <property name="password" value="$DS{database.password}" /> <!-- Pool properties --> <property name="minPoolSize" value="5" /> <property name="maxPoolSize" value="20" /> <property name="maxStatements" value="50" /> <property name="idleConnectionTestPeriod" value="3000" /> <property name="loginTimeout" value="300" /> </bean> I then specified this data source in the Spring entity manager factory as follows: <bean id="myLocalEmf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="myapp-core" /> <property name="dataSource" ref="myJdbcDataSource" /> </bean> However, I recently noticed a "hibernate-c3p0" artifact while browsing Maven artifacts. What is this? Is it something I need to use? Or do I already have this configured properly?
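
    As a point of reference: the hibernate-c3p0 artifact supplies Hibernate's own C3P0-backed connection provider, which matters when Hibernate manages the pool itself through properties like the ones sketched below. With the pool defined as a Spring bean and handed to the entity manager factory, as above, it should not be needed.

        <!-- only relevant when Hibernate owns the pool instead of Spring -->
        <prop key="hibernate.c3p0.min_size">5</prop>
        <prop key="hibernate.c3p0.max_size">20</prop>
        <prop key="hibernate.c3p0.max_statements">50</prop>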

    Read the article

  • Trouble with object injection in Spring.Net

    - by Abdel Olakara
    Hi all, I have an issue with my Spring.Net configuration where it is not injecting an object. I have a CommService into which an object named GeneralEmail is injected. Here is the configuration: <!-- GeneralMail Object --> <object id="GeneralMailObject" type="CommUtil.Email.GeneralEmail, CommUtil"> <constructor-arg name="host" value="xxxxx.com"/> <constructor-arg name="port" value="25"/> <constructor-arg name="user" value="[email protected]"/> <constructor-arg name="password" value="xxxxx"/> <constructor-arg name="template" value="xxxxx"/> </object> <!-- Communication Service --> <object id="CommServiceObject" type="TApp.Code.Services.CommService, TApp"> <property name="emailService" ref="GeneralMailObject" /> </object> The communication service object is in turn injected into many other aspx pages and services. In one scenario, I need to call the communication service from a static WebMethod. I tried doing: CommService cso = new CommService(); But when I try to get the emailService object, it's null! Why didn't Spring inject the GeneralMail object into my cso object? What am I doing wrong, and how do I access the object from the Spring container? Thanks in advance for the suggestions and solutions. Regards, Abdel Olakara
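
    A likely culprit, sketched below: objects constructed with new are invisible to the container, so nothing gets injected into them. Resolving the service through the context registry is what triggers the wiring (the object name matches the configuration above):

        using Spring.Context;
        using Spring.Context.Support;

        public static class CommServiceLocator
        {
            // 'new CommService()' bypasses Spring.NET entirely, so emailService stays null;
            // fetching the configured object from the context returns the injected instance
            public static CommService Get()
            {
                IApplicationContext ctx = ContextRegistry.GetContext();
                return (CommService) ctx.GetObject("CommServiceObject");
            }
        }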

    Read the article

  • Long running transactions with Spring and Hibernate?

    - by jimbokun
    The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure called through JDBC. The entire process can take half an hour or more for a large data set. To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction). This worked fine in my development environment. However, in production I got the following exception: java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction This happened when a different task tried to access some of the same tables during the execution of my long-running transaction. What confuses me is that the long-running transaction only inserts into or updates temporary tables. All access to non-temporary tables is via selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case. So my first question: is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long-running transaction? If the long-running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?
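
    For the debugging part of the question, MySQL itself can show which statements and locks are involved while the long-running transaction is active; a starting point, assuming InnoDB tables:

        -- lists current sessions and the statements they are blocked on
        SHOW FULL PROCESSLIST;

        -- the TRANSACTIONS section reports lock waits, including which table's locks are held
        SHOW ENGINE INNODB STATUS;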

    Read the article

  • How to properly set relationships in Core Data when using setValue and data already exists

    - by ern
    Let's say I have two objects: Articles and Categories. For the sake of this example all relevant categories have already been added to the data store. When looping through data that holds edits for articles, there is category relationship information that needs to be saved. I was planning on using the -setValue method in the Article class in order to set the relationships like so: - (void)setValue:(id)value forUndefinedKey:(NSString *)key { if([key isEqualToString:@"categories"]){ NSLog(@"trying to set categories..."); } } The problem is that value isn't a Category, it is just a string (or array of strings) holding the title of a category. I could certainly do a lookup within this method for each category and assign it, but that seems inefficient when processing a whole bunch of articles at once. Another option is to populate an array of all possible categories and just filter, but my question is where to store that array? Should it be a class method on Article? Is there a way to pass in additional data to the -setValue method? Is there another, better option for setting the relationship I'm not thinking of? Thanks for your help.
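
    One way to avoid a per-category fetch inside the loop, sketched with assumed entity and attribute names: fetch the categories once, index them by title, and consult the map while processing the edits. For a one-shot import the map can simply live as a local variable in the code that drives the loop, so nothing extra needs to be stored on the Article class.

        // fetch every Category once and index by title ('Category' and 'title' are assumptions)
        NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
        [request setEntity:[NSEntityDescription entityForName:@"Category"
                                       inManagedObjectContext:context]];
        NSArray *allCategories = [context executeFetchRequest:request error:NULL];

        NSMutableDictionary *categoriesByTitle = [NSMutableDictionary dictionary];
        for (NSManagedObject *category in allCategories) {
            [categoriesByTitle setObject:category forKey:[category valueForKey:@"title"]];
        }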

    Read the article

  • Spring bean initialization in a web app

    - by EugeneP
    We work with a web application and autowire beans using WebApplicationContextUtils in the init method. Could you clarify some details about bean initialization? The question arises from the static factory method. Suppose there's a bean that is created by a static factory method. As we can see, when the web app is deployed, the ContextLoaderListener initializes all the beans present in the Spring XML config file. Now something like this happens: in the static factory method we start a timer that begins ticking. But in reality we wouldn't want it to start ticking until the bean is injected into a property of some object! That is question number one: all the beans are automatically initialized on deploy, correct? And after that, when we need an injection, Spring simply fills in the reference to the object that was created during initialization, even though THE OBJECT WAS CREATED ON WEB APP DEPLOY, immediately! (I assume the default singleton-creation Spring behavior.) Second question: do all copies of a web app use the same beans, so that all beans are web-app wide and every Spring bean is shared between all the running copies of this web app?
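
    On the timer problem specifically: Spring can be told not to pre-instantiate a particular singleton at deploy time, so creation (and the timer) waits until the bean is first requested for injection. A sketch with assumed names, where createInstance stands in for the static factory method:

        <!-- created on first use instead of eagerly at deploy time -->
        <bean id="tickingBean" class="com.example.TickingBean"
              factory-method="createInstance" lazy-init="true"/>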

    Read the article

  • ActiveMq integration with Spring 2.5

    - by Tony
    I am using ActiveMQ 5.32 with Spring 2.5.5. I use a pretty generic configuration; as soon as I include the jmsTransactionManager in the DefaultMessageListenerContainer, Spring throws an error on startup: "Error creating bean with name 'org.springframework.jms.listener.DefaultMessageListenerContainer#0'" Without the transactionManager attribute this works fine, but when I add 10 messages to the message queue, a transaction exception occurs. Part of my configuration: <bean class="org.springframework.jms.listener.DefaultMessageListenerContainer"> <property name="connectionFactory" ref="connectionFactory" /> <property name="destination" ref="emailDestination" /> <property name="messageListener" ref="emailServiceMDP" /> <property name="transactionManager" ref="jmsTransactionManager" /> </bean> <bean id="jmsTransactionManager" class="org.springframework.jms.connection.JmsTransactionManager"> <property name="connectionFactory" ref="connectionFactory" /> </bean> Do this version of Spring and this version of ActiveMQ have known integration issues? Or do I need additional libs to get the jmsTransactionManager to work?
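
    If the intent is simply to redeliver a message when the listener fails, local JMS transactions on the container may be enough, with no external transaction manager wired in at all; a sketch:

        <bean class="org.springframework.jms.listener.DefaultMessageListenerContainer">
            <property name="connectionFactory" ref="connectionFactory" />
            <property name="destination" ref="emailDestination" />
            <property name="messageListener" ref="emailServiceMDP" />
            <!-- one transacted JMS session per message; rollback puts the message back -->
            <property name="sessionTransacted" value="true" />
        </bean>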

    Read the article

  • Spring Dependency Injecting an annotated Aspect

    Using Spring, I've had some issues doing dependency injection on an annotated aspect class. CacheService is injected upon the Spring context's startup, but when the weaving takes place, the cacheService is null, so I am forced to look the Spring context up manually and get the bean from there. Is there another way of going about it? Here is an example of my aspect: import org.apache.log4j.Logger; import org.aspectj.lang.ProceedingJoinPoint; import org.aspectj.lang.annotation.Around; import org.aspectj.lang.annotation.Aspect; import com.mzgubin.application.cache.CacheService; @Aspect public class CachingAdvice { private static Logger log = Logger.getLogger(CachingAdvice.class); private CacheService cacheService; @Around("execution(public * com.mzgubin.application.callMethod(..)) && " + "args(params)") public Object addCachingToCreateXMLFromSite(ProceedingJoinPoint pjp, InterestingParams params) throws Throwable { log.debug("Weaving a method call to see if we should return something from the cache or create it from scratch by letting control flow move on"); Object result = null; if (getCacheService().objectExists(params)) { result = getCacheService().getObject(params); } else { result = pjp.proceed(pjp.getArgs()); getCacheService().storeObject(params, result); } return result; } public CacheService getCacheService(){ return cacheService; } public void setCacheService(CacheService cacheService){ this.cacheService = cacheService; } }
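
    For reference, the standard Spring recipe for injecting into an AspectJ-woven aspect is to declare the bean with factory-method="aspectOf", so that Spring configures the singleton the weaver already created instead of instantiating a second, un-woven copy. A sketch, with the aspect's package and the cacheService bean name assumed:

        <bean class="com.mzgubin.application.CachingAdvice" factory-method="aspectOf">
            <property name="cacheService" ref="cacheService" />
        </bean>

    This applies to compile-time or load-time weaving; under plain proxy-based Spring AOP the aspect would instead be declared as an ordinary bean.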

    Read the article

  • Suggested Web Application Framework and Database for Enterprise, “Big-Data” App?

    - by willOEM
    I have a web application that I have been developing for a small group within my company over the past few years, using Pipeline Pilot (plus jQuery and Python scripting) for web development and back-end computation, and Oracle 10g for my RDBMS. Users upload experimental genomic data, which is parsed into a database and made available for querying, transformation, and reporting. Experimental data sets are large and have many layers of metadata. A given experimental data record might have a foreign key relationship with a table that describes this data point's assay. Assays can cover multiple genes, which can have multiple transcripts, which can have multiple mutations, which can affect multiple signaling pathways, etc. Users need to approach this data from any point in those layers of metadata. Since all data sets for a given data type can run over a billion rows, this results in some large, dynamic queries that are hard to predict. New data sets are added on a weekly basis (~1GB per set). Experimental data is never updated, but the associated metadata can be updated weekly for a few records and yearly for most others. For every data set insert the system sees, there will be between 10 and 100 selects run against it and associated data. It is okay for updates and inserts to run slow, so long as queries run quickly and are as up-to-date as possible. The application continues to grow in size and scope and is already starting to run slower than I would like. I am worried that we have just about outgrown Pipeline Pilot, and perhaps Oracle (as the sole database). Would a NoSQL database or an OLAP system be appropriate here? What web application frameworks work well with systems like this? I'd like the solution to be something scalable, portable, and supportable X years down the road. Here is the current state of the application: Web server/data processing: Pipeline Pilot on Windows Server + IIS. Database: Oracle 10g, ~1TB of data, ~180 tables, with several billion-plus row tables. Network storage: Isilon, ~50TB of low-priority raw data.

    Read the article

  • Simple ADF page using BAM Data Control

    - by [email protected]
    Purpose: In this blog I will walk you through very simple steps to create an ADF page using a BAM Data Control connection.

    Details:

    Create the project: Open JDeveloper (make sure you have installed the SOA extension for JDev). Create a new application using the "Generic Application" template and click "Next". Shuttle "ADF Faces" to the right pane for the project technology, then click "Finish".

    Create a BAM connection: In the Resource Palette, click "Folder -> New Connection -> BAM". Enter the connection name and click "Next". Enter the connection details, click "Test Connection", then click "Finish".

    Create the BAM Data Control: Open the IDE connection created in the step above. Drag and drop "Employees" onto the "Data Controls" palette. Select "Flat Query" and click "Finish".

    Create the view: Create a new JSF page. From the Data Controls panel, drag and drop "Employees -> Query -> ADF Read Only Table". Right-click and run the page.

    Read the article

  • Data Aggregation of CSV files java

    - by royB
    I have k CSV files (5 CSV files, for example); each file has m fields which produce a key and n values. I need to produce a single CSV file with the aggregated data. I'm looking for the most efficient solution for this problem, speed mainly. I don't think, by the way, that we will have memory issues. Also, I would like to know whether hashing is really a good solution, because we would have to use a 64-bit hashing solution to reduce the chance of a collision to below 1% (we have around 30,000,000 rows per aggregation). For example file 1: f1,f2,f3,v1,v2,v3,v4 a1,b1,c1,50,60,70,80 a3,b2,c4,60,60,80,90 file 2: f1,f2,f3,v1,v2,v3,v4 a1,b1,c1,30,50,90,40 a3,b2,c4,30,70,50,90 result: f1,f2,f3,v1,v2,v3,v4 a1,b1,c1,80,110,160,120 a3,b2,c4,90,130,130,180 The algorithms we have thought of so far: hashing (using ConcurrentHashMap); merge-sorting the files; a DB: using MySQL, Hadoop, or Redis. The solution needs to be able to handle a huge amount of data (each file more than two million rows). A better example: file 1 country,city,peopleNum england,london,1000000 england,coventry,500000 file 2: country,city,peopleNum england,london,500000 england,coventry,500000 england,manchester,500000 merged file: country,city,peopleNum england,london,1500000 england,coventry,1000000 england,manchester,500000 The key is: country,city. This is just an example; my real key is of size 6 and the data columns are of size 8, for a total of 14 columns. We would like the solution to be the fastest possible in terms of data processing.
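
    To make the "no hashing needed" point concrete, here is a rough single-pass sketch that keys a HashMap on the concatenated key columns themselves, so collisions cannot happen, and sums the value columns. Column counts are hard-coded to the example's 3-key/4-value layout:

        import java.io.BufferedReader;
        import java.io.File;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.HashMap;
        import java.util.Map;

        public class CsvAggregator {

            public static Map<String, long[]> aggregate(File[] files) throws IOException {
                Map<String, long[]> totals = new HashMap<String, long[]>();
                for (File file : files) {
                    BufferedReader reader = new BufferedReader(new FileReader(file));
                    try {
                        reader.readLine(); // skip the f1,f2,f3,v1,v2,v3,v4 header
                        String line;
                        while ((line = reader.readLine()) != null) {
                            String[] cols = line.split(",");
                            String key = cols[0] + "," + cols[1] + "," + cols[2];
                            long[] sums = totals.get(key);
                            if (sums == null) {
                                sums = new long[4];
                                totals.put(key, sums);
                            }
                            for (int i = 0; i < 4; i++) {
                                sums[i] += Long.parseLong(cols[3 + i]);
                            }
                        }
                    } finally {
                        reader.close();
                    }
                }
                return totals;
            }
        }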

    Read the article

  • SQL – Download FREE Book – Data Access for Highly Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence

    - by Pinal Dave
    Recently I was preparing for Big Data and I ended up on a very interesting read for everybody. It was created by Microsoft, and it is indeed a fantastic read in my opinion. It took me some time to read this entire book, but it was worth reading, as it tries to answer two very interesting questions related to NoSQL. Here is the abstract from the book: Organizations seeking to use a NoSQL database are therefore faced with a twofold challenge: • Which NoSQL database(s) best meet(s) the needs of the organization? • How does an organization integrate a NoSQL database into its solutions? As I kept reading the book, I found it very interesting and informative. I suggest, if you have time this weekend, downloading the book and reading it. This guide focuses on the most common types of NoSQL database currently available, describes the situations for which they are most suited, and shows examples of how you might incorporate them into a business application. The guide summarizes the experiences of a fictitious organization named Adventure Works, which implemented a solution that comprised an assortment of different databases. Download Data Access for Highly Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence. While we are talking about Big Data and NoSQL, do not forget to check out tomorrow's blog, as I am going to talk about the same subject and it will be very interesting. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, NoSQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • How to analyze data

    - by Subhash Dike
    We are working on an application that allows users to search/read some content in a particular domain. We want to add a capability to the app which can suggest content to the user based on their usage pattern (analyzing data based on frequency and relevance). Currently, every time a user searches or reads something, we store that information in a backend database. We would like to use this data to present additional content to the user. Could someone explain what kind of tools would be required for such a job, and give an example? And what is this concept called: data analysis? data mining? business intelligence? or something else? Update: Sorry for being too broad; here is an example. SQL database (just to give an idea; the actual db is a little different, with normalization and stuff): Table: UserArticles Fields: UserName | ArticleId | ArticleTitle | DateVisited | ArticleCategory Table: CategoryArticles Fields: Category | ArticleTitle | Author etc. One category may have one or more articles. One user may have read the same article multiple times (in this case we place an additional entry in the UserArticles table). Task: Use the information available in the UserArticles table and rank categories in the order in which they would be presented to the user automatically in another part of the application. Factors to be considered are frequency and recency. This might be possible through simple queries or may require specialized tools. Either way, the task is as mentioned above. I am not too sure which route to take, hence the question. Thoughts?
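
    Whether or not specialized tooling turns out to be the answer, the frequency-plus-recency ranking can start as a plain query over the UserArticles table sketched above; @user is a placeholder for the parameter:

        -- rank categories by visit count, breaking ties by most recent visit
        SELECT ArticleCategory,
               COUNT(*)         AS visitCount,
               MAX(DateVisited) AS lastVisited
        FROM   UserArticles
        WHERE  UserName = @user
        GROUP  BY ArticleCategory
        ORDER  BY visitCount DESC, lastVisited DESC;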

    Read the article

  • Data structure for bubble shooter game

    - by SundayMonday
    I'm starting to make a bubble shooter game for a mobile OS. Assume this is just the basic game: three or more same-color bubbles that touch pop, and all bubbles that are separated from their group fall/pop. What data structures are common for storing the bubbles? I've considered using an undirected, connected graph where each node is a bubble. This seems like it could help answer the question "which bubbles (if any) should fall now?" after some arbitrary bubbles are popped and the corresponding nodes are removed from the graph. I think the answer is that all bubbles that were just disconnected from the graph should fall. However, the graph approach might be overkill, so I'm not sure. Another consideration for the data structure is collision detection. Perhaps being able to grab a list of neighboring bubbles in constant time for a particular "bubble slot" is useful. So the collision detection would be something like: "the moving bubble is closest to slot ij; the neighbors of slot ij are bubbles a, b, c; the moving bubble is sufficiently close to bubble b; hence the moving bubble should come to rest in slot ij". A game like this could probably be made with a relatively crude grid structure as the primary data structure. However, it seems like answering "which bubbles (if any) should fall now?" would be trickier with this data structure.
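
    Whichever structure stores the bubbles, the "which bubbles fall" question usually reduces to one flood fill from the ceiling after each pop: anything the fill cannot reach has been disconnected. A rough sketch over a plain boolean occupancy grid; a real game would adjust the neighbor offsets for the staggered hex-style rows:

        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.Deque;
        import java.util.List;

        public class FloaterFinder {

            // returns the (row, col) of every bubble no longer connected to the top row
            public static List<int[]> findFloaters(boolean[][] occupied) {
                int rows = occupied.length, cols = occupied[0].length;
                boolean[][] anchored = new boolean[rows][cols];
                Deque<int[]> stack = new ArrayDeque<int[]>();
                for (int c = 0; c < cols; c++) {
                    if (occupied[0][c]) { anchored[0][c] = true; stack.push(new int[]{0, c}); }
                }
                int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}}; // square-grid neighbors
                while (!stack.isEmpty()) {
                    int[] p = stack.pop();
                    for (int[] d : dirs) {
                        int r = p[0] + d[0], c = p[1] + d[1];
                        if (r >= 0 && r < rows && c >= 0 && c < cols
                                && occupied[r][c] && !anchored[r][c]) {
                            anchored[r][c] = true;
                            stack.push(new int[]{r, c});
                        }
                    }
                }
                List<int[]> floaters = new ArrayList<int[]>();
                for (int r = 0; r < rows; r++) {
                    for (int c = 0; c < cols; c++) {
                        if (occupied[r][c] && !anchored[r][c]) floaters.add(new int[]{r, c});
                    }
                }
                return floaters;
            }
        }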

    Read the article

  • Can JSON be made easily and safely editable by the non-technical Excel crowd?

    - by glitch
    I'm looking for a data storage format that's very intuitive and easy to edit. It should ideally be targeted towards the same crowd as Excel. At the same time I would like the data structure to be a tree. Ideally this would be JSON, since it offers both the tree aspect and allows for more interesting constructs like arrays. That, and parsing libraries for JSON are ubiquitous, so I don't have to reinvent the wheel. The problem is that, at least with a non-specialized text editor, JSON is a giant pain to edit for a non-technical user. I'm thinking along the lines of someone who might have used Excel in the past, but never a real text editor. Someone who might not be comfortable with the idea of preserving JSON syntax by hand. Are there data formats out there that would fit this profile? I'd very much prefer this to be JSON, actually, but then it would require a solid editing tool that would hide the underlying implementation from the user. Think of Excel and how it abstracts CSV syntax from the user. The reason I'm looking for something like this is because the team has been working with pretty hierarchical data for a while now, and we've hit the limits of how easy it is to represent it in simple CSVs without creating complex rules for how to represent the hierarchy semantics in each row. Any suggestions?

    Read the article

  • Data Synchronization in mobile apps - multiple devices, multiple users

    - by ProgrammerNewbie
    I'm looking into building my first mobile app. One of the core features of the application is that multiple devices/users will have access to the same data, and all of them will have CRUD rights. I believe the architecture should involve a central server where all the data is stored. The devices will use an API to interact with the server to perform data operations (e.g. adding, editing, or deleting a record). I imagine a scenario where synchronizing the data will become a problem. Assume the application should work when it is not connected to the Internet and thus cannot communicate with this central server. So: (1) User A is offline and edits record #100; (2) User B is offline and edits record #100; (3) User C is offline and deletes record #100; (4) User C goes online (presumably, record #100 should get deleted on the server); (5) Users A and B go online, but the records they edited no longer exist. All sorts of scenarios similar to the above can come up. How is this generally handled? I plan to use MySQL, but am wondering if it's not appropriate for such a problem.
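
    One common way to make such conflicts detectable in MySQL, sketched with an assumed records table: keep a version stamp and a tombstone flag instead of physically deleting rows, so the server can tell a stale edit from a fresh one when a device reconnects.

        -- hedged sketch: soft deletes plus version stamps on a hypothetical 'records' table
        ALTER TABLE records ADD COLUMN version BIGINT NOT NULL DEFAULT 0;
        ALTER TABLE records ADD COLUMN deleted TINYINT(1) NOT NULL DEFAULT 0;
        ALTER TABLE records ADD COLUMN modified_at TIMESTAMP NOT NULL
            DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;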

    Read the article
