Search Results

Search found 60846 results on 2434 pages for 'spring data mongodb'.

  • Web services, Java EE, Spring, DB integration project ideas - maybe data mining related?

    - by saral jain
    I am a graduate Computer Science student (data mining and machine learning) with good exposure to core Java (3 years). I have read up on a bunch of material on the following topics: design patterns; Java EE web services (SOAP and REST); Spring and Hibernate; and Java concurrency, including advanced features like tasks and executors. I would now like to do a project combining this stuff (over my free time, of course) to get a better understanding of these things and to build an end-to-end piece of software (and to learn best design principles, plus SVN and Maven, along the way). Any good project ideas would be really appreciated. I just want to build this stuff to learn, so I don't really mind re-inventing the wheel. Also, anything related to data mining would be an added bonus, as it fits with my research, but it is absolutely not necessary, since this project is more about learning to do large-scale software development.

    Read the article

  • Move data from other user accounts into my user account

    - by user118136
    I had problems with my Compiz settings and ended up creating multiple accounts. Now I want to transfer the files from all the deleted users into my current account. Some data I cannot copy because I do not have read permission. If I type "sudo nautilus" in a terminal I can read them, but the copied data is then available only to superusers, and I have to change the permissions for each file and each folder by hand. How can I copy the files without superuser rights, or how can I change the permissions for a selected folder and all the files and folders inside it?

    Read the article

  • I want a trivial example of where MongoDB can scale but a relational database will have trouble

    - by Ryan Weir
    I'm just learning to use MongoDB, and when discussing it with other programmers I would like a quick example of why NoSQL can be a good choice compared to a traditional RDBMS; however, the scenarios I come up with, and those I can find online, seem pretty contrived. E.g., a blog with lots of traffic could be represented relationally, but it will require some performance tuning and joins across tables (assuming full normalization), whereas MongoDB would allow direct retrieval from one collection to the same effect. But the response I'm getting from other programmers is "why not just keep it relational and then add some trivial caching later?" Does anybody have a less contrived example where MongoDB will really shine and a relational DB will fall over much more quickly? The smaller the project/system the better, because it leaves less room for disagreement. Something around the complexity of the blog example would be really useful. Thanks.
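
    To make the blog example concrete, here is a minimal sketch using the classic MongoDB Java driver (database, collection, and field names are all hypothetical). The point is that a post and its comments live in one document, so rendering the page is a single findOne, where the fully normalized relational design needs joins across posts, comments, and users:

        import com.mongodb.*;
        import java.util.Arrays;

        public class EmbeddedPostExample {
            public static void main(String[] args) throws Exception {
                DB db = new Mongo("localhost", 27017).getDB("blog");
                DBCollection posts = db.getCollection("posts");

                // One document holds the post and all of its comments -- no joins.
                DBObject post = new BasicDBObject("slug", "hello-world")
                        .append("title", "Hello World")
                        .append("comments", Arrays.asList(
                                new BasicDBObject("author", "alice").append("text", "First!"),
                                new BasicDBObject("author", "bob").append("text", "Nice post")));
                posts.insert(post);

                // The whole page comes back in a single round trip.
                System.out.println(posts.findOne(new BasicDBObject("slug", "hello-world")));
            }
        }

    Where this arguably stops being contrived is volume: the embedded design keeps reads and writes on one document, which is exactly the spot where the normalized blog first needs the tuning and caching the other programmers mention.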

    Read the article

  • How to Set Up a MongoDB NoSQL Cluster Using Oracle Solaris Zones

    - by Orgad Kimchi
    This article starts with a brief overview of MongoDB and follows with an example of setting up a three-node MongoDB cluster using Oracle Solaris Zones. The benefits of using Oracle Solaris for a MongoDB cluster include:
    • You can add new MongoDB hosts to the cluster in minutes instead of hours using the zone cloning feature. Using Oracle Solaris Zones, you can easily scale out your MongoDB cluster.
    • In the case of a user error or software error, the Service Management Facility ensures the high availability of each cluster member and ensures that MongoDB replication failover occurs only as a last resort.
    • You can discover performance issues in minutes rather than days by using DTrace, which provides increased operating system observability. DTrace provides a holistic performance overview of the operating system and allows deep performance analysis through cooperation with the built-in MongoDB tools.
    • ZFS built-in compression provides optimized disk I/O utilization for better I/O performance.
    In the example presented in this article, all the MongoDB cluster building blocks are installed using Oracle Solaris Zones, the Service Management Facility, ZFS, and network virtualization technologies. Figure 1 shows the architecture.

    Read the article

  • What is the best cloud technology to use for MongoDB/GridFS database servers

    - by Nerian
    We are going to launch a service that will require between 1 and 2 GB of file storage per paid user. I am going to use GridFS, the MongoDB module that allows storing large files in the database. I am weighing the different options for hosting the database, but since I am inexperienced at deployment and this is my first time with MongoDB, I need your experience. Criteria:
    • I want to spend my time developing my core business, that is, my own application. I am a Ruby on Rails developer and do not like to mess with server configuration, so I would prefer a fully managed hosting solution; but I would like to know about any other option if you think it is worth it.
    • It should be able to scale, cloud style: pay as you go.
    • The lower the price, the better.
    So far I know of these services:
    • https://mongohq.com/pricing
    • https://mongomachine.com/pricing
    • https://mongolab.com/about/pricing/
    • http://cloudcontrol.com/add-ons/mongodb/
    They seem fine for common needs, that is, no file storage. But I am going to use GridFS, so size matters, and these services seem to scale quite poorly in price:
    • MongoHQ: the largest plan's storage is 20 GB, which seems like very little for GridFS.
    • MongoMachine: flat price, $2.50 per GB. I didn't find a limit. Seems like a good price compared to the others.
    • MongoLab: 3.984 GB max, which I don't think I will hit, so perfect; but at $8 per GB it is quite costly.
    • CloudControl: the largest plan is 20 GB. The custom service starts at 250€ plus some unspecified charge per GB.
    What is your experience with these services? Any downtimes? Other possibilities? Edit: added the meaning of GridFS.
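
    Since the sizing question turns on how files map onto MongoDB storage, it may help to see how little ceremony GridFS itself needs. A minimal sketch with the classic MongoDB Java driver, chosen purely for concreteness (database and file names are hypothetical; the Ruby driver exposes the same fs.files/fs.chunks model):

        import com.mongodb.*;
        import com.mongodb.gridfs.*;
        import java.io.File;

        public class GridFsSketch {
            public static void main(String[] args) throws Exception {
                DB db = new Mongo("localhost", 27017).getDB("filestore");

                // GridFS splits each file into chunk documents stored in the
                // fs.files / fs.chunks collections of this database.
                GridFS gridFs = new GridFS(db);

                GridFSInputFile upload = gridFs.createFile(new File("/tmp/user-video.mp4"));
                upload.setContentType("video/mp4");
                upload.save();

                // Retrieval streams the chunks back in order.
                GridFSDBFile download = gridFs.findOne("user-video.mp4");
                download.writeTo("/tmp/user-video-copy.mp4");
            }
        }

    The practical consequence for hosting: every gigabyte of user files becomes a gigabyte of MongoDB storage, so the per-GB pricing in the list above dominates the comparison.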

    Read the article

  • PHP-FPM processes holding onto MongoDB connection states

    - by Brendan
    For the relevant part of our server stack, we're running:
    • NGINX 1.2.3
    • PHP-FPM 5.3.10 with PECL mongo 1.2.12
    • MongoDB 2.0.7
    • CentOS 6.2
    We're getting some strange, but predictable, behavior when the MongoDB server goes away (crashes, gets killed, etc.), even with a try/catch block around the connection code, i.e.:
        try {
            $mdb = new Mongo('mongodb://localhost:27017');
        } catch (MongoConnectionException $e) {
            die($e->getMessage());
        }
        $db = $mdb->selectDB('collection_name');
    Depending on which PHP-FPM workers have connected to Mongo already, the connection state is cached, causing further exceptions to go unhandled because the $mdb connection handle can't be used. The troubling thing is that the try does not consistently fail for a considerable amount of time, up to 15 minutes later, when -- I assume -- the PHP-FPM processes die/respawn. Essentially, the behavior is that when you hit a worker that hasn't connected to Mongo yet, you get the die message above, and when you hit a worker that has, you get an unhandled exception from $mdb->selectDB('collection_name') because the catch does not run. When PHP is a single process, i.e. via Apache with mod_php, this behavior does not occur. Just for posterity, going back to Apache/mod_php is not an option for us at this time. Is there a way to fix this behavior? I don't want the connection state to be inconsistent between different PHP-FPM processes.

    Read the article

  • Spring-json problem in Liferay with Spring 2.5

    - by Jesus Benito
    Hi all, I am trying to use the spring-json 1.3.1 library in a project built on Liferay 5.1.2, which includes Spring 2.5. Following the instructions on the project website, I managed to make the request hit my controller, but at the moment of returning the JSON object back through the ModelAndView it fails with the following error:
        java.lang.IllegalArgumentException
            at com.liferay.portlet.MimeResponseImpl.setContentType(MimeResponseImpl.java:162)
    I have checked Liferay's source code: it verifies that the content type being set is in a hardcoded list, and if it is not, it throws an IllegalArgumentException, which is exactly what is happening. This is my view resolver configuration:
        <bean id="xmlFileViewResolver" class="org.springframework.web.servlet.view.XmlViewResolver">
            <property name="location" value="/WEB-INF/context/views.xml"/>
            <property name="order" value="1"/>
        </bean>
    My views.xml:
        <beans>
            <bean name="jsonView" class="org.springframework.web.servlet.view.json.JsonView"/>
        </beans>
    And my controller:
        @SuppressWarnings("unchecked")
        @Override
        public ModelAndView handleRenderRequest(RenderRequest arg0, RenderResponse arg1) throws Exception {
            Map model = new HashMap();
            model.put("firstname", "Peter");
            model.put("secondname", "Schmitt");
            return new ModelAndView("jsonView", model);
        }
    Any ideas?

    Read the article

  • Ruby on Rails vs. Grails vs. Spring Roo vs. Spring App

    - by lizdev
    Hi, I'm planning on writing a simple web application that will be used by lots of users (no more complicated than a simple bookmarking app), and I'm trying to decide which framework/language to use. I'm very experienced with Spring/Hibernate and Java in general, but new to both Grails and RoR (and Spring Roo). The only reason I'm considering RoR is that Java hosting is MUCH more expensive than RoR hosting (which is supported by almost any hosting vendor for $5 per month). Assuming price weren't an issue, which of the frameworks/languages mentioned above would you recommend to a Java developer (who knows how to configure Spring/Hibernate, etc.)? I'm afraid that with RoR I won't be able to easily support many users who are using the website at the same time. Thanks.

    Read the article

  • Spring Web Service Client Tutorial or Example Required

    - by Nirmal
    Hello all... I need to jump into a Spring Web Services project in which I am required to implement only the client side. I have already gone through Spring's client reference documentation, so I have an idea of the classes required to implement the client. My problem is that I have done some googling but haven't found a proper example of both client and server from which I could implement my client. If anybody could give me a link or tutorial with a proper example from which I can learn the client-side implementation, it would be greatly appreciated. Thanks in advance...
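
    Until someone points to a full tutorial, here is a minimal client-side sketch using Spring WS's WebServiceTemplate; the endpoint URI and XML payload are made-up placeholders (in practice both come from the service's WSDL):

        import org.springframework.ws.client.core.WebServiceTemplate;
        import org.springframework.xml.transform.StringResult;
        import org.springframework.xml.transform.StringSource;

        public class QuoteClient {
            public static void main(String[] args) {
                WebServiceTemplate template = new WebServiceTemplate();
                // Hypothetical endpoint -- replace with the real service address.
                template.setDefaultUri("http://example.com/ws/quotes");

                // Send a raw XML request and capture the raw XML response.
                StringSource request = new StringSource(
                        "<getQuote xmlns='http://example.com/quotes'><symbol>ORCL</symbol></getQuote>");
                StringResult response = new StringResult();
                template.sendSourceAndReceiveToResult(request, response);

                System.out.println(response);
            }
        }

    With a JAXB or Castor marshaller configured, marshalSendAndReceive replaces the raw Source/Result handling, which is usually the nicer long-term shape for a real client.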

    Read the article

  • PHP Programmer wanting to learn Spring

    - by grokker
    I'm a PHP programmer and I want to try creating a web app using the Spring framework. The problem is I'm clueless and don't know where to start. What tutorials/books/websites would you suggest I learn from? What's IoC? Do I use it alongside MVC? What components of the Spring framework should I use, and how do I know what to use? Are there web apps built with Spring that I could study? Thank you so much in advance! P.S. I used Struts (1) about a year ago.
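
    On the "What's IoC?" question: inversion of control just means your objects receive their collaborators instead of constructing them, and the Spring container does the wiring from configuration; it sits underneath Spring MVC rather than alongside it. A minimal sketch (all class and package names hypothetical):

        // The collaborator is expressed as an interface...
        public interface Mailer {
            void send(String to, String body);
        }

        // ...and the dependent class simply declares that it needs one.
        public class SignupService {
            private final Mailer mailer;

            public SignupService(Mailer mailer) { // injected, never new'd up here
                this.mailer = mailer;
            }

            public void register(String email) {
                mailer.send(email, "Welcome aboard!");
            }
        }

    And the container wiring, in the XML style most Spring tutorials of this era use:

        <bean id="mailer" class="com.example.SmtpMailer"/>
        <bean id="signupService" class="com.example.SignupService">
            <constructor-arg ref="mailer"/>
        </bean>

    The payoff is that swapping SmtpMailer for a test double needs no change to SignupService, which is the habit Spring's MVC, data access, and transaction pieces all build on.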

    Read the article

  • Tomcat not showing Spring Context initialization errors when running from Eclipse WTP

    - by SourceRebels
    Hi all, I'm working with Eclipse Galileo (WTP), Spring 2.5.6-SEC01, and Apache Tomcat 5.5.28. When I run my application from Eclipse, I'm able to see Tomcat's standard output and error in the Console view. But when there is a Spring initialization error (e.g., malformed Spring XML), I am not able to see the error message or the stack trace in the Console view. Has anyone run into a problem like this before? How did you solve it? Thanks in advance, I'm going mad :-) Edited: I am seeing all Tomcat startup messages and my System.out.println and System.err.println messages in the Eclipse Console. I also tried passing these two system properties to my Tomcat server:
        -Djava.util.logging.manager="org.apache.juli.ClassLoaderLogManager"
        -Djava.util.logging.config.file="C:\apache-tomcat-5.5.28\conf\logging.properties"
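
    One thing worth checking, offered as an assumption about the cause rather than a confirmed fix: Spring 2.5 logs context-initialization failures through commons-logging, so if log4j is on the webapp's classpath without any configuration, the stack trace can be swallowed before it ever reaches the Console view. A minimal log4j.properties (in src/ or WEB-INF/classes) that forces everything to stdout:

        log4j.rootLogger=INFO, stdout
        log4j.logger.org.springframework=DEBUG
        log4j.appender.stdout=org.apache.log4j.ConsoleAppender
        log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
        log4j.appender.stdout.layout.ConversionPattern=%d %-5p %c - %m%n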

    Read the article

  • Spring 3, Java EE 6

    - by arg20
    I'm learning Java EE 6 and have seen how much progress it has achieved in this release of the umbrella specification. EJB 3.1 is far easier and more lightweight than previous versions, and CDI is amazing. I'm not familiar with Spring, but I often read that it offered some neat features the Java EE stack didn't. Yet I also read now that Java EE has caught up and can fully compete with Spring. I know that choosing between them depends on many factors, but if we focus only on features, say the latest trends, etc.: which one has the leading edge? Can Spring 3 offer anything the Java EE 6 stack can't? Also, what about the Seam framework? From what I read, it's like Java EE 6 but with some additions?

    Read the article

  • How do I upgrade MongoDB 1.8 to 2.2 on Ubuntu?

    - by Alex Waters
    Following this guide: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/ I ended up with MongoDB 1.8 on my Ubuntu 10.04. I've just read the MongoDB 2.2 release notes on upgrading, and they say to just replace the binary. Would that be /usr/lib/mongodb/mongod? It looks like the 2.2 tar has several files I might need to copy over: mongo, mongod, mongoexport, mongodump, mongofiles, mongoimport, mongorestore, mongos, xulwrapper. Can I just copy and paste all of those over to replace the old versions of those files?

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building and studying software systems. He is a top-rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems, and most recently Windows Azure. When he is away from his laptop, you will find him taking deep dives into automobiles, pottery, rafting, photography, cooking, and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia's largest .NET user group. Here is the guest post by Niraj Bhatt. As the data in your applications grows, it's the database that usually becomes a bottleneck. It's hard to scale a relational DB, and the preferred approach for large-scale applications is to create separate databases for writes and reads, referred to as the transactional database and the reporting database. Though there are tools and techniques that allow you to create snapshots of your transactional database for reporting purposes, sometimes they don't quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, an effective schema (so that an information worker can self-service), historical data, better performance (flat data, no joins), etc. This is where the need for a data warehouse or an OLAP system arises. A key point to remember is that a data warehouse is mostly a relational database, built on top of the same concepts: tables, rows, columns, primary keys, foreign keys, and so on. Before we talk about how data warehouses are typically structured, let's understand the key components that create a data flow between OLTP systems and OLAP systems. There are three major areas:
    a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in this context could be how many customers, divided by geography, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables, and there are also out-of-the-box features provided by some databases; e.g., SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements.
    b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that runs periodically, takes these changes from the OLTP system, and loads them into the data warehouse. There are many tools out there that can help you fill this gap; SQL Server Integration Services happens to be one of them.
    c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question that remains is how the warehouse will record these changes. This structural concern in the data warehouse arena is often covered under something called Slowly Changing Dimensions (SCD). While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This creates multiple records for a given entity in a relational database, and data warehouses prefer having their own primary key, often known as a surrogate key.
    As I mentioned, a data warehouse is just a relational database, but the industry often attributes a specific schema style to data warehouses. These styles are the Star Schema and the Snowflake Schema. The motivation behind them is to create a flat database structure (as opposed to a normalized one) that is easy to understand and use, easy to query, and easy to slice and dice. A star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables hold these numbers and have references (foreign keys) to a set of tables that provide context around those facts. E.g., if you have recorded 10,000 USD in sales, that number would go in a sales fact table and could have foreign keys attached to it that refer to the sales agent responsible for the sale and to a time table containing the dates between which the sale was made. These agent and time tables are called dimensions, and they provide context to the numbers stored in fact tables. This schema structure, with the fact at the center surrounded by dimensions, is called a star schema. A similar structure in which the dimension tables are normalized is called a snowflake schema. This relational structure of facts and dimensions serves as input for another analysis structure called a cube. Though physically a cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it is a multidimensional structure where dimensions define the sides of the cube and facts define the content. Facts are often called measures inside a cube. Dimensions often tend to form a hierarchy; e.g., a product may be broken into categories, and categories in turn into individual items. Category and item are often referred to as levels and their constituents as members, with the overall structure called a hierarchy. Measures are rolled up along the dimensional hierarchy, and these rolled-up measures are called aggregates. Now this may seem like an overwhelming vocabulary to deal with, but don't worry, it will sink in as you start working with cubes and the rest. Let's look at a few other terms we run into while talking about data warehouses. ODS, or Operational Data Store, is a frequently misused term. There will be a few users in your organization who want to report on the most current data and can't afford to miss a single transaction in their report. Then there is another set of users who typically don't care how current the data is: mostly senior-level executives who are interested in trending, mining, forecasting, strategizing, etc., and don't care about any one specific transaction. This is where an ODS can come in handy. An ODS can use the same star schema and the OLAP cubes we saw earlier; the only difference is that the data inside an ODS is short-lived, i.e., kept for a few months, and the ODS syncs with the OLTP system every few minutes. The data warehouse can periodically sync with the ODS, either daily or weekly, depending on business drivers. Data marts are another frequently talked-about topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span an entire enterprise are normally too big to scope, build, manage, track, etc., hence they are often scaled down to something called a data mart that supports a specific segment of business like sales, marketing, or support. Data marts, too, are often designed using the star schema model discussed earlier. The industry is divided when it comes to the use of data marts. Some experts prefer having data marts along with a central data warehouse; the data warehouse here acts as an information staging and distribution hub, with the spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse, citing that most users want to report on detailed data. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Spring MVC application - URL gives No file found (404)

    - by user1700184
    I created a Spring MVC project. web.xml:
        <servlet>
            <servlet-name>mvc-dispatcher</servlet-name>
            <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
            <load-on-startup>1</load-on-startup>
        </servlet>
        <servlet-mapping>
            <servlet-name>mvc-dispatcher</servlet-name>
            <url-pattern>/soundmails</url-pattern>
        </servlet-mapping>
    mvc-dispatcher-servlet.xml:
        <?xml version="1.0"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:mvc="http://www.springframework.org/schema/mvc"
               xmlns:context="http://www.springframework.org/schema/context"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                                   http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd
                                   http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd">
            <mvc:annotation-driven />
            <context:component-scan base-package="somepkg.controllers" />
            <bean id="multipartResolver" class="org.gmr.web.multipart.GMultipartResolver">
                <property name="maxUploadSize" value="1048576" />
            </bean>
            <bean id="placeholderConfig" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
                <!-- property name="location">
                    <value>/WEB-INF/social.properties</value>
                </property -->
            </bean>
            <bean id="jacksonMessageConverter" class="org.springframework.http.converter.json.MappingJacksonHttpMessageConverter"></bean>
            <bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter">
                <property name="messageConverters">
                    <list>
                        <ref bean="jacksonMessageConverter"/>
                    </list>
                </property>
            </bean>
        </beans>
    The controller has this code (ProjectController.java):
        @Controller
        @RequestMapping("/soundmails")
        public class FileUploadController {
            @RequestMapping(value="/test", method=RequestMethod.GET)
            public @ResponseBody String test() {
                System.out.println("Hai");
                return "Hai";
            }
        }
    I am using Google App Engine on my local machine to test this. I am getting these lines in my log:
        [INFO] Oct 24, 2013 1:54:18 AM com.google.appengine.tools.development.LocalResourceFileServlet doGet
        [INFO] WARNING: No file found for: /soundmails/test
    I tried /soundmails/soundmails/test as well; that also gives the same error. I am using Spring 3.1.0.RELEASE. Can someone help me figure out what I am missing? /soundmails/test is giving a 404 error. Edit: I am unable to enable DEBUG logs for this; for some reason, it is not picking up the log level configured in logging.properties. But I observed something interesting:
    1) If I map the request to the empty string (value = ""):
        @RequestMapping(value="", method=RequestMethod.GET)
        public @ResponseBody String test() {
            System.out.println("Hai");
            return "Hai";
        }
    then when I try to access 127.0.0.1/soundmails, it works fine (returns the string "Hai").
    2) When I have value="/test":
        @RequestMapping(value="/test", method=RequestMethod.GET)
        public @ResponseBody String test() {
            System.out.println("Hai");
            return "Hai";
        }
    and I try to access 127.0.0.1/soundmails/test, it gives HTTP 404. This is weird.
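
    A plausible culprit, offered as a sketch rather than a confirmed diagnosis: <url-pattern>/soundmails</url-pattern> is an exact-match servlet mapping, so only the literal URL /soundmails is handed to the DispatcherServlet; /soundmails/test falls through to the container's default resource handling, which fits the "No file found" warning from LocalResourceFileServlet and also squares with observation (1) working. Mapping the servlet to a wildcard path and keeping the path on the method would look like this:

        <!-- web.xml: hand everything under /soundmails to Spring -->
        <servlet-mapping>
            <servlet-name>mvc-dispatcher</servlet-name>
            <url-pattern>/soundmails/*</url-pattern>
        </servlet-mapping>

        // With the servlet rooted at /soundmails, the path within the servlet
        // is just /test, so the class-level @RequestMapping("/soundmails")
        // would otherwise have to appear a second time in the URL.
        @Controller
        public class FileUploadController {
            @RequestMapping(value = "/test", method = RequestMethod.GET)
            public @ResponseBody String test() {
                return "Hai";
            }
        }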

    Read the article

  • Data Mining Resources

    - by Dejan Sarka
    There are many different types of analyses, each one with its own pros and cons. Relational reports have a predefined structure that end users cannot change. They are simple for end users to work with, and reports can use real-time data or snapshots of data to show the state of a report at specific points in time. One of the drawbacks is that report authoring is limited to IT pros and advanced users, and any kind of dynamic restructuring is very limited. If real-time data is used for a report, the report has a negative impact on the performance of the source system, and processing might be slow because the data comes from relational database management systems, which are not optimized for reporting only. If you create a semantic model of your data, your end users can create ad-hoc report structures; however, the development is more complex because a developer is needed to create these semantic models. For OLAP, you typically use specialized database management systems and get lightning-fast analyses. End users can use rich and thin clients to interactively change the structure of the report, typically graphically. However, the development of an OLAP system is often quite complex: it involves the preparation and maintenance of an enterprise data warehouse and OLAP cubes, and in order to exploit the possibility of real-time restructuring of reports, the users must be both active and educated. The data is usually stale, as it is loaded into data warehouses and OLAP cubes with a scheduled process. With data mining, a structure is not selected in advance; the process searches for the structure. As a result, data mining can give you the most valuable results, because you can discover patterns you did not expect. A data mining model's structure is limited only by the attributes that you use to train the model. One of the drawbacks is that a lot of knowledge is needed for a successful data mining project: end users have to understand the results, and subject matter experts and IT professionals need to understand the business problem thoroughly. The development might sometimes be even more complex than the development of OLAP cubes. Each type of analysis has its own place in an enterprise system, and SQL Server has tools for all kinds of analyses. However, data mining is the most advanced way of analyzing data; this is the "I" in BI. In order to get the most out of it, you need to learn quite a lot. In this blog post, I am gathering together resources for learning, including forthcoming events.
    Books:
    • Multiple authors: SQL Server MVP Deep Dives – I wrote an introductory data mining chapter there.
    • Erik Veerman, Teo Lachev, and Dejan Sarka: MCTS Self-Paced Training Kit (Exam 70-448): Microsoft SQL Server 2008 – Business Intelligence Development and Maintenance – you can find a good overview of a complete BI solution, including data mining, in this book.
    • Jamie MacLennan, ZhaoHui Tang, and Bogdan Crivat: Data Mining with Microsoft SQL Server 2008 – can't miss this book if you want to mine your data with SQL Server tools.
    • Michael Berry, Gordon Linoff: Mastering Data Mining: The Art and Science of Customer Relationship Management – data mining from both the business and technical perspectives.
    • Dorian Pyle: Data Preparation for Data Mining – an in-depth book about data preparation.
    • Thomas and Ronald Wonnacott: Introductory Statistics – if you thought you could get away without statistics, then you are not serious about data mining.
    • Jiawei Han and Micheline Kamber: Data Mining Concepts and Techniques – in-depth explanations of the most popular data mining algorithms.
    • Michael Berry and Gordon Linoff: Data Mining Techniques – another book that explains data mining algorithms, more from a business perspective.
    • Paolo Guidici: Applied Data Mining – a very mathematical book, only if you enjoy statistics and mathematics in general.
    Forthcoming presentations. I am presenting two data mining related sessions during the PASS Summit in Charlotte, NC:
    • Wednesday, October 16th, 2013 – Fraud Detection: Notes from the Field – I show how to use data mining for a specific business problem. The presentation is based on real-life projects.
    • Friday, October 18th – Excel 2013 Advanced Analytics – I focus on the Excel Data Mining Add-ins and how to use them together with Power Pivot and other add-ins. This is the most you can get out of Excel.
    Sinergija 2013, Belgrade, Serbia:
    • Tuesday, October 22nd – Excel 2013 Analytics to the Max – another presentation focusing on the most advanced analytics you can get in Excel.
    SQL Rally Amsterdam, Netherlands:
    • Thursday, November 7th – Advanced Analytics in Excel 2013 – and again I am presenting about data mining in Excel. Why three different titles for the same presentation? I don't know; I guess I forgot the name I had proposed every time, right after I sent the proposal.
    Courses: Data Mining with SQL Server 2012 – I wrote a 3-day course for SolidQ. If you are interested in this course, which I could also deliver as a shorter seminar, you can contact your closest SolidQ subsidiary or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, which are welcome to contact me as well. OK, now you know: no more excuses. Start learning data mining and get the most out of your data.

    Read the article

  • Master Data Services Employees Sample Model

    - by Davide Mauri
    I've been playing with Master Data Services quite a lot these last few days, and I'm also monitoring the web for all available resources on it. Today I found this freshly released sample on the MSDN Code Gallery: SQL Server Master Data Services Employee Sample Model http://code.msdn.microsoft.com/SSMDSEmployeeSample This sample shows how recursive hierarchies can be modeled in order to represent a typical organizational chart scenario where a self-relationship exists on the Employee entity.

    Read the article

  • Looking for Cutting-Edge Data Integration: 2010 Innovation Awards

    - by dain.hansen
    This year's Oracle Fusion Middleware Innovation Awards will honor customers and partners who are creatively using various products across Oracle Fusion Middleware. Brand new to this year's awards is a category for Data Integration. Think you have something unique and innovative built with one of our Oracle Data Integration products? We'd love to hear from you, so please submit today. The deadline for nominations is 5 p.m. PT, Friday, August 6th, 2010, and winning organizations will be notified by late August 2010. What you win:
    • A FREE pass to Oracle OpenWorld 2010 in San Francisco for select winners in each category.
    • Recognition by Oracle executives at the awards ceremony held during Oracle OpenWorld 2010 in San Francisco.
    • An Oracle Middleware Innovation Award winner plaque.
    • 1-3 meetings with Oracle executives during Oracle OpenWorld 2010.
    • A feature article placement in Oracle Magazine and placement in an Oracle press release.
    • A customer snapshot and video testimonial opportunity, to be hosted on oracle.com.
    • A podcast interview opportunity with a senior Oracle executive.

    Read the article

  • Data Integration 12c Raising the Big Data Roof at Oracle OpenWorld

    - by Tanu Sood
    Author: Dain Hansen, Director, Oracle. It was an exciting OpenWorld 2013 for us in the Data Integration track. Our theme this year was all about "being future ready", previewing one of our biggest releases this year: Oracle Data Integration 12c. Just this week we followed up on this preview by announcing the general availability of the 12c release of Oracle's key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. Mark Hurd's keynote on day one set the tone for the Data Integration sessions. Mark focused on big data analytics and changing consumer expectations. Real-time insight in particular is a key theme for Oracle overall and for the data integration products. In Mark Hurd's keynote we heard from key customers, such as Airbus and Thomson Reuters, how real-time analysis of operational data, including machine data, creates value and in some cases even saves lives. Thomas Kurian gave a deeper look into Oracle's big data and fast data solutions. In the lead Data Integration track session, Brad Adelberg, VP of Development, presented Oracle's Data Integration 12c product strategy based on key trends from the opening OpenWorld keynotes. Brad talked about how Oracle's data integration products address the new data integration requirements that evolved with cloud computing, big data, and changing consumer expectations, and how they set the key themes in our products' road map. Brad explained why and how fast time to value, high performance, and future-ready solutions are the top focus areas for product development. If you were not able to attend OpenWorld or this session, I recommend reading the white paper Five New Data Integration Requirements and How to Meet Them with Oracle Data Integration, which provides an in-depth look at how Oracle addresses the new trends in the DI market. Following Brad's session, Nick Wagner provided an in-depth review of Oracle GoldenGate's latest features and roadmap. Nick discussed how Oracle GoldenGate's tight integration with Oracle Database sets the product apart from the competition. We also heard that heterogeneity is still a major focus for GoldenGate's development and that there will be more news on that front with the next major release.
    After GoldenGate's product strategy session, Denis Gray from the PM team presented Oracle Data Integrator's product strategy, covering the latest and greatest on ODI. Another good session was delivered by long-time GoldenGate users Comcast. Jason Hurd and Amit Patel of Comcast talked about the various use cases in which they deploy Oracle GoldenGate throughout their enterprise, from database upgrades and feeding reporting systems to active-active database synchronization. The Comcast team shared many good tips on how to use GoldenGate both for zero-downtime upgrades and for active-active replication with conflict-management requirements. One of our other important goals for the Data Integration track at OpenWorld this year was hearing from our customers, and we ended day one on just that, with a wonderful ceremony for the Oracle Excellence Awards for Oracle Fusion Middleware Innovation, held in the Yerba Buena Center for the Arts. Congratulations to Royal Bank of Scotland and the Yalumba Wine Company, the winners in the Data Integration category. You can find more information on the award and the winners in our previous blog post: 2013 Oracle Excellence Awards for Fusion Middleware Innovation. Royal Bank of Scotland's Markets and International Banking division provides clients across the globe with seamless trading and competitive pricing, underpinned by a deep knowledge of risk management across the full spectrum of financial products. They handle millions of transactions daily to keep the lifeblood of their clients' businesses flowing, whether through payment management solutions or through bespoke trade finance solutions. Royal Bank of Scotland is leveraging Oracle GoldenGate and Oracle Data Integrator along with Oracle Business Intelligence Enterprise Edition and the Oracle Database for a variety of solutions. Mainly, Oracle GoldenGate and Oracle Data Integrator are used to feed their data warehouse, providing a real-time data integration solution that delivers transactional data to their analytics system in minutes, enabling improved decision making with timely, accurate data for their business users. Oracle Data Integrator's in-database transformation capabilities and its ability to integrate with Oracle GoldenGate for real-time data capture are the foundation of this implementation. The solution ensures that changes deployed on the operational system are available in the analytics system the same day, with 100% data quality guaranteed. Additionally, the solution has helped reduce their operational database size from 150 GB to 10 GB. Impressive! Now what if I told you this solution was built in 3 months and had a return on investment of less than 6 months? That's outstanding! The Yalumba Wine Company is situated in the Barossa Valley of Australia.
    It is the oldest family-owned winery in Australia, with a unique way of aging their wines in specially crafted 100-liter barrels. Did you know that "Yalumba" is Aboriginal for "all the land around"? The Yalumba Wine Company is growing rapidly and needed to introduce a more modern standard to its existing manufacturing processes to meet its globalization, time-to-market, and operational-efficiency objectives for product development. The Yalumba Wine Company worked with a partner, Bristlecone, to develop a unique solution whereby Oracle Data Integrator is leveraged to pull data from Salesforce.com and JD Edwards, in addition to their other pre-existing source systems, for consumption into their data warehouse. They have emphasized the overall ease of developing integration workflows with Oracle Data Integrator. The solution has brought better visibility for the business users, shorter data loading and transformation times for their data warehouse with rapid incorporation of new data sources, and a solid, future-proof foundation for their organization. Moving forward, they plan on leveraging more from Oracle's Data Integration portfolio. Terrific! In addition to these two customers, on Tuesday we featured many other important Oracle Data Integrator and Oracle GoldenGate customers. Tuesday's GoldenGate panel included Land O'Lakes, Smuckers, and Veolia Water. Besides giving us yummy nutrition and healthy water, these companies have another aspect in common: they all use GoldenGate to boost their ERP applications. Please read the recap by Irem Radzik. On Wednesday, the ODI panel included Barry Ralston and Ryan Weber of Infinity Insurance, Paul Stracke of Paychex Inc., and Ian Wall of Vertex Pharmaceuticals, for a session filled with interesting projects, use cases, and approaches to leveraging Oracle Data Integrator. Please read the recap by Sandrine Riley for more. Thanks to everyone who joined us, and we hope to stay connected! To hear more about our Data Integration 12c products, join us in an upcoming webcast. Follow us at www.twitter.com/ORCLGoldenGate or go to our website at www.oracle.com/goto/dataintegration.

    Read the article

  • Spring rejecting bean name, no URL paths specified

    - by richever
    I am trying to register an interceptor using an annotation-driven controller configuration. As far as I can tell, I've done everything correctly, but when I test the interceptor nothing happens. After looking in the logs I found the following:
        2010-04-04 20:06:18,231 DEBUG [main] support.AbstractAutowireCapableBeanFactory (AbstractAutowireCapableBeanFactory.java:452) - Finished creating instance of bean 'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0'
        2010-04-04 20:06:18,515 DEBUG [main] handler.AbstractDetectingUrlHandlerMapping (AbstractDetectingUrlHandlerMapping.java:86) - Rejected bean name 'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0': no URL paths identified
        2010-04-04 20:06:19,109 DEBUG [main] support.AbstractBeanFactory (AbstractBeanFactory.java:241) - Returning cached instance of singleton bean 'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0'
    Look at the second line of this log snippet. Is Spring rejecting the DefaultAnnotationHandlerMapping bean? And if so, could this be why my interceptor isn't working? Here is my application context:
        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:p="http://www.springframework.org/schema/p"
               xmlns:context="http://www.springframework.org/schema/context"
               xmlns:mvc="http://www.springframework.org/schema/mvc"
               xsi:schemaLocation="
                   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                   http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
                   http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd"
               default-autowire="byName">

            <!-- Configures the @Controller programming model -->
            <mvc:annotation-driven />

            <!-- Scan for annotations... -->
            <context:component-scan base-package="
                com.splash.web.controller,
                com.splash.web.service,
                com.splash.web.authentication"/>

            <bean id="authorizedUserInterceptor" class="com.splash.web.handler.AuthorizedUserInterceptor"/>

            <bean class="org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping">
                <property name="interceptors">
                    <list>
                        <ref bean="authorizedUserInterceptor"/>
                    </list>
                </property>
            </bean>
        </beans>
    Here is my interceptor (with the logger field declared; the original post used log without declaring it):
        package com.splash.web.handler;

        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.apache.log4j.Logger;
        import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

        public class AuthorizedUserInterceptor extends HandlerInterceptorAdapter {

            private static final Logger log = Logger.getLogger(AuthorizedUserInterceptor.class);

            @Override
            public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
                log.debug(">>> Operation intercepted...");
                return true;
            }
        }
    Does anyone see anything wrong with this? What does the log message above actually mean, and could it have any bearing on the interceptor not being called? Thanks!
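
    For what it's worth, a guess rather than a confirmed diagnosis: the "Rejected bean name" line looks like routine noise from BeanNameUrlHandlerMapping (a subclass of AbstractDetectingUrlHandlerMapping, which rejects any bean whose name is not a URL path), so the real issue may be that <mvc:annotation-driven /> registers its own DefaultAnnotationHandlerMapping, which is consulted ahead of the hand-declared one carrying the interceptor. Since Spring 3.0 the MVC namespace can attach interceptors to the mappings it creates, so one sketch of a fix is to drop the extra handler-mapping bean and use:

        <mvc:interceptors>
            <bean class="com.splash.web.handler.AuthorizedUserInterceptor"/>
        </mvc:interceptors>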

    Read the article

  • Spring Security - Persistent Remember Me Issue

    - by Taylor L
    I've been trying to track down why Spring Security isn't creating the remember-me cookie (SPRING_SECURITY_REMEMBER_ME_COOKIE). At first glance, the logs make it seem like the login is failing, but the login is actually successful in the sense that if I navigate to a page that requires authentication, I am not redirected back to the login page. However, the logs appear to be saying the login credentials are invalid. I'm using Spring 3.0.1, Spring Security 3.0.1, and Google App Engine 1.3.1. Any ideas as to what is going on?
        Mar 16, 2010 10:05:56 AM org.springframework.security.web.authentication.rememberme.PersistentTokenBasedRememberMeServices onLoginSuccess
        FINE: Creating new persistent login for user [email protected]
        Mar 16, 2010 10:10:07 AM org.springframework.security.web.authentication.rememberme.AbstractRememberMeServices loginFail
        FINE: Interactive login attempt was unsuccessful.
        Mar 16, 2010 10:10:07 AM org.springframework.security.web.authentication.rememberme.AbstractRememberMeServices cancelCookie
        FINE: Cancelling cookie
    Below is the relevant portion of applicationContext-security.xml:
        <http auto-config="false">
            <intercept-url pattern="/css/**" filters="none" />
            <intercept-url pattern="/img/**" filters="none" />
            <intercept-url pattern="/js/**" filters="none" />
            <intercept-url pattern="/app/admin/**" filters="none" />
            <intercept-url pattern="/app/login/**" filters="none" />
            <intercept-url pattern="/app/register/**" filters="none" />
            <intercept-url pattern="/app/error/**" filters="none" />
            <intercept-url pattern="/" filters="none" />
            <intercept-url pattern="/**" access="ROLE_USER" />
            <logout logout-success-url="/" />
            <form-login login-page="/app/login" default-target-url="/" authentication-failure-url="/app/login?login_error=1" />
            <session-management invalid-session-url="/app/login" />
            <remember-me services-ref="rememberMeServices" key="myKey" />
        </http>

        <authentication-manager alias="authenticationManager">
            <authentication-provider user-service-ref="userDetailsService">
                <password-encoder hash="sha-256" base64="true">
                    <salt-source user-property="username" />
                </password-encoder>
            </authentication-provider>
        </authentication-manager>

        <beans:bean id="userDetailsService" class="com.my.service.auth.UserDetailsServiceImpl" />

        <beans:bean id="rememberMeServices" class="org.springframework.security.web.authentication.rememberme.PersistentTokenBasedRememberMeServices">
            <beans:property name="userDetailsService" ref="userDetailsService" />
            <beans:property name="tokenRepository" ref="persistentTokenRepository" />
            <beans:property name="key" value="myKey" />
        </beans:bean>

        <beans:bean id="persistentTokenRepository" class="com.my.service.auth.PersistentTokenRepositoryImpl" />
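
    Offered as a hedged line of investigation rather than a definitive answer: with a custom PersistentTokenRepositoryImpl backing this on App Engine, the remember-me flow stands or falls with that repository, and a series lookup that returns null or stale data (the datastore's eventual consistency is a plausible suspect) can end with the cookie being cancelled. The contract the implementation must honor, per Spring Security 3's PersistentTokenRepository interface, sketched as a skeleton:

        import java.util.Date;
        import org.springframework.security.web.authentication.rememberme.PersistentRememberMeToken;
        import org.springframework.security.web.authentication.rememberme.PersistentTokenRepository;

        public class PersistentTokenRepositoryImpl implements PersistentTokenRepository {

            public void createNewToken(PersistentRememberMeToken token) {
                // Persist series, username, token value, and date; series is the key.
            }

            public void updateToken(String series, String tokenValue, Date lastUsed) {
                // Overwrite the token value and date for an existing series.
            }

            public PersistentRememberMeToken getTokenForSeries(String seriesId) {
                // Must return exactly what was stored; a null or stale read
                // here makes the remember-me machinery treat the cookie as invalid.
                return null; // placeholder in this sketch
            }

            public void removeUserTokens(String username) {
                // Delete every series belonging to the user (e.g., on logout).
            }
        }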

    Read the article

  • Spring MVC sample web app

    - by Don
    Hi, I'm looking for an example Spring MVC 2.5 web app that I can easily:
    • set up as a project in Eclipse
    • deploy to a local app server (using Ant/Maven)
    There are a couple of example applications included with the Spring distribution ('petclinic' and 'jpetstore'), but they don't provide any Eclipse project files (or a way to generate them). They also seem a bit complicated for my needs, e.g. they require a local database to be set up. Thanks, Don

    Read the article

  • Spring Roo vs. AppFuse for generating service/DAO layers

    - by cometta
    I am looking for feedback from experienced users of Spring Roo and AppFuse. Which do you think does a better job of reverse engineering database tables and generating a service layer, DAO layer, and JPA entities? If I am not mistaken, Spring Roo currently cannot reverse engineer a database.

    Read the article
