Search Results

Search found 64715 results on 2589 pages for 'spring data rest'.

Page 68/2589 | < Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >

  • Tab Sweep - Jazoon aftermath, PaaS press, REST +WebSocket, ...

    - by alexismp
    Recent Tips and News on Java EE 6 & GlassFish: •The GlassFish Tale - Oracle Scene (Markus) • Notes from Jazoon 2011 (Marek) • Jazoon '11 presentations (Jazoon.com) • Enterprise Java upgrade geared to PaaS clouds (TechCentral.ie) • JavaOne 2011: Content review process and Tips for submissions (Arun) • REST + WebSocket applications? Why not using the Atmosphere Framework (Jeanfrancois) • Get your Java 7 screensaver! (Duke)

    Read the article

  • Master Data Management Implementation Styles

    - by david.butler(at)oracle.com
    In any Master Data Management solution deployment, one of the key decisions to be made is the choice of the MDM architecture. Gartner and other analysts describe several different Hub deployment styles, which must be supported by a best-of-breed MDM solution in order to guarantee the success of the deployment project.

    Registry Style: In a Registry Style MDM Hub, the various source systems publish their data and a subscribing Hub stores only the source system IDs, the Foreign Keys (record IDs on source systems) and the key data values needed for matching. The Hub runs the cleansing and matching algorithms and assigns unique global identifiers to the matched records, but does not send any data back to the source systems. The Registry Style MDM Hub uses data federation capabilities to build the "virtual" golden view of the master entity from the connected systems.

    Consolidation Style: The Consolidation Style MDM Hub has a physically instantiated, "golden" record stored in the central Hub. The authoring of the data remains distributed across the spoke systems and the master data can be updated based on events, but is not guaranteed to be up to date. The master data in this case is usually not used for transactions, but rather supports reporting; however, it can also be used for reference operationally.

    Coexistence Style: The Coexistence Style MDM Hub involves master data that's authored and stored in numerous spoke systems, but includes a physically instantiated golden record in the central Hub and harmonized master data across the application portfolio. The golden record is constructed in the same manner as in the consolidation style, and, in the operational world, Consolidation Style MDM Hubs often evolve into the Coexistence Style. The key difference is that in this architectural style the master data stored in the central MDM system is selectively published out to the subscribing spoke systems.

    Transaction Style: In this architecture, the Hub stores, enhances and maintains all the relevant (master) data attributes. It becomes the authoritative source of truth and publishes this valuable information back to the respective source systems. The Hub publishes and writes back the various data elements to the source systems after the linking, cleansing, matching and enriching algorithms have done their work. Upstream, transactional applications can read master data from the MDM Hub, and, potentially, all spoke systems subscribe to updates published from the central system in a form of harmonization. The Hub needs to support merging of master records. Security and visibility policies at the data attribute level need to be supported by the Transaction Style hub as well.

    Adaptive Transaction Style: This is similar to the Transaction Style, but additionally provides the capability to respond to diverse information and process requests across the enterprise. This style emerged most recently to address the limitations of the above approaches. With the Adaptive Transaction Style, the Hub is built as a platform for consolidating data from disparate third-party and internal sources and for serving unified master entity views to operational applications, analytical systems or both. This approach delivers a real-time Hub that has a reliable, persistent foundation of master reference and relationship data, along with all the history and lineage of data changes needed for audit and compliance tracking. On top of this persistent master data foundation, the Hub can dynamically aggregate transaction data on demand from different source systems to deliver the unified golden view to downstream systems. Data can also be accessed through batch interfaces, published to a message bus or served through a real-time services layer. New data sources can be readily added in this approach by extending the data model and by configuring the new source mappings and the survivorship rules, meaning that all legacy data hubs can be leveraged to contribute their records/rules into the new transaction hub. Finally, through rich user interfaces for data stewardship, it allows exception handling by business analysts to keep it current with business rules/practices while maintaining the reliability of best-of-breed master records.

    Confederation Style: In this architectural style, several Hubs are maintained at departmental and/or agency and/or territorial level, and each of them is connected to the other Hubs either directly or via a central Super-Hub. Each Domain-level Hub can be implemented using any of the previously described styles, but normally the Central Super-Hub is a Registry Style one. This is particularly important for Public Sector organizations, where most of the time it is practically or legally impossible to store in a single central hub all the relevant constituent information from all departments.

    Oracle MDM Solutions can be deployed according to any of the above MDM architectural styles, and have been specifically designed to fully support the Transaction and Adaptive Transaction styles. Oracle MDM Solutions provide strong data federation and integration capabilities which are key to enabling the use of the Confederated Hub as a possible architectural style approach. Don't lock yourself into a solution that cannot evolve with your needs. With Oracle's support for any type of deployment architecture, its ability to leverage the outstanding capabilities of the Oracle technology stack, and its open interfaces for non-Oracle technology stacks, Oracle MDM Solutions provide a low TCO and a quick ROI by enabling a phased implementation strategy.

    Read the article

  • Best practices for upgrading user data when updating versions of software

    - by Javy
    In my code I check the current version of the software on launch and compare it to the version stored in the user's data file(s). If the version is newer, then I call different methods to update the old data to the newer data version, if necessary. I usually have to make a new method to convert the data with each update that changes user data in some way, and I cannot remove the old ones in case there was someone who missed an update. So the app must be able to go through each method call and update the data until it is current. With larger data sets, this could be a problem. In addition, I recently had a brief discussion with another StackOverflow user about this, and he indicated he always appended a date stamp to the filename to manage data versions, although his reasoning as to why this was better than storing the version data in the file itself was unclear. Since I've rarely seen management of user data versions in the books I've read, I'm curious what the best practices are for naming user data files and for updating older data to newer versions.
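
    A minimal sketch of the sequential-migration idea described above (the class and field names are illustrative, not from the original post): each step knows only how to move the data one version forward, so a file that is several versions behind is upgraded by running the steps in order until the current version is reached.

        import java.util.Map;
        import java.util.TreeMap;
        import java.util.function.UnaryOperator;

        // Hypothetical user-data holder; a real app would carry its own model here.
        record UserData(int version, Map<String, String> fields) {
            UserData withVersion(int v) { return new UserData(v, fields); }
        }

        class UserDataMigrator {
            static final int CURRENT_VERSION = 4;

            // One entry per schema change; each step upgrades the data by exactly one version.
            private final Map<Integer, UnaryOperator<UserData>> steps = new TreeMap<>(Map.of(
                    1, d -> d.withVersion(2),   // a real step would also reshape the fields
                    2, d -> d.withVersion(3),
                    3, d -> d.withVersion(4)));

            UserData migrate(UserData data) {
                while (data.version() < CURRENT_VERSION) {
                    data = steps.get(data.version()).apply(data);
                }
                return data;
            }
        }

    Keeping the converters in a map keyed by source version makes it cheap to leave old steps in place for users who skipped several releases, which addresses the "cannot remove the old ones" concern at the cost of a little dead code.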

    Read the article

  • Dependency Management tool for REST endpoints

    - by ShaggyInjun
    I work in a REST-oriented environment. The number of endpoints is quite large and spans multiple applications. The dependencies between the endpoints are large in number as well and not very well planned. Applications have cyclic dependencies amongst each other. Unfortunately, there is no central location where all the endpoints are documented and declare their dependencies (the endpoints that they in turn call). Is there a tool that will help with such dependency management? I tried searching for a tool online, but, not knowing what such a thing would be called, I am unable to find anything. P.S. Google only helps those who know what they need help with. :(
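
    There may not be an off-the-shelf answer, but a starting point could be to have every application publish its own endpoint inventory and collect those centrally. A hedged sketch, assuming the services are Spring MVC applications (the class name is illustrative):

        import javax.annotation.PostConstruct;

        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.stereotype.Component;
        import org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping;

        @Component
        public class EndpointInventory {

            @Autowired
            private RequestMappingHandlerMapping handlerMapping;

            // Logs every URL pattern the application exposes and the controller method that serves it.
            @PostConstruct
            public void dump() {
                handlerMapping.getHandlerMethods().forEach((mapping, method) ->
                        System.out.println(mapping + " -> "
                                + method.getBeanType().getSimpleName() + "#" + method.getMethod().getName()));
            }
        }

    The outbound half of the graph (which endpoints each application calls) is harder; it usually has to come from convention, for example routing all outgoing calls through a single client class that can be inventoried the same way.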

    Read the article

  • How to tackle archived who-is personal data with opt-out?

    - by defaye
    As far as I understand it, it is possible to opt out (in the UK at least) of having your address details displayed in the who-is information of a domain for non-trading individuals. What I want to know is, after opting out, how do individuals combat archived data? Is there any enforcement of this? How many who-is websites are there which archive data, and what rights do we have to force them to remove that data without paying absurd fees? In the case of capitulating to these scoundrels, what point is there in paying for the removal of archived data if that data can presumably resurface on another who-is repository? In other words, what strategy is one supposed to take, besides being wiser after the fact?

    Read the article

  • Two divs, one fixed width, the other, the rest

    - by Shamil
    I've got two div containers. While one needs to be a specific width, I need to adjust things so that the other div takes up the rest of the space. Is there any way I can do this?

        <div class="left"></div>
        <div class="right"></div> // needs to be 250px

        .left {
          float: left;
          width: 83%;
          display: table-cell;
          vertical-align: middle;
          min-height: 50px;
          margin-right: 10px;
          overflow: auto;
        }

        .right {
          float: right;
          width: 16%;
          text-align: right;
          display: table-cell;
          vertical-align: middle;
          min-height: 50px;
          height: 100%;
          overflow: auto;
        }

    Thanks

    Read the article

  • java.lang.ClassNotFoundException: org.springframework.ui.ModelMap

    - by aelshereay
    I created a simple webapp using Tomcat 6, Spring 2.5.6 and Maven. The problem is that when I boot up Tomcat, I get the following errors:

        SEVERE: StandardWrapper.Throwable
        java.lang.NoClassDefFoundError: org/springframework/ui/ModelMap
        ...
        Caused by: java.lang.ClassNotFoundException: org.springframework.ui.ModelMap

    The ModelMap class does exist in spring-2.5.6.jar and spring-context-2.5.6.jar; I also have some other Spring jars. All of them are being deployed to Tomcat correctly; when I check the application WEB-INF (deployed to Tomcat), I find all those jars there! I have only one @Controller, which has a @RequestMapping("/home.htm") showForm(ModelMap model) method. My applicationContext is quite simple:

        <?xml version="1.0" encoding="utf-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:aop="http://www.springframework.org/schema/aop"
               xmlns:context="http://www.springframework.org/schema/context"
               xmlns:dwr="http://www.directwebremoting.org/schema/spring-dwr"
               xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                                   http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd
                                   http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd
                                   http://www.directwebremoting.org/schema/spring-dwr http://www.directwebremoting.org/schema/spring-dwr-3.0.xsd"
               default-autowire="byName">

            <context:component-scan base-package="org.myapp"/>

            <bean class="org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping"/>
            <bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter"/>

            <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
                <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/>
                <property name="prefix" value="/WEB-INF/view/"/>
                <property name="suffix" value=".jsp"/>
            </bean>
        </beans>
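
    As a purely illustrative diagnostic (the helper class is hypothetical, not from the original question), something like this can be dropped into any servlet or listener in the webapp to confirm whether ModelMap is actually visible to the webapp classloader at runtime, and where it is loaded from:

        import java.security.CodeSource;

        // Hypothetical helper: call ClasspathProbe.probe() from any servlet or context listener.
        public class ClasspathProbe {
            public static void probe() {
                try {
                    Class<?> c = Class.forName("org.springframework.ui.ModelMap");
                    CodeSource src = c.getProtectionDomain().getCodeSource();
                    System.out.println("ModelMap loaded from: "
                            + (src != null ? src.getLocation() : "<bootstrap>"));
                } catch (ClassNotFoundException e) {
                    System.out.println("ModelMap is not visible to this classloader: " + e);
                }
            }
        }

    If the class is reported missing even though the jar sits in WEB-INF/lib, common culprits are a corrupted jar or an older Spring jar elsewhere on Tomcat's classpath shadowing the webapp copy.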

    Read the article

  • For REST, how do I receive posted data using PHP?

    - by netrox
    I want to set up a tiny RESTful interface for my web services using PHP. The problem is that I have looked at frameworks and I cannot figure out how to receive the posted data without field names. For example, if a server posts data to my server, I cannot figure out how to get it without needing the post field (POST variables). Traditionally, with forms, people send post data with field names, such as this: curl_setopt($ch, CURLOPT_POSTFIELDS, "postfield=postvalue"); and I would use PHP code like this: $postvalue = $_POST['postfield']; to get the value of postfield. But since the server posting the data is not using a post field and is just sending XML, how do I get it without fields? How do I capture the XML? That's where I am lost.

    Read the article

  • Android programming: Authentication and data exchange with Java EE

    - by Konsumierer
    I have a Java application running in a Tomcat server using Spring, Hibernate, etc., and two web interfaces: one implemented in Tapestry 5 and the other using Flex with BlazeDS and Spring-BlazeDS. In my first Android application I would now like to log in to the server and retrieve some data. I'm wondering how I could achieve this in a secure way. First of all, I need to know which technology is best for retrieving data from the server, and how I can restrict access to only users that have been successfully authenticated. From what I have read so far, I would try to implement an HttpServlet on the server and make server calls via an HTTP client. In the servlet I could probably use the HttpSession to check if the request comes from an authenticated user. The data I would try to send serialized as JSON. Unfortunately, I've never done these things, and maybe I'm on the wrong track and there are more convenient solutions.
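
    A minimal server-side sketch of the HttpSession approach described above (the servlet name, session attribute and JSON layout are illustrative assumptions): the servlet rejects requests that do not belong to an authenticated session and otherwise returns JSON for the Android client to parse.

        import java.io.IOException;

        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import javax.servlet.http.HttpSession;

        public class DataServlet extends HttpServlet {

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                // Do not create a new session here; only reuse one established by the login step.
                HttpSession session = req.getSession(false);
                if (session == null || session.getAttribute("authenticatedUser") == null) {
                    resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
                    return;
                }
                resp.setContentType("application/json");
                resp.setCharacterEncoding("UTF-8");
                // Hand-rolled JSON for brevity; a real application would use a JSON library.
                resp.getWriter().write("{\"user\":\"" + session.getAttribute("authenticatedUser") + "\"}");
            }
        }

    On the Android side, the login call and the subsequent data calls have to share the same cookie store so the JSESSIONID survives across requests, and running everything over HTTPS keeps the session cookie from being intercepted.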

    Read the article

  • Chef: nested data bag data to template file returns "can't convert String into Integer"

    - by Dalho Park
    I'm creating a simple test recipe with a template and data bag. What I'm trying to do is create a config file from a data bag that has simple nested information, but I receive the error "can't convert String into Integer". Here are my settings files:

    1) recipes/default.rb

        data1 = data_bag_item( 'mytest', 'qa' )['test']
        data2 = data_bag_item( 'mytest', 'qa' )

        template "/opt/env/test.cfg" do
          source "test.erb"
          action :create_if_missing
          mode 0664
          owner "root"
          group "root"
          variables({
            :pepe1 => data1['part.name'],
            :pepe2 => data2['transport.tcp.ip2']
          })
        end

    2) my data bag named "mytest"

        $ knife data bag show mytest qa
        id:   qa
        test:
          part.name:          L12
          transport.tcp.ip:   111.111.111.111
          transport.tcp.port: 9199
          transport.tcp.ip2:  222.222.222.222

    3) template file test.erb

        part.name=<%= @pepe1 %>
        transport.tcp.binding=<%= @pepe2 %>

    The error is returned when I run chef-client on my server:

        [2013-06-24T19:50:38+00:00] DEBUG: filtered backtrace of compile error: /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `[]', /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `block in from_file', /var/chef/cache/cookbooks/config_test/recipes/default.rb:12:in `from_file'
        [2013-06-24T19:50:38+00:00] DEBUG: backtrace entry for compile error: '/var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `[]''
        [2013-06-24T19:50:38+00:00] DEBUG: Line number of compile error: '19'

        Recipe Compile Error in /var/chef/cache/cookbooks/config_test/recipes/default.rb

        TypeError
        can't convert String into Integer

        Cookbook Trace:
          /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `[]'
          /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `block in from_file'
          /var/chef/cache/cookbooks/config_test/recipes/default.rb:12:in `from_file'

        Relevant File Content:
        /var/chef/cache/cookbooks/config_test/recipes/default.rb:

          12: template "/opt/env/test.cfg" do
          13:   source "test.erb"
          14:   action :create_if_missing
          15:   mode 0664
          16:   owner "root"
          17:   group "root"
          18:   variables({
          19:     :pepe1 => data1['part.name'],
          20:     :pepe2 => data2['transport.tcp.ip2']
          21:   })
          22: end
          23:

    I tried many things, and if I comment out ":pepe1 => data1['part.name']," then :pepe2 => data2['transport.tcp.ip2'] works fine. Only the nested data "part.name" cannot be set to @pepe1. Does anyone know why I receive the errors? Thanks,

    Read the article

  • pure-ftpd debian, can't get www-data user working

    - by lynks
    I'm trying to add FTP access to the Apache web files; in the past I have done this with an ftpuser and group arrangement. This time I would like to make it possible to log in directly as www-data (the default Apache user on Debian) to make things a bit cleaner. I have checked and re-checked all the common issues:

        MinUID is set to 1 (www-data has uid 33)
        www-data has its shell set to /bin/bash in /etc/passwd
        PAMAuthentication is off
        UnixAuthentication is on

    I have restarted pure-ftpd using /etc/init.d/pure-ftpd restart. My resulting pure-ftpd invocation is:

        /usr/sbin/pure-ftpd -l unix -A -Y 1 -u 1 -E -O clf:/var/log/pure-ftpd/transfer.log -8 UTF-8 -B

    My syslog contains:

        Oct 7 19:46:40 Debian-60-squeeze-64 pure-ftpd: ([email protected]) [WARNING] Can't login as [www-data]: account disabled

    And my FTP client is giving me:

        530 Sorry, but I can't trust you

    Am I missing something obvious?

    Read the article

  • Data recovery on a corrupted 3TB disk

    - by Mark K Cowan
    Short version: I probably need software to run a deep-scan recovery (ideally on Linux) to find files on an NTFS filesystem. The file data is intact, but the references are no longer present. Analogous to recovering data from a "quick-formatted" partition. Hopefully there is a smarter way available than a deep scan, one which would recover filenames and possibly paths.

    Long version: I have a 3TB disk containing a load of backups. Windows 7 SP1 refused to detect the disk when plugged in directly via SATA, so I put it on a USB/SATA adaptor, which seemed to work at first. The SATA/USB adaptor probably does not support disks over 2.2TB though. Windows first asked me if I wanted to 'format' the disk, then later showed me most of the contents, but some folders were inaccessible. I stupidly decided to run a CHKDSK on my backup disk, which made the folders accessible but also left them empty. I connected this disk via SATA to my main PC (Arch Linux). I tried testdisk, ntfsundelete and ntfsfix --no-action (to look for diagnostically relevant faults; the disk was "OK" though), to no avail, as the file references in the tables had presumably been zeroed out by CHKDSK rather than removed via a typical journal'd deletion. If it is useful at all, a majority of the files that I want to recover are JPEG, Photoshop PSD, and MPEG-3/MPEG-4/AVI/MKV files. If worst comes to worst, I'll just design my own sector scanner and use some simple heuristic-driven analysis to recover raw binary blocks of data from the disk which appear to match the structures of the above file types. I am unfamiliar with the exact workings of NTFS but used to be proficient at recovering FAT32 systems with just a hex editor, so I can provide any useful diagnostic information if you let me know how to find it!

    My priorities, in ascending order of importance for choosing the accepted answer:

        Restores directory structure
        Recovers many filenames in addition to the file data
        Is free / very cheap
        Runs on Linux
        Recovers a majority of file data

    The last point is the most important, but the more of the higher points you match the more rep you'll probably get :)

    Read the article

  • Any "Magic Tricks" For Getting Data Back After Windows 7 Install

    - by user163757
    My old man installed Windows 7 without making a proper backup, and now realizes he left behind some important data. He did a true "clean install", so there is no Windows.old folder in the root directory. However, I believe the format performed on the hard drive was only a quick format, so I am hoping there is some chance of data recovery. I took his hard drive out, and have spent a majority of the weekend researching data recovery options. I paid $70 for the GetDataBack software, but have had little success with it. I can see all of the files I want to restore; however, they appear corrupt when I try to open them. With all that being said, does anyone know of a viable way to recover some of this data, or is it a lost cause altogether?

    Read the article

  • raid 0 data recovery?

    - by Fred
    Hi all, I have two identical Seagate 7200.9 500GB drives configured as a RAID 0 spanned disk in Windows. One of the drives has lost power and won't spin up at all. I know this normally means death for the data on both drives, but I have a cunning plan:

        DISK 1 - NO POWER, RAID 0 DISK
        DISK 2 - FULLY FUNCTIONAL RAID 0 DISK
        DISK 3 - FULLY FUNCTIONAL SPARE DISK

    Copy the working drive's (disk 2) data to a third 500GB disk (disk 3), remove the logic board from the working disk (disk 2) and swap it with the non-working logic board on the broken drive (disk 1), then hopefully recreate the RAID 0 with disk 1 and disk 3, just long enough to get the data off it. Hope this makes sense; here are my questions:

    Windows Disk Manager currently recognises disk 2 but won't let me access it in any way, so copying the data off it (or getting a disk image) can't be done in Windows. Does anyone know of any software (in Linux or self-booting) that would allow me to access this disk? Does anyone know of any software that will recreate the spanned drive from two disk images? Am I missing any key information that means I definitely shouldn't even bother starting this? I know it's a long shot anyway, but it's worth a try unless I definitely can't do it. The irritating thing is that I am sure it's a logic board failure on disk 1, as it simply won't power up at all, suddenly with no signs of life, so I am sure the data is intact! Any help would be really appreciated! Thanks

    Read the article

  • Data take on with Drupal 6

    - by Robert MacLean
    We are migrating our current intranet to Drupal 6 and there is a lot of data within the current system, which can be classified into:

        List data: general lists of fields. A common use is a phone list of the employees' phone numbers.
        Document repository: basically a web version of a file share for documents.

    I can easily get the data plus meta information out, but how do I bulk upload the two types of data into Drupal? Uploading the hundreds of thousands of items manually is just not acceptable.

    Read the article

  • Team City Backup from Rest API doesn't backup

    - by ChrisFletcher
    I'm using the REST API in TeamCity 6.5 to request that a backup be created, as follows: http://teamcity:8111/httpAuth/app/rest/server/backup?includeConfigs=true&includeDatabase=true&includeBuildLogs=true&fileName=filename as specified in the documentation here: http://confluence.jetbrains.net/display/TW/REST+API+Plugin#RESTAPIPlugin-DataBackup The problem is that it simply returns "Idle", as does the backup status call, and no backup appears. I can back up fine from the web interface, and I can also return a list of users from the REST API without issue. I'm starting to suspect there is some sort of permission or config option, but I can't find one.
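
    One thing worth checking (an assumption, since the excerpt does not show how the request is issued): triggering a backup is a state-changing call, so the endpoint generally needs to be invoked with POST; a plain GET on the same URL tends to report the current backup status, which would explain an "Idle" response with no backup. A minimal Java sketch of the POST, with host, credentials and file name as placeholders:

        import java.io.IOException;
        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;

        public class TriggerBackup {
            public static void main(String[] args) throws IOException {
                URL url = new URL("http://teamcity:8111/httpAuth/app/rest/server/backup"
                        + "?includeConfigs=true&includeDatabase=true&includeBuildLogs=true&fileName=filename");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");               // a GET on this URL reports status only
                String auth = Base64.getEncoder()
                        .encodeToString("user:password".getBytes(StandardCharsets.UTF_8));
                conn.setRequestProperty("Authorization", "Basic " + auth);
                conn.setDoOutput(true);
                conn.getOutputStream().close();              // empty request body

                System.out.println("HTTP " + conn.getResponseCode());
                try (InputStream in = conn.getInputStream()) {
                    System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
                }
            }
        }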

    Read the article

  • Summing up spreadsheet data when a column contains “#N/A”

    - by Doris
    I am using Google Spreadsheet to work up some historical stock data, and I use a Google function (=GOOGLEFINANCE(…)) to import the historical closing prices for a stock, then I work with that data further. But in the list of data generated from the GOOGLEFINANCE function, one of the amounts comes up as #N/A. I don't know why, but it happens for various symbols that I have tried. When I use a MAX function on the array, which includes the N/A line, the MAX function does not come up with anything but an N/A, so the N/A throws off any further functions. I thought I'd create a second column to the right of the imported data in which I could give it an IF function, something like IF((A1<0), "0", A1), with the expectation that it would return 0 if cell A1 is the N/A, and the cell value if it is not N/A. However, this still returns N/A. I also tried an ISBLANK function, but that resulted in the same N/A. Does anyone have any suggestions for a workaround to eliminate the N/A from an array of numbers that I am trying to work with?

    Read the article

  • Spring @Autowired messageSource working in Controller but not in other classes?

    - by Jayaprakash
    New updates: As I could not succeed in configuring messageSource through annotations, I attempted to configure messageSource injection through servlet-context.xml. I still have messageSource as null. Please let me know if you need any more specific info, and I will provide. Thanks for your help in advance. servlet-context.xml <beans:bean id="message" class="com.mycompany.myapp.domain.common.message.Message"> <beans:property name="messageSource" ref="messageSource" /> </beans:bean> Spring gives the below information message about spring initialization. INFO : org.springframework.context.annotation.ClassPathBeanDefinitionScanner - JSR-330 'javax.inject.Named' annotation found and supported for component scanning INFO : org.springframework.beans.factory.support.DefaultListableBeanFactory - Overriding bean definition for bean 'message': replacing [Generic bean: class [com.mycompany.myapp.domain.common.message.Message]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in file [C:\springsource\tc-server-developer-2.1.0.RELEASE\spring-insight-instance\wtpwebapps\myapp\WEB-INF\classes\com\mycompany\myapp\domain\common\message\Message.class]] with [Generic bean: class [com.mycompany.myapp.domain.common.message.Message]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in ServletContext resource [/WEB-INF/spring/appServlet/servlet-context.xml]] INFO : org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor - JSR-330 'javax.inject.Inject' annotation found and supported for autowiring INFO : org.springframework.beans.factory.support.DefaultListableBeanFactory - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@1c7caac5: defining beans [org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0,org.springframework.format.support.FormattingConversionServiceFactoryBean#0,org.springframework.validation.beanvalidation.LocalValidatorFactoryBean#0,org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter#0,org.springframework.web.servlet.handler.MappedInterceptor#0,org.springframework.web.servlet.mvc.HttpRequestHandlerAdapter,org.springframework.web.servlet.resource.ResourceHttpRequestHandler#0,org.springframework.web.servlet.handler.SimpleUrlHandlerMapping#0,xxxDao,message,xxxService,jsonDateSerializer,xxxController,homeController,org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,tilesViewResolver,tilesConfigurer,messageSource,org.springframework.web.servlet.handler.MappedInterceptor#1,localeResolver,org.springframework.web.servlet.view.ContentNegotiatingViewResolver#0,validator,resourceBundleLocator,messageInterpolator]; parent: org.springframework.beans.factory.support.DefaultListableBeanFactory@4f47af3 I have the below definition for message source in 3 classes. 
    In debug mode, I can see that in class xxxController, messageSource is initialized to org.springframework.context.support.ReloadableResourceBundleMessageSource. I have annotated the Message class with @Component and xxxHibernateDaoImpl with @Repository. I also included the context namespace definition in servlet-context.xml. But in the Message class and the xxxHibernateDaoImpl class, messageSource is still null. Why is Spring not initializing messageSource in the two other classes, though it initializes it correctly in the xxxController class?

        @Controller
        public class xxxController {
            @Autowired
            private ReloadableResourceBundleMessageSource messageSource;
        }

        @Component
        public class Message {
            @Autowired
            private ReloadableResourceBundleMessageSource messageSource;
        }

        @Repository("xxxDao")
        public class xxxHibernateDaoImpl {
            @Autowired
            private ReloadableResourceBundleMessageSource messageSource;
        }

        <beans:beans xmlns:context="http://www.springframework.org/schema/context"
                     xsi:schemaLocation="
                         http://www.springframework.org/schema/context
                         http://www.springframework.org/schema/context/spring-context-3.0.xsd">

            <beans:bean id="messageSource"
                        class="org.springframework.context.support.ReloadableResourceBundleMessageSource">
                <beans:property name="basename" value="/resources/messages/messages" />
            </beans:bean>

            <context:component-scan base-package="com.mycompany.myapp"/>
        </beans:beans>
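
    For context, @Autowired field injection only happens on instances that Spring itself creates and manages; a Message or DAO object constructed with new, or living in a different application context from the one that defines the messageSource bean, will keep a null field. Below is a minimal sketch of the usual pattern, autowiring the MessageSource interface rather than the concrete ReloadableResourceBundleMessageSource class (the names are illustrative):

        import java.util.Locale;

        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.context.MessageSource;
        import org.springframework.stereotype.Component;

        @Component
        public class Message {

            // Populated only when this bean is obtained from the Spring context
            // (component scan plus injection elsewhere), never via "new Message()".
            @Autowired
            private MessageSource messageSource;

            public String text(String code, Locale locale) {
                return messageSource.getMessage(code, null, locale);
            }
        }

    Depending on the interface also keeps the class working if the messageSource bean is later swapped for a different MessageSource implementation.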

    Read the article

  • TDWI World Conference Features Oracle and Big Data

    - by Mandy Ho
    Oracle is a Gold Sponsor at this year's TDWI World Conference Series, held at the Manchester Grand Hyatt in San Diego, California, July 31 to August 1. The theme of this event is Big Data Tipping Point: BI Strategies in the Era of Big Data. The conference features an educational look at how data is now being generated so quickly that organizations across all industries need new technologies to stay ahead - to understand customer behavior, detect fraud, improve processes and accelerate performance. Attendees will hear how the internet, social media and streaming data are fundamentally changing business intelligence and data warehousing. Big data is reaching critical mass - the tipping point. Oracle will be conducting the following evening workshop. To reserve your space, call 1.800.820.5592 ext 10775.

    Title: Integrating Big Data into Your Data Center (or A Big Data Reference Architecture)
    Date: Wed., August 1, 2012, at 7:00 p.m.
    Venue: Manchester Grand Hyatt, San Diego

    Weblogs, social media, smart meters, sensors and other devices generate high volumes of low-density information that isn't readily accessible in enterprise data warehouses and business intelligence applications today. But this data can have relevant business value, especially when analyzed alongside traditional information sources. In this session, we will outline a reference architecture for big data that will help you maximize the value of your big data implementation. You will learn:

        The key technologies in a big data architecture, and their specific use cases
        The integration points of the various technologies and how they fit into your existing IT environment
        How to effectively leverage analytical sandboxes for data discovery and agile development of data-driven solutions

    At the end of this session you will understand the reference architecture and have the tools to implement this architecture at your company.

    Presenter: Jean-Pierre Dijcks, Senior Principal Product Manager

    Don't miss our booth and the chance to meet with our Big Data experts on the exhibition floor at booth #306.

    Read the article

  • How do you encode Algebraic Data Types in a C#- or Java-like language?

    - by Jörg W Mittag
    There are some problems which are easily solved by Algebraic Data Types; for example, a List type can be very succinctly expressed as:

        data ConsList a = Empty | ConsCell a (ConsList a)

        consmap f Empty          = Empty
        consmap f (ConsCell a b) = ConsCell (f a) (consmap f b)

        l = ConsCell 1 (ConsCell 2 (ConsCell 3 Empty))
        consmap (+1) l

    This particular example is in Haskell, but it would be similar in other languages with native support for Algebraic Data Types. It turns out that there is an obvious mapping to OO-style subtyping: the datatype becomes an abstract base class and every data constructor becomes a concrete subclass. Here's an example in Scala:

        sealed abstract class ConsList[+T] {
          def map[U](f: T => U): ConsList[U]
        }

        object Empty extends ConsList[Nothing] {
          override def map[U](f: Nothing => U) = this
        }

        final class ConsCell[T](first: T, rest: ConsList[T]) extends ConsList[T] {
          override def map[U](f: T => U) = new ConsCell(f(first), rest.map(f))
        }

        val l = new ConsCell(1, new ConsCell(2, new ConsCell(3, Empty)))
        l.map(1 + _)

    The only thing needed beyond naive subclassing is a way to seal classes, i.e. a way to make it impossible to add subclasses to a hierarchy. How would you approach this problem in a language like C# or Java? The two stumbling blocks I found when trying to use Algebraic Data Types in C# were:

        I couldn't figure out what the bottom type is called in C# (i.e. I couldn't figure out what to put into class Empty : ConsList< ??? >)
        I couldn't figure out a way to seal ConsList so that no subclasses can be added to the hierarchy

    What would be the most idiomatic way to implement Algebraic Data Types in C# and/or Java? Or, if it isn't possible, what would be the idiomatic replacement?
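
    Since the question also asks about Java, here is a minimal sketch of one common encoding. It assumes Java 17 or later, whose sealed types directly address the second stumbling block; on older Java the usual workaround is an abstract class with a private constructor and the cases as nested subclasses. Java has no bottom type corresponding to Scala's Nothing, so Empty simply carries its own type parameter here:

        import java.util.function.Function;

        // ConsList is the datatype; Empty and ConsCell are its only two "data constructors".
        sealed interface ConsList<T> permits Empty, ConsCell {
            <U> ConsList<U> map(Function<T, U> f);
        }

        // No bottom type in Java, so the empty list is parameterized per element type.
        record Empty<T>() implements ConsList<T> {
            public <U> ConsList<U> map(Function<T, U> f) {
                return new Empty<>();
            }
        }

        record ConsCell<T>(T first, ConsList<T> rest) implements ConsList<T> {
            public <U> ConsList<U> map(Function<T, U> f) {
                return new ConsCell<>(f.apply(first), rest.map(f));
            }
        }

        class Demo {
            public static void main(String[] args) {
                ConsList<Integer> l =
                        new ConsCell<>(1, new ConsCell<>(2, new ConsCell<>(3, new Empty<>())));
                System.out.println(l.map(x -> x + 1));  // ConsCell[first=2, rest=ConsCell[...]]
            }
        }

    The generated record toString gives a readable nested representation, and the two cases can be consumed exhaustively with instanceof patterns (or switch patterns on newer releases).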

    Read the article

  • Why is my ServiceOperation method missing from my WCF Data Services client proxy code?

    - by Kev
    I have a simple WCF Data Services service and I want to expose a Service Operation as follows: [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class ConfigurationData : DataService<ProductRepository> { // This method is called only once to initialize service-wide policies. public static void InitializeService(IDataServiceConfiguration config) { config.SetEntitySetAccessRule("*", EntitySetRights.ReadMultiple | EntitySetRights.ReadSingle); config.SetServiceOperationAccessRule("*", ServiceOperationRights.All); config.UseVerboseErrors = true; } // This operation isn't getting generated client side [WebGet] public IQueryable<Product> GetProducts() { // Simple example for testing return (new ProductRepository()).Product; } Why isn't the GetProducts method visible when I add the service reference on the client?

    Read the article

  • How to Convert multiple sets of Data going from left to right to top to bottom the Pythonic way?

    - by ThinkCode
    Following is a sample of the sets of contacts for each company, going from left to right:

        ID Company ContactFirst1 ContactLast1 Title1 Email1             ContactFirst2 ContactLast2 Title2 Email2
        1  ABC     John          Doe          CEO    [email protected]  Steve         Bern          CIO    [email protected]

    How do I get them to go top to bottom, as shown?

        ID Company ContactFirst ContactLast Title Email
        1  ABC     John         Doe         CEO   [email protected]
        1  ABC     Steve        Bern        CIO   [email protected]

    I am hoping there is a Pythonic way of solving this task. Any pointers or samples are really appreciated!

    P.S.: In the actual file, there are 10 sets of contacts going from left to right and there are a few thousand such records. It is a CSV file, and I loaded it into MySQL to manipulate the data.

    Read the article

  • How to add data manually in core data entity

    - by pankaj
    Hi, I am working with Core Data for the first time. I have just created an entity and attributes for that entity. I want to add some data inside the entity (you could say I want to add data to a table); earlier, when I was using SQLite, I would add data using the terminal. But here in Core Data I am not able to find a place where I can manually add data. I just want to add data to the entity and display it in a UITableView. I have gone through the documentation of Core Data, but it does not explain how to add data manually, although it does explain how I can add it programmatically. I don't need to do it programmatically; I want to do it manually. Thanks in advance

    Read the article

  • Posting an image and textual based data to a wcf service

    - by James Hay
    I have a requirement to write a web service that allows me to post an image to a server along with some additional information about that image. I'm completely new to developing web services (normally client-side dev), so I'm a little stumped as to what I need to look into and try. How do you post binary data and plain text into a service? What RequestFormat should I use? It looks like my options are XML or JSON. Can I use either of these? Bit of a waffly question, but I just need some direction rather than a solution, as I can't seem to find much online.

    Read the article
