Search Results

Search found 128 results on 6 pages for 'bidirectional'.

Page 4/6 | < Previous Page | 1 2 3 4 5 6  | Next Page >

  • What are typical server to client communication patterns?

    - by jagse
What are the usual patterns for bidirectional communication between a client and a server in a WLAN environment? How is it possible for the server to push data to a mobile client over WLAN after a connection has been established? Let's say I have a web service running on a server and the mobile clients in the WLAN can use this web service. Now the question is how the server can invoke methods at the client, or directly send data to the client. How is this usually handled? I would appreciate some links to read about this topic. Is this a common problem, or is it not that easy to solve? Cheers
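
    One common answer to the push question: have each client open a long-lived TCP connection that the server keeps and writes to whenever it has data, with HTTP long polling as the usual fallback when plain sockets aren't an option. Below is a minimal sketch of the socket variant; the port number and message format are illustrative assumptions, not anything from the question.

        import java.io.PrintWriter;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.util.List;
        import java.util.concurrent.CopyOnWriteArrayList;

        // Sketch: the server remembers every connected client and pushes to all of them.
        public class PushServer {
            private final List<PrintWriter> clients = new CopyOnWriteArrayList<PrintWriter>();

            public void acceptLoop() throws Exception {
                ServerSocket server = new ServerSocket(9000); // port is an assumption
                while (true) {
                    Socket s = server.accept(); // client connects once and stays connected
                    clients.add(new PrintWriter(s.getOutputStream(), true));
                }
            }

            // Called whenever the server has data for the clients.
            public void push(String message) {
                for (PrintWriter out : clients) {
                    out.println(message); // arrives without the client polling
                }
            }
        }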

    Read the article

  • Get the ID of a Child in a cascade="all" relationship, while adding it to a collection, in Hibernate

    - by Marco
Hi, I have two entities, "Parent" and "Child", linked through a bidirectional one-to-many relationship with the cascade attribute set to "all". When adding a Child object to the Parent's children collection using the code below, I can't get the ID of the persisted child until I commit the transaction:

        Parent p = (Parent) session.load(Parent.class, pid);
        Child c = new Child();
        p.addChild(c); // "c" has no ID yet (always zero)

    However, when I persist a child entity by explicitly calling the session.save() method, the ID is created and set immediately, even if the transaction hasn't been committed:

        Child c = new Child();
        session.save(c); // "c" has an ID

    Is there a way to get the ID of the child entity immediately without calling the session.save() method? Thanks
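
    For context, this behavior is expected: session.save() assigns a generated identifier immediately, while the cascade from the parent only produces the INSERT when the session flushes. A minimal sketch of the usual workaround, forcing the flush, assuming the Parent/Child entities and addChild() helper from the question:

        import org.hibernate.Session;

        public class ChildIdExample {
            // Parent, Child, and their accessors are the question's hypothetical entities.
            static Long addChildAndGetId(Session session, long pid) {
                Parent p = (Parent) session.load(Parent.class, pid);
                Child c = new Child();
                p.addChild(c);
                session.flush(); // runs the cascaded INSERT now, assigning the generated ID
                return c.getId();
            }
        }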

    Read the article

  • Hibernate Bi-Directional ManyToMany Updates with Second Level cache

    - by DD
I have a bidirectional many-to-many mapping between two classes:

        public class A {
            @ManyToMany(mappedBy="listA")
            private List<B> listB;
        }

        public class B {
            @ManyToMany
            private List<A> listA;
        }

    Now I save a listA into B: b.setListA(listA); This all works fine until I turn on second-level caching on the collection a.listB. Now, when I update the list in B, a.listB does not get updated and remains stale. How do you get around this? Thanks, Dwayne
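
    One hedged suggestion, since only the owning side (B.listA) is ever written to the database: the inverse collection a.listB, cached or not, only changes if you change it in memory as well, so keep both sides of the association in sync (and evict the collection region manually if stale entries remain). A sketch, with the getters assumed from the classes above:

        // Keep both sides of the bidirectional association consistent, so the
        // second-level cached collection never diverges from the in-memory model.
        void link(A a, B b) {
            b.getListA().add(a); // owning side: this is what gets persisted
            a.getListB().add(b); // inverse side: keeps a.listB and its cache entry current
        }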

    Read the article

  • Java/C++ communication via pipe on Windows

    - by Warlax
Hi, I have two separate programs, one in Java and one in C++, both running on Windows. We need to do bidirectional interprocess communication between the two. Up until now, we were using this awkward solution of writing to text files and reading them on the other side, where the producer would generate a .lock file when it's done writing and the consumer would remove that when it's done reading... like I said, awkward. If we were on *nix, we would use a pipe, with popen() on the C++ side and RandomAccessFile on the Java side. It seems to work well. What can we do on Windows? Can we use named pipes? Thank you.
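
    Windows named pipes do work for this: once the C++ process has created the pipe with CreateNamedPipe(), the Java side can open it like an ordinary file, much like the *nix RandomAccessFile trick. A minimal sketch of the Java end; the pipe name is an assumption:

        import java.io.RandomAccessFile;

        public class PipeClient {
            public static void main(String[] args) throws Exception {
                // \\.\pipe\demo must already exist, created by the C++ process.
                RandomAccessFile pipe = new RandomAccessFile("\\\\.\\pipe\\demo", "rw");
                pipe.write("hello from Java\n".getBytes("UTF-8"));
                byte[] buf = new byte[256];
                int n = pipe.read(buf); // blocks until the C++ side answers
                System.out.println(new String(buf, 0, n, "UTF-8"));
                pipe.close();
            }
        }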

    Read the article

  • Best canvas for drawing in wxPython?

    - by Pablo Rodriguez
I have to draw a graph of elements composing a topological model of a physical network. There would be nodes and arcs, and the latter could be unidirectional or bidirectional. I would like to capture the clicking events for the nodes and the arcs (to select the element and show its properties somewhere), and the dragging events for the nodes (to move them around) and arcs (to connect or disconnect elements). I've done some research and narrowed the alternatives down to OGL (Object Graphics Library) and FloatCanvas. I would rather not go down to the DrawingContext, but I haven't ruled it out if necessary. Which canvas option would you choose?

    Read the article

  • Connect two daemons in python

    - by Simon
What is the best way to connect two daemons in Python? I have daemons A and B, and I'd like to receive data generated by B in A's module (maybe bidirectionally). Both daemons support plugins, so I'd like to put the communication in plugins. What's the best and most cross-platform way to do that? I know a few mechanisms, from low-level solutions - shared memory (C/C++), Linux pipes, sockets (TCP/UDP), etc. - to high-level ones - queues (JMS, Rabbit), RPC. Both daemons will run on the same host, but obviously a better approach is to abstract from the connection type. What are the typical solutions/libraries in Python? I'm looking for an elegant and lightweight solution. I don't need an external server; just two processes talking to each other. What should I use in Python to do that?

    Read the article

  • How to check if an object is connected to another in Hibernate

    - by codevourer
Imagine two domain object classes, A and B. A has a bidirectional one-to-many relationship to B, and A is related to thousands of B. The relations must be unique; it's not possible to have a duplicate. To check if an instance of B is already connected to a given instance of A, we could perform an easy INNER JOIN, but this will only cover the already-persisted relations. What about the current transient relations?

        class A {
            @OneToMany
            private List<B> listOfB;
        }

    If we access the listOfB and perform a contains() check, this will lazily fetch all the connected instances of B from the datasource. I only want to validate them by their primary key. Is there an easy solution where I can ask things like "Is this instance of A connected with this instance of B?" without loading all this data into memory and performing a check based on collections?
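
    For the persisted part of the check, a count query by primary keys avoids initializing the lazy collection; transient, not-yet-flushed links still have to be checked in memory. A hedged sketch, assuming the A/B entities above each expose an id property:

        import org.hibernate.Session;

        public class LinkCheck {
            // True if the persisted association already links aId to bId.
            static boolean isLinked(Session session, long aId, long bId) {
                Long count = (Long) session.createQuery(
                        "select count(b.id) from A a join a.listOfB b"
                      + " where a.id = :aId and b.id = :bId")
                    .setParameter("aId", aId)
                    .setParameter("bId", bId)
                    .uniqueResult();
                return count != null && count > 0;
            }
        }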

    Read the article

  • Java Spotlight Episode 58: Peter Korn and Ofir Leitner on ME Accessibility

    - by Roger Brinkley
Interview with Peter Korn and Ofir Leitner on Mobile and Embedded Accessibility. Joining us this week on the Java All Star Developer Panel are Dalibor Topic, Java Free and Open Source Software Ambassador, and Alexis Moussine-Pouchkine, Java EE Developer Advocate. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News:
    - Announcing Oracle WebLogic 12c
    - Geronimo 3 beta - Another Apache project now compatible with Java EE 6
    - NetBeans 7.1 RC1 is out
    - JavaFX links of the week: JavaFX videos on Parleys: Nicolas Lorain's Introduction to JavaFX 2.0 from JavaOne 2011 & Richard Bair on JavaFX Architecture and Programming Model

    Events:
    - Dec 4, SOUJava Geek Bike Ride 2011, Sao Paulo
    - Dec 5-7, UKOUG, Birmingham, UK
    - Dec 6-8, Java One Brazil, Sao Paulo
    - Dec 9, UAIJUG, Uberlandia
    - Dec 9, CEJUG, Fortaleza/CE
    - Dec 10, GUJAVA, Florianopolis
    - Dec 10, ALJUG, Maceio/AL
    - Dec 11, Javaneiros, Campo Grande/MS
    - Dec 12, GOJAVA, Goiania/GO
    - Dec 13, RioJUG, Rio de Janeiro

    Feature interview: Peter Korn is Oracle's Accessibility Principal – their senior individual contributor on accessibility. He is also Technical Manager of the AEGIS project, leading an EC-funded €12.6m investment building accessibility into future mainstream ICT (FP7-ICT224348). Mr. Korn co-developed and co-implemented the Java Accessibility API, and developed the Java Access Bridge for Windows. He helped design the open source GNOME Accessibility architecture found on most modern UNIX and GNU/Linux systems, and consulted on accessibility support for OpenOffice.org, Firefox, Thunderbird, and other applications. Prior to Sun/Oracle, Peter co-developed the outSPOKEN for Windows screen reader. Mr. Korn represented Sun/Oracle on TEITAC for the Section 508/255 refresh, co-led the OASIS ODF Accessibility subcommittee, and sits on INCITS V2, where he is contributing to ISO 13066: defining AT-IT interoperability standards, including specifically the Java Accessibility API. Ofir Leitner is the architect of one of LWUIT's key features - the HTMLComponent, which allows rendering HTML within LWUIT applications and embedding web flows inside apps. Ofir is also responsible for LWUIT's bidirectional and RTL support and for the accessibility work that is being done these days in LWUIT.

    What's Cool:
    - Devoxx 2011 (Alexis)
    - EclipseCon Europe talk by Andrew Overholt: IcedTea & IcedTea-Web
    - Geek bike ride & Rio
    - 500 Twitter followers @JavaSpotlight

    Show Transcripts: Transcript for this show is available here when available.

    Read the article

  • Send raw data to USB parallel port after upgrading to 11.10

    - by zaphod
I have a laser cutter connected via a generic USB-to-parallel adapter. The laser cutter speaks HPGL, as it happens, but since this is a laser cutter and not a plotter, I usually want to generate the HPGL myself, since I care about the ordering, speed, and direction of cuts and so on. In previous versions of Ubuntu, I was able to print to the cutter by copying an HPGL file directly to the corresponding USB "lp" device. For example:

        cp foo.plt /dev/usblp1

    Well, I just upgraded to Ubuntu 11.10 oneiric, and I can't find any "lp" devices in /dev anymore. D'oh! What's the preferred way to send raw data to a parallel port in Ubuntu? I've tried System Settings > Printing > + Add, hoping that I might be able to associate my device with some kind of "raw printer" driver and print to it with a command like

        lp -d LaserCutter foo.plt

    But my USB-to-parallel adapter doesn't seem to show up in the list. What I do see are my HP Color LaserJet, two USB-to-serial adapters, "Enter URI", and "Network Printer". Meanwhile, over in /dev, I do see /dev/ttyUSB0 and /dev/ttyUSB1 devices for the 2 USB-to-serial adapters. I don't see anything obvious corresponding to the HP printer (which was /dev/usblp0 prior to the upgrade), except for generic USB stuff. For example, sudo find /dev | grep lp produces no output. I do seem to be able to print to the HP printer just fine, though. The printer setup GUI gives it a device URI starting with "hp:", which isn't much help for the parallel adapter. The CUPS administrator's guide makes it sound like I might need to feed it a device URI of the form parallel:/dev/SOMETHING, but of course if I had a /dev/SOMETHING I'd probably just go on writing to it directly. Here's what dmesg says after I disconnect and reconnect the device from the USB port:

        [  924.722906] usb 1-1.1.4: USB disconnect, device number 7
        [  959.993002] usb 1-1.1.4: new full speed USB device number 8 using ehci_hcd

    And here's how it shows up in lsusb -v:

        Bus 001 Device 008: ID 1a86:7584 QinHeng Electronics CH340S
        Device Descriptor:
          bLength                18
          bDescriptorType         1
          bcdUSB               1.10
          bDeviceClass            0 (Defined at Interface level)
          bDeviceSubClass         0
          bDeviceProtocol         0
          bMaxPacketSize0         8
          idVendor           0x1a86 QinHeng Electronics
          idProduct          0x7584 CH340S
          bcdDevice            2.52
          iManufacturer           0
          iProduct                2 USB2.0-Print
          iSerial                 0
          bNumConfigurations      1
          Configuration Descriptor:
            bLength                 9
            bDescriptorType         2
            wTotalLength           32
            bNumInterfaces          1
            bConfigurationValue     1
            iConfiguration          0
            bmAttributes         0x80 (Bus Powered)
            MaxPower             96mA
            Interface Descriptor:
              bLength                 9
              bDescriptorType         4
              bInterfaceNumber        0
              bAlternateSetting       0
              bNumEndpoints           2
              bInterfaceClass         7 Printer
              bInterfaceSubClass      1 Printer
              bInterfaceProtocol      2 Bidirectional
              iInterface              0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x82 EP 2 IN
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0020 1x 32 bytes
                bInterval               0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x02 EP 2 OUT
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0020 1x 32 bytes
                bInterval               0
        Device Status:     0x0000 (Bus Powered)

    Read the article

  • Maximum Availability with Oracle GoldenGate

    - by Irem Radzik
Oracle Database offers a variety of built-in and optional products for maximum availability, and it is well known for its robust high availability and disaster recovery solutions. With its heterogeneous, real-time, transactional data movement capabilities, Oracle GoldenGate is a key part of the Maximum Availability Architecture for Oracle Database. This week, on Thursday Dec. 13th, we will present in a live webcast how Oracle GoldenGate fits into Oracle Database Maximum Availability Architecture (MAA). Joe Meeks from the Oracle Database High Availability team will discuss how Oracle GoldenGate complements other key products within MAA, such as Active Data Guard. Nick Wagner from the GoldenGate PM team will present how to upgrade to the latest Oracle Database release without any downtime. Nick will also cover two new features of Oracle GoldenGate 11gR2: Integrated Capture for Oracle Database and Automated Conflict Detection and Resolution. Nick will provide an in-depth review of these new features with examples. Oracle GoldenGate also offers maximum availability for non-Oracle databases, such as HP NonStop, SQL Server, DB2 (LUW, iSeries, or zSeries) and more. The same robust, reliable, real-time, bidirectional data movement capabilities apply to all supported databases. I'd like to invite you to join us on Thursday Dec. 13th 10am PT/1pm ET to hear from the product experts on how to use GoldenGate for maximizing database availability and to ask your questions. You can find the registration link below. Webcast: Maximum Availability with Oracle GoldenGate, Thursday Dec. 13th 10am PT/1pm ET. Looking forward to another great webcast with lots of interaction with the audience.

    Read the article

  • Parallel port blocking

    - by asalamon74
I have a legacy Java program which handles a special card printer by sending binary data to the LPT1 port (no printer driver is involved; the Java program creates the binary stream). The program was working correctly with the client's old computer: the Java program sent all the bytes to the printer, and after sending the last byte the program was not blocked. It took another minute to finish the card printing, but the user was able to continue working with the program. After changing the client's computer (but not the printer, or the Java program), the program does not finish the task until the card is ready; it is blocked until the last second. It seems to me that LPT1 behaves differently now than it did before. Is it possible to change this in Windows? I've checked the BIOS for parallel port settings: the parallel port is set to EPP+ECP (but I also tried the other two options: Bidirectional, Output only). Maybe some kind of parallel port buffer is too small? How can I increase it?
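
    One application-side workaround, if the new machine's port driver only returns from the write once the device is done: hand the bytes to a background thread so the user can keep working while the write blocks. This is a sketch under that assumption; the port name and error handling are illustrative.

        import java.io.FileOutputStream;
        import java.io.OutputStream;

        public class LptWriter {
            // Writes raw printer data without blocking the calling thread.
            public static void printAsync(final byte[] data) {
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        try {
                            OutputStream lpt = new FileOutputStream("LPT1"); // legacy port name
                            lpt.write(data); // may block until the card is finished
                            lpt.close();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
                t.setDaemon(true);
                t.start();
            }
        }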

    Read the article

  • How to troubleshoot connectivity when curl gets an *empty response*

    - by chad
I want to know how to proceed in troubleshooting why a curl request to a webserver doesn't work. I'm not looking for help that would be dependent upon my environment; I just want to know how to collect information about exactly what part of the communication is failing, port numbers, etc.

        chad-integration:~ # curl -v 111.222.159.30
        * About to connect() to 111.222.159.30 port 80 (#0)
        *   Trying 111.222.159.30... connected
        * Connected to 111.222.159.30 (111.222.159.30) port 80 (#0)
        > GET / HTTP/1.1
        > User-Agent: curl/7.19.0 (x86_64-suse-linux-gnu) libcurl/7.19.0 OpenSSL/0.9.8h zlib/1.2.3 libidn/1.10
        > Host: 111.222.159.30
        > Accept: */*
        * Empty reply from server
        * Connection #0 to host 111.222.159.30 left intact
        curl: (52) Empty reply from server
        * Closing connection #0

    So, I understand that an empty response means that curl didn't get any response from the server. No problem, that's precisely what I'm trying to figure out. But what more specific info can I derive from cURL here? It was able to successfully "connect", so doesn't that involve some bidirectional communication? If so, then why does the response not come also? Note, I've verified my service is up and returning responses. Note, I'm a bit green at this level of networking, so feel free to provide some general orientation material.

    Read the article

  • Apache proxy: Why is one vhost returning Forbidden while the other one works?

    - by Stefan Majewsky
I have a Java application that needs to talk to another intranet website using HTTPS in both directions. After fighting with Java's SSL implementations for some time, I gave up on that, and have now set up an Apache that's supposed to act as a bidirectional reverse proxy:

        external app ---(HTTPS request)---> Apache ---(local HTTP request)---> Java app

    This direction works just fine; however, the other direction does not:

        Java app ---(local HTTP request)---> Apache ---(HTTPS request)---> external app

    This is the configuration for the vhost implementing the second proxy:

        Listen 127.0.0.1:8081
        <VirtualHost appgateway:8081>
            ServerName appgateway.local
            SSLProxyEngine on
            ProxyPass / https://externalapp.corp:443/
            ProxyPassReverse / https://externalapp.corp:443/
            ProxyRequests Off
            AllowEncodedSlashes On
            # we do not need to apply any more restrictions here, because we listened on
            # local connections only in the first place (see the Listen directive above)
            <Proxy https://externalapp.corp:443/*>
                Order deny,allow
                Allow from all
            </Proxy>
        </VirtualHost>

    A curl http://127.0.0.1:8081/ should serve the equivalent of https://externalapp.corp, but instead results in 403 Forbidden, with the following message in the Apache error log:

        [Wed Jun 04 08:57:19 2014] [error] [client 127.0.0.1] Directory index forbidden by Options directive: /srv/www/htdocs/

    This message completely puzzles me: Yes, I have not set up any permissions on the DocumentRoot of this vhost, but everything works fine for the other proxy direction, where I haven't either. For reference, here's the other vhost:

        Listen this_vm_hostname:443
        <VirtualHost javaapp:443>
            ServerName javaapp.corp
            SSLEngine on
            SSLProxyEngine on
            # not shown: SSLCipherSuite, SSLCertificateFile, SSLCertificateKeyFile
            SSLOptions +StdEnvVars
            ProxyPass / http://localhost:8080/
            ProxyPassReverse / http://localhost:8080/
            ProxyRequests Off
            AllowEncodedSlashes On
            # Local reverse proxy authorization override
            <Proxy http://localhost:8080/*>
                Order deny,allow
                Allow from all
            </Proxy>
        </VirtualHost>

    Read the article

  • How can I automatically synchronize a directory tree on multiple machines?

    - by Blacklight Shining
I have two Mac laptops and a Debian server, each with a directory that I would like to keep in sync between the three. The solution should meet the following criteria (in rough order of importance):

    - It must not use any third-party service (e.g. Dropbox, SugarSync, Google whatever). This does not include installing additional software (as long as it's free).
    - It must not require me to use specific directories or change my way of storing things. (Dropbox does this IIRC)
    - It must work in all directions (changes made on any machine should be pushed to the others)
    - All data sent must be encrypted (I have ssh keypairs set up already)
    - It must work even when not all machines are available (changes should be pushed to a machine when it comes back online)
    - It must work even when the directories on some machines are not available (they may be stored on disk images which will not always be mounted). This can be solved for Macs by using launchd to automatically launch and kill (or in some way change the behavior of) whatever daemon is used for syncing when the images are mounted and unmounted.
    - It must be immediate (using an event-based system, not a periodic one like cron)
    - It must be flexible (if more machines are added, I should be able to incorporate them easily)

    I also have some preferences that I would like to be fulfilled, but do not have to be:

    - It should notify me somehow if there are conflicts or other errors.
    - It should recognize symbolic and hard links and create corresponding ones.
    - It should allow me to create a list of exceptions (subdirectories which will not be synced at all).
    - It should not require me to set up port forwarding or otherwise reconfigure a network. This can be solved by using an ssh tunnel with reverse port forwarding.

    If you have a solution that meets some, but not all, of the criteria, please contribute it in the comments as it might be useful in some way, and it might be possible to meet some of the criteria separately. What I tried, and why it didn't work:

    - rsync and lsyncd do not support bidirectional synchronization
    - csync2 is designed for server clusters and does not appear to work with machines with dynamic IPs
    - DRBD (suggested by amotzg) involves installing a kernel module and does not appear to work on systems running OS X

    Read the article

  • Oracle Social Network Developer Challenge: TEAM Informatics

    - by Kellsey Ruppel
Originally posted by Jake Kuramoto on The Apps Lab blog. Here comes another Oracle Social Network Developer Challenge entry, this one courtesy of TEAM Informatics (@teaminformatics). As their name suggests, their entry was a true team effort, featuring the work of Jon Chartrand, Deepthi Sanikommu, Dmitry Shtulman, Raghavendra Joshi, and Daniel Stitely, with Wayne Boerger doing the presentation honors. Speaking of the presentation, Wayne's laptop wouldn't project onto the plasma we had in the OTN Lounge, but luckily, Noel (@noelportugal) had his iPad and VGA dongle in his backpack of goodies, so they were able to improvise by using the iPad camera to capture Wayne's demo and project the video to the plasma. Code will find a way. Anyway, TEAM built Do Over, an integration with Atlassian's JIRA, coincidentally something I've chatted with Rich (@rmanalan) about in the past. The basic idea is simple; integrate JIRA issues with Oracle Social Network to expand and centralize the conversation around issue resolution. In Dmitry's words: We were able to put together a team on fairly short notice and, after batting a few ideas around, decided to pursue an integration with JIRA, an issue and project tracking tool used in-house at TEAM. After getting to know WebCenter Social, we saw immediate benefits that a JIRA integration could bring, primarily due to the fact that JIRA only allows assignment of an issue to one person at a time. Integrating Social would allow collaboration and issue resolution to happen right from the JIRA Issue interface. TEAM tackled a very common pain point among developers, i.e. including everyone who needs to be involved in issue resolution in a single thread. If you've ever fixed bugs or participated in that process, you'll know that not everyone has access to the issue resolution system, which makes consolidating discussion time-consuming and fragmented. Why? Because we typically use email as the tool for collaboration. Oracle Social Network allows all parties involved to work in a single, private and secure conversation, and through its RESTful Public API, information from external systems like JIRA can be brought in for context. TEAM only had time to address half the solution, but given more time, I'm sure they would have made the integration bidirectional, allowing relevant commentary to be pushed back to JIRA, closing the loop. Here are some screenshots of their integration. When Oracle Social Network is released, TEAM will have something they use internally to work on issues, and maybe they'll even productize their work and add it to the Atlassian Marketplace so that other JIRA users can benefit from the combination of Oracle Social Network and JIRA. Thanks to everyone at TEAM for participating in our challenge. We hope they had a good experience. Look for the details of the other entries this week. Be sure to check out a full recap from Dmitry over on the TEAM blog.

    Read the article

  • SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

    - by Pinal Dave
These days, data integration is often a key aspect of business success. For business analysts it's very important to get integrated data from various sources, such as relational databases, cloud CRMs, etc., to make correct and successful decisions. There are various data integration solutions on the market, and today I will tell you about one of them – Skyvia. Skyvia is a cloud data integration service, which allows integrating data in cloud CRMs and different relational databases. It is a completely online solution and does not require anything except for a browser. Skyvia provides powerful ETL tools for data import, export, replication, and synchronization for SQL Server and other databases and cloud CRMs. You can use Skyvia data import tools to load data from various sources to SQL Server (and SQL Azure). Skyvia supports such cloud CRMs as Salesforce and Microsoft Dynamics CRM, and such databases as MySQL and PostgreSQL. You can even migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally, Skyvia supports import of CSV files, either uploaded manually or stored on cloud file storage services, such as Dropbox, Box, Google Drive, or FTP servers. When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool, you can synchronize SQL Server data with other databases and cloud CRMs. After performing the first synchronization, Skyvia tracks data changes in the synchronized data storages. In SQL Server databases (and other relational databases) it creates additional tracking tables and triggers. This allows synchronizing only the changed data. Skyvia also maps records to each other by their primary key values, so it does not require different sources to have the same primary key structure. It can still match the corresponding records without having to add any additional columns or change the data structure. The only requirement for synchronization is that primary keys must be autogenerated. With Skyvia it's not necessary for data to have the same structure in the integrated data storages. Skyvia supports powerful mapping mechanisms that allow synchronizing data with completely different structures. It provides support for complex mathematical and string expressions when mapping data, using lookups, etc. You may use data splitting – loading data from a single CSV file or source table to multiple related target tables. Or you may load data from several source CSV files or tables to several related target tables. In each case Skyvia preserves data relations; it builds the corresponding relations between the target data automatically. When you work with cloud CRM data often, native CRM data reporting and analysis tools may not be enough for you, and there is a vast set of professional data analysis and reporting tools available for SQL Server. With Skyvia you can quickly copy your cloud CRM data to an SQL Server database and apply the corresponding SQL Server tools to the data. In such cases you can use Skyvia data replication tools, which let you quickly copy cloud CRM data to SQL Server or other databases without customizing any mapping. You just need to specify the columns to copy data from; target database tables will be created automatically. Skyvia offers powerful filtering settings to replicate only the records you need. Skyvia also provides the capability to export data from SQL Server (including SQL Azure) and other databases and cloud CRMs to CSV files. These files can be either downloaded manually or loaded to cloud file storages or an FTP server. You can use export, for example, to back up SQL Azure data to Dropbox. Any data integration operation can be scheduled for automatic execution. Thus, you can automate your SQL Azure data backup or data synchronization – just configure it once, then schedule it, and benefit from automatic data integration with Skyvia. Currently, registration and use of Skyvia is completely free, so you can try it yourself and find out whether its data migration and integration tools suit you. Visit this link to register on Skyvia: https://app.skyvia.com/register Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Bridging the Gap in Cloud, Big Data, and Real-time

    - by Dain C. Hansen
With all the buzz around big data and cloud computing, it is easy to overlook one of your most precious commodities—your data. Today's businesses cannot stand still when it comes to data. Market success now depends on speed, volume, complexity, and keeping pace with the latest data integration breakthroughs. Are you up to speed with big data, cloud integration, real-time analytics? Join us in this three-part blog series where we'll look at each component in more detail. Meet us online on October 24th, where we'll take your questions about the issues you are facing in this brave new world of integration. Let's start first with Cloud. What happens with your data when you decide to implement a private cloud architecture? Or a public cloud? Data integration solutions play a vital role migrating data simply, efficiently, and reliably to the cloud; they are a necessary ingredient of any platform-as-a-service strategy because they support cloud deployments with data-layer application integration between on-premise and cloud environments of all kinds. For private cloud architectures, consolidation of your databases and data stores is an important step toward receiving the full benefits of cloud computing. Private cloud integration requires bidirectional replication between heterogeneous systems to allow you to perform data consolidation without interrupting your business operations. In addition, bulk loading and transforming data into and out of your private cloud is a crucial step for those companies moving to private cloud. There is also the need for managing data services as part of SOA/BPM solutions that enable agile application delivery and help build shared data services for organizations. But what about public cloud? If you have moved your data to a public cloud application, you may also need to connect your on-premise enterprise systems and the cloud environment by moving data in bulk or as real-time transactions across geographies. For both public and private cloud architectures, Oracle offers a complete and extensible set of integration options that span not only data integration but also service and process integration, security, and management. For those companies investing in Oracle Cloud, you can move your data through Oracle SOA Suite using REST APIs to Oracle Messaging Cloud Service—a new service that lets applications deployed in Oracle Cloud securely and reliably communicate over Java Messaging Service. As an example of loading and transforming data into other public clouds, Oracle Data Integrator supports a knowledge module for Salesforce.com—now available on AppExchange. Other third-party knowledge modules are being developed by customers and partners every day. To learn more about how to leverage Oracle's Data Integration products for Cloud, join us live: Data Integration Breakthroughs Webcast on October 24th 10 AM PST.

    Read the article

  • Oracle GoldenGate 11g Release 2 Launch Webcast Replay Available

    - by Irem Radzik
For those of you who missed the Oracle GoldenGate 11g Release 2 launch webcasts last week, the replay is now available from the following url: Harnessing the Power of the New Release of Oracle GoldenGate 11g. I would highly recommend watching the webcast to meet the many new features of the new release and hear the product management team respond to questions from the audience in a nice long Q&A section. In my blog last week I listed the media coverage for this new release. There is a new article published by ITJungle talking about Oracle GoldenGate's heterogeneity and support for DB2 for iSeries: Oracle Completes DB2/400 Support in Data Replication Tool. As mentioned in last week's blog, we received over 150 questions from the audience, and in this blog I'd like to continue posting some of the frequently asked questions and their answers:

    Question: What are the fundamental differences between classic data capture and integrated data capture? Do both use the redo logs in the source database?
    Answer: Yes, they both use redo logs. Classic capture parses the redo log data directly, whereas Integrated Capture lets the Oracle database parse the redo log record using an internal API.

    Question: Does the GoldenGate version need to match the Oracle Database version?
    Answer: No, they are not directly linked. Oracle GoldenGate 11g Release 2 supports Oracle Database version 10gR2 as well. For Oracle Database version 10gR1 and Oracle Database version 9i you will need GoldenGate 11g Release 1 or lower. And for Oracle Database 8i you need Oracle GoldenGate 10 or earlier versions.

    Question: If I already use Data Guard, do I need GoldenGate?
    Answer: Data Guard is designed as the best disaster recovery solution for Oracle Database. If you would like to implement a bidirectional Active-Active replication solution or need to move data between heterogeneous systems, you will need GoldenGate.

    Question: On compression and GoldenGate, if the source uses compression, is it required that the target also use compression?
    Answer: No, the source and target do not need to have the same compression settings.

    Question: Does GG support Advanced Security Option on the source database?
    Answer: Yes it does.

    Question: Can I use GoldenGate to upgrade the Oracle Database to 11g and do an OS migration at the same time?
    Answer: Yes, this is a very common project where GoldenGate can eliminate downtime, give flexibility to test the target as needed, and minimize risks with a fail-back option to the old environment. For more information on database upgrades please check out the following white papers: Best Practices for Migrating/Upgrading Oracle Database Using Oracle GoldenGate 11g; Zero-Downtime Database Upgrades Using Oracle GoldenGate.

    Question: Does GoldenGate create any trigger in the source database at table level or row level for real-time data integration?
    Answer: No, GoldenGate does not create triggers.

    Question: Can transformation be done after insert to the destination table, or does it need to be done before?
    Answer: It can happen in the Capture (Extract) process, in the Delivery (Replicat) process, or in the target database.

    For more resources on Oracle GoldenGate 11gR2 please check out our Oracle GoldenGate 11gR2 resource kit as well.

    Read the article

  • Manually iterating over a selection of XML elements (C#, XDocument)

    - by user316117
What is the “best practice” way of manually iterating (i.e., one at a time with a “next” button) over a set of XElements in my XDocument? Say I select the set of elements I want thusly:

        var elems = from XElement el in m_xDoc.Descendants()
                    where (el.Name.LocalName.ToString() == "q_a")
                    select el;

    I can use an IEnumerator to iterate over them, i.e., IEnumerator m_iter; but when I get to the end and want to wrap around to the beginning, calling Reset() on it throws a NotSupportedException. That’s because, as the Microsoft C# 2.0 Specification says under chapter 22, "Iterators": "Note that enumerator objects do not support the IEnumerator.Reset method. Invoking this method causes a System.NotSupportedException to be thrown." So what IS the right way of doing this? And what if I also want to have bidirectional iteration, i.e., a “back” button, too? Someone on a Microsoft discussion forum said I shouldn’t be using IEnumerable directly anyway. He said there was a way to do what I want with LINQ, but I didn’t understand what. Someone else suggested dumping the XElements into a List with ToList(), which I think would work, but I wasn’t sure it was “best practice”. Thanks in advance for any suggestions!

    Read the article

  • Self-reference entity in Hibernate

    - by Marco
Hi guys, I have an Action entity that can have other Action objects as children in a bidirectional one-to-many relationship. The problem is that Hibernate outputs the following exception: "Repeated column in mapping for collection: DbAction.childs column: actionId". Below is the code of the mapping:

        <?xml version="1.0"?>
        <!DOCTYPE hibernate-mapping PUBLIC
            "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
            "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
        <hibernate-mapping>
            <class name="DbAction" table="actions">
                <id name="actionId" type="short" />
                <property not-null="true" name="value" type="string" />
                <set name="childs" table="action_action" cascade="all-delete-orphan">
                    <key column="actionId" />
                    <many-to-many column="actionId" unique="true" class="DbAction" />
                </set>
                <join table="action_action" inverse="true" optional="false">
                    <key column="actionId" />
                    <many-to-one name="parentAction" column="actionId" not-null="true" class="DbAction" />
                </join>
            </class>
        </hibernate-mapping>
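
    The exception arises because both the <key> of the set and its <many-to-many> element map to the same actionId column, so the join table cannot tell parent from child. A hedged sketch of the more common self-referencing shape, which drops the action_action join table and puts a hypothetical parentId foreign-key column on the actions table itself:

        <set name="childs" inverse="true" cascade="all-delete-orphan">
            <key column="parentId" />
            <one-to-many class="DbAction" />
        </set>
        <many-to-one name="parentAction" column="parentId" class="DbAction" />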

    Read the article

  • Hibernate and parent/child relations

    - by Marco
Hi to all, I'm using Hibernate in a Java application, and I feel that something could be done better for the management of parent/child relationships. I have a complex set of entities that have various kinds of relationships between them (one-to-many, many-to-many, one-to-one, both unidirectional and bidirectional). Every time an entity is saved and it has a parent, to establish the relationship the parent has to add the child to its collection (considering a one-to-many relationship). For example:

        Parent p = (Parent) session.load(Parent.class, pid);
        Child c = new Child();
        c.setParent(p);
        p.getChildren().add(c);
        session.save(c);
        session.flush();

    In the same way, if I remove a child then I have to explicitly remove it from the parent collection too:

        Child c = (Child) session.load(Child.class, cid);
        session.delete(c);
        Parent p = (Parent) session.load(Parent.class, pid);
        p.getChildren().remove(c);
        session.flush();

    I was wondering if there are some best practices out there to do these jobs in a different way: when I save a child entity, automatically add it to the parent collection; if I remove a child, automatically update the parent collection by removing the child, etc. For example:

        Child c = new Child();
        c.setParent(p);
        session.save(c); // Automatically update the parent collection
        session.flush();

    or

        Child c = (Child) session.load(Child.class, cid);
        session.delete(c); // Automatically updates its parents (could be more than one)
        session.flush();

    Anyway, it would not be difficult to implement this behaviour, but I was wondering if there exist some standard tools or well-known libraries that deal with this issue. And, if not, what are the reasons? Thanks
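
    The idiomatic answer is a convention rather than a library: give the parent small helper methods that maintain both sides of the association in one call. A sketch, assuming the Parent/Child classes above; with cascade="all-delete-orphan" on the collection, removeChild() followed by a flush also deletes the orphaned row without an explicit session.delete().

        import java.util.HashSet;
        import java.util.Set;

        public class Parent {
            private Set<Child> children = new HashSet<Child>();

            // Single entry point, so callers cannot forget one of the two sides.
            public void addChild(Child c) {
                children.add(c);
                c.setParent(this);
            }

            public void removeChild(Child c) {
                children.remove(c);
                c.setParent(null);
            }

            public Set<Child> getChildren() { return children; }
        }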

    Read the article

  • Hibernate Bi-Directional many-to-many mapping advice!

    - by Rob
Hi all, I wondered if anyone might be able to help me out. I am trying to work out what to google for (or any other ideas!!). Basically I have a bidirectional many-to-many mapping between a user entity and a club entity (via a join table called userClubs). I now want to include a column in userClubs that represents the role, so that when I call user.getClubs() I can also work out what level of access they have. Is there a clever way to do this using Hibernate, or do I need to rethink the database structure? Thank you for any help (or just for reading this far!!)

    The user.hbm.xml looks a bit like:

        <set name="clubs" table="userClubs" cascade="save-update">
            <key column="user_ID"/>
            <many-to-many column="activity_ID" class="com.ActivityGB.client.domain.Activity"/>
        </set>

    the activity.hbm.xml part:

        <set name="members" inverse="true" table="userClubs" cascade="save-update">
            <key column="activity_ID"/>
            <many-to-many column="user_ID" class="com.ActivityGB.client.domain.User"/>
        </set>

    The current userClubs table contains the fields

        id | user_ID | activity_ID

    I would like to include in there

        id | user_ID | activity_ID | role

    and be able to access the role on both sides...
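
    The usual way to hang an extra column on a many-to-many is to promote the join table to an entity of its own, turning the relationship into two one-to-many associations. A hedged sketch; the Membership class name and its properties are hypothetical:

        <class name="com.ActivityGB.client.domain.Membership" table="userClubs">
            <id name="id" type="long"><generator class="native"/></id>
            <many-to-one name="user" column="user_ID"
                         class="com.ActivityGB.client.domain.User"/>
            <many-to-one name="activity" column="activity_ID"
                         class="com.ActivityGB.client.domain.Activity"/>
            <property name="role" type="string"/>
        </class>

    User and Activity would then each map a one-to-many set of Membership objects, so user.getMemberships() exposes both the club and the role.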

    Read the article

  • What's a good Java-based Master-Slave communication mechanism?

    - by plecong
I'm creating a Java application that requires master-slave communication between JVMs, possibly residing on the same physical machine. There will be a "master" server running inside a JEE application server (i.e. JBoss) that will have "slave" clients connect to it and dynamically register themselves for communication (that is, the master will not know the IP addresses/ports of the slaves, so they cannot be configured in advance). The master server acts as a controller that will dole work out to the slaves, and the slaves will periodically respond with notifications, so there would be bidirectional communication. I was originally thinking of RPC-based systems where each side would be a server, but it could get complicated, so I'd prefer a mechanism where there's an open socket and they talk back and forth. I'm looking for a communication mechanism that would be low-latency, where the messages would be mostly primitive types, so no serious serialization is necessary. Here's what I've looked at:

    - RMI
    - JMS: Built in to Java; the "slave" clients would connect to the existing ConnectionFactory in the application server.
    - JAX-WS/RS: Both master and slave would be servers exposing an RPC interface for bidirectional communication.
    - JGroups/Hazelcast: Use shared distributed data structures to facilitate communication.
    - Memcached/MongoDB: Use these as "queues" to facilitate communication, though the clients would have to poll, so there would be some latency.
    - Thrift: This does seem to keep a persistent connection, but I'm not sure how to integrate/embed a Thrift server into JBoss.
    - WebSocket/Raw Socket: This would work, but would require a lot more custom code than I'd like.

    Is there any technology I'm missing? Edit: I also looked at:

    - JMX: Have the client connect to JBoss' JMX server and receive JMX notifications for bidirectional comms.
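
    Of the options listed, JMS fits the constraints with the least custom code, since the slaves only need the JNDI address of the master's application server to register themselves. A hedged sketch of the slave side; the queue and JNDI names are assumptions:

        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.Destination;
        import javax.jms.Message;
        import javax.jms.MessageListener;
        import javax.jms.MessageProducer;
        import javax.jms.Session;
        import javax.naming.InitialContext;

        public class SlaveClient {
            public static void main(String[] args) throws Exception {
                InitialContext ctx = new InitialContext(); // jndi.properties points at JBoss
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
                Connection conn = cf.createConnection();
                final Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                final MessageProducer notify =
                        session.createProducer((Destination) ctx.lookup("queue/notifications"));
                // Registering is just subscribing: the master never needs a slave's address.
                session.createConsumer((Destination) ctx.lookup("queue/work"))
                       .setMessageListener(new MessageListener() {
                    public void onMessage(Message work) {
                        try {
                            // ...process the work item, then report back...
                            notify.send(session.createTextMessage("done"));
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
                conn.start(); // begin receiving work pushed by the master
            }
        }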

    Read the article

  • How can I evaluate the connectedness of my nodes?

    - by Travis Leleu
I've got a space of nodes that are all interconnected, based on a "similarity score". I would like to determine how "connected" a node is with the others. My purpose is to find nodes that are poorly connected, to make sure that the backlink from the other node is prioritized. Perhaps an example would help. I've got a web page that links to my other pages based on a similarity score. Suppose I have the pages A, B, C, ... A has a backlink from every other page, so it's very well connected. It also has links to all my other pages (each line in the graph is essentially bidirectional). B only has one backlink, from A. C has a link from A and D. I would like to make sure that the A-B link is prioritized over the A-C link (even if the similarity score between C and A is higher than between B and A). In short, I would like to evaluate which nodes are least and best connected, so that I can mangle the results to my means. I believe this is graph connectedness, but I'm at a loss to develop a (simple) algorithm that will help me here. Simply counting the backlinks to a node may be a starting point -- but then how do I take the next step, which is to properly weight the links on the original node (A, in the example above)?
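
    Counting backlinks is exactly the degree of each node, and the next step can be as simple as ranking a node's outgoing links by the target's degree, ascending, so the least connected target comes first. A sketch of that idea; the adjacency data mirrors the example and everything else is illustrative:

        import java.util.*;

        public class LinkPriority {
            public static void main(String[] args) {
                final Map<String, Set<String>> links = new HashMap<String, Set<String>>();
                links.put("A", new HashSet<String>(Arrays.asList("B", "C", "D")));
                links.put("B", new HashSet<String>(Arrays.asList("A")));
                links.put("C", new HashSet<String>(Arrays.asList("A", "D")));
                links.put("D", new HashSet<String>(Arrays.asList("A", "C")));

                // Rank A's link targets: fewer connections means higher priority.
                List<String> targets = new ArrayList<String>(links.get("A"));
                Collections.sort(targets, new Comparator<String>() {
                    public int compare(String x, String y) {
                        return links.get(x).size() - links.get(y).size();
                    }
                });
                System.out.println(targets); // B (degree 1) sorts ahead of C and D (degree 2)
            }
        }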

    Read the article
