Search Results

Search found 27120 results on 1085 pages for 'link building'.


  • Serializing network messages

    - by mtsvetkov
    I am writing a network wrapper around boost::asio and was wondering what is a good and simple way to serialize my messages. I have a message factory which can take care of dispatching the data to the correct builder, but I want to know if there are any established solutions for getting the binary data on the sender side and consequently passing the data for deserialization on the receiver end.

    Some options I've explored are:

    - Passing a pointer to a char[] to the serialize/deserialize functions (for serialize to write to, and deserialize to read from), but it's difficult to enforce buffer size this way.
    - Building on that, I decided to have the serialize function return a boost::asio::mutable_buffer; however, ownership of the memory gets blurred between multiple classes, as the network wrapper needs to clean up the memory allocated by the message builder.
    - I have also seen solutions involving streambufs and stringstreams, but manipulating binary data in terms of its string representation is something I want to avoid. Is there some sort of binary stream I can use instead?

    What I am looking for is a solution (preferably using Boost libraries) that lets the message builder dictate the amount of memory allocated during serialization, and what that would look like in terms of passing the data around between the wrapper and the message factory/message builders.

    PS. Messages contain almost exclusively built-in types and PODs, and form a shallow but wide hierarchy for the sake of going through a factory.

    Note: a link to examples of using boost::serialization for something like this would be appreciated, as I'm having difficulty figuring out the relation between it and buffers.


  • Setting up a Google Analytics Campaign

    - by Ashfame
    I will be doing a bunch of things to give one of my projects (the main app) a big initial push, for which I will be building a few small Facebook apps that will help promote it. Traffic from these apps needs to be tracked individually. My main app will be posting on users' walls when they need to be notified; traffic from these posts needs to be tracked. Traffic from emails sent by the main app also needs to be tracked, broken down by type of email. I need to track all of these (and possibly a couple more), but I need to be sure that I build my campaign URLs correctly, as I won't get another chance to fix them. Correct me where I am wrong.

    For emails:
    Campaign Name: Launch
    Campaign Medium: Email
    Campaign Source: Type1 or Type2 (I can break it down for different types of email, right?)

    For apps:
    Campaign Name: Launch
    Campaign Medium: Apps
    Campaign Source: App1 or App2 (I can break it down here for different apps, right?)

    What if I want to track two different links within a single email or a single app? Is there any way of tracking them individually while still tracking them as one, since tracking them as one makes more sense for me? Are Campaign Term and Campaign Content irrelevant in my case, or can/should I use them for something? I will also be tracking traffic of different apps. Should I do more? Let me know if my scenario wasn't clear enough and I need to explain more.
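
    (A sketch of what those choices look like as actual campaign URLs. In Google Analytics terms, Campaign Name/Medium/Source/Content map to the utm_campaign, utm_medium, utm_source and utm_content query parameters, and utm_content is precisely the slot for telling apart two links that share the same campaign/medium/source, e.g. two links in one email. The base URL and label values below are placeholders.)

        from urllib.parse import urlencode

        def campaign_url(base_url, campaign, medium, source, content=None):
            """Append Google Analytics campaign parameters to a landing URL."""
            params = {
                "utm_campaign": campaign,   # e.g. "launch"
                "utm_medium": medium,       # e.g. "email" or "apps"
                "utm_source": source,       # e.g. "type1" or "app1"
            }
            if content:
                # Distinguishes links that share campaign/medium/source,
                # while reports still roll them up as one source.
                params["utm_content"] = content
            return base_url + "?" + urlencode(params)

        # Two links in the same Type1 launch email, tracked as one source
        # but separable by content:
        print(campaign_url("http://example.com/", "launch", "email", "type1", "top-link"))
        print(campaign_url("http://example.com/", "launch", "email", "type1", "bottom-link"))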


  • SEO questions about PR, Page Structure, and Other

    - by jasondavis
    A couple of basic questions related to SEO.

    1) If I have a site that promotes several different niches (for example, "web developer" broken down into sections for web design, graphic design, programming, and software), would it be better for SEO purposes to use subdomains for these main sections, or the main domain with a folder-like structure?

    2) Is PR different for each page of a domain, or does every page on that domain have the same PR? Also, do subdomains have a different PR?

    3) When entering a hugely oversaturated niche such as web design, is it even possible to compete with the big sites that have been ranked on Google's #1 page for years?

    4) Lastly, I have read about how important titles, link anchors, and headings are for SEO, and how content is the most important. So let's say we are building a standard header, body, sidebar, footer page. In the actual markup, would it be better to make sure the main content comes before the sidebar on the page, or does this probably not make a difference?

    5) I've seen it mentioned in another answer here that microformats can help with SEO; is there any fact behind this?

    Thank you for any info on this.


  • How do I dissuade users from using the same password with similar systems?

    - by Resorath
    I'm building a web application that connects to other web services (using strictly anonymous binding, so no user passwords are being used). However, the web application maintains its own users itself, and it has to ask for certain details such as e-mail addresses and public linking information for these other web services (for example, a username but not a password).

    I want to deter or prevent users from reusing, in my application, passwords that they have also used in the applications I'm linking to. For example, if I ask for their e-mail and they provide their Gmail address, I don't want them using their Gmail password for my system. Another example would be reusing a password from a linked system for which they also gave me their username.

    One idea I had was to simply take the information they gave me, along with the password they are trying to store, and log in to these external web applications to test the password, then immediately unbind if I was successful and ask the user to use a different password. However, I suspect there is a host of moral and legal issues there.

    The reason this is a big deal to me is accountability. My application is simply not funded enough to invest properly in security around user passwords. A salted, hashed password in a public SQL-like database is as secure as it gets. So if passwords and linked usernames or e-mails get out, I don't want my userbase compromised.
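
    (As an aside, the "salted, hashed" baseline described above needs nothing beyond the Python standard library; a minimal sketch using PBKDF2, where the iteration count is just an illustrative choice:)

        import hashlib, hmac, os

        def hash_password(password, iterations=200_000):
            """Return (salt, digest) for storage: PBKDF2-HMAC-SHA256 with a random salt."""
            salt = os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
            return salt, digest

        def verify_password(password, salt, digest, iterations=200_000):
            """Recompute the digest and compare in constant time."""
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
            return hmac.compare_digest(candidate, digest)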


  • At the Java DEMOgrounds - Java EE 7 WebSocket Early Access

    - by Janice J. Heiss
    At the packed and happening Java DEMOgrounds, I wandered over to check out Java EE Web Profile and Platform Technologies, where Martin Matula, a Senior Development Manager at Oracle on the Java EE/GlassFish team responsible for the area of web services (including JAX-WS and JAX-RS), was giving demonstrations.

    Matula was previewing some Java EE 7 WebSocket early access features via a group drawing application that showcases the upcoming JSR 356, "Java API for WebSocket", alongside RESTful web services and Server-Sent Events, an HTML5 feature. He emphasized that this is supported in Jersey, the reference implementation for JAX-RS, as well.

    "In this demo," Matula explained, "I have a simple JavaScript front-end talking to the back-end deployed on GlassFish. It uses RESTful web services to get the list of drawings we have. I can create new drawings and the list is updated immediately using Server-Sent Events, so the message is coming from the server to the client. Everything is getting updated live using WebSocket, which is the new bi-directional communication protocol in HTML5. This is using Project Jersey and Project Tyrus. Tyrus is the implementation of the WebSocket protocol for Java. Jersey implements the RESTful APIs as well as the Server-Sent Events protocol."


  • Partner Webcast - Oracle Data Integration Competency Center (DICC): A Niche Market for services

    - by Thanos Terentes Printzios
    Market success now depends on data integration speed. This is why we have collected the best practices from the most advanced IT leaders, simply to prove that a Data Integration Competency Center should be the primary new IT team you establish. This is a niche market with unlimited potential for partners becoming the much-needed data integration services provider trusted by customers. We would like to elaborate with OPN Partners on the Business Value Assessment and Total Economic Impact of the Data Integration Platform for End Users, while justifying re-organizing your IT services teams.

    We are happy to share our research on:

    - The economic impact of a data integration platform/competency center
    - Justifying the strongest reasons and differentiators, using numeric analysis and best-practice customer case studies from specific industries
    - Utilizing diagnostics and health-check analysis in building a business case for your customers
    - What exactly is so special in the technology of Oracle Data Integration
    - The impact of growing data volume and number of data sources
    - Analysis of the usual solutions implemented so far, addressing key challenges and mistakes

    During this partner webcast we will balance business-case-centric content with extensive numerical ROI analysis. Join us to find out how to build a unified approach to moving/sharing/integrating data across the enterprise and why this is an important new services opportunity for partners.

    Agenda:

    - Data Integration Competency Center
    - Oracle Data Integration Solution Overview
    - Services Niche Market for OPN
    - Summary
    - Q&A

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend.

    Presenter: Milomir Vojvodic, EMEA Senior Business Development Manager for Oracle Data Integration Product Group
    Date: Thursday, September 4th, 10am CEST (8am UTC/11am EEST)
    Duration: 1 hour

    Register Today. For any questions please contact us at [email protected]


  • Is it worth moving from Microsoft tech to Linux, NodeJS & other open source frameworks to save money for a start-up?

    - by dormisher
    I am currently getting involved in a startup. I am the only developer involved at the moment, and the other guys are leaving all the tech decisions up to me. For my day job I work at a software house that uses Microsoft tech on a day-to-day basis: we utilise .NET, SQL Server, Windows Server, etc.

    However, I realise that as a startup we need to keep costs down, and after having a brief look at the cost of hosting for Windows I was shocked to see some of the prices for a dedicated server. The cheapest I found was £100 a month. Also, if the business needs to scale in the future and we end up needing multiple servers, we could end up shelling out tens of thousands of pounds a year in SQL Server / Windows Server licenses, etc. I then had a quick look at the price of Linux hosting for a dedicated server and saw the price was way lower than Windows hosting; one place was offering a machine with 2 cores for less than £20 a month.

    This got me thinking that maybe the way to go is open source on Linux. As I write a lot of JavaScript at work (I'm working on a single-page Backbone app at the moment), I thought maybe NodeJS and a web framework like Express would be cool to use. I then thought that instead of using SQL, why not use an open source NoSQL database like MongoDB, which has great support on NodeJS? My only concern is that some of the work the application is going to do is dynamically building images and various other image-related stuff, i.e. stuff that is quite CPU heavy, so I'm thinking of maybe writing anything CPU heavy in C++ and consuming it as a module in Node.

    That's the background, but basically: is Linux a good match for

    - Hosting a NodeJS/Express site?
    - Compiling C++ node modules?
    - Using a NoSQL DB like MongoDB?

    And is it a good idea to move to these unfamiliar technologies to save money?


  • aplay -l says no soundcards found; alsaconf says no supported cards; yet /proc/asound contains cards

    - by nimasmi
    I am trying to get HDMI output using a Gainward Nvidia 210 512 MB on Ubuntu 10.04 Lucid Lynx. I have upgraded alsa-driver, alsa-lib and alsa-utils to 1.0.24 by building from source, thanks to this blog post. Some relevant output:

        user@box:~$ lspci | grep Audio
        00:05.0 Audio device: nVidia Corporation MCP61 High Definition Audio (rev a2)
        01:09.0 Multimedia video controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder (rev 05)
        01:09.2 Multimedia controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder [MPEG Port] (rev 05)
        01:09.4 Multimedia controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder [IR Port] (rev 05)
        02:00.1 Audio device: nVidia Corporation High Definition Audio Controller (rev a1)

        user@box:~$ cat /proc/asound/version
        Advanced Linux Sound Architecture Driver Version 1.0.24.
        Compiled on Sep 15 2012 for kernel 2.6.32-42-generic (SMP).

        user@box:~$ ls /proc/asound
        card0  cards  hwdep  NVidia    oss  seq  version
        card1  devices  modules  NVidia_1  pcm  timers

        user@box:~$ aplay -l
        aplay: device_list:240: no soundcards found...

        user@box:~$ sudo /sbin/alsa-utils start
         * Setting up ALSA...
         * warning: 'alsactl restore' failed with error message
           'alsactl: set_control:1403: Cannot write control '2:0:0:IEC958 Playback Default:0' : Operation not permitted'...
        amixer: Invalid command!
           ...done.

    Any help appreciated. PS: my video card is connected only through the PCI-E slot; I assume there is no extra audio connection required.


  • Should I listen to my employer and use CASE tools?

    - by omsharp
    My employer (not a developer) thinks that CASE tools will help us improve our development process and documentation. I am not sure about that; we are a small team of 5 developers building mobile banking solutions for local clients.

    I think CASE tools will be a waste of time and money, as they need to be purchased and we will need some time before we get used to them and become efficient working with them for modeling and the like. Code generation is another issue: I really think that CASE-generated code won't be as good as code written by good developers.

    I think that if we stick with agile principles and design patterns, use TDD, and keep our code clean, we should be good. As far as analysis and design, I think simple UML diagrams on a whiteboard should do the trick. Documentation is good and important, but should be kept as light as possible; we should not focus on docs and forget the code.

    This is what I think. Am I correct? Or should I listen to my employer and start researching for an appropriate CASE tool?


  • How to write code that communicates with an accelerator in the real address space (real mode)?

    - by ysap
    This is a preliminary question for an issue where I was asked to program a host-accelerator program on an embedded system we are building. The system is comprised of (among the standard peripherals) an ARM core and an accelerator processor. Both processors access the system bus via their bus interfaces and share the same 32-bit global physical memory space. Both share access to the system's DRAM through the system bus. (The computer concept is similar to a BeagleBoard/Raspberry Pi, but with a specialized accelerator added.)

    The accelerator has its own internal memory (SRAM), which is exposed to the system and occupies a portion of the global address space (as opposed to how a graphics card would talk to the CPU via a "small" aperture in the system memory space). On the ARM core (the host) we plan on running Ubuntu 12.04.

    The mode of operation for communicating between the processors should be that the host issues memory transactions on the system bus that are targeted at the accelerator's internal memory. As far as my understanding goes, if I write a program for the host that simply writes to the physical address of the accelerator, most likely the program will crash due to a segmentation violation. So I assume that I need some way of communicating with the device in real mode. What is the easiest way to achieve this mode of operation?
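
    (For illustration: on Linux the usual quick route is not real mode but mapping the accelerator's physical address range into the process via mmap on /dev/mem; a UIO or small custom kernel driver is the cleaner long-term answer. A sketch, assuming root privileges, a kernel that permits /dev/mem access, and a made-up base address:)

        import mmap, os, struct

        PHYS_ADDR = 0x40000000   # hypothetical base of the accelerator SRAM
        WINDOW = 4096            # map one page of it

        fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
        mem = mmap.mmap(fd, WINDOW, mmap.MAP_SHARED,
                        mmap.PROT_READ | mmap.PROT_WRITE, offset=PHYS_ADDR)

        mem[0:4] = struct.pack("<I", 0xDEADBEEF)   # 32-bit write to device memory
        (value,) = struct.unpack("<I", mem[0:4])   # read it back
        print(hex(value))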


  • Should Android and iPhone UI be different?

    - by Phonon
    I'm not completely new to developing apps, but I'm at a point where I'm trying to develop something and deploy it on several mobile platforms. To concentrate on only the two major ones: suppose I'm developing an app for Android and iPhone and designing the UI and the general user-interaction architecture.

    Both platforms give guidelines as to how their UIs should work. For example, most iPhone apps have a Navigation Bar (with a title and a Back button) and an Icon Bar for navigating the program, while Android uses an Options Menu fetched via a Menu button, and "back" navigation is handled with the physical Back button on the device.

    I've seen many apps that try to force the same UI on every platform (for example, custom-building an iPhone-style Icon Bar and putting it in their Android apps), but it just doesn't quite look right to me, and it feels like it violates UI design guidelines somewhat. Are there any good design patterns for implementing something sufficiently similar on both platforms, yet still platform-specific enough that the user would not feel out of their comfort zone? What do people usually do in these situations?


  • New Horizon

    - by alexismp
    I have resigned from Oracle and thus will soon leave the GlassFish group. I feel very proud looking back at what we've achieved as a team with GlassFish in the past few years, including those past two years at Oracle. If you know anything about the history of application servers at Sun, you'll recognize that building such a community around GlassFish, with its amazing number of downloads, is nothing short of a small miracle. The Java EE platform has also seen a strong resurgence, bringing it back to the forefront of effective enterprise Java development in many ways. Having been hired by Sun some 13 years ago to sell NetDynamics, I certainly feel that I leave the company's application server in *much* better shape.

    Oracle has ambitious plans for GlassFish and has been, in my opinion, a good steward for this community. I see no reason for this to change, and I do expect the community to keep on pushing Oracle to get even better with time.

    This ride has been intense, and the people I've met and worked with, both inside and outside Sun/Oracle, have made the experience the best one of my career. My journey now continues here: alexismp.wordpress.com. See you there!


  • implementing dynamic query handler on historical data

    - by user2390183
    EDIT: Refined question to focus on the core issue.

    Context: I have historical data about property (house) sales, collected from various sources in a centralized/cloud data source (assume info collection is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized data source.

    Example queries:

    - Simple: for a given XYZ postcode, what is the average price for a 3-bedroom house? (see the baseline sketch below)
    - Complex: what is the estimated price for a house at "DD, Some Street, XYZ Postcode" (worked out from average values of historic data filtered by various characteristics of the house: postcode, number of bedrooms, total area, and deeper insights like building type, year built, and features)?

    In addition to average price, the application should support other property info: maximum or minimum price, etc., and a trend (graph) of a selected property attribute over a period of time. Hence, the queries should not enforce a search based on a primary key or a few fixed fields. In other words, queries can be:

    - What is the change in 3-bedroom house prices (irrespective of location) over the last 30 days?
    - What kind of properties can we get for X price (irrespective of location or house type)?

    The challenge I have is identifying the domain this problem (dynamic queries on historic data) belongs to (BI/data analytics, DB design, DB query interfaces, data warehousing, or something else), so that I can explore further.

    My findings so far (I could be wrong on the following, so please correct me if you think so):

    - I briefly read about BI/data analytics; I think it is a heavyweight solution for my problem and has scalability issues.
    - DB design: as I understand it, an RDBMS works well if you know the data model at design time. I am expecting the attributes about a property or other entities (users) to evolve quickly, hence maintenance would be an issue. And as I am going to have multiple users executing queries at the same time, performance would be a bottleneck.
    - Other options like graph DBs (http://www.tinkerpop.com/) seem to be a bit complex (they are good, but using tools meant for generic purposes makes me feel like I'm doing assembly programming to solve my problem).
    - Big-data-related solutions are for analysing data from multiple unrelated domains.

    So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience of a back-end for property listings or similar portals.)
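
    (For contrast, the "simple" end of that spectrum is a one-liner against a fixed schema; it is the open-ended, evolving attribute set that breaks it. A sketch against a hypothetical sales table, just to mark the baseline being generalized:)

        import sqlite3

        # hypothetical schema: sales(postcode TEXT, bedrooms INTEGER, price REAL, sold_on DATE)
        conn = sqlite3.connect("property.db")

        def average_price(postcode, bedrooms):
            """Average sale price for houses matching two fixed attributes.

            Returns None if no matching sales exist."""
            row = conn.execute(
                "SELECT AVG(price) FROM sales WHERE postcode = ? AND bedrooms = ?",
                (postcode, bedrooms),
            ).fetchone()
            return row[0]

        print(average_price("XYZ", 3))   # "average price for a 3-bedroom house in XYZ"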


  • POLL: How do you build your interfaces in xcode for iphone/coco-touch applications?

    - by LolaRun
    Hi all, I've been using Xcode and building iPhone apps for 2 months now, and I'm finding it really hard to grasp good design for the applications. I always face problems like "you can't put your tab bar controller in another custom view controller", and other issues that would sometimes work if you created the views/view controllers programmatically. So I don't know if I should write the creation of my objects in code or use Interface Builder.

    This is why I'm asking this poll question: to see what the most accepted way is among other developers. I'm not going to answer my own question, so the votes will not increase my reputation and risk the poll being closed soon. I'd prefer that the first two readers of this question answer, the first with "using IB" and the second with "programmatically", and that the rest vote those two answers up or down. And don't vote on my question either; I just need to know, and maybe someone else wants to know too, so the longer this question lives the better, and gaining no reputation from it would help that cause. Thank you all for your cooperation, and go ahead and answer. Peace


  • Error while installing an application

    - by Bong.Da.City
    So I have installed Synaptic Package Manager. Via it, I once checked libopencv-highgui-dev and applied complete removal, and after that I reinstalled it. Now every time I try to install an application, e.g. Format Junkie via

        sudo add-apt-repository ppa:format-junkie-team/release && sudo apt-get update && sudo apt-get install formatjunkie

    the install step gives me this error every time:

        sudo apt-get install formatjunkie
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         libopencv-features2d-dev : Depends: libopencv-highgui-dev (= 2.3.1-11ubuntu2) but it is not going to be installed
        E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.

    What should I do? And secondly, what did I do wrong, so that it won't happen another time? Output of lsb_release -a:

        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 12.10
        Release:        12.10
        Codename:       quantal


  • Hosting and scaling a Facebook application in the cloud?

    - by DhruvPathak
    We will be building a Facebook application in Django (Python), but are still not sure where to host it economically, with good provision to scale in case the app goes viral. Some details about the app:

    - It would be HTML-based like a website, using Django as the framework.
    - 100K pageviews a day are expected if the app goes viral.
    - The users will not generate any media content, only some database data.

    It would be great if someone with more experience could guide me on the following points:

    A) Hosting on Google App Engine, Amazon EC2, or some other cloud like Rackspace: the preferable points I found for App Engine were ease of deployment, cost effectiveness, and easy scaling; for EC2, full control of the virtual machine, plus Amazon's NoSQL and RDBMS database services in case we decide to use them.

    B) Does the backend technology affect monthly cost? E.g., would the CPU and memory usage difference of Django over, for example, a PHP framework like CodeIgniter really make a remarkable difference in running costs? (Here is the article that triggered this thought process: http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments)

    C) Does something like Heroku, which provides additional services over Amazon EC2, prove better than raw cloud management?

    It is not that we are trying for premature scaling; we just want to have a good start so that we are ready to handle unpredicted growth and scale.


  • How to start a Software Company

    - by MeshMan
    I've always been interested in how software companies happen. I find it extremely difficult once you're tied down with a car, house, life, etc. Funding is always the biggest concern. To make this a bit more specific, I see two types: those offering a product/service, and those offering consultancy.

    One thing that bugs me about the product/service kind is that we all know how burning the candle at both ends is extremely exhausting. Coding for 8-10 hours in the day and then coding in the evenings on your own stuff doesn't last long. No matter how passionate you are about your idea, simply put, coding day and night is a recipe for burnout. Is this a defeatist attitude, though? Can it be balanced?

    The consultancy kind isn't as tricky, in my honest opinion. I think once you have spent years in the industry building up relationships and contacts from contracting or moving around, and of course being involved in the community, then landing your first project as a consultant is surely easier than the product/service kind. I'd imagine friends could then join you as you take on bigger company projects, like an Agile implementation or TDD training, and off you go gaining bigger things.

    Please specify which company type you're answering about if you can't contribute to both. I'd like to hear everyone's experiences or ideas on any level for software company start-ups.


  • Package manager doesn't work anymore

    - by LukaD
    I'm using Ubuntu 10.10, and recently my package manager stopped working because of some problem with dependencies. I can't upgrade, install, or uninstall anything at all, which is a huge problem. I couldn't find a solution to this with Google, so I'm asking here for help. This is what apt-get -f install outputs:

        LANG=en_US.UTF-8 sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following package was automatically installed and is no longer required:
          firefox-4.0-core
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Setting up openjdk-6-jre-headless (6b20-1.9.5-0ubuntu1) ...
        update-alternatives: error: alternative path /usr/lib/jvm/java-6-openjdk/jre/bin/java doesn't exist.
        dpkg: error processing openjdk-6-jre-headless (--configure):
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         openjdk-6-jre-headless
        E: Sub-process /usr/bin/dpkg returned an error code (1)


  • Matrices: Arrays or separate member variables?

    - by bjz
    I'm teaching myself 3D maths and in the process building my own rudimentary engine (of sorts). I was wondering what would be the best way to structure my matrix class. There are a few options:

    Separate member variables:

        struct Mat4 {
            float m11, m12, m13, m14,
                  m21, m22, m23, m24,
                  m31, m32, m33, m34,
                  m41, m42, m43, m44;
            // methods
        };

    A multi-dimensional array:

        struct Mat4 {
            float m[4][4];
            // methods
        };

    An array of vectors:

        struct Mat4 {
            Vec4 m[4];
            // methods
        };

    I'm guessing there would be positives and negatives to each. From 3D Math Primer for Graphics and Game Development, 2nd Edition, p. 155:

        "Matrices use 1-based indices, so the first row and column are numbered 1. For example, a12 (read "a one two," not "a twelve") is the element in the first row, second column. Notice that this is different from programming languages such as C++ and Java, which use 0-based array indices. A matrix does not have a column 0 or row 0. This difference in indexing can cause some confusion if matrices are stored using an actual array data type. For this reason, it's common for classes that store small, fixed size matrices of the type used for geometric purposes to give each element its own named member variable, such as float a11, instead of using the language's native array support with something like float elem[3][3]."

    So that's one vote for method one. Is this really the accepted way to do things? It seems rather unwieldy if the only benefit is sticking with conventional math notation.


  • Conventions for search result scoring

    - by DeaconDesperado
    I assume this type of question is more on-topic here than on regular SO. I have been working on a search feature for my team's web application and have had a lot of success building a multithreaded, "divide and conquer" processing system to work through a large amount of fulltext.

    Our problem domain is pretty specific. Users of the app generate posts, and as a general rule, posts that are more recent are considered of greater relevance. Some of the data we are trying to extract from search is very specific (users' feelings about specific items or things), and we are using Python nltk to do named-entity extraction to find interesting likely query terms. Essentially we look for descriptive adjective-noun pairs and generate a general picture of a user's expressed sentiment as a list of tokens. This search is intended as an internal tool for our team to draw out a local picture of sentiments like "soggy pizza". There's some machine learning in there too, to do entity resolution on terms like "soggy" to all manner of adjectives expressing nastiness.

    My problem is that I am at a loss for how to go about scoring these results. The text being searched is split up into a list of tokens, so my initial approach would be to normalize a float score between 0.0 and 1.0 generated from how far into the list the terms appear and how often they are repeated (a later mention of the term being worth less, an earlier one more, and greater frequency meaning a greater score). A certain amount of weight could be given to the timestamp as well, though I am not certain how to calculate this (a sketch of this idea follows below).

    I am curious whether anyone has had to solve a similar problem of grading search relevance across appreciable metrics (frequency, term location/colocation, recency), and whether there are guidelines for how to weight each. I should mention as well that the final fallback procedure in the search is to pipe the query to Sphinx, which has its own scoring practices. Sphinx operates as the last resort in case our application-specific processing can't find any eligible candidates.
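
    (A minimal sketch of the normalization described above: position-decayed frequency squashed into 0.0-1.0, blended with an exponential recency decay. Every constant, the per-mention damping, the half-life, the 70/30 blend, is an arbitrary starting point to tune, not a convention:)

        import math, time

        def score(tokens, term, posted_at, now=None, half_life_days=7.0):
            """Blend a position/frequency score for `term` with a recency score."""
            now = now or time.time()
            n = len(tokens) or 1
            text, weight = 0.0, 1.0
            for i, tok in enumerate(tokens):
                if tok == term:
                    text += weight * (1.0 - i / n)   # earlier mentions are worth more
                    weight *= 0.5                    # each later mention counts half as much
            text = 1.0 - math.exp(-text)             # squash the sum into 0..1

            age_days = (now - posted_at) / 86400.0
            recency = 0.5 ** (age_days / half_life_days)   # 1.0 now, halves every week

            return 0.7 * text + 0.3 * recency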


  • Oracle MDM Panel at OOW 12: Best practices, Lessons Learned and More...

    - by Mala Narasimharajan
    By Narayana Machiraju

    We are less than two weeks out from the start of Oracle Open World 2012. The MDM team has built up a solid line-up of product and customer sessions for you to attend this year, in addition to the hands-on labs and numerous demonstration pods in Moscone West.

    This year we will be hosting a customer panel session dedicated to Oracle Customer Hub at Oracle Open World. An esteemed panel of Oracle Customer Hub customers in different industries (Credit Suisse, Allianz and Elsevier) will provide insight into the journey of Customer MDM, right from building a business case and MDM vision, to establishing and sustaining governance, implementation strategies, and realizing the benefits. You will also hear about implementation challenges, phasing strategies, and lessons learned from real-life experiences. If you are already implementing Customer MDM, or evaluating the benefits of MDM, and you would like to hear directly from our customers, then I highly recommend you attend this session:

    Customer MDM Panel: Discussion and Q&A on Implementation Best Practices, Data Quality, Data Governance and ROI
    Wednesday, October 3rd, 5:00PM - 6:00PM
    Westin Market Street Hotel - Metropolitan 1

    The MDM track at Oracle Open World covers a variety of topics related to MDM. In addition to the product management team presenting product updates and the roadmap, we have several customer panels, conference sessions, and customer round-table sessions featuring many marquee customers. You can see an overview of MDM sessions here. We hope to see you at Open World, and stay in touch via our future blogs.


  • Making The EBS Upgrade From 11.5.10 Easier - Part II

    - by Annemarie Provisero
    ADVISOR WEBCAST: Making The EBS Upgrade From 11.5.10 Easier - Part II
    PRODUCT FAMILY: E-Business Suite
    July 12, 2011 at 8 am PT, 9 am MT, 11 am ET

    This one-hour session is recommended for technical users who are responsible for upgrading their E-Business Suite applications from Release 11.5.10 to Release 12.1.x. As you begin your upgrade process, there are a number of tools available to assist you in a successful upgrade. A successful upgrade requires careful planning, correct upgrade processing, detailed testing, and user (re)training prior to upgrade. Over three sessions we will discuss the tools that you can use to assist in your upgrade tasks. These tools are available to you via My Oracle Support and as part of the E-Business Suite product offerings.

    In this second session, we'll cover the following topics:

    - Recap of Part I
    - Detailed Look at Maintenance Wizard
    - Detailed Look at Patch Wizard

    A short, live demonstration and question and answer period will be included. In the first part of the three-session series, we covered the following topics:

    - Overview of Tools Available for Upgrading
    - Upgrade versus Re-implementing
    - Upgrade Community
    - Upgrade Product Information Center Page
    - Detailed Look at Upgrade Advisor

    A replay of that session is available via Note 740964.1, Advisor Webcast Archive. A third session will be presented on July 19, 2011 to review best practices for using the upgrade tools. A short, live demonstration (only if applicable) and question and answer period will be included.

    Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.


  • Kubuntu apt-get -f install error

    - by ShaggyInjun
    I am seeing an error while running apt-get -f install. Can somebody help me out?

        venkat@ubuntu:~/Downloads$ sudo apt-get -f install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          libjack-jackd2-0
        Suggested packages:
          jackd2
        The following packages will be upgraded:
          libjack-jackd2-0
        1 upgraded, 0 newly installed, 0 to remove and 256 not upgraded.
        109 not fully installed or removed.
        Need to get 0 B/197 kB of archives.
        After this operation, 3,072 B of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        (Reading database ... 274641 files and directories currently installed.)
        Preparing to replace libjack-jackd2-0 1.9.8~dfsg.1-1ubuntu1 (using .../libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb) ...
        Unpacking replacement libjack-jackd2-0 ...
        dpkg: error processing /var/cache/apt/archives/libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb (--unpack):
         './usr/share/doc/libjack-jackd2-0/buildinfo.gz' is different from the same file on the system
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        Errors were encountered while processing:
         /var/cache/apt/archives/libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)


  • Rendering a DOM across multiple displays

    - by meetamit
    I'm building a data-driven animation with HTML and JavaScript to run in a web browser, and I would like to display it tiled across three 1080p monitors. This essentially yields a viewport that's 5760px wide and 1080px tall. Pretty large. Does anyone have experience setting up something like this? I have several questions below, but any tip would be appreciated:

    - Is it reasonable to expect a DOM to render into such a large viewport size at close to 60fps?
    - I might choose to use canvas instead of SVG or HTML, but that would yield a giant canvas. Can a canvas with such high resolution be performant? Of course everything depends on the complexity of the graphics I want to render, but I'm looking to remove that factor from this question, so assume I'm asking about a canvas animation that can run at 60fps at 1920x1080 resolution. Would it run roughly as fast at three times the width? (See the quick arithmetic after this list.)
    - Would three.js and WebGL be a more appropriate approach at that resolution?
    - How do you actually cause Chrome or Firefox to span 3 monitors at full screen? Do I need a 3rd-party solution of any kind?

    Thanks!
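
    (Quick arithmetic on raw pixel throughput, ignoring compositing and GPU specifics entirely, just to size the problem: tripling the width triples the pixels pushed per frame.)

        width, height, fps, bytes_per_px = 5760, 1080, 60, 4   # RGBA framebuffer

        per_frame = width * height * bytes_per_px    # 24,883,200 bytes, about 23.7 MiB per frame
        per_second = per_frame * fps                 # about 1.4 GiB/s of pixel data

        print(round(per_frame / 2**20, 1), "MiB/frame")
        print(round(per_second / 2**30, 2), "GiB/s")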


  • How to unit test models in MVC / MVR app?

    - by BBnyc
    I'm building a node.js web app and am trying to do so for the first time in a test-driven fashion. I'm using nodeunit for testing, which I find allows me to write tests quickly and painlessly.

    In this particular app, the heavy lifting primarily involves translating SQL data into complex JavaScript objects and serving them to the front-end via JSON. Likewise, the app also spends a great deal of code validating and translating the complex, multidimensional JavaScript objects it receives from the front-end into SQL rows. Hence I have used a fat-model design for the app: most of the real code resides in the models, where the data translation happens.

    What's the best approach to testing such models with unit tests? I mean in particular the methods that create JavaScript objects from SQL rows and serve them to the front-end. Right now what I'm doing is making particular requests of my models within the unit tests and checking the returned data for all of the fields that should be there. However, I have a suspicion that this is not the most robust kind of testing I could be doing. My current testing design also means I have to package my app code with some dummy data, so that my tests can anticipate the kind of data the app should be returning when tests run.

