Search Results

Search found 8389 results on 336 pages for 'shared calendar'.


  • Umbraco Gold Partner going from strength to strength.

    - by Vizioz Limited
    It is amazing how time flies when you are having fun, and we have certainly been having an interesting year at Vizioz. I thought it was about time I wrote another blog post and shared some of our news.

    Over the last 6 months we have:
    a) Had the pleasure of working with some great clients from the USA, Ireland and the UK
    b) Built some interesting and complex sites for multi-national brands (under NDAs - you'd be impressed if you knew!)
    c) Converted the Umbraco Users Manual to a free iBook for all those iPad-owning Umbraco users
    d) Hired three new employees (Sam, Pearl & Zaara)
    e) Given our notice on our current office (see below)

    And on the horizon:
    a) We have submitted Allen for Umbraco to the Apple App Store for approval (hopefully this will be available very soon!)
    b) We are about to sign a lease on a new office that is twice the size of our current one, so we will have room for a meeting room, a chill-out room and some more employees!

    So it's exciting times at Vizioz. Thank you to all our fantastic clients for making this possible; we look forward to working with you all over the years to come.

    One thing we don't shout about as much as we probably should: we also renewed our Umbraco Gold Partner status for the second year, showing our commitment to the Umbraco CMS. If you are looking for a great Umbraco partner with experienced developers to build your new site, or to take over the ongoing maintenance of an existing site, then pick up the phone and give us a call - we would love to add you to our list of happy customers!

    Read the article

  • OTBI vs. OBIA

    - by PRajkumar
    What are the differences between OTBI and OBIA?

    OTBI - Oracle Transactional Business Intelligence
    OBIA - Oracle Business Intelligence Applications

    OBIA
    1. OBIA is the pre-packaged BI Apps offering that Oracle has provided for several years. It is a data warehouse based solution.
    2. It is based on a universal data warehouse design with different prebuilt adapters that can connect to various source applications to bring the data into the warehouse.
    3. It allows consolidating the data from various sources to bring them together.
    4. It provides a library of metrics that help to measure the business.
    5. It provides a set of predefined reports and dashboards.
    6. OBIA works with multiple sources including E-Business Suite, PeopleSoft, JDE, SAP and Fusion Applications.

    OTBI
    1. It is real-time BI.
    2. There is no warehouse or ETL process for OTBI.
    3. It works with Fusion Applications only.
    4. OTBI leverages advanced technologies from both the BI platform and ADF to enable online BI queries directly against the database.
    5. OTBI does not have prebuilt dashboards and reports like OBIA.

    Note: Both OTBI and OBIA are available from the same metadata repository, and some of the repository objects are shared between OTBI and OBIA. The design allows the following configurations: OTBI only, OBIA only, or OTBI and OBIA coexisting. Both OTBI and OBIA access Fusion Apps via ADF.

    Read the article

  • Code bases for desktop and mobile versions of the same app

    - by Code-Guru
    I have written a small Java Swing desktop application. It seems like a natural step to port it to Android since I am interested in learning how to program for that platform. I believe that I can reuse some of my existing code base. (Of course, exactly how much reuse I can get out of it will only be determined as I start coding the Android app.) Currently I am hosting my Java Swing app on Sourceforge.net and use Git for version control. As I start creating the Android app, I am considering two options:

    1. Add the Android code to my existing repository, creating separate directories and Java packages for the Android-specific code and resources.
    2. Create a new Sourceforge project (or even host a new one) and create a new Git repository.
       a. With a new repository, I can simply add the files from my original project that I will reuse. (I don't particularly like this option, as it will be difficult to modify both copies of the same file in both repositories.)
       b. Or I can branch the original repository. This adds the difficulty of merging changes to shared source files.

    Mostly I am trying to decide between choices 1 and 2b. If I'm going to branch the existing repository, what advantages are there to hosting it as a separate SF project (or even using another OSS hosting service) as opposed to keeping all my source code in the current SF project?

    Read the article

  • ASP 3.0 Folder/File Permissions Settings (ASP Classic)

    - by ASP Pee-Wee
    Dear Stack Exchange, I have built a form input page in HTML that posts to an ASP handler/processor .asp file. The form handler/processor .asp file contains only <% Insert VBScript Here %> and no HTML output whatsoever. The .asp file was never intended to be a "web viewable" .asp file like an .asp home page or HTML file would be. It's supposed to be for my eyes only - not the public's - however it does need to take info posted by the public and do something with it on its end. I have used VBScript/ASP 3.0 to build the form handler/processor file and would like to know how to keep someone from viewing the actual VBScript in it. I am aware of obfuscation, but I would like to know how to keep prying eyes from even being able to look at the obfuscated code in the handler/processor file. I realize that the server executes the .asp file first before outputting anything to the browser, so I guess my main concern is that someone may be able to "download" the form handler/processor .asp file and then view its contents on their machine. Assuming the form handler .asp file is where it is, behind the root, and is on a Windows server (no htaccess approach), how could one protect it so that it could never be viewed or simply pulled down via anonymous FTP or something like that? Is there something like "script only" permissions that the system administrator could set up for a particular folder? Remember, with shared hosting I can't go above the root. If so, would the form still be able to post? How would any of you go about protecting the ASP file in addition to obfuscation? Any help would be greatly appreciated. Thanks, ASP Pee-Wee

    Read the article

  • News about Oracle Documaker Enterprise Edition

    - by Susanne Hale
    Updates come from the Documaker front on two counts:

    Oracle Documaker Awarded XCelent Award for Best Functionality
    Celent has published a new report entitled Document Automation Solution Vendors for Insurers 2011. In the evaluation, Oracle received the XCelent award for Functionality, which recognizes the solution as the leader in this category of the evaluation. According to Celent, "Insurers need to address issues related to the creation and handling of all sorts of documents. Key issues in document creation are complexity and volume. Today, most document automation vendors provide an array of features to cope with the complexity and volume of documents insurers need to generate." The report ranks ten solution providers on Technology, Functionality, Market Penetration, and Services. Each profile provides detailed information about the vendor and its document automation system, the professional services and support staff it offers, product features, insurance customers and reference feedback, its technology, implementation process, and pricing. A summary of the report is available at Celent's web site.

    Documaker User Group in Wisconsin Holds First Meeting
    Oracle Documaker users in Wisconsin made the first Documaker User Group meeting a great success, with representation from eight companies. On April 19, over 25 attendees got together to share information, best practices, experiences and concepts related to Documaker and enterprise document automation; they were also able to share feedback with Documaker product management. One insurer shared how they publish and deliver documents to both internal and external customers as quickly and cost-effectively as possible, since providing point-of-sale documents to the sales force in real time is crucial to obtaining and maintaining the book of business. They outlined best practices that ensure consistent development and testing processes are in place to maximize performance and reliability, and they gave an overview of the supporting applications they developed to monitor and improve performance as well as monitor and track each transaction. Wisconsin User Group meeting photos are posted on the Oracle Insurance Facebook page: http://www.facebook.com/OracleInsurance. The Wisconsin User Group will meet again on October 26. If you and other Documaker customers in your area are interested in setting up a user group, please contact Susanne Hale ([email protected]), (703) 927-0863.

    Read the article

  • Oracle Social Network -The Social Glue for Enterprise Applications

    - by kellsey.ruppel
    by Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Tom Petrocelli of Enterprise Strategy Group recently published a report, "Oracle Social Network: The Social Glue for Enterprise Applications", on Oracle Social Network (OSN) and how traditional social products create social silos whereas OSN is the "social glue" for enterprise applications. This report supports the point of Oracle's Social Business Strategy: to seamlessly integrate social capabilities into the main business processes.

    Quote from the report: "Oracle has adopted the correct approach to creating a social layer and socially enabled applications. Oracle Social Network is not simply another enterprise social network product; it is a complete social layer for the enterprise application stack. This approach will serve Oracle users well in the future."

    OSN allows capturing the Conversations related to a business process right where they happen - within the respective business application. Fusion CRM is an excellent example of this approach.

    Quote from the report: "Oracle’s new software, Oracle Social Network, is an example of a solution to the silo problem. While Oracle fields a typical enterprise social network application with microblogging, file sharing, shared documents or wikis, and activity streams, the front-end application is only a small part of what Oracle Social Network does. Instead, Oracle Social Network is a platform that provides social features as a service to other enterprise applications. In effect, Oracle Social Network socially enables all of Oracle’s enterprise applications—all enterprise applications really—with not only the same features, but also the same conversations. As a result, the social conversations act as a conduit for inter-application communication and collaboration."

    Source: ESG Research Report, Oracle Social Network: The Social Glue for Enterprise Applications, August 2012. You can download the report here.

    Read the article

  • Ground Control by David Baum

    - by JuergenKress
    As cloud computing moves out of the early-adopter phase, organizations are carefully evaluating how to get to the cloud. They are examining standard methods for developing, integrating, deploying, and scaling their cloud applications, and after weighing their choices, they are choosing to develop and deploy cloud applications based on Oracle Cloud Application Foundation, part of Oracle Fusion Middleware.

    Oracle WebLogic Server is the flagship software product of Oracle Cloud Application Foundation. It is optimized to run on Oracle Exalogic Elastic Cloud, the integrated hardware and software platform for the Oracle Cloud Application Foundation family. Many companies, including Reliance Commercial Finance, are adopting this middleware infrastructure to enable private cloud computing and its convenient, on-demand access to a shared pool of configurable computing resources.

    "Cloud computing has become an extremely critical design factor for us," says Shashi Kumar Ravulapaty, senior vice president and chief technology officer at Reliance Commercial Finance. "It's one of our main focus areas. Oracle Exalogic, especially in combination with Oracle WebLogic, is a perfect fit for rapidly provisioning capacity in a private cloud infrastructure."

    Reliance Commercial Finance provides loans to tens of thousands of customers throughout India. With more than 1,500 employees accessing the company's core business applications every day, the company was having trouble processing more than 6,000 daily transactions with its legacy infrastructure, especially at the end of each month when hundreds of concurrent users need to access the company's loan processing and approval applications.

    Read the complete article here.

    For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Installation procedure RAC One Node

    - by rene.kundersma
    In order to test RAC One Node on my Oracle VM laptop, I:
    - installed Oracle VM 2.2
    - created two OEL 5.3 images

    The two images are fully prepared for Oracle 11gR2 Grid Infrastructure and 11gR2 RAC, including four shared disks for ASM and private NICs. After installation of the Oracle 11gR2 Grid Infrastructure and a "software only" installation of 11gR2 RAC, I installed patch 9004119, as shown in the opatch lsinv output. This patch has the scripts required to administer RAC One Node; you will see them later. At the moment we have them available for Linux and Solaris.

    After installation of the patch, I created a RAC database with an instance on one node. Please note that the "Global Database Name" has to be the same as the SID prefix and should be less than or equal to 8 characters.

    When the database creation is done, I first create a service. This is because RAC One Node needs to be "initialized" each time you add a service. After creating the service, a script called raconeinit needs to run from $RDBMS_HOME/bin. This is a script supplied by the patch; I can imagine the next major patch set of 11gR2 will have these scripts available by default. The script will configure the database to run on other nodes. After initialization, running raconeinit again shows that the configuration is already in place.

    So, now the configuration is ready and we are ready to run 'Omotion' and move the service around from one node to the other (yes, VM competitor: the service is available during the migration - nice, right?). Omotion is started by running Omotion; with Omotion -v you get verbose output. During the migration you will see the two instances active, and after the migration there is only one instance left, on the new node.

    Read the article

  • SQL SERVER – Effect of Collation on Resultset – SQL in Sixty Seconds #026 – Video

    - by pinaldave
    Collation is a very important concept, but it is often ignored. I have often seen developers either not understand it or ignore it - this is plain wrong. In simple words, we can say collation is the language and interpretation setting used by SQL Server. In today's SQL in Sixty Seconds we are going to observe how collation affects the resultset. Today's blog post is inspired by my earlier blog post SQL SERVER - Effect of Case Sensitive Collation on Resultset. I strongly encourage you to read that earlier blog post for the sample code as well as additional explanation related to the concept shared in today's SQL in Sixty Seconds. Here is the code used in the video:

    USE TempDB
    GO
    -- Sample Data Building
    CREATE TABLE ColTable
        (Col1 VARCHAR(15) COLLATE Latin1_General_CI_AS,
         Col2 VARCHAR(14) COLLATE Latin1_General_CS_AS);
    INSERT ColTable(Col1, Col2)
    VALUES ('Apple','Apple'),
           ('apple','apple'),
           ('pineapple','pineapple'),
           ('Pineapple','Pineapple');
    GO
    -- Retrieve Data
    SELECT * FROM ColTable
    GO
    -- Retrieve Data
    SELECT * FROM ColTable ORDER BY Col1
    GO
    -- Retrieve Data
    SELECT * FROM ColTable ORDER BY Col2
    GO
    -- Clean up
    DROP TABLE ColTable
    GO

    Related Tips in SQL in Sixty Seconds:
    - SQL SERVER - Effect of Case Sensitive Collation on Resultset
    - Example of Width Sensitive and Width Insensitive Collation
    - Collation and Collation Sensitivity - Quiz - Puzzle - 6 of 31
    - Change Collation of Database Column - T-SQL Script
    - Find Collation of Database and Table Column Using T-SQL
    - Default Collation of SQL Server 2008
    - Cannot resolve collation conflict for equal to operation

    If we like your idea, we promise to share educational material with you.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • ArchBeat Top 20 for March 11-17, 2012

    - by Bob Rhubart
    The 20 most-clicked links as shared via my social networks for the week of March 11-17, 2012:

    - Start Small, Grow Fast: SOA Best Practices article by @biemond, @rluttikhuizen, @demed
    - Packt Publishing offers discounts of up to 30% on 60+ Oracle titles
    - IT Strategies from Oracle; Three Recipes for Oracle Service Bus 11g; Stir Up Some SOA
    - Oracle Cloud Conference: dates and locations worldwide
    - Applications Architecture | Roy Hunter and Brian Rasmussen
    - How Strategic is IT? - Assessing Strategic Value | Al Kiessel
    - White Paper: An Architect's Guide to Big Data | Dr. Helen Sun, Peter Heller
    - Getting Started with Oracle Unbreakable Enterprise Kernel Release 2 | Lenz Grimmer
    - Great Solaris 10 features paving the way to Solaris 11 | Karoly Vegh
    - Who the Linux Developer Met on His Way to St. Ives | Rick Ramsey
    - Peripheral Responsibilities Required for Large IDM Build Outs (Including Fusion Apps) | Brian Eidelman
    - IOUG Real World Performance Tour, w/ Tom Kyte, Andrew Holdsworth, Graham Wood
    - Configure IPoIB on Solaris 10 branded zone | Leo Yuen
    - Oracle OpenWorld 2012 Call for Papers
    - Use Case Assumptions versus Pre-Conditions | Dave Burke
    - Handling Custom XML documents in Oracle B2B | @Biemond
    - Building a Coherence Cluster with Multiple Application Servers | Rene van Wijk
    - XMLA vs BAPI | Sunil S. Ranka
    - The Java EE 6 Example - Running Galleria on WebLogic 12 - Part 3 | @MyFear
    - Public Sector Architecture | @jeremy_forman, @hamzajahangir

    Thought for the Day: "The goal of Computer Science is to build something that will last at least until we've finished building it." - Anonymous

    Read the article

  • Ask the Readers: Share Your Tips for Defeating Viruses and Malware

    - by Mysticgeek
    We’ve shared some of our best tips for dealing with malware over the years, and now it’s your turn! Share your favorite tips for protecting against, or getting rid of, viruses and other types of malicious software.

    Unfortunately, if you’re a PC user it’s a given that you have to play defense against various forms of malware. We’ve written several articles over the years showing how to get rid of viruses and other forms of malware using various strategies. We have some excellent articles explaining how to get rid of Advanced Virus Remover, Antivirus Live, Internet Security 2010, and Security Tool – all of which disguise themselves as legit antivirus apps.

    Now we turn it over to you to share your favorite tips and tricks for defending against malicious infections. If your computer has been infected, what steps did you take to get rid of it and clean up your machine? Leave a comment below and join in the discussion!

    Read the article

  • First toe in the water with Object Databases : DB4O

    - by REA_ANDREW
    I have been wanting to have a play with object databases for a while now, and today I have done just that. One of the obvious choices I had to make was which one to use. My criterion for choosing one today was simple: I wanted one which I could literally whack in and start using, which means I wanted one which either had a .NET API or was designed/ported to .NET. My decision was between two: db4o and MongoDB. I went for db4o for the single reason that it looked like I could get it running and integrated the quickest. I am making a blogging application and front end as a project with which I can test and learn with these object databases. Another requirement which I thought I would mention is that I also want to be able to use the said database in a shared hosting environment, where I cannot install, run and maintain a server instance of said object database. I can do exactly this with db4o. (I have not tried to do this with MongoDB at the time of writing.) There are quite a few in the industry now, and you can read an interesting post about different ones and how they are used with some of the heavyweights in the industry here: http://blog.marcua.net/post/442594842/notes-from-nosql-live-boston-2010

    In the example which I am building I am using StructureMap as my IoC container. To inject the object for db4o I went with a singleton instance scope, as I am using a single file and I need this to be available to any thread in the process, as opposed to using the server implementation where I could open and close client connections with the server handling each one respectively. Again I want to point out that I have chosen to stick with the non-server implementation of db4o because I wanted to use this in a shared hosting environment where I cannot have such servers installed and run.

    public static class Bootstrapper
    {
        public static void ConfigureStructureMap()
        {
            ObjectFactory.Initialize(x => x.AddRegistry(new MyApplicationRegistry()));
        }
    }

    public class MyApplicationRegistry : Registry
    {
        public const string DB4O_FILENAME = "blog123";

        public string DbPath
        {
            get
            {
                return Path.Combine(
                    Path.GetDirectoryName(Assembly.GetAssembly(typeof(IBlogRepository)).Location),
                    DB4O_FILENAME);
            }
        }

        public MyApplicationRegistry()
        {
            For<IObjectContainer>().Singleton().Use(
                () => Db4oEmbedded.OpenFile(Db4oEmbedded.NewConfiguration(), DbPath));

            Scan(assemblyScanner =>
            {
                assemblyScanner.TheCallingAssembly();
                assemblyScanner.WithDefaultConventions();
            });
        }
    }

    So my code above is the StructureMap plumbing which I use for the application. I am doing this simply as a quick scratch pad to play around with different things, so I am simply segregating logical layers with folder structure as opposed to different assemblies. It would be easy to do this with any segment, but for the purposes of example I have literally just whacked everything in the one assembly. You can see an example file structure I have on the right.
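    To make the wiring concrete, here is a minimal usage sketch under a few assumptions: it assumes the Post and IBlogRepository shapes shown later in this post, that Post exposes Key/Title/Author/Content members, and it uses the classic StructureMap ObjectFactory API referenced above. It is an illustration only, not part of the original project.

    using System;
    using StructureMap;

    public static class Program
    {
        public static void Main()
        {
            // Wire up StructureMap; the registry binds IObjectContainer as a singleton.
            Bootstrapper.ConfigureStructureMap();

            // Default conventions map IBlogRepository to BlogRepository,
            // with the shared IObjectContainer injected via its constructor.
            var repository = ObjectFactory.GetInstance<IBlogRepository>();

            // Store a post and read it back by key (Post members assumed here).
            var post = new Post { Key = "first-post", Title = "Hello db4o", Author = "Andrew", Content = "..." };
            repository.Put(post);

            var fetched = repository.GetByKey("first-post");
            Console.WriteLine(fetched.Title);
        }
    }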
    I am planning on testing out a few implementations of the object databases out there, so I can program to an interface of IBlogRepository.

    One of the things which I was unsure about was how it performed under a multi-threaded environment, which it will undoubtedly be used in 9 times out of 10. Because I am using the db context as a singleton, I assumed that the library was thread safe, but I did not know for sure, as I had not read it anywhere in the documentation - again, this is probably me not reading things correctly. In short, though, I threw together a simple test where I iterate to a limit, each time kicking a common task off with a thread from a thread pool. This task simply created a random Post and added it to the storage. I put the execution of the threads inside the Setup of the test and then simply ensured the number of posts committed to the database equalled the number of iterations I made. Here is the code I used to do the multi-threaded jobs:

    [TestInitialize]
    public void Setup()
    {
        var sw = new System.Diagnostics.Stopwatch();
        sw.Start();

        var resetEvent = new ManualResetEvent(false);
        ThreadPool.SetMaxThreads(20, 20);

        for (var i = 0; i < MAX_ITERATIONS; i++)
        {
            ThreadPool.QueueUserWorkItem(delegate(object state)
            {
                var eventToReset = (ManualResetEvent)state;
                var post = new Post { Author = MockUser, Content = "Mock Content", Title = "Title" };
                Repository.Put(post);

                var counter = Interlocked.Decrement(ref _threadCounter);
                if (counter == 0)
                    eventToReset.Set();
            }, resetEvent);
        }

        WaitHandle.WaitAll(new[] { resetEvent });
        sw.Stop();
        Console.WriteLine("{0:00}.{1:00} seconds", sw.Elapsed.Seconds, sw.Elapsed.Milliseconds);
    }

    I was not doing this to test out the speed performance of db4o, but while I was doing it I could not help but put in a StopWatch and see, out of sheer interest, how fast it would insert a number of Posts. I tested it in this case with 10,000 inserts of a small, simple POCO and it resulted in an average of 899.36 object inserts per second. Again, this is just a simple, crude test which came out of my curiosity at how it performed under many threads when using the non-server implementation of db4o.

    With regards to the actual repository implementation itself, it really is quite straightforward, and I have to say I am very surprised at how easy it was to integrate and get up and running. One thing I have noticed in the exposure I have had so far is that Query returns IList<T> as opposed to IQueryable<T>, but again I have not looked into this in depth; this could be there already, and if not, they have provided everything one needs to make one's own repository. An example of a couple of methods from my db4o implementation of the BlogRepository is below:

    public class BlogRepository : IBlogRepository
    {
        private readonly IObjectContainer _db;

        public BlogRepository(IObjectContainer db)
        {
            _db = db;
        }

        public void Put(DomainObject obj)
        {
            _db.Store(obj);
        }

        public void Delete(DomainObject obj)
        {
            _db.Delete(obj);
        }

        public Post GetByKey(object key)
        {
            return _db.Query<Post>(post => post.Key == key).FirstOrDefault();
        }
        ...

    Anyway, I hope to get a few more implementations going of the object databases and just get familiarized with them and the concept of NoSQL databases.

    Cheers for now,
    Andrew

    Read the article

  • Podcast Show Notes: Toronto Architect Day Panel Discussion

    - by Bob Rhubart
    The latest Oracle Technology Network ArchBeat Podcast features a four-part series recorded live during the panel discussion at OTN Architect Day in Toronto, April 21, 2011. More than 100 people attended the event, and the audience tossed a lot of great questions at a terrific panel. Listen for yourself...

    Listen to Part 1 - Panel introduction and a discussion of the typical characteristics of Cloud early-adopters.
    Listen to Part 2 (June 22) - The panelists respond to an audience question about what happens when data in the Cloud crosses international borders.
    Listen to Part 3 (June 29) - The panel discusses public versus private cloud as the best strategy for small or start-up businesses.
    Listen to Part 4 (July 6) - The panel responds to an audience question about how cloud computing changes performance testing paradigms.

    The Architect Day panel includes (listed alphabetically):
    - Dr. James Baty: Vice President, Oracle Global Enterprise Architecture Program [LinkedIn]
    - Dave Chappelle: Enterprise Architect, Oracle Global Enterprise Architecture Program [LinkedIn]
    - Timothy Davis: Director, Enterprise Architecture, Oracle Enterprise Solutions Group [LinkedIn]
    - Michael Glas: Director, Enterprise Architecture, Oracle [LinkedIn]
    - Bob Hensle: Director, Oracle [LinkedIn]
    - Floyd Marinescu: Co-founder & Chief Editor of InfoQ.com and the QCon conferences [LinkedIn | Twitter | Homepage]
    - Cary Millsap: Oracle ACE Director; Founder, President, and CEO at Method R Corporation [LinkedIn | Blog | Twitter]

    Coming soon:
    - IASA CEO Paul Preiss talks about architecture as a profession.
    - Thomas Erl and Anne Thomas Manes discuss their new book SOA Governance: Governing Shared Services On-Premise & in the Cloud.
    - A discussion of women in architecture.

    Stay tuned: RSS

    Read the article

  • Introducing Oracle VM VirtualBox

    - by Fat Bloke
    I guess these things always take longer than expected and, while the dust is still not completely settled in all the ex-Sun geographies, it is high time we started looking at some of the great new assets in the Oracle VM portfolio. So let's start with one of the most exciting: Oracle VM VirtualBox.

    VirtualBox is cross-platform virtualization software, oftentimes called a hypervisor, and it runs on Windows, Linux, Solaris and the Mac. That means you download it, install it on your existing platform, and start creating and running virtual machines alongside your existing applications - for example, on my Mac I can run Oracle Enterprise Linux and Windows 7 alongside my Mac apps.

    VirtualBox use has grown phenomenally, to the point that at Sun it was the third most popular download behind Java and MySQL. Its success can be attributed to the fact that it doesn't need dedicated hardware, it can be installed on either client or server classes of computers, is very easy to use, and is free for personal use. And, as you might expect, VirtualBox has its own vibrant community too, over at www.virtualbox.org. There are hundreds of tutorials out there about how to use VirtualBox to create VMs and install different operating systems ranging from Windows 7 to ChromeOS, and if you don't want to install an operating system yourself, you can download pre-built virtual appliances from community sites such as VirtualBox Images, or from commercial companies selling subscriptions to whole application stacks, such as JumpBox. In no time you'll be creating and sharing your own VMs using the VirtualBox OVF export and import function.

    VirtualBox is deceptively powerful. Under the simple GUI lies a formidable engine capable of running heavyweight multi-CPU virtual workloads, exhibiting enterprise capabilities including a built-in remote display server, an iSCSI initiator for connecting to shared storage, and the ability to teleport running VMs from one host to another. And for solution builders, you should be aware that VirtualBox has a scriptable command line interface, an SDK, and rich web service APIs. To get a further feel for what VirtualBox is capable of, check out some of these short movies or simply go download it for yourself. - FB

    Read the article

  • Unity Plugin DLLNotFoundException

    - by Dewayne
    I am using a plugin DLL that I created in Visual C++ Express 2010 on Windows 7 64-bit Ultimate Edition. The DLL functions properly on the machine that it was originally created on. The problem is that the DLL is not functioning in the Unity3D editor on another machine, giving an error that basically states that the DLL is missing some of its dependencies. The target machine is running Windows 7 Home 64-bit (if this is relevant).

    Results from the error log of Dependency Walker:

    Error: The Side-by-Side configuration information for "c:\users\dewayne\desktop\shared\vrpnplugin\unityplugin\build\release\OPTITRACKPLUGIN.DLL" contains errors. The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log or use the command-line sxstrace.exe tool for more detail (14001).
    Error: At least one module has an unresolved import due to a missing export function in an implicitly dependent module.
    Error: Modules with different CPU types were found.
    Warning: At least one delay-load dependency module was not found.
    Warning: At least one module has an unresolved import due to a missing export function in a delay-load dependent module.

    The Visual C++ Express 2010 project and solution file can be found here: https://docs.google.com/leaf?id=0B1F4pP7mRSiYMGU2YTJiNTUtOWJiMS00YTYzLThhYWQtMzNiOWJhZDU5M2M0&hl=en&authkey=CJSXhqgH

    The zip is 79MB and also contains its dependencies. The DLL in question is OptiTrackPlugin.dll
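    For context on why a missing dependency surfaces this way: Unity scripts reach into a native plugin through P/Invoke, and a DllNotFoundException is raised at the first marshalled call when Windows cannot load the DLL or any of its implicit dependencies (such as the VC++ runtime the plugin was built against). Below is a minimal sketch of that binding side; the exported function name OT_Initialize is hypothetical, for illustration only, and is not taken from the actual OptiTrackPlugin.dll.

    using System;
    using System.Runtime.InteropServices;
    using UnityEngine;

    public class OptiTrackBinding : MonoBehaviour
    {
        // P/Invoke binding into the native plugin. On Windows, Unity resolves
        // "OptiTrackPlugin" to OptiTrackPlugin.dll in the project's Plugins folder.
        [DllImport("OptiTrackPlugin")]
        private static extern int OT_Initialize(); // hypothetical export name

        void Start()
        {
            try
            {
                int result = OT_Initialize();
                Debug.Log("Native plugin initialized: " + result);
            }
            catch (DllNotFoundException e)
            {
                // Often means a *dependency* of the DLL failed to load (wrong CPU
                // type, missing runtime), not that the DLL file itself is absent -
                // exactly what the Dependency Walker log above reports.
                Debug.LogError("Plugin load failed: " + e.Message);
            }
        }
    }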

    Read the article

  • What free Remote Desktop (server) solutions are there?

    - by Tao
    I know Ubuntu comes with a "Remote Desktop" option that appears to be a straightforward VNC server, and I'm trying to understand the alternatives. Here are the possibilities I've heard about so far:

    - VNC
    - VNC + SSH tunnelling
    - NX Server, free edition
    - FreeNX
    - NeatX
    - X2Go
    - X11 forwarding over SSH
    - xrdp

    I'm coming at this from a Windows user's perspective: to the best of my experience, RDP (aka Terminal Services) is a reasonably secure (barring MITM/server spoofing), efficient desktop sharing protocol with well-supported clients that can be exposed to the internet when necessary without major fears of intrusion. To the best of my knowledge, straight VNC is none of those things, which is where I get confused - why wouldn't a better desktop sharing technology be developed or used in the open-source world? I know VNC can be wrapped with SSH, but that seems beyond the reach of a casual user. X11 forwarding over SSH may be more or less efficient, I have no idea, but it is definitely even more complicated, and doesn't (as far as I know) give you access to already-running stuff (no desktop sharing as such, just remote application running).

    So, I'd like any feedback/preferences amongst these or any other "free" desktop sharing options, using these criteria and/or any others:

    - Security (esp. for access across the internet)
    - Efficiency (bandwidth usage, responsiveness, etc.)
    - Free-ness, as in speech (not sure where RDP or FreeNX lie on this)
    - Free-ness, as in beer (are there any commercial solutions with usable, dependable free offerings?)
    - Ease of use (server and client side)
    - Cross-OS client availability
    - Cross-OS server availability
    - Support for independent sessions and shared (and/or "console") sessions
    - Ongoing support/maintenance/development

    Thanks!

    Read the article

  • wifi not recognized

    - by pumper
    I had wifi and worked then some day ubuntu asked me to update some packeages and restarted the system and after that no wifi. this is my wireless_script output : ########## wireless info START ########## ##### release ##### Distributor ID: Ubuntu Description: Ubuntu 14.04 LTS Release: 14.04 Codename: trusty ##### kernel ##### Linux S510p 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux ##### lspci ##### 02:00.0 Network controller [0280]: Qualcomm Atheros QCA9565 / AR9565 Wireless Network Adapter [168c:0036] (rev 01) Subsystem: Lenovo Device [17aa:3026] Kernel driver in use: ath9k 03:00.0 Ethernet controller [0200]: Qualcomm Atheros AR8162 Fast Ethernet [1969:1090] (rev 10) Subsystem: Lenovo Device [17aa:3807] Kernel driver in use: alx ##### lsusb ##### Bus 001 Device 006: ID 0eef:a111 D-WAV Scientific Co., Ltd Bus 001 Device 007: ID 0cf3:3004 Atheros Communications, Inc. Bus 001 Device 004: ID 174f:1488 Syntek Bus 001 Device 003: ID 03f0:5607 Hewlett-Packard Bus 001 Device 002: ID 8087:8000 Intel Corp. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 002 Device 002: ID 15d9:0a4c Trust International B.V. USB+PS/2 Optical Mouse Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub ##### PCMCIA Card Info ##### ##### rfkill ##### 0: ideapad_wlan: Wireless LAN Soft blocked: no Hard blocked: no 1: ideapad_bluetooth: Bluetooth Soft blocked: no Hard blocked: no 2: phy0: Wireless LAN Soft blocked: no Hard blocked: no 3: hci0: Bluetooth Soft blocked: no Hard blocked: no ##### iw reg get ##### country 00: (2402 - 2472 @ 40), (3, 20) (2457 - 2482 @ 40), (3, 20), PASSIVE-SCAN, NO-IBSS (2474 - 2494 @ 20), (3, 20), NO-OFDM, PASSIVE-SCAN, NO-IBSS (5170 - 5250 @ 40), (3, 20), PASSIVE-SCAN, NO-IBSS (5735 - 5835 @ 40), (3, 20), PASSIVE-SCAN, NO-IBSS ##### interfaces ##### # interfaces(5) file used by ifup(8) and ifdown(8) auto lo iface lo inet loopback auto dsl-provider iface dsl-provider inet ppp pre-up /sbin/ifconfig wlan0 up # line maintained by pppoeconf provider dsl-provider auto wlan0 iface wlan0 inet manual ##### iwconfig ##### wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=16 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off ##### route ##### Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface ##### resolv.conf ##### ##### nm-tool ##### NetworkManager Tool State: connected (global) - Device: eth0 ----------------------------------------------------------------- Type: Wired Driver: alx State: unavailable Default: no HW Address: <MAC address removed> Capabilities: Carrier Detect: yes Wired Properties Carrier: off - Device: wlan0 ---------------------------------------------------------------- Type: 802.11 WiFi Driver: ath9k State: unmanaged Default: no HW Address: <MAC address removed> Capabilities: Wireless Properties WEP Encryption: yes WPA Encryption: yes WPA2 Encryption: yes Wireless Access Points ##### NetworkManager.state ##### [main] NetworkingEnabled=true WirelessEnabled=true WWANEnabled=true WimaxEnabled=true ##### NetworkManager.conf ##### [main] plugins=ifupdown,keyfile,ofono dns=dnsmasq no-auto-default=<MAC address removed>, [ifupdown] managed=false ##### iwlist ##### wlan0 Scan completed : Cell 01 - Address: <MAC address removed> Channel:1 Frequency:2.412 GHz (Channel 1) Quality=55/70 Signal level=-55 dBm Encryption key:on ESSID:"mohsen" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 
Mb/s; 11 Mb/s; 6 Mb/s 9 Mb/s; 12 Mb/s; 18 Mb/s Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=000000076c342498 Extra: Last beacon: 12ms ago IE: Unknown: 00066D6F6873656E IE: Unknown: 010882848B960C121824 IE: Unknown: 030101 IE: Unknown: 2A0104 IE: Unknown: 32043048606C ##### iwlist channel ##### wlan0 13 channels in total; available frequencies : Channel 01 : 2.412 GHz Channel 02 : 2.417 GHz Channel 03 : 2.422 GHz Channel 04 : 2.427 GHz Channel 05 : 2.432 GHz Channel 06 : 2.437 GHz Channel 07 : 2.442 GHz Channel 08 : 2.447 GHz Channel 09 : 2.452 GHz Channel 10 : 2.457 GHz Channel 11 : 2.462 GHz Channel 12 : 2.467 GHz Channel 13 : 2.472 GHz ##### lsmod ##### ath3k 13318 0 bluetooth 395423 23 bnep,ath3k,btusb,rfcomm ath9k 164164 0 ath9k_common 13551 1 ath9k ath9k_hw 453856 2 ath9k_common,ath9k ath 28698 3 ath9k_common,ath9k,ath9k_hw mac80211 626489 1 ath9k cfg80211 484040 3 ath,ath9k,mac80211 ##### modinfo ##### filename: /lib/modules/3.13.0-24-generic/kernel/drivers/bluetooth/ath3k.ko firmware: ath3k-1.fw license: GPL version: 1.0 description: Atheros AR30xx firmware driver author: Atheros Communications srcversion: 98A5245588C09E5E41690D0 alias: usb:v0489pE036d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0489pE03Cd*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0489pE02Cd*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0CF3pE003d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0CF3p3121d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v13D3p3402d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v04C5p1330d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0489pE04Dd*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0489pE056d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0489pE04Ed*dc*dsc*dp*ic*isc*ip*in* alias: usb:v13D3p3393d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0489pE057d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0930p0220d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0930p0219d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0CF3pE005d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0CF3pE004d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v13D3p3362d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v04CAp3008d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v04CAp3006d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v04CAp3005d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v04CAp3004d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v13D3p3375d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0CF3p817Ad*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0CF3p311Dd*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0CF3p3008d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0CF3p3004d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0CF3p0036d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v03F0p311Dd*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0489pE027d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0489pE03Dd*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0930p0215d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v13D3p3304d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0CF3pE019d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0CF3p3002d*dc*dsc*dp*ic*isc*ip*in* alias: usb:v0CF3p3000d*dc*dsc*dp*ic*isc*ip*in* depends: bluetooth intree: Y vermagic: 3.13.0-24-generic SMP mod_unload modversions signer: Magrathea: Glacier signing key sig_key: <MAC address removed>:D9:06:21:70:6E:8D:06:60:4D:73:0B:35:9F:C0 sig_hashalgo: sha512 filename: /lib/modules/3.13.0-24-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k.ko license: Dual BSD/GPL description: Support for Atheros 802.11n wireless LAN cards. 
author: Atheros Communications srcversion: BAF225EEB618908380B28DA alias: platform:qca955x_wmac alias: platform:ar934x_wmac alias: platform:ar933x_wmac alias: platform:ath9k alias: pci:v0000168Cd00000036sv*sd*bc*sc*i* alias: pci:v0000168Cd00000036sv0000185Fsd00003027bc*sc*i* alias: pci:v0000168Cd00000036sv00001B9Asd00002810bc*sc*i* alias: pci:v0000168Cd00000036sv0000144Fsd00007202bc*sc*i* alias: pci:v0000168Cd00000036sv00001A3Bsd00002130bc*sc*i* alias: pci:v0000168Cd00000036sv000011ADsd00000612bc*sc*i* alias: pci:v0000168Cd00000036sv000011ADsd00000652bc*sc*i* alias: pci:v0000168Cd00000036sv000011ADsd00000642bc*sc*i* alias: pci:v0000168Cd00000036sv0000168Csd0000302Cbc*sc*i* alias: pci:v0000168Cd00000036sv0000168Csd00003027bc*sc*i* alias: pci:v0000168Cd00000036sv0000144Dsd0000411Ebc*sc*i* alias: pci:v0000168Cd00000036sv0000144Dsd0000411Dbc*sc*i* alias: pci:v0000168Cd00000036sv0000144Dsd0000411Cbc*sc*i* alias: pci:v0000168Cd00000036sv0000144Dsd0000411Bbc*sc*i* alias: pci:v0000168Cd00000036sv0000144Dsd0000411Abc*sc*i* alias: pci:v0000168Cd00000036sv00001028sd0000020Ebc*sc*i* alias: pci:v0000168Cd00000036sv0000103Csd0000217Fbc*sc*i* alias: pci:v0000168Cd00000036sv0000103Csd000018E3bc*sc*i* alias: pci:v0000168Cd00000036sv000017AAsd00003026bc*sc*i* alias: pci:v0000168Cd00000036sv00001A3Bsd0000213Abc*sc*i* alias: pci:v0000168Cd00000036sv000011ADsd00000662bc*sc*i* alias: pci:v0000168Cd00000036sv000011ADsd00000672bc*sc*i* alias: pci:v0000168Cd00000036sv000011ADsd00000622bc*sc*i* alias: pci:v0000168Cd00000036sv0000185Fsd00003028bc*sc*i* alias: pci:v0000168Cd00000036sv0000105Bsd0000E069bc*sc*i* alias: pci:v0000168Cd00000036sv0000168Csd0000302Bbc*sc*i* alias: pci:v0000168Cd00000036sv0000168Csd00003026bc*sc*i* alias: pci:v0000168Cd00000036sv0000168Csd00003025bc*sc*i* alias: pci:v0000168Cd00000036sv00001B9Asd00002812bc*sc*i* alias: pci:v0000168Cd00000036sv00001B9Asd00002811bc*sc*i* alias: pci:v0000168Cd00000036sv000011ADsd00006671bc*sc*i* alias: pci:v0000168Cd00000036sv000011ADsd00000632bc*sc*i* alias: pci:v0000168Cd00000036sv0000185Fsd0000A119bc*sc*i* alias: pci:v0000168Cd00000036sv0000105Bsd0000E068bc*sc*i* alias: pci:v0000168Cd00000036sv00001A3Bsd00002176bc*sc*i* alias: pci:v0000168Cd00000036sv0000168Csd00003028bc*sc*i* alias: pci:v0000168Cd00000037sv*sd*bc*sc*i* alias: pci:v0000168Cd00000034sv*sd*bc*sc*i* alias: pci:v0000168Cd00000034sv000010CFsd00001783bc*sc*i* alias: pci:v0000168Cd00000034sv000014CDsd00000064bc*sc*i* alias: pci:v0000168Cd00000034sv000014CDsd00000063bc*sc*i* alias: pci:v0000168Cd00000034sv0000103Csd00001864bc*sc*i* alias: pci:v0000168Cd00000034sv000011ADsd00006641bc*sc*i* alias: pci:v0000168Cd00000034sv000011ADsd00006631bc*sc*i* alias: pci:v0000168Cd00000034sv00001043sd0000850Ebc*sc*i* alias: pci:v0000168Cd00000034sv00001A3Bsd00002110bc*sc*i* alias: pci:v0000168Cd00000034sv00001969sd00000091bc*sc*i* alias: pci:v0000168Cd00000034sv000017AAsd00003214bc*sc*i* alias: pci:v0000168Cd00000034sv0000168Csd00003117bc*sc*i* alias: pci:v0000168Cd00000034sv000011ADsd00006661bc*sc*i* alias: pci:v0000168Cd00000034sv00001A3Bsd00002116bc*sc*i* alias: pci:v0000168Cd00000033sv*sd*bc*sc*i* alias: pci:v0000168Cd00000032sv*sd*bc*sc*i* alias: pci:v0000168Cd00000032sv00001043sd0000850Dbc*sc*i* alias: pci:v0000168Cd00000032sv00001B9Asd00001C01bc*sc*i* alias: pci:v0000168Cd00000032sv00001B9Asd00001C00bc*sc*i* alias: pci:v0000168Cd00000032sv00001A3Bsd00001F95bc*sc*i* alias: pci:v0000168Cd00000032sv00001A3Bsd00001195bc*sc*i* alias: pci:v0000168Cd00000032sv00001A3Bsd00001F86bc*sc*i* alias: 
pci:v0000168Cd00000032sv00001A3Bsd00001186bc*sc*i* alias: pci:v0000168Cd00000032sv00001B9Asd00002001bc*sc*i* alias: pci:v0000168Cd00000032sv00001B9Asd00002000bc*sc*i* alias: pci:v0000168Cd00000032sv0000144Fsd00007197bc*sc*i* alias: pci:v0000168Cd00000032sv0000105Bsd0000E04Fbc*sc*i* alias: pci:v0000168Cd00000032sv0000105Bsd0000E04Ebc*sc*i* alias: pci:v0000168Cd00000032sv000011ADsd00006628bc*sc*i* alias: pci:v0000168Cd00000032sv000011ADsd00006627bc*sc*i* alias: pci:v0000168Cd00000032sv00001C56sd00004001bc*sc*i* alias: pci:v0000168Cd00000032sv00001A3Bsd00002100bc*sc*i* alias: pci:v0000168Cd00000032sv00001A3Bsd00002C97bc*sc*i* alias: pci:v0000168Cd00000032sv000017AAsd00003219bc*sc*i* alias: pci:v0000168Cd00000032sv000017AAsd00003218bc*sc*i* alias: pci:v0000168Cd00000032sv0000144Dsd0000C708bc*sc*i* alias: pci:v0000168Cd00000032sv0000144Dsd0000C680bc*sc*i* alias: pci:v0000168Cd00000032sv0000144Dsd0000C706bc*sc*i* alias: pci:v0000168Cd00000032sv0000144Dsd0000410Fbc*sc*i* alias: pci:v0000168Cd00000032sv0000144Dsd0000410Ebc*sc*i* alias: pci:v0000168Cd00000032sv0000144Dsd0000410Dbc*sc*i* alias: pci:v0000168Cd00000032sv0000144Dsd00004106bc*sc*i* alias: pci:v0000168Cd00000032sv0000144Dsd00004105bc*sc*i* alias: pci:v0000168Cd00000032sv0000185Fsd00003027bc*sc*i* alias: pci:v0000168Cd00000032sv0000185Fsd00003119bc*sc*i* alias: pci:v0000168Cd00000032sv0000168Csd00003122bc*sc*i* alias: pci:v0000168Cd00000032sv0000168Csd00003119bc*sc*i* alias: pci:v0000168Cd00000032sv0000105Bsd0000E075bc*sc*i* alias: pci:v0000168Cd00000032sv00001A3Bsd00002152bc*sc*i* alias: pci:v0000168Cd00000032sv00001A3Bsd0000126Abc*sc*i* alias: pci:v0000168Cd00000032sv00001A3Bsd00002126bc*sc*i* alias: pci:v0000168Cd00000032sv00001A3Bsd00001237bc*sc*i* alias: pci:v0000168Cd00000032sv00001A3Bsd00002086bc*sc*i* alias: pci:v0000168Cd00000030sv*sd*bc*sc*i* alias: pci:v0000168Cd0000002Esv*sd*bc*sc*i* alias: pci:v0000168Cd0000002Dsv*sd*bc*sc*i* alias: pci:v0000168Cd0000002Csv*sd*bc*sc*i* alias: pci:v0000168Cd0000002Bsv*sd*bc*sc*i* alias: pci:v0000168Cd0000002Bsv00001A3Bsd00002C37bc*sc*i* alias: pci:v0000168Cd0000002Asv000010CFsd00001536bc*sc*i* alias: pci:v0000168Cd0000002Asv000010CFsd0000147Dbc*sc*i* alias: pci:v0000168Cd0000002Asv000010CFsd0000147Cbc*sc*i* alias: pci:v0000168Cd0000002Asv0000185Fsd0000309Dbc*sc*i* alias: pci:v0000168Cd0000002Asv00001A32sd00000306bc*sc*i* alias: pci:v0000168Cd0000002Asv000011ADsd00006642bc*sc*i* alias: pci:v0000168Cd0000002Asv000011ADsd00006632bc*sc*i* alias: pci:v0000168Cd0000002Asv0000105Bsd0000E01Fbc*sc*i* alias: pci:v0000168Cd0000002Asv00001A3Bsd00001C71bc*sc*i* alias: pci:v0000168Cd0000002Asv*sd*bc*sc*i* alias: pci:v0000168Cd00000029sv*sd*bc*sc*i* alias: pci:v0000168Cd00000027sv*sd*bc*sc*i* alias: pci:v0000168Cd00000024sv*sd*bc*sc*i* alias: pci:v0000168Cd00000023sv*sd*bc*sc*i* depends: ath9k_hw,mac80211,ath9k_common,cfg80211,ath intree: Y vermagic: 3.13.0-24-generic SMP mod_unload modversions signer: Magrathea: Glacier signing key sig_key: <MAC address removed>:D9:06:21:70:6E:8D:06:60:4D:73:0B:35:9F:C0 sig_hashalgo: sha512 parm: debug:Debugging mask (uint) parm: nohwcrypt:Disable hardware encryption (int) parm: blink:Enable LED blink on activity (int) parm: btcoex_enable:Enable wifi-BT coexistence (int) parm: bt_ant_diversity:Enable WLAN/BT RX antenna diversity (int) parm: ps_enable:Enable WLAN PowerSave (int) filename: /lib/modules/3.13.0-24-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_common.ko license: Dual BSD/GPL description: Shared library for Atheros wireless 802.11n LAN cards. 
author: Atheros Communications srcversion: 696B00A6C59713EC0966997 depends: ath,ath9k_hw intree: Y vermagic: 3.13.0-24-generic SMP mod_unload modversions signer: Magrathea: Glacier signing key sig_key: <MAC address removed>:D9:06:21:70:6E:8D:06:60:4D:73:0B:35:9F:C0 sig_hashalgo: sha512 filename: /lib/modules/3.13.0-24-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_hw.ko license: Dual BSD/GPL description: Support for Atheros 802.11n wireless LAN cards. author: Atheros Communications srcversion: 4809F3842A0542CD6B556D3 depends: ath intree: Y vermagic: 3.13.0-24-generic SMP mod_unload modversions signer: Magrathea: Glacier signing key sig_key: <MAC address removed>:D9:06:21:70:6E:8D:06:60:4D:73:0B:35:9F:C0 sig_hashalgo: sha512 filename: /lib/modules/3.13.0-24-generic/kernel/drivers/net/wireless/ath/ath.ko license: Dual BSD/GPL description: Shared library for Atheros wireless LAN cards. author: Atheros Communications srcversion: 88A67C5359B02C5A710AFCF depends: cfg80211 intree: Y vermagic: 3.13.0-24-generic SMP mod_unload modversions signer: Magrathea: Glacier signing key sig_key: <MAC address removed>:D9:06:21:70:6E:8D:06:60:4D:73:0B:35:9F:C0 sig_hashalgo: sha512 ##### modules ##### lp rtc ##### blacklist ##### [/etc/modprobe.d/blacklist-ath_pci.conf] blacklist ath_pci [/etc/modprobe.d/blacklist.conf] blacklist evbug blacklist usbmouse blacklist usbkbd blacklist eepro100 blacklist de4x5 blacklist eth1394 blacklist snd_intel8x0m blacklist snd_aw2 blacklist i2c_i801 blacklist prism54 blacklist bcm43xx blacklist garmin_gps blacklist asus_acpi blacklist snd_pcsp blacklist pcspkr blacklist amd76x_edac [/etc/modprobe.d/fbdev-blacklist.conf] blacklist arkfb blacklist aty128fb blacklist atyfb blacklist radeonfb blacklist cirrusfb blacklist cyber2000fb blacklist gx1fb blacklist gxfb blacklist kyrofb blacklist matroxfb_base blacklist mb862xxfb blacklist neofb blacklist nvidiafb blacklist pm2fb blacklist pm3fb blacklist s3fb blacklist savagefb blacklist sisfb blacklist tdfxfb blacklist tridentfb blacklist viafb blacklist vt8623fb ##### udev rules ##### # PCI device 0x1969:0x1090 (alx) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="<MAC address removed>", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0" # PCI device 0x168c:0x0036 (ath9k) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="<MAC address removed>", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="wlan*", NAME="wlan0" ##### dmesg ##### [ 1.707662] psmouse serio1: elantech: assuming hardware version 3 (with firmware version 0x450f03) [ 11.918852] ath: phy0: WB335 1-ANT card detected [ 11.918856] ath: phy0: Set BT/WLAN RX diversity capability [ 11.926438] ath: phy0: Enable LNA combining [ 11.928469] ath: phy0: ASPM enabled: 0x42 [ 11.928473] ath: EEPROM regdomain: 0x65 [ 11.928475] ath: EEPROM indicates we should expect a direct regpair map [ 11.928478] ath: Country alpha2 being used: 00 [ 11.928479] ath: Regpair used: 0x65 [ 14.066021] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready ########## wireless info END ############

    Read the article

  • Point subdomain to another web hosting

    - by zulhfreelancer
    I bought a domain from GoDaddy. Let's call this domain abc.com. I have a hosting account on web hosting A with abc.com as my primary domain. I created a sub-domain, pic.abc.com, and I wish it to be pointed to my new hosting account. This web hosting B also uses abc.com as the primary domain. The main reason for doing this is that I only want to use web hosting B to host my pic.abc.com contents; that's all. Other content from *.abc.com and abc.com will remain on web hosting A. How do I do this?

    Note 1: I tried to add pic.abc.com as an add-on domain in my web hosting B account. Unfortunately, I couldn't do that.
    Note 2: I also tried a URL forwarding method using a DNS management service like DNS Social. It doesn't work.
    Note 3: I'm using WordPress on the pic.abc.com site. Both web hosting A and B run cPanel - Apache servers (shared hosting).

    Thanks in advance!

    Read the article

  • UPK Professional Customer Success Story: Medtronic

    - by [email protected]
    In case you missed the live event, be sure to listen to last week's UPK Customer iSeminar featuring Medtronic. This was the first iSeminar in our quarterly series to showcase UPK Professional (UPK and Knowledge Pathways). Donna Miller and Staci Gilbert gave viewers an inside look at samples of Medtronic's content as they shared their experiences, methodology and best practices for use of the solution.

    Here are some highlights of the call:
    • Medtronic initially purchased UPK Professional to support a multi-year, global SAP rollout for 9,000 end users located in 24 countries.
    • As time went on, they expanded their use of UPK Professional to include several of their other enterprise applications: PeopleSoft, Siebel CRM, Hyperion Financial Management, a number of SAP bolt-ons, Documentum, TrackWise, and many others.
    • In combination with their Saba LMS, UPK Professional has allowed Medtronic to create, deploy, track and certify consistent end user training for critical transactions and processes across their organization worldwide - essential for a company in a heavily regulated industry.
    • For key pieces of content or certain end user populations, some Medtronic business units localize/translate the global UPK content. Staci demonstrated examples of their SAP content which has been translated into Japanese.
    • In the live SAP environment, end users rely on UPK's context-sensitive, in-application performance support. Medtronic has found this to be very helpful post go-live, giving just-in-time support so end users are confident in a new system or when performing tasks they don't often touch (at quarter or year end). UPK also serves as Medtronic's internal Google.
    • Medtronic has realized savings on many fronts: reduction in support calls due to in-application performance support, elimination of their training clients, and speedier training (1.5 days rather than 5-7 days) of temporary workers by moving from ILT to a blended solution that includes UPK simulations for eLearning.

    Thanks again to Donna and Staci for an exceptional presentation. They offered so many great examples for anyone who's looking for ways to get more out of UPK or interested in learning about UPK Professional: Knowledge Pathways.

    - Karen Rihs, Oracle UPK Outbound Product Management

    Read the article

  • Windows Azure Evolution – TFS Integration (WAWS Part 2)

    - by Shaun
    So this is the fourth blog post about the new features of Windows Azure, and the second part on Windows Azure Web Sites. But it is not focused on WAWS alone, since the feature I'm going to introduce is available in both Windows Azure Web Sites and Windows Azure Cloud Services (a.k.a. hosted services). In the previous post I talked about Windows Azure Web Sites and how to use its gallery to build a WordPress personal blog without coding. Besides the gallery, we can create an empty web site and upload our website through various approaches. And one of the highlighted features here is that we can integrate our web site with a source control service, such as TFS or Git, so that it is deployed automatically once a new commit or build is available.

    Create New Empty Web Site

    In the developer portal, when creating a new web site, we can select the QUICK CREATE item. This creates an empty web site with only one shared instance and no database associated. Let's specify the URL, region and subscription and click OK. After a few seconds our website will be ready, and we can click the BROWSE button to open this empty website. As you can see, there is a welcome page available on my website even though I didn't upload or deploy anything. This means the website is charged even before anything is deployed, similar to a cloud service (hosted service), because once we create a website the Windows Azure platform arranges a hosting process (w3wp.exe) on its group of virtual machines.

    Create Project in TFS Preview Service and Set Up the Link

    Currently Windows Azure Web Sites can integrate with TFS and Git as deployment sources, and for now only the Microsoft TFS Preview Service is supported. I will not go deep into how to use the TFS Preview Service in this post, but once we click into the website we just created and then click "Set up TFS publishing", a dialog appears to help us connect to the service. If you don't have an account, you can click the link shown below to request one. Assuming we already have a TFS service account, we first need to create a new project. Go to your TFS service website and create a new project, giving it a name, description and process template. Then, back in the developer portal, click the "Set up TFS publishing" link. In the pop-up window, provide your TFS service URL and click the "Authorize now" link. Click the "Accept" button to allow Windows Azure to connect to the TFS service. The portal then lists all the projects in the account; just select the one we created and click OK. The web site is now linked to the specified TFS project, and the portal will show something similar to the screenshot below, which means the web site has been linked to TFS successfully.

    Work with TFS Preview Service in VS2010

    The figure above includes links that explain how to connect to the TFS server through Visual Studio 2010 and 2012 RC. If you are using Visual Studio 2012 RC you don't need any extension, but if you are using Visual Studio 2010 you must have SP1 and KB2581206 installed. To connect to the TFS service, open Visual Studio and, in Team Explorer, add a new TFS server, pasting in the URL of the TFS service from the developer portal. Select the project we just created and it will be listed in Team Explorer. Now let's start to build our website.
Since the website we are going to build will be deployed to WAWS, it is NOT a cloud service and NOT a web role, so in this case we create a normal ASP.NET web application, for example an ASP.NET MVC 3 web application. Next, right-click on the solution, select "Add Solution to Source Control", choose the project we just created, and check the code in. Once the check-in finishes, we can see a build running on the TFS server, and if we go back to the developer portal we will see a deployment running on our web site's deployment page. In fact, once we link our web site to TFS, a new build definition is created in the TFS project. It is triggered by each check-in and deploys to the linked web site automatically, so once our code has compiled it is published to our web site from the TFS server. When the build and deployment finish, the deployment shows as active on the developer portal, and we can see the web site that was created in Visual Studio and deployed by TFS.

    Continuous Deployment through VS and TFS

    A big benefit of TFS publishing is continuous deployment. If I change some code in Visual Studio, for example updating some text on the home page, and check in my changes, a new build is triggered and deployed to my WAWS automatically. Even better, if we want to roll back to a previous version, we can just select an existing deployment listed in the portal and click REDEPLOY at the bottom.

    Q&A: Can a Web Site Use Storage and Work with a Worker Role?

    Stacy asked a question on my previous post: can a web site use Windows Azure Storage and, furthermore, work with a worker role? Since web sites are deployed on Windows Azure virtual machines in the data center, they can use all Windows Azure features, such as storage, SQL databases, CDN, etc. But since a web site is normally a standard ASP.NET web application, PHP website or Node.js application, the Windows Azure SDK is not referenced by default; we have to add it ourselves. In our sample project, right-click on the MVC project and select "Manage NuGet packages". In the dialog, search for Windows Azure packages and install "Windows Azure Storage". We then have the assemblies to access Windows Azure storage: tables, queues and blobs. Since I already have a storage account, here is a quick demo that just lists all blobs in a container. The code would be like this.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

namespace WAASTFSDemo.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            ViewBag.Message = "Welcome to Windows Azure!";

            // Build the storage client from the account name and key
            // ([STORAGE_ACCOUNT] and [STORAGE_KEY] are placeholders).
            var credentials = new StorageCredentialsAccountAndKey("[STORAGE_ACCOUNT]", "[STORAGE_KEY]");
            var account = new CloudStorageAccount(credentials, false);
            var client = account.CreateCloudBlobClient();

            // List the URI of every blob in the "shared" container.
            var container = client.GetContainerReference("shared");
            ViewBag.Blobs = container.ListBlobs().Select(b => b.Uri.AbsoluteUri);

            return View();
        }

        public ActionResult About()
        {
            return View();
        }
    }
}

And the view that renders the list:

@{
    ViewBag.Title = "Home Page";
}

<h2>@ViewBag.Message</h2>
<p>
    To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
</p>
<div>
    <ul>
        @foreach (var blob in ViewBag.Blobs)
        {
            <li>@blob</li>
        }
    </ul>
</div>

Then just check in the code and it will be deployed to my web site. Finally we can see the blobs in my storage.

This is just an example, but it proves that web sites can connect to table, blob and queue storage as well. So the answer to Stacy should be "yes": a web site can use queue storage to work with a worker role.

Summary

In this post I demonstrated how to integrate Windows Azure Web Sites with TFS. Our website can be built, uploaded and deployed automatically by the TFS service; all we need to do is provide the TFS name and select the project. And it is not only Windows Azure Web Sites: in this upgrade, Windows Azure Cloud Services (hosted services) can be published through TFS as well, in a very similar way to what was shown above. Currently only the Microsoft TFS Service Preview can be integrated with Windows Azure, but I think in the future we will be able to link an enterprise TFS, or third-party TFS hosts such as CodePlex, to Windows Azure.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
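
By the way, the queue side of that answer would look roughly like the sketch below, using the same 2012-era Microsoft.WindowsAzure.StorageClient assembly as the blob demo above. The queue name "tasks" and the message payload are made up, and the account placeholders match those in the post; a worker role would poll the same queue with GetMessage.

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class QueueDemo
{
    public static void Send()
    {
        // Same placeholder credentials as the blob example above.
        var credentials = new StorageCredentialsAccountAndKey("[STORAGE_ACCOUNT]", "[STORAGE_KEY]");
        var account = new CloudStorageAccount(credentials, false);
        var client = account.CreateCloudQueueClient();

        // "tasks" is a hypothetical queue name shared with the worker role.
        var queue = client.GetQueueReference("tasks");
        queue.CreateIfNotExist();

        // The web site enqueues work...
        queue.AddMessage(new CloudQueueMessage("resize-image:42"));
    }

    public static void Receive()
    {
        // ...and the worker role polls the same queue.
        var credentials = new StorageCredentialsAccountAndKey("[STORAGE_ACCOUNT]", "[STORAGE_KEY]");
        var account = new CloudStorageAccount(credentials, false);
        var queue = account.CreateCloudQueueClient().GetQueueReference("tasks");

        var message = queue.GetMessage();
        if (message != null)
        {
            // Process message.AsString, then remove it from the queue.
            queue.DeleteMessage(message);
        }
    }
}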

    Read the article

  • Using Table-Valued Parameters With SQL Server Reporting Services

    - by Jesse
    In my last post I talked about using table-valued parameters to pass a list of integer values to a stored procedure without resorting to comma-delimited strings and parsing each value out into a TABLE variable. In this post I'll extend the "Customer Transaction Summary" report example to see how we might leverage this same stored procedure from within a SQL Server Reporting Services (SSRS) report. I've worked with SSRS off and on for the past several years and have generally found it to be a very useful tool for building nice-looking reports for end users quickly and easily. That said, I've been frustrated with SSRS from time to time when seemingly simple things are difficult to accomplish or simply not supported at all. I thought that using table-valued parameters from within an SSRS report would be simple, but unfortunately I was wrong.

    Customer Transaction Summary Example

    Let's take the "Customer Transaction Summary" report example from the last post and try to plug that same stored procedure into an SSRS report. Our report will have three parameters:

    - Start Date: beginning of the date range for which the report will summarize customer transactions
    - End Date: end of the date range for which the report will summarize customer transactions
    - Customer Ids: one or more customer Ids representing the customers that will be included in the report

    The simplest way to get started with this report is to create a new dataset and point it at our Customer Transaction Summary stored procedure (note that I'm using SSRS 2012 in the screenshots below, but there should be little to no difference with SSRS 2008). When you initially create this dataset, the SSRS designer tries to invoke the stored procedure to determine the parameters and output fields for you automatically, and as part of this process a dialog pops up. Obviously I can't use this dialog to specify a value for the '@customerIds' parameter, since it is of the IntegerListTableType user-defined type that we created in the last post. Unfortunately this really throws the SSRS designer for a loop: regardless of what combination of Data Type, Pass Null Value, or Parameter Value I used here, I kept getting an error dialog with the message "Operand type clash: nvarchar is incompatible with IntegerListTableType". This error message makes some sense, considering that the nvarchar type is indeed incompatible with the IntegerListTableType, but there's little clue given as to how to remedy the situation. I don't know for sure, but I think that behind the scenes the SSRS designer is trying to give the @customerIds parameter an nvarchar-typed SqlParameter, which is causing the issue. When I first saw this error I figured it might just be a limitation of the dataset designer and that I'd be able to work around it by manually defining the parameters. I know that there are some special steps that need to be taken when invoking a stored procedure with a table-valued parameter from ADO.NET, so I figured I might be able to use some custom code embedded in the report to create a SqlParameter instance with the needed properties and value, but the "Operand type clash" error message persisted.

    The Text Query Approach

    Just because we're using a stored procedure to create the dataset for this report doesn't mean we can't use the 'Text' Query Type option and construct an EXEC statement that will invoke the stored procedure.
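
    Since everything that follows hinges on that user-defined type, here is a hedged reconstruction of roughly what the type and the stored procedure signature from the last post would look like; the column name, table names, and result columns are inferred from how they are used below, not copied from the original listing:

    -- Hedged reconstruction of the user-defined table type from the last post
    CREATE TYPE IntegerListTableType AS TABLE (Value INT NOT NULL)
    GO

    -- Assumed signature of the report stored procedure (body is illustrative only)
    CREATE PROCEDURE rpt_CustomerTransactionSummary
        @startDate   DATETIME,
        @endDate     DATETIME,
        @customerIds IntegerListTableType READONLY
    AS
    BEGIN
        SET NOCOUNT ON

        -- Summarize transactions for just the customers passed in via the TVP
        SELECT c.CustomerId,
               c.CustomerName,
               COUNT(t.TransactionId) AS TransactionCount
        FROM Customers c
            INNER JOIN @customerIds ids ON ids.Value = c.CustomerId
            LEFT JOIN Transactions t ON t.CustomerId = c.CustomerId
                AND t.TransactionDate BETWEEN @startDate AND @endDate
        GROUP BY c.CustomerId, c.CustomerName
    END
    GO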
In order for this to work properly, the EXEC statement will also need to declare and populate an IntegerListTableType variable to pass into the stored procedure. Before I go any further I want to make one point clear: this is a really ugly hack and it makes me cringe to do it. Simply put, I strongly feel that it should not be this difficult to use a table-valued parameter with SSRS. With that said, let's take a look at what we'll have to do to make this work.

    Manually Define Parameters

    First, we'll need to manually define the parameters for the report by right-clicking on the 'Parameters' folder in the 'Report Data' window. We'll define '@startDate' and '@endDate' as simple date parameters. We'll also create a parameter called '@customerIds' that will be a multi-valued Integer parameter. In the 'Available Values' tab we'll point this parameter at a simple dataset that just returns the CustomerId and CustomerName of each row in the Customers table of the database, or manually define a handful of Customer Id values to make available when the report runs.

    Once we have these parameters properly defined, we can take another crack at creating the dataset that will invoke the 'rpt_CustomerTransactionSummary' stored procedure. This time we'll choose the 'Text' query type option and put the following into the 'Query' text area:

    1: exec('declare @customerIdList IntegerListTableType ' + @customerIdInserts +
    2:      ' EXEC rpt_CustomerTransactionSummary
    3:           @startDate=''' + @startDate + ''',
    4:           @endDate=''' + @endDate + ''',
    5:           @customerIds=@customerIdList')

    By using the 'Text' query type we can enter any arbitrary SQL that we want and then use parameters and string concatenation to inject pieces of that query at run time. It can be a bit tricky to parse at first glance, but from the SSRS designer's point of view this query defines three parameters:

    - @customerIdInserts: a Text parameter that we use to define INSERT statements that will populate the @customerIdList variable declared in the SQL. This parameter won't actually ever get passed into the stored procedure; I'll go into how this works in a bit.
    - @startDate: a simple date parameter that gets passed through directly into the @startDate parameter of the stored procedure on line 3.
    - @endDate: another simple date parameter that gets passed through into the @endDate parameter of the stored procedure on line 4.

    At this point the dataset designer will be able to correctly parse the query, and it should even be able to detect the fields that the stored procedure will return without needing any values when prompted. Once the dataset has been defined, we'll have a @customerIdInserts parameter listed in the 'Parameters' tab of the dataset designer. We need to define an expression for this parameter that takes the values selected by the user for the '@customerIds' parameter we defined earlier and converts them into INSERT statements that populate the @customerIdList variable defined in our Text query. In order to do this we'll need to add some custom code to our report using the 'Report Properties' dialog. Any custom code defined in the Report Properties dialog gets embedded into the .rdl of the report itself and (unfortunately) must be written in VB .NET.
Note that you can also add references to custom .NET assemblies (which could be written in any language), but that's outside the scope of this post, so we'll stick with the "quick and dirty" VB .NET approach for now. Here's the VB .NET code (note that any embedded code you add here must be defined in a static/shared function, though you can define as many functions as you want):

    Public Shared Function BuildIntegerListInserts(ByVal variableName As String, ByVal paramValues As Object()) As String
        Dim insertStatements As New System.Text.StringBuilder()
        For Each paramValue As Object In paramValues
            insertStatements.AppendLine(String.Format("INSERT {0} VALUES ({1})", variableName, paramValue))
        Next
        Return insertStatements.ToString()
    End Function

    This method takes a variable name and an array of objects. We use an array of objects here because that is how SSRS passes us the values selected by the user at run time. The method uses a StringBuilder to construct INSERT statements that insert each value from the object array into the provided variable name. Once this method has been defined in the report's custom code, we can go back into the dataset designer's Parameters tab and update the expression for the '@customerIdInserts' parameter by clicking on the button with the "function" symbol to the right of the parameter value. We'll set the expression to:

    =Code.BuildIntegerListInserts("@customerIdList ", Parameters!customerIds.Value)

    To invoke our custom code we simply call "Code.<method name>" and pass in any needed parameters. The first parameter needs to match the name of the IntegerListTableType variable used in the EXEC statement of our query. The second parameter comes from the Value property of the '@customerIds' parameter (this evaluates to an object array at run time). Finally, we'll need to edit the properties of the '@customerIdInserts' parameter on the report to mark it as a nullable internal parameter so that users aren't prompted to provide a value for it when running the report.

    Limitations and Final Thoughts

    When I first started looking into the text query approach described above, I wondered if there might be an upper limit to the size of the string that can be used to run a report. Obviously, the size of the actual query could increase pretty dramatically if you have a parameter with a lot of potential values or need to support several different table-valued parameters in the same query. I tested the example Customer Transaction Summary report with 1,000 selected customers without any issue, but your mileage may vary depending on how much data you need to pass into your query. If you think that the text query hack is a lot of work just to use a table-valued parameter, I agree! Using a table-valued parameter from within SSRS should be a lot easier than this, but so far I haven't found a better way. It might be possible to create some custom .NET code that could build the EXEC statement for a given set of parameters automatically, but exploring that will have to wait for another post. For now, unless there's a really compelling reason or requirement to use table-valued parameters from SSRS reports, I would probably stick with the tried-and-true "join multi-valued parameter to CSV and split in the query" approach for using multi-valued parameters in a stored procedure.

    Read the article

  • Pivotal Announces JSR-352 Compliance for Spring Batch

    - by reza_rahman
    Pivotal, the company currently funding development of the popular Spring Framework, recently announced JSR 352 (a.k.a. Batch Applications for the Java Platform) compliance for the Spring Batch project. More specifically, Spring Batch targets JSR 352 Java SE runtime compatibility rather than Java EE runtime compatibility. If you are surprised that APIs included in Java EE can pass TCKs targeted at Java SE, you should not be; many other Java EE APIs, such as JMS and JPA, also target compatibility in Java SE environments. You can read about Spring Batch's support for JSR 352 here, as well as the Spring configuration to get JSR 352 working in Spring (typically a very low-level implementation concern intended to be completely transparent to most JSR 352 users). JSR 352 is one of the few very encouraging cases of major active contribution to the Java EE standard from the Spring development team (the other major effort being Rod Johnson's co-leadership of JSR 330 along with Bob Lee). While IBM's Christopher Vignola led the spec and contributed IBM's years of highly mission-critical batch processing experience from products like WebSphere Compute Grid and z/OS batch, the Spring team provided major influences on the API, in particular for the chunk processing, listeners, splits and operational interfaces. The GlassFish team's own Mahesh Kannan also contributed, in particular by implementing much of the Java EE integration work for the reference implementation. This was an excellent example of multilateral engineering collaboration through the standards process. For many complex reasons it is not too hard to find evidence of less-than-amicable interaction between the Spring ecosystem and the Java EE standard over the years, if one cares to dig deep enough. In reality most developers see Spring and Java EE as two sides of the same server-side Java coin. At the core, the Spring and Java EE ecosystems have always shared deep undercurrents of common user bases, bi-directional flows of ideas and perhaps genuine, if not begrudging, mutual respect. We can all hope for continued strength for both ecosystems and graceful high notes of collaboration via efforts like JSR 352.
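
    What SE-runtime compliance buys you in practice is that the standard javax.batch API can be driven outside an application server. Here is a minimal sketch, assuming a job definition named "sampleJob" exists at META-INF/batch-jobs/sampleJob.xml and a JSR 352 implementation such as Spring Batch is on the classpath; the job name and the polling loop are illustrative, not taken from the announcement:

    import java.util.Properties;
    import javax.batch.operations.JobOperator;
    import javax.batch.runtime.BatchRuntime;
    import javax.batch.runtime.JobExecution;

    public class BatchLauncher {
        public static void main(String[] args) throws Exception {
            // Obtain the provider's JobOperator (Spring Batch's, when it is on the classpath)
            JobOperator operator = BatchRuntime.getJobOperator();

            // "sampleJob" must match a JSL file at META-INF/batch-jobs/sampleJob.xml
            long executionId = operator.start("sampleJob", new Properties());

            // Poll for completion (a simple sketch; real code would be more careful)
            JobExecution execution = operator.getJobExecution(executionId);
            while (execution.getEndTime() == null) {
                Thread.sleep(100L);
                execution = operator.getJobExecution(executionId);
            }
            System.out.println("Batch status: " + execution.getBatchStatus());
        }
    }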

    Read the article

  • Designing a social network with CQRS, graph databases and relational databases in mind

    - by Siraj Mansour
    I have done quite an amount of research on the topic so far, but I couldn't come to a conclusion to make up my mind. I am designing a social network, and during my research I stumbled upon graph databases; I found neo4j pretty interesting for user relations and traversing through nodes. I also thought of using a relational database such as MS SQL or MySQL to store entity data only, depending on neo4j for the connections between entities. Of course this means more work in my application to store and pull data in and out of two different sources. My first question: is this (graph + relational) approach a good one for designing my social network, keeping in mind that users on social networks don't have to be in sync with real data by the split second? What are the positives and negatives of this approach? My second question: I've been doing some reading on CQRS, and as I understand it, it is mostly useful for collaborative environments and environments where users see a lot of "stale" data. Social networks have shared comments, events, etc., and many users query or update the same data. Could CQRS be a helpful approach? Would it give any performance/scalability benefits, or just non-useful complexity? Is it fairly applicable with my possible choice of the (graph + relational) databases approach mentioned above? My purpose is to know whether the approaches I have mentioned seem good enough for the business context.
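
    To make the graph side of the question concrete, the sort of traversal neo4j handles well reads roughly like the Cypher sketch below; the User label, FRIEND relationship type and externalId property are hypothetical names standing in for the keys that would link back to the relational store:

    // Friend-of-friend suggestions for one user (a sketch, not production code);
    // externalId values would map back to entity rows in MS SQL/MySQL
    MATCH (me:User {externalId: 42})-[:FRIEND]-(friend)-[:FRIEND]-(foaf)
    WHERE foaf <> me AND NOT (me)-[:FRIEND]-(foaf)
    RETURN DISTINCT foaf.externalId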

    Read the article

  • Advanced Oracle SOA Suite Oracle Open World 2012 SOA Presentations

    - by JuergenKress
    The list below includes only the SOA presentations delivered or moderated by Oracle SOA Product Management. For a complete list of Oracle Open World 2012 presentations, please go here.

    - Oracle SOA Suite, the Most Capable Tool for Every Possible Integration Challenge
    - Using the Right Tools, Techniques, and Technologies for Integration Projects
    - Administration and Management Essentials for Oracle SOA Suite 11g
    - Extreme Performance and Scale Delivered by SOA on Oracle Exalogic
    - Successful Application Integration and SOA Projects: Customer Panel
    - How to Integrate Cloud Applications with Oracle SOA Suite
    - Transforming the Utilities Industry with Oracle Fusion Middleware
    - Cloud and On-Premises Applications Integration, Using Oracle Integration Adapters
    - Delivering High Value B2B Gateways with Oracle SOA Suite 11g
    - Implementing Successful Healthcare Applications with Oracle SOA Suite
    - Migrating to Oracle SOA Suite: A Sun Java CAPS Customer Experience
    - If Mobile Enablement Is on Your Mind, Oracle SOA Suite and Oracle Service Bus Can Help
    - Building Shared Services Infrastructure with Oracle Service Bus: Customer Panel

    SOA & BPM Partner Community

    For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Blog | Twitter | LinkedIn | Mix | Forum

    Technorati Tags: OOW, OOW presentations, OOW soa ppt, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

< Previous Page | 208 209 210 211 212 213 214 215 216 217 218 219  | Next Page >