Search Results

Search found 23827 results on 954 pages for 'software architecture'.


  • 2010 Collaboration Summit Impressions

    - by Elena Zannoni
    It's a bit late, but there you have it anyway. April 14 to 16 I attended the Linux Foundation Collaboration Summit in SFO. I was running two tracks, one on tracing and one on tools. You can see the tracks and the slides here: http://events.linuxfoundation.org/events/collaboration-summit/slides I was pretty busy both days, Thursday with a whole-day tracing track, Friday with a half-day toolchain track. The sessions were well attended, the rooms were full, with people spilling into the hallways. Some new things were presented, like Kernelshark, by Steve Rostedt, a GUI (yes, believe it or not, a GUI) written in GTK. It is very nice, showing a timeline for traced kernel events, and you can zoom in and filter at will. It works on the latest kernels, and it requires some new things/fixes in GTK; I don't recall exactly which version of GTK, though. Dominique Toupin from Ericsson presented something about user requirements for tracing, though mostly about who's who in the embedded world, and Eclipse. Masami and Mathieu presented an update on their work. See their slides.
    The interesting thing to me was of course the new version of uprobes without the underlying utrace, presented by Jim Keniston. At the end of the session we had a discussion about the future of utrace. Roland wasn't there, but Tom Tromey (also from Red Hat) collected the feedback. Basically we are at a standstill now that utrace has been rejected yet again. There wasn't much advice that anybody could give, except, jokingly, we decided that the only way in is to make it a part of perf events. There needs to be another refactoring, but most of all, this "killer app" that would be enabled because of utrace hasn't materialized yet. We think that having a good debugging story on Linux is enough of a killer app, for instance allowing multiple tracers, and not relying on SIGCHLD etc. I think this wasn't completely clear to the kernel community. Trying to achieve debugging via a gdb stub inside the kernel, interfacing to utrace and controlled via the gdb remote protocol, also lost its appeal (thankfully, since the gdb remote protocol is archaic). Somebody would have to be creative in how to submit utrace. It doesn't have to be called utrace (it was really a random choice, for lack of a letter that was not already used in front of the word "trace"). So basically, I think the ideas behind utrace are sound, and the necessity of a new interface is acknowledged. But I believe the integration/submission process with the kernel folks has to restart from scratch, clean slate. We'll see. There are many conferences and meetings coming up in the near future where things can be discussed further.
    On the second day, Friday, we had the tools talks. It was interesting to observe the more "kernel"-oriented people's behavior towards the gcc etc. community. The first talk was by Mark Mitchell, about gcc and its new plugin architecture. After that, Paolo talked about the new C++1x standard, which will be finalized in 2011. Many features are already implemented in the libstdc++ library and gcc and are usable today. We had a few minutes (really, the half-day track was quite short) where Bradley Kuhn from the Software Freedom Law Center explained the GPLv3 exception for gcc (due to the new gcc plugin architecture and the availability of the intermediate results from the compilation, which is a new thing). I will not try to explain, but basically you cannot take the result of the preprocessing and then use that in your own proprietary compiler.
After, we had a talk by Ian Taylor about the new Gold linker. One good thing in that area is that they are trying to make gold the new default linker (for instance Fedora will use gold as the distro linker). However gold is very different from binutils' old linker. It doesn't use a linker script, for instance. The kernel has been linked with gold many times as an exercise (the ground work was done by Kris Van Hees), but this needs to be constantly tested/monitored because the kernel linker script is very complex, and uses esoteric features (Wenji is now monitoring that each kernel RC can be built with gold). It was positive that people are now aware of gold and the need for it to be ported to more architectures. It seems that the porting is very easy, with little arch dependent code. Finally Tom Tromey presented about gdb and the archer project. Archer is a development branch of gdb mostly done by RedHat, where they are focusing on better c++ printing, c++ expression parsing, and plugins. The archer work is merged regularly in the gdb mainline. In general it was a good conference. I did miss most of the first day, because that's when I flew in. But I caught a couple of talks. Nothing earth shattering, except for Google giving each person registered a free Android phone. Yey.

    Read the article

  • Bash: command not found

    - by Alexandre Teles
    I have a script that needs to know the processor architecture. I'm doing it this way: if [["$(uname -m)" = "x86_64"]]; then wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm else echo "Nossa! Você só pode usar 3,5GB de memória RAM. Que triste :( Vou baixar a versão 32bits pra você tá?" wget https://dl.google.com/linux/direct/google-chrome-stable_current_i386.rpm fi But when I execute the code, I receive: instala_chrome.sh: line 35: [[x86_64: command not found Can anyone help me solve this? Thanks!
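
    The error message points at the likely cause: [[ is a shell keyword, so it needs whitespace around it and around its operands. Writing [["$(uname -m)" glues everything into one word, which bash then tries to run as a command named [[x86_64. A minimal corrected sketch (the echo text is just an English placeholder here):

        #!/bin/bash
        # Note the spaces inside [[ ]]; without them bash cannot parse the test.
        arch="$(uname -m)"
        if [[ "$arch" = "x86_64" ]]; then
            wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm
        else
            echo "Downloading the 32-bit package instead."
            wget https://dl.google.com/linux/direct/google-chrome-stable_current_i386.rpm
        fi

    The same rule applies to the single-bracket test command: [ "$arch" = "x86_64" ] also needs the surrounding spaces.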

    Read the article

  • Can't install psycopg2 (Python2.6 Ubuntu9.10 PostgreSQL8.4.2)

    - by yuv
    $ ~/psycopg2-2.0.13$ python setup.py install running install running build running build_py creating build creating build/lib.linux-x86_64-2.6 creating build/lib.linux-x86_64-2.6/psycopg2 copying lib/__init__.py -> build/lib.linux-x86_64-2.6/psycopg2 copying lib/pool.py -> build/lib.linux-x86_64-2.6/psycopg2 copying lib/extensions.py -> build/lib.linux-x86_64-2.6/psycopg2 copying lib/errorcodes.py -> build/lib.linux-x86_64-2.6/psycopg2 copying lib/extras.py -> build/lib.linux-x86_64-2.6/psycopg2 copying lib/tz.py -> build/lib.linux-x86_64-2.6/psycopg2 copying lib/psycopg1.py -> build/lib.linux-x86_64-2.6/psycopg2 running build_ext error: No such file or directory or $ sudo easy_install psycopg2==2.0.13 Searching for psycopg2==2.0.13 Reading http://pypi.python.org/simple/psycopg2/ Reading http://initd.org/projects/psycopg2 Reading http://initd.org/pub/software/psycopg/ Best match: psycopg2 2.0.13 Downloading http://initd.org/pub/software/psycopg/psycopg2-2.0.13.tar.gz Processing psycopg2-2.0.13.tar.gz Running psycopg2-2.0.13/setup.py -q bdist_egg --dist-dir /tmp/easy_install-QuViWt/psycopg2-2.0.13/egg-dist-tmp-P2W9Dh error: Setup script exited with error: No such file or directory thanks in advance
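
    The bare "No such file or directory" from build_ext usually means setup.py shelled out to a program that isn't installed, most often pg_config from the PostgreSQL development package or the C compiler itself. A hedged sketch of the usual remedy, assuming Ubuntu 9.10's stock package names:

        # Install the build toolchain plus the Python and PostgreSQL headers.
        sudo apt-get install build-essential python-dev libpq-dev
        # Confirm pg_config is now on the PATH, then retry the build.
        pg_config --version
        python setup.py build && sudo python setup.py install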

    Read the article

  • Poll: CSS3 color transparency

    - by lepe
    Many of you already know that you can express colors in your stylesheets like: color: #FFF; color: #FFFFFF; color: rgb(255,255,255); color: hsl(100%,100%,100%); and that you can use rgba() and hsla() to express color transparency. Do you think it would be a good idea to be able to express color transparency in #RRGGBBAA or #RGBA notation? Why YES, and why NOT? Also, if you want, give a few details of how you perform your development (whether you use web-builder software, graphics software for designs, vim, etc.).

    Read the article

  • Brainstorming - MIDI over LAN

    - by Hunter Bridges
    I'm planning out a summer coding project for myself. I work with a lot of MIDI and have been researching it a lot. I know it's an old technology, but it works with a lot of music hardware/software so in my eyes, it's still viable. Anyway, I haven't ever worked with writing drivers or anything, so I don't know where I would start with this. So, provided I have MIDI data already being sent over LAN to a server (I know how to do that part), what steps would it take for the server to channel those received messages to an emulated MIDI device, that could then be accessed in music software and so on? Also, what would it take to also send MIDI data back through the LAN to be received by the device that originally received the message? I'm not really looking for too specific a solution. Really I am just trying to come up with a game plan right now and I need to figure out where/what to research.

    Read the article

  • Leveraging Microsoft Patterns and Practices

    - by Tim Murphy
    I want to bring the Patterns and Practices group to the attention of those who have not already been exposed.  I have been a fan of the P&P team since they came out with the original Application Blocks which eventually turned into the Enterprise Library.  Their main purpose is to assemble guidance and tools that make it easier for all of us to build amazing solutions.  I would simply suggest you spend some time exploring the information and code libraries that they have produced.  Free resources are always a great find and I have used a number of the P&P solutions over the years with success.  If nothing else you may find some new ideas.  Enjoy. http://msdn.microsoft.com/en-us/practices/

    Read the article

  • Are PackageMaker installations with preinstall scripts broken on Snow Leopard?

    - by Stu Thompson
    Everything worked on 10.5, but now my PackageMaker installation project is broken. I've been fighting a problem for a few days now, and either Snow Leopard (OS X 10.6.1) has broken PackageMaker installations or I am lacking a very, very basic tidbit of knowledge. To narrow down the problem, I've gotten to this point:
    1. Create a new PackageMaker installation
    2. Have it install a jpeg image into my home directory
    3. Define a preinstall script that does nothing: #/bin/sh exit 0
    4. Run the above... and watch it fail with the below error message like clockwork
    Sep 14 15:09:45 manoa installd[5620]: PackageKit: ----- Begin install -----
    Sep 14 15:09:45 manoa installd[5620]: PackageKit: request=PKInstallRequest <1 packages, destination=/>
    Sep 14 15:09:45 manoa installd[5620]: PackageKit: packages=(\n "PKLeopardPackage <file://localhost/Users/stu/Desktop/asdf.pkg>"\n)
    Sep 14 15:09:46 manoa installd[5620]: PackageKit: Extracting /Users/stu/Desktop/asdf.pkg (destination=/var/folders/Hb/HbXJFyEpFaupt5QyLN-pTk+++TI/-Tmp-/PKInstallSandbox-tmp/Root/~, uid=501)
    Sep 14 15:09:46 manoa installd[5620]: PackageKit: Executing script "./preinstall" in /private/tmp/PKInstallSandbox.cmlS2H/Scripts/test.test.5year_header.pkg.PFrHNB
    Sep 14 15:09:46 manoa installd[5620]: PackageKit: *** launch path not accessible
    Sep 14 15:09:46 manoa installd[5620]: PackageKit: Install Failed: PKG: pre-install scripts for "test.test.5year_header.pkg"\nError Domain=PKInstallErrorDomain Code=112 UserInfo=0x100149430 "An error occurred while running scripts from the package “asdf”." {\n NSFilePath = "./preinstall";\n NSLocalizedDescription = "An error occurred while running scripts from the package \U201casdf\U201d.";\n NSURL = "file://localhost/Users/stu/Desktop/asdf.pkg";\n PKInstallPackageIdentifier = "test.test.5year_header.pkg";\n}
    Sep 14 15:09:46 manoa Installer[5614]: install:didFailWithError:Error Domain=PKInstallErrorDomain Code=112 UserInfo=0x1195917c0 "An error occurred while running scripts from the package “asdf”."
    Sep 14 15:09:46 manoa Installer[5614]: Install failed: The Installer encountered an error that caused the installation to fail. Contact the software manufacturer for assistance.
    Sep 14 15:09:47 manoa Installer[5614]: IFDInstallController 144040 state = 7
    Sep 14 15:09:47 manoa Installer[5614]: Displaying 'Install Failed' UI.
    Sep 14 15:09:47 manoa Installer[5614]: 'Install Failed' UI displayed message:'The Installer encountered an error that caused the installation to fail. Contact the software manufacturer for assistance.'.
    There is no file in /private/tmp/PKInstallSandbox.cmlS2H/Scripts/test.test.5year_header.pkg.PFrHNB/, which makes me think the problem is with PackageMaker, and not me. But I'm new to the world of OS X software installation, so doubts remain. So, the question: Is PackageMaker with a preinstall script broken on OS X 10.6? Or is there some requirement regarding preinstall scripts that I do not understand?
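
    One thing worth double-checking, going only by the "launch path not accessible" line in that log: the script shown above starts with #/bin/sh (no exclamation mark), and the Installer also needs the preinstall file itself to carry execute permission. A minimal preinstall that should satisfy both requirements, assuming nothing else about the package:

        #!/bin/sh
        # A do-nothing preinstall; the first line must be a real shebang (#!),
        # and the file must be executable when PackageMaker bundles it.
        exit 0

    Before rebuilding the package, mark the script executable with chmod +x preinstall.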

    Read the article

  • Object Oriented database development jobs

    - by GigaPr
    Hi, I am a software engineering student currently looking for a job as a developer. I have been offered a position at a company which implements software using object-oriented databases. These are something completely new to me, as at university we never worked on them, just some theory. My questions are: Do you think this is a good way to start my career as a developer? What is the job market like for this type of development? Are these skills in demand? Which markets does this technology touch? Thanks

    Read the article

  • Registry ReadString method is not working in Windows 7 in Delphi 7

    - by Tofig Hasanov
    The following code sample used to return the Windows ID, but now it doesn't work and returns an empty string; I don't know why. function GetWindowsID: string; var Registry: TRegistry; str:string; begin Registry := TRegistry.Create(KEY_WRITE); try Registry.Lazywrite := false; Registry.RootKey := HKEY_LOCAL_MACHINE; // Registry.RootKey := HKEY_CURRENT_USER; if CheckForWinNT = true then Begin if not Registry.OpenKeyReadOnly('\Software\Microsoft\Windows NT\CurrentVersion') then showmessagE('cant open'); end else Registry.OpenKeyReadOnly('\Software\Microsoft\Windows\CurrentVersion'); str := Registry.ReadString('ProductId'); result:=str; Registry.CloseKey; finally Registry.Free; end; // try..finally end; Can anybody help?

    Read the article

  • Visual Studio reference x64 GAC

    - by icelava
    How can one get Visual Studio 2005/2008 to reference assemblies in the 64-bit GAC instead of the 32-bit GAC? We are setting the target platform to x64 and the compiler is throwing these errors:
    Error 2 Warning as Error: Assembly generation -- Referenced assembly 'System.Data.dll' targets a different processor Common
    Error 3 Warning as Error: Assembly generation -- Referenced assembly 'mscorlib.dll' targets a different processor Common
    Error 4 Assembly signing failed; output may not be signed -- The system cannot find the file specified. Common
    Update 29 Dec 08: Been trying out Aaron Stebner's suggestions to place 64-bit assemblies in an isolated location (e.g. C:\Windows\Microsoft.NET\Framework64\v2.0.50727\GAC_64) and creating additional entries in the registry like HKLM\SOFTWARE\Microsoft\.NETFramework\AssemblyFolders\GAC_64 or HKLM\SOFTWARE\Microsoft\.NETFramework\v2.0.50727\AssemblyFoldersEx\GAC_64, but Visual Studio 2005 is still not picking it up....

    Read the article

  • Big Data – Role of Cloud Computing in Big Data – Day 11 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of NewSQL. In this article we will understand the role of the Cloud in the Big Data story.
    What is Cloud?
    Cloud is the biggest buzzword around from the last few years. Everyone knows about the Cloud and it is extremely well defined online. In this article we will discuss the cloud in the context of Big Data. Cloud computing is a method of providing shared computing resources to applications which require dynamic resources. These resources include applications, computing, storage, networking, development and various deployment platforms. The fundamental idea of cloud computing is that it shares pretty much all the resources and delivers them to end users as a service. Examples of Cloud Computing and Big Data are Google and Amazon.com. Both have fantastic Big Data offerings with the help of the cloud. We will discuss this later in this blog post.
    There are two different Cloud Deployment Models: 1) the Public Cloud and 2) the Private Cloud.
    Public Cloud
    Public Cloud is cloud infrastructure built by commercial providers (Amazon, Rackspace etc.) that creates a highly scalable data center, hides the complex infrastructure from the consumer and provides various services.
    Private Cloud
    Private Cloud is cloud infrastructure built by a single organization, which manages a highly scalable data center internally.
    Here is the quick comparison between Public Cloud and Private Cloud from Wikipedia:
                        Public Cloud                              Private Cloud
    Initial cost        Typically zero                            Typically high
    Running cost        Unpredictable                             Unpredictable
    Customization       Impossible                                Possible
    Privacy             No (host has access to the data)          Yes
    Single sign-on      Impossible                                Possible
    Scaling up          Easy while within defined limits          Laborious but no limits
    Hybrid Cloud
    Hybrid Cloud is cloud infrastructure built from the composition of two or more clouds, such as public and private clouds. A hybrid cloud gives the best of both worlds as it combines multiple cloud deployment models together.
    Cloud and Big Data – Common Characteristics
    There are many characteristics of Cloud Architecture and Cloud Computing which are also essentially important for Big Data. They highly overlap, and in many places it just makes sense to use the power of both architectures and build a highly scalable framework. Here is the list of the characteristics of cloud computing important in Big Data:
    Scalability
    Elasticity
    Ad-hoc Resource Pooling
    Low Cost to Setup Infrastructure
    Pay on Use or Pay as you Go
    Highly Available
    Leading Big Data Cloud Providers
    There are many players in Big Data Cloud but we will list a few of the known players here.
    Amazon
    Amazon is arguably the most popular Infrastructure as a Service (IaaS) provider. The history of how Amazon started in this business is very interesting. They started out with a massive infrastructure to support their own business. Gradually they figured out that their own resources were underutilized most of the time. They decided to get the maximum out of the resources they had, and hence they launched the Amazon Elastic Compute Cloud (Amazon EC2) service in 2006. Their products have evolved a lot recently and now it is one of their primary businesses besides retail selling. Amazon also offers Big Data services under Amazon Web Services.
    Here is the list of the included services:
    Amazon Elastic MapReduce – processes very high volumes of data
    Amazon DynamoDB – a fully managed NoSQL (Not Only SQL) database service
    Amazon Simple Storage Service (S3) – a web-scale service designed to store and accommodate any amount of data
    Amazon High Performance Computing – provides low-tenancy, tuned high performance computing clusters
    Amazon Redshift – a petabyte-scale data warehousing service
    Google
    Though Google is known for its Search Engine, we all know that it is much more than that.
    Google Compute Engine – offers secure, flexible computing from energy efficient data centers
    Google BigQuery – allows SQL-like queries to run against large datasets
    Google Prediction API – a cloud based machine learning tool
    Other Players
    Besides Amazon and Google we also have other players in the Big Data market. Microsoft is also attempting Big Data with the cloud with Microsoft Azure. Additionally, Rackspace and NASA together have initiated OpenStack. The goal of OpenStack is to provide a massively scalable, multitenant cloud that can run on any hardware.
    Things to Watch
    Cloud based solutions provide great integration with the Big Data story, and they are also very economical to implement. However, there are a few things one should be careful about when deploying Big Data on cloud solutions. Here is a list of a few things to watch:
    Data Integrity
    Initial Cost
    Recurring Cost
    Performance
    Data Access Security
    Location
    Compliance
    Every company has a different approach to Big Data and has different rules and regulations. Based on various factors, one can implement their own custom Big Data solution on a cloud.
    Tomorrow
    In tomorrow’s blog post we will discuss various Operational Databases supporting Big Data.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Application Lifecycle Management with Visual Studio 2010 – Wrox Book

    - by Guy Harwood
    After running with a somewhat disconnected set of tools (VS 2008, OnTime, SharePoint 2007) for managing our projects, we decided to make the move to Team Foundation Server 2010.  With limited coverage of the product available online I went in search of a book and found this… View this book on the Wrox website I must point out that I have only read 10 of the 26 chapters so far, mainly the ones that cover source code control, work item tracking and database projects.  This enables our dev team to get familiar with it before switching project management over at a future date. Needless to say I am very impressed with the detail it provides, answering pretty much every question I had about TFS so far.  I'm looking forward to digging into the sections on testing, code analysis and architecture. Highly recommended.

    Read the article

  • Installing MonoDevelop on Suse Enterprise 10.0

    - by Robert Harvey
    I tried to install MonoDevelop on Suse 11.0 Enterprise, using the 1-click install on the MonoDevelop download page, but quickly wound up in a tangle of missing dependencies. I then tried using the Suse software repositories to get MonoDevelop, and waded through the dependency chain for a while trying to find the necessary packages, but some of the packages in the Suse repositories actually appear to be missing the needed RPM files. Are these repositories no longer being actively maintained? I am aware that there is a CD on the Mono site (called the Mono LiveCD) that appears to contain a complete installation of the development environment, as well as a DVD for OpenSuse 11.2 (on the OpenSuse site) that might actually have all of the Mono software already installed. But the target environment for the utility I am writing is Suse 11.0 Enterprise Server. Does that matter? What is the shortest distance between two points here?
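
    If the 1-click installer keeps failing, one fallback worth sketching out is to add a Mono repository by hand and let zypper resolve the dependency chain itself; the repository URL below is only a placeholder and the package names may differ between SLES releases:

        # Add a Mono/MonoDevelop repository (substitute the URL that matches
        # your SLES release), refresh, and let zypper pull the dependencies.
        sudo zypper ar http://example.com/mono-sles-repo mono-repo
        sudo zypper refresh
        sudo zypper install mono-devel monodevelop
        # zypper will report exactly which dependencies it still cannot satisfy.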

    Read the article

  • Accelerate your SOA with Data Integration - Live Webinar Tuesday!

    - by dain.hansen
    Need to put wind in your SOA sails? Organizations are turning more and more to Real-time data integration to complement their Service Oriented Architecture. The benefit? Lowering costs through consolidating legacy systems, reducing risk of bad data polluting their applications, and shortening the time to deliver new service offerings. Join us on Tuesday April 13th, 11AM PST for our live webinar on the value of combining SOA and Data Integration together. In this webcast you'll learn how to innovate across your applications swiftly and at a lower cost using Oracle Data Integration technologies: Oracle Data Integrator Enterprise Edition, Oracle GoldenGate, and Oracle Data Quality. You'll also hear: Best practices for building re-usable data services that are high performing and scalable across the enterprise How real-time data integration can maximize SOA returns while providing continuous availability for your mission critical applications Architectural approaches to speed service implementation and delivery times, with pre-integrations to CRM, ERP, BI, and other packaged applications Register now for this live webinar!

    Read the article

  • The OTN Garage Blog Week in Review

    - by Rick Ramsey
    In case you missed the last few blogs on the OTN Garage (because somebody neglected to cross-post them here), here they are: What Day Is It and Why Am I Wearing a Little Furry Skirt? - Oracle VM Templates, Oracle Linux, Wim Coekaerts, and jet lag. A Real Cutting Edge - Oracle Sun blade systems architecture, Blade Clusters, and best practices. Which Version of Solaris Were You Running When ... - Oracle Solaris Legacy Containers and the Voyager 1 Content Cluster: Understanding the Local Boot Option in the Automatic Installer of Oracle Solaris 11 Express - Resources to help you understand this cool option Rick - System Admin and Developer Community of the Oracle Technology Network

    Read the article

  • Need information regarding Ubuntu for Android

    - by a.premkumar
    As far as I understand, Ubuntu for Android is not the same as the actual Ubuntu on desktops, so I am confusing the two with each other. Please enlighten me on this. Could Ubuntu on Android run 32-bit software the same way the PC versions do? Or is it just an ARM version that would not be able to run the existing PC-version software? If it is ARM now, would it be a 32- or 64-bit version in the future? (Of course, only if the mobile device architecture supports it.) Will there be a separate version for tablets, so that there would be no need for separate docking and it allows seamless switching from Android to Ubuntu internally on the device? Regards, Premkumar. A

    Read the article

  • Recommendations for keeping a build server updated

    - by gareth_bowles
    As a guy who frequently switches between QA, build and operations, I keep running into the issue of what to do about operating system updates on the build server. The dichotomy is the same on Windows, Linux, MacOS or any other o/s that can update itself via the internet: The QA team wants to keep the build server exactly as it is from the beginning of the product release cycle to the end, since installing updates could destabilize the server and means that successive builds aren't made against the same baseline. The ops team wants the software to be deployed on a system with all the latest security patches; this can mean that the software isn't deployed on exactly the same version of the o/s that it was built on. I usually mitigate this by taking release candidate builds and installing them on a test server that has a completely up-to-date o/s, repeating the automated tests that are run on the build server and doing some additional system level testing to make sure everything looks good before deployment. However, this seems inefficient to me; does anyone have a better way ?
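
    One low-tech way to split the difference between the QA and ops positions (a sketch only, assuming a Debian/Ubuntu-style build server; the file names are illustrative) is to record the build server's package baseline at the start of the release cycle and diff it against the fully patched test server before each release-candidate deployment, so everyone can see exactly which updates the deployment target has that the build image does not:

        #!/bin/bash
        # On the build server, capture the package baseline once per release cycle.
        dpkg-query -W > build-baseline.txt

        # On the fully patched test server, capture its state and compare.
        dpkg-query -W > test-current.txt
        diff build-baseline.txt test-current.txt > package-drift.txt || true
        # package-drift.txt now lists every package whose version differs,
        # for review alongside the release-candidate test results.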

    Read the article

  • How do I read 64-bit Registry values from VBScript running as an MSI post-installation task?

    - by Joergen Bech
    I need to read the location of the Temporary ASP.NET Files folder from VBScript as part of a post-installation task in an installer created using a Visual Studio 2008 deployment project. I thought I would do something like this: Set oShell = CreateObject("Wscript.Shell") strPath = oShell.RegRead("HKLM\SOFTWARE\Microsoft\ASP.NET\2.0.50727.0\Path") and then concatenate strPath with "\Temporary ASP.NET Files" and be done with it. On an x64 system, however, I am getting the value from the WOW6432Node (HKLM\SOFTWARE\Wow6432Node\Microsoft\ASP.NET\2.0.50727.0), which gives me the 32-bit framework path (C:\Windows\Microsoft.NET\Framework\v2.0.50727), but on an x64 system, I actually want the 64-bit path, i.e. C:\Windows\Microsoft.NET\Framework64\v2.0.50727. I understand that this happens because the .vbs file is run using the 32-bit script host due to the parent process (the installer) being 32-bit itself. How can I run the script using the 64-bit script host - or - how can I read the 64-bit values even if the script is run using the 32-bit script host?

    Read the article

  • What is your motivation?

    - by vava
    As software developers we all know that motivation matters. Without it we could just stare into the monitor all day long and do nothing. There are some tricks to get yourself motivated, like talking to people or doing the fun part of the project, but they do not always work. In the meantime I started to notice that I am most productive when I can see the person who is appreciating my work. The user, who is using the software and enjoying it. Because if there's none, what's the point of writing this code? So I was wondering, what makes you perform at your best? Is it the users, your fellow coders, or maybe the money you get? PS. I know there are quite a few questions about motivation but they are all about overcoming the current situation. What I want to hear is what makes you come to the office every day, what you enjoy the most in your job, what makes you want to write this code and do it as fast as you possibly could.

    Read the article

  • Is RAC One Node Certified for E-Business Suite?

    - by Steven Chan
    Oracle Real Application Clusters (RAC) is a cluster database with a shared cache architecture that supports the transparent deployment of a single database across a pool of servers.  RAC is certified with both Oracle E-Business Suite Release 11i and 12.  We publish best-practices documentation for specific combinations of EBS + RAC versions.  For example, if you were planning on implementing RAC for EBS 12, you would use this documentation:Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12 (Note 823587.1)Many of the largest E-Business Suite users in the world run RAC today, including Oracle; see this Oracle R12 case study for details.A number of customers have recently asked whether RAC One Node can be used with the E-Business Suite.  From the RAC website:Oracle RAC One Node is a new option available with Oracle Database 11g Release 2. Oracle RAC One Node is a single instance of an Oracle RAC-enabled database running on one node in a cluster.

    Read the article

  • WebCenter 11g Patch Set 3 (PS3) - Partner Training - 19/Jan/11

    - by Claudia Costa
    The Oracle WebCenter Suite is an integrated set of standards-based solutions for building business applications, corporate portals, communities of interest, collaborative groups and social networks, consolidating web application systems, and enabling Web 2.0 capabilities inside and outside an organization. The entire solution complies with the main market standards and is based on a fully service-oriented architecture (SOA). Join this session to learn about the new features of WebCenter Suite 11g PS3.
    Agenda
    09.30h Welcome and Introduction
    09.40h Oracle & Enterprise 2.0 - João Borrego
    10.00h Introduction to Oracle WebCenter Suite as a User Experience Platform (UXP) - Monte Kluemper
    10.40h Coffee Break
    11.00h Building Rich UIs with Oracle WebCenter Portal - Monte Kluemper
    12.00h Interactive Q&A Session - João Borrego and Monte Kluemper
    12.30h Lunch
    14.00h Deep-dive into WebCenter Technical Architecture - Monte Kluemper
    15.30h End of Session
    Target: Sales and commercial teams (morning); technical and presales teams (afternoon)
    Date and Location: 19 January, Lagoas Park Hotel
    Seats for this workshop are limited. Please wait for a confirmation email of your registration. Registration: Email
    For more information, please contact: Claudia Costa / 21 423 50 27

    Read the article

  • Scaling-out Your Services by Message Bus based WCF Transport Extension – Part 1 – Background

    - by Shaun
    Cloud computing gives us more flexibility on computing resources; we can provision and deploy an application or service with multiple instances over multiple machines. With the increase in service instances, how to balance the incoming messages and workload becomes a new challenge. Currently there are two approaches we can use to pass the incoming messages to the service instances; I would like to call them dispatcher mode and pulling mode.
    Dispatcher Mode
    The dispatcher mode introduces a role which takes the responsibility of finding the best service instance to process the request. The image below describes the shape of this mode. There are four clients communicating with the service through the underlying transportation. For example, if we are using HTTP the clients might be connecting to the same service URL. On the server side there's a dispatcher listening on this URL and trying to retrieve all messages. When a message comes in, the dispatcher will find a proper service instance to process it. There are three mechanisms to find the instance:
    Round-robin: The dispatcher will always send the message to the next instance. For example, if the dispatcher sent the message to instance 2, then the next message will be sent to instance 3, regardless of whether instance 3 is busy or not at that moment.
    Random: The dispatcher will find a service instance randomly, and same as the round-robin mode, regardless of whether the instance is busy or not.
    Sticky: The dispatcher will send all related messages to the same service instance. This approach is always used if the service methods are stateful or session-ful.
    But as you can see, none of these approaches are really load balanced. The clients will send messages at any time, and each message might take a different processing duration on the server side. This means in some cases some of the service instances are very busy while others are almost idle. For example, if we were using round-robin mode, it could happen that most of the simple task messages were passed to instance 1 while the complex ones were sent to instance 3, even though instance 1 should be idle.
    This brings some problems to our architecture. The first one is that the response to the clients might be longer than it should be. As shown in the figure above, messages 6 and 9 could be processed by instance 1 or instance 2, but in reality they were dispatched to the busy instance 3 because of the dispatcher and round-robin mode. Secondly, if many requests come from the clients in a very short period, service instances might be filled with tons of pending tasks and some instances might crash. Third, if we are using some cloud platform to host our service instances, for example Windows Azure, the computing resource is billed by service deployment period instead of actual CPU usage. This means if any service instance is idle it is wasting our money! Last one, the dispatcher would be the bottleneck of our system since all incoming messages must be routed by the dispatcher. If we are using HTTP or TCP as the transport, the dispatcher would be a network load balancer. If we want more capacity, we have to scale up, or buy a hardware load balancer which is very expensive, as well as scaling out the service instances.
    Pulling Mode
    Pulling mode doesn't need a dispatcher to route the messages. All service instances listen to the same transport and try to retrieve the next proper message to process when they are idle.
    Since there is no dispatcher in pulling mode, it requires some features of the transportation:
    The transportation must support multiple client connections and server listening. HTTP and TCP don't allow multiple clients to listen on the same address and port, so they cannot be used in pulling mode directly.
    All messages in the transportation must be FIFO, which means the old message must be received before the new one.
    Message selection would be a plus on the transportation. This means both service and client can specify some selection criteria and receive only certain kinds of messages. This feature is not mandatory but would be very useful when implementing the request-reply and duplex WCF channel modes. Otherwise we must have a memory dictionary to store the reply messages. I will explain more about this in the following articles.
    A message bus, or message queue, would be the best candidate as the transportation when using the pulling mode. First, it allows multiple applications to listen on the same queue, and it's FIFO. Some message buses also support message selection, such as TIBCO EMS and RabbitMQ. Some others provide an in-memory dictionary which can store the reply messages, for example Redis.
    The principle of pulling mode is to let the service instances be self-managed. This means each instance will try to retrieve the next pending incoming message when it has finished the current task. This gives us more benefits and can solve the problems we met in the dispatcher mode:
    The incoming message will be received by the best instance to process it, which means the load will be very balanced. It will not happen that some instances are busy while others are idle, since the idle ones will retrieve more tasks to keep themselves busy.
    Since all instances try their best to stay busy we can use fewer instances than in dispatcher mode, which is more cost effective.
    Since there's no dispatcher in the system, there is no bottleneck. When we introduce more service instances, in dispatcher mode we have to change something to let the dispatcher know about the new instances. But in pulling mode, since all service instances are self-managed, there is no extra change at all.
    If there are many incoming messages, since the message bus can queue them in the transportation, service instances will not crash.
    All of the above are the benefits of using the pulling mode, but it introduces some problems as well:
    Process tracking and debugging become more difficult. Since the service instances are self-managed, we cannot know which instance will process a message. So we need more information to support debugging and tracking.
    Real-time response may not be supported. All service instances will process the next message only after the current one is done; if we have some real-time requests this may not be a good solution.
    Comparing the pros and cons above, the pulling mode would be a better solution for a distributed system architecture, because what we need most is scalability, cost-effectiveness and self-management.
    WCF and WCF Transport Extensibility
    Windows Communication Foundation (WCF) is a framework for building service-oriented applications. In the .NET world WCF is the best way to implement a service. In this series I'm going to demonstrate how to implement the pulling mode on top of a message bus by extending WCF. I don't want to go deep into every related field in WCF but will highlight its transport extensibility.
    When we implement an RPC foundation there are many aspects we need to deal with, for example message encoding, encryption, authentication, and message sending and receiving. In WCF, each aspect is represented by a channel. A message will be passed through all necessary channels and finally sent to the underlying transportation. And on the other side the message will be received from the transport and passed through the same channels up to the business logic. This mode is called the "Channel Stack" in WCF, and the last channel in the channel stack must always be a transport channel, which takes the responsibility for sending and receiving the messages. As we are going to implement WCF over a message bus and implement the pulling mode scaling-out solution, we need to create our own transport channel so that the client and service can exchange messages over our bus.
    Before we dive into the transport channel, let's have a look at the message exchange patterns that WCF defines. A message exchange pattern (MEP) defines how client and service exchange messages over the transportation. WCF defines 3 basic MEPs, which are Datagram, Request-Reply and Duplex.
    Datagram: Also known as one-way, or fire-and-forget mode. The message is sent from the client to the service, and no reply is needed from the service. The client doesn't care about the message result at all.
    Request-Reply: A very commonly used pattern. The client sends the request message to the service and waits until the reply message comes back from the service.
    Duplex: The client sends a message to the service; while the service is processing the message it can call back to the client. During the callback the service acts like a client while the client acts like a service.
    In WCF, each MEP has some channels associated with it:
    Datagram: IInputChannel, IOutputChannel
    Request-Reply: IRequestChannel, IReplyChannel
    Duplex: IDuplexChannel
    And the channels are created by a ChannelListener on the server side, and a ChannelFactory on the client side. The ChannelListener and ChannelFactory are created by the TransportBindingElement. The TransportBindingElement is created by the Binding, which can be defined as a new binding or from a custom binding. For more information about the transport channel model, please refer to the MSDN documentation. The figure below shows the transport channel objects when using the request-reply MEP. And this is the datagram MEP. And this is the duplex MEP.
    After investigating the WCF transport architecture, channel model and MEPs, we finally identified what we should do to extend our message bus based transport layer. They are:
    Binding: (Optional) Defines the channel elements in the channel stack and adds our transport binding element at the bottom of the stack. But we can use the built-in CustomBinding as well.
    TransportBindingElement: Defines which MEPs are supported in our transport and creates the related ChannelListener and ChannelFactory. This also defines the scheme of the endpoint when using this transport.
    ChannelListener: Creates the server side channel based on the MEP. We can have one ChannelListener create channels for all supported MEPs, or we can have a ChannelListener for each MEP. In this series I will use the second approach.
    ChannelFactory: Creates the client side channel based on the MEP. We can have one ChannelFactory create channels for all supported MEPs, or we can have a ChannelFactory for each MEP. In this series I will use the second approach.
    Channels: Based on the MEPs we want to support, we need to implement the channels accordingly. For example, if we want our transport to support Request-Reply mode we should implement IRequestChannel and IReplyChannel. In this series I will implement all 3 MEPs listed above one by one.
    Scaffold: In order to make our transport extension work we also need to implement some scaffolding. For example we need some classes to send and receive messages through our message bus. We also need some code to read and write the WCF message, etc. These are not strictly necessary but will be very useful in our example.
    Message Bus
    There is only one thing remaining before we can begin to implement our scaling-out WCF transport, which is the message bus. As I mentioned above, the message bus must have some features to fulfill all the WCF MEPs. In my company we will be using TIBCO EMS, which is an enterprise message bus product. And as I said before, we can use any message bus product that satisfies our requirements. Here I would like to introduce an interface to separate the message bus from WCF. This allows us to implement the bus operations with any kind of bus we are going to use. The interface would be like this:

        public interface IBus : IDisposable
        {
            string SendRequest(string message, bool fromClient, string from, string to = null);

            void SendReply(string message, bool fromClient, string replyTo);

            BusMessage Receive(bool fromClient, string replyTo);
        }

    There are only three methods on the bus interface. Let me explain them one by one. The SendRequest method takes the responsibility for sending the request message onto the bus. The parameters are:
    message: The WCF message content.
    fromClient: Indicates whether this message came from the client.
    from: The channel ID that this message was sent from. The channel ID will be generated when any kind of channel is created, which will be explained in the following articles.
    to: The channel ID that should receive this message. In the Request-Reply and Duplex MEPs this is necessary since the reply message must be received by the channel which sent the related request message.
    The SendReply method takes the responsibility for sending the reply message. It's very similar to the previous one but has no "from" parameter. This is because there is no need to reply to a reply message again in any MEP. The Receive method takes the responsibility for waiting for an incoming message, including request messages and specified reply messages. It returns a BusMessage object, which contains some information about the channel. The code of the BusMessage class is:

        public class BusMessage
        {
            public string MessageID { get; private set; }
            public string From { get; private set; }
            public string ReplyTo { get; private set; }
            public string Content { get; private set; }

            public BusMessage(string messageId, string fromChannelId, string replyToChannelId, string content)
            {
                MessageID = messageId;
                From = fromChannelId;
                ReplyTo = replyToChannelId;
                Content = content;
            }
        }

    Now let's implement a message bus based on the IBus interface. Since I don't want you to buy and install TIBCO EMS or any other message bus product, I will implement an in-process memory bus. This bus is only for test and sample purposes. It can only be used if the service and client are in the same process. Very straightforward.
        public class InProcMessageBus : IBus
        {
            private readonly ConcurrentDictionary<Guid, InProcMessageEntity> _queue;
            private readonly object _lock;

            public InProcMessageBus()
            {
                _queue = new ConcurrentDictionary<Guid, InProcMessageEntity>();
                _lock = new object();
            }

            public string SendRequest(string message, bool fromClient, string from, string to = null)
            {
                var entity = new InProcMessageEntity(message, fromClient, from, to);
                _queue.TryAdd(entity.ID, entity);
                return entity.ID.ToString();
            }

            public void SendReply(string message, bool fromClient, string replyTo)
            {
                var entity = new InProcMessageEntity(message, fromClient, null, replyTo);
                _queue.TryAdd(entity.ID, entity);
            }

            public BusMessage Receive(bool fromClient, string replyTo)
            {
                InProcMessageEntity e = null;
                while (true)
                {
                    lock (_lock)
                    {
                        var entity = _queue
                            .Where(kvp => kvp.Value.FromClient == fromClient && (kvp.Value.To == replyTo || string.IsNullOrWhiteSpace(kvp.Value.To)))
                            .FirstOrDefault();
                        if (entity.Key != Guid.Empty && entity.Value != null)
                        {
                            _queue.TryRemove(entity.Key, out e);
                        }
                    }
                    if (e == null)
                    {
                        Thread.Sleep(100);
                    }
                    else
                    {
                        return new BusMessage(e.ID.ToString(), e.From, e.To, e.Content);
                    }
                }
            }

            public void Dispose()
            {
            }
        }

    The InProcMessageBus stores the messages in InProcMessageEntity objects, which can carry some extra information besides the WCF message itself.

        public class InProcMessageEntity
        {
            public Guid ID { get; set; }
            public string Content { get; set; }
            public bool FromClient { get; set; }
            public string From { get; set; }
            public string To { get; set; }

            public InProcMessageEntity()
                : this(string.Empty, false, string.Empty, string.Empty)
            {
            }

            public InProcMessageEntity(string content, bool fromClient, string from, string to)
            {
                ID = Guid.NewGuid();
                Content = content;
                FromClient = fromClient;
                From = from;
                To = to;
            }
        }

    Summary
    OK, now I have all the necessary stuff ready. The next step will be implementing our WCF message bus transport extension. In this post I described two scaling-out approaches on the service side, especially relevant if we are using a cloud platform: dispatcher mode and pulling mode, and I compared their pros and cons. Then I introduced the WCF channel stack, channel model and the transport extension points, and identified what we should do to create our own WCF transport extension, to let our WCF services use pulling mode on top of a message bus. And finally I provided some classes that will be used in the future posts, working against an in-process memory message bus, for demonstration purposes only. In the next post I will begin to implement the transport extension step by step.
    Hope this helps, Shaun
    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • CQRS - Benefits

    - by Dylan Smith
    Thanks to all the comments and feedback from the last post I think I have a better understanding now of the benefits of CQRS (separate from the benefits of Event Sourcing). I’m going to try and sum it up here, and point out some areas where I could still use some advice: CQRS Benefits Sounds like the primary benefit of CQRS as an architecture is it allows you to create a simpler domain model by sucking out everything related to queries. I can definitely see the benefit to this, in general the domain logic related to commands is the high-value behavior in the software, but the logic required to service the queries would add a lot of low-value “noise” to the domain model that would dilute the high-value (command) behavior – sorting, paging, filtering, pre-fetch paths, etc. Also the most appropriate domain structure for implementing commands might not be the most optimal for implementing queries. To paraphrase Greg, this usually results in a domain model that is mediocre at both, piss-poor at one, or more likely piss-poor at both commands and queries. Not only will you be able to simplify your domain model by pulling out all the query logic, but at least a handful of commands in most systems will probably be “pass-though” type commands with little to no logic that just generate events. If these can be implemented directly in the command-handler and never touch the domain model, this allows you to slim down the domain model even more. Also, if you were to do event sourcing without CQRS, you no longer have a database containing the current state (only the domain model would) which makes it difficult (or impossible) to support ad-hoc querying and/or reporting that is common in most business software. Of course CQRS provides some great scalability benefits, not only scalability but I have to assume that it provides extremely low latency for most operations, especially if you have an asynchronous event bus. I know Greg says that you get a 3x scaling (Commands, Queries, Client) of your ability to perform parallel development, but IMHO, it seems like it only provides 1.5x scaling since even without CQRS you’re going to have your client loosely coupled to your domain - which is still a great benefit to be able to realize. Questions / Concerns If all the queries against an aggregate get pulled out to the Query layer, what if the only commands for that aggregate can be handled in a “pass-through” manner with the command handler directly generating events. Is it possible to have an aggregate that isn’t modeled in the domain model? Are there any issues or downsides to this? I know in the feedback from my previous posts it was suggested that having one domain model handling both commands and queries requires implementing a lot of traversals between objects that wouldn’t be necessary if it was only servicing commands. My question is, do you include traversals in your domain model based on the needs of the code, or based on the conceptual domain model? If none of my Commands require a Customer.Orders traversal, but the conceptual domain includes the concept of a set of orders belonging to a customer – should I model that in my domain model or not? I like the idea of using the Query side of the architecture as a place to put junior devs where the risk of them screwing something up has minimal impact. But I’m not sold on the idea that you can actually outsource it. 
Like I said in one of my comments on my previous post, the code to handle a query and generate DTO’s is going to be dead simple, but the code to process events and apply them to the tables on the query side is going to require a significant amount of domain knowledge to know which events to listen for to update each of the de-normalized tables (and what changes need to be made when each event is processed). I don’t know about everybody else, but having Indian/Russian/whatever outsourced developers have to do anything that requires significant domain knowledge has never been successful in my experience. And if you need to spec out for each new query which events to listen to and what to do with each one, well that’s probably going to be just as much work to document as it would be to just implement it. Greg made the point in a comment that doing an aggregate query like “Total Sales By Customer” is going to be inefficient if you use event sourcing but not CQRS. I don’t understand why that would be the case. I imagine in that case you’d simply have a method/property on the Customer object that calculated total sales for that customer by enumerating over the Orders collection. Then the application services layer would generate DTO’s off of the Customers collection that included say the CustomerID, CustomerName, TotalSales, or whatever the case may be. As long as you use a snapshotting implementation, I don’t see why that would be anymore inefficient in a DDD+Event Sourcing implementation than in a typical DDD implementation. Like I mentioned in my last post I still have some questions about query logic that haven’t been answered yet, but before I start asking those I want to make sure I have a strong grasp on what benefits CQRS provides.  My main concern with the query logic was that I know I could just toss it all into the query side, but I was concerned that I would be losing the benefits of using CQRS in the first place if I did that.  I want to elaborate more on this though with some example situations in an upcoming post.

    Read the article

  • 7 Ways Modern Windows 8 Apps Are Different From Windows Desktop Apps

    - by Chris Hoffman
    Windows 8 apps – originally known as Metro-style apps and now known as Windows 8 style, Modern UI style, or Windows Store style apps, depending on which Microsoft employee you ask – are very different from traditional desktop apps. The Modern interface isn’t just a fresh coat of paint. The new Windows Runtime, or WinRT, application architecture (not to be confused with Windows RT) is very different from the Windows desktop we’re used to.

    Read the article

  • Benefits of PerformancePoint Services Using SharePoint Server 2010

    - by Wayne
    What is PerformancePoint Services?
    Most of the time it happens that the metrics that make up your key performance indicators are not simple values from a data source. In SharePoint Server 2007 PerformancePoint Services, you could create two kinds of KPI metrics: simple single-value metrics from any supported data source, or complex multiple-value metrics from a single Analysis Services data source using MDX. Now things are even easier with PerformancePoint Services in SharePoint 2010. Let us check what it is.
    PerformancePoint Services in SharePoint Server 2010 is a performance management service that you can use to monitor and analyze your business. By providing flexible, easy-to-use tools for building dashboards, scorecards, reports, and key performance indicators (KPIs), PerformancePoint Services can help everyone across an organization make informed business decisions that align with companywide objectives and strategy. Scorecards, dashboards, and KPIs help drive accountability. Integrated analytics help employees move quickly from monitoring information to analyzing it and, when appropriate, sharing it throughout the organization. Prior to the addition of PerformancePoint Services to SharePoint Server, Microsoft Office PerformancePoint Server 2007 functioned as a standalone server. Now PerformancePoint functionality is available as an integrated part of the SharePoint Server Enterprise license, as is the case with Excel Services in Microsoft SharePoint Server 2010. The popular features of earlier versions of PerformancePoint Services are preserved along with numerous enhancements and additional functionality.
    New PerformancePoint Services features
    PerformancePoint Services can now utilize SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. Dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework.
    New features and enhancements of SharePoint 2010 PerformancePoint Services
    • With PerformancePoint Services functioning as a service in SharePoint Server, dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework. The new architecture also takes advantage of SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. You also can include and link PerformancePoint Services Web Parts with other SharePoint Server Web Parts on the same page. The new architecture also streamlines security models that simplify access to report data.
    • The Decomposition Tree is a new visualization report type available in PerformancePoint Services. You can use it to quickly and visually break down higher-level data values from a multi-dimensional data set to understand the driving forces behind those values. The Decomposition Tree is available in scorecards and analytic reports and ultimately in dashboards.
    • You can access more detailed business information with improved scorecards. Scorecards have been enhanced to make it easy for you to drill down and quickly access more detailed information. PerformancePoint scorecards also offer more flexible layout options, dynamic hierarchies, and calculated KPI features. Using this enhanced functionality, you can now create custom metrics that use multiple data sources. You can also sort, filter, and view variances between actual and target values to help you identify concerns or risks.
    • Better Time Intelligence filtering capabilities that you can use to create and use dynamic time filters that are always up to date. Other improved filters improve the ability for dashboard users to quickly focus in on the information that is most relevant.
    • The ability to include and link PerformancePoint Services Web Parts together with other PerformancePoint Services Web Parts on the same page.
    • Easier authoring and publishing of dashboard items by using Dashboard Designer.
    • SQL Server Analysis Services 2008 support.
    • Increased support for accessibility compliance in individual reports and scorecards.
    • The KPI Details report is a new report type that displays contextually relevant information about KPIs, metrics, rows, columns, and cells within a scorecard. The KPI Details report works as a Web Part that links to a scorecard or individual KPI to show relevant metadata to the end user in SharePoint Server. This Web Part can be added to PerformancePoint dashboards or any SharePoint Server page.
    • Create analytic reports to better understand the underlying business forces behind the results. Analytic reports have been enhanced to support value filtering, new chart types, and server-based conditional formatting.
    To conclude, PerformancePoint Services, by becoming tightly integrated with SharePoint Server 2010, takes advantage of many enterprise-level SharePoint Server 2010 features. Unfortunately, SharePoint Foundation 2010 doesn't include this feature. There are still many choices in the SharePoint family of products, which includes SharePoint Server 2010, SharePoint Foundation, SharePoint Server 2007 and associated free SharePoint web parts and templates.

    Read the article
