Search Results

Search found 21914 results on 877 pages for 'core services'.

Page 317/877 | < Previous Page | 313 314 315 316 317 318 319 320 321 322 323 324  | Next Page >

  • What is a best practice tier structure of a Java EE 6/7 application?

    - by James Drinkard
    I was attempting to find a best practice for modeling the tiers in a Java EE application yesterday and couldn't come up with anything current. In the past, say Java 1.4, it was four tiers: (1) Presentation Tier, (2) Web Tier, (3) Business Logic Tier, (4) DAL (Data Access Layer), which I always considered a tier and not a layer. After working with web services and SOA, I thought to add a services tier, but that may fall under 3, the business logic tier. I searched for quite a while and read articles. It seems like Domain-Driven Design is becoming more popular, but I couldn't find a diagram of its tier structure. Does anyone have ideas or diagrams on what the proper tier structure is for newer Java EE applications, or is it really the same, with more items grouped under the four I've mentioned?
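
    For concreteness, here is a minimal sketch of how those four tiers are often expressed as Java types; all names are illustrative, not taken from any specification:

      // TierSketch.java - an illustrative (not canonical) mapping of the four
      // classic tiers onto Java types; all names here are hypothetical.
      class Customer { long id; String name; }               // domain data

      interface CustomerDao {                                // 4. DAL: persistence only
          Customer findById(long id);
      }

      interface CustomerService {                            // 3. business logic tier
          Customer register(String name);
      }

      class CustomerController {                             // 2. web tier: routing, no rules
          private final CustomerService service;
          CustomerController(CustomerService service) { this.service = service; }

          // 1. the presentation tier renders the view name returned here (e.g. a JSP)
          String show(long id) {
              return "customer/view";
          }
      }

    The point of the sketch is the dependency direction (presentation -> web -> business -> DAL); whether a services tier is a fifth box or part of the business tier is exactly the question being asked.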

    Read the article

  • Writing a new programming language - when and how to bootstrap datastructures?

    - by OnResolve
    I'm in the process of writing my own programming language which, thus far, has been going great in terms of what I set out to accomplish. However, now I'd like to bootstrap some pre-existing data structures and/or objects. My problem is that I'm not really sure how to begin. When the compiler starts, do I splice in these add-ins so they're part of the scope of the application? If I make these in some core library, my concern is how I distribute the library in addition to the compiler--or are they part of the compiler? I get that there are probably a number of plausible ways to approach this, but I'm having trouble setting my direction. If it helps, the language is on top of the .NET core (i.e. it compiles to CLR code). Any help or suggestions are very much appreciated!
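
    One plausible direction, sketched below in Java with entirely hypothetical names: pre-populate the compiler's global symbol table with the core-library types at start-up, so they behave like built-ins rather than source spliced into every program. Shipping that table inside the compiler also sidesteps the separate-distribution question.

      import java.util.HashMap;
      import java.util.Map;

      // Hypothetical compiler scaffolding: the core library is registered as
      // pre-resolved symbols in the global scope before any user source is
      // parsed, so its types behave like built-ins.
      class Symbol {
          final String name;
          final String clrType;   // the .NET type the symbol lowers to at emit time
          Symbol(String name, String clrType) { this.name = name; this.clrType = clrType; }
      }

      class GlobalScope {
          private final Map<String, Symbol> symbols = new HashMap<>();

          void define(Symbol s) { symbols.put(s.name, s); }
          Symbol resolve(String name) { return symbols.get(name); }

          // Runs once at compiler start-up, before user code is compiled.
          static GlobalScope withCoreLibrary() {
              GlobalScope scope = new GlobalScope();
              scope.define(new Symbol("List", "System.Collections.Generic.List`1"));
              scope.define(new Symbol("Map",  "System.Collections.Generic.Dictionary`2"));
              return scope;
          }
      }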

    Read the article

  • My sound stopped working today, how can I fix it?

    - by Oli
    This seems to be a problem with PulseAudio. I was logged in over VNC on my phone and started playing a video; this caused X to crash (as sometimes happens). I restarted and suddenly the sound doesn't work. I have an Intel HDA/Realtek ALC889:

      00:1b.0 Audio device: Intel Corporation 82801JI (ICH10 Family) HD Audio Controller

    alsamixer is detecting this just fine. PulseAudio doesn't detect this ALSA device, so it is using auto_null as the default sink (logs below). When I properly kill PulseAudio (tell it not to auto-start), direct ALSA communication with the sound card works just fine; speaker-test, for example, works. So the hardware and ALSA layers are fine, IMO. In the logs, it seems that the card might be "busy", but I really don't know how or why it would be now (and never before). Is there an ALSA lock file somewhere that is still there because of my crash? I just ran sudo fuser /dev/snd/* and saw this:

      oli@bert:~$ sudo fuser /dev/snd/*
      /dev/snd/controlC0:  1884
      /dev/snd/pcmC0D0c:   1884m
      /dev/snd/timer:      1884

    A look at the process list (ps aux | grep 1884) tells me process 1884 is arecord -c 1 -f S16_LE -r 8000 -t raw. No idea what this is or why it's running. When I try to kill arecord (as root), it just respawns and rebinds to the hardware. I'm in a very annoying situation where I don't know what is going on and don't know how to find out. I'm open to all suggestions to get this working again. Fire away. And here's what I get when I stop PA auto-loading, kill it, and then start it with -vvvvv:

      oli@bert:~$ pulseaudio -vvvvv
      I: main.c: setrlimit(RLIMIT_NICE, (31, 31)) failed: Operation not permitted
      I: main.c: setrlimit(RLIMIT_RTPRIO, (9, 9)) failed: Operation not permitted
      D: core-rtclock.c: Timer slack is set to 50 us.
      D: core-util.c: RealtimeKit worked.
      I: core-util.c: Successfully gained nice level -11.
      I: main.c: This is PulseAudio 0.9.21-63-gd3efa-dirty
      D: main.c: Compilation host: x86_64-pc-linux-gnu
      D: main.c: Compilation CFLAGS: -g -O2 -g -Wall -O3 -Wall -W -Wextra -pipe -Wno-long-long -Winline -Wvla -Wno-overlength-strings -Wunsafe-loop-optimizations -Wundef -Wformat=2 -Wlogical-op -Wsign-compare -Wformat-security -Wmissing-include-dirs -Wformat-nonliteral -Wold-style-definition -Wpointer-arith -Winit-self -Wdeclaration-after-statement -Wfloat-equal -Wmissing-prototypes -Wstrict-prototypes -Wredundant-decls -Wmissing-declarations -Wmissing-noreturn -Wshadow -Wendif-labels -Wcast-align -Wstrict-aliasing=2 -Wwrite-strings -Wno-unused-parameter -ffast-math -Wp,-D_FORTIFY_SOURCE=2 -fno-common -fdiagnostics-show-option
      D: main.c: Running on host: Linux x86_64 2.6.38-rc3 #1 SMP Tue Feb 1 10:53:04 GMT 2011
      D: main.c: Found 8 CPUs.
      I: main.c: Page size is 4096 bytes
      D: main.c: Compiled with Valgrind support: no
      D: main.c: Running in valgrind mode: no
      D: main.c: Running in VM: no
      D: main.c: Optimised build: yes
      D: main.c: All asserts enabled.
      I: main.c: Machine ID is 8310740c4729ef474fe5ecec4bbf5a6b.
      I: main.c: Session ID is 8310740c4729ef474fe5ecec4bbf5a6b-1297338553.571075-1050119523.
      I: main.c: Using runtime directory /home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-runtime.
      I: main.c: Using state directory /home/oli/.pulse.
      I: main.c: Using modules directory /usr/lib/pulse-0.9.21/modules.
      I: main.c: Running in system mode: no
      I: main.c: Fresh high-resolution timers available! Enjoy ol' chap!
      I: cpu-x86.c: CPU flags: CMOV MMX SSE SSE2 SSE3 SSSE3 SSE4_1 SSE4_2
      I: svolume_mmx.c: Initialising MMX optimized functions.
      I: remap_mmx.c: Initialising MMX optimized remappers.
      I: svolume_sse.c: Initialising SSE2 optimized functions.
      I: remap_sse.c: Initialising SSE2 optimized remappers.
      I: sconv_sse.c: Initialising SSE2 optimized conversions.
      D: memblock.c: Using shared memory pool with 1024 slots of size 64.0 KiB each, total size is 64.0 MiB, maximum usable slot size is 65472
      D: database-tdb.c: Opened TDB database '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-device-volumes.tdb'
      I: module-device-restore.c: Sucessfully opened database file '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-device-volumes'.
      I: module.c: Loaded "module-device-restore" (index: #0; argument: "").
      D: database-tdb.c: Opened TDB database '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-stream-volumes.tdb'
      I: module-stream-restore.c: Sucessfully opened database file '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-stream-volumes'.
      I: module.c: Loaded "module-stream-restore" (index: #1; argument: "").
      D: database-tdb.c: Opened TDB database '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-card-database.tdb'
      I: module-card-restore.c: Sucessfully opened database file '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-card-database'.
      I: module.c: Loaded "module-card-restore" (index: #2; argument: "").
      I: module.c: Loaded "module-augment-properties" (index: #3; argument: "").
      D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-udev-detect.so': success
      D: module-udev-detect.c: /dev/snd/controlC0 is accessible: yes
      D: module-udev-detect.c: /devices/pci0000:00/0000:00:1b.0/sound/card0 is busy: yes
      I: module-udev-detect.c: Found 1 cards.
      I: module.c: Loaded "module-udev-detect" (index: #4; argument: "").
      D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-bluetooth-discover.so': success
      D: dbus-util.c: Successfully connected to D-Bus system bus ba7c9a1f90b3d49d930bca2100000015 as :1.62
      D: bluetooth-util.c: dbus: interface=org.freedesktop.DBus, path=/org/freedesktop/DBus, member=NameAcquired
      D: bluetooth-util.c: Bluetooth daemon is apparently not available.
      I: module.c: Loaded "module-bluetooth-discover" (index: #5; argument: "").
      D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-esound-protocol-unix.so': success
      I: module.c: Loaded "module-esound-protocol-unix" (index: #6; argument: "").
      I: module.c: Loaded "module-native-protocol-unix" (index: #7; argument: "").
      D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-gconf.so': success
      I: module.c: Loaded "module-gconf" (index: #8; argument: "").
      I: module-default-device-restore.c: Saved default sink 'auto_null' not existant, not restoring default sink setting.
      I: module-default-device-restore.c: Saved default source 'auto_null.monitor' not existant, not restoring default source setting.
      I: module.c: Loaded "module-default-device-restore" (index: #9; argument: "").
      I: module.c: Loaded "module-rescue-streams" (index: #10; argument: "").
      D: module-always-sink.c: Autoloading null-sink as no other sinks detected.
      I: sink.c: Created sink 0 "auto_null" with sample spec s16le 6ch 44100Hz and channel map front-left,front-left-of-center,front-center,front-right,front-right-of-center,rear-center
      I: sink.c: device.description = "Dummy Output"
      I: sink.c: device.class = "abstract"
      I: sink.c: device.icon_name = "audio-card"
      D: core-subscribe.c: Dropped redundant event due to change event.
      I: source.c: Created source 0 "auto_null.monitor" with sample spec s16le 6ch 44100Hz and channel map front-left,front-left-of-center,front-center,front-right,front-right-of-center,rear-center
      I: source.c: device.description = "Monitor of Dummy Output"
      I: source.c: device.class = "monitor"
      I: source.c: device.icon_name = "audio-input-microphone"
      D: module-null-sink.c: Thread starting up
      I: module.c: Loaded "module-null-sink" (index: #11; argument: "sink_name=auto_null sink_properties='device.description="Dummy Output"'").
      I: module.c: Loaded "module-always-sink" (index: #12; argument: "").
      I: module.c: Loaded "module-intended-roles" (index: #13; argument: "").
      D: module-suspend-on-idle.c: Sink auto_null becomes idle, timeout in 5 seconds.
      I: module.c: Loaded "module-suspend-on-idle" (index: #14; argument: "").
      I: client.c: Created 0 "ConsoleKit Session /org/freedesktop/ConsoleKit/Session1"
      D: module-console-kit.c: Added new session /org/freedesktop/ConsoleKit/Session1
      I: module.c: Loaded "module-console-kit" (index: #15; argument: "").
      I: module.c: Loaded "module-position-event-sounds" (index: #16; argument: "").
      D: dbus-util.c: Successfully connected to D-Bus session bus efbffc6788fad56cfd64d40c00000018 as :1.182
      D: main.c: Got org.pulseaudio.Server!
      I: main.c: Daemon startup complete.
      I: client.c: Created 1 "Native client (UNIX socket client)"
      I: client.c: Created 2 "Native client (UNIX socket client)"
      D: protocol-native.c: Protocol version: remote 16, local 16
      I: protocol-native.c: Got credentials: uid=1000 gid=1000 success=1
      D: protocol-native.c: SHM possible: yes
      D: protocol-native.c: Negotiated SHM: yes
      D: protocol-native.c: Protocol version: remote 16, local 16
      I: protocol-native.c: Got credentials: uid=1000 gid=1000 success=1
      D: protocol-native.c: SHM possible: yes
      D: protocol-native.c: Negotiated SHM: yes
      D: module-augment-properties.c: Looking for .desktop file for gnome-volume-control-applet
      D: module-augment-properties.c: Looking for .desktop file for gnome-settings-daemon
      D: core-subscribe.c: Dropped redundant event due to change event.
      I: module-suspend-on-idle.c: Sink auto_null idle for too long, suspending ...
      D: sink.c: Suspend cause of sink auto_null is 0x0004, suspending

    Note the one section that seems to find the hardware but says it's busy (no idea if this is relevant):

      D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-udev-detect.so': success
      D: module-udev-detect.c: /dev/snd/controlC0 is accessible: yes
      D: module-udev-detect.c: /devices/pci0000:00/0000:00:1b.0/sound/card0 is busy: yes
      I: module-udev-detect.c: Found 1 cards.

    Read the article

  • Telecommunication SID model and resources [on hold]

    - by andygluk
    There is the SID model, well known in the telecom industry. Following this model, you define resources as resources owned by your enterprise, and then you build resource-oriented services on top of them, and then customer-oriented services, and so on... So everything is based on enterprise-owned resources, which you have to identify first. What I am looking for, and what I am asking about, is some alternative to this model, built not on resources owned by the enterprise but on resources sold by the enterprise. Say you are selling licenses for using your products. Instead of building the model on top of enterprise resources, you may be interested in building it on top of the licenses you are selling.
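
    To make the inversion concrete, here is a minimal Java sketch (all names hypothetical) in which the sellable license, rather than any enterprise-owned equipment, is the root resource:

      import java.time.LocalDate;

      // Hypothetical sketch: the unit sold (a license) is modelled as the
      // root resource, rather than the infrastructure behind it.
      class License {
          final String productId;     // what was sold
          final String customerId;    // who bought it
          final LocalDate validUntil; // the only "capacity" this resource has

          License(String productId, String customerId, LocalDate validUntil) {
              this.productId = productId;
              this.customerId = customerId;
              this.validUntil = validUntil;
          }

          boolean isActive() {
              return !LocalDate.now().isAfter(validUntil);
          }
      }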

    Read the article

  • Tellago releases a RESTful API for BizTalk Server business rules

    - by Charles Young
    Jesus Rodriguez has blogged recently on Tellago DevLabs' release of an open source RESTful API for BizTalk Server Business Rules. This is an excellent addition to the BizTalk ecosystem and I congratulate Tellago on their work. See http://weblogs.asp.net/gsusx/archive/2011/02/08/tellago-devlabs-a-restful-api-for-biztalk-server-business-rules.aspx

    The Microsoft BRE was originally designed to be used as an embedded library in .NET applications. This is reflected in the implementation of the Rules Engine Update (REU) Service, a TCP/IP service hosted by a Windows service running locally on each BizTalk box. The job of the REU is to distribute rules, managed and held in a central database repository, across the various servers in a BizTalk group. The engine is therefore distributed on each box, rather than exposed behind a central rules service. This model is all very well, but proves quite restrictive in enterprise environments. The problem is that the BRE can only legally run on licensed BizTalk boxes. Increasingly, we need to deliver rules capabilities across a more widely distributed environment. For example, in the project I am working on currently, we need to surface decisioning capabilities for use within WF workflow services running under AppFabric on non-BTS boxes. The BRE does not currently offer any centralised rule service facilities out of the box, so you have to roll your own (and then run your rules services on BTS boxes, which has raised a few eyebrows on my current project, as all other WCF services run on a dedicated server farm).

    Tellago's API addresses this by providing a RESTful API for querying the rules repository and executing rule sets against XML passed in the request payload. As Jesus points out in his post, using a RESTful approach hugely increases the reach of BRE-based decisioning, allowing simple invocation from code written in dynamic languages, on mobile devices, etc.

    We developed our own SOAP-based general-purpose rules service to handle scenarios such as the one we face on my current project. SOAP is arguably better suited to enterprise service bus environments (please don't 'flame' me - I refuse to engage in the RESTful vs. SOAP war). For example, on my current project we use claims-based authorisation across the entire service bus, and we use WIF and WS-Federation for this purpose. We have extended this to the rules service. I can't release the code for commercial reasons :-( but this approach allows us to legally extend the reach of the BRE far beyond the confines of the BizTalk boxes on which it runs and to provide general-purpose decisioning capabilities on the bus.

    So, well done, Tellago. I haven't had a chance to play with the API yet, but I am looking forward to doing so.
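
    To illustrate the reach argument, invoking that kind of RESTful rules endpoint is a few lines from almost any client. A hedged Java sketch follows; the URL and payload are placeholders, not the actual Tellago API, so consult their documentation for the real resource paths:

      import java.net.URI;
      import java.net.http.HttpClient;
      import java.net.http.HttpRequest;
      import java.net.http.HttpResponse;

      // Sketch: POST a fact document to a hypothetical rules endpoint and
      // print the rule-processed XML that comes back.
      public class RuleServiceClient {
          public static void main(String[] args) throws Exception {
              String factXml = "<order><total>120.00</total></order>";
              HttpRequest request = HttpRequest.newBuilder()
                      .uri(URI.create("http://rules.example.com/rulesets/DiscountPolicy"))
                      .header("Content-Type", "application/xml")
                      .POST(HttpRequest.BodyPublishers.ofString(factXml))
                      .build();
              HttpResponse<String> response = HttpClient.newHttpClient()
                      .send(request, HttpResponse.BodyHandlers.ofString());
              System.out.println(response.body());
          }
      }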

    Read the article

  • WebCenter Customer Spotlight: Los Angeles Department of Water and Power

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary: Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential and commercial customers in Southern California. The goal of the project was to implement a newly designed web portal to increase customer self-service while reducing transactions via IVR, and to automate many of the paper-based processes into web-based workflows for its 1.6 million customers. LADWP implemented a self-service portal using Oracle WebCenter Portal and Oracle WebCenter Content, with Oracle SOA Suite for the integration of its complex back-end systems infrastructure. The new portal has received extremely positive feedback, not only from the customers and users of the portal, but also from other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for its innovative solution.

    Company Overview: Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential and commercial customers in Southern California. LADWP also bills most of these customers for sanitation services provided by another department in the city of Los Angeles.

    Business Challenges: The goal of the project was to implement a newly designed web portal that is easy to navigate from a web browser and mobile devices, as well as being the platform for surfacing internet and intranet applications at LADWP. The primary objective of the new portal was to increase customer self-service while reducing transactions via IVR and walk-up, and to automate many of the paper-based processes into web-based workflows for customers. This includes automation of: self-service, implemented through My Account (bill pay, payment history, bill history, usage analysis, service request management); financial assistance programs; customer rebate programs; turn off/turn on/transfer of services; outage reporting; and eNotification (SMS, email).

    Solution Deployed: LADWP implemented a self-service portal using Oracle WebCenter Portal and Oracle WebCenter Content. Using Oracle SOA Suite, they integrated various back-end systems, including Oracle Siebel CRM, an IBM mainframe-based CIS, FileNet for document management, the EBP Electronic Bill Payment system, the HP Imprint system for BillXML data, and other systems including outage reporting systems, an SMS service, etc. The new portal's features include: a complete graphical redesign based on best practices in UI design for high usability; customer self-service implemented through My Account (bill pay, payment history, bill history, usage analysis, service request management); financial assistance programs (CRM, WebCenter); customer rebate programs (CRM, WebCenter); turn on/off/transfer of services (commercial and residential); outage reporting; eNotification (SMS, email); multilingual support (English and Spanish) using WebCenter multi-language support; Section 508 (ADA) compliance; search using SES (Secure Enterprise Search); distributed authorship in WebCenter Content; and mobile access (any mobile browser).

    Business Results: The new portal has received extremely positive feedback, not only from customers and users of the portal, but also from other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for its innovative solution.
    Additional Information: LADWP OpenWorld presentation; Oracle WebCenter Portal; Oracle WebCenter Content; Oracle SOA Suite.

    Read the article

  • How do you visually represent programming skills?

    - by TomSchober
    I had a discussion with a recruiter recently that made me wish I could visually represent programming skills. In trying to explain how skills relate, what are the important properties of those skills? Would a tagging model work (e.g. "Design Pattern," "Programming Language," "IDE," or "VCS")? Or are they really hierarchical? Clarification: the real problem I see is communicating the level of granularity among skill sets. For instance, saying someone "knows Java" is a uselessly broad description of what that person can DO. Saying they know how to write web services with the Java programming language is a bit better. To go even further, saying they know Spring as a tool under all that is probably specific enough. What should we call those levels of granularity? What are the relationships between the terms we use, i.e. framework to language, tool to language, framework to solution (like web services), etc.?
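
    One way to make those granularity levels concrete is a small data structure; this Java sketch (a sketch, not a proposed standard) models the language -> domain skill -> tool nesting described above:

      import java.util.List;

      // Hypothetical three-level granularity: each skill names the kind of
      // thing it is and the narrower skills that sit underneath it.
      record Skill(String name, String kind, List<Skill> narrowerSkills) { }

      class SkillExample {
          static final Skill JAVA = new Skill("Java", "language",
              List.of(new Skill("Web services", "domain skill",
                  List.of(new Skill("Spring", "framework", List.of())))));
      }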

    Read the article

  • Oracle OpenWorld 2012 Hands-on Lab: “Using Oracle AIA Foundation Pack for Oracle E-Business Suite Integration”

    - by Lionel Dubreuil
    Sharpen your Oracle skill sets and master Oracle technology in Oracle OpenWorld Hands-on Labs. In self-paced, practical learning sessions covering everything from business applications to middleware, database, storage, and enterprise management solutions, you'll discover new ways to derive maximum benefit from your Oracle hardware and software solutions. Oracle experts will be available in person to answer questions and guide you through each lab. Hands-on Labs fill up early, and seats are limited, so don't be late. This hands-on lab, HOL10233 - Using Oracle AIA Foundation Pack for Oracle E-Business Suite Integration, is scheduled for: Date: Monday, Oct 1; Time: 10:45 AM - 11:45 AM; Location: Marriott Marquis - Nob Hill CD. In this hands-on lab, learn how to integrate Oracle E-Business Suite with third-party applications in your ecosystem using Oracle Application Integration Architecture (AIA) Foundation Pack. The lab focuses on SOA-based integration with Oracle E-Business Suite, using Oracle E-Business Suite Integrated SOA Gateway or the Oracle Applications Adapter to orchestrate a process across disparate applications in your ecosystem. The objectives for this session are to: learn how to design and build an integration, from functional definition to development; learn how to automatically build services with robust error-handling logic; and learn how to discover and reuse services available in Oracle E-Business Suite.

    Read the article

  • Row Count Plus Transformation

    As the name suggests, we have taken the current Row Count Transform that is provided by Microsoft in the Integration Services toolbox, recreated the functionality, and extended upon it. There are two things about the current version that we thought could do with cleaning up: the lack of a custom UI, and the fact that you have to type the variable name yourself. In the Row Count Plus Transformation we solve these issues for you. Another thing we thought was missing is the ability to calculate the time taken between components in the pipeline. An example usage would be that you want to know how many rows flowed between Component A and Component B and how long it took. Again, we have solved this issue. Credit must go to Erik Veerman of Solid Quality Learning for the idea behind noting the duration. We were looking at one of his packages and saw that he was doing something very similar, but he was using a Script Component as a transformation. Our philosophy is that if you have to write or copy and paste the same piece of code more than once, then you should be thinking about a custom component - and here it is.

    The Row Count Plus Transformation populates variables with the values returned from counting the rows that have flowed through the path, and from noting the time in seconds between when it first saw a row come down this path and when it saw the final row. It is possible to leave both these boxes blank and the component will still work. All input columns are passed through the transformation unaltered; you are not permitted to change or add to the inputs or outputs of this component. Optionally, you can set the component to fire an event, which happens during the PostExecute phase of the execution. This can be useful to improve the visibility of this information, such that it is captured in package logging, or it can be used to drive workflow in the case of an error event.

    Properties:
      OutputRowCountVariable (String) - The name of the variable into which the number of rows read will be passed (optional).
      OutputDurationVariable (String) - The name of the variable into which the duration in seconds will be passed (optional).
      EventType (RowCountPlusTransform.EventType) - The type of event to fire during post-execute, included in which are the row count and duration values.

    RowCountPlusTransform.EventType enumeration:
      None (0) - Do not fire any event.
      Information (1) - Fire an Information event.
      Warning (2) - Fire a Warning event.
      Error (3) - Fire an Error event.

    Installation: The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache, as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restart any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. For 2005/2008 only - finally, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Row Count Plus Transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry, "How do I install a task or transform component?".
    We recommend you follow best practice and apply the current Microsoft SQL Server service pack to your SQL Server servers and workstations; this component requires a minimum of SQL Server 2005 Service Pack 1.

    Downloads: The Row Count Plus Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed: Row Count Plus Transformation for SQL Server 2005; Row Count Plus Transformation for SQL Server 2008; Row Count Plus Transformation for SQL Server 2012.

    Version history:
      SQL Server 2012 - Version 3.0.0.6: SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)
      SQL Server 2008 - Version 2.0.0.5: SQL Server 2008 release. (15 Oct 2008)
      SQL Server 2005 - Version 1.1.0.43: Bug fix for duration. For long-running processes the duration second count may have been incorrect. (8 Sep 2006)
      Version 1.1.0.42: SP1 compatibility testing. Added the ability to raise an event with the count and duration data for easier logging or workflow. (18 Jun 2006)
      Version 1.0.0.1: SQL Server 2005 RTM. Made available as a general public release. (20 Mar 2006)

    Troubleshooting: Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012. If you get an error when you try to use the component along the lines of "The component could not be added to the Data Flow task. Please verify that this component is properly installed. ... The data flow object "Konesans ..." is not installed correctly on this computer", this usually indicates that the internal cache of SSIS components needs to be updated. This cache is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Once installation is complete, you need to manually add the task to the toolbox before you will see it and be able to add it to packages - see "How do I install a task or transform component?".

    Read the article

  • Ubuntu 11.10 doesn't detect Intel integrated graphics (i7-2670QM CPU)

    - by Telmo Marques
    The laptop I'm using is an MSI GT683DX-847PT, which comes with an NVIDIA GeForce GTX 570M discrete GPU and an Intel Core i7-2670QM CPU. According to Intel's description of the Core i7-2670QM, it has an HD Graphics 3000 integrated GPU. The problem is that the Intel integrated GPU doesn't come up in lspci or in lshw; only the NVIDIA GPU shows up. Here is the output of both commands:

      sudo lspci: http://pastebin.com/raw.php?i=9AZg8bJy
      sudo lshw: http://pastebin.com/raw.php?i=6cAMFQsY

    I was counting on having two GPUs so I could run CUDA programs on the discrete NVIDIA GPU while X was handled by the integrated Intel GPU, to prevent kernel execution timeouts. Why doesn't the Intel HD Graphics 3000 GPU show up? Are there any tests I could run to verify the presence of an integrated GPU?

    Read the article

  • Should all new web projects build their backend based on xml/json result sets?

    - by Blankman
    If you were building a new SaaS project, would it make sense to start with all of the backend services returning XML/JSON? These days you need to build for both the web and mobile devices, and with a backend that is built from the start to return XML and JSON, you are ready to go mobile (all services contain the business logic, so you won't be repeating anything). The web tier would be MVC, so the controller would just route the request to your service backend and convert the JSON or XML to HTML. The obvious downside is that you have to build a backend and then another web project that calls your backend. But this also works in your favor, as it forces you to separate your concerns and not leak business logic into your controller/view layer. Thoughts?
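
    A minimal Java sketch of the thin-controller idea described above; the backend URL is hypothetical, and real code would use a template engine rather than string concatenation:

      import java.net.URI;
      import java.net.http.HttpClient;
      import java.net.http.HttpRequest;
      import java.net.http.HttpResponse;

      // Sketch: the web controller owns no business logic. It calls the same
      // JSON backend the mobile clients use and only converts JSON to HTML.
      class ProductController {
          private final HttpClient http = HttpClient.newHttpClient();

          String show(String productId) throws Exception {
              HttpRequest req = HttpRequest.newBuilder()
                      .uri(URI.create("https://api.example.com/products/" + productId))
                      .header("Accept", "application/json")
                      .build();
              HttpResponse<String> res = http.send(req, HttpResponse.BodyHandlers.ofString());
              // The controller's only job: translate the backend's JSON into a view.
              return "<html><body><pre>" + res.body() + "</pre></body></html>";
          }
      }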

    Read the article

  • Configurable Objects - Introduction

    - by Anthony Shorten
    One of the interesting facilities in the framework is the Configurable Objects functionality (it is also known as Task Optimization and as Cool Tools). The idea is that any implementation can create its own views of the base product objects and services and implement functionality against those new views. For example, in Oracle Utilities Customer Care and Billing there is a Person object. That object is used to store and manage information about individuals as well as companies. In the base product, you would use the Person Maintenance screen and fill in some of the screen when you wanted to register or maintain an individual, and fill out other parts of the screen when you wanted to register or maintain a company. This can be somewhat confusing to some customers. Using Configurable Objects, this can be simplified. A business object can be created that is a view of any object. For example, you could create a Human business object which would cover the aspects of the Person object pertaining to an individual, and a Company business object to cover the aspects unique to a company. Even the tag names (i.e. field names) in the object can be changed to be more like what the implementation is familiar with. The object can also restructure the underlying object. For example, a common identifier for an individual in the USA is the Social Security number; this value is a Person Identifier (as this varies in each country). In the new Human object you can remap the Person Identifier as a Social Security number. To define a business object, you use a schema editor built into the browser user interface and a mapping language to set up the business objects. An extract of the schema for the Human business object illustrates the language, with mapping tags as well as formatting and other tags. This information can be built manually or using a wizard which generates the base structure for you to alter. It is all stored as metadata when saved. Once a business object is built, it can be used as a basis for code or for other business objects (we support inheritance), called by a screen (called a UI Map), or even exposed as a Web Service. This is just a start with Configurable Objects, as you can also create views of base services called Business Services, Service Scripts used for non-object or complex object processing (as well as other things), UI Maps used for screens, and Data Areas to reuse definitions across multiple objects. Configurable Objects are powerful and I have only really touched on them here. Over the next few months I hope to add many more entries about them.

    Read the article

  • Enterprise Java and Hibernate [closed]

    - by KyelJmD
    I am now done learning Java SE. Now I want to jump out of my comfort zone and learn how to create web applications using Java. First, my question: What is the difference between enterprise Java and Hibernate? Are they alike, or is Hibernate a framework for Java? Now that I am done learning "core" Java, I need to move ahead. What are the things I need to learn in order to use Hibernate and to learn enterprise Java? As of now I have no idea where I should go after learning core Java, and I don't know what path to take.
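
    For context, Hibernate is an ORM (object-relational mapping) framework for Java, not part of the core language. A minimal JPA-style entity shows the kind of mapping it automates (a sketch, not a full setup - Hibernate also needs configuration and a session factory):

      import javax.persistence.Entity;
      import javax.persistence.GeneratedValue;
      import javax.persistence.Id;

      // A minimal JPA entity of the kind Hibernate maps: one class = one
      // table, with no SQL written by hand.
      @Entity
      public class Book {
          @Id
          @GeneratedValue
          private Long id;

          private String title;

          protected Book() { }                    // no-arg constructor for the ORM
          public Book(String title) { this.title = title; }

          public Long getId() { return id; }
          public String getTitle() { return title; }
      }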

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting, and has seen tremendous growth in the last few years, is Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are: a comprehensive database service catalog (includes single instance, RAC, and Data Guard); additional storage options for Snap Clone (includes support for the database feature CloneDB); improved Rapid Start Kits; extensible metering and chargeback; and miscellaneous enhancements.

    1. Comprehensive Database Service Catalog. Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are "Service Catalogs: Defining Standardized Database Service" and "High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service" [Oracle MAA].

    EM12c has come with an out-of-the-box service catalog and self-service portal since release 1. For customers, it provides the following benefits: presenting a collection of standardized database service definitions; defining standardized pools of hardware and software for provisioning; role-based access to cater to different classes of users; automated procedures to provision the predefined database definitions; setting up chargeback plans based on service tiers and database configuration sizes; etc.

    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - single instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration: standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites); the standby databases can be single instance, RAC, or RAC One Node databases; multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software; the standby databases can be in either mount or read-only (requires the Active Data Guard option) mode; all database versions 10g to 12c are supported (as certified with EM12c); all three protection modes can be used (Maximum Availability, Maximum Performance, Maximum Protection); and log apply can be set to sync or async along with the required apply lag.

    The different service levels or service tiers are popularly represented using metals - platinum, gold, silver, bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below:

      Primary | Standby [1 or more]
      SI      | -
      SI      | SI
      RAC     | -
      RAC     | SI
      RAC     | RAC
      RON     | -
      RON     | RON

    (All of the above are supported in EM12c R4; RON = RAC One Node, which is supported via custom post-scripts in the service template.) A sample service catalog is described below.
    Here we have defined 4 service levels, deployed across 2 data centers, with 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled "Build Service Catalog using EM12c DBaaS", I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone. In my previous blog posts, I have described the Snap Clone feature in detail. Essentially, it provides a storage-agnostic, self-service, rapid, and space-efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%), all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued the dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low-level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low-level actions. We thus deliver the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style.

    In release 4, we expand the scope of options supported by Snap Clone with the addition of database CloneDB. While CloneDB is not a new feature - it was first introduced in the 11.2.0.2 patch set - it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the Snap Clone feature. For more information on CloneDB, I highly recommend reading the following sources: the blog by Tim Hall, "Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2", and the Oracle OpenWorld presentation by CERN, "Efficient Database Cloning using Direct NFS and CloneDB".

    The advantages of the new CloneDB integration with EM12c Snap Clone are: space and time savings; ease of setup (no additional software is required other than the Oracle database binary); it works on all platforms; it reduces the dependence on storage administrators; the cloning process is fully orchestrated by EM12c and delivered to developers/DBAs/QA testers via the self-service portal; it uses dNFS to deliver better performance, availability, and scalability than kernel NFS; and the complete lifecycle of the clones is managed by EM12c - performance, configuration, etc.

    3. Improved Rapid Start Kits. DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single-command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS).
    One command creates all the cloud artifacts, like roles, administrators, credentials, database profiles, the PaaS Infrastructure Zone, database pools and service templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self-service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups. The Rapid Start Kit in reality is a simple emcli script which takes a bunch of XML files as input and executes the complete automation in a matter of seconds. On a full-rack Exadata, it took only 40 seconds to set up PDBaaS end to end. This kit works both for Oracle's engineered systems, like Exadata and SuperCluster, and on commodity hardware. One can draw a parallel to the Exadata One Command script, which likewise takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.

    Steps to use the kit: The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup. It can be run from this default location or from any server which has the emcli client installed. For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py. For Exadata, special integration is provided to reduce the number of inputs even further; the script to use for this scenario is dbaas/setup/exadata_cloud_setup.py. The database_cloud_setup.py script takes two inputs: a cloud boundary XML, which defines the cloud topology in terms of the zones and pools along with the host names, Oracle home locations or container database names that will be used as infrastructure for provisioning database services (this file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM); and an input XML, which captures inputs for users, roles, profiles, service templates, etc. - essentially, all inputs required to define the DB services and other settings of the self-service portal. Once all the XML files have been prepared, invoke the script as follows for PDBaaS:

      emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, the cloud admin, the SSA admin, etc. Once complete, you can simply log into EM as the self-service user and request databases from the portal. More information is available in the Rapid Start Kit chapter in the Cloud Administration Guide.

    4. Extensible Metering and Chargeback. Last but not least, metering and chargeback in release 4 has been made extensible in every possible regard. The new extensibility features allow customers, partners, system integrators, etc. to: extend chargeback to any target type managed in EM; promote any metric in EM as a chargeback entity; extend the list of charge items via metric or configuration extensions; and model abstract entities like the number of backup requests, job executions, support requests, etc. A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and to assign cost centers, all via the command line. More information is available in the Chargeback API chapter in the Cloud Administration Guide.

    5. Miscellaneous Enhancements. There are other miscellaneous, yet important, enhancements that are worth a mention.
    These have mostly been asked for by customers like you. They are:

      Custom naming of DB services - self-service users can provide custom names for the DB SID, DB service, schemas, and tablespaces; every custom name is validated for uniqueness in EM.
      'Create like' of service templates - creating variants of a service template is now only a click away. This is vital when you publish service templates to represent different database sizes or service levels.
      Profile viewer - view the details of a profile, such as datafiles, control files, snapshot IDs, and export/import files, prior to its selection in the service template.
      Cleanup automation for failed and successful requests - a single emcli command cleans up all remnant artifacts of a failed request; cleanup can be performed on a per-request basis or for the entire pool; as an extension, you can also delete successful requests.
      Improved delete-user workflow - allows administrators to reassign cloud resources to another user or delete all of them.
      Support for multiple tablespaces for Schema as a Service - in addition to multiple schemas, the user can also specify multiple tablespaces per request.

    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!

    References: Cloud Management page on OTN; Cloud Administration Guide [documentation].

    -- Adeesh Fulay (@adeeshf)

    Read the article

  • New Exadata, Exalogic, Exalytics Public References

    - by Javier Puerta
    Deutsche Telekom (Germany) - Exalytics, OBIEE, Essbase, ACS (with partners T-Systems and Deloitte Consulting) - Published: June 04, 2014
    Daelim Industrial (Korea) [Korean] - Oracle Exalytics, Oracle Exadata, Oracle Hyperion (with partner Kolon Benit) - Published: May 29, 2014
    Algar Telecom (Brazil) [also in Portuguese] - Oracle Exadata, Oracle Advanced Customer Support Services - Published: May 23, 2014
    Globacom (Nigeria) [also in Spanish] - Big Data Appliance, NoSQL DB Community Edition, ACS (with partner mCentric, Ltd.) - Published: May 22, 2014
    MagtiCom LTD (Georgia) - Oracle Exadata, Oracle Consulting (with partner UGT) - Published: May 21, 2014
    Hospital Alemão Oswaldo Cruz (Brazil) [local language] - Oracle Exadata, Oracle Active Data Guard, Oracle ZFS (with partner Teiko) - Published: May 13, 2014
    Accelya Kale (India) - Oracle Exadata (with partner Softcell Technologies Limited) - Published: May 12, 2014
    Autoridade Tributária e Aduaneira (Portugal) [also in Portuguese] - Exadata, Exalogic (with partner Timestamp) - Published: May 06, 2014
    Reliance Commercial Finance (India) - Oracle Exadata, Oracle Exalogic, Oracle WebLogic Suite, Oracle Advanced Customer Support Services - Published: May 01, 2014

    Read the article

  • Error message during update from 13.04 to 13.10

    - by layonhands
    The following was reported after I attempted to report the problem back to Ubuntu:

    The problem cannot be reported: You have some obsolete package versions installed. Please upgrade the following packages and check if the problem still occurs: ubuntu-release-upgrader-gtk, apport, apport-gtk, apport-symptoms, apt, apt-utils, at-spi2-core, binutils, dbus, gcc-4.7-base, gdb, gir1.2-atk-1.0, gir1.2-gtk-3.0, glib-networking, glib-networking-common, glib-networking-services, gnupg, gpgv, ifupdown, initramfs-tools, initramfs-tools-bin, kmod, libappindicator3-1, libapt-inst1.5, libapt-pkg4.12, libasound2, libatk-bridge2.0-0, libatk1.0-0, libatk1.0-data, libatspi2.0-0, libc-bin, libc6, libcups2, libdbus-1-3, libdbusmenu-glib4, libdbusmenu-gtk3-4, libdrm-intel1, libdrm-nouveau2, libdrm-radeon1, libdrm2, libgail-3-0, libgcc1, libgcrypt11, libglib2.0-0, libglib2.0-data, libgnutls26, libgomp1, libgstreamer-plugins-base1.0-0, libgstreamer1.0-0, libgtk-3-0, libgtk-3-bin, libgtk-3-common, libgudev-1.0-0, libicu48, libindicator3-7, libkmod2, liblcms2-2, libpci3, libplymouth2, libpolkit-agent-1-0, libpolkit-backend-1-0, libpolkit-gobject-1-0, libprocps0, libpython-stdlib, libpython2.7, libpython2.7-minimal, libpython2.7-stdlib, libpython3-stdlib, libpython3.3-minimal, libpython3.3-stdlib, libssl1.0.0, libstdc++6, libtiff5, libudev1, libx11-6, libx11-data, libx11-xcb1, libxcb-dri2-0, libxcb-glx0, libxcb-render0, libxcb-shm0, libxcb1, libxcursor1, libxext6, libxfixes3, libxi6, libxinerama1, libxml2, libxrandr2, libxrender1, libxres1, libxt6, libxtst6, libxxf86vm1, lsb-base, lsb-release, module-init-tools, multiarch-support, openssl, passwd, pciutils, perl, perl-base, perl-modules, plymouth, plymouth-theme-ubuntu-text, policykit-1, procps, python, python-gi, python-minimal, python2.7, python2.7-minimal, python3, python3-apport, python3-distupgrade, python3-gi, python3-minimal, python3-problem-report, python3-software-properties, python3-update-manager, python3.3, python3.3-minimal, rsyslog, shared-mime-info, software-properties-common, software-properties-gtk, tar, tzdata, ubuntu-release-upgrader-core, ubuntu-release-upgrader-gtk, udev, update-manager, update-manager-core, update-notifier, update-notifier-common

    If this question has already been answered, I'm sorry for the repost, but I would appreciate a link to the fix. Thanks. FYI: Dell Latitude D630, Intel Centrino processor. Also, the updater is currently running what seems to be the update. I will report back when it is done going through its process to let you know if it is in fact the 13.10 update.

    Update 2: The system went through an update, but it wasn't for the OS. I think it was an update for the error message mentioned above. Now the OS update is running the 'distribution upgrade' portion of the update. This is further than it had gone before. Again, I will report back once this is done to let you know whether or not the update was successful.

    Final Update: I don't know for sure what happened, but I'm almost sure that the error mentioned above was resolved in the first update, prior to the 13.10 update. All set.

    Read the article

  • Install Ubuntu on Mac OS X Mavericks, MacBook Air

    - by Unknown
    I was wondering if it's okay to install Ubuntu on my MacBook Air, and if it is, please let me know the procedure. I would prefer to do it WITHOUT using rEFInd (not sure if that's the name). The following is my system specification.

    Hardware Overview:
      Model Name: MacBook Air
      Model Identifier: MacBookAir6,2
      Processor Name: Intel Core i5
      Processor Speed: 1.3 GHz
      Number of Processors: 1
      Total Number of Cores: 2
      L2 Cache (per Core): 256 KB
      L3 Cache: 3 MB
      Memory: 8 GB

    System Software Overview:
      System Version: OS X 10.9.2 (13C1021)
      Kernel Version: Darwin 13.1.0
      Boot Volume: Macintosh HD
      Boot Mode: Normal

    MacBook Air (13-inch, Mid 2013), OS X Mavericks (10.9.2)

    Read the article

  • Microsoft Codename Dallas

    - by kaleidoscope
    Dallas is Microsoft's information service offering, which allows developers and information workers to find, acquire, and consume published datasets and web services. Users subscribe to datasets and web services of interest and can integrate the information into their own applications via a standardized set of APIs. Data can also be analyzed online using the Dallas Service Explorer, or externally using the PowerPivot add-in for Excel. We can explore all the datasets and subscribe to the catalog to use the data. Dallas developer portal: https://www.sqlazureservices.com. More information can be found at: http://channel9.msdn.com/learn/courses/Azure/Dallas/IntroToDallas/Overview/

    Lokesh, M
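
    Consuming a Dallas-style dataset would look roughly like the following Java sketch. The endpoint and the authentication header name are placeholders, since the Dallas portal specifies the exact URL and account-key mechanism for each dataset:

      import java.net.URI;
      import java.net.http.HttpClient;
      import java.net.http.HttpRequest;
      import java.net.http.HttpResponse;

      // Sketch: fetch a subscribed dataset over HTTP with an account key.
      // Both the URL and the header name below are placeholders.
      public class DallasClient {
          public static void main(String[] args) throws Exception {
              HttpRequest request = HttpRequest.newBuilder()
                      .uri(URI.create("https://api.sqlazureservices.com/SomeDataset"))  // placeholder
                      .header("X-Account-Key", "YOUR-ACCOUNT-KEY")                      // placeholder header
                      .build();
              HttpResponse<String> response = HttpClient.newHttpClient()
                      .send(request, HttpResponse.BodyHandlers.ofString());
              System.out.println(response.body());  // typically an XML/Atom result set
          }
      }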

    Read the article

  • Sun's access manager taken over by former employees of the company: OpenSSO becomes OpenAM thanks to ForgeRock

    Sun's access manager taken over by former employees of the company: OpenSSO becomes OpenAM under the aegis of Simon Phipps, now an employee of ForgeRock. Among the family of Sun technologies whose fate, following the acquisition by Oracle, has been an open question, here is OpenSSO. OpenSSO is an open-source access manager for web services, based on a single sign-on mechanism, that provides "essential identity services to simplify, transparently, the execution of single sign-on". Under Oracle's stewardship, this technology seemed to have been shunted onto a siding. The software giant already had its own solutions even before the acqui...

    Read the article

  • TFS vs. Star Team comparison

    - by ryanabr
    I have a sales call today in which the person I am talking to is interested in what TFS would give them over StarTeam. The first thing I believe I can say is that TFS is cheaper! Especially if you are doing Microsoft development already and your team members have MSDN subscriptions, as the CALs for TFS are covered by the MSDN subscription. The other thing that I noticed about StarTeam was all of the references to 'readiness' and 'integration'. While that is great, it means that other tools will be needed to provide the features that are already bundled with TFS, like SharePoint integration, as well as Analysis Services and Reporting Services to provide visibility on the web with reports on project health and team velocity. Below is a quick table that I was able to throw together to answer some high-level questions:

      Feature                  | TFS                                        | StarTeam
      Work Items               | X                                          | X
      Work Item custom queries | X                                          | X
      Customizable Work Items  | X                                          |
      Web Portal View          | X                                          | X
      Reporting                | X                                          | Integration
      Version Control          | X                                          | X
      Build Management         | X                                          | Integration
      Integrated Test Suite    | X                                          | Integration
      Cost                     | Free for first 5 / MSDN sub covers others  | $7500 / seat

    Read the article

  • Simple SST Unhooker

    This article describes a simple unhooker that restores the original SST (System Service Table) entries hooked by unknown rootkits that hide certain services and processes.

    Read the article

  • Workflow Overview & Best Practices - EMEA

    - by Annemarie Provisero
    ADVISOR WEBCAST: Workflow Overview & Best Practices - EMEA. PRODUCT FAMILY: EBS - ATG - Workflow. February 16, 2011 at 10:00 am CET, 02:30 pm India, 06:00 pm Japan, 08:00 pm Australia. This 1.5-hour session is recommended for technical and functional users who are interested in a generic overview of the tools and utilities available for taking a closer look at the Java Virtual Machine used in an E-Business Suite environment and how to tune it. Topics will include: an introduction to Workflow; useful utilities and tools; best practices; and Q&A. A short live demonstration (only if applicable) and a question-and-answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness of our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session. The above webcast is a service of the E-Business Suite communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article
