Search Results

Search found 21861 results on 875 pages for 'external library'.


  • SQLAuthority News – History of the Database – 5 Years of Blogging at SQLAuthority

    - by pinaldave
    Don’t miss the Contest: Participate in the 5th Anniversary Contest. Today is this blog’s birthday, and I want to do a fun, informative blog post. Five years ago this day I started this blog. Intention – my personal web blog. I wrote this blog for myself, and still today whatever I learn I share here. I don’t want to wander too far off topic, though, so I will write about two of my favorite things – history and databases. And what better way to cover these two topics than to talk about the history of databases. If you want to be technical, databases as we know them today only date back to the late 1960s and early 1970s, when computers began to keep records and store memories. But the idea of memory storage didn’t just appear 40 years ago – there was a history behind wanting to keep these records. In fact, the written word originated as a way to keep records – ancient man didn’t suddenly decide they wanted to read novels; they needed a way to keep track of the harvest, of their flocks, and of the tributes paid to the local lord. And that is how writing and the database began. You could consider the cave paintings from 17,000 years ago at Lascaux, France, or the clay tokens from the ancient Sumerians in 8,000 BC to be the first instances of record keeping – and thus databases. If you prefer, you can consider the advent of written language to be the first database. Many historians believe the first written language appeared in the 37th century BC, with Egyptian hieroglyphics. The ancient Sumerians, not to be outdone, also created their own written language within a few hundred years. Databases could be more closely described as collections of information, in which case the Sumerians win the prize for the first archive. A collection of 20,000 stone tablets was unearthed in 1964 near the modern-day city of Tell Mardikh, in Syria. This ancient database dates from 2,500 BC, and appears to be a sort of law library where apprentice scribes copied important documents. Further archaeological digs hope to uncover the palace library, and thus an even larger database. Of course, the most famous ancient database would have to be the Royal Library of Alexandria, the great collection of records and wisdom in ancient Egypt. It was created by Ptolemy I, and existed from 300 BC through 30 AD, when Julius Caesar effectively erased the hard drives by accidentally setting fire to it. As any programmer who has forgotten to hit “save”, or who has experienced a sudden power outage, knows, thousands of hours of work were lost in a single instant. Databases existed in very similar conditions up until recently. Cuneiform tablets gave way to papyrus, which led to vellum, and eventually modern paper and the printing press. Someday the databases we rely on so much today will become another chapter in the history of record keeping. Who knows what the databases of tomorrow will look like! Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • MEB Support to NetBackup MMS

    - by Hema Sridharan
    In MySQL Enterprise Backup 3.6, a new option was introduced to support backup to tapes via the SBT interface. SBT stands for System Backup to Tape, an Oracle API that helps to perform backup and restore jobs via media management software such as Oracle's Secure Backup (OSB). There are other storage managers, like IBM's Tivoli Storage Manager (TSM) and Symantec's NetBackup (NB), which are also supported by MEB, but we don't guarantee that they will function as expected for every release. MEB supports SBT API version 2.0. In this blog, I am primarily going to focus on the interface between MEB and Symantec's NB. If we are using tapes for backup, ensure that the tape library and tape drives are compatible.

    Test Setup
    1. Install NB 7.5 master and media servers on Linux. (NB 7.1 can also be used, but for testing purposes I used NB 7.5.)
    2. Install MEB 3.8, also on Linux.
    3. Install the NB admin console on your Windows desktop and configure the NB master server from there.
    Note: Ensure that you have root user permission to install NetBackup.

    Configuration Steps for MEB and NB
    Once MEB and NB are installed, ensure that NB is linked to MEB by specifying the library /usr/openv/netbackup/bin/libobk.so64 in the mysqlbackup command line using --sbt-lib-path. Configure the NB master server from the Windows console; that is, configure the storage units by specifying the storage unit name, disk type, media server name, etc. Create NetBackup policies that are user selectable, but please make sure that the policy type is "Oracle". Define the clients where MEB will be executed; sometimes this will be a different host from where MEB is run, and sometimes the same media server where NB and the tapes are attached.

    Once the installation and configuration steps are performed for MEB and NB, the next part is the actual execution. MEB should be run as a single-file backup using the --backup-image option with the prefix sbt: (a tag which tells MEB that it should stream the backup image through the SBT interface), which is sent to the NB client via the SBT interface. The resulting backup image is stored where NB stores the images that it backs up. The following diagram shows how MEB interacts with the MMS through the SBT interface.

    Backup
    The following parameters should also be ready for the execution:
    --sbt-lib-path: path to the SBT library specific to the NetBackup MMS. The SBT lib for NetBackup is /usr/openv/netbackup/bin/libobk.so64.
    --sbt-environment: environment variables that must be defined specific to NetBackup. In our example below, we use NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB and ORACLE_HOME=/export/home2/tmp/hema/mysql-server/.

    ./mysqlbackup --port=13000 --protocol=tcp --user=root --backup-image=sbt:bkpsbtNB --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --sbt-environment="NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/" --backup-dir=/export/home2/tmp/hema/MEB_bkdir/ backup-to-image

    Once the backup is completed successfully, it should appear in the Activity Monitor in the NetBackup console. For restore, the image contents have to be extracted using the image-to-backup-dir command, and then the apply-log and copy-back steps are applied.

    ./mysqlbackup --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --backup-dir=/export/home2/tmp/hema/NBMEB/ --backup-image=sbt:bkpsbtNB image-to-backup-dir

    Now apply the logs as usual, shut down the server, perform the restore, restart the server and check the data contents.

    ./mysqlbackup --backup-dir=/export/home2/tmp/hema/NBMEB/ apply-log

    ./mysqlbackup --datadir=/export/home2/tmp/hema/mysql-server/mysql-5.5-meb-repo/mysql-test/var/mysqld.1/data/ --backup-dir=/export/home2/tmp/hema/MEB_bkpdir/ --innodb_log_files_in_group=2 --innodb_log_file_size=5M --user=root --port=13000 --protocol=tcp copy-back

    The NB console should then show the 'Restore' job as done; if you don't see it, there is something wrong with MEB or NetBackup. You can also refer to more detailed steps of the MEB and NB integration in the whitepaper here

    Read the article

  • What packages do I need to compile .tex documents using XeLaTeX?

    - by maria
    Hi, I'm aware of the existence of similar threads on this forum, but none of the replies match my problem. I'm using Ubuntu 10.04 and I hadn't had problems with fonts till I decided to use XeLaTeX instead of LaTeX (cf. http://tex.stackexchange.com/questions/12347/typesetting-a-document-using-arabic-script/12358#12358). The problem is that I'm not able to compile any .tex document using XeLaTeX, nor to properly display the XeLaTeX documentation. As I've learned thanks to the mentioned thread, XeLaTeX uses the fonts available in general in the system. I was trying to read the fontspec documentation, but it opens as a PDF with a lot of white gaps, and the terminal output (quite long) consists mostly of errors. These are just a few lines of it: Error: Missing language pack for 'Adobe-Japan1' mapping Error: Unknown font tag 'F5.1' Error (24124): No font in show Error: Unknown font tag 'F5.1' I was trying to compile a simple XeLaTeX file: \documentclass{article} \usepackage{fontspec} \setmainfont{Linux Libertine O} \begin{document} Hello World! \end{document} without success. This is the terminal output of the compilation: This is XeTeX, Version 3.1415926-2.2-0.9995.2 (TeX Live 2009/Debian) restricted \write18 enabled. entering extended mode (./ex.tex LaTeX2e <2009/09/24> Babel <v3.8l> and hyphenation patterns for english, usenglishmax, dumylang, noh yphenation, polish, loaded. (/usr/share/texmf-texlive/tex/latex/base/article.cls Document Class: article 2007/10/19 v1.4h Standard LaTeX document class (/usr/share/texmf-texlive/tex/latex/base/size10.clo)) (/usr/share/texmf-texlive/tex/xelatex/fontspec/fontspec.sty (/usr/share/texmf-texlive/tex/generic/ifxetex/ifxetex.sty) (/usr/share/texmf-texlive/tex/latex/tools/calc.sty) (/usr/share/texmf-texlive/tex/latex/xkeyval/xkeyval.sty (/usr/share/texmf-texlive/tex/generic/xkeyval/xkeyval.tex (/usr/share/texmf-texlive/tex/generic/xkeyval/keyval.tex))) (/usr/share/texmf-texlive/tex/latex/base/fontenc.sty (/usr/share/texmf-texlive/tex/xelatex/euenc/eu1enc.def) (/usr/share/texmf-texlive/tex/xelatex/euenc/eu1lmr.fd)) fontspec.cfg loaded. (/usr/share/texmf-texlive/tex/xelatex/fontspec/fontspec.cfg))kpathsea: Invalid fontname `Linux Libertine O', contains ' ' ! Font \zf@basefont="Linux Libertine O" at 10.0pt not loadable: Metric (TFM) fi le or installed font not found. \zf@fontspec ...ntname \zf@suffix " at \f@size pt \unless \ifzf@icu \zf@set@... l.3 \setmainfont{Linux Libertine O} ? I can't find Linux Libertine O.
    Searching for otf- with aptitude gives as a result: maria@maria-laptop:/etc/fonts$ aptitude search otf p emdebian-rootfs - emdebian root filesystem support p libotf-bin - A Library for handling OpenType Font - utilities p libotf-dev - A Library for handling OpenType Font - development i libotf0 - A Library for handling OpenType Font - runtime p libotf0-dbg - The libotf libraries and debugging symbols p libpam-dotfile - A PAM module which allows users to have more than one password p livecd-rootfs - construction script for the livecd rootfs p makebootfat - Utility to create a bootable FAT filesystem p otf-ipaexfont - Japanese OpenType font, IPAexFont (IPAexGothic/Mincho) p otf-ipaexfont-gothic - Japanese OpenType font, IPAexFont (IPAexGothic) p otf-ipaexfont-mincho - Japanese OpenType font, IPAexFont (IPAexMincho) p otf-ipafont - Japanese OpenType font set, IPAfont p otf-ipafont-gothic - Japanese OpenType font set, IPA Gothic font p otf-ipafont-mincho - Japanese OpenType font set, IPA Mincho font p otf-stix - the Scientific and Technical Information eXchange fonts p otf-thai-tlwg - Thai fonts in OpenType format p otf-yozvox-yozfont - Japanese proportional Handwriting OpenType font p otf2bdf - generate BDF bitmap fonts from OpenType outline fonts p robotfindskitten - Zen Simulation of robot finding kitten So the font in question is not just uninstalled, but not available, if I'm not wrong. Does that mean that I lack some repositories? I was also trying to apply the solution from the thread How do I reinstall default fonts?, but the result is: maria@maria-laptop:~$ sudo apt-get install msttcorefonts [sudo] password for maria: Reading package lists... Done Building dependency tree Reading state information... Done Note, selecting ttf-mscorefonts-installer instead of msttcorefonts ttf-mscorefonts-installer is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. maria@maria-laptop:~$ It seems that this is not a usual problem for users of XeLaTeX; nobody in the mentioned thread suggested installation of anything other than TeX Live. Thanks in advance

    Read the article

  • CacheAdapter 2.4 – Bug fixes and minor functional update

    - by Glav
    Note: If you are unfamiliar with the CacheAdapter library and what it does, you can read all about its awesome ability to utilise memory, Asp.Net Web, Windows Azure AppFabric and memcached caching implementations via a single unified, simple to use API from here and here. The CacheAdapter library is receiving an update to version 2.4 and is currently available on Nuget here. Update: The CacheAdapter has actually just had a minor revision to 2.4.1. This significantly increases the performance and reliability in the memcached scenario under more extreme loads. General to moderate usage won't see any noticeable difference though. Bugs This latest version fixes a bug that is only present in the memcached implementation and is only seen rarely and intermittently (making it particularly hard to find). The bug is where a cache node would be removed from the farm when errors in deserialization of cached objects would occur due to serialised data not being read from the stream in its entirety. The code also contains enhancements to better surface serialization exceptions to aid in the debugging process. This is also specifically targeted at the memcached implementation. This is important when moving from something like memory or Asp.Net Web caching mechanisms to memcached, where the serialization rules are not as lenient. There are a few other minor bug fixes, code cleanup and a little refactoring. Minor feature addition In addition to this bug fix, many people have asked for a single setting to either enable or disable the cache. In this version, you can disable the cache by setting the IsCacheEnabled flag to false in the application configuration file. Something like the example below: <Glav.CacheAdapter.MainConfig> <setting name="CacheToUse" serializeAs="String"> <value>memcached</value> </setting> <setting name="DistributedCacheServers" serializeAs="String"> <value>localhost:11211</value> </setting> <setting name="IsCacheEnabled" serializeAs="String"> <value>False</value> </setting> </Glav.CacheAdapter.MainConfig> Your reasons to use this feature may vary (perhaps some performance testing or problem diagnosis). At any rate, disabling the cache will cause every attempt to retrieve data from the cache to result in a cache miss and return null. If you are using the ICacheProvider with the delegate/Func<T> syntax to populate the cache, this delegate method will get executed every single time. For example, when the cache is disabled, the following delegate/Func<T> code will be executed every time: var data1 = cacheProvider.Get<SomeData>("cache-key", DateTime.Now.AddHours(1), () => { // With the cache disabled, this data access code is executed every attempt to // get this data via the CacheProvider. var someData = new SomeData() { SomeText = "cache example1", SomeNumber = 1 }; return someData; }); One final note: If you access the cache directly via the ICache instance, instead of the higher level ICacheProvider API, you bypass this setting and still access the underlying cache implementation. Only the ICacheProvider instance observes the IsCacheEnabled setting. Thanks to those individuals who have used this library and provided feedback. If you have any suggestions or ideas, please submit them to the issue register on bitbucket (which is where you can grab all the source code from too)

    Read the article

  • How to deploy global managed beans

    - by frank.nimphius
    "Global managed" beans is the term I use in this post to describe beans that are used across applications. Global managed beans contain helper – or utility – methods, similar to or in place of static helper classes like JSFUtils and ADFUtils. The difference between global managed beans and static helper classes like JSFUtils and ADFUtils is that they are EL accessible, providing reusable functionality that is ready to use on UI components and – if accessed from Java – in other managed beans. For example, the ADF Faces page template (af:pageTemplate) allows you to define attributes for the consuming page to pass object references or strings into it. It does not have method attributes that allow command components contained in a template to invoke listeners in managed beans and the ADF binding layer, or to execute actions. To create templates that provide global button or menu functionality, like logon, logout, print etc., an option for developers is to deploy managed beans with the ADF Faces page templates. To deploy a managed bean with a page template, create an ADF library from the project containing the template definition and import this ADF library into the target project using the Resource palette. When importing an ADF library, all its content is added to the project, including page template definitions, managed bean sources and configurations. More about page templates: http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e12419/tagdoc/af_pageTemplate.html Another use case for globally configured managed beans is creating helper methods to be used in many applications. Instead of creating a base managed bean class that is then extended by all managed beans used in applications, you can deploy a managed bean in a JAR file and add the faces-config.xml file with the managed bean configuration to the JAR's META-INF folder as shown below. Using a globally configured managed bean allows you to use Expression Language in the UI to access common functionality, but also to use Java in application-specific managed beans. Storing the faces-config.xml file in the JAR file's META-INF directory automatically makes it available when the JAR file is found in the class path of an application.
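    For illustration, a minimal sketch of the kind of faces-config.xml that would be packaged at META-INF/faces-config.xml inside the JAR (the original post shows this as a screenshot; the bean name, class and scope below are hypothetical placeholders):

      <?xml version="1.0" encoding="UTF-8"?>
      <faces-config xmlns="http://java.sun.com/xml/ns/javaee" version="1.2">
        <!-- Registers a reusable helper bean under the EL name #{globalHelper} -->
        <managed-bean>
          <managed-bean-name>globalHelper</managed-bean-name>
          <managed-bean-class>com.example.view.GlobalHelperBean</managed-bean-class>
          <managed-bean-scope>request</managed-bean-scope>
        </managed-bean>
      </faces-config>

    With a file like this in the JAR's META-INF directory, expressions referencing #{globalHelper} become available to any application that has the JAR on its class path.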

    Read the article

  • Selling Federal Enterprise Architecture (EA)

    - by TedMcLaughlan
    Selling Federal Enterprise Architecture: a taxonomy of subject areas from which to develop a prioritized marketing and communications plan to evangelize EA activities within and among US Federal Government organizations and constituents. Any and all feedback is appreciated, particularly in developing and extending this discussion as a tool for use – more information and details are also available.

    "Selling" the discipline of Enterprise Architecture (EA) in the Federal Government (particularly in non-DoD agencies) is difficult, notwithstanding the general availability and use of the Federal Enterprise Architecture Framework (FEAF) for some time now, and the relatively mature use of the reference models in the OMB Capital Planning and Investment Control (CPIC) cycles. EA in the Federal Government also tends to be a very esoteric and hard-to-decipher conversation – early apologies to those who agree to continue reading this somewhat lengthy article. Alignment to the FEAF and OMB compliance mandates is long underway across the Federal Departments and Agencies (and visible via tools like PortfolioStat and ITDashboard.gov), but there is still a gap between the top-down compliance directives and enablement programs, and the bottom-up awareness and effective use of EA for either IT investment management or actual mission effectiveness. "EA isn't getting deep enough penetration into programs, components, sub-agencies, etc.", verified a panelist at the most recent EA Government Conference in DC. Newer guidance from OMB may be especially difficult to handle where bottom-up input can't be accurately aligned, analyzed and reported via standardized EA discipline at the Agency level – for example in addressing the new (for FY13) Exhibit 53D "Agency IT Reductions and Reinvestments" and the information required for "Cloud Computing Alternatives Evaluation" (supporting the new Exhibit 53C, "Agency Cloud Computing Portfolio"). Therefore, EA must be "sold" directly to the communities that matter, from a coordinated, proactive messaging perspective that takes BOTH the Program-level value drivers AND the broader Agency mission and IT maturity context into consideration. Selling EA means persuading others to take additional time and possibly assign additional resources, for a mix of direct and indirect benefits – many of which aren't likely to be realized in the short term. This means there's probably little current, allocated budget to work with; ergo the challenge of trying to sell an "unfunded mandate". Also, the concept of "Enterprise" in large Departments like Homeland Security tends to cross all kinds of organizational boundaries – as Richard Spires recently indicated: "...organizational boundaries still trump functional similarities. Most people understand what we're trying to do internally, and at a high level they get it. The problem, of course, is when you get down to them and their system and the fact that you're going to be touching them...there's always that fear factor." It is quite clear to the Federal IT Investment community that for EA to meet its objective, understandable, relevant value must be measured and reported using a repeatable method – as described by GAO's recent report "Enterprise Architecture Value Needs To Be Measured and Reported". What's not clear is the method or guidance to sell this value. In fact, the current GAO "Framework for Assessing and Improving Enterprise Architecture Management (Version 2.0)", a.k.a. the "EAMMF", does not include words like "sell", "persuade", "market", etc., except in a reference (within Core Element 19, "Organization business owner and CXO representatives are actively engaged in architecture development") to a brief section in the CIO Council's 2001 "Practical Guide to Federal Enterprise Architecture", entitled "3.3.1. Develop an EA Marketing Strategy and Communications Plan." Furthermore, Core Element 19 of the EAMMF is advised to be applied in "Stage 3: Developing Initial EA Versions". This kind of EA sales campaign truly should start much earlier in the maturity progression, i.e. in Stages 0 or 1.

    So, what are the understandable, relevant benefits (or value) to sell, that can find an agreeable, participatory audience, and can pave the way towards success of a longer-term, funded set of EA mechanisms that can be methodically measured and reported? Pragmatic benefits from a useful EA that can help overcome the fear of change? And how should they be sold? Following is a brief taxonomy (it's a taxonomy, to help organize SME support) of benefit-related subjects that might make the most sense in creating the messages and organizing an initial "engagement plan" for evangelizing EA "from within" – an EA "Sales Taxonomy" of sorts. We're not boiling the ocean here; the subjects included are ones that currently appear to be urgently relevant to the current Federal IT investment landscape. Note that successful dialogue on these topics is directly usable as input or guidance for actually developing early-stage, "Fit-for-Purpose" (a DoDAF term) Enterprise Architecture artifacts, as prescribed by common methods found in most EA methodologies, including FEAF, TOGAF, DoDAF and our own Oracle Enterprise Architecture Framework (OEAF). The taxonomy below is organized by (1) Target Community, (2) Benefit or Value, and (3) EA Program Facet – as in: "Let's talk to (1: Community Member) about how and why (3: EA Facet) the EA program can help with (2: Benefit/Value)". Once the initial discussion targets and subjects are approved (so that they can be measured and reported), a "marketing and communications plan" can be created. A working example follows the taxonomy.

    Enterprise Architecture Sales Taxonomy Draft, Summary Version

    1. Community
       1.1. Budgeted Programs or Portfolios – Communities of Purpose (CoPR)
          1.1.1. Program/System Owners (Senior Execs) creating or executing Acquisition Plans
          1.1.2. Program/System Owners facing strategic change
             1.1.2.1. Mandated
             1.1.2.2. Expected/Anticipated
          1.1.3. Program Managers – creating Employee Performance Plans
          1.1.4. CO/COTRs – creating Contractor Performance Plans, or evaluating Value Engineering Change Proposals (VECP)
       1.2. Governance & Communications – Communities of Practice (CoP)
          1.2.1. Policy Owners
             1.2.1.1. OCFO
                1.2.1.1.1. Budget/Procurement Office
                1.2.1.1.2. Strategic Planning
             1.2.1.2. OCIO
                1.2.1.2.1. IT Management
                1.2.1.2.2. IT Operations
                1.2.1.2.3. Information Assurance (Cyber Security)
                1.2.1.2.4. IT Innovation
             1.2.1.3. Information-Sharing/Process Collaboration (i.e. policies and procedures regarding Partners, Agreements)
          1.2.2. Governing IT Council/SME Peers (i.e. an "Architects Council")
             1.2.2.1. Enterprise Architects (assumes others exist; also assumes EA participants aren't buried solely within the CIO shop)
             1.2.2.2. Domain, Enclave, Segment Architects – i.e. the right affinity group for a "shared services" EA structure (per the EAMMF), which may be classified as Federated, Segmented, Service-Oriented, or Extended
             1.2.2.3. External Oversight/Constraints
                1.2.2.3.1. GAO/OIG & Legal
                1.2.2.3.2. Industry Standards
                1.2.2.3.3. Official public notification, response
          1.2.3. Mission Constituents – Participant & Analyst Community of Interest (CoI)
             1.2.3.1. Mission Operators/Users
             1.2.3.2. Public Constituents
             1.2.3.3. Industry Advisory Groups, Stakeholders
             1.2.3.4. Media

    2. Benefit/Value (Note: the actual benefits may not be discretely attributable to EA alone; EA is a very collaborative, cross-cutting discipline.)
       2.1. Program Costs – EA enables sound decisions regarding...
          2.1.1. Cost Avoidance – a TCO theme
          2.1.2. Sequencing – alignment of capability delivery
          2.1.3. Budget Instability – a Federal reality
       2.2. Investment Capital – EA illuminates new investment resources via...
          2.2.1. Value Engineering – contractor-driven cost savings on existing budgets, direct or collateral
          2.2.2. Reuse – reuse of investments between programs can result in savings, chargeback models; avoiding duplication
          2.2.3. License Refactoring – IT license & support models may not reflect actual or intended usage
       2.3. Contextual Knowledge – EA enables informed decisions by revealing...
          2.3.1. Common Operating Picture (COP) – i.e. cross-program impacts and synergy, relative to context
          2.3.2. Expertise & Skill – who truly should be involved in architectural decisions, both business and IT
          2.3.3. Influence – the impact of politics and relationships can be examined
          2.3.4. Disruptive Technologies – new technologies may reduce costs or mitigate risk in unanticipated ways
          2.3.5. What-If Scenarios – can become much more refined, current, verifiable; basis for Target Architectures
       2.4. Mission Performance – EA enables beneficial decision results regarding...
          2.4.1. IT Performance and Optimization – towards 100% effective, available resource utilization
          2.4.2. IT Stability – towards 100%, real-time uptime
          2.4.3. Agility – responding to rapid changes in mission
          2.4.4. Outcomes – measures of mission success, KPIs – vs. only "Outputs"
          2.4.5. Constraints – appropriate response to constraints
          2.4.6. Personnel Performance – better line-of-sight through performance plans to mission outcome
       2.5. Mission Risk Mitigation – EA mitigates decision risks in terms of...
          2.5.1. Compliance – all the right boxes are checked
          2.5.2. Dependencies – cross-agency, segment, government
          2.5.3. Transparency – risks, impact and resource utilization are illuminated quickly, comprehensively
          2.5.4. Threats and Vulnerabilities – current, realistic awareness and profiles
          2.5.5. Consequences – realization of risk can be mapped as a series of consequences, from earlier decisions or new decisions required for current issues
             2.5.5.1. Unanticipated – illuminating signals of future or non-symmetric risk; helping to "future-proof"
             2.5.5.2. Anticipated – discovering the level of impact that matters

    3. EA Program Facet (What parts of the EA can and should be communicated, using business or mission terms?)
       3.1. Architecture Models – the visual tools to be created and used
          3.1.1. Operating Architecture – the Business Operating Model/Architecture elements of the EA truly drive all other elements, plus expose communication channels
          3.1.2. Use Of – how can the EA models be used, and how are they populated, from a reasonable, pragmatic yet compliant perspective? What are the core/minimal models required? What's the relationship of these models with existing system models?
          3.1.3. Scope – what level of granularity within the models, and what level of abstraction across the models, is likely to be most effective and useful?
       3.2. Traceability – the maturity, status, completeness of the tools
          3.2.1. Status – what in fact is the degree of maturity across the integrated EA model and other relevant governance models, and who may already be benefiting from it?
          3.2.2. Visibility – how does the EA visibly and effectively prove IT investment performance goals are being reached, with positive mission outcome?
       3.3. Governance – what's the interaction, participation method; how are the tools used?
          3.3.1. Contributions – how is the EA program informed; how does it accept submissions and collect data? Who are the experts?
          3.3.2. Review – how is the EA validated, against what criteria?

    Taxonomy Usage Example:

    1. To speak with:
       a. ...a particular set of System Owners facing strategic change, via mandate (like the "Cloud First" mandate); about...
       b. ...how the EA program's visible and easily accessible Infrastructure Reference Model (i.e. "IRM" or "TRM"), if updated more completely with current system data, can...
       c. ...help shed light on ways to mitigate risks and avoid future costs associated with NOT leveraging potentially-available shared services across the enterprise...
    2. ...the following Marketing & Communications (Sales) Plan can be constructed:
       a. Create an easy-to-read "Consequence Model" that illustrates how adoption of a cloud capability (like elastic operational storage) can enable rapid and durable compliance with the mandate – using EA traceability. Traceability might be from the IRM to the ARM (that identifies reusable services invoking the elastic storage), and then to the PRM with performance measures (such as % utilization of purchased storage allocation) included in the OMB Exhibits; and
       b. Schedule a meeting with the Program Owners, timed during their Acquisition Strategy meetings in response to the mandate, to use the "Consequence Model" for advising them to organize a rapid and relevant RFI solicitation for this cloud capability (regarding alternatives for sourcing elastic operational storage); and
       c. Schedule a series of short "Discovery" meetings with the system architecture leads (as agreed by the Program Owners), to further populate/validate the "As-Is" models and frame the "To-Be" models (via scenarios), to better inform the RFI, obtain the best feedback from the vendor community, and provide potential value for and avoid impact to all other programs and systems.

    --end example--

    Note that communications with the intended audience should take a page out of the standard "Search Engine Optimization" (SEO) playbook, using keywords and phrases relating to "value" and "outcome" vs. "compliance" and "output". Searches in email boxes, and in internal and external search engines, for phrases like "cost avoidance strategies", "mission performance metrics" and "innovation funding" should yield messages and content from the EA team. This targeted, informed, practical sales approach should result in additional buy-in and participation, additional EA information contribution and model validation, development of more SMEs and quick "proof points" (with real-life testing) to bolster the case for EA. The proof point here is a successful, timely procurement that satisfies not only the external mandate and external oversight review, but also meets internal EA compliance/conformance goals and is therefore more transparently useful across the community.
In short, if sold effectively, the EA will perform and be recognized. EA won’t therefore be used only for compliance, but also (according to a validated, stated purpose) to directly influence decisions and outcomes. The opinions, views and analysis expressed in this document are those of the author and do not necessarily reflect the views of Oracle.

    Read the article

  • Using HTML5 Today part 3 – Using Polyfills

    - by Steve Albers
    Shims help when adding semantic tags to older IE browsers, but there is a huge range of other new HTML5 features that have varying support across browsers.  Polyfills are JavaScript code and/or browser plug-ins that can provide older or less featured browsers with API support.  The best polyfills will detect whether the current browser has native support, and only add the functionality if necessary.  The Douglas Crockford JSON2.js library is an example of this approach: if the browser already supports the JSON object, nothing changes.  If JSON is not available, the library adds a JSON property in the global object. This approach provides some big benefits: It lets you add great new HTML5 features to your web sites sooner. It lets the developer focus on writing to the up-and-coming standard rather than proprietary APIs. Where most one-off legacy code fixes tend to break down over time, well-done polyfills will stop executing over time (as customer browsers natively support the feature), meaning polyfill code may not need to be tested against new browsers, since they will execute the native methods instead. You should also remember that polyfills represent an entirely separate code path (and sometimes plug-in) that requires testing for support.  Also, polyfills tend to run on older browsers, which often have slower JavaScript performance.  As a result you might find that performance on older browsers is not comparable. When looking for polyfills you can start by checking the Modernizr GitHub wiki or the HTML5 Please site. For an example of a polyfill, consider a page that writes a few geometric shapes on a <canvas>: <script src="jquery-1.7.1.min.js"></script> <script> $(document).ready(function () { drawCanvas(); }); function drawCanvas() { var context = $("canvas")[0].getContext('2d'); // background context.fillStyle = "#8B0000"; context.fillRect(5, 5, 300, 100); // empty box context.strokeStyle = "#B0C4DE"; context.lineWidth = 4; context.strokeRect(20, 15, 80, 80); // circle context.arc(160, 55, 40, 0, Math.PI * 2, false); context.fillStyle = "#4B0082"; context.fill(); } </script>   The result is a simple static canvas with a box & a circle:   …to enable this functionality on a pre-canvas browser we can find a polyfill.  A check on html5please.com references FlashCanvas.  Pull down the zip and extract the files (flashcanvas.js, flash10canvas.swf, etc.) to a directory on your site.  Then, based on the documentation, you need to add a single line to your original HTML file: <!--[if lt IE 9]><script src="flashcanvas.js"></script><![endif]--> …and you have canvas functionality!  The IE conditional comments ensure that the library is only loaded in browsers where it is useful, improving page load & processing time. Like all polyfills, you should test to verify the functionality matches your expectations across the browsers you need to support.  For instance the FlashCanvas home page advertises 70% support of HTML5 Canvas spec tests.

    Read the article

  • Appropriate design / technologies to handle dynamic string formatting?

    - by Mark W
    Recently I was tasked with implementing a way of adding support for versioning of hardware packet specifications to one of our libraries. First, a bit of information about the project. We have a hardware library which has classes for each of the various commands we support sending to our hardware. These hardware modules are essentially just lights with a few buttons, and a 2 or 4 digit display. The packets typically follow the format {SOH}AADD{ETX}, where AA is our sentinel action code, and DD is the device ID. These packet specs are different from one command to the next obviously, and the different firmware versions we have support different specifications. For example, on version 1 an action code of 14 may have a spec of {SOH}AADDTEXT{ETX}, which would be AA = 14 literal, DD = device ID, TEXT = literal text to display on the device. Then we come out with a revision which adds an extended byte (or bytes) onto the end of the packet, like this: {SOH}AADDTEXTE{ETX}. Assume the TEXT field is fixed width for this example. We have now added a new field onto the end which could be used to, say, specify the color or flash rate of the text/buttons. Currently this Java library only supports one version of the commands, the latest. In our hardware library we would have a class for this command, say a DisplayTextArgs.java. That class would have fields for the device ID, the text, and the extended byte. The command class would expose a method which generates the string ("{SOH}AADDTEXTE{ETX}") using the values from the class. In practice we would create the Args class as needed, populate the fields, call the method to get our packet string, then ship that down across the CAN. Some of our other commands' specifications can vary for the same command, on the same version, depending on some runtime state. For example, another command for version 1 may be {SOH}AA{ETX}, where this action code clears all of the modules behind a specific controller device of their text. We may overload this packet to have option fields with multiple meanings, like {SOH}AAOC{ETX} where OC is literal text, which tells the controller to only clear text on a specific module type and to leave the others alone, or the spec could also have an optional format of {SOH}AADD{ETX} to clear the text off a specific device. Currently, in the method which generates the packet string, we would evaluate fields on the args class to determine which spec we will be using when formatting the packet. For this example, it would be along the lines of: if m_DeviceID != null then use {SOH}AADD{ETX} else if m_ClearOCs == true then use {SOH}AAOC{ETX} else use {SOH}AA{ETX} I had considered using XML, or a database, to store String.format format strings, which were linked to firmware version numbers in some table. We would load them up at startup, and pass in the version number of the hardware's firmware we are currently using (I can query the devices for their firmware version, but the version is not included in all packets as part of the spec). This breaks down pretty quickly because of the dynamic nature of how we select which version of the command to use. I then considered using a rule engine to possibly build out expressions which could be interpreted at runtime, to evaluate the args class's state and from that select the appropriate format string to use, but my brief look at rule engines for Java scared me away with their complexity. While it seems like it might be a viable solution, it seems overly complex. So this is why I am here.
    I wouldn't say design is my strongest skill, and I'm having trouble figuring out the best way to approach this problem. I probably won't be able to radically change the args classes, but if the trade-off was good enough, I may be able to convince my boss that the change is appropriate. What I would like from the community is some feedback on best practices / design methodologies / APIs or other resources which I could use to accomplish:
    - Logic to determine which set of commands to use for a given firmware version
    - Of those commands, which version of each command to use (based on the args class's state)
    - Keep the rules logic decoupled from the application, so as to avoid needing releases for every firmware version
    - Be simple enough so I don't need weeks of study and trial and error to implement effectively.
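    To make the format-selection idea above concrete, here is a minimal Java sketch of one such args class (all names, framing-byte values and the action code are hypothetical placeholders, not the poster's actual API):

      // Hypothetical sketch of the "clear text" command described above.
      public class ClearTextArgs {
          private static final char SOH = 0x01; // start-of-header framing byte
          private static final char ETX = 0x03; // end-of-text framing byte

          private String deviceId;   // non-null when clearing one specific device
          private boolean clearOCs;  // true when clearing only a module type

          // Mirrors the pseudocode above: pick the spec from the args state.
          public String toPacket() {
              String actionCode = "AA"; // placeholder literal action code
              if (deviceId != null) {
                  return SOH + actionCode + deviceId + ETX; // {SOH}AADD{ETX}
              } else if (clearOCs) {
                  return SOH + actionCode + "OC" + ETX;     // {SOH}AAOC{ETX}
              }
              return SOH + actionCode + ETX;                // {SOH}AA{ETX}
          }
      }

    A version-aware design could then key a map of such packet-building strategies by firmware version, which is one way to frame the decoupling the question asks about.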

    Read the article

  • D3.js: "NS_ERROR_DOM_BAD_URI: Access to restricted URI denied"

    - by user2102328
    I have an HTML file with several D3 graphs written directly into it in script tags. When I move one of the graphs out into an external JS file I get this message: "NS_ERROR_DOM_BAD_URI: Access to restricted URI denied". If I delete the d3.json code where it reads a local JSON file, the error disappears. But it has to be possible to load a JSON file from an external JS file which is embedded into an HTML page, right? d3.json("forcetree.json", function(json) { root = json; update(); });

    Read the article

  • Exception on SslStream.AuthenticateAsClient (The message was badly formatted)

    - by Noms
    I have got a weird problem going on. I am trying to connect to an Apple server via TCP/SSL. I am using a client certificate provided by Apple for push notifications. I installed the certificate on my server (Win2k3) in both the Local Trusted Root certificates and Local Personal Certificates folders. Now I have a class library that deals with that connection. When I call this class library from a console application running on the server it works absolutely fine, but when I call that class library from an ASP.NET page or ASMX web service I get the following exception: A call to SSPI failed, see inner exception. The message received was unexpected or badly formatted. This is my code: X509Certificate cert = new X509Certificate(certificateLocation, certificatePassword); X509CertificateCollection certCollection = new X509CertificateCollection(new X509Certificate[1] { cert }); // OPEN the new SSL Stream SslStream ssl = new SslStream(client.GetStream(), false, new RemoteCertificateValidationCallback(ValidateServerCertificate), null); ssl.AuthenticateAsClient(ipAddress, certCollection, SslProtocols.Default, false); ssl.AuthenticateAsClient is where the error gets thrown. This is driving me nuts. If the console application can connect fine, there must be some problem with the ASP.NET network-layer security that is failing the authentication... not sure; perhaps I need to add something, or some sort of security policy, in the web.config. Also, just to point out, I can connect fine on my local development machine with both the console and the website. Has anyone got any ideas?

    Read the article

  • Why am I getting SEHException when calling RoleEnvironment.GetConfigurationSettingValue("MYKEY")?

    - by korifrancis
    I'm trying to call RoleEnvironment.GetConfigurationSettingValue("SOMEKEY") like so: public partial class AzureBasePage : System.Web.UI.Page { protected ChargifyConnect Chargify { get { if (this._chargify == null) { this._chargify = new ChargifyConnect(); this._chargify.apiKey = RoleEnvironment.GetConfigurationSettingValue("CHARGIFY_API_KEY"); } return this._chargify; } } private ChargifyConnect _chargify = null; } My ServiceConfiguration.cscfg key looks like this: <Setting name="CHARGIFY_API_KEY" value="AbCdEfGhIjKlMnOp" /> And I get this error: Exception Details: System.Runtime.InteropServices.SEHException: External component has thrown an exception. [SEHException (0x80004005): External component has thrown an exception.] RoleEnvironmentGetConfigurationSettingValueW(UInt16* , UInt16* , UInt32 , UInt32* ) +0 Microsoft.WindowsAzure.ServiceRuntime.Internal.InteropRoleManager.GetConfigurationSetting(String name, String& ret) +92 Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.GetConfigurationSettingValue(String configurationSettingName) +67 ChargifyNET.ChargifyAzurePage.get_Chargify() in C:\NetProjects\ChargifyDotNET\Source\Chargify.NET\ChargifyAzurePage.cs:26 Chargify.Azure._Default.Page_Load(Object sender, EventArgs e) in C:\NetProjects\ChargifyDotNET\Source\Chargify.Azure\Default.aspx.vb:8 System.Web.UI.Control.OnLoad(EventArgs e) +99 System.Web.UI.Control.LoadRecursive() +50 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627

    Read the article

  • iPhone: Software Development And Distribution

    - by xsl
    I have a few quick questions about iPhone software development. I did some research on the topic, but there are a few specific things I would like to ask here, because I will have to estimate the cost of the required hardware and software before I am allowed to buy anything. I have never done any Mac development, nor have I ever owned an iPhone, so needless to say this is quite hard for me. I will buy a Mac mini with 2 GB RAM for iPhone development. I will have to use it at the same time as my regular PC, but the majority of the time I won't use the Mac at all. Do I have to buy an additional monitor, a mouse and a keyboard, or is there a better solution? I will have to port a C library to the iPhone platform and develop an iPhone application that uses the ported library. Do I need anything other than the iPhone SDK to do this? If I use an external library (see above), can I test the application with the integrated simulator, or is it recommended to buy the device? In a later phase of the project I will have to buy an iPhone, but I will have to wait until the iPhone 4 is released here in Europe, because the application requires a camera. In addition to this I will have to send data to a remote web service. Aside from these two things I don't require any other features. Can I just buy the iPhone online from another country (the iPhones here are SIM-locked), or should I buy one with a contract? When the application is ready, it will be installed on a few iPhones owned by our customer. Because of security reasons it is crucial that there is no third party involved in this process (i.e. the application should not be distributed on the App Store). Is this possible?

    Read the article

  • wcf service creating proxy by using svcutil.exe in command prompt?

    - by Surya sasidhar
    When I am trying to generate the proxy manually using the command prompt, I am getting this error: Setting environment for using Microsoft Visual Studio 2008 x86 tools. C:\Program Files\Microsoft Visual Studio 9.0\VC>cd\ C:\>svcutil /language:cs /out:proxy.cs /config:app.config /http://localhost:2544 /myservicewcf/Sasi.svc 'svcutil' is not recognized as an internal or external command, operable program or batch file. C:\>svcutil.exe /language:cs /out:proxy.cs /config:app.config /http://localhost: 2544/myservicewcf/sasi.svc 'svcutil.exe' is not recognized as an internal or external command, operable program or batch file. C:\> Can you help me please?

    Read the article

  • Extract inline images from Lotus Notes using Lotus Notes Java API

    - by user1660645
    I'm having issues extracting inline images that are pasted into the email body when the emails are sent into Lotus Notes from an external email system (like Gmail, for example). Emails which are sent from Lotus Notes itself have no issues, and I'm able to retrieve the inline images by using the document.generateXML() method and parsing the stream for the 'picture' XML element tag. My real concern is how to extract them from the external emails (like Gmail). I would really appreciate your help and time on this. Thanks in advance!...
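    For context, the parsing approach the question describes (which works for mail created in Notes itself) might be sketched roughly as below. This assumes, per the question, that the output of document.generateXML() carries inline images as base64 text under 'picture' elements; the exact child tags can vary by Notes release, and the class name is a hypothetical placeholder:

      import java.io.StringReader;
      import java.util.ArrayList;
      import java.util.Base64;
      import java.util.List;
      import javax.xml.parsers.DocumentBuilderFactory;
      import org.w3c.dom.NodeList;
      import org.xml.sax.InputSource;

      public class InlineImageExtractor {
          // Pulls base64-encoded inline images out of the XML string
          // produced by lotus.domino.Document.generateXML().
          public static List<byte[]> extract(String documentXml) throws Exception {
              org.w3c.dom.Document dom = DocumentBuilderFactory.newInstance()
                      .newDocumentBuilder()
                      .parse(new InputSource(new StringReader(documentXml)));
              NodeList pictures = dom.getElementsByTagName("picture");
              List<byte[]> images = new ArrayList<byte[]>();
              for (int i = 0; i < pictures.getLength(); i++) {
                  // The payload is the element's base64 text content.
                  String base64 = pictures.item(i).getTextContent().trim();
                  images.add(Base64.getMimeDecoder().decode(base64));
              }
              return images;
          }
      }

    For externally-originated mail the images often arrive as MIME parts rather than Notes rich text, which is likely why this kind of parsing finds no 'picture' elements there.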

    Read the article

  • Question about using ServiceModelEx's "WCFLogbook" from "Programming WCF Services" book

    - by Bill
    I am attempting to compile/run a sample WCF application from Juval Lowy's website (author of Programming WCF Services & founder of IDesign). Of course the example application utilizes Juval's ServiceModelEx library and logs faults/errors to a "WCFLogbook" database. Unfortunately, when the sample app faults, I get the following new error: "Cannot open database "WCFLogbook" requested by the login. The login failed. Login failed for user 'Bill-PC\Bill'." I suspect the error occurs as a result of not finding the "WCFLogbook" database, which I believe still needs to be created. Included in the library source directory there are two files – WCFLogbookDataSet.xsd and WCFLogbook.sql – neither of which seems to be referenced anywhere within the library code. This leads me to believe that the SQL and XSD files are there to be used to create the database somehow in SQL. Could someone please advise me whether I am going in the correct direction here, and whether these files can be used to create the database (and if so, how)?

    Read the article

  • class weblogic.management.WeblogicMBean not found

    - by khue
    Hi all, I meet this problem when I try to run JUnit test cases in fork mode (starting each test in a separate JVM) using an Ant build file. [junit] Exception in thread "main" java.lang.NoClassDefFoundError: weblogic/management/WebLogicMBean [junit] at java.lang.ClassLoader.defineClass1(Native Method) [junit] at java.lang.ClassLoader.defineClass(ClassLoader.java:621) [junit] at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124) [junit] at java.net.URLClassLoader.defineClass(URLClassLoader.java:260) [junit] at java.net.URLClassLoader.access$000(URLClassLoader.java:56) [junit] at java.net.URLClassLoader$1.run(URLClassLoader.java:195) [junit] at java.security.AccessController.doPrivileged(Native Method) [junit] at java.net.URLClassLoader.findClass(URLClassLoader.java:188) [junit] at java.lang.ClassLoader.loadClass(ClassLoader.java:307) [junit] at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) [junit] at java.lang.ClassLoader.loadClass(ClassLoader.java:252) [junit] at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320) [junit] at java.lang.ClassLoader.defineClass1(Native Method) [junit] at java.lang.ClassLoader.defineClass(ClassLoader.java:621) [junit] at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124) [junit] at java.net.URLClassLoader.defineClass(URLClassLoader.java:260) [junit] at java.net.URLClassLoader.access$000(URLClassLoader.java:56) [junit] at java.net.URLClassLoader$1.run(URLClassLoader.java:195) [junit] at java.security.AccessController.doPrivileged(Native Method) [junit] at java.net.URLClassLoader.findClass(URLClassLoader.java:188) [junit] at java.lang.ClassLoader.loadClass(ClassLoader.java:307) [junit] at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) [junit] at java.lang.ClassLoader.loadClass(ClassLoader.java:252) [junit] at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320) [junit] at java.lang.ClassLoader.defineClass1(Native Method) [junit] at java.lang.ClassLoader.defineClass(ClassLoader.java:621) [junit] at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124) [junit] at java.net.URLClassLoader.defineClass(URLClassLoader.java:260) [junit] at java.net.URLClassLoader.access$000(URLClassLoader.java:56) [junit] at java.net.URLClassLoader$1.run(URLClassLoader.java:195) .... I have the library weblogic.jar in my build library folders, and it is set as the classpath for the junit task. I looked inside this file and can't find WebLogicMBean.class. However, in JDeveloper, I can import weblogic.management.WebLogicMBean into my class if I set a library reference to this weblogic.jar file, and compile my class without problem. Any suggestion of what really goes wrong? Thanks a lot.

    Read the article

  • ExternalInterface in flex calling javascript function works for mozilla/chrome but NOT IE

    - by Rees
    Hello, I have a Flex application that does a simple ExternalInterface.call("shareOptions"), which calls a shareOptions() JavaScript method and works absolutely fine with Mozilla and Chrome; however, when I test with IE I get the following error: Error: [object Error] at flash.external::ExternalInterface$/_toAS() at flash.external::ExternalInterface$/call() I looked at the Adobe LiveDocs documentation but can't determine what the issue is with IE. Is there something I'm missing? If anyone knows, please let me know ASAP! Thanks in advance. private function shareOptions(event:MouseEvent):void{ ExternalInterface.marshallExceptions = true; if (ExternalInterface.available){ ExternalInterface.call("shareOptions"); } } the javascript <script language="JavaScript" type="text/javascript"> function shareOptions() { myWin = window.open('http://www.mysite.shareOptions.php','yeee!','width=640,height=690,toolbar=no,location=0,directories=no,status=no,menubar=no,scrollbars=no,copyhistory=no,resizable=yes,x=500,y=500'); myWin.moveTo(300,300); } </script>

    Read the article

  • dbExpress error in Delphi 2010

    - by JosephStyons
    The below code works in Delphi 2007, but it gives me this error in Delphi 2010: --------------------------- Error --------------------------- Cannot load oci.dll library (error code 127). The oci.dll library may be missing from the system path or you may have an incompatible version of the library installed. --------------------------- OK Details >> --------------------------- The exception is raised when I set "Connected" to "True". I have tried placing a copy of "oci.dll" in the same folder as the .exe file, but I get the same message. I also get this message when using the form designer and a visible TSQLConnection component. Any thoughts? function TDBExpressConnector.GetConnection(username, password, servername: string) : TSQLConnection; begin //take a username, password, and server //return a connected TSQLConnection try FSqlDB := TSQLConnection.Create(nil); with FSqlDB do begin Connected := False; DriverName := 'Oracle'; GetDriverFunc := 'getSQLDriverORACLE'; KeepConnection := True; LibraryName := 'dbxora30.dll'; ConnectionName := 'OracleConnection'; Params.Clear; Params.Add('DriverName=Oracle'); Params.Add('DataBase=' + servername); Params.Add('User_Name=' + username); Params.Add('Password=' + password); Params.Add('RowsetSize=20'); Params.Add('BlobSize=-1'); Params.Add('ErrorResourceFile='); Params.Add('LocaleCode=0000'); Params.Add('Oracle TransIsolation=ReadCommited'); Params.Add('OS Authentication=False'); Params.Add('Multiple Transaction=False'); Params.Add('Trim Char=False'); Params.Add('Decimal Separator=.'); LoginPrompt := False; Connected := True; end; Result := FSqlDB; except on e:Exception do raise; end; //try-except end;

    Read the article

  • How to turn off particular monitor in C#?

    - by Boris
    OK, I know there are quite a few posts on this topic. However, none of them provide the solution to my issue: I don't want just to turn off my monitor(s); I want my code to turn off a specific monitor. The URL that most people refer to, http://fci-h.blogspot.com/2007/03/turn-off-your-monitor-via-code-c.html, doesn't help here, as it turns off all the displays. So, I have my laptop screen and an additional external monitor. While I'm watching movies, I switch the display to the external monitor and my laptop screen goes black; however, it's still on and glowing in the dark. I wish to turn it off. Could anyone help please? Thanks.

    Read the article

  • SSIS package randomly hangs during execution

    - by Adam MacLeod
    Hi Guys, I am having an ongoing and painful problem with an SSIS package. The package runs every 5 minutes as an SQL Agent job, and every 2-10 days the package will start running and never stop (thus preventing further executions). If I stop the hung job manually, it will begin working perfectly again at the next 5 minute interval. The SSIS package is for moving data from an Oracle database to a MSSQL 2005 database. It has 7 steps: Step 1 calls an Oracle stored procedure to prepare the temporary tables inside Oracle; Steps 2-6 process the data from the Oracle tables to the MSSQL tables; Step 7 calls an Oracle stored procedure to clear the Oracle temporary tables. I suspect that the issue is caused by a communications error between the MSSQL server and the Oracle server. Both the MSSQL database and the Agent/package run on one machine, with the Oracle database running over the network. I have enabled logging of the SSIS package, and after more than 2GB of log file I have captured the instant where the package stops responding: OnPreValidate,ADV-SRV5,NT AUTHORITY\SYSTEM,CallistaIntegrationToMonashCRM_delta,{F88F6C45-CFA2-4801-A2F2-DDF03D458A48},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,0,0x,(null) OnPreValidate,ADV-SRV5,NT AUTHORITY\SYSTEM,Address,{c5907799-f918-43da-818a-d4bd7f188367},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,0,0x,(null) OnInformation,ADV-SRV5,NT AUTHORITY\SYSTEM,Address,{c5907799-f918-43da-818a-d4bd7f188367},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,1074016266,0x,Validation phase is beginning. OnProgress,ADV-SRV5,NT AUTHORITY\SYSTEM,Address,{c5907799-f918-43da-818a-d4bd7f188367},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,0,0x,Validating Diagnostic,ADV-SRV5,NT AUTHORITY\SYSTEM,Callista,{cb5d6fe3-3ea4-4453-8e5a-965818021df7},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,0,0x,ExternalRequest_pre: The object is ready to make the following external request: 'IDataInitialize::GetDataSource'. Diagnostic,ADV-SRV5,NT AUTHORITY\SYSTEM,Callista,{cb5d6fe3-3ea4-4453-8e5a-965818021df7},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,0,0x,ExternalRequest_post: 'IDataInitialize::GetDataSource succeeded'. The external request has completed. Diagnostic,ADV-SRV5,NT AUTHORITY\SYSTEM,Callista,{cb5d6fe3-3ea4-4453-8e5a-965818021df7},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,0,0x,ExternalRequest_pre: The object is ready to make the following external request: 'IDBInitialize::Initialize'. These messages show the entire log generated for the failed run; for a successful run the output is typically ~2500 lines. I can see that the package is hanging during the initialize operation on the Callista connection (the Oracle database). I have not been able to work out a way to either fix this issue or have the package die gracefully (an error to the log would be A-OK with me). Any help or advice would be greatly appreciated.

    Read the article

  • Stanford iPhone dev Paparazzi 1 setup help

    - by cksubs
    Hi, I'm having trouble using a tab bar controller. Basically, when I build and run, I just get the blank white default "Window" screen. I was following this guide: http://www.iphoneosdevcafe.com/2010/03/assignment-4-part-1/#comment-36

        1. Drag a Tab Bar Controller from the Library to MainWindow.xib.
        2. Control-drag from the App Delegate to the Tab Bar Controller.
        3. Drag a Navigation Controller from the Library to MainWindow.xib.
        4. Control-drag from the App Delegate to the Navigation Controller.
        5. Drag a second Navigation Controller from the Library to MainWindow.xib.
        6. Control-drag from the App Delegate to the second Navigation Controller.

    "This completes all the connections between the App Delegate and the tab bar and two navigation controllers. By using IB to set up the tab bar and navigation controllers in this way, you need not allocate and init the controllers in AppDelegate.m. When you build and run this, you will see the tab bar controller and two navigation controllers."

    Is there a step there that he missed? How do I hook the Tab Bar Controller up to the Window?

    EDIT: Do I need to do something like this?

        mainTabBar = [[UITabBarController alloc] init];
        [window addSubview:mainTabBar.view];

    That's still not working, but I feel like I'm on the right track. Why can't this all be done from Interface Builder?

    Read the article

  • Why No android.content.SyncAdapter meta-data registering sync-adapter?

    - by mobibob
    I am following the SampleSyncAdapter and, upon startup, it appears that my SyncAdapter is not configured correctly. It reports an error trying to load its meta-data. How can I isolate the problem? You can see the other accounts in the system that register correctly. Logcat:

        12-21 17:10:50.667 W/PackageManager( 121): Unable to load service info ResolveInfo{4605dcd0 com.myapp.syncadapter.MySyncAdapter p=0 o=0 m=0x108000}
        12-21 17:10:50.667 W/PackageManager( 121): org.xmlpull.v1.XmlPullParserException: No android.content.SyncAdapter meta-data
        12-21 17:10:50.667 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache.parseServiceInfo(RegisteredServicesCache.java:391)
        12-21 17:10:50.667 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache.generateServicesMap(RegisteredServicesCache.java:260)
        12-21 17:10:50.667 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache$1.onReceive(RegisteredServicesCache.java:110)
        12-21 17:10:50.667 W/PackageManager( 121): at android.app.ActivityThread$PackageInfo$ReceiverDispatcher$Args.run(ActivityThread.java:892)
        12-21 17:10:50.667 W/PackageManager( 121): at android.os.Handler.handleCallback(Handler.java:587)
        12-21 17:10:50.667 W/PackageManager( 121): at android.os.Handler.dispatchMessage(Handler.java:92)
        12-21 17:10:50.667 W/PackageManager( 121): at android.os.Looper.loop(Looper.java:123)
        12-21 17:10:50.667 W/PackageManager( 121): at com.android.server.ServerThread.run(SystemServer.java:570)
        12-21 17:10:50.747 D/Sources ( 294): Creating external source for type=com.skype.contacts.sync, packageName=com.skype.raider
        12-21 17:10:50.747 D/Sources ( 294): Creating external source for type=com.twitter.android.auth.login, packageName=com.twitter.android
        12-21 17:10:50.747 D/Sources ( 294): Creating external source for type=com.example.android.samplesync, packageName=com.example.android.samplesync
        12-21 17:10:50.747 W/PackageManager( 121): Unable to load service info ResolveInfo{460504b0 com.myapp.syncadapter.MySyncAdapter p=0 o=0 m=0x108000}
        12-21 17:10:50.747 W/PackageManager( 121): org.xmlpull.v1.XmlPullParserException: No android.content.SyncAdapter meta-data
        12-21 17:10:50.747 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache.parseServiceInfo(RegisteredServicesCache.java:391)
        12-21 17:10:50.747 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache.generateServicesMap(RegisteredServicesCache.java:260)
        12-21 17:10:50.747 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache$1.onReceive(RegisteredServicesCache.java:110)
        12-21 17:10:50.747 W/PackageManager( 121): at android.app.ActivityThread$PackageInfo$ReceiverDispatcher$Args.run(ActivityThread.java:892)
        12-21 17:10:50.747 W/PackageManager( 121): at android.os.Handler.handleCallback(Handler.java:587)
        12-21 17:10:50.747 W/PackageManager( 121): at android.os.Handler.dispatchMessage(Handler.java:92)
        12-21 17:10:50.747 W/PackageManager( 121): at android.os.Looper.loop(Looper.java:123)
        12-21 17:10:50.747 W/PackageManager( 121): at com.android.server.ServerThread.run(SystemServer.java:570)
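
    For context, the XmlPullParserException above means the PackageManager located the sync service but could not find its android.content.SyncAdapter <meta-data> entry. A hedged sketch of the manifest declaration the SampleSyncAdapter pattern expects is below; the @xml/syncadapter resource name, the authority, and the account type are placeholder values for illustration.

        <!-- In AndroidManifest.xml, inside <application>: -->
        <service android:name="com.myapp.syncadapter.MySyncAdapter">
            <intent-filter>
                <action android:name="android.content.SyncAdapter" />
            </intent-filter>
            <!-- This element is the meta-data the parser reports as missing. -->
            <meta-data android:name="android.content.SyncAdapter"
                       android:resource="@xml/syncadapter" />
        </service>

        <!-- res/xml/syncadapter.xml (placeholder authority and account type): -->
        <sync-adapter xmlns:android="http://schemas.android.com/apk/res/android"
            android:contentAuthority="com.android.contacts"
            android:accountType="com.myapp" />

    Comparing a working entry (such as com.example.android.samplesync above, which does register) against this one should isolate the difference.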

    Read the article

  • Authenticating from a "child" application via CAS

    - by Rob Wilkerson
    I have a portal application that loads external content (widgets) via an iframe. Users log in to CAS via the portal itself. There are a few portal APIs, though, that need to be called from that external content. What information do I have to pass from the portal to the widgets so that the widgets can use it to make these calls without being rejected by CAS?

    UPDATE: The more I investigate, the more I think my question boils down to how CAS actually does what it's supposed to do. In other words, how can I go from one site where I've authenticated to another and tell it that I've already done the authentication? What's the mechanism behind that, and how can I employ it in a web context?
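
    For background on the mechanism: CAS works by issuing short-lived, single-use tickets that the receiving application validates directly against the CAS server over HTTPS, so trust never rests on the browser alone; for iframe widgets calling portal APIs, this is normally done with CAS proxy tickets. Below is a rough C++/libcurl sketch of the validation round-trip only; the host names, paths, and the ST-12345 ticket value are placeholders, and a real widget would validate a proxy ticket at /proxyValidate rather than a bare service ticket.

        // Sketch: ask the CAS server whether a ticket handed to us is genuine.
        // Build with libcurl (e.g. -lcurl). All endpoint values are placeholders.
        #include <curl/curl.h>
        #include <iostream>
        #include <string>

        static size_t CollectBody(char* ptr, size_t size, size_t nmemb, void* userdata)
        {
            static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
            return size * nmemb;  // tell libcurl we consumed everything
        }

        int main()
        {
            curl_global_init(CURL_GLOBAL_DEFAULT);
            CURL* curl = curl_easy_init();
            if (!curl) return 1;

            // The widget received this ticket from the portal (placeholder value);
            // it must be validated server-side before trusting the caller.
            const std::string url =
                "https://cas.example.edu/cas/serviceValidate"
                "?service=https%3A%2F%2Fwidget.example.edu%2F"
                "&ticket=ST-12345";

            std::string response;
            curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, CollectBody);
            curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

            if (curl_easy_perform(curl) == CURLE_OK)
                std::cout << response << std::endl;  // CAS replies with success/failure XML

            curl_easy_cleanup(curl);
            curl_global_cleanup();
            return 0;
        }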

    Read the article

  • Dojo addOnLoad, but is Dojo loaded?

    - by adamrice32
    I've encountered what seems like a chicken-and-egg problem, and have what I think is a logical solution. However, it occurred to me that others must have encountered something similar, so I figured I'd float it out there for the masses. The situation is that I want to use Dojo's addOnLoad function to queue up a number of callbacks which should be executed after the DOM has completed rendering on the client side. So what I'm doing is as follows:

        <html>
          <head>
            <script type="text/javascript" src="dojo.xd.js"></script>
            ...
          </head>
          <body>
            ...
            <script type="text/javascript">
              dojo.addOnLoad( ... );
              dojo.addOnLoad( ... );
              ...
            </script>
          </body>
        </html>

    Now, the issue is that I seem to be calling dojo.addOnLoad before the entire Dojo library has been downloaded to the browser. This makes sense in a way, because the inline SCRIPT contents should execute before the entire DOM is loaded (and before the normal body onload callback is triggered). My question is this: is my approach sound, or would it make more sense to register a standard body onload callback that does the same work each of the dojo.addOnLoad calls is doing in the SCRIPT block? Of course, this begs the question: why would you ever use dojo.addOnLoad if you're not guaranteed that the Dojo library will be loaded prior to using it? Hopefully this situation makes sense to someone other than me. It seems like someone else may have encountered this. Thoughts? Best Regards, Adam Rice

    Read the article

  • extern(al) problem

    - by Knowing me knowing you
    Why can't I compile this code?

        //main
        #include "stdafx.h"
        #include "X.h"
        #include "Y.h"
        //#include "def.h"

        extern X operator*(X, Y); //HERE ARE DECLARED EXTERNAL *(X,Y) AND f(X)
        extern int f(X);

        /*GLOBALS*/
        X x = 1;
        Y y = x;
        int i = 2;

        int _tmain(int argc, _TCHAR* argv[])
        {
            i + 10;
            y + 10;
            y + 10 * y;
            //x + (y + i);
            x * x + i;
            f(7);
            //f(y);
            //y + y;
            //106 + y;
            return 0;
        }

        //X
        struct X
        {
            int i;
            X(int value) : i(value) { }
            X operator+(int value) { return X(i + value); }
            operator int() { return i; }
        };

        //Y
        struct Y
        {
            int i;
            Y(X x) : i(x.i) { }
            Y operator+(X x) { return Y(i + x.i); }
        };

        //def.h
        int f(X x);
        X operator*(X x, Y y);

        //def.cpp
        #include "stdafx.h"
        #include "def.h"
        #include "X.h"
        #include "Y.h"

        int f(X x)
        {
            return x;
        }

        X operator*(X x, Y y)
        {
            return x * y;
        }

    I'm getting these error messages:

        Error 2 error LNK2019: unresolved external symbol "int __cdecl f(struct X)"
        Error 3 error LNK2019: unresolved external symbol "struct X __cdecl operator*(struct X,struct Y)"

    Another interesting thing is that if I place the implementations in the def.h file, it compiles without errors. But then what about def.cpp? Why am I not getting an error message that function f(X) is already defined? Shouldn't the ODR apply here? My second concern: if, in def.cpp, I change the return type of f from int to double, IntelliSense underlines this as an error, but the program still compiles. Why?
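
    For what it's worth, LNK2019 here most often means def.cpp is not actually part of the build, or that the declarations the other translation unit sees don't match the definitions. Below is a hedged, single-file sketch of the same declaration/definition split that links cleanly; it is an illustration, not the original project layout, and it also replaces the posted return x * y; (which would call operator*(X, Y) recursively) with arithmetic on the members.

        // Single-file sketch of the declaration/definition split.
        // In a multi-file project, the declarations would live in def.h and the
        // definitions in def.cpp, and def.cpp must be compiled and linked in.
        struct X
        {
            int i;
            X(int value) : i(value) { }
            X operator+(int value) { return X(i + value); }
            operator int() { return i; }
        };

        struct Y
        {
            int i;
            Y(X x) : i(x.i) { }
            Y operator+(X x) { return Y(i + x.i); }
        };

        // Declarations -- what def.h hands to other translation units.
        int f(X x);
        X operator*(X x, Y y);

        int main()
        {
            X x = 1;       // converting constructor
            Y y = x;       // Y built from X
            f(7);          // 7 converts to X(7)
            X z = x * y;   // resolves to the free operator*(X, Y)
            return z.i;
        }

        // Definitions -- what def.cpp provides; the linker matches them to the
        // declarations above by signature.
        int f(X x)
        {
            return x;  // uses operator int()
        }

        X operator*(X x, Y y)
        {
            // Not "return x * y;" -- that would recurse into this operator.
            return X(x.i * y.i);
        }

    Incidentally, the fact that changing the return type of f in def.cpp produces no compiler error is itself a hint that def.cpp is not being compiled: redeclaring f with a different return type after including def.h would normally be rejected at compile time.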

    Read the article

< Previous Page | 277 278 279 280 281 282 283 284 285 286 287 288  | Next Page >