Search Results

Search found 10206 results on 409 pages for 'upk release'.

Page 104/409 | < Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >

  • Installing CUDA on Ubuntu 12.04 with nvidia driver 295.59

    - by johnmcd
    I have been trying to get CUDA to run on an NVIDIA GT 650M based laptop. I am running Ubuntu 12.04 with the NVIDIA 295.59 driver. Also, my laptop uses Optimus, so I have installed the driver via Bumblebee. Bumblebee is not working correctly yet -- however, I believe it is possible to install CUDA independently. To install CUDA I have followed the instructions detailed here: How can I get nVidia CUDA or OpenCL working on a laptop with nVidia discrete card/Intel Integrated Graphics? However, I am still running into problems building the SDK. I made the changes specified at the above link in common.mk, but I got the following (snippet) from the build process:

        make[2]: Entering directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/fluidsGL'
        /usr/bin/ld: warning: libnvidia-tls.so.302.17, needed by /usr/lib/nvidia-current/libGL.so, not found (try using -rpath or -rpath-link)
        /usr/bin/ld: warning: libnvidia-glcore.so.302.17, needed by /usr/lib/nvidia-current/libGL.so, not found (try using -rpath or -rpath-link)
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv018tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv012glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv017glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv012tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv015tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv019tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv000glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv017tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv013tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv013glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv018glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv022tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv007tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv009tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv020tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv014glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv015glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv016tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv001glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv006tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv021tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv011tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv020glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv019glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv002glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv021glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv014tls'
        collect2: ld returned 1 exit status
        make[2]: *** [../../bin/linux/release/fluidsGL] Error 1
        make[2]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/fluidsGL'
        make[1]: *** [src/fluidsGL/Makefile.ph_build] Error 2
        make[1]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C'
        make: *** [all] Error 2

    The libraries that ld warns about are installed on the system:

        $ locate libnvidia-tls.so.302.17 libnvidia-glcore.so.302.17
        /usr/lib/nvidia-current/libnvidia-glcore.so.302.17
        /usr/lib/nvidia-current/libnvidia-tls.so.302.17
        /usr/lib/nvidia-current/tls/libnvidia-tls.so.302.17
        /usr/lib32/nvidia-current/libnvidia-glcore.so.302.17
        /usr/lib32/nvidia-current/libnvidia-tls.so.302.17
        /usr/lib32/nvidia-current/tls/libnvidia-tls.so.302.17

    However, /usr/lib/nvidia-current and /usr/lib32/nvidia-current are not being picked up by ldconfig. I have tried adding them via a file in /etc/ld.so.conf.d/, which gets past this error; however, now I am getting the following error:

        make[2]: Entering directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/deviceQueryDrv'
        cc1plus: warning: command line option ‘-Wimplicit’ is valid for C/ObjC but not for C++ [enabled by default]
        obj/x86_64/release/deviceQueryDrv.cpp.o: In function `main':
        deviceQueryDrv.cpp:(.text.startup+0x5f): undefined reference to `cuInit'
        deviceQueryDrv.cpp:(.text.startup+0x99): undefined reference to `cuDeviceGetCount'
        deviceQueryDrv.cpp:(.text.startup+0x10b): undefined reference to `cuDeviceComputeCapability'
        deviceQueryDrv.cpp:(.text.startup+0x127): undefined reference to `cuDeviceGetName'
        deviceQueryDrv.cpp:(.text.startup+0x16a): undefined reference to `cuDriverGetVersion'
        deviceQueryDrv.cpp:(.text.startup+0x1f0): undefined reference to `cuDeviceTotalMem_v2'
        deviceQueryDrv.cpp:(.text.startup+0x262): undefined reference to `cuDeviceGetAttribute'
        deviceQueryDrv.cpp:(.text.startup+0x457): undefined reference to `cuDeviceGetAttribute'
        deviceQueryDrv.cpp:(.text.startup+0x4bc): undefined reference to `cuDeviceGetAttribute'
        deviceQueryDrv.cpp:(.text.startup+0x502): undefined reference to `cuDeviceGetAttribute'
        deviceQueryDrv.cpp:(.text.startup+0x533): undefined reference to `cuDeviceGetAttribute'
        obj/x86_64/release/deviceQueryDrv.cpp.o:deviceQueryDrv.cpp:(.text.startup+0x55e): more undefined references to `cuDeviceGetAttribute' follow
        collect2: ld returned 1 exit status
        make[2]: *** [../../bin/linux/release/deviceQueryDrv] Error 1
        make[2]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/deviceQueryDrv'
        make[1]: *** [src/deviceQueryDrv/Makefile.ph_build] Error 2
        make[1]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C'
        make: *** [all] Error 2

    I would appreciate any help that anyone can provide. If I can provide any further information, please let me know. Thanks.
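    A minimal sketch of the two fixes the question is circling around: registering the driver directories with the dynamic linker, and linking the SDK samples against the CUDA driver library (the undefined cuInit/cuDeviceGet* symbols live in libcuda.so). The paths come from the question itself, but the common.mk variable name is an assumption, not the SDK's documented interface:

        # Make the dynamic linker aware of the nvidia-current directories (paths taken from the question)
        sudo sh -c 'printf "/usr/lib/nvidia-current\n/usr/lib32/nvidia-current\n" > /etc/ld.so.conf.d/nvidia-current.conf'
        sudo ldconfig

        # The deviceQueryDrv failure suggests the CUDA driver API library (libcuda) is missing from
        # the link line. One possibility is appending it to the linker flags in the SDK's common.mk
        # (the exact variable name in common.mk may differ; this is an assumption):
        #   LIB += -L/usr/lib/nvidia-current -lcuda
        cd ~/NVIDIA_GPU_Computing_SDK/C && make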

    Read the article

  • Oracle WebCenter: Extending Oracle Applications & Oracle Fusion Applications

    - by kellsey.ruppel(at)oracle.com
    We’ve talked in previous weeks about how the key goals of the new release of WebCenter are providing a Modern User Experience, unparalleled Application Integration, converging all the best of the existing portal platforms into WebCenter, and delivering a Common User Experience Architecture. We’ve provided an overview of Oracle WebCenter and discussed some of the other key goals in previous weeks, and this week we’ll focus on how the new release of Oracle WebCenter extends Oracle Applications and Fusion Applications.

    When we talk about the new release of Oracle WebCenter, we really emphasize to customers that they can leverage their existing investments and benefit from WebCenter’s Complete, Open and Integrated platform. To summarize what we mean here, Oracle WebCenter is:

    COMPLETE - Comprehensive platform for Portals/Websites and Composite Applications, with integrated Social/Collaboration services and Content Management infrastructure
    OPEN - Standards support improves reuse of existing resources and extends the value of existing systems
    INTEGRATED - Implicit integration with Oracle Applications, Oracle Fusion Applications & other enterprise applications

    With all the existing enterprise applications in Oracle’s application portfolio, in the new release of WebCenter we’ve got a set of pre-built catalogs that customers can use directly to get at all the portlet resources certified and available from Oracle. It provides customers with a ready-to-use view of their application resources. And since WebCenter provides seamless support for building these portlets/components in a professional IDE like JDeveloper or from within a browser, developers and business analysts can quickly assemble the information they require for their existing application investment.

    In addition, we’ve taken all the user flows and patterns that we’ve learned in building Fusion Applications and focused on making it dramatically easier to use tools to create reusable application UI components. In this way, one team in the organization using an application can share their components with other teams. And more importantly, the new team can make changes to the component without breaking the original component. When tied to enterprise applications, this capability is extremely powerful. This is what Oracle means when they talk about Enterprise Mashups. And finally, we’ve provided an innovative way to go well beyond traditional “on the glass” integration by enabling direct integration of business transactions for existing applications using activity streams. This delivers aggregated and “on time” delivery of information to business users based on what’s happening in the enterprise that is relevant to their particular job function. Most importantly, it ties into the personalization interactions discussed earlier so that it can help target information to you directly based on past interactions. Application integration is key to making businesses function more efficiently with these new Enterprise 2.0 technologies.

    Keep checking back this week as we share more information on how WebCenter is the most complete, open and integrated modern user experience platform, and show key ways WebCenter can extend Oracle Applications & Oracle Fusion Applications.

    Read the article

  • Oracle WebCenter: Common User Experience Architecture

    - by kellsey.ruppel(at)oracle.com
    You may remember that the key goals of the new release of WebCenter are providing a Modern User Experience, unparalleled Application Integration, converging all the best of the existing portal platforms into WebCenter, and delivering a Common User Experience Architecture. In previous weeks we've provided an overview of Oracle WebCenter and discussed some of the other key goals, and this week we'll focus on how the new release of Oracle WebCenter delivers a Common User Experience Architecture.

    When Oracle talks about a Common User Experience Architecture, it really focuses on a core set of areas. First, the way that information is accessed needs to be consistent and extensible so that as requirements change, the applications don't need to be rewritten for every change. Second, this information access layer needs to be securely accessible to any application, site, or any other channel that needs to leverage this information. Third, there needs to be a consistent presentation layout, which Oracle calls a UI shell, so that all resources can fit together in a usable, productive way. Fourth, there needs to be a common set of design patterns for how different menus, features, and services fit into this UI shell for broad and productive usability. Fifth, there needs to be a set of design patterns for the individual services that plug into this UI shell so that end users can move from one module of the application to another without new learning. Finally, all of these layers need to be customizable in an easy way that insulates IT from patching and upgrading problems and allows the business owners the agility to quickly change with the market conditions.

    As Oracle has already announced, we will release our next generation of enterprise applications, called Oracle Fusion Applications. We have thousands of developers building these applications who all had different programming tool experience and UI design experience. We've educated over 6,000 developers building Oracle Fusion Applications to leverage these Common User Experience Architecture patterns to speed their learning curve of the new Java standards as well as SOA principles to deliver a revolutionary new set of applications. You can imagine the big challenge of getting all these developers with different backgrounds and different UI design skills to deliver a completely integrated application user experience. This is why Oracle invested heavily in designing this Common User Experience Architecture, based on Oracle WebCenter and the Oracle Application Development Framework (ADF). It pulls together the best practices and design patterns that Oracle development required in order to bring Fusion Applications to market, and Oracle WebCenter is the user experience layer that all of this is surfaced through. In this way, customers can quickly brand a deployment for new partnerships without having to redevelop a new site. Or they can quickly add new options to the UI shell to enable their line of business managers to quickly adapt to a new competitive product. And with the core integration of the activities to produce a Business Activity Stream, customers are able to stay on top of all their key business actions when they happen, as they happen, and more importantly, the system can recommend actions or resources to help act on these activities.

    And we've authored this whole set of design patterns for Oracle development to take advantage of in delivering Fusion Applications. We're also applying these design patterns to our existing E-Business Suite, PeopleSoft, Siebel, and JD Edwards applications so that they can tie in the exact same way that Fusion Applications has been brought together. This will provide customers with a complete Common User Experience Architecture for their entire ecosystem of applications within their enterprise, whether they are from Oracle, another vendor, or custom-built applications. And this is all provided in the new release of Oracle WebCenter. These design patterns cover elements around delivering a complete, aggregated menu of all the capabilities that a user's role allows, independent of which application they are trying to access. It means that as they move from one application to another, they will have a consistent user experience. And if they are using an Oracle application, any customizations that are made to the application are preserved and managed through upgrades and patches.

    Be sure to check back this week as we share more information and resources on Oracle's Common User Experience Architecture.

    Read the article

  • Can't add repos after upgrading to 12.04 LTS

    - by joao
    I'm a complete Linux newbie. I've just upgraded from 10.04 to 12.04 LTS and all sorts of things have started to go wrong. One main problem is the fact that I can't add repos. For example, sudo add-apt-repository ppa:team-xbmc outputs:

        Traceback (most recent call last):
          File "/usr/bin/add-apt-repository", line 8, in <module>
            from softwareproperties.SoftwareProperties import SoftwareProperties
          File "/usr/lib/python2.7/dist-packages/softwareproperties/SoftwareProperties.py", line 53, in <module>
            from ppa import AddPPASigningKeyThread, expand_ppa_line
          File "/usr/lib/python2.7/dist-packages/softwareproperties/ppa.py", line 27, in <module>
            import pycurl
        ImportError: librtmp.so.0: cannot open shared object file: No such file or directory

    My /etc/apt/sources.list:

        # deb cdrom:[Ubuntu 10.04.1 LTS _Lucid Lynx_ - Release i386 (20100816.1)]/ lucid main restricted
        # deb cdrom:[Ubuntu 10.04.1 LTS _Lucid Lynx_ - Release i386 (20100816.1)]/ maverick main restricted
        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://archive.ubuntu.com/ubuntu precise main restricted
        deb-src http://archive.ubuntu.com/ubuntu precise main restricted
        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://archive.ubuntu.com/ubuntu precise-updates main restricted
        deb-src http://archive.ubuntu.com/ubuntu precise-updates main restricted
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://archive.ubuntu.com/ubuntu precise universe
        deb-src http://archive.ubuntu.com/ubuntu precise universe
        deb http://archive.ubuntu.com/ubuntu precise-updates universe
        deb-src http://archive.ubuntu.com/ubuntu precise-updates universe
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://archive.ubuntu.com/ubuntu precise multiverse
        deb-src http://archive.ubuntu.com/ubuntu precise multiverse
        deb http://archive.ubuntu.com/ubuntu precise-updates multiverse
        deb-src http://archive.ubuntu.com/ubuntu precise-updates multiverse
        ## Uncomment the following two lines to add software from the 'backports'
        ## repository.
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        # deb-src http://pt.archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        # deb http://archive.canonical.com/ubuntu lucid partner
        # deb-src http://archive.canonical.com/ubuntu lucid partner
        deb http://archive.ubuntu.com/ubuntu precise-security main restricted
        deb-src http://archive.ubuntu.com/ubuntu precise-security main restricted
        deb http://archive.ubuntu.com/ubuntu precise-security universe
        deb-src http://archive.ubuntu.com/ubuntu precise-security universe
        deb http://archive.ubuntu.com/ubuntu precise-security multiverse
        deb-src http://archive.ubuntu.com/ubuntu precise-security multiverse
        # deb http://ppa.launchpad.net/stebbins/handbrake-snapshots/ubuntu precise main # disabled on upgrade to precise

    I have no clue what to do next. Should I just scrap this installation and start from scratch, or is this fixable? librtmp.so.0 also shows up in error logs I've started to get from XBMC (I'm not sure if this is relevant info). Thanks in advance for any help you can give me!
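    A hedged guess at a first step: the ImportError points at a broken librtmp/pycurl pairing left over from the upgrade rather than at sources.list itself. The package names below are assumptions about a stock 12.04 install, not a verified fix:

        # Reinstall the libraries that add-apt-repository's pycurl import chain depends on
        sudo apt-get update
        sudo apt-get install --reinstall librtmp0 libcurl3-gnutls python-pycurl python-software-properties
        sudo ldconfig
        # Then retry:
        sudo add-apt-repository ppa:team-xbmc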

    Read the article

  • JDeveloper 11g R1 (11.1.1.4.0) - New Features on ADF Desktop Integration Explained

    - by juan.ruiz
    One of the areas that introduced many new features in the latest release (11.1.1.4.0) of JDeveloper 11g R1 is ADF Desktop Integration - in this article I’ll provide an overview of these new features.

    New ADF Desktop Integration Ribbon in Excel - After installing the ADF Desktop Integration add-in, and depending on the mode in which you open the desktop integration workbook, the ADF Desktop Integration ribbons for design time and runtime are displayed as a separate tab within Excel. In previous versions, the ADF Desktop Integration environment was placed inside the Add-Ins tab. Above you can see both the design time ribbon and the runtime ribbon. On the design time ribbon you can manage the workbook and worksheet properties, worksheet component properties, diagnostics, execution and publication of the workbook. The runtime version of the ribbon is totally customizable and represents what used to be the runtime menu on the spreadsheet; in this ribbon you can include all the operations and actions that can be executed by the end user while working with the spreadsheet data.

    Diagnostics - A very important aspect for developers is how to debug or verify the interactions of the client with the server; for that, ADF Desktop Integration has provided a series of diagnostics tools since day one. In this release the diagnostics tools are more visible and are really easy to configure. You can access the client console while testing the workbook, or you can simply dump all the messages to a log file - with the ability to set the output level for both.

    Security - There are a number of enhancements on security, but the one with the most impact for developers is that security is now optional when using ADF Desktop Integration. Until this version, every time you wanted to work with ADFdi the application had to be secured first. In this release security is optional, which means that if you have previously defined security on your application, then you must secure the ADFdi servlet as explained in one of my previous (ADD LINK) posts. On the other hand, if by the time you start working with ADFdi you have not defined security, you can test and publish your workbooks without adding security.

    Support for Continuous Integration - In this release we have added tooling for continuous integration builds. In the ADF Desktop Integration space, the concept translates to adding functionality that developers can use to publish ADFdi workbooks as part of their entire application build. For that purpose, we have a publish tool that can be easily invoked from an Ant task, such that all the design time workbooks are re-published into the latest version of the application build process.

    Key Column - At runtime, on any worksheet containing editable tables you will notice a new additional column called the key column. The purpose of this column is to make the end user aware that all rows on the table need to be selected at the time of sorting. The user cannot alter the value of this column. From the developer's point of view, there are no steps required in order to have the key column included in the worksheets.

    Installation and Creation of New Workbooks - Both use cases can now be executed directly from JDeveloper. As part of the Tools menu options, the developer can install the ADF Desktop Integration designer. Also, creating new workbooks, which previously was done through the convert tool shipped with JDeveloper, is now done automatically from the New Gallery. Creating a new ADFdi workbook adds metadata information to the Excel workbook so you can work in design time.

    Other Enhancements - Support for Excel 2010 has been added, and ADF components that are enabled as read-only no longer allow their value to be changed - the cell in Excel is automatically protected, something that could cause confusion among customers of previous releases.

    Read the article

  • New Management Console in Java SE Advanced 8u20

    - by Erik Costlow-Oracle
    Java SE 8 update 20 is a new feature release designed to provide desktop administrators with better control of their managed systems. The release notes for 8u20 are available from the public JDK release notes page. This release is not a Critical Patch Update (CPU). I would like to call attention to two noteworthy features of Oracle Java SE Advanced, the commercially supported version of Java SE for enterprises that require both support and specialized tools. The new Advanced Management Console provides a way to monitor and understand client systems at scale. It allows organizations to track usage and more easily create and manage client configuration like Deployment Rule Sets (DRS). DRS can control execution of tracked applications as well as specify which application should use which Java SE installation for compatibility. The new MSI Installer integrates into various desktop management tools, making it easier to customize and roll out different Java SE versions.

    Advanced Management Console - The Advanced Management Console is part of Java SE Advanced, designed for desktop administrators whose users need to run many different Java applications. It provides usage tracking for those Applet & Web Start applications to help identify them for guided DRS creation. DRS can then be verified against the tracked data, to ensure that end users can run their application against the appropriate Java version with no prompts. Usage tracking also has a different definition for Java SE than it does for most software applications. Unlike most applications, where usage can be determined by a simple run count, Java is a platform used for launching other applications. This means that usage tracking must answer both "how often is this Java SE version used" and "what applications are launched by it."

    Usage Tracking - One piece of Java SE Advanced is a centralized usage tracker. Simply placing a properties file on the client informs systems to report information to this usage tracker, so that the desktop administrator can better understand usage. Information is sent via UDP to prevent any delay on the client. The usage tracking server resides at a central location on the intranet to collect information from those clients. The information is stored in a normalized database for performance, meaning that a single usage tracker can handle a large number of clients.

    Guided Deployment Rule Sets - Deployment Rule Sets were introduced in Java 7 update 40 (September 2013) in order to help administrators control security prompts and guide compatibility. A previous post, Deployment Rule Sets by Example, explains how to configure a rule set so that most applications run against the most secure version but a specific applet may run against the Java version that was current several years ago. There is a different set of questions that can be asked by a desktop administrator in a large or distributed firm: Where are the Java RIAs that our users need? Which RIA needs which Java version? Which users need which Java versions? How do I verify these answers once I have them? The guided deployment rule set creation uses usage tracker data to identify applications both by certificate hash and location. After creating the rules, a comparison tool exists to verify them against the tracked data: if you intend to run an RIA, is it green? If something specific should be blocked, is it red? This makes user testing easier.
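    For readers unfamiliar with the rule set format referenced above, a minimal sketch of a DRS ruleset.xml; the host and version values are made-up examples rather than recommendations:

        <ruleset version="1.0+">
          <!-- Let a known legacy RIA run on the older JRE it was certified against -->
          <rule>
            <id location="https://legacy-app.example.com" />
            <action permission="run" version="1.6.0_45" />
          </rule>
          <!-- Everything else follows the default security behaviour on the current Java version -->
          <rule>
            <id />
            <action permission="default" />
          </rule>
        </ruleset>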
    MSI Installer - The Windows Installer format (MSI) provides a number of benefits for desktop administrators that customize or manage software at scale. Unlike the basic installer that most users obtain from Java.com or OTN, this installer is built around customization and integration with various desktop management products like SCCM. Desktop administrators using the MSI installer can use every feature provided by the format, such as silent installs/upgrades, low-privileged installations, or self-repair capabilities. Customers looking for Java SE Advanced can download the MSI installer through their My Oracle Support (MOS) account.

    Java SE Advanced - The new features in Java SE Advanced make it easier for desktop administrators to identify and control client installations at scale. Administrators at organizations that want either the tools or associated commercial support should consider Java SE Advanced.

    Read the article

  • NHibernate 3.0 and FluentNHibernate, how to get up and running...

    - by DesigningCode
    First up: it's actually really easy. I'm not very religious about my DB tech, I don't really care, I just want something that works. So I'm happy to consider all options if they provide an advantage, and recently I was considering jumping from NHibernate to EF 4.0. However, before ditching NHibernate and jumping to EF 4.0, I thought I should try the head version of NHibernate's trunk and the head version of FluentNHibernate. I currently have a "Repository / Unit of Work" framework built up around these two techs. All up, it makes my life pretty simple for dealing with databases.

    The problem is that the current release of NHibernate plus the Linq provider wasn't too hot for our purposes, especially trying to plug it into older VB.NET code. The Linq provider spat the dummy with VB.NET lambdas, mainly because in C#

        Query().Where(l => l.Name.Contains("x") || l.Name.Contains("y")).ToList();

    is not the same as the VB.NET

        Query().Where(Function(l) l.Name.Contains("x") Or l.Name.Contains("y")).ToList

    VB.NET seems to spit out... well... something different :-) So anyway, on to compiling your own version of NHibernate and FluentNHibernate. It's actually pretty easy! First you'll need to install TortoiseSVN, NAnt and Git if you don't already have them.

    NHibernate: first step, get the subversion trunk https://nhibernate.svn.sourceforge.net/svnroot/nhibernate/trunk/ into a directory somewhere, e.g. \thirdparty\nhibernate. Then use NAnt to build it. (If you open the .sln it will show errors because AssemblyInfo.cs doesn't exist.) To build it, there is a .txt document with sample command line build instructions; I simply used:

        NAnt -D:project.config=release clean build >output-release-build.log

    *wait* *wait* *wait* and ta-da, you will have a bin directory with all the release DLLs.

    FluentNHibernate: this was pretty simple. There are instructions here: http://wiki.fluentnhibernate.org/Getting_started#Installation. Basically, with Git, create a directory and issue the command

        git clone git://github.com/jagregory/fluent-nhibernate.git

    and wait, and soon enough you have the source. Now, from the bin directory that NHibernate spat out, take everything and dump it into the subdirectory "fluent-nhibernate\tools\NHibernate". Now, to build, you can use rake, which is a Ruby build system; however, you can also just open the solution and build, which is what I did. I had a few problems with the references, which I simply re-added using the new ones. Once built, I just took all the NHibernate DLLs and the Fluent ones, replaced my existing NHibernate / Fluent assemblies, and killed off the old Linq project.

    All I had to change was the places that used .Linq<T>, replacing them with .Query<T> (which was easy, as I had wrapped it already to isolate my code from such changes), and hey presto, everything worked. Even the VB.NET Linq calls. I need to do some more testing as I've only done basic smoke tests, but it's all looking pretty good, so for now I will stick to NHibernate!
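    A minimal sketch of the kind of wrapper the author describes for isolating the query provider behind a repository; the type and member names here are illustrative assumptions, not the post's actual code:

        using System.Linq;
        using NHibernate;
        using NHibernate.Linq;

        public interface IRepository<T>
        {
            IQueryable<T> Query();
        }

        public class NHibernateRepository<T> : IRepository<T>
        {
            private readonly ISession _session;

            public NHibernateRepository(ISession session)
            {
                _session = session;
            }

            // Only this one method has to change when the provider moves from
            // session.Linq<T>() (the NHibernate 2.x contrib provider) to
            // session.Query<T>() (the NHibernate 3.0 built-in provider).
            public IQueryable<T> Query()
            {
                return _session.Query<T>();
            }
        }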

    Read the article

  • Automated Error Reporting = More Robust Software

    - by Laila
    I would like to tell you how to revolutionize your software development process </marketing hyperbole>. On a more serious note, we (Red Gate's .NET Development team) recently rolled a new tool into our development process which has made our lives dramatically easier AND improved the quality of our software, and I (& one of our developers, Alex Davies) just wanted to take a quick moment to share the love.

    I work with a development team that takes pride in what they ship, so we take software testing rather seriously. For every development project we run, we allocate at least one software tester for every two developers, and we never ship software without first shipping early access releases and betas to get user feedback. And therein lies the challenge - encouraging users to provide consistent, useful feedback is a headache, but without that feedback, improving the software is... tricky. Until fairly recently, we used the standard (if long-winded) approach of receiving bug reports of variable quality via email or through our support forums. If that didn't give us enough information to reproduce the problem - which was most of the time - we had to enter into a time-consuming to-and-fro conversation with the end user to scrape together the data we needed to work out where the problem lay. As I'm sure you're aware, this is painfully slow.

    To the delight of the team, we recently got to work with SmartAssembly, which lets us embed automated exception and error reporting into our software with very little pain, and we decided to do a little dogfooding. As a result, we've made a really handy (if perhaps slightly obvious) discovery: as soon as we release a beta, or indeed any release of software, we now get tonnes of customer feedback through automated error reports. Making this process easier for our users has dramatically increased the amount (and quality) of feedback we get. From their point of view, they get an experience similar to Microsoft's error reporting, and the process is essentially idiot-proof. From our side of things, we can now react much faster to the information we get, fixing the bugs and shipping a new-and-improved release, which our users rather appreciate. Smiles and hugs all round. Even more so because, as we're using SmartAssembly's Automated Error Reporting, we get to avoid having to spend weeks building an exception reporting mechanism. It takes just a few minutes to add reporting to a project, and we get a bunch of useful information back, like a stack trace and the values of all the local variables, which we can use to fix bugs.

    Happily, "Automated Error Reporting = More Robust Software" can actually be read two ways: we've found that we not only ship higher quality software, but we also release within a shorter time. We can ship stable software that our users are happy to upgrade to, and we then bask in the glory of lots of positive customer feedback.

    Once we'd started working with SmartAssembly, we were curious to know how widespread error reporting was as a practice. Our product manager ran a survey in autumn last year, and found that 40% of software developers never really considered deploying error reporting. Considering how we've now got plenty of experience on the subject, one of our dev guys, Alex Davies, thought we should share what we've learnt, and he's kindly offered to host a webinar on delivering robust software with Automated Error Reporting. Drawing on our own in-house development experiences, he'll cover how to add error reporting to your program, how to actually use the error reports to fix bugs (don't snigger, not everyone's as bright as you), how to customize the error report dialog that your users see, and how to automatically get log files from your users' machines. The webinar will take place on Jan 25th (that's next week). It's free to attend, but you'll still need to register to hear Alex's dulcet tones.

    Read the article

  • Oracle EMEA News Digest - May 2014

    - by Steve Walker
    Systems - Oracle introduced a technology preview of an OpenStack® distribution that allows Oracle Linux and Oracle VM users to work with the open source cloud software. This provides customers with additional choices and interoperability while taking advantage of the efficiency, performance, scalability, and security of Oracle Linux and Oracle VM. The distribution is delivered as part of the Oracle Linux and Oracle VM Premier Support offerings, at no additional cost. Oracle plans to work further with the OpenStack community to develop and enhance its enterprise-class capabilities to meet customer demands. Also in the open source arena, Oracle announced the general availability of MySQL Fabric. MySQL Fabric provides an integrated system that makes it simpler to manage groups of MySQL databases. It delivers both high availability - via failure detection and failover - and scalability through automated data sharding.

    Oracle Database, Middleware and Technology - The company made two announcements for Oracle Tuxedo, the #1 application server for C, C++, COBOL and Java deployments in private cloud or traditional data center environments. With enhanced management and monitoring features and tighter integration with Oracle technologies, the latest release of Oracle Tuxedo 12c enables organizations to dramatically increase application throughput, while reducing total cost of ownership and time to market for new application development and deployment. Oracle also introduced the latest release of its mainframe application rehosting platform, Oracle Tuxedo ART 12c, to help organizations speed up migration projects and accelerate the adoption of the new environment by current IT staff. It enables organizations to accelerate the rehosting of IBM mainframe applications and greatly enhance management and supportability of the rehosted applications while reducing costs and risk.

    Applications - According to new Oracle studies, B2B and B2C commerce professionals find integrated, omni-channel customer experiences increasingly valuable to their organizations, and are continuing to invest in technologies and digital content strategies to facilitate them. The studies - one for B2B and one for B2C - surveyed e-commerce professionals in business and technology departments from around the world. Although the priorities, success metrics, and technology investments differed between the two groups, customer acquisition and retention emerged as common themes across B2B and B2C. Growing market share and enhancing customer experience are cited as top investment areas for all e-commerce professionals. In product news, Oracle announced the latest release of Oracle Business Intelligence (BI) Applications (version 11.1.1.8.1, in case anyone asks). It includes prebuilt connectors between Oracle Procurement and Spend Analytics and Oracle's JD Edwards. Additionally, a new Oracle Human Resources Analytics module for developing and maintaining a skilled workforce has been introduced. In use at more than 4,000 companies worldwide, Oracle BI Applications support leading enterprise applications, including Oracle E-Business Suite, Oracle's PeopleSoft, Oracle's Siebel CRM, and Oracle's JD Edwards EnterpriseOne, offering high-performing analytics at a lower cost.

    Industries - For the Communications Industry, Oracle has launched a new release of the Oracle Communications Core Session Manager. This gives CSPs a new way to design, deploy and manage complex networking services and embrace next-generation technology. It provides them with an immediate entry point for network function virtualization (NFV) efforts, allowing them to realize immediate benefits associated with network virtualization - including increased service agility and improved network resource sharing. And for the Utilities Industry, Oracle is releasing solutions with new business features and enhanced technical architecture that help position utilities for success now and into the future. Oracle has provided new releases for its customer information system, meter data management system, customer self-service solution and mobile workforce management solution.

    Read the article

  • New features in TFS Demo Setup 1.0.0.2

    - by Tarun Arora
    Release Notes – http://tfsdemosetup.codeplex.com/ | Download | Source Code | Report a Bug | Ideas

    Just pushed out the 2nd release of the TFS Demo Setup on CodePlex; below is a quick look at some of the new features/improvements in the tool. Details of the existing features can be found here.

    Feature 1 – Set up Work Item Queries as Team Favorites

    The task board looks cooler when the team's favourite work item queries show up on it. The demo setup console application now has the ability to set up the work item queries as team favorites for you. If you want to see how you can add team favorites programmatically, refer to this blog post here.

    Image 1 – Task board without Team Favorites

    Let's see how the TFS Demo Setup application sets up team favorites as part of the run. Open up DemoDictionary.xml and you should be able to see the new node <TeamFavorites>; this accepts multiple <TeamFavorite> elements. You simply need to specify the <Type> as Query and, in the <Name>, the name of the work item query that you would like added as a favorite.

    Image 2 – Highlighting the TeamFavorites block in DemoDictionary.xml

    So, when the demo setup application is run with the above config, the work item queries "Blocked Tasks" and "Open Impediments" are added as team favorites. They then show up on the task board, as highlighted in the screenshot below.

    Image 3 – Team Favorites setup during the TFS demo setup app execution

    Feature 2 – Choose what you want to set up and exclude the rest

    I had a great feature request come in asking for the ability to exclude parts of the setup at the sole discretion of the person running it. To accommodate this, I have added an attribute to each block; the attribute "Run" accepts "true" or "false". If you set the flag to true, then at the time of execution that block will be considered for setup; if you set the flag to false, the block will be ignored during the setup. So, let's look at an example below. The attribute "Run" is set to true for TeamSettings, TeamFavorites, TeamMembers and WorkItems, so all of these would be set up as part of the demo setup application execution.

    Image 4 – New attribute Run added to all blocks in DemoDictionary.xml

    If I did not want to recreate the team and did not want to add new work items, but only wanted to add favorites and team members to the existing team "AgileChamps1", then I could simply run the application with the DemoDictionary.xml below. Note – TeamSettings Run="false" and WorkItems Run="false".

    Image 5 – TeamFavorites and TeamMembers set as true and others set to false

    Feature 3 – Usability Improvement

    If you try to assign a work item to a team member that does not exist, the application used to throw a nasty exception. This behaviour has now been changed: upon adding such a work item, the work item will be created but not assigned to any user. The work item id will be printed to the console, making it simple for you to assign the work item manually. As you can see in the screenshot below, I am trying to assign work items to a user "Tarun" and a user "v2"; both are *not valid users in my team project collection*, so the tool creates the work items, gives me the work item ids and lets me know that, since the users are invalid, the work items could not be assigned. A better user experience!

    Image 6 – Behaviour if a work item is assigned to users that are not valid in the team project

    That's about it for the current release. I have some new features planned for the next release. Meanwhile, if you have any ideas/comments, please feel free to leave a comment. Stay tuned for more. Enjoy! Other posts on TFS Demo Setup can be found here.
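    Since the screenshots are not reproduced here, a sketch of what the DemoDictionary.xml blocks described above might look like, reconstructed from the post's description (element layout and query names are inferred, so treat them as an approximation):

        <TeamSettings Run="false">
          <!-- team creation settings omitted -->
        </TeamSettings>
        <TeamFavorites Run="true">
          <TeamFavorite>
            <Type>Query</Type>
            <Name>Blocked Tasks</Name>
          </TeamFavorite>
          <TeamFavorite>
            <Type>Query</Type>
            <Name>Open Impediments</Name>
          </TeamFavorite>
        </TeamFavorites>
        <TeamMembers Run="true">
          <!-- members to add to the existing team -->
        </TeamMembers>
        <WorkItems Run="false">
          <!-- work items to create and assign -->
        </WorkItems>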

    Read the article

  • SJS AS 9.1 U2 (GF v2 U2) - Patch 25 // GF v2.1 - Patch 19 // Sun GlassFish Enterprise Server v2.1.1 Patch 13

    - by arungupta
    SJS AS 9.1 U2 (GF v2 U2) patch 25 is a commercial (Restricted) patch (see Overview of GFv2) available as part of Oracle's Commercial Support for GlassFish. This release is also patch 19 of GlassFish 2.1 and patch 13 of GlassFish 2.1.1. The file-based patches were released on Sep 1, 2011; package-based patches were released on Sep 13, 2011.

    Release Overview

    Description:
    • SJS AS 9.1 U2 (GFv2 U2) - Patch 25 - File and Package-Based Patch for Solaris SPARC, Solaris x86, Linux, Windows and AIX.
    • GlassFish 2.1 - Patch 19 - File and Package-Based Patch for Solaris SPARC, Solaris x86, Linux, Windows and AIX.
    • GlassFish 2.1.1 - Patch 13 - File and Package-Based Patch for Solaris SPARC, Solaris x86, Linux, Windows and AIX.

    Patch Ids - This release comes in 3 different variants:
    • Package-based patches with HADB: Solaris SPARC - [128640-27], Solaris i586 - [128641-27], Linux RPM - [128642-27]
    • File-based patches with HADB: Solaris SPARC - [128643-27], Solaris i586 - [128644-27], Linux - [128645-27], Windows - [128646-27]
    • File-based patches without HADB: Solaris SPARC - [128647-27], Solaris i586 - [128648-27], Linux - [128649-27], Windows - [128650-27], AIX - [137916-27]

    Update Date: Nov 23, 2011

    Comment: Commercial (for-fee) release with regular bug fixes. This is patch 25 for SJS AS 9.1 U2; it is also patch 19 for GlassFish v2.1 and patch 13 for GlassFish v2.1.1. It contains the fixes from the previous patches plus fixes for 18 unique defects.

    Status: CURRENT

    Bugs Fixed in this Patch:
    • [12823919]: RESPONSE BYTECHUNK FLUSH WILL GENERATE A MIMEHEADER WHEN SESSION REPLICATION ON
    • [12818767]: INTEGRATE NEW GRIZZLY 1.0.40
    • [12807660]: BUILD, STAGE AND INTEGRATING HADB
    • [12807643]: INTEGRATE MQ 4.4 U2 P4
    • [12802648]: GLASSFISH BUILD FAILED DUE TO METRO INTEGRATION
    • [12799002]: JNDI RESOURCE NOT ENABLED IF TARGETTING USING ADMIN GUI ON GF 2.1.1 PATCH 11
    • [12794672]: ORG.APACHE.JASPER.RUNTIME.BODYCONTENTIMPL DOES NOT COMPACT CB BUFFER
    • [12772029]: BUG 12308270 - NEED HOTFIX FROM GF RUNNING OPENSSO
    • [12749346]: VERSION CHANGES FOR GLASSFISH V2.1.1 PATCH 13
    • [12749151]: INTEGRATING METRO 1.6.1-B01 INTO GF 2.1.1 P13
    • [12719221]: PORTUNIFICATION WSTCPPROTOCOLFINDER.FIND NULLPOINTEREXCEPTION THROWN
    • [12695620]: HADB: LOGBUFFERSIZE CALCULATED INCORRECTLY FOR VALUES 120 MB AND THE MEMORY FO
    • [12687345]: ENVIRONMENT VARIABLE PARSING FOR SUN_APPSVR_NOBACKUP CAN FAIL DEPENDING ENV VARS
    • [12547651]: GLASSFISH DISPLAY BUG
    • [12359965]: GEREQUESTURI RETURNS URI WITH NULL PREPENDED INTERMITTENT AFTER UPGRADE
    • [12308270]: SUNBT7020210 ENHANCE JAXRPC SOAP RESPONSE USE PREVIOUS CONFIGURED NAMESPACE PREF
    • [12308003]: SUNBT7018895 FAILURE TO DEPLOY OR RUN WEBSERVICE AFTER UPDATING TO GF 2.1.1 P07
    • [12246256]: SUNBT6739013 [RN]GLASSFISH/SUN APPLICATION INSTALLER CRASHES ON LINUX

    Additional Notes: More details about these bugs can be found at My Oracle Support.

    Read the article

  • Where should you put constants and why?

    - by Tim Meyer
    In our mostly large applications, we usually have only a few locations for constants:

    • One class for GUI and internal constants (tab page titles, group box titles, calculation factors, enumerations)
    • One class for database tables and columns (this part is generated code) plus readable names for them (manually assigned)
    • One class for application messages (logging, message boxes, etc.)

    The constants are usually separated into different structs in those classes. In our C++ applications, the constants are only declared in the .h file and the values are assigned in the .cpp file.

    One of the advantages is that all strings etc. are in one central place and everybody knows where to find them when something must be changed. This is especially something project managers seem to like, as people come and go and this way everybody can change such trivial things without having to dig into the application's structure. Also, you can easily change the titles of similar group boxes / tab pages etc. at once. Another aspect is that you can just print that class and give it to a non-programmer who can check whether the captions are intuitive, and whether messages to the user are too detailed or too confusing, etc.

    However, I see certain disadvantages:

    • Every single class is tightly coupled to the constants classes.
    • Adding/removing/renaming/moving a constant requires recompilation of at least 90% of the application (Note: changing the value doesn't, at least for C++). In one of our C++ projects with 1500 classes, this means around 7 minutes of compilation time (using precompiled headers; without them it's around 50 minutes) plus around 10 minutes of linking against certain static libraries. Building a speed-optimized release through the Visual Studio compiler takes up to 3 hours. I don't know if the huge number of class relations is the source, but it might well be.
    • You get driven into temporarily hard-coding strings straight into code because you want to test something very quickly and don't want to wait 15 minutes just for that test (and probably every subsequent one). Everybody knows what happens to the "I will fix that later" thoughts.
    • Reusing a class in another project isn't always that easy (mainly due to other tight couplings, but the constants handling doesn't make it easier).

    Where would you store constants like that? Also, what arguments would you bring in order to convince your project manager that there are better concepts which also comply with the advantages listed above? Feel free to give a C++-specific or independent answer.

    PS: I know this question is kind of subjective, but I honestly don't know of any better place than this site for this kind of question.

    Update on this project: I have news on the compile time issue. Following Caleb's and gbjbaanb's posts, I split my constants file into several other files when I had time. I also eventually split my project into several libraries, which was now possible much more easily. Compiling this in release mode showed that the auto-generated file which contains the database definitions (table and column names and more - over 8000 symbols) and builds up certain hashes caused the huge compile times in release mode. Deactivating MSVC's optimizer for the library which contains the DB constants now allowed us to reduce the total compile time of our project (several applications) in release mode from up to 8 hours to less than one hour! We have yet to find out why MSVC has such a hard time optimizing these files, but for now this change relieves a lot of pressure, as we no longer have to rely on nightly builds only. That fact - and other benefits, such as less tight coupling, better reusability, etc. - also showed that spending time splitting up the "constants" wasn't such a bad idea after all ;-)
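    A minimal sketch of the header/source split described in the question (names and values are made up for illustration). Because the values live only in the .cpp file, changing a value does not force dependents to recompile, which matches the behaviour the poster notes:

        // Constants.h -- declarations only
        namespace AppConstants
        {
            extern const char* const TAB_TITLE_SETTINGS;  // GUI caption
            extern const double      VAT_FACTOR;          // calculation factor
        }

        // Constants.cpp -- values assigned here; only this file recompiles when a value changes
        #include "Constants.h"

        namespace AppConstants
        {
            const char* const TAB_TITLE_SETTINGS = "Settings";
            const double      VAT_FACTOR         = 1.19;
        }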

    Read the article

  • Getting Help with 'SEPA' Questions

    - by MargaretW
    What is 'SEPA'? The Single Euro Payments Area (SEPA) is a self-regulatory initiative for the European banking industry championed by the European Commission (EC) and the European Central Bank (ECB). The aim of the SEPA initiative is to improve the efficiency of cross-border payments and the economies of scale by developing common standards, procedures, and infrastructure. The SEPA territory currently consists of 33 European countries -- the 28 EU states, together with Iceland, Liechtenstein, Monaco, Norway and Switzerland. Part of that infrastructure includes two new SEPA instruments that were introduced in 2008:

    • SEPA Credit Transfer (a Payables transaction in Oracle EBS)
    • SEPA Core Direct Debit (a Receivables transaction in Oracle EBS)

    A SEPA Credit Transfer (SCT) is an outgoing payment instrument for the execution of credit transfers in Euro between customer payment accounts located in SEPA. SEPA Credit Transfers are executed on behalf of an Originator holding a payment account with an Originator Bank in favor of a Beneficiary holding a payment account at a Beneficiary Bank. In R12 of Oracle applications, the current SEPA credit transfer implementation is based on Version 5 of the "SEPA Credit Transfer Scheme Customer-To-Bank Implementation Guidelines" and the "SEPA Credit Transfer Scheme Rulebook" issued by the European Payments Council (EPC). These guidelines define the rules to be applied to the UNIFI (ISO20022) XML message standards for the implementation of the SEPA Credit Transfers in the customer-to-bank space. This format is compliant with SEPA Credit Transfer version 6.

    A SEPA Core Direct Debit (SDD) is an incoming payment instrument used for making domestic and cross-border payments within the 33 countries of SEPA, wherein the debtor (payer) authorizes the creditor (payee) to collect the payment from his bank account. The payment can be a fixed amount like a mortgage payment, or variable amounts such as those of invoices. The "SEPA Core Direct Debit" scheme replaces various country-specific direct debit schemes currently prevailing within the SEPA zone. SDD is based on the ISO20022 XML messaging standards, version 5.0 of the "SEPA Core Direct Debit Scheme Rulebook", and the "SEPA Direct Debit Core Scheme Customer-to-Bank Implementation Guidelines". This format is also compliant with SEPA Core Direct Debit version 6.

    EU Regulation #260/2012 established the technical and business requirements for both instruments in euro. The regulation is referred to as the "SEPA end-date regulation", and also defines the deadlines for the migration to the new SEPA instruments:

    • Euro Member States: February 1, 2014
    • Non-Euro Member States: October 31, 2016

    Oracle and SEPA: Within the Oracle E-Business Suite of applications, Oracle Payables (AP), Oracle Receivables (AR), and Oracle Payments (IBY) provide SEPA transaction capabilities for the following releases, as noted:

    • Release 11.5.10.x - AP & AR
    • Release 12.0.x - AP & AR & IBY
    • Release 12.1.x - AP & AR & IBY
    • Release 12.2.x - AP & AR & IBY

    Resources: To assist our customers in migrating, using, and troubleshooting SEPA functionality, a number of resource documents related to SEPA are available on My Oracle Support (MOS), including:

    • R11i: AP: White Paper - SEPA Credit Transfer V5 support in Oracle Payables, Doc ID 1404743.1
    • R11i: AR: White Paper - SEPA Core Direct Debit v5.0 support in Oracle Receivables, Doc ID 1410159.1
    • R12: IBY: White Paper - SEPA Credit Transfer v5 support in Oracle Payments, Doc ID 1404007.1
    • R12: IBY: White Paper - SEPA Core Direct Debit v5 support in Oracle Payments, Doc ID 1420049.1
    • R11i/R12: AP/AR/IBY: Get Help Setting Up, Using, and Troubleshooting SEPA Payments in Oracle, Doc ID 1594441.2
    • R11i/R12: Single European Payments Area (SEPA) - UPDATES, Doc ID 1541718.1
    • R11i/R12: FAQs for Single European Payments Area (SEPA), Doc ID 791226.1

    Read the article

  • MSBuild task fails because "Any CPU" solution is built out of order

    - by Art Vandalay
    I have two solutions to build in Team Build: one is the application itself, the other is the WiX installer. I want to build the application using the "Any CPU" build configuration and the installer using "x86". I've listed the "Any CPU" solution first in my project file, but Team Build always builds the "x86" solution first. I'm setting BuildSolutionsInParallel = false, but it still builds the solutions in the reverse listed order. If I change the first solution to "Mixed Platform", it works fine. How can I get the solutions to build in the order listed in the project file?

        <Project ...>
          <PropertyGroup>
            <!-- We want to build the install solution after the build solution -->
            <BuildSolutionsInParallel>false</BuildSolutionsInParallel>
          </PropertyGroup>
          <ItemGroup>
            <SolutionToBuild Include="$(BuildProjectFolderPath)/Pricer/Pricer.sln">
              <Targets></Targets>
              <Properties></Properties>
            </SolutionToBuild>
            <SolutionToBuild Include="$(BuildProjectFolderPath)/Pricer/Pricer.Install/Pricer.Install.sln">
              <Targets></Targets>
              <Properties></Properties>
            </SolutionToBuild>
          </ItemGroup>
          <ItemGroup>
            <ConfigurationToBuild Include="Release|Any CPU">
              <FlavorToBuild>Release</FlavorToBuild>
              <PlatformToBuild>Any CPU</PlatformToBuild>
            </ConfigurationToBuild>
            <ConfigurationToBuild Include="Release|x86">
              <FlavorToBuild>Release</FlavorToBuild>
              <PlatformToBuild>x86</PlatformToBuild>
            </ConfigurationToBuild>
          </ItemGroup>
        </Project>
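    One commonly suggested workaround, shown only as a sketch and not verified against this particular TFSBuild.proj: take the installer solution out of SolutionToBuild and invoke it explicitly in an extensibility target after the main compile, so its ordering no longer depends on how Team Build schedules configurations:

        <Target Name="AfterCompile">
          <!-- Build the installer only after the Any CPU application build has finished -->
          <MSBuild Projects="$(BuildProjectFolderPath)/Pricer/Pricer.Install/Pricer.Install.sln"
                   Properties="Configuration=Release;Platform=x86" />
        </Target>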

    Read the article

  • Can somebody explain the differences, status and future of the various ASP.NET AJAX libraries and to

    - by tjrobinson
    I'm confused about the differences and relationships between the various Microsoft ASP.NET AJAX components/libraries/toolkits, and particularly the naming of them. It starts off relatively simple with ASP.NET AJAX itself:

    • ASP.NET AJAX 1.0 (available for ASP.NET 2.0 in a separate package called ASP.NET 1.0 Extensions)
    • ASP.NET AJAX 3.5 (included with ASP.NET 3.5)
    • ASP.NET AJAX 4.0 (included with ASP.NET 4.0)

    Then come the various projects on CodePlex and elsewhere:

    • ASP.NET AJAX Control Toolkit (aka the original Ajax Control Toolkit) - Samples, CodePlex. It seems that the September 2009 release is the final release of the original Ajax Control Toolkit and that it has been superseded by...
    • Ajax Control Toolkit in ASP.NET Ajax Library - It looks like the old ASP.NET AJAX Control Toolkit has now become part of a larger ASP.NET Ajax Library but is still maintained separately on CodePlex. This release is in beta at the time of writing, so presumably if I want to use the "Control Toolkit" I should stick with the September 2009 release of the original ASP.NET AJAX Control Toolkit. CodePlex.
    • Microsoft Ajax Library Preview - Is this the same as the ASP.NET Ajax Library mentioned above, just with a confusing name variation? Is the "Control Toolkit" included in Preview 6, and is it newer or older than the code in the Ajax Control Toolkit in ASP.NET Ajax Library? CodePlex.
    • Microsoft ASP.NET Ajax Wiki - note the inconsistent insertion of ASP.NET into the name.

    Links to useful articles and roadmaps would be useful.

    Read the article

  • How to Compile Mod_Python 3.3.1 for Python 2.6 and Apache 2.2 on Windows?

    - by John
    I have no experience compiling code other than using Visual Studio's Build command. I am hoping we can create a step-by-step guide for compiling mod_python on Windows. Please be as descriptive as possible. This is what I've done so far:

    Download and install Python 2.6.2
    Download and install Apache 2.2.11
    Download the most recent source code for mod_python from svn

    From here I'm lost as to what the next step is. I've downloaded Microsoft Visual C++ 2008 Express Edition. As mentioned by Hao, I've already tried the tutorial mentioned in that link. Here are the error messages I'm receiving with that tutorial:

        C:\mod_python\dist> build_installer.bat
        Could Not Find C:\mod_python\src\*.obj
        running bdist_wininst
        running build
        running build_py
        creating build
        creating build\lib.win32-2.6
        creating build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\apache.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\cache.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\cgihandler.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\Cookie.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\importer.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\psp.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\publisher.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\python22.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\Session.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\testhandler.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\util.py -> build\lib.win32-2.6\mod_python
        copying C:\mod_python\lib\python\mod_python\__init__.py -> build\lib.win32-2.6\mod_python
        running build_ext
        building 'mod_python_so' extension
        creating build\temp.win32-2.6
        creating build\temp.win32-2.6\Release
        creating build\temp.win32-2.6\Release\mod_python
        creating build\temp.win32-2.6\Release\mod_python\src
        C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG
          -DWIN32 -DNDEBUG -D_WINDOWS -IC:\mod_python\src\include -Ic:\apache\include -IC:\Python26\include
          -IC:\Python26\PC /TcC:\mod_python\src\mod_python.c
          /Fobuild\temp.win32-2.6\Release\mod_python\src\mod_python.obj
        mod_python.c
        c:\apache\include\ap_config.h(25) : fatal error C1083: Cannot open include file: 'apr.h': No such file or directory
        error: command '"C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe"' failed with exit status 2
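    The fatal error at the end means cl.exe cannot find the APR (Apache Portable Runtime) headers that ap_config.h pulls in; they need to be on the include path along with the Apache headers. As a hedged illustration of the idea only -- this is not mod_python's actual setup.py, and the APR path is an assumption you would replace with wherever apr.h really lives on your machine -- the distutils call that builds the extension would need something along these lines:

        # Sketch of a distutils build with the APR headers on the include path.
        # All paths are assumptions; adjust them to your Python/Apache/APR install locations.
        from distutils.core import setup, Extension

        mod_python_so = Extension(
            "mod_python_so",
            sources=[r"src\mod_python.c"],        # plus the remaining src\*.c files
            include_dirs=[
                r"C:\mod_python\src\include",
                r"C:\apache\include",
                r"C:\apr\include",                # assumed location of apr.h and friends
            ],
        )

        setup(name="mod_python", version="3.3.1", ext_modules=[mod_python_so])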

    Read the article

  • MSBuild Validating Properties

    - by Brian Gillespie
    I'm working on a reusable MSBuild target that will be consumed by several other tasks. This target requires that several properties be defined. What's the best way to validate that the properties are defined, throwing an error if they are not?

    Two attempts that I almost like:

        <?xml version="1.0" encoding="utf-8" ?>
        <Project ToolsVersion="3.5" DefaultTarget="Release" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <Target Name="Release">
            <Error Text="Property PropA required" Condition="'$(PropA)' == ''"/>
            <Error Text="Property PropB required" Condition="'$(PropB)' == ''"/>
            <!-- The body of the task -->
          </Target>
        </Project>

    Here's an attempt at batching. It's ugly because of the extra "Name" parameter. Is it possible to use the Include attribute instead?

        <?xml version="1.0" encoding="utf-8" ?>
        <Project ToolsVersion="3.5" DefaultTarget="Release" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <Target Name="Release">
            <!-- MSBuild BuildInParallel="true" Projects="@(ProjectsToBuild)"/ -->
            <ItemGroup>
              <RequiredProperty Include="PropA"><Name>PropA</Name></RequiredProperty>
              <RequiredProperty Include="PropB"><Name>PropB</Name></RequiredProperty>
              <RequiredProperty Include="PropC"><Name>PropC</Name></RequiredProperty>
            </ItemGroup>
            <Error Text="Property %(RequiredProperty.Name) required"
                   Condition="'$(%(RequiredProperty.Name))' == ''" />
          </Target>
        </Project>
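    For the second attempt, the built-in %(Identity) item metadata (which is simply the value of the Include attribute) should remove the need for the extra Name metadata. A sketch along the same lines as the original, offered as an untested suggestion rather than a verified answer:

        <?xml version="1.0" encoding="utf-8" ?>
        <Project ToolsVersion="3.5" DefaultTarget="Release" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <Target Name="Release">
            <ItemGroup>
              <RequiredProperty Include="PropA" />
              <RequiredProperty Include="PropB" />
              <RequiredProperty Include="PropC" />
            </ItemGroup>
            <!-- %(RequiredProperty.Identity) batches over the Include values, one Error check per property -->
            <Error Text="Property %(RequiredProperty.Identity) required"
                   Condition="'$(%(RequiredProperty.Identity))' == ''" />
          </Target>
        </Project>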

    Read the article

  • Statically Compiling QWebKit 4.6.2

    - by geeko
    I tried to compile Qt+Webkit statically with MS VS 2008 and this worked:

        C:\Qt\4.6.2> configure -release -static -opensource -no-fast -no-exceptions -no-accessibility -no-rtti
          -no-stl -no-opengl -no-openvg -no-incredibuild-xge -no-style-plastique -no-style-cleanlooks
          -no-style-motif -no-style-cde -no-style-windowsce -no-style-windowsmobile -no-style-s60 -no-gif
          -no-libpng -no-libtiff -no-libjpeg -no-libmng -no-qt3support -no-mmx -no-3dnow -no-sse -no-sse2
          -no-iwmmxt -no-openssl -no-dbus -platform win32-msvc2008 -arch windows -no-phonon
          -no-phonon-backend -no-multimedia -no-audio-backend -no-script -no-scripttools -webkit -no-declarative

    However, I get these errors whenever building a project that links statically to QtWebKit:

        1>   Creating library C:\Users\Geeko\Desktop\Qt\TestQ\Release\TestQ.lib and object C:\Users\Geeko\Desktop\Qt\TestQ\Release\TestQ.exp
        1>QtWebKit.lib(PluginPackageWin.obj) : error LNK2019: unresolved external symbol _VerQueryValueW@16 referenced in function "class WebCore::String __cdecl WebCore::getVersionInfo(void * const,class WebCore::String const &)" (?getVersionInfo@WebCore@@YA?AVString@1@QAXABV21@@Z)
        1>QtWebKit.lib(PluginPackageWin.obj) : error LNK2019: unresolved external symbol _GetFileVersionInfoW@16 referenced in function "private: bool __thiscall WebCore::PluginPackage::fetchInfo(void)" (?fetchInfo@PluginPackage@WebCore@@AAE_NXZ)
        1>QtWebKit.lib(PluginPackageWin.obj) : error LNK2019: unresolved external symbol _GetFileVersionInfoSizeW@8 referenced in function "private: bool __thiscall WebCore::PluginPackage::fetchInfo(void)" (?fetchInfo@PluginPackage@WebCore@@AAE_NXZ)
        1>QtWebKit.lib(PluginDatabaseWin.obj) : error LNK2019: unresolved external symbol __imp__PathRemoveFileSpecW@4 referenced in function "class WebCore::String __cdecl WebCore::safariPluginsDirectory(void)" (?safariPluginsDirectory@WebCore@@YA?AVString@1@XZ)
        1>QtWebKit.lib(PluginDatabaseWin.obj) : error LNK2019: unresolved external symbol __imp__SHGetValueW@24 referenced in function "void __cdecl WebCore::addWindowsMediaPlayerPluginDirectory(class WTF::Vector &)" (?addWindowsMediaPlayerPluginDirectory@WebCore@@YAXAAV?$Vector@VString@WebCore@@$0A@@WTF@@@Z)
        1>QtWebKit.lib(PluginDatabaseWin.obj) : error LNK2019: unresolved external symbol __imp__PathCombineW@12 referenced in function "void __cdecl WebCore::addMacromediaPluginDirectories(class WTF::Vector &)" (?addMacromediaPluginDirectories@WebCore@@YAXAAV?$Vector@VString@WebCore@@$0A@@WTF@@@Z)
        1>C:\Users\Geeko\Desktop\Qt\TestQ\Release\TestQ.exe : fatal error LNK1120: 6 unresolved externals

    Do I need to check something in the Qt project options? I have QtCore, QtGui, Network and WebKit checked.
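    Those unresolved externals are Win32 API functions rather than Qt symbols: the GetFileVersionInfo*/VerQueryValue family comes from version.lib and the Path*/SHGetValue helpers from shlwapi.lib, libraries the plugin code inside QtWebKit depends on but which a statically linked consumer has to pull in itself. A hedged sketch of the extra lines for the application's .pro file (qmake maps the -l names to the MSVC import libraries):

        # Link the Windows libraries QtWebKit's plugin support needs when linking statically
        win32 {
            LIBS += -lversion -lshlwapi
        }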

    Read the article

  • iPhone SDK Core Data: Fetch all entities with a nil relationship?

    - by Harkonian
    I have a core data project that has Books and Authors. In the data model, Authors has a to-many relationship to Books and Books has a 1-1 relationship with Authors. I'm trying to pull all Books that do not have an Author. No matter how I try it, no results are returned. In my predicate I've also tried = NIL, == nil, == NIL. Any suggestions would be appreciated.

        // fetch all books without authors
        - (NSMutableArray *)fetchOrphanedBooks {
            NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
            NSEntityDescription *entity = [NSEntityDescription entityForName:@"Book"
                                                      inManagedObjectContext:self.managedObjectContext];
            [fetchRequest setEntity:entity];
            [fetchRequest setFetchBatchSize:20];

            NSPredicate *predicate = [NSPredicate predicateWithFormat:@"author = nil"];
            [fetchRequest setPredicate:predicate];

            NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:NO];
            NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
            [fetchRequest setSortDescriptors:sortDescriptors];

            NSString *sectionKey = @"name"; //nil;
            NSFetchedResultsController *aFetchedResultsController =
                [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                                    managedObjectContext:managedObjectContext
                                                      sectionNameKeyPath:sectionKey
                                                               cacheName:nil];
            BOOL success = [aFetchedResultsController performFetch:nil];

            NSMutableArray *orphans = nil;

            // this is always 0
            NSLog(@"Orphans found: %i", aFetchedResultsController.fetchedObjects.count);

            if (aFetchedResultsController.fetchedObjects.count > 0) {
                orphans = [[NSMutableArray alloc] init];
                for (Note *note in aFetchedResultsController.fetchedObjects) {
                    if (note.subject == nil) {
                        [orphans addObject:note];
                    }
                }
            }

            [aFetchedResultsController release];
            [fetchRequest release];
            [sortDescriptor release];
            [sortDescriptors release];
            return [orphans autorelease];
        }
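    One way to narrow this down, offered only as a diagnostic sketch (the entity and relationship names are taken from the question; everything else is an assumption): run the same predicate straight through the managed object context, without the NSFetchedResultsController, and pass a real NSError so any store-level complaint about the predicate or the relationship name becomes visible.

        // Diagnostic sketch: fetch Books with no author directly from the context.
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"Book"
                                       inManagedObjectContext:self.managedObjectContext]];
        [request setPredicate:[NSPredicate predicateWithFormat:@"author == nil"]];

        NSError *error = nil;
        NSArray *orphanedBooks = [self.managedObjectContext executeFetchRequest:request error:&error];
        NSLog(@"Books without an author: %u, error: %@", (unsigned)[orphanedBooks count], error);
        [request release];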

    Read the article

  • MFMailComposeViewController hangs my app

    - by Neal L
    Hi, I am trying to add email functionality to my app. I can get the MFMailComposeViewController to display correctly and pre-populate its subject and body, but for some reason when the user clicks on the "Cancel" or "Send" buttons in the nav bar, the app just hangs. I inserted an NSLog() statement into the first line of mailComposeController:didFinishWithResult:error: and it doesn't even print that line out to the console. Does anybody have an idea what would cause the MFMailComposeViewController to hang like that?

    Here is my code from the header:

        #import "ManagedObjectEditor.h"
        #import <MessageUI/MessageUI.h>

        @interface MyManagedObjectEditor : ManagedObjectEditor <MFMailComposeViewControllerDelegate,
                                                                UIImagePickerControllerDelegate,
                                                                UINavigationControllerDelegate> {
        }

        - (IBAction)emailObject;

        @end

    from the implementation file:

        if ([MFMailComposeViewController canSendMail]) {
            MFMailComposeViewController *mailComposer = [[MFMailComposeViewController alloc] init];
            mailComposer.delegate = self;
            [mailComposer setSubject:NSLocalizedString(@"An email from me", @"An email from me")];
            [mailComposer setMessageBody:emailString isHTML:YES];
            [self presentModalViewController:mailComposer animated:YES];
            [mailComposer release];
        }
        [error release];
        [emailString release];

    and here is the code from the callback:

        #pragma mark -
        #pragma mark Mail Compose Delegate Methods

        - (void)mailComposeController:(MFMailComposeViewController *)controller
                  didFinishWithResult:(MFMailComposeResult)result
                                error:(NSError *)error {
            NSLog(@"in didFinishWithResult:");
            switch (result) {
                case MFMailComposeResultCancelled:
                    NSLog(@"cancelled");
                    break;
                case MFMailComposeResultSaved:
                    NSLog(@"saved");
                    break;
                case MFMailComposeResultSent:
                    NSLog(@"sent");
                    break;
                case MFMailComposeResultFailed: {
                    UIAlertView *alert = [[UIAlertView alloc]
                            initWithTitle:NSLocalizedString(@"Error sending email!", @"Error sending email!")
                                  message:[error localizedDescription]
                                 delegate:nil
                        cancelButtonTitle:NSLocalizedString(@"Bummer", @"Bummer")
                        otherButtonTitles:nil];
                    [alert show];
                    [alert release];
                    break;
                }
                default:
                    break;
            }
            [self dismissModalViewControllerAnimated:YES];
        }

    Thanks!
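    One thing worth double-checking in the snippet above, offered as a hedged suggestion rather than a confirmed diagnosis: MFMailComposeViewController has its own mailComposeDelegate property, separate from the delegate property it inherits as a navigation controller, and mailComposeController:didFinishWithResult:error: is only sent to the former. A minimal sketch of the same setup with that one line changed:

        MFMailComposeViewController *mailComposer = [[MFMailComposeViewController alloc] init];
        // mailComposeDelegate (not delegate) receives mailComposeController:didFinishWithResult:error:
        mailComposer.mailComposeDelegate = self;
        [mailComposer setSubject:NSLocalizedString(@"An email from me", @"An email from me")];
        [mailComposer setMessageBody:emailString isHTML:YES];
        [self presentModalViewController:mailComposer animated:YES];
        [mailComposer release];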

    Read the article

  • Want to learn Objective-C but syntax is very confusing

    - by Sahat
    Coming from a Java background, I am guessing this is expected. I would really love to learn Objective-C and start developing Mac apps, but the syntax is just killing me. For example:

        -(void) setNumerator: (int) n {
            numerator = n;
        }

    What is that dash for, and why is it followed by void in parentheses? I've never seen void in parentheses in C/C++, Java or C#. Why don't we have a semicolon after (int) n? But we do have it here:

        -(void) setNumerator: (int) n;

    And what's with this alloc, init, release process?

        myFraction = [Fraction alloc];
        myFraction = [myFraction init];
        [myFraction release];

    And why is it [myFraction release]; and not myFraction = [myFraction release]; ?

    And lastly, what's with the @ signs, and what is the Java equivalent of this implementation block?

        @implementation Fraction
        @end

    I am currently reading Programming in Objective-C 2.0 and it's just so frustrating learning this new syntax for someone from a Java background.
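    For orientation, here is a small annotated example covering the pieces asked about; it is a generic sketch in the spirit of the book's Fraction class, not code copied from it:

        // Fraction.h -- a method declaration ends with a semicolon
        @interface Fraction : NSObject {
            int numerator;                    // instance variable
        }
        // "-" marks an instance method ("+" would be a class method);
        // "(void)" is the return type; "(int) n" declares one int parameter named n.
        - (void)setNumerator:(int)n;
        @end

        // Fraction.m -- @implementation supplies the method bodies, roughly like a Java class body
        @implementation Fraction
        - (void)setNumerator:(int)n {
            numerator = n;                    // a definition has a body instead of a semicolon
        }
        @end

        // Usage: alloc allocates the object, init initializes it, release gives up ownership.
        Fraction *myFraction = [[Fraction alloc] init];
        [myFraction setNumerator:3];
        [myFraction release];                 // release returns void, so there is nothing to assign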

    Read the article

  • Getting memory allocation at ActivityIndicator in iPhone SDK

    - by monish
    Hi guys, here I'm getting a memory allocation problem at the activity indicator, and my code is:

        - (id)init {
            if (self = [super init]) {
                self.title = @"Release Details";

                contentView = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]];
                contentView.backgroundColor = [UIColor clearColor];
                self.view = contentView;
                [contentView release];

                CGRect frame = CGRectMake(0, 0, 320, 1500);
                containerView = [[UIView alloc] initWithFrame:frame];

                webView = [[UIWebView alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
                webView.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"background1.png"]];
                webView.delegate = self;
                [containerView addSubview:webView];

                CGRect activityViewframe = CGRectMake(20, 8, 20, 20);
                progressInd = [[UIActivityIndicatorView alloc] initWithFrame:activityViewframe];
                progressInd.activityIndicatorViewStyle = UIActivityIndicatorViewStyleWhite;
                progressInd.autoresizingMask = (UIViewAutoresizingFlexibleLeftMargin |
                                                UIViewAutoresizingFlexibleRightMargin |
                                                UIViewAutoresizingFlexibleTopMargin |
                                                UIViewAutoresizingFlexibleBottomMargin);
                [containerView addSubview:progressInd];
                [progressInd startAnimating];
                progressInd.hidden = NO;
                [progressInd stopAnimating];

                [self.view addSubview:containerView];
                isFetch = YES;
            }
            return self;
        }

        - (void)displayInProgressRightBarButton {
            UIView *rightBarButtonView = [[UIView alloc] initWithFrame:CGRectMake(270, 5, 45, 35)];
            [rightBarButtonView addSubview:progressInd];
            UIBarButtonItem *buttonItem = [[UIBarButtonItem alloc] initWithCustomView:rightBarButtonView];
            self.navigationItem.rightBarButtonItem = buttonItem;
            [rightBarButtonView release];
            [buttonItem release];
        }

        - (void)webViewDidStartLoad:(UIWebView *)webView {
            [self displayInProgressRightBarButton];
            [progressInd startAnimating];
            [UIApplication sharedApplication].networkActivityIndicatorVisible = YES;
        }

        - (void)webViewDidFinishLoad:(UIWebView *)webView {
            [progressInd stopAnimating];
            [UIApplication sharedApplication].networkActivityIndicatorVisible = NO;
        }

    Also, I released progressInd in dealloc, even though it is showing a memory allocation at progressInd = [[UIActivityIndicatorView alloc] initWithFrame:activityViewframe]; in init. Can anyone help me solve this? Anyone's help will be much appreciated. Thank you, Monish.
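    A hedged observation rather than a definite answer: leak tools report the line where a leaked object was allocated, so the alloc in init being flagged usually means something created there is never fully released; in the code above, webView and containerView are alloc'd in init and never balanced with a release. A minimal MRC dealloc sketch, assuming these ivars are not retained or released anywhere else:

        // Sketch only -- balance every alloc from init under manual reference counting.
        - (void)dealloc {
            webView.delegate = nil;      // avoid delegate callbacks into a deallocated controller
            [webView release];
            [progressInd release];
            [containerView release];
            [super dealloc];
        }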

    Read the article

  • [NSFetchedResultsController sections] returns nil?

    - by Chris
    Hi everyone, I have been trying to resolve this for days at this stage and I'm hoping you can help. I have two ViewControllers which query two different tables from the same database using Core Data. The first ViewController is opened with the app and displays fine. The second is called from within the first ViewController, using a pretty standard fetch setup:

        - (NSFetchedResultsController *)fetchedClients {
            // Set up the fetched results controller if needed.
            if (fetchedClients == nil) {
                // Create the fetch request for the entity.
                NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
                // Edit the entity name as appropriate.
                NSEntityDescription *entity = [NSEntityDescription entityForName:@"Clients"
                                                          inManagedObjectContext:managedObjectContext];
                [fetchRequest setEntity:entity];

                // Edit the sort key as appropriate.
                NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"clientsName" ascending:YES];
                NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
                [fetchRequest setSortDescriptors:sortDescriptors];

                // Edit the section name key path and cache name if appropriate.
                // nil for section name key path means "no sections".
                NSFetchedResultsController *aFetchedResultsController =
                    [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                                        managedObjectContext:managedObjectContext
                                                          sectionNameKeyPath:nil
                                                                   cacheName:@"Root"];
                aFetchedResultsController.delegate = self;
                self.fetchedClients = aFetchedResultsController;

                [aFetchedResultsController release];
                [fetchRequest release];
                [sortDescriptor release];
                [sortDescriptors release];
            }
            return fetchedClients;
        }

    When I call [self.fetchedClients sections], I get a nil (0x0) return. I have examined the database using an external application to ensure data exists in the "Clients" table. Can anyone think of a reason why [self.fetchedClients sections] would return nil?

    Many thanks for any help you can provide. Regards, Chris
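    One detail that stands out, offered as a hedged suggestion: unlike the usual template code, the accessor above never calls performFetch: on the controller it creates, and an NSFetchedResultsController has no section or object data to hand back until a fetch has actually been executed. A minimal sketch of the missing step, run once after the controller is created (or before the table view first asks for data):

        NSError *error = nil;
        if (![self.fetchedClients performFetch:&error]) {
            NSLog(@"Fetch for Clients failed: %@, %@", error, [error userInfo]);
        }
        // Only after a successful performFetch should [self.fetchedClients sections] be populated.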

    Read the article

  • First Letter Section Headers with Core Data

    - by Cory Imdieke
    I'm trying to create a list of people sorted in a tableView of sections with the first letter as the title for each section - a la Address Book. I've got it all working, though there is a bit of an issue with the sort order. Here is the way I'm doing it now:

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"Contact" inManagedObjectContext:context]];

        NSSortDescriptor *fullName = [[NSSortDescriptor alloc] initWithKey:@"fullName" ascending:YES];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:fullName, nil];
        [request setSortDescriptors:sortDescriptors];
        [fullName release];
        [sortDescriptors release];

        NSError *error = nil;
        [resultController release];
        resultController = [[NSFetchedResultsController alloc] initWithFetchRequest:request
                                                               managedObjectContext:context
                                                                 sectionNameKeyPath:@"firstLetter"
                                                                          cacheName:nil];
        [resultController performFetch:&error];
        [request release];

    fullName is a standard property, and firstLetter is a transient property which returns - as you'd expect - the first letter of the fullName.

    95% of the time, this works perfectly. The problem is the result controller expects these two "lists" (the sorted fullName list and the sorted firstLetter list) to match exactly. If I have 2 contacts like John and Jack, my fullName list would sort these as Jack, John every time but my firstLetter list might sort them as John, Jack sometimes as it's only sorting by the first letter and leaving the rest to chance. When these lists don't match up, I get a blank tableView with 0 items in it.

    I'm not really sure how I should go about fixing this issue, but it's very frustrating. Has anyone else run into this? What did you guys find out?

    Read the article

  • Spring maven libraries

    - by Calm Storm
    I would like to know why some of the libraries are not released during a normal release cycle. For example, from http://repo2.maven.org/maven2/org/springframework/, while spring-core has 3.0.3-RELEASE, spring-remoting and spring-jmx were released only in 2.0.8. Can someone tell me what this would mean? I agree that if there are no changes in a component, say spring-jmx, then they don't have to release it, but since 90% of the world uses Maven for dependency management, can they not just re-release the same libs (of spring-remoting and spring-jmx)?

    I ask this because I declare my deps like:

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-remoting</artifactId>
            <version>${spring.version}</version>
        </dependency>

    and I would prefer supplying one spring.version instead of keeping version numbers up to date for all components. The four libraries of interest to me are spring-dao, spring-support, spring-jmx, spring-remoting.
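    In the meantime, one low-friction workaround is to keep a second version property in the POM for the artifacts that stopped at 2.0.8, so the bulk of the dependencies still track a single number. A sketch (the property names are just a convention I'm assuming, not anything Spring or Maven mandates):

        <properties>
            <spring.version>3.0.3.RELEASE</spring.version>
            <spring.legacy.version>2.0.8</spring.legacy.version>
        </properties>

        <dependencies>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-core</artifactId>
                <version>${spring.version}</version>
            </dependency>
            <!-- modules that were last published at 2.0.8 pin to the legacy property -->
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-remoting</artifactId>
                <version>${spring.legacy.version}</version>
            </dependency>
        </dependencies>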

    Read the article

< Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >