Search Results

Search found 18014 results on 721 pages for 'build automation'.


  • Oracle University New Courses (Calendar Week 23)

    - by swalker
    Last week, Oracle University published the following new courses (or new versions of them):

    Engineered Systems: Exadata Database Machine Administration Workshop (Training On Demand)
    Development Tools: Oracle Fusion Middleware 11g: Build Applications with ADF I (Training On Demand)
    Fusion Middleware: Oracle AIA Foundation Pack 11g: Developing Applications (Training On Demand); Oracle Exalogic Elastic Cloud Administration (Training On Demand); Oracle GoldenGate 11g Fundamentals for Oracle (Training On Demand); Oracle Fusion Middleware 11g: Build Applications with ADF I (Training On Demand); Oracle WebCenter Portal 11g: Spaces Administration (3 days)
    Java: Architect Enterprise Applications with Java EE (5 days)
    Hyperion: Oracle Hyperion Planning 11.1.2: Create & Manage Applications (Training On Demand); Oracle Hyperion Financial Mgmt 11.1.2: Create & Manage Applications (Training On Demand)

    If you would like further details or information about course dates, simply contact your local Oracle University team. Stay connected with Oracle University: LinkedIn, OracleMix, Twitter, Facebook, Google+

    Read the article

  • Building a Debian package with two buildsystems

    - by queueoverflow
    I have a package that needs to be built with both a regular makefile and a setup.py. The thing is that the Debian packaging magic that is invoked via debuild would recognize a makefile and do the right make and make install DESTDIR=??? thing and get it working right. When I only have a setup.py sitting there and have dh $@ --with python3 --buildsystem pybuild in debian/rules, it will correctly install the Python module with python3 setup.py build and python3 setup.py install --install-layout deb --root=??? ??? I do not know all those flags, and I think that I do not need to. I just want the makefile magic to happen, and then the setup.py magic. How can I tell debuild to do both? When I do the following in debian/rules

        %:
            dh $@
            dh $@ --with python3 --buildsystem pybuild

    it will only put the first one into the resulting package. I tried to delete the debhelper.log between those, but that did not change much.

    Read the article

  • Where can I get fresh new Wine DLLs?

    - by SuperScript
    I was being careless today and accidentally deleted one of the .dlls in my Ubuntu Wine installation, ~/.wine/drive_c/windows/system32/ole32.dll to be exact, and I need a fresh copy of only this one .dll. I know that reinstalling will fix it, but I have installed quite a few programs and do not want to do something so drastic just to fix this one problem. So, I'm wondering if there is somewhere that I can download the original .dll as it came with my original Wine installation. I have found the SourceForge repository, but it only has .h and .c files and I do not know how to build them into a .dll. Can anyone give me a link to download, or instructions to build, my missing .dlls?

    Read the article

  • What "version naming convention" do you use?

    - by rjstelling
    Are different version naming conventions suited to different projects? What do you use and why? Personally, I prefer a build number in hexadecimal (e.g. 11BCF), which should be incremented very regularly, and then for customers a simple 3-digit version number, i.e. 1.1.3.

        1.2.3 (11BCF) <- Build number, should correspond with a revision in source control
        ^ ^ ^
        | | |
        | | +--- Minor bugs, spelling mistakes, etc.
        | +----- Minor features, major bug fixes, etc.
        +------- Major version, UX changes, file format changes, etc.
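    As a concrete illustration only (this is not from the original post; the struct and field names are invented), here is a minimal C sketch of how such a version string might be assembled, assuming the hexadecimal build number is supplied by the build system as a plain integer:

        #include <stdio.h>

        /* Hypothetical version record: customer-facing major.minor.patch plus an
         * internal build number that is printed in hexadecimal. */
        struct version {
            int major;
            int minor;
            int patch;
            unsigned int build;   /* e.g. injected from source control / CI */
        };

        int main(void)
        {
            struct version v = { 1, 2, 3, 0x11BCF };

            /* Customer-facing form: just the three-part number. */
            printf("%d.%d.%d\n", v.major, v.minor, v.patch);

            /* Internal form with the hex build number: prints "1.2.3 (11BCF)". */
            printf("%d.%d.%d (%X)\n", v.major, v.minor, v.patch, v.build);
            return 0;
        }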

    Read the article

  • Is it possible to develop apps with Kendo UI to submit to iOS App Store?

    - by Farshid
    The number of platforms I have to develop apps for is increasing, and it brings me a lot of stress to learn new technologies for each target platform. I found out that Telerik's Kendo UI is very good for building websites that look and feel native on mobile platforms (e.g. iOS and Android). My question is: is it possible to build HTML5 apps and deploy them on the iTunes App Store and Google Play? Please note that I am eager to know about the possibility of creating apps (complete apps bundled in the standard format of Apple and Google for distribution in their respective mobile app markets), not websites.

    Read the article

  • Solaris 11.1 changes building of code past the point of __NORETURN

    - by alanc
    While Solaris 11.1 was under development, we started seeing some errors in the builds of the upstream X.Org git master sources, such as:

        "Display.c", line 65: Function has no return statement : x_io_error_handler
        "hostx.c", line 341: Function has no return statement : x_io_error_handler

    from functions that were defined to match a specific callback definition that declared them as returning an int if they did return, but these were calling exit() instead of returning, so hadn't listed a return value.

    These had been generating warnings for years which we'd been ignoring, but X.Org has made enough progress in cleaning up code for compiler warnings and static analysis issues lately that the community turned up the default error levels, including the gcc flag -Werror=return-type and the equivalent Solaris Studio cc flags -v -errwarn=E_FUNC_HAS_NO_RETURN_STMT, so now these became errors that stopped the build. Yet on Solaris, gcc built this code fine, while Studio errored out.

    Investigation showed this was due to the Solaris headers, which during Solaris 10 development added a number of annotations when gcc was being used for the amd64 kernel bringup before the Studio amd64 port was ready. Since Studio did not support the inline form of these annotations at the time, but instead used #pragma for them, the definitions were only present for gcc.

    To resolve this, I fixed both sides of the problem, so that it would work for building new X.Org sources on older Solaris releases or with older Studio compilers, as well as fixing the general problem before it broke more software building on Solaris.

    To the X.Org sources, I added the traditional Studio #pragma does_not_return to recognize that functions like exit() don't ever return, in patches such as this Xserver patch. Adding a dummy return statement was ruled out, as that introduced unreachable code errors from compilers and analyzers that correctly realized you couldn't reach that code after a return statement.

    And on the Solaris 11.1 side, I updated the annotation definitions in <sys/ccompile.h> to enable, for Studio 12.0 and later compilers, the annotations already existing in a number of system headers for functions like exit() and abort(). If you look in that file you'll see the annotations we currently use, though the forms there haven't gone through review to become a Committed interface, so may change in the future.

    Actually getting this integrated into Solaris took a bit more work than just editing one header file. Our ELF binary build comparison tool, wsdiff, showed a large number of differences in the resulting binaries due to the compiler using this information for branch prediction, code path analysis, and other possible optimizations, so after comparing enough of the disassembly output to be comfortable with the changes, we also made sure to get this in early enough in the release cycle so that it would get plenty of test exposure before the release.

    It also required updating quite a bit of code to avoid introducing new lint or compiler warnings or errors, and people building applications on top of Solaris 11.1 and later may need to make similar changes if they want to keep their build logs similarly clean. Previously, if you had a function that was declared with a non-void return type, lint and cc would warn if you didn't return a value, even if you called a function like exit() or panic() that ended execution.
    For instance:

        #include <stdlib.h>

        int callback(int status)
        {
            if (status == 0)
                return status;
            exit(status);
        }

    would previously require a never-executed return 0; after the exit() to avoid the lint warning "function falls off bottom without returning value". Now the compiler & lint will both issue "statement not reached" warnings for a return 0; after the final exit(), allowing (or in some cases, requiring) it to be removed. However, if there is no return statement anywhere in the function, lint will warn that you've declared a function returning a value that never does so, suggesting you can declare it as void. Unfortunately, if your function signature is required to match a certain form, such as in a callback, you may not be able to do so, and will need to add a /* LINTED */ to the end of the function.

    If you need your code to build on both a newer and an older release, then you will either need to #ifdef these unreachable statements, or, to keep your sources common across releases, add to your sources the corresponding #pragma recognized by both current and older compiler versions, such as:

        #pragma does_not_return(exit)
        #pragma does_not_return(panic)

    Hopefully this little extra work is paid for by the compilers & code analyzers being able to better understand your code paths, giving you better optimizations and more accurate errors & warning messages.
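    Putting those pieces together, the example callback above combined with the portable pragma looks roughly like this (a sketch only; the pragma is the Studio form named above, and the surrounding code is illustrative):

        #include <stdlib.h>

        /* Tell Studio cc and lint that exit() never returns, for compilers and
         * releases whose system headers do not already carry that annotation. */
        #pragma does_not_return(exit)

        /* A callback that must match an int-returning signature but terminates
         * the process on error - no dummy return statement is needed. */
        int callback(int status)
        {
            if (status == 0)
                return status;
            exit(status);
        }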

    Read the article

  • RIA Services - v1 Shipped!

    "Those Who Ship, Win!" This used to be written on a giant poster in the hallways of building 42 (original home of the .net framework) ... should have taken a picture of it while it used to be around. (missed classic photo opportunity - anyone have a shot of it?) Today, we delivered one of the most important features, shipping a v1. Yes, WCF RIA Services v1 is done, and shipped! You can get the final build along with the final build of Silverlight 4 tools, right here on the RIA Services...Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • Alps touchpad detected as a mouse, can't enable/disable multitouch

    - by Shota Bakuradze
    Now I know that this has been asked several times here, but I couldn't find any decent solution to it. I am running Ubuntu 12.04 on my DELL N5110 and my touchpad is detected as a mouse, so I don't have the touchpad options available. I can't use multitouch and can't disable it either. I have tried the dkms driver from this link, but when I tried to install it with the dpkg -i command, dpkg returned the following error:

        ERROR (dkms apport): unable to determine source package for psmouse-alps-dkms
        Error! Bad return status for module build on kernel: 3.2.0-25-generic-pae (i686)
        Consult /var/lib/dkms/psmouse-alps/0.10/build/make.log for more information.

    I have installed all the updates as well. Can someone help me out with this issue?

    Read the article

  • Hierarchies... from the Top Down

    - by Joe G
    I've been struggling with how to write on the topic of the importance of hierarchy design. It's not so much that hierarchies haven't always been important; it's more that with Fusion, the timing of when the hierarchies are designed should take a higher priority. I will attempt to explain.

    When I was implementing applications, back in the day, we had the list of detailed account values to enter with the obvious parent accounts. Then, after the setup was complete and things were functioning, the reporting phase started. Users explained the elements that they wanted on the reports, what totals should be included, and how things should be compared. Frequently, there was at least one calculation that became a nightmare, either because it was based on very specific things that didn't relate to anything else or because it was "hardcoded" so that when something changed, someone needed to "fix" the report.

    With Fusion, the process changes slightly. You still want to enter all of the detailed accounts, but before you start adding parent values, you should investigate the reporting requirements from the top down. It's better to build hierarchies based on the reporting requirements than it is to build reports based on random hierarchies. Build reports based on hierarchies that resemble the reports themselves, and maintain the hierarchies without rework of the reports.

    For example, if you look at an income statement, you may have line items for Material Costs, Employee Costs, Travel & Entertainment, and Total Operating Expenses. In your hierarchy, you have detail values that roll up to Material Costs, Employee Costs, and Travel & Entertainment, which roll up to Total Operating Expenses. Balances are stored automatically in the cube for each of these. When you define the report, you pick each of these members - no calculations required. If a new detail value is added, you simply add it to the hierarchy, and there is no need to modify the report.

    I realize that there are always exceptions that require special handling, but I am confident that you will end up with far fewer exceptions if you make reporting a priority and design your hierarchies from the top down.

    Read the article

  • Play Framework Plugin for NetBeans IDE (Part 2)

    - by Geertjan
    After I published part 1 of this series, the first external contribution (i.e., not by me) to the NetBeans plugin for Play Framework 2 was committed today. Yann D'Isanto added support for creating new Play projects. That completely solves a problem I was working on, in a different way altogether. I was working on creating a new wizard that would call "play new" on the command line and pass in the entered name and application type (1 for Java and 2 for Scala). However, Yann's solution is better, at least in the sense that it works, as opposed to mine, which didn't because of problems I continually had with the command line: one needs to press Enter multiple times on the Play command line when creating new projects, which I wasn't able to simulate in my new wizard. Yann's approach is simply to follow the approach taken in the Project Type Module Tutorial, which explains how to register a project sample in the IDE.

    I was inspired by Yann's contribution, especially when he mentioned that one needs to build Play projects on the command line. So, I added a new menu item on the right-click of a project for building Play projects, which simply passes "play compile" to the command line for the current project. Via the IDE's main menu bar, you can also Build and Run the application, though the code for the Clean function still needs to be added, which would be a cool thing for anyone out there to add, by using all the existing code and then passing "play clean compile" to the command line.

    Something else that Yann added is an Options Window extension, thanks to the Options Window Module Tutorial, for registering the Play installation, which is a step forward from my hard-coded solution. I changed things slightly so that, when Build or Run is selected without a Play installation being defined, the Options window opens, displaying the tab that Yann created, shown below. Notice that there's no Browse button, which would be a simple next step for anyone else to contribute. A small tip is to use the FileChooserBuilder from the NetBeans IDE APIs when working on the Browse button.

    Looking forward to more contributions to the Play Framework 2 plugin for NetBeans IDE. Just leave a message here with your ideas, with your java.net name, and then I'll add you to the project on java.net, where I very much look forward to your contributions: http://java.net/projects/nbplay/sources/nbplay

    Read the article

  • Using NPM to share resources between UI projects [on hold]

    - by guy mograbi
    I am a UI team leader. My team has a lot of projects using different languages/technologies. In some parts we will rewrite the application (gradually - @Ampt this is for you) in order to bring new, fresh technologies in and get old dinosaurs out. I am going to use Node Package Manager to set up an "all powerful" build/dependency manager.

    Can I use NPM to depend on a private GitHub repository? Can I use NPM to depend on SVN? Will NPM play nice with QuickBuild? Since each project might have a slightly different structure (think jetty/maven or play!framework), can I configure NPM to install some dependencies in different folders while still running it from the project's root? How can I, using NPM, get development resources out and build a packaged product (like a war)? Yes/No - is there a reason to use grunt? No discussion, just one liners.

    Read the article

  • Problem installing binutils

    - by user3667930
    When I try to install mspgcc on Ubuntu 14.04, I get an error at "make" during the installation of binutils. The following are the commands I used; please help me fix this error. Thanks in advance.

        wget http://ftpmirror.gnu.org/binutils/binutils-2.21.1a.tar.bz2
        tar xvfj binutils-2.21.1a.tar.bz2
        cd binutils-2.21.1
        patch -p1 < ../mspgcc-20120406/msp430-binutils-2.21.1a-20120406.patch
        cd ..
        mkdir -p BUILD/binutils
        cd BUILD/binutils
        ../../binutils-2.21.1/configure --target=msp430 --program-prefix="msp430-" --with-mpfr-include=/usr/local/include -with-mpfr-lib=/usr/local/lib --with-gmp-include=/usr/local/include -with-gmp-lib=/usr/local/lib --with-mpc-include=/usr/local/include -with-mpc-lib=/usr/local/lib
        make -j 4
        sudo make install
        cd ../..

    Read the article

  • Fixing a bug while working on a different part of the code base

    - by imgx64
    This has happened at least once to me: I'm working on some part of the code base and find a small bug in a different part, and the bug stops me from completing what I'm currently trying to do. Fixing the bug could be as simple as changing a single statement. What do you do in that situation?

    1. Fix the bug and commit it together with your current work.
    2. Save your current work elsewhere, fix the bug in a separate commit, then continue your work. [1]
    3. Continue what you're supposed to do, commit the code (even if it fails some tests), then fix the bug (and make the tests pass) in a separate commit. [1]

    [1] In practice, this would mean: clone the original repository elsewhere, fix the bug, commit/push the changes, pull the commit into the repository you're working on, merge the changes, and continue your work.

    Edit: I changed number three to reflect what I really meant.

    Read the article

  • Blender 2.6: How to Merge the Pros of Meshes and Surfaces

    - by fridojet
    There are two interesting kinds of objects: Meshes and Surfaces. Each of them offers very cool features.

    Object Type Specific Features

    Nice features of Surfaces (for example): they're as scalable as vector graphics (really nice!), and you can build winding things really simply.

    Nice features of Meshes (for example): you can build organic things really well using Sculpt Mode and a graphics tablet, and you can use some special things like Physics.

    My Question

    There are things for which Surfaces are better and things for which Meshes are better. But how can I use both the best features of Surfaces and the best features of Meshes on one object at once? For example: how can I use Physics (like on Meshes) on losslessly scalable objects (like Surfaces)? Thanks.

    Read the article

  • Releasing patches and updates to web service users

    - by Kalidoss.M
    I have written a web service in Java. It's already live (up and running). During development I used SVN (repository), Jira for task management, and Maven for building the web service. Now I have a small update for my web service; I have created the task in Jira and committed the files to SVN against the Jira ID, after all testing, etc. Say my web service is used by 10 clients; we did not give our source code to them. Is there a standard procedure for releasing patches/updates? Is there any way to render/create the change log at build time (with Maven)? How do I manage the change log for all versions or patch updates at build time, automatically?

    Read the article

  • New Development Snapshot

    I finished all the .NET 4.0 security model changes. If you build from source, you can now (optionally) build on .NET 4.0 and get native .NET 4.0 assemblies that use the new .NET 4.0 security model (and also experimental class gc support). The .NET 2.0 binaries also work on .NET 4.0. This is probably the final development snapshot before the first 0.44 release candidate and it has been tested more than a typical development snapshot. Please start testing ...

    Read the article

  • How should the UI layer pass user input to the BL layer?

    - by BornToCode
    I'm building an n-tier application; I have UI, BL, DAL & Entities (built from POCOs) projects. (All projects have a reference to the Entities.) My question is: how should I pass user input from the UI to the BL - as a bunch of strings passed to the BL method, with the BL building the object from the parameters, or should I build the objects inside the UI submit_function and pass the objects as parameters?

    EDIT: I wrote n-tier application, but what I actually meant was just layers.
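    Purely to illustrate the two options being asked about (a language-agnostic sketch written in C rather than .NET; the Customer entity and the BL function names are invented for the example):

        #include <stdio.h>
        #include <string.h>

        /* Shared entity type, playing the role of the Entities project. */
        typedef struct {
            char name[64];
            int  age;
        } Customer;

        /* Option 1: the BL receives loose strings/values and builds the entity itself. */
        void bl_create_customer_from_fields(const char *name, int age)
        {
            Customer c;
            snprintf(c.name, sizeof c.name, "%s", name);
            c.age = age;
            printf("BL built and saved %s (%d)\n", c.name, c.age);
        }

        /* Option 2: the UI builds the entity and passes it as a single parameter. */
        void bl_create_customer(const Customer *c)
        {
            printf("BL saved %s (%d)\n", c->name, c->age);
        }

        /* A UI submit handler choosing option 2: collect input, build the object,
         * hand it to the BL in one call. */
        int main(void)
        {
            Customer c;
            snprintf(c.name, sizeof c.name, "%s", "Ada");
            c.age = 36;
            bl_create_customer(&c);

            /* Option 1 for comparison: the BL does the assembling. */
            bl_create_customer_from_fields("Ada", 36);
            return 0;
        }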

    Read the article
