Search Results

Search found 17076 results on 684 pages for 'nightly build'.

Page 94/684 | < Previous Page | 90 91 92 93 94 95 96 97 98 99 100 101  | Next Page >

  • How do I work around this problem creating a virtualenv environment with a custom-built Python?

    - by Daryl Spitzer
    I need to run some code on a Linux machine with Python 2.3.4 pre-installed. I'm not on the sudoers list for that machine, so I built Python 2.6.4 into (a subdirectory in) my home directory. Then I attempted to use virtualenv (for the first time), but got: $ Python-2.6.4/python virtualenv/virtualenv.py ENV New python executable in ENV/bin/python Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Installing setuptools......... Complete output from command /apps/users/dspitzer/ENV/bin/python -c "#!python \"\"\"Bootstrap setuptoo... " /apps/users/dspitzer/virtualen...6.egg: Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] 'import site' failed; use -v for traceback Traceback (most recent call last): File "<string>", line 67, in <module> ImportError: No module named md5 ---------------------------------------- ...Installing setuptools...done. Traceback (most recent call last): File "virtualenv/virtualenv.py", line 1488, in <module> main() File "virtualenv/virtualenv.py", line 529, in main use_distribute=options.use_distribute) File "virtualenv/virtualenv.py", line 619, in create_environment install_setuptools(py_executable, unzip=unzip_setuptools) File "virtualenv/virtualenv.py", line 361, in install_setuptools _install_req(py_executable, unzip) File "virtualenv/virtualenv.py", line 337, in _install_req cwd=cwd) File "virtualenv/virtualenv.py", line 590, in call_subprocess % (cmd_desc, proc.returncode)) OSError: Command /apps/users/dspitzer/ENV/bin/python -c "#!python \"\"\"Bootstrap setuptoo... " /apps/users/dspitzer/virtualen...6.egg failed with error code 1 Should I be setting PYTHONHOME to some value? (I intentionally named my ENV "ENV" for lack of a better name.) Not knowing if I can ignore those errors, I tried installing nose (0.11.1) into my ENV: $ cd nose-0.11.1/ $ ls AUTHORS doc/ lgpl.txt nose.egg-info/ selftest.py* bin/ examples/ MANIFEST.in nosetests.1 setup.cfg build/ functional_tests/ NEWS PKG-INFO setup.py CHANGELOG install-rpm.sh* nose/ README.txt unit_tests/ $ ~/ENV/bin/python setup.py install Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "setup.py", line 1, in <module> from nose import __version__ as VERSION File "/apps/users/dspitzer/nose-0.11.1/nose/__init__.py", line 1, in <module> from nose.core import collector, main, run, run_exit, runmodule File "/apps/users/dspitzer/nose-0.11.1/nose/core.py", line 3, in <module> from __future__ import generators ImportError: No module named __future__ Any advice?
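    A likely culprit, guessing from the missing md5 and __future__ modules: the custom interpreter cannot find its own standard library, which typically happens when the freshly built python is run out of the build tree instead of from an installed prefix. A minimal sketch of a home-directory install (no sudo needed), after which virtualenv is pointed at the installed binary:
        cd Python-2.6.4
        ./configure --prefix=$HOME/python2.6.4
        make
        make install
        $HOME/python2.6.4/bin/python virtualenv/virtualenv.py ENV
    The installed interpreter locates $HOME/python2.6.4/lib/python2.6 on its own, so PYTHONHOME should not be needed at all.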

    Read the article

  • PL/SQL pre-compile and code quality checks in an automated build environment?

    - by Lars Corneliussen
    We build software using Hudson and Maven. We have C#, Java and, last but not least, PL/SQL sources (sprocs, packages, DDL, CRUD). For C# and Java we do unit tests and code analysis, but we don't really know the health of our PL/SQL sources before we actually publish them to the target database.
    Requirements: there are a couple of things we want to test, in the following priority:
    1. Are the sources valid, hence "compilable"? For packages, with respect to a certain database: would they compile?
    2. Code quality: do we have code flaws like duplicates, overly complex methods or other violations of a defined set of rules?
    Also, the tool must run headless (command line, ant, ...), and we want to be able to analyze a partial code base (changed sources only).
    Tools: we did a little research and found the following tools that could potentially help:
    - Cast Application Intelligence Platform (AIP): seems to be a server that grasps information about "anything". Couldn't find a console version that would export in a readable format.
    - Toad for Oracle: the Professional version is said to include something called Xpert that validates a set of rules against a code base.
    - Sonar + PL/SQL plugin: uses Toad for Oracle to display code health the Sonar way. This is for browsing the current state of the code base.
    - Semantic Designs DMSToolkit: quite general analysis of the source code base. Command line available?
    - Semantic Designs Clones Detector: detects clones. But also via command line?
    - Fortify Source Code Analyzer: seems to be focused on security issues. But maybe it is extensible?
    - more...
    So far, Toad for Oracle together with Sonar seems to be an elegant solution, but maybe we are missing something here? Any ideas? Other products? Experiences?
    Related questions on SO:
    http://stackoverflow.com/questions/531430/any-static-code-analysis-tools-for-stored-procedures
    http://stackoverflow.com/questions/839707/any-code-quality-tool-for-pl-sql
    http://stackoverflow.com/questions/956104/is-there-a-static-analysis-tool-for-python-ruby-sql-cobol-perl-and-pl-sql
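    For the "does it compile" requirement, a minimal headless sketch using only SQL*Plus (schema, credentials and file names are placeholders): load the changed sources into a scratch schema, then fail the build if anything is left invalid:
        -- check_invalid.sql, run as: sqlplus -s build_user/pw@testdb @check_invalid.sql
        WHENEVER SQLERROR EXIT FAILURE
        DECLARE
          n PLS_INTEGER;
        BEGIN
          SELECT COUNT(*) INTO n FROM user_objects WHERE status = 'INVALID';
          IF n > 0 THEN
            raise_application_error(-20000, n || ' invalid object(s) after compile');
          END IF;
        END;
        /
        EXIT
    The non-zero exit status propagates to Hudson/Maven, which covers priority 1 even before any of the commercial analyzers enter the picture.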

    Read the article

  • How do I use accepts_nested_attributes_for? I cannot use the .build method (!)

    - by Angela
    Editing my question for conciseness and to update what I've done: How do I model having multiple Addresses for a Company and assign a single Address to a Contact, and be able to assign them when creating or editing a Contact? Here is my model for Contacts: class Contact < ActiveRecord::Base attr_accessible :first_name, :last_name, :title, :phone, :fax, :email, :company, :date_entered, :campaign_id, :company_name, :address_id, :address_attributes belongs_to :company belongs_to :address accepts_nested_attributes_for :address end Here is my model for Address: class Address < ActiveRecord::Base attr_accessible :street1, :street2, :city, :state, :zip has_many :contacts end I would like, when creating an new contact, access all the Addresses that belong to the other Contacts that belong to the Company. So here is how I represent Company: class Company < ActiveRecord::Base attr_accessible :name, :phone, :addresses has_many :contacts has_many :addresses, :through => :contacts end Here is how I am trying to create a field in the View for _form for Contact so that, when someone creates a new Contact, they pass the address to the Address model and associate that address to the Contact: <% f.fields_for :address, @contact.address do |builder| %> <p> <%= builder.label :street1, "Street 1" %> </br> <%= builder.text_field :street1 %> <p> <% end %> When I try to Edit, the field for Street 1 is blank. And I don't know how to display the value from show.html.erb. At the bottom is my error console -- can't seem to create values in the address table: My Contacts controller is as follows: def new @contact = Contact.new @contact.address.build # I GET AN ERROR HERE: says NIL CLASS @contact.date_entered = Date.today @campaigns = Campaign.find(:all, :order => "name") if params[:campaign_id].blank? else @campaign = Campaign.find(params[:campaign_id]) @contact.campaign_id = @campaign.id end if params[:company_id].blank? else @company = Company.find(params[:company_id]) @contact.company_name = @company.name end end def create @contact = Contact.new(params[:contact]) if @contact.save flash[:notice] = "Successfully created contact." redirect_to @contact else render :action => 'new' end end def edit @contact = Contact.find(params[:id]) @campaigns = Campaign.find(:all, :order => "name") end Here is a snippet of my error console: I am POSTING the attribute, but it is not CREATING in the Address table.... Processing ContactsController#create (for 127.0.0.1 at 2010-05-12 21:16:17) [POST] Parameters: {"commit"="Submit", "authenticity_token"="d8/gx0zy0Vgg6ghfcbAYL0YtGjYIUC2b1aG+dDKjuSs=", "contact"={"company_name"="Allyforce", "title"="", "campaign_id"="2", "address_attributes"={"street1"="abc"}, "fax"="", "phone"="", "last_name"="", "date_entered"="2010-05-12", "email"="", "first_name"="abc"}} Company Load (0.0ms)[0m [0mSELECT * FROM "companies" WHERE ("companies"."name" = 'Allyforce') LIMIT 1[0m Address Create (16.0ms)[0m [0;1mINSERT INTO "addresses" ("city", "zip", "created_at", "street1", "updated_at", "street2", "state") VALUES(NULL, NULL, '2010-05-13 04:16:18', NULL, '2010-05-13 04:16:18', NULL, NULL)[0m Contact Create (0.0ms)[0m [0mINSERT INTO "contacts" ("company", "created_at", "title", "updated_at", "campaign_id", "address_id", "last_name", "phone", "fax", "company_id", "date_entered", "first_name", "email") VALUES(NULL, '2010-05-13 04:16:18', '', '2010-05-13 04:16:18', 2, 2, '', '', '', 5, '2010-05-12', 'abc', '')[0m
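    The NilClass error in new comes from calling build on a belongs_to association: belongs_to does not give you contact.address.build; Rails instead defines a build_address method on the contact. A minimal sketch of the corrected action (same controller as above, other assignments unchanged):
        def new
          @contact = Contact.new
          @contact.build_address   # belongs_to: use build_<association>, not .address.build
          @contact.date_entered = Date.today
        end
    With the association initialized, fields_for :address also renders populated inputs on edit, provided the edit action loads or builds an address (e.g. @contact.build_address if @contact.address.nil?).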

    Read the article

  • How to efficiently build an interpreter (lexer+parser) in C?

    - by Rizo
    I'm trying to make a meta-language for writing markup code (such as XML and HTML) which can be directly embedded into C/C++ code. Here is a simple sample written in this language, which I call WDI (Web Development Interface):
    /*
     * Simple wdi/html sample source code
     */
    #include <mySite>
    string name = "myName";
    string toCapital(string str);
    html {
        head {
            title { mySiteTitle; }
            link(rel="stylesheet", href="style.css");
        }
        body(id="default") {
            // Page content wrapper
            div(id="wrapper", class="some_class") {
                h1 { "Hello, " + toCapital(name) + "!"; }
                // Lists post
                ul(id="post_list") {
                    for(post in posts) {
                        li { a(href=post.getID()) { post.title; } }
                    }
                }
            }
        }
    }
    Basically it is C source with a user-friendly interface for HTML. As you can see, the traditional tag-based style is replaced by a C-like one, with blocks delimited by curly braces. I need to build an interpreter to translate this code to HTML and then insert it into the C, so that it can be compiled. The C part stays intact. Inside the WDI source it is not necessary to use prints; every return statement will be used for output (in a printf function). The program's output will be clean HTML code. So, for example, a heading 1 tag would be transformed like this:
    h1 { "Hello, " + toCapital(name) + "!"; }
    // would become:
    printf("<h1>Hello, %s!</h1>", toCapital(name));
    My main goal is to create an interpreter to translate WDI source to HTML like this: tag(attributes) {content} = <tag attributes>content</tag>. Secondly, the HTML code returned by the interpreter has to be inserted into C code with printfs. Variables and functions that occur inside WDI should also be sorted out in order to use them as printf parameters (the case of toCapital(name) in the sample source). I am looking for an efficient way to create a fast lexer and parser for WDI. I have already tried flex and bison, but I am not sure they are the best tools. Are there any good alternatives? What is the best way to create such an interpreter? Can you recommend some brief literature on this issue?
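    flex and bison are perfectly workable here (re2c plus lemon is a leaner alternative), but a brace-delimited grammar like WDI is also easy to tokenize by hand. A minimal sketch of a hand-rolled lexer for the punctuation, strings and identifiers seen in the sample above (the token set is assumed, and remaining operators are lumped together for brevity):
        #include <ctype.h>
        #include <stdio.h>

        typedef enum { TOK_IDENT, TOK_STRING, TOK_LBRACE, TOK_RBRACE,
                       TOK_LPAREN, TOK_RPAREN, TOK_SEMI, TOK_OTHER, TOK_EOF } TokenType;

        typedef struct { TokenType type; char text[256]; } Token;

        Token next_token(FILE *in) {
            Token t = { TOK_EOF, "" };
            int c;
            while ((c = fgetc(in)) != EOF && isspace(c))
                ;                                    /* skip whitespace */
            if (c == EOF) return t;
            switch (c) {
            case '{': t.type = TOK_LBRACE; break;
            case '}': t.type = TOK_RBRACE; break;
            case '(': t.type = TOK_LPAREN; break;
            case ')': t.type = TOK_RPAREN; break;
            case ';': t.type = TOK_SEMI;   break;
            case '"': {                              /* string literal */
                size_t i = 0;
                while ((c = fgetc(in)) != EOF && c != '"' && i < sizeof t.text - 1)
                    t.text[i++] = (char)c;
                t.text[i] = '\0';
                t.type = TOK_STRING;
                break;
            }
            default:
                if (isalpha(c) || c == '_') {        /* identifier / keyword */
                    size_t i = 0;
                    do {
                        if (i < sizeof t.text - 1) t.text[i++] = (char)c;
                        c = fgetc(in);
                    } while (c != EOF && (isalnum(c) || c == '_'));
                    if (c != EOF) ungetc(c, in);
                    t.text[i] = '\0';
                    t.type = TOK_IDENT;
                } else {                             /* ',', '=', '+', ... */
                    t.text[0] = (char)c; t.text[1] = '\0';
                    t.type = TOK_OTHER;
                }
                break;
            }
            return t;
        }
    A recursive-descent parser over these tokens maps naturally onto the tag(attributes) { content } shape, emitting printf calls as it closes each block.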

    Read the article

  • How to reflect over T to build an expression tree for a query?

    - by Alex
    Hi all, I'm trying to build a generic class to work with entities from EF. This class talks to repositories, but it's this class that creates the expressions sent to the repositories. Anyway, I'm just trying to implement one virtual method that will act as a base for common querying. Specifically, it will accept an int and only needs to query over the primary key of the entity in question. I've been experimenting and I've built a reflection-based version which may or may not work. I say that because I get a NotSupportedException with the message "LINQ to Entities does not recognize the method 'System.Object GetValue(System.Object, System.Object[])' method, and this method cannot be translated into a store expression." So then I tried another approach and it produced the same exception, but with the error "The LINQ expression node type 'ArrayIndex' is not supported in LINQ to Entities." I know it's because EF will not parse the expression the way L2S will. Anyway, I'm hoping someone with a bit more experience can point me in the right direction on this. I'm posting the entire class with both attempts I've made.
    public class Provider<T> where T : class
    {
        protected readonly Repository<T> Repository = null;
        private readonly string TEntityName = typeof(T).Name;

        [Inject]
        public Provider(Repository<T> Repository)
        {
            this.Repository = Repository;
        }

        public virtual void Add(T TEntity)
        {
            this.Repository.Insert(TEntity);
        }

        public virtual T Get(int PrimaryKey)
        {
            // The LINQ expression node type 'ArrayIndex' is not supported in
            // LINQ to Entities.
            return this.Repository.Select(
                t => (((int)(t as EntityObject).EntityKey.EntityKeyValues[0].Value) == PrimaryKey)).Single();

            // LINQ to Entities does not recognize the method
            // 'System.Object GetValue(System.Object, System.Object[])' method,
            // and this method cannot be translated into a store expression.
            return this.Repository.Select(
                t => (((int)t.GetType().GetProperties().Single(
                    p => (p.Name == (this.TEntityName + "Id"))).GetValue(t, null)) == PrimaryKey)).Single();
        }

        public virtual IList<T> GetAll()
        {
            return this.Repository.Select().ToList();
        }

        protected virtual void Save()
        {
            this.Repository.Update();
        }
    }
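    Reflection calls such as GetValue cannot be translated to SQL, but you can do the reflection while building the expression and hand EF a plain property access instead. A sketch of Get written that way (it assumes Repository.Select accepts an Expression<Func<T, bool>> and that keys follow the <EntityName>Id convention used above; requires using System.Linq.Expressions):
        public virtual T Get(int PrimaryKey)
        {
            // build: t => t.<EntityName>Id == PrimaryKey
            var t = Expression.Parameter(typeof(T), "t");
            var key = Expression.Property(t, this.TEntityName + "Id"); // resolved once, at build time
            var body = Expression.Equal(key, Expression.Constant(PrimaryKey, typeof(int)));
            var predicate = Expression.Lambda<Func<T, bool>>(body, t);

            return this.Repository.Select(predicate).Single();
        }
    Because the property lookup is baked into the tree before it reaches the provider, LINQ to Entities only ever sees a MemberExpression it knows how to translate.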

    Read the article

  • What is the best way to connect to an Oracle database to build my own IDE? [closed]

    - by rima
    Please think about the future of this kind of program before answering. I want to get some data from an Oracle server, for example:
    1. Get all the functions, packages, procedures and so on, to show them, drop them, etc.
    2. Compile my *.sql files and get the result if they have problems, etc.
    Because I was an Oracle beginner, to tackle the second problem I first tried connecting to SQL*Plus: I run sqlplus as a child process from C#, passing the login as arguments, redirect the shell's output stream, trace what happens and hand the resulting messages to the customer. This part now succeeds, though I still have a little trouble collecting the full result because the output arrives asynchronously. After that I tried to fetch all the function, package and procedure names, but ran into speed problems, so I tried oracle.DataAccess.dll to connect to the database instead.
    Now I am confused about which is the correct way to build a program that works like Oracle Developer; I have no experience with how programs like these work.
    If your answer is the second way (the data access library): I looked briefly at Golden and PLedit (Benthic Software), and I have a little trouble building the connection string. How do I find the host name and the port Oracle runs on? Do I need to read the tnsnames.ora file?
    If your answer is the first way (driving sqlplus): do you have any idea how I should parse the output? The result of a table query, for example, is quite confusing, and each command has a different output style. I can handle and program it, but I'd really value someone's experience, because the important thing to me is to learn how such software works so nicely and responds so quickly.
    If you are not sure: can you recommend a book that would help me become an expert in this area? The C# books only cover how to connect to the database, and the database books only cover how to use the database; I'm looking for a book that gives me some idea of how to develop an interface that mediates between the two, not just simple sending and receiving of data; for example, how to write a compiler for them. The language of the book doesn't matter: I know C#, Java, VB, SQL and Oracle. Thanks.
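    For the metadata part, querying the data dictionary is usually far faster than scraping sqlplus output. A minimal ODP.NET sketch (assumes Oracle.DataAccess is referenced and a TNS alias MYDB exists in tnsnames.ora; credentials are placeholders):
        using System;
        using Oracle.DataAccess.Client;

        class ListObjects
        {
            static void Main()
            {
                using (var conn = new OracleConnection(
                    "Data Source=MYDB;User Id=scott;Password=tiger;"))
                {
                    conn.Open();
                    // data-dictionary view: every function/procedure/package in the schema
                    using (var cmd = new OracleCommand(
                        "SELECT object_name, object_type FROM user_objects " +
                        "WHERE object_type IN ('FUNCTION','PROCEDURE','PACKAGE')", conn))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0} ({1})",
                                reader.GetString(0), reader.GetString(1));
                    }
                }
            }
        }
    If you'd rather not depend on tnsnames.ora, Data Source can instead be a full descriptor naming host, port and service, e.g. (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=orcl))).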

    Read the article

  • TeamCity error: svn: connection refused by the server

    - by chrisk
    We have a continuous integration environment set up with TeamCity and Subversion. TeamCity gets the latest source from svn and does a Visual Studio build on every commit. Sometimes we get the following TeamCity error when the build runs; doing a couple of force builds gets TeamCity running successfully again. Build errors:
    [12:35:24]: Patch is broken, can be found in file: C:\TeamCity\buildAgent\temp\cache\temp6036patch_803
    [12:35:24]: RunBuildException when running build stage UpdateSourcesFromServer: Failed to build patch for build 519 {build id=803}, VCS root: svn: https://svn.myDomain.com/repos/myApplication {id=2}, due to error: org.tmatesoft.svn.core.SVNException: svn: connection refused by the server svn: REPORT request failed on '/repos/myApplication/!svn/vcc/default'
    Any ideas why this might be happening?

    Read the article

  • Command /usr/bin/codesign failed with exit code 1

    - by sarmenhba
    I was playing with the keychain certificates: I wanted to remove them all and then redo everything so that I'd learn how it works. I have a simple app; it compiled perfectly before I started playing with the certificates, so the code is fine. When I try to compile the app and send it to my iPhone device, it gives the error you see in the title. Looking at the log, here is what I see:
    Build Untitled of project Untitled with configuration Debug
    CodeSign build/Debug-iphoneos/Untitled.app
    cd /Users/sarmenhb/Desktop/myapp/Untitled
    setenv IGNORE_CODESIGN_ALLOCATE_RADAR_7181968 /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/codesign_allocate
    setenv PATH "/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
    /usr/bin/codesign -f -s "iPhone Developer: sarm bo (2ZDTN5FTAL)" --resource-rules=/Users/sarmenhb/Desktop/myapp/Untitled/build/Debug-iphoneos/Untitled.app/ResourceRules.plist --entitlements /Users/sarmenhb/Desktop/myapp/Untitled/build/Untitled.build/Debug-iphoneos/Untitled.build/Untitled.xcent /Users/sarmenhb/Desktop/myapp/Untitled/build/Debug-iphoneos/Untitled.app
    iPhone Developer: sarm bo (2ZDTN5FTAL): no identity found
    Command /usr/bin/codesign failed with exit code 1
    How do I fix this?
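    The key line is "no identity found": the keychain no longer holds a signing identity (certificate plus its private key) matching the one the project asks for. A quick way to see what is actually left, as a sketch:
        # list the valid code-signing identities in the keychain
        security find-identity -p codesigning -v
    If the "iPhone Developer: ..." identity is missing from that list, re-download the development certificate from the provisioning portal (making sure its private key is present in the keychain), or point the Code Signing Identity build setting at an identity the command does list.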

    Read the article

  • Building a Java project with TFS

    - by Jaco Pretorius
    A very small portion of our codebase is some legacy Java code. I'm trying to add a new build that would invoke ant to build this project. The first problem is that TFS doesn't allow you to create a build that doesn't build a .Net solution. I got around this by copying a previous build file and adding an EndToEndIteration task which is the entry point for the build. The problem is that none of the usual build variables are populated - $(BuildDirectory), $(SolutionRoot) - all blank. This pretty much means I can't invoke my ant task without hardcoding the paths (which I definitely can't do). Any ideas?
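    One hedged workaround, a sketch rather than an official TFS/Java recipe (the target name is a real TFSBuild.proj extensibility point, but the relative path is an assumption): keep the dummy .NET solution so TFS is satisfied, then shell out to Ant with a path computed from the project file instead of the unpopulated variables:
        <!-- in TFSBuild.proj -->
        <Target Name="AfterCompile">
          <Exec Command="ant -f build.xml"
                WorkingDirectory="$(MSBuildProjectDirectory)\..\Sources\JavaLegacy" />
        </Target>
    $(MSBuildProjectDirectory) is always populated because it is a reserved MSBuild property, unlike the TFS-specific $(SolutionRoot) and $(BuildDirectory).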

    Read the article

  • Different location of AndroidManifest.xml

    - by didito
    I have a custom build chain for an Android project. There is a build.xml and a build.properties. build.properties contains this line:
    manifest.file=${env.WORK_FOLDER}/AndroidManifest.xml
    and WORK_FOLDER is correctly set to the _work folder under proj_root. The layout is like the following: proj_root contains _work [DIR] (some stuff gets preprocessed and finally all copied here: src, res, assets and the AndroidManifest.xml), build.xml and build.properties. Then I call ant for my build configuration from proj_root, and it complains about not finding AndroidManifest.xml. When I put the manifest in proj_root it works, but I really want it in my _work directory. I read that the file HAS TO BE in the root; what, then, is the point of the build.properties entry? I also saw some custom parameters to aapt.exe... Any hints welcome, thanks.

    Read the article

  • Something wrong in my program using GData XML support

    - by ben
    Ld build/Debug-iphonesimulator/newParser.app/newParser normal i386 cd /Users/apple/Desktop/newParser setenv MACOSX_DEPLOYMENT_TARGET 10.5 setenv PATH "/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin" /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin/gcc-4.2 -arch i386 -isysroot /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.sdk -L/Users/apple/Desktop/newParser/build/Debug-iphonesimulator -F/Users/apple/Desktop/newParser/build/Debug-iphonesimulator -filelist /Users/apple/Desktop/newParser/build/newParser.build/Debug-iphonesimulator/newParser.build/Objects-normal/i386/newParser.LinkFileList -mmacosx-version-min=10.5 -lxml2 -framework Foundation -framework UIKit -framework CoreGraphics -o /Users/apple/Desktop/newParser/build/Debug-iphonesimulator/newParser.app/newParser Undefined symbols: "_kGDataXMLXPathDefaultNamespacePrefix", referenced from: _kGDataXMLXPathDefaultNamespacePrefix$non_lazy_ptr in GDataXMLNode.o (maybe you meant: _kGDataXMLXPathDefaultNamespacePrefix$non_lazy_ptr) ld: symbol(s) not found collect2: ld returned 1 exit status

    Read the article

  • Python: undefined reference to `_imp__Py_InitModule4'

    - by Mark
    I'm trying to do a debug build of the Rabbyt library using MinGW's gcc, to run with my MSVC-built python26_d. I got a lot of undefined references, which caused me to create libpython26_d.a; however, one of the undefined references remains. Googling gives me: http://www.techlists.org/archives/programming/pythonlist/2003-03/msg01035.shtml But -rdynamic doesn't help.
    e:\MinGW/bin\gcc.exe -mno-cygwin -mdll -O -Wall -g -IE:\code\python\python\py26\include -IE:\code\python\python\py26\PC -c rabbyt/rabbyt._rabbyt.c -o build\temp.win32-2.6-pydebug\Debug\rabbyt\rabbyt._rabbyt.o -O3 -fno-strict-aliasing
    rabbyt/rabbyt._rabbyt.c:1351: warning: '__Pyx_SetItemInt' defined but not used
    writing build\temp.win32-2.6-pydebug\Debug\rabbyt\_rabbyt_d.def
    e:\MinGW/bin\gcc.exe -mno-cygwin -shared -g build\temp.win32-2.6-pydebug\Debug\rabbyt\rabbyt._rabbyt.o build\temp.win32-2.6-pydebug\Debug\rabbyt\_rabbyt_d.def -LE:\code\python\python\py26\libs -LE:\code\python\python\py26\PCbuild -lopengl32 -lglu32 -lpython26_d -lmsvcr90 -o build\lib.win32-2.6-pydebug\rabbyt\_rabbyt_d.pyd
    build\temp.win32-2.6-pydebug\Debug\rabbyt\rabbyt._rabbyt.o: In function `init_rabbyt':
    E:/code/python/rabbyt/rabbyt._rabbyt.c:1121: undefined reference to `_imp__Py_InitModule4'

    Read the article

  • iPhone: memory gets freed in debug mode but not in release mode

    - by gdr
    I have been testing my iPhone debug build on both my device and the simulator with Activity Monitor, Leaks, and Object Allocations. The code is pretty well optimized, so I decided to test the release build. I went into the project menu, set the target build configuration to release, added the necessary header search paths my app uses, and ran the release build on the device with the above-mentioned instruments. What I have noticed is that memory that was freed when I used the debug build does not get freed when using the release version. There is one place in my app where I remove a scroll view with some images, which frees up a significant amount of memory in the debug build, but no memory is freed at that point in the release version. Does anyone have ideas about where I should start looking? Did I set up my release build wrong?

    Read the article

  • libtool failed with exit code 1 on Xcode 4.3

    - by Doz
    I have Xcode 4.3 and I am getting this frustrating xml-lib-related error. I have a feeling it's because 4.3 no longer uses the /Developer folder but instead /Applications/Xcode.app/... The error message is below:
    Libtool /Users/dkatz/Library/Developer/Xcode/DerivedData/RWEngines-ewchevfhokeivnffrputdqapsyxu/Build/Products/Release-iphonesimulator/RWEngines.framework/Versions/A/RWEngines normal i386
    cd /Users/dkatz/Sites/xCode/RWA/RWEngines
    setenv MACOSX_DEPLOYMENT_TARGET 10.6
    setenv PATH "/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Applications/Xcode.app/Contents/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
    /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/libtool -static -arch_only i386 -syslibroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator5.0.sdk -L/Users/dkatz/Library/Developer/Xcode/DerivedData/RWEngines-ewchevfhokeivnffrputdqapsyxu/Build/Products/Release-iphonesimulator -filelist /Users/dkatz/Library/Developer/Xcode/DerivedData/RWEngines-ewchevfhokeivnffrputdqapsyxu/Build/Intermediates/RWEngines.build/Release-iphonesimulator/RWEngines.build/Objects-normal/i386/RWEngines.LinkFileList -ObjC -Xlinker -no_implicit_dylibs -D__IPHONE_OS_VERSION_MIN_REQUIRED=50000 -framework UIKit /Users/dkatz/Library/Developer/Xcode/DerivedData/RWEngines-ewchevfhokeivnffrputdqapsyxu/Build/Products/Release-iphonesimulator/libCorePlot-CocoaTouch.a -framework SenTestingKit -framework QuartzCore -framework Foundation -framework RWCommon -o /Users/dkatz/Library/Developer/Xcode/DerivedData/RWEngines-ewchevfhokeivnffrputdqapsyxu/Build/Products/Release-iphonesimulator/RWEngines.framework/Versions/A/RWEngines
    And the actual error:
    /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/libtool failed with exit code 1
    Thanks guys!

    Read the article

  • What is the difference between these task definition syntaxes in gradle?

    - by bergyman
    A)
    task build << {
        description = "Build task."
        ant.echo('build')
    }
    B)
    task build {
        description = "Build task."
        ant.echo('build')
    }
    I notice that with type B, the code within the task seems to be executed when typing gradle -t: ant echoes out 'build' even when just listing all the various available tasks. The description is also actually displayed with type B. However, with type A no code is executed when listing the available tasks, and the description is not displayed when executing gradle -t. The docs don't seem to explain the difference between these two syntaxes (that I've found), only that you can define a task either way.
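    The difference is Gradle's configuration phase versus execution phase: the plain closure in B runs while the build is being configured (hence the echo during gradle -t), whereas << appends an execution-time action, so the description assigned inside it is set too late to appear in the task listing. A sketch that gets both behaviours right, using doLast (which << abbreviates):
        task build {
            description = "Build task."   // configuration phase: visible to gradle -t
            doLast {
                ant.echo('build')         // execution phase: runs only on 'gradle build'
            }
        }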

    Read the article

  • How can I determine if a given git hash exists on a given branch?

    - by pinko
    Background: I use an automated build system which takes a git hash as input, as well as the name of the branch on which that hash exists, and builds it. However, the build system uses the hash alone to check out the code and build it -- it simply stores the branch name, as given, in the build DB metadata. I'm worried about developers accidentally providing the wrong branch name when they kick off a build, causing confusion when people are looking through the build history. So how can I confirm, before passing along the hash and branch name to the build system, that the given hash does in fact come from the given branch?
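    git can answer this directly; a sketch of a pre-submit check (hash and branch names are placeholders):
        # list all branches whose history contains the commit
        git branch -a --contains <hash>
        # or, on reasonably recent git, test one branch via the exit status
        git merge-base --is-ancestor <hash> <branch> && echo "hash is on branch"
    Grepping the first command's output for the given branch name (word-anchored, since branch names can nest) is enough to reject a mismatched hash/branch pair before it reaches the build DB.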

    Read the article

  • Database continuous integration step by step

    - by David Atkinson
    This post will describe how to set up basic database continuous integration using TeamCity to initiate the build process, SQL Source Control to put your database under source control, and the SQL Compare command line to keep a test database up to date. In my example I will be using Subversion as my source control repository. If you wish to follow my steps verbatim, please make sure you have TortoiseSVN, SQL Compare and SQL Source Control installed.
    Downloading and Installing TeamCity
    TeamCity (http://www.jetbrains.com/teamcity/index.html) is free for up to three agents, so it's a great no-risk tool you can use to experiment with.
    1. Download the latest version from the JetBrains website. For some reason the TeamCity executable didn't download properly for me, stalling frustratingly at 99%, so I tried again with the zip file download option (see screenshot below), which worked flawlessly.
    2. Run the installer using the defaults. This results in a set-up with the server component and agent installed on the same machine, which is ideal for getting started with ease.
    3. Check that the build agent is pointing to the server correctly. This has caught me out a few times before. This setting is in C:\TeamCity\buildAgent\conf\buildAgent.properties and for my installation is serverUrl=http\://localhost\:80. If you need to change this value, for example because you've had to install the server console on a different port number, the TeamCity Build Agent Service will need to be restarted for the change to take effect.
    4. Open the TeamCity admin console on http://localhost, and specify your own designated username and password at first startup.
    Putting your database in source control using SQL Source Control
    5. Assuming you've got SQL Source Control installed, select a development database in the SQL Server Management Studio Object Explorer and select Link Database to Source Control.
    6. For the Link step you can either create your own empty folder in source control, or you can select Just Evaluating, which just creates a local Subversion repository for you behind the scenes.
    7. Once linked, note that your database turns green in the Object Explorer. Visit the Commit tab to do an initial commit of your database objects by typing in an appropriate comment and clicking Commit.
    8. There is a hidden feature in SQL Source Control that opens up TortoiseSVN (provided it is installed) pointing to the linked repository. Keep Shift depressed and right-click on the text to the right of 'Linked to'; in the example below, it's the red Evaluation Repository text. Select Open TortoiseSVN Repo Browser. This screen should give you an idea of how SQL Source Control manages the object files behind the scenes.
    Back in the TeamCity admin console, we'll now create a new project to monitor the above repository location and to trigger a 'build' each time the repository changes.
    9. In TeamCity Administration, select Create Project and give it a name, such as "My first database CI", and click Create.
    10. Click on Create Build Configuration, and name it something like "Integration build".
    11. Click VCS settings and then Create And Attach new VCS root. This is where you will tell TeamCity about the repository it should monitor.
    12. In my case, since I'm using the Just Evaluating option in SQL Source Control, I should select Subversion.
    13. In the URL field paste your repository location. In my case this is file:///C:/Users/David.Atkinson/AppData/Local/Red Gate/SQL Source Control 3/EvaluationRepositories/WidgetDevelopment/WidgetDevelopment
    14. Click on Test Connection to ensure that you can communicate with your source control system. Click Save.
    15. Click Add Build Step, and Runner Type: Command Line. Should you be familiar with the other runner types, such as NAnt, MSBuild or Powershell, you can opt for these, but for the sake of keeping it simple I will pick the simplest option.
    16. If you have installed SQL Compare in the default location, set the Command Executable field to: C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe
    17. Flip back to SSMS briefly and add a new database to your server. This will be the database used for continuous integration testing.
    18. Set the command parameters according to your server and the name of the database you have created. In my case I created database RedGateCI on server .\sql2008r2: /scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose Note that if you pick a server instance that isn't on your local machine, you'll need the TCP/IP protocol enabled in SQL Server Configuration Manager, otherwise the SQL Compare command line will not be able to connect.
    19. Save and select Build Triggering / Add New Trigger / VCS Trigger. This is where you tell TeamCity when it should initiate a build. Click Save.
    20. Now return to SQL Server Management Studio and make a schema change (e.g. add a new object) to your linked development database. A blue indicator will appear in the Object Explorer. Commit this change, typing in an appropriate check-in comment. All being good, within 60 seconds (a TeamCity default that can be changed) a build will be triggered.
    21. Click on Projects in TeamCity to get back to the overview screen. The build log will show you the console output, which is useful for troubleshooting any issues.
    That's it! You now have continuous integration on your database. In future posts I'll cover how you can generate and test the database creation script, the database upgrade script, and run database unit tests as part of your continuous integration script. If you have any trouble getting this up and running please let me know, either by commenting on this post or by emailing me directly.

    Read the article

  • Building Python 3.2.3 on Red Hat 5: missing _posixsubprocess

    - by Oz123
    I am trying to build Python 3 on a RHEL 5.7 machine. I successfully managed to build Python 3.2.2 with:
    # Install required build dependencies
    yum install openssl-devel bzip2-devel expat-devel gdbm-devel readline-devel sqlite-devel
    # Fetch and extract source. Please refer to http://www.python.org/download/releases
    # to ensure the latest source is used.
    wget http://www.python.org/ftp/python/3.2/Python-3.2.tar.bz2
    tar -xjf Python-3.2.tar.bz2
    cd Python-3.2
    # Configure the build with a prefix (install dir) of /opt/python3, compile, and install.
    ./configure --prefix=/opt/python3
    make
    But with Python 3.2.3 I am failing: Failed to build these modules: _posixsubprocess
    Is this a problem that should bother me? How do I build it? I found this patch, but it's not included in the Python 3.2.3 sources I obtained from the website... Applying this patch to my sources didn't solve the problem... Here is the output from stderr:
    ~/tmp/Python-3.2.3 $ make > build.log
    ldd: warning: you do not have execution permission for `/usr/local/lib/libreadline.so'
    /usr/bin/ld: skipping incompatible /usr/local/lib/libreadline.so when searching for -lreadline
    /usr/bin/ld: skipping incompatible /usr/local/lib/libreadline.a when searching for -lreadline
    /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c: In function '_close_open_fd_range_safe':
    /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c:205: error: 'O_CLOEXEC' undeclared (first use in this function)
    /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c:205: error: (Each undeclared identifier is reported only once
    /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c:205: error: for each function it appears in.)
    /usr/bin/ld: skipping incompatible /usr/local/lib/libz.so when searching for -lz
    /usr/bin/ld: skipping incompatible /usr/local/lib/libz.so when searching for -lz
    ~/tmp/Python-3.2.3 $ grep posix build.log
    gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -IInclude -I./Include -DPy_BUILD_CORE -c ./Modules/posixmodule.c -o Modules/posixmodule.o
    ar rc libpython3.2m.a Modules/_threadmodule.o Modules/signalmodule.o Modules/posixmodule.o Modules/errnomodule.o Modules/pwdmodule.o Modules/_sre.o Modules/_codecsmodule.o Modules/_weakref.o Modules/_functoolsmodule.o Modules/operator.o Modules/_collectionsmodule.o Modules/itertoolsmodule.o Modules/_localemodule.o Modules/_iomodule.o Modules/iobase.o Modules/fileio.o Modules/bytesio.o Modules/bufferedio.o Modules/textio.o Modules/stringio.o Modules/zipimport.o Modules/symtablemodule.o Modules/xxsubtype.o
    building '_posixsubprocess' extension
    gcc -pthread -fPIC -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -IInclude -I/home/oznahum/localroot/include -I. -I./Include -I/usr/local/include -I/home/oznahum/tmp/Python-3.2.3 -c /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c -o build/temp.linux-x86_64-3.2/home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.o
    _posixsubprocess
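    The compile error is O_CLOEXEC being undeclared: RHEL 5's kernel headers and glibc predate that flag (it appeared around Linux 2.6.23). A crude workaround sketch near the top of Modules/_posixsubprocess.c, just to let the module build; note it silently drops the atomic close-on-exec behaviour the flag normally provides:
        /* RHEL 5 lacks O_CLOEXEC; define it away so the module compiles.
           Descriptors opened this way will simply not be close-on-exec. */
        #ifndef O_CLOEXEC
        #define O_CLOEXEC 0
        #endif
    A cleaner route is a patch that guards each use of the flag so a fallback path is taken on old kernels.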

    Read the article

  • debian/rules error "No rule to make target"

    - by Hairo
    I'm having some problems creating a .deb file with debuild. After reading some tutorials I managed to make the file, but I always get this error:
    make: *** No rule to make target «build». Stop.
    dpkg-buildpackage: failure: debian/rules build gave error exit status 2
    debuild: fatal error at line 1329: dpkg-buildpackage -rfakeroot -D -us -uc -b failed
    Any help?? This is my debian/rules file:
    #!/usr/bin/make -f
    # -*- makefile -*-
    # Sample debian/rules that uses debhelper.
    # This file was originally written by Joey Hess and Craig Small.
    # As a special exception, when this file is copied by dh-make into a
    # dh-make output file, you may use that output file without restriction.
    # This special exception was added by Craig Small in version 0.37 of dh-make.
    # Uncomment this to turn on verbose mode.
    #export DH_VERBOSE=1
    build-stamp: configure-stamp
        dh_testdir
        touch build-stamp
    clean:
        dh_testdir
        dh_testroot
        rm -f build-stamp configure-stamp
        dh_clean
    install: build
        dh_testdir
        dh_testroot
        dh_clean -k
        dh_installdirs
        $(MAKE) install DESTDIR=$(CURDIR)/debian/pycounter
        mkdir -p $(CURDIR)/debian/pycounter
        # Copy .py files
        cp pycounter.py $(CURDIR)/debian/pycounter/opt/extras.ubuntu.com/pycounter/pycounter.py
        cp prefs.py $(CURDIR)/debian/pycounter/opt/extras.ubuntu.com/pycounter/prefs.py
        # desktop copyright and others (not complete, check)
        cp extras-pycounter.desktop $(CURDIR)/debian/pycounter/usr/share/applications/extras-pycounter.desktop
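    dpkg-buildpackage invokes debian/rules build (and later binary), but the rules file above only defines build-stamp: there is no build target, no configure-stamp rule, and no binary target either. A minimal sketch of the missing pieces in the same style (recipe lines must be tab-indented):
        build: build-stamp

        configure-stamp:
            dh_testdir
            touch configure-stamp

        binary: install
            dh_testdir
            dh_testroot
            dh_installdeb
            dh_gencontrol
            dh_md5sums
            dh_builddeb

        .PHONY: build clean binary install
    With build: build-stamp in place, the "No rule to make target 'build'" error goes away, and make then walks the configure-stamp -> build-stamp chain as intended.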

    Read the article

  • Best method to organize/manage dependencies in the VCS within a large solution

    - by SnOrfus
    A simple scenario: 2 projects are in version control The application The test(s) A significant number of checkins are made to the application daily. CI builds and runs all of the automation nightly. In order to write and/or run tests you need to have built the application (to reference/load instrumented assemblies). Now, consider the application to be massive, such that building it is prohibitive in time (an entire day to compile). The obvious side effect here, is that once you've performed a build locally, it is immediately inconsistent with latest. For instance: If I were to sync with latest, and open up one of the test projects, it would not locally build until I built the application. This is the same when syncing to another branch/build/tag. So, in order to even start working, I need to wait a day to build the application locally, so that the assemblies could be loaded - and then those assemblies wouldn't be latest. How do you organize the repository or (ideally) your development environment such that you can continually develop tests against whatever the current build is, or a given specific build, while minimizing building the application as much as possible?

    Read the article

  • sshd running but no PID file

    - by dunxd
    I'm recently started using monit to monitor the status of sshd on my CentOS 5.4 server. This works fine, but every so often monit reports that sshd is no longer running. This isn't true - I am still able to login to the server via ssh, however I note the following: There is no longer any PID file at /var/run/sshd.pid - after a reboot this file exists. Once it is gone, restarting sshd via service sshd restart does not create the PID file. sudo service sshd status reports openssh-daemon is stopped - again, restarting sshd does not change this, but a reboot does. sudo service sshd stop reports failed, presumably because of the missing PID file. Any idea what is going on? Update sudo netstat -lptun gives the following output relating to port 22 tcp 0 0 :::22 :::* LISTEN 20735/sshd Killing the process with this PID as suggested by @Henry and then starting sshd via service results in service sshd status recognising the process by PID again. Would still like to understand this better. RPM verify suggested by a couple of answerers shows this: sudo rpm -vV openssh openssh-server openssh-clients | grep 'S\.5' S.5....T c /etc/pam.d/sshd S.5....T c /etc/ssh/sshd_config /etc/pam.d/sshd has the following contents: #%PAM-1.0 auth include system-auth account required pam_nologin.so account include system-auth password include system-auth session optional pam_keyinit.so force revoke session include system-auth #session required pam_loginuid.so Should that last line be commented out? Update Here's the output of @YannickGirouard 's script: $ sudo ./sshd_test Searching for the process listening on port 22... Found the following PID: 21330 Command line for PID 21330: /usr/sbin/sshd Listing process(es) relating to PID 21330: UID PID PPID C STIME TTY TIME CMD root 21330 1 0 14:04 ? 
00:00:00 /usr/sbin/sshd Listing RPM information about openssh packages: Name : openssh Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:50:57 AM GMT Build Host: builder10.centos.org Group : Applications/Internet Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 745390 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH implementation of SSH protocol versions 1 and 2 ------------------------------------------------------ Name : openssh-clients Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:51:04 AM GMT Build Host: builder10.centos.org Group : Applications/Internet Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 871132 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH client applications ------------------------------------------------------ Name : openssh-server Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:51:04 AM GMT Build Host: builder10.centos.org Group : System Environment/Daemons Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 492478 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH server daemon ------------------------------------------------------ However, I've since got things working by killing the process and starting afresh, as suggested by @Henry below, so perhaps I am no longer seeing the same thing. Will try again if I am seeing the issue again after next reboot. Update - 14 March Monit alerted me that sshd had disappeared, and again I am able to ssh onto the server. So now I can run the script $ sudo ./sshd_test Searching for the process listening on port 22... Found the following PID: 2208 Command line for PID 2208: /usr/sbin/sshd Listing process(es) relating to PID 2208: UID PID PPID C STIME TTY TIME CMD root 2208 1 0 Mar13 ? 00:00:00 /usr/sbin/sshd root 1885 2208 0 21:50 ? 
00:00:00 sshd: dunx [priv] Listing RPM information about openssh packages: Name : openssh Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:50:57 AM GMT Build Host: builder10.centos.org Group : Applications/Internet Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 745390 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH implementation of SSH protocol versions 1 and 2 ------------------------------------------------------ Name : openssh-clients Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:51:04 AM GMT Build Host: builder10.centos.org Group : Applications/Internet Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 871132 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH client applications ------------------------------------------------------ Name : openssh-server Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:51:04 AM GMT Build Host: builder10.centos.org Group : System Environment/Daemons Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 492478 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH server daemon ------------------------------------------------------ Again, when I look for /var/run/sshd.pid I don't find it. $ cat /var/run/sshd.pid cat: /var/run/sshd.pid: No such file or directory $ sudo netstat -anp | grep sshd tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 2208/sshd $ sudo kill 2208 $ sudo service sshd start Starting sshd: [ OK ] $ cat /var/run/sshd.pid 3794 $ sudo service sshd status openssh-daemon (pid 3794) is running... Is it possible that sshd is restarting and not creating a pidfile for some reason?

    Read the article

  • ASMLib

    - by wcoekaer
    Oracle ASMLib on Linux has been a topic of discussion a number of times since it was released way back in 2004. There is a lot of confusion around it and certainly a lot of misinformation out there, for no good reason. Let me try to give a bit of history around Oracle ASMLib. Oracle ASMLib was introduced at the time Oracle released Oracle Database 10g R1. 10gR1 introduced a very cool, important new feature called Oracle ASM (Automatic Storage Management). A very simplistic description would be that this is a very sophisticated volume manager for Oracle data. Give your devices directly to the ASM instance and we manage the storage for you: clustered, highly available, redundant, performant, etc, etc... We recommend using Oracle ASM for all database deployments, single instance or clustered (RAC). The ASM instance manages the storage and every Oracle server process opens and operates on the storage devices like it would open and operate on regular datafiles or raw devices. So by default, since 10gR1 up to today, we do not interact differently with ASM-managed block devices than we did before with a datafile being mapped to a raw device. All of this is without ASMLib, so ignore that one for now. Standard Oracle on any platform that we support (Linux, Windows, Solaris, AIX, ...) does it the exact same way. You start an ASM instance, it handles storage management, and all the database instances use and open that storage and read/write from/to it. There are no extra pieces of software needed, including on Linux. ASM is fully functional and self-contained without any other components.
    In order for the admin to provide a raw device to ASM or to the database, it has to have persistent device naming. If you booted up a server where a raw disk was named /dev/sdf and you gave it to ASM (or even just created a tablespace without ASM on that device with datafile '/dev/sdf'), and next time you boot up that device is now /dev/sdg, you end up with an error. Just like you can't simply change datafile names, you can't change device filenames without telling the database, or ASM. Persistent device naming on Linux, especially back in those days, was, to say it bluntly, a nightmare. In fact there were a number of issues (dating back to 2004):
    - Linux async IO wasn't pretty
    - persistent device naming, including permissions (devices had to be owned by oracle and the dba group), was very, very difficult to manage
    - system resource usage, in terms of open file descriptors
    So given the above, we tried to find a way to make this easier on the admins, in many ways similar to why we started working on OCFS a few years earlier: how can we make life easier for the admins on Linux?
    A feature of Oracle ASM is the ability for third parties to write an extension using what's called ASMLib. It is possible for any third-party OS or storage vendor to write a library, using a specific Oracle-defined interface, that gets used by the ASM instance and by the database instance when available. This interface offered 2 components:
    - Define an IO interface: allow any IO to the devices to go through ASMLib
    - Define device discovery: implement an external way of discovering and labeling devices to provide to ASM and the Oracle database instance
    This is similar to a library that a number of companies have implemented over many years called libODM (Oracle Disk Manager). ODM was specified many years before we introduced ASM and allowed third-party vendors to implement their own IO routines, so that the database would use this library, if installed, for its open/read/write/close routines instead of the standard OS interfaces. PolyServe back in the day used this to optimize their storage solution, and Veritas used (and I believe still uses) this for their filesystem. It basically allowed, in particular, filesystem vendors to write libraries that could optimize access to their storage or filesystem. So ASMLib was not something new; it was basically based on the same model. You have libodm for just database access; you have libasm for ASM/database access.
    Since this library interface existed, we decided to do a reference implementation on Linux. We wrote an ASMLib for Linux that could be used on any Linux platform, and other vendors could see how this worked and potentially implement their own solution. As I mentioned earlier, ASMLib and ODMLib are libraries for third-party extensions. ASMLib for Linux, since it was a reference implementation, implemented both interfaces: the storage discovery part and the IO part. There are 2 components:
    - Oracle ASMLib: the userspace library with config tools (a shared object and some scripts)
    - oracleasm.ko: a kernel module that implements the asm device for /dev/oracleasm/*
    The userspace library is a binary-only module, since it links with and contains Oracle header files, but it is generic; we only have one asm library for the various Linux platforms. This library is opened by Oracle ASM and by Oracle database processes, and it interacts with the OS through the asm device (/dev/asm). It can install on Oracle Linux, on SuSE SLES, on Red Hat RHEL... The library itself doesn't actually care much about the OS version; the kernel module and device do.
    The support tools are simple scripts that allow the admin to label devices and scan for disks and devices. This way you can say: create an ASM disk label foo on, currently, /dev/sdf... So if /dev/sdf disappears and next time comes up as /dev/sdg, we just scan for the label foo, discover it as /dev/sdg, and life goes on without any worry. Also, when the database needs access to the device, we don't have to worry about file permissions or anything; it is all taken care of.
    The kernel module oracleasm.ko is a Linux kernel module/device driver. It implements a device, /dev/oracleasm/*, and any and all IO goes through ASMLib: /dev/oracleasm. This kernel module is obviously a very specific Oracle-related device driver, but it was released under the GPL v2 so anyone could easily build it for their Linux distribution kernels.
    Advantages of using ASMLib:
    - A good async IO interface for the database; the entire IO interface is based on an optimal async model for performance
    - A single file descriptor per Oracle process, not one per device or datafile per process, reducing open-file-handle overhead
    - Device scanning and labeling built in, so you do not have to worry about messing with udev or devlabel, permissions, or the like, which can be very complex and error prone
    Just like with OCFS and OCFS2, each kernel version (major or minor) has to get a new version of the device drivers. We started out building the oracleasm kernel module rpms for many distributions: SLES (in fact, in the early days, even for this thing called United Linux) and RHEL. The driver didn't make sense to push into upstream Linux because it's unique and specific to the Oracle database. As it takes a huge effort, in terms of build infrastructure, QA and release management, to build kernel modules for every architecture, every Linux distribution and every major and minor version, we worked with the vendors to get them to add this tiny kernel module (a 60k source code file) to their infrastructure. The folks at SuSE understood this was good for them, their customers and us, and added it to SLES. So every build coming from SuSE for SLES contains the oracleasm.ko module. We weren't as successful with other vendors, so for quite some time we continued to build it for RHEL and, of course, as we introduced Oracle Linux at the end of 2006, also for Oracle Linux. With Oracle Linux it became easy for us, because we just added the code to our build system, and as we churned out Oracle Linux kernels, whether for a public release or for customers that needed a one-off fix where they also used ASMLib, we didn't have to do any extra work; it was all nicely integrated.
    With the introduction of Oracle Linux's Unbreakable Enterprise Kernel and our interest in being able to exploit ASMLib more, we started working on a very exciting project called Data Integrity. Oracle (Martin Petersen in particular) worked for many years with the T10 standards committee and storage vendors and implemented Linux kernel support for DIF/DIX, data protection in the Linux kernel (note to those that wonder: yes, it's all in mainline Linux and under the GPL). This basically gave us all the features in the Linux kernel to checksum a data block and send it to the storage adapter, which can validate the block and checksum in firmware before sending it over the wire to the storage array, which can do another checksum, and on to the actual disk, which does a final validation before writing the block to the physical media. What was missing was the ability for a userspace application (read: Oracle RDBMS) to write a block which then has checksum and validation all the way down to the disk: application to disk. Because we have ASMLib, we had an entry into the Linux kernel, and Martin added support in ASMLib (kernel driver + userspace) for this functionality. Now, this is all based on relatively current Linux kernels; the oracleasm kernel module depends on the main kernel having support for it. Thanks to UEK, and our ability to ship a more modern, current version of the Linux kernel, we were able to introduce this feature into ASMLib for Linux from Oracle. This, combined with the fact that we build the asm kernel module when we build every single UEK kernel, allowed us to continue improving ASMLib and providing it to our customers.
    So today, we (Oracle) provide Oracle ASMLib for Oracle Linux, and in particular on the Unbreakable Enterprise Kernel. We did the build/testing/delivery of ASMLib for RHEL until RHEL5, but since RHEL6 we decided it was too much effort to also maintain all the build and test environments for RHEL; we did not have the ability to use the latest kernel features to introduce the Data Integrity work, and we didn't want to end up with multiple versions of ASMLib as maintained by us. SuSE SLES still builds and ships the oracleasm module and does all that work, and Red Hat is certainly welcome to do the same. They don't have to rebuild the userspace library; it's really about the kernel module.
    And finally, to re-iterate a few important things:
    - Oracle ASM does not in any way require ASMLib to function completely. ASMLib is a small set of extensions, in particular to make device management easier, but there are no extra features exposed through Oracle ASM with ASMLib enabled or disabled. Customers often confuse ASMLib with ASM; again, ASM exists on every Oracle-supported OS, and on every supported Linux OS (SLES, RHEL, OL), without ASMLib.
    - The Oracle ASMLib userspace library is available from OTN, and the kernel module is shipped along with OL/UEK for every build, and by SuSE for SLES for every one of their builds.
    - The ASMLib kernel module was built by us for RHEL4 and RHEL5, but we do not build it for RHEL6, nor for the OL6 RHCK kernel; only for UEK.
    - ASMLib for Linux is/was a reference implementation for any third-party vendor to be able to offer, if they want to, their own version for their own OS or storage.
    - ASMLib as provided by Oracle for Linux continues to be enhanced and evolved, and for the kernel module we use UEK as the base OS kernel.
    Hope this helps.
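    As a concrete sketch of the labeling workflow described above (the device names and label are placeholders; the commands come from the ASMLib support tools):
        # label a raw device once
        /usr/sbin/oracleasm createdisk DATA1 /dev/sdf
        # on any subsequent boot or cluster node, rediscover by label
        /usr/sbin/oracleasm scandisks
        /usr/sbin/oracleasm listdisks
    ASM then references the disk as ORCL:DATA1 regardless of which /dev/sdX name the kernel hands out on that boot.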

    Read the article

  • Tree Surgeon 2.0 - The future on the T4 Express

    - by Malcolm Anderson
    If you've never been a fan of Tree Surgeon (http://treesurgeon.codeplex.com/) then skip this post. However, if you have been, there have been some interesting developments over the last couple of years. The biggest one is T4. Recently Bill Simser wrote a detailed post about the potential future of Tree Surgeon, called "Tree Surgeon - Alive and Kicking or Dead and Buried". He raised the question: Times have changed. Since that last release in 2008 so much has changed for .NET developers. The question is, today is the project still viable? Do we still need a tool to generate a project tree, given that we have things like scaffolding systems, NuGet, and T4 templates? Or should we just give the project its rightful and respectful send-off, as it's had a good life and has outlived its usefulness? For myself, the answer is: keep it. I've spent the last couple of years doing agile engineering coaching and architecture, and from my experience I can tell you there are a lot of shops out there that would benefit from having Tree Surgeon as a viable product. Many would benefit simply from having the software engineering information embedded in the Tree Surgeon site floating around their conversations. Little things like: keep all of the software needed to run the build, together with the build, in the version control system. Have your developers and the build system use the same build. Have a one-touch build. Separate your code from your interface. Put unit tests in first, not last. I've seen companies with great developers suffer from the problems that naturally come from builds taking 3 and 4 hours to run. It takes work to get that build down to 10 minutes, but the benefits are always worth it. Tree Surgeon gives you a leg up by starting you off with a project that you can drop into your continuous integration system right out of the box. Well, it used to be right out of the box. Today you have to play with the project to make it work for you, but even with the issues (it hasn't been updated since 2008) it still gives you a framework, with logical separations, that you can build from. If you have used Tree Surgeon in the past, take a few minutes and drop a comment about what difference it made in your development style, and what you are doing differently today because of it.

    Read the article

  • Maven: how to include a specific folder or file when assembling a project, depending on whether it is a dev or production build?

    - by user563588
    Using maven-assembly-plugin:
    <plugin>
      <artifactId>maven-assembly-plugin</artifactId>
      <version>2.1</version>
      <configuration>
        <descriptors>
          <descriptor>descriptor.xml</descriptor>
        </descriptors>
        <finalName>xxx-impl-${pom.version}</finalName>
        <outputDirectory>target/assembly</outputDirectory>
        <workDirectory>target/assembly/work</workDirectory>
      </configuration>
    In the descriptor.xml file we can specify:
    <fileSets>
      <fileSet>
        <directory>src/install</directory>
        <outputDirectory>/</outputDirectory>
      </fileSet>
    </fileSets>
    Is it possible to include a specific file from this folder or a sub-folder depending on the profile? Or some other way... Like this:
    <profiles>
      <profile>
        <id>dev</id>
        <activation>
          <activeByDefault>false</activeByDefault>
        </activation>
        <build>
          <resources>
            <resource>
              <directory>src/install/dev</directory>
              <includes>
                <include>**/*</include>
              </includes>
            </resource>
          </resources>
        </build>
      </profile>
      <profile>
        <id>prod</id>
        <build>
          <resources>
            <resource>
              <directory>src/install/prod</directory>
              <includes>
                <include>**/*</include>
              </includes>
            </resource>
          </resources>
        </build>
      </profile>
    </profiles>
    But this puts the resources in the jar when packaging, whereas we need them in the zip when assembling, as mentioned above :( Thanks!
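    One hedged approach (a sketch; it relies on the assembly plugin interpolating pom properties inside the descriptor, and the property name install.flavor is an invention): point the fileSet at a property and set that property per profile, so the flavor selection happens in the zip rather than in the jar's resources:
    <!-- descriptor.xml -->
    <fileSets>
      <fileSet>
        <directory>src/install/${install.flavor}</directory>
        <outputDirectory>/</outputDirectory>
      </fileSet>
    </fileSets>
    <!-- pom.xml -->
    <profiles>
      <profile>
        <id>dev</id>
        <properties><install.flavor>dev</install.flavor></properties>
      </profile>
      <profile>
        <id>prod</id>
        <properties><install.flavor>prod</install.flavor></properties>
      </profile>
    </profiles>
    Building with mvn -Pdev assembly:assembly (or -Pprod) then packs the matching folder into the archive, and the <resources> blocks can be dropped so nothing leaks into the jar.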

    Read the article
