Search Results

Search found 71854 results on 2875 pages for 'build time'.

Page 54 of 2875

  • VS2008: Add custom build rule to a Property Sheet (vsprops)?

    - by shoosh
    I'm using the CUDA .rules file which comes with the CUDA SDK for custom build steps in my project. To save on property duplication I'd like to define the properties of the CUDA rule in a .vsprops file. For some reason, the CUDA rule branch of the properties tree does not show under any of my property sheets, only under the main configuration sheets. I tried editing the .vsprops file with a text editor and adding the section by hand, but there is no change; it still doesn't show up in Visual Studio. Is there a way to do this?

    Read the article

  • How can I pass a Visual Studio project's assembly version to another project for use in a post-build event?

    - by Coder7862396
    I have a solution with 2 projects:
    My Application 1.2.54 (C# WinForms)
    My Application Setup 1.0.0.0 (WiX Setup)
    I would like to add a post-build event to the WiX Setup project to run a batch file and pass it a command line parameter of My Application's assembly version number. The code may look something like this:
      CALL MyBatchFile.bat "$(fileVersion.ProductVersion($(var.My Application.TargetPath)))"
    But this results in the following error:
      Unhandled Exception:The expression """.My Application" cannot be evaluated. Method 'System.String.My Application' not found. C:\My Application\My Application Setup\My Application Setup.wixproj
      Error: The expression """.My Application" cannot be evaluated. Method 'System.String.My Application' not found. C:\My Application\My Application Setup\My Application Setup.wixproj
    I would like to be able to pass "1.2.54" to MyBatchFile.bat somehow.

    Read the article

  • How can the last command's wall time be put in the Bash prompt?

    - by Mr Fooz
    Is there a way to embed the last command's elapsed wall time in a Bash prompt? I'm hoping for something that would look like this:
      [last: 0s][/my/dir]$ sleep 10
      [last: 10s][/my/dir]$
    Background: I often run long data-crunching jobs and it's useful to know how long they've taken so I can estimate how long it will take for future jobs. For very regular tasks, I go ahead and record this information rigorously using appropriate logging techniques. For less-formal tasks, I'll just prepend the command with time. It would be nice to automatically "time" every single interactive command and have the timing information printed in a few characters rather than 3 lines.

    Read the article

  • .Net/C# Build Tool - Is NAnt a preferred tool?

    - by Olle
    I'm about to set up an automatic build of a .NET/C# project. I've searched the net quite a bit, and there are a lot of references to a tool called 'NAnt'. My questions are: Is NAnt considered a good tool for this, and is it still used? Are there other tools that are the de facto standard for such a task? From the information on the project's SourceForge page, there doesn't seem to have been much development going on in recent years. The same applies to the NAntContrib project. Thanks!

    Read the article

  • Creating Thumbnails in Build Phase of iPhone App Xcode project?

    - by nephilite
    Hey All, I have a group of png files in my bundle, but I also want smaller versions of them to be used in my app as well. Right now I am just resizing them all manually and putting them in the bundle too: so I might have Graphic_1.png and Graphic_1-thumbnail.png. Instead, is there any way to do something like: at build time, take all my png assets, create 1/4 scale versions of them, and save them into the bundle as well, with the -thumbnail suffix in the filename as above? Thanks!
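
    One approach is to hang the resizing off a Run Script build phase that calls a small script before the resources are copied into the bundle. Below is a hedged sketch of such a script in Python using the Pillow library; the resource directory and the reading of "1/4 scale" as 25% of each dimension are assumptions, not Xcode behaviour.
      # thumbnail_gen.py - hedged sketch: write a 25%-size copy of every *.png
      # in a resource directory as Foo-thumbnail.png next to the original.
      import glob
      import os
      from PIL import Image  # Pillow

      RESOURCE_DIR = "Resources"   # placeholder: wherever the full-size PNGs live
      SCALE = 0.25                 # "1/4 scale" read as 25% per dimension

      for path in glob.glob(os.path.join(RESOURCE_DIR, "*.png")):
          base, ext = os.path.splitext(path)
          if base.endswith("-thumbnail"):
              continue  # skip thumbnails produced by an earlier run
          thumb_path = base + "-thumbnail" + ext
          with Image.open(path) as img:
              size = (max(1, int(img.width * SCALE)), max(1, int(img.height * SCALE)))
              img.resize(size).save(thumb_path)
              print("wrote", thumb_path)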

    Read the article

  • How to build custom sqlite with provider under Android?

    - by dr4cul4
    Hi, I have a really strange problem. I need to build a custom sqlite3 database engine under Android OS, but I also want to use the database provider implementation. Unfortunately, when examining the sources of Android 1.6 I noticed that it's not so easy. Many classes in the android.database.* packages use the original provider, and many other parts of the framework use the android.database.sqlite.* packages directly, which of course makes this abstraction a bit confusing and unnecessary. But getting to my question: is there any way that I could extend the database interfaces to use a custom implementation of sqlite (or any other database)?

    Read the article

  • Ruby (with Rails) convert a string of time into seconds?

    - by Ryan
    So, I've got a string of time... something along the lines of '4 hours', '48 hours', '3 days', '15 minutes'. I would like to convert those all into seconds. For '4 hours', this works fine:
      Time.parse('4 hours').to_i - Time.parse('0 hours').to_i
      => 14400 # 4 hours in seconds, yay
    However, this doesn't work for 48 hours (outside of range error). It also does not work for 3 days (no information error), etc. Is there a simple way to convert these strings into seconds?
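
    The reported errors are consistent with Time.parse treating the string as a clock time rather than a duration; a duration only needs the number multiplied by a per-unit factor. Since the question mentions Rails, ActiveSupport already does this arithmetic (48.hours.to_i => 172800, 3.days.to_i => 259200). The same idea in a language-neutral form, sketched here in Python with an assumed unit table:
      # Hedged sketch: convert "48 hours", "3 days", "15 minutes" to seconds
      # by multiplying the number by a per-unit factor instead of parsing a time.
      import re

      UNIT_SECONDS = {"second": 1, "minute": 60, "hour": 3600, "day": 86400}

      def duration_to_seconds(text):
          match = re.match(r"\s*(\d+)\s*([A-Za-z]+)", text)
          if not match:
              raise ValueError("unrecognized duration: %r" % text)
          amount, unit = int(match.group(1)), match.group(2).lower().rstrip("s")
          return amount * UNIT_SECONDS[unit]

      for example in ("4 hours", "48 hours", "3 days", "15 minutes"):
          print(example, "->", duration_to_seconds(example), "seconds")  # 14400, 172800, 259200, 900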

    Read the article

  • ntpd on Fedora Core 6 with high negative time reset values

    - by Mark White
    The basic problem is we have a FC6 server instance running on a virtual machine, and the system time seems to have been slowly drifting until it is now causing a problem. The server runs 24/7 and has been up for 155 days. It has been changed to show GMT, and reports the time as (example) 00:15:15 GMT whereas the actual time is 00:00:00 GMT. This is an offset of 915 seconds. selinux has been changed to 'setenforce 0' for testing and I am running as root. I stop the ntpd service and change the time in System|Administration|Date & Time. The time still shows the same with 'date' in bash. There are no error logs. I change the date with 'date --set' in bash. The response confirms the changed date. I run 'date' and the incorrect date is shown. There are no error logs. I start the ntpd service and /var/log/messages shows success with 'time reset -915.720139s'. The date remains unchanged. ntpq -p shows three time servers, all with offsets of around -915 seconds. I stop the ntpd service and try 'ntpd -gqx' and get the same result as above - success, but a large negative time reset. I've tried varying combinations of the above, and a few more settings in System|Administration|Date & Time - no change. I just need to reset the system time to GMT. No offset. But I can't wait for ntpd to slew the time over the next few weeks. Any advice is welcome, cheers! Surely this shouldn't be this difficult... Mark...

    Read the article

  • Dual DC Time Service

    - by poconnor
    I believe I'm having an issue with my Domain Controllers and Time Service. On my backup DC, I keep seeing a warning stating "The time service has stopped advertising as a time source because the local clock is not synchronized." Does this mean that my backup DC believes it's a Time Server? My PDC should be the time server and I have gone through setting up the PDC as the time server. I was not around for the original setup of the time server with the old PDC and backup DC. But I believe the old PDC was the time server, so I set up the new PDC as the new time server when I decommissioned the old PDC. Is it possible that the backup DC was set up as the time server and it still thinks it's supposed to be giving out time to everyone?
    Registry for PDC has NTP; registry for Backup has NT5DS.
    Results of w32tm /monitor:
      Getting AD DC list for default domain...
      Analyzing: delayoffset from DC1.local..com Stratum: 4, delayoffset from DC1.local..com Stratum: 3
      Warning: Reverse name resolution is best effort. It may not be correct since the RefID field in time packets differs across NTP implementations and may not be using IP addresses.
      DC2.local..com[192.168.1.8:123]: ICMP: 1ms NTP: -0.6349491s RefID: DC1.local..com [192.168.1.9]
      DC1.local..com *** PDC ***[192.168.1.9:123]: ICMP: 0ms NTP: +0.0000000s RefID: wwwco1test12.microsoft.com [65.55.21.20]

    Read the article

  • Build Mobile App for E-Business Suite Using SOA Suite and ADF Mobile

    - by Michelle Kimihira
    With the upcoming release of Oracle ADF Mobile, I caught up with Srikant Subramaniam, Senior Principal Product Manager, Oracle Fusion Middleware, after OpenWorld to learn about the cool hands-on lab at OpenWorld. For those of you who missed it, you will want to keep reading...
    Author: Srikant Subramaniam, Senior Principal Product Manager, Oracle Fusion Middleware
    Oracle ADF Mobile enables rapid and declarative development of native on-device mobile applications. These native applications provide a richer experience for smart device users running Apple iOS or other mobile platforms. Oracle ADF Mobile protects Oracle customers from technology shifts by adopting a metadata-based development framework that enables developers to develop one app (using Oracle JDeveloper) and deploy it to multiple device platforms (starting with iOS and Android). Oracle ADF Mobile also enables IT organizations to leverage existing expertise in web-based and Java development by adopting a hybrid application architecture that brings together HTML5, Java, and a device-native container: HTML5 allows developers to deliver device-native user experiences while maintaining portability across different platforms; Java allows developers to create modules to support business logic and data services; and the native container provides integration with device services such as camera, contacts, etc. All these technologies are packaged into a development framework that supports declarative application development through Oracle JDeveloper. ADF Mobile also provides out-of-the-box integration with key Fusion Middleware components, such as SOA Suite and Business Process Management (BPM). Oracle Fusion Middleware provides the necessary infrastructure to extend business processes and services to the mobile device -- enabling the mobile user to participate in human tasks -- without an additional "mobile middleware" layer. When coupled with Oracle SOA Suite, this combination can execute business transactions on Oracle E-Business Suite (or any Oracle Application).
    Demo Use Case: Mobile E-Business Suite (iExpense) Approvals
    Using an employee expense approval scenario, we illustrate how to use Oracle Fusion Middleware and Oracle ADF Mobile to build application extensions that integrate intelligently with Oracle Applications (for example, E-Business Suite). Building these extensions using Oracle Fusion Middleware and ADF makes modifications simple, quick to implement, and easy to maintain/upgrade. As described earlier, this approach also extends Fusion Middleware to mobile users without an additional "mobile middleware" layer. The approver is presented with a list of expense reports that have been submitted for approval. These expense reports are retrieved from the backend E-Business Suite and displayed on the mobile device. Approval (or rejection) of the expense report kicks off the workflow in E-Business Suite and takes it to completion. The demo also shows how to integrate with native device services such as email, contacts, and BI dashboards, as well as a prebuilt PDF viewer (this is especially useful in the expense approval scenario, as there is often a need for the approver to access the submitted receipts).
    Summary
    Oracle recommends Fusion Middleware as the application integration platform to deliver critical enterprise data and processes to mobile applications. Pre-built connectors between Fusion Middleware and Applications greatly accelerate the integration process. Instead of building individual integration points between mobile applications and individual enterprise applications, Oracle Fusion Middleware enables IT organizations to leverage a common platform to support both desktop and mobile applications.
    Additional Information: Product Information on Oracle.com: Oracle Fusion Middleware

    Read the article

  • How to Build Services from Legacy Applications

    - by Chris Falter
    The SOA consultants invaded the executive suite at your company or agency, preached the true religion, and converted the unbelievers. Now by divine imperative you must convert your legacy applications into a suite of reusable services.  But as usual, you lack the time and resources that you need in order to develop the services properly.  So you googled or bing’ed, found this blog post, and began crying in gratitude.  Yes, as the title implies, I am going to reveal my easy, 3-step, works-every-time process for converting silos of legacy applications into the inventory of services your CIO has been dreaming about.  So just close your eyes and count to 3 … now open them … and here it is…. Not. While wishful thinking is too often the coin of the IT realm, even the most naive practitioner knows that converting legacy applications into reusable services requires more than a magic wand.  The reason is simple: if your starting point is your legacy applications, then you will simply be bolting a web service technology layer on top of your legacy API.  And that legacy API is built in the image of the silo applications.  Enter the wide gate of the legacy API, follow the broad path of generating service interfaces from existing code, and you will arrive at the siloed enterprise destruction that you thought you were escaping. The Straight and Narrow Path This past week I had the opportunity to learn how the FBI Criminal Justice Information Systems department has been transitioning from silo applications to a service inventory.  Lafe Hutcheson, IT Specialist in the architecture group and fellow attendee at an SOA Architect Certification Workshop, was my guide.  Lafe has survived the chaos of an SOA initiative, so it is not surprising that he was able to return from a US Army deployment to Kabul, Afghanistan with nary a scratch.  According to Lafe, building their service inventory is a three-phase process: Model a business process.  This requires intense collaboration between the IT and business wings of the organization, of course.  The FBI uses IBM Websphere tools to model the process with BPMN. Identify candidate services to facilitate the business process. Convert the BPMN to an executable BPEL orchestration, model and develop the services, and use a BPEL engine to run the process.  The FBI uses ActiveVOS for orchestration services. The 12 Step Program to End Your Legacy API Addiction Thomas Erl has documented a process for building a web service inventory that is quite similar to the FBI process. Erl’s process adds a technology architecture definition phase, which allows for the technology environment to influence the inventory blueprint.  For example, if you are using an enterprise service bus, you will probably not need to build your own utility services for logging or intermediate routing.  Erl also lists a service-oriented analysis phase that highlights the 12-step process of applying the principles of service orientation to modeling your services.  Erl depicts the modeling of a service inventory as an iterative process: model a business process, define the relevant technology architecture, define the service inventory blueprint, analyze the services, then model another business process, rinse and repeat.  (Astute readers will note that Erl’s diagram, restricted to analysis and modeling process, does not include the implementation phase that concludes the FBI service development methodology.) 
The service-oriented analysis phase is where you find the 12 steps that will free you from your legacy API addiction. In a nutshell, you identify the steps in the process that need services; identify the different types of services (agnostic entity services, service compositions, and utility services) that are required; apply service-orientation principles; and normalize the inventory into cohesive service models. Rather than discuss each of the 12 steps individually, I will close by simply referring my readers to Erl’s explanation.

    Read the article

  • How to build a 4x game?

    - by Marco
    I'm trying to study how to successfully implement a 4X game. Areas of interest: 1) map data: how to store stellar systems (graphs?), how to generate them and so on; 2) multiplayer: how to organize code into a non-graphical server and a client that displays it; 3) command system: what patterns there are to capture user and AI decisions and handle them, adding at first "explore" and "colonize", then "combat", "research", "spy" and so on (commands can affect ships, planets, research, etc.); 4) AI system: the AI can use commands to expand and upgrade planets and ships. I know it is a big question, so help is appreciated :D
    1) Map data: The best choice is a graph to model the galaxy. A node is a stellar system and every system has a list of planets. Ships cannot travel outside of predefined paths, like in Ascendancy: http://www.abandonia.com/files/games/221/Ascendancy_2.png Every connection between two stellar systems has a cost, in turns. Generating a galaxy is only a matter of: dimension (number of stellar systems), variety (randomize the number of planets and their types - desert, earth-like, etc.), positions of each stellar system in game space, and connections (ensure that a path exists between every pair of nodes, so the graph is "connected" - not sure if that is the mathematically correct term).
    2) Multiplayer: The game is organized in turns: player 1, player 2, AI 1, AI 2. The server takes care of all data and the clients just display it and collect data changes. Because it is a turn-based game, latency is not a problem :D
    3) Command system: I would like to design a hierarchy of commands to take care of this aspect: an abstract GenericCommand(target), an ExploreCommand(Ship) extending GenericCommand, a ColonizeCommand(Ship), a BuildCommand(planet, object) and so on. In my head all these commands are stored in a queue for every planet, ship, research center or spy, and each turn a command is sent to the server to apply the command and change the data state (a minimal sketch of this follows below).
    4) AI system: I don't have any idea about this. It is a big topic and what I want is a simple AI, something like "expand and fight against everyone". I'm thinking about a behaviour tree to control AI moves, so I can develop an AI that tries to build ships to expand, then colonize planets, upgrade them through science and fight enemies. Could it be done with a finite state machine too? Any ideas, resources or articles are welcome!
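
    The command hierarchy in point 3 is essentially the classic Command pattern: an abstract command with an execute step, one concrete subclass per action, and a per-object queue the server drains once per turn. A minimal sketch of that structure, written here in Python; the class names echo the ones in the question, but the game_state methods are invented placeholders:
      # Hedged sketch of the command queue from point 3.
      from abc import ABC, abstractmethod
      from collections import deque

      class GenericCommand(ABC):
          def __init__(self, target):
              self.target = target            # a ship, planet, research center, ...

          @abstractmethod
          def execute(self, game_state):
              """Apply this command to the shared game state."""

      class ExploreCommand(GenericCommand):
          def execute(self, game_state):
              game_state.reveal_system(self.target.position)              # placeholder method

      class ColonizeCommand(GenericCommand):
          def execute(self, game_state):
              game_state.found_colony(self.target, self.target.position)  # placeholder method

      class CommandQueue:
          """One queue per ship/planet; the server pops one command each turn."""
          def __init__(self):
              self._pending = deque()

          def push(self, command):
              self._pending.append(command)

          def run_turn(self, game_state):
              if self._pending:
                  self._pending.popleft().execute(game_state)
    The same queue works for the AI: a behaviour tree or finite state machine only has to decide which commands to push, so the server-side turn loop stays identical for human and AI players.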

    Read the article

  • Insufficient memory issue during Build Process Customization

    - by jehan
    When customizing the Build Process Template in the Workflow designer, I came across OutOfMemoryException errors while performing Save as Image and Copy operations: "Insufficient memory to continue execution of program". There is a fix available on Microsoft Connect which has resolved the issue.

    Read the article

  • Running response time tests on php code - how much is 7.2E-5 microseconds?

    - by Ali
    Hi guys, I'm using the microtime() function of PHP to tell how long certain snippets of code take to run. I do this by taking the time before and after the snippet and subtracting them using the microtime function. I got the following results for the different snippets:
      1 - 0.022976
      2 - 0.003656
      3 - -0.196361
      4 - 0.006563
      5 - 7.2E-5
      6 - 0.847695
      7 - 0.005092
      8 - 7.6E-5
      9 - 0.08024
    The first number is the snippet and the second is the time taken... I've forgotten whatever I learnt back in college on numerical methods :( - how big is 7.2E-5 microseconds?
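
    For what it's worth, 7.2E-5 is just scientific notation: 7.2 x 10^-5 = 0.000072. If the readings come from microtime(true), the differences are in seconds, which would make that reading roughly 72 microseconds. A quick check, sketched here in Python:
      # 7.2E-5 written out - scientific notation, nothing PHP-specific.
      value = 7.2e-5
      print("%.6f" % value)                            # 0.000072
      print("%.1f microseconds" % (value * 1_000_000)) # 72.0, if the value is in seconds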

    Read the article

  • Nant build fails - but only in TeamCity

    - by wayne
    Hi, I have a NAnt build file set up which works fine from the command line but not in TeamCity. I've checked that the command I execute is run from the same directory TC is working in and checked all the references, but it still fails with the following error:
      [build] Compile the project using Debug configuration...
      [10:30:05]: [build] msbuild (1m:18s)
      [10:30:06]: [msbuild] Starting MSBuild...
      [10:30:07]: [msbuild] Starting 'C:\WINDOWS\Microsoft.NET\Framework\v3.5\msbuild.exe (@"G:\TeamCity\buildAgent\work\9de21b975852dd95\src\Irm.Web.App\Irm.Web.App.sln.teamcity.msbuild.tcargs")' in 'G:\TeamCity\buildAgent\work\9de21b975852dd95'
      [10:30:09]: [msbuild] MSBUILD : error MSB1025: An internal failure occurred while running MSBuild.
      [10:31:18]: [msbuild] Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object.
      [10:31:18]: [msbuild] at Microsoft.Build.CommandLine.MSBuildApp.BuildProject(String projectFile, String[] targets, String toolsVersion, BuildPropertyGroup propertyBag, ILogger[] loggers, LoggerVerbosity verbosity, DistributedLoggerRecord[] distributedLoggerRecords, Boolean needToValidateProject, String schemaFile, Int32 cpuCount, Boolean enableNodeReuse)
      [10:31:18]: [msbuild] at Microsoft.Build.CommandLine.MSBuildApp.Execute(String commandLine)
      [10:31:18]: [msbuild] at Microsoft.Build.CommandLine.MSBuildApp.Main()
      [10:31:24]: G:\TeamCity\buildAgent\work\9de21b975852dd95\Irm-deploy.build(22,10): External Program Failed: msbuild (return code was -1073741819)
    Does anyone have any idea why TC would not be able to run the build, given that I know it works from the command line? Cheers, w://

    Read the article

  • Microsoft.Build.Engine Error (default targets): Target GetFrameworkPaths: Could not locate the .NET Framework SDK

    - by Mitchan Adams
    I am writing a web service that, when called, should build a C# project. I'm using the .NET Framework 2 references Microsoft.Build.Engine and Microsoft.Build.Framework. If you look under the <Import> section of the .csproj file, by default it has:
      <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
    which I then changed to:
      <Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />
    My code to build the csproj is:
      Engine buildEngine = new Engine(Path.Combine(Environment.GetEnvironmentVariable("SystemRoot"), @"Microsoft.NET\Framework\v2.0.50727"));
      FileLogger logger = new FileLogger();
      logger.Parameters = @"logfile=c:\temp\build.log";
      buildEngine.RegisterLogger(logger);
      bool success = buildEngine.BuildProjectFile([Path_Of_Directory]+ "ProjectName.csproj");
      buildEngine.UnregisterAllLoggers();
    The success variable returns false because the build failed. I then check the build.log file and this is the error I receive:
      Build started 3/17/2010 11:16:56 AM.
      ______________________________
      Project "[Path_Of_Directory]\ProjectName.csproj" (default targets):
      Target GetFrameworkPaths: Could not locate the .NET Framework SDK. The task is looking for the path to the .NET Framework SDK at the location specified in the SDKInstallRootv2.0 value of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft.NETFramework. You may be able to solve the problem by doing one of the following: 1.) Install the .NET Framework SDK. 2.) Manually set the above registry key to the correct location.
    I can't understand why it won't build. Any help will be much appreciated. Thanks

    Read the article

  • MonoDevelop 2.8.2: Build failed. Illegal characters in path

    - by user1056607
    I have just installed MonoDevelop 2.8.2. After opening a new solution named test I attempted to run the project. I press F5 and all I see is a "Build failed. Illegal characters in path" error in the bottom left. I open up the Error List and see no errors. I have done some searching and only find solutions pertaining to projects that are beyond the scope of just the pre-generated code. This is the code:
      using System;
      namespace Test
      {
          class MainClass
          {
              public static void Main(string[] args)
              {
                  Console.WriteLine("Hello World!");
              }
          }
      }
    I have tried to uninstall/reinstall, cut out any spaces in the path to the program or the solution, and even opened VS2010 and just copy-pasted that code over. I've looked over my options under Tools, the solution options under Project, and the project's options. I am running MD 2.8.2 with GTK# and Microsoft's .NET runtime. Let me know if you need any more information. Any help would be appreciated. Thank you for your time!

    Read the article

  • setting up rhel 5.x RPM build server for mortal users

    - by Chen Levy
    My task is to set up a RHEL 5.x build host that can build RPMs for mortal users. On RHEL 6.x with rpm version 4.8, I have in /usr/lib/rpm/macros:
      # Path to top of build area.
      %_topdir %{getenv:HOME}/rpmbuild
    On RHEL 5.x with rpm version 4.4, the %{getenv:HOME} macro is not available. I know that I can use /home/someuser/.rpmmacros:
      %_topdir /home/someuser/rpmbuild
    and this will work for that user, however I don't want to do this for every user separately. Moreover, since .rpmmacros will not expand ${HOME} or ~, I suspect it is unsafe to use those. This in turn makes /etc/skel unsuitable for this task (or so I suspect). So in short, my question is: how do I set up a RHEL 5.x host that allows all users to build RPM packages in their home directory?
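
    When the global macros file cannot expand the user's home directory, one workaround is to provision a per-user ~/.rpmmacros from an admin script instead of editing each one by hand. A hypothetical sketch in Python, run as root; the /home layout and the ownership handling are assumptions:
      # Hedged sketch: write ~/.rpmmacros for every home directory under /home,
      # pointing %_topdir at that user's own rpmbuild tree.
      import os
      import shutil

      for name in os.listdir("/home"):
          home = os.path.join("/home", name)
          if not os.path.isdir(home):
              continue
          macros_path = os.path.join(home, ".rpmmacros")
          with open(macros_path, "w") as f:
              f.write("%%_topdir %s/rpmbuild\n" % home)
          os.makedirs(os.path.join(home, "rpmbuild"), exist_ok=True)
          shutil.chown(macros_path, user=name)                    # hand the file back to its owner
          shutil.chown(os.path.join(home, "rpmbuild"), user=name)
    Another route often suggested for older rpm releases is shell expansion in the global macros file, e.g. %_topdir %(echo $HOME)/rpmbuild, though that is worth verifying against rpm 4.4 before relying on it.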

    Read the article

  • MySQL query execution time - can I get this in milliseconds?

    - by Max Williams
    I'm comparing a few different approaches to getting some data in MySQL, directly at the console, using the SQL_NO_CACHE option to make sure MySQL keeps running the full query every time. MySQL gives me the execution time back in seconds, to two decimal places. I'd really like to get the result back in milliseconds (ideally to one or two decimal places), to get a better idea of improvements (or lack thereof). Is there an option I can set in MySQL to achieve this? thanks, max
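
    One way to get millisecond-level numbers without relying on the console's two-decimal display is to time the statement from a client. A minimal sketch in Python, assuming a DB-API driver such as PyMySQL and a placeholder query; the SQL_NO_CACHE hint is kept so the server re-executes the query on every run:
      # Hedged sketch: time a query from the client with sub-millisecond resolution.
      # Connection details and the query are placeholders.
      import time
      import pymysql  # assumed driver; any DB-API module works the same way

      conn = pymysql.connect(host="localhost", user="root", password="", database="test")
      query = "SELECT SQL_NO_CACHE COUNT(*) FROM some_table"

      with conn.cursor() as cur:
          start = time.perf_counter()
          cur.execute(query)
          cur.fetchall()
          print("%.2f ms" % ((time.perf_counter() - start) * 1000.0))

      conn.close()
    Note that this includes the network round trip; for server-side numbers, MySQL's profiler (SET profiling = 1; run the query; SHOW PROFILES;) reports per-statement durations at much finer than two decimal places.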

    Read the article

  • Ignoring build number when referencing dll

    - by brickner
    I have one solution with a .NET 4.0 project (C#) that produces a delay-signed dll, which I then Dotfuscate and sign.
    EDIT: This is how I version the dll:
      [assembly: AssemblyVersion("0.7.0.*")]
      [assembly: AssemblyFileVersion("0.7.0.0")]
    I have another solution with a .NET 4.0 project (C++/CLI) that references the signed dll and produces a signed dll (actually, delay-signed and then signed in a post-build step because of a flaw in the C++ build system). The problem is that the reference to the dll contains a specific version number, which includes even the build number (I want to have a build number). Every time I build the referenced dll, I have to change the project settings file (.vcxproj) so it references the new version of the dll. Since I work with source control, this is very inconvenient (different computers might have different build numbers since each computer builds its own referenced dll - the referenced dll is not in source control). If I don't change the reference, I get a warning:
      warning MSB3245: Could not resolve this reference. Could not locate the assembly...
    And many errors like this:
      error C3083: 'Foo': the symbol to the left of a '::' must be a type
    These are resolved once I change the reference. How do I make the reference ignore the build number, or even the entire version number?

    Read the article

  • How to build compass in sublime text 2

    - by aurel
    I downloaded this package which allows you to build Compass through Sublime Text by pressing cmd+b. I created a Sass file and made sure Compass is selected from the Tools > Build System > Compass menu. But when I press cmd+b the file does not compile and I get no error. I only get [Finished in 0.0s with exit code 1]. However, I also tested building the code with Sass (Tools > Build System > SASS) and it works fine. Do I need to do anything to the Sass file for Compass to work? Thanks

    Read the article

  • Linux: modpost does not build anything

    - by waffleman
    I am having problems getting any kernel modules to build on my machine. Whenever I build a module, modpost always says there are zero modules: MODPOST 0 modules. To troubleshoot the problem, I wrote a test module (hello.c):
      #include <linux/module.h> /* Needed by all modules */
      #include <linux/kernel.h> /* Needed for KERN_INFO */
      #include <linux/init.h>   /* Needed for the macros */

      static int __init hello_start(void)
      {
          printk(KERN_INFO "Loading hello module...\n");
          printk(KERN_INFO "Hello world\n");
          return 0;
      }

      static void __exit hello_end(void)
      {
          printk(KERN_INFO "Goodbye Mr.\n");
      }

      module_init(hello_start);
      module_exit(hello_end);
    Here is the Makefile for the module:
      obj-m = hello.o
      KVERSION = $(shell uname -r)
      all:
              make -C /lib/modules/$(KVERSION)/build M=$(shell pwd) modules
      clean:
              make -C /lib/modules/$(KVERSION)/build M=$(shell pwd) clean
    When I build it on my machine, I get the following output:
      make -C /lib/modules/2.6.32-27-generic/build M=/home/waffleman/tmp/mod-test modules
      make[1]: Entering directory `/usr/src/linux-headers-2.6.32-27-generic'
      CC [M] /home/waffleman/tmp/mod-test/hello.o
      Building modules, stage 2.
      MODPOST 0 modules
      make[1]: Leaving directory `/usr/src/linux-headers-2.6.32-27-generic'
    When I make the module on another machine, it is successful:
      make -C /lib/modules/2.6.24-27-generic/build M=/home/somedude/tmp/mod-test modules
      make[1]: Entering directory `/usr/src/linux-headers-2.6.24-27-generic'
      CC [M] /home/somedude/tmp/mod-test/hello.o
      Building modules, stage 2.
      MODPOST 1 modules
      CC /home/somedude/tmp/mod-test/hello.mod.o
      LD [M] /home/somedude/tmp/mod-test/hello.ko
      make[1]: Leaving directory `/usr/src/linux-headers-2.6.24-27-generic'
    I looked for any relevant documentation about modpost, but found little. Does anyone know how modpost decides what to build? Is there an environment setting that I am possibly missing? BTW, here is what I am running:
      uname -a
      Linux waffleman-desktop 2.6.32-27-generic #49-Ubuntu SMP Wed Dec 1 23:52:12 UTC 2010 i686 GNU/Linux

    Read the article

  • Specifying both multiple targets and multiple build files with ant or subant in 1.6

    - by Paul Marshall
    I'm trying to unify a build process, running one build to get multiple packages. My first shot at this is just having a central build script call <ant> or <subant> on each project's build.xml file. I'm using Ant 1.6, and I've run into a funny problem: either I use the <ant> task, and I can specify multiple targets but not multiple build files, or I use the <subant> task, and I can specify multiple build files but not multiple targets. I realize there are a few solutions here already: Just upgrade to Ant 1.7 already; <antcall> can do multiple targets there. Edit the separate project build files to have a variety of top-level targets, so I can call each individual file with just one target, and use <antcall>. Copy-paste a lot of <ant> tasks, with a little help from <macrodef> to preserve sanity. Is there something I've missed that will allow me to do what I want from this single central build.xml without a) editing individual project files, b) writing lots of repetitive code, or c) upgrading Ant, and that d) doesn't require editing every time I add a new project?

    Read the article
