Search Results


  • Why can't tuxboot and ubuntu play well together?

    - by mmr
    I'm trying to get Clonezilla to run off of a USB stick, and it seems the right way to do that is via Tuxboot. Tuxboot, however, is not compilable on Ubuntu: I used git to pull it from the repository, but building it is apparently not supported there (the build script just tries to install Windows things), and when I run the 'install' script, qmake-linux wants my qmake executable to be in the same directory as the stuff I pulled down. Let's just say that if there's a way to do this easily, I ain't seein' it.

    So then I downloaded the prebuilt Linux binary, the most recent of which is tuxboot-linux-25. Trying to run it fails because libpng12.so.0 isn't found. OK, so I went to install that via instructions I found on the web, but Firefox seems to have already deleted them from my history (yay!). I then added the /usr/local/lib directory to ldconfig via emacs (had to install that too, of course), following http://ubuntuforums.org/showthread.php?t=369848. I still get errors that libpng12.so.0 cannot be opened because 'No such file or directory'. ldconfig -p | grep libpng shows that the library is there, but it still doesn't seem to be findable. What do I do next? (For the record, doing this in Windows is painless: download, click, and it's done. But I'm trying to be all linuxy and get away from Windows for this...)
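
    For reference, a minimal sketch of the ldconfig step described above, assuming the library really did land in /usr/local/lib (paths are examples):

        # Register /usr/local/lib as a loader path and rebuild the cache.
        echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/local.conf
        sudo ldconfig
        ldconfig -p | grep libpng    # confirm libpng12.so.0 is listed
        # If it is listed but the loader still reports "No such file or
        # directory", check for an architecture mismatch: a 32-bit binary
        # cannot load a 64-bit library.
        file tuxboot-linux-25 /usr/local/lib/libpng12.so.0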

    Read the article

  • Solving “XmlSchemaException: The global element '<elementName>' has already been declared”

    - by ChrisD
    I recently encountered this error when I attempted to consume a newly hosted WCF service.  The service used the Request/Response model and had been properly decorated.  The response and request objects were marked as DataContracts and had a specified namespace.   My WCF service interface was marked as a ServiceContract and shared the namespace attribute value.   Everything should have been fine, right?

        [ServiceContract(Namespace = "http://schemas.myclient.com/09/12")]
        public interface IProductActivationService
        {
            [OperationContract]
            ActivateSoftwareResponse ActivateSoftware(ActivateSoftwareRequest request);
        }

    Well, not exactly.  Apparently the WSDL generator was having an issue:

        System.Xml.Schema.XmlSchemaException: The global element
        'http://schemas.myclient.com/09/12:ActivateSoftwareResponse' has already been declared.

    After digging I’ve found the problem: the WSDL generator has some reserved suffixes for its entities, including Response, Request and Solicit (see http://msdn.microsoft.com/en-us/library/ms731045.aspx).  The error message is actually the result of a naming conflict. The WSDL generator uses the namespace of the service to build its reserved types. The service contract and data contract share a namespace, which, coupled with the Response/Request suffixes I was using in my class names, resulted in the SchemaException. The fix, two options: rename my data contract entities to use a non-reserved suffix (i.e. change ActivateSoftwareResponse to ActivateSoftwareResp); or change the namespace of the data contracts to differ from the service contract namespace. I chose option 2 and changed all my data contracts to use a “http://schemas.myclient.com/09/12/data” namespace value. This avoided the name collision and I was able to produce my WSDL and consume my service.
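
    A sketch of what option 2 looks like in code (contract members are elided and the DataContract declarations are illustrative; only the type and namespace names above come from the post):

        using System.Runtime.Serialization;
        using System.ServiceModel;

        // Data contracts get their own namespace so the WSDL generator's
        // reserved Request/Response suffixes no longer collide with the
        // types it reserves in the service contract's namespace.
        [DataContract(Namespace = "http://schemas.myclient.com/09/12/data")]
        public class ActivateSoftwareRequest { /* members elided */ }

        [DataContract(Namespace = "http://schemas.myclient.com/09/12/data")]
        public class ActivateSoftwareResponse { /* members elided */ }

        [ServiceContract(Namespace = "http://schemas.myclient.com/09/12")]
        public interface IProductActivationService
        {
            [OperationContract]
            ActivateSoftwareResponse ActivateSoftware(ActivateSoftwareRequest request);
        }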

    Read the article

  • backup and file server for 50+ TB of data

    - by a-bomb
    Our office wants to build a new server to handle our data. Over the last 10 years our data was stored on CDs, DVDs and HDDs, but now they want all of it in one place, attached to the network, for everybody in the office to access. The data is 20 TB of new material plus 30 TB of old; the priority now is to store these 20 TB and gradually bring in the other 30 TB over time. So what is the best solution? We thought of getting an HP server and connecting it to an external enclosure holding either tape drives or HDDs (we haven't decided yet), or getting a NAS and connecting it to the HP server. What should we do? This is new for us...

    Read the article

  • Changing the Default Windows Phone 7 Deployment Target In Visual Studio 2010

    - by mbcrump
    After you download and install the January 2011 Windows Phone update, you will notice one annoying thing: the default deployment target for Windows Phone projects in Visual Studio changes to Windows Phone 7 Device. Before the update, it defaulted to the Emulator. I found this extremely annoying, as I'm more than likely going to test with the emulator before putting anything on my actual device. To be fair, Microsoft said they were going to switch the default and even provided a solution, but it was buried in a tiny paragraph in the release notes. The good news is that it's very easy to undo. Simply navigate to %LocalAppData%\Microsoft\Phone Tools\CoreCon. See the folder named “10.0”? Go ahead and delete it. The folder will now be completely empty, and if you fire up Visual Studio 2010 you will see we are defaulting to the Emulator again. In my opinion, this should have been left at Emulator: new WP7 developers will now get a build error when they first start a WP7 project and will not know why until they read the error list.
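
    The step above as a one-liner, if you prefer the command prompt (standard cmd, but close Visual Studio first):

        rem Delete the cached target profile; it is regenerated with the
        rem Emulator as the default deployment target.
        rmdir /s /q "%LocalAppData%\Microsoft\Phone Tools\CoreCon\10.0"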

    Read the article

  • Block-level deduplicating filesystem

    - by James Haigh
    I'm looking for a deduplicating copy-on-write filesystem solution for general user data such as /home and backups of it. It should use online/inline/synchronous deduplication at the block level using secure hashing (for a negligible chance of collisions) such as SHA256 or TTH, so that duplicate blocks need not even touch the disk. The idea is that I should be able to just copy /home/<user> to an external HDD with the same such filesystem to do a backup. Simple. No messing around with incremental backups, where corruption of any of the snapshots will nearly always break all later snapshots, and no need to use a specific tool to delete or 'checkout' a snapshot. Everything should simply be done from the file browser without worry. Can you imagine how easy this would be? I'd never have to think twice about backing up again!

    I don't mind a performance hit; reliability is the main concern. Although, with specific implementations of cp, mv and scp, and a file browser plugin, these operations would be very fast, especially when there is a lot of duplication, as they would only need to transfer the absent blocks. Accidentally using conventional copy tools that do not integrate with the FS would merely take longer, waste some bandwidth when copying remotely and waste some CPU, as the duplicate data would be re-read, re-transferred and re-hashed (although nothing would be re-written), but would absolutely not corrupt anything. (Some filesharing software may also be able to benefit by integrating with the FS.)

    So what's the best way of doing this? I've looked at some options:

    - lessfs - looks unmaintained. Any good?
    - Opendedup/SDFS (http://www.opendedup.org/) - Java? Could I use this on Android?! What does SDFS stand for?
    - Btrfs (https://en.wikipedia.org/wiki/Btrfs#Features) - some patches floating around on mailing list archives, but no real support.
    - ZFS (https://en.wikipedia.org/wiki/ZFS#Linux) - hopefully they'll one day relicense under a true Free/Opensource GPL-compatible licence.

    Also, 2 years ago I had a go at an attempt in Python using FUSE at the file level, to be used on top of a typical solid FS such as ext4, but I found FUSE for Python underdocumented and didn't manage to implement all of the system calls.
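
    To make the inline block-dedup idea concrete, a toy sketch (not any of the filesystems above; fixed-size blocks, everything in memory):

        import hashlib

        BLOCK_SIZE = 4096  # toy fixed-size blocks

        class ToyDedupStore:
            """Inline (synchronous) block-level dedup: each block is stored
            once, keyed by its SHA-256 digest, as described above."""

            def __init__(self):
                self.blocks = {}   # digest -> block bytes, stored once
                self.files = {}    # filename -> list of digests

            def write(self, name, data):
                digests = []
                for i in range(0, len(data), BLOCK_SIZE):
                    block = data[i:i + BLOCK_SIZE]
                    digest = hashlib.sha256(block).hexdigest()
                    # A duplicate block never touches the "disk" again.
                    self.blocks.setdefault(digest, block)
                    digests.append(digest)
                self.files[name] = digests

            def read(self, name):
                return b"".join(self.blocks[d] for d in self.files[name])

        store = ToyDedupStore()
        store.write("a", b"x" * 8192)
        store.write("b", b"x" * 8192)   # fully deduplicated against "a"
        assert store.read("b") == b"x" * 8192
        print(len(store.blocks))        # -> 1 unique block stored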

    Read the article

  • How to make a good portfolio for an IT student (who loves programming) like me?

    - by Viet
    I am currently a college student and going to apply to a university, probably next month. Unlike art students, who can easily put works such as models and designs into their portfolios, I am hitting a dead end trying to find a "creative" way to showcase my work as a programmer. The normal thing would be for a programmer to show a good project with source code and everything else. Well, that would be no problem with actual "good" projects, but all of my projects are crappy (can't help it; I am still a student and don't have much work experience) and I don't even know if they're worth showing. Nonetheless, I have learned a lot in only one year since I started programming. I am now familiar with Java, PHP, ActionScript 3, C#, Objective-C, and on my way to learning Ruby. I plan to build a Flash portfolio using ActionScript with Ruby as the backend to show what I have learnt. The problem is the idea: how do I show people that I learned a lot of useful things? Otherwise I hit the dead end and LOL just show what I have on GitHub (but I certainly never want that...)

    Read the article

  • Is it common to only pay developers for the time they said a project would take?

    - by BAM
    I work at a small startup (<10 people), and I was recently assigned (along with one other developer) to a relatively small project. The project involved moving an existing iOS app to Android. The client told us they had built the app for iOS in 300 man-hours. Not knowing at the time that this figure was completely false, we naively and optimistically assumed that if they could build the app from scratch in that amount of time, we could easily "port" it in a similar amount of time. Therefore, we drafted up a fixed-price contract based on 350 man-hours, with a 5 week deadline. (We are well aware now of how big of a mistake this was... Never let the client tell you how long it's going to take!) Anyway, by week 4 we had already surpassed our 350 hours, and we estimated that there were at least 2 more weeks left on the project. We were told to continue working, but that the company could not afford to pay out on overdue projects anymore. I thought this just meant "be more careful about estimates in the future". However a few weeks later, the company president informed us that we would not be getting paid for any time past 350 man-hours. We argued over the issue for almost an hour. He claimed, however, that this is standard practice for many organizations, and that I was unreasonable for making a big deal out of it. So is this really a common thing, or am I justified in being upset about it? Thanks in advance for any advice!

    Read the article

  • Git can no longer open emacs as its editor

    - by mwilliams
    I'm running Git version 1.7.3.2 that I built from source, zsh is my shell, and emacs is my editor. Recently I started seeing the following:

        /usr/local/Cellar/git/1.7.3.2/libexec/git-core/git-sh-setup: line 106: emacs: command not found
        Could not execute editor

    My zshrc looks like the following so I can use the Cocoa build and the console binary provided with it:

        EMACS_HOME="/Applications/Emacs.app/Contents/MacOS"
        function e() { PATH=$EMACS_HOME/bin:$PATH $EMACS_HOME/Emacs -nw $@ }
        function ec() { PATH=$EMACS_HOME/bin:$PATH emacsclient -t $@ }
        function es() { e --daemon=$1 && ec -s $1 }
        function el() { ps ax|grep Emacs }
        function ek() { $EMACS_HOME/bin/emacsclient -e '(kill-emacs)' -s $1 }
        function ecompile() {
            e -eval "(setq load-path (cons (expand-file-name \".\") load-path))" \
              -batch -f batch-byte-compile $@
        }
        alias emacs=e
        alias emacsclient=ec

    I also have export EDITOR="emacs" and have tried adding export GIT_EDITOR="emacs" (and swapping that out with "e"). But whatever I try, I can't get git to open emacs whenever I need to do a commit or an interactive rebase, etc.
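
    A hedged note on what's probably happening, with a sketch of the usual workaround: git execs the editor itself, outside the interactive shell, so zsh functions and aliases like the e and emacs above are invisible to it. Pointing git at a real executable avoids that:

        # Use the Emacs binary directly (path taken from the zshrc above):
        git config --global core.editor "/Applications/Emacs.app/Contents/MacOS/Emacs -nw"
        # ...or make sure the PATH git sees actually contains an `emacs` binary.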

    Read the article

  • Appropriate Network switch for small server cluster

    - by Chris Dutrow
    We need to build a small business server cluster for the purpose of crunching data. It will not host a web site that needs to be available 24/7, but it does need to support servers that host Redis, a Cassandra database cluster, and a Python web server. The operating system will most likely be CentOS 6.4. The servers in the cluster should be able to communicate very fast with each other, especially the Redis server; this will probably require the use of internal IP addresses. We will also need multi-data-center replication to synchronize the Cassandra cluster with the one we currently have hosted in the cloud. We've been looking into network switches and are unsure of the appropriate specifications. Does the switch need to be "managed" or can it be "unmanaged"? Does it need to support IPv6 or just IPv4? Do we need an enterprise-level Cisco switch, or can we go with something like a $200 D-Link managed (or unmanaged) small business switch? Thanks so much!

    Read the article

  • How should I architect a personal schedule manager that runs 24/7?

    - by Crawford Comeaux
    I've developed an ADHD management system for myself that attempts to change multiple habits at once. I know this is counter to conventional wisdom, but I've tried the conventional for years and am now trying it my way. (Just wanted to say that to keep it from distracting people from the actual question.) Anyway, I'd like to write something to run on a remote server that monitors me, helps me build/avoid certain habits, etc. What this amounts to is a system that:

    - runs 24/7
    - may have multiple independent tasks to run at once
    - may have tasks that require other tasks to run first
    - lets tasks be scheduled by specific time, recurrence (e.g. "run every 5 mins"), or interval (e.g. "run from 2pm to 3pm")

    My first naive attempt at this was just a single PHP script scheduled to run every minute by cron (the language was chosen in order to use a certain library, but that's no longer necessary). The logic behind when to run this or that portion of code got hairy pretty quickly. So my question is: how should I approach this from here? I'm not tied to any one language, though I'm partial to Python/JavaScript. Thoughts so far:

    - It could be done as a set of scripts, each including a scheduling mechanism, with one script per bit of logic... but the idea just feels wrong to me.
    - Building it as a daemon could be helpful, but I'm still unsure what to do about dozens of if-else statements for detecting the current time.
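
    For what it's worth, one hedged sketch of the daemon approach that replaces the time-detection if-else blocks with a data-driven job table (all names illustrative):

        import time
        from dataclasses import dataclass, field
        from typing import Callable, Optional, Tuple

        @dataclass
        class Job:
            # One scheduled task: runs every `interval` seconds, optionally
            # only inside a daily [start_hour, end_hour) window.
            name: str
            action: Callable[[], None]
            interval: float
            window: Optional[Tuple[int, int]] = None  # e.g. (14, 15) = 2pm-3pm
            next_run: float = field(default_factory=time.time)

        def due(job: Job, now: float) -> bool:
            hour = time.localtime(now).tm_hour
            in_window = job.window is None or job.window[0] <= hour < job.window[1]
            return in_window and now >= job.next_run

        def run_forever(jobs, tick=1.0):
            # The daemon loop: no per-task if-else, just the job table.
            while True:
                now = time.time()
                for job in jobs:
                    if due(job, now):
                        job.action()
                        job.next_run = now + job.interval
                time.sleep(tick)

        jobs = [
            Job("check-in", lambda: print("how's the habit going?"), interval=300),
            Job("afternoon", lambda: print("2pm block"), interval=600, window=(14, 15)),
        ]
        # run_forever(jobs)  # loops forever; uncomment to start the daemon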

    Read the article

  • can't get a good install: 11.10 server

    - by jack
    I apparently screwed up my partitioning trying to get LVM and RAID1 going. The machine is an Intel dual-core desktop with 2 GB of RAM and 2 SATA drives, one 250 GB and the other 500 GB. This is a build for my school in northeast Thailand. We have 20+ clients now, a website, and email. Our old server is dying fast and we are going to add another 12 stations next week. I really need some help here!

    1. The onboard gigabit Ethernet apparently uses the same driver as the Realtek 811c. I also installed a PCIe gigabit card, also an 811c. At several points eth0 has accessed the internet fine, but eth1 will not communicate.
    2. I saw a "fix" for this online, run as root: rmmod r8169. This immediately killed the working onboard card.
    3. I tried to re-install 11.10, figuring that would re-install r8169. However, I messed something up in my partitioning and can't get a clean boot now.
    4. So, after 12 or so re-installs and 2 days: I think I can get through it right if I can start over with clean drives, but I can't figure out how to empty them out, what with the soft-RAID and LVM partitions. It seems like I get it going well, and then trying to fix that one little problem I go backwards. Please help! Please send email. - thanks
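
    On the "empty them out" part, a hedged sketch of the usual cleanup (device and array names are examples; double-check with lsblk first, as this destroys data):

        sudo mdadm --stop /dev/md0                        # stop any assembled soft-RAID array
        sudo vgchange -an                                 # deactivate LVM volume groups
        sudo mdadm --zero-superblock /dev/sda1 /dev/sdb1  # clear RAID metadata from members
        sudo wipefs -a /dev/sda /dev/sdb                  # erase leftover FS/RAID/LVM signatures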

    Read the article

  • Error 1069 the service did not start due to a logon failure

    - by Si
    Our CruiseControl.NET service on Win2003 Server (a VMWare virtual machine) was recently changed from a service account to a user account to allow a new part of our build process to work. The new user has "Log on as a service" rights, verified by checking Local Security Settings - Local Policies - User Rights Assignment, and the user password is set to never expire. The problem I'm facing is that every time the service is restarted, I get the 1069 error described in this question's subject. I have to go into the properties of the service (Log On tab) and re-enter the password, even though it hasn't changed and the user already has the appropriate rights. Once I enter the password and apply the changes, a prompt appears telling me that the user has been granted log-on-as-a-service rights. The service will then start with no problems. Not a show stopper, but a pain nonetheless. Why isn't the password persisting with the service?
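
    A hedged stopgap sketch (service and account names hypothetical): the credentials can at least be re-applied scriptably before each restart, so nobody has to click through the Log On tab:

        rem Re-apply the logon account and password for the service.
        rem Note: sc requires the space after obj= and password=.
        sc config CCNetService obj= ".\builduser" password= "Secret123"
        sc start CCNetService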

    Read the article

  • VMWare tools on Ubuntu Server 10.10 kernel source problem

    - by Hamid Elaosta
    After installing VMware Tools and running the VMware config, the configuration needs my kernel headers to compile some modules. OK, so I'll give it them, but it just won't work. It asks for the path of the directory of C header files that match my running kernel. If I run uname -r I get 2.6.35-22-generic-pae. So I tell it the source path is /lib/modules/2.6.25-22-generic-pae/build/include and it returns: "The directory of kernel headers (version @@VMWARE@@ UTS_RELEASE) does not match your running kernel (version 2.6.35-22-generic-pae)." I'm confused; can anyone offer suggestions please? I installed the kernel source and headers myself using sudo apt-get install linux-headers-$(uname -r)
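
    One hedged observation worth checking: the path quoted above says 2.6.25 while uname -r reports 2.6.35, and the header directory must match the running kernel exactly. Building the path from uname -r rules out that kind of typo:

        sudo apt-get install linux-headers-$(uname -r)
        # The include directory the installer wants, spelled from uname -r:
        ls /lib/modules/$(uname -r)/build/include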

    Read the article

  • From co-op to full-time: help with salary negotiation [closed]

    - by Peter
    Hey, I'm a co-op student who worked at a particular medium-size printing company for 8 months. I had a good time; it was lax, sometimes insufficiently challenging, but nonetheless I learned a whole lot. I stuck with them for another 5 months (including this month) at the same rate I was paid then, doing testing work, tool development, taking care of emergencies when the lead developers were away, and other smaller projects, and now bigger projects and problem handling (bad printer output, etc.). I know their website (e-commerce) inside out, and I know their printing software inside out and have made many changes to both without a hitch. I have also done a lot of refactoring of the existing code base, and as far as I'm concerned, I believe I am the only one doing that sort of restructuring, even though there is constant talk about it. I guess the unit testing paid off and lets me see the value in modularity, if even a tad more. Nevertheless I have faith in my skill, and the restructuring I did turned out better than I had imagined.

    Now, the problem is that I finish school next month, so I asked for a full-time spot the month after. They have been expanding: they hired a new guy a few months after my co-op spot started, and just now they hired another to deal with the CRM application. The lead developer who wrote all of the software left 5 months ago, so it was up to all of us to learn what he had done over 4 years (including the DB and networking). So now I'm afraid that if I assert myself and ask for a salary similar to the other guys', which I believe I am certainly on par with, I would be seen as ungrateful. It's hard to flip a switch and say "hey, double my pay", although I'm working with their bread and butter (printers), writing new features, and refactoring the whole application for extensibility. I love it regardless of pay. I also feel maybe I'm replaceable, although nobody apart from the lead web dev knows the website better than I do (not by a long shot), and nobody knows the printer software/drivers better than I do. I just thought they would have brought up a raise earlier on, and now it feels like they don't value my work. I'm also tired of worrying about it. I think my question is: well, what do I do next?

    Read the article

  • How to Reap Anticipated ROI in Large-Scale Capital Projects

    - by Sylvie MacKenzie, PMP
    Only a small fraction of companies in asset-intensive industries reliably achieve expected ROI for major capital projects 90 percent of the time, according to a new industry study. In addition, 12 percent of companies see expected ROIs in less than half of their capital projects. The problem: no matter how sophisticated and far-reaching the planning processes are, many organizations struggle to manage risks or reap the expected value from major capital investments. The data is part of the larger survey of companies in oil and gas, mining and metals, chemicals, and utilities industries. The results appear in Prepare for the Unexpected: Investment Planning in Asset-Intensive Industries, a comprehensive new report sponsored by Oracle and developed by the Economist Intelligence Unit. Analysts say the shortcomings in large-scale, long-duration capital-investments projects often stem from immature capital-planning processes. The poor decisions that result can lead to significant financial losses and disappointing project benefits, which are particularly harmful to organizations during economic downturns. The report highlights three other important findings. Teaming the right data and people doesn’t guarantee that ROI goals will be achieved. Despite involving cross-functional teams and looking at all the pertinent data, executives are still failing to identify risks and deliver bottom-line results on capital projects. Effective processes are the missing link. Project-planning processes are weakest when it comes to risk management and predicting costs and ROI. Organizations participating in the study said they fail to achieve expected ROI because they regularly experience unexpected events that derail schedules and inflate budgets. But executives believe that using more-robust risk management and project planning strategies will help avoid delays, improve ROI, and more accurately predict the long-term cost of initiatives. Planning for unexpected events is a key to success. External factors, such as changing market conditions and evolving government policies are difficult to forecast precisely, so organizations need to build flexibility into project plans to make it easier to adapt to the changes. The report outlines a series of steps executives can take to address these shortcomings and improve their capital-planning processes. Read the full report or take the benchmarking survey and find out how your organization compares.

    Read the article

  • What are some good resources for creating a game engine in XNA?

    - by Glasser
    I'm currently a student game programmer working on an indie project. We have a team of eleven people (five programmers, four artists, and two audio designers) aboard, all working hard to help design this game. We've been meeting for months, and so far we have a pretty fleshed-out Game Design Document as well as a lot of audio/visual concept art. Our programmers are itching to make progress on our own end. Each person on our programming team is well versed in C++ and very familiar with C#. We have enough experience and skill to be confident that the game will be successful, and we're looking to build our own game engine in XNA, as it seems like it would be worth our time and effort in the end. The game itself will be a 2D beat 'em up to be released over Xbox Live and on the PC. Its play style will be similar to that of Castle Crashers or Scott Pilgrim vs The World. We want to design the game engine to let us better implement our assets into the game and to simplify the creation of design elements/mechanics. Currently, between our programmers, we have books such as "XNA 4.0" and "Game Coding Complete, Third Edition," but we'd still like more information on both XNA and (especially) building a game engine from scratch. What other good books, websites, or resources could we use to further map out and program our game engine?

    Read the article

  • Chipset GPU causes a massive slowdown

    - by zyboxenterprises
    My AMD Radeon HD 7700 recently broke (the fan stopped working and the GPU overheated), so I'm now running on the internal chipset graphics, which causes a massive slowdown of the whole PC. I've changed the graphics memory from 32MB (the minimum) to 256MB (the highest), and it hasn't made any difference whatsoever. I'm using Windows Aero; disabling it should have made a small difference, but it didn't, and the whole PC is still slow. I know it's not the rest of the build, because I built the PC myself and it was a lot faster with the AMD Radeon HD 7700 in it, which is why I believe the internal chipset graphics are causing the problem. Is this behavior normal? I don't have the cash right now to go out and buy a new dedicated GPU. I'm using an ASRock N68C-GS FX motherboard with an AMD FX 4100 (overclocked to 4.3GHz) and 4GB RAM. The overclock was an attempt to resolve this issue; it isn't related to the slowdown the integrated graphics are causing.

    Read the article

  • Today's Links (6/27/2011)

    - by Bob Rhubart
    - 2011 Entrepreneurs of the Year, Northern California Region - Drake Martinet reports on the new batch of entrepreneurs joining the ranks of Oracle CEO Larry Ellison, Yahoo CEO Carol Bartz and eBay co-founder Pierre Omidyar as the Northern California Region winners of Ernst & Young's Entrepreneurs of the Year awards.
    - Technical Article: Caching Strategies for Oracle Service Bus 11g - William Markito Oliveira illustrates how the right caching strategy can make a big difference in application performance.
    - Kscope 11 - Day 1 and 2 - Oracle ACE Director Markus Eisele checks in from Long Beach.
    - Kaleidoscope 2011: Sunday’s Symposium - And so does Oracle ACE Director Marco Gralike.
    - Yet another GlassFish 3.1.1 promoted build | The Aquarium - “This version was carefully designed to be highly compatible with the previous 3.x versions,” says Alexis, “thus leaving you with little reason not to upgrade as soon as it comes out this summer.”
    - Using NoSQL database in your Java EE 6 Applications on GlassFish - MongoDB for now! - “The NoSQL databases are not intended to be a replacement for the mainstream RDBMS,” says Arun Gupta.
    - I have a performance problem | Alan Hargreaves - Good (and entertaining) advice from an Australian Solaris and Network Domain TSC* Principal Field Technologist.

    Read the article

  • Learning curve for web development

    - by refro
    At the moment our team faces a huge challenge: we're being asked to deliver a new GUI for an embedded controller. The deadline is very tight, set for April 2013. Our team is very diverse: some people are at the level of procedural programming (mostly C), others (including myself) have also mastered object-oriented programming (C++, C#). We built a prototype with Android; although it has its quirks, it is mostly just OO. For the future there is a wish to support multiple platforms (Windows, Android, iOS). In my opinion an HTML5 app with a native app shell is the way to go. When gathering more information on the frameworks to use, etc., it became obvious to me that a paradigm shift is needed. None of us have a web background, so we would need to learn from the ground up. The shift from procedural to OO took us about 6 months to become productive (and some of the early subsystems were rewritten because they were a total mess). Can we expect the learning curve to be similar? Can this be pulled off with a web app? (My feeling says it will already be hard to pull off as a native app, which is at the edge of our comfort zone.)

    Read the article

  • Windows file server access control by device

    - by Ori Shavit
    I'm trying to build a system where access to certain resources (file shares) on Windows Server is limited not only by the username (in an Active Directory domain) but also by the client machine. So far, I haven't found a good way to do this; adding the computer account to the DACL is apparently not the way. Windows Server 2012 supports this with Dynamic Access Control, but that method seems to require all clients to be Windows 8, with no way to use it with Windows 7 clients. Is there a supported way to do this (or, alternatively, to add support for device authorization with Windows 7)?

    Read the article

  • Is there a usage count for packages or programs?

    - by math
    Motivation: I want to remove applications I do not use, to speed up package processing tasks like dist-upgrades and regular updates, but also to save disk space, among other reasons. I know this is a complex topic, so first I will ask my question, and second I will give some answers I have already found.

    Question: How do I find out which packages I have never used at all? For example, I always use VLC, so I could remove the totem package (which I could have used some day, yes). Of course package dependencies could force me to have programs installed which I will never use.

    Notes:

    - Find the packages which consume much space via Synaptic: select "Status" in the lower left, select "Installed" in the upper left, sort the column on "size" in the upper right. Then you can decide which big packages you really need.
    - Use aptitude autoremove.
    - Use Ubuntu Tweak's Janitor for removing old kernel packages, old configs, apt-cache entries, etc.
    - Manually search for applications for a given task that you usually solve with your standard app, e.g. movie player, music player, office program, browser, etc. (BTW: this is what I want help with in my question.)
    - When removing packages I always favour "apt-get purge" over "aptitude remove --purge", as aptitude will often also remove essential packages due to package dependencies. E.g. when removing "evolution" (as I use Thunderbird), aptitude wants to also remove "ubuntu-desktop" and 756 other packages, while apt-get just removes evolution and its helper packages like evolution-common.
    - The Ubuntu lens gives me the most recently used applications, which are candidates for keeping :)
    - Employ deborphan, as I read in this related answer: How do I clean up my harddrive?
    - I should certainly keep essential packages: Keep only essential packages.
    - This question is pretty much a duplicate of "How to see what installed packages I have never used for cleaning purposes", but that covers only a few aspects. One answer there suggests a program called unusedpkg, but the link seems down.
    - There is also a program called Kleen (http://code.google.com/p/kleen/), but it won't compile on 11.10. I hacked it to compile, but the results are unusable: for example the g++ package was marked as not used for 203, even though I had used it seconds ago to compile Kleen itself ;) So don't use this tool.
    - On http://wiki.debian.org/DebianPackageInformation I read that the package popularity-contest will produce log files with usage statistics. Unfortunately I didn't enable the popularity contest, so I can't find this log file.
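
    As a command-line counterpart to the Synaptic size-sorting note above, a small sketch using standard dpkg tooling:

        # List installed packages by installed size (KiB), largest first.
        dpkg-query -W --showformat='${Installed-Size}\t${Package}\n' | sort -rn | head -20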

    Read the article

  • How can I avoid heroku stopping my dyno?

    - by iwein
    I build MVPs for clients regularly. Often I deploy on Heroku so they can see whether the product works and demo it to prospects and investors. Right now I have an application deployed on Heroku, and it works like a charm, if not for one little thing: the app takes about 30 seconds to start up, and Heroku has the annoying habit of killing dynos if they don't get traffic. My client is using the application for demo purposes now, so the load is extremely low and intermittent. I'm looking for a solution that is preferably: cost effective; applicable to multiple apps simultaneously. What is the best way to avoid having the first request take 30 seconds?
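
    One common workaround sketch (hedged; the URL is hypothetical, and periodic pinging works against the point of idling, so check it fits your plan): have any always-on box request the app every few minutes so the dyno never idles out:

        # crontab entry: ping the app every 5 minutes to keep the dyno warm.
        */5 * * * * curl -fsS https://your-app.herokuapp.com/ > /dev/null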

    Read the article

  • Is Scala ready for prime time?

    - by jayraynet
    Now that I've done a few trivial things with Scala (which I love for "hello world" and contrived applications!) I am left wondering, partly about the maturity of the tools to support development, and partly about general applicability. Are the toolsets ready? Is Scala appropriate for use on enterprise/business applications? Would "you" use it on a non-trivial project? Some of my (possibly unfounded) concerns would be:

    - Are the IDE and toolsets as rich as what we have to develop .NET and Java applications (Eclipse for Scala seems limited compared to Eclipse for Java)?
    - Are the build/CI/testing toolsets able to effectively deal with Scala?
    - How maintainable is the concise code that can be (encouraged?) written in the language?
    - Is it possible to find developers with Scala experience?
    - Is there enough critical mass to get help through on-line references and books that are more than an "intro" to the language?

    So bottom line: is the ecosystem mature enough to use now, or better off waiting to see how it evolves? EDIT: let's say "non-trivial" is a multi-year, multi-release, 10-20 developer project.

    Read the article

  • How do I elevate privileges when running appcmd from a nant task?

    - by Rune
    We are using a Windows 7 box as a build server. As part of our continuous integration process, I would like to stop and start an IIS 7 website. I have tried doing this from the command line using appcmd: appcmd start site "my website". However, this only works if I start the console window by choosing "Run as Administrator", so it won't work out of the box from NAnt etc. How do I script appcmd to run with elevated privileges (or am I going about this the wrong way)? Thank you.
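
    A hedged sketch of the NAnt side (the exec task is standard NAnt; the site name comes from the question). Note that an unattended build can't answer a UAC prompt, so the usual approach is to run the build agent's service under an account whose token isn't filtered by UAC (e.g. LocalSystem, or an administrator account with UAC disabled on the box):

        <!-- Start the site via appcmd; requires the agent itself to run elevated. -->
        <exec program="C:\Windows\System32\inetsrv\appcmd.exe">
            <arg value="start" />
            <arg value="site" />
            <arg value="my website" />
        </exec>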

    Read the article

  • Eclipse: Organising Files

    - by someguy
    I want to import a project that I'm planning to build upon. The problem is that it is very messy, with source files, class files and libraries all under one directory. How would I organise these files using Eclipse? I know you can change the source folder and the output folder, but when I change the source folder, the files that I want inside it do not physically move to that folder. The output folder is fine, though. I would also like a separate folder for libraries, but I'm not sure how to go about that. Here's how I would like it:

    - src: source files
    - bin: binary (class) files
    - lib: external libraries
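
    For reference, a hedged sketch of the .classpath file Eclipse writes for that layout (the jar name is hypothetical). Changing the build path only updates this file; moving the files themselves is a separate drag-and-drop or refactor in the Package Explorer:

        <?xml version="1.0" encoding="UTF-8"?>
        <classpath>
            <classpathentry kind="src" path="src"/>
            <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
            <classpathentry kind="lib" path="lib/some-library.jar"/>
            <classpathentry kind="output" path="bin"/>
        </classpath>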

    Read the article
