Search Results

Search found 1098 results on 44 pages for 'continuous'.

Page 28/44 | < Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • Scrolling a WriteableBitmap

    - by Skoder
    I need to simulate my background scrolling but I want to avoid moving my actual image control. Instead, I'd like to use a WriteableBitmap and use a blitting method. What would be the way to simulate an image scrolling upwards? I've tried various things but I can't seem to get my head around the logic: //X pos, Y pos, width, height Rect src = new Rect(0, scrollSpeed , 480, height); Rect dest = new Rect(0, 700 - scrollSpeed , 480, height); //destination rect, source WriteableBitmap, source Rect, blend mode wb.Blit(destRect, wbSource, srcRect, BlendMode.None); scrollSpeed += 5; if (scrollSpeed > 700) scrollSpeed = 0; If height is 10, the image is quite fuzzy, and more so if the height is 1. If the height is taller, the image is clearer, but it only seems to do a one-to-one copy. How can I 'scroll' the image so that it looks like it's moving up in a continuous loop? (The height of the screen is 700.)
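
    A minimal sketch of the wrap-around idea, written here in Python with a NumPy array standing in for the WriteableBitmap (the array shape and scroll speed are made-up values, not the Silverlight API): each frame copies the rows below the current offset to the top of the destination and the remaining rows to the bottom, so the image appears to move up in a continuous loop. In Blit terms this corresponds to two copies per frame rather than one.

      import numpy as np

      def scroll_up(src, offset):
          # Return a copy of src shifted up by offset rows, wrapping around.
          # src: H x W x 4 pixel buffer, offset: 0 <= offset < H.
          h = src.shape[0]
          dest = np.empty_like(src)
          dest[:h - offset] = src[offset:]   # rows below the offset move to the top
          dest[h - offset:] = src[:offset]   # rows above the offset wrap to the bottom
          return dest

      # Usage: advance the offset each frame and wrap it at the screen height (700).
      height, width = 700, 480
      frame = np.zeros((height, width, 4), dtype=np.uint8)
      offset, speed = 0, 5
      for _ in range(3):                     # three frames of the loop
          view = scroll_up(frame, offset)
          offset = (offset + speed) % height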

    Read the article

  • Undeploy multiple SOA composites with WLST or ANT by Danilo Schmiedel

    - by JuergenKress
    As part of our current project the Build Management team asked for a solution to undeploy multiple composites at one time. Of course you have the “Undeploy All from This Partition” menu option in Enterprise Manager, but since we have a lot of deployments every day the guys wanted to have a script solution. It is even more important for the nightly deployments on our continuous integration environment – strange, we couldn’t find anybody who wants to do the undeployment via Enterprise Manager manually every night ;-) However, with WLST or ANT the SOA Suite comes with two options to undeploy composites via script. In this article I’d like to explain both approaches. Undeployment with WLST: You can test the steps below on Oracle's Pre-built Virtual Machine for SOA Suite and BPM Suite 11g. Change to the WLST directory under MIDDLEWARE_HOME/Oracle_SOA1/common/bin. cd /oracle/fmwhome/Oracle_SOA1/common/bin/ Open WLST ./wlst.sh Connect to the SOA server Read the full article by Danilo Schmiedel SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: undeploy soa,Danilo Schmiedel,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress
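
    For reference, a hedged sketch of what the WLST portion might look like (WLST scripts are Jython): the composite names, revisions, credentials and URLs below are placeholders, and the sca_undeployComposite command and its exact signature should be verified against your SOA Suite release's WLST reference before use.

      # undeploy_composites.py -- run with: ./wlst.sh undeploy_composites.py
      # Illustrative only; all names, credentials and URLs are made up.
      connect('weblogic', 'welcome1', 't3://localhost:7001')

      composites = [
          ('HelloWorldComposite', '1.0'),
          ('OrderProcessingComposite', '2.1'),
      ]

      for name, revision in composites:
          # sca_undeployComposite comes from the SOA Suite WLST extensions;
          # check the signature for your version before relying on it.
          sca_undeployComposite('http://localhost:8001', name, revision)

      disconnect()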

    Read the article

  • Considering Embedding a Database? Choose MySQL!

    - by Bertrand Matthelié
    The M of the LAMP stack and the #1 database for Web-based applications, MySQL is also an extremely popular choice as an embedded database. Access our Resource Kit to discover the top reasons why: 3,000 ISVs and OEMs rely on MySQL as their embedded database; 8 of the top 10 software vendors and hundreds of startups selected MySQL to power their cloud, on-premise and appliance-based offerings; leading mobile and SaaS providers ensure continuous service availability and scalability with lower cost and risk using MySQL Cluster. Learn how you can reduce costs and accelerate time to market while increasing performance and reliability. Access white papers, webinars, case studies and other resources in our Resource Kit.

    Read the article

  • Why isn't DSM for unstructured memory done today?

    - by sinned
    Ages ago, Dijkstra invented IPC through mutexes, which then somehow led to shared memory (SHM) in Multics (which afaik had the necessary mmap first). Then computer networks came up and DSM (distributed SHM) was invented for IPC between computers. So DSM is basically a non-prestructured memory region (like a SHM) that magically gets synchronized between computers without the application programmer taking action. Implementations include TreadMarks (unofficially dead now) and CRL. But then someone thought this is not the right way to do it and invented Linda & tuplespaces. Current implementations include JavaSpaces and GigaSpaces. Here, you have to structure your data into tuples. Other ways to achieve similar effects may be the use of a relational database or a key-value store like RIAK. Although someone might argue, I don't consider them as DSM, since there is no coherent memory region where you can put data structures in as you like; you have to structure your data, which can be hard if it is continuous, and administration like locking cannot be done for hard-coded parts (=tuples, ...). Why is there no DSM implementation today, or am I just unable to find one?
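
    To make the contrast concrete, here is a toy in-process sketch (plain Python, purely illustrative; there is no real distribution or coherence protocol behind it) of the two programming models: with DSM you write into what looks like ordinary memory, while a tuple space forces you to break the data into tuples and retrieve them by pattern matching.

      # DSM-style: the region behaves like ordinary memory you index into.
      shared = bytearray(1024)              # imagine this kept coherent across nodes
      shared[0:4] = (42).to_bytes(4, 'little')

      # Tuple-space-style: data must be structured as tuples and matched to get it back.
      class ToyTupleSpace:
          def __init__(self):
              self._tuples = []

          def put(self, tup):
              self._tuples.append(tup)

          def take(self, pattern):
              # Remove and return the first tuple matching pattern (None = wildcard).
              for i, tup in enumerate(self._tuples):
                  if len(tup) == len(pattern) and all(
                          p is None or p == v for p, v in zip(pattern, tup)):
                      return self._tuples.pop(i)
              return None

      space = ToyTupleSpace()
      space.put(('temperature', 'room-12', 21.5))
      value = space.take(('temperature', 'room-12', None))   # ('temperature', 'room-12', 21.5)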

    Read the article

  • Topeka Dot Net User Group (DNUG) Meeting – April 6, 2010

    - by Robz / Fervent Coder
    Topeka DNUG is free for anyone to attend! Mark your calendars now! SPEAKER: Troy Tuttle is a self-described pragmatic agilist and Kanban practitioner, with more than a decade of experience in delivering software in the finance and health industries and as a consultant. He advocates that teams improve their performance through the pursuit of better practices like continuous integration and automated testing. Troy is the founder of the Kansas City Limited WIP Society and is a speaker at local area groups on team-related topics. He currently works as a Project Lead Consultant with AdventureTech Group of Kansas City, KS. TOPIC: Why Kanban? Kanban is receiving a large amount of attention recently. What does it offer compared to other approaches? Answering that question may require you to hit the “reset” button on previously held biases and assumptions. Kanban blends Lean thought with ideas from first-generation agile methodologies. To get started with Kanban, we will examine what steps are necessary to establish a transparent, work-limited, pull system. We will highlight the perils of allowing too much work-in-progress and how it affects development performance. Once established, Kanban teams need only a few metrics and tools to monitor their performance and improvement. WHERE: Federal Home Loan Bank Topeka on the Security Benefit Campus – Directions? WHEN: 11:30 AM - 1:00 PM on April 6th, 2010 REGISTER: http://topekadotnet.wufoo.com/forms/topeka-dnug-meeting-attendance/ ADDITIONAL INFO: As always, please sign in and out of FHLBank to help them with their accountability. Please park in the visitors section at the front of the building when you arrive. If there are no spots in the visitors section, you may park in the overflow lot at the far east end of the facility. Lunch will be provided and we will have some great door prizes!

    Read the article

  • How should I generate and store the boundaries of a cave?

    - by Bob Roberts
    I am making a small cave copter game (seriously, where did this type of game come from, anyway?) and I am trying to figure out how to make and store the procedurally generated walls. I am thinking about creating the walls by randomly picking two points away from the center of the screen. They will be no closer than the height of the helicopter and no further than the edge of the screen, weighted to prefer going in the same direction as the prior point so I end up with stalactites and stalagmites and not just noise, at set intervals of distance. To store, perhaps parallel arrays/lists, one for the distance from center to top of screen and one for the distance from center to bottom. Am I way off base with my thinking? I just want the cave to be varied and challenging; I just have never worked with generating data like this. Edit: Whoa, I just realized that my idea would lead to a player being able to stay in the middle of the screen and win. That isn't right at all. So the very basis of how I was going to generate is wrong. Edit 2: I also realized I left out a very crucial point. Part of the mechanics of the game will let the player go backwards, therefore the data structure should be continuous.
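
    One hedged way to sketch the idea (plain Python; the screen height, minimum gap and random ranges are all made-up values) is to random-walk the top and bottom walls together, biased toward continuing the previous direction and clamped so the gap never gets narrower than the helicopter. Keeping every generated column in two parallel lists also covers the second edit: nothing is thrown away, so the player can fly backwards over cave that already exists.

      import random

      SCREEN_H = 700
      COPTER_H = 40          # minimum vertical gap (assumed value)

      def next_column(prev_top, prev_bottom, trend):
          # Random-walk one column of the cave, biased to continue the previous trend.
          drift = random.randint(-20, 20) + trend * 10
          top = min(max(prev_top + drift, 0), SCREEN_H - COPTER_H - 1)
          bottom = min(max(prev_bottom + drift + random.randint(-10, 10), top + COPTER_H),
                       SCREEN_H)
          new_trend = 1 if top > prev_top else -1
          return top, bottom, new_trend

      # Parallel lists indexed by column; kept for the whole run so backing up works.
      tops, bottoms = [200], [500]
      trend = 1
      for _ in range(100):                   # generate 100 columns of cave
          t, b, trend = next_column(tops[-1], bottoms[-1], trend)
          tops.append(t)
          bottoms.append(b)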

    Read the article

  • The latest Oracle Social Network News from Open World

    - by me
    Highlights: Oracle and partners showcase the latest development around Oracle Social Network (OSN); integration of the OSN Social Fabric into business applications like Finance, HCM and Customer Experience; partners like Cisco WebEx, Avaya, Weemo, Lingotek and HarQen showcase OSN integration; Oracle shares details around its internal OSN deployment. Please visit us at 2413 Moscone South Exhibition Hall and experience a live OSN demo. Social Fabric: Oracle Social Network socializes your applications, processes and content within your enterprise. Here are some examples of what is shown at Oracle Open World. Socialize the Finance department: enable Finance departments to collaborate instantly during quarter close with real-time information access; enable finance professionals in the back office to easily interact with the rest of the company; provide privacy when discussing sensitive financial results within Conversations. Socialize Human Capital Management (HCM): promote attainable performance goals that achieve the business objectives of the enterprise; capture expertise across the network; a continuous feedback loop is provided that results in productivity and innovation improvements tied to higher employee engagement. OSN and Customer Experience: find the person with the best skills to assist with the issue; real-time collaboration in the context of the issue; track an agent's collaboration contributions; identify and contribute relevant knowledge back to the system. Cisco/WebEx integration: the web conferencing tool of your choice can be integrated with OSN. In the example below you can see the integration of the Cisco WebEx solution into OSN, and sure - this works on mobile devices as well. OSN @ Oracle: Oracle has deployed OSN as part of the internal Fusion CRM application rollout. After just 4 months we can see impressive usage patterns.

    Read the article

  • Keeping files that are often changed in sync between desktop and laptop

    - by N.N.
    I'm looking for a way to keep a desktop and a laptop in sync. What I want to keep in sync are some folders, mainly ~/Documents, that are changed often when working on them. If it matters, I can connect to my desktop from anywhere via a URL, but my laptop is harder to access since it might be behind NAT and such. I have been looking at Ubuntu One but it seems to not go well with working on documents written in LaTeX. If I work on a .tex file in the Ubuntu One directory and compile it (with pdflatex) every now and then (as often as every 10 sec when working), it will create several new files, including a pdf, which are uploaded to Ubuntu One, and this seems stupid since it will create continuous uploads when working on .tex files. I also usually keep .tex documents version controlled by git, and then every commit (which can also happen frequently) will cause uploads (because of changes in ./.git), so that it happens continuously when working. Another example is editing images that are saved often. What I think would be best is for sync to happen every ten minutes or at the end of every working session (but there might be some other way to handle this?).
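
    One possible direction, sketched only as an illustration (this is not Ubuntu One; the paths, host name and exclude patterns are made-up assumptions): a small Python wrapper around rsync that pushes the Documents folder every ten minutes while skipping the LaTeX build products and the .git metadata, so frequent compiles and commits don't trigger uploads of generated files.

      import subprocess
      import time

      # Hypothetical paths and host; adjust to your own setup.
      SRC = '/home/me/Documents/'
      DEST = 'me@my-desktop.example.org:Documents/'
      EXCLUDES = ['*.aux', '*.log', '*.out', '*.toc', '*.pdf', '.git/']

      def sync_once():
          cmd = ['rsync', '-av', '--delete']
          for pattern in EXCLUDES:
              cmd += ['--exclude', pattern]
          cmd += [SRC, DEST]
          subprocess.run(cmd, check=True)       # raises if rsync fails

      if __name__ == '__main__':
          while True:                           # roughly every ten minutes
              sync_once()
              time.sleep(600)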

    Read the article

  • Tree Surgeon 2.0 - The future on the T4 Express

    - by Malcolm Anderson
    If you've never been a fan of TreeSurgeon (http://treesurgeon.codeplex.com/) then skip this post. However, if you have been, there have been some interesting developments over the last couple of years. The biggest one is T4. Recently Bill Simser wrote a detailed post about the potential future of Tree Surgeon, called "Tree Surgeon - Alive and Kicking or Dead and Buried". He raised the question: Times have changed. Since that last release in 2008 so much has changed for .NET developers. The question is, today is the project still viable? Do we still need a tool to generate a project tree given that we have things like scaffolding systems, NuGet, and T4 templates? Or should we just give the project its rightful and respectful send-off, as it's had a good life and has outlived its usefulness? For myself, the answer is: keep it. I've spent the last couple of years doing agile engineering coaching and architecture, and from my experience, I can tell you, there are a lot of shops out there that would benefit from having Tree Surgeon as a viable product. Many would benefit simply from having the software engineering information that is embedded in the Tree Surgeon site be floating around their conversation. Little things like: keep all of the software needed to run the build, along with the build itself, in the version control system; have your developers and the build system using the same build; have a one-touch build; separate your code from your interface; put unit tests in first, not last. I've seen companies with great developers suffer from the problems that naturally come from builds taking 3 and 4 hours to run. It takes work to get that build down to 10 minutes, but the benefits are always worth it. Tree Surgeon gives you a leg up, by starting you off with a project that you can drop into your Continuous Integration system, right out of the box. Well, it used to be right out of the box. Today, you have to play with the project to make it work for you, but even with the issues (it hasn't been updated since 2008) it still gives you a framework, with logical separations, that you can build from. If you have used Tree Surgeon in the past, take a few minutes and drop a comment about what difference it made in your development style, and what you are doing differently today because of it.

    Read the article

  • What is the basic loadout for an open source web developer?

    - by DeveloperDon
    Thus far, I have mainly been an embedded developer, but I am interested in having the flexibility to do mobile and web development as well. I think my tools should include the following, but probably a lot more. LAMP stack. Java IDEs like Eclipse and IntelliJ. JS frameworks like Dojo, Node.js, AngularJS (is it better to mix or commit to one?). Cloud solutions like EC2 and Azure (again, OK to mix or better to commit to one?). Google APIs. Continuous integration server. Source control tools, with Git for new work and SVN, CVS, and others for imports. FTP server. Unit test runners. Bug trackers. OOAD modeling tools or plug-ins? Graphic design tools? Hosting services. XML / JSON / other markup? Content management, SEO? I am also interested to know if there are tools where it might be better to mix, match, or support all available (maybe for source control) and others where the full focus should be on one (maybe Java vs. C# or Windows vs. Linux vs. MacOS). Perhaps some of these questions need the context of whether the projects will be greenfield (just pick a favorite) or maintenance (no choice, each project continues its legacy, sometimes with poor tools).

    Read the article

  • Get Unlimited Oracle Training for Your Team for an Entire Year

    - by KJones
    Written by Amit Kumar, Senior Director, Oracle University Digital Training. Oracle University has been in the training business for a long time (over 30 years!) and has worked with many Oracle customers over the years. We understand that getting your teams trained on the latest Oracle technologies is not always easy. Training becomes more challenging when you have remote teams, team members with different skill levels, or experienced team members who just need the content that covers the latest product features. It can also be challenging to predict your training needs for the year, making it all the more difficult to provide training in a timely manner. Oracle Unlimited Learning Subscription is the Answer. We’ve listened to our customers and we’ve worked hard to put together a flexible training solution that enables team members to get the training that addresses their individual needs, right when they need it. This new Oracle Unlimited Learning Subscription provides teams with one year of unlimited access to: •    All of Oracle's Training On Demand video courses for in-depth product training •    All of Oracle's Learning Streams, which provide fresh product content from Oracle experts for continuous learning •    Live connections with Oracle's top instructors  •    Dedicated labs for hands-on practice. The Oracle Unlimited Learning Subscription is 100% digital, giving you maximum flexibility. It simplifies how you plan and budget for your team training. Learning Oracle and staying connected with Oracle really has never been easier. Take a tour and contact your Oracle University representative today to learn more and request a demo.

    Read the article

  • Is committing/checking in code every day a good practice?

    - by ArtB
    I've been reading Martin Fowler's note on Continuous Integration and he lists as a must "Everyone Commits To the Mainline Every Day". I do not like to commit code unless the section I'm working on is complete, and in practice I commit my code every three days: one day to investigate/reproduce the task and make some preliminary changes, a second day to complete the changes, and a third day to write the tests and clean it up^ for submission. I would not feel comfortable submitting the code sooner. Now, I pull changes from the repository and integrate them locally usually twice a day, but I do not commit that often unless I can carve out a smaller piece of work. Question: is committing every day such a good practice that I should change my workflow to accommodate it, or is it not that advisable? Edit: I guess I should have clarified that I meant "commit" in the CVS meaning of it (aka "push") since that is likely what Fowler would have meant in 2006 when he wrote this. ^ The order is more arbitrary and depends on the task; my point was to illustrate the time span and activities, not the exact sequence.

    Read the article

  • The 2012 JAX Innovation Awards

    - by Janice J. Heiss
    A new article, now up on otn/java, titled “The 2012 JAX Innovation Awards”, reports on important Java developments celebrated by the Awards, which were announced in July of 2012. The Awards, given by S&S Media Group, aim to "reward those technologies, companies, organizations and individuals that make outstanding contributions to Java." The Awards fall into three categories: Most Innovative Java Technology, Most Innovative Java Company, and Top Java Ambassador. In addition, a finalist who did not win an award receives a Special Jury prize, "in acknowledgement of their unique contribution and positive impact on the Java ecosystem." The winners were: JetBrains for Most Innovative Java Company; Adam Bien as Top Java Ambassador; Restructure 101, created by Headway Software, as Most Innovative Technology; and Charles Nutter, Special Jury award. Each winner received a $2,500 prize, and the five finalists in each category were invited to attend the JAX Conference in San Francisco, California. JetBrains Fellow Ann Oreshnikova listed her favorite JetBrains innovations: nullability annotations and nullability checker; CamelCase navigation and completion; Continuous Integration in grid (on multiple agents) in TeamCity; the IntelliJ Platform and its language support framework; the MPS language workbench; and the Kotlin programming language. When asked what currently excites him about Java, Adam Bien, winner of the Java Ambassador Award, expressed enthusiasm over the increasing interest of smaller companies and startups in Java EE. “This is a very good sign,” he said. “Only a few years ago J2EE was mostly used by larger companies -- now it becomes interesting even for one-person shows. Enterprise Java events are also extremely popular. On the Java SE side, I'm really excited about Project Nashorn.” Special Jury Prize winner Charles Nutter of Red Hat remarked that, “JRuby seems to have hit a tipping point this past year, moving from ‘just another Ruby implementation’ to ‘the best Ruby implementation for X,’ where X may be performance, scaling, big data, stability, reliability, security, and a number of other features important for today's applications.” Check out the complete article here.

    Read the article

  • How to store bitmaps in memory?

    - by Geotarget
    I'm working with general purpose image rendering, and high-performance image processing, and so I need to know how to store bitmaps in-memory. (24bpp/32bpp, compressed/raw, etc) I'm not working with 3D graphics or DirectX / OpenGL rendering and so I don't need to use graphics card compatible bitmap formats. My questions: What is the "usual" or "normal" way to store bitmaps in memory? (in C++ engines/projects?) How to store bitmaps for high-performance algorithms, such that read/write times are the fastest? (fixed array? with/without padding? 24-bpp or 32-bpp?) How to store bitmaps for applications handling a lot of bitmap data, to minimize memory usage? (JPEG? or a faster [de]compression algorithm?) Some possible methods: Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access, with all pixels allocated in one continuous memory chunk (could be 1-10 MB). Use a form of "sparse" data storage so each line of the bitmap is allocated separately, using more memory but requiring smaller contiguous memory segments. Store bitmaps in compressed form (PNG, JPG, GIF, etc) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 secs.
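
    As a rough illustration of the first option (packed pixels in one contiguous block addressed by simple index arithmetic), here is a small Python sketch; in a C++ engine the same layout would typically live in a std::vector<uint8_t> or a raw buffer, and whether 24-bpp or padded 32-bpp reads faster is something to benchmark on the target hardware rather than assume.

      import array

      class PackedBitmap32:
          # One contiguous 32-bpp buffer: 4 bytes per pixel, row-major, no row padding.
          BYTES_PER_PIXEL = 4

          def __init__(self, width, height):
              self.width = width
              self.height = height
              self.stride = width * self.BYTES_PER_PIXEL        # bytes per row
              self.data = array.array('B', bytes(self.stride * height))

          def _offset(self, x, y):
              return y * self.stride + x * self.BYTES_PER_PIXEL

          def set_pixel(self, x, y, r, g, b, a=255):
              i = self._offset(x, y)
              self.data[i:i + 4] = array.array('B', (r, g, b, a))

          def get_pixel(self, x, y):
              i = self._offset(x, y)
              return tuple(self.data[i:i + 4])

      bmp = PackedBitmap32(480, 700)
      bmp.set_pixel(10, 20, 255, 0, 0)
      assert bmp.get_pixel(10, 20) == (255, 0, 0, 255)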

    Read the article

  • Why should I use MSBuild instead of Visual Studio Solution files?

    - by Sid
    We're using TeamCity for continuous integration and it's building our releases via the solution file (.sln). I've used Makefiles in the past for various systems but never MSBuild (which I've heard is sorta like a Makefiles + XML mashup). I've seen many posts on how to use MSBuild directly instead of the solution files, but I don't see a very clear answer on why to do it. So, why should we bother migrating from solution files to an MSBuild 'makefile'? We do have a couple of releases that differ by a #define (featurized builds) but for the most part everything works. The bigger concern is that now we'd have to maintain two systems when adding projects/source code. UPDATE: Can folks shed light on the lifecycle and interplay of the following three components? The Visual Studio .sln file; the many project-level .csproj files (which I understand are "sub" MSBuild scripts); the custom MSBuild script. Is it safe to say that the .sln and .csproj are consumed/maintained as usual from within the Visual Studio IDE GUI, while the custom MSBuild script is hand-written and usually consumes the already existing individual .csproj files "as-is"? That's one way I can see to reduce overlap/duplication in maintenance... Would appreciate some light on this from other folks' operational experience.

    Read the article

  • Neural network input preprocessing

    - by TND
    It's clear that the effectiveness of a neural network depends strongly on the format you give it to work with. You want to preprocess it into the most convenient form you can algorithmically get to, so that the neural network doesn't have to account for that itself. I'm working on a little project that (surprise!) is going to be using neural networks. My future goal is to eventually use NEAT, which I'm really excited about. Anyway, one of my ideas involves moving entities in continuous 2D space, from a top-down perspective (this would be a really cool game AI). Of course, unless these guys are blind, they're going to be able to see the world around them. There are a lot of different ways this information could be fed into the network. One interesting but expensive way is to simply render a top-down "view" of things, with the entities as dots on the picture, and feed that in. I was hoping for something much simpler to use (at least at first), such as a list of the x (maybe 7 or so) nearest entities and their position in relative polar coordinates, orientation, health, etc., but I'm trying to think of the best way to do it. My first instinct was to order them by distance, which would inherently also train the neural network to consider those more "important". However, I was thinking - what if there are two entities that are nearly the same distance away? They could easily alternate indexes in that list, confusing the network. My question is, is there a better way of representing this? Essentially, the issue is the network needs a good way of keeping track of who's who, while being given (as input) relevant information about the list of entities it can see. Thanks!
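
    A hedged sketch of the fixed-size encoding described above (plain Python; the choice of k and of per-entity fields is an assumption, not a recommendation): take the k nearest entities, sort them by distance so slot order has a stable meaning, encode each as relative polar coordinates plus a couple of scalars, and pad with zeros when fewer than k are visible. The ambiguity the question worries about is visible here: two nearly equidistant entities can swap slots between frames.

      import math

      def encode_visible_entities(self_x, self_y, self_heading, entities, k=7):
          # entities: iterable of (x, y, heading, health) tuples (made-up fields).
          # Returns a flat list of length k * 4:
          # (distance, relative angle, relative heading, health) per slot.
          rows = []
          for x, y, heading, health in entities:
              dx, dy = x - self_x, y - self_y
              dist = math.hypot(dx, dy)
              rel_angle = (math.atan2(dy, dx) - self_heading) % (2 * math.pi)
              rel_heading = (heading - self_heading) % (2 * math.pi)
              rows.append((dist, rel_angle, rel_heading, health))

          rows.sort(key=lambda r: r[0])                       # nearest first
          rows = rows[:k]
          rows += [(0.0, 0.0, 0.0, 0.0)] * (k - len(rows))    # pad to fixed size

          return [value for row in rows for value in row]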

    Read the article

  • What do I need to learn to decide on rename/recompile source package names because of company rebranding?

    - by Roberto Linares
    My company is currently going through a rebranding process and the brand names have been used in the sources' package names, but these names are only visible to developers who maintain this code, so nobody from project management is really interested in changing them, especially considering that it would imply recompiling several old components. What factors do I need to consider when deciding on a change like that? I don't know if I should worry about legal issues or not and, if so, how to address this with project management. More background details: I have all the sources and dependencies, but since the company rebranding, other development areas have adopted some of the code that needs package name changing, so I cannot make the decision by myself without risking making everyone else's code crash against my core components, and I cannot change other areas' code without the permission of those areas' users. So yes, my concern is more political than technical. I am going to try to coordinate the involved IT areas to make the change anyway, since it seems to be the best approach. Unfortunately, in my company there's no continuous integration build server, so we build our code manually on demand, and to get something to production I have to justify the change (even just the package name change) to QA with a user requirement and some other bureaucratic documentation, so that's why I was hesitant about the change in the first place.

    Read the article

  • Recommended formats to store bitmaps in memory?

    - by Geotarget
    I'm working with general purpose image rendering, and high-performance image processing, and so I need to know how to store bitmaps in-memory. (24bpp/32bpp, compressed/raw, etc) I'm not working with 3D graphics or DirectX / OpenGL rendering and so I don't need to use graphics card compatible bitmap formats. My questions: What is the "usual" or "normal" way to store bitmaps in memory? (in C++ engines/projects?) How to store bitmaps for high-performance algorithms, such that read/write times are the fastest? (fixed array? with/without padding? 24-bpp or 32-bpp?) How to store bitmaps for applications handling a lot of bitmap data, to minimize memory usage? (JPEG? or a faster [de]compression algorithm?) Some possible methods: Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access, with all pixels allocated in one continuous memory chunk (could be 1-10 MB). Use a form of "sparse" data storage so each line of the bitmap is allocated separately, using more memory but requiring smaller contiguous memory segments. Store bitmaps in compressed form (PNG, JPG, GIF, etc) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 secs.

    Read the article

  • I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS?

    - by user2567
    I'm trying to understand the benefits of distributed version control systems (DVCS). I found Subversion Re-education and this article by Martin Fowler very useful. Mercurial and other DVCSs promote a new way of working on code with changesets and local commits. It prevents merge hell and other collaboration issues. We are not affected by this, as I practice continuous integration and working alone in a private branch is not an option, unless we are experimenting. We use a branch for every major version, in which we fix bugs merged from the trunk. Mercurial allows you to have lieutenants. I understand this can be useful for very large projects like Linux, but I don't see the value in small and highly collaborative teams (5 to 7 people). Mercurial is faster, takes less disk space, and a full local copy allows faster log & diff operations. I'm not concerned by this either, as I didn't notice speed or space problems with SVN, even with very large projects I'm working on. I'm seeking your personal experiences and/or opinions from former SVN geeks, especially regarding the changesets concept and the overall performance boost you measured. UPDATE (12th Jan): I'm now convinced that it's worth a try. UPDATE (12th Jun): I kissed Mercurial and I liked it. The taste of his cherry local commits. I kissed Mercurial just to try it. I hope my SVN Server don't mind it. It felt so wrong. It felt so right. Don't mean I'm in love tonight. FINAL UPDATE (29th Jul): I had the privilege to review Eric Sink's next book, called Version Control by Example. He finished convincing me. I'll go for Mercurial.

    Read the article

  • What sorts of tools make a Django Developer valuable? [closed]

    - by MrOodles
    I am a Django Consultant and I want to increase the value that I provide to clients. My first question was an epic failure according to the FAQ. So I'll try again before I delete it. What types of tools should a Django developer have in his tool belt to increase the value to the client? Would a collection of project templates be useful? Are there open-source project templates available that can be forked and altered? Is there a proper way to configure templates to include dependencies for certain types of projects? What about deployment scripts using tools like Puppet or Chef? Does it make a lot of sense to fork Django apps on GitHub and make contributions to open source projects there? Would clients perceive extra value in programmers that are contributing to open source projects? Are there industry best practices for implementing continuous integration in a Django project? I want the answer to be open-ended, as I'm at the beginning of my research. I am curious to know what sorts of tools other Django consultants use on a daily and per-project basis, and how they use them.

    Read the article

  • Installing packages into local directory?

    - by Gili
    I'd like to install software packages, similar to apt-get install <foo>, but: without sudo, and into a local directory. The purpose of this exercise is to isolate independent builds in my continuous integration server. I don't mind compiling from source, if that's what it takes, but obviously I'd prefer the simplest approach possible. I tried apt-get source --compile <foo> as mentioned here but I can't get it working for packages like autoconf. I get the following error: dpkg-checkbuilddeps: Unmet build dependencies: help2man I've got help2man compiled in a local directory, but I don't know how to inform apt-get of that. Any ideas? UPDATE: I found an answer that almost works at http://askubuntu.com/a/350/23678. The problem with chroot is that it requires sudo. The problem with apt-get source is that I don't know how to resolve dependencies. I must say, chroot looks very appealing. Is there an equivalent command that doesn't require sudo?

    Read the article

  • Downclock CPU reaching critical temperature

    - by Draga
    I have an HP tx1250 laptop. It has always had serious overheating problems, and although it usually runs fine, I'm now running a continuous test for my dissertation; this brings the CPU temp close to critical, and from time to time the computer shuts down after reaching it (checked the log). I used to have the same problem on Win XP, but I noticed Win Vista and 7 downclock the CPU when necessary to cool it down, so I was wondering if the same is possible on Ubuntu 12. The only program I've found that may do the job is computer temp ( http://computertemp.berlios.de/ ) but it doesn't seem to work under Ubuntu 12. The inside of the laptop is fairly clean, the thermal paste is quite recent, I'm keeping it lifted from the desk, and judging by the sound of the fan, that's running fine as well. The PC is now running between 78 and 91 degrees C, but about once a day it shuts down after reaching 95. I need the results of the test it's running pretty soon, so it's important that it runs non-stop. I've thought of setting the maximum clock of the CPU to slightly less than the maximum, but then these tests I'm running would take much more time. Thanks! PS: first and last HP laptop for me.

    Read the article

  • Advice and resources on collaborative environments

    - by Tjaart
    I need some advice on collaborative software environments. More specifically, I am looking for books and reference materials that can aid me in understanding team and code structures and the interactions thereof. In other words, books, blogs or white papers explaining different strategies for structuring teams that share common code but have distinct individual functions. To summarise my question, I would like to know what would be a good source of knowledge if I were to set up teams in an organisation that shared code but where each unit still remained autonomous. I have done some research on this subject and explored: code review tools, distributed VCS, continuous integration tools, unit testing automation. The tough part about implementing these tools is determining where a good place to start would be, which tools are low-hanging fruit, and which tools or methods provide higher success rates. If someone asks me for a code quality reference, I point them to Code Complete. I am looking for an equivalent guide on software team structures and tools to make this equation work better. I realise that this question is quite vague, but it arose as "we need to share code between teams without breaking each other's stuff and causing management headaches and reams of red tape". The answer is definitely not simple and requires changes on many levels, hence the question. If the question is too vague, please vote to close or delete. I would accept any good starting point as an answer.

    Read the article

  • Is it OK to create all primary partitions?

    - by james
    I have a 320GB hard disk. I only use either Ubuntu or Kubuntu (12.04 for now). I don't want to use Windows or any other dual-boot OS. And I need only 3 partitions on my hard disk: one for the OS and the remaining two for data storage. I don't want to create swap either. Now, can I create all primary partitions on the hard disk? Are there any disadvantages in doing so? If all the partitions are primary, I think I can easily resize partitions in the future. On second thought, I have the idea of using a separate partition for /home. Is it good practice? If I have to do this, I will create 4 partitions, all primary. In any case I don't want to create more than 4 partitions, and I know the limit will be 4. So is it safe to create all 3 or 4 primary partitions? Please suggest what the good practices are. (Previously I used Win XP and Win 7 on dual boot with 2 primary partitions and that bugged me somehow, I don't remember exactly. Since then I felt there should be only one primary partition on a hard disk.) EDIT 1: Now I will use four partitions in the sequence: /, /home, /for-data, /swap. I have another question: does a partition need continuous blocks on the disk? I mean, if I want to resize partitions later, can I add space from sda3 to sda1? Is it possible and is it safe to do?

    Read the article

  • Eclipse and NetBeans replacing embedded IDEs (part 2 and part 3)

    - by Geertjan
    After part 1, in Embedded Insights, the series Eclipse and NetBeans replacing embedded IDEs by principal analyst Robert Cravotta continues below. Many embedded tool developers are choosing to migrate their embedded development toolset to an open source IDE platform for a number of reasons. Maintaining an up-to-date IDE with the latest ideas, innovations, and features requires continuous effort from the tool development team. In contrast to maintaining a proprietary IDE, adopting an open source IDE platform enables the tool developers to leverage the ideas and effort of the community and take advantage of advances in IDE features much sooner and without incurring the full risk of experimenting with new features in their own toolsets. Both the Eclipse and NetBeans platforms deliver regular releases that enable tool developers to more easily take advantage of the newest features in the platform architecture. Read more of part 2 here, in an article published Thursday, May 17th, 2012. Both the NetBeans and Eclipse projects began as development environments and both evolved into platforms that support a wider array of software products. Both platforms have been actively supported and evolving open source projects that have competed and coexisted together for the past decade, and this has led to a level of parity between the two platforms. From the perspective of a tool developer, applications are built the same way on either platform – the difference is in the specific terminology and tools. Read more of part 3 here, in an article published Tuesday, June 12th, 2012. And, as a bonus in this blog entry, here's how to get started creating an IDE on the NetBeans Platform: http://netbeans.dzone.com/how-to-create-commercial-quality-ide

    Read the article
