Search Results

Search found 22884 results on 916 pages for 'team build'.


  • Mobile Web Applications – A guide for professional development

    - by JuergenKress
    (Tobias Bosch, Stefan Scheidt, Torsten Winterberg / Opitz Consulting Deutschland GmbH). There is real hype around mobile solutions. Smartphones and tablets are everywhere. Frontend architecture is changing quickly to adopt cross-browser technologies like HTML5 and extensive JavaScript-based development. In this book we introduce our software development process for building test-driven single-page JavaScript web applications, which will be the future alongside native apps. We start with a short introduction of our RYLC showcase (known from our SOA articles), give a very short introduction to JavaScript, then talk about jQuery Mobile, AngularJS, testing and backend communication, and we close by deploying our RYLC web app as a hybrid app using the PhoneGap (Cordova) framework. Don’t expect too much theory – it’s a practical guide explaining how the RYLC web app was built, to kickstart your own development. Currently only available in German as paperback and eBook. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community – please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • relationship between the model and the renderer

    - by acrilige
    I tried to build a simple graphics engine and ran into these problems: I have a list of models that I need to draw, and an object (renderer) that implements an IRenderer interface with the method DrawObject(Object* obj). The implementation of the renderer depends on the graphics library used (OpenGL/DirectX). 1st question: the model should know nothing about the renderer implementation, but in that case where can I hold (cache) information that depends on the renderer implementation? For example, if the model has this definition:

        class Model {
        public:
            Model();
            Vertex* GetVertices() const;
        private:
            Vertex* m_vertices;
        };

    what is the best way to cache, for example, the vertex buffer of this model for DX11? Hold it in the renderer object? 2nd question: what is the best way for the model to tell the renderer HOW it must be rendered (for example with texture, bump mapping, or maybe just in one color)? I thought it could be done with flags, like this: model->SetRenderOptions(RENDER_TEXTURE | RENDER_BUMPMAPPING | RENDER_LIGHTING); and then check each flag in the Renderer::DrawModel method. But it looks like that will become unwieldy as the number of options grows...
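    For what it's worth, one common answer to both questions, sketched below in JavaScript for brevity because the pattern itself is language-neutral (all names here are hypothetical stand-ins, not part of any real engine API): the renderer owns a cache of API-specific resources keyed by the model, so the model never learns about DX11/OpenGL, and render options move from bit flags into a material object that the model carries and the renderer interprets.

        // Hypothetical sketch: the renderer owns API-specific data, keyed by model.
        function Renderer() {
            this.bufferCache = []; // list of { model, buffer } pairs (the cache)
        }

        Renderer.prototype.getOrCreateBuffer = function (model) {
            for (var i = 0; i < this.bufferCache.length; i++) {
                if (this.bufferCache[i].model === model) return this.bufferCache[i].buffer;
            }
            var buffer = { vertices: model.getVertices() }; // stand-in for a DX11/GL upload
            this.bufferCache.push({ model: model, buffer: buffer });
            return buffer;
        };

        Renderer.prototype.drawModel = function (model) {
            var buffer = this.getOrCreateBuffer(model);
            // Instead of RENDER_* flags, the model carries a material description
            // that the renderer interprets; new options don't grow a flag check.
            console.log('material:', model.material);
            console.log('drawing', buffer.vertices.length, 'vertices');
        };

        var model = {
            material: { texture: 'brick.png', bumpMapping: true, lighting: true },
            getVertices: function () { return [0, 1, 2]; }
        };
        new Renderer().drawModel(model);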

    Read the article

  • Dealing with the node.js callback pyramid

    - by thecoop
    I've just started using node, and one thing I've quickly noticed is how quickly callbacks can build up to a silly level of indentation:

        doStuff(arg1, arg2, function(err, result) {
            doMoreStuff(arg3, arg4, function(err, result) {
                doEvenMoreStuff(arg5, arg6, function(err, result) {
                    omgHowDidIGetHere();
                });
            });
        });

    The official style guide says to put each callback in a separate function, but that seems overly restrictive on the use of closures: making a single object declared at the top level available several layers down means the object has to be passed through all the intermediate callbacks. Is it OK to use function scope to help here - putting all the callback functions that need access to a global-ish object inside a function that declares that object, so it goes into a closure?

        function topLevelFunction(globalishObject, callback) {
            function doMoreStuffImpl(err, result) {
                doMoreStuff(arg5, arg6, function(err, result) {
                    callback(null, globalishObject);
                });
            }
            doStuff(arg1, arg2, doMoreStuffImpl);
        }

    and so on for several more layers... Or are there frameworks etc. to help reduce the levels of indentation without declaring a named function for every single callback? How do you deal with the callback pyramid?
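    One widely used option is a control-flow library. Below is a minimal sketch using the community async module's waterfall helper; the doStuff functions are the question's own placeholders, stubbed out here with made-up bodies so the snippet actually runs, and each step is assumed to follow node's standard (err, result) callback convention:

        var async = require('async'); // npm install async

        // Stand-ins for the question's functions, so the sketch is runnable.
        function doStuff(a, b, cb) { setImmediate(cb, null, a + b); }
        function doMoreStuff(a, b, cb) { setImmediate(cb, null, a * b); }
        function doEvenMoreStuff(a, b, cb) { setImmediate(cb, null, a - b); }

        function topLevelFunction(globalishObject, callback) {
            async.waterfall([
                function (cb) { doStuff(1, 2, cb); },
                function (result, cb) { doMoreStuff(result, 4, cb); },
                function (result, cb) { doEvenMoreStuff(result, 6, cb); }
            ], function (err, result) {
                // Runs after the last step, or as soon as any step passes an error.
                callback(err, globalishObject); // still reachable via the closure
            });
        }

        topLevelFunction({ answer: 42 }, function (err, obj) {
            console.log(err || obj); // { answer: 42 }
        });

    Everything stays at one indentation level, and globalishObject never has to be threaded through the intermediate steps.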

    Read the article

  • Joining a company to get experience vs. going alone [closed]

    - by daniels
    My goal is to build a successful web startup, say the next Digg or Twitter, and I am unsure which route is best to follow as a programmer. I see basically two options: Get an internship/job with an established online company, so that I could get a mentor and learn from more experienced programmers, learn their processes, methodology and so on. I could do this for 1-2 years, and then quit to start working on my own stuff. Or start working on my own projects right away, starting with small ones and moving up gradually. This would give me more control over the things I would be working with, but I would lack contact with more experienced people, so I would need to figure out basic things on my own. Doing both is not an option in my opinion, because I would need to put a lot of effort/time into each if I were to learn/improve as a programmer. So is one route definitely better than the other? Is there a third one I am not considering? Background: I already work by myself developing content-based websites and doing SEO, and I am decent at it, so money is not a problem. Last year I started learning to program, first by myself, and now I have enrolled in a CS degree at a good university.

    Read the article

  • How can I stop Chromium from putting focus in the address bar when using the up arrow key to scroll

    - by codeLes
    This is bugging me. I'm on a web page, any web page, scrolling back to the top with the up arrow key as I tend to do when browsing, and once I get to the top of the page, if I don't pay attention and hit the key one more time than needed, Chromium in all its wisdom decides I must want to do something with the address, so it places the focus in the address bar. ARGH! Is there any way to stop this? I'm running Ubuntu 9.04 Jaunty and have installed the nightly build of Chromium from the PPA.

    Read the article

  • Setting the date format in Atlassian Crucible

    - by pwan
    When I set up a review in Atlassian Crucible and I set a date, it is displayed as DD/MM/YYYY. I'd like to switch that to MM/DD/YYYY, but I don't see anywhere in the configuration to change it. The closest setting I see is the timezone, which I have set to EST. I have Crucible running on a CentOS 6.2 machine that's set to UTC. I'm using Crucible version 2.7.14 Build:20120612060728 2012-06-12. My locale is currently set to en_US.UTF-8:

        > locale | grep LC_TIME
        LC_TIME="en_US.UTF-8"

    Can someone point out how I can switch the date format in Crucible?

    Read the article

  • Using Ogre with android [closed]

    - by Rich
    I am trying to get Ogre 3D to work on Android. I have managed to download and run the Ogre sample browser, but I am really struggling to get a basic application working; I have been trying for days now to no avail. Does anyone have any pointers on where to start with this? Thanks if anyone can help. EDIT: Very sorry for my rubbish question! I am a bit new to this and I am just trying to seek some guidance. OK, so I followed the instructions on the Ogre wiki to build Ogre for Android and the sample browser here: http://www.ogre3d.org/tikiwiki/tiki-index.php?page=CMake+Quick+Start+Guide&tikiversion=Android so it is definitely possible. The issue I am having is knowing what I need to do to get started with Ogre, e.g. just a simple hello-world-style app that might just show the Ogre head, so tutorials might actually be good, because I could not really find any simple ones and I am very new to 3D development. I just found the sample browser to be massive; yes, it has everything in it, but it's very difficult to understand how it all works. What I am asking for is basically some help, as I have been trying to pull out parts of the sample browser to create a view with just a 3D model. Hope this is better?

    Read the article

  • Benefits of log rotation

    - by Manfred Moser
    I have been using log rotation for years and never thought of it as a problem until I came across a question on Stack Overflow (http://stackoverflow.com/questions/1508734/disable-java-log-rotation/) where someone wants to disable log rotation. Having had build servers and even production servers cleaned up manually because logs were not rotated, disks were running out, and machines suddenly came to a halt, disabling rotation seems crazy to me; but it occurred to me that maybe the benefits are not so obvious after all. So what are the benefits of log rotation? And what are the drawbacks (e.g. maybe more difficult debugging/analysis)? What tools do you find useful for working with rotated log files? Splunk I assume, but what else?

    Read the article

  • Oracle GoldenGate 11gR2 Event Marker System

    - by Doug Reid
    Oracle GoldenGate 11gR2 includes a number of refinements to the Event Marker system. Using event markers enables GoldenGate processes to take a defined action based on an event in the data stream. This feature within Oracle GoldenGate simplifies methods to embed specific custom processing in the areas of error handling, alerts, and notification. The event marker system effectively allows DML-driven workflows to be created within GoldenGate and enables customers to craft non-standard processing based on special events. There are a number of supported event actions, including trace, log, checkpoint before, suspend, abort, and several others. With 11gR2, events can now be triggered by DDL operations, plus variables can be passed in and out of the system to shell scripts. Some good use cases for this feature are:
    - Automatic switchover to the secondary system during planned outages
    - Better monitoring of source systems' performance and automated switchover to the standby system in case of an outage on the primary system
    - Automatic switchover from initial load to changed-data movement
    - Automatic synchronization of any type of batch processing taking place on both the source and target databases for database consistency
    - Automatic stoppage of the Delivery module to allow end-of-day reporting
    - Finding, tracking, and reporting on transactions of interest, including ones that do not have primary keys or transaction record numbers
    If you would like to see a demo, please visit our YouTube channel (http://youtube.com/oraclegoldengate). To learn more about the new features of Oracle GoldenGate 11gR2 and to ask questions of the PM team, please join us on September 12th at 8am or 10am PST for our live webcast. Click here to register.

    Read the article

  • OOF checklist

    - by Daniel Moth
    When going on vacation or otherwise being out of office (known as OOF in Microsoft), it is polite and professional that our absence creates the minimum disruption possible to the rest of the business, and especially our colleagues. Below is my OOF checklist - I try to do these as soon as I know I'll be OOF, rather than leave it for the night before.
    1. Let the relevant folks on the team know the planned dates of absence and check if anybody was expecting something from you during that timeframe. Reset expectations with them, and as applicable try to find another owner for individual activities that cannot wait.
    2. Go through your calendar for the OOF period and decline every meeting occurrence so the owner of the meeting knows that you won't be attending (similar to my post about responding to invites). If it is your meeting, cancel it so that people don’t turn up without the meeting organizer being there. Do this even for meetings where the folks should know due to step #1. Over-communicating is a good thing here and keeps calendars all around up to date.
    3. Enter your OOF dates in whatever tool your company uses. Typically that is the notification to your manager.
    4. In your Outlook calendar, create a local Appointment (don't invite anyone) for the date range (all-day event), setting the "Show As" dropdown to "Out of Office". This way, people won’t try to schedule meetings with you on those days.
    5. If you use Lync, set the status to "Off Work" for that period.
    6. If you won't be responding to email (which when on your vacation you definitely shouldn't), then in Outlook set up "Automatic Replies (Out of Office)" for that period. This way people won’t think you are rude when you don't reply to their emails. In your OOF message point to an alias (ideally of many people) as a fallback for urgent queries.
    7. If you want to proactively notify individuals of your OOFage, then schedule and send a multi-day meeting request for the entire period. Remember to set the "Show As" to "Free" (so their calendar doesn’t show busy/OOF to others), set the "Reminder" to "None" (so they don’t get a reminder about it), set "Low Importance", and uncheck both "Response Options" so if they don't want this on their calendar, it is just one click for them to get rid of it. Aside: I have another post with advice on sending invites.
    8. If you care about people who would not observe the above but could drop by your office, stick a physical OOF note on your office door or chair/monitor or desk.
    Have I missed any? Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Supporting and testing multiple versions of a software library in a Maven project

    - by Duncan Jones
    My company has several versions of its software in use by our customers at any one time. My job is to write bespoke Java software for the customers based on the version of software they happen to be running. I've created a Java library that performs many of the tasks I regularly require in a normal project. This is a Maven project that I deploy to our local Artifactory and pull down into other Maven projects when required. I can't decide the best way to support the range of software versions used by our customers. Typically, we have about three versions in use at any one time. They are normally backwards compatible with one another, but that cannot be guaranteed. I have considered the following options for managing this issue:
    Separate editions for each library version: I make a separate release of my library for each version of my company's software. Using some Maven cunningness I could automatically produce a tested version linked to each of the then-current company software versions. This is feasible, but not without its technical challenges. The advantage is that this would be fairly automatic and my unit tests would definitely have executed against the correct software version. However, I would have to keep updating the versions supported and may end up maintaining a large collection of libraries.
    One supported version, but others tested: I support the oldest software version and make a release against that. I then perform tests with the newer software versions to ensure it still works. I could try to make this testing automatic by having some non-deployed Maven projects that import the software library and the associated test JAR and override the company software version used. If those projects build, then the library is compatible. I could ensure these meta-projects are included in our CI server builds.
    I welcome comments on which approach is better or a suggestion for a different approach entirely. I'm leaning towards the second option.

    Read the article

  • New CAM Editor v2.3 with Open-XDX for Open Data APIs

    - by drrwebber
    Creating actual working XML exchanges, loading data from data stores, generating XML, testing, integrating with web services and then handling deployment and delivery takes a lot of coding and effort. Then there is writing the documentation, models and schema, doing naming and design rule (NDR) checks, and packaging all this together (such as for NIEM IEPD use). What if there was a tool that helped you do all that easily and simply? Welcome to the new Open-XDX and the CAM Editor! Open-XDX uses code-free techniques in combination with CAM templates and visual drag and drop to rapidly design your XML exchange. Then Open-XDX will automatically generate all the SQL for you, read the database data, generate and populate the valid output XML, and filter with parameters. To complete the processing solution, Open-XDX works with web services and JDBC database connections as a callable module that can be deployed plug-and-play with your middleware stack, all with just a few lines of Java code (about 5, actually). You can build either Query/Response or Publish/Subscribe services from existing data stores to XML literally in minutes. To see a demonstration of using Open-XDX with a MySQL data store and integrating with an Oracle WebLogic server, please see this short video - http://youtube.com/user/TheCameditor There is also a Quick Guide available that provides more technical insight, along with a sample pack of templates and SQL that you can download and try for yourself. Head on over to our project resource site to learn more, download the latest CAM Editor and see links to all the resources and materials. We look forward to seeing how the developer community is able to jump-start information sharing initiatives using this new innovative approach.

    Read the article

  • Do your own design jobs and make it look professional

    - by Webgui
    Looks and design are becoming more and more important for customers and organizations, even when we deal with internal enterprise applications. However, many web developers who work on business apps end up not investing resources in the design. The reason may be that they ran out of time, so under their client's pressure there was no choice but to skip the design process. In some cases, especially in small software houses, there are no trained professional designers and the developers have to do both jobs. Since designing web applications can be very complex and requires mastering several languages and concepts, unless a big budget was allocated to the project it is very hard to produce a professional custom design. For those exact reasons, Visual WebGui integrated point-and-click design tools within its web/cloud development platform. Those tools allow developers to customize the UI look of the applications they build in a visual way that is fairly simple and doesn't require coding or mastering HTML, CSS and JavaScript in order to design. The development tools also give professional designers an easier working interface with the developers and let them quickly create new skins. So if you are interested in getting your design job done much more easily, you should probably tune in for about an hour and find out how. Click here to register: https://www1.gotomeeting.com/register/740450625

    Read the article

  • How to set global PATH on OS X Server 10.6.6?

    - by Adam Lindberg
    I'm running Tomcat on OS X Server 10.6.6 under the normal Web component that comes with the OS. This has worked fine so far, but I need to add some entries to the $PATH environment variable for programs that I want access to from the web server (more specifically, I'm running Hudson under Tomcat, which needs access to build tools that I have installed). Tomcat and the Web component seem to run under the user _appserver, which has a different $PATH than the administrator account. What's the proper way to add a global entry to the $PATH in OS X Server for the Web component? Preferably it should be done only once, so that both the _appserver and Administrator users get the same $PATH.

    Read the article

  • .NET Dependency Management Systems

    - by StriplingWarrior
    I have some .NET projects that are starting to get large enough to merit looking into dependency management solutions, so we don't have to copy binaries from one project to another. Here's what I've found so far: NPanday is based on a port of Maven. I can't tell how actively it is worked on, but the last release was in May 2011. NuGet seems to be under active development, and it appears to have support directly from Microsoft. Some people complained that it "only addresses dependency resolution," but I don't know what else it should address, or whether it has added more features since that point. It does appear to have recently added the ability to import binaries as part of the build process, so we don't have to commit them to our repositories. Refix appears to still be in beta, having received no attention since Sept 2011. Would somebody with recent experience using any of these dependency management tools (or any others that work well) share that experience? Is NuGet mature enough to use for dependency management? If not, what does it lack?

    Read the article

  • Making money from a custom built interpreter?

    - by annoying_squid
    I have been making considerable progress lately on building an interpreter. I am building it from NASM assembly code (for the core engine) and C, compiled with cl.exe, the Microsoft compiler (for the parser). I really don't have a lot of time, but I have a lot of good ideas on how to build this so that it appeals to a certain niche market. I'd love to finish this, but I need to face reality here ... unless I can make a good monetary return on my investment, there is not a lot of time for me to invest. So I ask the following questions to anyone out there, especially those who have experience in monetizing their programs: 1) How easy is it for a programmer to make good money from one design? (I know this is vague, but it will be interesting to hear from those who have experience or know of others' experiences.) 2) What are the biggest obstacles to making money from a programming design? 3) For the parser, I am using the Microsoft compiler (no IDE) I got from Visual Studio Express, so will this be an issue? Will I have to pay royalties or a license fee? 4) As far as I know, NASM is a 2-clause BSD licensed application, so this should allow me to use NASM for commercial development unless I am missing something? It's good to know these things before launching into the meat and potatoes of the project.

    Read the article

  • Thermal risks to other components when watercooling CPU

    - by B Sharp
    I recently ordered all the components for a new desktop system to replace my old, dying computer. I wanted a really quiet desktop, so I got a case rated for quietness and opted to try a closed-loop CPU water cooling kit (Antec Kuhler H2O 620) that was on sale for a very good price over the Thanksgiving weekend. Most of my components are still in transit, but I became somewhat worried when a friend mentioned that abandoning air cooling can result in heat buildup inside the case: the video card and other components such as RAM, the northbridge, MOSFETs and voltage regulators radiate heat that the CPU fan would normally at least keep circulating, so it doesn't build up in localized areas. Is this a realistic problem? What precautions should I take to remove heat from the other components? Adding more case fans seems like it would get really noisy. Are there quiet alternatives?

    Read the article

  • Self-Executing Anonymous Function vs Prototype

    - by Robotsushi
    In JavaScript there are a few clearly prominent techniques for creating and managing classes/namespaces. I am curious which situations warrant using one technique vs. the other. I want to pick one and stick with it moving forward. I write enterprise code that is maintained and shared across multiple teams, and I want to know what the best practice is when writing maintainable JavaScript. I tend to prefer self-executing anonymous functions; however, I am curious what the community vote is on these techniques.

    Prototype:

        function obj() { }
        obj.prototype.test = function() { alert('Hello?'); };
        var obj2 = new obj();
        obj2.test();

    Self-Executing Anonymous Function:

        //Self-Executing Anonymous Function
        (function( skillet, $, undefined ) {
            //Private Property
            var isHot = true;

            //Public Property
            skillet.ingredient = "Bacon Strips";

            //Public Method
            skillet.fry = function() {
                var oliveOil;
                addItem( "\t\n Butter \n\t" );
                addItem( oliveOil );
                console.log( "Frying " + skillet.ingredient );
            };

            //Private Method
            function addItem( item ) {
                if ( item !== undefined ) {
                    console.log( "Adding " + $.trim(item) );
                }
            }
        }( window.skillet = window.skillet || {}, jQuery ));

        //Public Properties
        console.log( skillet.ingredient ); //Bacon Strips

        //Public Methods
        skillet.fry(); //Adding Butter & Frying Bacon Strips

        //Adding a Public Property
        skillet.quantity = "12";
        console.log( skillet.quantity ); //12

        //Adding New Functionality to the Skillet
        (function( skillet, $, undefined ) {
            //Private Property
            var amountOfGrease = "1 Cup";

            //Public Method
            skillet.toString = function() {
                console.log( skillet.quantity + " " + skillet.ingredient + " & " + amountOfGrease + " of Grease" );
                console.log( isHot ? "Hot" : "Cold" );
            };
        }( window.skillet = window.skillet || {}, jQuery ));
        //end of skillet definition

        try {
            //12 Bacon Strips & 1 Cup of Grease
            skillet.toString(); //Throws Exception
        } catch( e ) {
            console.log( e.message ); //isHot is not defined
        }

    I feel I should mention that the self-executing anonymous function is the pattern used by the jQuery team. Update: When I asked this question I didn't truly see the importance of what I was trying to understand. The real issue at hand is whether or not to use new to create instances of your objects, or to use patterns which do not require constructors or the use of the new keyword. I added my own answer, because in my opinion we should make use of patterns which don't use the new keyword. For more information please see my answer.
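    Following up on the update above, here is a minimal sketch (names hypothetical) of one pattern that avoids the new keyword entirely while still sharing methods through a prototype, using ES5's Object.create:

        var skilletProto = {
            fry: function () {
                console.log("Frying " + this.ingredient);
            }
        };

        function createSkillet(ingredient) {
            var skillet = Object.create(skilletProto); // prototype link without `new`
            skillet.ingredient = ingredient;
            return skillet;
        }

        var skillet2 = createSkillet("Bacon Strips");
        skillet2.fry(); // Frying Bacon Strips

    Instances created this way share fry through the prototype chain, which is exactly what a constructor plus new would give, minus the constructor.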

    Read the article

  • Enterprise Cloud Infrastructure for Dummies eBook

    - by ferhat
    Are you considering "going to the cloud" as a way to cut IT costs and maximize your virtualization investments? Then Enterprise Cloud Infrastructure for Dummies is a no-nonsense guide to help you navigate this hot topic. This user-friendly guide explains how to cut through the noise and take advantage of integrated virtualization and management tools to implement a cloud infrastructure that not only lowers operational costs but can easily adapt and scale to run a broad range of application services safely and securely. This e-book will serve as a valuable cloud computing guide covering important topics such as: the current overall cloud landscape and how to best leverage private cloud infrastructure; how to build an effective enterprise cloud infrastructure using the Oracle Optimized Solution methodology; and quantifiable cost savings gained using Oracle's integrated hardware and software and Optimized Solutions. Download your exclusive copy of Enterprise Cloud Infrastructure, Oracle Special Edition today.

    Read the article

  • The Minimalist Approach to Content Governance - Create Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. In this installment of our Minimalist Approach to Content Governance we finally get to the fun part of the content creation process! Once the content requester has addressed the items outlined in the Request Phase, it is time to set up and begin the production of content. For this to be done correctly, it is important that the content be assigned appropriate workflow and security information. As in our prior phase, let's take a look at what can be done to streamline this process, as contributors are focused on getting information to their end users as quickly as possible. This often means that details around how to ensure that the materials are properly managed can be overlooked, but fortunately there are some techniques that leverage our content management system's native capabilities to automatically take care of some of the details.
    1. Determine Access
    Why - Even if content is not something that needs to be restricted for security reasons, it is helpful to apply access rights so that the content ends up being visible only to users that it relates to. This will greatly improve the user experience. For instance, if your team is working on a group project, many of your fellow company employees do not need to see the content that is being worked on for that project.
    How - Make use of native content features that allow propagation of security and metadata from parent folders within your content system that have been set up for your particular effort. This makes it painless to enforce security as well as metadata policies for even the most unorganized users. The default settings at a parent level can be set once the content creation request has been accepted and a location in the content management system is assigned for your specific project.
    Impact - Users can find information with less effort, as they will only be exposed to what they need for their work, and they can leverage advanced search features to take advantage of metadata assigned to content. The combination of default security and metadata will also help in running reports against the content in the Manage and Retire stages that we will discuss in the next 2 posts.
    2. Assign Workflow (optional depending on nature of content)
    Why - Every case for workflow is going to be a bit different, but it generally involves ensuring that content conforms to management, legal and/or editorial requirements.
    How - Oracle's Universal Content Management offers two ways of helping to workflow content without much effort. Workflow can be applied to content based on Criteria acting on metadata, or explicitly assigned to content with a Basic workflow.
    Impact - Any content that needs additional attention before release is addressed, allowing users to comment and version until a suitable result is reached.
    By using inheritance from parent folders within the content management system, content can automatically be given the right security, metadata and workflow information for a particular project's content. This relieves management teams and content contributors of the burden of doing this for every piece of content. We will cover more about the manage phase within the content lifecycle in our next installment.

    Read the article

  • Network connectivity issues with Windows Store

    - by Duy Tran
    I have Windows 8 Pro build 9200 installed on my Dell laptop. I want to install some new apps and updates from the Store, but there seems to be some network problem that causes the downloading gauge to show up but never actually progress. I followed some instructions to switch from a local user to my Microsoft account, but this "Please wait" screen keeps showing and I don't really know why. I still have internet access and can use apps like People, Mail, etc. with my account logged in, and I can surf the net using Firefox, Chrome and Internet Explorer. I did another test in cmd with ping -t google.com and it showed that my laptop has internet access. Does anybody know a solution to make the Store work properly? Or is there any workaround to switch to the Microsoft account instead of a local user account?

    Read the article

  • Minimum Requirements for (open) Solaris?

    - by Electrons_Ahoy
    I'm thinking about knocking together a Solaris box at home to act as a combination server and learning exercise. What are the minimum hardware specs I can throw at it such that it'll be actually usable? I'd be cobbling the machine together from a stack of various x86 PC spares/leftovers. Does anyone have experience with Solaris at the lower end of the spectrum? The Sun site, for example, claims it'll run with as little as 255 megs of ram, but is it worth the exercise with less than a gig? Will my old Pentium II 450 cut the mustard? (I'm willing to throw a couple of bucks at pricewatch/mwave/newegg on this, but if I need to build a better rig than my main PC, I may not bother.)

    Read the article

  • How to check use of userva boot option on Win 2K3 server

    - by Tim Sylvester
    I have some 32-bit Win2K3 servers running an application that fails now and then, apparently due to heap fragmentation (process virtual bytes grow, private bytes do not). I do not have access to the source code or build process of this application. I have modified the boot.ini file on one of these servers to include /userva=2560, halfway between the normal mode of operation and the /3GB option. Normally it takes weeks to reach the point of failure, but I'd like to see right away whether this has actually had any effect. As I understand it, this option limits the kernel to the remaining address space (1536 MB instead of 2048), but does not necessarily give an application the extra address space, depending on the flags in the application's PE header. How can I determine whether the OS is allowing a particular application, running in production, to access address space above 2GB? Additionally, what's the best way to monitor the system to ensure that the kernel is not starved for address space, and more generally how should I go about finding the optimal value for this setting?

    Read the article

  • What is a good way to share internal helpers?

    - by toplel32
    All my projects share the same base library that I have built up over quite some time. It contains utilities and static helper classes to assist them where .NET doesn't exactly offer what I want. Originally all the helpers were written mainly to serve an internal purpose, and it has to stay that way, but sometimes they prove very useful to other assemblies. Now making them public in a reliable way is more complicated than most would think; for example, all methods that assume nullable types must now contain argument checking, while not charging internal utilities with the price of doing so. The price might be negligible, but it is far from right. While refactoring, I have revisited this case multiple times and I've come up with the following solutions so far:
    1. Have an internal and a public class for each helper. The internal class contains the actual code while the public class serves as an access point which does argument checking. Cons: the internal class requires a prefix to avoid ambiguity (the best presentation should be reserved for public types), and it isn't possible to discriminate methods that don't need argument checking.
    2. Have one class that contains both internal and public members (as conventionally implemented in the .NET Framework). At first this might sound like the best possible solution, but it has the same first unpleasant con as solution 1. Cons: internal methods require a prefix to avoid ambiguity.
    3. Have an internal class which is implemented by a public class that overrides any members that require argument checking. Cons: it is non-static, so at least one instantiation is required. This doesn't really fit the helper class idea, since a helper generally consists of independent fragments of code and should not require instantiation. Non-static methods are also slower by a negligible degree, which doesn't really justify this option either.
    There is one general and unavoidable consequence: a lot of maintenance is necessary, because every internal member will require a public counterpart. A note on solution 1: the first consequence can be avoided by putting both classes in different namespaces; for example, you can have the real helper in the root namespace and the public helper in a namespace called "Helpers".
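    The question is .NET-specific, but options 1 and 2 reduce to one language-neutral shape. Here is an illustrative JavaScript module sketch (hypothetical names, not a .NET answer): an unchecked internal core plus a public access point that validates arguments and delegates, so internal callers skip the cost of the checks.

        // Illustrative only: an unchecked core plus a checked public entry point.
        var helpers = (function () {
            // "Internal" helper: assumes its argument has already been validated.
            function capitalizeCore(s) {
                return s.charAt(0).toUpperCase() + s.slice(1);
            }
            return {
                // "Public" access point: pays the argument-checking cost exactly once;
                // internal callers use capitalizeCore directly and skip the check.
                capitalize: function (s) {
                    if (typeof s !== 'string' || s.length === 0) {
                        throw new TypeError('capitalize expects a non-empty string');
                    }
                    return capitalizeCore(s);
                }
            };
        }());

        console.log(helpers.capitalize('hello')); // Hello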

    Read the article

  • What is a preferred method for automatically configuring and setting up an Ubuntu instance?

    - by sutch
    I am tired of manually configuring instances of Ubuntu for testing web applications and for setting up workstations. I'm even more frustrated by the issues caused by inconsistent configurations. Is there a method (hopefully not too time-consuming to learn and set up) that allows for automating the setup and configuration of an Ubuntu server or workstation from an ISO? This is primarily for virtual machine instances, but it would be helpful to also create instances on hardware. I am specifically looking for a method to automate the installation of libraries (apt-get), configure services (such as Apache and MySQL), add third-party software (download, extract and build), and add libraries to scripting languages (for example, Ruby gems or CPAN packages for Perl).

    Read the article
