Search Results

Search found 7651 results on 307 pages for 'execution plan'.


  • Using h:outputFormat to message-format the f:selectItems of a h:selectOneRadio

    - by msharma
    I am having some trouble with using h:selectOneRadio. I have a list of objects which is being returned which needs to be displayed. I am trying something like this: <h:selectOneRadio id="selectPlan" layout="pageDirection"> <f:selectItems value="#{detailsHandler.planList}" /> </h:selectOneRadio> and planList is a List of Plans. Plan is defined as: public class Plan { protected String id; protected String partNumber; protected String shortName; protected String price; protected boolean isService; protected boolean isOption; //With all getters/setters } The text that must appear for each radio button is actually in a properties file, and I need to insert params in the text to fill out some value in the bean. For example the text in my properties file is: plan_price=The price of this plan is {0}. I was hoping to do something like this: <f:selectItems value="<h:outputFormat value="#{i18n.plan_price}"> <f:param value="#{planHandler.price}"> </h:outputFormat>" /> Usually if it's not a h:selectOneRadio component, if it's just text I use the h:outputFormat along with f:param tags to display the messages in my .property file called i18n above, and insert a param which is in the backing bean. here this does not work. Does anyone have any ideas how I can deal with this? I am being returned a list of Plans each with their own prices and the text to be displayed is held in property file. Any help much appreciated. Thanks!
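    A minimal sketch of one way this is often handled (not from the original post; the bundle base name "i18n", the getters and the value/label choice are assumptions): build the SelectItem list in the backing bean and do the MessageFormat substitution there, so the view only references a single property.

        import java.text.MessageFormat;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.ResourceBundle;
        import javax.faces.model.SelectItem;

        public class DetailsHandler {
            private List<Plan> planList; // populated elsewhere, e.g. from a service

            // Format each radio label from the i18n pattern and pair it with the plan id.
            public List<SelectItem> getPlanItems() {
                ResourceBundle bundle = ResourceBundle.getBundle("i18n"); // assumed bundle base name
                String pattern = bundle.getString("plan_price");          // "The price of this plan is {0}."
                List<SelectItem> items = new ArrayList<SelectItem>();
                for (Plan plan : planList) {
                    items.add(new SelectItem(plan.getId(), MessageFormat.format(pattern, plan.getPrice())));
                }
                return items;
            }
        }

    The view would then shrink to something like <f:selectItems value="#{detailsHandler.planItems}" />, with no nested h:outputFormat needed.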

    Read the article

  • How do you make an added row from QueryAddRow() the first row of the result from a query?

    - by JS
    I am outputting a query but need to specify the first row of the result. I am adding the row with QueryAddRow() and setting the values with QuerySetCell(). I can create the row fine, I can add the content to that row fine. If I leave the argument for the row number off of QuerySetCell() then it all works great as the last result of the query when output. However, I need it to be first row of the query but when I try to set the row attribute with the QuerySetCell it just overwrites the first returned row from my query (i.e. my QueryAddRow() replaces the first record from my query). What I currently have is setting a variable from recordCount and arranging the output but there has to be a really simple way to do this that I am just not getting. This code sets the row value to 1 but overwrites the first returned row from the query. <cfquery name="qxLookup" datasource="#application.datasource#"> SELECT xID, xName, execution FROM table </cfquery> <cfset QueryAddRow(qxLookup)/> <cfset QuerySetCell(qxLookup, "xID","0",1)/> <cfset QuerySetCell(qxLookup, "xName","Delete",1)/> <cfset QuerySetCell(qxLookup, "execution", "Select this to delete",1)/> <cfoutput query="qxLookup"> <tr> <td> <a href="##" onclick="javascript:ColdFusion.navigate('xSelect/x.cfm?xNameVar=#url.xNameVar#&xID=#qxLookup.xID#&xName=#URLEncodedFormat(qxLookup.xName)#', '#xNameVar#');ColdFusion.Window.hide('#url.window#')">#qxLookup.xName#</a> </td> <td>#qxLookup.execution#</td> </tr> </cfoutput> </table> Thanks for any help.

    Read the article

  • In maven2, how do I assemble bits and pieces of different modules to create final distributions?

    - by Carcassi
    I have four Maven projects: a client API jar, a web service war, a UI jar, and a web interface war. The service war will need to be packaged to include the client API jar, together with javadocs (so that each version is distributed with a matching client and documentation). The web interface war will need the UI jar and all the dependencies (webstart/applet deployment). So I need a fifth project that does all the packaging. How to do this with Ant or a script is perfectly clear to me, but not in Maven. I tried the following: having the javadocs included as part of the war packaging: this requires the execution of the javadoc goal in project 1 before execution of package in project 2, and I haven't found a way to bind plugins/goals across different projects. Using the assembly plugin in project 2 had the same problem. Creating a fifth project and using the assembly plugin ran into the same problems as before, with the added problem that, since I need different pieces from each sub-project, I do not understand how this can be done using the assembly. Is this too hard to do in Maven, and should I just give up? Or am I looking at it wrong, in which case, how should I be looking at it? Thanks! Upon further reflection, here is a partial answer: each project should build all of its artifacts, by having the plugins configured to run as part of the prepare-resources and package phases. So, in my case, I prepare all that needs to be generated (jar, javadocs, xsd documentation, ...) as different artifacts, so that a single "package" execution creates them all. So, it's not "how project 2 forces project 1 to run different goals", but "make project 1 create all of its artifacts as part of the normal lifecycle". This seems to simplify things.

    Read the article

  • Anything like Heroku for PHP or .NET?

    - by Wayne M
    In my area PHP is very widespread, so is .NET. Ruby not so much; most places have never heard of it. For some personal things I am "forced" to choose Rails because I want to take advantage of Heroku - the ability to deploy and scale on the cloud very easily is the main reason. Also, they offer a small FREE plan that I can use for demo sites or, in this case, for my business' static page; as a totally bootstrapped startup I have maybe $50 or so in initial capital and cannot afford to pay monthly fees while I'm getting started. Are there any similar offerings for other languages? Specifically, I really like the small, 5MB site for free that Heroku offers - is there anything like that for PHP and/or .NET? I'm not even that concerned about the "cloud" part, but that would be a nice bonus. If there is, I might be able to kill two birds with one stone and pick up a useful skill as I'm doing my own thing instead of using something that nobody else knows or cares about. I should add I'm specifically interested in something that offers a free plan. As I said, Heroku has a 5mb plan that you can have as many as you want for free; I have yet to find anything similar for any other platform, and to be honest I'm not too thrilled about using Ruby on Rails for everything simply to take advantage of this.

    Read the article

  • What happens when modifying Gemfile.lock directly?

    - by Mik378
    From the second bundle install execution onwards, dependencies are loaded from Gemfile.lock as long as Gemfile hasn't changed. But I wonder how the change detection between those two files works. For instance, if I add a new dependency directly into Gemfile.lock without adding it to Gemfile (against best practice, since Gemfile.lock is auto-generated from Gemfile), would bundle install consider Gemfile to have changed? In other words, does the bundle install process compare the whole Gemfile and Gemfile.lock dependency trees in order to detect changes? If so, even if I add a dependency directly to Gemfile.lock, Gemfile would be detected as changed (since they differ) and Gemfile.lock would be regenerated (losing the added dependency...). What exactly does bundle install do from the second run onwards? To be clearer, my question is: are changes detected based only on Gemfile? That is, does Bundler keep a snapshot of Gemfile from bundle install execution N and merely compare it against execution N+1? Or is no snapshot kept, with Bundler comparing against Gemfile.lock each time to decide whether Gemfile should be considered changed?

    Read the article

  • How to handle ids and polymorphic associations in views if compound keys are not supported?

    - by duncan
    I have a Movie plan table: movie_plans (id, description) Each plan has items, which describe a sequence of movies and the duration in minutes: movie_plan_items (id, movie_plan_id, movie_id, start_minutes, end_minutes) A specific instance of that plan happens in: movie_schedules (id, movie_plan_id, start_at) However the schedule items can be calculated from the movie_plan_items and the schedule start time by adding the minutes create view movie_schedule_items as select CONCAT(p.id, '-', s.id) as id, s.id as movie_schedule_id, p.id as movie_plan_item_id, p.movie_id, p.movie_plan_id, (s.start_at + INTERVAL p.start_minutes MINUTE) as start_at, (s.start_at + INTERVAL p.end_minutes MINUTE) as end_at from movie_plan_items p, movie_schedules s where s.movie_plan_id=p.movie_plan_id; I have a model over this view (readonly), it works ok, except that the id is right now a string. I now want to add a polymorphic property (like comments) to various of the previous tables. Therefore for movie_schedule_items I need a unique and persistent numeric id. I have the following dilemma: I could avoid the id and have movie_schedule_items just use the movie_plan_id and movie_schedule_id as a compound key, as it should. But Rails sucks in this regard. I could create an id using String#hash or a md5, thus making it slower or collision prone (and IIRC String#hash is no longer persistent across processes in Ruby 1.9) Any ideas on how to handle this situation?

    Read the article

  • Should I be backing up a webapp's data to another host continuously?

    - by user196289
    I have a webapp in development. I need to plan for what happens if the host goes down. I will lose some very recent session state (which I can live with), and everything else should be persistently stored in the database. If I am starting up again after an outage, can I expect a good host to reconstruct the database to within minutes of where I was up to? Or seconds? Or should I build in a background process to continually mirror the database elsewhere? What is normal/sensible? Obviously a good host will have RAID and other redundancy, so the likelihood of total loss should be low, and if they have periodic backups I should lose only very recent data. But this is presumably designed with almost-static web content in mind, and my site is transactional, with new data being filed continuously (and a customer expectation that I don't ever lose it). Any suggestions/advice? Are there off-the-shelf frameworks for doing this? (I'm primarily working in Java.) And should I just plan to save the data, or should I plan to have an alternative usable host implementation ready to launch in case the host doesn't come back up in a suitable timeframe?

    Read the article

  • How do I execute a program using Maven?

    - by Will
    I would like to have a Maven goal trigger the execution of a java class. I'm trying to migrate over a Makefile with the lines: neotest: mvn exec:java -Dexec.mainClass="org.dhappy.test.NeoTraverse" And I would like mvn neotest to produce what make neotest does currently. Neither the exec plugin documentation nor the Maven Ant tasks pages had any sort of straightforward example. Currently, I'm at: <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>1.1</version> <executions><execution> <goals><goal>java</goal></goals> </execution></executions> <configuration> <mainClass>org.dhappy.test.NeoTraverse</mainClass> </configuration> </plugin> I don't know how to trigger the plugin from the command line, though.
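    One note, hedged (assuming exec-maven-plugin 1.x behaviour): when exec:java is invoked directly from the command line it picks up the plugin-level <configuration> shown above, so something like

        mvn compile exec:java

    run in the project directory should execute org.dhappy.test.NeoTraverse without needing -Dexec.mainClass. To have the lifecycle trigger it instead, the <execution> would also need a <phase> (for example <phase>test</phase>), after which mvn test would run it; Maven itself has no way to register a custom top-level command like mvn neotest, though a profile (e.g. mvn test -Pneotest) can approximate one.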

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – STAGE 4: AUTOMATED DEPLOYMENT. If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI, then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users” There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready! If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
It’s also worth pointing out that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist:
- You and Your Team
- Environments
- The Deployment Process
- Rollback and Recovery
- Development Practices
You and Your Team
It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:
- 2008 to present: overall development costs reduced by 40%
- Number of programs under development increased by 140%
- Development costs per program down 78%
- Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)
But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well.
There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.
Actions:
- Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans,
- Get your boss involved too and make sure he/she is bought in to the investment.
Environments
Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are:
- What we want: Each developer with their own dedicated database environment. What we’ve got: A single shared “development” environment, used by everyone at once.
- What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine. What we’ve got: In fact if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…
- What we want: A separate QA environment used explicitly for manual testing prior to release. What we’ve got: “We just test on the dev environments, or maybe pre-production”.
- What we want: A proper pre-production (or “staging”) box that matches production as closely as possible. What we’ve got: Hopefully a pre-production box of some sort. But does it match production closely!?
- What we want: A production environment reproducible from source control. What we’ve got: A production box which has drifted significantly from anything in source control.
The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:
- Dedicated development databases,
- An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action],
- QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
- Pre-production. The environment you use to test the production release process,
- Production.
* A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.
Actions:
- Get your environments set up and ready,
- Set access permissions appropriately,
- Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development).
The Deployment Process
As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”
1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring into pre-prod,
2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script,
3. User (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar,
4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped),
5. If all working, run the script on production.*
* this assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something into the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend.
At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.
Actions:
- Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?”
- Repeat for earlier environments (QA and so on).
Rollback and Recovery
If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move into a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, things like:
- Immediately restore from backup,
- Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!),
- Have fallback environments – for example, using a blue-green deployment pattern.
Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.
Actions:
- Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process.
Development Practices
This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application.
So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here, by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slidedeck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start.
Actions:
- For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes).
- Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early.
Summary
The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning out your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from – and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • macport selfupdate not working

    - by eistrati
    macbookpro:~ eistrati$ port -v MacPorts 2.1.2 macbookpro:~ eistrati$ xcodebuild -version Xcode 4.5.2 Build version 4G2008a macbookpro:~ eistrati$ sudo port -d selfupdate DEBUG: Copying /Users/eistrati/Library/Preferences/com.apple.dt.Xcode.plist to /opt/local/var/macports/home/Library/Preferences DEBUG: MacPorts sources location: /opt/local/var/macports/sources/rsync.macports.org/release/tarballs ---> Updating MacPorts base sources using rsync rsync: failed to connect to rsync.macports.org: Connection refused (61) rsync error: error in socket IO (code 10) at /SourceCache/rsync/rsync-42/rsync/clientserver.c(105) [receiver=2.6.9] Command failed: /usr/bin/rsync -rtzv --delete-after rsync://rsync.macports.org/release/tarballs/base.tar /opt/local/var/macports/sources/rsync.macports.org/release/tarballs Exit code: 10 DEBUG: Error synchronizing MacPorts sources: command execution failed while executing "macports::selfupdate [array get global_options] base_updated" Error: /opt/local/bin/port: port selfupdate failed: Error synchronizing MacPorts sources: command execution failed Ideas? Please help!

    Read the article

  • Bad performance issue on dedicated server

    - by Pierre Espenan
    I just subscribed to a dedicated server offer and am seeing poor PHP execution performance. Execution time can be twice as long as on my old shared (mutualized) server! I'm definitely not an expert in server management, so I'm wondering what I missed. Here is some material that can help you understand what's wrong: My server (in French but easy to understand): http://www.online.net/fr/serveur-dedie/dedibox-sc phpinfo() output: http://jsfiddle.net/E8b7W/embedded/result/ PHP bench script (dedicated server): http://jsfiddle.net/EhXzK/embedded/result/ PHP bench script (old shared server): http://jsfiddle.net/ANbWt/embedded/result/ Is it normal to get such poor performance after a kernel update and a basic "apt-get install" of apache2 and php? Thanks!

    Read the article

  • How to set _optimizer_search_limit and _optimizer_max_permutations in Oracle10g.

    - by user52856
    I am working on a product that must support both MSSQL and Oracle (10g and 11g). I have some very complex queries that seem to run without issue on MSSQL 2005/2008, but very, very slowly with Oracle. The CPU on the Oracle server skyrockets for long periods of time, and it seems like the optimizer may be trying to find the best execution plan for the very complex query. I did some Googling to figure out how to limit the amount of time the optimizer spends on this, and came up with _optimizer_search_limit and _optimizer_max_permutations. Both of these parameters are hidden in Oracle 10g, and setting them in init.ora doesn't seem to make any difference. How do I set these parameters in Oracle? Or am I just totally barking up the wrong tree with the assumption that the optimizer is spending several minutes finding an execution plan? Thanks.

    Read the article

  • Please help with System.Runtime.InteropServices.SEHException: External component has thrown an exception

    - by Brandon
    My aspx page gives me this error sometimes: External component has thrown an exception. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Runtime.InteropServices.SEHException: External component has thrown an exception. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [SEHException (0x80004005): External component has thrown an exception.] Luxand.FSDK.Initialize(String DataFilesPath) +0 WebService.onLoad() +70 WebService..ctor() +91 facematch.Page_Load(Object sender, EventArgs e) +50 System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +15 System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +33 System.Web.UI.Control.OnLoad(EventArgs e) +99 System.Web.UI.Control.LoadRecursive() +47 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1436

    Read the article

  • SQL Server replication - Log Reader Agent Read Latency Issue, Please help

    - by envykok
    Hi all, I am facing a transactional replication delay issue with the Log Reader Agent. The log reader output is: ********* STATISTICS SINCE AGENT STARTED ************** 02-28-2011 20:12:08 Execution time (ms): 304141 Work time (ms): 304016 Distribute Repl Cmds Time(ms): 303764 Fetch time(ms): 300813 Repldone time(ms): 1826 Write time(ms): 5319 Num Trans: 15500 Num Trans/Sec: 50.984159 Num Cmds: 191639 Num Cmds/Sec: 630.358271 It seems to be Log Reader reader-thread latency; I also ran 'sp_replcounters' and see more than 20,000 seconds of replication latency, which keeps increasing. I used SQL Profiler to monitor sp_replcmds and found its execution time was 11 to 15 seconds. Is there any way to optimize this so the Log Reader reads faster from the transaction log? Other information: SQL Server 2008 (SP2) Standard 64 bit

    Read the article

  • Kerberos SSO browser integration?

    - by MrZombie
    I'm installing a bunch of web apps for the office, and one of the wants would be Kerberos-managed SSO. Now, I have found some information on the matter, and I wondered, what browsers integrate Kerberos SSO? Of course I could just use the underlying web app to authenticate in case of lack of Kerberos capability, which is exactly the plan, but I'd like to know which browsers would work for that, so I can plan ahead and decide if it's even worth it to do that, which I believe it would considering that one of the web apps I'm implementing will be an ERP.

    Read the article

  • Migrating Shared Hosting and Email

    - by Chrisc
    Hey Guys, I know there has already been a question that has been posted here about migrating shared hosting accounts, but wanted to get a second opinion on my plan to move hosting providers. My business is moving our shared hosting account (hosting and email) to a new provider, and would like to have minimal downtime. Here is our current plan: Create a backup of our old site Upload our site to the new host Set up duplicate email accounts with our new host Change the name server records on our domain to point to our new host Leave our old site up long enough for DNS records to propagate completely. I'm hoping this should result in little downtime for both website and email. Because of the volume of high-importance emails our company receives on a daily basis downtime is very expensive and not tolerated. Thanks, Chris

    Read the article

  • Media Temple-like hosting services?

    - by antonpug
    I have a couple of WordPress sites which do not get much traffic now, but I plan on expanding to something like 1,000-2,000 visits/day in a year or two. Media Temple has some really nice offerings, but their WordPress plan is 20/month... which is a little too much, seeing as at this point my site is more of a hobby than a money making machine. I currently host with HostGator (just switched, having previously been through GoDaddy, iPage and Bluehost). All these cheaper/popular hosting services are okay, but it would be nice to find something a little bit more "premium", but at a lower cost than MT. Anyone know anything worth looking at?

    Read the article

  • Home ZFS based NAS...What processor/chipset to use?

    - by MrBlargityBlarg
    So, I'm building a home/personal NAS. My plan is to expose both SMB fileshares for sharing files/media between hosts, but also to carve an iSCSI target LUN out of it for use by VMWare as a datastore. I want to use ZFS (software RAID) so that means I'll either be using FreeNAS, Solaris Express, or OpenIndiana. My question is basically: How much horsepower do I need? Obviously I/O is going to be my bottleneck but I want to be sure that I am not limiting my I/O because of a slow processor or chipset. So far the hardware plan is to use an Intel i3 and motherboard with one of the H87, Q87, or Z87 chipsets, a SAS controller (JBOD, no RAID) and if budget allows, I'm also hoping to get an SSD for the ZFS L2ARC and ZIL. Does anyone think I could get away with an Intel Atom or cheaper/less-capable processor/chipset than the i3 and [HQZ]87 listed above?

    Read the article

  • JavaMail application won't send email to external SMTP server

    - by Luiz Cruz
    This is actually a question from an exam, but I believe it could help others troubleshooting a similar situation. In a system, an e-mail needs to be sent to a certain mailbox. The following Java code, which is part of a larger system, was developed for that. Assume that "example.com" corresponds to a valid registered internet domain. public void sendEmail(){ String s1="Warning"; String b1="Contact IT support."; String r1="[email protected]"; String d1="[email protected]"; String h1="mx.intranet"; Properties p1 = new Properties(); p1.put("mail.host", h1); Session session = Session.getDefaultInstance(p1, null); MimeMessage message = new MimeMessage(session); try { message.setFrom(new InternetAddress(r1)); message.addRecipient(Message.RecipientType.TO, new InternetAddress(d1)); message.setSubject(s1); message.setText(b1); Transport.send(message); } catch (MessagingException e){ System.err.println(e); } } The execution of this code, within the testing environment of an application server, does NOT work as expected. The mailbox of the "example.com" server never receives the email, even though all string values in the code are correctly assigned. The output for the command "netstat -np TCP" in the application server during execution is shown below: Src Add Src Port Dest Add Dest Port State 192.168.5.5 54395 192.168.7.1 25 SYN_SENT 192.168.5.5 54390 192.168.7.1 110 TIME_WAIT 192.168.5.5 52001 200.218.208.118 80 CLOSE_WAIT 192.168.5.5 52050 200.218.208.118 80 ESTABLISHED 192.168.5.5 50001 200.255.94.202 25 TIME_WAIT 192.168.5.5 50000 200.255.94.202 25 ESTABLISHED With the exception of the lines that were NAT'd, all others are associated with the Java application server, which created them after the execution of the code above. The e-mail server used in this environment is the production server, which is online and does not require any authentication for internal connections. Based on this situation, point out three possible causes for the problem.
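    Not part of the exam answer, but a small troubleshooting sketch (assuming the standard javax.mail API; the property values mirror the ones in the question): pointing the SMTP transport at the host explicitly and turning on session debugging makes it much easier to see where delivery stalls.

        import java.util.Properties;
        import javax.mail.Session;

        public class MailDebug {
            // Build a session that targets the SMTP host explicitly and logs the protocol exchange.
            static Session debugSession() {
                Properties props = new Properties();
                props.put("mail.smtp.host", "mx.intranet"); // protocol-specific key; "mail.host" is only the generic fallback
                props.put("mail.smtp.port", "25");
                Session session = Session.getInstance(props); // per-call instance rather than the JVM-wide default
                session.setDebug(true);                       // prints the full SMTP conversation to the console
                return session;
            }
        }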

    Read the article

  • Import GraphViz graph to Microsoft Word 14

    - by rmetzger
    I have created a GraphViz dot-file to visualize a data flow. I have to write documentation using Microsoft Word and I'd like to include the graph in the document. For some weird reason, MS Word is not able to import SVG files. So I generated a .eps file using dot -Teps plan.dot -o plan.eps But once imported into Word, the picture looks horrible. I also tried to convert the SVG to WMF using Inkscape. It also looked horrible. Is there a clean way to generate a file using GraphViz that Word can read?

    Read the article

  • where to look for computer technician jobs

    - by Kareem
    Hi, I am currently studying for the A+ certification. I plan to have it by the end of this month, and I plan to go for further education. I've built two high-end computers by myself for a friend and a family member, installing the OS and everything. I'm looking into finding either a computer assembly or computer technician job. Where is the best place to look for one? I've looked into Best Buy, but I find their Geek Squad to be a little bit shady. Where is a good place to look for a full-time, entry-level computer technician job, just starting out, in Tampa, FL?

    Read the article

  • Process spawned by exec-maven-plugin blocks the maven process

    - by Arnab Biswas
    I am trying to execute the following scenario using Maven: pre-integration-test phase: Start a Java based application using a main class (using exec-maven-plugin) integration-test phase: Run the integration test cases (using maven-failsafe-plugin) post-integration-test phase: Stop the application gracefully (using exec-maven-plugin) Here is the pom.xml snippet: <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>1.2.1</version> <executions> <execution> <id>launch-myApp</id> <phase>pre-integration-test</phase> <goals> <goal>exec</goal> </goals> </execution> </executions> <configuration> <executable>java</executable> <arguments> <argument>-DMY_APP_HOME=/usr/home/target/local</argument> <argument>-Djava.library.path=/usr/home/other/lib</argument> <argument>-classpath</argument> <classpath/> <argument>com.foo.MyApp</argument> </arguments> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-failsafe-plugin</artifactId> <version>2.12</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> </goals> </execution> </executions> <configuration> <forkMode>always</forkMode> </configuration> </plugin> </plugins> If I execute mvn post-integration-test, my application is started as a child process of the Maven process, but the application process blocks the Maven process from executing the integration tests, which come in the next phase. Later I found that there is a bug (or missing functionality?) in the Maven exec plugin, because of which the application process blocks the Maven process. To address this issue, I have encapsulated the invocation of MyApp in a shell script and then appended "> /dev/null 2>&1 &" to spawn a separate background process. Here is a snippet (this is just a snippet and not the actual one) from runTest.sh: java -DMY_APP_HOME=$2 com.foo.MyApp > /dev/null 2>&1 & Although this solves my issue, is there any other way to do it? Am I missing any argument for exec-maven-plugin?
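    As a cross-platform alternative to the shell wrapper (a sketch only, not from the original setup; the class name and output file are hypothetical), the existing exec execution could point its last <argument> at a tiny launcher instead of com.foo.MyApp. The launcher forks MyApp into its own JVM via ProcessBuilder and returns immediately, so exec:exec stops waiting and the build moves on to the failsafe tests:

        import java.io.File;
        import java.io.IOException;
        import java.util.Arrays;

        public class MyAppLauncher {
            public static void main(String[] args) throws IOException {
                String javaBin = System.getProperty("java.home") + File.separator + "bin" + File.separator + "java";
                new ProcessBuilder(Arrays.asList(
                        javaBin,
                        "-DMY_APP_HOME=/usr/home/target/local",
                        "-cp", System.getProperty("java.class.path"), // reuse the classpath the launcher was given
                        "com.foo.MyApp"))
                    .redirectErrorStream(true)                        // merge stderr into stdout
                    .redirectOutput(new File("target/myapp.out"))     // keep the child's output off the Maven console
                    .start();                                         // no waitFor(): let the build continue
            }
        }

    The post-integration-test stop step would still need its own mechanism (for example, the application listening for a shutdown request).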

    Read the article

  • How can I get Transaction Logs to auto-truncate after Backup

    - by Yaakov Ellis
    Setup: Sql Server 2008 R2, databases set up with Full recovery mode. I have set up a maintenance plan that backs up the transaction logs for a number of databases on the server. It is set to create backup files in sub-directories for each database, verify backup integrity is turned on, and backup compression is used. The job is set to run once every 2 hours during business hours (8am-6pm). I have tested the job and it runs fine, creates the log backup files as it should. However, from what I have read, once the transaction log is backed up, it should be ok to truncate the transaction log. I do not see any option for doing this in the Sql Server Maintenance Plan designer. How can I set this up?

    Read the article

  • Renaming server - sql 2008 server instance affected

    - by user986363
    Will changing the name of the computer on which SQL Server 2008 is installed affect SQL Server in any way? Or will changing the computer name be transparent to SQL Server? For example: I plan to install Windows Server 2008, naming the machine "BobStage". Next I will install SQL Server 2008 R2 and restore a few DBs. Finally, I plan to rename the Windows machine to "BobLive". Will renaming the computer affect SQL Server's instance ID/name and possibly break something?

    Read the article

  • Debugging in Eclipse - Run till breakpoint

    - by pragadheesh
    I am trying to debug my Java code in Eclipse. Using breakpoints and Debug mode, control stops at the breakpoint, after which I can use F6 to step through my code. Suppose I have my breakpoint inside a for loop. In Visual Studio 2005, if we hit Execute (F5), it would stop at the next breakpoint. How can I achieve the same in Eclipse? Also, suppose I make a change while debugging, so I want to stop the execution and restart it from the beginning, like the Stop Execution command in VS 2005. Mainly for those who have extensively used Visual Studio 2005: how does Eclipse provide similar functionality?

    Read the article
