Search Results

Search found 7864 results on 315 pages for 'pre commit hook'.

Page 41/315

  • Amazon AMIs and Oracle VM templates

    - by llaszews
    I have worked with Oracle VM templates and most recently with Amazon Machine Images (AMIs). The similarities in the functionality and capabilities they provide are striking. Just take a look at the definitions:

    An Amazon Machine Image (AMI) is a special type of pre-configured operating system and virtual application software which is used to create a virtual machine within the Amazon Elastic Compute Cloud (EC2). It serves as the basic unit of deployment for services delivered using EC2. (AWS AMIs)

    Oracle VM Templates provide an innovative approach to deploying a fully configured software stack by offering pre-installed and pre-configured software images. Use of Oracle VM Templates eliminates the installation and configuration costs, and reduces the ongoing maintenance costs, helping organizations achieve faster time to market and lower cost of operations. (Oracle VM Templates)

    Other things they have in common:

    1. Both have 35 Oracle images or templates: AWS AMI pre-built images; Oracle pre-built VM Templates.
    2. Both allow you to build your own images or templates:
       A. Oracle VM Template Builder - an open source, graphical utility that makes it easy to use Oracle Enterprise Linux “Just enough OS” (JeOS)-based scripts for developing pre-packaged virtual machines for Oracle VM.
       B. AMI builder.

    However, AWS has the added feature/benefit of letting you add your own AMI to the AWS AMI catalog (Adding to the AWS AMI catalog). Another plus for AWS and AMIs is that there are hundreds of MySQL AMIs (AWS MySQL AMIs). A benefit of Oracle VM templates is that they can run on any public or private cloud environment, not just AWS EC2. However, Oracle VM templates first need to be imaged as AMIs before they can run in the AWS cloud.

    Read the article

  • "unresolvable problem" error when upgrading from 12.04 to 14.04

    - by flyingfisch
    So I have solved this issue, but now I have another problem:

        An unresolvable problem occurred while calculating the upgrade. This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug using the command 'ubuntu-bug ubuntu-release-upgrader-core' in a terminal.

    I am not upgrading to a pre-release version of Ubuntu and I am not running a pre-release either. I have unchecked all my 3rd-party packages using Ubuntu Software Manager, Edit > Software Sources... What else might be wrong?

    UPDATE: After doing sudo update-manager -d and sudo apt-get update; sudo apt-get dist-upgrade as per JimB's post, and then running sudo do-release-upgrade, here is what I get:

        Err http://extras.ubuntu.com trusty/main Translation-en
        Err http://extras.ubuntu.com trusty/main Translation-en_US
        Err http://extras.ubuntu.com trusty/main Translation-en
        Ign http://extras.ubuntu.com trusty/main Translation-en_US
        Ign http://extras.ubuntu.com trusty/main Translation-en
        Fetched 0 B in 0s (0 B/s)
        Checking package manager
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Calculating the changes
        Calculating the changes
        Could not calculate the upgrade
        An unresolvable problem occurred while calculating the upgrade. This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug using the command 'ubuntu-bug ubuntu-release-upgrader-core' in a terminal.
        Restoring original system state
        Aborting
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        === Command detached from window (Mon Aug 18 23:53:10 2014) ===
        === Command terminated with exit status 1 (Mon Aug 18 23:53:10 2014) ===
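    For reference, a sketch of one way to take third-party sources fully out of the picture from the command line before retrying (standard apt tooling; the PPA name is a placeholder):

        # Downgrade packages that came from a given PPA back to the archive versions
        # (ppa-purge also disables that PPA when it finishes).
        sudo apt-get install ppa-purge
        sudo ppa-purge ppa:some-team/some-ppa    # placeholder PPA name

        # Move any remaining third-party source lists aside (kept as a backup).
        sudo mkdir -p /etc/apt/sources.list.d.disabled
        sudo mv /etc/apt/sources.list.d/*.list /etc/apt/sources.list.d.disabled/

        sudo apt-get update
        sudo do-release-upgrade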

    Read the article

  • Fluid CSS: floating column with max-width and overflow

    - by Ates Goral
    I'm using a fluid layout in the new theme that I'm working on for my blog. I often blog about code and include <pre> blocks within the posts. The float: left column for the content area has a max-width so that the column stops at a certain maximum width and can also be shrunk:

        +----------+      +------+
        | text     |      | text |
        |          |      |      |
        |          |      |      |
        |          |      |      |
        |          |      |      |
        +----------+      +------+
            max            shrunk

    What I want is for the <pre> elements to be wider than the text column so that I can fit 80-character-wrapped code without horizontal scroll bars. But I want the <pre> elements to overflow from the content area, without affecting its fluidity:

        +----------+        +------+
        | text     |        | text |
        |          |        |      |
        +----------+--+     +------+------+
        | code        |     | code        |
        +----------+--+     +------+------+
        |          |        |      |
        +----------+        +------+
            max              shrunk

    But max-width stops being fluid once I insert the overhanging <pre> in there: the width of the column remains at the specified max-width even when I shrink the browser beyond that width. I've reproduced the issue with this bare-minimum scenario:

        <div style="float: left; max-width: 460px; border: 1px solid red">
          <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit</p>
          <pre style="max-width: 700px; border: 1px solid blue">
            function foo() {
                // Lorem ipsum dolor sit amet, consectetur adipisicing elit
            }
          </pre>
          <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit</p>
        </div>

    I noticed that doing either of the following brings back the fluidity:

    - Remove the <pre> (doh...)
    - Remove the float: left

    The workaround I'm currently using is to insert the <pre> elements into "breaks" in the post column, so that the widths of the post segments and the <pre> segments are managed mutually exclusively:

        +----------+      +------+
        | text     |      | text |
        +----------+      +------+
        +-------------+   +-------------+
        | code        |   | code        |
        +-------------+   +-------------+
        +----------+      +------+
        +----------+      +------+
            max            shrunk

    But this forces me to insert additional closing and opening <div> elements into the post markup, which I'd rather keep semantically pristine. Admittedly, I don't have a full grasp of how the box model works with floats and overflowing content, so I don't understand why the combination of float: left on the container and the <pre> inside it cripples the max-width of the container. I'm observing the same problem on Firefox/Chrome/Safari/Opera. IE6 (the crazy one) seems happy all the time. This also doesn't seem dependent on quirks/standards mode.

    Update: I've done further testing and observed that max-width seems to get ignored when the element has a float: left. I glanced at the W3C box model chapter but couldn't immediately see an explicit mention of this behaviour. Any pointers?
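    To make the workaround concrete, here is a minimal sketch of the "breaks" approach described above (the class name and the clear: left usage are my own illustration, not from the original markup):

        <!-- The post is split into segments; the wide <pre> sits between them,
             so its width never constrains the floated, max-width text segments. -->
        <div class="post-segment" style="float: left; clear: left; max-width: 460px;">
          <p>Text before the code...</p>
        </div>
        <pre style="clear: left; max-width: 700px;">
        function foo() { /* 80-column code */ }
        </pre>
        <div class="post-segment" style="float: left; clear: left; max-width: 460px;">
          <p>Text after the code...</p>
        </div>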

    Read the article

  • Cocos2d: Changing b2Body x val every frame causes jitter

    - by Joey Green
    So, I have a jumping mechanism similar to what you would see in doodle jump where character jumps and you use the accelerometer to make character change direction left or right. I have a player object with position and a box2d b2Body with position. I'm changing the player X position via the accelerometer and the Y position according to box2d. pseudocode for this is like so -----accelerometer acceleration------ player.position = new X -----world update--------- physicsWorld-step() //this will get me the new Y according to the physics similation //so we keep the bodys Y value but change x to new X according to accelerometer data playerPhysicsBody.position = new pos(player.position.x, keepYval) player.position = playerPhysicsBody.position Now this is simplifying my code, but I'm doing the position conversion back and forth via mult or divide by PTM_. Well, I'm getting a weird jitter effect after I get big jump in acceleration data. So, my questions are: 1) Is this the right approach to have the accelerometer control the x pos and box2d control the y pos and just sync everthing up every frame? 2) Is there some issue with updating a b2body x position every frame? 3) Any idea what might be creating this jitter effect? I've collecting some data while running the game. Pre-body is before I set the x value on the b2Body in my update method after I world-step(). Post of course is afterwards. As you can see there is definitively a pattern. 012-06-19 08:14:13.118 Game[1073:707] pre-body pos 5.518720~24.362963 2012-06-19 08:14:13.120 Game[1073:707] post-body pos 5.060156~24.362963 2012-06-19 08:14:13.131 Game[1073:707] player velocity x: -31.833529 2012-06-19 08:14:13.133 Game[1073:707] delta 0.016669 2012-06-19 08:14:13.135 Game[1073:707] pre-body pos 5.060156~24.689455 2012-06-19 08:14:13.137 Game[1073:707] post-body pos 5.502138~24.689455 2012-06-19 08:14:13.148 Game[1073:707] player velocity x: -31.833529 2012-06-19 08:14:13.150 Game[1073:707] delta 0.016667 2012-06-19 08:14:13.151 Game[1073:707] pre-body pos 5.502138~25.006948 2012-06-19 08:14:13.153 Game[1073:707] post-body pos 5.043575~25.006948 2012-06-19 08:14:13.165 Game[1073:707] player velocity x: -31.833529 2012-06-19 08:14:13.167 Game[1073:707] delta 0.016644 2012-06-19 08:14:13.169 Game[1073:707] pre-body pos 5.043575~25.315441 2012-06-19 08:14:13.170 Game[1073:707] post-body pos 5.485580~25.315441 2012-06-19 08:14:13.180 Game[1073:707] player velocity x: -31.833529 2012-06-19 08:14:13.182 Game[1073:707] delta 0.016895 2012-06-19 08:14:13.185 Game[1073:707] pre-body pos 5.485580~25.614935 2012-06-19 08:14:13.188 Game[1073:707] post-body pos 5.026768~25.614935 2012-06-19 08:14:13.198 Game[1073:707] player velocity x: -31.833529 2012-06-19 08:14:13.199 Game[1073:707] delta 0.016454 2012-06-19 08:14:13.207 Game[1073:707] pre-body pos 5.026768~25.905428 2012-06-19 08:14:13.211 Game[1073:707] post-body pos 5.469213~25.905428 2012-06-19 08:14:13.217 Game[1073:707] acceleration x -0.137421 2012-06-19 08:14:13.223 Game[1073:707] player velocity x: -65.022644 2012-06-19 08:14:13.229 Game[1073:707] delta 0.016603
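    For reference, a minimal sketch (C++-flavoured Box2D, placeholder names) of the per-frame sync described in the pseudocode above, where the accelerometer drives X, the simulation keeps Y, and positions are converted through PTM_RATIO:

        // Per-frame update: step the world, then reconcile body X with the sprite X
        // that the accelerometer handler has already moved (all names are placeholders).
        void GameLayer::update(float dt) {
            world->Step(dt, 8, 3);                                          // physics owns the Y motion

            const float newX  = playerSprite->getPositionX() / PTM_RATIO;   // points -> meters
            const float keepY = playerBody->GetPosition().y;                // Y from the simulation

            // Teleport the body to the accelerometer-driven X, keeping Y and the angle.
            playerBody->SetTransform(b2Vec2(newX, keepY), playerBody->GetAngle());

            // Sync the sprite back from the body (meters -> points).
            playerSprite->setPosition(ccp(newX * PTM_RATIO, keepY * PTM_RATIO));
        }

    (One hedged observation: SetTransform bypasses the solver each step, which is why some Box2D projects drive X by calling SetLinearVelocity from the accelerometer instead and let Step move the body.)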

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    .dbd-banner p{ font-size:0.75em; padding:0 0 10px; margin:0 } .dbd-banner p span{ color:#675C6D; } .dbd-banner p:last-child{ padding:0; } @media ALL and (max-width:640px){ .dbd-banner{ background:#f0f0f0; padding:5px; color:#333; margin-top: 5px; } } -- Database delivery patterns & practices STAGE 4 AUTOMATED DEPLOYMENT If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI, then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users” There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on the the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready! If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way. 
It’s also worth pointing out, that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist: You and Your Team Environments The Deployment Process Rollback and Recovery Development Practices You and Your Team It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as: -2008 to present: overall development costs reduced by 40% -Number of programs under development increased by 140% -Development costs per program down 78% -Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40% But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well. 
There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress. Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans, Get your boss involved too and make sure he/she is bought in to the investment. Environments Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are: What we want What we’ve got Each developer with their own dedicated database environment A single shared “development” environment, used by everyone at once An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine In fact if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question… Separate QA environment used explicitly for manual testing prior to release “We just test on the dev environments, or maybe pre-production” A proper pre-production (or “staging”) box that matches production as closely as possible Hopefully a pre-production box of some sort. But does it match production closely!? A production environment reproducible from source control A production box which has drifted significantly from anything in source control The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application. 
There’s a world if difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have: Dedicated development databases, An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action], QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing, Pre-production. The environment you use to test the production release process, Production. * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single-click from a user. Actions: Get your environments set up and ready, Set access permissions appropriately, Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development). The Deployment Process As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?” Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring in to pre-prod, Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script, User (generally, the DBA), reviews the script. This often involves manually checking updates against a spreadsheet or similar, Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped), If all working, run the script on production.* * this assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process keys in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend. 
At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. One he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommended sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes. Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” Repeat for earlier environments (QA and so on). Rollback and Recovery If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move in to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, things like: Immediately restore from backup, Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!) Have fallback environments – for example, using a blue-green deployment pattern. Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups, is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism. Actions: Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process. Development Practices This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application. 
So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here, by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slidedeck, from Niek Bartholomeus explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start. Actions: For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes). Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so good to introduce these ideas and the rationale behind them early.   Summary The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning our your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from – and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
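    As a deliberately simplified illustration of the incremental, backward-compatible style of schema change discussed above (my own example, not one taken from the article), a column rename might be staged across releases like this:

        -- Release N: add the new column alongside the old one; existing code keeps working.
        ALTER TABLE Customer ADD EmailAddress nvarchar(320) NULL;

        -- Release N: backfill, and keep both columns in sync (trigger or dual writes)
        -- while old and new application code coexist.
        UPDATE Customer SET EmailAddress = Email WHERE EmailAddress IS NULL;

        -- Release N+1 or later: only once nothing reads or writes the old column, drop it.
        -- Each step can be rolled back without losing new data.
        -- ALTER TABLE Customer DROP COLUMN Email;

    The point, as with "branch by abstraction", is that no single deployment is a big-bang, irreversible change.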

    Read the article

  • unexplainable packet drops with 5 ethernet NICs and low traffic on Ubuntu

    - by jon
    I'm stuck on problem where my machine started to drops packets with no sign of ANY system load or high interrupt usage after an upgrade to Ubuntu 12.04. My server is a network monitoring sensor, running Ubuntu LTS 12.04, it passively collects packets from 5 interfaces doing network intrusion type stuff. Before the upgrade I managed to collect 200+GB of packets a day while writing them to disk with around 0% packet loss depending on the day with the help of CPU affinity and NIC IRQ to CPU bindings. Now I lose a great deal of packets with none of my applications running and at very low PPS rate which a modern workstation NIC would have no trouble with. Specs: x64 Xeon 4 cores 3.2 Ghz 16 GB RAM NICs: 5 Intel Pro NICs using the e1000 driver (NAPI). [1] eth0 and eth1 are integrated NICs (in the motherboard) There are 2 other PCI-X network cards, each with 2 Ethernet ports. 3 of the interfaces are running at Gigabit Ethernet, the others are not because they're attached to hubs. Specs: [2] http://support.dell.com/support/edocs/systems/pe2850/en/ug/t1390aa.htm uptime 17:36:00 up 1:43, 2 users, load average: 0.00, 0.01, 0.05 # uname -a Linux nms 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux I also have the CPU governor set to performance mode and irqbalance off. The problem still occurs with them on. # lspci -t -vv -[0000:00]-+-00.0 Intel Corporation E7520 Memory Controller Hub +-02.0-[01-03]--+-00.0-[02]----0e.0 Dell PowerEdge Expandable RAID controller 4 | \-00.2-[03]-- +-04.0-[04]-- +-05.0-[05-07]--+-00.0-[06]----07.0 Intel Corporation 82541GI Gigabit Ethernet Controller | \-00.2-[07]----08.0 Intel Corporation 82541GI Gigabit Ethernet Controller +-06.0-[08-0a]--+-00.0-[09]--+-04.0 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) | | \-04.1 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) | \-00.2-[0a]--+-02.0 Digium, Inc. Wildcard TE210P/TE212P dual-span T1/E1/J1 card 3.3V | +-03.0 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) | \-03.1 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) +-1d.0 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #1 +-1d.1 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #2 +-1d.2 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #3 +-1d.7 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB2 EHCI Controller +-1e.0-[0b]----0d.0 Advanced Micro Devices [AMD] nee ATI RV100 QY [Radeon 7000/VE] +-1f.0 Intel Corporation 82801EB/ER (ICH5/ICH5R) LPC Interface Bridge \-1f.1 Intel Corporation 82801EB/ER (ICH5/ICH5R) IDE Controller I believe the NIC nor the NIC drivers are dropping the packets because ethtool reports 0 under rx_missed_errors and rx_no_buffer_count for each interface. On the old system, if it couldn't keep up this is where the drops would be. I drop packets on multiple interfaces just about every second, usually in small increments of 2-4. I tried all these sysctl values, I'm currently using the uncommented ones. # cat /etc/sysctl.conf # high net.core.netdev_max_backlog = 3000000 net.core.rmem_max = 16000000 net.core.rmem_default = 8000000 # defaults #net.core.netdev_max_backlog = 1000 #net.core.rmem_max = 131071 #net.core.rmem_default = 163480 # moderate #net.core.netdev_max_backlog = 10000 #net.core.rmem_max = 33554432 #net.core.rmem_default = 33554432 Here's an example of an interface stats report with ethtool. 
They are all the same, nothing is out of the ordinary ( I think ), so I'm only going to show one: ethtool -S eth2 NIC statistics: rx_packets: 7498 tx_packets: 0 rx_bytes: 2722585 tx_bytes: 0 rx_broadcast: 327 tx_broadcast: 0 rx_multicast: 1504 tx_multicast: 0 rx_errors: 0 tx_errors: 0 tx_dropped: 0 multicast: 1504 collisions: 0 rx_length_errors: 0 rx_over_errors: 0 rx_crc_errors: 0 rx_frame_errors: 0 rx_no_buffer_count: 0 rx_missed_errors: 0 tx_aborted_errors: 0 tx_carrier_errors: 0 tx_fifo_errors: 0 tx_heartbeat_errors: 0 tx_window_errors: 0 tx_abort_late_coll: 0 tx_deferred_ok: 0 tx_single_coll_ok: 0 tx_multi_coll_ok: 0 tx_timeout_count: 0 tx_restart_queue: 0 rx_long_length_errors: 0 rx_short_length_errors: 0 rx_align_errors: 0 tx_tcp_seg_good: 0 tx_tcp_seg_failed: 0 rx_flow_control_xon: 0 rx_flow_control_xoff: 0 tx_flow_control_xon: 0 tx_flow_control_xoff: 0 rx_long_byte_count: 2722585 rx_csum_offload_good: 0 rx_csum_offload_errors: 0 alloc_rx_buff_failed: 0 tx_smbus: 0 rx_smbus: 0 dropped_smbus: 01 # ifconfig eth0 Link encap:Ethernet HWaddr 00:11:43:e0:e2:8c UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:373348 errors:16 dropped:95 overruns:0 frame:16 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:356830572 (356.8 MB) TX bytes:0 (0.0 B) eth1 Link encap:Ethernet HWaddr 00:11:43:e0:e2:8d UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:13616 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:8690528 (8.6 MB) TX bytes:0 (0.0 B) eth2 Link encap:Ethernet HWaddr 00:04:23:e1:77:6a UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:7750 errors:0 dropped:471 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2780935 (2.7 MB) TX bytes:0 (0.0 B) eth3 Link encap:Ethernet HWaddr 00:04:23:e1:77:6b UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:5112 errors:0 dropped:206 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:639472 (639.4 KB) TX bytes:0 (0.0 B) eth4 Link encap:Ethernet HWaddr 00:04:23:b6:35:6c UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:961467 errors:0 dropped:935 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:958561305 (958.5 MB) TX bytes:0 (0.0 B) eth5 Link encap:Ethernet HWaddr 00:04:23:b6:35:6d inet addr:192.168.1.6 Bcast:192.168.1.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:4264 errors:0 dropped:16 overruns:0 frame:0 TX packets:699 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:572228 (572.2 KB) TX bytes:124456 (124.4 KB) I tried the defaults, then started to play around with settings. I wasn't using any flow control and I increased the RxDescriptor count to 4096 before the upgrade as well without any problems. # cat /etc/modprobe.d/e1000.conf options e1000 XsumRX=0,0,0,0,0 RxDescriptors=4096,4096,4096,4096,4096 FlowControl=0,0,0,0,0 debug=16 Here's my network configuration file, I turned off checksumming and various offloading mechanisms along with setting CPU affinity with heavy use interfaces getting an entire CPU and light use interfaces sharing a CPU. I used these settings prior to the upgrade without problems. 
# cat /etc/network/interfaces # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto eth0 iface eth0 inet manual pre-up /sbin/ethtool -G eth0 rx 4096 tx 0 pre-up /sbin/ethtool -K eth0 gro off gso off rx off pre-up /sbin/ethtool -A eth0 rx off autoneg off up ifconfig eth0 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up post-up echo "4" > /proc/irq/48/smp_affinity down ifconfig eth0 down post-down /sbin/ethtool -G eth0 rx 256 tx 256 post-down /sbin/ethtool -K eth0 gro on gso on rx on post-down /sbin/ethtool -A eth0 rx on autoneg on auto eth1 iface eth1 inet manual pre-up /sbin/ethtool -G eth1 rx 4096 tx 0 pre-up /sbin/ethtool -K eth1 gro off gso off rx off pre-up /sbin/ethtool -A eth1 rx off autoneg off up ifconfig eth1 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up post-up echo "4" > /proc/irq/49/smp_affinity down ifconfig eth1 down post-down /sbin/ethtool -G eth1 rx 256 tx 256 post-down /sbin/ethtool -K eth1 gro on gso on rx on post-down /sbin/ethtool -A eth1 rx on autoneg on auto eth2 iface eth2 inet manual pre-up /sbin/ethtool -G eth2 rx 4096 tx 0 pre-up /sbin/ethtool -K eth2 gro off gso off rx off pre-up /sbin/ethtool -A eth2 rx off autoneg off up ifconfig eth2 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up post-up echo "1" > /proc/irq/82/smp_affinity down ifconfig eth2 down post-down /sbin/ethtool -G eth2 rx 256 tx 256 post-down /sbin/ethtool -K eth2 gro on gso on rx on post-down /sbin/ethtool -A eth2 rx on autoneg on auto eth3 iface eth3 inet manual pre-up /sbin/ethtool -G eth3 rx 4096 tx 0 pre-up /sbin/ethtool -K eth3 gro off gso off rx off pre-up /sbin/ethtool -A eth3 rx off autoneg off up ifconfig eth3 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up post-up echo "2" > /proc/irq/83/smp_affinity down ifconfig eth3 down post-down /sbin/ethtool -G eth3 rx 256 tx 256 post-down /sbin/ethtool -K eth3 gro on gso on rx on post-down /sbin/ethtool -A eth3 rx on autoneg on auto eth4 iface eth4 inet manual pre-up /sbin/ethtool -G eth4 rx 4096 tx 0 pre-up /sbin/ethtool -K eth4 gro off gso off rx off pre-up /sbin/ethtool -A eth4 rx off autoneg off up ifconfig eth4 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up post-up echo "4" > /proc/irq/77/smp_affinity down ifconfig eth4 down post-down /sbin/ethtool -G eth4 rx 256 tx 256 post-down /sbin/ethtool -K eth4 gro on gso on rx on post-down /sbin/ethtool -A eth4 rx on autoneg on auto eth5 iface eth5 inet static pre-up /etc/fw.conf address 192.168.1.1 netmask 255.255.255.0 broadcast 192.168.1.255 gateway 192.168.1.1 dns-nameservers 192.168.1.2 192.168.1.3 up ifconfig eth5 up post-up echo "8" > /proc/irq/77/smp_affinity down ifconfig eth5 down Here's a few examples of packet drops, i ran one after another, probabling totaling 3 or 4 seconds. You can see increases in the drops from the 1st and 3rd. This was a non-busy time, very little traffic. # awk '{ print $1,$5 }' /proc/net/dev Inter-| face drop eth3: 225 lo: 0 eth2: 505 eth1: 0 eth5: 17 eth0: 105 eth4: 1034 # awk '{ print $1,$5 }' /proc/net/dev Inter-| face drop eth3: 225 lo: 0 eth2: 507 eth1: 0 eth5: 17 eth0: 105 eth4: 1034 # awk '{ print $1,$5 }' /proc/net/dev Inter-| face drop eth3: 227 lo: 0 eth2: 512 eth1: 0 eth5: 17 eth0: 105 eth4: 1039 I tried the pci=noacpi options. With and without, it's the same. 
This is what my interrupt stats looked like before the upgrade, after, with ACPI on PCI it showed multiple NICs bound to an interrupt and shared with other devices such as USB drives which I didn't like so I think i'm going to keep it with ACPI off as it's easier to designate sole purpose interrupts. Is there any advantage I would have using the default i.e. ACPI w/ PCI. ? # cat /etc/default/grub | grep CMD_LINE GRUB_CMDLINE_LINUX_DEFAULT="ipv6.disable=1 noacpi pci=noacpi" GRUB_CMDLINE_LINUX="" # cat /proc/interrupts CPU0 CPU1 CPU2 CPU3 0: 45 0 0 16 IO-APIC-edge timer 1: 1 0 0 7936 IO-APIC-edge i8042 2: 0 0 0 0 XT-PIC-XT-PIC cascade 6: 0 0 0 3 IO-APIC-edge floppy 8: 0 0 0 1 IO-APIC-edge rtc0 9: 0 0 0 0 IO-APIC-edge acpi 12: 0 0 0 1809 IO-APIC-edge i8042 14: 1 0 0 4498 IO-APIC-edge ata_piix 15: 0 0 0 0 IO-APIC-edge ata_piix 16: 0 0 0 0 IO-APIC-fasteoi uhci_hcd:usb2 18: 0 0 0 1350 IO-APIC-fasteoi uhci_hcd:usb4, radeon 19: 0 0 0 0 IO-APIC-fasteoi uhci_hcd:usb3 23: 0 0 0 4099 IO-APIC-fasteoi ehci_hcd:usb1 38: 0 0 0 61963 IO-APIC-fasteoi megaraid 48: 0 0 1002319 4 IO-APIC-fasteoi eth0 49: 0 0 38772 3 IO-APIC-fasteoi eth1 77: 0 0 130076 432159 IO-APIC-fasteoi eth4 78: 0 0 0 23917 IO-APIC-fasteoi eth5 82: 1329033 0 0 4 IO-APIC-fasteoi eth2 83: 0 4886525 0 6 IO-APIC-fasteoi eth3 NMI: 5 6 4 5 Non-maskable interrupts LOC: 61409 57076 64257 114764 Local timer interrupts SPU: 0 0 0 0 Spurious interrupts IWI: 0 0 0 0 IRQ work interrupts RES: 17956 25333 13436 14789 Rescheduling interrupts CAL: 22436 607 539 478 Function call interrupts TLB: 1525 1458 4600 4151 TLB shootdowns TRM: 0 0 0 0 Thermal event interrupts THR: 0 0 0 0 Threshold APIC interrupts MCE: 0 0 0 0 Machine check exceptions MCP: 16 16 16 16 Machine check polls ERR: 0 MIS: 0 Here's sample output of vmstat, showing the system. Barebones system right now. root@nms:~# vmstat -S m 1 procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 0 14992 192 1029 0 0 56 2 419 29 1 0 99 0 0 0 0 14992 192 1029 0 0 0 0 922 27 0 0 100 0 0 0 0 14991 192 1029 0 0 0 36 763 50 0 0 100 0 0 0 0 14991 192 1029 0 0 0 0 646 35 0 0 100 0 0 0 0 14991 192 1029 0 0 0 0 722 54 0 0 100 0 0 0 0 14991 192 1029 0 0 0 0 793 27 0 0 100 0 ^C Here's dmesg output. I can't figure out why my PCI-X slots are negotiated as PCI. The network cards are all PCI-X with the exception of the integrated NICs that came with the server. In the output below it looks as if eth3 and eth2 negotiated at PCI-X speeds rather than PCI:66Mhz. Wouldn't they all drop to PCI:66Mhz? If your integrated NICs are PCI, as labeled below (eth0,eth1), then wouldn't all devices on your bus speed drop down to that slower bus speed? If not, I still don't know why only one of my NICs ( each has two ethernet ports) is labeled as PCI-X in the output below. Does that mean it is running at PCI-X speeds are is it showing that it's capable? # dmesg | grep e1000 [ 3678.349337] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI [ 3678.349342] e1000: Copyright (c) 1999-2006 Intel Corporation. 
[ 3678.349394] e1000 0000:06:07.0: PCI->APIC IRQ transform: INT A -> IRQ 48 [ 3678.409725] e1000 0000:06:07.0: Receive Descriptors set to 4096 [ 3678.409730] e1000 0000:06:07.0: Checksum Offload Disabled [ 3678.409734] e1000 0000:06:07.0: Flow Control Disabled [ 3678.586409] e1000 0000:06:07.0: eth0: (PCI:66MHz:32-bit) 00:11:43:e0:e2:8c [ 3678.586419] e1000 0000:06:07.0: eth0: Intel(R) PRO/1000 Network Connection [ 3678.586642] e1000 0000:07:08.0: PCI->APIC IRQ transform: INT A -> IRQ 49 [ 3678.649854] e1000 0000:07:08.0: Receive Descriptors set to 4096 [ 3678.649859] e1000 0000:07:08.0: Checksum Offload Disabled [ 3678.649863] e1000 0000:07:08.0: Flow Control Disabled [ 3678.826436] e1000 0000:07:08.0: eth1: (PCI:66MHz:32-bit) 00:11:43:e0:e2:8d [ 3678.826444] e1000 0000:07:08.0: eth1: Intel(R) PRO/1000 Network Connection [ 3678.826627] e1000 0000:09:04.0: PCI->APIC IRQ transform: INT A -> IRQ 82 [ 3679.093266] e1000 0000:09:04.0: Receive Descriptors set to 4096 [ 3679.093271] e1000 0000:09:04.0: Checksum Offload Disabled [ 3679.093275] e1000 0000:09:04.0: Flow Control Disabled [ 3679.130239] e1000 0000:09:04.0: eth2: (PCI-X:133MHz:64-bit) 00:04:23:e1:77:6a [ 3679.130246] e1000 0000:09:04.0: eth2: Intel(R) PRO/1000 Network Connection [ 3679.130449] e1000 0000:09:04.1: PCI->APIC IRQ transform: INT B -> IRQ 83 [ 3679.397312] e1000 0000:09:04.1: Receive Descriptors set to 4096 [ 3679.397318] e1000 0000:09:04.1: Checksum Offload Disabled [ 3679.397321] e1000 0000:09:04.1: Flow Control Disabled [ 3679.434350] e1000 0000:09:04.1: eth3: (PCI-X:133MHz:64-bit) 00:04:23:e1:77:6b [ 3679.434360] e1000 0000:09:04.1: eth3: Intel(R) PRO/1000 Network Connection [ 3679.434553] e1000 0000:0a:03.0: PCI->APIC IRQ transform: INT A -> IRQ 77 [ 3679.704072] e1000 0000:0a:03.0: Receive Descriptors set to 4096 [ 3679.704077] e1000 0000:0a:03.0: Checksum Offload Disabled [ 3679.704081] e1000 0000:0a:03.0: Flow Control Disabled [ 3679.738364] e1000 0000:0a:03.0: eth4: (PCI:33MHz:64-bit) 00:04:23:b6:35:6c [ 3679.738371] e1000 0000:0a:03.0: eth4: Intel(R) PRO/1000 Network Connection [ 3679.738538] e1000 0000:0a:03.1: PCI->APIC IRQ transform: INT B -> IRQ 78 [ 3680.046060] e1000 0000:0a:03.1: eth5: (PCI:33MHz:64-bit) 00:04:23:b6:35:6d [ 3680.046067] e1000 0000:0a:03.1: eth5: Intel(R) PRO/1000 Network Connection [ 3682.132415] e1000: eth0 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None [ 3682.224423] e1000: eth1 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None [ 3682.316385] e1000: eth2 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None [ 3682.408391] e1000: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None [ 3682.500396] e1000: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None [ 3682.708401] e1000: eth5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX At first I thought it was the NIC drivers but I'm not so sure. I really have no idea where else to look at the moment. Any help is greatly appreciated as I'm struggling with this. If you need more information just ask. Thanks! [1]http://www.cs.fsu.edu/~baker/devices/lxr/http/source/linux/Documentation/networking/e1000.txt?v=2.6.11.8 [2] http://support.dell.com/support/edocs/systems/pe2850/en/ug/t1390aa.htm
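    For anyone following along, a small sketch that wraps the same /proc/net/dev sampling used above into a periodic watch and adds the kernel softnet counters (in /proc/net/softnet_stat the values are hexadecimal, one row per CPU, and the second column counts packets dropped because the per-CPU backlog, net.core.netdev_max_backlog, overflowed):

        #!/bin/sh
        # Sample interface drop counters and kernel backlog drops every 2 seconds.
        while true; do
            date +%T
            awk '{ print $1, $5 }' /proc/net/dev
            awk '{ print "cpu" NR-1 " backlog_drops=0x" $2 }' /proc/net/softnet_stat
            sleep 2
        done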

    Read the article

  • Exception Handling

    - by raghu.yadav
    Here are a few links in which Andre demonstrated differences-of-handling-jboexception-in and handling-exceptions-in-oracle-ui-shell. However, in this post we will see how to display the exception in a popup while staying on the same page. I use a similar use case as Andre; however, we'll not be using the Exception Handling property from the task flow. Instead we use a popup and invoke it programmatically. This is a dynamic region example where the user can select the Jobs or Locations links to edit the records of the corresponding tables, staying on the same page, and click Commit to save changes. To generate an exception we deliberately change Commit to CommitAction in the commit action binding code created in the bean (same as Andre), catch the exception, and add a brief description of the exception into #{pageFlowScope.message}. Drop a Popup component after the Commit button and add a Dialog within the popup, bind the popup component to the backing bean, and invoke it in the catch clause as shown below.

        public String Commit() {
            try {
                BindingContainer bindings = getBindings();
                OperationBinding operationBinding = bindings.getOperationBinding("CommitAction");
                Object result = operationBinding.execute();
                if (!operationBinding.getErrors().isEmpty()) {
                    return null;
                }
            } catch (NullPointerException e) {
                setELValue("#{pageFlowScope.message}", "NullPointerException...");
                e.printStackTrace();
                String popupId = this.getPopup().getClientId(FacesContext.getCurrentInstance());
                PatternsPublicUtil.invokePopup(popupId);
            }
            return null;
        }

        private void setELValue(String el, String value) {
            FacesContext facesContext = FacesContext.getCurrentInstance();
            ELContext elContext = facesContext.getELContext();
            ExpressionFactory expressionFactory = facesContext.getApplication().getExpressionFactory();
            ValueExpression valueExp = expressionFactory.createValueExpression(elContext, el, Object.class);
            valueExp.setValue(elContext, value);
        }

    Read the article

  • Create a Remote Git Repository from an Existing XCode Repository

    - by codeWithoutFear
    Introduction Distributed version control systems (VCS’s), like Git, provide a rich set of features for managing source code.  Many development tools, including XCode, provide built-in support for various VCS’s.  These tools provide simple configuration with limited customization to get you up and running quickly while still providing the safety net of basic version control. I hate losing (and re-doing) work.  I have OCD when it comes to saving and versioning source code.  Save early, save often, and commit to the VCS often.  I also hate merging code.  Smaller and more frequent commits enable me to minimize merge time and effort as well. The work flow I prefer even for personal exploratory projects is: Make small local changes to the codebase to create an incrementally improved (and working) system. Commit these changes to the local repository.  Local repositories are quick to access, function even while offline, and provides the confidence to continue making bold changes to the system.  After all, I can easily recover to a recent working state. Repeat 1 & 2 until the codebase contains “significant” functionality and I have connectivity to the remote repository. Push the accumulated changes to the remote repository.  The smaller the change set, the less likely extensive merging will be required.  Smaller is better, IMHO. The remote repository typically has a greater degree of fault tolerance and active management dedicated to it.  This can be as simple as a network share that is backed up nightly or as complex as dedicated hardware with specialized server-side processing and significant administrative monitoring. XCode’s out-of-the-box Git integration enables steps 1 and 2 above.  Time Machine backups of the local repository add an additional degree of fault tolerance, but do not support collaboration or take advantage of managed infrastructure such as on-premises or cloud-based storage. Creating a Remote Repository These are the steps I use to enable the full workflow identified above.  For simplicity the “remote” repository is created on the local file system.  This location could easily be on a mounted network volume. Create a Test Project My project is called HelloGit and is located at /Users/Don/Dev/HelloGit.  Be sure to commit all outstanding changes.  XCode always leaves a single changed file for me after the project is created and the initial commit is submitted. Clone the Local Repository We want to clone the XCode-created Git repository to the location where the remote repository will reside.  In this case it will be /Users/Don/Dev/RemoteHelloGit. Open the Terminal application. Clone the local repository to the remote repository location: git clone /Users/Don/Dev/HelloGit /Users/Don/Dev/RemoteHelloGit Convert the Remote Repository to a Bare Repository The remote repository only needs to contain the Git database.  It does not need a checked out branch or local files. Go to the remote repository folder: cd /Users/Don/Dev/RemoteHelloGit Indicate the repository is “bare”: git config --bool core.bare true Remove files, leaving the .git folder: rm -R * Remove the “origin” remote: git remote rm origin Configure the Local Repository The local repository should reference the remote repository.  The remote name “origin” is used by convention to indicate the originating repository.  This is set automatically when a repository is cloned.  We will use the “origin” name here to reflect that relationship. 
    Go to the local repository folder: cd /Users/Don/Dev/HelloGit
    Add the remote: git remote add origin /Users/Don/Dev/RemoteHelloGit

    Test Connectivity

    Any changes made to the local Git repository can be pushed to the remote repository subject to the merging rules Git enforces.

    Create a new local file: date > date.txt
    Add the new file to the local index: git add date.txt
    Commit the change to the local repository: git commit -m "New file: date.txt"
    Push the change to the remote repository: git push origin master

    Now you can save, commit, and push/pull to your OCD hearts’ content! Code without fear! --Don
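    For convenience, here are the commands above pulled together into a single sequence (same paths as in this walkthrough):

        # Clone the XCode-created repository to the "remote" location, then make it bare.
        git clone /Users/Don/Dev/HelloGit /Users/Don/Dev/RemoteHelloGit
        cd /Users/Don/Dev/RemoteHelloGit
        git config --bool core.bare true
        rm -R *
        git remote rm origin

        # Point the local repository at the new remote and push a test change.
        cd /Users/Don/Dev/HelloGit
        git remote add origin /Users/Don/Dev/RemoteHelloGit
        date > date.txt
        git add date.txt
        git commit -m "New file: date.txt"
        git push origin master

    (Equivalently, git clone --bare or git init --bare would create the bare repository in a single step.)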

    Read the article

  • What is a correct/polite way to inherit from an abandoned open-source project for a new open-source project?

    - by Kabumbus
    My team just tried to contact some guys from an old open source project hosted on code.google.com. We told them that we'd like to join their project and commit to it — at least to some branch of it — but no one responded to us. We tried everyone, owners and committers; no one was in any way active, and no one replied. But we have some code to commit and we really would love to continue work on that project. So we need to create a new project. We came up with a name for it which is close to but not a duplicate of the name of the project we want to inherit from. How should we do our first commit, and what should the commit message be? Should we just copy their code to our repository with a comment like "we inherited this code, we found it here under such and such a license ... now we're upgrading it to this more/less strict license ..."? Or should we just use their code as our first commit, with updates saying "we inherited from ... we made such and such changes ..."?
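    If it helps make the question concrete, here is one hedged sketch of what that first commit could look like, assuming the old project's history can be cloned with Git (all names and URLs are placeholders, and the license wording is deliberately left open, as above):

        # Start from the abandoned project's history rather than a flat copy of the code,
        # so the original authors stay visible commit-by-commit.
        git clone https://code.example.com/old-project.git new-project        # placeholder URL
        cd new-project
        git remote rename origin upstream
        git remote add origin git@example.com:our-team/new-project.git        # placeholder remote

        # Record provenance and the licensing decision as the first new commit.
        git commit --allow-empty -m "Continue development of old-project here; code inherited under its original license (see LICENSE)"
        git push origin master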

    Read the article

  • [Windows 8] Update TextBox’s binding on TextChanged

    - by Benjamin Roux
    Since UpdateSourceTrigger is not available in WinRT we cannot update the text’s binding of a TextBox at will (or at least not easily) especially when using MVVM (I surely don’t want to write behind-code to do that in each of my apps !). Since this kind of demand is frequent (for example to disable of button if the TextBox is empty) I decided to create some attached properties to to simulate this missing behavior. namespace Indeed.Controls { public static class TextBoxEx { public static string GetRealTimeText(TextBox obj) { return (string)obj.GetValue(RealTimeTextProperty); } public static void SetRealTimeText(TextBox obj, string value) { obj.SetValue(RealTimeTextProperty, value); } public static readonly DependencyProperty RealTimeTextProperty = DependencyProperty.RegisterAttached("RealTimeText", typeof(string), typeof(TextBoxEx), null); public static bool GetIsAutoUpdate(TextBox obj) { return (bool)obj.GetValue(IsAutoUpdateProperty); } public static void SetIsAutoUpdate(TextBox obj, bool value) { obj.SetValue(IsAutoUpdateProperty, value); } public static readonly DependencyProperty IsAutoUpdateProperty = DependencyProperty.RegisterAttached("IsAutoUpdate", typeof(bool), typeof(TextBoxEx), new PropertyMetadata(false, OnIsAutoUpdateChanged)); private static void OnIsAutoUpdateChanged(DependencyObject sender, DependencyPropertyChangedEventArgs e) { var value = (bool)e.NewValue; var textbox = (TextBox)sender; if (value) { Observable.FromEventPattern<TextChangedEventHandler, TextChangedEventArgs>( o => textbox.TextChanged += o, o => textbox.TextChanged -= o) .Do(_ => textbox.SetValue(TextBoxEx.RealTimeTextProperty, textbox.Text)) .Subscribe(); } } } } .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } The code is composed of two attached properties. The first one “RealTimeText” reflects the text in real time (updated after each TextChanged event). The second one is only used to enable the functionality. To subscribe to the TextChanged event I used Reactive Extensions (Rx-Metro package in Nuget). 
If you’re not familiar with this framework, just replace that code with a simple event handler: textbox.TextChanged += (o, e) => textbox.SetValue(TextBoxEx.RealTimeTextProperty, textbox.Text); To use these attached properties, it’s fairly simple: <TextBox Text="{Binding Path=MyProperty, Mode=TwoWay}" ic:TextBoxEx.IsAutoUpdate="True" ic:TextBoxEx.RealTimeText="{Binding Path=MyProperty, Mode=TwoWay}" /> Just make sure to create a binding (in TwoWay mode) for both Text and RealTimeText. Hope this helps!

    Read the article

  • Non-Dom Element Event Binding with jQuery

    - by Rick Strahl
    Yesterday I had a short discussion with Dave Reed on Twitter regarding setting up fake ‘events’ on objects that are hookable. jQuery makes it real easy to bind events on DOM elements and with a little bit of extra work (that I didn’t know about) you can also set up ‘event’ bindings on non-DOM objects. Assume for a second that you have a simple JavaScript object like this: var item = { sku: "wwhelp" , foo: function() { alert('orginal foo function'); } }; and you want to be notified when the foo function is called. You can use jQuery to bind the handler like this: $(item).bind("foo", function () { alert('foo Hook called'); } ); Binding alone won’t actually cause the handler to be triggered so when you call: item.foo(); you only get the ‘original’ message. In order to fire both the original handler and the bound event hook you have to use the .trigger() function: $(item).trigger("foo"); Now if you do the following complete sequence: var item = { sku: "wwhelp" , foo: function() { alert('orginal foo function'); } }; $(item).bind("foo", function () { alert('foo hook called'); } ); $(item).trigger("foo"); You’ll see the ‘hook’ message first followed by the ‘original’ message fired in succession. In other words, using this mechanism you can hook standard object functions and chain events to them in a way similar to the way you can do with DOM elements. The main difference is that the ‘event’ has to be explicitly triggered in order for this to happen rather than just calling the method directly. .trigger() relies on some internal logic that checks for event bindings on the object (attached via an expando property) which .trigger() searches for in its bound event list. Once the ‘event’ is found it’s called prior to execution of the original function. This is pretty useful as it allows you to create standard JavaScript objects that can act as event handlers and are effectively hookable without having to explicitly override event definitions with JavaScript function handlers. You get all the benefits of jQuery’s event methods including the ability to hook up multiple events to the same handler function and the ability to uniquely identify each specific event instance with postfix string names (i.e. .bind("MyEvent.MyName") and .unbind("MyEvent.MyName") to bind MyEvent). Watch out for an .unbind() Bug Note that there appears to be a bug with .unbind() in jQuery that doesn’t reliably unbind an event and results in an elem.removeEventListener is not a function error. The following code demonstrates: var item = { sku: "wwhelp", foo: function () { alert('orginal foo function'); } }; $(item).bind("foo.first", function () { alert('foo hook called'); }); $(item).bind("foo.second", function () { alert('foo hook2 called'); }); $(item).trigger("foo"); setTimeout(function () { $(item).unbind("foo"); // $(item).unbind("foo.first"); // $(item).unbind("foo.second"); $(item).trigger("foo"); }, 3000); The setTimeout call delays the unbinding and is supposed to remove the event binding on the foo function. It fails both when unbinding with the plain "foo" value and when removing both of the postfixed event handlers ("foo.first"/"foo.second") explicitly. Oddly, the following, which removes only one of the two handlers, works: setTimeout(function () { //$(item).unbind("foo"); $(item).unbind("foo.first"); // $(item).unbind("foo.second"); $(item).trigger("foo"); }, 3000); This actually works, which is weird, as the code in unbind tries to unbind using a DOM method that doesn’t exist.
<shrug> A partial workaround for unbinding all ‘foo’ events is the following: setTimeout(function () { $.event.special.foo = { teardown: function () { alert('teardown'); return true; } }; $(item).unbind("foo"); $(item).trigger("foo"); }, 3000); which is a bit cryptic to say the least, but it seems to work more reliably. I can’t take credit for any of this – thanks to Dave Reed and Damien Edwards who pointed out some of these behaviors. I didn’t find any good descriptions of the process so thought it’d be good to write it down here. Hope some of you find this helpful. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery

    Read the article

  • [Windows 8] Application bar popup button

    - by Benjamin Roux
    Here is a small control to create an application bar button which will display content in a popup when the button is clicked. Visually it gives this. So how do we create it? First you have to use the AppBarPopupButton control below. namespace Indeed.Controls { public class AppBarPopupButton : Button { public FrameworkElement PopupContent { get { return (FrameworkElement)GetValue(PopupContentProperty); } set { SetValue(PopupContentProperty, value); } } public static readonly DependencyProperty PopupContentProperty = DependencyProperty.Register("PopupContent", typeof(FrameworkElement), typeof(AppBarPopupButton), new PropertyMetadata(null, (o, e) => (o as AppBarPopupButton).CreatePopup())); private Popup popup; private SerialDisposable sizeChanged = new SerialDisposable(); protected override void OnTapped(Windows.UI.Xaml.Input.TappedRoutedEventArgs e) { base.OnTapped(e); if (popup != null) { var transform = this.TransformToVisual(Window.Current.Content); var offset = transform.TransformPoint(default(Point)); sizeChanged.Disposable = PopupContent.ObserveSizeChanged().Do(_ => popup.VerticalOffset = offset.Y - (PopupContent.ActualHeight + 20)).Subscribe(); popup.HorizontalOffset = offset.X + 24; popup.DataContext = this.DataContext; popup.IsOpen = true; } } private void CreatePopup() { popup = new Popup { IsLightDismissEnabled = true }; popup.Closed += (o, e) => this.GetParentOfType<AppBar>().IsOpen = false; popup.ChildTransitions = new Windows.UI.Xaml.Media.Animation.TransitionCollection(); popup.ChildTransitions.Add(new Windows.UI.Xaml.Media.Animation.PopupThemeTransition()); var container = new Grid(); container.Children.Add(PopupContent); popup.Child = container; } } } The ObserveSizeChanged method is just an extension method which observes the SizeChanged event (using Reactive Extensions - the Rx-Metro package on NuGet). If you’re not familiar with Rx, you can replace this line (and the SerialDisposable stuff) with a simple subscription to the SizeChanged event (using +=), but don’t forget to unsubscribe from it!
public static IObservable<Unit> ObserveSizeChanged(this FrameworkElement element) { return Observable.FromEventPattern<SizeChangedEventHandler, SizeChangedEventArgs>( o => element.SizeChanged += o, o => element.SizeChanged -= o) .Select(_ => Unit.Default); } The GetParentOfType extension method just retrieves the first parent of the given type (it’s a common extension method that every Windows 8 developer should have created!). You can of course tweak the control (for example if you want to center the content on the button, or anything else) to fit your needs. How do you use this control? It’s very simple: in an AppBar control, just add it and define the PopupContent property. <ic:AppBarPopupButton Style="{StaticResource RefreshAppBarButtonStyle}" HorizontalAlignment="Left"> <ic:AppBarPopupButton.PopupContent> <Grid> [...] </Grid> </ic:AppBarPopupButton.PopupContent> </ic:AppBarPopupButton> When the button is clicked the popup is displayed. When the popup is closed, the app bar is closed too. I hope this will help you!

    Read the article

  • Doing unit and integration tests with the Web API HttpClient

    - by cibrax
    One of the nice things about the new HttpClient in System.Net.Http is the support for mocking responses or handling requests in an HTTP server hosted in memory. While the first option is useful for scenarios in which we want to test our client code in isolation (unit tests, for example), the second one enables more complete integration testing scenarios that could include more components in the stack, such as model binders or message handlers.   The HttpClient can receive an HttpMessageHandler as an argument in one of its constructors. public class HttpClient : HttpMessageInvoker { public HttpClient(); public HttpClient(HttpMessageHandler handler); public HttpClient(HttpMessageHandler handler, bool disposeHandler); } For the first scenario, you can create a new HttpMessageHandler that fakes the response, which you can use in your unit test. The only requirement is that you somehow inject an HttpClient with this custom handler into the client code. public class FakeHttpMessageHandler : HttpMessageHandler { HttpResponseMessage response; public FakeHttpMessageHandler(HttpResponseMessage response) { this.response = response; } protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken) { var tcs = new TaskCompletionSource<HttpResponseMessage>(); tcs.SetResult(response); return tcs.Task; } } In a unit test, you can do something like this.
var fakeResponse = new HttpResponseMessage(); var fakeHandler = new FakeHttpMessageHandler(fakeResponse); var httpClient = new HttpClient(fakeHandler); var customerService = new CustomerService(httpClient); // Do something // Asserts CustomerService in this case is the class under test, and the one that receives an HttpClient initialized with our fake handler. For the second scenario, integration tests, there is an in-memory host, “System.Web.Http.HttpServer”, which also derives from HttpMessageHandler and which you can use with an HttpClient instance in your test. This has been discussed already in these two great posts from Pedro and Filip.
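For the in-memory host scenario, a rough sketch of such a test is shown below; the route template, the CustomersController it implies, and the NUnit-style assertion are assumptions for illustration only, not code taken from those posts.

// Assumed usings: System.Net, System.Net.Http, System.Web.Http, NUnit.Framework
[Test]
public void Get_customer_returns_ok_against_in_memory_host()
{
    var config = new HttpConfiguration();
    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional });

    // HttpServer derives from HttpMessageHandler, so it can back an HttpClient directly;
    // the request runs through routing, message handlers and model binding without touching the network.
    using (var server = new HttpServer(config))
    using (var client = new HttpClient(server))
    {
        // The host name is arbitrary because the request never leaves the process.
        var response = client.GetAsync("http://localhost/api/customers/1").Result;
        Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
    }
}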

    Read the article

  • Emacs: annoying Flymake dialog box.

    - by baol
    Hello, I have the following lines in my ~/.emacs.d/init.el (custom-set-variables '(flymake-allowed-file-name-masks (quote ( ("\\.cc\\'" flymake-simple-make-init) ("\\.cpp\\'" flymake-simple-make-init))))) (add-hook 'find-file-hook 'flymake-find-file-hook) When I open a cc/cpp file that has a Makefile with the following content in the same folder, I get proper on-the-fly compilation and error reporting (Flymake will check the syntax and report errors and warnings during code editing): .PHONY: check-syntax check-syntax: $(CXX) -Wall -Wextra -pedantic -fsyntax-only $(CHK_SOURCES) The problem is that when I open a .cc file that has no corresponding Makefile, I get an annoying dialog box that warns me about flymake being disabled for every file opened. Is there some hook I can use to disable that warning? Can you provide sample elisp code and an explanation of how you found the proper hook?
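    One possible direction, sketched below under the assumption that this is the legacy flymake.el (which defines the flymake-gui-warnings-enabled variable), is to turn off the GUI dialogs entirely, or to enable Flymake only when a Makefile is actually present so the warning never fires:

    ;; Sketch only: assumes legacy flymake.el, which defines `flymake-gui-warnings-enabled'.
    ;; With this set to nil, warnings go to the echo area instead of a dialog box.
    (setq flymake-gui-warnings-enabled nil)

    ;; Alternative sketch: replace the blanket flymake-find-file-hook with a guarded one
    ;; that only turns Flymake on when a Makefile sits next to the visited .cc/.cpp file.
    (defun my-flymake-find-file-hook ()
      (when (and buffer-file-name
                 (string-match "\\.\\(cc\\|cpp\\)\\'" buffer-file-name)
                 (file-exists-p (expand-file-name "Makefile"
                                                  (file-name-directory buffer-file-name))))
        (flymake-mode 1)))
    (add-hook 'find-file-hook 'my-flymake-find-file-hook)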

    Read the article

  • Populate a WCF syndication podcast using MP3 ID3 metadata tags

    - by brian_ritchie
    In the last post, I showed how to create a podcast using WCF syndication.  A podcast is an RSS feed containing a list of audio files to which users can subscribe.  The podcast not only contains links to the audio files, but also metadata about each episode.  A cool approach to building the feed is reading this metadata from the ID3 tags on the MP3 files used for the podcast. One library to do this is TagLib-Sharp.  Here is some sample code: var taggedFile = TagLib.File.Create(f); var fileInfo = new FileInfo(f); var item = new iTunesPodcastItem() { title = taggedFile.Tag.Title, size = fileInfo.Length, url = feed.baseUrl + fileInfo.Name, duration = taggedFile.Properties.Duration, mediaType = feed.mediaType, summary = taggedFile.Tag.Comment, subTitle = taggedFile.Tag.FirstAlbumArtist, id = fileInfo.Name }; if (!string.IsNullOrEmpty(taggedFile.Tag.Album)) item.publishedDate = DateTimeOffset.Parse(taggedFile.Tag.Album); This reads the ID3 tags into an object for later use in creating the syndication feed.  When the MP3 is created, these tags are set...or they can be set after the fact using the Properties dialog in Windows Explorer.  The only "hack" is that there isn't an easily accessible tag for "subtitle" or "published date", so I used other tags in this example. Feel free to change this to meet your purposes.  You could remove the subtitle & use the file modified date, for example. That takes care of the episodes; the feed-level settings we'll load from an XML file: <?xml version="1.0" encoding="utf-8" ?> <iTunesPodcastFeed baseUrl ="" title="" subTitle="" description="" copyright="" category="" ownerName="" ownerEmail="" mediaType="audio/mp3" mediaFiles="*.mp3" imageUrl="" link="" /> Here is the full code put together.
Read the feed XML file and deserialize it into an iTunesPodcastFeed class, then loop over the files in a directory, reading the ID3 tags from the audio files. public static iTunesPodcastFeed CreateFeedFromFiles(string podcastDirectory, string podcastFeedFile) { XmlSerializer serializer = new XmlSerializer(typeof(iTunesPodcastFeed)); iTunesPodcastFeed feed; using (var fs = File.OpenRead(Path.Combine(podcastDirectory, podcastFeedFile))) { feed = (iTunesPodcastFeed)serializer.Deserialize(fs); } foreach (var f in Directory.GetFiles(podcastDirectory, feed.mediaFiles)) { try { var taggedFile = TagLib.File.Create(f); var fileInfo = new FileInfo(f); var item = new iTunesPodcastItem() { title = taggedFile.Tag.Title, size = fileInfo.Length, url = feed.baseUrl + fileInfo.Name, duration = taggedFile.Properties.Duration, mediaType = feed.mediaType, summary = taggedFile.Tag.Comment, subTitle = taggedFile.Tag.FirstAlbumArtist, id = fileInfo.Name }; if (!string.IsNullOrEmpty(taggedFile.Tag.Album)) item.publishedDate = DateTimeOffset.Parse(taggedFile.Tag.Album); feed.Items.Add(item); } catch { // ignore files that can't be accessed successfully } } return feed; } Usually putting a "try...catch" like this is bad, but in this case I'm just skipping over files that are locked while they are being uploaded to the web site. Here is the code from the last couple of posts.

    Read the article

  • Implementing synchronous MediaTypeFormatters in ASP.NET Web API

    - by cibrax
    One of the main characteristics of MediaTypeFormatters in ASP.NET Web API is that they leverage the Task Parallel Library (TPL) for reading or writing a model into a stream. When you derive your class from the base class MediaTypeFormatter, you have to implement either the WriteToStreamAsync or the ReadFromStreamAsync method, for writing or reading a model from a stream respectively. These two methods return a Task, which internally does all the serialization work, as illustrated below. public abstract class MediaTypeFormatter { public virtual Task WriteToStreamAsync(Type type, object value, Stream writeStream, HttpContent content, TransportContext transportContext); public virtual Task<object> ReadFromStreamAsync(Type type, Stream readStream, HttpContent content, IFormatterLogger formatterLogger); } However, most of the time serialization is a safe operation that can be done synchronously. In fact, many of the serializer classes you will find in the .NET Framework only provide sync methods. So the question is, how can you transform that synchronous work into a Task? Creating a new task using the method Task.Factory.StartNew for doing all the serialization work would probably be the typical answer. That would work, as a new task is going to be scheduled. However, that might involve some unnecessary context switches, which are out of our control and might affect performance, especially in server code.   If you take a look at the source code of the MediaTypeFormatters shipped as part of the framework, you will notice that they actually use another pattern, which uses a TaskCompletionSource class.
public Task WriteToStreamAsync(Type type, object value, Stream writeStream, HttpContent content, TransportContext transportContext) { var tsc = new TaskCompletionSource<AsyncVoid>(); tsc.SetResult(default(AsyncVoid)); // Do all the serialization work here synchronously return tsc.Task; } /// <summary> /// Used as the T in a "conversion" of a Task into a Task{T} /// </summary> private struct AsyncVoid { } They are basically doing all the serialization work synchronously and using a TaskCompletionSource to return an already completed task. To conclude this post, this is another approach you might want to consider when using serializers that are not compatible with an async model. Update: Henrik Nielsen from the ASP.NET team pointed out the existence of a built-in media type formatter base class for writing sync formatters: BufferedMediaTypeFormatter (http://t.co/FxOfeI5x).

    Read the article

  • How to add custom hooks to controllers in ASP.NET MVC2

    - by Adrian
    Hi, I've just started a new project in ASP.NET 4.0 with MVC 2. What I need to be able to do is have a custom hook at the start and end of each action of the controller, e.g. public void Index() { *** call to the start custom hook in externalfile.cs (is empty so does nothing) ViewData["welcomeMessage"] = "Hello World"; *** call to the end custom hook in externalfile.cs (changes "Hello World" to "Hi World") return View(); } The view then sees welcomeMessage as "Hi World" after it has been changed in the custom hook. The custom hook would need to be in an external file and not change the "core" compiled code. This causes a problem because, as far as I know, ASP.NET MVC has to be compiled. Does anyone have any advice on how this can be achieved? Thanks
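    One way this kind of start/end hook is commonly handled in MVC 2 is with a custom action filter; the rough sketch below is illustrative only (the attribute name and the message rewrite are invented, and the filter still has to be compiled into some assembly, even if it lives in its own file or project):

    // Assumed using: System.Web.Mvc
    public class WelcomeHookAttribute : ActionFilterAttribute
    {
        // Runs before the action body (the "start" hook).
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            // Inspect or seed filterContext.Controller.ViewData here.
        }

        // Runs after the action body, before the result executes (the "end" hook).
        public override void OnActionExecuted(ActionExecutedContext filterContext)
        {
            var viewData = filterContext.Controller.ViewData;
            if (Equals(viewData["welcomeMessage"], "Hello World"))
                viewData["welcomeMessage"] = "Hi World";
        }
    }

    // Applied to an action (or to a whole controller):
    [WelcomeHook]
    public ActionResult Index()
    {
        ViewData["welcomeMessage"] = "Hello World";
        return View();
    }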

    Read the article

  • Retrieving recent tweets using LINQ

    - by brian_ritchie
    There are a few different APIs for accessing Twitter from .NET.  In this example, I'll use linq2twitter.  Other APIs can be found on Twitter's development site. First off, we'll use the LINQ provider to pull in the recent tweets. public static Status[] GetLatestTweets(string screenName, int numTweets) { try { var twitterCtx = new LinqToTwitter.TwitterContext(); var list = from tweet in twitterCtx.Status where tweet.Type == StatusType.User && tweet.ScreenName == screenName orderby tweet.CreatedAt descending select tweet; // using Take() on array because it was failing against the provider var recentTweets = list.ToArray().Take(numTweets).ToArray(); return recentTweets; } catch { return new Status[0]; } } Once they have been retrieved, they would be placed inside an MVC model. Next, the tweets need to be formatted for display. I've defined an extension method to aid with date formatting: public static class DateTimeExtension { public static string ToAgo(this DateTime date2) { DateTime date1 = DateTime.Now; if (DateTime.Compare(date1, date2) >= 0) { TimeSpan ts = date1.Subtract(date2); if (ts.TotalDays >= 1) return string.Format("{0} days", (int)ts.TotalDays); else if (ts.Hours > 2) return string.Format("{0} hours", ts.Hours); else if (ts.Hours > 0) return string.Format("{0} hours, {1} minutes", ts.Hours, ts.Minutes); else if (ts.Minutes > 5) return string.Format("{0} minutes", ts.Minutes); else if (ts.Minutes > 0) return string.Format("{0} minutes, {1} seconds", ts.Minutes, ts.Seconds); else return string.Format("{0} seconds", ts.Seconds); } else return "Not valid"; } } Finally, here is the piece of the view used to render the tweets.
<ul class="tweets"> <% foreach (var tweet in Model.Tweets) { %> <li class="tweets"> <span class="tweetTime"><%=tweet.CreatedAt.ToAgo() %> ago</span>: <%=tweet.Text%> </li> <%} %> </ul>

    Read the article

  • Emacs auto-minor-mode based on extension

    - by vermiculus
    I found this question somewhat on the topic, but is there a way [in emacs] to set a minor mode (or a list thereof) based on extension? For example, it's pretty easy to find out that major modes can be manipulated like so (setq auto-mode-alist (cons '("\\.notes$" . text-mode) auto-mode-alist)) and what I'd ideally like to be able to do is (setq auto-minor-mode-alist (cons '("\\.notes$" . auto-fill-mode) auto-minor-mode-alist)) The accepted answer of the linked question mentions hooks, specifically temp-buffer-setup-hook. To use this, you have to add a function to the hook like so (add-hook 'temp-buffer-setup-hook 'my-func-to-set-minor-mode) My question is two-fold: Is there an easier way to do this, similar to major modes? If not, how would one write the function for the hook? It needs to check the file path against a regular expression. If it matches, activate the desired mode (e.g. auto-fill-mode).
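    For the second part, a rough sketch of such a hook function is shown below; the alist name and the helper are invented for illustration, and it hangs off find-file-hook rather than temp-buffer-setup-hook:

    ;; Sketch: a home-grown auto-minor-mode-alist driven from find-file-hook.
    (defvar my-auto-minor-mode-alist
      '(("\\.notes\\'" . auto-fill-mode))
      "Alist of (REGEXP . MINOR-MODE-FUNCTION) applied to newly visited files.")

    (defun my-enable-minor-modes-by-extension ()
      (when buffer-file-name
        (dolist (entry my-auto-minor-mode-alist)
          (when (string-match (car entry) buffer-file-name)
            (funcall (cdr entry) 1)))))   ; call the minor mode with 1 to enable it

    (add-hook 'find-file-hook 'my-enable-minor-modes-by-extension)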

    Read the article

  • Fluid CSS: float column with overflow

    - by Ates Goral
    I'm using a fluid layout in the new theme that I'm working on for my blog. I often blog about code and include <pre> blocks within the posts. The float: left column for the content area has a max-width so that the column stops at a certain maximum width and can also be shrunk:
    +----------+     +------+
    | text     |     | text |
    |          |     |      |
    |          |     |      |
    |          |     |      |
    |          |     |      |
    |          |     |      |
    +----------+     +------+
        max           shrunk
    What I want is for the <pre> elements to be wider than the text column so that I can fit 80-character-wrapped code without horizontal scroll bars. But I want the <pre> elements to overflow from the content area, without affecting its fluidity:
    +----------+        +------+
    | text     |        | text |
    |          |        |      |
    +----------+--+     +------+------+
    | code        |     | code        |
    +----------+--+     +------+------+
    |          |        |      |
    +----------+        +------+
        max              shrunk
    But, max-width stops being fluid once I insert the overhanging <pre> in there: the width of the column remains at the specified max-width even when I shrink the browser beyond that width. I've played around with a bare-minimum scenario to reproduce the problem and noticed that doing either of the following brings back the fluidity: remove the <pre> (doh...), or remove the float: left. The workaround I'm currently using is to insert the <pre> elements into "breaks" in the post column, so that the widths of the post segments and the <pre> segments are managed mutually exclusively:
    +----------+        +------+
    | text     |        | text |
    +----------+        +------+
    +-------------+     +-------------+
    | code        |     | code        |
    +-------------+     +-------------+
    +----------+        +------+
    +----------+        +------+
        max              shrunk
    But this forces me to insert additional closing and opening <div> elements into the post text which I'd rather keep semantically pristine. Admittedly, I don't have a full grasp of how the box model works with floats with overflowing content, so I don't understand why the combination of float: left on the container and the <pre> inside it cripples the max-width of the container. I'm observing the same problem on Firefox/Chrome/Safari/Opera. IE6 (the crazy one) seems happy all the time. This also doesn't seem dependent on quirks/standards mode.
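    A stripped-down sketch of the setup being described (class names and widths are invented here purely to make the scenario concrete) would be roughly:

    /* Fluid post column: grows up to a cap, should still shrink with the window. */
    .post {
        float: left;
        max-width: 42em;
    }

    /* Code blocks are deliberately wider than the column so 80-character lines fit,
       and are allowed to hang out of the content area. */
    .post pre {
        width: 52em;
        overflow: visible;
    }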

    Read the article

  • Mercurial Editor: "abort: The system cannot find the file specified"

    - by Killroy
    I have a problem getting Mercurial to recognise my editor. I have a file, c:\windows\notepad.exe and typing "notepad" at the command prompt works. I can commit by using the "-m" argument to supply the commit title. But a simple "hg commit" brings up the error. A call to "hg --traceback commit" brings up: Traceback (most recent call last): File "mercurial\dispatch.pyc", line 47, in _runcatch File "mercurial\dispatch.pyc", line 466, in _dispatch File "mercurial\dispatch.pyc", line 336, in runcommand File "mercurial\dispatch.pyc", line 517, in _runcommand File "mercurial\dispatch.pyc", line 471, in checkargs File "mercurial\dispatch.pyc", line 465, in <lambda> File "mercurial\util.pyc", line 401, in check File "mercurial\commands.pyc", line 708, in commit File "mercurial\cmdutil.pyc", line 1150, in commit File "mercurial\commands.pyc", line 706, in commitfunc File "mercurial\localrepo.pyc", line 836, in commit File "mercurial\cmdutil.pyc", line 1155, in commiteditor File "mercurial\cmdutil.pyc", line 1184, in commitforceeditor File "mercurial\ui.pyc", line 361, in edit File "mercurial\util.pyc", line 383, in system File "subprocess.pyc", line 470, in call File "subprocess.pyc", line 621, in __init__ File "subprocess.pyc", line 830, in _execute_child WindowsError: [Error 2] The system cannot find the file specified abort: The system cannot find the file specified I've tried setting the HGEDITOR environment variable, setting "visual =" and "editor =" in the Mercurial.ini file. I tried full path as well as command only. I also tried copying the notepad.exe file into both the current folder as well as the mercurial folder. Ideally I would like to use the editor at this location "C:\PortableApps\Notepad++Portable\Notepad++Portable.exe", but at this stage I would be happy with any editor!

    Read the article

  • plugin from github not successfully installing

    - by JohnMerlino
    Hey all, I tried to install the highcharts-rails plugin from github as specified in the instructions: Installation Get the plugin: script/plugin install git://github.com/loudpixel/highcharts-rails.git Run the rake setup: rake highcharts_rails:install But when I run the script/plugin install... it installs only a couple of files, not all the required ones, I presume, because when I run rake highcharts_rails:install I get the following: rake aborted! Don't know how to build task 'highcharts_rails:install' All it installed for me was: jquery.js jrails.js jquery-ui.js I noticed that the site http://github.com/loudpixel/highcharts-rails has all this: file MIT-LICENSE February 08, 2010 Initial commit [abbottry] file README.md February 09, 2010 Added installation section to README [jsiarto] file Rakefile February 08, 2010 Initial commit [abbottry] directory generators/ February 08, 2010 Initial commit [abbottry] file init.rb February 08, 2010 Initial commit [abbottry] directory javascripts/ February 08, 2010 Added jquery 1.3.2 script [abbottry] directory lib/ February 08, 2010 Initial commit [abbottry] directory tasks/ February 08, 2010 Incorrect path to plugin for rake task [abbottry] directory test/ February 08, 2010 Initial commit [abbottry] file uninstall.rb February 08, 2010 Initial commit [abbottry] So I'm not sure what I'm doing wrong to not get these files installed properly. Thanks for any response.

    Read the article

  • Using ExpressionEngine or Joomla templates inside a pre-existing page?

    - by Ethan
    Hey SO, I'm new to both Joomla and ExpressionEngine, and want to know if I can use them the way I'd like. I've already made a full site, and would like to integrate blogging into it. The site is on CodeIgniter. Is there a way I could create a form template for submitting a post which would then save to my Joomla/CodeIgniter DB? And then, on a different page, use a different Joomla/CodeIgniter template to display the blog in the form I would like? Note that this wouldn't necessarily be powered by EE or Joomla. From what I understand, and from all the examples I've seen, you have to make the HTML of the entire page inside their templates. At worst, if neither works, is there anything I can use to do this? Thanks!

    Read the article

  • itunes connect rejection: "The binary you uploaded was invalid. A pre-release beta version of the SD

    - by Adam
    I'm having trouble submitting apps to Apple's App Store. I was using a beta version of Xcode (3.2.3 / iPhone SDK 4.0) and I assumed that if I set the Base SDK and Deployment Target to an acceptable version it would work, but it didn't. I deleted the beta version of Xcode and the SDK using "sudo /Developer/Library/uninstall-devtools --mode=all" and did a fresh install of the old release version (Xcode 3.2.2 / iPhone SDK 3.2), but I still have the same problem. Has anyone run into this before? Is there something left over from the beta version that could still be hanging around causing problems?

    Read the article

  • Why is my Pre to Postfix code not working?

    - by Anthony Glyadchenko
    For a class assignment, I have to use two stacks in C++ to make an equation to be converted to its left to right equivalent: 2+4*(3+4*8) -- 35*4+2 -- 142 Here is the main code: #include <iostream> #include <cstring> #include "ctStack.h" using namespace std; int main (int argc, char * const argv[]) { string expression = "2+4*2"; ctstack *output = new ctstack(expression.length()); ctstack *stack = new ctstack(expression.length()); bool previousIsANum = false; for(int i = 0; i < expression.length(); i++){ switch (expression[i]){ case '(': previousIsANum = false; stack->cmstackPush(expression[i]); break; case ')': previousIsANum = false; char x; while (x != '('){ stack->cmstackPop(x); output->cmstackPush(x); } break; case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': cout << "A number" << endl; previousIsANum = true; output->cmstackPush(expression[i]); break; case '+': previousIsANum = false; cout << "+" << endl; break; case '-': previousIsANum = false; cout << "-" << endl; break; case '*': previousIsANum = false; cout << "*" << endl; break; case '/': previousIsANum = false; cout << "/" << endl; break; default: break; } } char i = ' '; while (stack->ltopOfStack > 0){ stack->cmstackPop(i); output->cmstackPush(i); cout << i << endl; } return 0; } Here is the stack code (watch out!): #include <cstdio> #include <assert.h> #include <new.h> #include <stdlib.h> #include <iostream> class ctstack { private: long* lpstack ; // the stack itself long ltrue ; // constructor sets to 1 long lfalse ; // constructor sets to 0 // offset to top of the stack long lmaxEleInStack ; // maximum possible elements of stack public: long ltopOfStack ; ctstack ( long lnbrOfEleToAllocInStack ) { // Constructor lfalse = 0 ; // set to zero ltrue = 1 ; // set to one assert ( lnbrOfEleToAllocInStack > 0 ) ; // assure positive argument ltopOfStack = -1 ; // ltopOfStack is really an index lmaxEleInStack = lnbrOfEleToAllocInStack ; // set lmaxEleInStack to max ele lpstack = new long [ lmaxEleInStack ] ; // allocate stack assert ( lpstack ) ; // assure new succeeded } ~ctstack ( ) { // Destructor delete [ ] lpstack ; // Delete the stack itself } ctstack& operator= ( const ctstack& ctoriginStack) { // Assignment if ( this == &ctoriginStack ) // verify x not assigned to x return *this ; if ( this -> lmaxEleInStack < ctoriginStack . lmaxEleInStack ) { // if destination stack is smaller than delete [ ] this -> lpstack ; // original stack, delete dest and alloc this -> lpstack = // sufficient memory new long [ ctoriginStack . lmaxEleInStack ] ; assert ( this -> lpstack ) ; // assure new succeeded // reset stack size attribute this -> lmaxEleInStack = ctoriginStack . lmaxEleInStack ; } // copy original to destination stack for ( long i = 0 ; i < ctoriginStack . lmaxEleInStack ; i ++ ) *( this -> lpstack + i ) = *( ctoriginStack . lpstack + i ) ; this -> ltopOfStack = ctoriginStack . 
ltopOfStack ; // reset stack position attribute return *this ; } long cmstackPush (char lplaceInStack ) { // Push Method if ( ltopOfStack == lmaxEleInStack - 1 ) // stack is full can't add element return lfalse ; ltopOfStack ++ ; // acquire free slot *(lpstack + ltopOfStack ) = lplaceInStack ; // add element return ltrue ; // any number other than zero is true } long cmstackPop (char& lretrievedStackEle ) { // Pop Method if ( ltopOfStack < 0 ) { // stack has no elements lretrievedStackEle = -1 ; // dummy element return lfalse ; } lretrievedStackEle = *( lpstack + ltopOfStack ) ; // stack has element -- return it ltopOfStack -- ; // stack is pop'd return ltrue ; // any number other than zero is true } long cmstackLookAtTop (char& lretrievedStackEle ) { // Pop Method if ( ltopOfStack < 0 ) { // stack has no elements lretrievedStackEle = -1 ; // dummy element return lfalse ; } lretrievedStackEle = *( lpstack + ltopOfStack ) ; // stack has element -- return it return ltrue ; // any number other than zero is true } long cmstackHasAnEle (char& lretrievedTopOfStack ) { // Has element method lretrievedTopOfStack = ltopOfStack ; return ltopOfStack < 0 ? lfalse : ltrue ; // 0 - false stack does not have any ele } // 1 - true stack has at least one element long cmstackMaxNbrOfEle (char& lretrievedMaxStackEle ) { // Maximum element method lretrievedMaxStackEle = lmaxEleInStack ; // return stack size in reference var return ltrue ; // Return Maximum Size of Stack } } ; Thanks, Anthony.
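For comparison, a minimal self-contained sketch of the usual shunting-yard operator handling is shown below; it uses std::stack instead of the custom ctstack purely to keep the example short, so treat it as illustrative rather than a drop-in fix for the assignment.

// Sketch: infix -> postfix for single-digit operands using operator precedence.
#include <cctype>
#include <iostream>
#include <stack>
#include <string>

static int precedence(char op) {
    if (op == '+' || op == '-') return 1;
    if (op == '*' || op == '/') return 2;
    return 0;  // '(' and anything unrecognized
}

std::string toPostfix(const std::string& expr) {
    std::stack<char> ops;
    std::string output;
    for (std::string::size_type i = 0; i < expr.size(); ++i) {
        char c = expr[i];
        if (std::isdigit(static_cast<unsigned char>(c))) {
            output += c;                                  // operands go straight to the output
        } else if (c == '(') {
            ops.push(c);
        } else if (c == ')') {
            while (!ops.empty() && ops.top() != '(') {    // pop until the matching '('
                output += ops.top();
                ops.pop();
            }
            if (!ops.empty()) ops.pop();                  // discard the '(' itself
        } else if (c == '+' || c == '-' || c == '*' || c == '/') {
            while (!ops.empty() && precedence(ops.top()) >= precedence(c)) {
                output += ops.top();                      // flush higher/equal precedence operators
                ops.pop();
            }
            ops.push(c);
        }
    }
    while (!ops.empty()) {                                // flush whatever operators remain
        output += ops.top();
        ops.pop();
    }
    return output;
}

int main() {
    // Prints "24348*+*+", which evaluates to 142 for the example in the question.
    std::cout << toPostfix("2+4*(3+4*8)") << std::endl;
    return 0;
}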

    Read the article

< Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >