Search Results

Search found 2788 results on 112 pages for 'ben martin'.

Page 20/112 | < Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >

  • Dependency errors on installing Banshee

    - by Ben Cracknell
    I just installed Ubuntu 12.10 (Verified the ISO hash as well). The VERY first thing I did was open the software centre and try to install banshee. I am met with the following error:

        The following packages have unmet dependencies:
        banshee:
         Depends: libc6 (>= 2.7) but 2.15-0ubuntu20 is to be installed
         Depends: libglib2.0-0 (>= 2.34.1) but 2.34.0-1ubuntu1 is to be installed
         Depends: libgtk2.0-0 (>= 2.24.0) but 2.24.13-0ubuntu2 is to be installed
         Depends: libsoup-gnome2.4-1 (>= 2.27.4) but 2.40.0-0ubuntu1 is to be installed
         Depends: libsoup2.4-1 (>= 2.26.1) but 2.40.0-0ubuntu1 is to be installed
         Depends: libx11-6 (>= 2:1.4.99.1) but 2:1.5.0-1 is to be installed
         Depends: mono-runtime (>= 2.10.1) but 2.10.8.1-5ubuntu1 is to be installed
         Depends: libc0.1 (>= 2.15) but it is not going to be installed
         Depends: libgconf2.0-cil (>= 2.24.0) but 2.24.2-2 is to be installed
         Depends: libgdk-pixbuf2.0-0 (>= 2.26.4) but 2.26.4-0ubuntu1 is to be installed
         Depends: libglib2.0-cil (>= 2.12.10-1ubuntu1) but 2.12.10-4 is to be installed
         Depends: libgtk2.0-cil (>= 2.12.10-1ubuntu1) but 2.12.10-4 is to be installed
         Depends: libmono-cairo4.0-cil (>= 2.10.1) but 2.10.8.1-5ubuntu1 is to be installed
         Depends: libmono-corlib4.0-cil (>= 2.10.1) but 2.10.8.1-5ubuntu1 is to be installed
         Depends: libmono-posix4.0-cil (>= 2.10.1) but 2.10.8.1-5ubuntu1 is to be installed
         Depends: libmono-system-core4.0-cil (>= 2.10.3) but 2.10.8.1-5ubuntu1 is to be installed
         Depends: libmono-system4.0-cil (>= 2.10.7) but 2.10.8.1-5ubuntu1 is to be installed
         Depends: gnome-icon-theme (>= 2.16) but 3.6.0-0ubuntu2 is to be installed

    I should note that the banshee application appears three times when searching for it: http://i.imgur.com/fJOsb.png. Other applications install fine, though. I installed the latest updates and still received the same error. I even tried reinstalling Ubuntu, but the same thing happened.
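    A reasonable first diagnostic (a sketch using standard apt tooling; the package names come from the error above) is to check which archive and version apt has selected for the broken dependencies, and to refresh the index in case a stale or partial package list is the cause:

        # Show the candidate version and source repository apt has chosen
        apt-cache policy banshee libglib2.0-0 libsoup2.4-1
        # Rebuild the package lists, then retry the install
        sudo apt-get update
        sudo apt-get install banshee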

    Read the article

  • Virtualbox shared folder mount from fstab fails; works once bootup is complete

    - by Ben
    I've got Ubuntu 13.10 installed in Virtualbox 4.3. The host machine is Windows. I have a couple of Virtualbox shared folders being mounted by /etc/fstab. Until recently this setup worked just fine, but after upgrading from Ubuntu 13.04 and Virtualbox 4.2 (at essentially the same time) the fstab mounting stopped working. I get the following error during boot:

        An error occurred while mounting /home/benme/Documents.
        keys: Press S to skip mounting or M for manual recovery

    Pressing M for manual recovery and then trying to mount manually also fails:

        root@benme-vb:~# cd /home/benme
        root@benme-vb:/home/benme# mount Documents
        /sbin/mount.vboxsf: mounting failed with the error: No such device

    But if I instead skip mounting during boot, wait for Unity to start and then mount manually in a shell, everything works fine:

        benme-vb ~ % ls
        Documents
        benme-vb ~ % sudo mount Documents
        [sudo] password for benme:
        benme-vb ~ % ls Documents
        # actual file list omitted

    Note that when I mount manually I'm letting mount take all the options from /etc/fstab, and it works. This suggests to me that it's some sort of timing issue, where Virtualbox isn't "ready" to provide the shared file mounts at the point /etc/fstab mounts are run during bootup. Here's the fstab line, just for completeness:

        Documents /home/benme/Documents vboxsf uid=benme,gid=benme,dmode=774,fmode=664 0 0

    Is there something I can do about this from the Ubuntu side? Or does anyone happen to know more about this from the Virtualbox angle? I've found an old report on the Virtualbox bug-tracker with identical symptoms, but in that case the user had updated Virtualbox without updating their guest additions, and resolving that fixed the problem; this isn't happening here, I've definitely got the 4.3 guest additions installed.
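    If it is a timing problem, one workaround (a sketch, not verified on this exact setup; it assumes the vboxsf module simply isn't ready when the boot-time fstab pass runs) is to take the share out of that pass and mount it at the end of boot instead:

        # /etc/fstab: add noauto so boot no longer tries (and fails) to mount it
        Documents /home/benme/Documents vboxsf uid=benme,gid=benme,dmode=774,fmode=664,noauto 0 0

        # /etc/rc.local, before the final "exit 0": mount once the system is up
        mount /home/benme/Documents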

    Read the article

  • Ubuntu One and iPad

    - by Martin G Miller
    I have installed the Ubuntu One app for my iPad 3 running iOS 6. It runs, and on first launch it asked to access my pictures folder on the iPad, which I granted. Now it just displays an Ubuntu One splash screen but does not seem to be doing anything else. I can't figure out how to get it to see my regular Ubuntu One account, which I have had for the last few years and use regularly between the various Ubuntu computers I have.

    Read the article

  • Connecting/Removing a Second Monitor with Multiple Workspaces

    - by Ben
    I am running 12.04 on a laptop. I use workspaces extensively to manage different projects. When I take my laptop home, I plug in an additional monitor. The problem I am having is that whenever I plug in the additional monitor, all of my windows move to workspace 1, regardless of what workspace they were in before I connected it. This forces me to go through all the windows and manually move them back to where I had them. I don't think adding a monitor to each workspace should cause windows to move around between workspaces. Any thoughts or ideas on how to fix this? Know of any workarounds? Thanks in advance!

    Read the article

  • Positioning a sprite in XNA: Use ClientBounds or BackBuffer?

    - by Martin Andersson
    I'm reading a book called "Learning XNA 4.0" written by Aaron Reed. Throughout most of the chapters, whenever he calculates the position of a sprite to use in his call to SpriteBatch.Draw, he uses Window.ClientBounds.Width and Window.ClientBounds.Height. But then all of a sudden, on page 108, he uses PresentationParameters.BackBufferWidth and PresentationParameters.BackBufferHeight instead. I think I understand what the Back Buffer and the Client Bounds are and the difference between those two (or perhaps not?). But I'm mighty confused about when I should use one or the other when it comes to positioning sprites.

    The author mostly uses Client Bounds, both for checking whether a moving sprite is off the screen and for finding a spawn point for new sprites. However, he seems to make two exceptions to this pattern in his book. The first time is when he wants some animated sprites to "move in" and cross the screen from one side to another (page 108 as mentioned). The second and last time is when he positions a texture to work as a button in the lower right corner of a Windows Phone 7 screen (page 379). Anyone got an idea? I shall provide some context if it is of any help. Here's how he usually calls SpriteBatch.Draw (code example from where he positions a sprite in the middle of the screen [page 35]):

        spriteBatch.Draw(texture,
            new Vector2(
                (Window.ClientBounds.Width / 2) - (texture.Width / 2),
                (Window.ClientBounds.Height / 2) - (texture.Height / 2)),
            null, Color.White, 0, Vector2.Zero, 1, SpriteEffects.None, 0);

    And here is the first of four possible cases in a switch statement that sets the position of soon-to-be-spawned moving sprites; this position will later be used in the SpriteBatch.Draw call (page 108):

        // Randomly choose which side of the screen to place enemy,
        // then randomly create a position along that side of the screen
        // and randomly choose a speed for the enemy
        switch (((Game1)Game).rnd.Next(4))
        {
            case 0: // LEFT to RIGHT
                position = new Vector2(
                    -frameSize.X,
                    ((Game1)Game).rnd.Next(0,
                        Game.GraphicsDevice.PresentationParameters.BackBufferHeight
                        - frameSize.Y));
                speed = new Vector2(((Game1)Game).rnd.Next(
                    enemyMinSpeed, enemyMaxSpeed), 0);
                break;
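    For what it's worth, a common way to sidestep the ClientBounds-versus-BackBuffer question entirely (a sketch of my own, not from the book) is to read GraphicsDevice.Viewport, which reflects the area actually being rendered to in both windowed and full-screen modes:

        // Centre a sprite using the viewport, which tracks the drawable area
        // (assumes this runs inside a Game subclass with `texture` already loaded)
        Viewport vp = GraphicsDevice.Viewport;
        Vector2 centre = new Vector2(
            (vp.Width / 2) - (texture.Width / 2),
            (vp.Height / 2) - (texture.Height / 2));
        spriteBatch.Draw(texture, centre, Color.White);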

    Read the article

  • Oracle Remarketer Level expansion in China

    - by martin.morganti(at)oracle.com
    Remarketer Level continues to expand and develop in Oracle's Asia Pacific region. Following the launch of Remarketer Level in Korea and Taiwan earlier in FY11, it is great news to see the number of Remarketer VAD partners in China continue to increase. Recent weeks have seen Beijing Futong Dongfang Technology Co., Ltd. and Digital China (China) Limited both execute the Remarketer VAD addendum. We are delighted that this takes the total of our Remarketer VADs in China to four. This means that we now have even broader coverage to address the opportunity that Remarketer Level presents Oracle and our VAD remarketer partners. So welcome to our two latest additions. To find out who the Remarketer VAD partners are in your country, the latest list is posted here.

    Read the article

  • How to hire support people?

    - by Martin
    I manage a tech support team at a mid-sized software company. We are the last line of support, so issues that we can't fix need to be escalated to the development team. When I joined the company, our team wasn't capable of much beyond using a specific set of troubleshooting steps to solve known issues and escalating anything else to the developers. It's always been a goal of mine for our team to shoulder as much of the support burden as possible without ever bothering a developer. Over the past few years, I, along with several new hires I've made, have made pretty good progress in that direction. We've coded our own troubleshooting tools, which now ship with several of our products. When users have never-before-seen issues, we analyze stack traces and troubleshoot down to the code level, and if we need to submit a bug, half the time we've already identified where in the code the bug is and offered a patch to fix it.

    Here's the problem I've always had: finding support people capable of the work I've described above is really difficult. I've hired 3 people in the past 3 years, and I've probably looked at several thousand resumes and conducted several hundred phone screens to do so. I know it's pretty well accepted that hiring good people is tough in the tech industry, but it seems that support is especially difficult -- there are clearly thousands of people walking around calling themselves support analysts, but 99%+ of them seemingly aren't capable of anything beyond reading a script. I'm curious if anyone has experience recruiting the sort of folks I'm talking about, and if you have any suggestions to share. We've tried all sorts of things -- different job titles/descriptions, using headhunters, etc. And while we've managed to hire a few good folks, it's basically taken us a year to find an appropriate candidate for each opening we've had, and I can't help but wonder if there's something we could be doing differently.

    Read the article

  • Ubuntuone fails to sync with 'File Sync starting...' displayed

    - by a different ben
    I am on 12.04 using ubuntuone-client 3.0.1-0ubuntu1.0.1. I actually have two machines that I sync with, having the same Ubuntu version and ubuntuone-client version. One is fine, the other is not. File sync has frozen within a user-defined folder under my home folder. The graphical client reports in the top-right corner: 'File Sync starting...', but this doesn't change. I have two files with changes that show a syncing overlay in Nautilus. They are both very small text files. Here are some details:

        harb@joan:~$ u1sdtool --status
        State: READY
        connection: With User Not Network
        description: ready to connect
        is_connected: False
        is_error: False
        is_online: False
        queues: WORKING

        harb@joan:~$ u1sdtool --current-transfers
        Current uploads: 0
        Current downloads: 0

    The status seems to suggest that I am not connected to a network, however I am connected to a network - in fact I am accessing this machine via NX. Is it not working because I am connected via NX? Happy to provide other info, just not sure what would be useful.
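    One thing worth trying (a sketch; u1sdtool does expose a connect command, though whether this unsticks a session started under NX is an assumption on my part):

        u1sdtool --connect    # explicitly ask syncdaemon to connect
        u1sdtool --status     # re-check; State should move on from READY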

    Read the article

  • Enabling Multi-touch features of the Apple Magic Mouse on Ubuntu 12.04

    - by Martin
    I want to write a simple app that uses Apple's Magic Trackpad, nothing special, just so that it recognizes at least one gesture. The thing is, Ubuntu itself doesn't really recognize this device. I'm using Ubuntu 12.04 and by default the device works with one finger, but without tap-click or double-tap; three fingers move the window and a three-finger spread makes it fullscreen. I managed to enable two-finger scrolling with "xinput set-prop 8 'Two-Finger Scrolling' 1 1", but that's about it. No other gestures work: ginn doesn't start, giesview detects the device but doesn't respond to any of the gestures, and touchegg doesn't start either. I tried the example apps from Qt that come with Ubuntu but they don't work. So... what do I do? I tried using Qt but all I get from the app is "Got touch without getting TouchBegin for id XX". What else can I use to get my app to work with multitouch devices?

    Read the article

  • When clientTransferProhibited is off to transfer a domain name, couldn't the name be stolen?

    - by Cedric Martin
    I'd like to transfer a domain from one registrar (Key-systems) to another registrar (OVH). I don't really understand the procedure and I'm a bit confused... I read everywhere that clientTransferProhibited prevents people from stealing your domain name. Now apparently, during the course of transferring my domain from Key-systems to OVH, I'll have to change clientTransferProhibited so that the transfer is allowed. Wouldn't my domain then become "stealable" during some amount of time (a few hours / days / weeks)?

    Read the article

  • Project Euler 6: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 6. As always, any feedback is welcome.

        # Euler 6
        # http://projecteuler.net/index.php?section=problems&id=6
        # Find the difference between the sum of the squares of
        # the first one hundred natural numbers and the square
        # of the sum.
        import time
        start = time.time()

        square_of_sums = sum(range(1,101)) ** 2
        sum_of_squares = reduce(lambda agg, i: agg+i**2, range(1,101))
        print square_of_sums - sum_of_squares

        print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
        a=raw_input('Press return to continue')
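    As a side note on the same problem (my sketch, not part of the original post): both sums have closed forms, so the whole thing collapses to arithmetic with no iteration at all.

        # Sum of 1..n is n(n+1)/2; sum of squares of 1..n is n(n+1)(2n+1)/6
        n = 100
        square_of_sums = (n * (n + 1) / 2) ** 2
        sum_of_squares = n * (n + 1) * (2 * n + 1) / 6
        print square_of_sums - sum_of_squares   # 25164150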

    Read the article

  • Should I add old code into my repository?

    - by Ben Brocka
    I've got an SVN repository of a PHP site and the last programmer didn't use source control properly. As a result, only code since I started working here is in the Repo. I have a bunch of old copies of the full code base saved in files as "backups" but they're not in source control. I don't know why most of the copies were saved nor do I have any reasonable way to tag them to a version number. Due to upgrades to the frameworks and database drivers involved, the old code is quite defunct; it no longer works on the current server config. However, the previous programmers had some...unique...logic, so I hate to be completely without old copies to refer to what on earth they were doing. Should I keep this stuff in version control? How? Wall off the old code in separate Tags/branches?
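    If you do decide to keep them, one approach (a sketch; the paths, URL and dates are placeholders) is to import each backup as a dated tag, so the old code stays searchable but is clearly walled off from trunk:

        # Import each old snapshot as its own tag; trunk is untouched
        svn import ~/backups/site-2009-03 \
            http://svn.example.com/repo/tags/legacy-2009-03 \
            -m "Import pre-source-control backup from March 2009"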

    Read the article

  • Tiled/TMX C++ Library/Parser

    - by Ben
    Where can I find an easy-to-use and up-to-date C++ parser/library for the .tmx map format (used by the Tiled Map Editor)? EDIT: David's comment, 'Unless you want to build your game around the format of the parser..', got me thinking... So I have downloaded pugixml, which is an easy-to-use XML parser with very straightforward documentation. Together with the spec for the TMX Map Format, I think I'll give it a try myself, as sketched below. I'll probably compare with Cocos2d-x's CCTMXTiledMap at some point.
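    A minimal pugixml sketch of that first pass (my assumption of what it might look like; "map.tmx" is a placeholder, and the element/attribute names follow the TMX spec's <map> and <layer> elements):

        #include <iostream>
        #include "pugixml.hpp"

        int main() {
            pugi::xml_document doc;
            // load_file returns a parse result that converts to bool
            if (!doc.load_file("map.tmx")) {
                std::cerr << "failed to load map.tmx\n";
                return 1;
            }
            pugi::xml_node map = doc.child("map");
            int width  = map.attribute("width").as_int();   // map size in tiles
            int height = map.attribute("height").as_int();
            std::cout << "map is " << width << " x " << height << " tiles\n";

            // Enumerate layers; decoding the <data> payload (CSV, or
            // base64 + zlib/gzip) is the real work a TMX loader has to do.
            for (pugi::xml_node layer = map.child("layer"); layer;
                 layer = layer.next_sibling("layer")) {
                std::cout << "layer: " << layer.attribute("name").value() << "\n";
            }
            return 0;
        }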

    Read the article

  • Tessellation Texture Coordinates

    - by Stuart Martin
    Firstly some info - I'm using DirectX 11 and C++, and I'm a fairly good programmer, but new to tessellation and not a master graphics programmer. I'm currently implementing a tessellation system for a terrain model, but I have reached a snag. My current system produces a terrain model from a height map, complete with multiple texture coordinates, normals, binormals and tangents for rendering. When I was using a simple vertex and pixel shader combination everything worked perfectly, but since moving to include a hull and domain shader I'm slightly confused and getting strange results. My terrain is a high-detail model, but the textured results are very large patches of solid colour.

    My current setup passes the model data into the vertex shader, then through the hull into the domain, and then finally into the pixel shader for use in rendering. My only thought is that in my hull shader I pass the information into the domain shader per patch, and this is producing the large areas of solid colour because each patch has identical information. Lighting and normal data are also slightly off, but not as visibly as texturing. Below is a copy of my hull shader that does not work correctly, because I think the way that I am passing the data through is incorrect. If anyone can help me out, either by suggesting an alternative way to get the required data into the pixel shader or by showing me the correct way to handle the data in the hull shader, I'd be very thankful!

        cbuffer TessellationBuffer
        {
            float tessellationAmount;
            float3 padding;
        };

        struct HullInputType
        {
            float3 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
            float3 tangent : TANGENT;
            float3 binormal : BINORMAL;
            float2 tex2 : TEXCOORD1;
        };

        struct ConstantOutputType
        {
            float edges[3] : SV_TessFactor;
            float inside : SV_InsideTessFactor;
        };

        struct HullOutputType
        {
            float3 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
            float3 tangent : TANGENT;
            float3 binormal : BINORMAL;
            float2 tex2 : TEXCOORD1;
            float4 depthPosition : TEXCOORD2;
        };

        ConstantOutputType ColorPatchConstantFunction(InputPatch<HullInputType, 3> inputPatch, uint patchId : SV_PrimitiveID)
        {
            ConstantOutputType output;
            output.edges[0] = tessellationAmount;
            output.edges[1] = tessellationAmount;
            output.edges[2] = tessellationAmount;
            output.inside = tessellationAmount;
            return output;
        }

        [domain("tri")]
        [partitioning("integer")]
        [outputtopology("triangle_cw")]
        [outputcontrolpoints(3)]
        [patchconstantfunc("ColorPatchConstantFunction")]
        HullOutputType ColorHullShader(InputPatch<HullInputType, 3> patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID)
        {
            HullOutputType output;
            output.position = patch[pointId].position;
            output.tex = patch[pointId].tex;
            output.tex2 = patch[pointId].tex2;
            output.normal = patch[pointId].normal;
            output.tangent = patch[pointId].tangent;
            output.binormal = patch[pointId].binormal;
            return output;
        }

    Edited to include the domain shader:

        [domain("tri")]
        PixelInputType ColorDomainShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch<HullOutputType, 3> patch)
        {
            float3 vertexPosition;
            PixelInputType output;

            // Determine the position of the new vertex.
            vertexPosition = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position;

            output.position = mul(float4(vertexPosition, 1.0f), worldMatrix);
            output.position = mul(output.position, viewMatrix);
            output.position = mul(output.position, projectionMatrix);
            output.depthPosition = output.position;
            output.tex = patch[0].tex;
            output.tex2 = patch[0].tex2;
            output.normal = patch[0].normal;
            output.tangent = patch[0].tangent;
            output.binormal = patch[0].binormal;
            return output;
        }
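    For reference, the classic cause of exactly this symptom is visible in the domain shader above: every generated vertex copies its attributes from patch[0], so all vertices within a patch share one texture coordinate (and one normal, tangent and binormal), which renders as large flat-coloured patches. A fix, sketched against the code above, is to interpolate those attributes with the barycentric uvwCoord weights, exactly as the position already is:

        // Inside ColorDomainShader: blend all per-vertex attributes across
        // the patch corners; uvwCoord holds the barycentric weights.
        output.tex  = uvwCoord.x * patch[0].tex  + uvwCoord.y * patch[1].tex  + uvwCoord.z * patch[2].tex;
        output.tex2 = uvwCoord.x * patch[0].tex2 + uvwCoord.y * patch[1].tex2 + uvwCoord.z * patch[2].tex2;
        output.normal   = normalize(uvwCoord.x * patch[0].normal   + uvwCoord.y * patch[1].normal   + uvwCoord.z * patch[2].normal);
        output.tangent  = normalize(uvwCoord.x * patch[0].tangent  + uvwCoord.y * patch[1].tangent  + uvwCoord.z * patch[2].tangent);
        output.binormal = normalize(uvwCoord.x * patch[0].binormal + uvwCoord.y * patch[1].binormal + uvwCoord.z * patch[2].binormal);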

    Read the article

  • Overheating laptop

    - by Moncef ben slimane
    I've been using Ubuntu for about 2 months. When I installed it on my computer (a laptop) it never overheated, but one day, I don't know what happened, it overheated (70°C at idle). I've tried whatever I found on the net, and I also can't change the CPU frequency; it's an i5 M460 @ 2.53 GHz. I have been trying Jupiter (no result), lm-sensors (as well), and the CPU frequency indicator for Unity (the CPU won't move from 2.5 GHz). Any help? (I'm a C++ user and PHP coder...)
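    A first round of checks (a sketch using standard packages; whether this CPU exposes other frequency steps to the governor is an assumption):

        sudo apt-get install cpufrequtils lm-sensors
        sudo sensors-detect                  # answer the prompts, then:
        sensors                              # confirm which sensor reads ~70 °C
        cpufreq-info                         # list available governors/frequencies
        sudo cpufreq-set -c 0 -g ondemand    # per core; repeat with -c 1, 2, 3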

    Read the article

  • Anonymous exposes sensitive bank emails

    - by martin.abrahams
    As expected for quite a while, emails purporting to reveal alleged naughtiness at a major bank have been released today. A bank spokesman says "We are confident that his extravagant assertions are untrue". The BBC report concludes…  “Firms are increasingly concerned about the prospect of disgruntled staff taking caches of sensitive e-mails with them when they leave, said Rami Habal, of security firm Proofpoint. "You can't do anything about people copying the content," he said. But firms can put measures in place, such as revoking encryption keys, which means stolen e-mails become unreadable, he added.” Actually, there is something you can do to guard against copying. While traditional encryption lets authorised recipients make unprotected copies long before you revoke the keys, Oracle IRM provides encryption AND guards against unprotected copies being made. Recipients can be authorised to save protected copies, and cut-and-paste within the scope of a protected workflow or email thread – but can be prevented from saving unprotected copies or pasting to unprotected files and emails.  The IRM audit trail would also help track down attempts to open the protected emails and documents by unauthorised individuals within or beyond your perimeter.

    Read the article

  • Recover files from NTFS drive with bad sectors

    - by Martin
    A few nights ago I created a backup of my data on an external 500 GB NTFS USB hard drive. I then formatted my computer, reinstalled Ubuntu and started transferring back the data from the external HDD. Unfortunately some files have become corrupted and Ubuntu is unable to copy them over. The same issue happens if I log in using Windows 7. Disk Utility detects with SMART that there are "a few bad sectors". Some files are perfectly intact, but other files cannot be accessed (nor read, copied...) although they are displayed within Nautilus and show the correct file size. Is there anything I can do to recover this data? I have thought of using TestDisk, but that utility seems more useful for repairing lost partitions or deleted files. I have also thought of using ddrescue so I could at least have a low-level copy of the disk, but I am not sure what use to make of it in order to recover the data!
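    ddrescue is actually a reasonable fit here. A sketch of the usual workflow (the device name /dev/sdb1 is a placeholder; check yours with sudo fdisk -l first, and write the image to a disk with enough free space):

        sudo apt-get install gddrescue
        # First pass: image the failing partition, logging progress so runs can resume
        sudo ddrescue -d /dev/sdb1 ntfs-image.img rescue.log
        # Second pass: retry just the bad areas a few more times
        sudo ddrescue -d -r3 /dev/sdb1 ntfs-image.img rescue.log
        # Mount the image read-only and copy your files out of it
        sudo mkdir -p /mnt/rescue
        sudo mount -o loop,ro ntfs-image.img /mnt/rescue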

    Read the article

  • Request Validation in ASP.NET 4.0

    - by Ben Bastiaensen
    Up to ASP.NET 3.5, request validation is enabled by default. In order to disable it for a page, you set the ValidateRequest attribute in the page directive to false. This page-level setting no longer takes effect by default in ASP.NET 4.0. If you want the old behaviour you need to add the following setting in web.config:

        <httpRuntime requestValidationMode="2.0" />

    Of course, you need to check all input in the page for XSS or other malicious input if you set the page's request validation to false.
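    For completeness, a sketch of the page-level switch that the 2.0 mode re-enables (the page and class names are placeholders):

        <%@ Page Language="C#" ValidateRequest="false"
                 CodeBehind="Feedback.aspx.cs" Inherits="MyApp.Feedback" %>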

    Read the article

  • How can I mount an AFS filesystem?

    - by Ben
    My current method is to mount the filesystem via SSH using Nautilus's graphical interface, but I would much prefer to be able to use some tool that mounts the AFS filesystem and gives me access to AFS-specific features (permissions, etc.). I've tried installing OpenAFS via apt-get, but so far the kernel module has refused to compile. Also, assuming I get OpenAFS installed, I'm not quite sure how to actually mount the remote filesystem to, say, /media/afs or some directory. I'm running Maverick with the 2.6.36-020636-generic kernel from http://kernel.ubuntu.com/~kernel-ppa/mainline/ Thanks for the help!
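    In case it helps, a sketch of the usual OpenAFS client route (with assumptions flagged: the DKMS module must build against your mainline-PPA kernel, which is exactly where it has been failing for you, and example.edu stands in for your cell):

        sudo apt-get install openafs-client openafs-modules-dkms openafs-krb5 krb5-user
        # Set the home cell, then start the client; afsd mounts the global /afs tree
        echo example.edu | sudo tee /etc/openafs/ThisCell
        sudo service openafs-client start
        # Authenticate: Kerberos ticket first, then convert it to an AFS token
        kinit user@EXAMPLE.EDU && aklog
        ls /afs/example.edu/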

    Read the article

  • Common way of animating 'motion' for walk cycle animations

    - by Ben Hymers
    I've just posted this at the Blender artists' forums before realising I would probably get a better response from a more game development-specific audience, so apologies for cross-posting! It's for the right reasons :) I'm a programmer trying to animate a character walking for a game project, using Ogre. I've made a very simple walk cycle in Blender and exported it to Ogre, and it plays just fine. By fine, I mean it works, but there's terrible foot sliding. This is because I just animated the walk in-place (at the origin) in Blender, and of course I don't know what "speed of walk" that corresponds to, so when I move the character in-game the motion doesn't necessarily match up with the movement of the feet in the animation.

    So my question is: what's the normal approach for this kind of thing? At work we use Maya, and the animators either animate a special 'moveTrans' node that represents the "position" of the character, or have the exporter generate it for them from the movement of the root node; the game can then read this to know how fast the animation moves the character. So in the Maya file, the character will walk forward for one cycle and this extra node will follow along with them by their feet. I've not seen anything like this in open-source land, and there's certainly no provision for that in the Ogre Exporter script. What do you chaps normally do for this?
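    One common answer, sketched below (the names and the stride value are invented for illustration), is to keep the in-place cycle but record the stride once, then drive the character's movement speed from it so the feet and the ground stay in lockstep:

        // Sketch: derive locomotion speed from the authored walk cycle.
        // strideLength is the distance the root covers over one full cycle;
        // measure it in Blender or bake it into a 'moveTrans'-style track.
        const float strideLength = 1.4f;                 // metres per cycle (assumed)
        float cycleDuration = animState->getLength();    // seconds, from Ogre::AnimationState

        float walkSpeed = strideLength / cycleDuration;  // metres per second
        // Each frame: advance the node and the animation together
        node->translate(forwardDir * walkSpeed * dt);
        animState->addTime(dt);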

    Read the article

  • Do you know when to send a done email in Scrum?

    - by Martin Hinshelwood
    At SSW we have always sent done emails to the owner/requestor to let them know that it is done. Others who are dependent on that task are CC'ed so they know they can proceed. But how does that fit into Scrum?

    Update 14th April 2010: Rule added to Rules to better Scrum with TFS.

    If you are working on a task: When you complete a Task that is part of a User Story, you need to send a done email to the Owner of that Story. You only need to add the Task #, Summary and link to the item in WIWA. Remember that all your tasks should be under 4 hours, so spending lots of time on a Done Email for a Task would be counterproductive. Add more information if required; for example, you may have completed the task a different way than previously discussed. Make sure that every User Story has an Owner, as per the rules.

    If you are the owner of a story: When you complete a story, you should send a comprehensive done email as per the rules. Make sure you add a list of all of the Tasks that were completed as part of the story and the Done criteria that you completed. If your done criteria say:

        Built Successfully
        30% Code Coverage
        All tests passed

    then add an illustration to show this.

    Figure: Show that you have met your Done criteria where possible.

    This is all designed to help the Scrum Team members (Product Owner, ScrumMaster and Team) validate the quality of the work that has been completed. Remember that you are not DONE until your team says you are done.

    Technorati Tags: SSW, Scrum, SSW Rules

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way?

    In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine "How mature is our database change management process?"

    This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we've never used the term ourselves). And it's a difficult process, often painfully so. Some developers take the approach of "I've no idea how my changes get live – I just write the stored procedures and add columns to the tables. It's someone else's problem to get this stuff live. I think we've got a DBA somewhere who deals with it – I don't know, I've never met him/her". I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila!

    But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can't just overwrite the old database with the new version. Databases have a state – more specifically 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you're "Looking for a new role" pretty quickly after the failed release.

    There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:

    Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O'Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It's no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).

    Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly or other mechanisms that provide an audit trail of changes.
    We've found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from "Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook" to "A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!". And everything in between, of course.

    Because of the vast number of customers using so many different approaches, we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers' behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers, and we waste everyone's time (most notably, our customers') if we're suggesting products that aren't appropriate for them. If someone visited a sports store, looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he's likely to lose that customer. All he needed was a pair of running shoes!

    To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt to classify our customers into some sort of model or "Customer Maturity Framework", as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician George Box (amongst other things, the "Box" in the Box-Jenkins time series model) gave us the famous quote: "Essentially all models are wrong, but some are useful".

    We've taken this quote to heart – we know it's a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it's useful and interesting.

    There are actually a number of similar models that exist for more general application delivery. We've found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word "application" with "database". However, we hit a problem. From talking to our customers we know that users are far less advanced along the road of mature database change management than they are for application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology, but they'll certainly be making sure their code gets managed in a repo somewhere, with all the benefits of history, auditing and so on. But this certainly isn't the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations.
    This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won't fit any of these categories perfectly, but hopefully one will ring true more than others. We've also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here

    Level D1 – Baseline
        Work directly on live databases
        Sometimes work directly in production
        Generate manual scripts for releases; sometimes use a product like SQL Compare or similar to do this
        Any tests that we might have are run manually

    Level D2 – Beginner
        Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system
        An attempt is made to keep production in sync with development environments
        There is some documentation and planning of manual deployments
        Some basic automated DB testing in process

    Level D3 – Intermediate
        The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
        Database environments are managed
        Production environment schema is reproducible from the source control system
        There are some automated tests
        Have looked at using migration scripts for difficult database refactoring cases

    Level D4 – Advanced
        Using continuous integration for database changes
        Build, testing and deployment of DB changes carried out through a proper database release process
        Fully automated tests
        Production system is monitored for fast feedback to developers

    Does this model reflect your team at all? Where are you on this journey? We'd be very interested in knowing how you get on. We're doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you're currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there's a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Correcting color-shifted mirrored i915 driver in 12.04?

    - by Will Martin
    I was called in to fix a friend's malfunctioning HP Pavilion. She's not sure exactly which model, but the sticker on the bottom says "G60". The problem was a failed upgrade to 12.04. I was able to mostly repair it with sudo apt-get -f install, which ran setup and configuration for several hundred packages. The biggest problem at the moment is Xorg. The login screen (lightdm) loads normally but at a reduced resolution (1024x768 instead of 1366x768). But once you log in, it looks like this: Observe that the colors of the dock on the left and the bar at the top are normal. But the background is filled with bizarro color-skewed ghost images of the desktop. In all cases, the actual contents of any programs you run is a totally illegible mess, except that the bar at the top of any program windows looks and acts normally.

    And the ghost images are interactive! For example, if you click the icon in the top right corner to get the "shut down" menu, the same menu will appear in the ghost images below. Starting a terminal will start a terminal window in both the real desktop and the ghost images, and moving it around updates both the real and ghost desktops. I suspect Xorg is using some kind of wrong driver and/or parameter for the graphics hardware. Here is the graphics-relevant portion of the lspci -v output:

        00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 09)
                Subsystem: Hewlett-Packard Company Device 360b
                Flags: bus master, fast devsel, latency 0
                Capabilities: [e0] Vendor Specific Information: Len=0a <?>
                Kernel driver in use: agpgart-intel

        00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 09) (prog-if 00 [VGA controller])
                Subsystem: Hewlett-Packard Company Device 360b
                Flags: bus master, fast devsel, latency 0, IRQ 44
                Memory at d0000000 (64-bit, non-prefetchable) [size=4M]
                Memory at c0000000 (64-bit, prefetchable) [size=256M]
                I/O ports at 5110 [size=8]
                Expansion ROM at <unassigned> [disabled]
                Capabilities: [90] MSI: Enable+ Count=1/1 Maskable- 64bit-
                Capabilities: [d0] Power Management version 3
                Kernel driver in use: i915
                Kernel modules: i915

        00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 09)
                Subsystem: Hewlett-Packard Company Device 360b
                Flags: bus master, fast devsel, latency 0
                Memory at d2500000 (64-bit, non-prefetchable) [size=1M]
                Capabilities: [d0] Power Management version 3

    I'm not sure what to check next. I would ordinarily check xorg.conf to see what it says, but that apparently doesn't exist any more, and my googling has not yielded any useful techniques for getting Xorg to tell me what settings it decided to use. The weird part is that it works fine on the login screen. It's only when you actually log in as a user that the display gets screwed up. Suggestions?
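    Some first diagnostics worth running (a sketch; the output name LVDS1 and the mode are assumptions that vary per machine):

        # What modes does the driver think are available, and which is active?
        xrandr
        # Force the panel's native mode if it's listed but not selected
        xrandr --output LVDS1 --mode 1366x768
        # Look for i915/intel errors and warnings in the X log
        grep -E "\((EE|WW)\)" /var/log/Xorg.0.log | less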

    Read the article

  • Unable to install files with apt-get: "unable to locate package"

    - by Ben Casling
    I'm having issues with my Ubuntu Server 12.04 install on an HP 550 laptop. When I try sudo apt-get install <programname>, e.g. apache2, it will not work, saying E: Unable to locate package apache2. I have tried to look at/edit the sources, but that will not work either; the gedit command is broken too. I am trying gedit /etc/apt/sources.list, for those wondering. Is this a case of the computer network not being configured properly? It downloaded a language pack easily enough during the installation, though. How do I fix this? A prompt reply would be appreciated.
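    Two things to check (a sketch; note that Ubuntu Server ships without a graphical desktop, so gedit isn't broken, it simply isn't installed; use a console editor instead):

        # 'Unable to locate package' usually means stale or empty package lists
        sudo apt-get update
        # Inspect the repository list with a console editor
        sudo nano /etc/apt/sources.list
        # A working 12.04 line looks something like:
        # deb http://archive.ubuntu.com/ubuntu precise main restricted universe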

    Read the article

  • At which point is a continuous integration server interesting?

    - by Cedric Martin
    I've been reading a bit about CI servers like Jenkins and I'm wondering: at which point are they useful? Surely for a tiny project where you'd have only 5 classes and 10 unit tests, there's no real need. Here we've got about 1500 unit tests and they pass (on old Core 2 Duo workstations) in about 90 seconds (because they're really testing "units" and hence are very fast). The rule we have is that we cannot commit code when a test fails, so each developer launches all the tests to prevent regressions. Obviously, because all the developers always launch all the tests, we catch errors due to conflicting changes as soon as one developer pulls the changes of another (when there are any). It's still not very clear to me: should I set up a CI server like Jenkins? What would it bring? Is it just useful for the speed gain? (Not an issue in our case.) Is it useful because old builds can be recreated? (But we can do this too with Mercurial, by checking out old revs.) Basically I understand it can be useful, but I fail to see exactly why. Any explanation taking into account the points I raised above would be most welcome.

    Read the article
