Search Results

Search found 42307 results on 1693 pages for 'solid works'.

Page 416/1693 | < Previous Page | 412 413 414 415 416 417 418 419 420 421 422 423  | Next Page >

  • rewrite rule if iphone?

    - by daniel Crabbe
    Hello there. I just need one URL on my site to check whether the request comes from a mobile device and then rewrite the URL accordingly. I want to rewrite /play-reel/miranda-bowen/playpeaches-and-cream to /mobile/play-reel/miranda-bowen/playpeaches-and-cream.
    RewriteCond %{HTTP_USER_AGENT} ^.*iPhone.*$ [NC]
    RewriteRule ^play-reel(.*)\$ mobile/play-reel$1 [R=302,NC]
    RewriteRule ^mobile/play-reel/([a-zA-Z0-9\-]+)/([a-zA-Z0-9\-]+)$ play-reel-new-html5-02.php?director=$1&video=$2 [L]
    # The 3rd line works, but I can't get the URL to change so that it gets picked up.
    Can anyone see what's wrong? There's no error. Best, Dan

    Read the article

  • Subterranean IL: Constructor constraints

    - by Simon Cooper
    The constructor generic constraint is a slightly weird one. The ECMA specification simply states that it: constrains [the type] to being a concrete reference type (i.e., not abstract) that has a public constructor taking no arguments (the default constructor), or to being a value type. There seems to be no reference within the spec to how you actually create an instance of a generic type with such a constraint. In non-generic methods, the normal way of creating an instance of a class is quite different to initializing an instance of a value type. For a reference type, you use newobj:

    newobj instance void IncrementableClass::.ctor()

    and for value types, you need to use initobj:

    .locals init ( valuetype IncrementableStruct s1 )
    ldloca 0
    initobj IncrementableStruct

    But, for a generic method, we need a consistent method that would work equally well for reference or value types.

    Activator.CreateInstance<T>

    To solve this problem the CLR designers could have chosen to create something similar to the constrained. prefix; if T is a value type, call initobj, and if it is a reference type, call newobj instance void !!0::.ctor(). However, this solution is much more heavyweight than constrained callvirt. The newobj call is encoded in the assembly using a simple reference to a row in a metadata table. This encoding is no longer valid for a call to !!0::.ctor(), as different constructor methods occupy different rows in the metadata tables. Furthermore, constructors aren't virtual, so we would have to somehow do a dynamic lookup to the correct method at runtime without using a MethodTable, something which is completely new to the CLR. Trying to do this in IL results in the following verification error:

    newobj instance void !!0::.ctor()
    [IL]: Error: Unable to resolve token.

    This is where Activator.CreateInstance<T> comes in. We can call this method to return us a new T, and make the whole issue Somebody Else's Problem. CreateInstance does all the dynamic method lookup for us, and returns us a new instance of the correct reference or value type (strangely enough, Activator.CreateInstance<T> does not itself have a .ctor constraint on its generic parameter):

    .method private static !!0 CreateInstance<.ctor T>() {
        call !!0 [mscorlib]System.Activator::CreateInstance<!!0>()
        ret
    }

    Going further: compiler enhancements

    Although this method works perfectly well for solving the problem, the C# compiler goes one step further. If you decompile the C# version of the CreateInstance method above:

    private static T CreateInstance() where T : new() { return new T(); }

    what you actually get is this (edited slightly for space & clarity):

    .method private static !!T CreateInstance<.ctor T>() {
        .locals init (
            [0] !!T CS$0$0000,
            [1] !!T CS$0$0001
        )
    DetectValueType:
        ldloca.s 0
        initobj !!T
        ldloc.0
        box !!T
        brfalse.s CreateInstance
    CreateValueType:
        ldloca.s 1
        initobj !!T
        ldloc.1
        ret
    CreateInstance:
        call !!0 [mscorlib]System.Activator::CreateInstance<T>()
        ret
    }

    What on earth is going on here? Looking closer, it's actually quite a clever performance optimization around value types. So, let's dissect this code to see what it does. The CreateValueType and CreateInstance sections should be fairly self-explanatory; using initobj for value types, and Activator.CreateInstance for reference types. How does the DetectValueType section work?
    First, the stack transition for value types:

    ldloca.s 0   // &[!!T(uninitialized)]
    initobj !!T  //
    ldloc.0      // !!T
    box !!T      // O[!!T]
    brfalse.s    // branch not taken

    When the brfalse.s is hit, the top stack entry is a non-null reference to a boxed !!T, so execution continues to the CreateValueType section. What about when !!T is a reference type? Remember, the 'default' value of an object reference (type O) is zero, or null.

    ldloca.s 0   // &[!!T(null)]
    initobj !!T  //
    ldloc.0      // null
    box !!T      // null
    brfalse.s    // branch taken

    Because box on a reference type is a no-op, the top of the stack at the brfalse.s is null, and so the branch to CreateInstance is taken. For reference types, Activator.CreateInstance is called which does the full dynamic lookup using reflection. For value types, a simple initobj is called, which is far faster, and also eliminates the unboxing that Activator.CreateInstance has to perform for value types. However, this is strictly a performance optimization; Activator.CreateInstance<T> works for value types as well as reference types.

    Next...

    That concludes the initial premise of the Subterranean IL series; to cover the details of generic methods and generic code in IL. I've got a few other ideas about where to go next; however, if anyone has any itching questions, suggestions, or things you've always wondered about IL, do let me know.
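
    For reference, here is a minimal C# sketch (an assumed illustration based on the post's description, not code taken from it) of the new() constraint that this IL implements. The same generic factory works for value and reference types; the compiler's fast path means the struct case never touches reflection, while the class case goes through Activator.CreateInstance<T>.

        using System;

        static class Factory
        {
            // Equivalent of the IL above: the C# compiler lowers "new T()" under a
            // new() constraint into the initobj / Activator.CreateInstance<T> pattern.
            public static T CreateInstance<T>() where T : new()
            {
                return new T();
            }
        }

        struct IncrementableStruct { public int Value; }
        class IncrementableClass { public int Value; }

        class Program
        {
            static void Main()
            {
                // Value type: handled by the initobj fast path, no reflection involved.
                IncrementableStruct s = Factory.CreateInstance<IncrementableStruct>();

                // Reference type: falls through to Activator.CreateInstance<T>, which
                // resolves and invokes the parameterless constructor at runtime.
                IncrementableClass c = Factory.CreateInstance<IncrementableClass>();

                Console.WriteLine(s.Value); // 0
                Console.WriteLine(c.Value); // 0
            }
        }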

    Read the article

  • How should I manage "reverting" a branch done with bookmarks in mercurial?

    - by Earlz
    I have an open source project on Bitbucket. Recently, I've been working on an experimental branch which I (for whatever reason) didn't make an actual branch for. Instead, what I did was use bookmarks. So I made two bookmarks at the same revision:
    test -- the new code I worked on, which should now be abandoned (due to an experiment failure)
    main -- the stable old code that works
    I worked in test. I also pushed from test to my server, which ended up switching the tip tag to the new unstable code, when I really would rather it had stayed at main. I "switched" back to the main bookmark by doing a hg update main and then committing an insignificant change. So, I pushed this with hg push -f and now my source control is "correct" on the server. I know that there should be a cleaner way to "switch" branches. What should I do in the future for this kind of operation?

    Read the article

  • Creating Wizard in ASP.NET MVC (Part 2)

    - by bipinjoshi
    In Part 1 of this article series you developed a wizard in an ASP.NET MVC application. Although the wizard developed in Part 1 works as expected, it has one shortcoming: it causes a full page postback whenever you click the Previous or Next button. This behavior may not pose much of a problem if a wizard has only a few steps. However, if a wizard has many steps and each step accepts many entries, then full page postbacks can deteriorate the user experience. To overcome this shortcoming you can add Ajax to the wizard so that only the form is posted to the server. In this part of the series you will convert the application developed in Part 1 to use Ajax. http://www.binaryintellect.net/articles/8e278bfa-7244-4e3e-b5aa-2954a91331da.aspx
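
    The article's actual code lives at the link above. Purely as a hedged sketch of the general shape (with hypothetical model and view names, not the article's), an Ajax-posted wizard step usually maps to a controller action that validates the posted step and returns a partial view for the next one, so only the form region of the page is replaced:

        using System.Web.Mvc;

        // Hypothetical names for illustration only.
        public class WizardController : Controller
        {
            [HttpPost]
            public ActionResult Step2(Step1Model model)
            {
                if (!ModelState.IsValid)
                {
                    // Re-render only the current step's form for the Ajax caller.
                    return PartialView("_Step1", model);
                }

                TempData["Step1"] = model;              // carry step data forward
                return PartialView("_Step2", new Step2Model());
            }
        }

        public class Step1Model { public string Name { get; set; } }
        public class Step2Model { public string Email { get; set; } }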

    Read the article

  • How to get XChat to trust a CA certificate?

    - by Silvio
    I have created an SSL certificate authority including a sub-authority to issue certificates. I have copied the root certificate to /usr/share/ca-certificates/extra and added it to /etc/ca-certificates, then ran dpkg-reconfigure ca-certificates and update-ca-certificates. After it said "Adding debian:.pem done." I am now fairly convinced that Ubuntu knows about my CA. Now I issued a certificate for the ZNC irc bouncer from the subca, installed it onto znc, but XChat will not trust the certificate despite all the above. I also issued a certificate to use with apache2 and that one works fine after adding the root ca to chromium. Does someone know how I can get XChat to trust the certificate?

    Read the article

  • Pair programming business logic with a non-IT person

    - by user1598390
    Do you have any experience in which a non-IT person works with a programmer during the coding process? It's like pair programming, but one person is a non-IT person who knows a lot about the business, maybe a process engineer with a math background who knows how things are calculated and can understand non-idiomatic, procedural code. I've found that some procedural, domain-specific languages like PL/SQL are quite understandable to non-IT engineers. These people end up being co-authors of the code and guarantee the correctness of formulas, factors, etc. I've found this kind of pair programming quite productive: these engineer users feel they are also "owners" and "authors" of the code, which helps minimize misunderstanding in the communication process. They even help design the test cases. Is this practice common? Does it have a name? Have you had similar experiences?

    Read the article

  • Partial upgrade error

    - by Dan
    This is an issue I have googled a lot, and I have tried a lot of fixes, but none of them really worked. At some point (I can't remember when/how) my update system sort of broke, and since then it is always complaining about "Not all updates can be installed, run a Partial Upgrade". If I click on Partial Upgrade, I get the following result. But running apt-get install -f does not fix anything, and at the end I always get the following message. The funny thing is that my apt-get system works perfectly on the console. I can update my system through apt-get update, apt-get upgrade, etc. So, how can I fix the graphical interface? I understand that my apt-get system is not broken, but somehow its GUI is. Any thoughts about it? THANKS! P.D.: I have already tried sudo dpkg --configure -a and sudo apt-get autoremove

    Read the article

  • Error 1130 connecting to MySQL on Ubuntu Server 12.04

    - by maGz
    I hope this is the right place for this... I am currently running Ubuntu Server 12.04 through VirtualBox on a Windows 7 host. I am trying to connect to the VM's MySQL engine using MyDB Studio for MySQL, and when I enter my MySQL login credentials, it gives me the following error back:
    Error 1130: Host '192.168.56.1' is not allowed to connect to this MySQL server
    I am running the VM with Adapter 1 enabled for NAT and Adapter 2 enabled for Host-only Adapter; eth0 is 10.0.2.15 and eth1 is 192.168.56.21. I can connect to Apache at 192.168.56.21, and through PhpMyAdmin everything works as it should. I did edit the /etc/mysql/my.cnf file and commented out the line bind-address = 127.0.0.1 by adding a # in front of it - I thought that this should have allowed remote connections. Any ideas on how I can solve this? What could be wrong? EDIT: I am trying to connect as 'root'. EDIT: SOLVED!!

    Read the article

  • Does XNA 4 support 3D affine transformations for 2D images?

    - by Paul Baker Salt Shaker
    Looooong story short: I'm essentially trying to code Mode 7 in XNA. Before I continue bashing my brains out on research and various failed matrix math equations, I just want to make sure that XNA supports this out of the box (so to speak). I'd prefer not to have to import other libraries, because I want to learn how it works myself; that way I understand the whole thing better. However, that's all for naught if it won't work at all. So no OpenGL, DirectX, etc. if possible (I will eventually do that just to optimize everything, but not for now). tl;dr: Can I has Mode 7 in XNA?
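
    As a point of reference (an illustrative sketch, not the asker's code): out of the box, XNA 4's SpriteBatch.Begin has an overload that accepts a transform Matrix, which covers 2D affine transforms (rotate, scale, translate) of sprites; a true Mode 7-style perspective tilt generally needs a textured quad or a custom effect on top of that. A minimal sketch, assuming a hypothetical "ground" content asset:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class Mode7Game : Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            Texture2D ground;   // hypothetical ground texture
            float angle;

            public Mode7Game()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                ground = Content.Load<Texture2D>("ground");   // assumed content asset
            }

            protected override void Update(GameTime gameTime)
            {
                angle += 0.01f;
                base.Update(gameTime);
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);

                // 2D affine transform (rotate + scale about the screen centre),
                // applied to every sprite drawn in this batch.
                Matrix transform =
                    Matrix.CreateTranslation(-400, -240, 0) *
                    Matrix.CreateRotationZ(angle) *
                    Matrix.CreateScale(1.5f) *
                    Matrix.CreateTranslation(400, 240, 0);

                spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                    SamplerState.LinearWrap, DepthStencilState.None,
                    RasterizerState.CullNone, null, transform);
                spriteBatch.Draw(ground, Vector2.Zero, Color.White);
                spriteBatch.End();

                base.Draw(gameTime);
            }
        }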

    Read the article

  • Passenger (mod-rails) can't find libopenssl-ruby

    - by flintinatux
    Trying to build an nginx server with Phusion Passenger on Ubuntu 11.10 (hurray for the new version!). Running "passenger-install-nginx-module" outputs the following error: * OpenSSL support for Ruby... not found With the following suggestion to fix it: * To install OpenSSL support for Ruby: Please run apt-get install libopenssl-ruby as root. Running "sudo apt-get install libopenssl-ruby" yields the following output: Reading package lists... Done Building dependency tree Reading state information... Done Note, selecting 'libruby' instead of 'libopenssl-ruby' libruby is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. A little research shows that libruby is a virtual package that provides libopenssl-ruby as part of the package. However, the passenger-install-nginx-module script still can't find it, and keeps throwing the same error. Help me, please! I'm in a little over my head on this one, and the google-the-error-code method that usually works is failing me today.

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – STAGE 4: AUTOMATED DEPLOYMENT

    If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

    Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live?

    Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users”. There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).

    So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready! If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
    It’s also worth pointing out that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

    For now, in this post, we’ll look at the following areas for your checklist:
    - You and Your Team
    - Environments
    - The Deployment Process
    - Rollback and Recovery
    - Development Practices

    You and Your Team

    It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:
    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org.

    I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well.
    There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out.

    As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions:
    - Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans,
    - Get your boss involved too and make sure he/she is bought in to the investment.

    Environments

    Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are:

    What we want: Each developer with their own dedicated database environment.
    What we’ve got: A single shared “development” environment, used by everyone at once.

    What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine.
    What we’ve got: In fact if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…

    What we want: Separate QA environment used explicitly for manual testing prior to release.
    What we’ve got: “We just test on the dev environments, or maybe pre-production.”

    What we want: A proper pre-production (or “staging”) box that matches production as closely as possible.
    What we’ve got: Hopefully a pre-production box of some sort. But does it match production closely!?

    What we want: A production environment reproducible from source control.
    What we’ve got: A production box which has drifted significantly from anything in source control.

    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
    There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:
    - Dedicated development databases,
    - An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action],
    - QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
    - Pre-production. The environment you use to test the production release process,
    - Production.

    * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions:
    - Get your environments set up and ready,
    - Set access permissions appropriately,
    - Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development).

    The Deployment Process

    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”
    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring in to pre-prod,
    2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script,
    3. User (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar,
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped),
    5. If all working, run the script on production.*

    * this assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions.

    There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend.
    At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?”

    NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

    Actions:
    - Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?”
    - Repeat for earlier environments (QA and so on).

    Rollback and Recovery

    If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move in to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, things like:
    - Immediately restore from backup,
    - Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!),
    - Have fallback environments – for example, using a blue-green deployment pattern.

    Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

    Actions:
    - Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process.

    Development Practices

    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application.
    So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here, by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slidedeck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace

    As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

    But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start.

    Actions:
    - For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes),
    - Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so good to introduce these ideas and the rationale behind them early.

    Summary

    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning out your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from – and where you want to get to.

    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • WiFi not working on Dell Inspiron 1564

    - by Zack Warren
    I hope you can offer some help! I'm new to Ubuntu (and Linux for that matter), but I managed to install Ubuntu 12.04 yesterday on an old Dell Inspiron 1564 laptop. Everything works fine, with the exception of the wireless connection. Using my wired ethernet, I was able to look up several solutions which seemed as if they would work, but I've had no luck. Currently, wireless does show as enabled, but all wireless networks are disconnected; in fact, none of the networks are even listed (and I can see them on other devices in the home). Let me know what you need to help me! Thanks in advance!

    Read the article

  • XNA matrix order problem

    - by user1990950
    I want a matrix that scales first and then rotates. I tried the code below, but it didn't work: it applies the transforms in the wrong order, rotating and then scaling. zRotation, yRotation and xRotation are rotations that shouldn't be affected by the origin; allrot should be affected. xScale, yScale and zScale are the scaling variables.
    Matrix worldMatrix =
        ( Matrix.CreateRotationZ(MathHelper.ToRadians(zRotation)) *
          Matrix.CreateRotationX(MathHelper.ToRadians(xRotation)) *
          Matrix.CreateRotationY(MathHelper.ToRadians(yRotation)) ) *
        ( Matrix.CreateTranslation(origin) *
          Matrix.CreateRotationY(MathHelper.ToRadians(allrot)) *
          Matrix.CreateScale(xScale, yScale, zScale) );
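
    For context: XNA composes transforms with row vectors, so in a product the leftmost matrix is applied to the model first. A minimal sketch of one possible ordering (scale, then the local rotations, then the origin translation and the origin-affected rotation); this is only an illustration under that assumed intent, not the asker's confirmed fix:

        using Microsoft.Xna.Framework;

        static class WorldMatrixBuilder
        {
            // Scale first, then rotate; XNA applies the leftmost factor first.
            public static Matrix Build(
                float xScale, float yScale, float zScale,
                float xRotation, float yRotation, float zRotation,
                float allrot, Vector3 origin)
            {
                return
                    Matrix.CreateScale(xScale, yScale, zScale) *              // applied first
                    Matrix.CreateRotationZ(MathHelper.ToRadians(zRotation)) * // local rotations, unaffected by origin
                    Matrix.CreateRotationX(MathHelper.ToRadians(xRotation)) *
                    Matrix.CreateRotationY(MathHelper.ToRadians(yRotation)) *
                    Matrix.CreateTranslation(origin) *                        // move out by the origin offset...
                    Matrix.CreateRotationY(MathHelper.ToRadians(allrot));     // ...then rotate about that origin
            }
        }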

    Read the article

  • Developing Schema Compare for Oracle (Part 3): Ghost Objects

    - by Simon Cooper
    In the previous blog post, I covered how we solved the problem of dependencies between objects and between schemas. However, that isn’t the end of the issue. The dependencies algorithm I described works when you’re querying live databases and you can get dependencies for a particular schema direct from the server, and that’s all well and good. To throw a (rather large) spanner in the works, Schema Compare also has the concept of a snapshot, which is a read-only compressed XML representation of a selection of schemas that can be compared in the same way as a live database. This can be useful for keeping historical records or a baseline of a database schema, or comparing a schema on a computer that doesn’t have direct access to the database.

    So, how do snapshots interact with dependencies? Inter-database dependencies don't pose an issue as we store the dependencies in the snapshot. However, comparing a snapshot to a live database with cross-schema dependencies does cause a problem; what if the live database has a dependency to an object that does not exist in the snapshot? Take a basic example schema, where you’re only populating SchemaA:

    SOURCE:
    CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));
    CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);

    TARGET (using snapshot):
    CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100));
    CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100));

    In this case, we want to generate a sync script to synchronize SchemaA.Table1 on the database represented by the snapshot. When taking a snapshot, database dependencies are followed, but because you’re not comparing it to anything at the time, the comparison dependencies algorithm described in my last post cannot be used. So, as you only take a snapshot of SchemaA on the target database, SchemaB.Table1 will not be in the snapshot. If this snapshot is then used to compare against the above source schema, SchemaB.Table1 will be included in the source, but the object will not be found in the target snapshot. This is the same problem that was solved with comparison dependencies, but here we cannot use the comparison dependencies algorithm as the snapshot has not got any information on SchemaB!

    We've now hit quite a big problem - we’re trying to include SchemaB.Table1 in the target, but we simply do not know the status of this object on the database the snapshot was taken from; whether it exists in the database at all, whether it’s the same as the target, whether it’s different... What can we do about this sorry state of affairs? Well, not a lot, it would seem. We can’t query the original database, as it may not be accessible, and we cannot assume any default state as it could be wrong and break the script (and we currently do not have a roll-back mechanism for failed synchronizes). The only way to fix this properly is for the user to go right back to the start and re-create the snapshot, explicitly including the schemas of these 'ghost' objects. So, the only thing we can do is flag up dependent ghost objects in the UI, and ask the user what we should do with it – assume it doesn’t exist, assume it’s the same as the target, or specify a definition for it. Unfortunately, such functionality didn’t make the cut for v1 of Schema Compare (as this is very much an edge case for a non-critical piece of functionality), so we simply flag the ghost objects up in the sync wizard as unsyncable, and let the user sort out what’s going on and edit the sync script as appropriate.
    There are some things that we do do to alleviate somewhat this rather unhappy situation; if a user creates a snapshot from the source or target of a database comparison, we include all the objects registered from the database, not just the ones in the schemas originally selected for comparison. This includes any extra dependent objects registered through the comparison dependencies algorithm. If the user then compares the resulting snapshot against the same database they were comparing against when it was created, the extra dependencies will be included in the snapshot as required and everything will be good. Fortunately, this problem will come up quite rarely, and only when the user uses snapshots and tries to sync objects with unknown cross-schema dependencies. However, the solution is not an easy one, and led to some difficult architecture and design decisions within the product. And all this pain follows from the simple decision to allow schema pre-filtering! Next: why adding a column to a table isn't as easy as you would think...

    Read the article

  • How do I rsync from within a Python script?

    - by Viswa
    I plan to move files from one system to another. For this, I am using the rsync command in a Linux terminal, and it works fine. But I need to implement this command in Python. I am very new to Python, so I don't know how to run the rsync command from it. Please tell me the steps to do so. This is my rsync command: rsync -avrz /opt/data/filename root@ip:/opt/data/file I need to implement this command in a Python script.

    Read the article

  • Create .deb from Gambas 2 project

    - by Mauricio Andrés
    I have been working with Gambas 2 (the one in the Software Center). Actually, I took the source code of another program and created a new program based on that. But now that I have finished, I can't create .deb files; Gambas shows me this error:
    La creación del paquete ha fallado. (Package creation has failed.)
    Package.MakeDebPackage.368: File or directory does not exist
    So I don't know what to do now, and I really need this program. I tried with Gambas 3, but it is too much work to make a program based on Gambas 2 work in Gambas 3; I also tried creating a .deb package there and Gambas freezes. Please, some help.

    Read the article

  • How do I open up firewall while keeping it safe?

    - by d3vid
    Since I've installed Firestarter I have encountered connectivity issues that are all resolved by disabling the firewall. I'd prefer to have the firewall running and allow all the traffic I normally use: Wired network + wireless network, whichever I'm connected to, or both (1) OpenVPN VirtualBox internal network Samba (for accessing shared Windows folders and sharing folders to Windows) (2) BitTorrent And everything else I use that I can't think of :) All the above works without a firewall. I thought Linux was super secure (I update regularly). Do I really need it? (1) I used the Firestarter wizard and selected wlan0 as my primary connection, now whenever I plug in a network cable, I lose all connectivity. Should I just redo the wizard for eth0, or will I then lose wlan0? (2) If it makes a difference I'm sharing a directory that I share between local users using bindfs. See my answer to Good and easy way to share files on local machine

    Read the article

  • Linq to SQL Lazy Loading in ASP.Net applications

    - by nikolaosk
    In this post I would like to talk about LINQ to SQL and its native lazy loading functionality. I will show you how you can change this behavior. We will create a simple ASP.Net application to demonstrate this. I have seen a lot of people struggling with performance issues. That is mostly due to the lack of knowledge of how LINQ internally works. Imagine that we have two tables, Products and Suppliers (Northwind database). There is a one-to-many relationship between those tables/entities. One supplier...(read more)
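
    The post's full walkthrough is behind the "read more" link; as a hedged sketch only, the two standard knobs in System.Data.Linq are turning deferred (lazy) loading off and eager-loading an association with DataLoadOptions.LoadWith. The NorthwindDataContext, Product and Supplier types below are assumed to be designer-generated, not code from the post:

        using System;
        using System.Data.Linq;   // LINQ to SQL runtime
        using System.Linq;

        class Demo
        {
            static void Main()
            {
                // Assumes a designer-generated NorthwindDataContext with Product and Supplier entities.
                using (var db = new NorthwindDataContext())
                {
                    // Option 1: switch lazy (deferred) loading off entirely.
                    db.DeferredLoadingEnabled = false;

                    // Option 2: eager-load the Supplier for every Product up front.
                    var loadOptions = new DataLoadOptions();
                    loadOptions.LoadWith<Product>(p => p.Supplier);
                    db.LoadOptions = loadOptions;

                    foreach (var product in db.Products)
                    {
                        // No extra per-row query here, because Supplier was loaded with the products.
                        Console.WriteLine("{0} - {1}", product.ProductName,
                            product.Supplier == null ? "(none)" : product.Supplier.CompanyName);
                    }
                }
            }
        }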

    Read the article

  • How can I generate a 2d navigation mesh in a dynamic environment at runtime?

    - by Stephen
    So I've grasped how to use A* for path-finding, and I am able to use it on a grid. However, my game world is huge and I have many enemies moving toward the player, which is a moving target, so a grid system is too slow for path-finding. I need to simplify my node graph by using a navigational mesh. I grasp the concept of "how" a mesh works (finding a path through nodes on the vertices and/or the centers of the edges of polygons). My game uses dynamic obstacles that are procedurally generated at run-time. I can't quite wrap my head around how to take a plane that has multiple obstacles in it and programmatically divide the walkable area up into polygons for the navigation mesh, like the following image. Where do I start? How do I know when a segment of walkable area is already defined, or worse, when I realize I need to subdivide a previously defined walkable area as the algorithm "walks" through the map? I'm using JavaScript in Node.js, if it matters.
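
    The idea is language-agnostic, so here is a hedged C# sketch of just the runtime data structure (hypothetical names throughout): the convex decomposition itself is assumed to come from a separate triangulation or clipping step that re-runs whenever obstacles change; afterwards, polygons that share an edge become graph neighbours, and A* runs over that much smaller graph instead of the grid. A common refinement is to rebuild only the polygons an obstacle actually intersects rather than the whole plane.

        using System.Collections.Generic;
        using System.Linq;

        // Sketch of the data structure only; the convex decomposition step is assumed to exist elsewhere.
        struct Vec2
        {
            public float X, Y;
            public Vec2(float x, float y) { X = x; Y = y; }
        }

        class NavPolygon
        {
            public List<Vec2> Vertices = new List<Vec2>();
            public List<NavPolygon> Neighbours = new List<NavPolygon>();   // polygons sharing an edge

            // Polygon centre, usable as the A* node position.
            public Vec2 Centre
            {
                get
                {
                    float x = Vertices.Average(v => v.X);
                    float y = Vertices.Average(v => v.Y);
                    return new Vec2(x, y);
                }
            }
        }

        static class NavMesh
        {
            // Link any two polygons that share two vertices (i.e. a whole edge); the resulting
            // adjacency graph is what A* searches, using polygon centres or edge midpoints as nodes.
            // Assumes vertices from the same decomposition compare exactly equal.
            public static void BuildAdjacency(IList<NavPolygon> polygons)
            {
                for (int i = 0; i < polygons.Count; i++)
                    for (int j = i + 1; j < polygons.Count; j++)
                    {
                        int shared = polygons[i].Vertices.Count(v => polygons[j].Vertices.Contains(v));
                        if (shared >= 2)
                        {
                            polygons[i].Neighbours.Add(polygons[j]);
                            polygons[j].Neighbours.Add(polygons[i]);
                        }
                    }
            }
        }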

    Read the article

  • having trouble getting audio drivers

    - by barry
    I'm trying to install the audio driver for my laptop and having problems. Everything works great except for no audio. I locate the file, download it, try to open it, and this is what I get:
    Archive: /home/barry/Downloads/Audio_Conexant_v.6.14.10.575_XPx86/HXFSetup.exe
    [/home/barry/Downloads/Audio_Conexant_v.6.14.10.575_XPx86/HXFSetup.exe]
    End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive.
    zipinfo: cannot find zipfile directory in one of /home/barry/Downloads/Audio_Conexant_v.6.14.10.575_XPx86/HXFSetup.exe or /home/barry/Downloads/Audio_Conexant_v.6.14.10.575_XPx86/HXFSetup.exe.zip, and cannot find /home/barry/Downloads/Audio_Conexant_v.6.14.10.575_XPx86/HXFSetup.exe.ZIP, period.
    Can anyone help me out here? Thanks

    Read the article

  • CSS Style Element if it does not contain another specific type of Element [migrated]

    - by Chris S
    My CSS includes the following: #mainbody a[href ^='http'] { background:transparent url('/images/icons/external.svg') no-repeat top right; padding-right: 12px; } This places an "external" icon next to links that start with "http" (all internal site links are relative). It works perfectly, except that if I link an image, it also gets this icon. For example: <a href='http://example.com'><img src='whatever.jpg'/></a> would also get the "external" icon next to the image. I can live with this if necessary, but would like to eliminate it. This must be implemented in CSS (no JS); it must not require any special IDs, classes, or styling in the HTML for the image or the anchor around the image. Is this possible?

    Read the article

  • Fetching templates via API. Who provides this service?

    - by Guandalino
    I'm mainly a server-side developer. I'm not a designer, even if I understand web layouts, grids, CSS, typography, valid markup, etc., and I'm able to do some graphic work too (almost). It just takes a lot of time and the result is not always beautiful. I know there are tons of website template sites out there, and I'd like to use their designs as a starting point for my customers' work, giving them the option to choose the design they like best. I'd just prefer to show the templates catalog to customers from within my site, fetching template info (screenshots, description, etc.) from a remote server using an API. TemplateMonster.com provides, or provided, such an API. But the service responds with "Unauthorized usage". Are there other sites offering this kind of retrieval service?

    Read the article

  • Apache 2.4 PHP 5.4 mysql not found?

    - by jurchiks
    Just tried setting up the latest Apache (2.4.1 x86 VC10 from apachelounge) and PHP (5.4 VC9 TS X86, with the php5apache2_4.dll) on my PC and ran into this weird problem; in my php.ini I have enabled all of the following: extension=php_mysql.dll extension=php_mysqli.dll extension=php_pdo_mysql.dll but when I try to print the available PDO's using this: print_r(PDO::getAvailableDrivers()); It gives me an empty array. PhpMyAdmin and MantisBT also refuse to work, saying that the mysql extension is missing. phpinfo() gives this: PDO PDO support enabled PDO drivers no value and when searching for mysql, only the mysqlnd section pops up... The DLLs are there in the D:/php/ext folder and no errors pop up when starting up Apache, as well as no errors in the Windows Event Log, and PHP itself has no error log anyway. What could possibly be the problem with this? Before this I had Apache 2.2.22 from apachelounge and PHP 5.3.9 and I had no problems. Now nothing works...

    Read the article

  • Ubuntu Oneiric + Awesome WM

    - by janjust
    I upgraded to Oneiric Ocelot, running the Awesome WM. Everything works more or less fine, but one thing I've noticed is that my menu fonts and menu symbols are now larger than I'd like them to be. I used to set them in the font settings, but now the font settings seem to be gone (for one thing, I don't even know where they are anymore; I tried gnome-tweak-tool). Surely I'm missing something. My prime example is the program evince, whose symbols are ginormous. Any hints on how to tweak it?

    Read the article

  • How do I clean builds and installs, i.e. un-build?

    - by Kaustubh P
    I have both installed mongodb and downloaded and built it, and just one of them works.
    $ mongo
    mongo: error while loading shared libraries: libmozjs.so: cannot open shared object file: No such file or directory
    $ /opt/mongo/bin/mongo
    /opt/mongo/bin/mongo: error while loading shared libraries: libboost_system-mt.so.1.38.0: cannot open shared object file: No such file or directory
    $ /usr/bin/mongo
    MongoDB shell version: 1.6.5
    connecting to: test
    >
    I can remove the installation via apt-get. But how do I remove all things mongo that were built with make, and get a clean system? I followed this guide to build and install mongodb. Thanks.

    Read the article
