Search Results

Search found 245 results on 10 pages for 'dp'.

Page 7/10 | < Previous Page | 3 4 5 6 7 8 9 10  | Next Page >

  • Double-byte characters in querystring using PHP

    - by Jeffrey Berthiaume
    I'm trying to figure out how to create personalized URLs for double-byte languages. For example, this URL from Amazon Japan has Japanese characters within the querystring (specifically, the path): http://www.amazon.co.jp/????????-DVD-???/dp/B00005R5J3/ref=sr_1_3?ie=UTF8&s=dvd&qid=1269891925&sr=8-3 What I would like to do is have: http://www.mysite.com/???????? or even http://www.mysite.com/index.php?name=???????? be able to properly decode the $_GET['name'] string. I think I have tried all of the urldecode and utf8_decode possibilities, but I just get gibberish in response. This all works fine in a form $_POST, but I need these URLs to be emailable...
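    A minimal sketch of the round trip (assuming the pages are served as UTF-8; the name and value below are illustrative): rawurlencode() turns the UTF-8 bytes of the name into a percent-encoded, mail-safe URL, and PHP has already decoded the query string by the time $_GET is populated, so calling urldecode() or utf8_decode() on it again is exactly what produces gibberish.

        <?php
        // Building the emailable link: percent-encode the UTF-8 name.
        $name = "日本語の名前";   // illustrative value
        $url  = "http://www.mysite.com/index.php?name=" . rawurlencode($name);

        // In index.php: $_GET is already decoded -- use it directly.
        header('Content-Type: text/html; charset=utf-8');
        $name = $_GET['name'];
        echo htmlspecialchars($name, ENT_QUOTES, 'UTF-8');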

    Read the article

  • USB enumeration failure in ColdFire MCF52259

    - by Quicksilver
    Anybody with programming experience on the ColdFire line, please help! I'm using the CMX USB-Lite stack with the ColdFire MCF52259. For some reason, USB enumeration fails at the very first step, as soon as I enable the DP pull-up resistor. This is what I'm doing:

    1. INT_ENB, OTG_INT_EN and ERR_ENB are set to 0x0.
    2. INT_STAT, OTG_INT_STAT and ERR_STAT are set to 0xff (this should clear all interrupts).
    3. In the Interrupt Status Register, bits 0 (Mask All) and 53 are made 0; all others are 1.
    4. TOK_DNE, USB_RST and STALL interrupts are enabled in INT_ENB.
    5. The BDT base address is set.
    6. MCF_USB_CTL holds 0x1.

    PROBLEM: After the host issues the first reset (at which point I enable control endpoint 0), instead of the expected Get Descriptor request I get an endless series of resets. At least, that's what it looks like, because the TOK_DNE interrupt never triggers. Is there anything I'm doing wrong?

    Read the article

  • Logitech USB HID controller message

    - by rross
    I have a Logitech game controller (http://www.amazon.com/Logitech-Dual-Action-Game-Pad/dp/B0000ALFCI). I'm using C# and Microsoft's HID driver to track which buttons are being pressed. Each button press sends a byte array of 10 values, and the problem is that I don't know what those values represent. Here is an example:

        0, 128, 126, 127, 130,  24,  24, 0, 4, 252
        0, 128, 126, 127, 130,  40,  40, 0, 4, 252
        0, 128, 126, 127, 127,  72,  72, 0, 4, 252
        0, 128, 126, 127, 127, 136, 136, 0, 4, 252

    Those are the values of the byte array for button presses 1, 2, 3 and 4 respectively. I see where the values are changing, but I'm unsure what they represent. I'm unable to find any specs on Microsoft's HID driver. Can someone point me in the right direction?
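    Guessing at the layout from the four samples (an assumption, not the device's published report descriptor): bytes 1-4 look like the four analogue axes centred around 127/128, the low nibble of byte 5 looks like the hat switch (8 = centred), and the high nibble carries buttons 1-4 as bit flags (0x1, 0x2, 0x4, 0x8 match presses 1-4; byte 6 mirrors byte 5 in the samples and may carry buttons 5-8). A hedged C# sketch of that interpretation:

        using System;

        static class ReportParser
        {
            // Hypothetical layout inferred from the samples above -- verify it
            // against the device's real HID report descriptor before relying on it.
            public static void Parse(byte[] r)
            {
                byte x = r[1], y = r[2];    // left stick axes, ~127/128 = centred
                byte z = r[3], rz = r[4];   // right stick axes
                int hat = r[5] & 0x0F;      // low nibble: hat switch, 8 = centred
                int buttons = r[5] >> 4;    // high nibble: buttons 1-4 as bit flags
                Console.WriteLine("axes " + x + "," + y + "," + z + "," + rz + "  hat " + hat);
                for (int i = 0; i < 4; i++)
                    if ((buttons & (1 << i)) != 0)
                        Console.WriteLine("Button " + (i + 1) + " pressed");
            }
        }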

    Read the article

  • WCF Advanced Books

    - by hgulyan
    Hi, I've read all the questions like mine and found a few good links. My question is about the architecture of WCF: how it is designed, and how the reference.vb, WSDL and XSD files are generated. How can I do that manually? I'd also like some good examples of WCF systems (mostly desktop applications over TCP). I'm after a book or documentation or anything else that can give me advanced knowledge of WCF. What do you think of this book: http://www.amazon.com/Professional-WCF-Windows-Communication-Foundation/dp/0470563141/ref=sr_1_5?ie=UTF8&s=books&qid=1269425390&sr=8-5 ? And generally, is there any source that would give me all of this, or is the only way practising, experimenting, and reading through all the generated files and documentation? Thank you.
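    While waiting for book suggestions, it may help to see that none of the generated artifacts are required: reference.vb, the WSDL and the XSDs are just what svcutil.exe emits when pointed at a metadata endpoint, and both ends can be written by hand. A minimal hand-rolled, self-hosted TCP service in C# (the contract and address are illustrative):

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IGreeter
        {
            [OperationContract]
            string Greet(string name);
        }

        public class Greeter : IGreeter
        {
            public string Greet(string name) { return "Hello, " + name; }
        }

        class Program
        {
            static void Main()
            {
                // Self-hosted over TCP: no config file, no generated proxy code.
                var host = new ServiceHost(typeof(Greeter),
                    new Uri("net.tcp://localhost:8000/greeter"));
                host.AddServiceEndpoint(typeof(IGreeter), new NetTcpBinding(), "");
                host.Open();
                Console.WriteLine("Service running; press Enter to stop.");
                Console.ReadLine();
                host.Close();
            }
        }

    Adding a ServiceMetadataBehavior plus a mex endpoint to this host is what lets svcutil.exe generate the reference/WSDL/XSD files the question asks about.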

    Read the article

  • MySql UDF using shared library won't load

    - by Jarrod
    I am attempting to create a MySQL UDF which will match a fingerprint using Digital Persona's free Linux SDK library. I have written a trivial UDF as a learning experience, which worked fine. However, when I added a dependency on Digital Persona's shared object, I can no longer get MySQL to load my UDF. I added includes for DP's headers and compiled my UDF using:

        gcc -fPIC -Wall -I/usr/src/mysql-5.0.45-linux-i686-icc-glibc23/include -shared -o dp_udf.so dp_udf.cc

    I also tried adding the -static argument, but whenever I restart MySQL, I get the error:

        Can't open shared library 'dp_udf.so' (errno: 0 /usr/local/mysql/lib/plugin/dp_udf.so: undefined symbol: MC_verifyFeaturesEx)

    MC_verifyFeaturesEx is a function declared in "dpMatch.h", which I included, and is implemented in libdpfpapi.so, which I have tried placing in the same location as my dp_udf.so and in /usr/lib. Am I doing something wrong with my call to gcc (my C++ skills are rusty), or does MySQL not allow UDFs to use additional shared objects?
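    The load-time "undefined symbol" error usually just means the UDF's shared object was never linked against the DP library, so it carries no dependency record for it. A hedged guess at the fix, assuming the library really is installed as libdpfpapi.so in /usr/lib as described above (append the linker flags to the same command):

        gcc -fPIC -Wall -I/usr/src/mysql-5.0.45-linux-i686-icc-glibc23/include \
            -shared -o dp_udf.so dp_udf.cc -L/usr/lib -ldpfpapi

    Afterwards, ldd dp_udf.so should list libdpfpapi.so; if the MySQL server still can't resolve it, run ldconfig so the dynamic linker's cache knows where the library lives.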

    Read the article

  • WPF Animate a Matrix using interpolation

    - by Mark
    I'm having an issue with my application that uses touch gestures to scale, translate and rotate my scene. I was using a TransformGroup which contained a TranslateTransform, ScaleTransform and RotateTransform, but I could not get the movement correct; it always jumped and skipped. So I moved to a MatrixTransform, which made it much easier to get my scene zoomable, rotatable and pannable. However, what I later found out is that you cannot animate the values of a Matrix smoothly (using interpolation). Why, I have no idea, but it's part of the MSDN documentation, and the properties of the Matrix are not dependency properties anyway... Has anyone had any luck animating a matrix smoothly? The only ideas I have had are to animate a few separate custom DPs, each with a callback from which I update the matrix, OR to convert the matrix to a set of Transform objects that I then animate and afterwards convert back. Is there a smarter way to do this?
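    The first idea in the question works well in practice: register a few double dependency properties whose change callbacks recompose the matrix, then drive them with ordinary DoubleAnimations, which interpolate smoothly. A rough sketch (the property set and composition order are illustrative, not a complete gesture handler):

        using System;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media;
        using System.Windows.Media.Animation;

        public class ZoomPanel : ContentControl
        {
            public static readonly DependencyProperty ZoomProperty =
                DependencyProperty.Register("Zoom", typeof(double), typeof(ZoomPanel),
                    new PropertyMetadata(1.0, OnTransformChanged));
            public static readonly DependencyProperty AngleProperty =
                DependencyProperty.Register("Angle", typeof(double), typeof(ZoomPanel),
                    new PropertyMetadata(0.0, OnTransformChanged));

            private readonly MatrixTransform transform = new MatrixTransform();

            public ZoomPanel() { RenderTransform = transform; }

            static void OnTransformChanged(DependencyObject d,
                                           DependencyPropertyChangedEventArgs e)
            {
                var p = (ZoomPanel)d;
                var m = Matrix.Identity;   // recompose the matrix from scratch
                m.Scale((double)p.GetValue(ZoomProperty),
                        (double)p.GetValue(ZoomProperty));
                m.Rotate((double)p.GetValue(AngleProperty));
                p.transform.Matrix = m;    // smoothness comes from the DoubleAnimations
            }
        }

    Animating is then e.g. panel.BeginAnimation(ZoomPanel.ZoomProperty, new DoubleAnimation(2.0, TimeSpan.FromMilliseconds(300))).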

    Read the article

  • Why do I get a NullPointerException at line ds.getPort in class L1?

    - by Fred
    import java.awt.*;
    import java.awt.event.*;
    import javax.swing.*;
    import java.io.*;
    import java.net.*;
    import java.util.*;

    public class Draw extends JFrame {
        /*
         * Socket stuff
         */
        static String host;
        static int port;
        static int localport;
        DatagramSocket ds;
        Socket socket;
        Draw d;
        Paper p = new Paper(ds); // NOTE: field initializers run before the constructor
                                 // body, so ds is still null here -- L1 captures that
                                 // null and later throws the NPE at ds.getPort()

        public Draw(int localport, String host, int port) {
            d = this;
            this.localport = localport;
            this.host = host;
            this.port = port;
            try {
                ds = new DatagramSocket(localport);
                InetAddress ia = InetAddress.getByName(host);
                System.out.println("Attempting to connect DatagramSocket. Local port "
                    + localport + " , foreign host " + host + ", foreign port " + port + "...");
                ds.connect(ia, port);
                System.out.println("Success, ds.localport: " + ds.getLocalPort()
                    + ", ds.port: " + ds.getPort() + ", address: " + ds.getInetAddress());
                Reciever r = new Reciever(ds);
                r.start();
            } catch (Exception e) {
                e.printStackTrace();
            }
            setDefaultCloseOperation(EXIT_ON_CLOSE);
            getContentPane().add(p, BorderLayout.CENTER);
            setSize(640, 480);
            setVisible(true);
        }

        public static void main(String[] args) {
            int x = 0;
            for (String s : args) {
                if (x == 0) { localport = Integer.parseInt(s); x++; }
                else if (x == 1) { host = s; x++; }
                else if (x == 2) { port = Integer.parseInt(s); }
            }
            Draw d = new Draw(localport, host, port);
        }
    }

    class Paper extends JPanel {
        DatagramSocket ds;
        private HashSet hs = new HashSet();

        public Paper(DatagramSocket ds) {
            this.ds = ds;
            setBackground(Color.white);
            addMouseListener(new L1(ds));
            addMouseMotionListener(new L2());
        }

        public void paintComponent(Graphics g) {
            super.paintComponent(g);
            g.setColor(Color.black);
            Iterator i = hs.iterator();
            while (i.hasNext()) {
                Point p = (Point) i.next();
                g.fillOval(p.x, p.y, 2, 2);
            }
        }

        private void addPoint(Point p) {
            hs.add(p);
            repaint();
        }

        class L1 extends MouseAdapter {
            DatagramSocket ds;

            public L1(DatagramSocket ds) {
                this.ds = ds;
            }

            public void mousePressed(MouseEvent me) {
                addPoint(me.getPoint());
                Point p = me.getPoint();
                String message = Integer.toString(p.x) + " " + Integer.toString(p.y);
                System.out.println(message);
                try {
                    byte[] data = message.getBytes("UTF-8");
                    //InetAddress ia = InetAddress.getByName(ds.host);
                    String convertedMessage = new String(data, "UTF-8");
                    System.out.println("The converted string is " + convertedMessage);
                    DatagramPacket dp = new DatagramPacket(data, data.length);
                    System.out.println(ds.getPort()); // NPE here: ds is null (see above)
                    //System.out.println(message);
                    //System.out.println(ds.toString());
                    //ds.send(dp);
                    /*System.out.println("2Sending a packet containing data: " + data
                        + " to " + ia + ":" + d.port + "...");*/
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

        class L2 extends MouseMotionAdapter {
            public void mouseDragged(MouseEvent me) {
                addPoint(me.getPoint());
                Point p = me.getPoint();
                String message = Integer.toString(p.x) + " " + Integer.toString(p.y);
                //System.out.println(message);
            }
        }
    }

    class Reciever extends Thread {
        DatagramSocket ds;
        byte[] buffer;

        Reciever(DatagramSocket ds) {
            this.ds = ds;
            buffer = new byte[65507];
        }

        public void run() {
            try {
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                while (true) {
                    try {
                        ds.receive(packet);
                        String s = new String(packet.getData());
                        System.out.println(s);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
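    The answer to the title question is hiding in the field initializers: Paper p = new Paper(ds); runs before the constructor body, so ds is still null when Paper passes it to new L1(ds); the L1 listener keeps that null reference and throws at ds.getPort(). A minimal fix in the same code:

        // In Draw: declare the field without constructing Paper yet...
        Paper p;

        // ...then, inside the constructor, after ds = new DatagramSocket(localport):
        p = new Paper(ds);
        getContentPane().add(p, BorderLayout.CENTER);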

    Read the article

  • Java interface and abstract class issue

    - by George2
    Hello everyone, I am reading the book Hadoop: The Definitive Guide (http://www.amazon.com/Hadoop-Definitive-Guide-Tom-White/dp/0596521979/ref=sr_1_1?ie=UTF8&s=books&qid=1273932107&sr=8-1). In chapter 2 (page 25), it says: "The new API favors abstract classes over interfaces, since these are easier to evolve. For example, you can add a method (with a default implementation) to an abstract class without breaking old implementations of the class." What does that mean (especially "breaking old implementations of the class")? I would appreciate it if anyone could show me a sample of why, from this perspective, an abstract class is better than an interface. Thanks in advance, George
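    A small Java illustration of the sentence being quoted (hypothetical types, not Hadoop's real API):

        // Version 1 shipped as an interface:
        interface Mapper {
            void map(String record);
        }

        // User code, written and compiled against version 1:
        class WordCountMapper implements Mapper {
            public void map(String record) { /* count words */ }
        }

        // If version 2 of the *interface* added "void setup();", WordCountMapper
        // would no longer compile, and would die with AbstractMethodError at
        // runtime if only the library jar were upgraded -- that is "breaking
        // old implementations".
        //
        // An abstract class can evolve safely, because the new method arrives
        // with a default body that old subclasses inherit:
        abstract class MapperBase {
            public abstract void map(String record);
            public void setup() { /* default: do nothing */ }  // added in v2, safe
        }

    (Java 8 later added default methods on interfaces, which soften exactly this problem, but the book predates them.)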

    Read the article

  • Is it possible to have a Shared/Static Dependency Property?

    - by Matt H.
    [using VB.NET, but I can easily read C# code in responses] I have a class called QuestionClipboard with ALL shared methods/properties. I previously had a QuestionClipboard.doesClipboardHaveContent function that returned true/false depending on whether there was an object on my 'clipboard'. I'd prefer to implement a dependency property so this true/false value can participate in data binding. The GetValue(dp As DependencyProperty) method requires an object instance, which would mean that my property CAN'T be shared! Here is what the code would look like in my perfect world; of course, the word "Shared" before the property declaration renders this code useless:

        Private Shared clipboardHasContentPropertyKey As DependencyPropertyKey = _
            DependencyProperty.RegisterReadOnly("clipboardHasContent", GetType(Boolean), GetType(QuestionClipboard), _
                New PropertyMetadata(False, Nothing, New CoerceValueCallback(AddressOf coerceClipboardHasContent)))

        Private Shared clipboardHasContentProperty As DependencyProperty = _
            clipboardHasContentPropertyKey.DependencyProperty

        Public SHARED Property clipboardHasContent As Boolean
            Get
                Return GetValue(clipboardHasContentProperty)
            End Get
            Set(ByVal value As Boolean)
                SetValue(value)
            End Set
        End Property
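    Since the question invites C#: dependency properties have to live on a DependencyObject instance, so a truly Shared/static DP is not possible; the usual workaround is a singleton instance exposed through a static field, which XAML bindings can reach via Source. A sketch:

        using System.Windows;

        public class QuestionClipboard : DependencyObject
        {
            // Singleton instance that owns the "static" state.
            public static readonly QuestionClipboard Instance = new QuestionClipboard();
            private QuestionClipboard() { }

            private static readonly DependencyPropertyKey clipboardHasContentKey =
                DependencyProperty.RegisterReadOnly("ClipboardHasContent",
                    typeof(bool), typeof(QuestionClipboard),
                    new PropertyMetadata(false));

            public static readonly DependencyProperty ClipboardHasContentProperty =
                clipboardHasContentKey.DependencyProperty;

            public bool ClipboardHasContent
            {
                get { return (bool)GetValue(ClipboardHasContentProperty); }
                internal set { SetValue(clipboardHasContentKey, value); }
            }
        }

    A binding then uses the singleton as its source: {Binding ClipboardHasContent, Source={x:Static local:QuestionClipboard.Instance}}.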

    Read the article

  • ImageView source scaling done right. How?

    - by Aleksey Malevaniy
    Scope: an image bitmap has to be shown via imageView.setImageBitmap(bitmap) and scaled to fit the UI. This could be done via:

        bitmap = Bitmap.createScaledBitmap(bitmap, newWidth, newHeight, true);

    or via the XML ImageView attributes, such as:

        android:layout_width="newWidth"
        android:layout_height="newHeight"
        android:adjustViewBounds="true"
        android:scaleType="fitCenter"

    Problem: which way is better for performance? I prefer XML, because this is a UI-specific problem and I prefer to use XML for UI definitions. Also, we set width/height values in dp, which means we get the same UI on different screens. Thanks!

    Read the article

  • Count the number of ways in which a number 'A' can be broken into a sum of 'B' numbers such that all numbers are co-prime to 'C'

    - by rajneesh2k10
    I came across the solution of a problem which involves a dynamic-programming approach, solved using a three-dimensional matrix. A link to the actual problem: http://community.topcoder.com/stat?c=problem_statement&pm=12189&rd=15177 The solution to this problem is here, under MuddyRoad2: http://apps.topcoder.com/wiki/display/tc/SRM+555 In the last paragraph of the explanation, the author describes a dynamic programming approach to count the number of ways in which a number 'A' can be broken into a sum of 'B' numbers (not necessarily different), such that every number is coprime to 3 and the order in which the numbers appear matters. I am not able to grasp that approach. Can anyone help me understand how DP works here? I can't understand what a state is, or how it is derived from the previous state.
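    For the simpler variant the question describes (order matters, parts are positive, C = 3), the state can be just "how many numbers placed so far" and "what they sum to". A sketch in Java, with A and B as the given inputs:

        // ways[b][a] = number of ordered ways to write a as a sum of b
        // positive integers, none of them divisible by 3
        static long countWays(int A, int B) {
            long[][] ways = new long[B + 1][A + 1];
            ways[0][0] = 1;                        // one way to make 0 from 0 numbers
            for (int b = 1; b <= B; b++)
                for (int a = 1; a <= A; a++)
                    for (int x = 1; x <= a; x++)   // choose the last number placed
                        if (x % 3 != 0)
                            ways[b][a] += ways[b - 1][a - x];
            return ways[B][A];                     // apply the problem's modulus in practice
        }

    Each state (b, a) is derived from (b-1, a-x): fixing the last summand x reduces the problem to placing one fewer number summing to a-x, which is exactly the "derived from the previous state" step the question asks about.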

    Read the article

  • How to make a resolution independent camera preview on android?

    - by histeria
    Hi all, I'm making an Android 1.6 app that uses the phone's camera. In order to make this app resolution-independent, I need to set a compatible aspect ratio for previewing the camera on a SurfaceView. In the 1.6 SDK there is no way to get the supported preview sizes for the camera. Is it possible to use a 4:3 or 3:2 aspect ratio and get no errors with that? On the other hand, I need a way to write an XML layout that represents this surface at this (unknown) aspect ratio at every resolution. I assume it is not possible to change the surface's size at runtime. Can I do it with "dp" units? Or is the other way to build this layout programmatically? There are some apps, like Vignette or the stock Android camera application, with tricks to achieve something like this, such as black bars (Vignette) or a fixed buttons bar, but I don't know how to do it at every resolution. Any ideas? Thanks!

    Read the article

  • WP7 and dependency property: onChanged and onRead?

    - by user233150
    Hi, here's my question: I'm building a WP7 control, and all went well, except that I can intercept writes (see OnItemsSourcePropertyChanged below) but not reads. Let me explain:

        public static readonly DependencyProperty ItemsSourceProperty =
            DependencyProperty.Register(
                "ItemsSource",
                typeof(ObservableCollection<ObjWithDesc>),
                typeof(HorizontalListBox),
                new PropertyMetadata(OnItemsSourcePropertyChanged)
            );

        static void OnItemsSourcePropertyChanged(DependencyObject obj, DependencyPropertyChangedEventArgs e)
        {
            ((HorizontalListBox) obj).OnItemsSourcePropertyChanged(e);
        }

    OnItemsSourcePropertyChanged is called when I use SetValue(dp, ...), but isn't there an OnItemsSourcePropertyRead that is called when I use GetValue()? Thanks
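    There is no read counterpart: in the Silverlight/WP7 dependency-property system, GetValue simply returns the stored value and raises nothing. If you need to observe reads from your own code, the usual trick is to intercept them in the CLR wrapper property (note the binding engine reads the DP directly and will bypass this getter):

        public ObservableCollection<ObjWithDesc> ItemsSource
        {
            get
            {
                // The closest thing to "OnItemsSourcePropertyRead": runs for
                // programmatic reads only, not for the binding engine's reads.
                System.Diagnostics.Debug.WriteLine("ItemsSource was read");
                return (ObservableCollection<ObjWithDesc>)GetValue(ItemsSourceProperty);
            }
            set { SetValue(ItemsSourceProperty, value); }
        }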

    Read the article

  • MySQL Database || Tables Issue

    - by user1780103
    I'm trying to write a script where the user is able to purchase an amount of points for dollars. I want the transaction to be inserted into MySQL, but I keep hitting a "Column count doesn't match value count at row 1" error, and I have no idea what I'm doing wrong. I have written this:

        mysql_query("INSERT INTO paypal_donations VALUES (NULL, ".$account_id.", ".$char_id.", ".$price.", ".$dp.", NOW(), NOW(), 'Started', 0, 0, '', '');") or die(mysql_error());

    But I don't know what to execute in MySQL, since I've never worked with it before. Could anyone write up a quick script that I can run in MySQL for this to work?
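    "Column count doesn't match value count" means the VALUES list has a different number of entries than paypal_donations has columns. Naming the columns explicitly both fixes and documents the statement; the column names below are guesses, so replace them with the real ones from SHOW COLUMNS FROM paypal_donations:

        -- Illustrative column names -- match each value to a real column.
        INSERT INTO paypal_donations
            (account_id, char_id, price, points, created_at, updated_at, status)
        VALUES
            (42, 7, 9.99, 500, NOW(), NOW(), 'Started');

    (Concatenating request values straight into the SQL string, as in the snippet above, also invites SQL injection; mysql_real_escape_string or parameterized queries are safer.)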

    Read the article

  • Configure 27" 2560x1440 for a monitor with corrupt EDID

    - by Aras
    I am trying to get a monitor working with my Ubuntu laptop. The monitor is one of those cheap 27" Korean monitors with a 2560x1440 resolution -- and nothing else. Some specifications:

    - 2560x1440 @ 60Hz
    - only one dual-link DVI-D input -- no other input port (no HDMI or DisplayPort)
    - no OSD
    - no scaler
    - reports a corrupt EDID
    - does 2560x1440 @ 60Hz, did I say that already?

    Anyway, the monitor works beautifully with my Ubuntu desktop, which has an nVidia card with DVI output. However, I am having problems using this monitor with my laptop. After some searching around I found a few posts suggesting an active adaptor for the Mini DisplayPort, so I went and bought a Mini DisplayPort to dual-link DVI-D adaptor. When using this adaptor, the monitor is recognized by the nvidia-settings tool, but with incorrect resolution information: the monitor is incorrectly identified and there are no other resolutions available to set. This post on the Ubuntu forums and this other post on Overclock both suggest that the monitor reports a corrupt EDID. I have tried following their instructions, but so far I have not been able to display any image on the monitor from my laptop. The laptop I am using is an ASUS G75VW with a 1920x1080 screen. It has VGA, HDMI 1.4a and a Mini DisplayPort. The graphics card is an nVidia GeForce GTX 660M with 2GB of dedicated memory. I am running Ubuntu 12.10, which I upgraded from 12.04 a few weeks ago. As I said, I have tried several suggestions, including specifying a Modeline in xorg.conf and linking to EDID files I found in those forum posts above, but I am not sure those EDID files suit my monitor. I think the solution to my problem consists of obtaining my monitor's EDID, fixing it, and modifying xorg.conf to force the nvidia driver to load the correct resolution, but I am not sure what steps I need to take to do this. Here is the part of the sudo xrandr --prop output that relates to this monitor:

        DP-1 connected 800x600+1920+0 (normal left inverted right x axis y axis) 0mm x 0mm
            SignalFormat: DisplayPort
                supported: DisplayPort
            ConnectorType: DisplayPort
            ConnectorNumber: 3 (0x00000003)
            _ConnectorLocation: 3 (0x00000003)
           800x600 60.3*+

    I was expecting to see the EDID in this output, as mentioned in this post, but it is not there. After several hours of tweaking X configurations, I decided it was time to ask for help here. I would really appreciate it if someone with experience with EDID and X configuration could give me a hand with this issue.
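    One common way around a corrupt EDID is to inject the mode by hand. A hedged sketch (the output name DP-1 is taken from the xrandr output above; the Modeline numbers are what cvt -r prints for reduced-blanking 2560x1440@60, which is what these single-input panels typically expect):

        cvt -r 2560 1440 60    # prints the reduced-blanking Modeline
        xrandr --newmode "2560x1440R" 241.50 2560 2608 2640 2720 1440 1443 1448 1481 +hsync -vsync
        xrandr --addmode DP-1 2560x1440R
        xrandr --output DP-1 --mode 2560x1440R

    Note the NVIDIA proprietary driver often rejects modes added this way; in that case the fallback is xorg.conf, using options such as Option "ModeValidation" "AllowNonEdidModes" or Option "CustomEDID" "DP-1:/path/to/edid.bin" pointing at a known-good EDID for this panel.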

    Read the article

  • Book “Team Foundation Server 2012 Starter” published

    - by terje
    During the summer and fall this year, my colleague Jakob Ehn and I worked together on a book project that has now finally hit the stores! The title of the book is Team Foundation Server 2012 Starter, published by Packt Publishing. Get it from http://www.packtpub.com/team-foundation-server-2012-starter/book or from Amazon: http://www.amazon.com/dp/1849688389

    The book is part of a concept Packt has with starter books, intended for people new to Team Foundation Server 2012 who want a quick guide to get it up and working. It covers the fundamentals, from installing and configuring it to using it with source control, work items and builds. It is done as a step-by-step guide, but also includes best-practice advice in the different areas. It covers the use of both the on-premises and the TFS Services version, and it has a list of links and references at the end to the most relevant Visual Studio 2012 ALM sites. Our good friend and fellow ALM MVP Mathias Olausson has done the review of the book -- thanks again, Mathias! We hope the book fills the gap between the various online guide sites and the more advanced books that are out there.

    Book Description

    Your quick-start guide to TFS 2012, top features, and best practices, with hands-on examples.

    Overview

    - Install TFS 2012 from scratch
    - Get up and running with your first project
    - Streamline release cycles for maximum productivity

    In Detail

    Team Foundation Server 2012 is Microsoft's leading ALM tool, integrating source control, work item and process handling, build automation, and testing. This practical "Team Foundation Server 2012 Starter Guide" will provide you with clear, step-by-step exercises covering all major aspects of the product. This is essential reading for anyone wishing to set up, organize, and use a TFS server. This hands-on guide looks at the top features in Team Foundation Server 2012, starting with a quick installation guide and then moving into using it for your software development projects. Manage your team projects with Team Explorer, one of the many new features for 2012. Covering all the main features in source control to help you work more efficiently, including tools for branching and merging, we will delve into the Agile planning tools for planning your product and sprint backlogs. Learn to set up build automation, allowing your team to become faster, more streamlined, and ultimately more productive with this "Team Foundation Server 2012 Starter Guide".

    What you will learn from this book

    - Install TFS 2012 on premises
    - Access TFS Services in the cloud
    - Quickly get started with a new project, with product backlogs, source control, and build automation
    - Work efficiently with source control using the top features
    - Understand how the tools for branching and merging in TFS 2012 help you isolate work and teams
    - Learn about the existing process templates, such as Visual Studio Scrum 2.0
    - Manage your product and sprint backlogs using the Agile planning tools

    Approach

    This Starter guide is a short, sharp introduction to Team Foundation Server 2012, covering everything you need to get up and running.

    Who this book is written for

    If you are a developer, project lead, tester, or IT administrator working with Team Foundation Server 2012, this guide will get you up to speed quickly and with minimal effort.

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices -- STAGE 4: AUTOMATED DEPLOYMENT

    If you've been fortunate enough to get to the stage where you've implemented some sort of continuous integration process for your database updates, then hopefully you're seeing the benefits of that investment – constant feedback on changes your devs are making, advance warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic so you know it's going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

    Our Database Delivery Learning Program consists of four stages – really three: source controlling a database, running continuous integration processes, then setting up automated deployment (the middle stage is split in two, basic and advanced continuous integration, making four stages in total). If you've managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn't going to wipe our 4TB of production data with a single DROP TABLE. But how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There's a significant gap between your latest version being tested and it being easily releasable.

    Just a quick note on terminology – there's a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at ThoughtWorks as: "Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users." There's another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).

    So, hopefully you're convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or "release management") process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can't I just install one of the many release management tools available and, hey presto, I'm ready? If only it were that simple. Below I list some of the areas that it's worth spending a little time on, where a little planning and prep could go a long way.
    It's also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you've got a CI mechanism in place, you're certainly a long way down that path. Nevertheless, we'd recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we'll look at the following areas for your checklist:

    - You and Your Team
    - Environments
    - The Deployment Process
    - Rollback and Recovery
    - Development Practices

    You and Your Team

    It's a cliché in the DevOps community that "it's not all about processes and tools, really it's all about culture". As stated in this DevOps report from Puppet Labs: "DevOps processes and tooling contribute to high performance, but these practices alone aren't enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn't understood outside of a specific group." Like most clichés, there's truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it's an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:

    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing – that they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you're ever struggling to convince someone of the value, I'd strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I've spoken to many customers who have implemented database CI who describe their deployment process as "the point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that's finished we revert to manual." This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA "We're changing everything you do and your toolset next week, to automate most of your role – that's okay, isn't it?" isn't likely to go down well.
    There's some work here to bring him/her onside – to explain what you're doing, why there will still be control of the deployment process, and so on. Or of course, if you're the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you'd like to change your methodology to start automating your painful release process, but does the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager's manager too. As mentioned, unless there's buy-in "from the top", you're going to hit problems when the implementation starts to get rocky (and what tool/process implementations don't get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you're planning to do – or, if you're the DBA yourself, get the dev team up to speed with your plans. Get your boss involved too, and make sure he/she is bought into the investment.

    Environments

    Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has "Production", but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I've seen every setup under the sun, and there is often a big difference between "what we want, to do continuous delivery properly" and "what we're currently stuck with". Some of these differences are:

    - Want: each developer with their own dedicated database environment. Got: a single shared "development" environment, used by everyone at once.
    - Want: an Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine. Got: in fact, if you have a CI process running, you're likely to have some sort of integration server running (even if you don't call it that!) – whether you have a full suite of unit tests running is a different question…
    - Want: a separate QA environment used explicitly for manual testing prior to release. Got: "we just test on the dev environments, or maybe pre-production".
    - Want: a proper pre-production (or "staging") box that matches production as closely as possible. Got: hopefully a pre-production box of some sort – but does it match production closely!?
    - Want: a production environment reproducible from source control. Got: a production box which has drifted significantly from anything in source control.

    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you're going to create and where they'll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you're working on a new, greenfield project, or trying to update an existing, brownfield application.
    There's a world of difference between starting from scratch with four or five clean environments (reproducible from source control, of course!) and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

    - Dedicated development databases.
    - An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action.]
    - QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing.
    - Pre-production – the environment you use to test the production release process.
    - Production.

    * A note on the use of the word "automatic" – when carrying out automated deployments, this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic, in that it's not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions: Get your environments set up and ready. Set access permissions appropriately. Make sure everyone understands what the environments will be used for (it's not a "free-for-all" with all environments to be accessed, played with and changed by development).

    The Deployment Process

    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers "How do your database changes get live? How does your manual process work?":

    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it into pre-prod.
    2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing) and pre-production. This generates a script.
    3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar.
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped).
    5. If all is working, run the script on production.*

    * This assumes there's no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something into the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up at www.sqllighthouse.com if you're interested in testing early versions.

    There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can't automate a DBA checking through a script that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!) before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint-hearted – and really not something we recommend.
    At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don't match), followed by a manual intervention allowing for script approval by the DBA. Once he/she clicks "Okay, I'm happy for that to go live", the latter stages automatically take the script through to live. And anything in between, of course – and other variations. But we'd strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?" NB: Most of what we're discussing here is about production deployments. It's important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

    Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?" Repeat for earlier environments (QA and so on).

    Rollback and Recovery

    If only every deployment went according to plan! Unfortunately they don't – and when things go wrong, you need a rollback or recovery plan for what you're going to do in that situation. Once you move to a more automated database deployment process, you're far more likely to be deploying more frequently than before. No longer once every six months; maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we'll explore in subsequent articles, things like:

    - Immediately restore from backup.
    - Have a pre-tested rollback script (remembering that really this is a "roll-forward" script – there's not really such a thing as a rollback script for a database!).
    - Have fallback environments – for example, using a blue-green deployment pattern.

    Different options have pros and cons – some are easier to set up, some require more investment in infrastructure, and of course some work better than others (the key issue with using backups is the loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

    Actions: Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment, and your requirements for a completely failsafe process.

    Development Practices

    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and the linked application.
    So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern "branch by abstraction". Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss, by making incremental updates to the database backward compatible (a sketch of what this looks like in SQL follows at the end of this article). Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there's a difference here between migrating old projects and starting afresh – with the latter it's much easier to instigate best practice from the start.

    Actions: For your business, work out how far down the path you want to go, amending your database development patterns to "best practice". It's a trade-off between implementing quality processes and the necessity to do so (depending on how often you make complex changes). Socialise these changes with your development group – no one likes having "best practice" changes imposed on them, so it's good to introduce these ideas and the rationale behind them early.

    Summary

    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning if you want to get the most out of the work and for the implementation to go smoothly. We've covered some of the checklist of areas to consider – mainly "getting the team ready for the changes that are coming" and "planning out your pipeline, environments, patterns and practices for development" – though there will be more detail, depending on where you're coming from and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
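    To make the branch-by-abstraction pattern mentioned above concrete, here is the shape of a backward-compatible table split, sketched in SQL. The table and column names are invented for illustration; each numbered step can ship (and be rolled back) on its own without losing data:

        -- Step 1: create the new structure alongside the old column.
        CREATE TABLE customer_address (
            customer_id INT NOT NULL PRIMARY KEY,
            address     VARCHAR(200) NOT NULL
        );

        -- Step 2: backfill, then keep both copies in sync while old readers remain.
        INSERT INTO customer_address (customer_id, address)
            SELECT id, address FROM customer;

        CREATE TRIGGER trg_customer_address_sync
        AFTER UPDATE ON customer
        FOR EACH ROW
            UPDATE customer_address
               SET address = NEW.address
             WHERE customer_id = NEW.id;

        -- Step 3: move readers and writers over to customer_address,
        -- one release at a time (each release remains reversible).

        -- Step 4: only when nothing references it any more, drop the old column.
        ALTER TABLE customer DROP COLUMN address;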

    Read the article

  • Excel Solver vs Solver Foundation

    - by JoshReuben
    I recently read the Excel Scientific and Engineering Cookbook (http://www.amazon.com/Scientific-Engineering-Cookbook-Cookbooks-OReilly/dp/0596008791/ref=sr_1_1?ie=UTF8&s=books&qid=1296593374&sr=8-1). The two main tools that this book leveraged were the Data Analysis ToolPak and Excel Solver. I had previously been acquainted with Microsoft Solver Foundation – a full-fledged API for solving optimization problems that goes beyond being a mere Excel plugin: it exposes a C# programmatic interface for in-process integration and a web-service interface for out-of-process integration. Were they the same? Apparently not!

    Two different solver frameworks for Excel:

        http://www.solver.com/index.html
        http://www.solverfoundation.com/

    I contacted both vendors to get their perspectives. Here's what the Excel Solver people had to say:

    "The Solver Foundation requires you to learn and use a very specific modeling language (OML). The Excel Solver allows you to formulate your optimization problems without learning any new language, simply by entering the formulas into cells on the Excel spreadsheet, something that nearly everyone is already familiar with doing. The Excel Solver also allows you to seamlessly upgrade to products that combine Monte Carlo simulation capabilities (our Risk Solver Premium and Risk Solver Platform products), which allow you to include uncertainty in your models when appropriate. Our advanced Excel Solver products also have a number of built-in reporting tools for advanced analysis of your model and its results."

    And here's what the Microsoft Solver Foundation people had to say:

    "With the release of Solver Foundation 3.0, Solver Foundation has the same kinds of solvers (plus a few more) as what is found in Excel Solver. I think there are two main differences: 1. Problems are described differently. In Excel Solver the goals and constraints are specified inside the spreadsheet, in formulas. In Solver Foundation they are described either in .NET code that uses the Solver Foundation Services API, or using the OML modeling language in Excel. 2. Solver Foundation's primary strength is in solving large linear, mixed integer, and constraint models. That is, models that contain arbitrary nonlinear functions (such as trig functions, IF(), powers, etc.) are handled a bit better by the Excel Solver at this point."
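    For a feel of the "described in .NET code" side, this is roughly what a tiny linear model looks like through the Solver Foundation Services API in C# (the model itself is illustrative; in Excel Solver, the same thing would be cell formulas plus constraints in the Solver dialog):

        using System;
        using Microsoft.SolverFoundation.Services;

        class Demo
        {
            static void Main()
            {
                SolverContext context = SolverContext.GetContext();
                Model model = context.CreateModel();

                // Two non-negative decision variables.
                Decision x = new Decision(Domain.RealNonnegative, "x");
                Decision y = new Decision(Domain.RealNonnegative, "y");
                model.AddDecisions(x, y);

                model.AddConstraint("capacity", x + y <= 10);
                model.AddGoal("profit", GoalKind.Maximize, 2 * x + 3 * y);

                context.Solve();
                Console.WriteLine("x = {0}, y = {1}", x.ToDouble(), y.ToDouble());
            }
        }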

    Read the article

  • Correct permissions for /var/www and wordpress

    - by dpbklyn
    Hello and thank you in advance! I am relatively new to Ubuntu, so please excuse the newbie-ness of this question... I have set up a LAMP server (Ubuntu Server 11.10), and I have access via SSH and to the "It works" page from a web browser, from inside my network (via IP address) and from outside using DynDNS. I have a couple of projects in development with some outside developers, and I want to use this server as a development server for testing and for client approvals. We have some WordPress projects that sit in subdirectories in /var/www/wordpress1, /var/www/wordpress2, etc. I cannot access these subdirectories from a browser in order to set up WP -- or (I assume) to see the content in a browser. I get a 403 Forbidden error, so I assume this is a permissions problem. Can you please tell me the proper permission settings to: 1) allow the developers and me to read/write; 2) allow WP to set itself up and do its thing; 3) allow visitors to access the site(s) via the web. I should also mention that the subfolders are actually symlinks to folders on another internal HDD -- I don't think this will make a difference, but I thought I should disclose it. Since I am a newbie to Ubuntu, step-by-step directions are greatly appreciated! Thank you for taking the time! dp

        total 12
        drwxr-xr-x  2 root root 4096 2012-07-12 10:55 .
        drwxr-xr-x 13 root root 4096 2012-07-11 20:02 ..
        lrwxrwxrwx  1 root root   43 2012-07-11 20:45 admin_media -> /root/django_src/django/contrib/admin/media
        -rw-r--r--  1 root root  177 2012-07-11 17:50 index.html
        lrwxrwxrwx  1 root root   14 2012-07-11 20:42 media -> /hdd/web/media
        lrwxrwxrwx  1 root root   18 2012-07-12 10:55 wordpress -> /hdd/web/wordpress

    Here is the result of using chown -R www-data:www-data /var/www:

        total 12
        drwxr-xr-x  2 www-data www-data 4096 2012-07-12 10:55 .
        drwxr-xr-x 13 root     root     4096 2012-07-11 20:02 ..
        lrwxrwxrwx  1 www-data www-data   43 2012-07-11 20:45 admin_media -> /root/django_src/django/contrib/admin/media
        -rw-r--r--  1 www-data www-data  177 2012-07-11 17:50 index.html
        lrwxrwxrwx  1 www-data www-data   14 2012-07-11 20:42 media -> /hdd/web/media
        lrwxrwxrwx  1 www-data www-data   18 2012-07-12 10:55 wordpress -> /hdd/web/wordpress

    I am still unable to access via browser...
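    A common recipe for this setup, sketched below; it assumes a dev group named webdev and Apache running as www-data, so adjust the names to taste. Note the 403 is very often the symlinks rather than /var/www itself: Apache needs execute (traverse) permission on every directory in the target path (/hdd, /hdd/web, ...) plus Options FollowSymLinks in the vhost.

        # Let Apache own the files and the developers write via group membership
        sudo addgroup webdev
        sudo usermod -aG webdev dp            # repeat for each developer
        sudo chown -R www-data:webdev /hdd/web
        sudo chmod -R g+w /hdd/web
        sudo find /hdd/web -type d -exec chmod g+s {} \;   # new files keep the group

        # Apache must be able to traverse the symlink targets
        sudo chmod o+x /hdd /hdd/web

    With www-data owning the tree, WordPress can also write its own config and uploads during setup.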

    Read the article

  • Taking too long to get skills for entry level programmer position [closed]

    - by greenonion
    I don't have the skills for an entry-level position as a .NET programmer. I am trying to learn what I need, but there is too much to learn and too little time. What can I do? About two months ago, I went to a job interview for an entry-level C# .NET programming/consultant position in NYC. When I heard back from them, they told me that the knowledge gap between what I knew and what they needed me to know was too big, and that I might have been a better fit if I had six months of experience. This was the first interview I had been on since graduating college. Before the interview, I read a book on Visual C#. It turns out it wasn't a very good book, and I was missing a lot of key areas of knowledge, such as: ADO.NET; SQL (I had learned some LINQ); a little bit about how memory is handled; multithreaded programming; etc. Because the book wasn't very good, the stuff I did know, I didn't know very well. I felt crushed. I've applied for jobs to gain experience, but when recruiters hear that I have no experience, they lose interest. I figured that I could at least work on my knowledge. Since then, I have read "SQL Essentials" to cover the SQL bit, and I found a pretty awesome book that is good enough to clear up what's hazy in my mind and covers almost all of the extra topics: "C# 4.0: The Complete Reference" by Herbert Schildt. I'm even learning a lot about the topics I was already familiar with. For a month now I've been working my way through this beast of a book. However, gaining the knowledge I need is taking too long, and I can't hold off on having a full-time job much longer. I'm not stupid, and I'm studying constantly -- poring over the book, asking questions on Stack Overflow, referencing the C# specification, etc. I have made great progress, but there is just too much ground to cover. I'm on chapter 12, which is about a third of the way through the book. To get an idea of what I know vs. don't know, the table of contents is on Amazon: http://www.amazon.com/C-4-0-The-Complete-Reference/dp/007174116X How on earth can someone know enough to function as a programmer in the real world? Can I try for a job in academia? Will I have time to finish learning the rest of the C# language, or am I just unhireable?

    Read the article

  • Disabling standby mode on Gear4 BlackBox speakers

    - by cj
    I have a set of Bluetooth Gear4 BlackBox speakers. The sound quality and the design are great, but they switch to standby mode automatically every now and then, both when I am listening to music and when it's silent. In that situation, the only way to get them working again is to turn them off and on, since they react neither to the remote nor to the buttons on top. I emailed Gear4 about this problem and they recommended reflashing the firmware, but the problem persists after having done that. It has nothing to do with the device that is streaming the music. So, the question is: is there a way to disable standby mode permanently, so the speakers stay active even when I am not using them? To switch them off, I would just use the power switch. If anyone is curious about which device I am talking about, here's a link to Amazon, though I wouldn't recommend anyone buy them because of this problem: http://www.amazon.co.uk/Gear-4-PG142-GEAR4-BlackBox/dp/B000QEFZI2 Thanks in advance.

    Read the article

  • Fedora-13 not detecting USB HDD enclosure (with HDD)

    - by Ramy
    I recently purchased this enclosure: http://www.amazon.com/Inland-2-5-Inc.../dp/B003SZ2Y12 and this HDD: http://www.amazon.com/Seagate-Barrac...3811667&sr=8-1 Now, I let my brother-in-law use the enclosure with his 160GB disk to back some stuff up. He then gave me that disk in my enclosure, and I backed up my computer and my fiancée's computer. So... obviously, I had no problem mounting that disk. I plan on keeping it as my "natural disaster backup" (in case my apartment building burns down, I still have that disk with my stuff backed up). I want to use the 1.5T disk as my regular, more frequent backup device, but it doesn't seem to be mounting on my F13 machine. I searched through this forum and found someone advising to run the following:

        # mount -t vfat /dev/sda1 /mnt

    This is the output I get when I run that:

        mount: /dev/sda1 already mounted or /mnt busy
        mount: according to mtab, /dev/sda1 is mounted on /boot

    The thing is, shouldn't this disk automatically mount just like the LAST disk did, in the same enclosure, with the same USB cable and power supply? Any help would be greatly appreciated. THANKS!
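    Probably not, if the new disk is a bare internal drive: auto-mounting needs a partition with a filesystem on it, which the brother-in-law's 160GB disk already had and a factory-fresh drive may not. Also, the mtab message shows /dev/sda1 is the Fedora system's own /boot, not the USB disk. A hedged checklist (device names assumed; the mkfs step erases the disk):

        dmesg | tail -20           # did the kernel see the enclosure? note the device, e.g. sdb
        sudo fdisk -l              # does the 1.5T disk show a partition table at all?
        sudo fdisk /dev/sdb        # if it's empty: create a partition...
        sudo mkfs.ext4 /dev/sdb1   # ...and a filesystem
        sudo mount /dev/sdb1 /mnt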

    Read the article

  • What determines what resolutions a laptop is willing to output over VGA?

    - by Joshua McKinnon
    I'm responsible for several conference rooms and have set up 1080p projectors, and I provide both HDMI and VGA connectivity: HDMI for DisplayPort and Mini DisplayPort, and VGA as a fallback, universal option. Contrary to what I expected, people seem to have much more trouble with the HDMI than the VGA, so VGA gets used a lot more than you'd think (even though most workstation laptops made in the last 3-4 years have DisplayPort or Mini DisplayPort...). Also to my surprise, VGA outputs 1080p over a 50ft cable run with very minimal degradation on certain laptops -- other laptops just don't offer 1080p as a resolution choice over VGA and top out at 1600x1200 or something else. A specific example: a ThinkPad W530 will do 1080p over VGA, a W520 won't (both do 1080p over DisplayPort/Mini DP). What determines which resolutions a laptop is willing to output over VGA? I'm thinking this will come down to either the video driver saying it supports only certain resolutions for output, or limitations of the RAMDAC (which wouldn't be in play, at least DAC-wise, on a digital output, but WOULD be on VGA, an analog output). The basic reason for the question is that I noticed that, say, a ThinkPad W520 with a built-in 1080p display will output 1080p fine over DisplayPort to a 1080p projector, but will cap out at 1600x1200 (practically the same pixel count, just a little shy) on VGA. Now, this wouldn't be surprising at all, except that SOME laptops have no issue outputting 1080p over VGA, even with lower native resolutions. Why do I care? Well, if there's some way I could enable it... for situations where my users end up using VGA anyway, it's preferable for display mirroring if they can output their laptop's native resolution which, you guessed it, is very often 1080p on 15" models. DISCLAIMER: This is primarily a curiosity; I'm not claiming 1080p over VGA is ideal by any means, but hey, if it works. I've seen HDMI start artifacting more over same-length, same-gauge cabling (up to 50' runs in certain rooms). If you think this is better suited to Super User, please move it, but this is framed from an IT standpoint of something that affects a real pool of users in a multi-conference-room, 50+ deployed-laptop scenario.

    Read the article

  • Printing parallel over Ethernet cable

    - by Crudler
    I have a bit of an interesting challenge :) I have a machine with a parallel printer output, and I want it to be able to print to a printer in a different room; I know that parallel isn't great over big distances. I found this: http://www.amazon.com/over-Cat5-Extension-Cable-Adapter/dp/B002WJ9S6Y%3FSubscriptionId%3DAKIAINHICTCYYZGJWT4Q%26tag%3Dusbprintercables.net-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D165953%26creativeASIN%3DB002WJ9S6Y which will let me connect over Cat5, but it's USB to Cat5. My machine can only output on parallel (it's not a computer), so what I was thinking of getting is a parallel (F) to USB adaptor and a USB to parallel (M) adaptor for either side, i.e.:

        machine - parallel - USB - Cat5 - USB - parallel - printer

    It just seems a bit messy :) Suggestions? Another thing I would like to try is to get rid of the old-school parallel printer and instead use a network-based multifunction. Would this be possible? i.e.:

        machine - parallel - USB - Cat5 - Ethernet print server - network printer

    This might be tougher, because the machine cannot "know" that we are using a network printer; it can ONLY print to LPT1. Thanks!

    Read the article

< Previous Page | 3 4 5 6 7 8 9 10  | Next Page >