Search Results

Search found 20883 results on 836 pages for 'wont say'.


  • How to calculate an angle from three points?

    - by HelloMoon
    Let's say you have this: P1 = (x=2, y=50), P2 = (x=9, y=40), P3 = (x=5, y=20). Assume that P1 is the center point of a circle. It is always the same. I want the angle that is made up by P2 and P3, or in other words the angle that is next to P1. The inner angle, to be precise. It will always be a sharp angle, so less than 90 degrees. I thought: man, that's the simplest geometry maths. But I have looked for a formula for like 6 hours now and people talk about most complicated NASA stuff like arccos and vector scalar products. My head feels like it's in a fridge. Any math gurus here who think this is a simple problem? I think the programming language doesn't matter here, but for those who think it does: Java and Objective-C. I need it for both, but haven't tagged the question for these.
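
    A minimal sketch of the usual answer, in Java since the asker mentions it: take the directions of the rays P1->P2 and P1->P3 with Math.atan2, subtract them, and normalise the result to the inner angle. The method name is made up; the commented check uses the points from the question.

        // Inner angle at P1 between the rays P1->P2 and P1->P3.
        static double innerAngleDegrees(double p1x, double p1y,
                                        double p2x, double p2y,
                                        double p3x, double p3y) {
            double a = Math.atan2(p3y - p1y, p3x - p1x)
                     - Math.atan2(p2y - p1y, p2x - p1x);
            double deg = Math.abs(Math.toDegrees(a));
            return deg > 180 ? 360 - deg : deg; // keep the inner (smaller) angle
        }
        // innerAngleDegrees(2, 50, 9, 40, 5, 20) is roughly 29.3 degrees: acute, as expected.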

    Read the article

  • on hover overlay image in CSS

    - by Juventus
    I need a div with a picture background to overlay an image (with some amount of transparency) when hovered on. I need to be able to have one transparent overlay that can be used and reused throughout the site on any image. My first attempt was roundabout, to say the least. Because I found out you cannot roll over an invisible div, I devised a sandwich system in which the original image is the first layer, the overlay is the second layer, and the original image is the third layer again. This way, when you roll over, the original image disappears, revealing the overlay image over the original image: http://www.nightlylabs.com/uploads/test.html So it works. Kinda. Because you cannot interact with a visibility:hidden element (why?!), the roll-over flickers unless you rest the cursor on it. Any help? This method is bad, so if anyone has a better one please comment.
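
    A sketch of a simpler approach: hidden elements don't receive hover events, but fully transparent ones do, so fading the overlay with opacity instead of visibility avoids the flicker entirely. Class names and image URLs below are placeholders.

        .pic {
          position: relative;            /* anchor for the absolutely positioned overlay */
          background: url(photo.jpg);
        }
        .pic .overlay {
          position: absolute;
          top: 0; left: 0;
          width: 100%; height: 100%;
          background: url(overlay.png);
          opacity: 0;                    /* invisible but still hit-testable */
          filter: alpha(opacity=0);      /* older IE */
        }
        .pic:hover .overlay {
          opacity: 0.6;                  /* "some amount of transparency" */
          filter: alpha(opacity=60);
        }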

    Read the article

  • SQL Server -> 'SQL_Latin1_General_CP1_CI_AS' Collation -> Varchar Column -> Languages Supported

    - by Ajay Singh
    All, we are using SQL Server 2008 with the collation setting 'SQL_Latin1_General_CP1_CI_AS'. We are using a Varchar column to store textual data. We know that we cannot store double-byte data in a Varchar column and hence cannot support languages like Japanese and Chinese without converting it to NVarchar. However, will it be safe to say that all single-byte characters can be stored in a Varchar column without any problem? If yes, then where can I get the list of languages which need a single byte for storage and the list of languages which need a double byte? Any assistance in this regard is highly appreciated. Thanks in advance.
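
    One caution worth adding here: a varchar column under this collation stores code page 1252 (Western European), so the safe set is the set of languages that code page covers, not all single-byte scripts. Cyrillic, for example, is single-byte under code page 1251 but still won't survive in this column. A quick sketch of the round-trip behaviour (the sample strings are illustrative):

        -- Characters outside the column's code page are silently replaced.
        DECLARE @v varchar(20);
        SET @v = N'héllo';       -- é exists in code page 1252: stored intact
        SELECT @v;               -- 'héllo'
        SET @v = N'こんにちは';  -- Japanese: no code page 1252 mapping
        SELECT @v;               -- '?????'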

    Read the article

  • Algorithms for City Simulation?

    - by anon
    I want to create a city filled with virtual creatures. Say like Sim City, where each creature walks around, doing its own tasks. I'd prefer the city not to 'explode' or do weird things -- like the population dies off, or the population leaves, or any other unexpected crap. Is there a set of basic rules I can encode each agent with so that the city will be 'stable'? (Much like how for physics simulations, we have some basic rules that govern everything; is there a set of rules that governs how a simulation of a virtual city will be stable?) I'm new to this area and have no idea what algorithms/books to look into. Insights deeply appreciated. Thanks!

    Read the article

  • Is there any good reason to use <rtexprvalue>false</rtexprvalue> in JSP tags?

    - by Superfilin
    Is there any good reason to disallow a scriptlet or EL expression from being inserted as an attribute value? Let's say we have this tag: <tag> <name>mytag</name> <tag-class>org.apache.beehive.netui.tags.tree.Tree</tag-class> <attribute> <name>attr</name> <required>false</required> <rtexprvalue>false</rtexprvalue> <type>boolean</type> </attribute> </tag> What could be a good reason for disallowing the below? <my:mytag attr="${setting}" />

    Read the article

  • Fortran arrays and subroutines

    - by ccook
    I'm going through a Fortran code, and one bit has me a little puzzled. There is a subroutine, say SUBROUTINE SSUB(X,...) REAL*8 X(0:N1,1:N2,0:N3-1),... ... RETURN END which is called in another subroutine by: CALL SSUB(W(0,1,0,1),...) where W is a 'working array'. It appears that a specific value from W is passed to X; however, X is dimensioned as an array. What's going on?
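
    For context, a sketch of what is happening (the dimensions here are invented, and free-form syntax is used where the original is likely fixed-form): Fortran passes arrays by the address of their first element, so CALL SSUB(W(0,1,0,1)) hands SSUB the address of that element, and sequence association then lays the dummy array X over the working array's storage from that point onward. This is a common idiom for carving slices out of a workspace array.

        program demo
          real*8 :: w(0:2,1:3,0:4,1:2)
          ! passes the address of element w(0,1,0,1), not its value;
          ! here that is the start of the first 3D "slice" of w
          call ssub(w(0,1,0,1))
        end program demo

        subroutine ssub(x)
          real*8 :: x(0:2,1:3,0:4)
          ! x is now associated with w's storage starting at w(0,1,0,1):
          ! walking x in array-element order walks the same memory as w
        end subroutine ssub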

    Read the article

  • Python script repeated auto start up.

    - by Ali
    I am designing a Python web app where people can have an email sent to them on a particular day. A user puts his email and a date in a form and it gets stored in my database. My script would then search through the database looking for all records with today's date, retrieve the emails, send them out and delete the entries from the table. Is it possible to have a setup where the script starts up automatically at a given time, say 1 pm every day, sends out the emails and then quits? If I have a continuously running script, I might go over the CPU limit of my shared web hosting. Or is the effect negligible? Ali
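
    The usual answer on shared hosting is cron: the script runs once at the scheduled time and exits, so there is no continuously running process to eat the CPU allowance. A sketch of the crontab entry (the interpreter and script paths are placeholders):

        # minute hour day-of-month month day-of-week  command
        # run the mailer script every day at 13:00
        0 13 * * * /usr/bin/python /home/user/myapp/send_reminders.py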

    Read the article

  • Fill spinner from cursor in android

    - by Rickard
    Hi, I have searched for an answer to this for a while today. It all looks so easy but I never get it to work. I want to fill a spinner from my cursor. I have been trying to use SimpleCursorAdapter for this, as a lot of sites suggest, but I never get it to work. Show me just how easy it is :) Thanks for your time! My cursor: Cursor cursor = db.query(DATABASE_TABLE_Clients, new String[] {"_id", "C_Name"}, null, null, null, null, "C_Name"); My spinner: (Spinner) findViewById(R.id.spnClients); My code: Cursor cursor_Names = SQLData.getClientNames(); startManagingCursor(cursor_Names); String[] columns = new String[] { "C_Name" }; int[] to = new int[] { R.id.txt_Address }; SimpleCursorAdapter mAdapter = new SimpleCursorAdapter(this, android.R.layout.simple_spinner_dropdown_item, cursor_Names, columns, to); Spinner spnClients = (Spinner) findViewById(R.id.spnClients); spnClients.setAdapter(mAdapter);
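
    A likely culprit in the code above is the mismatch between the layout and the "to" ids: android.R.layout.simple_spinner_dropdown_item contains no view with id R.id.txt_Address, so the adapter has nowhere to bind the column. A sketch of the usual wiring (assuming the cursor from SQLData.getClientNames() includes an _id column, which it does here):

        Cursor cursorNames = SQLData.getClientNames();
        startManagingCursor(cursorNames);
        SimpleCursorAdapter adapter = new SimpleCursorAdapter(
                this,
                android.R.layout.simple_spinner_item,     // layout for the closed spinner
                cursorNames,
                new String[] { "C_Name" },                // column to display
                new int[] { android.R.id.text1 });        // TextView inside that layout
        adapter.setDropDownViewResource(
                android.R.layout.simple_spinner_dropdown_item);
        Spinner spnClients = (Spinner) findViewById(R.id.spnClients);
        spnClients.setAdapter(adapter);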

    Read the article

  • Checkbox not being checked when running function

    - by Rudiger
    I have a checkbox that, when it is clicked, submits the form to the server to store the detail. All works fine except that after the form is submitted I update a div to say "submitted", but the checkbox isn't ticked. The page isn't refreshed of course, and upon page refresh it is ticked. I thought I might be able to check the box myself as I'm using jQuery, but I have a lot of these checkboxes, each with a dynamic name, so I'm not sure how I would select them. I thought something like: $('input[name=("favourite" + id)]').attr('checked', true); might work, but no luck. If I don't call the function on the checkbox being ticked, the checkbox behaves normally. Thanks for anything that could help.
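
    The selector fails because the concatenation sits inside the quoted attribute string, so jQuery looks for a checkbox literally named ("favourite" + id). A sketch of the fix, building the selector string first:

        // concatenate outside the quotes so the real name is matched
        $('input[name="favourite' + id + '"]').attr('checked', true);

        // on jQuery 1.6+, checkedness is a property rather than an attribute:
        // $('input[name="favourite' + id + '"]').prop('checked', true);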

    Read the article

  • Size Mismatch Between Swing Fonts and Printable Fonts in Java

    - by Caleb Rackliffe
    Hi All, I've got a panel displaying a JTextPane backed by a StyledDocument. When I print a string of text in, say Arial 16, the text it prints is of a different size than the Arial 16 Word prints. Is there some sort of flaw in the translation of Swing fonts to Windows system fonts or something of the sort that makes it difficult (or impossible) to print accurately? I can achieve an approximation by scaling down the size of my font before printing, but this never quite gets me the results I would like, as it's not possible in all cases to reproduce things like equivalent numbers of words on a line, etc. Has anybody run into this before?
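
    One plausible explanation: Java lays out screen fonts against the screen resolution (commonly 96 dpi on Windows), while printing assumes 72 points per inch, so the same nominal point size comes out larger on paper than what Word prints. A hedged sketch of the usual correction, scaling the size by 72 over the reported screen dpi (the class name is made up):

        import java.awt.Font;
        import java.awt.Toolkit;

        class PrintFontScaling {
            // Scale a screen font so its printed size matches the point
            // size a word processor would use for the same number.
            static Font forPrinting(Font screenFont) {
                int dpi = Toolkit.getDefaultToolkit().getScreenResolution();
                return screenFont.deriveFont(screenFont.getSize2D() * 72f / dpi);
            }
        }

    As the asker notes, this still won't reproduce identical line breaks, since glyph metrics differ between the screen and printer renderings.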

    Read the article

  • Device Driver IRQL and Thread/Context Switches

    - by Christian Hoglund
    Hi, I'm new to Windows device driver programming. I know that certain operations can only be performed at IRQL PASSIVE_LEVEL. For example, Microsoft has this sample code of how to write to a file from a kernel driver: if (KeGetCurrentIrql() != PASSIVE_LEVEL) return STATUS_INVALID_DEVICE_STATE; Status = ZwCreateFile(...); My question is this: what is preventing the IRQL from being raised after the KeGetCurrentIrql() check above? Say a context or thread switch occurs, couldn't the IRQL suddenly be DISPATCH_LEVEL when it gets back to my driver, which would then result in a system crash? If this is NOT possible, then why not just check the IRQL in the DriverEntry function and be done with it once and for all?

    Read the article

  • How to write Haskell function to verify parentheses matching?

    - by Rizo
    I need to write a function par :: String -> Bool to verify whether the parentheses in a given string match, using a stack module. For example: par "(((()[()])))" = True and par "((]())" = False. Here's my stack module implementation:

        module Stack (Stack, push, pop, top, empty, isEmpty) where

        data Stack a = Stk [a] deriving (Show)

        push :: a -> Stack a -> Stack a
        push x (Stk xs) = Stk (x:xs)

        pop :: Stack a -> Stack a
        pop (Stk (_:xs)) = Stk xs
        pop _ = error "Stack.pop: empty stack"

        top :: Stack a -> a
        top (Stk (x:_)) = x
        top _ = error "Stack.top: empty stack"

        empty :: Stack a
        empty = Stk []

        isEmpty :: Stack a -> Bool
        isEmpty (Stk []) = True
        isEmpty (Stk _)  = False

    So I need to implement a par function that tests a string of parentheses and says whether they match. How can I do that using a stack?
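
    A sketch of one way to do it with exactly the module above: push every opener, require a matching opener on top of the stack for each closer, and accept only if the stack is empty at the end.

        import Stack

        par :: String -> Bool
        par = go empty
          where
            go stk []         = isEmpty stk
            go stk (c:cs)
              | c `elem` "([" = go (push c stk) cs
              | c == ')'      = closes '(' stk cs
              | c == ']'      = closes '[' stk cs
              | otherwise     = go stk cs
            -- a closer is valid only if its opener is on top of the stack
            closes o stk cs = not (isEmpty stk) && top stk == o && go (pop stk) cs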

    Read the article

  • Composite Key Dictionary

    - by AaronLS
    I have some objects in a List, let's say List<MyClass>, and MyClass has several properties. I would like to create an index of the list based on 3 properties of MyClass. In this case 2 of the properties are ints, and one property is a DateTime. Basically I would like to be able to do something like: Dictionary<CompositeKey, MyClass> MyClassListIndex = new Dictionary<CompositeKey, MyClass>(); //Populate dictionary with items from the List<MyClass> MyClassList MyClass aMyClass = MyClassListIndex[keyTripletHere]; I sometimes create multiple dictionaries on a list to index different properties of the classes it holds. I am not sure how best to handle composite keys, though. I considered doing a checksum of the three values, but this runs the risk of collisions.
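
    A sketch using .NET 4's Tuple, whose Equals and GetHashCode already combine the components by value, so no hand-rolled checksum (and no collision handling) is needed. The property names here are invented:

        // Index the list by (int, int, DateTime) without writing a key class.
        var index = new Dictionary<Tuple<int, int, DateTime>, MyClass>();
        foreach (MyClass item in MyClassList)
            index[Tuple.Create(item.KeyA, item.KeyB, item.KeyDate)] = item;

        // Lookup uses a structurally equal tuple, not the same instance.
        MyClass aMyClass = index[Tuple.Create(5, 7, DateTime.Today)];

    On pre-4.0 frameworks the same idea works with a small immutable struct that overrides Equals and GetHashCode.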

    Read the article

  • Should I learn C?

    - by Justin Standard
    Original Question: Should I Learn C? In the theme of the stackoverflow podcast, here's a fun question: should I learn C? I expect Jeff & Joel will have something to say on this. Some info on my background: Primarily a Java programmer on "enterprisy" systems. Favorite languages: python, scheme. 7 years programming experience. A very small amount of C++ experience, practically no C experience. No immediate "need" to learn C. So should I learn C? If so, why? If not, why? C or Assembly? Lots of folks are recommending Assembler, so an add-on question: Is it better to learn C or Assembler? If Assembler, which one? Recommended assemblers so far: Motorola 68000, Intel Assembler (does he mean x86?), MASM32

    Read the article

  • BlackBerry Simulator & BIS Push Service

    - by Submerged
    I am hoping that someone knows if you can use RIM's push service with BIS WITHOUT a handheld device. I have registered for the push evaluation and I want to program a push server that will send out notifications to BB clients. I got my email this morning containing: Server: Application: XXXXXXXXXXXXXXXXXXX Pwd: xxxXXXXX CPID (Content Provider ID):xxx Start Date (MM/DD/YYYY): X/X/XXXX Expiry Date (MM/DD/YYYY):X/X/XXXX First Name:XXXXXXX Last Name:XXXXX Email:[email protected] Account Type:Plus Source IP:xxx.xxx.xxx.xxx Usage:BIS AND Client: Application Credentials (for use in your client application): Application ID:XXX-xxxxxxxxxxxxxxx Push Port:xxxxx I am hoping someone can tell me where to get started - as an iPhone developer, I have to say, there is much more information on Apple's side. Lastly, if I DO need a device, does that device have to have a data plan? I wanted to be able to serve my clients from WiFi as well; does the BB push system work only on cell networks? Thank you

    Read the article

  • Launch market place with id of an application that doesn't exist in the android market place

    - by Gaurav
    Hi, I am creating an application that checks the installation of a package and then launches the market-place with its id. When I try to launch the market place with the id of an application, say com.mybrowser.android, by throwing an intent android.intent.action.VIEW with the url market://details?id=com.mybrowser.android, the market place application does launch but crashes afterwards. Note: the application com.mybrowser.android doesn't exist in the market-place. MyApplication is my application. $ adb logcat I/ActivityManager( 1030): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10200000 cmp=myapp.testapp/.MyApplication } I/ActivityManager( 1030): Start proc myapp.testapp for activity myapp.testapp/.MyApplication: pid=3858 uid=10047 gids={1015, 3003} I/MyApplication( 3858): [ Activity CREATED ] I/MyApplication( 3858): [ Activity STARTED ] I/MyApplication( 3858): onResume D/dalvikvm( 1109): GC freed 6571 objects / 423480 bytes in 73ms I/MyApplication( 3858): Pressed OK button I/MyApplication( 3858): Broadcasting Intent: android.intent.action.VIEW, data: market://details?id=com.mybrowser.android I/ActivityManager( 1030): Starting activity: Intent { act=android.intent.action.VIEW dat=market://details?id=com.mybrowser.android flg=0x10000000 cmp=com.android.vending/.AssetInfoActivity } I/MyApplication( 3858): onPause I/ActivityManager( 1030): Start proc com.android.vending for activity com.android.vending/.AssetInfoActivity: pid=3865 uid=10023 gids={3003} I/ActivityThread( 3865): Publishing provider com.android.vending.SuggestionsProvider: com.android.vending.SuggestionsProvider D/dalvikvm( 1030): GREF has increased to 701 I/vending ( 3865): com.android.vending.api.RadioHttpClient$1.handleMessage(): Handle DATA_STATE_CHANGED event: NetworkInfo: type: WIFI[], state: CONNECTED/CONNECTED, reason: (unspecified), extra: (none), roaming: false, failover: false, isAvailable: true I/ActivityManager( 1030): Displayed activity com.android.vending/.AssetInfoActivity: 609 ms (total 7678 ms) D/dalvikvm( 1030): GC freed 10458 objects / 676440 bytes in 128ms I/MyApplication( 3858): [ Activity STOPPED ] D/dalvikvm( 3865): GC freed 3538 objects / 254008 bytes in 84ms W/dalvikvm( 3865): threadid=19: thread exiting with uncaught exception (group=0x4001b180) E/AndroidRuntime( 3865): Uncaught handler: thread AsyncTask #1 exiting due to uncaught exception E/AndroidRuntime( 3865): java.lang.RuntimeException: An error occured while executing doInBackground() E/AndroidRuntime( 3865): at android.os.AsyncTask$3.done(AsyncTask.java:200) E/AndroidRuntime( 3865): at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:273) E/AndroidRuntime( 3865): at java.util.concurrent.FutureTask.setException(FutureTask.java:124) E/AndroidRuntime( 3865): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:307) E/AndroidRuntime( 3865): at java.util.concurrent.FutureTask.run(FutureTask.java:137) E/AndroidRuntime( 3865): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1068) E/AndroidRuntime( 3865): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:561) E/AndroidRuntime( 3865): at java.lang.Thread.run(Thread.java:1096) E/AndroidRuntime( 3865): Caused by: java.lang.NullPointerException E/AndroidRuntime( 3865): at com.android.vending.AssetItemAdapter$ReloadLocalAssetInformationTask.doInBackground(AssetItemAdapter.java:845) E/AndroidRuntime( 3865): at 
com.android.vending.AssetItemAdapter$ReloadLocalAssetInformationTask.doInBackground(AssetItemAdapter.java:831) E/AndroidRuntime( 3865): at android.os.AsyncTask$2.call(AsyncTask.java:185) E/AndroidRuntime( 3865): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305) E/AndroidRuntime( 3865): ... 4 more I/Process ( 1030): Sending signal. PID: 3865 SIG: 3 I/dalvikvm( 3865): threadid=7: reacting to signal 3 I/dalvikvm( 3865): Wrote stack trace to '/data/anr/traces.txt' I/DumpStateReceiver( 1030): Added state dump to 1 crashes D/AndroidRuntime( 3865): Shutting down VM W/dalvikvm( 3865): threadid=3: thread exiting with uncaught exception (group=0x4001b180) E/AndroidRuntime( 3865): Uncaught handler: thread main exiting due to uncaught exception E/AndroidRuntime( 3865): java.lang.NullPointerException E/AndroidRuntime( 3865): at com.android.vending.controller.AssetInfoActivityController.getIdDeferToLocal(AssetInfoActivityController.java:637) E/AndroidRuntime( 3865): at com.android.vending.AssetInfoActivity.displayAssetInfo(AssetInfoActivity.java:556) E/AndroidRuntime( 3865): at com.android.vending.AssetInfoActivity.access$800(AssetInfoActivity.java:74) E/AndroidRuntime( 3865): at com.android.vending.AssetInfoActivity$LoadAssetInfoAction$1.run(AssetInfoActivity.java:917) E/AndroidRuntime( 3865): at android.os.Handler.handleCallback(Handler.java:587) E/AndroidRuntime( 3865): at android.os.Handler.dispatchMessage(Handler.java:92) E/AndroidRuntime( 3865): at android.os.Looper.loop(Looper.java:123) E/AndroidRuntime( 3865): at android.app.ActivityThread.main(ActivityThread.java:4363) E/AndroidRuntime( 3865): at java.lang.reflect.Method.invokeNative(Native Method) E/AndroidRuntime( 3865): at java.lang.reflect.Method.invoke(Method.java:521) E/AndroidRuntime( 3865): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860) E/AndroidRuntime( 3865): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618) E/AndroidRuntime( 3865): at dalvik.system.NativeStart.main(Native Method) I/Process ( 1030): Sending signal. PID: 3865 SIG: 3 W/ActivityManager( 1030): Process com.android.vending has crashed too many times: killing! D/ActivityManager( 1030): Force finishing activity com.android.vending/.AssetInfoActivity I/dalvikvm( 3865): threadid=7: reacting to signal 3 D/ActivityManager( 1030): Force removing process ProcessRecord{44e48548 3865:com.android.vending/10023} (com.android.vending/10023) However, when I try to launch the market place for a package that exists in the market place say com.opera.mini.android, everything works. Log for this case: D/dalvikvm( 966): GC freed 2781 objects / 195056 bytes in 99ms I/MyApplication( 1165): Pressed OK button I/MyApplication( 1165): Broadcasting Intent: android.intent.action.VIEW, data: market://details?id=com.opera.mini.android I/ActivityManager( 78): Starting activity: Intent { act=android.intent.action.VIEW dat=market://details?id=com.opera.mini.android flg=0x10000000 cmp=com.android.vending/.AssetInfoActivity } I/AndroidRuntime( 1165): AndroidRuntime onExit calling exit(0) I/WindowManager( 78): WIN DEATH: Window{44c72308 myapp.testapp/myapp.testapp.MyApplication paused=true} I/ActivityManager( 78): Process myapp.testapp (pid 1165) has died. 
I/WindowManager( 78): WIN DEATH: Window{44c72958 myapp.testapp/myapp.testapp.MyApplication paused=false} D/dalvikvm( 78): GC freed 31778 objects / 1796368 bytes in 142ms I/ActivityManager( 78): Displayed activity com.android.vending/.AssetInfoActivity: 214 ms (total 22866 ms) W/KeyCharacterMap( 978): No keyboard for id 65540 W/KeyCharacterMap( 978): Using default keymap: /system/usr/keychars/qwerty.kcm.bin V/RenderScript_jni( 966): surfaceCreated V/RenderScript_jni( 966): surfaceChanged V/RenderScript( 966): setSurface 480 762 0x573430 D/ViewFlipper( 966): updateRunning() mVisible=true, mStarted=true, mUserPresent=true, mRunning=true D/dalvikvm( 978): GC freed 10065 objects / 624440 bytes in 95ms Any ideas? Thanks in advance!
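
    For what it's worth, the NullPointerException above is raised inside com.android.vending itself (AssetInfoActivity), so the calling app cannot catch it; the Market client on this image simply fails on an unknown package id. A defensive sketch for the calling side, which at least confirms a Market client exists and falls back to the web catalogue otherwise (this guards against a missing Market app, not against the Market app's own crash):

        // inside an Activity
        Intent market = new Intent(Intent.ACTION_VIEW,
                Uri.parse("market://details?id=com.mybrowser.android"));
        if (getPackageManager().resolveActivity(market, 0) != null) {
            startActivity(market);
        } else {
            // no Market app available: fall back to the web listing
            startActivity(new Intent(Intent.ACTION_VIEW,
                    Uri.parse("http://market.android.com/details?id=com.mybrowser.android")));
        }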

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process: Putting your database into a source control system, and, Running a continuous integration process, each time changes are checked in. However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above: Putting some actual integration tests into the CI process (otherwise, they don’t really do much, do they!?), Deploying the database changes with a managed, automated approach, Monitoring what you’ve just put live, to make sure you haven’t broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!   Background on Continuous Delivery For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage Versioning Databases – Branching and Merging, by Scott Allen In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production.   I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure. 
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
    First of all I need to add some data into my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right-click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right-click on the table (dbo.Email) and select the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t, of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really those of the Fab Four) – but just a very basic check of the format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
    This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

        -- Basic email check test
        ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            SET NOCOUNT ON;

            DECLARE @Output varchar(max);
            SET @Output = '';

            SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
            FROM dbo.Email
            WHERE Email NOT LIKE '%_@__%.__%';

            IF @Output > ''
            BEGIN
                SET @Output = CHAR(13) + CHAR(10) + @Output;
                EXEC tSQLt.Fail @Output;
            END
        END;

    Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it’s worth just checking that it works! For a positive test, click on “SQL Test” from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you’ll see the following: Great – we now know that our test is really doing something! You’ll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client):   Checking that the Tests are Running as Integration Tests After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn’t want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we’re continuously checking any changes we make (in this case, to the reference data emails), to ensure we’re not breaking a test that we’ve set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder: Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client: <!-- tSQLt tests --> <!-- Optional --> <!-- To run tSQLt tests in source control for the database, enter true. --> <enableTsqlt>true</enableTsqlt> Now, if we re-run the build in Bamboo (NB: I’ve moved to a new server here, hence the different address and build number): - superb, a broken build!! The error message isn’t great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:   21-Jun-2013 11:35:19 Build FAILED. 
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  

    Read the article

  • Convert Environment.OSVersion to NTDDI version format

    - by David Brown
    In sdkddkver.h of the Windows Platform SDK, there are various OS versions defined as NTDDI_*. For example, Windows XP and its service packs are defined as: #define NTDDI_WINXP 0x05010000 #define NTDDI_WINXPSP1 0x05010100 #define NTDDI_WINXPSP2 0x05010200 #define NTDDI_WINXPSP3 0x05010300 #define NTDDI_WINXPSP4 0x05010400 There are also masks which, along with the OSVER, SPVER, and SUBVER macros, allow you to pull the respective parts out of the NTDDI version. #define OSVERSION_MASK 0xFFFF0000 #define SPVERSION_MASK 0x0000FF00 #define SUBVERSION_MASK 0x000000FF I have all of these defined as constants in C# and what I'd like to do now is convert the data returned by Environment.OSVersion to a value corresponding to one of the NTDDI versions in sdkddkver.h. I could make a massive switch statement, but that's not really as future-proof as I'd like it to be. I would need to update the conversion method every time a new OS or service pack is released. I have a feeling this could be done with the help of some bitwise operators, but I'll be honest and say that those aren't my strong point. I appreciate any help!
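
    A sketch of the bitwise route: pack major, minor and service pack into the NTDDI layout (major << 24 | minor << 16 | sp << 8), which reproduces values like NTDDI_WINXPSP2 = 0x05010200 directly from the version numbers. Parsing the service pack number out of Environment.OSVersion.ServicePack assumes its usual "Service Pack N" format.

        using System;

        static class NtddiVersion
        {
            // For XP SP2 (version 5.1, "Service Pack 2") this yields 0x05010200.
            public static uint FromOSVersion(OperatingSystem os)
            {
                uint sp = 0;
                foreach (char c in os.ServicePack)
                {
                    if (char.IsDigit(c)) { sp = (uint)(c - '0'); break; }
                }
                return ((uint)os.Version.Major << 24)
                     | ((uint)os.Version.Minor << 16)
                     | (sp << 8);
            }
        }

    Masking the result with OSVERSION_MASK or SPVERSION_MASK then recovers the parts, mirroring the OSVER and SPVER macros, and no switch statement needs updating for future OS releases.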

    Read the article

  • Tutorial: Simple WCF XML-RPC client

    - by mr.b
    Update: I have provided a complete code example in my answer below. I have built my own little custom XML-RPC server, and since I'd like to keep things simple on both the server and client side, what I would like to accomplish is to create the simplest possible client (in C# preferably) using WCF. Let's say that the contract for the service exposed via XML-RPC is as follows: [ServiceContract] public interface IContract { [OperationContract(Action="Ping")] string Ping(); // server returns back string "Pong" [OperationContract(Action="Echo")] string Echo(string message); // server echoes back whatever message is } So, there are two example methods, one without any arguments and another with a simple string argument, both returning strings (just for the sake of example). The service is exposed via HTTP. Aaand, what's next? :)

    Read the article

  • When is a SQL function not a function?

    - by Rob Farley
    Should SQL Server even have functions? (Oh yeah – this is a T-SQL Tuesday post, hosted this month by Brad Schulz) Functions serve an important part of programming, in almost any language. A function is a piece of code that is designed to return something, as opposed to a piece of code which isn’t designed to return anything (which is known as a procedure). SQL Server is no different. You can call stored procedures, even from within other stored procedures, and you can call functions and use these in other queries. Stored procedures might query something, and therefore ‘return data’, but a function in SQL is considered to have the type of the thing returned, and can be used accordingly in queries. Consider the internal GETDATE() function. SELECT GETDATE(), SomeDatetimeColumn FROM dbo.SomeTable; There’s no logical difference between the field that is being returned by the function and the field that’s being returned by the table column. Both are the datetime field – if you didn’t have inside knowledge, you wouldn’t necessarily be able to tell which was which. And so as developers, we find ourselves wanting to create functions that return all kinds of things – functions which look up values based on codes, functions which do string manipulation, and so on. But it’s rubbish. Ok, it’s not all rubbish, but it mostly is. And this isn’t even considering the SARGability impact. It’s far more significant than that. (When I say the SARGability aspect, I mean “because you’re unlikely to have an index on the result of some function that’s applied to a column, so try to invert the function and query the column in an unchanged manner”) I’m going to consider the three main types of user-defined functions in SQL Server: Scalar Inline Table-Valued Multi-statement Table-Valued I could also look at user-defined CLR functions, including aggregate functions, but not today. I figure that most people don’t tend to get around to doing CLR functions, and I’m going to focus on the T-SQL-based user-defined functions. Most people split these types of function up into two types. So do I. Except that most people pick them based on ‘scalar or table-valued’. I’d rather go with ‘inline or not’. If it’s not inline, it’s rubbish. It really is. Let’s start by considering the two kinds of table-valued function, and compare them. These functions are going to return the sales for a particular salesperson in a particular year, from the AdventureWorks database. 
CREATE FUNCTION dbo.FetchSales_inline(@salespersonid int, @orderyear int) RETURNS TABLE AS  RETURN (     SELECT e.LoginID as EmployeeLogin, o.OrderDate, o.SalesOrderID     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101') ) ; GO CREATE FUNCTION dbo.FetchSales_multi(@salespersonid int, @orderyear int) RETURNS @results TABLE (     EmployeeLogin nvarchar(512),     OrderDate datetime,     SalesOrderID int     ) AS BEGIN     INSERT @results (EmployeeLogin, OrderDate, SalesOrderID)     SELECT e.LoginID, o.OrderDate, o.SalesOrderID     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')     ;     RETURN END ; GO You’ll notice that I’m being nice and responsible with the use of the DATEADD function, so that I have SARGability on the OrderDate filter. Regular readers will be hoping I’ll show what’s going on in the execution plans here. Here I’ve run two SELECT * queries with the “Show Actual Execution Plan” option turned on. Notice that the ‘Query cost’ of the multi-statement version is just 2% of the ‘Batch cost’. But also notice there’s trickery going on. And it’s nothing to do with that extra index that I have on the OrderDate column. Trickery. Look at it – clearly, the first plan is showing us what’s going on inside the function, but the second one isn’t. The second one is blindly running the function, and then scanning the results. There’s a Sequence operator which is calling the TVF operator, and then calling a Table Scan to get the results of that function for the SELECT operator. But surely it still has to do all the work that the first one is doing... To see what’s actually going on, let’s look at the Estimated plan. Now, we see the same plans (almost) that we saw in the Actuals, but we have an extra one – the one that was used for the TVF. Here’s where we see the inner workings of it. You’ll probably recognise the right-hand side of the TVF’s plan as looking very similar to the first plan – but it’s now being called by a stack of other operators, including an INSERT statement to be able to populate the table variable that the multi-statement TVF requires. And the cost of the TVF is 57% of the batch! But it gets worse. Let’s consider what happens if we don’t need all the columns. We’ll leave out the EmployeeLogin column. Here, we see that the inline function call has been simplified down. It doesn’t need the Employee table. The join is redundant and has been eliminated from the plan, making it even cheaper. But the multi-statement plan runs the whole thing as before, only removing the extra column when the Table Scan is performed. A multi-statement function is a lot more powerful than an inline one. An inline function can only be the result of a single sub-query. It’s essentially the same as a parameterised view, because views demonstrate this same behaviour of extracting the definition of the view and using it in the outer query. A multi-statement function is clearly more powerful because it can contain far more complex logic. But a multi-statement function isn’t really a function at all. It’s a stored procedure. 
It’s wrapped up like a function, but behaves like a stored procedure. It would be completely unreasonable to expect that a stored procedure could be simplified down to recognise that not all the columns might be needed, but yet this is part of the pain associated with this procedural function situation. The biggest clue that a multi-statement function is more like a stored procedure than a function is the “BEGIN” and “END” statements that surround the code. If you try to create a multi-statement function without these statements, you’ll get an error – they are very much required. When I used to present on this kind of thing, I even used to call it “The Dangers of BEGIN and END”, and yes, I’ve written about this type of thing before in a similarly-named post over at my old blog. Now how about scalar functions... Suppose we wanted a scalar function to return the count of these. CREATE FUNCTION dbo.FetchSales_scalar(@salespersonid int, @orderyear int) RETURNS int AS BEGIN     RETURN (         SELECT COUNT(*)         FROM Sales.SalesOrderHeader AS o         LEFT JOIN HumanResources.Employee AS e         ON e.EmployeeID = o.SalesPersonID         WHERE o.SalesPersonID = @salespersonid         AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')         AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')     ); END ; GO Notice the evil words? They’re required. Try to remove them, you just get an error. That’s right – any scalar function is procedural, despite the fact that you wrap up a sub-query inside that RETURN statement. It’s as ugly as anything. Hopefully this will change in future versions. Let’s have a look at how this is reflected in an execution plan. Here’s a query, its Actual plan, and its Estimated plan: SELECT e.LoginID, y.year, dbo.FetchSales_scalar(p.SalesPersonID, y.year) AS NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID; We see here that the cost of the scalar function is about twice that of the outer query. Nicely, the query optimizer has worked out that it doesn’t need the Employee table, but that’s a bit of a red herring here. There’s actually something way more significant going on. If I look at the properties of that UDF operator, it tells me that the Estimated Subtree Cost is 0.337999. If I just run the query SELECT dbo.FetchSales_scalar(281,2003); we see that the UDF cost is still unchanged. You see, this 0.0337999 is the cost of running the scalar function ONCE. But when we ran that query with the CROSS JOIN in it, we returned quite a few rows. 68 in fact. Could’ve been a lot more, if we’d had more salespeople or more years. And so we come to the biggest problem. This procedure (I don’t want to call it a function) is getting called 68 times – each one between twice as expensive as the outer query. And because it’s calling it in a separate context, there is even more overhead that I haven’t considered here. The cheek of it, to say that the Compute Scalar operator here costs 0%! I know a number of IT projects that could’ve used that kind of costing method, but that’s another story that I’m not going to go into here. Let’s look at a better way. Suppose our scalar function had been implemented as an inline one. Then it could have been expanded out like a sub-query. 
It could’ve run something like this: SELECT e.LoginID, y.year, (SELECT COUNT(*)     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = p.SalesPersonID     AND o.OrderDate >= DATEADD(year,y.year-2000,'20000101')     AND o.OrderDate < DATEADD(year,y.year-2000+1,'20000101')     ) AS NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID; Don’t worry too much about the Scan of the SalesOrderHeader underneath a Nested Loop. If you remember from plenty of other posts on the matter, execution plans don’t push the data through. That Scan only runs once. The Index Spool sucks the data out of it and populates a structure that is used to feed the Stream Aggregate. The Index Spool operator gets called 68 times, but the Scan only once (the Number of Executions property demonstrates this). Here, the Query Optimizer has a full picture of what’s being asked, and can make the appropriate decision about how it accesses the data. It can simplify it down properly. To get this kind of behaviour from a function, we need it to be inline. But without inline scalar functions, we need to make our function be table-valued. Luckily, that’s ok. CREATE FUNCTION dbo.FetchSales_inline2(@salespersonid int, @orderyear int) RETURNS table AS RETURN (SELECT COUNT(*) as NumSales     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101') ); GO But we can’t use this as a scalar. Instead, we need to use it with the APPLY operator. SELECT e.LoginID, y.year, n.NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID OUTER APPLY dbo.FetchSales_inline2(p.SalesPersonID, y.year) AS n; And now, we get the plan that we want for this query. All we’ve done is tell the function that it’s returning a table instead of a single value, and removed the BEGIN and END statements. We’ve had to name the column being returned, but what we’ve gained is an actual inline simplifiable function. And if we wanted it to return multiple columns, it could do that too. I really consider this function to be superior to the scalar function in every way. It does need to be handled differently in the outer query, but in many ways it’s a more elegant method there too. The function calls can be put amongst the FROM clause, where they can then be used in the WHERE or GROUP BY clauses without fear of calling the function multiple times (another horrible side effect of functions). So please. If you see BEGIN and END in a function, remember it’s not really a function, it’s a procedure. And then fix it. @rob_farley

    Read the article

  • Bidirectional Serialization with Linq

    - by Andy
    Anyone know why only unidirectional serialization is supported in the Linq designer? Consider the following example: say we have a Customer who requested an Order containing Products. We set the Serialization Mode in the Linq designer to Unidirectional to enable serialization. When looking at the code for the Order object, the DataMember attribute is added to all its internal properties such as ID, OrderNumber, etc., and also to the EntitySet of Products, but not to Customer. One can get around this by manually adding the DataMember attribute to Customer, but this becomes quite cumbersome when there are loads of entities in the database.

    Read the article

  • UpdatePanel + ToolkitScriptManager works in FF but blows up in IE 6+

    - by Hans Gruber
    I just upgraded my ASP.NET application from AjaxToolkit version 1.0 to version 3.5. The only code that I had to change as a result of the upgrade was to replace instances of ScriptManager with ToolKitScriptManager. UpdatePanels that used to work flawlessly in both FF and IE6+ now only work in FF. The specific problem in IE is twofold: PostBackTriggers don't perform any PostBack at all (i.e. button clicks do nothing), and AsyncPostBackTriggers do perform an async PostBack, but outside of a single hidden field (created by the ToolKitScriptManager itself) no ViewState is being sent back to the server for any controls. Needless to say, controls tend to fail in rather spectacular fashion when they can't access their ViewState during a PostBack. :) The only thing I can think of that would account for this only failing in IE6+ is that there is some malformed JavaScript getting piped down that FF is able to work around/ignore but that causes IE to self-destruct. Downgrading to the 1.0 version of AjaxToolkit would probably fix this issue, but there are several key features in the 3.5 I need to leverage, so this would be painful. Thanks for reading!

    Read the article

  • DAL without web2py

    - by Jay
    I am using web2py to power my web site. I decided to use the web2py DAL for a long-running program that runs behind the site. This program does not seem to update its data or the database (sometimes). from gluon.sql import * from gluon.sql import SQLDB from locdb import * # contains # db = SQLDB("mysql://user/pw@localhost/mydb", pool_size=10) # db.define_table('orders', Field('status', 'integer'), Field('item', 'string'), # migrate='orders.table') orderid = 20 # there is a row with id == 20 in table orders # when I do db(db.orders.id==orderid).update(status=6703) db.commit() It does not update the database, and a select on orders with this id shows the correct data. In some circumstances a "db.rollback()" after a commit seems to help. Very strange, to say the least. Have you seen this? More importantly, do you know the solution? Jay
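
    One explanation consistent with the rollback clue, offered here as a hypothesis rather than from the original thread: under MySQL's default REPEATABLE READ isolation, a long-lived connection keeps reading from the snapshot taken when its current transaction began, so a process that never ends its transaction sees stale data. Ending the transaction before each round of work starts a fresh snapshot:

        # in the long-running loop, end the current transaction first so the
        # next read/write sees the database's current state
        db.commit()                  # or db.rollback() if there is nothing to save
        row = db(db.orders.id == orderid).select().first()
        db(db.orders.id == orderid).update(status=6703)
        db.commit()                  # make the change visible to other connections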

    Read the article

  • Android: Custom view based on layout: how?

    - by Peterdk
    I am building an Android app and I am struggling a bit with custom Views. I would like to have a reusable View that consists of a few standard layout elements. Let's say a RelativeLayout with some buttons in it. How should I proceed? Should I create a custom view class that extends RelativeLayout and programmatically add those buttons? I would think that's a bit of overkill. What's the way to do it properly in Android?
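
    A common middle ground, sketched below: extend RelativeLayout but declare the children in an ordinary layout XML and inflate it into the view, so nothing is built in code. The class and layout names are placeholders.

        import android.content.Context;
        import android.util.AttributeSet;
        import android.view.LayoutInflater;
        import android.widget.RelativeLayout;

        public class ButtonPanel extends RelativeLayout {
            public ButtonPanel(Context context, AttributeSet attrs) {
                super(context, attrs);
                // merge res/layout/button_panel.xml into this RelativeLayout
                LayoutInflater.from(context).inflate(
                        R.layout.button_panel, this, true);
            }
        }

    The view can then be dropped into any other layout as <com.example.ButtonPanel ... /> and reused across the app.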

    Read the article

  • WPF ListView column auto sizing

    - by Diego Mijelshon
    Let's say I have the following ListView: <ListView ScrollViewer.VerticalScrollBarVisibility="Auto"> <ListView.View> <GridView> <GridViewColumn Header="Something" DisplayMemberBinding="{Binding Path=ShortText}" /> <GridViewColumn Header="Description" DisplayMemberBinding="{Binding Path=VeryLongTextWithCRs}" /> <GridViewColumn Header="Something Else" DisplayMemberBinding="{Binding Path=AnotherShortText}" /> </GridView> </ListView.View> </ListView> I'd like the short text columns to always fit in the screen, and the long text column to use the remaining space, word-wrapping if necessary. Is that possible?

    Read the article
