Search Results

Search found 45502 results on 1821 pages for 'open source contribution'.


  • Json.Net CustomSerialization

    - by BonyT
    I am serializing a collection of objects, each of which contains a dictionary called dynamicProperties. The default JSON emitted looks like this: [{"dynamicProperties":{"WatchId":7771,"Issues":0,"WatchType":"x","Location":"Equinix Source","Name":"PI_5570_5580"}}, {"dynamicProperties":{"WatchId":7769,"Issues":0,"WatchType":"x","Location":"Equinix Source","Name":"PI_5570_5574"}}, {"dynamicProperties":{"WatchId":7767,"Issues":0,"WatchType":"x","Location":"Equinix Source","Name":"PI_5570_5572"}}, {"dynamicProperties":{"WatchId":7765,"Issues":0,"WatchType":"y","Location":"Equinix Source","Name":"highlight_SM"}}, {"dynamicProperties":{"WatchId":8432,"Issues":0,"WatchType":"y","Location":"Test Devices","Name":"Cisco1700PI"}}] I'd like to produce JSON that looks like this: [{"WatchId":7771,"Issues":0,"WatchType":"x","Location":"Equinix Source","Name":"PI_5570_5580"}, {"WatchId":7769,"Issues":0,"WatchType":"x","Location":"Equinix Source","Name":"PI_5570_5574"}, {"WatchId":7767,"Issues":0,"WatchType":"x","Location":"Equinix Source","Name":"PI_5570_5572"}, {"WatchId":7765,"Issues":0,"WatchType":"y","Location":"Equinix Source","Name":"highlight_SM"}, {"WatchId":8432,"Issues":0,"WatchType":"y","Location":"Test Devices","Name":"Cisco1700PI"}] From reading the Json.Net documentation it looks like I could build a custom ContractResolver for my class, but I cannot find any details on how to go about this... Can anyone shed any light on the direction I should be looking in?
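    One direction worth exploring (a sketch only, with a hypothetical Watch class standing in for the real type) is a custom JsonConverter rather than a contract resolver; it gives direct control over the emitted shape by writing the dictionary entries as top-level properties:

    ```csharp
    using System;
    using System.Collections.Generic;
    using Newtonsoft.Json;

    // Hypothetical stand-in for the serialized class - adjust names to match yours.
    public class Watch
    {
        public Dictionary<string, object> DynamicProperties { get; set; }
    }

    // Flattens the wrapper: each dictionary entry becomes a top-level JSON property.
    public class WatchConverter : JsonConverter
    {
        public override bool CanConvert(Type objectType)
        {
            return objectType == typeof(Watch);
        }

        public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
        {
            var watch = (Watch)value;
            writer.WriteStartObject();
            foreach (var pair in watch.DynamicProperties)
            {
                writer.WritePropertyName(pair.Key);
                serializer.Serialize(writer, pair.Value);
            }
            writer.WriteEndObject();
        }

        public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
        {
            throw new NotImplementedException(); // one-way converter in this sketch
        }
    }

    // Usage: var json = JsonConvert.SerializeObject(watches, new WatchConverter());
    ```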

    Read the article

  • Arguments? Path filing wrong? [on hold]

    - by user3034947
    I'm working through the Java SE 7 programming activity and I'm having trouble understanding how arguments work. Here's the code: public class CopyFileTree implements FileVisitor<Path> { private Path source; private Path target; public CopyFileTree(Path source, Path target) { this.source = source; this.target = target; } @Override public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) { // Your code goes here Path newdir = target.resolve(source.relativize(dir)); try { Files.copy(dir, newdir); } catch (FileAlreadyExistsException x) { // ignore } catch (IOException x) { System.err.format("Unable to create: %s: %s%n", newdir, x); return SKIP_SUBTREE; } return CONTINUE; } @Override public FileVisitResult visitFile(Path file, BasicFileAttributes attrs ) { // Your code goes here Path newdir = target.resolve(source.relativize(file)); try { Files.copy(file, newdir, REPLACE_EXISTING); } catch (IOException x) { System.err.format("Unable to copy: %s: %s%n", source, x); } return CONTINUE; } @Override public FileVisitResult postVisitDirectory(Path dir, IOException exc ) { return CONTINUE; } @Override public FileVisitResult visitFileFailed(Path file, IOException exc ) { if (exc instanceof FileSystemLoopException) { System.err.println("cycle detected: " + file); } else { System.err.format("Unable to copy: %s: %s%n", file, exc); } return CONTINUE; } } It says that to test this I need to enter arguments in the properties of the project, which I did. Can someone clarify what I'm doing wrong?
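    If the sticking point is how those project-properties arguments reach the visitor, a minimal driver along these lines (a sketch; everything other than CopyFileTree is an assumption) shows the flow - the two arguments arrive in main as args[0] and args[1], become Paths, and are handed to Files.walkFileTree:

    ```java
    import java.io.IOException;
    import java.nio.file.*;
    import java.util.EnumSet;

    public class Copy {
        public static void main(String[] args) throws IOException {
            // args comes from Project Properties -> Run -> Arguments,
            // e.g. "C:\src C:\dest" yields args[0]="C:\src" and args[1]="C:\dest"
            if (args.length < 2) {
                System.err.println("usage: java Copy <source> <target>");
                System.exit(1);
            }
            Path source = Paths.get(args[0]);
            Path target = Paths.get(args[1]);
            Files.walkFileTree(source,
                    EnumSet.of(FileVisitOption.FOLLOW_LINKS),
                    Integer.MAX_VALUE,
                    new CopyFileTree(source, target));
        }
    }
    ```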

    Read the article

  • Gnome-shell freezes. What can I do to keep apps open, like "alt+f2 r" when GS works?

    - by user94592
    This is a repost, because no one had a proper solution. What do you do when gnome-shell freezes? Everything keeps working (music is still playing etc.); just the GUI freezes. I am able to ctrl+alt+f1. What can I do after logging in at the ctrl+alt+f1 terminal in order to restore gnome-shell? If gnome-shell worked, I'd just hit alt+f2 and run r. I usually end up sudo rebooting, which I don't like because I have to reopen my apps.
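    For what it's worth, the commonly suggested recovery (a hedged starting point, not a guaranteed fix) is to do from the text console exactly what "alt+f2 r" does from inside the session - replace the running shell on the frozen display:

    ```sh
    # from the ctrl+alt+f1 console, logged in as the same user;
    # :0 is assumed to be the frozen session's display
    DISPLAY=:0 gnome-shell --replace &
    ```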

    Read the article

  • Visual Studio 2008: disable auto-open on add item(s)

    - by user515
    I have a solution with no projects in it. All I have in the solution are folders. When I add files to a folder in my solution via right click > context menu > Add Existing Item, the new items I add open up by default. This is especially annoying when I have to add multiple files. Please note that the files I am adding are not code files; they are documents and open up in their default application (e.g. .doc opens in Word). Is there any way to disable this auto-open feature?

    Read the article

  • How can I build pyv8 from source on FreeBSD against the v8 port?

    - by Utkonos
    I am unable to build pyv8 from source on FreeBSD. I have installed the /usr/ports/lang/v8 port, and I'm running into the following error. It seems that pyv8 wants to build v8 itself even though v8 is already built and installed. How can I point pyv8 to the already installed location of v8? # python setup.py build Found Google v8 base on V8_HOME , update it to the latest SVN trunk at running build ==================== INFO: Installing or updating GYP... -------------------- INFO: Check out GYP from SVN ... DEBUG: make dependencies ERROR: Check out GYP from SVN failed: code=2 DEBUG: "Makefile", line 43: Missing dependency operator "Makefile", line 45: Need an operator "Makefile", line 46: Need an operator "Makefile", line 48: Need an operator "Makefile", line 50: Need an operator "Makefile", line 52: Need an operator "Makefile", line 54: Missing dependency operator "Makefile", line 56: Need an operator "Makefile", line 58: Missing dependency operator "Makefile", line 60: Need an operator "Makefile", line 62: Missing dependency operator "Makefile", line 64: Need an operator "Makefile", line 66: Missing dependency operator "Makefile", line 68: Need an operator "Makefile", line 70: Missing dependency operator "Makefile", line 72: Need an operator "Makefile", line 73: Missing dependency operator "Makefile", line 75: Need an operator "Makefile", line 77: Missing dependency operator "Makefile", line 79: Need an operator "Makefile", line 81: Missing dependency operator "Makefile", line 83: Need an operator "Makefile", line 85: Missing dependency operator "Makefile", line 87: Need an operator "Makefile", line 89: Need an operator "Makefile", line 91: Missing dependency operator "Makefile", line 93: Need an operator "Makefile", line 95: Need an operator "Makefile", line 97: Need an operator "Makefile", line 99: Missing dependency operator "Makefile", line 101: Need an operator "Makefile", line 103: Missing dependency operator "Makefile", line 105: Need an operator "Makefile", line 107: Missing dependency operator "Makefile", line 109: Need an operator "Makefile", line 111: Missing dependency operator "Makefile", line 113: Need an operator "Makefile", line 115: Missing dependency operator "Makefile", line 117: Need an operator Error expanding embedded variable. ==================== INFO: Patching the GYP scripts INFO: patch the Google v8 build/standalone.gypi file to enable RTTI and C++ Exceptions ==================== INFO: building Google v8 with GYP for x64 platform with release mode -------------------- INFO: build v8 from SVN ... 
DEBUG: make verifyheap=off component=shared_library visibility=on gdbjit=off liveobjectlist=off regexp=native disassembler=off objectprint=off debuggersupport=on extrachecks=off snapshot=on werror=on x64.release ERROR: build v8 from SVN failed: code=2 DEBUG: "Makefile", line 43: Missing dependency operator "Makefile", line 45: Need an operator "Makefile", line 46: Need an operator "Makefile", line 48: Need an operator "Makefile", line 50: Need an operator "Makefile", line 52: Need an operator "Makefile", line 54: Missing dependency operator "Makefile", line 56: Need an operator "Makefile", line 58: Missing dependency operator "Makefile", line 60: Need an operator "Makefile", line 62: Missing dependency operator "Makefile", line 64: Need an operator "Makefile", line 66: Missing dependency operator "Makefile", line 68: Need an operator "Makefile", line 70: Missing dependency operator "Makefile", line 72: Need an operator "Makefile", line 73: Missing dependency operator "Makefile", line 75: Need an operator "Makefile", line 77: Missing dependency operator "Makefile", line 79: Need an operator "Makefile", line 81: Missing dependency operator "Makefile", line 83: Need an operator "Makefile", line 85: Missing dependency operator "Makefile", line 87: Need an operator "Makefile", line 89: Need an operator "Makefile", line 91: Missing dependency operator "Makefile", line 93: Need an operator "Makefile", line 95: Need an operator "Makefile", line 97: Need an operator "Makefile", line 99: Missing dependency operator "Makefile", line 101: Need an operator "Makefile", line 103: Missing dependency operator "Makefile", line 105: Need an operator "Makefile", line 107: Missing dependency operator "Makefile", line 109: Need an operator "Makefile", line 111: Missing dependency operator "Makefile", line 113: Need an operator "Makefile", line 115: Missing dependency operator "Makefile", line 117: Need an operator Error expanding embedded variable. The files that are installed by the v8 port are the following (in /usr/local): bin/d8 include/v8.h include/v8-debug.h include/v8-preparser.h include/v8-profiler.h include/v8-testing.h include/v8stdint.h lib/libv8.so lib/libv8.so.1
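    The repeated "Missing dependency operator" / "Need an operator" lines are the classic signature of BSD make trying to parse a GNU makefile, which suggests the setup script is shelling out to make where gmake is needed. A hedged workaround (untested; V8_HOME is the variable named in the output above):

    ```sh
    # install GNU make from ports so the GNU-style Makefile can be parsed
    cd /usr/ports/devel/gmake && make install clean
    # point pyv8 at the already-installed port instead of letting it fetch and build v8
    export V8_HOME=/usr/local
    python setup.py build
    ```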

    Read the article

  • Creating an ASP.NET report using Visual Studio 2010 - Part 3

    - by rajbk
    We continue building our report in this three part series. Creating an ASP.NET report using Visual Studio 2010 - Part 1 Creating an ASP.NET report using Visual Studio 2010 - Part 2 Adding the ReportViewer control and filter drop downs. Open the source code for index.aspx and add a ScriptManager control. This control is required for the ReportViewer control. Add a DropDownList for the categories and suppliers. Add the ReportViewer control. The markup after these steps is shown below. <div> <asp:ScriptManager ID="smScriptManager" runat="server"> </asp:ScriptManager> <div id="searchFilter"> Filter by: Category : <asp:DropDownList ID="ddlCategories" runat="server" /> and Supplier : <asp:DropDownList ID="ddlSuppliers" runat="server" /> </div> <rsweb:ReportViewer ID="rvProducts" runat="server"> </rsweb:ReportViewer> </div> The design view for index.aspx is shown below. The dropdowns will display the categories and suppliers in the database. Changing the selection in the drop downs will cause the report to be filtered by the selections in the dropdowns. You will see how to do this in the next steps. Attach the RDLC to the ReportViewer control by clicking on the top right of the control, going to ReportViewer Tasks and selecting Products.rdlc. Resize the ReportViewer control by dragging at the bottom right corner. I set mine to 800px x 500px. You can also set this value in source view. Defining the data sources. We will now define the Data Source used to populate the report. Go back to the “ReportViewer Tasks” and select “Choose Data Sources”. Select “New data source…”. Select “Object” and name your Data Source ID “odsProducts”. In the next screen, choose “ProductRepository” as your business object. Choose “GetProductsProjected” in the next screen. The method requires a SupplierID and CategoryID. We will set these so that our data source gets the values from the drop down lists we defined earlier. Set the parameter source to be of type “Control” and set the ControlIDs to be ddlSuppliers and ddlCategories respectively. Your screen will look like this: We are now going to define the data source for our drop downs. Select the ddlCategories drop down and pick “Choose Data Source”. Pick “Object” and give it an id “odsCategories”. In the next screen, choose “ProductRepository”. Select the GetCategories() method in the next screen. Select “CategoryName” and “CategoryID” in the next screen. We are done defining the data source for the Category drop down. Perform the same steps for the Suppliers drop down. Select each dropdown and set AppendDataBoundItems to true and AutoPostBack to true. AppendDataBoundItems is needed because we are going to insert an “All” list item with an empty value. Go to each drop down and add a list item like this: <asp:ListItem Text="All" Value="" />. Finally, double click on each drop down in the designer and add the following code in the code behind. This, along with the AutoPostBack="true" attribute, refreshes the report any time a drop down is changed. protected void ddlCategories_SelectedIndexChanged(object sender, EventArgs e) { rvProducts.LocalReport.Refresh(); } protected void ddlSuppliers_SelectedIndexChanged(object sender, EventArgs e) { rvProducts.LocalReport.Refresh(); } Compile your report and run the page. You should see the report rendered. Note that the tool bar in the ReportViewer control gives you a couple of options including the ability to export the data to Excel, PDF or Word.
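    For reference, the wizard steps above end up generating markup along these lines (IDs match this walkthrough; the parameter names depend on the GetProductsProjected signature, so treat them as placeholders):

    ```aspx
    <asp:ObjectDataSource ID="odsProducts" runat="server"
        TypeName="ProductRepository" SelectMethod="GetProductsProjected">
        <SelectParameters>
            <asp:ControlParameter Name="supplierId" ControlID="ddlSuppliers"
                PropertyName="SelectedValue" />
            <asp:ControlParameter Name="categoryId" ControlID="ddlCategories"
                PropertyName="SelectedValue" />
        </SelectParameters>
    </asp:ObjectDataSource>
    ```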
    Conclusion Through this three part series, we did the following: Created a data layer for use by our RDLC. Created an RDLC using the report wizard and defined a dataset for the report. Used the report design surface to design our report, including adding a chart. Used the ReportViewer control to attach the RDLC. Connected our ReportViewer to a data source and took parameter values from the drop-down lists. Used AutoPostBack to refresh the reports when the dropdown selection was changed. RDLCs allow you to create interactive reports including drill downs and grouping. For even more advanced reports you can use Microsoft® SQL Server™ Reporting Services with RDLs. With RDLs, the report is rendered on the report server instead of the web server. Another nice thing about RDLs is that you can define a parameter list for the report and it gets rendered automatically for you. RDLCs and RDLs both have their advantages and it's best to compare them and choose the right one for your requirements. Download VS2010 RTM Sample project NorthwindReports.zip   Alfred Borden: Are you watching closely?

    Read the article

  • How to export ECC key and Cert from NSS DB and import into JKS keystore and Oracle Wallet

    - by mv
    In this blog I will write about how to extract a cert and key from an NSS DB and import them into a JKS keystore, and then import that JKS keystore into an Oracle Wallet. 1. Set Java Home I pointed it to JRE 1.6.0_22: $ export JAVA_HOME=/usr/java/jre1.6.0_22/ 2. Create a self signed ECC cert in NSS DB I created an NSS DB with a self signed ECC certificate. If you already have an NSS DB with an ECC cert (and key), skip this step. $ export NSS_DIR=/export/home/nss/ $ $NSS_DIR/certutil -N -d . $ $NSS_DIR/certutil -S -x -s "CN=test,C=US" -t "C,C,C" -n ecc-cert -k ec -q nistp192 -d . 3. Export ECC cert and key using pk12util Use the NSS tool pk12util to export this cert and key into a p12 file: $ $NSS_DIR/pk12util -o ecc-cert.p12 -n ecc-cert -d . -W password 4. Use keytool to create a JKS keystore and import this p12 file 4.1 Import the p12 file created above into a JKS keystore: $JAVA_HOME/bin/keytool -importkeystore -srckeystore ecc-cert.p12 -srcstoretype PKCS12 -deststoretype JKS -destkeystore ecc.jks -srcstorepass password -deststorepass password -srcalias ecc-cert -destalias ecc-cert -srckeypass password -destkeypass password -v But an error like the following may be encountered: keytool error: java.security.UnrecoverableKeyException: Get Key failed: EC KeyFactory not available java.security.UnrecoverableKeyException: Get Key failed: EC KeyFactory not available at com.sun.net.ssl.internal.pkcs12.PKCS12KeyStore.engineGetKey(Unknown Source) at java.security.KeyStoreSpi.engineGetEntry(Unknown Source) at java.security.KeyStore.getEntry(Unknown Source) at sun.security.tools.KeyTool.recoverEntry(Unknown Source) at sun.security.tools.KeyTool.doImportKeyStoreSingle(Unknown Source) at sun.security.tools.KeyTool.doImportKeyStore(Unknown Source) at sun.security.tools.KeyTool.doCommands(Unknown Source) at sun.security.tools.KeyTool.run(Unknown Source) at sun.security.tools.KeyTool.main(Unknown Source) Caused by: java.security.NoSuchAlgorithmException: EC KeyFactory not available at java.security.KeyFactory.<init>(Unknown Source) at java.security.KeyFactory.getInstance(Unknown Source) ... 9 more 4.2 Create a new PKCS11 provider If you didn't get the error shown above, skip this step. Since we already have NSS libraries built with ECC, we can create a new PKCS11 provider. Create ${java.home}/jre/lib/security/nss.cfg as follows:
    name = NSS
    nssLibraryDirectory = ${nsslibdir}
    nssDbMode = noDb
    attributes = compatibility
    where nsslibdir should contain NSS libs with ECC support. Add the following line to ${java.home}/jre/lib/security/java.security : security.provider.9=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg Note that for those using Oracle iPlanet Web Server or Oracle Traffic Director, the NSS libs built with ECC are in <ws_install_dir>/lib or <otd_install_dir>/lib. 4.3. Now keytool should work Now you can try the same keytool command and see that it succeeds: $JAVA_HOME/bin/keytool -importkeystore -srckeystore ecc-cert.p12 -srcstoretype PKCS12 -deststoretype JKS -destkeystore ecc.jks -srcstorepass password -deststorepass password -srcalias ecc-cert -destalias ecc-cert -srckeypass password -destkeypass password -v [Storing ecc.jks] 5. Convert JKS keystore into an Oracle Wallet You can export this cert and key from the JKS keystore and import them into an Oracle Wallet if needed, using the orapki tool as shown below.
    Make sure that the orapki you use supports ECC. Also, for ECC you MUST use the "-jsafe" option. $ orapki wallet create -pwd password -wallet . -jsafe $ orapki wallet jks_to_pkcs12 -wallet . -pwd password -keystore ecc.jks -jkspwd password -jsafe $ orapki wallet display -wallet . -pwd welcome1 -jsafe
    Oracle PKI Tool : Version 11.1.2.0.0
    Copyright (c) 2004, 2012, Oracle and/or its affiliates. All rights reserved.
    Requested Certificates:
    User Certificates:
    Subject: CN=test,C=US
    Trusted Certificates:
    Subject: OU=Class 3 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
    Subject: CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE Corporation,C=US
    Subject: OU=Class 2 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
    Subject: OU=Class 1 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
    Subject: CN=test,C=US
    As you can see, our ECC cert is in the wallet. You can follow the same steps for RSA certs as well. 6. References http://icedtea.classpath.org/bugzilla/show_bug.cgi?id=356 http://old.nabble.com/-PATCH-FOR-REVIEW-%3A-Support-PKCS11-cryptography-via-NSS-p25282932.html http://www.mozilla.org/projects/security/pki/nss/tools/pk12util.html

    Read the article

  • Why does my Belkin wireless router have eMule ports open?

    - by Jeremy Powell
    I have a Belkin F6D4230-4 v1 router. When I port scan it with nmap I get the following: $ sudo nmap -sS -A -T5 192.168.2.1 -p- Starting Nmap 5.00 ( http://nmap.org ) at 2010-04-17 11:40 CDT Interesting ports on 192.168.2.1: Not shown: 65532 closed ports PORT STATE SERVICE VERSION 80/tcp open http Belkin 2307 wifi router http config (IP_SHARER httpd 1.0) |_ html-title: '+i1+' 4661/tcp filtered unknown 4662/tcp filtered edonkey MAC Address: 00:22:75:5D:52:D8 (Belkin International) Device type: WAP|broadband router|firewall|printer|specialized|webcam Running (JUST GUESSING) : Linksys embedded (95%), TRENDnet embedded (95%), Netgear embedded (92%), Canon embedded (89%), On Time RTOS (89%), Symantec embedded (89%), D-Link embedded (86%), Polycom embedded (85%) Aggressive OS guesses: Linksys WRT54GC or TRENDnet TEW-431BRP wireless broadband router (95%), TRENDnet TW100-BRF114 broadband router (95%), Netgear FR114P ProSafe VPN firewall (92%), Canon PIXMA MX850 printer (89%), On Time RTOS (89%), Symantec Firewall/VPN 100 (89%), D-Link DI-714P+ wireless broadband router (86%), Polycom ViewStation video conferencing system (85%) No exact OS matches for host (test conditions non-ideal). Network Distance: 1 hop Service Info: Device: WAP OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ . Nmap done: 1 IP address (1 host up) scanned in 21.57 seconds Why are the 4661 and 4662 ports open? This is a basic, out-of-the-box installation.

    Read the article

  • What port should I open for MySQL master-master replication?

    - by Vanddel
    I have two servers running php5-fpm and a load balancer running nginx; the three servers share /var/www/drupal using NFS. NFS is working correctly. I replicated the two servers' database using MySQL master-master replication. Everything was working fine until I added my iptables rules. In my iptables script, I first drop all chains, then I accept the ones I want; other than that there are no other drop statements. I opened port 3306 for MySQL replication like this (the rule is on both servers): iptables -A INPUT -p tcp -s $ip_Of_Other_Server --dport 3306 -j ACCEPT iptables -A OUTPUT -p tcp -d $ip_Of_Other_Server --sport 3306 -j ACCEPT The problem is, when I run both servers and I try to log in using my account on Drupal, it doesn't log in, although I find a successful login attempt in Drupal's logs. When I run only one of the servers I can log in normally. When I allow everything in my iptables rules it works normally. I believe there's some port I need to open using iptables for the replication to work correctly, but I can't find which one to open.
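    One thing that stands out (a guess, since the full ruleset isn't shown): the OUTPUT rule only matches packets leaving from local port 3306, but connections this server initiates to the peer's 3306 leave from an ephemeral port, so they never match it. Allowing stateful return traffic usually covers both directions; a sketch assuming the state/conntrack module is available:

    ```sh
    # accept return traffic for connections either side has established
    iptables -A INPUT  -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # inbound replication connections from the peer
    iptables -A INPUT  -p tcp -s $ip_Of_Other_Server --dport 3306 -j ACCEPT
    # outbound replication connections to the peer
    iptables -A OUTPUT -p tcp -d $ip_Of_Other_Server --dport 3306 -j ACCEPT
    ```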

    Read the article

  • SQL Server 2008 Bring Database Online trying to open a file from a drive that doesn't exist

    - by Nai
    This is the error I am facing: TITLE: Microsoft.SqlServer.Smo Set offline failed for Database 'Go3D_Retailer' ------------------------------ ADDITIONAL INFORMATION: An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo) Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\ftrow_Go3D_catalog.ndf". Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)". Database 'Go3D_Retailer' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details. ALTER DATABASE statement failed. (Microsoft SQL Server, Error: 5120) Background to this error: I've been trying to move my destination log-shipping database to another physical server for analysis purposes. Because I do not have domain keys and Active Directory set up, I had to hack my process by using the same username/password for both the source and destination servers to get the process to work. Following that, I used this guy's solution to move the destination database to another server. However, this error occurs when I try to bring the database back online. I don't have an E drive on my server and I have no idea why it's trying to open a file from the E drive. I have over 100gb left on my hard disk so it's definitely not a space issue. This sounds like a bug... Any ideas?
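    The E:\ path lives in the database's own file metadata (the ftrow_ file looks like a full-text catalog file), so one hedged avenue - a sketch, with the logical file name guessed from the error message - is to repoint that file before bringing the database online:

    ```sql
    -- see where the database thinks its files live; the E:\ path should show up here
    SELECT name, physical_name, state_desc
    FROM sys.master_files
    WHERE database_id = DB_ID('Go3D_Retailer');

    -- repoint the offending file to a path that exists on this server
    -- ('ftrow_Go3D_catalog' is a guess at the logical name - use the value returned above)
    ALTER DATABASE Go3D_Retailer
    MODIFY FILE (NAME = ftrow_Go3D_catalog,
                 FILENAME = 'D:\Data\ftrow_Go3D_catalog.ndf');

    ALTER DATABASE Go3D_Retailer SET ONLINE;
    ```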

    Read the article

  • Is it possible to open a SQLite database from within Microsoft SQL Server Management Studio?

    - by Brian T Hannan
    Is there a way to open a .db file (SQLite database file) from within Microsoft SQL Server Management Studio? Right now we have a process that will grab the data from a Microsoft SQL Server database and put it into a SQLite database file that will be used by an application later on. Is there a way to open the SQLite database file so that it can be compared to the data inside the SQL Server database... using only one SQL query? Is there a plug-in for Management Studio? Or maybe there is another way to do this same task using only one query. Right now we have to write two scripts - one for the SQL Server database and one for the SQLite database - then take the output from each in the same format and put them each in their own OpenOffice spreadsheet file. Finally, we compare the two files to see if there are any differences. Perhaps there's a better way to do this. P.S. A lot of applications use SQLite internally: Well-Known Users Of SQLite
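    There's no native SQLite support in Management Studio, but one possible route (assuming a third-party SQLite ODBC driver is installed and a DSN has been created for the .db file) is a linked server, which lets a single query compare both sides:

    ```sql
    -- 'SQLiteDSN' is a hypothetical system DSN pointing at the .db file
    EXEC sp_addlinkedserver @server = 'SQLITE', @srvproduct = '',
         @provider = 'MSDASQL', @datasrc = 'SQLiteDSN';

    -- rows present in SQL Server but missing from the SQLite file (MyTable is a placeholder)
    SELECT * FROM dbo.MyTable
    EXCEPT
    SELECT * FROM OPENQUERY(SQLITE, 'SELECT * FROM MyTable');
    ```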

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process: Putting your database in to a source control system, and, Running a continuous integration process, each time changes are checked in. However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above: Putting some actual integration tests in to the CI process (otherwise, they don’t really do much, do they!?), Deploying the database changes with a managed, automated approach, Monitoring what you’ve just put live, to make sure you haven’t broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!   Background on Continuous Delivery For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage Versioning Databases – Branching and Merging, by Scott Allen In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production.   I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and selecting the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really the those of the Fab Four) – but just a very basic check of format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test, that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
    This needs to be copied and pasted in to the query window, to replace the default given by tSQLt:
    -- Basic email check test
    ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @Output VARCHAR(MAX);
        SET @Output = '';

        SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
        FROM dbo.Email
        WHERE Email NOT LIKE '%_@__%.__%';

        IF @Output > ''
        BEGIN
            SET @Output = CHAR(13) + CHAR(10) + @Output;
            EXEC tSQLt.Fail @Output;
        END
    END;
    Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it’s worth just checking that it works! For a positive test, click on “SQL Test” from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you’ll see the following: Great – we now know that our test is really doing something! You’ll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client):
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  

    Read the article

  • OData to the rescue. Exposing the eventlog as a data feed

    - by cibrax
    In one of the projects I worked on, we used the Microsoft Enterprise Library Exception Handling Application Block integration with WCF for logging all the technical issues on the services/backend in the Windows Event Log. This application block worked like a charm; all the errors were correctly logged in the Event Log without even needing to modify the service code. However, we also needed to provide a quick way to expose all those events to the different system users so they could get access to all of them remotely. In just a couple of minutes I came up with a simple solution based on ADO.NET Data Services. ADO.NET Data Services is very powerful in this sense: you only need to provide an IQueryable implementation, and that’s all. You get a RESTful service with rich query support for free. In this sample, I used Linq to Objects to get the latest entries from the Event Log, and I also filter the entries by the source used by the Application Block to avoid loading unnecessary entries in memory. public class LogDataSource { string source; public LogDataSource(string source) { this.source = source; } public LogDataSource() { } public IQueryable<LogEntry> LogEntries { get { return GetEntries().AsQueryable().OrderBy(e => e.TimeGenerated); } } private IEnumerable<LogEntry> GetEntries() { var applicationLog = System.Diagnostics.EventLog.GetEventLogs().Where(e => e.Log == "Application").FirstOrDefault(); var entries = new List<LogEntry>(); if (applicationLog != null) { foreach (EventLogEntry entry in applicationLog.Entries) { if (source == null || entry.Source.Equals(source, StringComparison.InvariantCultureIgnoreCase)) { entries.Add(new LogEntry { Category = entry.Category, EventID = entry.InstanceId, Message = entry.Message, TimeGenerated = entry.TimeGenerated, Source = entry.Source, }); } } } return entries.OrderByDescending(e => e.TimeGenerated).Take(200); } } LogEntry is a class I created for this service to expose an Event Log entry.
[EntityPropertyMappingAttribute("Source",         SyndicationItemProperty.Title,         SyndicationTextContentKind.Plaintext, true)]     [EntityPropertyMapping("Message",         SyndicationItemProperty.Summary,         SyndicationTextContentKind.Plaintext, true)]     [EntityPropertyMapping("TimeGenerated",         SyndicationItemProperty.Updated,         SyndicationTextContentKind.Plaintext, true)]     [DataServiceKey("EventID")]     public class LogEntry     {         public long EventID         {             get;             set;         }         public string Category         {             get;             set;         }         public string Message         {             get;             set;         }         public DateTime TimeGenerated         {             get;             set;         }         public string Source         {             get;             set;         }     } As you can see, I used the new feature “Friendly feeds” to map several properties in the entries with standard ATOM elements. The “DataServiceKey” is only necessary because I am using the Reflection Provider (the exposed IQueryable implementation is just Linq to Objects) rather than the default Entity Framework Provider. The data service implementation is also quite simple, just a couple of lines were needed to expose the data source created previously. public class LogDataService : DataService<LogDataSource>     {         public static void InitializeService(IDataServiceConfiguration config)         {             config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);         }         protected override LogDataSource CreateDataSource()         {             string source = ConfigurationManager.AppSettings["EventLogSource"];             if (source == null)             {                 throw new ApplicationException("The EventLogSource appsetting is missing in the configuration file");             }             return new LogDataSource(source);         }     } With this implementation in place, the final users not only get a feed with all the latest errors in the event log, but also support for performing queries against that data. This is one of the great things about ADO.NET Data services.
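    Since the result is a standard OData endpoint, consumers also get the query syntax for free; for example (hypothetical host name):

    ```
    GET http://server/LogDataService.svc/LogEntries?$top=10&$orderby=TimeGenerated desc
    GET http://server/LogDataService.svc/LogEntries?$filter=Source eq 'MyService'
    ```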

    Read the article

  • How to open the callout for an MKAnnotationView programmatically? (iPhone, MapKit)

    - by StijnSpijker
    I want to open up the callout for an MKPinAnnotationView programmatically. E.g. I drop 10 pins on the map, and want to open up the one closest to me. How would I go about doing this? Apple has specified the 'selected' property on MKAnnotationView, but discourages setting it directly (this doesn't work; I tried it). Beyond that, MKAnnotationView only has setHighlighted (same story) and a canShowCallout method. Any hints on whether this is possible at all?
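    For what it's worth, the selection API lives on MKMapView rather than on the annotation view itself; a minimal sketch (assumes you've already computed the nearest annotation):

    ```objc
    // 'closest' is the id<MKAnnotation> you determined is nearest to the user
    [self.mapView selectAnnotation:closest animated:YES];
    ```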

    Read the article

  • clickonce - what is a good open source alternative to clickonce? (DDay.Update)?

    - by Greg
    Hi, What is a good open source alternative to ClickOnce? One that is most popular and under active development, I guess? DDay.Update perhaps? Is this the main one? thanks PS. I've come up with a few from searching, but would appreciate any feedback from people who have reviewed these and have an idea of which is most popular/worth looking into first. .NET Application Updater Component - http://windowsclient.net/articles/appupdater.aspx nlaunch - http://code.google.com/p/nlaunch/ dotnetautoupdate http://code.google.com/p/dotnetautoupdate/

    Read the article

  • Possible to open a text file in a MySQL stored procedure?

    - by futureelite7
    Is it possible to open and read from a text file in a MySQL stored procedure? I have a text file with a list of about 50k telephone numbers, and want to write a stored procedure that will open the file, read the 50k lines and store them as rows in a table. I cannot load the file directly using LOAD DATA INFILE as the table has additional columns that I have to set. Thanks!
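    In case it sidesteps the stored-procedure route entirely: LOAD DATA INFILE can target specific columns and fill the rest with a SET clause; a sketch with hypothetical table and column names:

    ```sql
    LOAD DATA INFILE '/tmp/phone_numbers.txt'
    INTO TABLE phone_list
    LINES TERMINATED BY '\n'
    (phone_number)
    SET created_at = NOW(), status = 'imported';
    ```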

    Read the article

  • Can I set the base path outside of my application directory when binding an image source path to a r

    - by zimmer62
    So I'm trying to display an image that is outside the path of my application. I only have a relative image path such as "images/background.png", but my images are somewhere else. I might want to choose that base location at runtime so that the binding maps to the proper folder, such as "e:\data\images\background.png" or "e:\data\theme\images\background.png". <Image Source="{Binding Path=ImagePathWithRelativePath}"/> Is there any way to specify, either in XAML or code behind, a base directory for those images?
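    One sketch of an approach (all names here are made up): an IValueConverter that prepends a base directory chosen at runtime, so the XAML keeps binding to the relative path:

    ```csharp
    using System;
    using System.Globalization;
    using System.IO;
    using System.Windows.Data;

    // Turns "images/background.png" into e.g. "e:\data\images\background.png".
    public class BasePathConverter : IValueConverter
    {
        // set once at startup, e.g. BasePathConverter.BaseDirectory = @"e:\data\theme";
        public static string BaseDirectory = @"e:\data";

        public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
        {
            var relative = value as string;
            return string.IsNullOrEmpty(relative) ? null : Path.Combine(BaseDirectory, relative);
        }

        public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
        {
            throw new NotSupportedException();
        }
    }

    // XAML: <Image Source="{Binding Path=ImagePathWithRelativePath,
    //                      Converter={StaticResource basePathConverter}}"/>
    ```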

    Read the article

  • VBScript file open dialog that works in XP and Vista?

    - by codeulike
    In XP, you can use VBScript with the UserAccounts.CommonDialog object to bring up a File Open dialog (as described here), but apparently this does not work under Vista. Is there a VBScript method for File-Open dialogs that will work for both? Or even one that will work nicely for Vista? Disclaimer: I'm a proper programmer, honest, and don't usually work with VBScript - I am asking this question 'for a friend'.
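    Not a true file-open dialog, but one workaround that runs on both XP and Vista (a sketch; &H4000 is the BIF_BROWSEINCLUDEFILES flag, which makes the folder browser list files too):

    ```vbscript
    Dim shell, item
    Set shell = CreateObject("Shell.Application")
    Set item = shell.BrowseForFolder(0, "Pick a file", &H4000, "C:\")
    If Not item Is Nothing Then
        WScript.Echo item.Self.Path
    End If
    ```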

    Read the article

  • Is there an open source Wordpress plug-in to implement Facebook/Twitter/OpenID/... authentication?

    - by Nicolas
    Hi, I'm looking for a way to implement Facebook/Twitter/OpenID/... authentication on my WordPress blog. I have found plugins for Twitter, plugins for Facebook, plugins for OpenID... but I'm afraid integration of all those plugins will be tough. Also, I have found RPX, which does the job perfectly, but I would prefer an open source solution rather than relying on the RPX web service. Would you have any clue? Nicolas

    Read the article

  • Any Open Source Pregel like framework for distributed processing of large Graphs?

    - by Akshay Bhat
    Google has described a novel framework for distributed processing on massive graphs. http://portal.acm.org/citation.cfm?id=1582716.1582723 I wanted to know if, similar to Hadoop (Map-Reduce), there are any open source implementations of this framework? I am actually in the process of writing a pseudo-distributed one using Python and the multiprocessing module, and thus wanted to know if someone else has also tried implementing it, since public information about this framework is extremely scarce (the link above and a blog post at Google Research).
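    For a sense of what such a framework has to provide, here is a minimal single-process sketch of Pregel's vertex/superstep model (pure illustration based on the paper's description, not any project's actual API):

    ```python
    class Vertex:
        def __init__(self, vid, value, out_edges):
            self.id, self.value, self.out_edges = vid, value, out_edges
            self.active = True
            self.outbox = []  # (target_id, message) pairs produced this superstep

        def compute(self, messages):
            raise NotImplementedError  # user-defined, e.g. a PageRank update

    def run(vertices, max_supersteps=30):
        inbox = {v.id: [] for v in vertices}
        for _ in range(max_supersteps):
            # halt when every vertex has voted to halt and no messages are in flight
            if not any(v.active or inbox[v.id] for v in vertices):
                break
            for v in vertices:
                if v.active or inbox[v.id]:
                    v.compute(inbox[v.id])
            # deliver this superstep's messages for the next one
            new_inbox = {v.id: [] for v in vertices}
            for v in vertices:
                for target, msg in v.outbox:
                    new_inbox[target].append(msg)
                v.outbox = []
            inbox = new_inbox
    ```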

    Read the article
