Search Results

Search found 1816 results on 73 pages for 'attach'.


  • "Row not found or changed" Problem

    - by winston schröder
    Hi there, I'm working on a SQL CE database and keep running into the "Row not found or changed" exception. The exception only occurs when I try to update. On the first run after the insert it reports a MemberChangeConflict which says that my column Created_at has the same value in all three slots (current, original, database), but on a second attempt it doesn't appear any more. The DataContext is instantiated on startup and freed on exit of my local(!) application. I use a sqlmetal-generated mapping and code file. In the map I added some associations and set the timestamp column's UpdateCheck property to Always, while all others are set to Never. The timestamp column is marked as isVersion="true", the Id column as primary key. Since I don't dispose the DataContext I expected to be using implicit transactions, and I run the SubmitChanges method within a TransactionScope. Can anyone tell me how I can update the timestamp within the code? I know about the problems one has to deal with when disposing the DataContext, so I decided not to do that, since I use a single-user local DB cache file. (I already tried a version where I disposed the DataContext after every usage, but that version had really bad performance and error rates, so I chose the other variant.)

        LibDB.Client.Vehicles tmp = null;
        try {
            tmp = e.Parameter as LibDB.Client.Vehicles;
            if (tmp == null) return;
            if (!this._dc.Vehicles.Contains(tmp)) {
                this._dc.Vehicles.Attach(tmp);
            }
            this.ShowChangesReport(this._dc.GetChangeSet());
            using (TransactionScope ts = new TransactionScope()) {
                try {
                    this._dc.SubmitChanges();
                    ts.Complete();
                }
                catch (ChangeConflictException cce) {
                    Console.WriteLine("Optimistic concurrency error.");
                    Console.WriteLine(cce.Message);
                    Console.ReadLine();
                    foreach (ObjectChangeConflict occ in this._dc.ChangeConflicts) {
                        MetaTable metatable = this._dc.Mapping.GetTable(occ.Object.GetType());
                        LibDB.Client.Vehicles entityInConflict = (LibDB.Client.Vehicles)occ.Object;
                        Console.WriteLine("Table name: {0}", metatable.TableName);
                        Console.Write("Vin: ");
                        Console.WriteLine(entityInConflict.Vin);
                        foreach (MemberChangeConflict mcc in occ.MemberConflicts) {
                            object currVal = mcc.CurrentValue;
                            object origVal = mcc.OriginalValue;
                            object databaseVal = mcc.DatabaseValue;
                            MemberInfo mi = mcc.Member;
                            Console.WriteLine("Member: {0}", mi.Name);
                            Console.WriteLine("current value: {0}", currVal);
                            Console.WriteLine("original value: {0}", origVal);
                            Console.WriteLine("database value: {0}", databaseVal);
                        }
                        throw cce;
                    }
                }
                catch (Exception ex) {
                    this.ShowChangeConflicts(this._dc.ChangeConflicts);
                    Console.WriteLine(ex.Message);
                }
            }
            this.ShowChangesReport(this._dc.GetChangeSet());
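    One way this kind of conflict is commonly resolved (a minimal sketch using the poster's _dc DataContext and the standard System.Data.Linq conflict-resolution APIs, not code from the question): ask LINQ to SQL to re-read the row, keep the client's edits, and resubmit, which also refreshes the original timestamp value used for the version check.

        try
        {
            _dc.SubmitChanges(ConflictMode.ContinueOnConflict);
        }
        catch (ChangeConflictException)
        {
            foreach (ObjectChangeConflict conflict in _dc.ChangeConflicts)
            {
                // Keep the client's changes but refresh the original values
                // (including the timestamp column) from the current database row.
                conflict.Resolve(RefreshMode.KeepChanges);
            }
            _dc.SubmitChanges();   // retry with the refreshed version column
        }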

    Read the article

  • How to show form in front in C#

    - by corlettk
    Folks, please does anyone know how to show a Form from an otherwise invisible application and have it get the focus (i.e. appear on top of other windows)? I'm working in C# .NET 3.5. I suspect I've taken "completely the wrong approach"... I do not Application.Run(new TheForm()); instead I (new TheForm()).ShowModal()... The form is basically a modal dialogue with a few check-boxes, a text-box, and OK and Cancel buttons. The user ticks a checkbox and types in a description (or whatever), then presses OK; the form disappears and the process reads the user input from the form, disposes it, and continues processing. This works, except that when the form is shown it doesn't get the focus; instead it appears behind the "host" application until you click on it in the taskbar (or whatever). This is a most annoying behaviour, which I predict will cause many "support calls", and the existing VB6 version doesn't have this problem, so I'm going backwards in usability... and users won't accept that (and nor should they).

    So... I'm starting to think I need to rethink the whole shebang... I should show the form up front, as a "normal application", and attach the remainder of the processing to the OK-button-click event. It should work, but that will take time which I don't have (I'm already over time/budget)... so first I really need to try to make the current approach work, even by quick-and-dirty methods. So please does anyone know how to "force" a .NET 3.5 Form (by fair means or foul) to get the focus? I'm thinking "magic" Windows API calls (I know...).

    Twilight Zone: this only appears to be an issue at work, where I'm using Visual Studio 2008 on Windows XP SP3... I've just failed to reproduce the problem with an SSCCE (see below) at home in Visual C# 2008 on Vista Ultimate... that works fine. Huh? WTF? Also, I'd swear that at work yesterday it showed the form when I ran the EXE, but not when F5'ed (or Ctrl-F5'ed) straight from the IDE (which I just put up with)... At home the form shows fine either way. Totally confusterpating! It may or may not be relevant, but Visual Studio crashed and burned this morning while the project was running in debug mode and I was editing the code "on the fly"... it got stuck in what I presumed was an endless loop of error messages. The error message was something about "can't debug this project because it is not the current project", or something... so I just killed it off with Process Explorer. It started up again fine, and even offered to recover the "lost" file, an offer which I accepted.

        using System;
        using System.Windows.Forms;

        namespace ShowFormOnTop {
            static class Program {
                [STAThread]
                static void Main() {
                    Application.EnableVisualStyles();
                    Application.SetCompatibleTextRenderingDefault(false);
                    //Application.Run(new Form1());
                    Form1 frm = new Form1();
                    frm.ShowDialog();
                }
            }
        }

    Background: I'm porting an existing VB6 implementation to .NET... It's a "plugin" for a "client" GIS application called MapInfo. The existing client "worked invisibly" and my instructions are "to keep the new version as close as possible to the old version", which works well enough (after years of patching); it's just written in an unsupported language, so we need to port it. About me: I'm pretty much a noob to C# and .NET generally, though I've got a bottoms-wiping certificate and have been a professional programmer for 10 years, so I sort of "know some stuff". Any insights would be most welcome... and thank you all for taking the time to read this far. Conciseness isn't (apparently) my forte. Cheers, Keith.
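    A minimal sketch of one way to force the dialog to the foreground while keeping the ShowDialog approach (Form1 and the namespace are taken from the SSCCE above; SetForegroundWindow is the usual Win32 fallback when Activate alone is ignored):

        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        namespace ShowFormOnTop {
            static class Program {
                [DllImport("user32.dll")]
                static extern bool SetForegroundWindow(IntPtr hWnd);

                [STAThread]
                static void Main() {
                    Application.EnableVisualStyles();
                    Application.SetCompatibleTextRenderingDefault(false);

                    using (Form1 frm = new Form1()) {
                        frm.TopMost = true;                  // keep it above other top-level windows
                        frm.Shown += delegate {
                            frm.Activate();                  // ask WinForms for focus
                            SetForegroundWindow(frm.Handle); // heavier Win32 fallback
                        };
                        frm.ShowDialog();
                    }
                }
            }
        }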

    Read the article

  • Rails routing to XML/JSON without views gone mad

    - by John Schulze
    I have a mystifying problem. In a very simple Ruby app I have three classes: Drivers, Jobs and Vehicles. All three classes only consist of Id and Name. All three classes have the same #index and #show methods and only render in JSON or XML (this is in fact true for all their CRUD methods; they are identical in everything but name). There are no views. For example:

        def index
          @drivers = Driver.all
          respond_to do |format|
            format.js  { render :json => @drivers }
            format.xml { render :xml => @drivers }
          end
        end

        def show
          @driver = Driver.find(params[:id])
          respond_to do |format|
            format.js  { render :json => @driver }
            format.xml { render :xml => @driver }
          end
        end

    The models are similarly minimalistic and only contain:

        class Driver < ActiveRecord::Base
          validates_presence_of :name
        end

    In routes.rb I have:

        map.resources :drivers
        map.resources :jobs
        map.resources :vehicles
        map.connect ':controller/:action/:id'
        map.connect ':controller/:action/:id.:format'

    I can perform POST/create, GET/index and PUT/update on all three classes, and GET/read used to work as well until I installed the "has many polymorphs" ActiveRecord plugin and added to environment.rb:

        require File.join(File.dirname(__FILE__), 'boot')
        require 'has_many_polymorphs'
        require 'active_support'

    Now for two of the three classes I cannot do a read any more. If I go to localhost:3000/drivers they all list nicely in XML, but if I go to localhost:3000/drivers/3 I get an error:

        Processing DriversController#show (for 127.0.0.1 at 2009-06-11 20:34:03) [GET]
          Parameters: {"id"=>"3"}
          Driver Load (0.0ms)  SELECT * FROM "drivers" WHERE ("drivers"."id" = 3)
        ActionView::MissingTemplate (Missing template drivers/show.erb in view path app/views):
          app/controllers/drivers_controller.rb:14:in `show'
        ...etc

    This is followed by another unexpected error:

        Processing ApplicationController#show (for 127.0.0.1 at 2009-06-11 21:35:52) [GET]
          Parameters: {"id"=>"3"}
        NameError (uninitialized constant ApplicationController::AreaAccessDenied):
        ...etc

    What is going on here? Why does the same code work for one class but not the other two? Why is it trying to do a #view on the ApplicationController? I found that if I create a simple HTML view for each of the three classes these work fine. To each class I add:

        format.html # show.html.erb

    With this in place, going to localhost:3000/drivers/3 renders out the item in HTML and I get no errors in the log. But if I attach .xml to the URL it again fails for two of the classes (with the same error message as before), while one will output XML as expected. Even stranger, on the two failing classes, when adding .js to the URL (to trigger JSON rendering) I get the HTML output instead! Is it possible this has something to do with the "has many polymorphs" plugin? I have heard of people having routing issues after installing it. Removing "has many polymorphs" and "active support" from environment.rb (and rebooting the server) seems to make no difference whatsoever, yet my problems started after it was installed. I've spent a number of hours on this problem now and am starting to get a little desperate; Google turns up virtually no information, which makes me suspect I must have missed something elementary. Any enlightenment or hint gratefully received! JS

    Read the article

  • Deleting unreferenced child records with nhibernate

    - by Chev
    Hi There I am working on a mvc app using nhibernate as the orm (ncommon framework) I have parent/child entities: Product, Vendor & ProductVendors and a one to many relationship between them with Product having a ProductVendors collection Product.ProductVendors. I currently am retrieving a Product object and eager loading the children and sending these down the wire to my asp.net mvc client. A user will then modify the list of Vendors and post the updated Product back. I am using a custom model binder to generate the modified Product entity. I am able to update the Product fine and insert new ProductVendors. My problem is that dereferenced ProductVendors are not cascade deleted when specifying Product.ProductVendors.Clear() and calling _productRepository.Save(product). The problem seems to be with attaching the detached instance. Here are my mapping files: Product <?xml version="1.0" encoding="utf-8" ?> <id name="Id"> <generator class="guid.comb" /> </id> <version name="LastModified" unsaved-value="0" column="LastModified" /> <property name="Name" type="String" length="250" /> ProductVendors <?xml version="1.0" encoding="utf-8" ?> <id name="Id"> <generator class="guid.comb" /> </id> <version name="LastModified" unsaved-value="0" column="LastModified" /> <property name="Price" /> <many-to-one name="Product" class="Product" column="ProductId" lazy="false" not-null="true" /> <many-to-one name="Vendor" class="Vendor" column="VendorId" lazy="false" not-null="true" /> Custom Model Binder: using System; using Test.Web.Mvc; using Test.Domain; namespace Spoked.MVC { public class ProductUpdateModelBinder : DefaultModelBinder { private readonly ProductSystem ProductSystem; public ProductUpdateModelBinder(ProductSystem productSystem) { ProductSystem = productSystem; } protected override void OnModelUpdated(ControllerContext controllerContext, ModelBindingContext bindingContext) { var product = bindingContext.Model as Product; if (product != null) { product.Category = ProductSystem.GetCategory(new Guid(bindingContext.ValueProvider["Category"].AttemptedValue)); product.Brand = ProductSystem.GetBrand(new Guid(bindingContext.ValueProvider["Brand"].AttemptedValue)); product.ProductVendors.Clear(); if (bindingContext.ValueProvider["ProductVendors"] != null) { string[] productVendorIds = bindingContext.ValueProvider["ProductVendors"].AttemptedValue.Split(','); foreach (string id in productVendorIds) { product.AddProductVendor(ProductSystem.GetVendor(new Guid(id)), 90m); } } } } } } Controller: [AcceptVerbs(HttpVerbs.Post)] public ActionResult Update(Product product) { using (var scope = new UnitOfWorkScope()) { //product.ProductVendors.Clear(); _productRepository.Save(product); scope.Commit(); } using (new UnitOfWorkScope()) { IList<Vendor> availableVendors = _productSystem.GetAvailableVendors(product); productDetailEditViewModel = new ProductDetailEditViewModel(product, _categoryRepository.Select(x => x).ToList(), _brandRepository.Select(x => x).ToList(), availableVendors); } return RedirectToAction("Edit", "Products", new {id = product.Id.ToString()}); } The following test does pass though: [Test] [NUnit.Framework.Category("ProductTests")] public void Can_Delete_Product_Vendors_By_Dereferencing() { Product product; using(UnitOfWorkScope scope = new UnitOfWorkScope()) { Console.Out.WriteLine("Selecting..."); product = _productRepository.First(); Console.Out.WriteLine("Adding Product Vendor..."); product.AddProductVendor(_vendorRepository.First(), 0m); scope.Commit(); } Console.Out.WriteLine("About to 
delete Product Vendors..."); using (UnitOfWorkScope scope = new UnitOfWorkScope()) { Console.Out.WriteLine("Clearing Product Vendor..."); _productRepository.Save(product); // seems to be needed to attach entity to the persistance manager product.ProductVendors.Clear(); scope.Commit(); } } Going nuts here as I almost have a very nice solution between mvc, custom model binders and nhibernate. Just not seeing my deletes cascaded. Any help greatly appreciated. Chev
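    A sketch of one workaround (assuming the collection mapping declares cascade="all-delete-orphan" and that plain NHibernate ISession access is available alongside the NCommon repository): reload the parent inside the unit of work and mutate the tracked instance, so NHibernate can see exactly which children were orphaned. sessionFactory and detached are stand-in names for your ISessionFactory and the Product posted back by MVC.

        using (var session = sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            var persistent = session.Get<Product>(detached.Id);    // session-tracked instance
            persistent.ProductVendors.Clear();                      // orphans are deleted on flush
            foreach (var pv in detached.ProductVendors)
                persistent.AddProductVendor(pv.Vendor, pv.Price);   // re-add what the user kept
            tx.Commit();
        }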

    Read the article

  • nhibernate cascade - problem with detached entities

    - by Chev
    I am going nuts here trying to resolve a cascading update/delete issue :-) I have a Parent Entity with a collection Child Entities. If I modify the list of Child entities in a detached Parent object, adding, deleting etc - I am not seeing the updates cascaded correctly to the Child collection. Mapping Files: <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="Domain" namespace="Domain"> <class name="Parent" table="Parent" > <id name="Id"> <generator class="guid.comb" /> </id> <version name="LastModified" unsaved-value="0" column="LastModified" /> <property name="Name" type="String" length="250" /> <bag name="ParentChildren" lazy="false" table="Parent_Children" cascade="all-delete-orphan" inverse="true"> <key column="ParentId" on-delete="cascade" /> <one-to-many class="ParentChildren" /> </bag> </class> <class name="ParentChildren" table="Parent_Children"> <id name="Id"> <generator class="guid.comb" /> </id> <version name="LastModified" unsaved-value="0" column="LastModified" /> <many-to-one name="Parent" class="Parent" column="ParentId" lazy="false" not-null="true" /> </class> </hibernate-mapping> Test [Test] public void Test() { Guid id; int lastModified; // add a child into 1st session then detach using(ISession session = Store.Local.Get<ISessionFactory>("SessionFactory").OpenSession()) { Console.Out.WriteLine("Selecting..."); Parent parent = (Parent) session.Get(typeof (Parent), new Guid("4bef7acb-bdae-4dd0-ba1e-9c7500f29d47")); id = parent.Id; lastModified = parent.LastModified + 1; // ensure the detached version used later is equal to the persisted version Console.Out.WriteLine("Adding Child..."); Child child = (from c in session.Linq<Child>() select c).First(); parent.AddChild(child, 0m); session.Flush(); session.Dispose(); // not needed i know } // attach a parent, then save with no Children using (ISession session = Store.Local.Get<ISessionFactory>("SessionFactory").OpenSession()) { Parent parent = new Parent("Test"); parent.Id = id; parent.LastModified = lastModified; session.Update(parent); session.Flush(); } } I assume that the fact that the product has been updated to have no children in its collection - the children would be deleted in the Parent_Child table. The problems seems to be something to do with attaching the Product to the new session? As the cascade is set to all-delete-orphan I assume that changes to the collection would be propagated to the relevant entities/tables? In this case deletes? What am I missing here? C

    Read the article

  • When Clearing an ObservableCollection, There are No Items in e.OldItems

    - by cplotts
    I have something here that is really catching me off guard. I have an ObservableCollection of T that is filled with items. I also have an event handler attached to the CollectionChanged event. When you Clear the collection it causes an CollectionChanged event with e.Action set to NotifyCollectionChangedAction.Reset. Ok, that's normal. But what is weird is that neither e.OldItems or e.NewItems has anything in it. I would expect e.OldItems to be filled with all items that were removed from the collection. Has anyone else seen this? And if so, how have they gotten around it? Some background: I am using the CollectionChanged event to attach and detach from another event and thus if I don't get any items in e.OldItems ... I won't be able to detach from that event. CLARIFICATION: I do know that the documentation doesn't outright state that it has to behave this way. But for every other action, it is notifying me of what it has done. So, my assumption is that it would tell me ... in the case of Clear/Reset as well. Below is the sample code if you wish to reproduce it yourself. First off the xaml: <Window x:Class="ObservableCollection.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Window1" Height="300" Width="300" > <StackPanel> <Button x:Name="addButton" Content="Add" Width="100" Height="25" Margin="10" Click="addButton_Click"/> <Button x:Name="moveButton" Content="Move" Width="100" Height="25" Margin="10" Click="moveButton_Click"/> <Button x:Name="removeButton" Content="Remove" Width="100" Height="25" Margin="10" Click="removeButton_Click"/> <Button x:Name="replaceButton" Content="Replace" Width="100" Height="25" Margin="10" Click="replaceButton_Click"/> <Button x:Name="resetButton" Content="Reset" Width="100" Height="25" Margin="10" Click="resetButton_Click"/> </StackPanel> </Window> Next, the code behind: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Windows; using System.Windows.Controls; using System.Windows.Data; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Imaging; using System.Windows.Navigation; using System.Windows.Shapes; using System.Collections.ObjectModel; namespace ObservableCollection { /// <summary> /// Interaction logic for Window1.xaml /// </summary> public partial class Window1 : Window { public Window1() { InitializeComponent(); _integerObservableCollection.CollectionChanged += new System.Collections.Specialized.NotifyCollectionChangedEventHandler(_integerObservableCollection_CollectionChanged); } private void _integerObservableCollection_CollectionChanged(object sender, System.Collections.Specialized.NotifyCollectionChangedEventArgs e) { switch (e.Action) { case System.Collections.Specialized.NotifyCollectionChangedAction.Add: break; case System.Collections.Specialized.NotifyCollectionChangedAction.Move: break; case System.Collections.Specialized.NotifyCollectionChangedAction.Remove: break; case System.Collections.Specialized.NotifyCollectionChangedAction.Replace: break; case System.Collections.Specialized.NotifyCollectionChangedAction.Reset: break; default: break; } } private void addButton_Click(object sender, RoutedEventArgs e) { _integerObservableCollection.Add(25); } private void moveButton_Click(object sender, RoutedEventArgs e) { _integerObservableCollection.Move(0, 19); } private void removeButton_Click(object sender, RoutedEventArgs e) { 
_integerObservableCollection.RemoveAt(0); } private void replaceButton_Click(object sender, RoutedEventArgs e) { _integerObservableCollection[0] = 50; } private void resetButton_Click(object sender, RoutedEventArgs e) { _integerObservableCollection.Clear(); } private ObservableCollection<int> _integerObservableCollection = new ObservableCollection<int> { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 }; } }
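    One workaround people use (a sketch, assuming you can substitute your own collection type): override ClearItems so the list is emptied one element at a time, which makes every CollectionChanged event a Remove that carries the removed item in e.OldItems and gives you a place to detach your handlers.

        using System.Collections.ObjectModel;

        public class ItemAwareObservableCollection<T> : ObservableCollection<T>
        {
            protected override void ClearItems()
            {
                // Raise an individual Remove (with e.OldItems populated) for each
                // element instead of a single Reset that reports nothing.
                while (Count > 0)
                    RemoveAt(Count - 1);
            }
        }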

    Read the article

  • Internet Explorer Automation: how to suppress Open/Save dialog?

    - by Vladimir Dyuzhev
    When controlling IE instance via MSHTML, how to suppress Open/Save dialogs for non-HTML content? I need to get data from another system and import it into our one. Due to budget constraints no development (e.g. WS) can be done on the other side for some time, so my only option for now is to do web scrapping. The remote site is ASP.NET-based, so simple HTML requests won't work -- too much JS. I wrote a simple C# application that uses MSHTML and SHDocView to control an IE instance. So far so good: I can perform login, navigate to desired page, populate required fields and do submit. Then I face a couple of problems: First is that report is opening in another window. I suspect I can attach to that window too by enumerating IE windows in the system. Second, more troublesome, is that report itself is CSV file, and triggers Open/Save dialog. I'd like to avoid it and make IE save the file into given location OR I'm fine with programmatically clicking dialog buttons too (how?) I'm actually totally non-Windows guy (unix/J2EE), and hope someone with better knowledge would give me a hint how to do those tasks. Thanks! UPDATE I've found a promising document on MSDN: http://msdn.microsoft.com/en-ca/library/aa770041.aspx Control the kinds of content that are downloaded and what the WebBrowser Control does with them once they are downloaded. For example, you can prevent videos from playing, script from running, or new windows from opening when users click on links, or prevent Microsoft ActiveX controls from downloading or executing. Slowly reading through... UPDATE 2: MADE IT WORK, SORT OF... Finally I made it work, but in an ugly way. Essentially, I register a handler "before navigate", then, in the handler, if the URL is matching my target file, I cancel the navigation, but remember the URL, and use WebClient class to access and download that temporal URL directly. I cannot copy the whole code here, it contains a lot of garbage, but here are the essential parts: Installing handler: _IE2.FileDownload += new DWebBrowserEvents2_FileDownloadEventHandler(IE2_FileDownload); _IE.BeforeNavigate2 += new DWebBrowserEvents2_BeforeNavigate2EventHandler(IE_OnBeforeNavigate2); Recording URL and then cancelling download (thus preventing Save dialog to appear): public string downloadUrl; void IE_OnBeforeNavigate2(Object ob1, ref Object URL, ref Object Flags, ref Object Name, ref Object da, ref Object Head, ref bool Cancel) { Console.WriteLine("Before Navigate2 "+URL); if (URL.ToString().EndsWith(".csv")) { Console.WriteLine("CSV file"); downloadUrl = URL.ToString(); } Cancel = false; } void IE2_FileDownload(bool activeDocument, ref bool cancel) { Console.WriteLine("FileDownload, downloading "+downloadUrl+" instead"); cancel = true; } void IE_OnNewWindow2(ref Object o, ref bool cancel) { Console.WriteLine("OnNewWindow2"); _IE2 = new SHDocVw.InternetExplorer(); _IE2.BeforeNavigate2 += new DWebBrowserEvents2_BeforeNavigate2EventHandler(IE_OnBeforeNavigate2); _IE2.Visible = true; o = _IE2; _IE2.FileDownload += new DWebBrowserEvents2_FileDownloadEventHandler(IE2_FileDownload); _IE2.Silent = true; cancel = false; return; } And in the calling code using the found URL for direct download: ... driver.ClickButton(".*_btnRunReport"); driver.WaitForComplete(); Thread.Sleep(10000); WebClient Client = new WebClient(); Client.DownloadFile(driver.downloadUrl, "C:\\affinity.dump"); (driver is a simple wrapper over IE instance = _IE) Hope that helps someone.

    Read the article

  • Nhibernate Migration from 1.0.2.0 to 2.1.2 and many-to-one save problems

    - by Meska
    Hi, we have an old, big ASP.NET application with NHibernate, parts of which we are extending and upgrading. The NHibernate that was used was pretty old (1.0.2.0), so we decided to upgrade to 2.1.2 for the new features. HBM files are generated through a custom template with MyGeneration. Everything went quite smoothly, except for one thing. Let's say we have two objects, Blog and Post. Blog can have many posts, so Post has a many-to-one relationship. Due to the way this application operates, the relationship is done not through primary keys but through the Blog.Reference column. Sample mappings and .cs files:

        <?xml version="1.0" encoding="utf-8" ?>
        <id name="Id" column="Id" type="Guid">
            <generator class="assigned"/>
        </id>
        <version name="LastModified" unsaved-value="0" column="LastModified" />
        <property column="Reference" type="Int32" name="Reference" not-null="true" />
        <property column="Name" type="String" name="Name" length="250" />
        </class>

        <?xml version="1.0" encoding="utf-8" ?>
        <id name="Id" column="Id" type="Guid">
            <generator class="assigned"/>
        </id>
        <version name="LastModified" unsaved-value="0" column="LastModified" />
        <property column="Reference" type="Int32" name="Reference" not-null="true" />
        <property column="Name" type="String" name="Name" length="250" />
        <many-to-one name="Blog" column="BlogId" class="SampleNamespace.BlogEntity,SampleNamespace" property-ref="Reference" />
        </class>

    And class files:

        class BlogEntity
        {
            public Guid Id { get; set; }
            public int Reference { get; set; }
            public string Name { get; set; }
        }

        class PostEntity
        {
            public Guid Id { get; set; }
            public int Reference { get; set; }
            public string Name { get; set; }
            public BlogEntity Blog { get; set; }
        }

    Now let's say that I have a Blog with Id 1D270C7B-090D-47E2-8CC5-A3D145838D9C and with Reference 1. In the old NHibernate such a thing was possible:

        //this Blog already exists in database
        BlogEntity blog = new BlogEntity();
        blog.Id = Guid.Empty;
        blog.Reference = 1; //Reference is unique, so we can distinguish Blog by this field
        blog.Name = "My blog";

        //this is new Post, that we are trying to insert
        PostEntity post = new PostEntity();
        post.Id = Guid.NewGuid();
        post.Name = "New post";
        post.Reference = 1234;
        post.Blog = blog;

        session.Save(post);

    However, in the new version I get an exception that it cannot insert NULL into Post.BlogId. As I understand it, in the old version it was enough for NHibernate to have the Blog.Reference field; it could retrieve the entity by that field, attach it to PostEntity, and when saving PostEntity everything would work correctly. And as I understand it, the new NHibernate tries to retrieve only by Blog.Id. How do I solve this? I cannot change the DB design, nor can I assign an Id to BlogEntity, as the objects are out of my control (they come prefilled as generic "objects" like this from an external source).
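    A hedged sketch of one way around this in NHibernate 2.1 (not confirmed against the poster's setup): with property-ref="Reference", the mapping needs a real, loaded BlogEntity rather than a transient stand-in, so look the blog up by its Reference first. The ICriteria API is used here because the questions above already rely on 2.1-era APIs; the literal values are the ones from the question.

        var blog = session.CreateCriteria(typeof(BlogEntity))
                          .Add(NHibernate.Criterion.Restrictions.Eq("Reference", 1))
                          .UniqueResult<BlogEntity>();

        var post = new PostEntity
        {
            Id = Guid.NewGuid(),
            Name = "New post",
            Reference = 1234,
            Blog = blog          // a persistent instance, so BlogId can be resolved
        };
        session.Save(post);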

    Read the article

  • get user selection and convert it to a String [Android]

    - by Kira
    Hello, I just got a Droid, and after having used it for a while, I felt like I wanted to make a program for it. The program that I am trying to make calculates the actual storage capacity of secondary storage mediums. The user select from a list of units that ranges from KB to YB and the size the entered gets put into a formula depending on the chosen unit. However, there is a bit of a problem with the program. From my testing, I have narrowed it down to the fact that the user's selection is not really being obtained from the spinner. Everything I look up seems to point me to a method quite similar to how it works in J2SE, but it does nothing. How am I actually supposed to get that data? Here is the Java source code for the app: package com.Actual.android; import android.app.Activity; import android.os.Bundle; import android.widget.*; import android.view.*; public class ActualStorageActivity extends Activity { Spinner selection; /* declare variable, in order to control spinner (ComboBox) */ ArrayAdapter adapter; /* declare an array adapter object, in order for spinner to work */ EditText size; /* declare variable to control textfield */ EditText result; /* declare variable to control textfield */ Button calculate; /* declare variable to control button */ Storage capacity = new Storage(); /* import custom class for formulas */ /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); // load content from XML selection = (Spinner)findViewById(R.id.spinner); adapter = ArrayAdapter.createFromResource(this, R.array.choices_array, android.R.layout.simple_spinner_dropdown_item); size = (EditText)findViewById(R.id.size); result = (EditText)findViewById(R.id.result); calculate = (Button)findViewById(R.id.submit); adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); /* set resource for dropdown */ selection.setAdapter(adapter); // attach adapter to spinner result.setEnabled(false); // make read-only result.setText("usable storage"); } public void calcAction(View view) { String initial = size.getText().toString(); String unit = selection.getSelectedItem().toString(); String end = "Nothing"; double convert = Double.parseDouble(initial); capacity.setStorage(convert); if (unit == "KB") { end = Double.toString(capacity.getKB()); } else if (unit == "MB") { end = Double.toString(capacity.getMB()); } else if (unit == "GB") { end = Double.toString(capacity.getGB()); } else if (unit == "TB") { end = Double.toString(capacity.getTB()); } else if (unit == "PB") { end = Double.toString(capacity.getPB()); } else if (unit == "EB") { end = Double.toString(capacity.getEB()); } else if (unit == "ZB") { end = Double.toString(capacity.getZB()); } else if (unit == "YB") { end = Double.toString(capacity.getYB()); } else; result.setText(end); } }

    Read the article

  • algorithm q: Fuzzy matching of structured data

    - by user86432
    I have a fairly small corpus of structured records sitting in a database. Given a tiny fraction of the information contained in a single record, submitted via a web form (so structured in the same way as the table schema), (let us call it the test record) I need to quickly draw up a list of the records that are the most likely matches for the test record, as well as provide a confidence estimate of how closely the search terms match a record. The primary purpose of this search is to discover whether someone is attempting to input a record that is duplicate to one in the corpus. There is a reasonable chance that the test record will be a dupe, and a reasonable chance the test record will not be a dupe. The records are about 12000 bytes wide and the total count of records is about 150,000. There are 110 columns in the table schema and 95% of searches will be on the top 5% most commonly searched columns. The data is stuff like names, addresses, telephone numbers, and other industry specific numbers. In both the corpus and the test record it is entered by hand and is semistructured within an individual field. You might at first blush say "weight the columns by hand and match word tokens within them", but it's not so easy. I thought so too: if I get a telephone number I thought that would indicate a perfect match. The problem is that there isn't a single field in the form whose token frequency does not vary by orders of magnitude. A telephone number might appear 100 times in the corpus or 1 time in the corpus. The same goes for any other field. This makes weighting at the field level impractical. I need a more fine-grained approach to get decent matching. My initial plan was to create a hash of hashes, top level being the fieldname. Then I would select all of the information from the corpus for a given field, attempt to clean up the data contained in it, and tokenize the sanitized data, hashing the tokens at the second level, with the tokens as keys and frequency as value. I would use the frequency count as a weight: the higher the frequency of a token in the reference corpus, the less weight I attach to that token if it is found in the test record. My first question is for the statisticians in the room: how would I use the frequency as a weight? Is there a precise mathematical relationship between n, the number of records, f(t), the frequency with which a token t appeared in the corpus, the probability o that a record is an original and not a duplicate, and the probability p that the test record is really a record x given the test and x contain the same t in the same field? How about the relationship for multiple token matches across multiple fields? Since I sincerely doubt that there is, is there anything that gets me close but is better than a completely arbitrary hack full of magic factors? Barring that, has anyone got a way to do this? I'm especially keen on other suggestions that do not involve maintaining another table in the database, such as a token frequency lookup table :). This is my first post on StackOverflow, thanks in advance for any replies you may see fit to give.
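    A toy sketch of the inverse-frequency weighting described above (an illustration in the spirit of IDF, not a statistically rigorous answer; all names are made up):

        using System;
        using System.Collections.Generic;

        static class FuzzyScoring
        {
            // log(N / f): with N = 150,000 records, a token seen once scores ~11.9
            // while a token seen 100 times scores ~7.3, so rare tokens dominate the match.
            public static double TokenWeight(int corpusSize, int tokenFrequency)
            {
                return Math.Log((double)corpusSize / Math.Max(1, tokenFrequency));
            }

            public static double FieldScore(IEnumerable<string> testTokens,
                                            IDictionary<string, int> corpusTokenCounts,
                                            int corpusSize)
            {
                double score = 0;
                foreach (var token in testTokens)
                {
                    int frequency;
                    if (corpusTokenCounts.TryGetValue(token, out frequency))
                        score += TokenWeight(corpusSize, frequency);
                }
                return score;
            }
        }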

    Read the article

  • How to figure out who owns a worker thread that is still running when my app exits?

    - by Dave
    Not long after upgrading to VS2010, my application won't shut down cleanly. If I close the app and then hit pause in the IDE, I see this: The problem is, there's no context. The call stack just says [External code], which isn't too helpful. Here's what I've done so far to try to narrow down the problem: deleted all extraneous plugins to minimize the number of worker threads launched set breakpoints in my code anywhere I create worker threads (and delegates + BeginInvoke, since I think they are labeled "Worker Thread" in the debugger anyway). None were hit. set IsBackground = true for all threads While I could do the next brute force step, which is to roll my code back to a point where this didn't happen and then look over all of the change logs, this isn't terribly efficient. Can anyone recommend a better way to figure this out, given the notable lack of information presented by the debugger? The only other things I can think of include: read up on WinDbg and try to use it to stop anytime a thread is started. At least, I thought that was possible... :) comment out huge blocks of code until the app closes properly, then start uncommenting until it doesn't. UPDATE Perhaps this information will be of use. I decided to use WinDbg and attach to my application. I then closed it, and switched to thread 0 and dumped the stack contents. Here's what I have: ThreadCount: 6 UnstartedThread: 0 BackgroundThread: 1 PendingThread: 0 DeadThread: 4 Hosted Runtime: no PreEmptive GC Alloc Lock ID OSID ThreadOBJ State GC Context Domain Count APT Exception 0 1 1c70 005a65c8 6020 Enabled 02dac6e0:02dad7f8 005a03c0 0 STA 2 2 1b20 005b1980 b220 Enabled 00000000:00000000 005a03c0 0 MTA (Finalizer) XXXX 3 08504048 19820 Enabled 00000000:00000000 005a03c0 0 Ukn XXXX 4 08504540 19820 Enabled 00000000:00000000 005a03c0 0 Ukn XXXX 5 08516a90 19820 Enabled 00000000:00000000 005a03c0 0 Ukn XXXX 6 08517260 19820 Enabled 00000000:00000000 005a03c0 0 Ukn 0:008> ~0s eax=c0674960 ebx=00000000 ecx=00000000 edx=00000000 esi=0040f320 edi=005a65c8 eip=76c37e47 esp=0040f23c ebp=0040f258 iopl=0 nv up ei pl nz na po nc cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000202 USER32!NtUserGetMessage+0x15: 76c37e47 83c404 add esp,4 0:000> !clrstack OS Thread Id: 0x1c70 (0) Child SP IP Call Site 0040f274 76c37e47 [InlinedCallFrame: 0040f274] 0040f270 6baa8976 DomainBoundILStubClass.IL_STUB_PInvoke(System.Windows.Interop.MSG ByRef, System.Runtime.InteropServices.HandleRef, Int32, Int32)*** WARNING: Unable to verify checksum for C:\Windows\assembly\NativeImages_v4.0.30319_32\WindowsBase\d17606e813f01376bd0def23726ecc62\WindowsBase.ni.dll 0040f274 6ba924c5 [InlinedCallFrame: 0040f274] MS.Win32.UnsafeNativeMethods.IntGetMessageW(System.Windows.Interop.MSG ByRef, System.Runtime.InteropServices.HandleRef, Int32, Int32) 0040f2c4 6ba924c5 MS.Win32.UnsafeNativeMethods.GetMessageW(System.Windows.Interop.MSG ByRef, System.Runtime.InteropServices.HandleRef, Int32, Int32) 0040f2dc 6ba8e5f8 System.Windows.Threading.Dispatcher.GetMessage(System.Windows.Interop.MSG ByRef, IntPtr, Int32, Int32) 0040f318 6ba8d579 System.Windows.Threading.Dispatcher.PushFrameImpl(System.Windows.Threading.DispatcherFrame) 0040f368 6ba8d2a1 System.Windows.Threading.Dispatcher.PushFrame(System.Windows.Threading.DispatcherFrame) 0040f374 6ba7fba0 System.Windows.Threading.Dispatcher.Run() 0040f380 62e6ccbb System.Windows.Application.RunDispatcher(System.Object)*** WARNING: Unable to verify checksum for 
C:\Windows\assembly\NativeImages_v4.0.30319_32\PresentationFramewo#\7f91eecda3ff7ce478146b6458580c98\PresentationFramework.ni.dll 0040f38c 62e6c8ff System.Windows.Application.RunInternal(System.Windows.Window) 0040f3b0 62e6c682 System.Windows.Application.Run(System.Windows.Window) 0040f3c0 62e6c30b System.Windows.Application.Run() 0040f3cc 001f00bc MyApplication.App.Main() [C:\code\trunk\MyApplication\obj\Debug\GeneratedInternalTypeHelper.g.cs @ 24] 0040f608 66c421db [GCFrame: 0040f608] EDIT -- not sure if this helps, but the main thread's call stack looks like this: [Managed to Native Transition] > WindowsBase.dll!MS.Win32.UnsafeNativeMethods.GetMessageW(ref System.Windows.Interop.MSG msg, System.Runtime.InteropServices.HandleRef hWnd, int uMsgFilterMin, int uMsgFilterMax) + 0x15 bytes WindowsBase.dll!System.Windows.Threading.Dispatcher.GetMessage(ref System.Windows.Interop.MSG msg, System.IntPtr hwnd, int minMessage, int maxMessage) + 0x48 bytes WindowsBase.dll!System.Windows.Threading.Dispatcher.PushFrameImpl(System.Windows.Threading.DispatcherFrame frame = {System.Windows.Threading.DispatcherFrame}) + 0x85 bytes WindowsBase.dll!System.Windows.Threading.Dispatcher.PushFrame(System.Windows.Threading.DispatcherFrame frame) + 0x49 bytes WindowsBase.dll!System.Windows.Threading.Dispatcher.Run() + 0x4c bytes PresentationFramework.dll!System.Windows.Application.RunDispatcher(object ignore) + 0x17 bytes PresentationFramework.dll!System.Windows.Application.RunInternal(System.Windows.Window window) + 0x6f bytes PresentationFramework.dll!System.Windows.Application.Run(System.Windows.Window window) + 0x26 bytes PresentationFramework.dll!System.Windows.Application.Run() + 0x1b bytes I did a search on it and found some posts related to WPF GUIs hanging, and maybe that'll give me some more clues.
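    One small habit that makes this kind of hunt easier (a sketch, not a fix for the hang itself): name every thread you create, so the Visual Studio Threads window and WinDbg's !threads output show something more useful than "Worker Thread". PollPlugins here is a stand-in for whatever worker method you actually start.

        using System.Threading;

        var poller = new Thread(PollPlugins)
        {
            Name = "PluginPoller",   // visible in the debugger's thread list
            IsBackground = true      // background threads won't block process exit
        };
        poller.Start();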

    Read the article

  • Create and Share a File (Also a mysterious error)

    - by Kirk
    My goal is to create a XML file and then send it through the share Intent. I'm able to create a XML file using this code FileOutputStream outputStream = context.openFileOutput(fileName, Context.MODE_WORLD_READABLE); PrintStream printStream = new PrintStream(outputStream); String xml = this.writeXml(); // get XML here printStream.println(xml); printStream.close(); I'm stuck trying to retrieve a Uri to the output file in order to share it. I first tried to access the file by converting the file to a Uri File outFile = context.getFileStreamPath(fileName); return Uri.fromFile(outFile); This returns file:///data/data/com.my.package/files/myfile.xml but I cannot appear to attach this to an email, upload, etc. If I manually check the file length, it's proper and shows there is a reasonable file size. Next I created a content provider and tried to reference the file and it isn't a valid handle to the file. The ContentProvider doesn't ever seem to be called a any point. Uri uri = Uri.parse("content://" + CachedFileProvider.AUTHORITY + "/" + fileName); return uri; This returns content://com.my.package.provider/myfile.xml but I check the file and it's zero length. How do I access files properly? Do I need to create the file with the content provider? If so, how? Update Here is the code I'm using to share. If I select Gmail, it does show as an attachment but when I send it gives an error Couldn't show attachment and the email that arrives has no attachment. public void onClick(View view) { Log.d(TAG, "onClick " + view.getId()); switch (view.getId()) { case R.id.share_cancel: setResult(RESULT_CANCELED, getIntent()); finish(); break; case R.id.share_share: MyXml xml = new MyXml(); Uri uri; try { uri = xml.writeXmlToFile(getApplicationContext(), "myfile.xml"); //uri is "file:///data/data/com.my.package/files/myfile.xml" Log.d(TAG, "Share URI: " + uri.toString() + " path: " + uri.getPath()); File f = new File(uri.getPath()); Log.d(TAG, "File length: " + f.length()); // shows a valid file size Intent shareIntent = new Intent(); shareIntent.setAction(Intent.ACTION_SEND); shareIntent.putExtra(Intent.EXTRA_STREAM, uri); shareIntent.setType("text/plain"); startActivity(Intent.createChooser(shareIntent, "Share")); } catch (FileNotFoundException e) { e.printStackTrace(); } break; } } I noticed that there is an Exception thrown here from inside createChooser(...), but I can't figure out why it's thrown. E/ActivityThread(572): Activity com.android.internal.app.ChooserActivity has leaked IntentReceiver com.android.internal.app.ResolverActivity$1@4148d658 that was originally registered here. Are you missing a call to unregisterReceiver()? I've researched this error and can't find anything obvious. Both of these links suggest that I need to unregister a receiver. Chooser Activity Leak - Android Why does Intent.createChooser() need a BroadcastReceiver and how to implement? I have a receiver setup, but it's for an AlarmManager that is set elsewhere and doesn't require the app to register / unregister. Code for openFile(...) In case it's needed, here is the content provider I've created. public ParcelFileDescriptor openFile(Uri uri, String mode) throws FileNotFoundException { String fileLocation = getContext().getCacheDir() + "/" + uri.getLastPathSegment(); ParcelFileDescriptor pfd = ParcelFileDescriptor.open(new File(fileLocation), ParcelFileDescriptor.MODE_READ_ONLY); return pfd; }

    Read the article

  • Use JQuery to target unwrapped text inside a div

    - by Chris
    I'm trying to find a way to wrap just the inner text of an element, I don't want to target any other inner dom elements. For example. <ul> <li class="this-one"> this is my item <ul> <li> this is a sub element </li> </ul> </li> </ul> I want to use jQuery to do this. <ul> <li class="this-one"> <div class="tree-item-text">this is my item</div> <ul> <li> <div class="tree-item-text">this is a sub element</div> </li> </ul> </li> </ul> A little background is I need to make an in-house tree structure ui element, So I'm using the UL structure to represent this. But I don't want developers to have to do any special formatting to use the widget. update: I just wanted to add the purpose of this is I want to add a click listener to be able to expand the elements under the li, However, since those elements are within the li the click listener will activate even when clicking on the children, So I want to attach it to the text instead, to do this the text needs to be targetable, which is why I want to wrap it in a div of it's own. So far I've come up with wrapping all the inner elements of the li in a div and then moving all inner dom elements back to the original parent. But this code is pretty heavy for something that might be much simpler and not require so much DOM manipulation. EDIT: Want to share the first pseudo alternative I came up with but I think it is very tasking for what I want to accomplish. var innerTextThing = $("ul.tree ul").parents("li").wrapInner("<div class='tree-node-text'>"); $(innerTextThing.find(".tree-node-text")).each(function(){ $(this).after($(this).children("ul")); }); Answered: I ended up doing the following, FYI i only have to worry about FF and IE compatibility so it's untested in other browsers. //this will wrap all li textNodes in a div so we can target them. $(that).find("li").contents() .filter(function () { return this.nodeType == 3; }).each(function () { if ( //these are for IE and FF compatibility (this.textContent != undefined && this.textContent.trim() != "") || (this.innerText != undefined && this.innerText.trim() != "") ) { $(this).wrap("<div class='tree-node-text'>"); } });

    Read the article

  • Tablesorter - filter inside input fields and values

    - by Zeracoke
    I have a small quest to accomplish, and I reached a point when nothing works... So the problem is. I have a paged table with a lot of input fields inside the rows with values, and I would like to search inside these values. Let me Show this, I hope that somebody will got the idea what I should do... <script type="text/javascript"> // add parser through the tablesorter addParser method $.tablesorter.addParser({ id: 'inputs', is: function(s) { return false; }, format: function(s, table, cell, cellIndex) { var $c = $(cell); // return 1 for true, 2 for false, so true sorts before false if (!$c.hasClass('updateInput')) { $c .addClass('updateInput') .bind('keyup', function() { $(table).trigger('updateCell', [cell, false]); // false to prevent resort }); } return $c.find('input').val(); }, type: 'text' }); $(function() { $('table').tablesorter({ widgets: ['zebra', 'stickyHeaders', 'resizable', 'filter'], widgetOptions: { stickyHeaders : '', // number or jquery selector targeting the position:fixed element stickyHeaders_offset : 110, // added to table ID, if it exists stickyHeaders_cloneId : '-sticky', // trigger "resize" event on headers stickyHeaders_addResizeEvent : true, // if false and a caption exist, it won't be included in the sticky header stickyHeaders_includeCaption : true, // The zIndex of the stickyHeaders, allows the user to adjust this to their needs stickyHeaders_zIndex : 2, // jQuery selector or object to attach sticky header to stickyHeaders_attachTo : null, // scroll table top into view after filtering stickyHeaders_filteredToTop: true, resizable: true, filter_onlyAvail : 'filter-onlyAvail', filter_childRows : true, filter_startsWith : true, filter_useParsedData : true, filter_defaultAttrib : 'data-value' }, headers: { 1: {sorter: 'inputs', width: '50px'}, 2: {sorter: 'inputs'}, 3: {sorter: 'inputs'}, 4: {sorter: 'inputs'}, 5: {sorter: 'inputs'}, 6: {sorter: 'inputs'}, 7: {sorter: 'inputs', width: '100px'}, 8: {sorter: 'inputs', width: '140px'}, 9: {sorter: 'inputs'}, 10: {sorter: 'inputs'}, 11: {sorter: 'inputs'}, } }); $('table').tablesorterPager({container: $(".pager"), positionFixed: false, size: 50, pageDisplay : $(".pagedisplay"), pageSize : $(".pagesize"), }); $("#table1").tablesorter(options); /* make second table scroll within its wrapper */ options.widgetOptions.stickyHeaders_attachTo = '.wrapper'; // or $('.wrapper') $("#table2").tablesorter(options); }); </script> The structure of the tables: <tr class="odd" style="display: table-row;"> <form action="/self.php" method="POST"> </form><input type="hidden" name="f" value="data"> <td><input type="hidden" name="mod_id" value="741">741</td> <td class="updateInput"><input type="text" name="name" value="Test User Name"></td> <td class="updateInput"><input type="text" name="address" value="2548451 Random address"></td> <td class="updateInput"><input type="email" name="email" value=""></td> <td class="updateInput"><input type="text" name="entitlement" value="none"></td> <td class="updateInput"><input type="text" name="card_number" value="6846416548644352"></td> <td class="updateInput"><input type="checkbox" name="verify" value="1" checked=""></td> <td class="updateInput"><input type="checkbox" name="card_sended" value="1" checked=""></td> <td class="updateInput"><input type="text" name="create_date" value="2014-02-12 21:09:16"></td> <td class="updateInput"><a href="self.php?f=data&amp;del=741">X</a></td> <td class="updateInput"><input type="submit" value="SAVE"></td><td class="updateInput"></td></tr> So the thing is I 
don't know how to configure the filter to search these values... I already added some options, but none of them are working... Any help would be great!

    Read the article

  • CSS optimization - extra classes in dom or preprocessor-repetitive styling in css file?

    - by anna.mi
    I'm starting on a fairly large project and I'm considering the option of using LESS for pre-processing my CSS. The useful thing about LESS is that you can define a mixin that contains, for example:

        .border-radius(@radius) {
            -webkit-border-radius: @radius;
            -moz-border-radius: @radius;
            -o-border-radius: @radius;
            -ms-border-radius: @radius;
            border-radius: @radius;
        }

    and then use it in a class declaration as:

        .rounded-div { .border-radius(10px); }

    to get the outputted CSS as:

        .rounded-div {
            -webkit-border-radius: 10px;
            -moz-border-radius: 10px;
            -o-border-radius: 10px;
            -ms-border-radius: 10px;
            border-radius: 10px;
        }

    This is extremely useful in the case of browser prefixes. However, the same concept could be used to encapsulate commonly used CSS, for example:

        .column-container {
            overflow: hidden;
            display: block;
            width: 100%;
        }
        .column(@width) {
            float: left;
            width: @width;
        }

    and then use this mixin whenever I need columns in my design:

        .my-column-outer {
            .column-container();
            background: red;
        }
        .my-column-inner {
            .column(50%);
            font-color: yellow;
        }

    (Of course, using the preprocessor we could easily expand this to be much more useful, e.g. pass the number of columns and the container width as variables and have LESS determine the width of each column depending on the number of columns and container width!) The problem with this is that when compiled, my final CSS file would have 100 such declarations, copy-and-pasted, making the file huge, bloated and repetitive.

    The alternative would be to use a grid system which has predefined classes for each column-layout option, e.g. .c-50 (with a "float: left; width: 50%;" definition), .c-33 and .c-25 to accommodate 2-column, 3-column and 4-column layouts, and then use these classes in my DOM. I really dislike the idea of the extra classes; from experience it results in a bloated DOM (creating extra divs just to attach the grid classes to). Also, the most basic HTML/CSS tutorial would tell you that the DOM should be separated from the styling - grid classes are styling-related! To me, it's the same as attaching a "border-radius-10" class to the .rounded-div example above! On the other hand, the large CSS file that would result from the repetitive code is also a disadvantage. So I guess my question is: which one would you recommend, and which do you use? Which solution is best for optimization? Apart from the larger file size, has there even been any research on whether a browser renders multiple classes faster than a large CSS file, or the other way round? tnx! I'd love to hear your opinion!

    Read the article

  • Attaching functions to elements in a loop

    - by user435377
    I have the following HTML and JavaScript it works for the first set of elements when I have a '1' in the selector but when I replace the '1' with an 'i' it doesn't attach itself to any of the elements. Any ideas as to why this might not be working? (the script is meant to add the first 3 columns of each row and display it in the fourth) <html> <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js" type="text/javascript"></script> <script> $(document).ready(function(){ for (i = 2; i <= 14; i++) { $("#Q19_LND_"+i).keyup(function(){ $("#autoSumRow_"+i).val(Number($("#Q19_LND_"+i).val()) + Number($("#Q19_CE_"+i).val()) + Number($("#Q19_SOLSD_"+i).val())); }); $("#Q19_CE_"+i).keyup(function(){ $("#autoSumRow_"+i).val(Number($("#Q19_LND_"+i).val()) + Number($("#Q19_CE_"+i).val()) + Number($("#Q19_SOLSD_"+i).val())); }); $("#Q19_SOLSD_"+i).keyup(function(){ $("#autoSumRow_"+i).val(Number($("#Q19_LND_"+i).val()) + Number($("#Q19_CE_"+i).val()) + Number($("#Q19_SOLSD_"+i).val())); }); } }); </script> </head> <body> <table> <tr> <td><font face="arial" size="-1">Lap Roux-N-Y</font>&nbsp;</td> <td align="center"><input tabindex="1" type="text" name="Q19_LND_1" size="3" value="" id="Q19_LND_1"></td> <td align="center"><input tabindex="2" type="text" name="Q19_CE_1" size="3" value="" id="Q19_CE_1"></td> <td align="center"><input tabindex="3" type="text" name="Q19_SOLSD_1" size="3" value="" id="Q19_SOLSD_1"></td> <td align="center"><input tabindex="4" disabled type="text" name="autoSumRow_1" size="3" value="" id="autoSumRow_1"></td> </tr> <tr> <td nowrap width="1" bgcolor="#006699" colspan="9"><img src="/images/wi/nothing.gif" width="1" height="1"></td> </tr> <tr> <td><font face="arial" size="-1">Lap Esophagectomy</font>&nbsp;</td> <td align="center"><input tabindex="5" type="text" name="Q19_LND_2" size="3" value="" id="Q19_LND_2"></td> <td align="center"><input tabindex="6" type="text" name="Q19_CE_2" size="3" value="" id="Q19_CE_2"></td> <td align="center"><input tabindex="7" type="text" name="Q19_SOLSD_2" size="3" value="" id="Q19_SOLSD_2"></td> <td align="center"><input tabindex="8" disabled type="text" name="autoSumRow_2" size="3" value="" id="autoSumRow_2"></td> </tr> <tr> </table> </body> </html>

    Read the article

  • undefined method `new_record?' for nil:NilClass

    - by TopperH
    In rails 3.2 I created a post controller. Each post can have a different number of paperclip attachments. To achieve this I created a assets model where each asset has a paperclip attachment. One post has_many assets and assets belong_to post. Asset model class Asset < ActiveRecord::Base belongs_to :post has_attached_file :photo, :styles => { :thumb => "200x200>" } end Post model class Post < ActiveRecord::Base attr_accessible :content, :title has_many :assets, :dependent => :destroy validates_associated :assets after_update :save_assets def new_asset_attributes=(asset_attributes) asset_attributes.each do |attributes| assets.build(attributes) end end def existing_asset_attributes=(asset_attributes) assets.reject(&:new_record?).each do |asset| attributes = asset_attributes[asset.id.to_s] if attributes asset.attributes = attributes else asset.delete(asset) end end end def save_assets assets.each do |asset| asset.save(false) end end end Posts helper module PostsHelper def add_asset_link(name) link_to_function name do |post| post.insert_html :bottom, :assets, :partial => 'asset', :object => Asset.new end end end Form for post <%= form_for @post, :html => { :multipart => true } do |f| %> <% if @post.errors.any? %> <div id="error_explanation"> <h2><%= pluralize(@post.errors.count, "error") %> prohibited this post from being saved:</h2> <ul> <% @post.errors.full_messages.each do |msg| %> <li><%= msg %></li> <% end %> </ul> </div> <% end %> <div class="field"> <%= f.label :title %><br /> <%= f.text_field :title %> </div> <div class="field"> <%= f.label :content %><br /> <%= f.text_area :content %> </div> <div id="assets"> Attach a file or image<br /> <%= render 'asset', :collection => @post.assets %> </div> <div class="actions"> <%= f.submit %> </div> <% end %> Asset partial <div class="asset"> <% new_or_existing = asset.new_record? ? 'new' : 'existing' %> <% prefix = "post[#{new_or_existing}_asset_attributes][]" %> <% fields_for prefix, asset do |asset_form| -%> <p> Asset: <%= asset_form.file_field :photo %> <%= link_to_function "remove", "$(this).up('.asset').remove()" %> </p> <% end -%> </div> Most of the code is taken from here: https://gist.github.com/33011 and I understand this is a rails2 app, anyway I don't understand what this error means: undefined method `new_record?' for nil:NilClass Extracted source (around line #2): 1: <div class="asset"> 2: <% new_or_existing = asset.new_record? ? 'new' : 'existing' %> 3: <% prefix = "post[#{new_or_existing}_asset_attributes][]" %> 4: 5: <% fields_for prefix, asset do |asset_form| -%>

    Read the article

  • add_shown & add_hiding ModalPopupExtender Events

    - by Yousef_Jadallah
        In this topic, I’ll discuss the Client events we usually need while using ModalPopupExtender. The add_shown fires when the ModalPopupExtender had shown and add_hiding fires when the user cancels it by CancelControlID,note that it fires before hiding the modal. They are useful in many cases, for example may you need to set focus to specific Textbox when the user display the modal, or if you need to reset the controls values inside the Modal after it has been hidden. To declare Client event either in pageLoad javascript function or you can attach the function by Sys.Application.add_load like this: Sys.Application.add_load(modalInit); function modalInit() { var modalPopup = $find('mpeID'); modalPopup.add_hiding(onHiding); } function onHiding(sender, args) { } .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; }   I’ll use the first way in the current example. So lets start with the illustration:   1- In this example am using simple panel which contain UserName and Password Textboxes besides submit and cancel buttons, this Panel will be used as PopupControlID in the ModalPopupExtender : <asp:Panel ID="panModal" runat="server" Height="180px" Width="300px" style="display:none" CssClass="ModalWindow"> <table width="100%" > <tr> <td> User Name </td> <td> <asp:TextBox ID="txtName" runat="server"></asp:TextBox> </td> </tr> <tr> <td> Password </td> <td> <asp:TextBox ID="txtPassword" runat="server" TextMode="Password"></asp:TextBox> </td> </tr> </table> <br /> <asp:Button ID="btnSubmit" runat="server" Text="Submit" /> <asp:Button ID="btnCancel" runat="server" Text="Cancel" /> </asp:Panel>   You can use this simple style for the Panel : <style type="text/css"> .ModalWindow { border: solid; border-width:3px; background:#f0f0f0; } </style> .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; }   2- Create the view button (TargetControlID) as you know this contain the ID of the element that activates the modal popup: <asp:Button ID="btnView" runat="server" Text="View" /> .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } 
3- Add the ModalPopupExtender; moreover, don’t forget to add the ScriptManager:

<asp:ScriptManager ID="ScriptManager1" runat="server"/>

<cc1:ModalPopupExtender ID="ModalPopupExtender1" runat="server"
    TargetControlID="btnView"
    PopupControlID="panModal"
    CancelControlID="btnCancel"/>

4- In the pageLoad javascript function, set the focus on txtName inside the add_shown event, and reset the two TextBoxes inside add_hiding.
<script language="javascript" type="text/javascript">
    function pageLoad() {
        $find('ModalPopupExtender1').add_shown(function() {
            alert('add_shown event fires');
            $get('<%=txtName.ClientID%>').focus();
        });

        $find('ModalPopupExtender1').add_hiding(function() {
            alert('add_hiding event fires');
            $get('<%=txtName.ClientID%>').value = "";
            $get('<%=txtPassword.ClientID%>').value = "";
        });
    }
</script>

I’ve added the two alerts just to show you when each event fires.

Hope this simple example shows you the benefit of these events and how to use them.

    Read the article

  • How to deal with transport level security policy with OSB

    - by Jian Liang
Recently, we received a use case for Oracle Service Bus (OSB) 11gPS4 to consume a Web Service which is secured by an HTTP transport level security policy. The WSDL of the remote web service looks like the following, where the part marked in red shows the security policy:

<?xml version='1.0' encoding='UTF-8'?> <definitions xmlns:wssutil="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:tns="https://httpsbasicauth" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://schemas.xmlsoap.org/wsdl/" targetNamespace="https://httpsbasicauth" name="HttpsBasicAuthService"> <wsp:UsingPolicy wssutil:Required="true"/> <wsp:Policy wssutil:Id="WSHttpBinding_IPartyServicePortType_policy"> <wsp:ExactlyOne> <wsp:All> <ns1:TransportBinding xmlns:ns1="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy"> <wsp:Policy> <ns1:TransportToken> <wsp:Policy> <ns1:HttpsToken RequireClientCertificate="false"/> </wsp:Policy> </ns1:TransportToken> <ns1:AlgorithmSuite> <wsp:Policy> <ns1:Basic256/> </wsp:Policy> </ns1:AlgorithmSuite> <ns1:Layout> <wsp:Policy> <ns1:Strict/> </wsp:Policy> </ns1:Layout> </wsp:Policy> </ns1:TransportBinding> <ns2:UsingAddressing xmlns:ns2="http://www.w3.org/2006/05/addressing/wsdl"/> </wsp:All> </wsp:ExactlyOne> </wsp:Policy> <types> <xsd:schema> <xsd:import namespace="https://proxyhttpsbasicauth" schemaLocation="http://localhost:7001/WS/HttpsBasicAuthService?xsd=1"/> </xsd:schema> <xsd:schema> <xsd:import namespace="https://httpsbasicauth" schemaLocation="http://localhost:7001/WS/HttpsBasicAuthService?xsd=2"/> </xsd:schema> </types> <message name="echoString"> <part name="parameters" element="tns:echoString"/> </message> <message name="echoStringResponse"> <part name="parameters" element="tns:echoStringResponse"/> </message> <portType name="HttpsBasicAuth"> <operation name="echoString"> <input message="tns:echoString"/> <output message="tns:echoStringResponse"/> </operation> </portType> <binding name="HttpsBasicAuthSoapPortBinding" type="tns:HttpsBasicAuth"> <wsp:PolicyReference URI="#WSHttpBinding_IPartyServicePortType_policy"/> <soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="document"/> <operation name="echoString"> <soap:operation soapAction=""/> <input> <soap:body use="literal"/> </input> <output> <soap:body use="literal"/> </output> </operation> </binding> <service name="HttpsBasicAuthService"> <port name="HttpsBasicAuthSoapPort" binding="tns:HttpsBasicAuthSoapPortBinding"> <soap:address location="https://localhost:7002/WS/HttpsBasicAuthService"/> </port> </service> </definitions>

The security assertion in the WSDL (marked in red) indicates that this is an HTTP transport level security policy which requires one-way SSL with default authentication (i.e. basic authentication with username/password). Normally, there are two ways to handle web service security policy with OSB 11g:
- Use WebLogic 9.x policy
- Use OWSM
Since OSB doesn’t support the WebLogic 9.x WSSP transport level assertion (except for the WS transport), when we tried to create the business service based on the imported WSDL, OSB complained with the following message: [OSB Kernel:398133]The service is based on WSDL with Web Services Security Policies that are not natively supported by Oracle Service Bus. Please select OWSM Policies - From OWSM Policy Store option and attach equivalent OWSM security policy.
For the Business Service, you can either add the necessary client policies manually by clicking the Add button, or let Oracle Service Bus automatically pick and add compatible client policies by clicking the Add Compatible button. Unfortunately, when we tried with OWSM, we couldn’t find http_token_policy in OWSM, since OSB PS4 doesn’t support the OWSM http_token_policy. It seems that we ran into an unsupported situation in which no appropriate policy could be used from either WebLogic or OWSM. As this security policy requires one-way SSL with basic authentication at the transport level, a possible workaround is to meet the remote service's requirement at the transport level without using a web service policy. We can simply use OSB to establish the SSL connection and provide the username/password for authentication at the transport level to the remote web service. In this case, the business service within OSB will be transparent to the web service policy. However, we still need to deal with the OSB console’s complaint about the unsupported security policy, because the failure of WSDL validation prevents the OSB console from moving forward. With help from the OSB Product Management team, we finally came up with the following solutions:

Solution 1: OSB PS5
The good news is that the http_token_policy is made available in OSB PS5. With OSB PS5, you can simply add the OWSM oracle/wss_http_token_over_ssl_client_policy to the business service. The simplest solution is to upgrade to OSB PS5, where the OWSM solution is provided out of the box. But if you are not in a position where upgrading is an immediate option, you might want to consider the other two workaround solutions described below.

Solution 2: Modifying WSDL
This solution addresses the OSB console’s complaint by removing the security policy from the imported WSDL within OSB. Without the security policy, the OSB console allows the business service to be created based on the modified WSDL. Please bear in mind that modifying the WSDL is done only on the OSB side via the OSB console; no change is required on the remote Web Service. The main steps of this solution:
1. Connect to the OSB console
2. Import the remote WSDL into OSB
3. Remove the security assertion (the part marked in red) from the imported WSDL
4. Create a service account; in our sample, we simply take the user weblogic
5. Create the business service, check "Basic" for Authentication and select the created service account
6. Make sure that OSB consumes the web service via https
This solution requires modifying the WSDL. It is suitable for any OSB version (10g or 11g) prior to PS5 without OWSM. However, modifying the WSDL by hand is troublesome, as it requires the user to remember that the original WSDL was edited. It forces you to make the same edit each time you want to re-import the service WSDL when changes occur at the service level. This also prevents you from using UDDI to import the WSDL.

Solution 3: Using original WSDL
This solution keeps the WSDL intact and ignores the embedded policy by using OWSM. By design, OWSM doesn’t like a WSDL with an embedded security assertion. Since OWSM doesn’t provide a feature to explicitly ignore the embedded policy from a remote WSDL, in this solution we use OWSM in a tricky way to ignore the embedded policy.
1. Connect to the OSB console
2. Import the remote WSDL into OSB
3. Create a service account
4. Create the business service, check "Basic" for Authentication and select the created service account
5. As the imported WSDL is intact, the OSB Kernel:398133 error is expected; ignore this error message for the moment and navigate to the Policies page of the business service
6. Select “From OWSM Policy Store” and click the “Add” button; the list of policies will pop up
7. Here is the tricky part: select an arbitrary policy, and click “Cancel”
8. Update and save
By clicking the “Cancel” button, we didn’t add any OWSM policy to the business service, but the embedded policy is ignored. Yes, this is tricky. According to the Oracle OSB Product Manager, a future release of OWSM will add a “None” button which allows the embedded policy to be ignored explicitly. This solution keeps the imported WSDL intact, which is the big advantage over solution 2. It is suitable for an OSB 11g domain (version prior to PS5) with OWSM configured. This blog addressed the unsupported transport level web service security policy with OSB PS4. To summarize: if you are using OSB PS5 or are in a position to upgrade to PS5, the recommendation is to use the OWSM out-of-the-box transport level security policy directly. With a release prior to 11g PS5, you can consider solution 2 or 3 depending on whether OWSM is configured.
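For readers who want a feel for what the remote service actually demands at the transport level, here is a minimal C# sketch (not OSB configuration, and not from the original article) that calls such an endpoint using only one-way SSL plus HTTP basic authentication. The endpoint URL, credentials and the body of the echoString payload are assumptions for illustration; the real element structure comes from the imported XSDs, which are not shown above.

using System;
using System.IO;
using System.Net;
using System.Text;

class TransportLevelAuthCheck
{
    static void Main()
    {
        // Hypothetical endpoint; the sample WSDL above advertises https://localhost:7002/WS/HttpsBasicAuthService
        string endpoint = "https://localhost:7002/WS/HttpsBasicAuthService";

        // Schematic SOAP 1.1 envelope for the echoString operation (payload structure is assumed)
        string soapEnvelope =
            "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\" " +
            "xmlns:tns=\"https://httpsbasicauth\">" +
            "<soapenv:Body><tns:echoString><arg0>hello</arg0></tns:echoString></soapenv:Body>" +
            "</soapenv:Envelope>";

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(endpoint);
        request.Method = "POST";
        request.ContentType = "text/xml; charset=utf-8";
        request.Headers["SOAPAction"] = "\"\"";

        // Transport level security only: the credentials travel in the HTTP Authorization header,
        // protected by the one-way SSL channel - no WS-Security policy is involved.
        request.Credentials = new NetworkCredential("weblogic", "welcome1");
        request.PreAuthenticate = true;

        byte[] payload = Encoding.UTF8.GetBytes(soapEnvelope);
        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(payload, 0, payload.Length);
        }

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}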

    Read the article

  • Windows Azure Root CAs and SSL Client Certificates

    - by Your DisplayName here!
    I ran into some problems while trying to make SSL client certificates work for StarterSTS 1.5. In theory you have to do two things (via startup tasks): Unlock the SSL section in IIS Install all the root certificates for the client certs you want to accept I did that. But it still does not work. While inspecting the event log, I stumbled over an schannel error message that I’ve never seen before: “When asking for client authentication, this server sends a list of trusted certificate authorities to the client. The client uses this list to choose a client certificate that is trusted by the server. Currently, this server trusts so many certificate authorities that the list has grown too long. This list has thus been truncated. The administrator of this machine should review the certificate authorities trusted for client authentication and remove those that do not really need to be trusted.” WTF? And indeed standard Azure (web role) VMs trust 275 root CAs (see attached list). Including kinda obscure ones. I don’t really know why MS made this design decision. It seems just wrong (including breaking the SSL client cert functionality). Deleting like 60% of them made SSL client certs from my CA work. So I guess I now have to find an automated way to attach CTLs to my site…joy. Exported list of trusted CA (as of 30th Dec 2010) AC Raíz Certicámara S.A. (4/2/2030 9:42:02 PM) AC RAIZ FNMT-RCM (1/1/2030 12:00:00 AM) A-CERT ADVANCED (10/23/2011 2:14:14 PM) Actalis Authentication CA G1 (6/25/2022 2:06:00 PM) Agence Nationale de Certification Electronique (8/12/2037 9:03:17 AM) Agence Nationale de Certification Electronique (8/12/2037 9:58:14 AM) Agencia Catalana de Certificacio (NIF Q-0801176-I) (1/7/2031 10:59:59 PM) America Online Root Certification Authority 1 (11/19/2037 8:43:00 PM) America Online Root Certification Authority 2 (9/29/2037 2:08:00 PM) ANCERT Certificados CGN (2/11/2024 5:27:12 PM) ANCERT Certificados Notariales (2/11/2024 3:58:26 PM) ANCERT Corporaciones de Derecho Publico (2/11/2024 5:22:45 PM) A-Trust-nQual-01 (11/30/2014 11:00:00 PM) A-Trust-nQual-03 (8/17/2015 10:00:00 PM) A-Trust-Qual-01 (11/30/2014 11:00:00 PM) A-Trust-Qual-02 (12/2/2014 11:00:00 PM) A-Trust-Qual-03a (4/24/2018 10:00:00 PM) Austria Telekom-Control Kommission (9/24/2005 12:40:00 PM) Austrian Society for Data Protection (2/12/2009 11:30:30 AM) Austrian Society for Data Protection GLOBALTRUST Certification Service (9/18/2036 2:12:35 PM) Autoridad Certificadora Raiz de la Secretaria de Economia (5/9/2025 12:00:00 AM) Autoridad de Certificacion de la Abogacia (6/13/2030 10:00:00 PM) Autoridad de Certificacion Firmaprofesional CIF A62634068 (10/24/2013 10:00:00 PM) Autoridade Certificadora Raiz Brasileira (11/30/2011 11:59:00 PM) Baltimore CyberTrust Root (5/12/2025 11:59:00 PM) BIT AdminCA-CD-T01 (1/25/2016 12:36:19 PM) BIT Admin-Root-CA (11/10/2021 7:51:07 AM) Buypass Class 2 CA 1 (10/13/2016 10:25:09 AM) Buypass Class 3 CA 1 (5/9/2015 2:13:03 PM) CA Disig (3/22/2016 1:39:34 AM) CertEurope (3/27/2037 11:00:00 PM) CERTICAMARA S.A. (2/23/2015 5:10:37 PM) Certicámara S.A. 
(5/23/2011 10:00:00 PM) Certigna (6/29/2027 3:13:05 PM) Certipost E-Trust Primary Normalised CA (7/26/2020 10:00:00 AM) Certipost E-Trust Primary Qualified CA (7/26/2020 10:00:00 AM) Certipost E-Trust Primary TOP Root CA (7/26/2025 10:00:00 AM) Certisign Autoridade Certificadora AC1S (6/27/2018 12:00:00 AM) Certisign Autoridade Certificadora AC2 (6/27/2018 12:00:00 AM) Certisign Autoridade Certificadora AC3S (7/9/2018 8:56:32 PM) Certisign Autoridade Certificadora AC4 (6/27/2018 12:00:00 AM) CertPlus Class 1 Primary CA (7/6/2020 11:59:59 PM) CertPlus Class 2 Primary CA (7/6/2019 11:59:59 PM) CertPlus Class 3 Primary CA (7/6/2019 11:59:59 PM) CertPlus Class 3P Primary CA (7/6/2019 11:59:59 PM) CertPlus Class 3TS Primary CA (7/6/2019 11:59:59 PM) CertRSA01 (3/3/2010 2:59:59 PM) certSIGN Root CA (7/4/2031 5:20:04 PM) Certum (6/11/2027 10:46:39 AM) Certum Trusted Network CA (12/31/2029 12:07:37 PM) Chambers of Commerce Root - 2008 (7/31/2038 12:29:50 PM) Chambersign Chambers of Commerce Root (9/30/2037 4:13:44 PM) Chambersign Global Root (9/30/2037 4:14:18 PM) Chambersign Public Notary Root (9/30/2037 4:14:49 PM) Chunghwa Telecom Co. Ltd. (12/20/2034 2:31:27 AM) Cisco Systems (5/14/2029 8:25:42 PM) CNNIC Root (4/16/2027 7:09:14 AM) Common Policy (10/15/2027 4:08:00 PM) COMODO (12/31/2028 11:59:59 PM) COMODO (1/18/2038 11:59:59 PM) COMODO (12/31/2029 11:59:59 PM) ComSign Advanced Security CA (3/24/2029 9:55:55 PM) ComSign CA (3/19/2029 3:02:18 PM) ComSign Secured CA (3/16/2029 3:04:56 PM) Correo Uruguayo - Root CA (12/31/2030 2:59:59 AM) Cybertrust Global Root (12/15/2021 8:00:00 AM) DanID (2/11/2037 9:09:30 AM) DanID (4/5/2021 5:03:17 PM) Deutsche Telekom Root CA 2 (7/9/2019 11:59:00 PM) DigiCert (11/10/2031 12:00:00 AM) DigiCert (11/10/2031 12:00:00 AM) DigiCert (11/10/2031 12:00:00 AM) DigiNotar Root CA (3/31/2025 6:19:21 PM) DIRECCION GENERAL DE LA POLICIA (2/8/2036 10:59:59 PM) DST (ABA.ECOM) CA (7/9/2009 5:33:53 PM) DST (ANX Network) CA (12/9/2018 4:16:48 PM) DST (Baltimore EZ) CA (7/3/2009 7:56:53 PM) DST (National Retail Federation) RootCA (12/8/2008 4:14:16 PM) DST (United Parcel Service) RootCA (12/7/2008 12:25:46 AM) DST ACES CA X6 (11/20/2017 9:19:58 PM) DST Root CA X3 (9/30/2021 2:01:15 PM) DST RootCA X1 (11/28/2008 6:18:55 PM) DST RootCA X2 (11/27/2008 10:46:16 PM) DSTCA E1 (12/10/2018 6:40:23 PM) DSTCA E2 (12/9/2018 7:47:26 PM) DST-Entrust GTI CA (12/9/2018 12:32:24 AM) D-TRUST GmbH (5/16/2022 5:20:47 AM) D-TRUST GmbH (6/8/2012 11:47:46 AM) D-TRUST GmbH (5/16/2022 5:20:47 AM) EBG Elektronik Sertifika Hizmet Saglayicisi (8/14/2016 12:31:09 AM) E-Certchile (9/5/2028 7:39:41 PM) Echoworx Root CA2 (10/7/2030 10:49:13 AM) ECRaizEstado (6/23/2030 1:41:27 PM) EDICOM (4/13/2028 4:24:22 PM) E-GÜVEN Elektronik Sertifika Hizmet Saglayicisi (1/4/2017 11:32:48 AM) E-ME SSI (RCA) (5/19/2027 8:48:15 AM) Entrust (11/27/2026 8:53:42 PM) Entrust (5/25/2019 4:39:40 PM) Entrust.net (12/7/2030 5:55:54 PM) Equifax Secure eBusiness CA-1 (6/21/2020 4:00:00 AM) Equifax Secure eBusiness CA-2 (6/23/2019 12:14:45 PM) Equifax Secure Global eBusiness CA-1 (6/21/2020 4:00:00 AM) eSign Australia: eSign Imperito Primary Root CA (5/23/2012 11:59:59 PM) eSign Australia: Gatekeeper Root CA (5/23/2014 11:59:59 PM) eSign Australia: Primary Utility Root CA (5/23/2012 11:59:59 PM) Fabrica Nacional de Moneda y Timbre (3/18/2019 3:26:19 PM) GeoTrust (8/22/2018 4:41:51 PM) GeoTrust (7/16/2036 11:59:59 PM) GeoTrust Global CA (5/21/2022 4:00:00 AM) GeoTrust Global CA 2 (3/4/2019 5:00:00 AM) GeoTrust Primary Certification 
Authority - G2 (1/18/2038 11:59:59 PM) GeoTrust Primary Certification Authority - G3 (12/1/2037 11:59:59 PM) GeoTrust Universal CA (3/4/2029 5:00:00 AM) GeoTrust Universal CA 2 (3/4/2029 5:00:00 AM) Global Chambersign Root - 2008 (7/31/2038 12:31:40 PM) GlobalSign (1/28/2028 12:00:00 PM) GlobalSign (12/15/2021 8:00:00 AM) Go Daddy Class 2 Certification Authority (6/29/2034 5:06:20 PM) GTE CyberTrust Global Root (8/13/2018 11:59:00 PM) GTE CyberTrust Root (4/3/2004 11:59:00 PM) GTE CyberTrust Root (2/23/2006 11:59:00 PM) Halcom CA FO (6/5/2020 10:33:31 AM) Halcom CA PO 2 (2/7/2019 6:33:31 PM) Hongkong Post Root CA (1/16/2010 11:59:00 PM) Hongkong Post Root CA 1 (5/15/2023 4:52:29 AM) I.CA První certifikacní autorita a.s. (4/1/2018 12:00:00 AM) I.CA První certifikacní autorita a.s. (4/1/2018 12:00:00 AM) InfoNotary (3/6/2026 5:33:05 PM) IPS SERVIDORES (12/29/2009 11:21:07 PM) IZENPE S.A. (1/30/2018 11:00:00 PM) Izenpe.com (12/13/2037 8:27:25 AM) Japan Certification Services, Inc. SecureSign RootCA1 (9/15/2020 2:59:59 PM) Japan Certification Services, Inc. SecureSign RootCA11 (4/8/2029 4:56:47 AM) Japan Certification Services, Inc. SecureSign RootCA2 (9/15/2020 2:59:59 PM) Japan Certification Services, Inc. SecureSign RootCA3 (9/15/2020 2:59:59 PM) Japan Local Government PKI Application CA (3/31/2016 2:59:59 PM) Japanese Government ApplicationCA (12/12/2017 3:00:00 PM) Juur-SK AS Sertifitseerimiskeskus (8/26/2016 2:23:01 PM) KamuSM (8/21/2017 11:37:07 AM) KISA RootCA 1 (8/24/2025 8:05:46 AM) KISA RootCA 3 (11/19/2014 6:39:51 AM) Macao Post eSignTrust (1/29/2013 11:59:59 PM) MicroSec e-Szigno Root CA (4/6/2017 12:28:44 PM) Microsoft Authenticode(tm) Root (12/31/1999 11:59:59 PM) Microsoft Root Authority (12/31/2020 7:00:00 AM) Microsoft Root Certificate Authority (5/9/2021 11:28:13 PM) Microsoft Timestamp Root (12/30/1999 11:59:59 PM) MOGAHA Govt of Korea (4/21/2012 9:07:23 AM) MOGAHA Govt of Korea GPKI (3/15/2017 6:00:04 AM) NetLock Arany (Class Gold) Fotanúsítvány (12/6/2028 3:08:21 PM) NetLock Expressz (Class C) Tanusitvanykiado (2/20/2019 2:08:11 PM) NetLock Kozjegyzoi (Class A) Tanusitvanykiado (2/19/2019 11:14:47 PM) NetLock Minositett Kozjegyzoi (Class QA) Tanusitvanykiado (12/15/2022 1:47:11 AM) NetLock Platina (Class Platinum) Fotanúsítvány (12/6/2028 3:12:44 PM) NetLock Uzleti (Class B) Tanusitvanykiado (2/20/2019 2:10:22 PM) Netrust CA1 (3/30/2021 2:57:45 AM) Network Solutions (12/31/2029 11:59:59 PM) NLB Nova Ljubljanska Banka d.d. Ljubljana (5/15/2023 12:22:45 PM) OISTE WISeKey Global Root GA CA (12/11/2037 4:09:51 PM) Post.Trust Root CA (7/5/2022 9:12:33 AM) Post.Trust Root CA (8/20/2010 1:56:21 PM) Posta CA Root (10/20/2028 12:52:08 PM) POSTarCA (2/7/2023 11:06:58 AM) QuoVadis Root CA 2 (11/24/2031 6:23:33 PM) QuoVadis Root CA 3 (11/24/2031 7:06:44 PM) QuoVadis Root Certification Authority (3/17/2021 6:33:33 PM) Root CA Generalitat Valenciana (7/1/2021 3:22:47 PM) RSA Security 2048 V3 (2/22/2026 8:39:23 PM) SECOM Trust Systems CO LTD (6/6/2037 2:12:32 AM) SECOM Trust Systems CO LTD (6/25/2019 10:23:48 PM) SECOM Trust Systems CO LTD (9/30/2023 4:20:49 AM) Secretaria de Economia Mexico (5/8/2025 12:00:00 AM) Secrétariat Général de la Défense Nationale (10/17/2020 2:29:22 PM) SecureNet CA Class B (10/16/2009 9:59:00 AM) Serasa Certificate Authority I (11/21/2024 2:12:45 PM) Serasa Certificate Authority II (11/21/2024 12:44:48 PM) Serasa Certificate Authority III (11/21/2024 1:24:14 PM) SERVICIOS DE CERTIFICACION - A.N.C. 
(3/9/2009 9:08:07 PM) Sigen-CA (6/29/2021 9:57:46 PM) Sigov-CA (1/10/2021 2:22:52 PM) Skaitmeninio sertifikavimo centras (12/28/2026 12:05:04 PM) Skaitmeninio sertifikavimo centras (12/25/2026 12:08:26 PM) Skaitmeninio sertifikavimo centras (12/22/2026 12:11:30 PM) Sonera Class1 CA (4/6/2021 10:49:13 AM) Sonera Class2 CA (4/6/2021 7:29:40 AM) Spanish Property & Commerce Registry CA (4/27/2012 9:39:50 AM) Staat der Nederlanden Root CA (12/16/2015 9:15:38 AM) Staat der Nederlanden Root CA - G2 (3/25/2020 11:03:10 AM) Starfield Class 2 Certification Authority (6/29/2034 5:39:16 PM) Starfield Technologies (6/26/2019 12:19:54 AM) Starfield Technologies Inc. (12/31/2029 11:59:59 PM) StartCom Certification Authority (9/17/2036 7:46:36 PM) S-TRUST Authentication and Encryption Root CA 2005:PN (6/21/2030 11:59:59 PM) Swisscom Root CA 1 (8/18/2025 10:06:20 PM) SwissSign (10/25/2036 8:30:35 AM) SwissSign Platinum G2 Root CA (10/25/2036 8:36:00 AM) SwissSign Silver G2 Root CA (10/25/2036 8:32:46 AM) TC TrustCenter Class 1 CA (1/1/2011 11:59:59 AM) TC TrustCenter Class 2 CA (1/1/2011 11:59:59 AM) TC TrustCenter Class 2 CA II (12/31/2025 10:59:59 PM) TC TrustCenter Class 3 CA (1/1/2011 11:59:59 AM) TC TrustCenter Class 3 CA II (12/31/2025 10:59:59 PM) TC TrustCenter Class 4 CA (1/1/2011 11:59:59 AM) TC TrustCenter Class 4 CA II (12/31/2025 10:59:59 PM) TC TrustCenter Time Stamping CA (1/1/2011 11:59:59 AM) TC TrustCenter Universal CA I (12/31/2025 10:59:59 PM) TC TrustCenter Universal CA II (12/31/2030 10:59:59 PM) thawte (12/31/2020 11:59:59 PM) thawte (7/16/2036 11:59:59 PM) thawte (12/31/2020 11:59:59 PM) thawte (12/31/2020 11:59:59 PM) thawte (12/31/2020 11:59:59 PM) thawte (12/31/2020 11:59:59 PM) thawte (12/31/2020 11:59:59 PM) thawte Primary Root CA - G2 (1/18/2038 11:59:59 PM) thawte Primary Root CA - G3 (12/1/2037 11:59:59 PM) Thawte Timestamping CA (12/31/2020 11:59:59 PM) Trustis EVS Root CA (1/9/2027 11:56:00 AM) Trustis FPS Root CA (1/21/2024 11:36:54 AM) Trustwave (1/1/2035 5:37:19 AM) Trustwave (12/31/2029 7:40:55 PM) Trustwave (12/31/2029 7:52:06 PM) TURKTRUST Elektronik Islem Hizmetleri (9/16/2015 12:13:05 PM) TURKTRUST Elektronik Islem Hizmetleri (3/22/2015 10:04:51 AM) TURKTRUST Elektronik Sertifika Hizmet Saglayicisi (9/16/2015 10:07:57 AM) TURKTRUST Elektronik Sertifika Hizmet Saglayicisi (3/22/2015 10:27:17 AM) TÜRKTRUST Elektronik Sertifika Hizmet Saglayicisi (12/22/2017 6:37:19 PM) TW Government Root Certification Authority (12/5/2032 1:23:33 PM) TWCA Root Certification Authority 1 (12/31/2030 3:59:59 PM) TWCA Root Certification Authority 2 (12/31/2030 3:59:59 PM) U.S. 
Government FBCA (10/6/2010 6:53:56 PM) UCA Global Root (12/31/2037 12:00:00 AM) UCA Root (12/31/2029 12:00:00 AM) USERTrust (7/9/2019 6:40:36 PM) USERTrust (7/9/2019 5:36:58 PM) USERTrust (6/24/2019 7:06:30 PM) USERTrust (7/9/2019 6:19:22 PM) USERTrust (5/30/2020 10:48:38 AM) UTN - USERFirst-Network Applications (7/9/2019 6:57:49 PM) ValiCert Class 3 Policy Validation Authority (6/26/2019 12:22:33 AM) VAS Latvijas Pasts SSI(RCA) (9/13/2024 9:27:57 AM) VeriSign (5/18/2018 11:59:59 PM) VeriSign (7/16/2036 11:59:59 PM) VeriSign (8/1/2028 11:59:59 PM) VeriSign (12/31/1999 9:37:48 AM) VeriSign (1/7/2004 11:59:59 PM) VeriSign (5/18/2018 11:59:59 PM) VeriSign (1/7/2004 11:59:59 PM) VeriSign (8/1/2028 11:59:59 PM) VeriSign (8/1/2028 11:59:59 PM) VeriSign (1/7/2020 11:59:59 PM) VeriSign (12/31/1999 9:35:58 AM) VeriSign (8/1/2028 11:59:59 PM) VeriSign (7/16/2036 11:59:59 PM) VeriSign (1/7/2004 11:59:59 PM) VeriSign (7/16/2036 11:59:59 PM) VeriSign (1/7/2010 11:59:59 PM) VeriSign (5/18/2018 11:59:59 PM) VeriSign (8/1/2028 11:59:59 PM) VeriSign (1/7/2004 11:59:59 PM) VeriSign (7/16/2036 11:59:59 PM) VeriSign (7/16/2036 11:59:59 PM) VeriSign (8/1/2028 11:59:59 PM) VeriSign (5/18/2018 11:59:59 PM) VeriSign Class 3 Public Primary CA (8/1/2028 11:59:59 PM) VeriSign Class 3 Public Primary Certification Authority - G4 (1/18/2038 11:59:59 PM) VeriSign Time Stamping CA (1/7/2004 11:59:59 PM) VeriSign Universal Root Certification Authority (12/1/2037 11:59:59 PM) Visa eCommerce Root (6/24/2022 12:16:12 AM) Visa Information Delivery Root CA (6/29/2025 5:42:42 PM) VRK Gov. Root CA (12/18/2023 1:51:08 PM) Wells Fargo Root Certificate Authority (1/14/2021 4:41:28 PM) WellsSecure Public Certificate Authority (12/14/2022 12:07:54 AM) Xcert EZ by DST (7/11/2009 4:14:18 PM)
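As a follow-up to the note above about needing an automated way to manage the trusted CA list, here is a rough sketch (my own, not part of the original post) of a small console program that a startup task could run to remove specific roots from the machine-wide Trusted Root store. The thumbprint value is a placeholder; which CAs are actually safe to remove is entirely deployment-specific, so treat this as a starting point only.

using System;
using System.Security.Cryptography.X509Certificates;

class PruneRootCAs
{
    static void Main()
    {
        // Placeholder thumbprints - replace with the root CAs you have decided not to trust.
        string[] thumbprintsToRemove = new string[]
        {
            "0000000000000000000000000000000000000000"
        };

        // Open the machine-wide Trusted Root store; a startup task must run elevated for this to succeed.
        X509Store store = new X509Store(StoreName.Root, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadWrite);
        try
        {
            foreach (string thumbprint in thumbprintsToRemove)
            {
                X509Certificate2Collection matches = store.Certificates.Find(
                    X509FindType.FindByThumbprint, thumbprint, false);

                foreach (X509Certificate2 cert in matches)
                {
                    Console.WriteLine("Removing {0}", cert.Subject);
                    store.Remove(cert);
                }
            }
        }
        finally
        {
            store.Close();
        }
    }
}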

    Read the article

  • Using RIA DomainServices with ASP.NET and MVC 2

    - by Bobby Diaz
    Recently, I started working on a new ASP.NET MVC 2 project and I wanted to reuse the data access (LINQ to SQL) and business logic methods (WCF RIA Services) that had been developed for a previous project that used Silverlight for the front-end.  I figured that I would be able to instantiate the various DomainService classes from within my controller’s action methods, because after all, the code for those services didn’t look very complicated.  WRONG!  I didn’t realize at first that some of the functionality is handled automatically by the framework when the domain services are hosted as WCF services.  After some initial searching, I came across an invaluable post by Joe McBride, which described how to get RIA Service .svc files to work in an MVC 2 Web Application, and another by Brad Abrams.  Unfortunately, Brad’s solution was for an earlier preview release of RIA Services and no longer works with the version that I am running (PDC Preview). I have not tried the RC version of WCF RIA Services, so I am not sure if any of the issues I am having have been resolved, but I wanted to come up with a way to reuse the shared libraries so I wouldn’t have to write a non-RIA version that basically did the same thing.  The classes I came up with work with the scenarios I have encountered so far, but I wanted to go ahead and post the code in case someone else is having the same trouble I had.  Hopefully this will save you a few headaches! 1. Querying When I first tried to use a DomainService class to perform a query inside one of my controller’s action methods, I got an error stating that “This DomainService has not been initialized.”  To solve this issue, I created an extension method for all DomainServices that creates the required DomainServiceContext and passes it to the service’s Initialize() method.  Here is the code for the extension method; notice that I am creating a sort of mock HttpContext for those cases when the service is running outside of IIS, such as during unit testing!     public static class ServiceExtensions     {         /// <summary>         /// Initializes the domain service by creating a new <see cref="DomainServiceContext"/>         /// and calling the base DomainService.Initialize(DomainServiceContext) method.         /// </summary>         /// <typeparam name="TService">The type of the service.</typeparam>         /// <param name="service">The service.</param>         /// <returns></returns>         public static TService Initialize<TService>(this TService service)             where TService : DomainService         {             var context = CreateDomainServiceContext();             service.Initialize(context);             return service;         }           private static DomainServiceContext CreateDomainServiceContext()         {             var provider = new ServiceProvider(new HttpContextWrapper(GetHttpContext()));             return new DomainServiceContext(provider, DomainOperationType.Query);         }           private static HttpContext GetHttpContext()         {             var context = HttpContext.Current;   #if DEBUG             // create a mock HttpContext to use during unit testing...             
if ( context == null )             {                 var writer = new StringWriter();                 var request = new SimpleWorkerRequest("/", "/",                     String.Empty, String.Empty, writer);                   context = new HttpContext(request)                 {                     User = new GenericPrincipal(new GenericIdentity("debug"), null)                 };             } #endif               return context;         }     }   With that in place, I can use it almost as normally as my first attempt, except with a call to Initialize():     public ActionResult Index()     {         var service = new NorthwindService().Initialize();         var customers = service.GetCustomers();           return View(customers);     } 2. Insert / Update / Delete Once I got the records showing up, I was trying to insert new records or update existing data when I ran into the next issue.  I say issue because I wasn’t getting any kind of error, which made it a little difficult to track down.  But once I realized that that the DataContext.SubmitChanges() method gets called automatically at the end of each domain service submit operation, I could start working on a way to mimic the behavior of a hosted domain service.  What I came up with, was a base class called LinqToSqlRepository<T> that basically sits between your implementation and the default LinqToSqlDomainService<T> class.     [EnableClientAccess()]     public class NorthwindService : LinqToSqlRepository<NorthwindDataContext>     {         public IQueryable<Customer> GetCustomers()         {             return this.DataContext.Customers;         }           public void InsertCustomer(Customer customer)         {             this.DataContext.Customers.InsertOnSubmit(customer);         }           public void UpdateCustomer(Customer currentCustomer)         {             this.DataContext.Customers.TryAttach(currentCustomer,                 this.ChangeSet.GetOriginal(currentCustomer));         }           public void DeleteCustomer(Customer customer)         {             this.DataContext.Customers.TryAttach(customer);             this.DataContext.Customers.DeleteOnSubmit(customer);         }     } Notice the new base class name (just change LinqToSqlDomainService to LinqToSqlRepository).  I also added a couple of DataContext (for Table<T>) extension methods called TryAttach that will check to see if the supplied entity is already attached before attempting to attach it, which would cause an error! 3. LinqToSqlRepository<T> Below is the code for the LinqToSqlRepository class.  The comments are pretty self explanatory, but be aware of the [IgnoreOperation] attributes on the generic repository methods, which ensures that they will be ignored by the code generator and not available in the Silverlight client application.     /// <summary>     /// Provides generic repository methods on top of the standard     /// <see cref="LinqToSqlDomainService&lt;TContext&gt;"/> functionality.     /// </summary>     /// <typeparam name="TContext">The type of the context.</typeparam>     public abstract class LinqToSqlRepository<TContext> : LinqToSqlDomainService<TContext>         where TContext : System.Data.Linq.DataContext, new()     {         /// <summary>         /// Retrieves an instance of an entity using it's unique identifier.         
/// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="keyValues">The key values.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual TEntity GetById<TEntity>(params object[] keyValues) where TEntity : class         {             var table = this.DataContext.GetTable<TEntity>();             var mapping = this.DataContext.Mapping.GetTable(typeof(TEntity));               var keys = mapping.RowType.IdentityMembers                 .Select((m, i) => m.Name + " = @" + i)                 .ToArray();               return table.Where(String.Join(" && ", keys), keyValues).FirstOrDefault();         }           /// <summary>         /// Creates a new query that can be executed to retrieve a collection         /// of entities from the <see cref="DataContext"/>.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <returns></returns>         [IgnoreOperation]         public virtual IQueryable<TEntity> GetEntityQuery<TEntity>() where TEntity : class         {             return this.DataContext.GetTable<TEntity>();         }           /// <summary>         /// Inserts the specified entity.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Insert<TEntity>(TEntity entity) where TEntity : class         {             //var table = this.DataContext.GetTable<TEntity>();             //table.InsertOnSubmit(entity);               return this.Submit(entity, null, DomainOperation.Insert);         }           /// <summary>         /// Updates the specified entity.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Update<TEntity>(TEntity entity) where TEntity : class         {             return this.Update(entity, null);         }           /// <summary>         /// Updates the specified entity.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <param name="original">The original.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Update<TEntity>(TEntity entity, TEntity original)             where TEntity : class         {             if ( original == null )             {                 original = GetOriginal(entity);             }               var table = this.DataContext.GetTable<TEntity>();             table.TryAttach(entity, original);               return this.Submit(entity, original, DomainOperation.Update);         }           /// <summary>         /// Deletes the specified entity.         
/// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Delete<TEntity>(TEntity entity) where TEntity : class         {             //var table = this.DataContext.GetTable<TEntity>();             //table.TryAttach(entity);             //table.DeleteOnSubmit(entity);               return this.Submit(entity, null, DomainOperation.Delete);         }           protected virtual bool Submit(Object entity, Object original, DomainOperation operation)         {             var entry = new ChangeSetEntry(0, entity, original, operation);             var changes = new ChangeSet(new ChangeSetEntry[] { entry });             return base.Submit(changes);         }           private TEntity GetOriginal<TEntity>(TEntity entity) where TEntity : class         {             var context = CreateDataContext();             var table = context.GetTable<TEntity>();             return table.FirstOrDefault(e => e == entity);         }     } 4. Conclusion So there you have it, a fully functional Repository implementation for your RIA Domain Services that can be consumed by your ASP.NET and MVC applications.  I have uploaded the source code along with unit tests and a sample web application that queries the Customers table from inside a Controller, as well as a Silverlight usage example. As always, I welcome any comments or suggestions on the approach I have taken.  If there is enough interest, I plan on contacting Colin Blair or maybe even the man himself, Brad Abrams, to see if this is something worthy of inclusion in the WCF RIA Services Contrib project.  What do you think? Enjoy!
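The TryAttach extension methods referenced above are not included in the post body, so here is a guess at what a minimal implementation might look like (my own sketch, not the author's code): it checks whether the DataContext is already tracking the entity before calling Attach, which avoids the duplicate-attach error mentioned earlier.

using System.Data.Linq;

public static class TableExtensions
{
    // Attach the entity only if this DataContext is not already tracking it.
    public static void TryAttach<TEntity>(this Table<TEntity> table, TEntity entity)
        where TEntity : class
    {
        if (table.GetOriginalEntityState(entity) == null)
        {
            table.Attach(entity);
        }
    }

    // Attach with an original state so LINQ to SQL can compute the update; falls back
    // to attaching the entity as modified when no original is available.
    public static void TryAttach<TEntity>(this Table<TEntity> table, TEntity entity, TEntity original)
        where TEntity : class
    {
        if (table.GetOriginalEntityState(entity) != null)
        {
            return; // already tracked, nothing to do
        }

        if (original != null)
        {
            table.Attach(entity, original);
        }
        else
        {
            table.Attach(entity, true);
        }
    }
}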

    Read the article

  • The Low Down Dirty Azure Blues

    - by SGWellens
Remember the SETI screen savers that used to be on everyone's computer? As far as I know, it was the first bona-fide use of "Cloud" computing…albeit an ad hoc cloud. I still think it was a brilliant leveraging of computing power. My interest in clouds was re-piqued when I went to a technical seminar at the local .Net User Group. The speaker was Mike Benkovitch and he expounded magnificently on the virtues of the Azure platform. Mike always does a good job. One killer reason he gave for cloud computing is instant scalability. Not applicable for most applications, but it is there if needed. I have a bunch of files stored on Microsoft's SkyDrive platform which is cloud storage. It is painfully slow. Accessing a file means going through layers and layers of software, redirections and security. Am I complaining? Hell no! It's free! So my opinions of Cloud Computing are both skeptical and appreciative.

What intrigued me at the seminar, in addition to its other features, is that Azure can serve as a web hosting platform. I have a client with an Asp.Net web site I developed who is not happy with the performance of their current hosting service. I checked the cost of Azure, and since the site has low bandwidth/space requirements, the cost would be competitive with the existing host provider: Azure Pricing Calculator. And, Azure has a three month free trial. Perfect! I could try moving the website and see how it works for free.

I went through the signup process. Everything was proceeding fine until I went to the MS SQL database management screen. A popup window informed me that I needed to install Silverlight on my machine. Silverlight? No thanks. Buh-Bye. I half-heartedly found the Azure support button and logged a ticket telling them I didn't want Silverlight on my machine. Within 4 to 6 hours (and a myriad (5) of automated support emails) they sent me a link to a database management page that did not require Silverlight. Thanks! I was able to create a database immediately. One really nice feature was that after creating the database, I was given a list of connection strings.

I went to the current host provider, made a backup of the database and saved it to my machine. I attached to the remote database using SQL Server Management Studio 2012 and looked for the Restore menu item. It was missing. So I tried using the SQL command:

RESTORE DATABASE MyDatabase FROM DISK ='C:\temp\MyBackup.bak'

Msg 40510, Level 16, State 1, Line 1
Statement 'RESTORE DATABASE' is not supported in this version of SQL Server.

Are you kidding me? Why on earth…? This can't be happening! I opened both the source database and destination database in SQL Management Studio. I right clicked the source database, selected "Tasks" and noticed a menu selection called "Deploy Database to SQL Azure". Are you kidding me? Could it be? Oh yes, it be! There was a small problem because the database already existed on the Azure machine, so I deployed to a new name, deleted the existing database and renamed the deployed database to what I needed. It was ridiculously easy.

Being able to attach SQL Management Studio to remote databases is an awesome but scary feature. You can limit the IP addresses that can access the database, which enhances security, but when you give people, any people, me included, that much power, one errant mouse click could bring a live system down. My Advice: Tread softly and carry a large backup thumb-drive.
Then I created a web site; the URL it returned looked something like this: http://MyWebSite.azurewebsites.net/

Azure supports FTP, but I couldn't figure out the settings until I downloaded the publishing profile. It was an XML file that contained the needed information. I still couldn't connect with my FTP client (FileZilla). After about an hour of messing around, I deleted the port number from the FileZilla setup page….and voila, I was in like Flynn.

There are other options of deploying directly from Visual Studio, TFS, etc., but I do not like integrated tools that do things without my asking: it's usually hard to figure out what they did and how to undo it.

I uploaded the aspx, cs, web.config, etc. files. But it didn't run. The site I ported was in .NET 3.5. The Azure website configuration page gave me a choice between .NET 2.0 and 4.0. So, I switched to Visual Studio 2010, chose .NET 4.0 and upgraded the site. Of course I have the original version completely backed up and stored in a granite cave beneath the Nevada desert. And I have a backup CD under my pillow.

The site uses ReportViewer to generate PDF documents. Of course it was the wrong version. I removed the old references to version 9 and added new references to version 10 (*see note below). Since the DLLs were not on the Azure Server, I uploaded them to the bin directory, crossed my fingers, burned some incense and gave it a try. After some fiddling around it ran. I don't know if I did anything particular to make it work or it just needed time to sort things out.

However, one critical feature didn't work: ReportViewer could not programmatically generate PDF documents. I was getting this exception: "An error occurred during local report processing. Parameter is not valid." Rats. I did some searching and found other people were having the same problem, so I added a post saying I was having the same problem: http://social.msdn.microsoft.com/Forums/en-US/windowsazurewebsitespreview/thread/b4a6eb43-0013-435f-9d11-00ee26a8d017 Currently they are looking into this problem and I am waiting for the results. Hence I had the time to write this BLOG entry. How lucky you are.

This was the last message I got from the Microsoft person: Hi Steve, Windows Azure Web Sites is a multi-tenant environment. For security issue, we limited some API calls. Unfortunately, some GDI APIS required by the PDF converting function are in this list. We have noticed this issue, and still investigation the best way to go. At this moment, there is no news to share. Sorry about this. Will keep you posted.

If I had to guess, I would say they are concerned with people uploading images and doing intensive graphics programming which would hog CPU time. But that is just a guess.

Another problem: while trying to resolve the ReportViewer problem, I tried to write a file to the PDF directory to see if there was a permissions problem, with some test code: String MyPath = MapPath(@"~\PDFs\Test.txt"); File.WriteAllText(MyPath, "Hello Azure"); I got this message: Access to the path <my path> is denied.

After some research, I understood that since Azure is a cloud based platform, it can't allow web applications to save files to local directories. The application could be moved or replicated as scaling occurs, and trying to manage local files would be problematic to say the least. There are other options: Use the Azure APIs to get a path. That way the location of the storage is separated from the application.
However, the web site is then tied to Azure and can't be moved to another hosting platform. Use the ApplicationData folder (not recommended). Write to BLOB storage (see the sketch at the end of this post). Or, I could try to stream the PDF output directly to the email and not save a file. I'm not going to work on a final solution until the ReportViewer is fixed. I am just sharing some of the things you need to be aware of if you decide to use Azure. I got this information from here. (Note the author of the BLOG added a comment saying he has updated his entry.)

Is my memory faulty? While getting this BLOG ready, I tried to write the test file again. And it worked. My memory is incorrect, or much more likely, something changed on the server…perhaps while they are trying to get ReportViewer to work. (Anyway, that's my story and I'm sticking to it.)

*Note: Since Visual Studio 2010 Express doesn't include a Report Editor, I downloaded and installed SQL Server Report Builder 2.0. It is a standalone Report Editor to replace the one not included in Visual Studio 2010 Express.

I hope someone finds this useful.

Steve Wellens

CodeProject
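To make the BLOB storage option mentioned above a little more concrete, here is a rough sketch (my own, not from the original post) of writing a generated PDF byte array to a blob container instead of the local file system. It assumes the Microsoft.WindowsAzure.Storage client library; the connection string, container name and blob name are placeholders.

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class PdfBlobWriter
{
    // Save the rendered PDF bytes to blob storage instead of a local ~/PDFs folder.
    public static void Save(byte[] pdfBytes, string blobName)
    {
        // Placeholder connection string; in a real deployment this comes from configuration.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");

        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("pdfs");
        container.CreateIfNotExists();

        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
        blob.Properties.ContentType = "application/pdf";

        using (MemoryStream stream = new MemoryStream(pdfBytes))
        {
            blob.UploadFromStream(stream);
        }
    }
}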

    Read the article

  • Restful Services, oData, and Rest Sharp

    - by jkrebsbach
    After a great presentation by Jason Sheehan at MDC about RestSharp, I decided to implement it. RestSharp is a .Net framework for consuming restful data sources via either Json or XML. My first step was to put together a Restful data source for RestSharp to consume.  Staying entirely withing .Net, I decided to use Microsoft's oData implementation, built on System.Data.Services.DataServices.  Natively, these support Json, or atom+pub xml.  (XML with a few bells and whistles added on) There are three main steps for creating an oData data source: 1)  override CreateDSPMetaData This is where the metadata data is returned.  The meta data defines the structure of the data to return.  The structure contains the relationships between data objects, along with what properties the objects expose.  The meta data can and should be somehow cached so that the structure is not rebuild with every data request. 2) override CreateDataSource The context contains the data the data source will publish.  This method is the conduit which will populate the metadata objects to be returned to the requestor. 3) implement static InitializeService At this point we can set up security, along with setting up properties of the web service (versioning, etc)   Here is a web service which publishes stock prices for various Products (stocks) in various Categories. namespace RestService {     public class RestServiceImpl : DSPDataService<DSPContext>     {         private static DSPContext _context;         private static DSPMetadata _metadata;         /// <summary>         /// Populate traversable data source         /// </summary>         /// <returns></returns>         protected override DSPContext CreateDataSource()         {             if (_context == null)             {                 _context = new DSPContext();                 Category utilities = new Category(0);                 utilities.Name = "Electric";                 Category financials = new Category(1);                 financials.Name = "Financial";                                 IList products = _context.GetResourceSetEntities("Products");                 Product electric = new Product(0, utilities);                 electric.Name = "ABC Electric";                 electric.Description = "Electric Utility";                 electric.Price = 3.5;                 products.Add(electric);                 Product water = new Product(1, utilities);                 water.Name = "XYZ Water";                 water.Description = "Water Utility";                 water.Price = 2.4;                 products.Add(water);                 Product banks = new Product(2, financials);                 banks.Name = "FatCat Bank";                 banks.Description = "A bank that's almost too big";                 banks.Price = 19.9; // This will never get to the client                 products.Add(banks);                 IList categories = _context.GetResourceSetEntities("Categories");                 categories.Add(utilities);                 categories.Add(financials);                 utilities.Products.Add(electric);                 utilities.Products.Add(electric);                 financials.Products.Add(banks);             }             return _context;         }         /// <summary>         /// Setup rules describing published data structure - relationships between data,         /// key field, other searchable fields, etc.         
/// </summary>         /// <returns></returns>         protected override DSPMetadata CreateDSPMetadata()         {             if (_metadata == null)             {                 _metadata = new DSPMetadata("DemoService", "DataServiceProviderDemo");                 // Define entity type product                 ResourceType product = _metadata.AddEntityType(typeof(Product), "Product");                 _metadata.AddKeyProperty(product, "ProductID");                 // Only add properties we wish to share with end users                 _metadata.AddPrimitiveProperty(product, "Name");                 _metadata.AddPrimitiveProperty(product, "Description");                 EntityPropertyMappingAttribute att = new EntityPropertyMappingAttribute("Name",                     SyndicationItemProperty.Title, SyndicationTextContentKind.Plaintext, true);                 product.AddEntityPropertyMappingAttribute(att);                 att = new EntityPropertyMappingAttribute("Description",                     SyndicationItemProperty.Summary, SyndicationTextContentKind.Plaintext, true);                 product.AddEntityPropertyMappingAttribute(att);                 // Define products as a set of product entities                 ResourceSet products = _metadata.AddResourceSet("Products", product);                 // Define entity type category                 ResourceType category = _metadata.AddEntityType(typeof(Category), "Category");                 _metadata.AddKeyProperty(category, "CategoryID");                 _metadata.AddPrimitiveProperty(category, "Name");                 _metadata.AddPrimitiveProperty(category, "Description");                 // Define categories as a set of category entities                 ResourceSet categories = _metadata.AddResourceSet("Categories", category);                 att = new EntityPropertyMappingAttribute("Name",                     SyndicationItemProperty.Title, SyndicationTextContentKind.Plaintext, true);                 category.AddEntityPropertyMappingAttribute(att);                 att = new EntityPropertyMappingAttribute("Description",                     SyndicationItemProperty.Summary, SyndicationTextContentKind.Plaintext, true);                 category.AddEntityPropertyMappingAttribute(att);                 // A product has a category, a category has products                 _metadata.AddResourceReferenceProperty(product, "Category", categories);                 _metadata.AddResourceSetReferenceProperty(category, "Products", products);             }             return _metadata;         }         /// <summary>         /// Based on the requesting user, can set up permissions to Read, Write, etc.         /// </summary>         /// <param name="config"></param>         public static void InitializeService(DataServiceConfiguration config)         {             config.SetEntitySetAccessRule("*", EntitySetRights.All);             config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;             config.DataServiceBehavior.AcceptProjectionRequests = true;         }     } }     The objects prefixed with DSP come from the samples on the oData site: http://www.odata.org/developers The products and categories objects are POCO business objects with no special modifiers. Three main options are available for defining the MetaData of data sources in .Net: 1) Generate Entity Data model (Potentially directly from SQL Server database).  This requires the least amount of manual interaction, and uses the edmx WYSIWYG editor to generate a data model. 
 This can be directly tied to the SQL Server database and generated from the database if you want a data access layer tightly coupled with your database. 2) Object model decorations.  If you already have a POCO data layer, you can decorate your objects with properties to statically inform the compiler how the objects are related.  The disadvantage is there are now tags strewn about your business layer that need to be updated as the business rules change.  3) Programmatically construct metadata object.  This is the object illustrated above in CreateDSPMetaData.  This puts all relationship information into one central programmatic location.  Here business rules are constructed when the DSPMetaData response object is returned.   Once you have your service up and running, RestSharp is designed for XML / Json, along with the native Microsoft library.  There are currently some differences between how Jason made RestSharp expect XML with how atom+pub works, so I found better results currently with the Json implementation - modifying the RestSharp XML parser to make an atom+pub parser is fairly trivial though, so use what implementation works best for you. I put together a sample console app which calls the RestSvcImpl.svc service defined above (and assumes it to be running on port 2000).  I used both RestSharp as a client, and also the default Microsoft oData client tools. namespace RestConsole {     class Program     {         private static DataServiceContext _ctx;         private enum DemoType         {             Xml,             Json         }         static void Main(string[] args)         {             // Microsoft implementation             _ctx = new DataServiceContext(new System.Uri("http://localhost:2000/RestServiceImpl.svc"));             var msProducts = RunQuery<Product>("Products").ToList();             var msCategory = RunQuery<Category>("/Products(0)/Category").AsEnumerable().Single();             var msFilteredProducts = RunQuery<Product>("/Products?$filter=length(Name) ge 4").ToList();             // RestSharp implementation                          DemoType demoType = DemoType.Json;             var client = new RestClient("http://localhost:2000/RestServiceImpl.svc");             client.ClearHandlers(); // Remove all available handlers             // Set up handler depending on what situation dictates             if (demoType == DemoType.Json)                 client.AddHandler("application/json", new RestSharp.Deserializers.JsonDeserializer());             else if (demoType == DemoType.Xml)             {                 client.AddHandler("application/atom+xml", new RestSharp.Deserializers.XmlDeserializer());             }                          var request = new RestRequest();             if (demoType == DemoType.Json)                 request.RootElement = "d"; // service root element for json             else if (demoType == DemoType.Xml)             {                 request.XmlNamespace = "http://www.w3.org/2005/Atom";             }                              // Return all products             request.Resource = "/Products?$orderby=Name";             RestResponse<List<Product>> productsResp = client.Execute<List<Product>>(request);             List<Product> products = productsResp.Data;             // Find category for product with ProductID = 1             request.Resource = string.Format("/Products(1)/Category");             RestResponse<Category> categoryResp = client.Execute<Category>(request);             Category category = categoryResp.Data;             // 
Specialized queries             request.Resource = string.Format("/Products?$filter=ProductID eq {0}", 1);             RestResponse<Product> productResp = client.Execute<Product>(request);             Product product = productResp.Data;                          request.Resource = string.Format("/Products?$filter=Name eq '{0}'", "XYZ Water");             productResp = client.Execute<Product>(request);             product = productResp.Data;         }         private static IEnumerable<TElement> RunQuery<TElement>(string queryUri)         {             try             {                 return _ctx.Execute<TElement>(new Uri(queryUri, UriKind.Relative));             }             catch (Exception ex)             {                 throw ex;             }         }              } }   Feel free to step through the code a few times and to attach a debugger to the service as well to see how and where the context and metadata objects are constructed and returned.  Pay special attention to the response object being returned by the oData service - There are several properties of the RestRequest that can be used to help troubleshoot when the structure of the response is not exactly what would be expected.

    Read the article

  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
    This is the next in the series about SOA Suite integration with Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process using Web Service calls to the Oracle Utilities Application Framework. The scenario is this: I will pass in the userid and the BPEL process will call out to the AS-User Web Service we created in Part 1. This is just a basic test that illustrates how to import the Web Service into SOA Suite. To use this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product, and Oracle JDeveloper (to build the process).

First of all you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process in. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. I can also use the same process to create a Mediator or OSB project (refer to the JDeveloper documentation on these technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I name the individual BPEL process simpleBPEL (you can use a different name but I wanted to keep the project and process the same for this example). I will also build a Synchronous BPEL Process as I want a response from the Web Service. I will leave the defaults to save time. I now have a blank canvas to build my BPEL process against.

Note: for simplicity I am going to use as much defaulting as possible. In fact I am not going to specify an input schema for the incoming call as I will use the basic single field used by BPEL by default.

The first step is to import the AS-User Web Service into my BPEL project. To do this I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now for the tricky part (a joke): you drag and drop the component from the Palette onto the right side of the canvas in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling). When you drop the Web Service onto the canvas the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name. I have used RetrieveUser as the name. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL field. Once you specify the URL you can press the Find existing WSDLs button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check "copy wsdl and its dependent artifacts into the project" if you intend to work on the BPEL process offline. If you do not check this your target application must be accessible when you work on the BPEL process (that is not always convenient).

Note: The perceptive among you will notice that the URL specified in this example is different to the URL in the last post. The reason is that for the demonstrations I shifted to a new server and did not redo all of the past screen captures.

If you copy the WSDL into the project you will get an information screen about Localize Files. It is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service. 
To complete the setup of the Web Service you need to set the credentials for the Web Service to use. Refer to the past post on how to do that. Now to use the Web Service.

To call the Web Service (as it is just imported, not yet connected to the BPEL process), you must add an Invoke action to your BPEL process. To do this, select the Invoke action from the BPEL Constructs zone on the Component Palette and drop it between the receiveInput and replyOutput nodes. This will create an empty Invoke action. You will notice some connectors on the Invoke node. Grab the connector closest to your Web Service and drag it to connect the Invoke to your Web Service. This instructs BPEL to use the Invoke to call the Web Service.

Once the Invoke action is connected to the Web Service an Edit Invoke dialog is displayed. At this point I suggest you name the Invoke node. It is important to name the nodes straightaway and name them appropriately so you can trace the logic. I used InvokeUser as the name in this example. To complete the node configuration you must create Variables to hold the input and output for the call. To do this click Automatically Create Input Variable on the Edit Invoke dialog. You will be presented with a default variable name. It uses the node name (that is why it is important to name the node before hitting this button) as a prefix. You can name the variable anything but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service.

You now have a very basic BPEL process which contains an input, invoke and output node. It is not complete yet though. You need to tell the BPEL process how to pass data from the input to the invoke step and how to take the output from the service call and pass it back to the caller.

You now need to add an Assign node to assign the input to the Web Service. To do this select the Assign activity from the BPEL Constructs zone in the Component Palette. Drag and drop the Assign activity between the receiveInput and InvokeUser nodes as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process. Double clicking the node allows you to specify the name of the node. I use AssignUser to describe that I am assigning user data. On the Copy Rules tab you can specify the mapping between the input variable InputVariable/payload/process/input string and the input variable for the Web Service call. We are passing data from the input to the BPEL process to the relevant input variable on the Web Service. This is simply drag and drop between the two data structures. In the example, I am using the input to pass to the user element in my Web Service as the user is the primary key for the object. The fields become linked (which means data from source will be copied to target).

Almost there. You now need to pass the output from the Web Service call back to the outputVariable of the client call. I have decided to pass back one piece of data: the name associated with the user, built by concatenating the firstName and lastName elements from the Web Service call. To do this I will use a Transform, as it is not just a matter of an Assign action - it is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components you drag and drop the Transform component to the appropriate place in the BPEL process. 
In this case we want to transform the output from the Web Service call, so we want it after the InvokeUser action and before the replyOutput action. The Transform component is actually part of the Oracle Extensions to the BPEL specification. Double clicking the Transform node will allow you to name the node. In this example I used TransformName. To complete the transform I need to tell the product the source of the transformation and the target of the transform. In the example this is the InvokeUser output variable. I also named the mapper file TransformName. By clicking the + or pencil icon next to the map I can create the map.

The mapping screen shows the source and target schemas for me to map across. As with the Assign I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function on the mapping line. I now attach the last name to the function to indicate the concatenation of the fields. By default the names will be concatenated with no space. To make the name legible I add a space between the fields by clicking the function and adding a space in the call. I now have a completed mapping. I can now save the whole project as my BPEL process is now complete.

As you can see the following happens:
- We accept input from the client (the userid for the call) in the receiveInput step.
- We assign that value to the input parameters for the Web Service call in the AssignUser step.
- We invoke the Web Service call to retrieve the data from the product in the InvokeUser step.
- We take the output from the InvokeUser step and concatenate the names in the TransformName step.
- We pass back the data in the replyOutput step.

At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect as it is really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite. We will use the Fusion Middleware Control test facility. I will assume that credentials have also been set up as per our previous post (else you will get a 401 error). You navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option. Specify some test data on the payload at the bottom of the Test Service screen. In my case I am returning my own userid information. On the Response tab you will see the result. It works. You can verify the steps using the Audit trace facility on individual calls.

As you can see this is a basic BPEL process, but it shows that importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite. I just showed you the basic principles.
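If you would rather exercise the deployed process from code instead of the Fusion Middleware Control test screen, the sketch below shows one way to post a SOAP request to it from a small C# console app. This is only an illustration: the endpoint URL, SOAPAction, namespace and element names are placeholders, so take the real values from the WSDL that SOA Suite publishes for the composite, and adjust the credentials to whatever security you configured.

using System;
using System.IO;
using System.Net;
using System.Text;

class BpelTestClient
{
    static void Main()
    {
        // Placeholder endpoint - copy the real one from the composite's WSDL in SOA Suite
        string endpoint = "http://soahost:8001/soa-infra/services/default/simpleBPEL/simplebpel_client_ep";

        // Placeholder namespace and element names - check the generated WSDL for the real ones
        string soapEnvelope =
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
            "<soap:Body>" +
            "<ns1:process xmlns:ns1=\"http://xmlns.oracle.com/simpleBPEL\">" +
            "<ns1:input>USERID</ns1:input>" +
            "</ns1:process>" +
            "</soap:Body>" +
            "</soap:Envelope>";

        var request = (HttpWebRequest)WebRequest.Create(endpoint);
        request.Method = "POST";
        request.ContentType = "text/xml; charset=utf-8";
        request.Headers.Add("SOAPAction", "process"); // placeholder action
        request.Credentials = new NetworkCredential("weblogic", "password"); // if the endpoint is secured

        byte[] body = Encoding.UTF8.GetBytes(soapEnvelope);
        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(body, 0, body.Length);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // The result element in the response should contain the concatenated name
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}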

    Read the article

  • T-SQL Tuesday #33: Trick Shots: Undocumented, Underdocumented, and Unknown Conspiracies!

    - by Most Valuable Yak (Rob Volk)
    Mike Fal (b | t) is hosting this month's T-SQL Tuesday on Trick Shots.  I love this choice because I've been preoccupied with sneaky/tricky/evil SQL Server stuff for a long time and have been presenting on it for the past year.  Mike's directives were "Show us a cool trick or process you developed…It doesn’t have to be useful", which most of my blogging definitely fits, and "Tell us what you learned from this trick…tell us how it gave you insight in to how SQL Server works", which is definitely a new concept.  I've done a lot of reading and watching on SQL Server Internals and even attended training, but sometimes I need to go explore on my own, using my own tools and techniques.  It's an itch I get every few months, and, well, it sure beats workin'. I've found some people to be intimidated by SQL Server's internals, and I'll admit there are A LOT of internals to keep track of, but there are tons of excellent resources that clearly document most of them, and show how knowing even the basics of internals can dramatically improve your database's performance.  It may seem like rocket science, or even brain surgery, but you don't have to be a genius to understand it. Although being an "evil genius" can help you learn some things they haven't told you about. ;) This blog post isn't a traditional "deep dive" into internals, it's more of an approach to find out how a program works.  It utilizes an extremely handy tool from an even more extremely handy suite of tools, Sysinternals.  I'm not the only one who finds Sysinternals useful for SQL Server: Argenis Fernandez (b | t), Microsoft employee and former T-SQL Tuesday host, has an excellent presentation on how to troubleshoot SQL Server using Sysinternals, and I highly recommend it.  Argenis didn't cover the Strings.exe utility, but I'll be using it to "hack" the SQL Server executable (DLL and EXE) files. Please note that I'm not promoting software piracy or applying these techniques to attack SQL Server via internal knowledge. This is strictly educational and doesn't reveal any proprietary Microsoft information.  And since Argenis works for Microsoft and demonstrated Sysinternals with SQL Server, I'll just let him take the blame for it. :P (The truth is I've used Strings.exe on SQL Server before I ever met Argenis.) Once you download and install Strings.exe you can run it from the command line.  For our purposes we'll want to run this in the Binn folder of your SQL Server instance (I'm referencing SQL Server 2012 RTM): cd "C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn" C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.dll > sqldll.txt C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.exe > sqlexe.txt   I've limited myself to DLLs and EXEs that have "sql" in their names.  There are quite a few more but I haven't examined them in any detail. (Homework assignment for you!) If you run this yourself you'll get 2 text files, one with all the extracted strings from every SQL DLL file, and the other with the SQL EXE strings.  You can open these in Notepad, but you're better off using Notepad++, EditPad, Emacs, Vim or another more powerful text editor, as these will be several megabytes in size. And when you do open it…you'll find…a TON of gibberish.  (If you think that's bad, just try opening the raw DLL or EXE file in Notepad.  And by the way, don't do this in production, or even on a running instance of SQL Server.)  
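(If scrolling through that gibberish in an editor does not appeal, a small C# sketch like the one below can pull out just the lines containing a keyword you care about. The file path and keyword are only the examples used in this post - point it at wherever you wrote the Strings.exe output.)

using System;
using System.IO;
using System.Linq;

class StringsGrep
{
    static void Main(string[] args)
    {
        // Defaults match the examples in this post; pass a different keyword/file on the command line
        string keyword = args.Length > 0 ? args[0] : "MaxDOPSetToOne";
        string file = args.Length > 1
            ? args[1]
            : @"C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn\sqldll.txt";

        var hits = File.ReadLines(file)
                       .Where(line => line.IndexOf(keyword, StringComparison.OrdinalIgnoreCase) >= 0);

        foreach (string hit in hits)
        {
            Console.WriteLine(hit);
        }
    }
}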
Even if you don't clean up the file, you can still use your editor's search function to find a keyword like "SELECT" or some other item you expect to be there.  As dumb as this sounds, I sometimes spend my lunch break just scanning the raw text for anything interesting.  I'm boring like that.

Sometimes though, having these files available can lead to some incredible learning experiences.  For me the most recent time was after reading Joe Sack's post on non-parallel plan reasons.  He mentions a new SQL Server 2012 execution plan element called NonParallelPlanReason, and demonstrates a query that generates "MaxDOPSetToOne".  Joe (formerly on the Microsoft SQL Server product team, so he knows this stuff) mentioned that this new element was not currently documented and tried a few more examples to see what other reasons could be generated.

Since I'd already run Strings.exe on the SQL Server DLLs and EXE files, it was easy to run grep/find/findstr for MaxDOPSetToOne on those extracts.  Once I found which files it belonged to (sqlmin.dll) I opened the text to see if the other reasons were listed.  As you can see in my comment on Joe's blog, there were about 20 additional non-parallel reasons.  And while it's not "documentation" of this underdocumented feature, the names are pretty self-explanatory about what can prevent parallel processing. I especially like the ones about cursors – more ammo! - and am curious about the PDW compilation and Cloud DB replication reasons.

One reason completely stumped me: NoParallelHekatonPlan.  What the heck is a hekaton?  Google and Wikipedia were vague, and the top results were not in English.  I found one reference to Greek, stating "hekaton" can be translated as "hundredfold"; with a little more Wikipedia-ing this leads to hecto, the prefix for "one hundred" as a unit of measure.  I'm not sure why Microsoft chose hekaton for such a plan name, but having already learned some Greek I figured I might as well dig some more in the DLL text for hekaton.  Here's what I found:

hekaton_slow_param_passing
Occurs when a Hekaton procedure call dispatch goes to slow parameter passing code path
The reason why Hekaton parameter passing code took the slow code path
hekaton_slow_param_pass_reason
sp_deploy_hekaton_database
sp_undeploy_hekaton_database
sp_drop_hekaton_database
sp_checkpoint_hekaton_database
sp_restore_hekaton_database
e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\hkproc.cpp
e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matgen.cpp
e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matquery.cpp
e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\sqlmeta.cpp
e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\resultset.cpp

Interesting!  The first 4 entries (in red) mention parameters and "slow code".  Could this be the foundation of the mythical DBCC RUNFASTER command?  Have I been passing my parameters the slow way all this time? And what about those sp_xxxx_hekaton_database procedures (in blue)? Could THEY be the secret to a faster SQL Server? Could they promise a "hundredfold" improvement in performance?  Are these special, super-undocumented DIB (databases in black)? 
I decided to look in the SQL Server system views for any objects with hekaton in the name, or references to them, in hopes of discovering some new code that would answer all my questions:

SELECT name FROM sys.all_objects WHERE name LIKE '%hekaton%'
SELECT name FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%hekaton%'

Which revealed:

name
------------------------
(0 row(s) affected)

name
------------------------
sp_createstats
sp_recompile
sp_updatestats
(3 row(s) affected)

Hmm.  Well that didn't find much.  Looks like these procedures are seriously undocumented, unknown, perhaps forbidden knowledge. Maybe a part of some unspeakable evil? (No, I'm not paranoid, I just like mysteries and thought that punching this up with that kind of thing might keep you reading.  I know I'd fall asleep without it.)

OK, so let's check out those 3 procedures and see what they reveal when I search for "Hekaton":

sp_createstats:

-- filter out local temp tables, Hekaton tables, and tables for which current user has no permissions
-- Note that OBJECTPROPERTY returns NULL on type="IT" tables, thus we only call it on type='U' tables

OK, that's interesting, let's go looking down a little further:

((@table_type<>'U') or (0 = OBJECTPROPERTY(@table_id, 'TableIsInMemory'))) and -- Hekaton table

Wellllll, that tells us a few new things:
- There's such a thing as Hekaton tables (UPDATE: I'm not the only one to have found them!)
- They are not standard user tables and probably not in memory. UPDATE: I misinterpreted this because I didn't read all the code when I wrote this blog post.
- The OBJECTPROPERTY function has an undocumented TableIsInMemory option

Let's check out sp_recompile:

-- (3) Must not be a Hekaton procedure.

And once again go a little further:

if (ObjectProperty(@objid, 'IsExecuted') <> 0 AND
    ObjectProperty(@objid, 'IsInlineFunction') = 0 AND
    ObjectProperty(@objid, 'IsView') = 0 AND
    -- Hekaton procedure cannot be recompiled
    -- Make them go through schema version bumping branch, which will fail
    ObjectProperty(@objid, 'ExecIsCompiledProc') = 0)

And now we learn that hekaton procedures also exist, they can't be recompiled, there's a "schema version bumping branch" somewhere, and OBJECTPROPERTY has another undocumented option, ExecIsCompiledProc.  (If you experiment with this you'll find this option returns null, I think it only works when called from a system object.) This is neat!

Sadly sp_updatestats doesn't reveal anything new, the comments about hekaton are the same as sp_createstats.  But we've ALSO discovered undocumented features for the OBJECTPROPERTY function, which we can now search for:

SELECT name, object_definition(OBJECT_ID) FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%OBJECTPROPERTY(%'

I'll leave that to you as more homework.  I should add that searching the system procedures was recommended long ago by the late, great Ken Henderson, in his Guru's Guide books, as a great way to find undocumented features.  That seems to be really good advice!

Now if you're a programmer/hacker, you've probably been drooling over the last 5 entries for hekaton (in green), because these are the names of source code files for SQL Server!  Does this mean we can access the source code for SQL Server?  As The Oracle suggested to Neo, can we return to The Source??? Actually, no. Well, maybe a little bit.  
While you won't get the actual source code from the compiled DLL and EXE files, you'll get references to source files, debugging symbols, variables and module names, error messages, and even the startup flags for SQL Server.  And if you search for "DBCC" or "CHECKDB" you'll find a really nice section listing all the DBCC commands, including the undocumented ones.  Granted those are pretty easy to find online, but you may be surprised what those web sites DIDN'T tell you! (And neither will I, go look for yourself!)  And as we saw earlier, you'll also find execution plan elements, query processing rules, and who knows what else.  It's also instructive to see how Microsoft organizes their source directories, how various components (storage engine, query processor, Full Text, AlwaysOn/HADR) are split into smaller modules. There are over 2000 source file references, go do some exploring! So what did we learn?  We can pull strings out of executable files, search them for known items, browse them for unknown items, and use the results to examine internal code to learn even more things about SQL Server.  We've even learned how to use command-line utilities!  We are now 1337 h4X0rz!  (Not really.  I hate that leetspeak crap.) Although, I must confess I might've gone too far with the "conspiracy" part of this post.  I apologize for that, it's just my overactive imagination.  There's really no hidden agenda or conspiracy regarding SQL Server internals.  It's not The Matrix.  It's not like you'd find anything like that in there: Attach Matrix Database DM_MATRIX_COMM_PIPELINES MATRIXXACTPARTICIPANTS dm_matrix_agents   Alright, enough of this paranoid ranting!  Microsoft are not really evil!  It's not like they're The Borg from Star Trek: ALTER FEDERATION DROP ALTER FEDERATION SPLIT DROP FEDERATION   #tsql2sday

    Read the article

< Previous Page | 65 66 67 68 69 70 71 72 73  | Next Page >