Search Results

Search found 16113 results on 645 pages for 'cross domain'.


  • Gmail rejects emails. Openspf.net fails the tests

    - by pablomedok
    I've got a problem with Gmail. It started after one of our trojan infected PCs sent spam for one day from our IP address. We've fixed the problem, but we got into 3 black lists. We've fixed that, too. But still every time we send an email to Gmail the message is rejected: So I've checked Google Bulk Sender's guide once again and found an error in our SPF record and fixed it. Google says everything should become fine after some time, but this doesn't happen. 3 weeks already passed but we still can't send emails to Gmail. Our MX setup is a bit complex, but not too much: We have a domain name delo-company.com, it has it's own mail @delo-company.com (this one is fine, but the problems are with sub-domain name corp.delo-company.com). Delo-company.com domain has several DNS records for the subdomain: corp A 82.209.198.147 corp MX 20 corp.delo-company.com corp.delo-company.com TXT "v=spf1 ip4:82.209.198.147 ~all" (I set ~all for testing purposes only, it was -all before that) These records are for our corporate Exchange 2003 server at 82.209.198.147. Its LAN name is s2.corp.delo-company.com so its HELO/EHLO greetings are also s2.corp.delo-company.com. To pass EHLO check we've also created some records in delo-company.com's DNS: s2.corp A 82.209.198.147 s2.corp.delo-company.com TXT "v=spf1 ip4:82.209.198.147 ~all" As I understand SPF verifications should be passed in this way: Out server s2 connects to MX of the recepient (Rcp.MX): EHLO s2.corp.delo-company.com Rcp.MX says Ok, and makes SPF check of HELO/EHLO. It does NSlookup for s2.corp.delo-company.com and gets the above DNS-records. TXT records says that s2.corp.delo-company.com should be only from IP 82.209.198.147. So it should be passed. Then our s2 server says RCPT FROM: Rcp.MX` server checks it, too. The values are the same so they should also be positive. Maybe there is also a rDNS check, but I'm not sure what is checked HELO or RCPT FROM. Our PTR record for 82.209.198.147 is: 147.198.209.82.in-addr.arpa. 86400 IN PTR s2.corp.delo-company.com. To me everything looks fine, but anyway all emails are rejected by Gmail. So, I've checked MXtoolbox.com - it says everything is fine, I passed http://www.kitterman.com/spf/validate.html Python check, I did 25port.com email test. 
It's fine, too: Return-Path: <[email protected]> Received: from s2.corp.delo-company.com (82.209.198.147) by verifier.port25.com id ha45na11u9cs for <[email protected]>; Fri, 2 Mar 2012 13:03:21 -0500 (envelope-from <[email protected]>) Authentication-Results: verifier.port25.com; spf=pass [email protected] Authentication-Results: verifier.port25.com; domainkeys=neutral (message not signed) [email protected] Authentication-Results: verifier.port25.com; dkim=neutral (message not signed) Authentication-Results: verifier.port25.com; sender-id=pass [email protected] Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: multipart/alternative; boundary="----_=_NextPart_001_01CCF89E.BE02A069" Subject: test Date: Fri, 2 Mar 2012 21:03:15 +0300 X-MimeOLE: Produced By Microsoft Exchange V6.5 Message-ID: <[email protected]> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: test Thread-Index: Acz4jS34oznvbyFQR4S5rXsNQFvTdg== From: =?koi8-r?B?89XQ0tXOwMsg8MHXxcw=?= <[email protected]> To: <[email protected]> I also checked with [email protected], but it FAILs all the time, no matter which SPF records I make: <s2.corp.delo-company.com #5.7.1 smtp;550 5.7.1 <[email protected]>: Recipient address rejected: SPF Tests: Mail-From Result="softfail": Mail From="[email protected]" HELO name="s2.corp.delo-company.com" HELO Result="softfail" Remote IP="82.209.198.147"> I've filled Gmail form twice, but nothing happens. We do not send spam, only emails for our clients. 2 or 3 times we did mass emails (like New Year Greetings and sales promos) from corp.delo-company.com addresses, but they where all complying to Gmail Bulk Sender's Guide (I mean SPF, Open Relays, Precedence: Bulk and Unsubscribe tags). So, this should be not a problem. Please, help me. What am I doing wrong? UPD: I also tried Unlocktheinbox.com test and the server also fails this test. Here is the result: http://bit.ly/wYr39h . Here is one more http://bit.ly/ypWLjr I also tried to send email from that server manually via telnet and everything is fine. Here is what I type: 220 mx.google.com ESMTP g15si4811326anb.170 HELO s2.corp.delo-company.com 250 mx.google.com at your service MAIL FROM: <[email protected]> 250 2.1.0 OK g15si4811326anb.170 RCPT TO: <[email protected]> 250 2.1.5 OK g15si4811326anb.170 DATA 354 Go ahead g15si4811326anb.170 From: [email protected] To: Pavel <[email protected]> Subject: Test 28 This is telnet test . 250 2.0.0 OK 1330795021 g15si4811326anb.170 QUIT 221 2.0.0 closing connection g15si4811326anb.170 And this is what I get: Delivered-To: [email protected] Received: by 10.227.132.73 with SMTP id a9csp96864wbt; Sat, 3 Mar 2012 09:17:02 -0800 (PST) Received: by 10.101.128.12 with SMTP id f12mr4837125ann.49.1330795021572; Sat, 03 Mar 2012 09:17:01 -0800 (PST) Return-Path: <[email protected]> Received: from s2.corp.delo-company.com (s2.corp.delo-company.com. [82.209.198.147]) by mx.google.com with SMTP id g15si4811326anb.170.2012.03.03.09.15.59; Sat, 03 Mar 2012 09:17:00 -0800 (PST) Received-SPF: pass (google.com: domain of [email protected] designates 82.209.198.147 as permitted sender) client-ip=82.209.198.147; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 82.209.198.147 as permitted sender) [email protected] Date: Sat, 03 Mar 2012 09:17:00 -0800 (PST) Message-Id: <[email protected]> From: [email protected] To: Pavel <[email protected]> Subject: Test 28 This is telnet test
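For completeness, the manual telnet session above can also be scripted. The following C# sketch is not part of the original question; it simply automates the same check by connecting to one of Gmail's public MX hosts, sending the EHLO name under test, and printing the server's replies. The MX host name and the EHLO value are assumptions taken from the transcript above, and outbound port 25 must not be blocked for it to work.

    // Hedged sketch: replays the manual SMTP EHLO check shown in the question.
    using System;
    using System.IO;
    using System.Net.Sockets;

    class SmtpHeloProbe
    {
        static void Main()
        {
            // gmail-smtp-in.l.google.com is one of Gmail's published MX hosts.
            using (var client = new TcpClient("gmail-smtp-in.l.google.com", 25))
            using (var stream = client.GetStream())
            using (var reader = new StreamReader(stream))
            using (var writer = new StreamWriter(stream) { AutoFlush = true, NewLine = "\r\n" })
            {
                Console.WriteLine(reader.ReadLine());                  // 220 banner

                writer.WriteLine("EHLO s2.corp.delo-company.com");     // the HELO name being tested
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    Console.WriteLine(line);
                    if (line.Length >= 4 && line[3] == ' ') break;     // last line of a multi-line reply
                }

                writer.WriteLine("QUIT");
                Console.WriteLine(reader.ReadLine());                  // 221 closing
            }
        }
    }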

    Read the article

  • DNS resolution problems; dig SERVFAIL error

    - by JustinP
    I'm setting up a couple of dedicated servers, and having problems setting up my nameservers properly. One of these is a LEMP server (LAMP with nginx in place of Apache), and the other will function solely as an email server, running exim/dovecot/ASSP antispam (no Apache). The LEMP server is CentOS 5.5, with no control panel, while the email server is CentOS 5.5 as well, with cPanel/WHM. So, I've had problems getting DNS set up properly. I have two domains, each one pointing to one of these servers. The nameservers are registered correctly with the domain registrar, and the nameserver IPs are entered correctly as well. I've spoken to tech support at the registrar and they confirm that everything is set up on their end. Not knowing much about DNS, I googled nameservers and DNS until I nearly went blind, and spent hours messing with the configuration. Eventually, I got the LEMP server's DNS working properly (no cPanel). Pleased with this triumph, I'm trying to mimic that configuration and repeat the process with the email server, and it's just not happening. The nameserver starts and stops, but the domain doesn't resolve. Things I have tried Going through standard procedures to set up DNS in WHM Clearing all DNS information, uninstalling BIND, then reinstalling all of that and again going through WHM procedures for setting up DNS Clearing all DNS information, and setting up BIND via shell (completely outside of cPanel) by using my config and zone files from the LEMP server as a template named runs just fine, but nothing is resolving. When I "dig any example.com" I get a SERVFAIL message. Nslookups return no information. Here are my config and zone files. named.conf controls { inet 127.0.0.1 allow { localhost; } keys { coretext-key; }; }; options { listen-on port 53 { any; }; listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; // Those options should be used carefully because they disable port // randomization // query-source port 53; // query-source-v6 port 53; allow-query { any; }; allow-query-cache { any; }; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; view "localhost_resolver" { match-clients { 127.0.0.0/24; }; match-destinations { localhost; }; recursion yes; //zone "." IN { // type hint; // file "/var/named/named.ca"; //}; include "/etc/named.rfc1912.zones"; }; view "internal" { /* This view will contain zones you want to serve only to "internal" clients that connect via your directly attached LAN interfaces - "localnets" . */ match-clients { localnets; }; match-destinations { localnets; }; recursion yes; zone "." IN { type hint; file "/var/named/named.ca"; }; // include "/var/named/named.rfc1912.zones"; // you should not serve your rfc1912 names to non-localhost clients. 
// These are your "authoritative" internal zones, and would probably // also be included in the "localhost_resolver" view above : zone "example.com" { type master; file "data/db.example.com"; }; zone "3.2.1.in-addr.arpa" { type master; file "data/db.1.2.3"; }; }; view "external" { /* This view will contain zones you want to serve only to "external" clients * that have addresses that are not on your directly attached LAN interface subnets: */ match-clients { any; }; match-destinations { any; }; recursion no; // you'd probably want to deny recursion to external clients, so you don't // end up providing free DNS service to all takers allow-query-cache { none; }; // Disable lookups for any cached data and root hints // all views must contain the root hints zone: //include "/etc/named.rfc1912.zones"; zone "." IN { type hint; file "/var/named/named.ca"; }; zone "example.com" { type master; file "data/db.example.com"; }; zone "3.2.1.in-addr.arpa" { type master; file "data/db.1.2.3"; }; }; include "/etc/rndc.key"; db.example.com $TTL 1D ; ; Zone file for example.com ; ; Mandatory minimum for a working domain ; @ IN SOA ns1.example.com. contact.example.com. ( 2011042905 ; serial 8H ; refresh 2H ; retry 4W ; expire 1D ; default_ttl ) NS ns1.example.com. NS ns2.example.com. ns1 A 1.2.3.4 ns2 A 1.2.3.5 example.com. A 1.2.3.4 localhost A 127.0.0.1 www CNAME example.com. mail CNAME example.com. ; db.1.2.3 $TTL 1D $ORIGIN 3.2.1.in-addr.arpa. @ IN SOA ns1.example.com contact.example.com. ( 2011042908 ; 8H ; 2H ; 4W ; 1D ; ) NS ns1.example.com. NS ns2.example.com. 4 PTR hostname.example.com. 5 PTR hostname.example.com. ; Also of note: both of these servers are managed. Tech support is very responsive, and largely useless. Hours go by with them asking me questions to narrow down what could be wrong, then they pass the ticket to the tech on the next shift, who ignores everything that's happened already and spend his whole shift asking all the same questions the last guy asked. So, in summary: *Nameservers, with IPs, are correctly registered with domain registrar *named is configured and running *...and must not be configured correctly, because nothing resolves. Any help would be great. I changed domains and IPs in the files to generics, but let me know if you need to know the domain in question. Thanks! UPDATE I found that I didn't have 127.0.0.1 in /etc/resolv.conf, so I added it, along with my two public IPs that I have named listening on. resolv.conf search www.example.com example.com nameserver 127.0.0.1 nameserver 7.8.9.10 ;Was in here by default, authoritative nameserver of hosting company nameserver 1.2.3.4 ;Public IP #1 nameserver 1.2.3.5 ;Public IP #2 Now when I DIG example.com from the host, it resolves. If I try to DIG from my other server (in the same datacenter), or from the internet, it times out or I get SERVFAIL.

    Read the article

  • Series of abstract classes and NHibernate

    - by Chris Cowdery-Corvan
    Hello, and first off thanks for your time to look at this. For a research project I'm working on, I have a somewhat complex design (which I've been given) to persist to a database via NHibernate. Here's an example of the class hierarchy: TransitStrategy, TransportationCompany and TransportationLocation are all abstract classes. The XML configuration I have is presently: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="Vacationizer" namespace="Vacationizer.Domain.Transit"> <class name="TransitStrategy"> <id name="TransitStrategyId"> <generator class="guid" /> </id> <property name="Restrictions" /> <joined-subclass name="Flight" table="Flight_TransitStrategy"> <key column="TransitStrategyId" /> <property name="DepartingAirport" /> <property name="ArrivingAirport" /> <property name="Airline" /> <property name="FlightNumber" /> <property name="FlightArrivalTime" /> <property name="FlightDepartureTime" /> </joined-subclass> <joined-subclass name="RentalCar" table="RentalCar_TransitStrategy"> <key column="TransitStrategyId" /> <property name="RentalCarBranch" /> <property name="CarMake" /> <property name="CarModel" /> <property name="CarYear" /> <property name="CarColor" /> <property name="RentalBegins" /> <property name="RentalEnds" /> </joined-subclass> </class> <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="Vacationizer" namespace="Vacationizer.Domain.Transit"> <class name="TransportationCompany"> <id name="TransportationCompanyId"> <generator class="guid" /> </id> <property name="Name" /> <property name="Reviews" /> <property name="Website" /> <property name="Photo" /> <joined-subclass name="Airline" table="Airline_TransportationCompany"> <key column="TransportationLocationId" /> </joined-subclass> <joined-subclass name="RentalCarAgency" table="RentalCarAgency_TransportationCompany"> <key column="TransportationLocationId" /> </joined-subclass> </class> <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="Vacationizer" namespace="Vacationizer.Domain.Transit"> <class name="TransportationLocation"> <id name="TransportationLocationId"> <generator class="guid" /> </id> <property name="Name" /> <property name="Image" /> <property name="Geolocation" /> <property name="Reviews" /> <!-- <property name="HoursOpen" />--> <property name="PhoneNumber" /> <property name="FaxNumber" /> <joined-subclass name="Airport" table="Airport_TransportationLocation"> <key column="TransportationLocationId" /> <property name="AirportCode" /> <property name="Website" /> </joined-subclass> <joined-subclass name="RentalCarBranch" table="RentalCarBranch_TransportationLocation"> <key column="TransitStrategyId" /> <property name="Agency" /> </joined-subclass> </class> However, whenever I try to use this schema I get this error/stack trace: ------ Test started: Assembly: Vacationizer.Tests.dll ------ TestCase 'M:Vacationizer.Tests.VacationRepository_Fixture.TestFixtureSetUp' failed: Could not compile the mapping document: Vacationizer.Mappings.TransitStrategy.hbm.xml NHibernate.MappingException: Could not compile the mapping document: Vacationizer.Mappings.TransitStrategy.hbm.xml ---> NHibernate.MappingException: Problem trying to set property type by reflection ---> NHibernate.MappingException: class Vacationizer.Domain.Transit.RentalCar, Vacationizer, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null not found while looking for property: RentalCarBranch 
---> NHibernate.PropertyNotFoundException: Could not find a getter for property 'RentalCarBranch' in class 'Vacationizer.Domain.Transit.RentalCar' at NHibernate.Properties.BasicPropertyAccessor.GetGetter(Type type, String propertyName) at NHibernate.Util.ReflectHelper.ReflectedPropertyClass(String className, String name, String accessorName) --- End of inner exception stack trace --- at NHibernate.Util.ReflectHelper.ReflectedPropertyClass(String className, String name, String accessorName) at NHibernate.Mapping.SimpleValue.SetTypeUsingReflection(String className, String propertyName, String accesorName) --- End of inner exception stack trace --- at NHibernate.Mapping.SimpleValue.SetTypeUsingReflection(String className, String propertyName, String accesorName) at NHibernate.Cfg.XmlHbmBinding.ClassBinder.CreateProperty(IValue value, String propertyName, String className, XmlNode subnode, IDictionary`2 inheritedMetas) at NHibernate.Cfg.XmlHbmBinding.ClassBinder.PropertiesFromXML(XmlNode node, PersistentClass model, IDictionary`2 inheritedMetas, UniqueKey uniqueKey, Boolean mutable, Boolean nullable, Boolean naturalId) at NHibernate.Cfg.XmlHbmBinding.JoinedSubclassBinder.HandleJoinedSubclass(PersistentClass model, XmlNode subnode, IDictionary`2 inheritedMetas) at NHibernate.Cfg.XmlHbmBinding.ClassBinder.PropertiesFromXML(XmlNode node, PersistentClass model, IDictionary`2 inheritedMetas, UniqueKey uniqueKey, Boolean mutable, Boolean nullable, Boolean naturalId) at NHibernate.Cfg.XmlHbmBinding.RootClassBinder.Bind(XmlNode node, HbmClass classSchema, IDictionary`2 inheritedMetas) at NHibernate.Cfg.XmlHbmBinding.MappingRootBinder.AddRootClasses(XmlNode parentNode, IDictionary`2 inheritedMetas) at NHibernate.Cfg.XmlHbmBinding.MappingRootBinder.Bind(XmlNode node) at NHibernate.Cfg.Configuration.AddValidatedDocument(NamedXmlDocument doc) --- End of inner exception stack trace --- at NHibernate.Cfg.Configuration.LogAndThrow(Exception exception) at NHibernate.Cfg.Configuration.AddValidatedDocument(NamedXmlDocument doc) at NHibernate.Cfg.Configuration.ProcessMappingsQueue() at NHibernate.Cfg.Configuration.AddDocumentThroughQueue(NamedXmlDocument document) at NHibernate.Cfg.Configuration.AddXmlReader(XmlReader hbmReader, String name) at NHibernate.Cfg.Configuration.AddInputStream(Stream xmlInputStream, String name) at NHibernate.Cfg.Configuration.AddResource(String path, Assembly assembly) at NHibernate.Cfg.Configuration.AddAssembly(Assembly assembly) at NHibernate.Cfg.Configuration.AddAssembly(String assemblyName) at NHibernate.Cfg.Configuration.DoConfigure(IHibernateConfiguration hc) at NHibernate.Cfg.Configuration.Configure() VacationRepository_Fixture.cs(24,0): at Vacationizer.Tests.VacationRepository_Fixture.TestFixtureSetUp() 0 passed, 1 failed, 0 skipped, took 8.38 seconds (Ad hoc). Any ideas on how I can implement this differently? Thanks very much!
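The innermost exception above pins the failure down: NHibernate's default property accessor looks for a public getter named RentalCarBranch on the RentalCar class and cannot find one. Below is a minimal, hypothetical sketch of the shape the mapping expects; the property names are taken from the mapping file, and the stub base types are stand-ins added only so the snippet compiles on its own, so treat it as an illustration rather than the project's actual domain code.

    using System;

    namespace Vacationizer.Domain.Transit
    {
        // Minimal stand-ins so the sketch is self-contained; the real project
        // already defines these abstract types.
        public abstract class TransitStrategy { public virtual Guid TransitStrategyId { get; set; } }
        public abstract class TransportationLocation { }
        public class RentalCarBranch : TransportationLocation { }

        public class RentalCar : TransitStrategy
        {
            // NHibernate's default accessor resolves each <property> element through a
            // public getter/setter pair; "virtual" also allows lazy-loading proxies.
            public virtual RentalCarBranch RentalCarBranch { get; set; }
            public virtual string CarMake { get; set; }
            public virtual string CarModel { get; set; }
            public virtual int CarYear { get; set; }
            public virtual string CarColor { get; set; }
            public virtual DateTime RentalBegins { get; set; }
            public virtual DateTime RentalEnds { get; set; }
        }
    }

If the value is kept in a field rather than a property, the mapping can say so instead, for example with access="field.camelcase" on the property element.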

    Read the article

  • Using Telerik's new LINQ implementation to create OData feeds

    This week Telerik released a new LINQ implementation that is simple to use and produces domain models very fast. Built on top of the enterprise-grade OpenAccess ORM, you can connect to any database that OpenAccess can connect to, such as SQL Server, MySQL, Oracle, SQL Azure, VistaDB, etc. While this is a separate LINQ implementation from traditional OpenAccess Entities, you can use the visual designer without ever interacting with OpenAccess; however, you can always hook into the advanced ORM features like caching, fetch plan optimization, etc., if needed. Just to show off how easy our LINQ implementation is to use, I will walk you through building an OData feed using Data Services Update for .NET Framework 3.5 SP1. (Memo to Microsoft: P-L-E-A-S-E hire someone from Apple to name your products.) How easy is it? If you have a fast machine, are skilled with the mouse, and type fast, you can do this in about 60 seconds via three easy steps. (I promise in about 2-3 weeks that you can do this in less than 30 seconds. Stay tuned for that.)

Step 1 (15-20 seconds): Building your Domain Model. In your web project in Visual Studio, right-click on the project, select Add|New Item and select Telerik OpenAccess Domain Model as your item template. Give the file a meaningful name as well. Select your database type (SQL Server, SQL Azure, Oracle, MySQL, VistaDB, etc.) and build the connection string. If you already have a Visual Studio connection string saved, this step is trivial. Then select your tables, enter a name for your model and click Finish. In this case I connected to Northwind and selected only Customers, Orders, and Order Details. I named my model NorthwindEntities and will use that in my DataService.

Step 2 (20-25 seconds): Adding and Configuring your Data Service. In your web project in Visual Studio, right-click on the project, select Add|New Item and select ADO.NET Data Service as your item template and name your service. In the code-behind for your Data Service you have to make three small changes. Add the name of your Telerik Domain Model (entered in Step 1) as the DataService name (shown on line 6 below as NorthwindEntities), uncomment line 11 and add a * to show all entities. Optionally, if you want to take advantage of the Data Services 3.5 updates, add line 13 (and change IDataServiceConfiguration to DataServiceConfiguration in line 9).
    1: using System.Data.Services;
    2: using System.Data.Services.Common;
    3:
    4: namespace Telerik.RLINQ.Astoria.Web
    5: {
    6:     public class NorthwindService : DataService<NorthwindEntities>
    7:     {
    8:         //change the IDataServiceConfiguration to DataServiceConfiguration
    9:         public static void InitializeService(DataServiceConfiguration config)
    10:        {
    11:            config.SetEntitySetAccessRule("*", EntitySetRights.All);
    12:            //take advantage of the "Astoria 3.5 Update" features
    13:            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    14:        }
    15:    }
    16: }

Step 3 (~30 seconds): Adding the DataServiceKeys. You now have to tell your data service what the primary keys of each entity are. To do this you have to create a new code file and create a few partial classes. If you type fast, use copy and paste from your first entity, and use a refactoring productivity tool, you can add these 6-8 lines of code or so in about 30 seconds. This is the most tedious step, but don't worry, I've bribed some of the developers and our next update will eliminate this step completely. Just create a partial class for each entity you have mapped and add the attribute [DataServiceKey] on top of it along with the key's field name. If you have any complex properties, you will need to make them a primitive type, as I do in line 15. Create this as a separate file; don't manipulate the generated data access classes in case you want to regenerate them again later (even though that would be much faster).

    1: using System.Data.Services.Common;
    2:
    3: namespace Telerik.RLINQ.Astoria.Web
    4: {
    5:     [DataServiceKey("CustomerID")]
    6:     public partial class Customer
    7:     {
    8:     }
    9:
    10:    [DataServiceKey("OrderID")]
    11:    public partial class Order
    12:    {
    13:    }
    14:
    15:    [DataServiceKey(new string[] { "OrderID", "ProductID" })]
    16:    public partial class OrderDetail
    17:    {
    18:    }
    19:
    20: }

Done! Time to run the service. Now, let's run the service! Select the svc file, right-click and say View in Browser. You will see your OData service and can interact with it in the browser. Now that you have an OData service set up, you can consume it in one of the many ways that OData is consumed: using LINQ, the Silverlight OData client, Excel PowerPivot, PHP, etc. Happy Data Servicing!
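The post lists several ways to consume the feed; as one hedged illustration (not part of the original article), the snippet below reads the Customers feed as plain Atom with LINQ to XML. The service URL and the CompanyName property are assumptions based on the Northwind model built in Step 1.

    using System;
    using System.Linq;
    using System.Net;
    using System.Xml.Linq;

    class FeedReader
    {
        static void Main()
        {
            XNamespace atom = "http://www.w3.org/2005/Atom";
            XNamespace d = "http://schemas.microsoft.com/ado/2007/08/dataservices";
            XNamespace m = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";

            using (var client = new WebClient())
            {
                // Hypothetical local address; use the URL of your own .svc file.
                string url = "http://localhost/NorthwindService.svc/Customers";
                XDocument feed = XDocument.Parse(client.DownloadString(url));

                // Each Atom entry carries its columns under <m:properties>.
                var names = from entry in feed.Root.Elements(atom + "entry")
                            select (string)entry.Descendants(m + "properties")
                                                .Elements(d + "CompanyName")
                                                .FirstOrDefault();

                foreach (string name in names)
                    Console.WriteLine(name);
            }
        }
    }

With the WCF Data Services client library you could instead generate typed proxy classes with DataSvcUtil.exe and query the service with LINQ, which is the route the post alludes to.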
Technorati Tags: Telerik, Astoria, Data Services

    Read the article

  • SQL SERVER – Solution – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
    Earlier I asked a puzzle about why statistics are not updated. Read the complete details over here: Statistics are not Updated but are Created Once. In the question I demonstrated that even though statistics should have been updated after lots of inserts into the table, they were not. (Read the details: SQL SERVER – When are Statistics Updated – What triggers Statistics to Update.) In this example I created the following situation: create a table; insert 1,000 records; check the statistics; now insert 10 times more records (another 10,000); check the statistics: they will NOT be updated, even though Auto Update Statistics and Auto Create Statistics for the database are TRUE. I asked two things in the example: 1) Why is this happening? 2) How can this issue be fixed? I received many answers; here is how I fixed it, which resolved the issue for me. NOTE: There are multiple answers to this problem and I will do my best to list them all. Solution: create a nonclustered index on the column City. Here is the working example. Let us go through this script; an explanation is added at the end.

-- Execution Plans Difference
-- Estimated Execution Plan Vs Actual Execution Plan
-- Create Sample Database
CREATE DATABASE SampleDB
GO
USE SampleDB
GO
-- Create Table
CREATE TABLE ExecTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
GO
CREATE NONCLUSTERED INDEX IX_ExecTable1 ON ExecTable (City);
GO
-- Insert One Thousand Records
-- INSERT 1
INSERT INTO ExecTable (ID,FirstName,LastName,City)
SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
'Bob',
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
ELSE 'Houston' END
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Display statistics of the table
sp_helpstats N'ExecTable', 'ALL'
GO
-- Select Statement
SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York'
GO
-- Display statistics of the table
sp_helpstats N'ExecTable', 'ALL'
GO
-- Replace your Statistics over here
DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1);
GO
--------------------------------------------------------------
-- Round 2
-- Insert One Thousand Records
-- INSERT 2
INSERT INTO ExecTable (ID,FirstName,LastName,City)
SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
'Bob',
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
ELSE 'Houston' END
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Select Statement
SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York'
GO
-- Display statistics of the table
sp_helpstats N'ExecTable', 'ALL'
GO
-- Replace your Statistics over here
DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1);
GO
-- Clean up Database
DROP TABLE ExecTable
GO
When I created the nonclustered index on the column City, it also created statistics on the same column with the same name as the index. When we populate data in the column, the index is updated, which invalidates the execution plan; this leads to the statistics being updated on the next execution of the SELECT. This behavior does not happen on a heap, or on a column where the statistics were auto-created. If you explicitly update the index, you can often see that the statistics are updated as well. You can see that this is exactly what happens if you follow John Sansom's explanation. John Sansom's suggestion: That was fun! Although the column statistics are invalidated by the time the second select statement is executed, the query is not compiled/recompiled; instead the existing query plan is reused. It is the "next" compiled query against the column statistics that will see that they are out of date and will in turn trigger the update of the statistics. You can see this in action by forcing the second statement to recompile.

SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York' OPTION(RECOMPILE)
GO

Kevin Cross also has another suggestion: I agree with John. It is reusing the execution plan. Aside from OPTION(RECOMPILE), clearing the execution plan cache before the subsequent tests will also work, i.e., run this before round 2:

--------------------------------------------------------------
-- Clear execution plan cache before next test
DBCC FREEPROCCACHE WITH NO_INFOMSGS;
--------------------------------------------------------------

Nice puzzle! Kevin. As this was a puzzle, John and Kevin both got the correct answer; there was no condition that the answer had to follow best practices. I know John and he is one of the finest DBAs around; his tremendous knowledge has always impressed me. John and Kevin will both agree that clearing the cache with DBCC FREEPROCCACHE or recompiling each query every time is certainly not good advice on a production server. It is a correct answer but not a best practice. By the way, if you have a better solution or suggestion, please advise. I am open to changing my answer and publishing further improvements to this solution. On a very separate note, I like to have a clustered index on my primary key, which I have not mentioned here as it is out of the scope of this puzzle. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Index, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Statistics

    Read the article

  • How to build Open JavaFX for Android.

    - by PictureCo
    Here's a short recipe for baking JavaFX for Android dalvik. We will need just a few ingredients, but each one requires special care. So let's get down to business.

Sources
The first ingredient is an open JavaFX repository. This should be a piece of cake, but as always there's a catch. You probably know that dalvik is jdk6 compatible and also that certain APIs are missing compared to the good old Java VM from Oracle. Fortunately there is a repository which is a backport of regular OpenJFX to jdk7, and going from jdk7 to jdk6 is possible. The first thing to do is to clone or download the repository from https://bitbucket.org/narya/jfx78. The main page of the project says "It works in some cases", so we will presume that it will work in most cases. As I've said, the dalvik VM misses some APIs which would lead to build failures. To get them, use another compatibility repository which is available on GitHub: https://github.com/robovm/robovm-jfx78-compat. Download the zip and unzip the sources into jfx78/modules/base. We also need JavaFX binary stubs; use jfxrt.jar from jdk8. The last thing to download is the freetype source from http://freetype.org. It will be necessary for native font rendering.

Toolchain setup
I have to point out that these instructions were tested only on Linux. I suppose they will work with minimal changes on Mac OS as well. I also presume that you were able to build open JavaFX, which means all tools like ant, gradle, gcc and jdk8 have been installed and are working all right. In addition to this you will need to download and install jdk7, the Android SDK and the Android NDK for native code compilation. Installing all of them will take some time. Don't forget to put them in your path.

export ANDROID_SDK=/opt/android-sdk-linux
export ANDROID_NDK=/opt/android-ndk-r9b
export JAVA_HOME=/opt/jdk1.7.0
export PATH=$JAVA_HOME/bin:$ANDROID_SDK/tools:$ANDROID_SDK/platform-tools:$ANDROID_NDK

Freetype
Unzip the freetype release sources first. We will have to cross-compile them for ARM. First we will create a standalone toolchain for cross-compiling, installed in ~/work/ndk-standalone-19:

$ANDROID_NDK/build/tools/make-standalone-toolchain.sh --platform=android-19 --install-dir=~/work/ndk-standalone-19

After the standalone toolchain has been created, cross-compile freetype with the following script:

export TOOLCHAIN=~/work/freetype/ndk-standalone-19
export PATH=$TOOLCHAIN/bin:$PATH
export FREETYPE=`pwd`
./configure --host=arm-linux-androideabi --prefix=$FREETYPE/install --without-png --without-zlib --enable-shared
sed -i 's/\-version\-info \$(version_info)/-avoid-version/' builds/unix/unix-cc.mk
make
make install

It will compile and install the freetype library into $FREETYPE/install. We will link to this install dir later on. It would also be possible to link the OpenJFX font support dynamically against the skia library available on Android, which already contains freetype. That creates a smaller result but can have compatibility problems.

Patching
Download the patches javafx-android-compat.patch + android-tools.patch and patch the jfx78 repository. I recommend having a look at the patches. The first one, javafx-android-compat.patch, updates the OpenJFX build script, removes the dependency on SharedSecret classes and updates LensLogger to remove a dependency on the JDK-specific PlatformLogger. The second one, android-tools.patch, creates a helper script in android-tools. The script helps to set up JavaFX Android projects.

Building
Now it is time to try the build.
Run the following script:

JAVA_HOME=/opt/jdk1.7.0 JDK_HOME=/opt/jdk1.7.0 ANDROID_SDK=/opt/android-sdk-linux ANDROID_NDK=/opt/android-ndk-r9b PATH=$JAVA_HOME/bin:$ANDROID_SDK/tools:$ANDROID_SDK/platform-tools:$ANDROID_NDK:$PATH gradle -PDEBUG -PDALVIK_VM=true -PBINARY_STUB=~/work/binary_stub/linux/rt/lib/ext/jfxrt.jar \
-PFREETYPE_DIR=~/work/freetype/install -PCOMPILE_TARGETS=android

If everything went all right, the output is in build/android-sdk.

Create first JavaFX Android project
Use the gradle script in android-tools. The script sets up the project structure for you. The following command creates an Android HelloWorld project which links to a freshly built JavaFX runtime and to a HelloWorld application. NAME is the name of the Android project. DIR is where to create our first project. PACKAGE is the package name required by Android; it has nothing to do with the packaging of the JavaFX application. JFX_SDK points to our recently built runtime. JFX_APP points to the dist directory of the JavaFX application (where all the application jars sit). JFX_MAIN is the fully qualified name of the main class.

gradle -PDEBUG -PDIR=/home/user/work -PNAME=HelloWorld -PPACKAGE=com.helloworld \
-PJFX_SDK=/home/user/work/jfx78/build/android-sdk -PJFX_APP=/home/user/NetBeansProjects/HelloWorld/dist \
-PJFX_MAIN=com.helloworld.HelloWorld createProject

Now cd to the created project and use it like any other Android project; ant clean, debug, uninstall and installd will work. I haven't tried it from any IDE, Eclipse nor NetBeans. Special thanks to Stefan Fuchs and Daniel Zwolenski for the repositories used in this blog post.

    Read the article

  • Perl, LibXML and Schemas

    - by Xetius
    I have an example Perl script which I am trying to load and validate a file against a schema, them interrogate various nodes. #!/usr/bin/env perl use strict; use warnings; use XML::LibXML; my $filename = 'source.xml'; my $xml_schema = XML::LibXML::Schema->new(location=>'library.xsd'); my $parser = XML::LibXML->new (); my $doc = $parser->parse_file ($filename); eval { $xml_schema->validate ($doc); }; if ($@) { print "File failed validation: $@" if $@; } eval { print "Here\n"; foreach my $book ($doc->findnodes('/library/book')) { my $title = $book->findnodes('./title'); print $title->to_literal(), "\n"; } }; if ($@) { print "Problem parsing data : $@\n"; } Unfortunately, although it is validating the XML file fine, it is not finding any $book items and therefore not printing out anything. If I remove the schema from the XML file and the validation from the PL file then it works fine. I am using the default namespace. If I change it to not use the default namespace (xmlns:lib="http://libs.domain.com" and prefix all items in the XML file with lib and change the XPath expressions to include the namespace prefix (/lib:library/lib:book) then it again works file. Why? and what am I missing? XML: <?xml version="1.0" encoding="utf-8"?> <library xmlns="http://lib.domain.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://lib.domain.com .\library.xsd"> <book> <title>Perl Best Practices</title> <author>Damian Conway</author> <isbn>0596001738</isbn> <pages>542</pages> <image src="http://www.oreilly.com/catalog/covers/perlbp.s.gif" width="145" height="190"/> </book> <book> <title>Perl Cookbook, Second Edition</title> <author>Tom Christiansen</author> <author>Nathan Torkington</author> <isbn>0596003137</isbn> <pages>964</pages> <image src="http://www.oreilly.com/catalog/covers/perlckbk2.s.gif" width="145" height="190"/> </book> <book> <title>Guitar for Dummies</title> <author>Mark Phillips</author> <author>John Chappell</author> <isbn>076455106X</isbn> <pages>392</pages> <image src="http://media.wiley.com/product_data/coverImage/6X/07645510/076455106X.jpg" width="100" height="125"/> </book> </library> XSD: <?xml version="1.0" encoding="utf-8"?> <xs:schema xmlns="http://lib.domain.com" xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified" targetNamespace="http://lib.domain.com"> <xs:attributeGroup name="imagegroup"> <xs:attribute name="src" type="xs:string"/> <xs:attribute name="width" type="xs:integer"/> <xs:attribute name="height" type="xs:integer"/> </xs:attributeGroup> <xs:element name="library"> <xs:complexType> <xs:sequence> <xs:element maxOccurs="unbounded" name="book"> <xs:complexType> <xs:sequence> <xs:element name="title" type="xs:string"/> <xs:element maxOccurs="unbounded" name="author" type="xs:string"/> <xs:element name="isbn" type="xs:string"/> <xs:element name="pages" type="xs:integer"/> <xs:element name="image"> <xs:complexType> <xs:attributeGroup ref="imagegroup"/> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> </xs:schema>

    Read the article

  • Hibernate mapping one-to-many problem

    - by Xorty
    Hello, I am not very experienced with Hibernate and I am trying to create one-to-many mapping. Here are relevant tables: And here are my mapping files: <hibernate-mapping package="com.xorty.mailclient.server.domain"> <class name="Attachment" table="Attachment"> <id name="id"> <column name="idAttachment"></column> </id> <property name="filename"> <column name="name"></column> </property> <property name="blob"> <column name="file"></column> <type name="blob"></type> </property> <property name="mailId"> <column name="mail_idmail"></column> </property> </class> </hibernate-mapping> <hibernate-mapping> <class name="com.xorty.mailclient.server.domain.Mail" table="mail"> <id name="id" type="integer" column="idmail"></id> <property name="content"> <column name="body"></column> </property> <property name="ownerAddress"> <column name="account_address"></column> </property> <property name="title"> <column name="head"></column> </property> <set name="receivers" table="mail_has_contact" cascade="all"> <key column="mail_idmail"></key> <many-to-many column="contact_address" class="com.xorty.mailclient.client.domain.Contact"></many-to-many> </set> <list name="attachments" cascade="save-update, delete" inverse="true"> <key column="mail_idmail" not-null="true"/> <index column="fk_Attachment_mail1"></index> <one-to-many class="com.xorty.mailclient.server.domain.Attachment"/> </list> </class> </hibernate-mapping> In plain english, one mail has more attachments. When I try to do CRUD on mail without attachments, everyting works just fine. When I add some attachment to mail, I cannot perform any CRUD operation. I end up with following trace: org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:96) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:268) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:184) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1216) at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:383) at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:133) at domain.DatabaseTest.testPersistMailWithAttachment(DatabaseTest.java:355) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at junit.framework.TestCase.runTest(TestCase.java:168) at junit.framework.TestCase.runBare(TestCase.java:134) at junit.framework.TestResult$1.protect(TestResult.java:110) at junit.framework.TestResult.runProtected(TestResult.java:128) at junit.framework.TestResult.run(TestResult.java:113) at junit.framework.TestCase.run(TestCase.java:124) at junit.framework.TestSuite.runTest(TestSuite.java:232) at junit.framework.TestSuite.run(TestSuite.java:227) at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:49) at 
org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) Caused by: java.sql.BatchUpdateException: Cannot add or update a child row: a foreign key constraint fails (`maildb`.`attachment`, CONSTRAINT `fk_Attachment_mail1` FOREIGN KEY (`mail_idmail`) REFERENCES `mail` (`idmail`) ON DELETE NO ACTION ON UPDATE NO ACTION) at com.mysql.jdbc.PreparedStatement.executeBatchSerially(PreparedStatement.java:1666) at com.mysql.jdbc.PreparedStatement.executeBatch(PreparedStatement.java:1082) at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70) at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268) ... 27 more Thank you

    Read the article

  • Windows 8.1 Will Start Encrypting Hard Drives By Default: Everything You Need to Know

    - by Chris Hoffman
    Windows 8.1 will automatically encrypt the storage on modern Windows PCs. This will help protect your files in case someone steals your laptop and tries to get at them, but it has important ramifications for data recovery. Previously, “BitLocker” was available on Professional and Enterprise editions of Windows, while “Device Encryption” was available on Windows RT and Windows Phone. Device encryption is included with all editions of Windows 8.1 — and it’s on by default. When Your Hard Drive Will Be Encrypted Windows 8.1 includes “Pervasive Device Encryption.” This works a bit differently from the standard BitLocker feature that has been included in Professional, Enterprise, and Ultimate editions of Windows for the past few versions. Before Windows 8.1 automatically enables Device Encryption, the following must be true: The Windows device “must support connected standby and meet the Windows Hardware Certification Kit (HCK) requirements for TPM and SecureBoot on ConnectedStandby systems.”  (Source) Older Windows PCs won’t support this feature, while new Windows 8.1 devices you pick up will have this feature enabled by default. When Windows 8.1 installs cleanly and the computer is prepared, device encryption is “initialized” on the system drive and other internal drives. Windows uses a clear key at this point, which is removed later when the recovery key is successfully backed up. The PC’s user must log in with a Microsoft account with administrator privileges or join the PC to a domain. If a Microsoft account is used, a recovery key will be backed up to Microsoft’s servers and encryption will be enabled. If a domain account is used, a recovery key will be backed up to Active Directory Domain Services and encryption will be enabled. If you have an older Windows computer that you’ve upgraded to Windows 8.1, it may not support Device Encryption. If you log in with a local user account, Device Encryption won’t be enabled. If you upgrade your Windows 8 device to Windows 8.1, you’ll need to enable device encryption, as it’s off by default when upgrading. Recovering An Encrypted Hard Drive Device encryption means that a thief can’t just pick up your laptop, insert a Linux live CD or Windows installer disc, and boot the alternate operating system to view your files without knowing your Windows password. It means that no one can just pull the hard drive from your device, connect the hard drive to another computer, and view the files. We’ve previously explained that your Windows password doesn’t actually secure your files. With Windows 8.1, average Windows users will finally be protected with encryption by default. However, there’s a problem — if you forget your password and are unable to log in, you’d also be unable to recover your files. This is likely why encryption is only enabled when a user logs in with a Microsoft account (or connects to a domain). Microsoft holds a recovery key, so you can gain access to your files by going through a recovery process. As long as you’re able to authenticate using your Microsoft account credentials — for example, by receiving an SMS message on the cell phone number connected to your Microsoft account — you’ll be able to recover your encrypted data. With Windows 8.1, it’s more important than ever to configure your Microsoft account’s security settings and recovery methods so you’ll be able to recover your files if you ever get locked out of your Microsoft account. 
Microsoft does hold the recovery key and would be capable of providing it to law enforcement if it was requested, which is certainly a legitimate concern in the age of PRISM. However, this encryption still provides protection from thieves picking up your hard drive and digging through your personal or business files. If you’re worried about a government or a determined thief who’s capable of gaining access to your Microsoft account, you’ll want to encrypt your hard drive with software that doesn’t upload a copy of your recovery key to the Internet, such as TrueCrypt. How to Disable Device Encryption There should be no real reason to disable device encryption. If nothing else, it’s a useful feature that will hopefully protect sensitive data in the real world where people — and even businesses — don’t enable encryption on their own. As encryption is only enabled on devices with the appropriate hardware and will be enabled by default, Microsoft has hopefully ensured that users won’t see noticeable slow-downs in performance. Encryption adds some overhead, but the overhead can hopefully be handled by dedicated hardware. If you’d like to enable a different encryption solution or just disable encryption entirely, you can control this yourself. To do so, open the PC settings app — swipe in from the right edge of the screen or press Windows Key + C, click the Settings icon, and select Change PC settings. Navigate to PC and devices -> PC info. At the bottom of the PC info pane, you’ll see a Device Encryption section. Select Turn Off if you want to disable device encryption, or select Turn On if you want to enable it — users upgrading from Windows 8 will have to enable it manually in this way. Note that Device Encryption can’t be disabled on Windows RT devices, such as Microsoft’s Surface RT and Surface 2. If you don’t see the Device Encryption section in this window, you’re likely using an older device that doesn’t meet the requirements and thus doesn’t support Device Encryption. For example, our Windows 8.1 virtual machine doesn’t offer Device Encryption configuration options. This is the new normal for Windows PCs, tablets, and devices in general. Where files on typical PCs were once ripe for easy access by thieves, Windows PCs are now encrypted by default and recovery keys are sent to Microsoft’s servers for safe keeping. This last part may be a bit creepy, but it’s easy to imagine average users forgetting their passwords — they’d be very upset if they lost all their files because they had to reset their passwords. It’s also an improvement over Windows PCs being completely unprotected by default.     
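If you would rather check a machine's status from code than from the PC settings app, the protection state this article describes is also exposed through WMI. The following C# sketch is not from the article; it assumes administrator rights, a reference to System.Management, and the documented Win32_EncryptableVolume class, and it only reads the status rather than changing anything.

    using System;
    using System.Management;

    class EncryptionStatus
    {
        static void Main()
        {
            // BitLocker / device encryption data lives in this WMI namespace.
            var scope = new ManagementScope(@"\\.\root\CIMV2\Security\MicrosoftVolumeEncryption");
            var query = new ObjectQuery(
                "SELECT DriveLetter, ProtectionStatus FROM Win32_EncryptableVolume");

            using (var searcher = new ManagementObjectSearcher(scope, query))
            {
                foreach (ManagementObject volume in searcher.Get())
                {
                    // Documented values: 0 = unprotected, 1 = protected, 2 = unknown.
                    Console.WriteLine("{0} -> ProtectionStatus = {1}",
                        volume["DriveLetter"], volume["ProtectionStatus"]);
                }
            }
        }
    }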

    Read the article

  • Accessing SharePoint 2010 Data with REST/OData on Windows Phone 7

    - by Jan Tielens
    Consuming SharePoint 2010 data in Windows Phone 7 applications using the CTP version of the developer tools is quite a challenge. The issue is that the SharePoint 2010 data is not anonymously available; users need to authenticate to be able to access the data. When I first tried to access SharePoint 2010 data from my first Hello-World-type Windows Phone 7 application I thought "Hey, this should be easy!" because Windows Phone 7 development is based on Silverlight and SharePoint 2010 has a Client Object Model for Silverlight. Unfortunately you can't use the Client Object Model of SharePoint 2010 on the Windows Phone platform; there's a reference to an assembly that's not available (System.Windows.Browser). My second thought was "OK, no problem!" because SharePoint 2010 also exposes a REST/OData API to access SharePoint data. Using the REST API in SharePoint 2010 is as easy as making a web request for a URL (in which you specify the data you'd like to retrieve), e.g. http://yoursiteurl/_vti_bin/listdata.svc/Announcements. This is very easy to accomplish in a Silverlight application that's running in the context of a page in a SharePoint site, because the credentials of the currently logged-on user are automatically picked up and passed to the WCF service. But a Windows Phone application is of course running outside of the SharePoint site's page, so the application should build credentials that have to be passed to SharePoint's WCF service. This turns out to be a small challenge in Silverlight 3: the WebClient doesn't support authentication; there is a Credentials property, but when you set it and make the request you get a NotImplementedException. Probably this issue will be solved in the very near future, since Silverlight 4 does support authentication, and there's already a WCF Data Services download that uses this new platform feature of Silverlight 4. So when the Windows Phone platform switches to Silverlight 4, you can just use the WebClient to get the data. Even more, if the OData Client Library for Windows Phone 7 gets updated after that, things should get even easier! By the way: the things I'm writing in this paragraph are just assumptions that make a lot of sense IMHO; I don't have any info that all of this will happen, but I really hope so. So are SharePoint developers out of the Windows Phone development game until they get this fixed? Well, luckily not: when the HttpWebRequest class is used instead, you can pass credentials! Using the HttpWebRequest class is slightly more complex than using the WebClient class, but the end result is that you have access to your precious SharePoint 2010 data.
The following code snippet is getting all the announcements of an Annoucements list in a SharePoint site: HttpWebRequest webReq =     (HttpWebRequest)HttpWebRequest.Create("http://yoursite/_vti_bin/listdata.svc/Announcements");webReq.Credentials = new NetworkCredential("username", "password"); webReq.BeginGetResponse(    (result) => {        HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;         XDocument xdoc = XDocument.Load(            ((HttpWebResponse)asyncReq.EndGetResponse(result)).GetResponseStream());         XNamespace ns = "http://www.w3.org/2005/Atom";        var items = from item in xdoc.Root.Elements(ns + "entry")                    select new { Title = item.Element(ns + "title").Value };         this.Dispatcher.BeginInvoke(() =>        {            foreach (var item in items)                MessageBox.Show(item.Title);        });    }, webReq); When you try this in a Windows Phone 7 application, make sure you add a reference to the System.Xml.Linq assembly, because the code uses Linq to XML to parse the resulting Atom feed, so the Title of every announcement is being displayed in a MessageBox. Check out my previous post if you’d like to see a more polished sample Windows Phone 7 application that displays SharePoint 2010 data.When you plan to use this technique, it’s of course a good idea to encapsulate the code doing the request, so it becomes really easy to get the data that you need. In the following code snippet you can find the GetAtomFeed method that gets the contents of any Atom feed, even if you need to authenticate to get access to the feed. delegate void GetAtomFeedCallback(Stream responseStream); public MainPage(){    InitializeComponent();     SupportedOrientations = SupportedPageOrientation.Portrait |         SupportedPageOrientation.Landscape;     string url = "http://yoursite/_vti_bin/listdata.svc/Announcements";    string username = "username";    string password = "password";    string domain = "";     GetAtomFeed(url, username, password, domain, (s) =>    {        XNamespace ns = "http://www.w3.org/2005/Atom";        XDocument xdoc = XDocument.Load(s);         var items = from item in xdoc.Root.Elements(ns + "entry")                    select new { Title = item.Element(ns + "title").Value };         this.Dispatcher.BeginInvoke(() =>        {            foreach (var item in items)            {                MessageBox.Show(item.Title);            }        });    });} private static void GetAtomFeed(string url, string username,     string password, string domain, GetAtomFeedCallback cb){    HttpWebRequest webReq = (HttpWebRequest)HttpWebRequest.Create(url);    webReq.Credentials = new NetworkCredential(username, password, domain);     webReq.BeginGetResponse(        (result) =>        {            HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;            HttpWebResponse resp = (HttpWebResponse)asyncReq.EndGetResponse(result);            cb(resp.GetResponseStream());        }, webReq);}

    Read the article

  • C# Extension Methods - To Extend or Not To Extend...

    - by James Michael Hare
    I've been thinking a lot about extension methods lately, and I must admit I both love them and hate them. They are a lot like sugar: they taste so nice and sweet, but they'll rot your teeth if you eat them too much. I can't deny that they are useful and very handy. One of the major components of the Shared Component library where I work is a set of useful extension methods. But I also can't deny that they tend to be overused and abused to willy-nilly extend every living type. So what constitutes a good extension method? Obviously, you can write an extension method for nearly anything, whether it is a good idea or not. Many times, in fact, an idea seems like a good extension method but in retrospect really doesn't fit. So what's the litmus test? To me, an extension method should be like in the movies when a person runs into their twin, separated at birth. You just know you're related. Obviously, that's hard to quantify, so let's try to put a few rules-of-thumb around them. A good extension method should:

1. Apply to any possible instance of the type it extends.
2. Simplify logic and improve readability/maintainability.
3. Apply to the most specific type or interface applicable.
4. Be isolated in a namespace so that it does not pollute IntelliSense.

So let's look at a few examples in relation to these rules. The first rule, to me, is the most important of all. Once again, it bears repeating: a good extension method should apply to all possible instances of the type it extends. It should feel like the long-lost relative that should have been included in the original class but somehow was missing from the family tree. Take this nifty little int extension. I saw this once in a blog and at first I really thought it was pretty cool, but then I started noticing a code smell I couldn't quite put my finger on. So let's look:

    public static class IntExtensions
    {
        public static int Seconds(this int num)
        {
            return num * 1000;
        }

        public static int Minutes(this int num)
        {
            return num * 60000;
        }
    }

This is so you could do things like:

    ...
    Thread.Sleep(5.Seconds());
    ...
    proxy.Timeout = 1.Minutes();
    ...

Awww, you say, that's cute! Well, that's the problem: it's kitschy and it doesn't always apply (and incidentally you could achieve the same thing with TimeSpan.FromSeconds(5)). It's syntactical candy that looks cool, but tends to rot and pollute the code. It would allow things like:

    total += numberOfTodaysOrders.Seconds();

which makes no sense and should never be allowed. The problem is you're applying an extension method to a logical domain, not a type domain. That is, the extension method Seconds() doesn't really apply to ALL ints; it applies to ints that represent time that you want to convert to milliseconds. Do you see what I mean? The two problems, in a nutshell, are that a) Seconds() called off a non-time value makes no sense, and b) calling Seconds() off something to pass to something that does not take milliseconds will be off by a factor of 1000 or worse. Thus, in my mind, you should only ever have an extension method that applies to the whole domain of that type.
For example, this is one of my personal favorites:       public static bool IsBetween<T>(this T value, T low, T high)         where T : IComparable<T>     {         return value.CompareTo(low) >= 0 && value.CompareTo(high) <= 0;     }   This allows you to check if any IComparable<T> is within an upper and lower bound. Think of how many times you type something like:       if (response.Employee.Address.YearsAt >= 2         && response.Employee.Address.YearsAt <= 10)     {     ...     }     Now, you can instead type:       if(response.Employee.Address.YearsAt.IsBetween(2, 10))     {     ...     }     Note that this applies to all IComparable<T> -- that's ints, chars, strings, DateTime, etc -- and does not depend on any logical domain. In addition, it satisfies the second point and actually makes the code more readable and maintainable.   Let's look at the third point. In it we said that an extension method should fit the most specific interface or type possible. Now, I'm not saying if you have something that applies to enumerables, you create an extension for List, Array, Dictionary, etc (though you may have reasons for doing so), but that you should beware of making things TOO general.   For example, let's say we had an extension method like this:       public static T ConvertTo<T>(this object value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This lets you do more fluent conversions like:       double d = "5.0".ConvertTo<double>();     However, if you dig into Reflector (LOVE that tool) you will see that if the type you are calling on does not implement IConvertible, what you convert to MUST be the exact type or it will throw an InvalidCastException. Now this may or may not be what you want in this situation, and I leave that up to you. Things like this would fail:       object value = new Employee();     ...     // class cast exception because typeof(IEmployee) != typeof(Employee)     IEmployee emp = value.ConvertTo<IEmployee>();       Yes, that's a downfall of working with Convertible in general, but if you wanted your fluent interface to be more type-safe so that ConvertTo were only callable on IConvertibles (and let casting be a manual task), you could easily make it:         public static T ConvertTo<T>(this IConvertible value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This is what I mean by choosing the best type to extend. Consider that if we used the previous (object) version, every time we typed a dot ('.') on an instance we'd pull up ConvertTo() whether it was applicable or not. By filtering our extension method down to only valid types (those that implement IConvertible) we greatly reduce our IntelliSense pollution and apply a good level of compile-time correctness.   Now my fourth rule is just my general rule-of-thumb. Obviously, you can make extension methods as in-your-face as you want. I included all mine in my work libraries in its own sub-namespace, something akin to:       namespace Shared.Core.Extensions { ... }     This is in a library called Shared.Core, so just referencing the Core library doesn't pollute your IntelliSense, you have to actually do a using on Shared.Core.Extensions to bring the methods in. This is very similar to the way Microsoft puts its extension methods in System.Linq. This way, if you want 'em, you use the appropriate namespace. If you don't want 'em, they won't pollute your namespace.   
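As a quick usage sketch of the opt-in namespace idea (this consumer code is assumed, not taken from the original library):

using Shared.Core;            // utility classes only - no IntelliSense pollution
using Shared.Core.Extensions; // opt in to the extension methods

public class Example
{
    public void Demo(DateTime hireDate)
    {
        // IsBetween works for any IComparable<T>, DateTime included.
        if (hireDate.IsBetween(new DateTime(2008, 1, 1), new DateTime(2010, 12, 31)))
        {
            // string implements IConvertible, so the constrained ConvertTo is available here.
            double d = "5.0".ConvertTo<double>();
        }
    }
}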
To really make this work, however, that namespace should only include extension methods and subordinate types those extensions themselves may use. If you plant other useful classes in those namespaces, once a user includes it, they get all the extensions too.   Also, just as a personal preference, extension methods that aren't simply syntactical shortcuts, I like to put in a static utility class and then have extension methods for syntactical candy. For instance, I think it imaginable that any object could be converted to XML:       namespace Shared.Core     {         // A collection of XML Utility classes         public static class XmlUtility         {             ...             // Serialize an object into an xml string             public static string ToXml(object input)             {                 var xs = new XmlSerializer(input.GetType());                   // use new UTF8Encoding here, not Encoding.UTF8. The later includes                 // the BOM which screws up subsequent reads, the former does not.                 using (var memoryStream = new MemoryStream())                 using (var xmlTextWriter = new XmlTextWriter(memoryStream, new UTF8Encoding()))                 {                     xs.Serialize(xmlTextWriter, input);                     return Encoding.UTF8.GetString(memoryStream.ToArray());                 }             }             ...         }     }   I also wanted to be able to call this from an object like:       value.ToXml();     But here's the problem, if i made this an extension method from the start with that one little keyword "this", it would pop into IntelliSense for all objects which could be very polluting. Instead, I put the logic into a utility class so that users have the choice of whether or not they want to use it as just a class and not pollute IntelliSense, then in my extensions namespace, I add the syntactical candy:       namespace Shared.Core.Extensions     {         public static class XmlExtensions         {             public static string ToXml(this object value)             {                 return XmlUtility.ToXml(value);             }         }     }   So now it's the best of both worlds. On one hand, they can use the utility class if they don't want to pollute IntelliSense, and on the other hand they can include the Extensions namespace and use as an extension if they want. The neat thing is it also adheres to the Single Responsibility Principle. The XmlUtility is responsible for converting objects to XML, and the XmlExtensions is responsible for extending object's interface for ToXml().
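A short usage sketch under the same assumptions (the Employee type here is hypothetical):

public class Employee
{
    public string Name { get; set; }
    public int YearsAt { get; set; }
}

// Somewhere in consuming code:
var employee = new Employee { Name = "Kim", YearsAt = 4 };
string viaUtility = XmlUtility.ToXml(employee);   // explicit utility call, no extensions namespace needed
string viaExtension = employee.ToXml();           // fluent form, requires using Shared.Core.Extensions;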

    Read the article

  • How to rotate a set of points on z = 0 plane in 3-D, preserving pairwise distances?

    - by cagirici
    I have a set of points double n[] on the plane z = 0. And I have another set of points double[] m on the plane ax + by + cz + d = 0. Length of n is equal to length of m. Also, euclidean distance between n[i] and n[j] is equal to euclidean distance between m[i] and m[j]. I want to rotate n[] in 3-D, such that for all i, n[i] = m[i] would be true. In other words, I want to turn a plane into another plane, preserving the pairwise distances. Here's my code in java. But it does not help so much: double[] rotate(double[] point, double[] currentEquation, double[] targetEquation) { double[] currentNormal = new double[]{currentEquation[0], currentEquation[1], currentEquation[2]}; double[] targetNormal = new double[]{targetEquation[0], targetEquation[1], targetEquation[2]}; targetNormal = normalize(targetNormal); double angle = angleBetween(currentNormal, targetNormal); double[] axis = cross(targetNormal, currentNormal); double[][] R = getRotationMatrix(axis, angle); return rotated; } double[][] getRotationMatrix(double[] axis, double angle) { axis = normalize(axis); double cA = (float)Math.cos(angle); double sA = (float)Math.sin(angle); Matrix I = Matrix.identity(3, 3); Matrix a = new Matrix(axis, 3); Matrix aT = a.transpose(); Matrix a2 = a.times(aT); double[][] B = { {0, axis[2], -1*axis[1]}, {-1*axis[2], 0, axis[0]}, {axis[1], -1*axis[0], 0} }; Matrix A = new Matrix(B); Matrix R = I.minus(a2); R = R.times(cA); R = R.plus(a2); R = R.plus(A.times(sA)); return R.getArray(); } This is what I get. The point set on the right side is actually part of a point set on the left side. But they are on another plane. Here's a 2-D representation of what I try to do: There are two lines. The line on the bottom is the line I have. The line on the top is the target line. The distances are preserved (a, b and c). Edit: I have tried both methods written in answers. They both fail (I guess). 
Method of Martijn Courteaux public static double[][] getRotationMatrix(double[] v0, double[] v1, double[] v2, double[] u0, double[] u1, double[] u2) { RealMatrix M1 = new Array2DRowRealMatrix(new double[][]{ {1,0,0,-1*v0[0]}, {0,1,0,-1*v0[1]}, {0,0,1,0}, {0,0,0,1} }); RealMatrix M2 = new Array2DRowRealMatrix(new double[][]{ {1,0,0,-1*u0[0]}, {0,1,0,-1*u0[1]}, {0,0,1,-1*u0[2]}, {0,0,0,1} }); Vector3D imX = new Vector3D((v0[1] - v1[1])*(u2[0] - u0[0]) - (v0[1] - v2[1])*(u1[0] - u0[0]), (v0[1] - v1[1])*(u2[1] - u0[1]) - (v0[1] - v2[1])*(u1[1] - u0[1]), (v0[1] - v1[1])*(u2[2] - u0[2]) - (v0[1] - v2[1])*(u1[2] - u0[2]) ).scalarMultiply(1/((v0[0]*v1[1])-(v0[0]*v2[1])-(v1[0]*v0[1])+(v1[0]*v2[1])+(v2[0]*v0[1])-(v2[0]*v1[1]))); Vector3D imZ = new Vector3D(findEquation(u0, u1, u2)); Vector3D imY = Vector3D.crossProduct(imZ, imX); double[] imXn = imX.normalize().toArray(); double[] imYn = imY.normalize().toArray(); double[] imZn = imZ.normalize().toArray(); RealMatrix M = new Array2DRowRealMatrix(new double[][]{ {imXn[0], imXn[1], imXn[2], 0}, {imYn[0], imYn[1], imYn[2], 0}, {imZn[0], imZn[1], imZn[2], 0}, {0, 0, 0, 1} }); RealMatrix rotationMatrix = MatrixUtils.inverse(M2).multiply(M).multiply(M1); return rotationMatrix.getData(); } Method of Sam Hocevar static double[][] makeMatrix(double[] p1, double[] p2, double[] p3) { double[] v1 = normalize(difference(p2,p1)); double[] v2 = normalize(cross(difference(p3,p1), difference(p2,p1))); double[] v3 = cross(v1, v2); double[][] M = { { v1[0], v2[0], v3[0], p1[0] }, { v1[1], v2[1], v3[1], p1[1] }, { v1[2], v2[2], v3[2], p1[2] }, { 0.0, 0.0, 0.0, 1.0 } }; return M; } static double[][] createTransform(double[] A, double[] B, double[] C, double[] P, double[] Q, double[] R) { RealMatrix c = new Array2DRowRealMatrix(makeMatrix(A,B,C)); RealMatrix t = new Array2DRowRealMatrix(makeMatrix(P,Q,R)); return MatrixUtils.inverse(c).multiply(t).getData(); } The blue points are the calculated points. The black lines indicate the offset from the real position.

    Read the article

  • jQuery Toggle with Cookie

    - by Cameron
    I have the following toggle system, but I want it to remember what was open/closed using the jQuery cookie plugin. So for example if I open a toggle and then navigate away from the page, when I come back it should be still open. This is code I have so far, but it's becoming rather confusing, some help would be much appreciated thanks. jQuery.cookie = function (name, value, options) { if (typeof value != 'undefined') { options = options || {}; if (value === null) { value = ''; options = $.extend({}, options); options.expires = -1; } var expires = ''; if (options.expires && (typeof options.expires == 'number' || options.expires.toUTCString)) { var date; if (typeof options.expires == 'number') { date = new Date(); date.setTime(date.getTime() + (options.expires * 24 * 60 * 60 * 1000)); } else { date = options.expires; } expires = '; expires=' + date.toUTCString(); } var path = options.path ? '; path=' + (options.path) : ''; var domain = options.domain ? '; domain=' + (options.domain) : ''; var secure = options.secure ? '; secure' : ''; document.cookie = [name, '=', encodeURIComponent(value), expires, path, domain, secure].join(''); } else { var cookieValue = null; if (document.cookie && document.cookie != '') { var cookies = document.cookie.split(';'); for (var i = 0; i < cookies.length; i++) { var cookie = jQuery.trim(cookies[i]); if (cookie.substring(0, name.length + 1) == (name + '=')) { cookieValue = decodeURIComponent(cookie.substring(name.length + 1)); break; } } } return cookieValue; } }; // var showTop = $.cookie('showTop'); if ($.cookie('showTop') == 'collapsed') { $(".toggle_container").hide(); $(".trigger").toggle(function () { $(this).addClass("active"); }, function () { $(this).removeClass("active"); }); $(".trigger").click(function () { $(this).next(".toggle_container").slideToggle("slow,"); }); } else { $(".toggle_container").show(); $(".trigger").toggle(function () { $(this).addClass("active"); }, function () { $(this).removeClass("active"); }); $(".trigger").click(function () { $(this).next(".toggle_container").slideToggle("slow,"); }); }; $(".trigger").click(function () { if ($(".toggle_container").is(":hidden")) { $(this).next(".toggle_container").slideToggle("slow,"); $.cookie('showTop', 'expanded'); } else { $(this).next(".toggle_container").slideToggle("slow,"); $.cookie('showTop', 'collapsed'); } return false; }); and this is a snippet of the HTML it works with: <li> <label for="small"><input type="checkbox" id="small" /> Small</label> <a class="trigger" href="#">Toggle</a> <div class="toggle_container"> <p class="funding"><strong>Funding</strong></p> <ul class="childs"> <li class="child"> <label for="fully-funded1"><input type="checkbox" id="fully-funded1" /> Fully Funded</label> <a class="trigger" href="#">Toggle</a> <div class="toggle_container"> <p class="days"><strong>Days</strong></p> <ul class="days clearfix"> <li><label for="1pre16">Pre 16</label> <input type="text" id="1pre16" /></li> <li><label for="2post16">Post 16</label> <input type="text" id="2post16" /></li> <li><label for="3teacher">Teacher</label> <input type="text" id="3teacher" /></li> </ul> </div> </li>

    Read the article

  • Set Context User Principal for Customized Authentication in SignalR

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/27/set-context-user-principal-for-customized-authentication-in-signalr.aspx

Currently I'm working on a single page application project which is built on AngularJS and ASP.NET WebAPI. When I needed to implement some features that need real-time communication and push notifications from the server side, I decided to use SignalR. SignalR is a project currently developed by Microsoft to build web-based, real-time communication applications. You can find it here. With a lot of introductions and guides it's not a difficult task to use SignalR with ASP.NET WebAPI and AngularJS. I followed this and this even though it's based on SignalR 1. But when I tried to implement the authentication for my SignalR I struggled for 2 days and finally came up with a solution by myself. This might not be the best one but it actually solved all my problems.   In many articles it's said that you don't need to worry about the authentication of SignalR since it uses the web application authentication. For example, if your web application utilizes forms authentication, SignalR will use the user principal your web application authentication module resolved, and check if the principal exists and is authenticated. But in my solution my ASP.NET WebAPI, which is hosting SignalR as well, utilizes OAuth Bearer authentication. So when the SignalR connection was established the context user principal was empty. So I need to authenticate and pass the principal by myself.   First I need to create a class derived from "AuthorizeAttribute" that will be responsible for authentication when a SignalR connection is established and when any hub method is invoked.

1: public class QueryStringBearerAuthorizeAttribute : AuthorizeAttribute
2: {
3:     public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
4:     {
5:     }
6:
7:     public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
8:     {
9:     }
10: }

The method "AuthorizeHubConnection" will be invoked when any SignalR connection is established. Here I'm going to retrieve the Bearer token from the query string, try to decrypt it and recover the login user's claims.

1: public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
2: {
3:     var dataProtectionProvider = new DpapiDataProtectionProvider();
4:     var secureDataFormat = new TicketDataFormat(dataProtectionProvider.Create());
5:     // authenticate by using bearer token in query string
6:     var token = request.QueryString.Get(WebApiConfig.AuthenticationType);
7:     var ticket = secureDataFormat.Unprotect(token);
8:     if (ticket != null && ticket.Identity != null && ticket.Identity.IsAuthenticated)
9:     {
10:         // set the authenticated user principal into environment so that it can be used in the future
11:         request.Environment["server.User"] = new ClaimsPrincipal(ticket.Identity);
12:         return true;
13:     }
14:     else
15:     {
16:         return false;
17:     }
18: }

In the code above I created a "TicketDataFormat" instance, which must be the same as the one I used to generate the Bearer token when the user logged in. Then I retrieve the token from the request query string and unprotect it. If I get a valid ticket with an identity that is authenticated, this means it's a valid token. Then I pass the user principal into the request's environment property so that it can be used in the near future.
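For reference, here is a minimal sketch (not from the original post) of how the matching Bearer token could be issued at login with the same "TicketDataFormat". The "Domain" claim and the use of WebApiConfig.AuthenticationType are assumptions that have to match whatever the Web API login endpoint actually does.

// Hedged sketch of the login side, somewhere in the Web API authentication code:
var identity = new ClaimsIdentity(WebApiConfig.AuthenticationType);
identity.AddClaim(new Claim(ClaimTypes.Name, userName));
identity.AddClaim(new Claim("Domain", domain));   // hypothetical claim, read later in the hub

var dataProtectionProvider = new DpapiDataProtectionProvider();
var secureDataFormat = new TicketDataFormat(dataProtectionProvider.Create());

var ticket = new AuthenticationTicket(identity, new AuthenticationProperties());
string bearerToken = secureDataFormat.Protect(ticket);   // returned to the client and later sent as ?Bearer=...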
Since my website is built on AngularJS, the SignalR client is pure JavaScript, and since the SignalR JavaScript client does not support setting customized HTTP headers, I have to pass the Bearer token through the request query string. This is not a restriction of SignalR, but a restriction of WebSocket. For security reasons WebSocket doesn't allow the client to set customized HTTP headers from the browser. Next, I need to implement the authentication logic in the method "AuthorizeHubMethodInvocation", which will be invoked when any SignalR hub method is invoked.

1: public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
2: {
3:     var connectionId = hubIncomingInvokerContext.Hub.Context.ConnectionId;
4:     // check the authenticated user principal from environment
5:     var environment = hubIncomingInvokerContext.Hub.Context.Request.Environment;
6:     var principal = environment["server.User"] as ClaimsPrincipal;
7:     if (principal != null && principal.Identity != null && principal.Identity.IsAuthenticated)
8:     {
9:         // create a new HubCallerContext instance with the principal generated from token
10:         // and replace the current context so that in hubs we can retrieve current user identity
11:         hubIncomingInvokerContext.Hub.Context = new HubCallerContext(new ServerRequest(environment), connectionId);
12:         return true;
13:     }
14:     else
15:     {
16:         return false;
17:     }
18: }

Since I passed the user principal into the request environment in the previous method, I can simply check whether it exists and is valid. If so, all I need to do is pass the principal into the context so that the SignalR hub can use it. Since the "User" property is read-only in "hubIncomingInvokerContext", I have to create a new "ServerRequest" instance with the principal assigned, and set it to "hubIncomingInvokerContext.Hub.Context". After that, we can retrieve the principal in my hubs through "Context.User" as below.

1: public class DefaultHub : Hub
2: {
3:     public object Initialize(string host, string service, JObject payload)
4:     {
5:         var connectionId = Context.ConnectionId;
6:         ... ...
7:         var domain = string.Empty;
8:         var identity = Context.User.Identity as ClaimsIdentity;
9:         if (identity != null)
10:         {
11:             var claim = identity.FindFirst("Domain");
12:             if (claim != null)
13:             {
14:                 domain = claim.Value;
15:             }
16:         }
17:         ... ...
18:     }
19: }

Finally I just need to add my "QueryStringBearerAuthorizeAttribute" into the SignalR pipeline.

1: app.Map("/signalr", map =>
2: {
3:     // Setup the CORS middleware to run before SignalR.
4:     // By default this will allow all origins. You can
5:     // configure the set of origins and/or http verbs by
6:     // providing a cors options with a different policy.
7:     map.UseCors(CorsOptions.AllowAll);
8:     var hubConfiguration = new HubConfiguration
9:     {
10:         // You can enable JSONP by uncommenting line below.
11:         // JSONP requests are insecure but some older browsers (and some
12:         // versions of IE) require JSONP to work cross domain
13:         // EnableJSONP = true
14:         EnableJavaScriptProxies = false
15:     };
16:     // Require authentication for all hubs
17:     var authorizer = new QueryStringBearerAuthorizeAttribute();
18:     var module = new AuthorizeModule(authorizer, authorizer);
19:     GlobalHost.HubPipeline.AddModule(module);
20:     // Run the SignalR pipeline. We're not using MapSignalR
21:     // since this branch already runs under the "/signalr" path.
22:     map.RunSignalR(hubConfiguration);
23: });

On the client side I should pass the Bearer token through the query string before starting the connection, as below.
1: self.connection = $.hubConnection(signalrEndpoint);
2: self.proxy = self.connection.createHubProxy(hubName);
3: self.proxy.on(notifyEventName, function (event, payload) {
4:     options.handler(event, payload);
5: });
6: // add the authentication token to query string
7: // we cannot use http headers since web socket protocol doesn't support
8: self.connection.qs = { Bearer: AuthService.getToken() };
9: // connection to hub
10: self.connection.start();

Hope this helps, Shaun. All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Reference Data Management and Master Data: Are They Related?

    - by Mala Narasimharajan
    Submitted By: Rahul Kamath

Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise Master Data Management (MDM) solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering chart of accounts to enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or restructuring sales territories to enable equitable distribution of leads to sales teams following the acquisition of new products, or adding additional cost centers to enable fine grain control over expenses. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

What is reference data? How does it relate to Master Data?

Reference data is a close cousin of master data. While master data is challenged with problems of unique identification, may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler, more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and gives them contextual value. In fact, the creation of a new master data element may require new reference data to be created. For example, when a European company acquires a US business, chances are that they will now need to adapt their product line taxonomy to include a new category to describe the newly acquired US product line. Further, the cross-border transaction will also result in a revised geo hierarchy. The addition of new products represents changes to master data while changes to product categories and geo hierarchy are examples of reference data changes [1]. The following is an illustrative list of examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships & cross-domain mappings, or standards.

Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status, etc.); Status Codes; Role Codes; Domain Values.
Taxonomies: Industry Classification Categories and Codes, e.g., North America Industry Classification System (NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss.
Relationships / Mappings: Product / Segment; Product / Geo; City → State → Postal Codes; Customer / Market Segment; Business Unit / Channel; Country Codes / Currency Codes / Financial Accounts; International Classification of Diseases (ICD) mappings, e.g., ICD9 → ICD10.
Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time, Time Zones (e.g., ISO 8601); Tax Rates.

Why manage reference data?

Reference data carries contextual value and meaning and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.

Sample Use Cases of Reference Data Management

Healthcare: Diagnostic Codes

The reference data challenges in the healthcare industry offer a case in point.
Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since both code sets ICD-9 and ICD-10 offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders much in the same way as does master data management. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to “cross-walk” mappings -- either forward to ICD-10 or backwards to ICD-9 depending upon the reporting time horizon. Spend Management: Product, Service & Supplier Codes Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager’s time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value added tasks. Change Management: Point of Sales Transaction Codes and Product Codes In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released at a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes with the appropriate product and GL codes to comply with the changing regulations. Multi-National Companies: Industry Classification Schemes As companies grow and expand across geographies, a typical challenge they encounter with reference data represents reconciling various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assess which applications were impacted by a given change. References 1 Master Data versus Reference Data, Malcolm Chisholm, April 1, 2006.

    Read the article

  • The Sensemaking Spectrum for Business Analytics: Translating from Data to Business Through Analysis

    - by Joe Lamantia
    One of the most compelling outcomes of our strategic research efforts over the past several years is a growing vocabulary that articulates our cumulative understanding of the deep structure of the domains of discovery and business analytics. Modes are one example of the deep structure we’ve found.  After looking at discovery activities across a very wide range of industries, question types, business needs, and problem solving approaches, we've identified distinct and recurring kinds of sensemaking activity, independent of context.  We label these activities Modes: Explore, compare, and comprehend are three of the nine recognizable modes.  Modes describe *how* people go about realizing insights.  (Read more about the programmatic research and formal academic grounding and discussion of the modes here: https://www.researchgate.net/publication/235971352_A_Taxonomy_of_Enterprise_Search_and_Discovery) By analogy to languages, modes are the 'verbs' of discovery activity.  When applied to the practical questions of product strategy and development, the modes of discovery allow one to identify what kinds of analytical activity a product, platform, or solution needs to support across a spread of usage scenarios, and then make concrete and well-informed decisions about every aspect of the solution, from high-level capabilities, to which specific types of information visualizations better enable these scenarios for the types of data users will analyze. The modes are a powerful generative tool for product making, but if you've spent time with young children, or had a really bad hangover (or both at the same time...), you understand the difficult of communicating using only verbs.  So I'm happy to share that we've found traction on another facet of the deep structure of discovery and business analytics.  Continuing the language analogy, we've identified some of the ‘nouns’ in the language of discovery: specifically, the consistently recurring aspects of a business that people are looking for insight into.  We call these discovery Subjects, since they identify *what* people focus on during discovery efforts, rather than *how* they go about discovery as with the Modes. Defining the collection of Subjects people repeatedly focus on allows us to understand and articulate sense making needs and activity in more specific, consistent, and complete fashion.  In combination with the Modes, we can use Subjects to concretely identify and define scenarios that describe people’s analytical needs and goals.  For example, a scenario such as ‘Explore [a Mode] the attrition rates [a Measure, one type of Subject] of our largest customers [Entities, another type of Subject] clearly captures the nature of the activity — exploration of trends vs. deep analysis of underlying factors — and the central focus — attrition rates for customers above a certain set of size criteria — from which follow many of the specifics needed to address this scenario in terms of data, analytical tools, and methods. We can also use Subjects to translate effectively between the different perspectives that shape discovery efforts, reducing ambiguity and increasing impact on both sides the perspective divide.  For example, from the language of business, which often motivates analytical work by asking questions in business terms, to the perspective of analysis.  
The question posed to a Data Scientist or analyst may be something like “Why are sales of our new kinds of potato chips to our largest customers fluctuating unexpectedly this year?” or “Where can innovate, by expanding our product portfolio to meet unmet needs?”.  Analysts translate questions and beliefs like these into one or more empirical discovery efforts that more formally and granularly indicate the plan, methods, tools, and desired outcomes of analysis.  From the perspective of analysis this second question might become, “Which customer needs of type ‘A', identified and measured in terms of ‘B’, that are not directly or indirectly addressed by any of our current products, offer 'X' potential for ‘Y' positive return on the investment ‘Z' required to launch a new offering, in time frame ‘W’?  And how do these compare to each other?”.  Translation also happens from the perspective of analysis to the perspective of data; in terms of availability, quality, completeness, format, volume, etc. By implication, we are proposing that most working organizations — small and large, for profit and non-profit, domestic and international, and in the majority of industries — can be described for analytical purposes using this collection of Subjects.  This is a bold claim, but simplified articulation of complexity is one of the primary goals of sensemaking frameworks such as this one.  (And, yes, this is in fact a framework for making sense of sensemaking as a category of activity - but we’re not considering the recursive aspects of this exercise at the moment.) Compellingly, we can place the collection of subjects on a single continuum — we call it the Sensemaking Spectrum — that simply and coherently illustrates some of the most important relationships between the different types of Subjects, and also illuminates several of the fundamental dynamics shaping business analytics as a domain.  As a corollary, the Sensemaking Spectrum also suggests innovation opportunities for products and services related to business analytics. The first illustration below shows Subjects arrayed along the Sensemaking Spectrum; the second illustration presents examples of each kind of Subject.  Subjects appear in colors ranging from blue to reddish-orange, reflecting their place along the Spectrum, which indicates whether a Subject addresses more the viewpoint of systems and data (Data centric and blue), or people (User centric and orange).  This axis is shown explicitly above the Spectrum.  Annotations suggest how Subjects align with the three significant perspectives of Data, Analysis, and Business that shape business analytics activity.  This rendering makes explicit the translation and bridging function of Analysts as a role, and analysis as an activity. Subjects are best understood as fuzzy categories [http://georgelakoff.files.wordpress.com/2011/01/hedges-a-study-in-meaning-criteria-and-the-logic-of-fuzzy-concepts-journal-of-philosophical-logic-2-lakoff-19731.pdf], rather than tightly defined buckets.  For each Subject, we suggest some of the most common examples: Entities may be physical things such as named products, or locations (a building, or a city); they could be Concepts, such as satisfaction; or they could be Relationships between entities, such as the variety of possible connections that define linkage in social networks.  
Likewise, Events may indicate a time and place in the dictionary sense; or they may be Transactions involving named entities; or take the form of Signals, such as ‘some Measure had some value at some time’ - what many enterprises understand as alerts.   The central story of the Spectrum is that though consumers of analytical insights (represented here by the Business perspective) need to work in terms of Subjects that are directly meaningful to their perspective — such as Themes, Plans, and Goals — the working realities of data (condition, structure, availability, completeness, cost) and the changing nature of most discovery efforts make direct engagement with source data in this fashion impossible.  Accordingly, business analytics as a domain is structured around the fundamental assumption that sense making depends on analytical transformation of data.  Analytical activity incrementally synthesizes more complex and larger scope Subjects from data in its starting condition, accumulating insight (and value) by moving through a progression of stages in which increasingly meaningful Subjects are iteratively synthesized from the data, and recombined with other Subjects.  The end goal of  ‘laddering’ successive transformations is to enable sense making from the business perspective, rather than the analytical perspective.Synthesis through laddering is typically accomplished by specialized Analysts using dedicated tools and methods. Beginning with some motivating question such as seeking opportunities to increase the efficiency (a Theme) of fulfillment processes to reach some level of profitability by the end of the year (Plan), Analysts will iteratively wrangle and transform source data Records, Values and Attributes into recognizable Entities, such as Products, that can be combined with Measures or other data into the Events (shipment of orders) that indicate the workings of the business.  More complex Subjects (to the right of the Spectrum) are composed of or make reference to less complex Subjects: a business Process such as Fulfillment will include Activities such as confirming, packing, and then shipping orders.  These Activities occur within or are conducted by organizational units such as teams of staff or partner firms (Networks), composed of Entities which are structured via Relationships, such as supplier and buyer.  The fulfillment process will involve other types of Entities, such as the products or services the business provides.  The success of the fulfillment process overall may be judged according to a sophisticated operating efficiency Model, which includes tiered Measures of business activity and health for the transactions and activities included.  All of this may be interpreted through an understanding of the operational domain of the businesses supply chain (a Domain).   We'll discuss the Spectrum in more depth in succeeding posts.

    Read the article

  • Why does Google mark one e-mail as spam but not the other?

    - by nKn
    I've a Postfix installation which works fine, I don't get any trouble with mails sent through a mail client (in my case, Thunderbird or RoundCube) when the To: address is a GMail account. However, I recently needed to use the PHPMailer tool to send some e-mails to some GMail accounts, so I configured an account to be used via SASL authentication + TLS. I don't mean mass mailing, just 2-3 mails. If I send the e-mail from the Thunderbird or RoundCube clients, the mail is not marked as spam. However, if I use PHPMailer, it always gets catalogued as spam. So I compared both headers and I just can't find the reason why the second is marked as spam while the first one is just ok. The first header sent from a mail client which is not marked as spam: Delivered-To: [email protected] Received: by 10.76.153.102 with SMTP id vf6csp230573oab; Tue, 19 Aug 2014 11:08:19 -0700 (PDT) X-Received: by 10.60.23.39 with SMTP id j7mr45544050oef.20.1408471699715; Tue, 19 Aug 2014 11:08:19 -0700 (PDT) Return-Path: <[email protected]> Received: from mail.mydomain.com (X.ip-92-222-X.eu. [92.222.X.X]) by mx.google.com with ESMTPS id t5si27115082oej.10.2014.08.19.11.08.18 for <[email protected]> (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 19 Aug 2014 11:08:19 -0700 (PDT) Received-SPF: pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) client-ip=92.222.X.X; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) [email protected]; dkim=pass (test mode) [email protected] Received: by mail.mydomain.com (Postfix, from userid 111) id D8F69120293D; Tue, 19 Aug 2014 19:08:17 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471697; bh=wKMX9gkQ7tCLv8ezrG5t4bICm/SSLQsNfTdZMToksWw=; h=Date:From:To:Subject:From; b=qRNcYVdmk+n3D1uuv0FInTx7/LzH2ojck9DgCmabFPvfke233lkojUOjezCUGx7iV DL8EayZ28mzzzHpB7ETeMzop/5OS3BmvFtGKVD9gzc78cDIFXTDoRFAnkRWDR2IOxI SOn5tiyODTFpkbDgJOndzQ6qL5K0S9ASNGCZrNL4= X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on vpsX.ovh.net X-Spam-Level: X-Spam-Status: No, score=-1.0 required=3.0 tests=ALL_TRUSTED,T_DKIM_INVALID autolearn=ham autolearn_force=no version=3.4.0 Received: from [192.168.1.111] (unknown [77.231.X.X]) (using TLSv1 with cipher ECDHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) (Authenticated sender: [email protected]) by mail.mydomain.com (Postfix) with ESMTPSA id 910341202624 for <[email protected]>; Tue, 19 Aug 2014 19:08:17 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471697; bh=wKMX9gkQ7tCLv8ezrG5t4bICm/SSLQsNfTdZMToksWw=; h=Date:From:To:Subject:From; b=qRNcYVdmk+n3D1uuv0FInTx7/LzH2ojck9DgCmabFPvfke233lkojUOjezCUGx7iV DL8EayZ28mzzzHpB7ETeMzop/5OS3BmvFtGKVD9gzc78cDIFXTDoRFAnkRWDR2IOxI SOn5tiyODTFpkbDgJOndzQ6qL5K0S9ASNGCZrNL4= Message-ID: <[email protected]> Date: Tue, 19 Aug 2014 19:08:24 +0100 From: My Name <[email protected]> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.6.0 MIME-Version: 1.0 To: My other account <[email protected]> Subject: . Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit . 
The second header sent from PHPMailer which is always marked as spam: Delivered-To: [email protected] Received: by 10.76.153.102 with SMTP id vf6csp230832oab; Tue, 19 Aug 2014 11:12:10 -0700 (PDT) X-Received: by 10.60.121.67 with SMTP id li3mr44086252oeb.17.1408471930520; Tue, 19 Aug 2014 11:12:10 -0700 (PDT) Return-Path: <[email protected]> Received: from mail.mydomain.com (X.ip-92-222-X.eu. [92.222.X.X]) by mx.google.com with ESMTPS id w8si27103806obn.30.2014.08.19.11.12.10 for <[email protected]> (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 19 Aug 2014 11:12:10 -0700 (PDT) Received-SPF: pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) client-ip=92.222.X.X; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) [email protected]; dkim=pass (test mode) [email protected] Received: by mail.mydomain.com (Postfix, from userid 111) id 1999D120293D; Tue, 19 Aug 2014 19:12:09 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471929; bh=N1JuHq1S+8GrjHcEK3xn8P1JS+ygEBv5LKe0BiXuVJo=; h=Date:To:From:Reply-to:Subject:From; b=K7tcPyArzSTY91VEw6mAAFtDurSGwgTLGkfUZdC5mqsg0g/1LzmZkgwdjj4NdJa6M E2kDz3dwYN8FcZmbampJYFXxj4NQVtSnzjiWV40rpfOFqD2rXDGNIyB2QOjBZZ4WK3 7s4lyoJ/BrdQH4en8ctLVsDHed/KpHD4iGFEl67E= X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on vpsX.ovh.net X-Spam-Level: X-Spam-Status: No, score=-1.0 required=3.0 tests=ALL_TRUSTED,T_DKIM_INVALID autolearn=ham autolearn_force=no version=3.4.0 Received: from rpi.mydomain.com (unknown [77.231.X.X]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: [email protected]) by mail.mydomain.com (Postfix) with ESMTPSA id B42AF1202624 for <[email protected]>; Tue, 19 Aug 2014 19:12:08 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471928; bh=N1JuHq1S+8GrjHcEK3xn8P1JS+ygEBv5LKe0BiXuVJo=; h=Date:To:From:Reply-to:Subject:From; b=iXPM0tS36swudPTT4FOHHtPi5Ll6LbR60kNqCinZ8utcWoFE31SFTpoMEq5aCM5ux wQMdFiN8c6vkjRGabmvqFTTIbwJsrToHo/4+Lt5HEBoQQE2Y3T+xGmnmGAHCS6stKB yb7SVmtrIAsVtSMKA8VYIbmu2oYqV3afYt7g0OMQ= Date: Tue, 19 Aug 2014 20:12:07 +0200 To: [email protected] From: Trying another account <[email protected]> Reply-to: Trying another account <[email protected]> Subject: . Message-ID: <[email protected]> X-Priority: 3 X-Mailer: PHPMailer 5.1 (phpmailer.sourceforge.net) MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Content-Type: text/plain; charset="UTF-8" . I also tried: Adding a User-Agent header to match the first one. Removing the X-Mailer header. No one of them made a difference. Is there some significant difference which is making the second e-mail to be marked as spam by Google?

    Read the article

  • REST to Objects in C#

    RESTful interfaces for web services are all the rage for many Web 2.0 sites. If you want to consume these in a very simple fashion, LINQ to XML can do the job pretty easily in C#. If you go searching for help on this, you'll find a lot of incomplete solutions and fairly large toolkits and frameworks (guess how I know this); this quick article is meant to be a no-fluff, just-stuff approach to making this work.

POCO Objects

Let's assume you have a Model that you want to suck data into from a RESTful web service. Ideally this is a Plain Old CLR Object, meaning it isn't infected with any persistence or serialization goop. It might look something like this:

public class Entry
{
    public int Id;
    public int UserId;
    public DateTime Date;
    public float Hours;
    public string Notes;
    public bool Billable;

    public override string ToString()
    {
        return String.Format("[{0}] User: {1} Date: {2} Hours: {3} Notes: {4} Billable {5}",
            Id, UserId, Date, Hours, Notes, Billable);
    }
}

Note that this isn't a completely trivial object. Let's look at the API for the service.

RESTful HTTP Service

In this case, it's TickSpot's API, with the following sample output:

<?xml version="1.0" encoding="UTF-8"?>
<entries type="array">
  <entry>
    <id type="integer">24</id>
    <task_id type="integer">14</task_id>
    <user_id type="integer">3</user_id>
    <date type="date">2008-03-08</date>
    <hours type="float">1.00</hours>
    <notes>Had trouble with tribbles.</notes>
    <billable>true</billable> # Billable is an attribute inherited from the task
    <billed>true</billed> # Billed is an attribute to track whether the entry has been invoiced
    <created_at type="datetime">Tue, 07 Oct 2008 14:46:16 -0400</created_at>
    <updated_at type="datetime">Tue, 07 Oct 2008 14:46:16 -0400</updated_at>
    # The following attributes are derived and provided for informational purposes:
    <user_email>[email protected]</user_email>
    <task_name>Remove converter assembly</task_name>
    <sum_hours type="float">2.00</sum_hours>
    <budget type="float">10.00</budget>
    <project_name>Realign dilithium crystals</project_name>
    <client_name>Starfleet Command</client_name>
  </entry>
</entries>

I'm assuming in this case that I don't necessarily care about all of the data fields the service is returning; I just need some of them for my application's purposes. Thus, you can see there are more elements in the <entry> XML than I have in my Entry class.

Get The XML with C#

The next step is to get the XML. The following snippet does the heavy lifting once you pass it the appropriate URL:

protected XElement GetResponse(string uri)
{
    var request = WebRequest.Create(uri) as HttpWebRequest;
    request.UserAgent = ".NET Sample";
    request.KeepAlive = false;

    request.Timeout = 15 * 1000;

    var response = request.GetResponse() as HttpWebResponse;

    if (request.HaveResponse == true && response != null)
    {
        var reader = new StreamReader(response.GetResponseStream());
        return XElement.Parse(reader.ReadToEnd());
    }
    throw new Exception("Error fetching data.");
}

This is adapted from the Yahoo Developer article on Web Service REST calls. Once you have the XML, the last step is to get the data back as your POCO.
Use LINQ-To-XML to Deserialize POCOs from XML

This is done via the following code:

public IEnumerable<Entry> List(DateTime startDate, DateTime endDate)
{
    string additionalParameters = String.Format("start_date={0}&end_date={1}",
        startDate.ToShortDateString(), endDate.ToShortDateString());
    string uri = BuildUrl("entries", additionalParameters);

    XElement elements = GetResponse(uri);

    var entries = from e in elements.Elements()
                  where e.Name.LocalName == "entry"
                  select new Entry
                  {
                      Id = int.Parse(e.Element("id").Value),
                      UserId = int.Parse(e.Element("user_id").Value),
                      Date = DateTime.Parse(e.Element("date").Value),
                      Hours = float.Parse(e.Element("hours").Value),
                      Notes = e.Element("notes").Value,
                      Billable = bool.Parse(e.Element("billable").Value)
                  };
    return entries;
}

For completeness, here's the BuildUrl method for my TickSpot API wrapper:

// Change these to your settings
protected const string projectDomain = "DOMAIN.tickspot.com";
private const string authParams = "[email protected]&password=MyTickSpotPassword";

protected string BuildUrl(string apiMethod, string additionalParams)
{
    if (projectDomain.Contains("DOMAIN"))
    {
        throw new ApplicationException("You must update your domain in ProjectRepository.cs.");
    }
    if (authParams.Contains("MyTickSpotPassword"))
    {
        throw new ApplicationException("You must update your email and password in ProjectRepository.cs.");
    }
    return string.Format("https://{0}/api/{1}?{2}&{3}", projectDomain, apiMethod, authParams, additionalParams);
}

That's it! Now go forth and consume XML and map it to classes you actually want to work with. Have fun!
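If it helps, here's a quick usage sketch under the same assumptions; the TickSpotRepository name is made up and simply stands for whatever class hosts the List, GetResponse and BuildUrl methods above.

var repo = new TickSpotRepository();
var lastWeek = repo.List(DateTime.Today.AddDays(-7), DateTime.Today);

foreach (var entry in lastWeek)
{
    // Entry.ToString() already formats the interesting fields
    Console.WriteLine(entry);
}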

    Read the article

  • WCF Authentication on the Internet - HELP

    - by Eddie
    I have a WCF service using the basicHttpBinding. The service will be deployed in production in a DMZ environment on a Windows Server 2008 64-bit machine running IIS 7.0 that is not in an Active Directory domain. The service will be accessed by a business partner over the Internet with SSL protection. Originally, I had built the service to use X.509 message authentication with wsHttpBinding; after a lot of problems I punted and decided to back up and use basicHttpBinding with UserName authentication. Result: exactly the same obscure error message I received with certificate mode. The service works perfectly inside our domain with the exact same authentication, but as soon as I move it to the DMZ I get an error reading: "An unsecured or incorrectly secured fault was received from the other party. See the inner FaultException for the fault code and detail". The inner exception message is: "An error occurred when verifying security for the message." The service's web.config binding configuration is as follows:

<services>
  <service behaviorConfiguration="HSSanoviaFacade.Service1Behavior" name="HSSanoviaFacade.HSSanoviaFacade">
    <endpoint address="" binding="basicHttpBinding" contract="HSSanoviaFacade.IHSSanoviaFacade" bindingConfiguration="basicHttp">
      <identity>
        <dns value="localhost" />
      </identity>
    </endpoint>
    <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange" />
    <host>
      <baseAddresses>
        <add baseAddress="https://FULLY QUALIFIED HOST NAME CHANGED TO PROTECT" />
      </baseAddresses>
    </host>
  </service>
</services>
<bindings>
  <basicHttpBinding>
    <binding name="basicHttp">
      <security mode="TransportWithMessageCredential">
        <message clientCredentialType="UserName" />
      </security>
    </binding>
  </basicHttpBinding>
</bindings>
<behaviors>
  <serviceBehaviors>
    <behavior name="HSSanoviaFacade.Service1Behavior">
      <serviceMetadata httpsGetEnabled="True" />
      <serviceDebug includeExceptionDetailInFaults="True" />
    </behavior>
  </serviceBehaviors>
</behaviors>

The test client's configuration that gets the error:

<bindings>
  <basicHttpBinding>
    <binding name="BasicHttpBinding_IHSSanoviaFacade" closeTimeout="00:01:00" openTimeout="00:01:00"
        receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false"
        hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288"
        maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
        useDefaultWebProxy="true">
      <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" />
      <security mode="TransportWithMessageCredential">
        <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
        <message clientCredentialType="UserName" algorithmSuite="Default" />
      </security>
    </binding>
  </basicHttpBinding>
</bindings>
<client>
  <endpoint address="https://HOST NAME CHANGED TO PROTECT" binding="basicHttpBinding"
      bindingConfiguration="BasicHttpBinding_IHSSanoviaFacade" contract="MembersService.IHSSanoviaFacade"
      name="BasicHttpBinding_IHSSanoviaFacade" />
</client>

As mentioned earlier, the service works perfectly on the domain, and the production IIS box is not on a domain. I have been tweaking and pulling my hair out for two weeks now and nothing seems to work. If anyone can help I would appreciate it, even a recommendation for an authentication workaround. I'd rather not use a custom authentication scheme but use the built-in SOAP capabilities. The credentials passed in through the proxy, i.e. proxy.ClientCredentials.UserName.UserName and proxy.ClientCredentials.UserName.Password, are a valid account both on the internal domain in the test environment and as a machine account on the DMZ IIS box.
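With UserName message credentials, WCF on a non-domain machine will by default try to validate the supplied user name and password as a Windows account, which is a common cause of the "An error occurred when verifying security for the message" fault once the service leaves the Active Directory domain. A minimal sketch of one possible workaround, not taken from the original question, is to point the service behavior at a custom validator; the type name HSSanoviaFacade.CustomUserNameValidator below is a hypothetical placeholder for a small class derived from System.IdentityModel.Selectors.UserNamePasswordValidator that overrides Validate(userName, password):

<behaviors>
  <serviceBehaviors>
    <behavior name="HSSanoviaFacade.Service1Behavior">
      <serviceMetadata httpsGetEnabled="True" />
      <serviceDebug includeExceptionDetailInFaults="True" />
      <serviceCredentials>
        <!-- Hypothetical validator type; the class and assembly names are placeholders. -->
        <userNameAuthentication userNamePasswordValidationMode="Custom"
            customUserNamePasswordValidatorType="HSSanoviaFacade.CustomUserNameValidator, HSSanoviaFacade" />
      </serviceCredentials>
    </behavior>
  </serviceBehaviors>
</behaviors>

If writing any validation code is undesirable, userNamePasswordValidationMode="MembershipProvider" together with a membershipProviderName that points at an ASP.NET membership provider achieves the same decoupling from Windows accounts using configuration only.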

    Read the article

  • Complex event system for a DungeonKeeper-like game

    - by paul424
    I am working on an open-source GPL3 game, http://opendungeons.sourceforge.net/ ; new coders are welcome. There is a design question regarding the event system: we want to improve the game logic, that is, program a new event system. I will just repost what has already been settled on http://forum.freegamedev.net/viewtopic.php?f=45&t=3033. From the discussion came the idea of the Publisher / Subscriber pattern plus "domains". My current idea is to use the subscriber/publisher model. It is similar to the Observer pattern, but instead one subscribes to event types, not to a particular object's events. Each Event would have both a static and a dynamic type. The static type would be resolved by the class inherited from Event: from Event we would derive EventTile, EventCreature, EventMapLoader, EventGameMap, etc., and those in turn would have subtypes, e.g. EventCreature would be EventKobold, EventKnight, EventTentacle, etc. The listeners would collect events from publishers and send them to subscribers; each listener would be a global singleton. The listener type hierarchy would exactly mirror the type hierarchy of events. In each Event type's constructor the created instance would notify the proper listeners; that is, constructing an EventKnight would notify EventListener, CreatureListener and KnightListener. The default action for a listener would be to notify all of its subscribers, with some exceptions: EventAttack, for example, would notify AttackListener, which would dispatch the event by its dynamic part (the Creature pointer or hash). Any comments?

#include <vector>

class Subscriber;
class AttackSubscriber;

class Event {
private:
    int foo;
    int bar;
protected:
    // static std::vector<Publisher*> publishersList;
    static std::vector<Subscriber*> subscribersList;
    static std::vector<Event*> eventQueue;
public:
    Event() { eventQueue.push_back(this); }
    static int subscribe(Subscriber* ss);     // definitions of these and of the static members go in the .cpp
    static int unsubscribe(Subscriber* ss);
    //static int reg_publisher(Publisher* pp);
    //static int unreg_publisher(Publisher* pp);
};

// class Publisher { };

class Subscriber {
public:
    int (*newEvent)(Event* ee);               // callback invoked for every plain Event
    Subscriber(int (*fp)(Event* ee) = nullptr) : newEvent(fp) { Event::subscribe(this); }
    ~Subscriber() { Event::unsubscribe(this); }
};

class EventAttack : public Event {
private:
    int foo;
    int bar;
protected:
    // static std::vector<Publisher*> publishersList;
    static std::vector<AttackSubscriber*> subscribersList;
    static std::vector<EventAttack*> eventQueue;
public:
    EventAttack() { eventQueue.push_back(this); }
    static int subscribe(AttackSubscriber* ss);
    static int unsubscribe(AttackSubscriber* ss);
    //static int reg_publisher(Publisher* pp);
    //static int unreg_publisher(Publisher* pp);
};

class AttackSubscriber : public Subscriber {
public:
    int (*newEvent)(EventAttack* ee);         // shadows Subscriber::newEvent with the narrower event type
    AttackSubscriber(int (*fp)(EventAttack* ee) = nullptr) : newEvent(fp) { EventAttack::subscribe(this); }
    ~AttackSubscriber() { EventAttack::unsubscribe(this); }
};

From that point, others wanted the Subject/Observer pattern, that is, the ability to subscribe to all event types produced by a particular object. That is how the domain system came about: to be able to listen to a particular game object's events, I thought of introducing entity domains. Domains form a tree whose nodes are labeled with names unique at each level (like WWW addresses).
Each entity wanting to participate in our event system (that is, to be able to publish/produce events) should at least know its domain name. That would end up with, say, Player1/Room/Treasury/#24 or Player1/Creature/Kobold/#3 producing events. The subscriber picks some part of the tree: for example, specifying the subtree rooted at Player1/Room/* would subscribe us to all of Player1's room events, and Player1/Creature/Kobold/#3 would subscribe us to Player1's third kobold's events. Does such an event system make sense to you? I have many implementation details to ask about as well, but let's start with some general discussion. Note 1: In the case of a fight between two creatures, the creature being attacked would have to throw the event, because it is the one that has the domain address; so that would be BeingAttackedEvent() etc. I will edit this post if other reflections on this come up. Note 2: The existing class hierarchy might be used to build the domain addresses in the constructors: each constructor would just append "/className" to the domain address, and in the leaf class's constructor one might use nextID, a hash, or any other characteristic, just to make the addresses distinguishable. Note 3: Subscribing to all of an entity's events would require knowledge of all possible events produced by that entity. This could be done in one function call, but the information on the events produced would have to be maintained for every entity. Smart Note 4: Finding the proper subscribers in the tree would be easy. One would start at a particular leaf, for example Player1/Creature/Kobold/#3, and go up one parent at a time, notifying the subscribers in each node, i.e. Player1/Creature/Kobold/*, Player1/Creature/*, Player1/*, etc., up to the root /* (a sketch of this lookup follows below). Note 5: The event system was needed to have some way of incorporating AngelScript code into the application, so the event dispatcher was to be a gateway to AngelScript functions; it just came out as this instead.
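Below is a minimal sketch, my own illustration rather than code from the OpenDungeons project, of the lookup described in Smart Note 4: subscribers register under a domain prefix, and publishing an event from a leaf address walks up one level at a time, firing every callback registered along the way to the root. The DomainDispatcher name and the bare Event stand-in are placeholder assumptions.

#include <functional>
#include <map>
#include <string>
#include <vector>

// Stand-in for the game's Event hierarchy shown in the listing above.
struct Event {};

class DomainDispatcher {
public:
    // Register a callback under a domain prefix such as "Player1/Creature/Kobold"
    // (the post's "Player1/Creature/Kobold/*").
    void subscribe(const std::string& domain, std::function<void(Event&)> callback) {
        subscribers_[domain].push_back(std::move(callback));
    }

    // Smart Note 4: start at the leaf address and go up one parent at a time,
    // notifying the subscribers registered at every level, ending at the root ("").
    void publish(const std::string& address, Event& event) {
        std::string prefix = address;
        while (true) {
            auto it = subscribers_.find(prefix);
            if (it != subscribers_.end())
                for (auto& callback : it->second)
                    callback(event);
            if (prefix.empty())
                break;                                   // root reached
            auto slash = prefix.find_last_of('/');
            prefix = (slash == std::string::npos) ? std::string() : prefix.substr(0, slash);
        }
    }

private:
    std::map<std::string, std::vector<std::function<void(Event&)>>> subscribers_;
};

// Usage sketch:
//   DomainDispatcher dispatcher;
//   dispatcher.subscribe("Player1/Creature/Kobold", [](Event&) { /* every Player1 kobold */ });
//   Event beingAttacked;
//   dispatcher.publish("Player1/Creature/Kobold/#3", beingAttacked);

The same walk could be layered on top of the listener singletons from the listing above: the static type of the event picks the listener class, and the domain address picks which of that listener's subscribers actually receive the callback.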

    Read the article

  • To Make Diversity Work, Managers Must Stop Ignoring Difference

    - by HCM-Oracle
    By Kate Pavao - Originally posted on Profit. Executive coaches Jane Hyun and Audrey S. Lee noticed something during their leadership development coaching and consulting: frustrated employees and overwhelmed managers. “We heard from voices saying, ‘I wish my manager understood me better’ or ‘I hope my manager would take the time to learn more about me and my background,’” remembers Hyun. “By the same token, the managers we were coaching had a hard time even knowing how to start these conversations.” Hyun and Lee wrote Flex to address some of the fears managers have when it comes to leading diverse teams—such as being afraid of offending their employees by stumbling into sensitive territory—and also to provide a sure-footed strategy for becoming a more effective leader. Here, Hyun talks about what it takes to create innovative and productive teams in an increasingly diverse world, including the key characteristics successful managers share.

Q: What does it mean to “flex”?
Hyun: Flexing is the art of switching between leadership styles to work more effectively with people who are different from you. It’s not fundamentally changing who you are, but it’s understanding when you need to adapt your style in a situation so that you can accommodate people and make them feel more comfortable. It’s understanding the gap that might exist between you and others who are different, and then flexing across that gap to get the result that you're looking for. It’s up to all of us, not just managers, but also employees, to learn how to flex. When you hire new people into the organization, they're expected to adapt. The new people in the organization may need some guidance around how to best flex. They can certainly take the initiative, but if you can give them some direction around the important rules, and connect them with insiders who can help them figure out the most critical elements of the job, that will accelerate how quickly they can contribute to your organization.

Q: Why is it important right now for managers to understand flexing?
Hyun: The workplace is becoming increasingly younger, multicultural and female. The numbers bear it out. Millennials are entering the workforce and becoming a larger percentage of it, which is a global phenomenon. Thirty-six percent of the workforce is multicultural, and close to half is female. It makes sense to better understand the people who are increasingly a part of your workforce, and how to best lead and manage them as well.

Q: What do companies miss out on when managers don’t flex?
Hyun: There are high costs for losing people or failing to engage them. The estimated cost of replacing an employee is about 150 percent of that person’s salary. There are studies showing that employee disengagement costs the U.S. something like $450 billion a year. But voice is the biggest thing you miss out on if you don’t flex. Whenever you want innovation or increased productivity from your people, you need to figure out how to unleash these things. The way you get there is to make sure that everybody’s voice is at the table.

Q: What are some of the common misassumptions that managers make about the people on their teams?
Hyun: One is what I call the Golden Rule mentality: we assume when we go to the workplace that people are going to think like us and operate like us.
But sometimes when you work with people from a different culture or a different generation, they may have a different mindset about doing something, or a different approach to solving a problem, or a different way to manage some situation. When we see something that’s different, we don't understand it, so we don't trust it. We have this hidden bias for people who are like us. That gets in the way of really looking at how we can tap our team members' best potential by understanding how their difference may help them be effective in our workplace. We’re trained, especially in the workplace, to make assumptions quickly, so that we can make the best business decision. But with people, it’s better to remain curious. If you want to build stronger cross-cultural, cross-generational, cross-gender relationships, before you make a judgment, share what you observe with that team member, and connect with him or her in ways that are mutually adaptive, so that you can work together more effectively.

Q: What are the common characteristics you see in leaders who are successful at flexing?
Hyun: One is what I call “adaptive ability”—leaders who are able to understand that someone on their team is different from them, and who are willing to adapt their style accordingly. Another one is “unconditional positive regard,” which is basically acceptance of others, even in their vulnerable moments. This attitude of grace is critical and essential to a healthy environment for developing people. If you think about when people enter the workforce, they're only 21 years old. It’s quite a formative time for them. They may not have a lot of management experience, or experience managing complex or even global projects. Creating the best possible conditions for their development requires turning their mistakes into teachable moments, and giving them an opportunity to really learn. Finally, these leaders are not rigid or constrained in a single mode or style. They have this insatiable curiosity about other people. They don’t judge when they see behavior that doesn’t make sense, or is different from their own. For example, maybe someone on their team is less aggressive than they are. The leader needs to remain curious and think, “Wow, I wonder how I can engage in a dialogue with this person to get their potential out in the open.”

    Read the article

  • Active Directory and Apple's Workgroup Manager

    - by qbn
    I thought I'd share my experiences here. I work for a small business with only ~20 users. I wanted the ability to use managed client preferences to assign things like the software update server. Basically the ability to manage my Macs easily and in a native way. At first I tried the magic triangle solution, but I found this to be very complicated. Not only does it require a Mac OS X Server, but it gives you two points of failure. Additionally each Mac workstation must be bound to both servers. Eventually I sucked it up and went with the schema changes documented here. I was hesitant at first, because the instructions require a lot of manual work. However it was fairly basic and only took me about an hour and a half. Below you'll find the schema changes file that was a result of my work. I followed the instructions exactly and double checked everything, after six months of having this in place things have been running great. Too good to not share. I hope I save someone a couple of hours. # ================================================================== # # This file should be imported with the following command: # ldifde -i -u -f Apple AD Schema Changes.ldf -s server:port -b username domain password -j . -c "cn=Configuration,dc=X" #configurationNamingContext # LDIFDE.EXE from AD/AM V1.0 or above must be used. # This LDIF file should be imported into AD or AD/AM. It may not work for other directories. # # ================================================================== # ================================================================== # Attributes # ================================================================== # Attribute: apple-category dn: cn=apple-category,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.10.4 ldapDisplayName: apple-category attributeSyntax: 2.5.5.12 adminDescription: Category for the computer or neighborhood oMSyntax: 64 systemOnly: FALSE # Attribute: apple-computeralias dn: cn=apple-computeralias,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.20.3 ldapDisplayName: apple-computeralias attributeSyntax: 2.5.5.12 adminDescription: XML plist referring to a computer record oMSyntax: 64 systemOnly: FALSE # Attribute: apple-computer-list-groups dn: cn=apple-computer-list-groups,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.11.4 ldapDisplayName: apple-computer-list-groups attributeSyntax: 2.5.5.12 adminDescription: groups oMSyntax: 64 systemOnly: FALSE # Attribute: apple-computers dn: cn=apple-computers,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.11.3 ldapDisplayName: apple-computers attributeSyntax: 2.5.5.12 adminDescription: computers oMSyntax: 64 systemOnly: FALSE # Attribute: apple-data-stamp dn: cn=apple-data-stamp,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.12.2 ldapDisplayName: apple-data-stamp attributeSyntax: 2.5.5.5 adminDescription: data stamp oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-dns-domain dn: cn=apple-dns-domain,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.18.1 ldapDisplayName: apple-dns-domain attributeSyntax: 2.5.5.12 adminDescription: DNS 
domain oMSyntax: 64 systemOnly: FALSE # Attribute: apple-dnsname dn: cn=apple-dnsname,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.19.4 ldapDisplayName: apple-dnsname attributeSyntax: 2.5.5.12 adminDescription: DNS name oMSyntax: 64 systemOnly: FALSE # Attribute: apple-dns-nameserver dn: cn=apple-dns-nameserver,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.18.2 ldapDisplayName: apple-dns-nameserver attributeSyntax: 2.5.5.12 adminDescription: DNS name server list oMSyntax: 64 systemOnly: FALSE # Attribute: apple-group-homeowner dn: cn=apple-group-homeowner,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.14.2 ldapDisplayName: apple-group-homeowner attributeSyntax: 2.5.5.5 adminDescription: group home owner settings oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-group-homeurl dn: cn=apple-group-homeurl,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.14.1 ldapDisplayName: apple-group-homeurl attributeSyntax: 2.5.5.5 adminDescription: group home url oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-imhandle dn: cn=apple-imhandle,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.21 ldapDisplayName: apple-imhandle attributeSyntax: 2.5.5.12 adminDescription: IM handle (service:account name) oMSyntax: 64 systemOnly: FALSE # Attribute: apple-keyword dn: cn=apple-keyword,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.19 ldapDisplayName: apple-keyword attributeSyntax: 2.5.5.12 adminDescription: keywords oMSyntax: 64 systemOnly: FALSE # Attribute: apple-mcxflags dn: cn=apple-mcxflags,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.10 ldapDisplayName: apple-mcxflags attributeSyntax: 2.5.5.12 adminDescription: mcx flags oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-mcxsettings dn: cn=apple-mcxsettings,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.16 ldapDisplayName: apple-mcxsettings attributeSyntax: 2.5.5.12 adminDescription: mcx settings oMSyntax: 64 systemOnly: FALSE # Attribute: apple-neighborhoodalias dn: cn=apple-neighborhoodalias,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.20.2 ldapDisplayName: apple-neighborhoodalias attributeSyntax: 2.5.5.12 adminDescription: XML plist referring to another neighborhood record oMSyntax: 64 systemOnly: FALSE # Attribute: apple-networkview dn: cn=apple-networkview,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.10.3 ldapDisplayName: apple-networkview attributeSyntax: 2.5.5.12 adminDescription: Network view for the computer oMSyntax: 64 systemOnly: FALSE # Attribute: apple-nodepathxml dn: cn=apple-nodepathxml,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.20.1 ldapDisplayName: apple-nodepathxml attributeSyntax: 2.5.5.12 adminDescription: 
XML plist of directory node path oMSyntax: 64 systemOnly: FALSE # Attribute: apple-service-location dn: cn=apple-service-location,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.19.5 ldapDisplayName: apple-service-location attributeSyntax: 2.5.5.12 adminDescription: Service location oMSyntax: 64 systemOnly: FALSE # Attribute: apple-service-port dn: cn=apple-service-port,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.19.3 ldapDisplayName: apple-service-port attributeSyntax: 2.5.5.9 adminDescription: Service port number oMSyntax: 2 systemOnly: FALSE # Attribute: apple-service-type dn: cn=apple-service-type,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.19.1 ldapDisplayName: apple-service-type attributeSyntax: 2.5.5.5 adminDescription: type of service oMSyntax: 22 systemOnly: FALSE # Attribute: apple-service-url dn: cn=apple-service-url,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.19.2 ldapDisplayName: apple-service-url attributeSyntax: 2.5.5.5 adminDescription: URL of service oMSyntax: 22 systemOnly: FALSE # Attribute: apple-user-authenticationhint dn: cn=apple-user-authenticationhint,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.15 ldapDisplayName: apple-user-authenticationhint attributeSyntax: 2.5.5.12 adminDescription: password hint oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-class dn: cn=apple-user-class,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.7 ldapDisplayName: apple-user-class attributeSyntax: 2.5.5.5 adminDescription: user class oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-homequota dn: cn=apple-user-homequota,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.8 ldapDisplayName: apple-user-homequota attributeSyntax: 2.5.5.5 adminDescription: home directory quota oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-homesoftquota dn: cn=apple-user-homesoftquota,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.17 ldapDisplayName: apple-user-homesoftquota attributeSyntax: 2.5.5.5 adminDescription: home directory soft quota oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-homeurl dn: cn=apple-user-homeurl,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.6 ldapDisplayName: apple-user-homeurl attributeSyntax: 2.5.5.5 adminDescription: home directory URL oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-mailattribute dn: cn=apple-user-mailattribute,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.9 ldapDisplayName: apple-user-mailattribute attributeSyntax: 2.5.5.12 adminDescription: mail attribute oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-picture dn: cn=apple-user-picture,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd 
objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.12 ldapDisplayName: apple-user-picture attributeSyntax: 2.5.5.12 adminDescription: picture oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-user-printattribute dn: cn=apple-user-printattribute,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.13 ldapDisplayName: apple-user-printattribute attributeSyntax: 2.5.5.12 adminDescription: print attribute oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-webloguri dn: cn=apple-webloguri,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.1.22 ldapDisplayName: apple-webloguri attributeSyntax: 2.5.5.12 adminDescription: Weblog URI oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: apple-xmlplist dn: cn=apple-xmlplist,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.17.1 ldapDisplayName: apple-xmlplist attributeSyntax: 2.5.5.12 adminDescription: XML plist data oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: ipHostNumber dn: cn=ipHostNumber,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.1.1.1.19 ldapDisplayName: ipHostNumber attributeSyntax: 2.5.5.5 adminDescription: IP address oMSyntax: 22 systemOnly: FALSE rangeUpper: 128 # Attribute: macAddress dn: cn=macAddress,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.1.1.1.22 ldapDisplayName: macAddress attributeSyntax: 2.5.5.5 adminDescription: MAC address oMSyntax: 22 systemOnly: FALSE rangeUpper: 128 # Attribute: mountDirectory dn: cn=apple-mountDirectory,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.8.1 ldapDisplayName: mountDirectory attributeSyntax: 2.5.5.12 adminDescription: mount path oMSyntax: 64 isSingleValued: TRUE systemOnly: FALSE # Attribute: mountDumpFrequency dn: cn=apple-mountDumpFrequency,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.8.4 ldapDisplayName: mountDumpFrequency attributeSyntax: 2.5.5.5 adminDescription: mount dump frequency oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: mountOption dn: cn=apple-mountOption,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.8.3 ldapDisplayName: mountOption attributeSyntax: 2.5.5.5 adminDescription: mount options oMSyntax: 22 systemOnly: FALSE # Attribute: mountPassNo dn: cn=apple-mountPassNo,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.8.5 ldapDisplayName: mountPassNo attributeSyntax: 2.5.5.5 adminDescription: mount passno oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: mountType dn: cn=apple-mountType,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.63.1000.1.1.1.8.2 ldapDisplayName: mountType attributeSyntax: 2.5.5.5 adminDescription: mount VFS type oMSyntax: 22 isSingleValued: TRUE systemOnly: FALSE # Attribute: ttl dn: cn=ttl,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: attributeSchema attributeId: 1.3.6.1.4.1.250.1.60 
ldapDisplayName: ttl attributeSyntax: 2.5.5.9 oMSyntax: 2 isSingleValued: TRUE systemOnly: FALSE dn: changetype: modify add: schemaUpdateNow schemaUpdateNow: 1 - # ================================================================== # Classes # ================================================================== # Class: apple-computer dn: cn=apple-computer,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.10 ldapDisplayName: apple-computer adminDescription: computer objectClassCategory: 3 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-category mayContain: 1.3.6.1.4.1.63.1000.1.1.1.10.4 # mayContain: apple-computer-list-groups mayContain: 1.3.6.1.4.1.63.1000.1.1.1.11.4 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-mcxflags mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.10 # mayContain: apple-mcxsettings mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.16 # mayContain: apple-networkview mayContain: 1.3.6.1.4.1.63.1000.1.1.1.10.3 # mayContain: apple-service-url mayContain: 1.3.6.1.4.1.63.1000.1.1.1.19.2 # mayContain: apple-xmlplist mayContain: 1.3.6.1.4.1.63.1000.1.1.1.17.1 # mayContain: macAddress mayContain: 1.3.6.1.1.1.1.22 # mayContain: ttl mayContain: 1.3.6.1.4.1.250.1.60 # Class: apple-computer-list dn: cn=apple-computer-list,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.11 ldapDisplayName: apple-computer-list adminDescription: computer list objectClassCategory: 1 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-computer-list-groups mayContain: 1.3.6.1.4.1.63.1000.1.1.1.11.4 # mayContain: apple-computers mayContain: 1.3.6.1.4.1.63.1000.1.1.1.11.3 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-mcxflags mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.10 # mayContain: apple-mcxsettings mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.16 possSuperiors: organizationalUnit possSuperiors: container # Class: apple-configuration dn: cn=apple-configuration,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.12 ldapDisplayName: apple-configuration adminDescription: configuration objectClassCategory: 3 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-data-stamp mayContain: 1.3.6.1.4.1.63.1000.1.1.1.12.2 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-xmlplist mayContain: 1.3.6.1.4.1.63.1000.1.1.1.17.1 # mayContain: ttl mayContain: 1.3.6.1.4.1.250.1.60 possSuperiors: organizationalUnit possSuperiors: container # Class: apple-group dn: cn=apple-group,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.14 ldapDisplayName: apple-group adminDescription: group account objectClassCategory: 3 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-group-homeowner mayContain: 1.3.6.1.4.1.63.1000.1.1.1.14.2 # mayContain: apple-group-homeurl mayContain: 1.3.6.1.4.1.63.1000.1.1.1.14.1 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-mcxflags mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.10 # mayContain: apple-mcxsettings mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.16 # mayContain: apple-user-picture mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.12 # 
mayContain: ttl mayContain: 1.3.6.1.4.1.250.1.60 # Class: apple-location dn: cn=apple-location,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.18 ldapDisplayName: apple-location objectClassCategory: 1 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-dns-domain mayContain: 1.3.6.1.4.1.63.1000.1.1.1.18.1 # mayContain: apple-dns-nameserver mayContain: 1.3.6.1.4.1.63.1000.1.1.1.18.2 possSuperiors: organizationalUnit possSuperiors: container # Class: apple-neighborhood dn: cn=apple-neighborhood,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.20 ldapDisplayName: apple-neighborhood objectClassCategory: 1 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-category mayContain: 1.3.6.1.4.1.63.1000.1.1.1.10.4 # mayContain: apple-computeralias mayContain: 1.3.6.1.4.1.63.1000.1.1.1.20.3 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-neighborhoodalias mayContain: 1.3.6.1.4.1.63.1000.1.1.1.20.2 # mayContain: apple-nodepathxml mayContain: 1.3.6.1.4.1.63.1000.1.1.1.20.1 # mayContain: apple-xmlplist mayContain: 1.3.6.1.4.1.63.1000.1.1.1.17.1 # mayContain: ttl mayContain: 1.3.6.1.4.1.250.1.60 possSuperiors: 2.5.6.5 possSuperiors: container # Class: apple-serverassistant-config dn: cn=apple-serverassistant-config,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.17 ldapDisplayName: apple-serverassistant-config objectClassCategory: 1 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-xmlplist mayContain: 1.3.6.1.4.1.63.1000.1.1.1.17.1 possSuperiors: organizationalUnit possSuperiors: container # Class: apple-service dn: cn=apple-service,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.19 ldapDisplayName: apple-service objectClassCategory: 1 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mustContain: apple-service-type mustContain: 1.3.6.1.4.1.63.1000.1.1.1.19.1 # mayContain: apple-dnsname mayContain: 1.3.6.1.4.1.63.1000.1.1.1.19.4 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-service-location mayContain: 1.3.6.1.4.1.63.1000.1.1.1.19.5 # mayContain: apple-service-port mayContain: 1.3.6.1.4.1.63.1000.1.1.1.19.3 # mayContain: apple-service-url mayContain: 1.3.6.1.4.1.63.1000.1.1.1.19.2 # mayContain: ipHostNumber mayContain: 1.3.6.1.1.1.1.19 possSuperiors: organizationalUnit possSuperiors: container # Class: apple-user dn: cn=apple-user,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.1 ldapDisplayName: apple-user adminDescription: apple user account objectClassCategory: 3 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: apple-imhandle mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.21 # mayContain: apple-keyword mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.19 # mayContain: apple-mcxflags mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.10 # mayContain: apple-mcxsettings mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.16 # mayContain: apple-user-authenticationhint mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.15 # mayContain: apple-user-class mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.7 # mayContain: apple-user-homequota mayContain: 
1.3.6.1.4.1.63.1000.1.1.1.1.8 # mayContain: apple-user-homesoftquota mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.17 # mayContain: apple-user-homeurl mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.6 # mayContain: apple-user-mailattribute mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.9 # mayContain: apple-user-picture mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.12 # mayContain: apple-user-printattribute mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.13 # mayContain: apple-webloguri mayContain: 1.3.6.1.4.1.63.1000.1.1.1.1.22 # Class: mount dn: cn=apple-mount,cn=Schema,cn=Configuration,dc=X changetype: ntdsschemaadd objectClass: classSchema governsID: 1.3.6.1.4.1.63.1000.1.1.2.8 ldapDisplayName: mount objectClassCategory: 1 # subclassOf: top subclassOf: 2.5.6.0 # rdnAttId: cn rdnAttId: 2.5.4.3 # mayContain: mountDirectory mayContain: 1.3.6.1.4.1.63.1000.1.1.1.8.1 # mayContain: mountDumpFrequency mayContain: 1.3.6.1.4.1.63.1000.1.1.1.8.4 # mayContain: mountOption mayContain: 1.3.6.1.4.1.63.1000.1.1.1.8.3 # mayContain: mountPassNo mayContain: 1.3.6.1.4.1.63.1000.1.1.1.8.5 # mayContain: mountType mayContain: 1.3.6.1.4.1.63.1000.1.1.1.8.2 possSuperiors: 2.5.6.5 possSuperiors: container dn: changetype: modify add: schemaUpdateNow schemaUpdateNow: 1 - # ================================================================== # Updating present elements # ================================================================== # Add the new class to the user object dn: CN=User,CN=Schema,CN=Configuration,DC=X changetype: modify add: auxiliaryClass auxiliaryClass: apple-user - # Add the new class to the computer object dn: CN=Computer,CN=Schema,CN=Configuration,DC=X changetype: modify add: auxiliaryClass auxiliaryClass: apple-computer - # Add the new class to the group object dn: CN=Group,CN=Schema,CN=Configuration,DC=X changetype: modify add: auxiliaryClass auxiliaryClass: apple-group - # Add the new class to the configuration object dn: CN=Configuration,CN=Schema,CN=Configuration,DC=X changetype: modify add: auxiliaryClass auxiliaryClass: apple-configuration -

    Read the article

  • Integrating Windows Form Click Once Application into SharePoint 2007 – Part 1 of 2

    - by Kelly Jones
    Last year, I had the opportunity to build a solution that involved integrating a Windows Form application into a SharePoint 2007 (WSS version 3.0) site. In this post, I'll lay out our architecture thinking, and in part two, I'll describe the technical details.

Business Case
Our challenge was this: we needed an easy way for a small group of our users to upload documents in batches. They also needed to quickly set the metadata values, as well as set security on individual files. The out-of-the-box uploads just didn't fit. The single-file upload allows setting the metadata, but our users would be uploading dozens of files. The multiple upload would allow our users to upload batches of files, but it doesn't let them set the metadata during upload. Also, neither upload method allows the users to set the permissions on a file.

Our Solution
We looked into building a web control of some kind, but ruled that out due to security complexities (if I remember correctly). Another option would have been using a technology like Silverlight (or Flash?), but our team didn't have the skills necessary to build with these. So, after looking at what was technically possible, and also what skills our team had, we settled on a Windows Form application. We also decided to deliver it to the clients via Click Once, so we would have the ability to easily update the application in the future.

Lessons Learned
After deploying our solution, we learned a few lessons. First, you'll need to have the .NET Framework installed on the client computers. We knew this, but we still ran into issues making sure our users had the proper framework version installed. Second, we had issues with authentication. Our issues were due to our testing domain being a separate Active Directory domain from the domain that our end users and their workstations were members of. (See my earlier post about Clearing Saved Passwords for the fix to our problem.) Our third issue was how we dealt with uploading files that were named the same. Our application would replace the existing file with the new file, which is the way we expected it to work. However, our users wanted to upload weekly reports named the same as the previous week's. We solved this by using folders within the document library to keep each week's set of reports separate from previous weeks'. One last thing to consider before implementing a solution like this is what browsers and platforms your users will be working from. We only needed to support IE and Windows, which works fine. However, if you need to support Firefox, there are add-ons that allow Click Once to work with Firefox. This is still a Windows-only solution, though. In order to support Macs, you'd have to focus on either browser techniques (AJAX?) or Silverlight/Flash.

Summary
Our users are happy with the Click Once app. It allowed them to move all of their content to our SharePoint site in under a couple of hours, which they were thrilled with. We're happy because we can easily deploy updates, our development time was small, and we met all of our business requirements.

    Read the article

  • ArchBeat Top 10 for December 2-8, 2012

    - by Bob Rhubart
    The Top 10 most-clicked items shared on the OTN ArchBeat Facebook page for the week of December 2-8, 2012:

Configure Oracle SOA JMSAdatper to Work with WLS JMS Topics
Another of the four posts published on Dec 4 by the Fusion Middleware A-Team blogger identified as "fip" illustrates "how to configure the JMS Topic, the JmsAdapter connection factory, as well as the composite so that the JMS Topic messages will be evenly distributed to same composite running off different SOA cluster nodes without causing duplication."

Web Service Example - Part 3: Asynchronous
Part 3 in this series from the Oracle ADF Mobile blog looks at "firing the web service asynchronously and then filling in the UI when it completes." Denis says, "This can be useful when you have data on the device in a local store and want to show that to the user while the application uses lazy loading from a web service to load more data."

Advanced Oracle SOA Suite Oracle Open World 2012 SOA Presentations
Oracle SOA & BPM Partner Community blogger Juergen Kress shares a list of 13 SOA presentations delivered or moderated by Oracle SOA Product Management at OOW12 in San Francisco.

Oracle WebLogic Server WLS Domain Browser
My colleague Jeff Davies, a frequent speaker at OTN Architect Day events and a genuinely nice guy, emailed me last night with this message: "I just came across this app on Google Play. It allows WebLogic administrators to browse WLS 12c domain information. I installed it on my phone and tried it out. Works very fast." I'm an iPhone guy, but I'm perfectly comfortable taking Jeff at his word. The app is called WLS Domain Browser. Follow the link for more info from the Google Play site.

Retrieve Performance Data from SOA Infrastructure Database
Another of the four blog posts published on Dec 4 by very busy Oracle Fusion Middleware A-Team member "fip," this one offers "examples of some basic SQL queries you can run against the infrastructure database of Oracle SOA Suite 11G to acquire the performance statistics for a given period of time."

How to Achieve OC4J RMI Load Balancing
"Having returned from a customer who faced challenges with OC4J RMI load balancing, I felt there is still some confusion in the field [about] how OC4J RMI load balancing works," says the Oracle Fusion Middleware A-Team member known only as "fip." "Hence I decide to dust off an old tech note that I wrote a few years back and share it with the general public."

From XaaS to Java EE – Which damn cloud is right for me in 2012?
Oracle ACE Director Markus Eisele wrestles with a timely technical issue and shares his observations on several of the alternatives.

Exalogic 2.0.1 Tea Break Snippets - Creating a ModifyJeOS VirtualBox
"One of the main advantages of this is that Templates can be created away from the Exalogic Environment," explains The Old Toxophilist. (BTW: I had to look it up: a toxophilist is one who collects bows and arrows.)

ADF Mobile - Implementing Reusable Mobile Architecture
"Reusability was always a strong part of ADF," says Oracle ACE Director Andrejus Baranovskis. "The same high reusability level is supported now in ADF Mobile." The objective of this post is "to prove technically that [the] reusable architecture concept works for ADF Mobile."

Using BPEL Performance Statistics to Diagnose Performance Bottlenecks
Someone had a busy day… This post, one of four published on Dec 4 by a member of the Oracle Fusion Middleware A-Team identified only as "fip," offers details on how to "enable, retrieve and interpret the performance statistics, before the future versions provides a more pleasant user experience."

Thought for the Day
"If you're afraid to change something it is clearly poorly designed." — Martin Fowler
Source: SoftwareQuotes.com

    Read the article
