Search Results

Search found 26412 results on 1057 pages for 'product key'.

  • Oracle WebCenter Partner Program

    - by kellsey.ruppel
    In competitive marketplaces, your company needs to respond quickly to changes and new trends in order to open opportunities and build long-term growth. Oracle has a variety of next-generation services, solutions and resources that will leverage the differentiators in your offerings. Name your partnering needs: Oracle has the answer. This week we’d like to focus on partners and the value your organization can gain from working with the Oracle PartnerNetwork. The Oracle PartnerNetwork will empower your company with exceptional resources to distinguish your offerings from the competition, seize opportunities, and increase your sales. We’re happy to welcome Christine Kungl and Brian Buzzell from Oracle’s World Wide Alliances & Channels (WWA&C) WebCenter Partner Enablement team as today’s guests on the Oracle WebCenter blog.
    Q: What is the Oracle PartnerNetwork (OPN)?
    A: Christine: Oracle’s PartnerNetwork (OPN) is a collaborative partnership that gives registered companies access to specific added-value resources to help differentiate themselves from their competition. Through its programs, OPN gives companies the ability to target and seize opportunities, educate and train their teams, and leverage the unparalleled opportunity of Oracle’s large market footprint. OPN’s multi-level programs allow companies to grow and evolve with Oracle based on their business needs. As part of their OPN memberships, partners are encouraged to become OPN Specialized, which gives them additional differentiation within the Oracle PartnerNetwork community.
    Q: What is an OPN Specialization and what resources are available for Specialized Partners?
    A: Brian: Oracle wanted a better way for our partners to differentiate their special skills and expertise, as well as a more effective way to communicate that difference to customers. Oracle’s expanding product portfolio demanded that we be able to identify partners with significant product knowledge—those who had made an investment in Oracle and a continuing commitment to deliver Oracle solutions. And with more than 30,000 Oracle partners around the world, Oracle needed a way for our customers to choose the right partner for their business. So how did Oracle meet this need? With the new partner program: Oracle PartnerNetwork (OPN) Specialized. In this new program, Oracle partners are:
    Specialized: differentiating themselves from the competition with expertise that sets them apart.
    Recognized: being acknowledged for investing in becoming Oracle experts in specialized areas.
    Preferred: connecting with potential customers who are seeking value-added solutions for their business.
    OPN Specialized provides all partners with educational opportunities, training, and tools specially designed to build competency and grow business. Partners can serve their customers better through key resources:
    OPN Specialized Knowledge Zones – located on the updated and enhanced OPN portal, these provide a single point of entry for all education and training information for Oracle partners.
    Enablement 2.0 Resources – Enablement 2.0 helps Oracle partners build their competencies and skills through a variety of educational opportunities and expanded training choices. These resources include: Enablement 2.0 “Boot camps”, which provide three-tiered learning levels that help jump-start partner training. The role-based training covers Oracle’s application and technology products and offers a combination of classroom lectures, hands-on lab exercises, and case studies.
    Enablement 2.0 interactive guided learning paths (GLPs) with recommendations on how to achieve specialization.
    Upgraded partner solution kits.
    Enhanced, specialized business centers available 24/7 around the globe on the OPN portal.
    OPN Competency Center – tracking progress: the OPN Competency Center keeps track as a partner applies for and achieves specialization in selected areas. You start with an assessment that compares your organization’s current skills and experience with the requirements for specialization in the area you have chosen. The OPN Competency Center then provides a roadmap that itemizes the skills and the knowledge you need to earn specialized status. In summary, OPN Specialization not only includes key training resources but also a way to track and show progression for your partner organization.
    Q: What are the OPN membership levels and what are the benefits?
    A: Christine: The base OPN membership levels are:
    Remarketer: At the Remarketer level, retailers can choose to resell select Oracle products with the backing of authorized, regionally located, value-added distributors (VADs). The Remarketer level has no fees and no partner agreement with Oracle, but does offer online training and sales tools through the OPN portal. Program Details: Remarketer
    Silver Level: The Silver level is for Oracle partners who are focused on reselling and developing business with products ordered through the Oracle 1-Click Ordering Program. The Silver level provides a cost-effective, yet scalable way for partners to start an OPN Specialized membership and offers a substantial set of benefits that lets partners increase their competitive positioning. Program Details: Silver
    Gold Level: Gold-level partners have the ability to specialize, helping them grow their business and create differentiation in the marketplace. Oracle partners at the Gold level can develop, sell, or implement the full stack of Oracle solutions and can apply to resell Oracle Applications. Program Details: Gold
    Platinum Level: The Platinum level is for Oracle partners who want the highest level of benefits and are committed to reaching a minimum of five specializations. Platinum partners are recognized for their expertise in a broad range of products and technology, and receive dedicated support from Oracle. Program Details: Platinum
    In addition, we recently introduced a new level:
    Diamond Level: This is the most prestigious level of OPN Specialized. It allows companies to differentiate further because of the focused depth and breadth of their expertise. Program Details: Diamond
    So as you can see, there are various cost-effective ways that partners can gain assistance and differentiation through OPN membership.
    Q: What role do Oracle's World Wide Alliances & Channels (WWA&C) and Partner Enablement teams and the WebCenter community play?
    A: Brian: Oracle’s WWA&C teams are responsible for managing relationships, educating their teams, creating go-to-market solutions and fostering communities for Oracle partners worldwide. The WebCenter Partner Enablement Middleware Team is tasked with creating, managing and distributing Specialization resources for the WebCenter partner community.
    Q: What WebCenter Specializations are currently available?
    A: Christine: As of now, here are the WebCenter Specializations and their availability:
    Oracle WebCenter Portal Specialization (Oracle WebCenter Portal): Available Now. The Oracle WebCenter Specialization provides insight into the following products: WebCenter Services, WebCenter Spaces, and WebLogic Portal. Oracle WebCenter Specialized Partners can efficiently use Oracle WebCenter products to create social applications, enterprise portals, communities, composite applications, and Internet or intranet Web sites on a standards-based, service-oriented architecture (SOA). The suite combines the development of rich internet applications; a multi-channel portal framework; and a suite of horizontal WebCenter applications, which provide content, presence, and social networking capabilities to create a highly interactive user experience.
    Oracle WebCenter Content Specialization: Available Now. The Oracle WebCenter Content Specialization provides insight into the following products: Universal Content Management, WebCenter Records Management, WebCenter Imaging, WebCenter Distributed Capture, and WebCenter Capture. Oracle WebCenter Content Specialized Partners can efficiently build content-rich business applications, reuse content, and integrate hundreds of content services with other business applications. This allows our customers to decrease costs, automate processes, reduce resource bottlenecks, share content effectively, minimize the number of lost documents, and better manage risk.
    Oracle WebCenter Sites Specialization: Available Q1 2012. Oracle WebCenter Sites is part of the broader Oracle WebCenter platform that provides organizations with a complete customer experience management solution. Partners that align with the new Oracle WebCenter Sites platform allow their customers' organizations to:
    Leverage customer information from all channels and systems
    Manage interactions across all channels
    Unify commerce, merchandising, marketing, and service across all channels
    Provide personalized, choreographed consumer journeys across all channels
    Integrate order orchestration, supply chain management and order fulfillment
    Q: What criteria does a partner organization need to meet to achieve Specialization? What about individual Sales, Pre-Sales and Implementation Specialist/Technical consultants?
    A: Brian: Each Oracle WebCenter Specialization has unique business criteria that must be met in order to achieve that Specialization. This includes a set number of transactions (co-sell, re-sell, and referral), customer references, and a set number of specialists on the partner team (Sales, Pre-Sales, Implementation, and Support). Each WebCenter Specialization provides training resources (GLPs, boot camps, assessments and exams) for individuals on a partner’s staff to fulfill those requirements. The criteria can be found for each Specialization on the Specialize tab of each WebCenter Knowledge Zone. Here are the sample criteria, recommended courses, and exams for the WebCenter Portal Specialization: WebCenter Portal Specialization Criteria
    Q: Do you have any suggestions on the best way for partners to get started if they would like to know more?
    A: Christine: The best way for partners to start is to look at their business and core Oracle team focus and then look to become specialized in one or more areas. Once you have selected the Specializations that are right for your business, you need to follow the first 3 key steps described below.
    The fourth step outlines the additional process to follow if you meet the criteria to be Advanced Specialized. Note that Step 4 may not be done without first following Steps 1-3.
    1. Join the Knowledge Zone(s) where you want to achieve Specialized status: go to the Knowledge Zone, click on the "Why Partner" tab, then click on the "Join Knowledge Zone" link.
    2. Meet the Specialization criteria – define and implement plans in your organization to achieve the competency and business criteria targets of the Specialization. (Note: Worldwide OPN members at the Gold, Platinum, or Diamond level and their Associates at the Gold, Platinum, or Diamond level may count their collective resources to meet the business and competency criteria required for specialization in this area.)
    3. Apply for Specialization – when you have met the business and competency criteria required, inform Oracle by completing the following steps: click on the "Specialize" tab in the Knowledge Zone, click on the "Apply Now" button, and complete the online application form. Oracle will validate the information provided, and once approved, you will receive notification from Oracle of your awarded Specialized status. Need more information? Access our Step by Step Guide (PDF).
    4. Apply for Advanced Specialization (optional) – if your company has on staff 50 unique Certified Implementation Specialists in your company's approved Specialization's product set, let Oracle know by following these steps: ensure that you have 50 or more unique individuals who are Certified Implementation Specialists in the specific Specialization awarded to your company; if you are pooling resources from another Associate or Worldwide entity, ensure you know that company’s name and country; and have your Oracle PRM Administrator complete the online Advanced Specialization Application. Oracle will validate the information provided, and once approved, you will receive notification from Oracle of your awarded Advanced Specialized status.
    There are additional resources available on OPN as well as in the broader WebCenter community.

    Read the article

  • Handle "Cannot access a closed resource set"

    - by Philip
    I have a website with several languages in a database. From the database I use ResXResourceWriter to create my .resx files. This is working really well, but sometimes I get this exception:
    MESSAGE: Cannot access a closed resource set.
    SOURCE: mscorlib
    FORM:
    QUERYSTRING:
    TARGETSITE: System.Object GetObject(System.String, Boolean, Boolean)
    STACKTRACE:
      at System.Resources.RuntimeResourceSet.GetObject(String key, Boolean ignoreCase, Boolean isString)
      at System.Resources.RuntimeResourceSet.GetString(String key, Boolean ignoreCase)
      at System.Resources.ResourceManager.GetString(String name, CultureInfo culture)
      at System.Linq.Expressions.Expression.ValidateStaticOrInstanceMethod(Expression instance, MethodInfo method)
      at System.Linq.Expressions.Expression.Call(Expression instance, MethodInfo method, IEnumerable`1 arguments)
      at System.Data.Linq.DataContext.GetMethodCall(Object instance, MethodInfo methodInfo, Object[] parameters)
      at System.Data.Linq.DataContext.ExecuteMethodCall(Object instance, MethodInfo methodInfo, Object[] parameters)
      at Business.DatabaseModelDataContext.Web_GetMostPlayedEvents(String cultureCode)
      at Presentation.Default.Page_Load(Object sender, EventArgs e)
      at System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e)
      at System.Web.UI.Control.LoadRecursive()
      at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
    I don't know why this is happening or how to solve it. Does anyone know anything about this? Thanks, Philip
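    The question never pins down where the resource set gets closed, but this exception typically shows up when one request is still reading through a cached ResourceManager/ResourceSet while other code closes or rewrites the underlying resources. As a purely illustrative sketch (the class name, file path, and locking scheme below are invented, not taken from the question; ResXResourceSet/ResXResourceWriter live in System.Windows.Forms.dll), one way to regenerate a .resx at runtime without letting readers touch a stale, closed set is to serialize the two operations and reopen the set after each rewrite:

        // C# sketch: guard .resx regeneration and reads with one lock (hypothetical helper).
        using System.Collections.Generic;
        using System.Resources;

        public static class ResxCache
        {
            private static readonly object Sync = new object();
            private static ResXResourceSet _current;                      // cached set, reopened after rewrites
            private const string ResxPath = @"App_Data\strings.resx";     // assumed location

            public static string GetString(string key)
            {
                lock (Sync)
                {
                    if (_current == null)
                        _current = new ResXResourceSet(ResxPath);         // reopen after a rewrite
                    return _current.GetString(key);
                }
            }

            public static void Rewrite(IDictionary<string, string> values)
            {
                lock (Sync)
                {
                    using (var writer = new ResXResourceWriter(ResxPath))
                    {
                        foreach (var pair in values)
                            writer.AddResource(pair.Key, pair.Value);
                        writer.Generate();                                // flush the new .resx to disk
                    }
                    if (_current != null) _current.Close();               // drop the now-stale set
                    _current = null;                                      // next read reopens it
                }
            }
        }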

    Read the article

  • Linq To Objects Auto Increment Number

    - by Nathan
    This feels like a completely basic question, but, for the life of me, I can't seem to work out an elegant solution. Basically, I am doing a Linq query creating a new object from the query. In the new object, I want to generate an auto-incremented number to allow me to keep a selection order for later use (named Iter in my example). Here is my current solution that does what I need:
        Dim query2 = From x As DictionaryEntry In MasterCalendarInstance _
            Order By x.Key _
            Select New With {.CalendarId = x.Key, .Iter = 0}
        For i = 0 To query2.Count - 1
            query2(i).Iter = i
        Next
    Is there a way to do this within the context of the Linq query (so that I don't have to loop the collection after the query)? Thanks!
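    For what it's worth, the Select operator has an overload that supplies each element's zero-based index, which removes the need for the follow-up loop. A small sketch of that idea in C# syntax (the equivalent Select(Function(x, i) ...) overload exists in VB; the dictionary here is just a stand-in for MasterCalendarInstance):

        // C# sketch of the indexed Select overload (collection contents are illustrative).
        using System.Collections.Generic;
        using System.Linq;

        class Demo
        {
            static void Main()
            {
                var masterCalendarInstance = new Dictionary<int, string>
                {
                    { 3, "c" }, { 1, "a" }, { 2, "b" }
                };

                // Order by the key, then let Select hand us the position of each element as 'i'.
                var query2 = masterCalendarInstance
                    .OrderBy(x => x.Key)
                    .Select((x, i) => new { CalendarId = x.Key, Iter = i })
                    .ToList();
            }
        }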

    Read the article

  • NHibernate, could not load an entity when column exists in the database.

    - by Eitan
    This is probably a simple question to answer but I just can't figure it out. I have a "Company" class with a many-to-one to "Address" which has a many to one to a composite id in "City". When I load a "Company" it loads the "Address", but if I call any property of "Address" I get the error: {"could not load an entity: [IDI.Domain.Entities.Address#2213][SQL: SELECT address0_.AddressId as AddressId13_0_, address0_.Street as Street13_0_, address0_.floor as floor13_0_, address0_.room as room13_0_, address0_.postalcode as postalcode13_0_, address0_.CountryCode as CountryC6_13_0_, address0_.CityName as CityName13_0_ FROM Address address0_ WHERE address0_.AddressId=?]"} The inner exception is: {"Invalid column name 'CountryCode'.\r\nInvalid column name 'CityName'."} What I don't understand is that I can run the query in sql server 2005 and it works, furthermore both those columns exist in the address table. Here are my HBMs: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="IDI.Domain" namespace="IDI.Domain.Entities" > <class name="IDI.Domain.Entities.Company,IDI.Domain" table="Companies"> <id column="CompanyId" name="CompanyId" unsaved-value="0"> <generator class="native"></generator> </id> <property column="Name" name="Name" not-null="true" type="String"></property> <property column="NameEng" name="NameEng" not-null="false" type="String"></property> <property column="Description" name="Description" not-null="false" type="String"></property> <property column="DescriptionEng" name="DescriptionEng" not-null="false" type="String"></property> <many-to-one name="Address" column="AddressId" not-null="false" cascade="save-update" class="IDI.Domain.Entities.Address,IDI.Domain"></many-to-one> <property column="Telephone" name="Telephone" not-null="false" type="String"></property> <property column="TelephoneTwo" name="TelephoneTwo" not-null="false" type="String"></property> <property column="Fax" name="Fax" not-null="false" type="String"></property> <property column="ContactMan" name="ContactMan" not-null="false" type="String"></property> <property column="ContactManEng" name="ContactManEng" not-null="false" type="String"></property> <property column="Email" name="Email" not-null="false" type="String"></property> </class> </hibernate-mapping> <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="IDI.Domain" namespace="IDI.Domain.Entities" > <class name="IDI.Domain.Entities.Address,IDI.Domain" table="Address"> <id name="AddressId" column="AddressId" type="Int32"> <generator class="native"></generator> </id> <property name="Street" column="Street" not-null="false" type="String"></property> <property name="Floor" column="floor" not-null="false" type="Int32"></property> <property name="Room" column="room" not-null="false" type="Int32"></property> <property name="PostalCode" column="postalcode" not-null="false" type="string"></property> <many-to-one class="IDI.Domain.Entities.City,IDI.Domain" name="City" update="false" insert="false"> <column name="CountryCode" sql-type="String" ></column> <column name="CityName" sql-type="String"></column> </many-to-one> </class> </hibernate-mapping> <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="IDI.Domain" namespace="IDI.Domain.Entities" > <class name="IDI.Domain.Entities.City,IDI.Domain" table="Cities"> <composite-id> <key-many-to-one class="IDI.Domain.Entities.Country,IDI.Domain" name="CountryCode" 
column="CountryCode"> </key-many-to-one> <key-property name="Name" column="Name" type="string"></key-property> </composite-id> </class> </hibernate-mapping> Here is my code that calls the Company: IList<BursaUser> user; if(String.IsNullOrEmpty(email) && String.IsNullOrEmpty(company)) return null; ICriteria criteria = Session.CreateCriteria(typeof (BursaUser), "user").CreateCriteria("Company", "comp"); if(String.IsNullOrEmpty(email) || String.IsNullOrEmpty(company) ) { user = String.IsNullOrEmpty(email) ? criteria.Add(Expression.Eq("comp.Name", company)).List<BursaUser>() : criteria.Add(Expression.Eq("user.Email", email)).List<BursaUser>(); } And finally here is where I get the error; "user" was already initialized with the code above: if (user.Company.Address.City == null) user.Company.Address.City = new City(); Thanks.

    Read the article

  • Python Naming Conventions for Dictionaries/Maps/Hashes

    - by pokstad
    While other questions have tackled the broader category of sequences and modules, I ask this very specific question: "What naming convention do you use for dictionaries and why?" Some naming convention samples I have been considering:
        # 'value' is the data type stored in the map, while 'key' is the type of key
        value_for_key = {key1: value1, key2: value2}
        value_key = {key1: value1, key2: value2}
        v_value_k_key = {key1: value1, key2: value2}
    Don't bother answering the 'why' with "because my work tells me to"; that's not very helpful. The reason driving the choice is more important. Are there any other good considerations for a dictionary naming convention aside from readability?

    Read the article

  • Using constructor to load data in subsonic3?

    - by Dennis
    I'm getting an error while trying to load a record through the constructor. The constructor is:
        public Document(Expression<Func<Document,bool>> expression);
    and I try to load a single item like this:
        var x = new Document(f => f.publicationnumber == "xxx");
    publicationnumber isn't a key, but I tried making it a unique key and it's still a no-go. Am I totally wrong regarding the use of the constructor? Can someone please tell me how to use that constructor? The error I'm getting is:
        Test method TestProject1.UnitTest1.ParseFileNameTwoProductSingleLanguage threw exception: System.NullReferenceException
    with the following stacktrace:
        SubSonic.Query.SqlQuery.Where[T](Expression`1 expression)
        Load[T](T item, Expression`1 expression)
        db.Document..ctor(Expression`1 expression) in C:\@Projects\DocumentsSearchAndAdmin\DocumentsSearchAndAdmin\Generated\ActiveRecord.cs: line 5613
    (rest removed for simplicity) Regards, Dennis

    Read the article

  • Entity Framework - SaveChanges with GUID as EntityKey

    - by MissingLinq
    I have a SQL Server 2008 database table that uses uniqueidentifier as a primary key. On inserts, the key is generated on the database side using the newid() function. This works fine with ADO.NET. But when I set up this table as an entity in an Entity Framework 4 model, there's a problem. I am able to query the entity just fine, but when creating a new entity and invoking SaveChanges() on the context, the generated uniqueidentifier on the database is all zeros. I understand there was an issue with EF v1 where this scenario did not work, requiring creating the GUID on the client prior to calling SaveChanges. However, I had read in many places that they were planning to fix this in EF 4. My question -- is this scenario (DB-side generation of uniqueidentifier) still not supported in EF4? Are we still stuck with generating the GUID on the client?
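    For context, the client-side workaround mentioned above looks roughly like the sketch below; the context and entity names are invented, not taken from the question. The comment also notes the approach most often suggested for letting the database's newid() default do the work instead, which involves marking the key column as store-generated in the model's storage (SSDL) section.

        // C# sketch of the client-side workaround (MyEntities and Product are hypothetical names).
        using System;

        public class Example
        {
            public void Insert(MyEntities context)          // hypothetical EF4 ObjectContext
            {
                var product = new Product();                // hypothetical entity with a Guid key

                // Assign the key on the client so SaveChanges() never inserts an empty GUID.
                // The commonly suggested alternative is to edit the .edmx storage model and set
                // StoreGeneratedPattern="Identity" on the uniqueidentifier column, so the
                // database's newid() default runs and EF reads the generated value back.
                product.ProductId = Guid.NewGuid();

                context.Products.AddObject(product);
                context.SaveChanges();
            }
        }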

    Read the article

  • Core Data NSPredicate for relationships.

    - by Mugunth Kumar
    My object graph is simple. I've a FeedEntry object that stores info about RSS feeds and a relationship called Tag that links to the "TagValues" object. Both the relation and its inverse are to-many, i.e. a feed can have multiple tags and a tag can be associated with multiple feeds. I referred to http://stackoverflow.com/questions/844162/how-to-do-core-data-queries-through-a-relationship and created a NSFetchRequest. But when I fetch data, I get an exception stating: NSInvalidArgumentException unimplemented SQL generation for predicate. What should I do? I'm a newbie to core data :( I know I've done something terribly wrong... Please help... Thanks
    --
    NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
    // Edit the entity name as appropriate.
    NSEntityDescription *entity = [NSEntityDescription entityForName:@"FeedEntry" inManagedObjectContext:managedObjectContext];
    [fetchRequest setEntity:entity];
    // Edit the sort key as appropriate.
    NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"authorname" ascending:NO];
    NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
    [fetchRequest setSortDescriptors:sortDescriptors];
    NSEntityDescription *tagEntity = [NSEntityDescription entityForName:@"TagValues" inManagedObjectContext:self.managedObjectContext];
    NSPredicate *tagPredicate = [NSPredicate predicateWithFormat:@"tagName LIKE[c] 'nyt'"];
    NSFetchRequest *tagRequest = [[NSFetchRequest alloc] init];
    [tagRequest setEntity:tagEntity];
    [tagRequest setPredicate:tagPredicate];
    NSError *error = nil;
    NSArray* predicates = [self.managedObjectContext executeFetchRequest:tagRequest error:&error];
    TagValues *tv = (TagValues*) [predicates objectAtIndex:0];
    NSLog(tv.tagName); // it is nyt here...
    NSPredicate *predicate = [NSPredicate predicateWithFormat:@"tag IN %@", predicates];
    [fetchRequest setPredicate:predicate];
    // Edit the section name key path and cache name if appropriate.
    // nil for section name key path means "no sections".
    NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:nil cacheName:@"Root"];
    aFetchedResultsController.delegate = self;
    self.fetchedResultsController = aFetchedResultsController;
    --

    Read the article

  • Choosing between Berkeley DB Core and Berkeley DB JE

    - by zokier
    I'm designing a Java-based web app and I need a key-value store. Berkeley DB seems fitting enough for me, but there appear to be TWO Berkeley DBs to choose from: Berkeley DB Core, which is implemented in C, and Berkeley DB Java Edition, which is implemented in pure Java. The question is, how do I choose which one to use? With web apps, scalability and performance are quite important (who knows, maybe my idea will become the next Youtube), and I couldn't easily find any meaningful benchmarks between the two. I have yet to familiarize myself with Core's Java API, but I find it hard to believe that it could be much worse than Java Edition's, which seems to be quite nice. If some other key-value store would be much better, feel free to recommend that too. I'm storing smallish binary blobs, and keys will probably be hashes of the data, or some other unique id.

    Read the article

  • SSLException: Keystore does not support enabled cipher suites

    - by wurfkeks
    I want to implement a small Android application that works as an SSL server. After a lot of problems with the right format of the keystore, I solved that and ran into the next one. My keystore file is properly loaded by the KeyStore class, but when I try to open the server socket (socket.accept()) the following error is raised: javax.net.ssl.SSLException: Could not find any key store entries to support the enabled cipher suites. I generated my keystore with this command:
        keytool -genkey -keystore test.keystore -keyalg RSA -keypass ssltest -storepass ssltest -storetype BKS -provider org.bouncycastle.jce.provider.BouncyCastleProvider -providerpath bcprov.jar
    with the Unlimited Strength Jurisdiction Policy for Java SE 6 applied to my jre6. I got a list of supported cipher suites by calling socket.getSupportedCipherSuites(), which prints a long list with very different combinations, but I don't know how to get a supported key. I also tried the Android debug keystore after converting it to BKS format using portecle, but still get the same error. Can anyone help and tell me how I can generate a key that is compatible with one of the cipher suites?
    Version information: targetSDK: 15; tested on an emulator running 4.0.3 and a real device running 2.3.3; BouncyCastle 1.46; portecle 1.7.
    Code of my test application:
        public class SSLTestActivity extends Activity implements Runnable {
            SSLServerSocket mServerSocket;
            ToggleButton tglBtn;
            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                this.tglBtn = (ToggleButton) findViewById(R.id.toggleButton1);
                tglBtn.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() {
                    @Override
                    public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
                        if (isChecked) {
                            new Thread(SSLTestActivity.this).run();
                        } else {
                            try {
                                if (mServerSocket != null) mServerSocket.close();
                            } catch (IOException e) {
                                Log.e("SSLTestActivity", e.toString());
                            }
                        }
                    }
                });
            }
            @Override
            public void run() {
                try {
                    KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
                    keyStore.load(getAssets().open("test.keystore"), "ssltest".toCharArray());
                    ServerSocketFactory socketFactory = SSLServerSocketFactory.getDefault();
                    mServerSocket = (SSLServerSocket) socketFactory.createServerSocket(8080);
                    while (!mServerSocket.isClosed()) {
                        Socket client = mServerSocket.accept();
                        PrintWriter output = new PrintWriter(client.getOutputStream(), true);
                        output.println("So long, and thanks for all the fish!");
                        client.close();
                    }
                } catch (Exception e) {
                    Log.e("SSLTestActivity", e.toString());
                }
            }
        }

    Read the article

  • PASS Summit Feedback

    - by Rob Farley
    PASS Feedback came in last week. I also saw my dentist for some fillings... At the PASS Summit this year, I delivered a couple of regular sessions and a Lightning Talk. People told me they enjoyed it, but when the rankings came out, they showed that I didn’t score particularly well. Brent Ozar was keen to discuss it with me.
    Brent: PASS speaker feedback is out. You did two sessions and a Lightning Talk. How did you go?
    Rob: Not so well actually, thanks for asking.
    Brent: Ha! Sorry. Of course you know that's why I wanted to discuss this with you. I was in one of your sessions at SQLBits in the UK a month before PASS, and I thought you rocked. You've got a really good and distinctive delivery style. Then I noticed your talks were ranked in the bottom quarter of the Summit ratings and wanted to discuss it.
    Rob: Yeah, I know. You did ask me if we could do this... I should explain – my presentation style is not the stereotypical IT conference one. I throw in jokes, and try to engage the audience thoroughly. I find many talks amazingly dry, and I guess I try to buck that trend. I also run training courses, and find that I get a lot of feedback from people thanking me for keeping things interesting. That said, I also get feedback criticising me for my style, and that’s basically what’s happened here. For the rest of this discussion, let’s focus on my talk about the Incredible Shrinking Execution Plan, which I considered to be my main talk.
    Brent: I thought that session title was the very best one at the entire Summit, and I had it on my recommended sessions list. In four words, you managed to sum up the topic and your sense of humor. I read that and immediately thought, "People need to be in this session," and then it didn't score well. Tell me about your scores.
    Rob: The questions on the feedback form covered the usefulness of the information, the speaker’s presentation skills, their knowledge of the subject, how well the session was described, the amount of time allocated, and the quality of the presentation materials.
    Brent: Presentation materials? But you don’t do slides. Did they rate your thong?
    Rob: No-one saw my flip-flops in this talk, Brent. I created a script in Management Studio, and published that afterwards, but I think people will have scored that question based on the lack of slides. I wasn’t expecting to do particularly well on that one. That was the only section that didn’t have 5/5 as the most popular score.
    Brent: See, that sucks, because cookbook-style scripts are often some of my favorites. Adam Machanic's Service Broker workbench series helped me immensely when I was prepping for the MCM. As an attendee, I'd rather have a commented script than a slide deck. So how did you rank so low?
    Rob: When I look at the scores that you got (based on your blog post), you got very few scores below 3 – people that felt strongly enough about your talk to post a negative score. In my scores, between 5% and 10% were below 3 (except on the question about whether I knew my stuff – I guess I came across as knowledgeable).
    Brent: Wow – so quite a few people really didn’t like your talk then?
    Rob: Yeah. Mind you, based on the comments, some people really loved it. I’d like to think that there would be a certain portion of the room who may have rated the talk as one of the best of the conference. Some of my comments included “amazing!”, “Best presentation so far!”, “Wow, best session yet”, “fantastic” and “Outstanding!”.
    I think lots of talks can be “Great”, but not so many talks can be “Outstanding” without the word losing its meaning. One wrote “Pretty amazing presentation, considering it was completely extemporaneous.”
    Brent: Extemporaneous, eh?
    Rob: Yeah. I guess they don’t realise how much preparation goes into coming across as unprepared. In many ways it’s much easier to give a written speech than to deliver a presentation without slides as a prompt.
    Brent: That delivery style, the really relaxed, casual, college-professor approach was one of the things I really liked about your presentation at SQLbits. As somebody who presents a lot, I "get" it - I know how hard it is to come off as relaxed and comfortable with your own material. It's like improv done by jazz players and comedians - if you've never tried it, you don't realize how hard it is. People also don't realize how hard it is to make a tough subject fun.
    Rob: Yeah well... There will be people writing comments on this post that say I wasn't trying to make the subject fun, and that I was making it all about me. Sometimes the style works, sometimes it doesn't. Most of the comments mentioned the fact that I tell jokes, some in a nice way, but some not so much (and it wasn't just a PASS thing - that's the mix of feedback I generally get). One comment at PASS was: “great stand up comedian - not what I'm looking for at pass”, and there were certainly a few that said “too many jokes”. I’m not trying to do stand-up – jokes are my way of engaging with the audience while I demonstrate some of the amazing things that the Query Optimizer can do if you write your queries the right way. Some people didn’t think it was technical enough, but I’ve also had some people tell me that the concepts I’m explaining are deep and profound.
    Brent: To me, that's a hallmark of a great explanation - when someone says, "But of course it has to work that way - how could it work any other way? It seems so simple and logical." Well, sure it does when it's explained correctly, but now pick up any number of thick SQL Server books and try to understand the Redundant Joins concept. I guarantee it'll take more than 45 minutes.
    Rob: Some people in my audiences realise that, but definitely not everyone. There's only so much you can tell someone that something is profound. Generally it's something that they either have an epiphany on or not. I like to lull my audience into knowing what's going on, and do something that surprises them. Gain their trust, build a rapport, and then show them the deeper truth of what just happened.
    Brent: So you've learned your lesson about presentation scores, right? From here on out, you're going to be dry, humorless, and all your presentations will consist of you reading bullet points off the screen.
    Rob: No Brent, I’m not. I'm also not going to suggest that most presentations at PASS are like that. No-one tries to present like that. There's a big space to occupy between "dry and humourless" and me. My difference is to focus on the relationship I have with the crowd, rather than focussing on delivering the perfect session. I want to see people smiling and know they're relaxed. I think most presenters focus on the material, which is completely reasonable and safe. I remember once hearing someone talking about product creation. They talked about mediocrity. They said that one of the worst things that people can ever say about your product is that it’s “good”. What you want is for 10% of the world to love it enough to want to buy it.
    If 10% of the world gave me a dollar, I’d have more money than I could ever use (assuming it wasn’t the SAME dollar they were giving me, I guess).
    Brent: It's the Raving Fans theory. It's better to have a small number of raving customers than a large number of almost-but-not-really customers who don't care that much about your product or service. I know exactly how you feel - when I got survey feedback from my Quest video presentation when I was dressed up in a Richard Simmons costume, some of the attendees said I was unprofessional and distracting. Some of the attendees couldn't get enough and Photoshopped all kinds of stuff into the screen captures. On the whole, I probably didn't score that well, and I'm fine with that. It sucks to look at the scores though - do those lower scores bother you?
    Rob: Of course they do. It hurts deeply. I open myself up and give presentations in a very personal way. All presenters do that, and we all feel the pain of negative feedback. I hate coming 146th & 162nd out of 185, but have to acknowledge that many sessions did worse still. Plus, once I feel the wounds have healed, I’ll be able to remember that there are people in the world that rave about my presentation style, and figure that people will hopefully talk about me. One day maybe those people that don’t like my presentation style will stay away and I might be able to score better. You don’t pay to hear country music if you prefer western... Lots of people find chili too spicy, but it’s still a popular food.
    Brent: But don’t you want to appeal to everyone?
    Rob: I do, but I don’t want to be lukewarm as in Revelation 3:16. I’d rather disgust and be discussed. Well, maybe not ‘disgust’, but I don’t want to conform. Conformity just isn’t the same any more. I’m not sure I’ve ever been one to do that. I try not to offend, but definitely like to be different.
    Brent: Count me among your raving fans, sir. Where can we see you next?
    Rob: Considering I live in Adelaide in Australia, I’m not about to appear at anyone’s local SQL Saturday. I’m still trying to plan which events I’ll get to in 2011. I’ve submitted abstracts for TechEd North America, but won’t hold my breath. I’m also considering the SQLBits conferences in the UK in April, PASS in October, and I’m sure I’ll do some LiveMeeting presentations for user groups. Online, people can download some of my recent SQLBits presentations at http://bit.ly/RFSarg and http://bit.ly/Simplification. And they can download a 5-minute MP3 of my Lightning Talk at http://www.lobsterpot.com.au/files/Collation.mp3, in which I try to explain the idea behind collation, using thongs as an example.
    Brent: I was in the audience for http://bit.ly/RFSarg. That was a great presentation.
    Rob: Thanks, Brent. Now where’s my dollar?

    Read the article

  • Easiest way to decrypt PGP-encrypted files from VBA (MS Access)

    - by stucampbell
    I need to write code that picks up PGP-encrypted files from an FTP location and processes them. The files will be encrypted with my public key (not that I have one yet). Obviously, I need a PGP library that I can use from within Microsoft Access. Can you recommend one that is easy to use? I'm looking for something that doesn't require a huge amount of PKI knowledge. Ideally, something that will easily generate the one-off private/public key pair, and then have a simple routine for decryption.

    Read the article

  • Announcing ASP.NET MVC 3 (Release Candidate 2)

    - by ScottGu
    Earlier today the ASP.NET team shipped the final release candidate (RC2) for ASP.NET MVC 3.  You can download and install it here. Almost there… Today’s RC2 release is the near-final release of ASP.NET MVC 3, and is a true “release candidate” in that we are hoping to not make any more code changes with it.  We are publishing it today so that people can do final testing with it, let us know if they find any last minute “showstoppers”, and start updating their apps to use it.  We will officially ship the final ASP.NET MVC 3 “RTM” build in January. Works with both VS 2010 and VS 2010 SP1 Beta Today’s ASP.NET MVC 3 RC2 release works with both the shipping version of Visual Studio 2010 / Visual Web Developer 2010 Express, as well as the newly released VS 2010 SP1 Beta.  This means that you do not need to install VS 2010 SP1 (or the SP1 beta) in order to use ASP.NET MVC 3.  It works just fine with the shipping Visual Studio 2010.  I’ll do a blog post next week, though, about some of the nice additional feature goodies that come with VS 2010 SP1 (including IIS Express and SQL CE support within VS) which make the dev experience for both ASP.NET Web Forms and ASP.NET MVC even better. Bugs and Perf Fixes Today’s ASP.NET MVC 3 RC2 build contains many bug fixes and performance optimizations.  Our latest performance tests indicate that ASP.NET MVC 3 is now faster than ASP.NET MVC 2, and that existing ASP.NET MVC applications will experience a slight performance increase when updated to run using ASP.NET MVC 3. Final Tweaks and Fit-N-Finish In addition to bug fixes and performance optimizations, today’s RC2 build contains a number of last-minute feature tweaks and “fit-n-finish” changes for the new ASP.NET MVC 3 features.  The feedback and suggestions we’ve received during the public previews has been invaluable in guiding these final tweaks, and we really appreciate people’s support in sending this feedback our way.  Below is a short-list of some of the feature changes/tweaks made between last month’s ASP.NET MVC 3 RC release and today’s ASP.NET MVC 3 RC2 release: jQuery updates and addition of jQuery UI The default ASP.NET MVC 3 project templates have been updated to include jQuery 1.4.4 and jQuery Validation 1.7.  We are also excited to announce today that we are including jQuery UI within our default ASP.NET project templates going forward.  jQuery UI provides a powerful set of additional UI widgets and capabilities.  It will be added by default to your project’s \scripts folder when you create new ASP.NET MVC 3 projects. Improved View Scaffolding The T4 templates used for scaffolding views with the Add-View dialog now generates views that use Html.EditorFor instead of helpers such as Html.TextBoxFor. This change enables you to optionally annotate models with metadata (using data annotation attributes) to better customize the output of your UI at runtime. The Add View scaffolding also supports improved detection and usage of primary key information on models (including support for naming conventions like ID, ProductID, etc).  For example: the Add View dialog box uses this information to ensure that the primary key value is not scaffold as an editable form field, and that links between views are auto-generated correctly with primary key information. The default Edit and Create templates also now include references to the jQuery scripts needed for client validation.  Scaffold form views now support client-side validation by default (no extra steps required).  
Client-side validation with ASP.NET MVC 3 is also done using an unobtrusive javascript approach – making pages fast and clean. [ControllerSessionState] –> [SessionState] ASP.NET MVC 3 adds support for session-less controllers.  With the initial RC you used a [ControllerSessionState] attribute to specify this.  We shortened this in RC2 to just be [SessionState]: Note that in addition to turning off session state, you can also set it to be read-only (which is useful for webfarm scenarios where you are reading but not updating session state on a particular request). [SkipRequestValidation] –> [AllowHtml] ASP.NET MVC includes built-in support to protect against HTML and Cross-Site Script Injection Attacks, and will throw an error by default if someone tries to post HTML content as input.  Developers need to explicitly indicate that this is allowed (and that they’ve hopefully built their app to securely support it) in order to enable it. With ASP.NET MVC 3, we are also now supporting a new attribute that you can apply to properties of models/viewmodels to indicate that HTML input is enabled, which enables much more granular protection in a DRY way.  In last month’s RC release this attribute was named [SkipRequestValidation].  With RC2 we renamed it to [AllowHtml] to make it more intuitive: Setting the above [AllowHtml] attribute on a model/viewmodel will cause ASP.NET MVC 3 to turn off HTML injection protection when model binding just that property. Html.Raw() helper method The new Razor view engine introduced with ASP.NET MVC 3 automatically HTML encodes output by default.  This helps provide an additional level of protection against HTML and Script injection attacks. With RC2 we are adding a Html.Raw() helper method that you can use to explicitly indicate that you do not want to HTML encode your output, and instead want to render the content “as-is”: ViewModel/View –> ViewBag ASP.NET MVC has (since V1) supported a ViewData[] dictionary within Controllers and Views that enables developers to pass information from a Controller to a View in a late-bound way.  This approach can be used instead of, or in combination with, a strongly-typed model class.  The below code demonstrates a common use case – where a strongly typed Product model is passed to the view in addition to two late-bound variables via the ViewData[] dictionary: With ASP.NET MVC 3 we are introducing a new API that takes advantage of the dynamic type support within .NET 4 to set/retrieve these values.  It allows you to use standard “dot” notation to specify any number of additional variables to be passed, and does not require that you create a strongly-typed class to do so.  With earlier previews of ASP.NET MVC 3 we exposed this API using a dynamic property called “ViewModel” on the Controller base class, and with a dynamic property called “View” within view templates.  A lot of people found the fact that there were two different names confusing, and several also said that using the name ViewModel was confusing in this context – since often you create strongly-typed ViewModel classes in ASP.NET MVC, and they do not use this API.  With RC2 we are exposing a dynamic property that has the same name – ViewBag – within both Controllers and Views.  It is a dynamic collection that allows you to pass additional bits of data from your controller to your view template to help generate a response.  
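    The code screenshots from the original post are not reproduced in this text, so here is a small, hand-written C# sketch (controller, model, and property names are invented) of how the renamed [SessionState] and [AllowHtml] attributes and the new Html.Raw() helper described above are typically applied:

        using System.Web.Mvc;
        using System.Web.SessionState;

        // Session-less controller: [ControllerSessionState] was shortened to [SessionState] in RC2.
        [SessionState(SessionStateBehavior.Disabled)]
        public class StatusController : Controller
        {
            public ActionResult Index()
            {
                return View();
            }
        }

        // Granular opt-in to HTML input: [SkipRequestValidation] was renamed to [AllowHtml].
        public class BlogPostViewModel
        {
            public string Title { get; set; }

            [AllowHtml]
            public string Body { get; set; }   // HTML-injection protection is skipped for this property only
        }

        // In a Razor view, @Html.Raw(Model.Body) would then render the stored markup
        // without the automatic HTML encoding that Razor applies by default.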
Below is an example of how we could use it to pass a time-stamp message as well as a list of all categories to our view template: Below is an example of how our view template (which is strongly-typed to expect a Product class as its model) can use the two extra bits of information we passed in our ViewBag to generate the response.  In particular, notice how we are using the list of categories passed in the dynamic ViewBag collection to generate a dropdownlist of friendly category names to help set the CategoryID property of our Product object.  The above Controller/View combination will then generate an HTML response like below.    Output Caching Improvements ASP.NET MVC 3’s output caching system no longer requires you to specify a VaryByParam property when declaring an [OutputCache] attribute on a Controller action method.  MVC3 now automatically varies the output cached entries when you have explicit parameters on your action method – allowing you to cleanly enable output caching on actions using code like below: In addition to supporting full page output caching, ASP.NET MVC 3 also supports partial-page caching – which allows you to cache a region of output and re-use it across multiple requests or controllers.  The [OutputCache] behavior for partial-page caching was updated with RC2 so that sub-content cached entries are varied based on input parameters as opposed to the URL structure of the top-level request – which makes caching scenarios both easier and more powerful than the behavior in the previous RC. @model declaration does not add whitespace In earlier previews, the strongly-typed @model declaration at the top of a Razor view added a blank line to the rendered HTML output. This has been fixed so that the declaration does not introduce whitespace. Changed "Html.ValidationMessage" Method to Display the First Useful Error Message The behavior of the Html.ValidationMessage() helper was updated to show the first useful error message instead of simply displaying the first error. During model binding, the ModelState dictionary can be populated from multiple sources with error messages about the property, including from the model itself (if it implements IValidatableObject), from validation attributes applied to the property, and from exceptions thrown while the property is being accessed. When the Html.ValidationMessage() method displays a validation message, it now skips model-state entries that include an exception, because these are generally not intended for the end user. Instead, the method looks for the first validation message that is not associated with an exception and displays that message. If no such message is found, it defaults to a generic error message that is associated with the first exception. RemoteAttribute “Fields” -> “AdditionalFields” ASP.NET MVC 3 includes built-in remote validation support with its validation infrastructure.  This means that the client-side validation script library used by ASP.NET MVC 3 can automatically call back to controllers you expose on the server to determine whether an input element is indeed valid as the user is editing the form (allowing you to provide real-time validation updates). You can accomplish this by decorating a model/viewmodel property with a [Remote] attribute that specifies the controller/action that should be invoked to remotely validate it.  
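    Again, the screenshots are missing from this text; the following C# sketch (all names invented, kept repository-free for brevity) approximates the ViewBag, [OutputCache], and [Remote] usage being described:

        using System;
        using System.Web.Mvc;

        public class ProductsController : Controller
        {
            // ViewBag: late-bound extras passed alongside (or instead of) a strongly-typed model.
            public ActionResult Edit(int id)
            {
                ViewBag.Message = "Prices updated " + DateTime.Now.ToShortTimeString();
                ViewBag.Categories = new[] { "Books", "Music", "Video" };   // stand-in for a database lookup
                return View();
            }

            // Output caching without VaryByParam; RC2 varies cached entries by the action's parameters.
            [OutputCache(Duration = 60)]
            public ActionResult Details(int id)
            {
                return View();
            }
        }

        // Remote validation: extra form fields to send along are listed via AdditionalFields (discussed below).
        public class RegisterViewModel
        {
            [Remote("CheckUserName", "Account", AdditionalFields = "CompanyId")]
            public string UserName { get; set; }

            public int CompanyId { get; set; }
        }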
With the RC this attribute had a “Fields” property that could be used to specify additional input elements that should be sent from the client to the server to help with the validation logic.  To improve the clarity of what this property does we have renamed it to “AdditionalFields” with today’s RC2 release. ViewResult.Model and ViewResult.ViewBag Properties The ViewResult class now exposes both a “Model” and “ViewBag” property off of it.  This makes it easier to unit test Controllers that return views, and avoids you having to access the Model via the ViewResult.ViewData.Model property. Installation Notes You can download and install the ASP.NET MVC 3 RC2 build here.  It can be installed on top of the previous ASP.NET MVC 3 RC release (it should just replace the bits as part of its setup). The one component that will not be updated by the above setup (if you already have it installed) is the NuGet Package Manager.  If you already have NuGet installed, please go to the Visual Studio Extensions Manager (via the Tools –> Extensions menu option) and click on the “Updates” tab.  You should see NuGet listed there – please click the “Update” button next to it to have VS update the extension to today’s release. If you do not have NuGet installed (and did not install the ASP.NET MVC RC build), then NuGet will be installed as part of your ASP.NET MVC 3 setup, and you do not need to take any additional steps to make it work. Summary We are really close to the final ASP.NET MVC 3 release, and will deliver the final “RTM” build of it next month.  It has been only a little over 7 months since ASP.NET MVC 2 shipped, and I’m pretty amazed by the huge number of new features, improvements, and refinements that the team has been able to add with this release (Razor, Unobtrusive JavaScript, NuGet, Dependency Injection, Output Caching, and a lot, lot more).  I’ll be doing a number of blog posts over the next few weeks talking about many of them in more depth. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Optimize css vs Google page speed is messing with me

    - by The Disintegrator
    I'm using Google Page Speed and it's telling me my CSS is inefficient...
    Very inefficient rules (good to fix on any page):
    * table.fancy thead td: Tag key with 2 descendant selectors and Class overly qualified with tag
    * table.fancy tfoot td: Tag key with 2 descendant selectors and Class overly qualified with tag
    The CSS rules are:
        table.fancy {border: 1px solid white; padding:5px}
        table.fancy td {background:#656165}
        table.fancy thead td, table.fancy tfoot td {background:#767276}
    I want the header and footer in a different background color than the body of the table (a data table). On what grounds is this inefficient? How can I make it more efficient? I will not add a class to the thead and tfoot for Google's sake.

    Read the article

  • In NHibernate, how do I combine two DetachedCriteria instances

    - by Trevor
    My scenario is this: I have a base NHibernate query to run of the form (I've coded it using DetachedCriteria, but describe it here using SQL syntax):
        SELECT * FROM Items I INNER JOIN SubItems S ON S.FK = I.Key
    The user interface to show the results of this join allows the user to specify additional criteria, say: I.SomeField = 'UserValue'. Now, I need the final load command to be:
        SELECT * FROM Items I INNER JOIN SubItems S ON S.FK = I.Key WHERE I.SomeField = 'UserValue'
    My problem is: I've created a DetachedCriteria with the 'static' aspect of the query (the top join) and the UI creates a DetachedCriteria with the 'dynamic' component of the query. I need to combine the two into a final query that I can execute on the NHibernate session. DetachedCriteria.Add() takes an ICriterion (which is created using the Expression class, and maybe other classes I don't know of, which could be the solution to my problem). Does anyone know how I might do what I want?
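    No definitive answer appears in this entry, but one common pattern, sketched below with invented entity and property names, is to keep the static join in a single DetachedCriteria and have the UI hand back ICriterion objects (built with Expression/Restrictions) that are Add()-ed to it before execution, rather than trying to merge two DetachedCriteria instances:

        // C# sketch (assumes NHibernate mappings for Item/SubItems and an open ISession).
        using System.Collections.Generic;
        using NHibernate;
        using NHibernate.Criterion;

        public class ItemQueries
        {
            public IList<Item> Load(ISession session, ICriterion userFilter)
            {
                // Static part: Items joined to their SubItems collection.
                DetachedCriteria baseQuery = DetachedCriteria.For<Item>("I");
                baseQuery.CreateCriteria("SubItems", "S");   // inner join via the mapped collection

                // Dynamic part: whatever the UI built, e.g. Restrictions.Eq("I.SomeField", "UserValue").
                if (userFilter != null)
                    baseQuery.Add(userFilter);

                // Attach to the open session and run.
                return baseQuery.GetExecutableCriteria(session).List<Item>();
            }
        }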

    Read the article

  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
    This is the next in the series about SOA Suite integration with Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process using Web Service calls to the Oracle Utilities Application Framework. The scenario is this: I will pass in the userid and the BPEL process will call the AS-User Web Service we created in Part 1. This is just a basic test to illustrate how to import the Web Service into SOA Suite. To use this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product, and Oracle JDeveloper (to build the process).
    First of all you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process in. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. I can also use the same process to create a Mediator or OSB project (refer to the JDeveloper documentation on these technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I name the individual BPEL process simpleBPEL (you can use a different name but I wanted to keep the project and process the same for this example). I will also build a Synchronous BPEL Process as I want a response from the Web Service. I will leave the defaults to save time. I now have a blank canvas to build my BPEL process against.
    Note: for simplicity I am going to use as much defaulting as possible. In fact I am not going to specify an input schema for the incoming call as I will use the basic single field used by BPEL as default.
    The first step is to import the AS-User Web Service into my BPEL project. To do this I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now the tricky part (a joke): you drag and drop the component from the Palette onto the right side of the canvas in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling). When you drop the Web Service onto the canvas the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name. I have used RetrieveUser as the name. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL field. Once you specify the URL you can press the Find existing WSDLs button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check "copy wsdl and its dependent artifacts into the project" if you intend to work on the BPEL process offline. If you do not check this, your target application must be accessible when you work on the BPEL process (and that is not always convenient).
    Note: the perceptive among you will notice that the URL specified in this example is different to the URL in the last post. The reason is that for these demonstrations I shifted to a new server and did not redo all of the past screen captures.
    If you copy the WSDL into the project you will get an information screen about Localize Files. It is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service.
To complete the setup of the Web Service you need to set the credentials for the Web Service to use. Refer to the previous post on how to do that. Now to use the Web Service. To call the Web Service (it has only been imported, not yet connected to the BPEL process), you must add an Invoke action to your BPEL process. To do this, select the Invoke action from the BPEL Constructs zone on the Component Palette and drop it between the receiveInput and replyOutput nodes. This will create an empty Invoke action. You will notice some connectors on the Invoke node. Grab the connector closest to your Web Service and drag it to connect the Invoke to your Web Service. This instructs BPEL to use the Invoke to call the Web Service. Once the Invoke action is connected to the Web Service, an Edit Invoke dialog is displayed. At this point I suggest you name the Invoke node. It is important to name the nodes straight away, and to name them appropriately, so you can trace the logic. I used InvokeUser as the name in this example. To complete the node configuration you must create variables to hold the input and output for the call. To do this, click Automatically Create Input Variable on the Edit Invoke dialog. You will be presented with a default variable name. It uses the node name as a prefix (that is why it is important to name the node before hitting this button). You can name the variable anything, but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service. You have a very basic BPEL process which contains an input, invoke and output node. It is not complete yet though. You need to tell the BPEL process how to pass data from the input to the invoke step, and how to take the output from the service call and pass it back to the caller. You now need to add an Assign node to assign the input to the Web Service. To do this, select the Assign activity from the BPEL Constructs zone in the Component Palette. Drag and drop the Assign activity between the receiveInput and InvokeUser nodes, as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process. Double clicking the node allows you to specify its name. I use AssignUser to describe that I am assigning user data. On the Copy Rules tab you can specify the mapping between the input variable (inputVariable/payload/process/input) and the input variable for the Web Service call. We are passing data from the input to the BPEL process to the relevant input variable on the Web Service. This is simply drag and drop between the two data structures. In the example, I map the input to the user element in my Web Service, as the user is the primary key for the object. The fields become linked (which means data from the source will be copied to the target). Almost there. You now need to map the output from the Web Service call to the outputVariable of the client call. I have decided to pass back one piece of data: the name associated with the user, formed by concatenating the firstName and lastName elements from the Web Service call. To do this I will use a Transform, as it is not just a matter of an Assign action; it is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components, you drag and drop the Transform component to the appropriate place in the BPEL process. 
In this case we want to transform the output from the Web Service call, so we place it between the InvokeUser action and the replyOutput action. The Transform component is actually part of the Oracle Extensions to the BPEL specification. Double clicking the Transform node will allow you to name the node. In this example I used TransformName. To complete the transform I need to tell the product the source of the transformation and the target of the transform. In the example the source is the InvokeUser output variable. I also named the mapper file TransformName. By clicking the + or pencil icon next to the map I can create the map. The mapping screen shows the source and target schemas for me to map across. As with the Assign, I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function onto the mapping line. I now attach the lastName element to the function to indicate the concatenation of the fields. By default the names will be concatenated with no space. To make the name legible I add a space between the fields by clicking the function and adding a space to the call. I now have a completed mapping. I can now save the whole project, as my BPEL process is complete. As you can see, the following happens: (1) we accept input from the client (the userid for the call) in the receiveInput step; (2) we assign that value to the input parameters for the Web Service call in the AssignUser step; (3) we invoke the Web Service call to retrieve the data from the product in the InvokeUser step; (4) we take the output from the InvokeUser step and concatenate the names in the TransformName step; (5) we pass back the data in the replyOutput step. At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect as it is really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite. We will use the Fusion Middleware Control test facility. I will assume that credentials have also been set up as per our previous post (else you will get a 401 error). You navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option. Specify some test data in the payload at the bottom of the Test Service screen. In my case I am retrieving my own userid information. On the Response tab you will see the result. It works. You can verify the steps using the Audit trace facility on individual calls. As you can see this is a basic BPEL process, but you get the idea: importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite. I have just shown you the basic principles.
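    For completeness, the same test can also be driven outside Fusion Middleware Control with a small standalone SOAP client. The sketch below is illustrative only and uses Java's SAAJ API; the endpoint URL, the namespace, the process/input element names and the SYSUSER value are placeholders that must be replaced with whatever your generated WSDL and environment actually expose.

        // Hypothetical standalone test of the synchronous simpleBPEL process using SAAJ (javax.xml.soap).
        // The namespace, element names and endpoint below are placeholders - take the real values from
        // the WSDL that SOA Suite publishes for the deployed composite.
        import javax.xml.namespace.QName;
        import javax.xml.soap.*;

        public class SimpleBpelTest {
            public static void main(String[] args) throws Exception {
                SOAPConnection conn = SOAPConnectionFactory.newInstance().createConnection();
                SOAPMessage request = MessageFactory.newInstance().createMessage();

                SOAPBody body = request.getSOAPBody();
                // "process" and "input" are the default element names JDeveloper generates for a
                // synchronous BPEL process; adjust them to match your WSDL.
                SOAPElement process = body.addChildElement(
                        new QName("http://xmlns.example.com/simpleBPEL", "process", "ns1"));
                process.addChildElement("input", "ns1").addTextNode("SYSUSER"); // placeholder userid
                request.saveChanges();

                // Placeholder endpoint; the real one is shown on the composite's test page.
                String endpoint = "http://soa-server:8001/soa-infra/services/default/simpleBPEL/simplebpel_client_ep";
                SOAPMessage response = conn.call(request, endpoint);
                response.writeTo(System.out); // expect the concatenated first and last name back
                conn.close();
            }
        }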

    Read the article

  • How to do this GQL query in JDO

    - by TheDon
    I have about 50k entities stored in appengine. I am able to look up an individual record via the GQL admin interface with a query like: SELECT * FROM Pet where __key__ = KEY( 'Pet','Fido') But I'm having trouble figuring out how to do a batch version of this via JDO. Right now I have this: PersistenceManager pm = ...; for(Pet pet : pets) { for(String k : getAllAliases(pet)) { keys.add(KeyFactory.createKeyString(Pet.class.getSimpleName(), k)); } } Query q = pm.newQuery("select from " + Pet.class.getName() + " where id == :keys"); List<Pet> petlist = (List<Pet>) q.execute(keys); But though 'Fido' works in the GQL case, it returns nothing when I use that Java + JDO code. What am I doing wrong?
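    One possible direction, sketched below with heavy assumptions: it presumes Pet's primary key field is named id and is typed as com.google.appengine.api.datastore.Key, in which case the filter should be fed Key objects built with KeyFactory.createKey rather than the websafe strings produced by KeyFactory.createKeyString. Treat it as an illustration of the batch-by-key pattern, not a confirmed fix.

        // Hypothetical sketch: batch fetch Pet entities by key, assuming an "id" field of type Key.
        import java.util.ArrayList;
        import java.util.List;
        import javax.jdo.PersistenceManager;
        import javax.jdo.Query;
        import com.google.appengine.api.datastore.Key;
        import com.google.appengine.api.datastore.KeyFactory;

        public class PetLookup {
            @SuppressWarnings("unchecked")
            public static List<Pet> fetchByNames(PersistenceManager pm, List<String> names) {
                List<Key> keys = new ArrayList<Key>();
                for (String name : names) {
                    // The same Key that GQL's KEY('Pet', 'Fido') denotes: kind plus name.
                    keys.add(KeyFactory.createKey(Pet.class.getSimpleName(), name));
                }
                Query q = pm.newQuery("select from " + Pet.class.getName() + " where id == :keys");
                return (List<Pet>) q.execute(keys);
            }
        }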

    Read the article

  • Map problem when passing it as model to view in grails

    - by xain
    Hi, in a controller I have populated a map that has a string as the key and a list as the value; in the gsp, I try to show them like this: <g:each in="${sector}" var="entry" > <br/>${entry.key}<br/> <g:each in="${entry.value}" var="item" > ${item.name}<br/> </g:each> </g:each> The problem is that item is treated as a String, so I get the exception Error 500: Error evaluating expression [item.name] on line [11]: groovy.lang.MissingPropertyException: No such property: name for class: java.lang.String Any hints on how to fix it, other than doing the find for the item explicitly in the gsp?

    Read the article

  • How do I upload an NSImage (NSData) to Twitpic with OAMutableURLRequest?

    - by timothy5216
    I'm using OAConsumer in my xAuth twitterEngine and i'm adding Twitpic OAuth Echo to it. But it won't POST the NSData. here is some of my code: //other file NSArray *reps = [[imageToUpload image] representations]; NSData *imageData = [NSBitmapImageRep representationOfImageRepsInArray:reps usingType:NSJPEGFileType properties:nil]; [twitter testUploadImageData:imageData withMessage:@"Hello WORLD!!" toURL:[NSURL URLWithString:uploadURL.stringValue]]; // - (void)testUploadImageData:(NSData *)data withMessage:(NSString *)message toURL:(NSURL *)url; { //url = @"http://api.twitpic.com/2/upload.xml" //message = @"Hello WORLD!!" NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; NSString *String = [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding]; NSLog(@"dataString: %@",String); OAMutableURLRequest *request = [[OAMutableURLRequest alloc] initWithURL:url consumer:self.consumer token:_accessToken realm:nil signatureProvider:nil]; // Setup POST body [request setHTTPMethod:@"POST"]; //NSString *stringBoundary = [NSString stringWithString:@"0xKhTmLbOuNdArY"]; //NSString *contentType = [NSString stringWithFormat:@"multipart/form-data; boundary=%@", stringBoundary]; // NSString *stringBoundarySeparator = [NSString stringWithFormat:@"\r\n--%@\r\n", stringBoundary]; /* NSMutableString *postString = [NSMutableString string]; [postString appendString:@"\r\n"]; [postString appendString:stringBoundarySeparator]; [postString appendString:[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"message\"\r\n\r\n%@", message]]; [postString appendString:stringBoundarySeparator]; [postString appendString:[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"media\"; filename=\"%@\"\r\n", @"file.jpg"]]; [postString appendString:@"Content-Type: image/jpg\r\n"]; [postString appendString:@"Content-Transfer-Encoding: binary\r\n\r\n"]; // Setting up the POST request's multipart/form-data body NSMutableData *postBody = [NSMutableData data]; [postBody appendData:[postString dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:data]; [request setHTTPBody:postBody]; */ [request setHTTPMethod:@"POST"]; NSString *thing = [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding]; NSLog(@"%@",thing); [request setParameters:[NSArray arrayWithObjects: [OARequestParameter requestParameterWithName:@"oauth_token" value:_accessToken.key], [OARequestParameter requestParameterWithName:@"X-Auth-Service-Provider" value:@"https://api.twitter.com/1/account/verify_credentials.json"], [OARequestParameter requestParameterWithName:@"key" value:@"my-key-here :P"], [OARequestParameter requestParameterWithName:@"message" value:message], //iv'e changed this many times. I was just trying this to see if it works [OARequestParameter requestParameterWithName:@"media" value:thing], nil]]; OAAsynchronousDataFetcher *dataFetcher = [[OAAsynchronousDataFetcher alloc] init]; [dataFetcher initWithRequest:request delegate:self didFinishSelector:@selector(uploadDidUpload:withData:) didFailSelector:@selector(uploadDidFail:withData:)]; [dataFetcher start]; [dataFetcher release]; [request release]; [pool drain]; } I'm authenticated but it still won't POST the data :(

    Read the article

  • HTML5 web storage: can different websites overwrite each other’s data on a user’s computer?

    - by Deepak Mahalingam
    I have a few questions regarding the concept of HTML5 storage. I went through the W3C specification, books and tutorials on the subject, but I am still a bit unclear about certain concepts: Assume that I access Website A. Some JavaScript runs in my browser that sets a key/value pair, say ('username','deepak'). Then I access Website B, which also adds a key/value pair to localStorage as ('username','mahalingam'). How will the two be differentiated? Will Website B override the value set by Website A in my localStorage? How can we ensure that a website does not erase all of my localStorage?

    Read the article

  • ASP.NET MVC How to convert ModelState errors to json

    - by JK
    How do you get a list of all ModelState error messages? I found this code to get all the keys: (http://stackoverflow.com/questions/888521/returning-a-list-of-keys-with-modelstate-errors) var errorKeys = (from item in ModelState where item.Value.Errors.Any() select item.Key).ToList(); But how would I get the error messages as an IList or IQueryable? I could go: foreach (var key in errorKeys) { string msg = ModelState[key].Errors[0].ErrorMessage; errorList.Add(msg); } But that's doing it manually - surely there is a way to do it using LINQ? The .ErrorMessage property is so far down the chain that I don't know how to write the LINQ...

    Read the article

  • How do I send automated e-mails from Drupal using Messaging and Notifications?

    - by Adrian
    I am working on a Notifications plugin, and after starting to write my notes down about how to do this, decided to just post them here. Please feel free to come make modifications and changes. Eventually I hope to post this on the Drupal handbook as well. Thanks. --Adrian Sending automated e-mails from Drupal using Messaging and Notifications. To implement a notifications plugin, you must do the following: use hook_messaging, hook_token_list and hook_token_values to create the messages that will be sent; use hook_notifications to create the subscription types; add code to fire events (eg in hook_nodeapi); and add all the UI elements that allow users to subscribe/unsubscribe. Understanding Messaging: The Messaging module is used to compose messages that can be delivered using various formats, such as simple mail, HTML mail, Twitter updates, etc. These formats are called "send methods." The backend details do not concern us here; what is important are the following concepts: TOKENS: tokens are provided by the "tokens" module. They allow you to write keywords in square brackets, [like-this], that can be replaced by any arbitrary value. Note: the token groups you create must match the keys you add to the $event->objects[$key] array. MESSAGE KEYS: A key is a part of a message, such as the greeting line. Keys can be different for each send method. For example, a plaintext mail's greeting might be "Hi, [user]," while an HTML greeting might be "Hi, [user]," and Twitter's might just be "[user-firstname]: ". Keys can have any arbitrary name. Keys are very simple and only have a machine-readable name and a user-readable description, the latter of which is only seen by admins. MESSAGE GROUPS: A group is a bunch of keys that often, but not always, might be used together to make up a complete message. For example, a generic group might include keys for a greeting, body, closing and footer. Groups can also be "subclassed" by selecting a "fallback" group that will supply any keys that are missing. Groups are also associated with modules; I'm not sure what these are used for. Understanding Notifications: The Notifications module revolves around the following concepts: SUBSCRIPTIONS: Notifications plugins may define one or more types of subscriptions. For example, notifications_content defines subscriptions for: threads (users are notified whenever a node or its comments change); content types (users are notified whenever a node of a certain type is created or changed); and users (users are notified whenever another user is changed). Subscriptions record the user who's subscribed, how often they wish to be notified, the send method (for Messaging) and what's being subscribed to. This last part is defined in two steps. Firstly, a plugin defines several "subscription fields" (through a hook_notifications op of the same name), and secondly, the "subscription types" op defines which fields apply to each type of subscription. For example, notifications_content defines the fields "nid," "author" and "type," and the subscriptions "thread" (nid), "nodetype" (type), "author" (author) and "typeauthor" (type and author), the latter referring to something like "any STORY by JOE." Fields are used to link events to subscriptions; an event must match all fields of a subscription (for all normal subscriptions) to be delivered to the recipient. The $subscriptions object is defined in subsequent sections. 
Notifications prefers that you don't create these objects yourself; instead, call the notifications_get_link() function to create a link that users may click on, though you can also use notifications_save_subscription and notifications_delete_subscription to do it yourself. EVENTS: An event is something that users may be notified about. Plugins create the $event object and then call notifications_event($event). This either sends out notifications immediately, queues them to send out later, or both. Events include the type of thing that's changed (eg 'node', 'user'), the ID of the thing that's changed (eg $node->nid, $user->uid) and what's happened to it (eg 'create'). These are, respectively, $event->type, $event->oid (object ID) and $event->action. Warning: notifications_content_nodeapi also adds an $event->node field, referring to the node itself and not just $event->oid = $node->nid. This is not used anywhere in the core notifications module; however, when the $event is passed back to the 'query' op (see below), we assume the node is still present. Events do not record the user they will be delivered to; instead, Notifications makes the connection between subscriptions and events using the subscriptions' fields. MATCHING EVENTS TO SUBSCRIPTIONS: An event matches a subscription if the subscription has the same type as the event (eg "node") and if the event matches all of the subscription's fields. This second step is determined by the "query" hook op, which is called with the $event object as a parameter. The query op is responsible for giving Notifications a value for all the fields defined by the plugin. For example, notifications_content defines the 'nid', 'type' and 'author' fields, so its query op looks like this (ignore the case where $event_or_user = 'user' for now): $event_or_user = $arg0; $event_type = $arg1; $event_or_object = $arg2; if ($event_or_user == 'event' && $event_type == 'node' && ($node = $event_or_object->node) || $event_or_user == 'user' && $event_type == 'node' && ($node = $event_or_object)) { $query[]['fields'] = array( 'nid' => $node->nid, 'type' => $node->type, 'author' => $node->uid, ); } return $query; After extracting the $node from the $event, we set $query[]['fields'] to a dictionary defining, for this event, all the fields defined by the module. As you can tell from the presence of the $query object, there's way more you can do with this op, but that is not covered here. DIGESTING AND DEDUPING: Understanding the relationship between Messaging and Notifications. Usually, the name of a message group doesn't matter, but when being used with Notifications, the names must follow very strict patterns. Firstly, they must start with the name "notifications," and are then followed by either "event" or "digest," depending on whether the message group is being used to represent a single event or a group of events. For 'events,' the third part of the name is the "type," which we get from Notifications' $event->type (eg notifications_content uses 'node'). The last part of the name is the operation being performed, which comes from Notifications' $event->action. For example, notifications-event-node-comment might refer to the message group used when someone comments on a node, and notifications-event-user-update to a user who's updated their profile. Hyphens cannot appear anywhere other than to separate the parts of these words. 
For 'digest' messages, the third and fourth parts of the name come from hook_notifications' "event types" callback, specifically these lines: $types[] = array( 'type' => 'node', 'action' => 'insert', ... 'digest' => array('node', 'type'), ); $types[] = array( 'type' => 'node', 'action' => 'update', ... 'digest' => array('node', 'nid'), ); In this case, the first event type (node insertion) will be digested with the notifications-digest-node-type message template providing the header and footer, likely saying something like "the following [type] was created." The second event type (node update) will be digested with the notifications-digest-node-nid message template. Data Structure and Callback Reference: $event. The $event object has the following members: $event->type: The type of event. Must match the type in hook_notifications::"event types". {notifications_event} $event->action: The action the event describes. Most events are sorted by [$event->type][$event->action]. {notifications_event}. $event->object[$object_type]: All objects relevant to the event. For example, $event->object['node'] might be the node that the event describes. $object_type can come from the 'event types' hook (see below). The main purpose appears to be getting passed to token_replace_multiple as the second parameter. $event->object[$event->type] is assumed to exist in the short digest processing functions, but this doesn't appear to be used anywhere. Not saved in the database; loaded by hook_notifications::"event load". $event->oid: apparently unused. The id of the primary object relevant to this event (eg the node's nid). $event->module: apparently unused. $event->params[$key]: Mainly a place for plugins to save random data. The main module will serialize the contents of this array but does not use it in any way. However, notifications_ui appears to do something weird with it, possibly by using subscriptions' fields as keys into this array. I'm not sure why, though. hook_notifications op 'subscription types': returns an array of subscription types provided by the plugin, in the form $key = array(...), with the following members: event_type: this subscription can only match events whose $event->type has this value. Stored in the database as notifications.event_type for every individual subscription. Apparently, this can be overridden in code but I wouldn't try it (see notifications_save_subscription). fields: an unkeyed array of fields that must be matched by an event (in addition to the event_type) for it to match this subscription. Each element of this array must be a key of the array returned by op 'subscription fields', which in turn must be used by op 'query' to actually perform the matching. title: user-readable title for their subscriptions page (eg the 'type' column in user/%uid/notifications/subscriptions). description: a user-readable description. page callback: used to add a supplementary page at user/%uid/notifications/blah. This and the following are used by notifications_ui as a part of hook_menu_alter. Appears to be partially deprecated. user page: user/%uid/notifications/blah. op 'event types': returns an array of event types, with each event type being an array with the following members: type: this will match $event->type; action: this will match $event->action; digest: an array with two ordered (non-keyed) elements, "type" and "field." 'type' is used as an index into $event->objects. 'field' is also used to group events like so: $event->objects[$type]->$field. 
For example, 'field' might be 'nid' - if the object is a node, the digest lines will be grouped by node ID. Finally, both are used to find the correct Messaging template; see the discussion above. description: used on the admin "Notifications-Events" page; name: unused, use Messaging instead; line: deprecated, use Messaging instead. Other Stuff: This is an example of the main query that inserts an event into the queue: INSERT INTO {notifications_queue} (uid, destination, sid, module, eid, send_interval, send_method, cron, created, conditions) SELECT DISTINCT s.uid, s.destination, s.sid, s.module, %d, // event ID s.send_interval, s.send_method, s.cron, %d, // time of the event s.conditions FROM {notifications} s INNER JOIN {notifications_fields} f ON s.sid = f.sid WHERE (s.status = 1) AND (s.event_type = '%s') // subscription type AND (s.send_interval >= 0) AND (s.uid <> %d) AND ( (f.field = '%s' AND f.intval IN (%d)) // everything from 'query' op OR (f.field = '%s' AND f.intval = %d) OR (f.field = '%s' AND f.value = '%s') OR (f.field = '%s' AND f.intval = %d)) GROUP BY s.uid, s.destination, s.sid, s.module, s.send_interval, s.send_method, s.cron, s.conditions HAVING s.conditions = count(f.sid)

    Read the article

  • How is the presentation layer of a CALayer generated?

    - by KJ
    Hi, I'm having difficulties animating my custom layer property using Core Animation. My question is how the presentation of a CALayer is generated. Here is what I have now: @interface MyLayer : CALayer { NSMutableDictionary* customProperties; } @property (nonatomic, copy) NSMutableDictionary* customProperties; @end And when I try to animate the key path "customProperties.roll" using CABasicAnimation and addAnimation:forKey:, it seems that the customProperties variable doesn't get copied from the model layer to the presentation layer, and the customProperties of the presentation layer appears to be nil, failing to update the value for the key "roll". Is there a way to animate values in a dictionary correctly? What is the exact relationship between a model layer and a presentation layer while being animated? Thanks!

    Read the article

  • C# HMAC Implementation Problem

    - by Emanuel
    I want my application to encrypt a user password, and at some point the password will be decrypted to be sent to the server for authentication. A friend advised me to use HMAC. I wrote the following code in C#: System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding(); byte[] key = encoding.GetBytes("secret"); HMACSHA256 myhmacsha256 = new HMACSHA256(key); byte[] hashValue = myhmacsha256.ComputeHash(encoding.GetBytes("text")); string resultSTR = Convert.ToBase64String(hashValue); myhmacsha256.Clear(); How do I decode the password (resultSTR, in this case)? Thanks.
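    A note worth illustrating: an HMAC is a keyed one-way hash (a message authentication code), not encryption, so resultSTR cannot be decoded back into the password; the usual pattern is for the verifying side to recompute the HMAC over the same input and compare the two values. Below is a minimal sketch of that compute-and-compare idea, written in Java purely for illustration (javax.crypto's Mac mirrors the .NET HMACSHA256 usage above).

        // Illustrative only: recompute the HMAC and compare, rather than trying to "decode" it.
        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.util.Base64;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        public class HmacCheck {
            static String hmacBase64(String secret, String text) throws Exception {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.US_ASCII), "HmacSHA256"));
                return Base64.getEncoder().encodeToString(mac.doFinal(text.getBytes(StandardCharsets.US_ASCII)));
            }

            public static void main(String[] args) throws Exception {
                String stored = hmacBase64("secret", "text");    // same value the C# snippet above produces
                String candidate = hmacBase64("secret", "text"); // recomputed at verification time
                // Constant-time comparison; true means the supplied text matched the stored HMAC.
                boolean ok = MessageDigest.isEqual(stored.getBytes(StandardCharsets.US_ASCII),
                                                   candidate.getBytes(StandardCharsets.US_ASCII));
                System.out.println(ok);
            }
        }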

    Read the article

< Previous Page | 410 411 412 413 414 415 416 417 418 419 420 421  | Next Page >