Search Results

Search found 1508 results on 61 pages for 'deep'.

Page 46/61

  • SharePoint For Newbie Developers: Code Scope

    - by Mark Rackley
    So, I continue to try to come up with diagrams and information to help new SharePoint developers wrap their heads around this SharePoint beast, especially when those newer to development are on my team. To that end, I drew up the below diagram to help some of our junior devs understand where/when code is being executed in SharePoint at a high level. Note that I say “High Level”… This is a simplistic diagram that can get a LOT more complicated if you want to dive in deeper. For the purposes of my lesson it served its purpose well. So, please, no comments from the peanut gallery about information 3 levels down that’s missing unless it adds to the discussion. Thanks. So, the diagram below details where code is executed on a page load and gives the basic flow of the page load. There are actually many more steps, but again, we are staying high level here. I just know someone is still going to say something like “Well.. actually… the dlls are getting executed when…” Anyway, here’s the diagram with some information I like to point out: Code Scope / Where it is executed. So, looking at the diagram we see that dlls and XSL are executed on the server and that JavaScript/jQuery is executed on the client. This is the main thing I like to point out for the following reasons: XSL (for the most part) is faster than JavaScript. I actually get this question a lot. Since XSL is executed on the server, less data is getting passed over the wire and a beefier machine (hopefully) is doing the processing. The outcome of course is better performance. When you are using jQuery and making Web Service calls, you are building XML strings and sending them to the server; then ALL the results come back and the client machine has to parse through the XML and use what it needs and ignore the rest (and there is a lot of garbage that comes back from SharePoint Web Service calls). XSL and JavaScript cannot work together in the same scope. Let me clarify: JavaScript can send data back to SharePoint in postbacks that XSL can then use. XSL can output JavaScript and initialize JavaScript variables. However, XSL cannot call a JavaScript method to get a value, and JavaScript cannot directly interact with XSL and call its templates. Each is executed in its own scope only. No crossing of boundaries here. So, what does this all mean? Well, nothing too deep. This is just some basic fundamental information that all SharePoint devs need to understand. It will help you determine what is the best solution for your specific development situation, and it will help the new guys understand why they get an error when trying to call a JavaScript function from within XSL. Let me know if you think quick little blogs like this are helpful or just add to the noise. I could probably put together several more that are similar. As always, thanks for stopping by; hope you learned something new.
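    For illustration, here is a minimal sketch of the client-side round trip described above, calling SharePoint's Lists.asmx web service with jQuery. The list name (Announcements) and the ows_Title attribute are assumptions for the sake of the example, and the z\:row selector quirks vary by browser:

        // Build a SOAP envelope by hand and send it to the server...
        var soapEnv =
            "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/'>" +
              "<soap:Body>" +
                "<GetListItems xmlns='http://schemas.microsoft.com/sharepoint/soap/'>" +
                  "<listName>Announcements</listName>" +
                "</GetListItems>" +
              "</soap:Body>" +
            "</soap:Envelope>";

        $.ajax({
            url: "/_vti_bin/Lists.asmx",
            type: "POST",
            dataType: "xml",
            data: soapEnv,
            contentType: "text/xml; charset=utf-8",
            beforeSend: function (xhr) {
                xhr.setRequestHeader("SOAPAction",
                    "http://schemas.microsoft.com/sharepoint/soap/GetListItems");
            },
            success: function (xml) {
                // ...and the CLIENT does all the parsing of the verbose result
                $(xml).find("z\\:row").each(function () {
                    console.log($(this).attr("ows_Title"));
                });
            }
        });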

    Read the article

  • Four Easy Ways to Save a Rocky CRM Relationship

    - by Divya Malik
    Today, I am pleased to introduce our guest blogger Luke Christianson. Luke is an Application Sales rep based out of Minneapolis, MN. You can find him on LinkedIn and follow him on Twitter. In any relationship, sooner or later, the excitement fades away. The honeymoon period gives way to the old routines you had before you committed to each other, and you eventually begin doing things apart from one another. I’m not talking about a marriage… Well, I guess I am. Commitment to a CRM tool and building a deep and lasting relationship is not much different than the basics of a traditional love story. After your controlled CRM pilot program, and maybe the National Sales Meeting where you couldn’t escape those three wonderful letters, CRM, you will soon find that if you haven’t designed an environment where it’s going to enable your reps to make more money, the relationship is doomed. If you’re currently in a dysfunctional CRM relationship, here are 4 simple tips to re-engaging users and getting that spark back. Shadow a Sales Rep: Chances are you can find out exactly what is preventing your sales reps from using the application by simply watching how they go about their day. Sales reps are driven by money, not by additional administrative duties. Your system needs to be set up so that they can get the information they need quickly, facilitate making key updates, and run their business out of one easy-to-use application. Increase your sales team’s productivity by 5% automatically: Cancel the weekly forecast calls with your reps and require them to update their opportunities in CRM. Something else that I’ve seen work extremely well is when you do Monthly or Quarterly reviews: do not let your sales reps bring anything into the room with them; no spreadsheets, notebooks, or computers. Everything they need to tell you should be in CRM and fully accessible by the Sales Manager at any time. Tool time: Make sure the tools that you have selected meet both your short-term goals and your long-term goals. You need tools that can adapt like your business does. You probably can’t wait two months for an update to a picklist value or for the addition of a simple workflow rule. Do you feel the tools that are in place can create the experience you want for your users? And finally, if all else fails... Keep It Simple, Stupid: Do you really need to require 15 fields to create an Opportunity? Do you need to clutter the interface with different reports that don’t add daily value? Most CRM systems on the market today are flexible enough that your admin could clean up most of the unnecessary interface ‘noise’ in a few hours. If they're not, see #3. Every strong relationship can be tedious at times; you’ll fight and eventually make amends, you may even threaten to upgrade to a newer model… But be patient and think about what you want to achieve and you’ll find a partner for life.

    Read the article

  • Top 5 Reasons to Invest in Enterprise 2.0 Technologies

    - by kellsey.ruppel(at)oracle.com
    In 2010, Oracle's portal, content management, and collaboration solutions evolved rapidly, supported by increasingly deep integrations across Oracle Fusion Middleware and the entire Oracle stack. In light of these developments, we asked Vince Casarez, vice president of Enterprise 2.0 product management, for his top five reasons to invest in Enterprise 2.0 (E2.0) technologies--including real-world examples of businesses already realizing the benefits of next-generation E2.0 technologies. 1. Provide a modern user experience. As E2.0 technologies gain widespread adoption, customers and employees expect intuitive Web experiences that are both interactive and community-based. By partnering with Oracle, Alcatel-Lucent Enterprise Group is already making that happen. With 76,000 employees and operations in more than 100 countries, the company wanted a streamlined, personalized user experience with more relevant content in fewer clicks. Working with Oracle, they created a global support portal that supports personalization and integration with Oracle Business Intelligence Enterprise Edition and Oracle E-Business Suite--and drives collaboration with tools such as wikis, blogs, and forums. Learn more about Alcatel-Lucent Enterprise Group's Global Support Portal in this Webcast. 2. Improve productivity and collaboration. As E2.0 technologies mature, Oracle anticipates companies moving beyond the idea of simply creating yet another Facebook-like destination for their employees, and instead shaping work environments around specific business tasks. After rapid growth--both organic and through acquisition--construction and infrastructure services leader Balfour Beatty found itself with multiple homegrown intranet sites with very minimal content-sharing capabilities. Today, thanks to Oracle WebCenter Suite, Oracle WebCenter Spaces, Oracle WebCenter Services, and Oracle Universal Content Management, Balfour Beatty is benefiting from collaborative workspaces, a central place to use and work with documents, and unified search across content. 3. Leverage business processes and applications. Modern portals are now able to integrate users, content, and business processes in unprecedented ways. To take advantage of these new possibilities, leading dairy provider Land O'Lakes has implemented a fully integrated ERP solution together with Oracle's ECM platform. As a result, Land O'Lakes has been able to achieve better information management and compliance, increased adoption rates for enterprise tools, and increased business process efficiency thanks to more effective information sharing and collaboration. 4. Enhance customer and supplier relationships. Companies have begun to move beyond the idea that E2.0 simply means enabling customer reviews or embedding chat functionality. They are taking E2.0 to the next level and providing interactive experiences for their customers. For example, to enhance customer and supplier relationships, Wind River, a global leader in device software optimization, successfully partnered with Oracle to: integrate ERP and ECM content to provide customers the latest and most relevant support information for products they own; enable customers to personalize their support experience and receive updates regarding patches, application notes, and other relevant content; and enable discussions, wikis, and blogs for more efficient collaboration. 5. Increase business visibility and responsiveness. By strategically embedding collaboration and communication tools into specific business contexts, companies significantly increase visibility into changing business conditions--and can respond with much greater agility. Texas A&M University System--one of the largest systems of higher education in the U.S.--partnered with Oracle to create a unified repository that would enable the retrieval of research and grant data from disparate systems via an Enterprise 2.0 user interface. By enabling researchers to customize their own portals with easy-to-use tools, they have also been able to significantly reduce their reliance on the IT department. Learn how other Oracle customers are leveraging Enterprise 2.0 technologies.

    Read the article

  • Finding Leaders Breakfasts - Adelaide and Perth

    - by rdatson-Oracle
    HR Executives Breakfast Roundtables: Find the best leaders using science and social media! Perth, 22nd July & Adelaide, 24th July. What is leadership in the 21st century? What does the latest research tell us about leadership? How do you recognise leadership qualities in individuals? How do you find individuals with these leadership qualities, hire and develop them? Join the Neuroleadership Institute, the Hay Group, and Oracle to hear: 1. the latest neuroscience research about human bias, and how it applies to finding and building better leaders; 2. the latest techniques to recognise leadership qualities in people; 3. and how you can harness your people and social media to find the best people for your company. Reflect on your hiring practices at this thought-provoking breakfast, where you will be challenged to consider whether you are using best practices aimed at getting the right people into your company. Speakers: Abigail Scott, Hay Group. Abigail is a UK registered psychologist with 10 years' international experience in the design and delivery of talent frameworks and assessments. She has delivered innovative assessment programmes across a range of organisations to identify and develop leaders. She is experienced in advising and supporting clients through new initiatives using an evidence-based approach and has published a number of research papers on fairness and predictive validity in assessment. Karin Hawkins, NeuroLeadership Institute. Karin is the Regional Director of NeuroLeadership Institute’s Asia-Pacific region. She brings over 20 years' experience in the financial services sector delivering cultural and commercial results across a variety of organisations and functions. As a leadership risk specialist, Karin understands the challenge of building deep bench strength in teams, and she is able to bring evidence, insight, and experience to support executives in meeting today’s challenges. Robert Datson, Oracle. Robert is a Human Capital Management specialist at Oracle, with several years as a practicing manager at IBM, learning and implementing the latest management techniques for hiring, deploying and developing staff. At Oracle he works with clients to enable best practices for HR departments, drawing the linkages between HR initiatives and bottom-line improvements. Agenda: 07:30 a.m. Breakfast and Registrations; 08:00 a.m. Welcome and Introductions; 08:05 a.m. Breaking Bias in leadership decisions - Karin Hawkins; 08:30 a.m. Identifying and developing leaders - Abigail Scott; 08:55 a.m. Finding leaders, the social way - Robert Datson; 09:20 a.m. Q&A and Closing Remarks; 09:30 a.m. Event concludes. If you are an employee or official of a government organisation, please click here for important ethics information regarding this event.
    To register for Perth, Tuesday 22nd July, please click HERE. To register for Adelaide, Thursday 24th July, please click HERE. Contact: To register or with questions on the event, contact Aaron Tait on +61 2 9491 1404

    Read the article

  • Olympics data available for all on Windows Azure SQL Database and Power View

    - by jamiet
    Are you looking around for some decent test data for your BI demos? Well, if so, Microsoft have provided some data about all medals won at the Olympic Games (1900 to 2008) at OlympicsData workbook - Excel, SSIS, Azure sample; it provides analysis over athletes, countries, medal type, sport, discipline and various other dimensions. The data has been provided in an Excel workbook along with instructions on how to load the data into a Windows Azure SQL Database using SQL Server Integration Services (SSIS). Frankly though, the rigmarole of standing up your own Windows Azure SQL Database (OK, SQL Azure database) is both costly (SQL Azure isn’t free) and time consuming (the provided instructions aren’t exactly an idiot’s guide, and getting SSIS to work properly with Excel isn’t a barrel of laughs either). To ease the pain for all you BI folks out there that simply want to party on the data, I have loaded it all into the SQL Azure database that I use for hosting AdventureWorks on Azure. You can read more about AdventureWorks on Azure below; however I’ll summarise here by saying it is a SQL Azure database provided for the use of the SQL Server community and which is supported by voluntary donations. To view the data the credentials you need are: Server mhknbn2kdz.database.windows.net  Database AdventureWorks2012  User sqlfamily  Password sqlf@m1ly  Type those into SSMS and away you go; the data is provided in four tables [olympics].[Sport], [olympics].[Discipline], [olympics].[Event] & [olympics].[Medalist]. I figured this would be a good candidate for a Power View report so I fired up Excel 2013 and built such a report to slice’n’dice through the data – here are some screenshots that should give you a flavour of what is available: A view of all the available data. Where do all the gymnastics medals go? Which countries do top ten all-time medal winners come from? You get the idea. There is masses of information here and if you have Excel 2013 handy Power View provides a quick and easy way of surfing through it. To save you the bother of setting up the Power View report yourself you can have the one that I took these screenshots from; it is available on my SkyDrive at OlympicsAnalysis.xlsx so just hit the link and download to play to your heart’s content. Party on, people! As I said above the data is hosted on a SQL Azure database that I use for hosting “AdventureWorks on Azure” which I first announced in March 2013 at AdventureWorks2012 now available for all on SQL Azure. I’ll repeat the pertinent parts of that blog post here: I am pleased to announce that as of today … [AdventureWorks2012] now resides on SQL Azure and is available for anyone, absolutely anyone, to connect to and use for their own means. This database is free for you to use but SQL Azure is of course not free, so before I give you the credentials please lend me your eyes for a short while longer. AdventureWorks on Azure is being provided for the SQL Server community to use and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year please donate via PayPal to [email protected] Any amount, no matter how small, will help. If those 50+ people that retweeted me beforehand all contributed $2 then that would just about be enough to keep this up for a year.
    If the community contributes more than we need then there are a number of additional things that could be done: host additional databases (Northwind anyone??), host in more datacentres (this first one is in Western Europe), or make a charitable donation. That last one, a charitable donation, is something I would really like to do. The SQL community have proved before that they can make a significant contribution to charitable organisations through purchasing the SQL Server MVP Deep Dives book, and I harbour hopes that AdventureWorks on Azure can continue in that vein. So please, if you think AdventureWorks on Azure is something that is worth supporting, please make a contribution. I’d like to emphasize that last point. If my hosting this Olympics data is useful to you please support this initiative by donating. Thanks in advance. @Jamiet
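    If you just want a quick taste of the data before building anything, a smoke test along these lines works from SSMS once you are connected (SELECT * sidesteps having to guess at column names, which aren't listed in the post):

        -- Peek at the four [olympics] tables listed above
        SELECT TOP 10 * FROM [olympics].[Sport];
        SELECT TOP 10 * FROM [olympics].[Discipline];
        SELECT TOP 10 * FROM [olympics].[Event];
        SELECT TOP 10 * FROM [olympics].[Medalist];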

    Read the article

  • New Version 3.1 Endeca Information Discovery Now Available

    - by Mike.Hallett(at)Oracle-BI&EPM
    Business User Self-Service Data Mash-up Analysis and Discovery integrated with OBI11g and Hadoop. Oracle Endeca Information Discovery 3.1 (OEID) is a major release that incorporates significant new self-service discovery capabilities for business users, including agile data mashup, extended support for unstructured analytics, and an even tighter integration with Oracle BI. · Self-Service Data Mashup and Discovery Dashboards: business users can combine information from multiple sources, including their own up-loaded spreadsheets, to conduct analysis on the complete set. Creating discovery dashboards has been made even easier by intuitive drag-and-drop layouts and wizard-based configuration. Business users can now build new discovery applications in minutes, without depending on IT. · Enhanced Integration with Oracle BI: OEID 3.1 enhances its native integration with Oracle Business Intelligence Foundation. Business users can now incorporate information from trusted BI warehouses, leveraging dimensions and attributes defined in Oracle’s Common Enterprise Information Model, but evolve them based on the varying day-to-day demands and requirements that they personally manage. · Deep Unstructured Analysis: business users can gain new insights from a wide variety of enterprise and public sources, helping companies to build an actionable Big Data strategy. With OEID’s long-standing differentiation in correlating unstructured information with structured data, business users can now perform their own text mining to identify hidden concepts, without having to request support from IT. They can augment these insights with best-in-class keyword search and pattern matching, all in the context of rich, interactive visualizations and analytic summaries. · Enterprise-Class Self-Service Discovery: OEID 3.1 enables IT to provide a powerful self-service platform to the business as part of a broader Business Analytics strategy, preserving the value of existing investments in data quality, governance, and security. Business users can take advantage of IT-curated information to drive discovery across high volumes and varieties of data, and share insights with colleagues at a moment’s notice. · Harvest Content from the Web with the Endeca Web Acquisition Toolkit: Oracle now provides best-of-breed data access to website content through the Oracle Endeca Web Acquisition Toolkit. This provides an agile, graphical interface for developers to rapidly access and integrate any information exposed through a web front-end. Organizations can now cost-effectively include content from consumer sites, industry forums, government or supplier portals, cloud applications, and myriad other web sources as part of their overall strategy for data discovery and unstructured analytics.
    For more information: OEID 3.1 OTN Software and Documentation Download; Endeca available for download on Software Delivery Cloud (eDelivery); New OEID 3.1 Videos on YouTube; Oracle.com Endeca Site

    Read the article

  • Fun tips with Analytics

    - by user12620172
    If you read this blog, I am assuming you are at least familiar with the Analytic functions in the ZFSSA. They are basically amazing: very powerful and deep. However, you may not be aware of some great, hidden functions inside the Analytic screen. Once you open a metric, the toolbar looks like this: Now, I’m not going over every tool, as we have done that before, and you can hover your mouse over them and they will tell you what they do. But…. check this out. Open a metric (CPU Percent Utilization works fine), and click on the “Hour” button, which is the 2nd clock icon. That’s easy; you are now looking at the last hour of data. Now, hold down your ‘Shift’ key, and click it again. Now you are looking at 2 hours of data. Hold down Shift and click it again, and you are looking at 3 hours of data. Are you catching on yet? You can do this with not only the ‘Hour’ button, but also with the ‘Minute’, ‘Day’, ‘Week’, and the ‘Month’ buttons. Very cool. It also works with the ‘Show Minimum’ and ‘Show Maximum’ buttons, allowing you to go to the next iteration of either of those. One last button you can Shift-click is the handy ‘Drill’ button. This button usually drills down on one specific aspect of your metric. If you Shift-click it, it will display a “Rainbow Highlight” of the current metric. This works best if this metric has many ‘Range Average’ items in the left-hand window. Give it a shot. Also, one will sometimes click on a certain second of data in the graph, like this: In this case, I clicked 4:57 and 21 seconds, and the 'Range Average' on the left went away, and was replaced by the time stamp. It seems at this point to some people that you are now stuck, and cannot get back to an average for the whole chart. However, you can actually click on the actual time stamp of "4:57:21" right above the chart. Even though your mouse does not change into the typical browser finger that most links look like, you can click it, and it will change your range back to the full metric. Another trick you may like is to save a certain view or look of a group of graphs. Most of you know you can save a worksheet, but did you know you could Sync them, Pause them, and then Save it? This will save the paused state, allowing you to view it forever the way you see it now. Heatmaps. Heatmaps are cool, and look like this: Some metrics use them and some don't. If you have one, and wish to zoom it vertically, try this. Open a heatmap metric like my example above (I believe every metric that deals with latency will show as a heatmap). Select one or two of the ranges on the left. Click the "Change Outlier Elimination" button. Click it again and check out what it does. Enjoy. Perhaps my next blog entry will be the best Analytic metrics to keep your eyes on, and how you can use the Alerts feature to watch them for you. Steve

    Read the article

  • Using Recursive SQL and XML trick to PIVOT(OK, concat) a "Document Folder Structure Relationship" table, works like MySQL GROUP_CONCAT

    - by Kevin Shyr
    I'm in the process of building out a Data Warehouse and encountered this issue along the way. In the environment, there is a table that stores all the folders with the individual level. For example, if a document is created here: {App Path}\Level 1\Level 2\Level 3\{document}, then the DocumentFolder table would look like this:

        ID    ID_Parent    FolderName
        1     NULL         Level 1
        2     1            Level 2
        3     2            Level 3

    To my understanding, the table was built so that: each proposal can have multiple documents stored at various locations; and different users working on the proposal will have different access levels to the folder (if one user is assigned access to a folder level, she/he can see all the sub folders and their content). Now we understand from an application point of view why this table was built this way. But you can quickly see the pain this causes the report writer to show a document link on the report. I wasn't surprised to find the report query had 5 self outer joins, which is at the mercy of nobody creating a document that is buried 6 levels deep, and not to mention the degradation in performance. With the help of 2 posts (at the end of this post), I was able to come up with this solution: use recursive SQL to build out the folder path, and use the SQL XML trick to concat the strings. Code (a reminder: I built this code in a stored procedure; if you copy the syntax into a simple query window and execute, you'll get an incorrect syntax error):

        -- Get all folders and group them by the original DocumentFolderID in PTSDocument table
        ;WITH DocFoldersByDocFolderID (PTSDocumentFolderID_Original, PTSDocumentFolderID_Parent, sDocumentFolder, nLevel)
        AS (
            -- first member
            SELECT 'PTSDocumentFolderID_Original' = d1.PTSDocumentFolderID
                 , PTSDocumentFolderID_Parent
                 , 'sDocumentFolder' = sName
                 , 'nLevel' = CONVERT(INT, 1000000)
            FROM (SELECT DISTINCT PTSDocumentFolderID
                  FROM dbo.PTSDocument_DY WITH(READPAST)
                 ) AS d1
                 INNER JOIN dbo.PTSDocumentFolder_DY AS df1 WITH(READPAST)
                       ON d1.PTSDocumentFolderID = df1.PTSDocumentFolderID
            UNION ALL
            -- recursive
            SELECT ddf1.PTSDocumentFolderID_Original
                 , df1.PTSDocumentFolderID_Parent
                 , 'sDocumentFolder' = df1.sName
                 , 'nLevel' = ddf1.nLevel - 1
            FROM dbo.PTSDocumentFolder_DY AS df1 WITH(READPAST)
                 INNER JOIN DocFoldersByDocFolderID AS ddf1
                       ON df1.PTSDocumentFolderID = ddf1.PTSDocumentFolderID_Parent
        )
        -- Flatten out folder path
        , DocFolderSingleByDocFolderID (PTSDocumentFolderID_Original, sDocumentFolder)
        AS (
            SELECT dfbdf.PTSDocumentFolderID_Original
                 , 'sDocumentFolder' = STUFF((SELECT '\' + sDocumentFolder
                                              FROM DocFoldersByDocFolderID
                                              WHERE (PTSDocumentFolderID_Original = dfbdf.PTSDocumentFolderID_Original)
                                              ORDER BY PTSDocumentFolderID_Original, nLevel
                                              FOR XML PATH ('')), 1, 1, '')
            FROM DocFoldersByDocFolderID AS dfbdf
            GROUP BY dfbdf.PTSDocumentFolderID_Original
        )

    And voila: I use the second CTE to join back to my original query (which is now a CTE for Source, as we can now use MERGE to do INSERT and UPDATE at the same time). Each part of this solution would not solve the problem by itself, because: if I don't use recursion, I cannot build out the path properly; if I use the XML trick only, then I don't have the originating folder ID info that I need to link to the document; and if I don't use the XML trick, then I don't have one row per document to show in the report. I could conceivably do this in the report function, but I'd rather not deal with the beginning or ending backslash and how to attach the document name. PIVOT doesn't do strings, and UNPIVOT runs into the same problem as the above. I'm excited that each version of SQL Server provides us new tools to solve old problems and/or enables us to solve problems in a more elegant way. The 2 posts that helped me along: Recursive Queries Using Common Table Expression, and How to use GROUP BY to concatenate strings in SQL server?
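    For anyone who wants to see the pattern in isolation, here is a minimal, self-contained version of the same recursive-CTE-plus-FOR-XML-PATH trick, runnable as-is against the three-row sample table from the top of the post (table and column names are simplified stand-ins for the real ones):

        -- Sample data: the DocumentFolder table from the example above
        DECLARE @DocumentFolder TABLE (ID INT, ID_Parent INT, FolderName VARCHAR(50));
        INSERT INTO @DocumentFolder VALUES (1, NULL, 'Level 1'), (2, 1, 'Level 2'), (3, 2, 'Level 3');

        WITH FolderPath (ID, ID_Parent, FolderName, nLevel) AS (
            -- anchor: every folder is a starting point for its own path
            SELECT ID, ID_Parent, FolderName, 1
            FROM @DocumentFolder
            UNION ALL
            -- recursion: walk up toward the root, keeping the original ID
            SELECT fp.ID, f.ID_Parent, f.FolderName, fp.nLevel + 1
            FROM @DocumentFolder AS f
                 INNER JOIN FolderPath AS fp ON f.ID = fp.ID_Parent
        )
        SELECT fp.ID,
               'FullPath' = STUFF((SELECT '\' + FolderName
                                   FROM FolderPath
                                   WHERE ID = fp.ID
                                   ORDER BY nLevel DESC   -- root first
                                   FOR XML PATH('')), 1, 1, '')
        FROM FolderPath AS fp
        GROUP BY fp.ID;

    ID 3 comes back as 'Level 1\Level 2\Level 3', one row per folder -- the same GROUP_CONCAT-style result the report needs.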

    Read the article

  • Criminals and Other Illegal Characters

    - by Most Valuable Yak (Rob Volk)
    SQLTeam's favorite Slovenian blogger Mladen (b | t) had an interesting question on Twitter: http://www.twitter.com/MladenPrajdic/status/347057950470307841 I liked Kendal Van Dyke's (b | t) reply: http://twitter.com/SQLDBA/status/347058908801667072 And he was right! This is one of those pretty-useless-but-sounds-interesting propositions that I've based all my presentations on, and most of my blog posts. If you read all the replies you'll see a lot of good suggestions. I particularly like Aaron Bertrand's (b | t) idea of going into the Unicode character set, since there are over 65,000 characters available. But how to find an illegal character? Detective work? I'm working on the premise that if SQL Server will reject it as a name it would throw an error. So all we have to do is generate all Unicode characters, rename a database with that character, and catch any errors. It turns out that dynamic SQL can lend a hand here:

        IF DB_ID(N'a') IS NULL CREATE DATABASE [a];
        DECLARE @c INT=1, @sql NVARCHAR(MAX)=N'', @err NVARCHAR(MAX)=N'';
        WHILE @c<65536
        BEGIN
            BEGIN TRY
                SET @sql=N'alter database ' +
                         QUOTENAME(CASE WHEN @c=1 THEN N'a' ELSE NCHAR(@c-1) END) +
                         N' modify name=' + QUOTENAME(NCHAR(@c));
                RAISERROR(N'*** Trying %d',10,1,@c) WITH NOWAIT;
                EXEC(@sql);
                SET @c+=1;
            END TRY
            BEGIN CATCH
                SET @err=ERROR_MESSAGE();
                RAISERROR(N'Ooops - %d - %s',10,1,@c,@err) WITH NOWAIT;
                BREAK;
            END CATCH
        END
        SET @sql=N'alter database ' + QUOTENAME(NCHAR(@c-1)) + N' modify name=[a]';
        EXEC(@sql);

    The script creates a dummy database "a" if it doesn't already exist, and only tests single characters as a database name. If you have databases with single character names then you shouldn't run this on that server. It takes a few minutes to run, but if you do you'll see that no errors are thrown for any of the characters. It seems that SQL Server will accept any character, no matter where they're from. (Well, there's one, but I won't tell you which. Actually there's 2, but one of them requires some deep existential thinking.) The output is also interesting, as quite a few codes do some weird things there. I'm pretty sure it's due to the font used in SSMS for the messages output window; not all characters are available. If you run it using the SQLCMD utility, and use the -o switch to output to a file, and -u for Unicode output, you can open the file in Notepad or another text editor and see the whole thing. I'm not sure what character I'd recommend to answer Mladen's question. I think the standard tab (ASCII 9) is fine. There's also several specific separator characters in the original ASCII character set (decimal 28-31). But of all the choices available in Unicode whitespace, I think my favorite would be the Mongolian Vowel Separator. Or maybe the zero-width space. (that'll be fun to print!) And since this is Mladen we're talking about, here's a good selection of "intriguing" characters he could use.
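    In case it helps, the SQLCMD invocation described above would look something like this (server and file names are placeholders):

        sqlcmd -S YourServer -E -i IllegalChars.sql -o chars.txt -u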

    Read the article

  • AD - Using UserPrincipal.FindByIdentity and PrincipalContext with nested OU - C#

    - by Solid Snake
    Here is what I am trying to achieve: I have a nested OU structure that is about 5 levels deep. OU=Portal,OU=Dev,OU=Apps,OU=Grps,OU=Admin,DC=test,DC=com I am trying to find out if the user has permissions/exists at OU=Portal. Here's a snippet of what I currently have:

        PrincipalContext domain = new PrincipalContext(
            ContextType.Domain,
            "test.com",
            "OU=Portal,OU=Dev,OU=Apps,OU=Grps,OU=Admin,DC=test,DC=com");
        UserPrincipal user = UserPrincipal.FindByIdentity(domain, myusername);
        PrincipalSearchResult<Principal> group = user.GetAuthorizationGroups();

    For some unknown reason, the value user generated from the above code is always null. However, if I were to drop all the OUs as follows:

        PrincipalContext domain = new PrincipalContext(
            ContextType.Domain,
            "test.com",
            "DC=test,DC=com");
        UserPrincipal user = UserPrincipal.FindByIdentity(domain, myusername);
        PrincipalSearchResult<Principal> group = user.GetAuthorizationGroups();

    this would work just fine and return me the correct user. I am simply trying to reduce the number of results as opposed to getting everything from AD. Is there anything that I am doing wrong? I've googled for hours and tested various combinations without much luck. Any help is appreciated. Thanks. Dan

    Read the article

  • What good technology podcasts are out there?

    - by Michael Stum
    Yes, Podcasts, those nice little Audiobooks I can listen to on the way to work. With the current amount of Podcasts, it's like searching for a needle in a haystack, except that the haystack happens to be the Internet and is filled with too many of these "Hot new Gadgets" stuff :( Now, even though I am mainly a .NET developer nowadays, maybe anyone knows some good Podcasts from people regarding the whole software lifecycle? Unit Testing, Continuous Integration, Documentation, Deployment... So - what are you guys and gals listening to? Please note that the categorizations are somewhat subjective and may not be 100% accurate as many podcasts cover several areas. Categorization is made against what is considered the "main" area.
    General Software Engineering / Productivity: Stack Overflow, TekPub (Requires Paid Subscription), SE Radio, 43 Folders, Perspectives, Dr. Dobb's (now a video feed), The Pragmatic Podcast (Inactive), IT Matters, Agile Toolkit Podcast, The Stack Trace (Inactive), Parleys, Techzing, The Startup Success Podcast, Berkeley CS class lectures, FOSS Weekly
    .NET / Visual Studio / Microsoft: Herding Code, Hanselminutes, .NET Rocks!, Deep Fried Bytes, Alt.Net Podcast, Polymorphic Podcast, Sparkling Client (The Silverlight Podcast), dnrTV!, Spaghetti Code, ASP.NET Podcast, Channel 9, Radio TFS, PowerScripting Podcast, The Thirsty Developer, Elegant Code, ConnectedShow, Crafty Coders, Coding QA; jQuery: yayQuery, The official jQuery podcast
    Java / Groovy: The Java Posse, Grails Podcast, Java Technology Insider
    Ruby / Rails: Railscasts, Rails Envy, The Ruby on Rails Podcast, Rubiverse
    Web Design / JavaScript / Ajax: WebDevRadio, Boagworld, The Rissington podcast, Ajaxian, YUI Theater
    Unix / Linux / Mac / iPhone: Mac Developer Network, Hacker Public Radio, Linux Outlaws, Mac OS Ken, LugRadio Linux radio show (Inactive), The Linux Action Show!, Linux Kernel Mailing List (LKML) Summary Podcast, Stanford's iPhone programming class
    SysAdmin, Security or Infrastructure: RunAs Radio, Security Now!, Crypto-Gram Security Podcast, Hak5, VMWare VMTN, Windows Weekly, PaulDotCom Security, The Register - Semi-Coherent Computing, FeatherCast
    General Tech / Business: Tekzilla, This Week in Tech, The Guardian Tech Weekly, PCMag Radio Podcast, Entrepreneurship Corner, Manager Tools
    Other / Misc. / Podcast Networks: IT Conversations, Retrobits Podcast, No Agenda Netcast, Cranky Geeks, The Command Line, Freelance Radio, IBM developerWorks, The Register - Open Season, Drunk and Retired, Technometria, Sod This, Radio4Nerds, Hacker Medley

    Read the article

  • Entity Framework 4 Code First and the new() Operator

    - by Eric J.
    I have a rather deep hierarchy of objects that I'm trying to persist with Entity Framework 4, POCO, PI (Persistence Ignorance) and Code First. Suddenly things started working pretty well when it dawned on me to not use the new() operator. As originally written, the objects frequently use new() to create child objects. Instead I'm using my take on the Repository Pattern to create all child objects as needed. For example, given:

        class Adam
        {
            List<Child> children;
            void AddChildGivenInput(string input)
            {
                children.Add(new Child(...));
            }
        }

        class Child
        {
            List<GrandChild> grandchildren;
            void AddGrandChildGivenInput(string input)
            {
                grandchildren.Add(new GrandChild(...));
            }
        }

        class GrandChild { }

    ("GivenInput" implies some processing not shown here) I define an AdamRepository like:

        class AdamRepository
        {
            Adam Add()
            {
                return objectContext.Create<Adam>();
            }
            Child AddChildGivenInput(Adam adam, string input)
            {
                return adam.children.Add(new Child(...));
            }
            GrandChild AddGrandchildGivenInput(Child child, string input)
            {
                return child.grandchildren.Add(new GrandChild(...));
            }
        }

    Now, this works well enough. However, I'm no longer "ignorant" of my persistence mechanism as I have abandoned the new() operator. Additionally, I'm at risk of an anemic domain model since so much logic ends up in the repository rather than in the domain objects. After much ado, a question... or rather several questions: Is this pattern required to work with EF 4 Code First? Is there a way to retain use of new() and still work with EF 4 / POCO / Code First? Is there another pattern that would leave logic in the domain object and still work with EF 4 / POCO / Code First? Will this restriction be lifted in later versions of Code First support? Sometimes trying to go the POCO / Persistence Ignorance route feels like swimming upstream; other times it feels like swimming up Niagara Falls.

    Read the article

  • Support for nested model and class validation with ASP.NET MVC 2.0

    - by Diep-Vriezer
    I'm trying to validate a model containing other objects with validation rules, using the System.ComponentModel.DataAnnotations attributes, and was hoping the default MVC implementation would suffice:

        var obj = js.Deserialize(json, objectInfo.ObjectType);
        if (!TryValidateModel(obj))
        {
            // Handle failed model validation.
        }

    The object is composed of primitive types but also contains other classes which also use DataAnnotations. Like so:

        public class Entry
        {
            [Required]
            public Person Subscriber { get; set; }
            [Required]
            public String Company { get; set; }
        }

        public class Person
        {
            public String FirstName { get; set; }
            [Required]
            public String Surname { get; set; }
        }

    The problem is that the ASP.NET MVC validation only goes down 1 level and only evaluates the properties of the top level class, as can be read on digitallycreated.net/Blog/54/deep-inside-asp.net-mvc-2-model-metadata-and-validation. Does anyone know an elegant solution to this? I've tried xVal, but they seem to use a non-recursive pattern (http://blog.stevensanderson.com/2009/01/10/xval-a-validation-framework-for-aspnet-mvc/). Someone must have run into this problem before, right? Nesting objects in your model doesn't seem so weird if you're designing a web service.
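    For what it's worth, one common workaround is a small helper that walks the graph and calls Validator.TryValidateObject at every level. This is a rough sketch, not part of MVC -- DeepValidator is a hypothetical name, and it has no guard against cyclic graphs:

        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;

        public static class DeepValidator
        {
            // Validate the instance itself, then recurse into reference-type
            // properties (strings excluded) so nested DataAnnotations run too.
            public static bool TryValidate(object instance, ICollection<ValidationResult> results)
            {
                var context = new ValidationContext(instance, null, null);
                bool valid = Validator.TryValidateObject(instance, context, results, true);

                foreach (var prop in instance.GetType().GetProperties())
                {
                    if (prop.GetIndexParameters().Length > 0) continue; // skip indexers
                    var value = prop.GetValue(instance, null);
                    if (value == null || value is string || !prop.PropertyType.IsClass) continue;
                    valid &= TryValidate(value, results);
                }
                return valid;
            }
        }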

    Read the article

  • EWS 2010: Public Folder Problem using .NET

    - by Daniel
    I've recently coded a .NET Console app using C#. Its purpose was to read the emails within a specific folder, parse them for specific values and save them to a database. Our email system, at the time I originally coded this, was Exchange 2003. However, I was made aware we would soon be upgrading to Exchange 2010: ergo, I built the code to work in both environments. Following the migration to 2010, however, the app has broken. The app uses the EWS API for 2010 functionality. When it attempts to use the ExchangeService's FindFolders method to find the publicfoldersroot, it throws an exception. Here's the code:

        ExchangeService service = new ExchangeService();
        FindFoldersResults findRootFldrs = null;
        service.UseDefaultCredentials = true;
        service.AutodiscoverUrl("[email protected]", delegate(string x) { return true; });
        FolderView fview = new FolderView(100);
        fview.Traversal = FolderTraversal.Deep;
        if (findRootFldrs == null)
        {
            // Set to root to test local folders
            findRootFldrs = service.FindFolders(WellKnownFolderName.PublicFoldersRoot, fview);
        }

    The exception: "The mailbox that was requested doesn't support the specified RequestServerVersion". I've attempted: setting the ExchangeService to 2007 (throws an exception: "An internal server error occurred. The operation failed."); giving myself the highest level of permission to the Public Folder (no effect); and manually setting my credentials (no effect). I can view the public folders in Outlook; the publicfoldersroot property is available in the IntelliSense; the code works on local folders (I can parse my inbox). My current thinking is that it's a setting on the recent setup of Exchange 2010: unfortunately that isn't really my field.
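    A hedged guess at the first knob to check for that particular error message: pin the EWS schema version explicitly in the constructor rather than taking the Managed API default. The address below is a placeholder, and this is a sketch rather than a confirmed fix:

        ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2010);
        service.UseDefaultCredentials = true;
        service.AutodiscoverUrl("user@example.com", url => true);

        FolderView fview = new FolderView(100);
        fview.Traversal = FolderTraversal.Deep;
        FindFoldersResults findRootFldrs =
            service.FindFolders(WellKnownFolderName.PublicFoldersRoot, fview);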

    Read the article

  • Drupal vs ExpressionEngine for any kind of project from simple commercial site to complex ecommerce

    - by artmania
    Hi friends... So far I've been using custom CMSes. Lately I developed my own CMS with CodeIgniter, and I'm actually happy with it. But recently I take on more design and front-end development work than deep development projects, and I actually prefer it that way. Still, I have many things to do with a custom CMS, including some security issues, etc. I'm kind of tired of doing everything custom, and I also want to give more time to my family... So I'm seriously considering going with a ready-made CMS, and developing custom plugins when a project needs something specific. This CMS should be very flexible for implementing any layout, and also secure (since I had some hack problems with my custom CMS!). I googled a lot about this. As a result, 2 options: Drupal or ExpressionEngine. Whether it's open source or licensed is not an issue for me at all. I just want to go with a CMS that I can use for any kind of project, from simple 4-5 page company sites to complicated projects like hotel directories, ecommerce portals, etc... As I found out, EE is more user-friendly and doesn't make you fight as hard as Drupal does to implement a custom layout. Also EE uses CodeIgniter, which I'm familiar with. On the other hand, I found out that Drupal is 10000% flexible; we can do anything with it (it requires good PHP knowledge), and it is extremely powerful and has many plugins... So I can't decide!! I want to go with a CMS that I will use for looooong years from now on, with no problems implementing any kind of project. So which one do you recommend? Appreciate your help! Thanks a lot... Edited: http://expressionengine.com/ee2_sneak_preview/#cost - this Commercial License $299.95 is for 1 setup? So I need to purchase a new licence for each project? Is there nothing like: I pay once, and use the CMS for as many projects as I want?

    Read the article

  • Improving long-polling Ajax performance

    - by Bears will eat you
    I'm writing a webapp (Firefox-compatible only) which uses long polling (via jQuery's ajax abilities) to send more-or-less constant updates from the server to the client. I'm concerned about the effects of leaving this running for long periods of time, say, all day or overnight. The basic code skeleton is this:

        function processResults(xml) {
            // do stuff with the xml from the server
        }

        function fetch() {
            setTimeout(function () {
                $.ajax({
                    type: 'GET',
                    url: 'foo/bar/baz',
                    dataType: 'xml',
                    success: function (xml) {
                        processResults(xml);
                        fetch();
                    },
                    error: function (xhr, type, exception) {
                        if (xhr.status === 0) {
                            console.log('XMLHttpRequest cancelled');
                        } else {
                            console.debug(xhr);
                            fetch();
                        }
                    }
                });
            }, 500);
        }

    (The half-second "sleep" is so that the client doesn't hammer the server if the updates are coming back to the client quickly - which they usually are.) After leaving this running overnight, it tends to make Firefox crawl. I'd been thinking that this could be partially caused by a large stack depth since I've basically written an infinitely recursive function. However, if I use Firebug and throw a breakpoint into fetch, it looks like this is not the case. The stack that Firebug shows me is only about 4 or 5 frames deep, even after an hour. One of the solutions I'm considering is changing my recursive function to an iterative one, but I can't figure out how I would insert the delay in between Ajax requests without spinning. I've looked at the JS 1.7 "yield" keyword but I can't quite wrap my head around it, to figure out if it's what I need here. Is the best solution just to do a hard refresh on the page periodically, say, once every hour? Is there a better/leaner long-polling design pattern that won't put a hurt on the browser even after running for 8 or 12 hours? Or should I just skip the long polling altogether and use a different "constant update" pattern since I usually know how frequently the server will have a response for me?
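    For reference, an iterative shape for the same loop is possible without yield: drive the polling from a timer and use a flag so overlapping requests are skipped. A sketch using the names from the question:

        var polling = false;

        var timer = setInterval(function () {
            if (polling) { return; }          // previous request still in flight
            polling = true;
            $.ajax({
                type: 'GET',
                url: 'foo/bar/baz',
                dataType: 'xml',
                success: function (xml) { processResults(xml); },
                complete: function () { polling = false; }  // runs on success or error
            });
        }, 500);

        // clearInterval(timer) stops the loop, e.g. before a periodic hard refresh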

    Read the article

  • Any Other Ideas for prototyping..

    - by davehamptonusa
    I've used Douglas Crockford's Object.beget, but augmented it slightly to:

        Object.spawn = function (o, spec) {
            var F = function () {}, that = {}, node = {};
            F.prototype = o;
            that = new F();
            for (node in spec) {
                if (spec.hasOwnProperty(node)) {
                    that[node] = spec[node];
                }
            }
            return that;
        };

    This way you can "beget" and augment in one fell swoop.

        var fop = Object.spawn(bar, {
            a: 'fast',
            b: 'prototyping'
        });

    In English that means, "Make me a new object called 'fop' with 'bar' as its prototype, but change or add the members 'a' and 'b'." You can even nest the spec to prototype deeper elements, should you choose.

        var fop = Object.spawn(bar, {
            a: 'fast',
            b: Object.spawn(quux, {
                farple: 'deep'
            }),
            c: 'prototyping'
        });

    This can help avoid hopping into an object's prototype unintentionally in a long object name like:

        foo.bar.quux.peanut = 'farple';

    If quux is part of the prototype and not foo's own object, your change to 'peanut' will actually change the prototype, affecting all objects prototyped by foo's prototype object. But I digress... My question is this. Because your spec can itself be another object, and that object could itself have properties from its prototype in your new object - and you may want those properties... (at least you should be aware of them before you decide to use it as a spec)... I want to be able to grab all of the elements from all of the spec's prototype chain, except for the prototype object itself... This would flatten them into the new object. Should I use:

        Object.spawn = function (o, spec) {
            var F = function () {}, that = {}, node = {};
            F.prototype = o;
            that = new F();
            for (node in spec) {
                that[node] = spec[node];
            }
            that.prototype = o;
            return that;
        };

    I would love thoughts and suggestions...

    Read the article

  • DOM: element created with cloneNode(true) missing element when added to DOM

    - by user149327
    I'm creating a tree control and I'm attempting to use a parent element as a template for its children. To this end I'm using the element.cloneNode(true) method to deep clone the parent element. However when I insert the cloned element into the DOM it is missing certain inner elements despite having an outerHTML value identical to its parent. Surprisingly I observe the same behavior in IE, Firefox, and Chrome, leading me to believe that it is by design. This is the HTML for the node I'm attempting to clone:

        <SPAN class=node><A class=nodeLink href="/SparklerRestService2.aspx?q={0}" name=http://dbpedia.org/data/Taylor_Swift.rdf><IMG class=nodeIcon alt="Taylor Swift" src="images/node.png"><SPAN class=nodeText>Taylor Swift</SPAN></A><SPAN class=nodeDescription>Taylor Swift is a swell gall who is realy great.</SPAN></SPAN>

    Once I've cloned the node using cloneNode(true) I examine the outerHTML property and find that it is indeed identical to the original:

        <SPAN class=node><A class=nodeLink href="/SparklerRestService2.aspx?q={0}" name=http://dbpedia.org/data/Taylor_Swift.rdf><IMG class=nodeIcon alt="Taylor Swift" src="images/node.png"><SPAN class=nodeText>Taylor Swift</SPAN></A><SPAN class=nodeDescription>Taylor Swift is a swell gall who is realy great.</SPAN></SPAN>

    However when I insert it into the DOM and inspect the result using Firebug I find that the element has been transformed:

        <span class="node" style="top: 0px; left: 0px;"<a class=nodeLink href="/SparklerRestService2.aspx?q={0}" name=http://dbpedia.org/data/Taylor_Swift.rdf>Taylor Swift</a><span class="nodeDescription">It's great</span></span>

    Notice that the grandchildren of the node (the image tag and the span tag surrounding "Taylor Swift") are missing, although strangely the great grandchild "Taylor Swift" text node has made it into the tree. Can anyone shed some light on this behavior? Why would nodes disappear after insertion into the DOM, and why am I seeing the same result in all three major browser engines?

    Read the article

  • HTML 4, HTML 5, XHTML, MIME types - the definitive resource

    - by deceze
    The topics of HTML vs. XHTML and XHTML as text/html vs. XHTML as XHTML are quite complex. Unfortunately it's hard to get a complete picture, since information is spread mostly in bits and pieces around the web or is buried deep in W3C tech jargon. In addition there's some misinformation being circulated. I propose to make this the definitive SO resource about the topic, describing the most important aspects of: HTML 4; HTML 5; XHTML 1.0/1.1 as text/html; XHTML 1.0/1.1 as XHTML. What are the practical implications of each? What are common pitfalls? What is the importance of proper MIME types for each? How do different browsers handle them? I'd like to see one answer per technology. I'm making this a community wiki, so rather than contributing redundant answers, please edit answers to complete the picture. Feel free to start with stubs. Also feel free to edit this question.

    Read the article

  • jquery ajax call from link loaded with ajax

    - by Jay
    //deep linking

        $("document").ready(function(){
            contM = $('#main-content');
            contS = $('#second-content');
            $(contM).hide();
            $(contM).addClass('hidden');
            $(contS).hide();
            $(contS).addClass('hidden');

            function loadURL(URL) {
                //console.log("loadURL: " + URL);
                $.ajax({
                    url: URL,
                    type: "POST",
                    dataType: 'html',
                    data: {post_loader: 1},
                    success: function(data){
                        $(contM).html(data);
                        $(contM).animW();
                    }
                });
            }

            // Event handlers
            $.address.init(function(event) {
                //console.log("init: " + $('[rel=address:' + event.value + ']').attr('href'));
            }).change(function(event) {
                $.ajax({
                    url: $('[rel=address:' + event.value + ']').attr('href'),
                    type: "POST",
                    dataType: 'html',
                    data: {post_loader: 1},
                    success: function(data){
                        $(contM).html(data);
                        $(contM).animW();
                    }
                });
                //console.log("change");
            })

            $('.update-main a').live('click', function(){
                loadURL($(this).attr('href'));
            });

            $(".update-second a").live('click', function() {
                var link = $(this);
                $.ajax({
                    url: link.attr("href"),
                    dataType: 'html',
                    data: {post_loader: 1},
                    success: function(data){
                        $(contS).html(data);
                        $(contS).animW();
                    }
                });
            });
        });

    I'm using jQuery and the 'addresses' plugin to load content with ajax and maintain pagination. The problem I'm having is that some content loads with links which are intended to load content into a secondary window. I'm using the .live() method to allow jQuery to listen for new links loaded into the primary content div. This works until the .ajax() method is called for these fresh links loaded with ajax, where the method begins, but follows the original link before data can be received. I'm assuming the problem is in the client-side scripting, but it may be a problem with the call made to the server. I'm using the wordpress loop to parse the url and generate the html loaded via jquery. Thanks for any tips!
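    A hedged guess at the missing piece, for comparison: a .live() handler normally has to cancel the browser's default navigation itself, or the href is followed before the ajax round trip completes. Something along these lines (same names as the question, not a confirmed fix):

        $('.update-second a').live('click', function (e) {
            e.preventDefault();               // keep the browser from following the href
            var link = $(this);
            $.ajax({
                url: link.attr('href'),
                dataType: 'html',
                data: { post_loader: 1 },
                success: function (data) {
                    $(contS).html(data);
                    $(contS).animW();
                }
            });
            return false;                     // belt and braces for live() handlers
        });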

    Read the article

  • HTML 4, HTML 5, XHTML, MIME types - the definite resource

    - by deceze
    The topics of HTML vs. XHTML and XHTML as text/html vs. XHTML as XHTML are quite complex. Unfortunately it's hard to get a complete picture, since information is spread mostly in bits and pieces around the web or is buried deep in W3C tech jargon. In addition there's some misinformation being circulated. I propose to make this the definite SO resource about the topic, describing the most important aspects of: HTML 4 HTML 5 XHTML 1.0/1.1 as text/html XHTML 1.0/1.1 as XHTML What are the practical implications of each? What are common pitfalls? What is the importance of proper MIME types for each? How do different browsers handle them? I'd like to see one answer per technology. I'm making this a community wiki, so rather than contributing redundant answers, please edit answers to complete the picture. Feel free to start with stubs. Also feel free to edit this question.

    Read the article

  • fluent nhibernate select n+1 problem

    - by Andrew Bullock
    I have a fairly deep object graph (5-6 nodes), and as I traverse portions of it NHProf is telling me I've got a "Select N+1" problem (which I do). The two solutions I'm aware of are: eager load children, or break apart my object graph (and eager load). I don't really want to do either of these (although I may break the graph apart later as I foresee it growing). For now.... Is it possible to tell NHibernate (with FluentNHibernate) that whenever I try to access children, it should load them all in one go, instead of select-n+1-ing as I iterate over them? I'm also getting "unbounded result set" warnings, which is presumably the same problem (or rather, will be solved by the above solution if possible). Each child collection (throughout the graph) will only ever have about 20 members, but 20^5 is a lot, so I don't want to eager load everything when I get the root; I simply want to get all of a child collection whenever I go near it. Edit: an afterthought.... what if I want to introduce paging when I want to render children? Do I HAVE to break my object graph here, or is there some sneakiness I can employ to solve all these issues?
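    One middle ground worth sketching here is NHibernate's collection batch fetching, which loads the child collections of up to N parents in a single SELECT ... IN (...) the moment one of them is touched. The mapping below is a sketch only: Node/Children are stand-ins for the real entities, and it assumes the Fluent NHibernate version in use exposes BatchSize on HasMany:

        using System.Collections.Generic;
        using FluentNHibernate.Mapping;

        public class Node
        {
            public virtual int Id { get; protected set; }
            public virtual IList<Node> Children { get; protected set; }
        }

        public class NodeMap : ClassMap<Node>
        {
            public NodeMap()
            {
                Id(x => x.Id);
                HasMany(x => x.Children)
                    .BatchSize(20);  // children for up to 20 parents fetched per query
            }
        }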

    Read the article

  • help with grouping and sorting for TreeView in xaml

    - by danhotb
    I am having problems getting my head around grouping and sorting in XAML and hope someone can get me straightened out! I have created an XML file from a tree of files and folders (just like Windows Explorer) that can be several levels deep. I have bound a TreeView control to an XML data source and it works great! It sorts everything alphabetically, but... I would like it to sort all folders first, then all files, rather than folders listed with files, as it does now. The XML: if you load this to a treeview it will display the two files before the folder because they are first in alpha order. Here is my code:

        <!-- This will contain the XML-data. -->
        <XmlDataProvider x:Key="xmlDP" XPath="*">
            <x:XData>
                <Select_Project />
            </x:XData>
        </XmlDataProvider>

        <!-- This HierarchicalDataTemplate will visualize all XML-nodes -->
        <HierarchicalDataTemplate DataType="project" ItemsSource="{Binding}">
            <TextBlock Text="{Binding XPath=@name}" />
        </HierarchicalDataTemplate>
        <HierarchicalDataTemplate DataType="folder" ItemsSource="{Binding}">
            <TextBlock Text="{Binding XPath=@name}" />
        </HierarchicalDataTemplate>
        <HierarchicalDataTemplate DataType="file" ItemsSource="{Binding}">
            <TextBlock Text="{Binding XPath=@name}" />
        </HierarchicalDataTemplate>

        <CollectionViewSource x:Key="projectView" Source="{StaticResource xmlDP}">
            <CollectionViewSource.SortDescriptions>
                <!-- ADD SORT DESCRIPTION HERE -->
            </CollectionViewSource.SortDescriptions>
        </CollectionViewSource>

        <TreeView Margin="11,79.992,18,19.089" Name="tvProject" BorderThickness="1" FontSize="12" FontFamily="Verdana">
            <TreeViewItem ItemsSource="{Binding Source={StaticResource xmlDP}, XPath=*}" Header="Project"/>
        </TreeView>

    Read the article

  • Dynamically loading sub-trees into YUI Treeview

    - by user319399
    When you create a YUI TreeView instance, you can pass in an object that represents an entire tree, and it will automatically build up the TextNodes for you. I'd like to send in a partial tree, such that the tree only goes, say, 2 levels deep, and anything deeper than that will invoke dynamic loading. I've got that much working. Now for the interesting part. In the dynamic loading callback I give to my tree instance, I want to again be able to just give YUI a big object representing more of the tree. I want to do something like this:

        // data is an array of objects organized into a tree, with some nodes requiring
        // dynamic loading when they are navigated to
        tree = new YAHOO.widget.TreeView("treeDiv1", data);
        tree.setDynamicLoad(loadDataForNode);

        function loadDataForNode(node, onCompleteCallback) {
            if (node.children.length == 0) {
                var subTree = {
                    "label": "Cars",
                    isLeaf: false,
                    children: [
                        { "label": "Chevy", isLeaf: true },
                        { "label": "Ford", isLeaf: true }
                    ]
                };
                // doesn't work, even though it has the required "label" field
                var tempNode = new YAHOO.widget.TextNode(subTree, node, true);
            }
            onCompleteCallback();
        }

    Is this possible? Or do I have to iterate over all the nodes in my subtree and construct individual TextNodes for each one? Thanks much...
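    For comparison, the iterate-it-yourself fallback mentioned at the end is only a few lines with YUI 2's TextNode API. A recursive helper sketched here, called from loadDataForNode before onCompleteCallback():

        // Build one TextNode per entry in the subtree object, recursing into children
        function addSubtree(data, parent) {
            var child = new YAHOO.widget.TextNode(
                { label: data.label, isLeaf: data.isLeaf }, parent, false);
            if (data.children) {
                for (var i = 0; i < data.children.length; i++) {
                    addSubtree(data.children[i], child);
                }
            }
        }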

    Read the article

  • Using YQL multi-query & XPath to parse HTML, how to escape nested quotes?

    - by Tivac
    The title is more complicated than it has to be; here's the problem query:

        SELECT * FROM query.multi WHERE queries="
            SELECT * FROM html WHERE url='http://www.stumbleupon.com/url/http://www.guildwars2.com' AND xpath='//li[@class=\"listLi\"]/div[@class=\"views\"]/a/span';
            SELECT * FROM xml WHERE url='http://services.digg.com/1.0/endpoint?method=story.getAll&link=http://www.guildwars2.com';
            SELECT * FROM json WHERE url='http://api.tweetmeme.com/url_info.json?url=http://www.guildwars2.com';
            SELECT * FROM xml WHERE url='http://api.facebook.com/restserver.php?method=links.getStats&urls=http://www.guildwars2.com';
            SELECT * FROM json WHERE url='http://www.reddit.com/button_info.json?url=http://www.guildwars2.com'"

    Specifically this line:

        xpath='//li[@class=\"listLi\"]/div[@class=\"views\"]/a/span'

    It's problematic because of the quoting: I have to nest quotes three levels deep, and I've run out of quote characters to use. I've tried the following variations without success:

        //no attribute quoting
        xpath='//li[@class=listLi]/div[@class=views]/a/span'
        //try to quote attribute w/ backslash & single quote
        xpath='//li[@class=\'listLi\']/div[@class=\'views\']/a/span'
        //try to quote attribute w/ backslash & double quote
        xpath='//li[@class=\"listLi\"]/div[@class=\"views\"]/a/span'
        //try to quote attribute with double single quotes, like SQL
        xpath='//li[@class=''listLi'']/div[@class=''views'']/a/span'
        //try to quote attribute with double double quotes, like SQL
        xpath='//li[@class=""listLi""]/div[@class=""views""]/a/span'
        //try to quote attribute with quote entities
        xpath='//li[@class=&quot;listLi&quot;]/div[@class=&quot;views&quot;]/a/span'
        //try to surround XPath with backslash & double quote
        xpath=\"//li[@class='listLi']/div[@class='views']/a/span\"
        //try to surround XPath with double double quote
        xpath=""//li[@class='listLi']/div[@class='views']/a/span""

    All without success. I don't see much out there about escaping XPath strings, but everything I've found seems to be variations on using concat (which won't help because neither ' nor " are available) or HTML entities. Not using quotes for the attributes doesn't throw an error, but fails because it's not the actual XPath string I need. I don't see anything in the YQL docs about how to handle escaping. I'm aware of how edge-casey this is, but was hoping they'd have some sort of escaping guide.

    Read the article
