Search Results

Search found 13104 results on 525 pages for 'malcolm box'.


  • Partition Wise Joins

    - by jean-pierre.dijcks
    Some say they are the holy grail of parallel computing and PWJ is the basis for a shared nothing system and the only join method that is available on a shared nothing system (yes this is oversimplified!). The magic in Oracle is of course that it is one of many ways to join data. And yes, this is the old flexibility vs. simplicity discussion all over, so I won't go there... the point is that what you must do in a shared nothing system, you can do in Oracle with the same speed and methods. The Theory A partition wise join is a join between (for simplicity) two tables that are partitioned on the same column with the same partitioning scheme. In shared nothing this is effectively hard partitioning locating data on a specific node / storage combo. In Oracle it is logical partitioning. If you now join the two tables on that partitioned column you can break up the join in smaller joins exactly along the partitions in the data. Since they are partitioned (grouped) into the same buckets, all values required to do the join live in the equivalent bucket on either side. No need to talk to anyone else, no need to redistribute data to anyone else... in short, the optimal join method for parallel processing of two large data sets. PWJs in Oracle Since we do not hard partition the data across nodes in Oracle we use the Partitioning option to the database to create the buckets, then set the Degree of Parallelism (or run Auto DOP - see here) and get our PWJs. The main questions always asked are: How many partitions should I create? What should my DOP be? In a shared nothing system the answer is of course, as many partitions as there are nodes, which will be your DOP. In Oracle we do want you to look at the workload and concurrency, and once you know that, to apply the following rules of thumb. Within Oracle we have more ways of joining data, so it is important to understand some of the PWJ ideas and what it means if you have an uneven distribution across processes. Assume we have a simple scenario where we partition the data on a hash key resulting in 4 hash partitions (H1 - H4). We have 2 parallel processes that have been tasked with reading these partitions (P1 - P2). The work is evenly divided assuming the partitions are the same size, and we can scan this in time t1. Now assume that we have changed the system and have a 5th partition but still have our 2 workers P1 and P2. The time it takes is actually 50% more, assuming the 5th partition has the same size as the original H1 - H4 partitions. In other words, to scan these 5 partitions the time t2 is not just 1/5th more expensive; it is a lot more expensive, and some other join plans may now start to look exciting to the optimizer. Just to post the disclaimer, it is not as simple as I state it here, but you get the idea of how much more expensive this plan may now look... Based on this little example there are a few rules of thumb to follow to get partition wise joins. First, choose a DOP that is a power of two (2). So always choose something like 2, 4, 8, 16, 32 and so on... Second, choose a number of partitions that is larger than or equal to 2 * DOP. Third, make sure the number of partitions is divisible by 2 without a remainder. This is also known as an even number... Fourth, choose a stable partition count strategy, which is typically hash, which can be a sub partitioning strategy rather than the main strategy (range - hash is a popular one). 
Fifth, make sure you do this on the join key between the two large tables you want to join (and this should be the obvious one...). Translating this into an example: DOP = 8 (determined based on concurrency or by using Auto DOP with a cap due to concurrency) says that the number of partitions >= 16. Number of hash (sub) partitions = 32, which gives each process four partitions to work on. This number is somewhat arbitrary and depends on your data and system. In this case my main reasoning is that if you get more room on the box you can easily move the DOP for the query to 16 without repartitioning... and of course it makes for no leftovers on the table... And yes, we recommend up-to-date statistics. And before you start complaining, do read this post on a cool way to do stats in 11.
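    To make the arithmetic behind these rules of thumb concrete, here is a small illustrative sketch (TypeScript; the function and the numbers are mine, taken from the example above, not from the original post):

    ```ts
    // With DOP parallel processes and N equal-sized partitions, each process
    // scans ceil(N / DOP) partitions, so elapsed scan time grows in whole "rounds".
    function scanRounds(partitions: number, dop: number): number {
      return Math.ceil(partitions / dop);
    }

    console.log(scanRounds(4, 2));   // 2 rounds -> time t1 in the example above
    console.log(scanRounds(5, 2));   // 3 rounds -> ~50% longer, not 1/5th longer
    console.log(scanRounds(32, 8));  // 4 partitions per process (DOP = 8, 32 partitions)
    console.log(scanRounds(32, 16)); // 2 -> DOP can move to 16 without repartitioning
    ```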

    Read the article

  • How to find domain registrar and DNS hosting with good DNSSEC support?

    - by rsp
    Simplified problem I want to buy a domain and make a website that is fully secured with DNSSEC. Background I've been hearing about the insecurity of DNS for years. I've watched all of the talks by Dan Kaminsky and others from DNS exploits to The future of DNS Security Panel. I knew that using DNS without security is a disaster waiting to happen. I followed the development of the DNSSEC standard. I celebrated the key signing ceremony. Everything was on the right track to finally have a secure DNS system in place. And now more than 2 years later I wanted to just do what everyone said I should do: use DNSSEC for a new domain. So I need a domain registrar and a DNS hosting service that supports DNSSEC. Surprisingly it is not that easy to even find out who does support DNSSEC. It was actually much easier to find info on DNSSEC two years ago when everyone was going to support DNSSEC Real Soon Now but now years passed and I hardly see any progress done. I just hope that I was just looking in the wrong places and someone here will explain all of the doubts. I hope that other people who want to have a secure website will also find this question useful. What is needed registrar and DNS servers with full DNSSEC support for .com domains What is not needed IPv6 support Web hosting anything more What I found out so far Go Daddy offers Premium DNS service for additional $36 per year that lets you "Secure up to 5 domains with DNSSEC". easyDNS has DNSSEC available in Beta across all service levels (you need to enable the "beta" flag in configuration) but it doesn't seem to be production ready and judging from the lack of updates it isn't a feature of highest priority (the last update from March 2011 on the easyDNS blog). Name.com - according to The Register (US domain registrar does IPv6, DNSSEC) it has DNSSEC support since 2010 but right now (October 2012) I couldn't find anything related to DNSSEC on their website. Dynadot that is very often recommended doesn't support DNSSEC Namecheap that is also often recommended doesn't support DNSSEC. The support answer from 2011 suggested that it was being added but in 2012 still no ETA is given to customers. DynDNS was supposed to support DNSSEC, I found a link explaining DNSSEC support but it gives 404 Not Found page and offers a search box - when searching for DNSSEC I get "No results were found for your query." GKG was recommended online for DNSSEC support but it's hard to find any information on the level of DNSSEC support - there is a brief explanation on what is DNSSEC and how to sign Delegation Signer records in their FAQ but no information about the level of actual support can be found. Ask Slashdot: Which Registrars Support DNSSEC? from July 2011 - Answers list Go Daddy, DynDNS, GKG, Name.com as registrars that support DNSSEC but: see above. Related questions How to find web hosting that meets my requirements? What is needed to add DNSSEC to my site? DNS hosting better managed by Domain provider or Hosting provider? Registrar with good security, DNS hosting, and DNSSEC and IPv6 resolvers? In no. 1 no one is ever mentioning DNS at all. In no. 2 answers only mention the .se TLD, there are very few answers and they seem very outdated. In no. 3 one answer says "On projects that demand higher security, I might look for a web host that supports DNSSEC" but no more information is provided. The only relevant answers are in no. 4 where easyDNS is recommended by someone who has never used them personally. 
Meanwhile, as of October 2012, the support of DNSSEC is described as "in beta" on the easyDNS feature list. Another one recommends SiteGround but searching their site for DNSSEC returns no results. Other answers recommend web hosting providers that don't meet the requirement of DNSSEC support. Also the question mentioned above lists 9 very specific requirements other than only DNSSEC (like eg. HTTP-only login cookies, two-factor authentications, no DNS record limits, DNS statistics of queries/day, audit trails etc.) which might have excluded many possible recommendations if one is only interested in DNSSEC support. Conclusions I thought that by the end of 2012 the support of DNSSEC among domain registrars and DNS providers would be nearly universal. I am shocked that the support seems virtually nonexistent. Is this a result of some serious problems with the DNSSEC adoption? Or is it just not a hot topic and no one bothers anymore? According to the DNSSEC Scoreboard roughly about 0.1% of .com domains support DNSSEC. Could that be caused by the lack of DNSSEC support among registrars and DNS providers, is the information too hard to find or maybe no one cares? There is even no "dnssec" tag here. Questions The information is surprisingly hard to find. That is why I am asking for first-hand experience and personal recommendations. Has anyone here actually set up a website with DNSSEC, from the domain registration to the configuration of DNS servers? Can anyone recommend any of the registrars mentioned above? Can anyone recommend any registrar not mentioned above?
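    If you want to verify a provider's DNSSEC claims yourself, a quick check is to look for DS and RRSIG records on a zone. Below is a minimal sketch (Node.js/TypeScript, assuming the BIND dig utility is installed; the domain is just a placeholder):

    ```ts
    import { execSync } from "child_process";

    // A zone with working DNSSEC publishes DS records at its parent and returns
    // RRSIG records when queried with the DNSSEC OK flag (+dnssec).
    function looksSigned(domain: string): boolean {
      const ds = execSync(`dig +short DS ${domain}`).toString().trim();
      const answer = execSync(`dig +dnssec +short A ${domain}`).toString();
      return ds.length > 0 && answer.includes("RRSIG");
    }

    console.log(looksSigned("example.com")); // true for a correctly signed zone
    ```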

    Read the article

  • Handling Trailing Delimiters in HL7 Messages

    - by Thomas Canter
    Applies to: BizTalk Server 2006 with the HL7 1.3 Accelerator Outline of the problem Trailing delimiters are empty values at the end of an object in an HL7 ER7-formatted message. Examples: Empty Field NTE|P| NTE|P|| Empty component ORC|1|725^ Empty Subcomponent ORC|1|||||27& Empty repeat OBR|1||||||||027~ Trailing delimiters indicate the following object exists and is empty, which is quite different from null; null is an explicit value indicated by a pair of double quotes -> "". The BizTalk HL7 Accelerator by default does not allow trailing delimiters. There are three methods to allow trailing delimiters. NOTE: All schemas always allow trailing delimiters in the MSH segment. Using party identifiers MSH3.1 – Receive/inbound processing, using this value as a party allows you to configure the system to allow inbound trailing delimiters. MSH5.1 – Send/outbound processing, using this value as a party allows you to configure the system to allow outbound trailing delimiters. Generally, if you allow inbound trailing delimiters, then unless you are willing to programmatically remove all trailing delimiters you also need to configure the send side to allow trailing delimiters. Add the appropriate parties to the BizTalk Parties list from these two fields in your message stream. Open the BizTalk HL7 Configuration tool and for each party check the "Allow trailing delimiters (separators)" check box on the Validation tab. Disadvantage – Each MSH3.1 and MSH5.1 value must be represented in the parties list and configured. Advantage – granular control over system behavior for each inbound/outbound system. Using instance properties of a pipeline used in a send port or receive location. Open the BizTalk Server Administration console and locate the send port or receive location that contains the BTAHL72XReceivePipeline or BTAHL72XSendPipeline pipeline. Open the properties. To the right of the selected pipeline, locate the […] ellipsis button. In the property list, locate the "TrailingDelimiterAllowed" property and set it to True. Advantage – All messages through a particular Send Port or Receive Location will allow trailing delimiters. Disadvantage – Must configure each Send Port or Receive Location. No granular control over which remote parties will send or receive messages with trailing delimiters. Using a custom pipeline that uses a pre-configured BTA HL7 pipeline component. Use Visual Studio to construct a custom receive and send pipeline using the appropriate assembler or disassembler. Set the component property "TrailingDelimitersAllowed" to True. Compile and deploy the custom pipeline. Use the custom pipeline instead of the standard pipeline for all HL7 message processing. Advantage – All messages using the custom pipeline will automatically allow trailing delimiters. Disadvantage – Requires custom coding and development to create and deploy the custom pipeline. No granular control over which remote parties will send or receive messages with trailing delimiters. What does a trailing delimiter do to the XML schema? Allowing trailing delimiters does not have the impact often expected on the actual XML schema. The schema reproduces the message with no data loss; thus, the message when represented in XML must contain the extra fields in order to reproduce the outbound message, and a trailing delimiter results in an empty XML field. Trailing delimiters are not stripped from the inbound message. 
    Example: <PID_21>44172</PID_21><PID_21>9257</PID_21> -> the original maximum number of repeats; <PID_21></PID_21> -> the empty repeated field. Allowing trailing delimiters does not remove the trailing delimiters from the message; it simply suppresses the check that would cause a message with trailing delimiters to fail parsing. When can you not fix the problem by enabling trailing delimiters? Each object in a message must have a location in the target BTAHL7 schema for its content to reside. If you have more objects in the message than are contained at that location, then enabling trailing delimiters will not resolve the problem. The schema must be extended to accommodate the empty message content. Examples: Extra field NTE|P|||| - only 4 fields in the NTE segment; the 4th field exists, but is empty. Extra component PID|1|1523|47^^^^^^^ - only 5 components in a CX data type; the 5th component exists, but is empty. Extra subcomponent ORC|1|||||27&& - only 2 subcomponents in a CQ data type; the 3rd subcomponent exists, but is empty. Extra repeat PID|1||||||||||||||||||||4419~5217~ - only 2 repeats allowed for the field "Mother's identifier"; the repeat exists, but is empty. In each of these cases, you must locate the failing object and extend the type to allow an additional object of that type. Field: add a field of ST to the end of the segment, with a suitable name, in the segments_nnn.xsd. Component: create a new custom CX data type (e.g. CX_XtraComp) in the datatypes_nnn.xsd and add a new component to the custom CX data type; update the field in the segments_nnn.xsd file to use the custom data type instead of the standard data type. Subcomponent: create a new custom CQ data type that accepts an additional TS value at the end of the data type; create a custom TQ data type that uses the new custom CQ data type as the first subcomponent; modify the ORC segment to use the new CQ data type at ORC.7 instead of the standard CQ data type. Repeat: modify the field definition for PID.21 in the segments_nnn.xsd to allow more repeats in the field.
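    To make the terminology concrete, here is a rough sketch (TypeScript, not BizTalk accelerator code; the function names are mine) of what a trailing field delimiter looks like in an ER7 segment and what stripping it would mean:

    ```ts
    // A trailing field delimiter means the next field exists but is empty,
    // e.g. "NTE|P||". This is different from an explicit null, written as "".
    function hasTrailingFieldDelimiter(segment: string, fieldSep = "|"): boolean {
      return segment.endsWith(fieldSep);
    }

    // Strips empty trailing fields only; populated fields and explicit "" nulls
    // are left alone. (Enabling trailing delimiters in BizTalk does NOT do this;
    // it merely stops such messages from failing to parse.)
    function stripTrailingFieldDelimiters(segment: string, fieldSep = "|"): string {
      let s = segment;
      while (s.endsWith(fieldSep)) {
        s = s.slice(0, -1);
      }
      return s;
    }

    console.log(hasTrailingFieldDelimiter("NTE|P||"));    // true
    console.log(stripTrailingFieldDelimiters("NTE|P||")); // "NTE|P"
    ```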

    Read the article

  • JavaScript Intellisense Improvements with VS 2010

    - by ScottGu
    This is the twentieth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release.  Today’s blog post covers some of the nice improvements coming to JavaScript intellisense with VS 2010 and the free Visual Web Developer 2010 Express.  You’ll find with VS 2010 that JavaScript Intellisense loads much faster for large script files and with large libraries, and that it now provides statement completion support for more advanced scenarios compared to previous versions of Visual Studio. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Improved JavaScript Intellisense Providing Intellisense for a dynamic language like JavaScript is more involved than doing so with a statically typed language like VB or C#.  Correctly inferring the shape and structure of variables, methods, etc. is pretty much impossible without pseudo-executing the actual code itself – since JavaScript as a language is flexible enough to dynamically modify and morph these things at runtime.  VS 2010’s JavaScript code editor now has the smarts to perform this type of pseudo-code execution as you type – which is how its intellisense completion is kept accurate and complete.  Below is a simple walkthrough that shows off how rich and flexible it is with the final release. Scenario 1: Basic Type Inference When you declare a variable in JavaScript you do not have to declare its type.  Instead, the type of the variable is based on the value assigned to it.  Because VS 2010 pseudo-executes the code within the editor, it can dynamically infer the type of a variable, and provide the appropriate code intellisense based on the value assigned to a variable. For example, notice below how VS 2010 provides statement completion for a string (because we assigned a string to the “foo” variable): If we later assign a numeric value to “foo” the statement completion (after this assignment) automatically changes to provide intellisense for a number: Scenario 2: Intellisense When Manipulating Browser Objects It is pretty common with JavaScript to manipulate the DOM of a page, as well as work against browser objects available on the client.  Previous versions of Visual Studio would provide JavaScript statement completion against the standard browser objects – but didn’t provide much help with more advanced scenarios (like creating dynamic variables and methods).  VS 2010’s pseudo-execution of code within the editor now allows us to provide rich intellisense for a much broader set of scenarios. For example, below we are using the browser’s window object to create a global variable named “bar”.  Notice how we can now get intellisense (with correct type inference for a string) with VS 2010 when we later try and use it: When we assign the “bar” variable as a number (instead of as a string) the VS 2010 intellisense engine correctly infers its type and modifies statement completion appropriately to be that of a number instead: Scenario 3: Showing Off Because VS 2010 is pseudo-executing code within the editor, it is able to handle a bunch of scenarios (both practical and wacky) that you throw at it – and is still able to provide accurate type inference and intellisense. For example, below we are using a for-loop and the browser’s window object to dynamically create and name multiple dynamic variables (bar1, bar2, bar3…bar9).  
    Notice how the editor’s intellisense engine identifies and provides statement completion for them: Because variables added via the browser’s window object are also global variables – they also now show up in the global variable intellisense drop-down as well: Better yet – type inference is still fully supported.  So if we assign a string to a dynamically named variable we will get type inference for a string.  If we assign a number we’ll get type inference for a number.  Just for fun (and to show off!) we could adjust our for-loop to assign a string for even numbered variables (bar2, bar4, bar6, etc.) and assign a number for odd numbered variables (bar1, bar3, bar5, etc.): Notice above how we get statement completion for a string for the “bar2” variable.  Notice below how for “bar1” we get statement completion for a number: This isn’t just a cool pet trick. While the above example is a bit contrived, the approach of dynamically creating variables, methods and event handlers on the fly is pretty common with many JavaScript libraries.  Many of the more popular libraries use these techniques to keep the size of script library downloads as small as possible.  VS 2010’s support for parsing and pseudo-executing libraries that use these techniques ensures that you get better code Intellisense out of the box when programming against them. Summary Visual Studio 2010 (and the free Visual Web Developer 2010 Express) now provides much richer JavaScript intellisense support.  This support works with pretty much all popular JavaScript libraries.  It should help provide a much better development experience when coding client-side JavaScript and enabling AJAX scenarios within your ASP.NET applications. Hope this helps, Scott P.S. You can read my previous blog post on VS 2008’s JavaScript Intellisense to learn more about our previous JavaScript intellisense (and some of the scenarios it supported).  VS 2010 obviously supports all of the scenarios previously enabled with VS 2008.
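    Since the original screenshots are not reproduced here, the following sketch shows the kind of JavaScript the walkthrough describes (written in TypeScript-compatible syntax; the variable names follow the post, everything else is illustrative):

    ```ts
    // Scenario 1: basic type inference - the editor pseudo-executes assignments.
    let foo: any = "hello";   // after this line, foo. completes with string members
    foo = 42;                 // after this line, foo. completes with number members

    // Scenarios 2 and 3: dynamic globals created through the browser's window object.
    const w = window as any;
    w.bar = "created at runtime"; // bar becomes a global variable inferred as a string

    for (let i = 1; i <= 9; i++) {
      // bar1..bar9 become globals; even-numbered ones hold strings, odd-numbered
      // ones hold numbers, and the editor infers the right completion list for each.
      w["bar" + i] = i % 2 === 0 ? "a string" : i;
    }
    ```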

    Read the article

  • Talking JavaOne with Rock Star Martijn Verburg

    - by Janice J. Heiss
    JavaOne Rock Stars, conceived in 2005, are the top-rated speakers at each JavaOne Conference. They are awarded by their peers, who, through conference surveys, recognize them for their outstanding sessions and speaking ability. Over the years many of the world’s leading Java developers have been so recognized. Martijn Verburg has, in recent years, established himself as an important mover and shaker in the Java community. His “Diabolical Developer” session at the JavaOne 2011 Conference got people’s attention by identifying some of the worst practices Java developers are prone to engage in. Among other things, he is co-leader and organizer of the thriving London Java User Group (JUG) which has more than 2,500 members, co-represents the London JUG on the Executive Committee of the Java Community Process, and leads the global effort for the Java User Group “Adopt a JSR” and “Adopt OpenJDK” programs. Career highlights include overhauling technology stacks and SDLC practices at Mizuho International, mentoring Oracle on technical community management, and running off shore development teams for AIG. He is currently CTO at jClarity, a start-up focusing on automating optimization for Java/JVM related technologies, and Product Advisor at ZeroTurnaround. He co-authored, with Ben Evans, "The Well-Grounded Java Developer" published by Manning and, as a leading authority on technical team optimization, he is in high demand at major software conferences.Verburg is participating in five sessions, a busy man indeed. Here they are: CON6152 - Modern Software Development Antipatterns (with Ben Evans) UGF10434 - JCP and OpenJDK: Using the JUGs’ “Adopt” Programs in Your Group (with Csaba Toth) BOF4047 - OpenJDK Building and Testing: Case Study—Java User Group OpenJDK Bugathon (with Ben Evans and Cecilia Borg) BOF6283 - 101 Ways to Improve Java: Why Developer Participation Matters (with Bruno Souza and Heather Vancura-Chilson) HOL6500 - Finding and Solving Java Deadlocks (with Heinz Kabutz, Kirk Pepperdine, Ellen Kraffmiller and Henri Tremblay) When I asked Verburg about the biggest mistakes Java developers tend to make, he listed three: A lack of communication -- Software development is far more a social activity than a technical one; most projects fail because of communication issues and social dynamics, not because of a bad technical decision. Sadly, many developers never learn this lesson. No source control -- Developers simply storing code in local filesystems and emailing code in order to integrate Design-driven Design -- The need for some developers to cram every design pattern from the Gang of Four (GoF) book into their source code All of which raises the question: If these practices are so bad, why do developers engage in them? “I've seen a wide gamut of reasons,” said Verburg, who lists them as: * They were never taught at high school/university that their bad habits were harmful.* They weren't mentored in their first professional roles.* They've lost passion for their craft.* They're being deliberately malicious!* They think software development is a technical activity and not a social one.* They think that they'll be able to tidy it up later.A couple of key confusions and misconceptions beset Java developers, according to Verburg. “With Java and the JVM in particular I've seen a couple of trends,” he remarked. “One is that developers think that the JVM is a magic box that will clean up their memory, make their code run fast, as well as make them cups of coffee. 
    The JVM does help in a lot of cases, but bad code can and will still lead to terrible results! The other trend is to try and force Java (the language) to do something it's not very good at, such as rapid web development. So you get a proliferation of overly complex frameworks, libraries and techniques trying to get around the fact that Java is a monolithic, statically typed, compiled, OO environment. It's not a Golden Hammer!” I asked him about the keys to running a good Java User Group. “You need to have a ‘Why,’” he observed. “Many user groups know what they do (typically, events) and how they do it (the logistics), but what really drives users to join your group and to stay is to give them a purpose. For example, within the LJC we constantly talk about the ‘Why,’ which in our case is several whys: * Re-ignite the passion that developers have for their craft * Raise the bar of Java developers in London * We want developers to have a voice in deciding the future of Java * We want to inspire the next generation of tech leaders * To bring the disparate tech groups in London together * So we could learn from each other * We believe that the Java ecosystem forms a cornerstone of our society today -- we want to protect that for the future. Looking ahead to Java 8, Verburg expressed excitement about Lambdas. “I cannot wait for Lambdas,” he enthused. “Brian Goetz and his group are doing a great job, especially given some of the backwards compatibility that they have to maintain. It's going to remove a lot of boilerplate and yet maintain readability, plus enable massive scaling.” Check out Martijn Verburg at JavaOne if you get a chance, and stay tuned for a longer interview yours truly did with Martijn, to be published on otn/java some time after JavaOne. Originally published on blogs.oracle.com/javaone.

    Read the article

  • Configuring Expert Search in Communicator 14 and SharePoint 2010

    Communicator 14 provides functionality to be able to search for contacts not just by name, but by skill.  For example, a customer service agent at an airline can search for other agents with Travel Advisory experience by typing the search criteria into the Communicator search box and performing a search by keyword.  The search results will return users who have specified that skill in their profile on their SharePoint My Site.  This is actually pretty easy to configure; I'll show you how. Create Search and People Search Results Pages in SharePoint Communicator 14 Expert Search works by using the SharePoint 2010 Search Service to search SharePoint for user profiles with matching keywords.  This requires that you have an Enterprise Search site in your site collection which includes the search service and also the People Results pages.  The easiest way to do this is to create a Search Center site in your site collection. Note: I get an error when trying to create an Enterprise Search site in a Team Site in the SharePoint 2010 RTM bits, so I created it as a site collection, which is evident in the URLs you see below. In this example, the URL of the SharePoint search service in the Search site collection is http://sps2010/sites/search/_vti_bin/search.asmx, and the URL of the People Search Results page is http://sps2010/sites/Search/Pages/peopleresults.aspx. Point Communications Server 14 to Search and People Search Results Pages For Communicator 14 to be able to perform an Expert Search, you need to configure Communications Server 14 to point to the Search Service and People Search Results page URLs. From a server with the OCS Core bits installed, fire up the Communications Server Management Shell and type Get-CsClientPolicy. Scroll down to the bottom of the output; we're interested in setting the values of: SPSearchInternalURL SPSearchExternalURL SPSearchCenterInternalURL SPSearchCenterExternalURL SPSearchInternalURL and SPSearchExternalURL correspond to the internal and external URLs of the SharePoint search service in the Search site collection, while SPSearchCenterInternalURL and SPSearchCenterExternalURL correspond to the internal and external URLs of the people search results pages. We'll use the Communications Server Management Shell to set the values of these CS policy properties. For simplicity, I'm only going to set the internal URLs here: Set-CsClientPolicy -SPSearchInternalURL http://sps2010/sites/search/_vti_bin/search.asmx -SPSearchCenterInternalURL http://sps2010/sites/Search/Pages/peopleresults.aspx Log out and back into Communicator.  You can verify that these settings were applied by running the Get-CsClientPolicy cmdlet again from the Communications Server Management Shell. However, there's another super-secret ninja trick to verify that the settings were applied: find the Communicator icon in the Windows System Tray, hold down the Ctrl button, and left-click the icon while keeping Ctrl held down. You should now see an extra menu item called Configuration Information; click it. Scroll down and locate the Expert Search URL and SharePoint Search Center URL keys and verify that their values correspond to those you set using the Set-CsClientPolicy PowerShell cmdlet. Configure a SharePoint User Profile Import I'm not going to provide detailed steps here except to say that you need to configure the SharePoint 2010 User Profile Service Application to import user account details from Active Directory on a scheduled basis. 
    This is a critical step because there are several user profile properties, e.g. SipAddress, that are only populated by a user profile import.  When performing an Expert Search, Communicator can only render results for users who have a SipAddress specified. Add Skills to User Profiles Navigate to your My Site and click on My Profile.  This page allows you to set many contact details that are searchable in SharePoint.  We're particularly interested in the Ask Me About property of a user's profile.  Expert Search searches against this property to find users with matching skills. Configure a SharePoint Search Crawl Ensure that you have a scheduled job to crawl your Local SharePoint Sites content source.  Depending on how you have this configured, it will also crawl the My Site site collection and add user properties such as Ask Me About to the search index. That's it! SharePoint 2010 provides new social and collaboration features to help users find other users with similar skills or interests. Expert Search extends this functionality directly into Microsoft Communicator 14, allowing you to interact with the users directly from the search results.

    Read the article

  • HTML5 and CSS3 Editing in Windows Live Writer

    - by Rick Strahl
    Windows Live Writer is a wonderful tool for editing blog posts and getting them posted to your blog. What makes it nice is that it has a small set of useful features, plus a simple plug-in model that has spawned many useful add-ins. A small tool with a reasonably decent plug-in model to extend it equals a great solution to a simple problem. If you're running Windows, have a blog and aren’t using Live Writer you’re probably doing it wrong… One of Live Writer’s nice features is that it can download your blog’s CSS for preview and edit displays. It lets you edit your content inside of the context of that CSS using the WYSIWYG editor, so your content actually looks very close to what you’ll see on your blog while you’re editing your post. Unfortunately Live Writer renders the HTML content in the Web Browser Control’s default IE 7 rendering mode. Yeah you read that right: IE 7 is the default for the Web Browser control, and most applications that use it are stuck in this mode unless the application explicitly overrides this default. The Web Browser control does not use the version of Internet Explorer installed on the system (IE 10 on my Win8 machine) but uses IE 7 mode for ‘compatibility’ for old applications. If you are importing your blog’s CSS, that may suck if you’re using rich HTML 5 and CSS 3 formatting. Hack the Registry to get Live Writer to render using IE 9 or 10 In order to get Live Writer (or any other application that uses the Web Browser Control, for that matter) to render with a more recent engine, you can apply a registry hack that overrides the Web Browser Control engine usage for a specific application. I wrote about this in detail in a previous blog post a couple of years back. Here’s how you can set up Windows Live Writer to render your CSS 3 by making a change in your registry. The setup shown here is for a 64-bit machine, where I configure Live Writer, which is a 32-bit application, to use IE 10 rendering. The keys set are as follows: 32-bit configuration on a 64-bit machine: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION, Key: WindowsLiveWriter.exe, Value: 9000 or 10000 (IE 9 or 10 respectively) (DWORD value). On a 32-bit only machine: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION, Key: WindowsLiveWriter.exe, Value: 9000 or 10000 (IE 9 or 10 respectively) (DWORD value). Use decimal values of 9000, 10000 or 11000 to specify specific versions of Internet Explorer. This is a minor tweak, but it’s nice to actually see my blog posts now with the proper CSS formatting intact. Notice the rounded borders and shadow on the code blocks as well as the overflow-x and scrollbars that show up. In this particular case I can see what the code blocks actually look like in a specific resolution – much better than in the old plain view which just chopped things off at the end of the window frame. There are a few other elements that now show properly in the editor as well, including block quotes and note boxes that I occasionally use. It’s minor stuff, but it makes the editing experience better yet and closer to the final thing, so there are fewer republish operations than I previously had. Sweet! Note that this approach of putting an IE version override into the registry works with most applications that use the Web Browser control. 
    If you are using the Web Browser control in your own applications, it’s a good idea to switch the browser to a more recent version so you can take advantage of HTML 5 and CSS 3 in your browser-displayed content, by automatically setting this flag in the registry or as part of the application’s startup routine if no dedicated setup tool is used. At the very least you might set it to 9000 (IE 9), which supports most of the basic CSS3 features and is a decent baseline that works for most Windows 7 and 8 machines. If running pre-IE9, the browser will fall back to IE7 rendering and look bad, but at least more recent browsers will see an improved experience. I’m surprised that there aren’t more vendors and third party apps using this feature. You can see in my first screen shot that there are only very few entries in the registry key group on my machine – any other apps that use the Web Browser control are using IE7. Go figure. Certainly Windows Live Writer should be writing this key into the registry automatically as part of installation to support this functionality out of the box, but alas, since it does not, this registry hack lets you get your way anyway… Resources: .reg files to register Live Writer Browser Emulation (set for IE9); Specifying Internet Explorer Version for Applications; SnagIt LiveWriter Plug-in; Download Windows Live Writer; Download Windows Live Writer with Chocolatey. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Live Writer, Windows.
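    For reference, here is what the 64-bit setting described above looks like as a .reg file (a sketch based on the key and value the post gives; use dword:00002328, i.e. decimal 9000, for IE 9 instead):

    ```reg
    Windows Registry Editor Version 5.00

    ; Force Windows Live Writer's Web Browser control into IE 10 rendering mode
    ; (dword 00002710 hex = 10000 decimal). On a 32-bit machine drop the Wow6432Node part.
    [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION]
    "WindowsLiveWriter.exe"=dword:00002710
    ```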

    Read the article

  • Hello With Oracle Identity Manager Architecture

    - by mustafakaya
    Hi, my name is Mustafa! I'm a Senior Consultant in the Fusion Middleware team, living in Istanbul, Turkey. I have worked on many Java-based software development projects, such as end-to-end web applications, CRM, telco VAS and integration projects. I want to share my experiences and research about Fusion Middleware products in this column. Customers always want the best solution from software consultants or developers. The solution might be a code snippet or a complete change of architecture; we face different requests depending on the customer's case. In my posts I want to discuss Fusion Middleware product architecture, how you can extend usability with APIs or UI customization, and more, and I look forward to engaging with you on your experiences and thoughts on this. In my first post I will be discussing the Oracle Identity Manager architecture, and I plan to discuss Oracle Identity Manager 11g features in later posts. Oracle Identity Manager System Architecture Oracle Identity Governance includes Oracle Identity Manager, Oracle Identity Analytics and Oracle Privileged Account Manager. I will discuss the Oracle Identity Manager architecture in this post. Basically, Oracle Identity Manager is a standard n-tier Java EE application that is deployed on Oracle WebLogic Server and uses a database. The Oracle Identity Manager presentation tier has three different screens and two different clients. Identity Self Service and Identity System Administration are web-based thin clients. Design Console is a Java Swing client that communicates directly with the business service tier. Identity Self Service provides end-user operations and delegated administration features. System Administration provides system administration functions. And Design Console is mostly used for development and management operations such as creating and managing adapters and process forms, notifications, workflow design, reconciliation rules, etc. The business service tier is implemented as an Enterprise JavaBeans (EJB) application, so you can extend Oracle Identity Manager capabilities. -The SPML and EJB APIs allow you to develop custom plug-ins, such as managing roles or identities. -Identity Services let you use core business capabilities of Oracle Identity Manager, such as the user provisioning or reconciliation service. -Integration Services allow you to develop custom connectors or adapters for various deployment needs. -Platform Services let you use the Entitlement Server, Scheduler or SOA composites. The middleware tier lets you use the capabilities of ADF Faces, SOA Suite, Scheduler, Entitlement Server and BI Publisher reports. So OIM allows you to configure workflows using Oracle SOA Suite, or define authorization policies with Oracle Entitlement Server. You can also customize the OIM UI without needing to write code, and using the ADF Business Editor you can add custom attributes to user, role, catalog and other objects. Data tier: Oracle Identity Manager is driven by data and metadata, which provides flexibility and adaptability to Oracle Identity Manager functionalities. -The database has five schemas: OIM, SOA, MDS, OPSS and OES. Oracle Identity Manager uses the database to store runtime and configuration data, and all entity, transactional and audit data is stored in the database. 
    -Metadata Store: customizations and personalizations are stored in a file-based repository or a database-based repository. In the Oracle Identity Manager architecture, the metadata is kept in the Oracle Identity Manager database to take advantage of some of the advanced performance and availability features that this mode provides. -Identity Store: Oracle Identity Manager provides the ability to integrate an LDAP-based identity store into the Oracle Identity Manager architecture. Oracle Identity Manager uses the human workflow module of Oracle SOA Suite; OIM connects to SOA using the T3 URL, which is the front-end URL for the SOA server. Oracle Identity Manager uses the embedded Oracle Entitlement Server for authorization checks in the OIM engine. Several Oracle Identity Manager modules use JMS queues. Each queue is processed by a separate Message Driven Bean (MDB), which is also part of the Oracle Identity Manager application; message producers are also part of the Oracle Identity Manager application. Oracle Identity Manager uses scheduled jobs for some background activities. Some scheduled jobs come out of the box, such as disabling users after their end date, or you can define your own custom scheduled jobs with the Oracle Identity Manager APIs. You can use Oracle BI Publisher for reporting on Oracle Identity Manager transactions or audit data that are in the database. About me: Mustafa Kaya is a Senior Consultant in the Oracle Fusion Middleware team, living in Istanbul. Before coming to Oracle, he worked in teams developing web applications and backend services at a telco company. He is a Java technology enthusiast and software engineer who loves learning new technologies and developing new ideas. Follow Mustafa on Twitter, connect on LinkedIn, and visit his site for Oracle Fusion Middleware related tips.

    Read the article

  • The ugly evolution of running a background operation in the context of an ASP.NET app

    - by Jeff
    If you’re one of the two people who has followed my blog for many years, you know that I’ve been going at POP Forums now for over almost 15 years. Publishing it as an open source app has been a big help because it helps me understand how people want to use it, and having it translated to six languages is pretty sweet. Despite this warm and fuzzy group hug, there has been an ugly hack hiding in there for years. One of the things we find ourselves wanting to do is hide some kind of regular process inside of an ASP.NET application that runs periodically. The motivation for this has always been that a lot of people simply don’t have a choice, because they’re running the app on shared hosting, or don’t otherwise have access to a box that can run some kind of regular background service. In POP Forums, I “solved” this problem years ago by hiding some static timers in an HttpModule. Truthfully, this works well as long as you don’t run multiple instances of the app, which in the cloud world, is always a possibility. With the arrival of WebJobs in Azure, I’m going to solve this problem. This post isn’t about that. The other little hacky problem that I “solved” was spawning a background thread to queue emails to subscribed users of the forum. This evolved quite a bit over the years, starting with a long running page to mail users in real-time, when I had only a few hundred. By the time it got into the thousands, or tens of thousands, I needed a better way. What I did is launched a new thread that read all of the user data in, then wrote a queued email to the database (as in, the entire body of the email, every time), with the properly formatted opt-out link. It was super inefficient, but it worked. Then I moved my biggest site using it, CoasterBuzz, to an Azure Website, and it stopped working. So let’s start with the first stupid thing I was doing. The new thread was simply created with delegate code inline. As best I can tell, Azure Websites are more aggressive about garbage collection, because that thread didn’t queue even one message. When the calling server response went out of scope, so went the magic background thread. Duh, all I had to do was move the thread to a private static variable in the class. That’s the way I was able to keep stuff running from the HttpModule. (And yes, I know this is still prone to failure, particularly if the app recycles. For as infrequently as it’s used, I have not, however, experienced this.) It was still failing, but this time I wasn’t sure why. It would queue a few dozen messages, then die. Running in Azure, I had to turn on the application logging and FTP in to see what was going on. That led me to a helper method I was using as delegate to build the unsubscribe links. The idea here is that I didn’t want yet another config entry to describe the base URL, appended with the right path that would match the routing table. No, I wanted the app to figure it out for you, so I came up with this little thing: public static string FullUrlHelper(this Controller controller, string actionName, string controllerName, object routeValues = null) { var helper = new UrlHelper(controller.Request.RequestContext); var requestUrl = controller.Request.Url; if (requestUrl == null) return String.Empty; var url = requestUrl.Scheme + "://"; url += requestUrl.Host; url += (requestUrl.Port != 80 ? ":" + requestUrl.Port : ""); url += helper.Action(actionName, controllerName, routeValues); return url; } And yes, that should have been done with a string builder. 
    This is useful for sending out the email verification messages, too. As clever as I thought I was with this, I was using a delegate in the admin controller to format these unsubscribe links for tens of thousands of users. I passed that delegate into a service class that did the email work: Func<User, string> unsubscribeLinkGenerator = user => this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = user.UserID, key = _profileService.GetUnsubscribeHash(user) }); _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator); Cool, right? Actually, not so much. If you look back at the helper, this delegate will then depend on the controller context to learn the routing and format for the URL. As you might have guessed, those things were turning null after a few dozen formatted links, when the original request to the admin controller went away. That this wasn’t already happening on my dedicated server is surprising, but again, I understand why the Azure environment might be eager to reclaim a thread after servicing the request. It’s already inefficient that I’m building the entire email for every user, but going back to check the routing table for the right link every time isn’t a win either. I put together a little hack to look up one generic URL, and use that as the basis for a string format. If you’re wondering why I didn’t just use the curly braces up front, it’s because they get URL formatted: var baseString = this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = "--id--", key = "--key--" }); baseString = baseString.Replace("--id--", "{0}").Replace("--key--", "{1}"); Func<User, string> unsubscribeLinkGenerator = user => String.Format(baseString, user.UserID, _profileService.GetUnsubscribeHash(user)); _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator); And wouldn’t you know it, the new solution works just fine. It’s still kind of hacky and inefficient, but it will work until this somehow breaks too.

    Read the article

  • Hiding an item conditionally through SPEL in OAF ( VO Extension + Personalization )

    - by Manoj Madhusoodanan
    In this blog I will explain how to conditionally set a property of an item through personalization. Let me discuss it using a business scenario. My customer wants to make the Hold from Payment / All Invoices column read-only when the Operating Unit is UK (configured in a lookup XXCUST_EXCLUDED_ORGS). Analysis First we have to find out the page and business components. Page: /oracle/apps/pos/supplier/webui/QuickUpdatePG View Object: oracle.apps.pos.supplier.server.SitesVO Solution Download oracle.apps.pos.supplier.server.SitesVO from $JAVA_TOP to JDEV_USER_HOME/myprojects. Make sure you use the right transfer mode for each file type: .xml ASCII, .class Binary, .tar Binary, .java ASCII. Since there is no VO attribute available to determine the site Org against the lookup Org, we have to add the logic inside a custom VO attribute, so a VO extension is required in this scenario. Add an attribute "isPymtReadOnlyStr" in the existing query. This column returns 'Y' if there is a match in the lookup, otherwise 'N'. Create a transient attribute "isPymtReadOnly" of type BOOLEAN. This will return TRUE if "isPymtReadOnlyStr" is "Y", otherwise FALSE. The reason for adding "isPymtReadOnly" is that we are setting the item property as read-only through SPEL, which recognizes only BOOLEAN, but a SQL query doesn't support BOOLEAN. So we are building a BOOLEAN attribute from the SQL column, which we will use in the personalization layer to set the item property. Steps 1) Create a new VO xxcust.oracle.apps.pos.supplier.server.XXCUSTSitesVO which extends oracle.apps.pos.supplier.server.SitesVO. Make sure the binding style is the same as SitesVO. Create the XXCUSTSitesVO with the same query as SitesVO; later we will add the new attribute to XXCUSTSitesVO. At this point all the existing VO attributes have their Updatable property set to "Always". Press Finish without creating XXCUSTSitesVORowImpl.java. 2) Select the XXCUSTSitesVO from the JDeveloper Application Navigator. Modify the query and add the extra column. 3) Create a new transient attribute as follows. 4) Once you modify the query, all the existing attributes' Updatable property will change to Never, so revert that property back to the original. 5) Create XXCUSTSitesVORowImpl.java by checking the following check box. 6) Add the following code snippet in XXCUSTSitesVORowImpl.java. 7) Create the substitution for SitesVO as follows. The following entry will get created in the current jpx file:    <Substitutes>      <Substitute OldName ="oracle.apps.pos.supplier.server.SitesVO" NewName ="xxcust.oracle.apps.pos.supplier.server.XXCUSTSitesVO" />   </Substitutes> 8) Migrate XXCUSTSitesVOImpl.java, XXCUSTSitesVORowImpl.java and XXCUSTSitesVO.xml from the desktop to the actual instance. 9) Migrate the substitution using jpximporter. 
    10) Restart the server and verify that the substitution has been applied correctly. 11) Go to /oracle/apps/pos/supplier/webui/QuickUpdatePG and personalize the page. Set the item's Read Only property to ${oa.SitesVO.IsPymtReadOnly}. 12) Click Apply and, on the next page, click Return to Application. Verify your output. Note: You can remove the substitution using the following script; please click here.

    Read the article

  • Preview and Purchase Ebooks with Kindle for PC

    - by Matthew Guay
    Want to look over a new book, or buy it immediately in ebook format?  Here’s how you can preview and purchase most new books from your PC the easy way. Most new books, including almost all New York Times Bestsellers, are available in ebook format from Amazon’s Kindle store.  The Kindle store also includes numerous free ebooks, including out-of-print classics and a surprising number of recent books.  With the free Kindle for PC reader, you can read any of these ebooks without having to purchase a Kindle device. Preview Ebooks Before you Purchase Sometimes, it can be hard to know if you want to purchase a new book without reading some of it first.  With Kindle for PC, however, you can download a sample of any ebook available for free.  The sample usually includes the table of contents, foreword or introduction, and often part or all of the first chapter. To get an ebook sample, find the book you want in the Kindle store (link below). Now, under the Try it free box, select the correct computer or device to send the sample to, and click Send Sample now. Amazon will thank you for your order, even though this is only a free preview.  Click the Go to Kindle for PC button to open Kindle and read your ebook preview.   Or, if Kindle is already running, press the Refresh button in the top right corner to check for new ebooks and previews. Kindle will synchronize and download the previews you selected. The most recently downloaded items show up on the top left.  All sample books have a red “Sample” bar on the bottom of their cover, and they also include Buy and more-info links on their covers.  Double-click your sample to start reading it. Your ebook sample will usually open at the introduction or beginning of the first chapter, but you can also view the index, cover, and more. When you reach the end of the sample book, you can click a link to buy the book or view more details about it.  Strangely, both of these links currently take you to the ebook’s page on Amazon.com, but perhaps in the future the Buy link will directly let you purchase the book. Or, you can also click Buy Now on a sample book directly from your Kindle library. If you clicked one of these links, you will be returned to the ebook’s page on Amazon.  Choose the PC or Kindle you want the book delivered to, and this time, select Buy Now with 1-Click. Add your payment info if you’re not already set up for 1-Click Shopping, and then you’ll be shown the same Thank you page as before.  Refresh Kindle for PC, and your new ebook will automatically download.  Strangely, the sample ebook is not automatically removed, so you can right-click on the sample and select Delete this Book.  Additionally, your last-read page in the sample is not synced to the purchased book, so you may have to find your place again. Now, enjoy your full ebook! Download Free Books for Kindle The Kindle Store has an amazing number of free ebooks.  Some free books may only be free for a limited time as a promotion, while others, such as old classics, may always be free.  Either way, once you download it, you can keep it forever. When you find a free ebook you want, select the Kindle or PC you want to download it to and click “Buy now with 1-Click”.  Notice that this book shows its price as $0.00, but the button still says Buy now.  Rest assured, if the book’s price shows up as $0.00, you will not be charged anything for downloading it. Your ebook will download as usual after your next refresh.  
Note that you can still download the sample first if you want, but since the book is free, just download the whole thing and delete it if you don’t want it. Redownload your Purchased or Free Books If you install Kindle on a new PC or delete a book from your library, you can always re-download it from your Amazon account.  Browse to the Manage your Kindle page on Amazon (link below), sign in with your Amazon account, and scroll down to the list of your purchased content. Select the book you wish to download, then choose the Kindle or PC you want to download it to and press Go. Note: There is a “Delete this title” button right below this.  If you press the Delete button, you will not ever be able to re-download it. Or, you can download the book directly from the Archived Items tab in Kindle on your other PC. And, if you have your Kindle content on multiple computers, your reading will be synced via Whispersync.  You can start reading on your desktop, and then resume where you left off from your laptop. Conclusion With these tips and tricks, it is much easier to preview and purchase new books, find and download free ebooks, and re-download any you’ve deleted from your PC.  Have fun filling up your digital library! Links Manage your Kindle account

    Read the article

  • Adding UCM as a search source in Windows Explorer

    - by kyle.hatlestad
    A customer recently pointed out to me that Windows 7 supports federated search within Windows Explorer. This means you can perform searches against external sources such as Google, Flickr, YouTube, etc. right from within Explorer. While we do have the Desktop Integration Suite which offers searching within Explorer, I thought it would be interesting to look into this method which would not require any client software to implement. Basically, federated searching hooks up in Windows Explorer through the OpenSearch protocol. A Search Connector Descriptor file is run and it installs the search provider. The file is a .osdx file which is an OpenSearch Description document. It describes the search provider you are hooking up to along with the URL for the query. If those results can come back as an RSS or ATOM feed, then you're all set. So the first step is to install the RSS Feeds component from the UCM Samples page on OTN. If you're on 11g, I've found the RSS Feeds component works just fine on that version too. Next, you want to perform a Quick Search with a particular search term and then copy the RSS link address for that search result. Here is what an example URL might look like: http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&feedName=search_results&QueryText=%28+%3cqsch%3eoracle%3c%2fqsch%3e+%29&SortField=dInDate&SortOrder=Desc&ResultCount=20&SearchQueryFormat=Universal&SearchProviders=server& Now you want to create a new text file and start out with this information: <?xml version="1.0" encoding="UTF-8"?><OpenSearchDescription xmlns:ms-ose="http://schemas.microsoft.com/opensearchext/2009/"> <ShortName></ShortName> <Description></Description> <Url type="application/rss+xml" template=""/> <Url type="text/html" template=""/> </OpenSearchDescription> Enter a ShortName and Description. The ShortName will be the value used when displaying the search provider in Explorer. In the template attribute for the first Url element, enter the URL copied previously. You will then need to convert the ampersand symbols to '&amp;' to make them XML compliant. Finally, you'll want to switch out the search term with '{searchTerms}'. For the second Url element, you can do the same thing except you want to copy the UCM search results URL from the page of results. That URL will look something like: http://server:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&SortField=dInDate&SortOrder=Desc&ResultCount=20&QueryText=%3Cqsch%3Eoracle%3C%2Fqsch%3E&listTemplateId=&ftx=1&SearchQueryFormat=Universal&TargetedQuickSearchSelection=&MiniSearchText=oracle Again, convert the ampersand symbols and replace the search term with '{searchTerms}'. When complete, save the file with the .osdx extension. 
The completed file should look like: <?xml version="1.0" encoding="UTF-8"?> <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/" xmlns:ms-ose="http://schemas.microsoft.com/opensearchext/2009/"> <ShortName>Universal Content Management</ShortName> <Description>OpenSearch for UCM via Windows 7 Search Federation.</Description> <Url type="application/rss+xml" template="http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&amp;feedName=search_results&amp;QueryText=%28+%3Cqsch%3E{searchTerms}%3C%2fqsch%3E+%29&amp;SortField=dInDate&amp;SortOrder=Desc&amp;ResultCount=200&amp;SearchQueryFormat=Universal"/> <Url type="text/html" template="http://server:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&amp;SortField=dInDate&amp;SortOrder=Desc&amp;ResultCount=20&amp;QueryText=%3Cqsch%3E{searchTerms}%3C%2Fqsch%3E&amp;listTemplateId=&amp;ftx=1&amp;SearchQueryFormat=Universal&amp;TargetedQuickSearchSelection=&amp;MiniSearchText={searchTerms}"/> </OpenSearchDescription> After you save the file, simply double-click it to create the provider. It will ask if you want to add the search connector to Windows. Click Add and it will add it to the Searches folder in your user folder as well as your Favorites. Now just click on the search icon and in the upper right search box, enter your term. As you are typing, it begins executing searches and the results will come back in Explorer. Now when you double-click on an item, it will try and download the web viewable for viewing. You also have the ability to save the search, just as you would in UCM. And there is a link to Search On Website which will launch your browser and go directly to the search results page there. And with some tweaks to the RSS component, you can make the results a bit more interesting. It supports the Media RSS standard, so you can pass along the thumbnail of the documents in the results. To enable this, edit the rss_resources.htm file in the RSS Feeds component. In the std_rss_feed_begin resource include, add the namespace 'xmlns:media="http://search.yahoo.com/mrss/' to the rss definition: <rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:media="http://search.yahoo.com/mrss/"> Next, in the rss_channel_item_with_thumb include, below the closing image element, add this element: </images> <media:thumbnail url="<$if strIndexOf(thumbnailUrl, "@t") > 0 or strIndexOf(thumbnailUrl, "@g") > 0 or strIndexOf(thumbnailUrl, "@p") > 0$><$rssHttpHost$><$thumbnailUrl$><$elseif dGif$><$HttpWebRoot$>images/docgifs/<$dGif$><$endif$>" /> <description> This and lots of other tweaks can be done to the RSS component to help extend it for optimum use in Explorer. Hopefully this can get you started. *Note: This post also applies to Universal Records Management (URM).

    Read the article

  • doubleTwist is an iTunes Alternative that Supports Several Devices

    - by Mysticgeek
    There are a lot of iTunes users out there, but unfortunately you can’t use it with all of your portable devices. Today we take a look at doubleTwist, which allows you to sync your media with a multitude of portable devices and easily share it as well. Note: You can run doubleTwist on Windows or Mac, and here we take a look at the Windows version. Install & Setup doubleTwist Download and install doubleTwist using the defaults in the wizard… Installation takes several moments and you’ll see the progress while it finishes up. After installation is complete, sign up for an account if you don’t already have one. If you do have an account you can log in right away. Enter in your username, email address, and password then click Sign Up.   You’ll get a confirmation email and need to activate the account before you can sign in. Once you’re all signed up, launch doubleTwist and you’ll be ready to start using it. doubleTwist Music The default music store is the Amazon MP3 Store, which might appeal to those of you who are tired of the iTunes music store. A lot of times the music is cheaper and available at higher bit rates. You can start searching for music in the Amazon Music Store and previewing songs. To purchase anything though you will need to sign into your Amazon account.   Under Playlists it allows you to import your playlists from iTunes and Windows Media Player, which is a handy feature if you don’t want to set them up again. Of course you can play your songs through the music player on your desktop. Devices One of the coolest things about doubleTwist is that it supports a lot of different portable media devices including iPod, BlackBerry, Windows Mobile, Android, PSP, Smartphones, and much more. Unfortunately for Zune users…there isn’t any support for the Zune or Zune HD yet. Here we have a Creative Zen attached and can sync songs, pictures, and podcasts. An HTC-S620 Smartphone running Windows Mobile… Even a simple USB drive will be recognized and you can transfer your media to it as well.   Podcasts Finding your favorite audio and video podcasts is easy with the search feature. You can easily manage and subscribe to podcasts in the subscriptions section.   You can watch the video podcasts directly in doubleTwist. Sharing Media Also you can share digital media with your friends or add it to Flickr and YouTube. You can send any pictures, videos, or music in your library to other people by dragging it over. You can email users individually… Or access contacts from your Gmail and Yahoo accounts. There is a limit to how much you can send of video podcasts… only the first 10 minutes. The person you send it to will get a link in their email that points to your My Feed page on the doubleTwist site.   There they can access the media you sent…in this example it’s a video podcast but you can share any media. Other Features Under My Profile you can change your avatar and personal information.   In Preferences you can choose where media is stored, its startup actions, podcast subscriptions, and manage device syncing. Conclusion It’s still in beta stage so expect some bugs, but overall doubleTwist is a solid media player that is easy to use with a clean interface. It’s simple and doesn’t try to do too much, so it is fairly easy on system resources. The main annoyance is that it tries to catalog all of your media out of the box, which may be alright for some users with smaller media collections but is very irritating to advanced users with large collections. 
Also there is currently no support for the Zune, but according to their forums, it’s on the way. At the time of this writing it’s in public beta and can be downloaded for XP, Vista, Windows 7 (32 & 64 bit), and Mac OSX. If you’re looking for an iTunes alternative that works with several different portable devices, you might want to give DoubleTwist a try. Download DoubleTwist Public Beta See If Your Media Device is Supported by doubleTwist

    Read the article

  • BPM Suite 11gR1 Released

    - by Manoj Das
    This morning (April 27th, 2010), Oracle BPM Suite 11gR1 became available for download from OTN and eDelivery. If you have been following our plans in this area, you know that this is the release unifying BEA ALBPM product, which became Oracle BPM10gR3, with the Oracle stack. Some of the highlights of this release are: BPMN 2.0 modeling and simulation Web based Process Composer for BPMN and Rules authoring Zero-code environment with full access to Oracle SOA Suite’s rich set of application and other adapters Process Spaces – Out-of-box integration with Web Center Suite Process Analytics – Native process cubes as well as integration with Oracle BAM You can learn more about this release from the documentation. Notes about downloading and installing Please note that Oracle BPM Suite 11gR1 is delivered and installed as part of SOA 11.1.1.3.0, which is a sparse release (only incremental patch). To install: Download and install SOA 11.1.1.2.0, which is a full release (you can find the bits at the above location) Download and install SOA 11.1.1.3.0 During configure step (using the Fusion Middleware configuration wizard), use the Oracle Business Process Management template supplied with the SOA Suite11g (11.1.1.3.0) If you plan to use Process Spaces, also install Web Center 11.1.1.3.0, which also is delivered as a sparse release and needs to be installed on top of Web Center 11.1.1.2.0 Some early feedback We have been receiving very encouraging feedback on this release. Some quotes from partners are included below: “I just attended a preview workshop on BPM Studio, Oracle's BPMN 2.0 tool, held by Clemens Utschig Utschig from Oracle HQ. The usability and ease to get started are impressive. In the business view analysts can intuitively start modeling, then developers refine in their own, more technical view. The BPM Studio sets itself apart from pure play BPMN 2.0 tools by being seamlessly integrated inside a holistic SOA / BPM toolset: BPMN models are placed in SCA-Composites in SOA Suite 11g. This allows to abstract away the complexities of SOA integration aspects from business process aspects. For UIs in BPMN tasks, you have the richness of ADF 11g based Frontends. With BPM Studio we architects have a new modeling and development IDE that gives us interesting design challenges to grasp and elaborate, since many things BPMN 2.0 are different from good ol' BPEL. For example, for simple transformations, you don't use BPEL "assign" any more, but add the transformation directly to the service call. There is much less XPath involved. And, there is no translation from model to BPEL code anymore, so the awkward process model to BPEL roundtrip, which never really worked as well as it looked on marketing slides, is obsolete: With BPMN 2.0 "the model is the code". Now, these are great times to start the journey into BPM! Some tips: Start Projects smoothly, with initial processes being not overly complex and not using the more esoteric areas of BPMN, to manage the learning path and to stay successful with each iteration. Verify non functional requirements by conducting performance and load tests early. As mentioned above, separate all technical integration logic into SOA Suite or Oracle Service Bus. And - share your experience!” Hajo Normann, SOA Architect - Oracle ACE Director - Co-Leader DOAG SIG SOA   "Reuse of components across the Oracle 11G Fusion Middleware stack, like for instance a Database Adapter, is essential. It improves stability and predictability of the solution. 
BPM just is one of the components plugging into the stack and reuses all other components." Mr. Leon Smiers, Oracle Solution Architect, Capgemini   “I had the opportunity to follow a hands-on workshop held by Clemens for Oracle partners and I was really impressed of the overall offering of BPM11g. BPM11g allows the execution of BPMN 2.0 processes, without having to transform/translate them first to BPEL in order to be executable. The fact that BPMN uses the same underlying service infrastructure of SOA Suite 11g has a lot of benefits for us already familiar with SOA Suite 11g. BPMN is just another SCA component within a SCA composite and can (re)use all the existing components like Rules, Human Workflow, Adapters and Mediator. I also like the fact that BPMN runs on the same service engine as BPEL. By that all known best practices for making a BPEL  process reliable are valid for BPMN processes as well. Last but not least, BPMN is integrated into the superior end-to-end tracing of SOA Suite 11g. With BPM11g, Oracle offers a very competitive product which will have a big effect on the IT market. Clemens and Jürgen: Thanks for the great workshop! I’m really looking forward to my first project using Oracle BPM11g!” Guido Schmutz, Technology Manager / Oracle ACE Director for Fusion Middleware and SOA, Company:  Trivadis Some earlier feedback was summarized in this post.

    Read the article

  • How to get sound on macbook pro 4,1

    - by Thomas
    I have just installed Xubuntu 12.04.2. My soundcard is detected: thomas@thomas-pc:~$ sudo aplay -l **** List of PLAYBACK Hardware Devices **** Home directory /home/thomas not ours. card 0: Intel [HDA Intel], device 0: ALC889A Analog [ALC889A Analog] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: Intel [HDA Intel], device 1: ALC889A Digital [ALC889A Digital] Subdevices: 1/1 Subdevice #0: subdevice #0 Everything is put to max in alsamixer and nothing is muted (all the sliders are on OO. My speakers do not work, but when I plug in a headphone I hear it very soft. When I connect my stereo and put the sound VERY loud (3-blocks-of-complaining-neighbours loud) I hear it on a normal level but crackling. I added options snd-hda-intel model=mbp5 amixer set IEC958 off to at the end of /etc/modprobe.d/alsa-base.conf. When it's still not working I tried everything here: https://help.ubuntu.com/community/SoundTroubleshooting 1 >>> list-sinks 1 sink(s) available. * index: 0 name: <alsa_output.pci-0000_00_1b.0.analog-stereo> driver: <module-alsa-card.c> flags: HARDWARE HW_MUTE_CTRL HW_VOLUME_CTRL DECIBEL_VOLUME LATENCY DYNAMIC_LATENCY state: SUSPENDED suspend cause: IDLE priority: 9959 volume: 0: 100% 1: 100% 0: 0.00 dB 1: 0.00 dB balance 0.00 base volume: 100% 0.00 dB volume steps: 65537 muted: no current latency: 0.00 ms max request: 0 KiB max rewind: 0 KiB monitor source: 0 sample spec: s16le 2ch 44100Hz channel map: front-left,front-right Stereo used by: 0 linked by: 0 configured latency: 0.00 ms; range is 0.50 .. 371.52 ms card: 0 <alsa_card.pci-0000_00_1b.0> module: 4 properties: alsa.resolution_bits = "16" device.api = "alsa" device.class = "sound" alsa.class = "generic" alsa.subclass = "generic-mix" alsa.name = "ALC889A Analog" alsa.id = "ALC889A Analog" alsa.subdevice = "0" alsa.subdevice_name = "subdevice #0" alsa.device = "0" alsa.card = "0" alsa.card_name = "HDA Intel" alsa.long_card_name = "HDA Intel at 0x9b500000 irq 46" alsa.driver_name = "snd_hda_intel" device.bus_path = "pci-0000:00:1b.0" sysfs.path = "/devices/pci0000:00/0000:00:1b.0/sound/card0" device.bus = "pci" device.vendor.id = "8086" device.vendor.name = "Intel Corporation" device.product.name = "82801H (ICH8 Family) HD Audio Controller" device.form_factor = "internal" device.string = "front:0" device.buffering.buffer_size = "65536" device.buffering.fragment_size = "32768" device.access_mode = "mmap+timer" device.profile.name = "analog-stereo" device.profile.description = "Analog Stereo" device.description = "Built-in Audio Analog Stereo" alsa.mixer_name = "Realtek ALC889A" alsa.components = "HDA:10ec0885,106b3a00,00100103" module-udev-detect.discovered = "1" device.icon_name = "audio-card-pci" ports: analog-output-speaker: Speakers (priority 10000, available: unknown) properties: analog-output-headphones: Headphones (priority 9000, available: no) properties: active port: <analog-output-speaker> 2 and 3: Doesn't seem an permission issue, the sound is very far away (See opening paragraph). 4 thomas@thomas-pc:~$ sudo aplay -l **** List of PLAYBACK Hardware Devices **** Home directory /home/thomas not ours. card 0: Intel [HDA Intel], device 0: ALC889A Analog [ALC889A Analog] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: Intel [HDA Intel], device 1: ALC889A Digital [ALC889A Digital] Subdevices: 1/1 Subdevice #0: subdevice #0 5 thomas@thomas-pc:~$ find /lib/modules/`uname -r` | grep snd /lib/modules/3.2.0-48-generic/kernel/sound/core/snd-hwdep.ko /lib/modules/3.2.0-48-generic/kernel/sound/core/snd-pcm.ko [.. 
huge lists continues ..] /lib/modules/3.2.0-48-generic/kernel/sound/pcmcia/pdaudiocf/snd-pdaudiocf.ko /lib/modules/3.2.0-48-generic/kernel/sound/pcmcia/vx/snd-vxpocket.ko thomas@thomas-pc:~$ 6 thomas@thomas-pc:~$ lspci -v | grep -A7 -i "audio" 00:1b.0 Audio device: Intel Corporation 82801H (ICH8 Family) HD Audio Controller (rev 03) Subsystem: Apple Inc. Device 00a4 Flags: bus master, fast devsel, latency 0, IRQ 46 Memory at 9b500000 (64-bit, non-prefetchable) [size=16K] Capabilities: <access denied> Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 7 I guess it's supported. Linux mint and Xubuntu 13.04 had no trouble with sounds. Everything worked out of the box Thanks in advance Edit: alsa-info.sh output: WARNING: /etc/modprobe.d/alsa-base.conf line 45: ignoring bad line starting with 'amixer' ALSA Information Script v 0.4.62 -------------------------------- This script visits the following commands/files to collect diagnostic information about your ALSA installation and sound related hardware. dmesg lspci lsmod aplay amixer alsactl /proc/asound/ /sys/class/sound/ ~/.asoundrc (etc.) See './alsa-info.sh --help' for command line options. WARNING: /etc/modprobe.d/alsa-base.conf line 45: ignoring bad line starting with 'amixer' Automatically upload ALSA information to www.alsa-project.org? [y/N] : y Uploading information to www.alsa-project.org ... Done! Your ALSA information is located at http://www.alsa-project.org/db/?f=6cffc584284d4c0b266eb53249824ef83d6c4e3e Please inform the person helping you. thomas@thomas-pc:~$

    Read the article

  • Personal search – the future of search

    - by jamiet
    [Four months ago I wrote a meandering blog post on another blogging site entitled Personal search – the future of search. The points I made therein are becoming more relevant to what I'm reading about and hoping to get involved in in the future so I'm re-posting here to a wider audience to hopefully get some more feedback and gauge reaction to it. This has been prompted by the book Pull by David Siegel that is forming my current holiday reading (recommended to me by a commenter on my previous post Interesting things – Twitter annotations and your phone as a web server) and in particular by Siegel's notion of us all in the future having a personal online data vault.] My one-time colleague Paul Dawson recently wrote an article called The Future of Search and in it he proposed some interesting ideas. Some choice quotes: The growth of Chinese search giant Baidu is an indicator that fully localised and tailored content and offerings have great traction with local audiences This trend is already driving an increase in the use of specialist searches … Look at how Farecast is now integrated into Bing for example, or how Flightstats is now integrated into Google. Search does not necessarily have to begin with a keyword, but could start instead with a click or a touch. Take a look at Retrievr. Start drawing a picture in the box and see what happens. This is certainly search without the need for typing in keywords search technology has advanced greatly in recent years. The recent launch of Microsoft Live Labs’ Pivot has given us a taste of what we can expect to see in the future This really got me thinking about where search might go in the future and as my mind wandered I realised that as the amount of data that we collect about ourselves increases so too will the need and the desire to search it. The amount of electronic data that exists about each and every person is increasing and in the near future I fully expect that we are going to be able to store personal data such as: A history of our location (in fact Google Latitude already offers this facility) Recordings of all our phone conversations Health information history (weight, blood pressure etc…) Energy usage Spending history What films we watch, what radio stations we listen to Voting history Of course, most of this stuff is already stored somewhere but crucially we don’t have easy access to it. My utilities supplier knows how much electricity I’m using but if I want to know for myself I have to go and dig through my statements (assuming I have kept them). Similarly my doctor probably has ready access to all of my health records, my bank knows exactly what I have spent my money on, my cable supplier knows what I watch on TV and my mobile phone supplier probably knows exactly where I am and where I’ve been for the past few years. Strange then that none of this electronic information is available to me in a way that I can really make use of it; after all, it's MY information. It's MY data. I created it. That is set to change. As technologies mature and customers become more technically cognizant they will demand more access to the data that companies hold about them. The companies themselves will realise the benefit that they derive from giving users what they want and will embrace ways of providing it. 
As a result the amount of data that we store about ourselves is going to increase exponentially and the desire to search and derive value from that data is going to grow with it; we are about to enter the era of the “personal datastore” and we will want, and need, to search through it in order to make sense of it all. It's interesting then that today when we think of search we think of search engines and yet in these personal datastores we’re referring to data that search engines can’t touch because WE own it and we (hopefully) choose to keep it private. Someone, I know not who, is going to lead in this space by making it easy for us to search our data and retrieve information that we have either forgotten or maybe didn’t even know in the first place. We will learn new things about ourselves and about our habits; we will share these findings with whomever we choose; we will compare what we discover with others; we will collaborate for mutual benefit and, most of all, we will educate ourselves as to how to live our lives better. Search will be the means to that end; it will enable us to make sense of the wealth of information that we will collect day in day out. The future of search is personal, why would we be interested in anything else? @Jamiet

    Read the article

  • Workarounds for supporting MVVM in the Silverlight TreeView Control

    - by cibrax
    MVVM (Model-View-ViewModel) is the pattern that you will typically choose for building testable user interfaces either in WPF or Silverlight. This pattern basically relies on the data binding support in those two technologies for mapping an existing model class (the view model) to the different parts of the UI or view. Unfortunately, MVVM was not treated as a first-class citizen for some of the controls released out of the box in the Silverlight runtime or the Silverlight toolkit. That means that using data binding for implementing MVVM is not always trivial and usually requires some customization of the existing controls. I ran into different problems myself trying to fully support data binding in controls like the tree view or the context menu, or in things like drag & drop.  For that reason, I decided to write this post to show how the tree view control and the tree view items can be customized to support data binding in many of their properties. In the first place, you will typically use a tree view for showing hierarchical data, so the view model somehow must reflect that hierarchy. An easy way to implement hierarchy in a model is to use a base item element like this one, public abstract class TreeItemModel { public abstract IEnumerable<TreeItemModel> Children { get; } } You can later derive your concrete model classes from that base class. For example, public class CustomerModel { public string FullName { get; set; } public string Address { get; set; } public IEnumerable<OrderModel> Orders { get; set; } }   public class CustomerTreeItemModel : TreeItemModel { public CustomerTreeItemModel(CustomerModel customer) { }   public override IEnumerable<TreeItemModel> Children { get { // Return orders } } } The Children property in the CustomerTreeItemModel implementation can return for instance an ObservableCollection<TreeItemModel> with the orders, so the tree view will automatically subscribe to all the changes in the collection. You can bind this model to the tree view control in the UI by using a Hierarchical data template. <e:TreeView x:Name="TreeView" ItemsSource="{Binding Customers}"> <e:TreeView.ItemTemplate> <sdk:HierarchicalDataTemplate ItemsSource="{Binding Children}"> <!-- TEMPLATE --> </sdk:HierarchicalDataTemplate> </e:TreeView.ItemTemplate> </e:TreeView> An interesting behavior with the Children property and the Hierarchical data template is that the Children property is only invoked before the expansion, so you can use lazy loading at this point (the tree view control will not expand the whole tree on the first expansion). The problem with using MVVM in this control is that you cannot bind properties in the model to specific properties of the TreeView item such as IsSelected or IsExpanded. Here is where you need to customize the existing tree view control to support data binding in tree items. 
public class CustomTreeView : TreeView { public CustomTreeView() { }   protected override DependencyObject GetContainerForItemOverride() { CustomTreeViewItem tvi = new CustomTreeViewItem(); Binding expandedBinding = new Binding("IsExpanded"); expandedBinding.Mode = BindingMode.TwoWay; tvi.SetBinding(CustomTreeViewItem.IsExpandedProperty, expandedBinding); Binding selectedBinding = new Binding("IsSelected"); selectedBinding.Mode = BindingMode.TwoWay; tvi.SetBinding(CustomTreeViewItem.IsSelectedProperty, selectedBinding); return tvi; } }   public class CustomTreeViewItem : TreeViewItem { public CustomTreeViewItem() { }   protected override DependencyObject GetContainerForItemOverride() { CustomTreeViewItem tvi = new CustomTreeViewItem(); Binding expandedBinding = new Binding("IsExpanded"); expandedBinding.Mode = BindingMode.TwoWay; tvi.SetBinding(CustomTreeViewItem.IsExpandedProperty, expandedBinding); Binding selectedBinding = new Binding("IsSelected"); selectedBinding.Mode = BindingMode.TwoWay; tvi.SetBinding(CustomTreeViewItem.IsSelectedProperty, selectedBinding); return tvi; } } You basically need to derive the TreeView and TreeViewItem controls to manually add a binding for the properties you need. In the example above, I am adding a binding for the “IsExpanded” and “IsSelected” properties in the items. The model for the tree items now needs to be extended to support those properties as well, public abstract class TreeItemModel : INotifyPropertyChanged { bool isExpanded = false; bool isSelected = false;   public abstract IEnumerable<TreeItemModel> Children { get; }   public bool IsExpanded { get { return isExpanded; } set { isExpanded = value; if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs("IsExpanded")); } }   public bool IsSelected { get { return isSelected; } set { isSelected = value; if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs("IsSelected")); } }   public event PropertyChangedEventHandler PropertyChanged; } However, as soon as you use this custom tree view control, you lose all the automatic styles from the built-in toolkit themes because they are tied to the control type (TreeView in this case).  The only ugly workaround I found so far for this problem is to copy the styles from the Toolkit source code and reuse them in the application.
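To round this off, here is a minimal sketch (not from the original post) of how the view model can now drive the tree without touching the TreeView directly: a hypothetical OrderTreeItemModel derived from the extended TreeItemModel above, plus a helper that expands a customer node and selects its first order purely by setting model properties; the TwoWay bindings added in CustomTreeView/CustomTreeViewItem propagate those changes to the TreeViewItem. The Id property on OrderModel is an assumption made for illustration.

using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;

// Hypothetical leaf item model for an order, reusing the extended TreeItemModel base class.
public class OrderTreeItemModel : TreeItemModel
{
    private readonly OrderModel order;

    public OrderTreeItemModel(OrderModel order)
    {
        this.order = order;
    }

    // Assumption: OrderModel exposes an Id property; adjust to whatever your model provides.
    public string Title
    {
        get { return "Order " + order.Id; }
    }

    // Orders are leaf nodes in this sketch, so they have no children.
    public override IEnumerable<TreeItemModel> Children
    {
        get { return new ObservableCollection<TreeItemModel>(); }
    }
}

public static class TreeNavigation
{
    // Expand a customer node and select its first order from the view model;
    // the TwoWay bindings on IsExpanded/IsSelected update the corresponding CustomTreeViewItem.
    public static void ExpandAndSelectFirstOrder(CustomerTreeItemModel customer)
    {
        customer.IsExpanded = true;
        TreeItemModel firstOrder = customer.Children.FirstOrDefault();
        if (firstOrder != null)
        {
            firstOrder.IsSelected = true;
        }
    }
}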

    Read the article

  • Recovering from apt-get upgrade gone wrong due to a full disk

    - by Peter
    I was performing an apt-get upgrade on an Ubuntu 12.04.5 LTS box that hadn't been updated in a little while and the upgrade failed due to 'No space left on device'. After a little while I worked out space meant inodes and I have freed some up but unfortunately things have been left something askew. I have tried manually installing the old versions of packages mentioned using dpkg -i but that doesn't help. I have tried apt-get upgrade and apt-get -f install to no avail. Results are below. Any ideas how to fix things up? FIXED: Installing the earlier versions again manually via dpkg -i and then apt-get -f install has done the trick. Not sure why this didn't work the first time. The packages in question are listed below but they will presumably vary. libssl1.0.0_1.0.1-4ubuntu5.14_i386.deb linux-headers-3.2.0-64-generic-pae_3.2.0-64.97_i386.deb linux-image-generic-pae_3.2.0.64.76_i386.deb linux-headers-3.2.0-64_3.2.0-64.97_all.deb linux-headers-generic-pae_3.2.0.64.76_i386.deb root@unlinked:/tmp# apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run ‘apt-get -f install’ to correct these. The following packages have unmet dependencies. libssl-dev : Depends: libssl1.0.0 (= 1.0.1-4ubuntu5.14) but 1.0.1-4ubuntu5.17 is installed linux-generic-pae : Depends: linux-image-generic-pae (= 3.2.0.64.76) but 3.2.0.67.79 is installed Depends: linux-headers-generic-pae (= 3.2.0.64.76) but 3.2.0.67.79 is installed E: Unmet dependencies. Try using -f. root@unlinked:/tmp# apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: linux-headers-3.2.0-43-generic-pae linux-headers-3.2.0-38-generic-pae linux-headers-3.2.0-41-generic-pae linux-headers-3.2.0-36-generic-pae linux-headers-3.2.0-63-generic-pae linux-headers-3.2.0-58-generic-pae linux-headers-3.2.0-60-generic-pae linux-headers-3.2.0-55-generic-pae linux-headers-3.2.0-40 linux-headers-3.2.0-41 linux-headers-3.2.0-36 linux-headers-3.2.0-37 linux-headers-3.2.0-43 linux-headers-3.2.0-38 linux-headers-3.2.0-44 linux-headers-3.2.0-39 linux-headers-3.2.0-45 linux-headers-3.2.0-51 linux-headers-3.2.0-52 linux-headers-3.2.0-53 linux-headers-3.2.0-48 linux-headers-3.2.0-54 linux-headers-3.2.0-60 linux-headers-3.2.0-55 linux-headers-3.2.0-61 linux-headers-3.2.0-56 linux-headers-3.2.0-57 linux-headers-3.2.0-63 linux-headers-3.2.0-58 linux-headers-3.2.0-59 linux-headers-3.2.0-52-generic-pae linux-headers-3.2.0-44-generic-pae linux-headers-3.2.0-39-generic-pae linux-headers-3.2.0-37-generic-pae linux-headers-3.2.0-59-generic-pae linux-headers-3.2.0-61-generic-pae linux-headers-3.2.0-56-generic-pae linux-headers-3.2.0-53-generic-pae linux-headers-3.2.0-48-generic-pae linux-headers-3.2.0-45-generic-pae linux-headers-3.2.0-40-generic-pae linux-headers-3.2.0-57-generic-pae linux-headers-3.2.0-54-generic-pae linux-headers-3.2.0-51-generic-pae Use 'apt-get autoremove' to remove them. The following extra packages will be installed: libssl-dev linux-generic-pae The following packages will be upgraded: libssl-dev linux-generic-pae 2 to upgrade, 0 to newly install, 0 to remove and 0 not to upgrade. 2 not fully installed or removed. Need to get 0 B/1,427 kB of archives. After this operation, 1,024 B of additional disk space will be used. Do you want to continue [Y/n]? 
y dpkg: dependency problems prevent configuration of libssl-dev: libssl-dev depends on libssl1.0.0 (= 1.0.1-4ubuntu5.14); however: Version of libssl1.0.0 on system is 1.0.1-4ubuntu5.17. dpkg: error processing libssl-dev (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates it's a follow-up error from a previous failure. dpkg: dependency problems prevent configuration of linux-generic-pae: linux-generic-pae depends on linux-image-generic-pae (= 3.2.0.64.76); however: Version of linux-image-generic-pae on system is 3.2.0.67.79. linux-generic-pae depends on linux-headers-generic-pae (= 3.2.0.64.76); however: Version of linux-headers-generic-pae on system is 3.2.0.67.79. dpkg: error processing linux-generic-pae (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates it's a follow-up error from a previous failure. Errors were encountered while processing: libssl-dev linux-generic-pae E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Code Metrics: Number of IL Instructions

    - by DigiMortal
    In my previous posting about code metrics I introduced how to measure LoC (Lines of Code) in .NET applications. Now let's take a step further and look at how to measure compiled code. This way we can get a picture of what the compiler produces. In this posting I will introduce you to the code metric called number of IL instructions. NB! The number of IL instructions is not something you can use to measure the productivity of your team. If you want to get a better idea about the context of this metric and LoC then please read my first posting about LoC. What are IL instructions? When code written in some .NET Framework language is compiled, the compiler produces assemblies that contain byte code. These assemblies are executed later by the Common Language Runtime (CLR), the code execution engine of the .NET Framework. The byte code is called Intermediate Language (IL) – this is a more general language than, for example, C# or VB.NET. You can use the ILDasm tool to convert assemblies to IL assembler so you can read them. As IL instructions are the building blocks of all .NET Framework binary code, these instructions are small and highly general – we don't want a very rich low-level language because it would execute slower than a more general one. To every method or property call in a .NET Framework language there corresponds a set of IL instructions. There is no 1:1 relationship between a line in a high-level language and a line in IL assembler; there are more IL instructions than lines of C# code, for example. How many instructions are there? I have no general answer because it really depends on your code. Here you can see some metrics from my current community project that is developed on SharePoint Server 2007. On average I have about 7 IL instructions per line of code. This is not a metric you should use, it is just an illustrative example so you can see the difference between the number of lines and the number of IL instructions. Why should I measure the number of IL instructions? Just take a look at the chart above. The compiler does something that you cannot see – it compiles your code to IL. This is not an intuitive process because you usually cannot say exactly what the end result will be. You know it roughly but you don't know it exactly. Therefore we can expect some surprises and that's why we should measure the number of IL instructions. For example, you may find a better solution for some method in your source code. It looks nice, it works nice and everything seems to be okay. But on a server under load your fix may be way slower than the previous code: although you minimized the number of lines of code, you ended up increasing the number of IL instructions. How to measure the number of IL instructions? My choice is NDepend because Visual Studio is not able to measure this metric. The steps are easy. Open your NDepend project or create a new one and add all your application assemblies to the project (you can also add a Visual Studio solution to the project). Run the project analysis and wait until it is done. You can see the overall stats in the global summary window. This is the same window I used to read the LoC and number of IL instructions metrics for my chart. Meanwhile I made some changes to my code (enabled advanced caching for the events and event registrations module) and then ran the code analysis again to get results for this section of this posting. NDepend is also able to tell you exactly which parts of your code have problematically many IL instructions. 
The code quality section of the CQL Query Explorer shows you how many problems there are with members in the analyzed code. If you click on the line Methods too big (NbILInstructions) you can see all the problematic class members in the CQL Explorer, shown in the image on the right. In my case I have 10 methods that are too big, and two of them have a horrible number of IL instructions – just take a look at the first two methods in this TOP10. Also note the query box. NDepend has an easy, SQL-like query language for querying code analysis results. You can modify these queries if you like, and you can also define your own if the default set is not enough for you. What is a good result? As you can see from the query window, a member should have at most 200 IL instructions. Of course, as always, the fewer instructions you have, the better performing code you have – and I don't mean small differences here but big ones. For example, take a look at the first method in my warnings list. The number of IL instructions it has is huge. And believe me – this method looks awful. Conclusion The number of IL instructions is a useful metric when optimizing your code. For analyzing code at a general level to find methods that are too long you can use the LoC metric, because it is more intuitive and you can therefore handle the situation more easily. You can also use NDepend as a code metrics tool because it has a lot of metrics to offer.
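If you do not have NDepend at hand, plain reflection can at least give you a rough feel for the size of the compiled code: MethodBase.GetMethodBody().GetILAsByteArray() returns the raw IL bytes of a method. Counting bytes is not the same as counting IL instructions (IL instructions have variable length), so treat the sketch below as an approximation only, not a replacement for the NDepend metric or its 200-instruction rule.

using System;
using System.Linq;
using System.Reflection;

// Rough sketch: list the methods of an assembly ordered by the size of their IL body.
// NOTE: the IL byte count is only a proxy for the number of IL instructions,
// because IL instructions have variable length.
class IlSizeReport
{
    static void Main(string[] args)
    {
        // Assumption: the path of the assembly to inspect is passed as the first argument.
        Assembly assembly = Assembly.LoadFrom(args[0]);

        var biggest = assembly.GetTypes()
            .SelectMany(t => t.GetMethods(BindingFlags.Instance | BindingFlags.Static |
                                          BindingFlags.Public | BindingFlags.NonPublic |
                                          BindingFlags.DeclaredOnly))
            .Where(m => m.GetMethodBody() != null)
            .Select(m => new
            {
                Name = m.DeclaringType.FullName + "." + m.Name,
                IlBytes = m.GetMethodBody().GetILAsByteArray().Length
            })
            .OrderByDescending(m => m.IlBytes)
            .Take(10);

        foreach (var m in biggest)
        {
            Console.WriteLine("{0,6} IL bytes  {1}", m.IlBytes, m.Name);
        }
    }
}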

    Read the article

  • Plan your SharePoint 2010 Content Type Hub carefully

    - by Wayne
    I am currently setting up a new environment on SharePoint 2010 (which was made available for download yesterday, if anyone missed that :-). One of the new features of SharePoint 2010 is the ability to set up a Content Type Hub (which is a part of the Metadata Service Application), a hub for all Content Types that other Site Collections can subscribe to. That is, you only need to manage your content types in one location. Setting up the Content Type Hub is not that difficult, but you must do it very carefully to avoid a lot of work and troubleshooting. Here is a short tutorial with a few tips and tricks to make it easy for you to get started. Determine location of Content Type Hub First of all you need to decide in which Site Collection to place your Content Type Hub; in the root site collection or a specific one. I think using a specific Site Collection that only acts as a Content Type Hub is the best way, though there is no best practice as of now. So I create a new Site Collection, at for instance http://server/sites/CTH/. The top-level site of this site collection should be for instance a Team Site. You cannot use Blank Site by default, which would have been the best option IMHO, since that site does not have the Taxonomy feature stapled upon it (check the TaxonomyFeatureStapler feature to see which site templates can be used). Configure Managed Metadata Service Application Next you need to create your Managed Metadata Service Application or configure the existing one, under Central Administration > Application Management > Manage Service Applications. Select the Managed Metadata service application and click Properties if you have already created it. At the bottom of the dialog window, both when you are creating the service application and when you are editing its properties, is a section to fill in the Content Type Hub. In this text box fill in the URL of the Content Type Hub. It is essential that you have decided where your Content Type Hub will reside, since once this is set you cannot change it. The only way to change it is to rebuild the whole managed metadata service application! Also make sure that you enter the URL correctly. I did copy and paste the URL once and got the /default.aspx in the URL, which funked the whole service up. Make sure that you only use the URL to the Site Collection of the hub. Now you have to set things up so that other Site Collections can consume the content types from the hub. This is done by selecting the connection for the managed metadata service application and clicking Properties. A new dialog window opens and there you need to click the Consumes content types from the Content Type Gallery at nnnn option. Now you are free to syndicate your Content Types from the Hub. Publish Content Types To publish a Content Type from the hub you need to go to Site Settings > Content Types and select the content type that you would like to publish. Then select Manage publishing for this content type. This takes you to a page from where you can Publish, Unpublish or Republish the content type. Once the content type is published it can take up to an hour for the subscribing Site Collections to get it. This is controlled by the Content Type Subscriber job that is scheduled to run once an hour. To speed up your publishing just go to Central Administration > Monitoring > Review Job Definitions > Content Type Subscriber and click Run now, and your content type is very soon available for use. 
Published Content Type status You can check the status of the content type publishing in your destination site collections by selecting Site Settings > Content Type Publishing. From here you can force a refresh of all subscribed content types, see which ones are subscribed and finally check the publishing error log. This error log is very useful for detecting errors during publishing. For instance, if you use features such as ratings, metadata or document IDs in your content type hub and your destination site collection does not have those features available, this will be reported here.
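If clicking Run now in Central Administration gets tedious, the Content Type Subscriber job can also be started from code. Below is a rough server object model sketch (run on a farm server from a console application referencing Microsoft.SharePoint.dll); matching the job by its display title is an assumption on my part, so verify the title against what Review Job Definitions shows in your farm.

using System;
using Microsoft.SharePoint.Administration;

// Rough sketch: trigger the "Content Type Subscriber" timer job on every content web application.
class RunContentTypeSubscriberJob
{
    static void Main()
    {
        foreach (SPWebApplication webApp in SPWebService.ContentService.WebApplications)
        {
            foreach (SPJobDefinition job in webApp.JobDefinitions)
            {
                // Assumption: the job title as listed under Review Job Definitions
                // contains "Content Type Subscriber"; adjust the match if needed.
                if (job.Title != null &&
                    job.Title.IndexOf("Content Type Subscriber", StringComparison.OrdinalIgnoreCase) >= 0)
                {
                    Console.WriteLine("Starting '{0}' on {1}", job.Title, webApp.DisplayName);
                    job.RunNow();   // queues the job to run immediately (SharePoint 2010)
                }
            }
        }
    }
}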

    Read the article

  • Get your content off Blogger.com

    - by Daniel Moth
    Due to blogger.com deprecating FTP users I've decided to move my blog. When I think of the content of a blog, 4 items come to mind: blog posts, comments, binary files that the blog posts linked to (e.g. images, ZIP files) and the CSS+structure of the blog. 1. Binaries The binary files you used in your blog posts are sitting on your own web space, so really blogger.com is not involved with that. Nothing for you to do at this stage, I'll come back to these in another post. 2. CSS and structure In the best case this exists as a separate CSS file on your web space (so no action for now) or in a worst case, like me, your CSS is embedded with the HTML. In the latter case, simply navigate from you dashboard to "Template" then "Edit HTML" and copy paste the contents of the box. Save that locally in a txt file and we'll come back to that in another post. 3. Blog posts and Comments The blog posts and comments exist in all the HTML files on your own web space. Parsing HTML files to extract that can be painful, so it is easier to download the XML files from blogger's servers that contain all your blog posts and comments. 3.1 Single XML file, but incomplete The obvious thing to do is go into your dashboard "Settings" and under the "Basic" tab look at the top next to "Blog Tools". There is a link there to "Export blog" which downloads an XML file with both comments and posts. The problem with that is that it only contains 200 comments - if you have more than that, you will lose the surplus. Also, this XML file has a lot of noise, compared to the better solution described next. (note that a tool I will refer to in a future post deals with either kind of XML file) 3.2 Multiple XML files First you need to find your blog ID. In case you don't know what that is, navigate to the "Template" as described in section 2 above. You will find references to the blog id in the HTML there, but you can also see it as part of the URL in your browser: blogger.com/template-edit.g?blogID=YOUR_NUMERIC_ID. Mine is 7 digits. You can now navigate to these URLs to download the XML for your posts and comments respectively: blogger.com/feeds/YOUR_NUMERIC_ID/posts/default?max-results=500&start-index=1 blogger.com/feeds/YOUR_NUMERIC_ID/comments/default?max-results=200&start-index=1 Note that you can only get 500 posts at a time and only 200 comments at a time. To get more than that you have to change the URL and download the next batch. To get you started, to get the XML for the next 500 posts and next 200 comments respectively you’d have to use these URLs: blogger.com/feeds/YOUR_NUMERIC_ID/posts/default?max-results=500&start-index=501 blogger.com/feeds/YOUR_NUMERIC_ID/comments/default?max-results=200&start-index=201 ...and so on and so forth. Keep all the XML files in the same folder on your local machine (with nothing else in there). 4. Validating the XML aka editing older blog posts The XML files you just downloaded really contain HTML fragments inside for all your blog posts. If you are like me, your blog posts did not conform to XHTML so passing them to an XML parser (which is what we will want to do) will result in the XML parser choking. So the next step is to fix that. This can be no work at all for you, or a huge time sink or just a couple hours of pain (which was my case). The process I followed was to attempt to load the XML files using XmlDocument.Load and wait for the exception to be thrown from my code. The exception would point to the exact offending line and column which would help me fix the issue. 
Rather than fix it in the XML itself, I would go back and edit the offending blog post and fix it there - recommended! Then I'd repeat the cycle until the XML could be loaded into the XmlDocument. To give you an idea, some of the issues I encountered are: extra or missing quotes in img and href elements, direct usage of chevrons instead of encoding them as &lt;, missing closing tags, mismatched nested pairs of elements and capitalization of html elements. For a full list of things that may go wrong see this. 5. Opportunity for other changes I also found a few posts that did not have a category assigned so I fixed those too. I took the further opportunity to create new categories and tag some of my blog posts with them. Note that I did not remove/change categories of existing posts, but only added.   In another post we'll see how to use the XML files you stored in the local folder… Comments about this post welcome at the original blog.
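To make the fix-edit-retry cycle described above concrete, here is a minimal sketch of the validation loop (the folder path and file layout are assumptions): it loads every XML file in the local folder with XmlDocument.Load and, when parsing fails, prints the line and column reported by the XmlException so you know which blog post to go back and edit.

using System;
using System.IO;
using System.Xml;

// Minimal sketch: try to parse each downloaded XML file and report where it chokes.
class ValidateBloggerXml
{
    static void Main()
    {
        // Assumption: the posts/comments XML files were saved to this folder.
        string folder = @"C:\BloggerExport";

        foreach (string file in Directory.GetFiles(folder, "*.xml"))
        {
            try
            {
                var doc = new XmlDocument();
                doc.Load(file);
                Console.WriteLine("OK      {0}", Path.GetFileName(file));
            }
            catch (XmlException ex)
            {
                // The exception points at the offending line/column; fix the original
                // blog post, re-download the XML and run the check again.
                Console.WriteLine("BROKEN  {0}: line {1}, column {2} - {3}",
                    Path.GetFileName(file), ex.LineNumber, ex.LinePosition, ex.Message);
            }
        }
    }
}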

    Read the article

  • User is trying to leave! Set at-least confirm alert on browser(tab) close event!!

    - by kaushalparik27
    This is something that might be annoying or irritating for the end user. Obviously, it's impossible to prevent the end user from closing the browser. Just think of what it would be like if that were possible! That would be a horrible web world where sites would attack you at every turn and would not let you close your browser until you confirmed your shopping cart and made the payment. LOL:) You would need to open the task manager and might have to kill the running browser exe processes. Anyway, jokes apart, I have one situation where I need to at least alert/confirm the user in some way when they try to close the browser or change the url. Think of this: You are creating a single page intranet asp.net application where your employees can enter/select their TDS/Investment Declarations and you wish to at least ALERT/CONFIRM them if they are attempting to: [1] Close the Browser [2] Close the Browser Tab [3] Go to some other site by changing the url, without completing/freezing their declaration. So, finally the requirement is clear: I need to alert/confirm the user about what they are about to do on the above events. I am going to use the window.onbeforeunload event to make a javascript confirm alert box appear.    <script language="JavaScript" type="text/javascript">        window.onbeforeunload = confirmExit;        function confirmExit() {            return "You are about to exit the system before freezing your declaration! If you leave now and never return to freeze your declaration; then they will not go into effect and you may lose tax deduction, Are you sure you want to leave now?";        }    </script> See! You are halfway done! Now, every time the browser unloads the page, the confirm alert above appears in front of the user like below: By saying "every time the browser unloads the page" I mean that whenever the page loads or a postback happens, the browser onbeforeunload event will be executed. So, even a button submit or a link submit which causes the page to post back would cause the browser onbeforeunload event to fire! Now the hurdle is: how can we prevent the alert from showing when the page posts back via any button/link submit? The answer is jQuery :) The idea is that you just need to set the script reference src to the jQuery library and set the window.onbeforeunload event to null when any input/link causes the page to post back. Below is the complete code: <head runat="server">    <title></title>    <script src="jquery.min.js" type="text/javascript"></script>    <script language="JavaScript" type="text/javascript">        window.onbeforeunload = confirmExit;        function confirmExit() {            return "You are about to exit the system before freezing your declaration! If you leave now and never return to freeze your declaration; then they will not go into effect and you may lose tax deduction, Are you sure you want to leave now?";        }        $(function() {            $("a").click(function() {                window.onbeforeunload = null;            });            $("input").click(function() {                window.onbeforeunload = null;            });        });    </script></head><body>    <form id="form1" runat="server">    <div></div>    </form></body></html> So, in this post I have tried to set up a confirm alert when the user tries to close the browser/tab or leave the site by changing the url. I have attached a working example with this post here. I hope someone might find it helpful.

    Read the article

  • Personal search – the future of search

    - by jamiet
    [Four months ago I wrote a meandering blog post on another blogging site entitled Personal search – the future of search. The points I made therein are becoming more relevant to what I'm reading about and hoping to get involved in in the future, so I'm re-posting it here to a wider audience to hopefully get some more feedback and gauge reaction to it. This has been prompted by the book Pull by David Siegel that is forming my current holiday reading (recommended to me by a commenter on my previous post Interesting things – Twitter annotations and your phone as a web server) and in particular by Siegel's notion of us all in the future having a personal online data vault.]

    My one-time colleague Paul Dawson recently wrote an article called The Future of Search and in it he proposed some interesting ideas. Some choice quotes:

    - "The growth of Chinese search giant Baidu is an indicator that fully localised and tailored content and offerings have great traction with local audiences"
    - "This trend is already driving an increase in the use of specialist searches … Look at how Farecast is now integrated into Bing for example, or how Flightstats is now integrated into Google."
    - "Search does not necessarily have to begin with a keyword, but could start instead with a click or a touch. Take a look at Retrievr. Start drawing a picture in the box and see what happens. This is certainly search without the need for typing in keywords"
    - "search technology has advanced greatly in recent years. The recent launch of Microsoft Live Labs' Pivot has given us a taste of what we can expect to see in the future"

    This really got me thinking about where search might go in the future, and as my mind wandered I realised that as the amount of data we collect about ourselves increases, so too will the need and the desire to search it. The amount of electronic data that exists about each and every person is increasing, and in the near future I fully expect that we are going to be able to store personal data such as:

    - A history of our location (in fact Google Latitude already offers this facility)
    - Recordings of all our phone conversations
    - Health information history (weight, blood pressure etc…)
    - Energy usage
    - Spending history
    - What films we watch, what radio stations we listen to
    - Voting history

    Of course, most of this stuff is already stored somewhere, but crucially we don't have easy access to it. My utilities supplier knows how much electricity I'm using, but if I want to know for myself I have to go and dig through my statements (assuming I have kept them). Similarly, my doctor probably has ready access to all of my health records, my bank knows exactly what I have spent my money on, my cable supplier knows what I watch on TV, and my mobile phone supplier probably knows exactly where I am and where I've been for the past few years. Strange, then, that none of this electronic information is available to me in a way that I can really make use of it; after all, it's MY information. It's MY data. I created it. That is set to change. As technologies mature and customers become more technically cognizant, they will demand more access to the data that companies hold about them. The companies themselves will realise the benefit they derive from giving users what they want and will embrace ways of providing it.
    As a result, the amount of data that we store about ourselves is going to increase exponentially, and the desire to search and derive value from that data is going to grow with it; we are about to enter the era of the “personal datastore” and we will want, and need, to search through it in order to make sense of it all. It's interesting, then, that today when we think of search we think of search engines, and yet in these personal datastores we're referring to data that search engines can't touch, because WE own it and we (hopefully) choose to keep it private. Someone, I know not who, is going to lead in this space by making it easy for us to search our data and retrieve information that we have either forgotten or maybe didn't even know in the first place. We will learn new things about ourselves and about our habits; we will share these findings with whomever we choose; we will compare what we discover with others; we will collaborate for mutual benefit and, most of all, we will educate ourselves as to how to live our lives better. Search will be the means to that end; it will enable us to make sense of the wealth of information that we will collect day in, day out. The future of search is personal. Why would we be interested in anything else?

    @Jamiet

    Read the article

  • BIND9 server not responding to external queries

    - by Twitchy
    I have set up a BIND server on my dedicated box, which I want to host a nameserver for my domain on. When I use dig @202.169.196.59 nzserver.co.nz locally on the server I get the following response...

        ; <<>> DiG 9.8.1-P1 <<>> @202.169.196.59 nzserver.co.nz
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43773
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

        ;; QUESTION SECTION:
        ;nzserver.co.nz.                IN      A

        ;; ANSWER SECTION:
        nzserver.co.nz.         3600    IN      A       202.169.196.59

        ;; AUTHORITY SECTION:
        nzserver.co.nz.         3600    IN      NS      ns2.nzserver.co.nz.
        nzserver.co.nz.         3600    IN      NS      ns1.nzserver.co.nz.

        ;; ADDITIONAL SECTION:
        ns1.nzserver.co.nz.     3600    IN      A       202.169.196.59
        ns2.nzserver.co.nz.     3600    IN      A       202.169.196.59

        ;; Query time: 0 msec
        ;; SERVER: 202.169.196.59#53(202.169.196.59)
        ;; WHEN: Sat Oct 27 15:40:45 2012
        ;; MSG SIZE  rcvd: 116

    Which is good, and is the output I want. But when simply using dig nzserver.co.nz I get...

        ; <<>> DiG 9.8.1-P1 <<>> nzserver.co.nz
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 16970
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;nzserver.co.nz.                IN      A

        ;; Query time: 308 msec
        ;; SERVER: 202.169.192.61#53(202.169.192.61)
        ;; WHEN: Sat Oct 27 17:09:12 2012
        ;; MSG SIZE  rcvd: 32

    And if I use dig @202.169.196.59 nzserver.co.nz on another Linux machine I get...

        ; <<>> DiG 9.7.3 <<>> @202.169.196.59 nzserver.co.nz
        ; (1 server found)
        ;; global options: +cmd
        ;; connection timed out; no servers could be reached

    Am I doing something wrong here? Port 53 is definitely open.

    /etc/bind/named.conf.options

        options {
            directory "/var/cache/bind";
            forwarders {
                202.169.192.61;
                202.169.206.10;
            };
            listen-on { 202.169.196.59; };
        };

    /etc/bind/named.conf.local

        zone "nzserver.co.nz" {
            type master;
            file "/etc/bind/nzserver.co.nz.zone";
        };

    /etc/bind/nzserver.co.nz.zone

        ; BIND db file for nzserver.co.nz
        $ORIGIN nzserver.co.nz.
        @       IN      SOA     ns1.nzserver.co.nz. mr.steven.french.gmail.com. (
                                2012102606
                                28800
                                7200
                                864000
                                3600 )

                NS      ns1.nzserver.co.nz.
                NS      ns2.nzserver.co.nz.
                MX      10      mail.nzserver.co.nz.

        @       IN      A       202.169.196.59
        *       IN      A       202.169.196.59
        ns1     IN      A       202.169.196.59
        ns2     IN      A       202.169.196.59
        www     IN      A       202.169.196.59
        mail    IN      A       202.169.196.59
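    For reference, below is a minimal named.conf.options sketch that listens on all interfaces and answers queries from any client. It illustrates settings that are commonly checked in this situation; it is not a confirmed diagnosis of the poster's setup, and a firewall or upstream filtering could equally explain the timeout seen above.

        options {
            directory "/var/cache/bind";

            // Illustrative values only: accept queries on every interface and from any source.
            // Not a confirmed fix; verify firewall rules and upstream filtering as well.
            listen-on { any; };
            allow-query { any; };
        };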

    Read the article
