Search Results

Search found 9861 results on 395 pages for 'embedded systems'.

  • Moving rails javascript to public while keeping ruby code?

    - by tesmar
    Hi guys, I have a project to move some JS code out of Rails and into the public directory, but some of it has Ruby code embedded and depends on the values of variables from the controllers to set some of its code. How can I move it out of the view and still maintain the same structure, or do I need to just rewrite the JS from scratch?

  • possible to create custom scrollbar graphics without flash?

    - by Joel
    A friend wants me to help her convert her Flash-based website to HTML. She has an embedded textbox with a scrollbar that uses a flower instead of a normal scrollbar. Setting aside the obvious question of why a user would want a non-standard element for this task, is it possible to do this without Flash?

  • jQuery plugins: How can I stop divs from overlapping?

    - by anir
    I'm using the Masonry and Embedly plugins to display embedded content, and I've put together an example at jsfiddle.net/anir/pBtbb/3/. It appears the Masonry plugin runs first, which causes the overlapping problem (the divs only display correctly when you resize the result window). I've read that I could use callback functions or retrigger Masonry once Embedly has finished rendering the content, but I don't know how to do it. Can you help me? Is there any other solution to fix this?

  • Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6?

    - by Ricardo Ferreira
    In this post, I will cover some technical differences between Oracle WebLogic 12c and JBoss EAP 6, which Red Hat released a couple of days ago. This article aims to help you evaluate the key points you should consider when choosing a Java EE application server. In the following sections, I will present some important aspects that customers most often ask us about when they are seriously evaluating a middleware infrastructure, especially if you are considering JBoss for some reason. I suggest you keep the following question in mind while you read the points: "Why should I choose JBoss instead of WebLogic?"

1) Multi-Datacenter Deployment and Clustering

- D/R ("Disaster & Recovery") architecture support is built into WebLogic Server 12c. JBoss EAP 6, on the other hand, includes no direct D/R support; Red Hat relies on higher-priced third-party tools. When you consider a middleware solution to host your business-critical applications, you should worry about every architectural aspect related to the solution. Fail-over support is only one small aspect of a truly reliable solution; if you do not address D/R, your solution will not be reliable. Having said that, with Red Hat and JBoss EAP 6 you have this extra cost, which considerably increases the total cost of ownership of the solution. As we commonly hear from analysts, open source is not so cheap once you look at the big picture.

- WebLogic Server 12c supports advanced LAN clustering, dead-server detection and a common alert framework. JBoss EAP 6, on the other hand, has limited LAN clustering support with no server death detection. It does not generate any alerts when servers go down (only if you buy JBoss ON, which is a separate technology and, so far, does not support JBoss EAP 6), and manual intervention is required when servers fail. In most cases, administrators must rely on "kill -9", "tail -f someFile.log" and "ps ax | grep java" to manage failures and clustering anomalies.

- WebLogic Server 12c supports the concept of the Node Manager, a separate process that runs on the physical or virtual servers and extends administration of the cluster to WebLogic managed servers that are often distributed across multiple machines and geographic locations. JBoss EAP 6, on the other hand, has no equivalent technology; whole server instances must be managed individually.

- The WebLogic Server 12c Node Manager supports Coherence to boost performance when managing servers. JBoss EAP 6, on the other hand, has no similar technology; there is no way to coordinate JBoss instances using high-throughput, low-latency protocols like InfiniBand. Node Manager also provides another very important capability that JBoss EAP lacks: secure administration. When using WebLogic Node Manager, all administration tasks are sent to the managed servers through a secure tunnel protected by a certificate, which means that the transport layer between the WebLogic administration console and the managed servers is secured by SSL.

- WebLogic Server 12c is now integrated with OTD ("Oracle Traffic Director"), a web server technology derived from the former Sun iPlanet Web Server. This software complements the web server support offered by OHS ("Oracle HTTP Server").
Using OTD, WebLogic instances are load-balanced by powerful software that knows how to handle SDP ("Socket Direct Protocol") over InfiniBand, which boosts performance when used with engineered systems such as Oracle Exalogic Elastic Cloud. JBoss EAP 6, on the other hand, only offers support for the Apache Web Server with custom modules created to deal with JBoss clusters, and only across standard TCP/IP networks.

2) Application and Runtime Diagnostics

- WebLogic Server 12c has diagnostics capabilities embedded in the server, called WLDF ("WebLogic Diagnostic Framework"), so there is no need to rely on third-party tools. JBoss EAP 6, on the other hand, has no diagnostics capabilities; its only diagnostics tool is the log generated by the application server, and administrators are left to analyse thousands of log lines to find out what is going on.

- WebLogic Server 12c complements WLDF with JRockit MC ("Mission Control"), which gives administrators and developers complete insight into JVM performance, behavior and possible bottlenecks. WebLogic Server 12c also has an embedded classloader analysis tool, and even a log analyzer tool that lets administrators and developers view the logs of multiple servers at the same time. JBoss EAP 6, on the other hand, relies on third-party tools to do something similar; again, only log searching is offered to find out what is going on.

- WebLogic Server 12c offers end-to-end traceability and monitoring through Oracle EM ("Enterprise Manager"), including monitoring of business transactions that flow through web servers, ESBs, application servers and database servers, all with deep JVM analysis and diagnostics. JBoss EAP 6, on the other hand, does not support those features even when using JBoss ON ("Operations Network"), which is a separate technology. Red Hat relies on third-party tools to provide direct Oracle database traceability across JVMs; one of those tools is Oracle EM for non-Oracle middleware, which manages JBoss, Tomcat, WebSphere and IIS transparently.

- WebLogic Server 12c, with its JRockit support, offers a tool called JRockit Flight Recorder, which gives developers complete visibility into a given period of production monitoring with zero extra overhead. This automatic recording lets you deeply analyse thread latency, memory leaks, thread contention, resource utilization, stack overflow damage and GC ("Garbage Collection") cycles, and observe in real time stop-the-world events, generational, reference-count and parallel collections, and mutator thread behavior. JBoss EAP 6 cannot even dream of supporting something similar, not least because Red Hat does not have its own JVM.

3) Application Server Administration

- WebLogic Server 12c offers a complete administration console complemented with scripting and macro-like recording capabilities. A single WebLogic console can manage up to hundreds of WebLogic servers belonging to the same domain. JBoss EAP 6, on the other hand, has a limited console and provides XML-centric administration. JBoss, after ten years, has only started developing a rudimentary centralized administration that still leaves many administration tasks aside, so administrators and developers must touch scripts and XML configuration files for most advanced and even simple tasks. This leads to error-prone and risky deployments.
Even using JBoss ON, JBoss EAP is not able to offer decent administration features; administrators must be highly skilled in JBoss's internal architecture and its management capabilities.

- Oracle EM is available to manage multiple domains, databases, application servers, operating systems and virtualization, with complete end-to-end visibility. JBoss ON does not provide management capabilities across the complete architecture, only basic monitoring. Even deployment must be done outside JBoss ON, which does not integrate well with software other than JBoss. So far, JBoss ON does not support JBoss EAP 6, so even its minimal support for JBoss is not available for JBoss EAP 6, leaving customers uncovered and dependent on highly skilled JBoss administrators.

- WebLogic Server 12c has the same administration model regardless of the topology the customer selects. JBoss EAP 6, on the other hand, differentiates between two operational models, standalone mode and domain mode, which are not consistent with each other; depending on the mode used, a different administration skill set is required.

- WebLogic Server 12c has no point-of-failure processes and does not need any specialized server to be defined. The domain model in WebLogic has been available for years (at least ten) and is production proven. JBoss EAP 6, on the other hand, needs special processes to guarantee JBoss integrity: the PC ("Process-Controller") and the HC ("Host-Controller"). Unlike WebLogic, the domain model in JBoss is quite new (a year old at most) and needs to mature considerably before it can do what the WebLogic domain model does.

- WebLogic Server 12c supports a parallel deployment model, which enables several artifacts to be deployed at the same time. JBoss EAP 6, on the other hand, has no similar feature; every deployment is done atomically in the containers. This means that if you deploy a huge EAR (say, 120 MB) onto JBoss EAP 6, it will take several minutes before it starts accepting requests. The same EAR deployed onto WebLogic Server 12c will have its deployment time reduced by at least 2X compared to JBoss.

4) Support and Upgrades

- WebLogic Server 12c has patch management available. JBoss EAP 6, on the other hand, has no patch management; each JBoss EAP instance must be patched manually. To get such a feature, you need to buy a separate technology called JBoss ON ("Operations Network") that manages this type of thing. But until now, JBoss ON does not support JBoss EAP 6, so in practice JBoss EAP 6 does not have this feature.

- WebLogic Server 12c supports previous WebLogic domains without any reconfiguration, since its kernel has been robust and mature since its creation in 1995. JBoss EAP 6, on the other hand, has a proven lack of supportability between JBoss AS 4, 5, 6 and 7. Different kernels and messaging engines have been implemented in the JBoss stack over the last five years, revealing an inability to create a well-architected and proven middleware technology.

- WebLogic Server 12c offers patch prescription based on the customer's configuration. JBoss EAP 6, on the other hand, has no such capability; people need to create support tickets and have their installations reviewed by Red Hat support staff to get any patch prescription from them.

- Oracle WebLogic Server, regardless of version, has 8 years of support for new patches and lifetime availability of existing patches beyond that.
JBoss EAP 6, on the other hand, provides patches for a specific application server version for up to 5 years after the release date; JBoss EAP 4 and earlier versions had only 4 years. A good question that Red Hat will struggle to answer is: "what happens when you find issues after year 5?"

5) RAC ("Real Application Clusters") Support

- WebLogic Server 12c ships with a specific JDBC driver to leverage Oracle RAC clustering capabilities (Fast Application Notification, Transaction Affinity, Fast Connection Failover, etc.); the Oracle JDBC thin driver is also available. JBoss EAP 6, on the other hand, ships only the standard Oracle JDBC thin driver. Load balancing with Oracle RAC is not supported, manual intervention is necessary in case of planned or unplanned RAC downtime, and in JBoss EAP 6 the situation does not re-establish itself automatically after downtime.

- WebLogic Server 12c has a feature called Active GridLink for Oracle RAC, which provides up to 3X performance on OLTP applications. This seamless integration between WebLogic and the Oracle database adds more value to critical business applications, leveraging investments in Oracle database technology and Oracle middleware. JBoss EAP 6, on the other hand, shows no performance gains at all, even when administrators implement some kind of connection-pool tuning.

- WebLogic Server 12c also supports transaction and web session affinity to Oracle RAC, which provides additional performance gains. This is particularly interesting if you are creating a reliable solution that is distributed not only within a LAN cluster, but across different data centers. JBoss EAP 6, on the other hand, has no such support.

6) Standards and Technology Support

- WebLogic Server 12c has been fully Java EE 6 compatible and production ready since December 2011. JBoss EAP 6, on the other hand, became fully compatible with Java EE 6 only in the community version three months later, and production ready only a few days ago, considering that this article was written in June 2012. Red Hat says that they are the masters of innovation and technology proliferation, but compared with Oracle and even other proprietary vendors like IBM, they have historically been slow to deliver the newest technologies and standards adherence.

- Oracle is the steward of Java, driving innovation into the platform from commercial and open-source vendors. Red Hat, on the other hand, does not have its own JVM and relies on third-party JVMs to complete its application server offering. 95% of Red Hat customers use Oracle HotSpot as their JVM, which means that without Oracle's involvement their support is limited exclusively to the application server layer, and we all know that most problems happen in the JVM layer.

- WebLogic Server 12c natively supports JDK 7, which empowers developers to get the most out of the Java platform's productivity when writing code. This differentiates WebLogic from other application servers (except GlassFish, which is also managed by Oracle), because JDK 7 introduces remarkable productivity features such as the "try-with-resources" enhancement, catching multiple exceptions with one try block, Strings in switch statements, JVM improvements in terms of JDBC, I/O, networking, security and concurrency, and of course the most important feature of Java 7: native support for multiple non-Java languages. More features regarding JDK 7 can be found here.
JBoss EAP 6, on the other hand, does not officially support JDK 7; Red Hat comments in the community version that "Java SE 7 can be used with JBoss 7", which gives you no guarantee of enterprise support for JDK 7.

- Oracle WebLogic Server 12c supports integration with the Spring framework, allowing Spring applications to use WebLogic's special transaction manager and exposing bean interfaces as WebLogic MBeans to take advantage of all of WebLogic's monitoring and administration features. JBoss EAP 6, on the other hand, has no special integration with Spring. In fact, Red Hat offers a questionable package called "JBoss Web Platform" that in theory supports Spring, but in practice offers no special integration; it is just a way for Red Hat customers to get support for both JBoss and Spring technology through the same customer support channel.

7) Lightweight Development

- Oracle WebLogic Server 12c and Oracle GlassFish are completely integrated and can share applications without any modifications. Starting with the 12c version, WebLogic natively understands GlassFish deployment descriptors and specific configurations in order to offer a truly reliable migration path from a community Java EE application server to an enterprise middleware product like WebLogic. JBoss EAP 6, on the other hand, has no support for natively reusing an existing (or still in development) application from the JBoss AS community server. JBoss users suffer critical issues at deployment time, including: changing the libraries and dependencies of the application, patching the DTD or XSD deployment descriptors, refactoring application layers due to classloading issues and anomalies, and rebuilding the persistence, business and web layers due to issues with "usage of the certified version of a certain dependency" or "frameworks that Red Hat potentially does not recommend", etc. If you have the culture or an enterprise IT directive of developing Java EE applications on community middleware and transitioning at some future point to enterprise (vendor-supported) middleware, Oracle WebLogic plus Oracle GlassFish offers you a more sustainable solution.

- WebLogic Server 12c has a very light ZIP distribution (less than 165 MB). The JBoss EAP 6 ZIP is around 130 MB, but together with JBoss ON you have roughly 100 MB more, resulting in a larger download footprint. This is particularly relevant if you plan to use automated setup of application server instances (for example, to rapidly set up a development or staging environment) using Maven or Hudson.

- WebLogic Server 12c has complete integration with Maven, allowing developers to set up WebLogic domains with a few commands; tasks like downloading WebLogic, installation, domain creation and data source deployment are completely integrated. JBoss EAP 6, on the other hand, offers only limited integration with those tools.

- WebLogic Server 12c has a startup mode called WLX that turns off the EJB, JMS and JCA containers, leaving only the web container with the Java EE 6 web profile enabled. JBoss EAP 6, on the other hand, has no such feature; you need to manually disable the containers you do not want to use.

- WebLogic Server 12c supports FastSwap, which enables you to change classes without redeployment. This is particularly interesting if you are developing patches for an application that is already deployed and you do not want to redeploy the entire application.
This is the same behavior that most application servers offer for JSP pages, but with WebLogic Server 12c you get the same feature for Java classes in general. JBoss EAP 6, on the other hand, has no such support; even JBoss EAP 5 does not support this to date.

8) JMS and Messaging

- WebLogic Server 12c has had a proven, highly scalable JMS implementation since its initial release in 1995. JBoss EAP 6, on the other hand, has a still-immature technology called HornetQ, which was introduced in JBoss EAP 5 and replaced everything implemented in the previous versions. Red Hat loves to introduce new technologies across JBoss versions, playing around with customers and their investments. And when they are asked why they changed the implementation and caused such a mess, their answer is always: "the previous implementation was inadequate and not aligned with the community strategy, so we are creating a new and improved one". This practice leads to uncomfortable investments that in the near future (sometimes less than a year) will be affected in some way.

- WebLogic Server 12c has troubleshooting and monitoring features included in the WebLogic console and WLDF. JBoss EAP 6, on the other hand, has no direct monitoring in the console; activity is reflected only in the logs, and no debug logs are available in case of JMS issues.

- WebLogic Server 12c has extremely good performance and scalability. JBoss EAP 6, on the other hand, has a JMS storage mechanism that relies on an Oracle database or MySQL. This means that if an issue happens in production and Red Hat claims the performance issue is caused by database problems, they will not support you on the performance issue; they will direct you to call Oracle instead.

- WebLogic Server 12c supports enterprise messaging features like SAF ("Store and Forward"), distributed queues/topics and Foreign JMS provider support, which leverage JMS implementations without compromising developer code, making things completely transparent. JBoss EAP 6, on the other hand, does not even dream of supporting such features.

9) Caching and Grid

- Coherence, the leading and most mature data grid technology from Oracle, has been available since the early 2000s and was integrated with WebLogic in 2009. Coherence and WebLogic clusters can both be managed from the WebLogic administration console; even the Node Manager supports Coherence. JBoss, on the other hand, discontinued JBoss Cache, its caching implementation, just as it did with the messaging implementation (JBossMQ), which was an issue for long-term customers. JBoss EAP 6 ships Infinispan version 1.0, which is immature and lacks a proven record of successful cases and reliability.

- WebLogic Server 12c has a feature called ActiveCache, which uses Coherence to replicate HTTP sessions, without any code changes, from both WebLogic and other application servers like JBoss, Tomcat, WebSphere, GlassFish and even Microsoft IIS. JBoss EAP 6, on the other hand, does not have such support, and even if it does in the future, it will probably support only its own application server.

- Coherence can be used to manage both the L1 and L2 cache levels, providing support for Oracle TopLink and other JPA-compliant implementations, even Hibernate. JBoss EAP 6 and Infinispan, on the other hand, support only Hibernate. And most important of all: Infinispan does not have any successful case of L1 or L2 cache support using Hibernate, which leads us to question its viability.
10) Performance

- WebLogic Server 12c is certified with Oracle Exalogic Elastic Cloud and can run unchanged applications on this engineered system. Customers can benefit from Exalogic optimizations of both the kernel and JVM layers, boosting performance by as much as 10X for web, OLTP, JMS and grid applications. JBoss EAP 6, on the other hand, has no investment in engineered systems: customers do not have the option to deploy on an ultra-fast Java system if their project becomes relevant and performance issues are detected.

- WebLogic Server 12c maintains a performance gain with each new release: starting with WebLogic 5.1, the overall performance gain has been close to 4X, which is close to a 20% gain release over release. JBoss, on the other hand, does not provide SPECjAppServer or SPECjEnterprise performance benchmarks. Their so-called "performance gains" remain hidden in their customer environments, which leaves us wondering whether they are real, since we will never get access to those environments.

- WebLogic Server 12c has industry performance benchmarks, with submissions across platforms and configurations leading SPECj. Oracle WebLogic leads SPECjAppServer performance in multiple categories, fitting all customer topologies: dual-node, single-node, multi-node and multi-node with RAC. JBoss... again, does not provide any SPECjAppServer performance benchmarks.

- WebLogic Server 12c has a feature called work managers, which allows your application to reach new performance levels based on utilization of critical resources such as CPU. Work managers prioritize work and allocate threads based on an execution model that takes into account administrator-defined parameters and actual run-time performance and throughput. JBoss EAP 6, on the other hand, has no comparable feature and probably never will. Without something like work managers, JBoss EAP 6 forces administrators and especially developers to chase performance gains in an intrusive way, rewriting code and doing performance refactorings.

11) Professional Services Support

- WebLogic Server 12c, and any other technology sold by Oracle, gives customers the possibility of hiring OCS ("Oracle Consulting Services") to manage critical scenarios, assist with deployment of new applications, and provide highly skilled consultancy on architecture, best practices and staffing alongside customer teams. All OCS services are available without restriction, whether the customer has already bought software from Oracle or is just starting an implementation before any acquisition. JBoss EAP 6, or Red Hat to be more specific, only offers professional services if you buy subscriptions from them. If you are developing a new critical application for your business and need Red Hat's help with a serious issue or an architecture decision, they will probably say: "OK... I can help you, but only after you buy subscriptions from me". Red Hat also does not allow its professional services consultants to manage environments that use community-based software; they will probably force you to first buy a subscription, download their "enterprise" version and then, optionally, hire their consultants.

- Oracle provides its university to educate your team in its technologies, including, of course, specialized training for the WebLogic application server. At any time and location, you can hire Oracle to train your team so you get trustworthy knowledge tailored to your specific needs.
Certifications for the products are also available if your technical people want to differentiate themselves as professionals. Red Hat, on the other hand, has a limited pool of resources to train your team in its technologies. Basically, they sell training and certification for RHEL ("Red Hat Enterprise Linux"), but if you want more specialized training in JBoss middleware, they will probably refer you to some "certified" partner for local training, since they are apparently discontinuing their education center, at least here in Brazil. They were not able to reproduce their success with RHEL education in their middleware division, since they need to sell the subscriptions first before giving you specialized training. And again, they only offer specialized training based on their enterprise version (EAP in the case of JBoss), which means that the courses will be quite outdated. There are reports of developers who took official training from Red Hat this year (2012) and, in a certain advanced JBoss course, Red Hat supposedly covered JBossMQ as the messaging subsystem, and even the printed material provided was based on JBossMQ, since the training was created for JBoss EAP 4.3.

12) Encouraging Transparency without Ulterior Motives

- WebLogic Server 12c, like any other software from Oracle, can be downloaded at any time from anywhere; you only need an OTN ("Oracle Technology Network") credential, and you can download any enterprise software as many times as you want. And it is not some kind of "trial" version: it is the official binaries that will run forever in your data center. Oracle does not encourage the usage of "specific versions" of its software; the binaries you buy from Oracle are the same binaries anyone in the world can download and use for testing and personal education. JBoss EAP 6, on the other hand, is not available for download unless you buy a subscription and get access to the Red Hat enterprise repositories. If you need to test, learn, or just start creating your application using Red Hat's middleware software, you have to download it from the community website. You are not allowed to download the enterprise version that, according to Red Hat, is more secure, reliable and robust. But none of us wants to start developing software on insecure, unreliable and unscalable middleware, right? So what do you do? You are "invited" by Red Hat to buy subscriptions from them to get access to the "cool" version of the software.

- WebLogic Server 12c prices are publicly available on the Oracle website. If you want to know right now how much WebLogic will cost your organization, just click here and get access to our price list; in the case of WebLogic, check out the "US Oracle Technology Commercial Price List". Oracle also encourages you to get in touch with a sales representative to discuss discounts that would make the investment in our technology possible, but you are not required to do so, only if you are interested in buying our technology or want to discuss some discount scenarios. JBoss EAP 6, on the other hand, does not have its cost publicly available on Red Hat's website or in any other medium; at least, it is not easy to find such information. The only link you will likely find on their website is a "Contact a Sales Representative" link. This is not a very good relationship between a customer and a vendor, and it is not an example of transparency, especially when the software is sold as open.
In these situations, customers expect to see software prices publicly available, so they have the chance to decide, based on the existing features of the software, whether the cost is fair or not.

Conclusion

Oracle WebLogic is the most mature, secure, reliable and scalable Java EE application server on the market, with a proven record of success around the globe to prove its maturity. Don't miss the chance to discover today how WebLogic could fit your needs and sustain your global IT middleware strategy, whether or not your strategy is completely based on the Cloud.

  • An Xml Serializable PropertyBag Dictionary Class for .NET

    - by Rick Strahl
    I don't know about you but I frequently need property bags in my applications to store and possibly cache arbitrary data. Dictionary<T,V> works well for this although I always seem to be hunting for a more specific generic type that provides a string key based dictionary. There's StringDictionary, but it only works with strings. There's HashSet<T>, but it uses the actual values as keys. In most key/value situations for me, a string key is what I want to work off. Dictionary<T,V> works well enough, but there are some issues with serialization of dictionaries in .NET. The .NET framework doesn't do well serializing IDictionary objects out of the box. The XmlSerializer doesn't support serialization of IDictionary via its default serialization, and while the DataContractSerializer does support IDictionary serialization it produces some pretty atrocious XML. What doesn't work? First off, Dictionary serialization with the XmlSerializer doesn't work, so the following fails: [TestMethod] public void DictionaryXmlSerializerTest() { var bag = new Dictionary<string, object>(); bag.Add("key", "Value"); bag.Add("Key2", 100.10M); bag.Add("Key3", Guid.NewGuid()); bag.Add("Key4", DateTime.Now); bag.Add("Key5", true); bag.Add("Key7", new byte[3] { 42, 45, 66 }); TestContext.WriteLine(this.ToXml(bag)); } public string ToXml(object obj) { if (obj == null) return null; StringWriter sw = new StringWriter(); XmlSerializer ser = new XmlSerializer(obj.GetType()); ser.Serialize(sw, obj); return sw.ToString(); } The error you get with this is: System.NotSupportedException: The type System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.Object, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] is not supported because it implements IDictionary. Got it! BTW, the same is true with binary serialization.
Running the same code above against the DataContractSerializer does work: [TestMethod] public void DictionaryDataContextSerializerTest() { var bag = new Dictionary<string, object>(); bag.Add("key", "Value"); bag.Add("Key2", 100.10M); bag.Add("Key3", Guid.NewGuid()); bag.Add("Key4", DateTime.Now); bag.Add("Key5", true); bag.Add("Key7", new byte[3] { 42, 45, 66 }); TestContext.WriteLine(this.ToXmlDcs(bag)); } public string ToXmlDcs(object value, bool throwExceptions = false) { var ser = new DataContractSerializer(value.GetType(), null, int.MaxValue, true, false, null); MemoryStream ms = new MemoryStream(); ser.WriteObject(ms, value); return Encoding.UTF8.GetString(ms.ToArray(), 0, (int)ms.Length); } This DOES work but produces some pretty heinous XML (formatted with line breaks and indentation here): <ArrayOfKeyValueOfstringanyType xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <KeyValueOfstringanyType> <Key>key</Key> <Value i:type="a:string" xmlns:a="http://www.w3.org/2001/XMLSchema">Value</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key2</Key> <Value i:type="a:decimal" xmlns:a="http://www.w3.org/2001/XMLSchema">100.10</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key3</Key> <Value i:type="a:guid" xmlns:a="http://schemas.microsoft.com/2003/10/Serialization/">2cd46d2a-a636-4af4-979b-e834d39b6d37</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key4</Key> <Value i:type="a:dateTime" xmlns:a="http://www.w3.org/2001/XMLSchema">2011-09-19T17:17:05.4406999-07:00</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key5</Key> <Value i:type="a:boolean" xmlns:a="http://www.w3.org/2001/XMLSchema">true</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key7</Key> <Value i:type="a:base64Binary" xmlns:a="http://www.w3.org/2001/XMLSchema">Ki1C</Value> </KeyValueOfstringanyType> </ArrayOfKeyValueOfstringanyType> Ouch! That seriously hurts the eye! :-) Worse though it's extremely verbose with all those repetitive namespace declarations. It's good to know that it works in a pinch, but for a human readable/editable solution or something lightweight to store in a database it's not quite ideal. Why should I care? As a little background, in one of my applications I have a need for a flexible property bag that is used on a free form database field on an otherwise static entity. Basically what I have is a standard database record to which arbitrary properties can be added in an XML based string field. I intend to expose those arbitrary properties as a collection from field data stored in XML. The concept is pretty simple: When loading write the data to the collection, when the data is saved serialize the data into an XML string and store it into the database. When reading the data pick up the XML and if the collection on the entity is accessed automatically deserialize the XML into the Dictionary. (I'll talk more about this in another post). While the DataContext Serializer would work, it's verbosity is problematic both for size of the generated XML strings and the fact that users can manually edit this XML based property data in an advanced mode. A clean(er) layout certainly would be preferable and more user friendly. Custom XMLSerialization with a PropertyBag Class So… after a bunch of experimentation with different serialization formats I decided to create a custom PropertyBag class that provides for a serializable Dictionary. 
It's basically a custom Dictionary<TType,TValue> implementation with the keys always set as string keys. The result are PropertyBag<TValue> and PropertyBag (which defaults to the object type for values). The PropertyBag<TType> and PropertyBag classes provide these features: Subclassed from Dictionary<T,V> Implements IXmlSerializable with a cleanish XML format ToXml() and FromXml() methods to export and import to and from XML strings Static CreateFromXml() method to create an instance It's simple enough as it's merely a Dictionary<string,object> subclass but that supports serialization to a - what I think at least - cleaner XML format. The class is super simple to use: [TestMethod] public void PropertyBagTwoWayObjectSerializationTest() { var bag = new PropertyBag(); bag.Add("key", "Value"); bag.Add("Key2", 100.10M); bag.Add("Key3", Guid.NewGuid()); bag.Add("Key4", DateTime.Now); bag.Add("Key5", true); bag.Add("Key7", new byte[3] { 42,45,66 } ); bag.Add("Key8", null); bag.Add("Key9", new ComplexObject() { Name = "Rick", Entered = DateTime.Now, Count = 10 }); string xml = bag.ToXml(); TestContext.WriteLine(bag.ToXml()); bag.Clear(); bag.FromXml(xml); Assert.IsTrue(bag["key"] as string == "Value"); Assert.IsInstanceOfType( bag["Key3"], typeof(Guid)); Assert.IsNull(bag["Key8"]); //Assert.IsNull(bag["Key10"]); Assert.IsInstanceOfType(bag["Key9"], typeof(ComplexObject)); } This uses the PropertyBag class which uses a PropertyBag<string,object> - which means it returns untyped values of type object. I suspect for me this will be the most common scenario as I'd want to store arbitrary values in the PropertyBag rather than one specific type. The same code with a strongly typed PropertyBag<decimal> looks like this: [TestMethod] public void PropertyBagTwoWayValueTypeSerializationTest() { var bag = new PropertyBag<decimal>(); bag.Add("key", 10M); bag.Add("Key1", 100.10M); bag.Add("Key2", 200.10M); bag.Add("Key3", 300.10M); string xml = bag.ToXml(); TestContext.WriteLine(bag.ToXml()); bag.Clear(); bag.FromXml(xml); Assert.IsTrue(bag.Get("Key1") == 100.10M); Assert.IsTrue(bag.Get("Key3") == 300.10M); } and produces typed results of type decimal. The types can be either value or reference types the combination of which actually proved to be a little more tricky than anticipated due to null and specific string value checks required - getting the generic typing right required use of default(T) and Convert.ChangeType() to trick the compiler into playing nice. Of course the whole raison d'etre for this class is the XML serialization. You can see in the code above that we're doing a .ToXml() and .FromXml() to serialize to and from string. 
The XML produced for the first example looks like this: <?xml version="1.0" encoding="utf-8"?> <properties> <item> <key>key</key> <value>Value</value> </item> <item> <key>Key2</key> <value type="decimal">100.10</value> </item> <item> <key>Key3</key> <value type="___System.Guid"> <guid>f7a92032-0c6d-4e9d-9950-b15ff7cd207d</guid> </value> </item> <item> <key>Key4</key> <value type="datetime">2011-09-26T17:45:58.5789578-10:00</value> </item> <item> <key>Key5</key> <value type="boolean">true</value> </item> <item> <key>Key7</key> <value type="base64Binary">Ki1C</value> </item> <item> <key>Key8</key> <value type="nil" /> </item> <item> <key>Key9</key> <value type="___Westwind.Tools.Tests.PropertyBagTest+ComplexObject"> <ComplexObject> <Name>Rick</Name> <Entered>2011-09-26T17:45:58.5789578-10:00</Entered> <Count>10</Count> </ComplexObject> </value> </item> </properties>   The format is a bit cleaner than the DataContractSerializer. Each item is serialized into <key> <value> pairs. If the value is a string no type information is written. Since string tends to be the most common type this saves space and serialization processing. All other types are attributed. Simple types are mapped to XML types so things like decimal, datetime, boolean and base64Binary are encoded using their Xml type values. All other types are embedded with a hokey format that describes the .NET type preceded by a three underscores and then are encoded using the XmlSerializer. You can see this best above in the ComplexObject encoding. For custom types this isn't pretty either, but it's more concise than the DCS and it works as long as you're serializing back and forth between .NET clients at least. The XML generated from the second example that uses PropertyBag<decimal> looks like this: <?xml version="1.0" encoding="utf-8"?> <properties> <item> <key>key</key> <value type="decimal">10</value> </item> <item> <key>Key1</key> <value type="decimal">100.10</value> </item> <item> <key>Key2</key> <value type="decimal">200.10</value> </item> <item> <key>Key3</key> <value type="decimal">300.10</value> </item> </properties>   How does it work As I mentioned there's nothing fancy about this solution - it's little more than a subclass of Dictionary<T,V> that implements custom Xml Serialization and a couple of helper methods that facilitate getting the XML in and out of the class more easily. But it's proven very handy for a number of projects for me where dynamic data storage is required. Here's the code: /// <summary> /// Creates a serializable string/object dictionary that is XML serializable /// Encodes keys as element names and values as simple values with a type /// attribute that contains an XML type name. Complex names encode the type /// name with type='___namespace.classname' format followed by a standard xml /// serialized format. The latter serialization can be slow so it's not recommended /// to pass complex types if performance is critical. /// </summary> [XmlRoot("properties")] public class PropertyBag : PropertyBag<object> { /// <summary> /// Creates an instance of a propertybag from an Xml string /// </summary> /// <param name="xml">Serialize</param> /// <returns></returns> public static PropertyBag CreateFromXml(string xml) { var bag = new PropertyBag(); bag.FromXml(xml); return bag; } } /// <summary> /// Creates a serializable string for generic types that is XML serializable. /// /// Encodes keys as element names and values as simple values with a type /// attribute that contains an XML type name. 
Complex names encode the type /// name with type='___namespace.classname' format followed by a standard xml /// serialized format. The latter serialization can be slow so it's not recommended /// to pass complex types if performance is critical. /// </summary> /// <typeparam name="TValue">Must be a reference type. For value types use type object</typeparam> [XmlRoot("properties")] public class PropertyBag<TValue> : Dictionary<string, TValue>, IXmlSerializable { /// <summary> /// Not implemented - this means no schema information is passed /// so this won't work with ASMX/WCF services. /// </summary> /// <returns></returns> public System.Xml.Schema.XmlSchema GetSchema() { return null; } /// <summary> /// Serializes the dictionary to XML. Keys are /// serialized to element names and values as /// element values. An xml type attribute is embedded /// for each serialized element - a .NET type /// element is embedded for each complex type and /// prefixed with three underscores. /// </summary> /// <param name="writer"></param> public void WriteXml(System.Xml.XmlWriter writer) { foreach (string key in this.Keys) { TValue value = this[key]; Type type = null; if (value != null) type = value.GetType(); writer.WriteStartElement("item"); writer.WriteStartElement("key"); writer.WriteString(key as string); writer.WriteEndElement(); writer.WriteStartElement("value"); string xmlType = XmlUtils.MapTypeToXmlType(type); bool isCustom = false; // Type information attribute if not string if (value == null) { writer.WriteAttributeString("type", "nil"); } else if (!string.IsNullOrEmpty(xmlType)) { if (xmlType != "string") { writer.WriteStartAttribute("type"); writer.WriteString(xmlType); writer.WriteEndAttribute(); } } else { isCustom = true; xmlType = "___" + value.GetType().FullName; writer.WriteStartAttribute("type"); writer.WriteString(xmlType); writer.WriteEndAttribute(); } // Actual deserialization if (!isCustom) { if (value != null) writer.WriteValue(value); } else { XmlSerializer ser = new XmlSerializer(value.GetType()); ser.Serialize(writer, value); } writer.WriteEndElement(); // value writer.WriteEndElement(); // item } } /// <summary> /// Reads the custom serialized format /// </summary> /// <param name="reader"></param> public void ReadXml(System.Xml.XmlReader reader) { this.Clear(); while (reader.Read()) { if (reader.NodeType == XmlNodeType.Element && reader.Name == "key") { string xmlType = null; string name = reader.ReadElementContentAsString(); // item element reader.ReadToNextSibling("value"); if (reader.MoveToNextAttribute()) xmlType = reader.Value; reader.MoveToContent(); TValue value; if (xmlType == "nil") value = default(TValue); // null else if (string.IsNullOrEmpty(xmlType)) { // value is a string or object and we can assign TValue to value string strval = reader.ReadElementContentAsString(); value = (TValue) Convert.ChangeType(strval, typeof(TValue)); } else if (xmlType.StartsWith("___")) { while (reader.Read() && reader.NodeType != XmlNodeType.Element) { } Type type = ReflectionUtils.GetTypeFromName(xmlType.Substring(3)); //value = reader.ReadElementContentAs(type,null); XmlSerializer ser = new XmlSerializer(type); value = (TValue)ser.Deserialize(reader); } else value = (TValue)reader.ReadElementContentAs(XmlUtils.MapXmlTypeToType(xmlType), null); this.Add(name, value); } } } /// <summary> /// Serializes this dictionary to an XML string /// </summary> /// <returns>XML String or Null if it fails</returns> public string ToXml() { string xml = null; SerializationUtils.SerializeObject(this, 
out xml); return xml; } /// <summary> /// Deserializes from an XML string /// </summary> /// <param name="xml"></param> /// <returns>true or false</returns> public bool FromXml(string xml) { this.Clear(); // if xml string is empty we return an empty dictionary if (string.IsNullOrEmpty(xml)) return true; var result = SerializationUtils.DeSerializeObject(xml, this.GetType()) as PropertyBag<TValue>; if (result != null) { foreach (var item in result) { this.Add(item.Key, item.Value); } } else // null is a failure return false; return true; } /// <summary> /// Creates an instance of a propertybag from an Xml string /// </summary> /// <param name="xml"></param> /// <returns></returns> public static PropertyBag<TValue> CreateFromXml(string xml) { var bag = new PropertyBag<TValue>(); bag.FromXml(xml); return bag; } } } The code uses a couple of small helper classes, SerializationUtils and XmlUtils, for mapping Xml types to and from .NET, both of which are from the Westwind.Utilities project (which is the same project where PropertyBag lives) from the West Wind Web Toolkit. The code implements ReadXml and WriteXml for the IXmlSerializable implementation using old school XmlReaders and XmlWriters (because it's pretty simple stuff - no need for XLinq here). Then there are two helper methods .ToXml() and .FromXml() that basically allow your code to easily convert between XML and a PropertyBag object. In my code that's what I use to actually persist to and from the entity XML property during .Load() and .Save() operations. It's sweet to be able to have a string key dictionary and then be able to turn around with 1 line of code to persist the whole thing to XML and back. Hopefully some of you will find this class as useful as I've found it. It's a simple solution to a common requirement in my applications and I've used the hell out of it in the short time since I created it.

Resources You can find the complete code for the two classes plus the helpers in the Subversion repository for Westwind.Utilities. You can grab the source files from there or download the whole project. You can also grab the full Westwind.Utilities assembly from NuGet and add it to your project if that's easier for you. PropertyBag Source Code SerializationUtils and XmlUtils Westwind.Utilities Assembly on NuGet (add from Visual Studio) © Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET, CSharp
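As a footnote to the entity load/save scenario described in the article ("when the data is saved serialize the data into an XML string and store it into the database"), here is a minimal sketch of that pattern. The Customer entity, its PropertiesXml column and the SyncProperties helper are hypothetical illustrations, not part of the West Wind Web Toolkit:

// Hypothetical entity with a free-form XML string field backing a PropertyBag.
public class Customer
{
    // Maps to the XML string column in the database (assumed name)
    public string PropertiesXml { get; set; }

    private PropertyBag _properties;

    // Lazily deserialize the stored XML the first time the bag is accessed
    public PropertyBag Properties
    {
        get
        {
            if (_properties == null)
            {
                _properties = new PropertyBag();
                if (!string.IsNullOrEmpty(PropertiesXml))
                    _properties.FromXml(PropertiesXml);
            }
            return _properties;
        }
    }

    // Call before saving so the XML column reflects any changes to the bag
    public void SyncProperties()
    {
        if (_properties != null)
            PropertiesXml = _properties.ToXml();
    }
}

With something like this in place, Load() only has to populate PropertiesXml from the record, and Save() calls SyncProperties() before writing the row back.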

  • Building a better mouse-trap – Improving the creation of XML Message Requests using Reflection, XML & XSLT

    - by paulschapman
    Introduction

The way I previously created messages to send to the GovTalk service was to use the XMLDocument to build the request. While this worked it left a number of problems, not least that a special function would need to be created for every message. This is OK for the short term, but the biggest cost in any software project is maintenance, and this would be a headache to maintain. So the following is a somewhat better way of achieving the same thing. For the purposes of this article I am going to be using the CompanyNumberSearch request of the GovTalk service – although this technique would work for any service that accepts XML. The C# functions which send and receive the messages remain the same. The magic sauce in this is the XSLT which defines the structure of the request, and the use of objects in conjunction with reflection to provide the content. It is a bit like Sweet Chilli Sauce added to Chicken on a bed of rice. So on to the Sweet Chilli Sauce.

The Sweet Chilli Sauce

The request to search for a company based on its number is as follows; <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" > <EnvelopeVersion>1.0</EnvelopeVersion> <Header> <MessageDetails> <Class>NumberSearch</Class> <Qualifier>request</Qualifier> <TransactionID>1</TransactionID> </MessageDetails> <SenderDetails> <IDAuthentication> <SenderID>????????????????????????????????</SenderID> <Authentication> <Method>CHMD5</Method> <Value>????????????????????????????????</Value> </Authentication> </IDAuthentication> </SenderDetails> </Header> <GovTalkDetails> <Keys/> </GovTalkDetails> <Body> <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd"> <PartialCompanyNumber>99999999</PartialCompanyNumber> <DataSet>LIVE</DataSet> <SearchRows>1</SearchRows> </NumberSearchRequest> </Body> </GovTalkMessage> This is the XML that we send to the GovTalk service, and we get back a list of companies that match the criteria passed. A message is structured in two parts: the envelope, which identifies the person sending the request and names the request, and the body, which gives the detail of the company we are looking for.

The Chilli

What makes it possible is the use of XSLT to define the message – and serialization to convert each request object into XML. To start we need to create an object which will represent the contents of the message we are sending. However, there is a set of common properties in all the messages that we send to Companies House. These properties are as follows:

SenderId – the id of the person sending the message
SenderPassword – the password associated with the Id
TransactionId – unique identifier for the message
AuthenticationValue – authenticates the request

Because these properties are unique to the Companies House messages, and because they are shared by all of them, they are perfect candidates for a base class.
The class is as follows; using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Security.Cryptography; using System.Text; using System.Text.RegularExpressions; using Microsoft.WindowsAzure.ServiceRuntime; namespace CompanyHub.Services { public class GovTalkRequest { public GovTalkRequest() { try { SenderID = RoleEnvironment.GetConfigurationSettingValue("SenderId"); SenderPassword = RoleEnvironment.GetConfigurationSettingValue("SenderPassword"); TransactionId = DateTime.Now.Ticks.ToString(); AuthenticationValue = EncodePassword(String.Format("{0}{1}{2}", SenderID, SenderPassword, TransactionId)); } catch (System.Exception ex) { throw ex; } } /// <summary> /// returns the Sender ID to be used when communicating with the GovTalk Service /// </summary> public String SenderID { get; set; } /// <summary> /// return the password to be used when communicating with the GovTalk Service /// </summary> public String SenderPassword { get; set; } // end SenderPassword /// <summary> /// Transaction Id - uses the Time and Date converted to Ticks /// </summary> public String TransactionId { get; set; } // end TransactionId /// <summary> /// calculate the authentication value that will be used when /// communicating with /// </summary> public String AuthenticationValue { get; set; } // end AuthenticationValue property /// <summary> /// encodes password(s) using MD5 /// </summary> /// <param name="clearPassword"></param> /// <returns></returns> public static String EncodePassword(String clearPassword) { MD5CryptoServiceProvider md5Hasher = new MD5CryptoServiceProvider(); byte[] hashedBytes; UTF32Encoding encoder = new UTF32Encoding(); hashedBytes = md5Hasher.ComputeHash(ASCIIEncoding.Default.GetBytes(clearPassword)); String result = Regex.Replace(BitConverter.ToString(hashedBytes), "-", "").ToLower(); return result; } } } There is nothing particularly clever here, except for the EncodePassword method which hashes the value made up of the SenderId, Password and Transaction id. Each message inherits from this object. So for the Company Number Search in addition to the properties above we need a partial number, which dataset to search – for the purposes of the project we only need to search the LIVE set so this can be set in the constructor and the SearchRows. Again all are set as properties. With the SearchRows and DataSet initialized in the constructor. public class CompanyNumberSearchRequest : GovTalkRequest, IDisposable { /// <summary> /// /// </summary> public CompanyNumberSearchRequest() : base() { DataSet = "LIVE"; SearchRows = 1; } /// <summary> /// Company Number to search against /// </summary> public String PartialCompanyNumber { get; set; } /// <summary> /// What DataSet should be searched for the company /// </summary> public String DataSet { get; set; } /// <summary> /// How many rows should be returned /// </summary> public int SearchRows { get; set; } public void Dispose() { DataSet = String.Empty; PartialCompanyNumber = String.Empty; DataSet = "LIVE"; SearchRows = 1; } } As well as inheriting from our base class, I have also inherited from IDisposable – not just because it is just plain good practice to dispose of objects when coding, but it gives also gives us more versatility when using the object. 
There are four stages in making a request and this is reflected in the four methods we execute in making a call to the Companies House service; Create a request Send a request Check the status If OK then get the results of the request I’ve implemented each of these stages within a static class called Toolbox – which also means I don’t need to create an instance of the class to use it. When making a request there are three stages; Get the template for the message Serialize the object representing the message Transform the serialized object using a predefined XSLT file. Each of my templates I have defined as an embedded resource. When retrieving a resource of this kind we have to include the full namespace to the resource. In making the code re-usable as much as possible I defined the full ‘path’ within the GetRequest method. requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile); So we now have the full path of the file within the assembly. Now all we need do is retrieve the assembly and get the resource. asm = Assembly.GetExecutingAssembly(); sr = asm.GetManifestResourceStream(requestFile); Once retrieved  So this can be returned to the calling function and we now have a stream of XSLT to define the message. Time now to serialize the request to create the other side of this message. // Serialize object containing Request, Load into XML Document t = Obj.GetType(); ms = new MemoryStream(); serializer = new XmlSerializer(t); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); serializer.Serialize(xmlTextWriter, Obj); ms = (MemoryStream)xmlTextWriter.BaseStream; GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray()); First off we need the type of the object so we make a call to the GetType method of the object containing the Message properties. Next we need a MemoryStream, XmlSerializer and an XMLTextWriter so these can be initialized. The object is serialized by making the call to the Serialize method of the serializer object. The result of that is then converted into a MemoryStream. That MemoryStream is then converted into a string. ConvertByteArrayToString This is a fairly simple function which uses an ASCIIEncoding object found within the System.Text namespace to convert an array of bytes into a string. public static String ConvertByteArrayToString(byte[] bytes) { System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding(); return enc.GetString(bytes); } I only put it into a function because I will be using this in various places. The Sauce When adding support for other messages outside of creating a new object to store the properties of the message, the C# components do not need to change. It is in the XSLT file that the versatility of the technique lies. The XSLT file determines the format of the message. 
For the CompanyNumberSearch the XSLT file is as follows; <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" > <EnvelopeVersion>1.0</EnvelopeVersion> <Header> <MessageDetails> <Class>NumberSearch</Class> <Qualifier>request</Qualifier> <TransactionID> <xsl:value-of select="CompanyNumberSearchRequest/TransactionId"/> </TransactionID> </MessageDetails> <SenderDetails> <IDAuthentication> <SenderID><xsl:value-of select="CompanyNumberSearchRequest/SenderID"/></SenderID> <Authentication> <Method>CHMD5</Method> <Value> <xsl:value-of select="CompanyNumberSearchRequest/AuthenticationValue"/> </Value> </Authentication> </IDAuthentication> </SenderDetails> </Header> <GovTalkDetails> <Keys/> </GovTalkDetails> <Body> <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd"> <PartialCompanyNumber> <xsl:value-of select="CompanyNumberSearchRequest/PartialCompanyNumber"/> </PartialCompanyNumber> <DataSet> <xsl:value-of select="CompanyNumberSearchRequest/DataSet"/> </DataSet> <SearchRows> <xsl:value-of select="CompanyNumberSearchRequest/SearchRows"/> </SearchRows> </NumberSearchRequest> </Body> </GovTalkMessage> </xsl:template> </xsl:stylesheet> The outer two tags define that this is a XSLT stylesheet and the root tag from which the nodes are searched for. The GovTalkMessage is the format of the message that will be sent to Companies House. We first set up the XslCompiledTransform object which will transform the XSLT template and the serialized object into the request to Companies House. xslt = new XslCompiledTransform(); resultStream = new MemoryStream(); writer = new XmlTextWriter(resultStream, Encoding.ASCII); doc = new XmlDocument(); The Serialize method require XmlTextWriter to write the XML (writer) and a stream to place the transferred object into (writer). The XML will be loaded into an XMLDocument object (doc) prior to the transformation. // create XSLT Template xslTemplate = Toolbox.GetRequest(Template); xslTemplate.Seek(0, SeekOrigin.Begin); templateReader = XmlReader.Create(xslTemplate); xslt.Load(templateReader); I have stored all the templates as a series of Embedded Resources and the GetRequestCall takes the name of the template and extracts the relevent XSLT file. /// <summary> /// Gets the framwork XML which makes the request /// </summary> /// <param name="RequestFile"></param> /// <returns></returns> public static Stream GetRequest(String RequestFile) { String requestFile = String.Empty; Stream sr = null; Assembly asm = null; try { requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile); asm = Assembly.GetExecutingAssembly(); sr = asm.GetManifestResourceStream(requestFile); } catch (Exception) { throw; } finally { asm = null; } return sr; } // end private static stream GetRequest We first take the template name and expand it to include the full namespace to the Embedded Resource I like to keep all my schemas in the same directory and so the namespace reflects this. The rest is the default namespace for the project. 
Then we get the currently executing assembly (which contains the resources) with the call to GetExecutingAssembly(). Finally we get a stream which contains the XSLT file. We use this stream and then load an XmlReader with the contents of the template, and that is in turn loaded into the XslCompiledTransform object. We convert the object containing the message properties into XML by serializing it; calling the Serialize() method of the XmlSerializer object. To set up the object we do the following; t = Obj.GetType(); ms = new MemoryStream(); serializer = new XmlSerializer(t); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); We first determine the type of the object being transferred by calling GetType(). We create an XmlSerializer object by passing in the type of the object being serialized. The serializer writes to a memory stream and that is linked to an XmlTextWriter. Next job is to serialize the object and load it into an XmlDocument. serializer.Serialize(xmlTextWriter, Obj); ms = (MemoryStream)xmlTextWriter.BaseStream; xmlRequest = new XmlTextReader(ms); GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray()); doc.LoadXml(GovTalkRequest); Time to transform the XML to construct the full request. xslt.Transform(doc, writer); resultStream.Seek(0, SeekOrigin.Begin); request = Toolbox.ConvertByteArrayToString(resultStream.ToArray()); So that creates the full request to be sent to Companies House.
Sending the request
So far we have a string with a request for the Companies House service. Now we need to send that request to the Companies House service.
Configuration within an Azure project
There are entire blog entries written about configuration within an Azure project – most of this is out of scope for this article but the following is a summary. Configuration is defined in two files within the parent project: the *.csdef file, which contains the definition of each configuration setting, and the *.cscfg file, which supplies their values. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WebRole name="CompanyHub.Host"> <InputEndpoints> <InputEndpoint name="HttpIn" protocol="http" port="80" /> </InputEndpoints> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> <Setting name="DataConnectionString" /> </ConfigurationSettings> </WebRole> <WebRole name="CompanyHub.Services"> <InputEndpoints> <InputEndpoint name="HttpIn" protocol="http" port="8080" /> </InputEndpoints> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> <Setting name="SenderId"/> <Setting name="SenderPassword" /> <Setting name="GovTalkUrl"/> </ConfigurationSettings> </WebRole> <WorkerRole name="CompanyHub.Worker"> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> </ConfigurationSettings> </WorkerRole> </ServiceDefinition> Above is the configuration definition (ServiceDefinition.csdef) from the project. What we are interested in, however, is the ConfigurationSettings tag of the CompanyHub.Services WebRole.
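Before reading those settings it is worth guarding against missing values. This helper is my own addition (not part of the article) and simply fails fast with a clear message if any of the settings the service depends on are absent or empty.

using System;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class GovTalkSettings
{
    private static readonly string[] RequiredSettings = { "SenderId", "SenderPassword", "GovTalkUrl" };

    // Call once at role start-up; GetConfigurationSettingValue throws a
    // RoleEnvironmentException if a setting is not defined for the role at all.
    public static void Validate()
    {
        foreach (string name in RequiredSettings)
        {
            string value;
            try
            {
                value = RoleEnvironment.GetConfigurationSettingValue(name);
            }
            catch (RoleEnvironmentException ex)
            {
                throw new InvalidOperationException(String.Format("Configuration setting '{0}' is not defined for this role.", name), ex);
            }

            if (String.IsNullOrEmpty(value))
            {
                throw new InvalidOperationException(String.Format("Configuration setting '{0}' has no value.", name));
            }
        }
    }
}

Now for the values themselves.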
Of the four configuration settings declared there, at the moment we are interested in the second to fourth: SenderId, SenderPassword and GovTalkUrl. The values of these settings are defined in the ServiceConfiguration.cscfg file; <?xml version="1.0"?> <ServiceConfiguration serviceName="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"> <Role name="CompanyHub.Host"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" /> </ConfigurationSettings> </Role> <Role name="CompanyHub.Services"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="SenderId" value="UserID"/> <Setting name="SenderPassword" value="Password"/> <Setting name="GovTalkUrl" value="http://xmlgw.companieshouse.gov.uk/v1-0/xmlgw/Gateway"/> </ConfigurationSettings> </Role> <Role name="CompanyHub.Worker"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Look for the Role tag that contains our project name (CompanyHub.Services). Having configured the parameters we can now transmit the request. This is done by ‘POST’ing a stream of XML to the Companies House servers. govTalkUrl = RoleEnvironment.GetConfigurationSettingValue("GovTalkUrl"); request = WebRequest.Create(govTalkUrl); request.Method = "POST"; request.ContentType = "text/xml"; writer = new StreamWriter(request.GetRequestStream()); writer.WriteLine(RequestMessage); writer.Close(); We use the WebRequest object to send the request. Set the method of sending to ‘POST’ and the type of data as text/xml. Once set up, all we do is write the request to the writer – this sends the request to Companies House.
Did the Request Work Part I – Getting the response
Having sent a request – we now need the result of that request. response = request.GetResponse(); reader = response.GetResponseStream(); result = Toolbox.ConvertByteArrayToString(Toolbox.ReadFully(reader)); The WebRequest object has a GetResponse() method which allows us to get the response sent back. Like many of these calls the results come in the form of a stream which we convert into a string.
Did the Request Work Part II – Translating the Response
Much as XSLT and XML were used to create the original request, they can be used to extract the response: by deserializing the result we create an object that contains the response.
Did it work?
It would be really great if everything worked all the time.
Of course if it did then I don’t suppose people would pay me and others the big bucks so that our programmes do not a) Collapse in a heap (this is an area of memory) b) Blow every fuse in the place in a shower of sparks (this will probably not happen this being real life and not a Hollywood movie, but it was possible to blow the sound system of a BBC Model B with a poorly coded setting) c) Go nuts and trap everyone outside the airlock (this was from a movie, and unless NASA get a manned moon/mars mission set up unlikely to happen) d) Go nuts and take over the world (this was also from a movie, but please note life has a habit of being of exceeding the wildest imaginations of Hollywood writers (note writers – Hollywood executives have no imagination and judging by recent output of that town have turned plagiarism into an art form). e) Freeze in total confusion because the cleaner pulled the plug to the internet router (this has happened) So anyway – we need to check to see if our request actually worked. Within the GovTalk response there is a section that details the status of the message and a description of what went wrong (if anything did). I have defined an XSLT template which will extract these into an XML document. <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <xsl:template match="/"> <GovTalkStatus xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Status> <xsl:value-of select="ev:GovTalkMessage/ev:Header/ev:MessageDetails/ev:Qualifier"/> </Status> <Text> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Text"/> </Text> <Location> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Location"/> </Location> <Number> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Number"/> </Number> <Type> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Type"/> </Type> </GovTalkStatus> </xsl:template> </xsl:stylesheet>   Only thing different about previous XSL files is the references to two namespaces ev & gt. These are defined in the GovTalk response at the top of the response; xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" If we do not put these references into the XSLT template then  the XslCompiledTransform object will not be able to find the relevant tags. Deserialization is a fairly simple activity. encoder = new ASCIIEncoding(); ms = new MemoryStream(encoder.GetBytes(statusXML)); serializer = new XmlSerializer(typeof(GovTalkStatus)); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); messageStatus = (GovTalkStatus)serializer.Deserialize(ms);   We set up a serialization object using the object type containing the error state and pass to it the results of a transformation between the XSLT above and the GovTalk response. Now we have an object containing any error state, and the error message. All we need to do is check the status. If there is an error then we can flag an error. 
If not then  we extract the results and pass that as an object back to the calling function. We go this by guess what – defining an XSLT template for the result and using that to create an Xml Stream which can be deserialized into a .Net object. In this instance the XSLT to create the result of a Company Number Search is; <?xml version="1.0" encoding="us-ascii"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:sch="http://xmlgw.companieshouse.gov.uk/v1-0/schema" exclude-result-prefixes="ev"> <xsl:template match="/"> <CompanySearchResult xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <CompanyNumber> <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyNumber"/> </CompanyNumber> <CompanyName> <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyName"/> </CompanyName> </CompanySearchResult> </xsl:template> </xsl:stylesheet> and the object definition is; using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace CompanyHub.Services { public class CompanySearchResult { public CompanySearchResult() { CompanyNumber = String.Empty; CompanyName = String.Empty; } public String CompanyNumber { get; set; } public String CompanyName { get; set; } } } Our entire code to make calls to send a request, and interpret the results are; String request = String.Empty; String response = String.Empty; GovTalkStatus status = null; fault = null; try { using (CompanyNumberSearchRequest requestObj = new CompanyNumberSearchRequest()) { requestObj.PartialCompanyNumber = CompanyNumber; request = Toolbox.CreateRequest(requestObj, "CompanyNumberSearch.xsl"); response = Toolbox.SendGovTalkRequest(request); status = Toolbox.GetMessageStatus(response); if (status.Status.ToLower() == "error") { fault = new HubFault() { Message = status.Text }; } else { Object obj = Toolbox.GetGovTalkResponse(response, "CompanyNumberSearchResult.xsl", typeof(CompanySearchResult)); } } } catch (FaultException<ArgumentException> ex) { fault = new HubFault() { FaultType = ex.Detail.GetType().FullName, Message = ex.Detail.Message }; } catch (System.Exception ex) { fault = new HubFault() { FaultType = ex.GetType().FullName, Message = ex.Message }; } finally { } Wrap up So there we have it – a reusable set of functions to send and interpret XML results from an internet based service. The code is reusable with a little change with any service which uses XML as a transport mechanism – and as for the Companies House GovTalk service all I need to do is create various objects for the result and message sent and the relevent XSLT files. I might need minor changes for other services but something like 70-90% will be exactly the same.
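One type the article uses but never lists is the GovTalkStatus class that the status XSLT output is deserialized into. Judging from the elements the stylesheet emits, it is presumably little more than a property bag along these lines (a sketch, not the author's actual class):

using System;

namespace CompanyHub.Services
{
    // Sketch only: property names mirror the Status, Text, Location, Number and Type
    // elements produced by the status stylesheet; the real class may differ in detail.
    public class GovTalkStatus
    {
        public String Status { get; set; }   // the message Qualifier, e.g. "response" or "error"
        public String Text { get; set; }     // error description, empty when all is well
        public String Location { get; set; }
        public String Number { get; set; }
        public String Type { get; set; }
    }
}

XmlSerializer matches the root GovTalkStatus element to the class name and the child elements to the properties, which is why no extra attributes are needed.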
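To illustrate the wrap-up point about reuse, adding another message should mostly be a matter of one new request object plus a matching pair of XSLT files. Everything below is hypothetical: the class, property and file names are invented for illustration and are not taken from the Companies House schema.

using System;

namespace CompanyHub.Services
{
    // Hypothetical example of extending the pattern - not part of the original project.
    public class CompanyNameSearchRequest : GovTalkRequest, IDisposable
    {
        public CompanyNameSearchRequest() : base()
        {
            DataSet = "LIVE";
            SearchRows = 20;
        }

        public String CompanyName { get; set; }
        public String DataSet { get; set; }
        public int SearchRows { get; set; }

        public void Dispose()
        {
            CompanyName = String.Empty;
        }
    }
}

Pair that with a CompanyNameSearch.xsl for the request, a result stylesheet and a result object, and the existing Toolbox calls (CreateRequest, SendGovTalkRequest, GetMessageStatus and GetGovTalkResponse) are reused unchanged.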

    Read the article

  • How can I build pyv8 from source on FreeBSD against the v8 port?

    - by Utkonos
    I am unable to build pyv8 from source on FreeBSD. I have installed the /usr/ports/lang/v8 port, and I'm running into the following error. It seems that pyv8 wants to build v8 itself even though v8 is already built and installed. How can I point pyv8 to the already installed location of v8? # python setup.py build Found Google v8 base on V8_HOME , update it to the latest SVN trunk at running build ==================== INFO: Installing or updating GYP... -------------------- INFO: Check out GYP from SVN ... DEBUG: make dependencies ERROR: Check out GYP from SVN failed: code=2 DEBUG: "Makefile", line 43: Missing dependency operator "Makefile", line 45: Need an operator "Makefile", line 46: Need an operator "Makefile", line 48: Need an operator "Makefile", line 50: Need an operator "Makefile", line 52: Need an operator "Makefile", line 54: Missing dependency operator "Makefile", line 56: Need an operator "Makefile", line 58: Missing dependency operator "Makefile", line 60: Need an operator "Makefile", line 62: Missing dependency operator "Makefile", line 64: Need an operator "Makefile", line 66: Missing dependency operator "Makefile", line 68: Need an operator "Makefile", line 70: Missing dependency operator "Makefile", line 72: Need an operator "Makefile", line 73: Missing dependency operator "Makefile", line 75: Need an operator "Makefile", line 77: Missing dependency operator "Makefile", line 79: Need an operator "Makefile", line 81: Missing dependency operator "Makefile", line 83: Need an operator "Makefile", line 85: Missing dependency operator "Makefile", line 87: Need an operator "Makefile", line 89: Need an operator "Makefile", line 91: Missing dependency operator "Makefile", line 93: Need an operator "Makefile", line 95: Need an operator "Makefile", line 97: Need an operator "Makefile", line 99: Missing dependency operator "Makefile", line 101: Need an operator "Makefile", line 103: Missing dependency operator "Makefile", line 105: Need an operator "Makefile", line 107: Missing dependency operator "Makefile", line 109: Need an operator "Makefile", line 111: Missing dependency operator "Makefile", line 113: Need an operator "Makefile", line 115: Missing dependency operator "Makefile", line 117: Need an operator Error expanding embedded variable. ==================== INFO: Patching the GYP scripts INFO: patch the Google v8 build/standalone.gypi file to enable RTTI and C++ Exceptions ==================== INFO: building Google v8 with GYP for x64 platform with release mode -------------------- INFO: build v8 from SVN ... 
DEBUG: make verifyheap=off component=shared_library visibility=on gdbjit=off liveobjectlist=off regexp=native disassembler=off objectprint=off debuggersupport=on extrachecks=off snapshot=on werror=on x64.release ERROR: build v8 from SVN failed: code=2 DEBUG: "Makefile", line 43: Missing dependency operator "Makefile", line 45: Need an operator "Makefile", line 46: Need an operator "Makefile", line 48: Need an operator "Makefile", line 50: Need an operator "Makefile", line 52: Need an operator "Makefile", line 54: Missing dependency operator "Makefile", line 56: Need an operator "Makefile", line 58: Missing dependency operator "Makefile", line 60: Need an operator "Makefile", line 62: Missing dependency operator "Makefile", line 64: Need an operator "Makefile", line 66: Missing dependency operator "Makefile", line 68: Need an operator "Makefile", line 70: Missing dependency operator "Makefile", line 72: Need an operator "Makefile", line 73: Missing dependency operator "Makefile", line 75: Need an operator "Makefile", line 77: Missing dependency operator "Makefile", line 79: Need an operator "Makefile", line 81: Missing dependency operator "Makefile", line 83: Need an operator "Makefile", line 85: Missing dependency operator "Makefile", line 87: Need an operator "Makefile", line 89: Need an operator "Makefile", line 91: Missing dependency operator "Makefile", line 93: Need an operator "Makefile", line 95: Need an operator "Makefile", line 97: Need an operator "Makefile", line 99: Missing dependency operator "Makefile", line 101: Need an operator "Makefile", line 103: Missing dependency operator "Makefile", line 105: Need an operator "Makefile", line 107: Missing dependency operator "Makefile", line 109: Need an operator "Makefile", line 111: Missing dependency operator "Makefile", line 113: Need an operator "Makefile", line 115: Missing dependency operator "Makefile", line 117: Need an operator Error expanding embedded variable. The files that are installed by the v8 port are the following (in /usr/local): bin/d8 include/v8.h include/v8-debug.h include/v8-preparser.h include/v8-profiler.h include/v8-testing.h include/v8stdint.h lib/libv8.so lib/libv8.so.1

    Read the article

  • Symantec Protection Suite and System Recovery 2011 Desktop Edition

    - by rihatum
    I am re-posting this as my previous question was being treated as if I am "Shopping or seeking Product Recommendations" even though I was NOT - BTW they have deleted my comments too which were not offensive in nature. anyway - I have re-phrased some parts of my question and I hope SF Admins "Do Not Modify / Edit" this one - will be most grateful for that. I have a lot of respect for the People who visit this SITE and help others ! Just To clarify : Just to go by SF rules - I am not seeking someone to Design this solution, I am simply seeking real world examples, experiences, technical expert opinions / suggestions, any tips or tricks they may have or any problems they may have faced while doing something similar above with these products. I am also not asking for Capacity Planning for Storage, We have done some research and I am seeking Expert Assurance / Suggestions. We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000 - 4000 workstations (Windows7 32 and 64) with a few 100s with Windows XP 32/64 Bit. I have read the implementation guide for SEP and have read tech-notes for Desktop Recovery 2011. Our team have planned to deploy this as follows : 1 x dedicated SQL 2008R2 for Symantec Endpoint Protection (Instead of using the Embedded Database) 1 x Dedicated SQL 2008R2 for Symantec Desktop Recovery 2011 (Instead of using the Embedded Database) 1 x Dedicated W2K8 R2 Box for the SEPM (Symantec Endpoint Protection Manager - Mgmt. APP) 1 x Dedicated W2K8 R2 Box for the Symantec Desktop Recovery 2011 Management Application Agent Deployment : As per Symantec Documentation for both of the above, an agent can be pushed via the Mgmt. Application (provided no firewalls are blocking ports required etc. - we have Windows firewall disabled already). Server Hardware : Per SQL Server : 16GB RAM + SAS DISKS + Dual XEON, RAID-10 for the SQL DB or I can always mount a LUN from our existing Hitachi or EMC SAN. SEPM Server : 16GB RAM + SAS DISKS + DUAL XEON System Recovery MGMT SERVER : 16GB RAM + SAS DISKS + DUAL XEON Above is the initial plan we have for 3000 - 4000 client workstation (Windows) Now my Questions :-) a) If we had these users distributed amongst two sites with AD DC / GC in each site, How would I restrict SEPM and Desktop Mgmt. solution to only check for users in their respective site ? b) At present all users are under one building but we are going to move some dept. to a new location (with dedicated connectivity), How would we control which SEPM / MGMT Server is responsible for which site ? c) We have netbackup in our environment backing up other servers, I am planning to protect these 4 (2 x SQL, 1 x SEPM, 1 x System Recovery Mgmt. Server) via netbackup or I can use System recovery 2011 server edition on all 4 of these boxes as well. (License is not an issue as we have the complete symantec portfolio included in our license). d) Now - Saving Desktop backups - What strategies have you implemented ? Any best practice recommendation for a large user base ? I was thinking to either mount a LUN from our Hitachi SAN on the Symantec Recovery Server itself or backup to the users hard drive locally and then copy it over to a network location ? Suggestions welcome :-) If you have anything to add / correct - that will be really helpful before diving into the actual implementation phase. Will be most grateful with your suggestions, recommendations and corrections with above - Many Thanks !

    Read the article

  • Moving a Drupal between linux servers, best practice to avoid file-ownership problems

    - by zero
    I want to port over a Drupal commons 6x24 from a local LAMP-stack to a production webserver. Both systems run OpenSuse Linux. How do I do this, what are the most important steps. How should I handle file-ownership. It's important for me to have to have full control of the file ownership. If I use the wwwrun account, I frequently run into problems, due to a very strict webserver-admin. See for example the long history of looking for fixes and solutions see this thread and even more interesting see this very long and impressive thread here. All troubles I run into have to do with file-owernship and permissions. This is my current setup; Note: This was just a quick hacked installation - quick and dirty. Well my interest is after the general options i have in the port of a drupal from linux to linux linux-vi17:/srv/www/htdocs/com624 # ls -l insgesamt 224 -rwxrwxrwx 1 root www 45285 19. Jan 00:54 CHANGELOG.txt -rwxrwxrwx 1 root www 925 19. Jan 00:54 COPYRIGHT.txt -rwxrwxrwx 1 root www 206 19. Jan 00:54 cron.php drwxrwxrwx 2 root www 4096 19. Jan 00:54 includes -rwxrwxrwx 1 root www 923 19. Jan 00:54 index.php -rwxrwxrwx 1 root www 1244 19. Jan 00:54 INSTALL.mysql.txt -rwxrwxrwx 1 root www 1011 19. Jan 00:54 INSTALL.pgsql.txt -rwxrwxrwx 1 root www 47073 19. Jan 00:54 install.php -rwxrwxrwx 1 root www 15572 19. Jan 00:54 INSTALL.txt -rwxrwxrwx 1 root www 14940 19. Jan 00:54 LICENSE.txt -rwxrwxrwx 1 root www 1858 19. Jan 00:54 MAINTAINERS.txt drwxrwxrwx 3 root www 4096 19. Jan 00:54 misc drwxrwxrwx 35 root www 4096 19. Jan 00:54 modules drwxrwxrwx 4 root www 4096 19. Jan 00:54 profiles -rwxrwxrwx 1 root www 1470 19. Jan 00:54 robots.txt drwxrwxrwx 2 root www 4096 19. Jan 00:54 scripts drwxrwxrwx 4 root www 4096 19. Jan 00:54 sites drwxrwxrwx 7 root www 4096 19. Jan 00:54 themes -rwxrwxrwx 1 root www 26250 19. Jan 00:54 update.php -rwxrwxrwx 1 root www 4864 19. Jan 00:54 UPGRADE.txt -rwxrwxrwx 1 root www 294 19. Jan 00:54 xmlrpc.php linux-vi17:/srv/www/htdocs/com624 # thx to BetaRides answer here a quick overview on the drush functionality with rsync http://drush.ws/ core-rsync Rsync the Drupal tree to/from another server using ssh. Examples: drush rsync @dev @stage Rsync Drupal root from dev to stage (one of which must be local). drush rsync ./ @stage:%files/img Rsync all files in the current directory to the 'img' directory in the file storage folder on stage. Arguments: source May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php. destination May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php. Options: --mode The unary flags to pass to rsync; --mode=rultz implies rsync -rultz. Default is -az. --RSYNC-FLAG Most rsync flags passed to drush sync will be passed on to rsync. See rsync documentation. --exclude-conf Excludes settings.php from being rsynced. Default. --include-conf Allow settings.php to be rsynced --exclude-files Exclude the files directory. --exclude-sites Exclude all directories in "sites/" except for "sites/all". --exclude-other-sites Exclude all directories in "sites/" except for "sites/all" and the site directory for the site being synced. Note: if the site directory is different between the source and destination, use --exclude-sites followed by "drush rsync @from:%site @to:%site" --exclude-paths List of paths to exclude, seperated by : (Unix-based systems) or ; (Windows). --include-paths List of paths to include, seperated by : (Unix-based systems) or ; (Windows). 
Topics: docs-aliases Site aliases overview with examples Aliases: rsync

    Read the article

  • Choice of an OS for a home ZFS NAS

    - by OlafM
    I am preparing a home NAS with an old Athlon 64 X2 3800+, 4 GB ECC RAM, Asus M2V MX motherboard, and a single 3 TB WDC Green (another one as mirror may be installed in the future). It's the cheapest solution I found that includes ECC memory and the higher energy consumption is offset by the lower (zero) cost of acquisition. The system will be used for: music storage and stream to other desktop computers; storage of the scanned dia slides (3-4k slides, 180 MB TIFF each one plus reduced quality JPEG version); stream of these photos to a local iPad 2 (maybe Plex App? not yet sure); (one additional) remote backup via rsync/ssh or ZFS send/receive. It will be controlled via remote ssh, maybe VNC, no monitor attached. Absolute requirement is a reliable ZFS solution, plus the ability to easily install packets/software/virtual machines and to update remotely (I will be the admin and I don't live near the NAS). I have mainly three options: NAS4free/FreeNAS OpenIndiana Solaris Express 11 (yeah yeah I know the license requirements, I will write a perl script on it to count it as development machine). Problems: NAS4free/FreeNAS (I tested only NAS4free) required embedded installation for remote upgrading, but full install for easy addition of software packets. Since I need at least AirVideo Server (linux/win) and Plex App (win/linux) to stream the photos and some videos to iPad (they both require virtualbox), but I cannot be there to install updates, NAS4free/FreeNAS are excluded. http://www.nas4free.org/general_information.html explains the issue: embedded can be remotely updated, full cannot. Solaris has also another advantage: Crashplan client supports Solaris and I'm already using it for other backups. I would like to leave the option open, even if I will be doing backups probably through zfs send/receive. NexentaStor was left out because zfs send/receive are not included in the free version. The question is now Solaris 11 Express over OpenIndiana. To ease the management, I will be using http://www.napp-it.org Which one would you suggest and why? I found lots of informations and it's difficult for me to decide. I think (from the napp-it manual) that Solaris has some additional options for SMB shares, but are they really needed at home? I think I won't even use ACLs, since normal unix-style permissions are enough. OpenIndiana has maybe more frequent updates (Solaris offers only security updates between releases), but again, do I need them? I don't think so. Moreover, this is a NAS that has to work and nothing else, I cannot risk having problems that require me to access the server. Isn't OpenIndiana a bit more... cutting edge (in the Solaris world)? I'm just asking, no need to focus on this for the answer :-) I would limit myself to these two options (SE11.1/OI) also because I will be making a NAS for me in the future (where high performances with Mac shares are also required) and Solaris has kernel support for AFP. I will use this server to gather experience as well. After this long question, thanks in advance! If you need additional info, let me know and I will update this post. UPDATES Given the first answers, I will strongly suggest the person paying the hardware to insert a second HD. Better 2x2TB than 1x3TB (3 TB is oversized anyway). I was trying to keep the initial costs down to spread them over a longer period, but better having something good from the beginning.

    Read the article

  • Interactive Data Language, IDL: Does anybody care?

    - by Alex
    Anyone use a language called Interactive Data Language, IDL? It is popular with scientists. I think it is a poor language because it is proprietary (every terminal running it has to have an expensive license purchased) and it has minimal support (try searching for IDL, the language, right now on stack) . I am trying to convince my colleagues to stop using it and learn C/C++/Python/Fortran/Java/Ruby. Does anybody know about or even care about IDL enough to have opinions on it? What do you think of it? Should I tell my colleagues to stop wasting their time on it now? How can I convince them? Edit: People are getting the impression that I don't know or use IDL. Also, I said IDL has minimal support which is true in one sense, so I must clarify that the scientific libraries are indeed large. I use IDL all the time, but this is exactly the problem: I am only using IDL because colleagues use it. There is a file format IDL uses, the .sav, which can only be opened in IDL. So I must use IDL to work with this data and transfer the data back to colleagues, but I know I would be more efficient in another language. This is like someone sending you a microsoft word file in an email attachment and if you don't understand how wrong that is then you probably write too many words not enough code and you bought microsoft word. Edit: As an alternative to IDL Python is popular. Here is a list of The Pros of IDL (and the cons) from AstroBetter: Pros of IDL Mature many numerical and astronomical libraries available Wide astronomical user base Numerical aspect well integrated with language itself Many local users with deep experience Faster for small arrays Easier installation Good, unified documentation Standard GUI run/debug tool (IDLDE) Single widget system (no angst about which to choose or learn) SAVE/RESTORE capability Use of keyword arguments as flags more convenient Cons of IDL Narrow applicability, not well suited to general programming Slower for large arrays Array functionality less powerful Table support poor Limited ability to extend using C or Fortran, such extensions hard to distribute and support Expensive, sometimes problem collaborating with others that don’t have or can’t afford licenses. Closed source (only RSI can fix bugs) Very awkward to integrate with IRAF tasks Memory management more awkward Single widget system (useless if working within another framework) Plotting: Awkward support for symbols and math text Many font systems, portability issues (v5.1 alleviates somewhat) not as flexible or as extensible plot windows not intrinsically interactive (e.g., pan & zoom) Pros of Python Very general and powerful programming language, yet easy to learn. 
Strong, but optional, Object Oriented programming support Very large user and developer community, very extensive and broad library base Very extensible with C, C++, or Fortran, portable distribution mechanisms available Free; non-restrictive license; Open Source Becoming the standard scripting language for astronomy Easy to use with IRAF tasks Basis of STScI application efforts More general array capabilities Faster for large arrays, better support for memory mapping Many books and on-line documentation resources available (for the language and its libraries) Better support for table structures Plotting framework (matplotlib) more extensible and general Better font support and portability (only one way to do it too) Usable within many windowing frameworks (GTK, Tk, WX, Qt…) Standard plotting functionality independent of framework used plots are embeddable within other GUIs more powerful image handling (multiple simultaneous LUTS, optional resampling/rescaling, alpha blending, etc) Support for many widget systems Strong local influence over capabilities being developed for Python Cons of Python More items to install separately Not as well accepted in astronomical community (but support clearly growing) Scientific libraries not as mature: Documentation not as complete, not as unified Not as deep in astronomical libraries and utilities Not all IDL numerical library functions have corresponding functionality in Python Some numeric constructs not quite as consistent with language (or slightly less convenient than IDL) Array indexing convention “backwards” Small array performance slower No standard GUI run/debug tool Support for many widget systems (angst regarding which to choose) Current lack of function equivalent to SAVE/RESTORE in IDL matplotlib does not yet have equivalents for all IDL 2-D plotting capability (e.g., surface plots) Use of keyword arguments used as flags less convenient Plotting: comparatively immature, still much development going on missing some plot type (e.g., surface) 3-d capability requires VTK (though matplotlib has some basic 3-d capability)

    Read the article

  • Embedding ADF UI Components into OAF regions

    - by Juan Camilo Ruiz
    Having finished the 2 Webcast on ADF integration with Oracle E-Business Suite, Sara Woodhull, Principal Product Manager on the Oracle E-Business Suite Applications Technology team and I are going to continue adding entries to the series on this topic, trying to cover as many use cases as possible. In this entry, Sara created an overview on how Oracle ADF pages can be embedded into an Oracle Application Framework region. This is a very interesting approach that will enable those of you who are exploring ADF as a technology stack to enhanced some of the Oracle E-Business Suite flows and leverage your skill on Oracle Applications Framework (OAF). In upcoming entries we will start unveiling the internals needed to achieve session sharing between the regions. Stay tuned for more entries and enjoy this new post.   Document Scope This document only covers information that is specific to embedding an Oracle ADF page in an Oracle Application Framework–based page. It assumes knowledge of Oracle ADF and Oracle Application Framework development. It also assumes knowledge of the material in My Oracle Support Note 974949.1, “Oracle E-Business Suite SDK for Java” and My Oracle Support Note 1296491.1, "FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications". Prerequisite Patch Download Patch 12726556:R12.FND.B from My Oracle Support and install it. The implementation described below requires Patch 12726556:R12.FND.B to provide the accessors for the ADF page. This patch is required in addition to the Oracle E-Business Suite SDK for Java patch described in My Oracle Support Note 974949.1. Development Environments You need two different JDeveloper environments: Oracle ADF and OA Framework. Oracle ADF Development Environment You build your Oracle ADF page using JDeveloper 11g. You should use JDeveloper 11g R1 (the latest is 11.1.1.6.0) if you need to use other products in the Oracle Fusion Middleware Stack, such as Oracle WebCenter, Oracle SOA Suite, or BI. You should use JDeveloper 11g R2 (the latest is 11.1.2.3.0) if you do not need other Oracle Fusion Middleware products. JDeveloper 11g R2 is an Oracle ADF-specific release that supports the latest Java EE standards and has various core improvements. Oracle Application Framework Development Environment Build your OA Framework page using a development environment corresponding to your Oracle E-Business Suite version. You must use Release 12.1.2 or later because the rich content container was introduced in Release 12.1.2. See “OA Framework - How to find the correct version of JDeveloper to use with eBusiness Suite 11i or Release 12.x” (My Oracle Support Doc ID 416708.1). Building your Oracle ADF Page Typically you build your ADF page using the session management feature of the Oracle E-Business Suite SDK for Java as described in My Oracle Support Note 974949.1. Also see My Oracle Support Note 1296491.1, "FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications". Building an ADF Page with the Hierarchy Viewer If you are using the ADF hierarchy viewer, you should set up the structure and settings of the ADF page as follows or the hierarchy viewer may not fill the entire area it is supposed to fill (especially a problem in Firefox). Create a stretchable component as the parent component for the hierarchy viewer, such as af:panelStretchLayout (underneath the af:form component in the structure). 
Use af:panelStretchLayout for Oracle ADF 11.1.1.6 and earlier. For later versions of Oracle ADF, use af:panelGridLayout. Create your hierarchy viewer component inside the stretchable component. Create Function in Oracle E-Business Suite Instance In your Oracle E-Business Suite instance, create a function for your ADF page with the following parameters. You can use either the Functions window in the System Administrator responsibility or the Functions page in the Functional Administrator responsibility. Function Function Name Type=External ADF Function (ADFX) HTML Call=GWY.jsp?targetPage=faces/<your ADF page> ">You must also add your function to an Oracle E-Business Suite menu or permission set and set up function security or role-based access control (RBAC) so that the user has authorization to access the function. If you do not want the function to appear on the navigation menu, add the function without a menu prompt. See the Oracle E-Business Suite System Administrator's Guide Documentation Set for more information. Testing the Function from the Oracle E-Business Suite Home Page It’s a good idea to test launching your ADF page from the Oracle E-Business Suite Home Page. Add your function to the navigation menu for your responsibility with a prompt and try launching it. If your ADF page expects parameters from the surrounding page, those might not be available, however. Setting up the Oracle Application Framework Rich Container Once you have built your Oracle ADF 11g page, you need to embed it in your Oracle Application Framework page. Create Rich Content Container in your OA Framework JDeveloper environment In the OA Extension Structure pane for your OAF page, select the region where you want to add the rich content, and add a richContainer item to the region. Set the following properties on the richContainer item: id Content Type=Others (for Release 12.1.3. This property value may change in a future release.) Destination Function=[function code] Width (in pixels or percent, such as 100%) Height (in pixels) Parameters=[any parameters your Oracle ADF page is expecting to receive from the Oracle Application Framework page] Parameters In the Parameters property, specify parameters that will be passed to the embedded content as a list of comma-separated, name-value pairs. Dynamic parameters may be specified as paramName={@viewAttr}. Dynamic Rich Content Container Properties If you want your rich content container to display a different Oracle ADF page depending on other information, you would set up a different function for each different Oracle ADF page. You would then set the Destination Function and Parameters properties programmatically, instead of setting them in the Property Inspector. 
In the processRequest() method of your Oracle Application Framework page controller, where OAFRichContentPage is the ID of your richContainer item and the parameters are whatever parameters your ADF page expects, your code might look similar to this code fragment: OARichContainerBean richBean = (OARichContainerBean) webBean.findChildRecursive("OAFRichContentPage"); if(richBean != null){ if(isFirstCondition){ richBean.setFunctionName("ADF_EXAMPLE_EMBEDDED"); richBean.setParameters("ParamLoginPersonId="+loginPersonId +"&ParamPersonId="+personId+"&ParamUserId="+userId +"&ParamRespId="+respId+"&ParamRespApplId="+respApplId +"&ParamFromOA=Y"+"&ParamSecurityGroupId="+securityGroupId); } else if(isSecondCondition){ richBean.setFunctionName("ADF_EXAMPLE_OTHER_FUNCTION"); richBean.setParameters("ParamLoginPersonId=" +loginPersonId+"&ParamPersonId="+personId +"&ParamUserId="+userId+"&ParamRespId="+respId +"&ParamRespApplId="+respApplId +"&ParamFromOA=Y" +"&ParamSecurityGroupId="+securityGroupId); } }

    Read the article

  • Issue 15: Oracle PartnerNetwork Exchange @ Oracle OpenWorld

    - by rituchhibber
         ORACLE FOCUS Oracle PartnerNetwork Exchange@ ORACLE OpenWorld Sylvie MichouSenior DirectorPartner Marketing & Communications and Strategic Programs RESOURCES -- Oracle OpenWorld 2012 Oracle PartnerNetwork Exchange @ OpenWorld Oracle PartnerNetwork Exchange @ OpenWorld Registration Oracle PartnerNetwork Exchange SpecializationTest Fest Oracle OpenWorld Schedule Builder Oracle OpenWorld Promotional Toolkit for Partners Oracle Partner Events Oracle Partner Webcasts Oracle EMEA Partner News SUBSCRIBE FEEDBACK PREVIOUS ISSUES If you are attending our forthcoming Oracle OpenWorld 2012 conference in San Francisco from 30 September to 4 October, you will discover a new dedicated programme of keynotes and sessions tailored especially for you, our valued partners. Oracle PartnerNetwork Exchange @ OpenWorld has been created to enhance the opportunities for you to learn from and network with Oracle executives and experts. The programme also provides more informal opportunities than ever throughout the week to meet up with the people who are most important to your business: customers, prospects, colleagues and the Oracle EMEA Alliances & Channels management team. Oracle remains fully focused on building the industry's most admired partner ecosystem—which today spans over 25,000 partners. This new OPN Exchange programme offers an exciting change of pace for partners throughout the conference. Now it will be possible to enjoy a fully-integrated, partner-dedicated session schedule throughout the week, as well as key social events such as the Sunday night Welcome Reception, networking lunches from Monday to Thursday at the Howard Street Tent, and a fantastic closing event on the last Thursday afternoon. In addition to the regular Oracle OpenWorld conference schedule, if you have registered for the Oracle PartnerNetwork Exchange @ OpenWorld programme, you will be invited to attend a much anticipated global partner keynote presentation, plus more than 40 conference sessions aimed squarely at what's most important to you, as partners. Prominent topics for discussion will include: Oracle technologies and roadmaps and how they fit with partners' business plans; business development; regional distinctions in business practices; and much more. Each session will provide plenty of food for thought ahead of the numerous networking opportunities throughout the week, encouraging the knowledge exchange with Oracle executives, customers, prospects, and colleagues that will make this conference of even greater value for you. At Oracle we always work closely with our partners to deliver solution offerings that improve business value, simplify the IT experience and drive innovation and efficiencies for joint customers. The most important element of our new OPN Exchange is content that helps you get more from technology investments, more from your peer-to-peer connections, and more from your interactions with customers. To this end we've created some partner-specific tools which can be used by OPN members ahead of the conference itself. Crucially, a comprehensive Content Catalog already lists and organises details of every OPN Exchange session, speaker, exhibitor, demonstration and related materials. This Content Catalog can be used by all our partners to identify interesting content that you can add to your own personalised Oracle OpenWorld Schedule Builder, allowing more effective planning and pre-enrolment for vital sessions. 
There are numerous highlights that you will definitely want to include in those personal schedules. On Sunday morning, 30 September we will start the week with partner dedicated OPN Exchange sessions, following our Global Partner Keynote at 13:00 with Judson Althoff, SVP, Worldwide Alliances & Channels and Embedded Sales and senior executives, giving insight into Oracle's partner vision, strategy, and resources—all designed to help build and strengthen market opportunities for you. This will be followed by a number of OPN Exchange general sessions, the Oracle OpenWorld Opening Keynote with Larry Ellison, CEO, Oracle and concluded with the OPN Exchange AfterDark Welcome Reception, starting at 19:30 at the Metreon. From Monday 1 to Thursday 4 October, you can attend the OPN Exchange sessions that are most relevant to your business today and over the coming year. Oracle's top product and sales leaders will be on hand to discuss Oracle's strategic direction in 40+ targeted and in-depth sessions focussing on critical success factors to develop your business. Oracle's dedication to innovation, specialization, enablement and engineering provides Oracle partners with a huge opportunity to create new services and solutions, differentiate themselves and deliver extreme value to joint customers across the globe. Oracle will even be helping over 1000 partners to earn OPN Specialization certification during the Oracle OpenWorld OPN Exchange Test Fest, which will be providing all the study materials and exams required to drive Specialization for free at the conference. You simply need to check the list of current certification tracks available, and make sure you pre-register to reserve a seat in one of the ten sessions being offered free to OPN Exchange registered attendees. And finally, let's not forget those all-important networking opportunities, which can so often provide partners with valuable long-term alliances as well as exciting new business leads. The Oracle PartnerNetwork Lounge, located at Moscone South, exhibition hall, room 100 is the place where partners can meet formally or informally with colleagues, customers, prospects, and other industry professionals. OPN Specialized partners with OPN Exchange passes can also visit the OPN Video Blogging room to record and share ideas, and at the OPN Information Station you will find consultants available to answer your questions. "For the first time ever we will have a full partner conference within OpenWorld. OPN Exchange @ OpenWorld will kick-off on the first Sunday and run the entire week. We'll have over 40 sessions throughout that time and partners will hear from our top development executives, with special sessions dedicated to partnering throughout. It's going to be a phenomenal event, and we look forward to seeing our partners there." Judson Althoff, SVP, Oracle Worldwide Alliances & Channels and Embedded Sales So if you haven't done so already, please register for Oracle PartnerNetwork Exchange @ OpenWorld today or add OPN Exchange to your existing registration for just $100 through My Account. 
And if you have any further questions regarding partner activities at Oracle OpenWorld, please don't hesitate to contact the Oracle PartnerNetwork team at [email protected] will be on hand to share the very latest information about: Oracle's SPARC Superclusters: the latest Engineered Systems from Oracle, delivering radically improved performance, faster deployment and greatly reduced operational costs for mixed database and enterprise application consolidation Oracle's SPARC T4 servers: with the newly developed T4 processor and Oracle Solaris providing up to five times the single threaded performance and better overall system throughput for expanded application versatility Oracle Database Appliance: a new way to take advantage of the world's most popular database, Oracle Database 11g, in a single, easy-to-deploy and manage system. It's a complete package engineered to deliver simple, reliable and affordable database services to small and medium size businesses and departmental systems. All hardware and software components are supported together and offer customers unique pay-as-you-grow software licensing to quickly scale from two to 24 processor cores without incurring the costs and downtime usually associated with hardware upgrades Oracle Exalogic: the world's only integrated cloud machine, featuring server hardware and middleware software engineered together for maximum performance with minimum set-up and operational cost Oracle Exadata Database Machine: the only database machine that provides extreme performance for both data warehousing and online transaction processing (OLTP) applications, making it the ideal platform for consolidating onto grids or private clouds. It is a complete package of servers, storage, networking and software that is massively scalable, secure and redundant Oracle Sun ZFS Storage Appliances: providing enterprise-class NAS performance, price-performance, manageability and TCO by combining third-generation software with high-performance controllers, flash-based caches and disks Oracle Pillar Axiom Quality-of-Service: confidently consolidate storage for multiple applications into a single datacentre storage solution Oracle Solaris 11: delivering secure enterprise cloud deployments with the ability to run hundreds of virtual application with no overhead and co-engineered with other Oracle software products to provide the highest levels of security, manageability and performance Oracle Enterprise Manager 12c: Oracle's integrated enterprise IT management product, providing the industry's only complete, integrated and business-driven enterprise cloud management solution Oracle VM 3.0: the latest release of Oracle's server virtualisation and management solution, helping to move datacentres beyond server consolidation to improve application deployment and management. Register today and ensure your place at the Extreme Performance Tour! Extreme Performance Tour events are free to attend, but places are limited. To make sure that you don't miss out, please visit Oracle's Extreme Performance Tour website, select the city that you'd be interest in attending an event in, and then click on the 'Register Now' button for that city to secure your interest. Each individual city page also contains more in-depth information about your local event, including logistics, agenda and maybe even a preview of VIP guest speakers. -- Oracle OpenWorld 2010 Whether you attended Oracle OpenWorld 2009 or not, don't forget to save the date now for Oracle OpenWorld 2010. 
The event will be held a little earlier next year, from 19th-23rd September, so please don't miss out. With thousands of sessions and hundreds of exhibits and demos already lined up, there's no better place to learn how to optimise your existing systems, get an inside line on upcoming technology breakthroughs, and meet with your partner peers, Oracle strategists and even the developers responsible for the products and services that help you get better results for your end customers. Register Now for Oracle OpenWorld 2010! Perhaps you are interested in learning more about Oracle OpenWorld 2010, but don't wish to register at this time? Great! Please just enter your contact information here and we will contact you at a later date. How to Exhibit at Oracle OpenWorld 2010 Sponsorship Opportunities at Oracle OpenWorld 2010 Advertising Opportunities at Oracle OpenWorld 2010

    Read the article

  • Wishful Thinking: Why can't HTML fix Script Attacks at the Source?

    - by Rick Strahl
The Web can be an evil place, especially if you're a Web Developer blissfully unaware of Cross Site Script Attacks (XSS). Even if you are aware of XSS in all of its insidious forms, it's extremely complex to deal with all the issues if you're taking user input and you're actually allowing users to post raw HTML into an application. I'm dealing with this again today in a Web application where legacy data contains raw HTML that has to be displayed and users ask for the ability to use raw HTML as input for listings. The first line of defense of course is: Just say no to HTML input from users. If you don't allow HTML input directly and use HTML Encoding (HttpUtility.HtmlEncode() in .NET or using standard ASP.NET MVC output @Model.Content) you're fairly safe at least from the HTML input provided. Both WebForms and Razor support HtmlEncoded content, although Razor makes it the default. In Razor the default @ expression syntax: @Model.UserContent automatically produces HTML encoded content (safe by default) - you actually have to go out of your way to create raw HTML content using @Html.Raw() or the HtmlString class. In Web Forms (V4) you can use: <%: Model.UserContent %> or if you're using a version prior to 4.0: <%= HttpUtility.HtmlEncode(Model.UserContent) %> This works great as a hedge against embedded <script> tags and HTML markup, since any HTML is turned into text that displays as markup but doesn't render as HTML. The flip side is that it turns any embedded HTML markup tags into plain text, so if you need to display HTML in raw form with the markup tags rendering based on user input, this approach is worthless. If you do accept HTML input and need to echo the rendered HTML back, cleaning up that HTML is a complex task. In the projects I work on, customers frequently ask for the ability to post raw HTML. In almost every app I've built where there's document content from users, we start out with text-only input - possibly using something like MarkDown - but inevitably users want to just post plain old HTML they created in some other rich editing application. I see this a lot with realtors especially, who often want to reuse their postings easily in multiple places. In my work this is a common problem I need to deal with, and I've tried dozens of different methods, from sanitizing and simple rejection of input to custom markup schemes, none of which have ever felt comfortable to me. They work in a half-assed, hacked-together sort of way, but I always live in fear of missing something vital, which is *really easy to do*. My Wishlist Item: A <restricted> tag in HTML Let me dream here for a second on how to address this problem. It seems to me the easiest place where this can be fixed is: in the browser. Browsers are actually executing script code, so they have a lot of control over the script code that resides in a page. What if there was a way to specify that you want to turn off script code for a block of HTML? The main issue when dealing with raw HTML input isn't that we as developers are unaware of the implications of user input, but the fact that we sometimes have to display raw HTML input the user provides. So the problem markup is usually isolated in only a very specific part of the document. So, what if we had a way to specify that in any given HTML block, no script code could execute by wrapping it into a tag that disables all script functionality in the browser? This would include <script> tags and any document script attributes like onclick, onfocus etc. 
and potentially also disallow things like iFrames that can potentially be scripted from within the iFrame's target. I'd like to see something along these lines:<article> <restricted allowscripts="no" allowiframes="no"> <div>Some content</div> <script>alert('go ahead make my day, punk!');</script> <div onfocus="$.getJSON('http://evilsite.com/')">more content</div> </restricted> </article> A tag like this would basically disallow all script code from firing from any HTML that's rendered within it. You'd use this only on content that you actually render from your data, and only if you are dealing with custom data. So something like this:<article> <restricted> @Html.Raw(Model.UserContent) </restricted> </article> For browsers this would actually be easy to intercept. They render the DOM and control loading and execution of scripts that are loaded through it. All the browser would have to do is suspend execution of <script> tags and not hook up any event handlers defined via markup in this block. Given all the crazy XSS attacks that exist and the prevalence of this problem, this would go a long way towards preventing at least coded script attacks in the DOM. And it seems like a totally doable solution that wouldn't be very difficult for vendors to implement. There would also need to be some logic in the parser to not allow an </restricted> or <restricted> tag into the content, so as to prevent short-circuiting the restricted section (per James Hart's comment). I'm sure there are other issues to consider as well that I didn't think of in my off-the-back-of-a-napkin concept here, but the idea overall seems worth considering, I think. Without code running in a user-supplied HTML block it'd be pretty hard to compromise a local HTML document and pass information like cookies to a server. Or even send data to a server, period. Short of an iFrame that can access the parent frame (which is another restriction that should be available on this <restricted> tag) and could potentially communicate back, there's not a lot a malicious site could do. The HTML could still 'phone home' via image links and href links and basically say this site was accessed, but without the ability to run script code it would be pretty tough to pass along critical information to the server beyond that. Ahhhh… one can dream… Not holding my breath of course. The design by committee that is the W3C can't agree on anything in timeframes measured in anything less than decades, but maybe this is one place where browser vendors can actually step up the pressure. It is in their best interest to significantly reduce the attack surface for vulnerabilities on their browser platforms. Several people commented on Twitter today that there isn't enough discussion on issues like this that address serious needs in the web browser space. Realistically, security has to be a number one concern with Web applications in general - there isn't a Web app out there that is not vulnerable. And yet nothing has been done to address these security issues even though there might be relatively easy solutions to make this happen. 
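As a quick aside on the encoding option described at the top of this post, here's a minimal console sketch of what HTML encoding actually buys you. It uses WebUtility.HtmlEncode from System.Net, which for this purpose behaves like HttpUtility.HtmlEncode, and the user content string is just a made-up example:

    using System;
    using System.Net;

    class HtmlEncodingDemo
    {
        static void Main()
        {
            // Hypothetical user-supplied content containing a script injection attempt.
            string userContent = "<b>Nice listing!</b><script>alert('gotcha');</script>";

            // Encoded output: angle brackets become &lt; and &gt;, so the browser
            // displays the markup as text instead of executing the script.
            string encoded = WebUtility.HtmlEncode(userContent);
            Console.WriteLine(encoded);
            // -> &lt;b&gt;Nice listing!&lt;/b&gt;&lt;script&gt;alert(&#39;gotcha&#39;);&lt;/script&gt;

            // Raw output: exactly what you must NOT echo back when the string
            // comes from an untrusted user - in a page, the <script> block would run.
            Console.WriteLine(userContent);
        }
    }

The point of this post, of course, is the opposite case: when that markup has to render, encoding is off the table and something like the <restricted> tag above would be the only clean answer.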
It'll take time, and it's probably not going to happen in our lifetime, but maybe this rambling thought sparks some ideas on how this sort of restriction can get into browsers in some way in the future. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET  HTML5  HTML  Security

    Read the article

  • Migrating from SQL Trace to Extended Events

    - by extended_events
    In SQL Server codenamed “Denali” we are moving our diagnostic tracing capabilities forward by building a system on top of Extended Events. With every new system you face the specter of migration which is always a bit of a hassle. I’m obviously motivated to see everyone move their diagnostic tracing systems over to the new extended events based system, so I wanted to make sure we lowered the bar for the migration process to help ease your trials. In my initial post on Denali CTP 1 I described a couple tables that we created that will help map the existing SQL Trace Event Classes to the equivalent Extended Events events. In this post I’ll describe the tables in a bit more details, explain the relationship between the SQL Trace objects (Event Class & Column) and Extended Event objects (Events & Actions) and at the end provide some sample code for a managed stored procedure that will take an existing SQL Trace session (eg. a trace that you can see in sys.Traces) and converts it into event session DDL. Can you relate? In some ways, SQL Trace and Extended Events is kind of like the Standard and Metric measuring systems in the United States. If you spend too much time trying to figure out how to convert between the two it will probably make your head hurt. It’s often better to just use the new system without trying to translate between the two. That said, people like to relate new things to the things they’re comfortable with, so, with some trepidation, I will now explain how these two systems are related to each other. First, some terms… SQL Trace is made up of Event Classes and Columns. The Event Class occurs as the result of some activity in the database engine, for example, SQL:Batch Completed fires when a batch has completed executing on the server. Each Event Class can have any number of Columns associated with it and those Columns contain the data that is interesting about the Event Class, such as the duration or database name. In Extended Events we have objects named Events, EventData field and Actions. The Event (some people call this an xEvent but I’ll stick with Event) is equivalent to the Event Class in SQL Trace since it is the thing that occurs as the result of some activity taking place in the server. An  EventData field (from now on I’ll just refer to these as fields) is a piece of information that is highly correlated with the event and is always included as part of the schema of an Event. An Action is something that can be associated with any Event and it will cause some additional “action” to occur when ever the parent Event occurs. Actions can do a number of different things for example, there are Actions that collect additional data and, take memory dumps. When mapping SQL Trace onto Extended Events, Columns are covered by a combination of both fields and Actions. Knowing exactly where a Column is covered by a field and where it is covered by an Action is a bit of an art, so we created the mapping tables to make you an Artist without the years of practice. Let me draw you a map. Event Mapping The table dbo.trace_xe_event_map exists in the master database with the following structure: Column_name Type trace_event_id smallint package_name nvarchar xe_event_name nvarchar By joining this table sys.trace_events using trace_event_id and to the sys.dm_xe_objects using xe_event_name you can get a fair amount of information about how Event Classes are related to Events. The most basic query this lends itself to is to match an Event Class with the corresponding Event. 
SELECT     t.trace_event_id,     t.name [event_class],     e.package_name,     e.xe_event_name FROM sys.trace_events t INNER JOIN dbo.trace_xe_event_map e     ON t.trace_event_id = e.trace_event_id There are a couple things you’ll notice as you peruse the output of this query: For the most part, the names of Events are fairly close to the original Event Class; eg. SP:CacheMiss == sp_cache_miss, and so on. We’ve mostly stuck to a one to one mapping between Event Classes and Events, but there are a few cases where we have combined when it made sense. For example, Data File Auto Grow, Log File Auto Grow, Data File Auto Shrink & Log File Auto Shrink are now all covered by a single event named database_file_size_change. This just seemed like a “smarter” implementation for this type of event, you can get all the same information from this single event (grow/shrink, Data/Log, Auto/Manual growth) without having multiple different events. You can use Predicates if you want to limit the output to just one of the original Event Class measures. There are some Event Classes that did not make the cut and were not migrated. These fall into two categories; there were a few Event Classes that had been deprecated, or that just did not make sense, so we didn’t migrate them. (You won’t find an Event related to mounting a tape – sorry.) The second class is bigger; with rare exception, we did not migrate any of the Event Classes that were related to Security Auditing using SQL Trace. We introduced the SQL Audit feature in SQL Server 2008 and that will be the compliance and auditing feature going forward. Doing this is a very deliberate decision to support separation of duties for DBAs. There are separate permissions required for SQL Audit and Extended Events tracing so you can assign these tasks to different people if you choose. (If you’re wondering, the permission for Extended Events is ALTER ANY EVENT SESSION, which is covered by CONTROL SERVER.) Action Mapping The table dbo.trace_xe_action_map exists in the master database with the following structure: Column_name Type trace_column_id smallint package_name nvarchar xe_action_name nvarchar You can find more details by joining this to sys.trace_columns on the trace_column_id field. SELECT     c.trace_column_id,     c.name [column_name],     a.package_name,     a.xe_action_name FROM sys.trace_columns c INNER JOIN    dbo.trace_xe_action_map a     ON c.trace_column_id = a.trace_column_id If you examine this list, you’ll notice that there are relatively few Actions that map to SQL Trace Columns given the number of Columns that exist. This is not because we forgot to migrate all the Columns, but because much of the data for individual Event Classes is included as part of the EventData fields of the equivalent Events so there is no need to specify them as Actions. Putting it all together If you’ve spent a bunch of time figuring out the inner workings of SQL Trace, and who hasn’t, then you probably know that the typically set of Columns you find associated with any given Event Class in SQL Profiler is not fix, but is determine by the contents of the table sys.trace_event_bindings. We’ve used this table along with the mapping tables to produce a list of Event + Action combinations that duplicate the SQL Profiler Event Class definitions using the following query, which you can also find in the Books Online topic How To: View the Extended Events Equivalents to SQL Trace Event Classes. 
USE MASTER; GO SELECT DISTINCT    tb.trace_event_id,    te.name AS 'Event Class',    em.package_name AS 'Package',    em.xe_event_name AS 'XEvent Name',    tb.trace_column_id,    tc.name AS 'SQL Trace Column',    am.xe_action_name as 'Extended Events action' FROM (sys.trace_events te LEFT OUTER JOIN dbo.trace_xe_event_map em    ON te.trace_event_id = em.trace_event_id) LEFT OUTER JOIN sys.trace_event_bindings tb    ON em.trace_event_id = tb.trace_event_id LEFT OUTER JOIN sys.trace_columns tc    ON tb.trace_column_id = tc.trace_column_id LEFT OUTER JOIN dbo.trace_xe_action_map am    ON tc.trace_column_id = am.trace_column_id ORDER BY te.name, tc.name As you might imagine, it’s also possible to map an existing trace definition to the equivalent event session by judicious use of fn_trace_geteventinfo joined with the two mapping tables. This query extracts the list of Events and Actions equivalent to the trace with ID = 1, which is most likely the Default Trace. You can find this query, along with a set of other queries and steps required to migrate your existing traces over to Extended Events in the Books Online topic How to: Convert an Existing SQL Trace Script to an Extended Events Session. USE MASTER; GO DECLARE @trace_id int SET @trace_id = 1 SELECT DISTINCT el.eventid, em.package_name, em.xe_event_name AS 'event'    , el.columnid, ec.xe_action_name AS 'action' FROM (sys.fn_trace_geteventinfo(@trace_id) AS el    LEFT OUTER JOIN dbo.trace_xe_event_map AS em       ON el.eventid = em.trace_event_id) LEFT OUTER JOIN dbo.trace_xe_action_map AS ec    ON el.columnid = ec.trace_column_id WHERE em.xe_event_name IS NOT NULL AND ec.xe_action_name IS NOT NULL You’ll notice in the output that the list doesn’t include any of the security audit Event Classes, as I wrote earlier, those were not migrated. But wait…there’s more! If this were an infomercial there’d by some obnoxious guy next to me blogging “Well Mike…that’s pretty neat, but I’m sure you can do more. Can’t you make it even easier to migrate from SQL Trace?”  Needless to say, I’d blog back, in an overly excited way, “You bet I can' obnoxious blogger side-kick!” What I’ve got for you here is a Extended Events Team Blog only special – this tool will not be sold in any store; it’s a special offer for those of you reading the blog. I’ve wrapped all the logic of pulling the configuration information out of an existing trace and and building the Extended Events DDL statement into a handy, dandy CLR stored procedure. Once you load the assembly and register the procedure you just supply the trace id (from sys.traces) and provide a name for the event session. Run the procedure and out pops the DDL required to create an equivalent session. Any aspects of the trace that could not be duplicated are included in comments within the DDL output. This procedure does not actually create the event session – you need to copy the DDL out of the message tab and put it into a new query window to do that. It also requires an existing trace (but it doesn’t have to be running) to evaluate; there is no functionality to parse t-sql scripts. I’m not going to spend a bunch of time explaining the code here – the code is pretty well commented and hopefully easy to follow. If not, you can always post comments or hit the feedback button to send us some mail. 
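To give you a feel for the overall shape before you open the sample, here's a rough, hypothetical C# sketch of such a CLR procedure - it reads the trace definition through the event mapping table and pipes the generated DDL back to the client as a message. The body below is illustrative only; the actual TraceToExtendedEventDDL sample that follows does much more (actions, targets, session options, and comments for anything that can't be converted):

    using System.Data.SqlClient;
    using System.Data.SqlTypes;
    using System.Text;
    using Microsoft.SqlServer.Server;

    public partial class StoredProcedures
    {
        // Illustrative skeleton only - not the actual TraceToExtendedEventDDL sample.
        [SqlProcedure]
        public static void ConvertTraceToExtendedEvent(SqlInt32 traceId, SqlString sessionName)
        {
            StringBuilder ddl = new StringBuilder();
            ddl.AppendLine("CREATE EVENT SESSION [" + sessionName.Value + "] ON SERVER");

            using (SqlConnection conn = new SqlConnection("context connection=true"))
            {
                conn.Open();
                SqlCommand cmd = new SqlCommand(
                    @"SELECT DISTINCT em.package_name, em.xe_event_name
                      FROM sys.fn_trace_geteventinfo(@trace_id) AS el
                      INNER JOIN dbo.trace_xe_event_map AS em
                          ON el.eventid = em.trace_event_id", conn);
                cmd.Parameters.AddWithValue("@trace_id", traceId.Value);

                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    bool first = true;
                    while (reader.Read())
                    {
                        // Each mapped Event Class becomes an ADD EVENT clause.
                        ddl.AppendLine((first ? "ADD EVENT " : ",ADD EVENT ") +
                            reader.GetString(0) + "." + reader.GetString(1));
                        first = false;
                    }
                }
            }

            // No target is emitted here; a real conversion would also generate
            // ADD TARGET and WITH (...) clauses based on the trace settings.
            SqlContext.Pipe.Send(ddl.ToString());
        }
    }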
Sample code: TraceToExtendedEventDDL   Installing the procedure Just in case you're not familiar with installing CLR procedures…once you've compiled the assembly you can load it using a script like this: -- Context to master USE master GO -- Create the assembly from a shared location. CREATE ASSEMBLY TraceToXESessionConverter FROM 'C:\Temp\TraceToXEventSessionConverter.dll' WITH PERMISSION_SET = SAFE GO -- Create a stored procedure from the assembly. CREATE PROCEDURE CreateEventSessionFromTrace @trace_id int, @session_name nvarchar(max) AS EXTERNAL NAME TraceToXESessionConverter.StoredProcedures.ConvertTraceToExtendedEvent GO Enjoy! -Mike

    Read the article

  • SQL SERVER – 5 Tips for Improving Your Data with expressor Studio

    - by pinaldave
    It’s no secret that bad data leads to bad decisions and poor results.  However, how do you prevent dirty data from taking up residency in your data store?  Some might argue that it’s the responsibility of the person sending you the data.  While that may be true, in practice that will rarely hold up.  It doesn’t matter how many times you ask, you will get the data however they decide to provide it. So now you have bad data.  What constitutes bad data?  There are quite a few valid answers, for example: Invalid date values Inappropriate characters Wrong data Values that exceed a pre-set threshold While it is certainly possible to write your own scripts and custom SQL to identify and deal with these data anomalies, that effort often takes too long and becomes difficult to maintain.  Instead, leveraging an ETL tool like expressor Studio makes the data cleansing process much easier and faster.  Below are some tips for leveraging expressor to get your data into tip-top shape. Tip 1:     Build reusable data objects with embedded cleansing rules One of the new features in expressor Studio 3.2 is the ability to define constraints at the metadata level.  Using expressor’s concept of Semantic Types, you can define reusable data objects that have embedded logic such as constraints for dealing with dirty data.  Once defined, they can be saved as a shared atomic type and then re-applied to other data attributes in other schemas. As you can see in the figure above, I’ve defined a constraint on zip code.  I can then save the constraint rules I defined for zip code as a shared atomic type called zip_type for example.   The next time I get a different data source with a schema that also contains a zip code field, I can simply apply the shared atomic type (shown below) and the previously defined constraints will be automatically applied. Tip 2:     Unlock the power of regular expressions in Semantic Types Another powerful feature introduced in expressor Studio 3.2 is the option to use regular expressions as a constraint.   A regular expression is used to identify patterns within data.   The patterns could be something as simple as a date format or something much more complex such as a street address.  For example, I could define that a valid IP address should be made up of 4 numbers, each 0 to 255, and separated by a period.  So 192.168.23.123 might be a valid IP address whereas 888.777.0.123 would not be.   How can I account for this using regular expressions? A very simple regular expression that would look for any 4 sets of 3 digits separated by a period would be:  ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ Alternatively, the following would be the exact check for truly valid IP addresses as we had defined above:  ^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$ .  In expressor, we would enter this regular expression as a constraint like this: Here we select the corrective action to be ‘Escalate’, meaning that the expressor Dataflow operator will decide what to do.  Some of the options include rejecting the offending record, skipping it, or aborting the dataflow. Tip 3:     Email pattern expressions that might come in handy In the example schema that I am using, there’s a field for email.  Email addresses are often entered incorrectly because people are trying to avoid spam.  
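Before moving on to the e-mail patterns, here is a quick way to sanity-check the two IP-address expressions from Tip 2 outside of expressor Studio - a small, hypothetical C# console snippet that tests them against the sample values discussed above:

    using System;
    using System.Text.RegularExpressions;

    class RegexConstraintDemo
    {
        static void Main()
        {
            // The "simple" pattern from Tip 2: four groups of 1 to 3 digits.
            // It happily accepts out-of-range octets such as 888.
            string simplePattern = @"^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$";

            // The stricter pattern limits every octet to 0-255.
            string strictPattern =
                @"^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\." +
                @"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\." +
                @"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\." +
                @"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$";

            string[] samples = { "192.168.23.123", "888.777.0.123" };
            foreach (string ip in samples)
            {
                Console.WriteLine("{0}: simple={1} strict={2}",
                    ip,
                    Regex.IsMatch(ip, simplePattern),
                    Regex.IsMatch(ip, strictPattern));
            }
            // 192.168.23.123: simple=True strict=True
            // 888.777.0.123:  simple=True strict=False
        }
    }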
While there are a lot of different ways to define what constitutes a valid email address, a quick search online yields a couple of really useful regular expressions for validating email addresses: This one is short and sweet:  \b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b (Source: http://www.regular-expressions.info/) This one is more specific about which characters are allowed:  ^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$ (Source: http://regexlib.com/REDetails.aspx?regexp_id=26 ) Tip 4:     Reject “dirty data” for analysis or further processing Yet another feature introduced in expressor Studio 3.2 is the ability to reject records based on constraint violations.  To capture reject records on input, simply specify Reject Record in the Error Handling setting for the Read File operator.  Then attach a Write File operator to the reject port of the Read File operator as such: Next, in the Write File operator, you can configure the expressor operator in a similar way to the Read File.  The key difference would be that the schema needs to be derived from the upstream operator as shown below: Once configured, expressor will output rejected records to the file you specified.  In addition to the rejected records, expressor also captures some diagnostic information that will be helpful towards identifying why the record was rejected.  This makes diagnosing errors much easier! Tip 5:    Use a Filter or Transform after the initial cleansing to finish the job Sometimes you may want to predicate the data cleansing on a more complex set of conditions.  For example, I may only be interested in processing data containing males over the age of 25 in certain zip codes.  Using an expressor Filter operator, you can define the conditional logic which isolates the records of importance away from the others. Alternatively, the expressor Transform operator can be used to alter the input value via a user defined algorithm or transformation.  It also supports the use of conditional logic and data can be rejected based on constraint violations. However, the best tip I can leave you with is to not constrain your solution design approach – expressor operators can be combined in many different ways to achieve the desired results.  For example, in the expressor Dataflow below, I can post-process the reject data from the Filter which did not meet my pre-defined criteria and, if successful, Funnel it back into the flow so that it gets written to the target table. I continue to be impressed that expressor offers all this functionality as part of their FREE expressor Studio desktop ETL tool, which you can download from here.  Their Studio ETL tool is absolutely free and they are very open about saying that if you want to deploy their software on a dedicated Windows Server, you need to purchase their server software, whose pricing is posted on their website. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Building the Ultimate SharePoint 2010 Development Environment

    - by Manesh Karunakaran
    It’s been more than a month since SharePoint 2010 RTMed. And a lot of people have downloaded and set up their very own SharePoint 2010 development rigs. And quite a few people have written blogs about setting up good development environments, there is even an MSDN article on it. Two of the blogs worth noting are from MVPs Sahil Malik and Wictor Wilén. Make sure that you check these out as well. Part of the bad side-effects of being a geek is the need to do the technical stuff the best way possible (pragmatic or otherwise), but the problem with this is that what is considered “best” is relative. Precisely the reason why you are reading this post now. Most of the posts that I read are out dated/need updations or are using the wrong OS’es or virtualization solutions (again, opinions vary) or using them the wrong way. Here’s a developer’s view of Building the Ultimate SharePoint 2010 Development Rig. If you are a sales guy, it’s time to close this window. Confusion 1: Which Host Operating System and Virtualization Solution to use? This point has been beaten to death in numerous blog posts in the past, if you have time to invest, read this excellent post by our very own SharePoint Joel on this subject. But if you are planning to build the Ultimate Development Rig, then Windows Server 2008 R2 with Hyper-V is the option that you should be looking at. I have been using this as my primary OS for about 6-7 months now, and I haven’t had any Driver issue or Application compatibility issue. In my experience all the Windows 7 drivers work fine with WIN2008 R2 also. You can enable Aero for eye candy (and the Windows 7 look and feel) and except for a few things like the Hibernation support (which a can be enabled if you really want it), Windows Server 2008 R2, is the best Workstation OS that I have used till date. But frankly the answer to this question of which OS to use depends primarily on one question - Are you willing to change your primary OS? If the answer to that is ‘Yes’, then Windows 2008 R2 with Hyper-V is the best option, if not look at vmWare or VirtualBox, both are equally good. Those who are familiar with a Virtual PC background might prefer Sun VirtualBox. Besides, these provide support for running 64 bit guest machines on 32 bit hosts if the underlying hardware is truly 64 bit. See my earlier post on this. Since we are going to make the ultimate rig, we will use Windows Server 2008 R2 with Hyper-V, for reasons mentioned above. Confusion 2: Should I use a multi-(virtual) server set up? A lot of people use multiple servers for their development environments - like Wictor Wilén is suggesting - one server hosting the Active directory, one hosting SharePoint Server and another one for SQL Server. True, this mimics the production environment the best possible way, but as somebody who has fallen for this set up earlier, I can tell you that you don’t really get anything by doing this. Microsoft has done well to ensure that if you can do it on one machine, you can do it in a farm environment as well. Besides, when you run multiple Server class machine instances in parallel, there are a lot of unwanted processor cycles wasted for no good use. In my personal experience, as somebody who needs to switch between MOSS 2007/SharePoint 2010 environments from time to time, the best possible solution is to Make the host Windows Server 2008 R2 machine your Domain Controller (AD Server) Make all your Virtual Guest OS’es join this domain. 
Have each Individual Guest OS Image have it’s own local SQL Server instance. The advantages are that you can reuse the users and groups in each of the Guest operating systems, you can manage the users in one place, AD is light weight and doesn't take too much resources on your host machine and also having separate SQL instances for each of the Development images gives you maximum flexibility in terms of configuration, for example your SharePoint rigs can have simpler DB configurations, compared to your MS BI blast pits. Confusion 3: Which Operating System should I use to run SharePoint 2010 Now that’s a no brainer. Use Windows 2008 R2 as your Guest OS. When you are building the ultimate rig, why compromise? If you are planning to run Windows Server 2008 as your Guest OS, there are a few patches that you need to install at different times during the installation, for that follow the steps mentioned here Okay now that we have made our choices, let’s get to the interesting part of building the rig, Step 1: Prepare the host machine – Install Windows Server 2008 R2 Install Windows Server 2008 R2 on your best Desktop/Laptop. If you have read this far, I am quite sure that you are somebody who can install an OS on your own, so go ahead and do that. Make sure that you run the compatibility wizard before you go ahead and nuke your current OS. There are plenty of blogs telling you how to make a good Windows 2008 R2 Workstation that feels and behaves like a Windows 7 machine, follow one and once you are done, head to Step 2. Step 2: Configure the host machine as a Domain Controller Before we begin this, let me tell you, this step is completely optional, you don’t really need to do this, you can simply use the local users on the Guest machines instead, but if this is a much cleaner approach to manage users and groups if you run multiple guest operating systems.  This post neatly explains how to configure your Windows Server 2008 R2 host machine as a Domain Controller. Follow those simple steps and you are good to go. If you are not able to get it to work, try this. Step 3: Prepare the guest machine – Install Windows Server 2008 R2 Open Hyper-V Manager Choose to Create a new Guest Operating system Allocate at least 2 GB of Memory to the Guest OS Choose the Windows 2008 R2 Installation Media Start the Virtual Machine to commence installation. Once the Installation is done, Activate the OS. Step 4: Make the Guest operating systems Join the Domain This step is quite simple, just follow these steps below, Fire up Hyper-V Manager, open your Guest OS Click on Start, and Right click on ‘Computer’ and choose ‘Properties’ On the window that pops-up, click on ‘Change Settings’ On the ‘System Properties’ Window that comes up, Click on the ‘Change’ button Now a window named ‘Computer Name/Domain Changes’ opens up, In the text box titled Domain, type in the Domain name from Step 2. Click Ok and windows will show you the welcome to domain message and ask you to restart the machine, click OK to restart. If the addition to domain fails, that means that you have not set up networking in Hyper-V for the Guest OS to communicate with the Host. To enable it, follow the steps I had mentioned in this post earlier. Step 5: Install SQL Server 2008 R2 on the Guest Machine SQL Server 2008 R2 gets installed with out hassle on Windows Server 2008 R2. SQL Server 2008 needs SP2 to work properly on WIN2008 R2. Also SQL Server 2008 R2 allows you to directly add PowerPivot support to SharePoint. 
Choose to install in SharePoint Integrated Mode in Reporting Server Configuration. Step 6: Install KB971831 and SharePoint 2010 Pre-requisites Now install the WCF Hotfix for Microsoft Windows (KB971831) from this location, and SharePoint 2010 Pre-requisites from the SP2010 Installation media. Step 7: Install and Configure SharePoint 2010 Install SharePoint 2010 from the installation media, after the installation is complete, you are prompted to start the SharePoint Products and Technologies Configuration Wizard. If you are using a local instance of Microsoft SQL Server 2008, install the Microsoft SQL Server 2008 KB 970315 x64 before starting the wizard. If your development environment uses a remote instance of Microsoft SQL Server 2008 or if it has a pre-existing installation of Microsoft SQL Server 2008 on which KB 970315 x64 has already been applied, this step is not necessary. With the wizard open, do the following: Install SQL Server 2008 KB 970315 x64. After the Microsoft SQL Server 2008 KB 970315 x64 installation is finished, complete the wizard. Alternatively, you can choose not to run the wizard by clearing the SharePoint Products and Technologies Configuration Wizard check box and closing the completed installation dialog box. Install SQL Server 2008 KB 970315 x64, and then manually start the SharePoint Products and Technologies Configuration Wizard by opening a Command Prompt window and executing the following command: C:\Program Files\Common Files\Microsoft Shared Debug\Web Server Extensions\14\BIN\psconfigui.exe The SharePoint Products and Technologies Configuration Wizard may fail if you are using a computer that is joined to a domain but that is not connected to a domain controller. Step 8: Install Visual Studio 2010 and SharePoint 2010 SDK Install Visual Studio 2010 Download and Install the Microsoft SharePoint 2010 SDK Step 9: Install PowerPivot for SharePoint and Configure Reporting Services Pop-In the SQLServer 2008 R2 installation media once again and install PowerPivot for SharePoint. This will get added as another instance named POWERPIVOT. Configure Reporting Services by following the steps mentioned here, if you need to get down to the details on how the integration between SharePoint 2010 and SQL Server 2008 R2 works, see Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010 an excellent article by Alan Le Marquand Step 10: Download and Install Sample Databases for Microsoft SQL Server 2008R2 SharePoint 2010 comes with a lot of cool stuff like PerformancePoint Services and BCS, if you need to try these out, you need to have data in your databases. So if you want to save yourself the trouble of creating sample data for your PerformancePoint and BCS experiments, download and install Sample Databases for Microsoft SQL Server 2008R2 from CodePlex. And you are done! Fire up your Visual Studio 2010 and Start Coding away!!
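If you want a quick smoke test for the new rig once Visual Studio 2010 is up, a minimal console app against the SharePoint server object model is a good first build. This is only a sketch - it must run on the SharePoint VM itself, be compiled for x64 against .NET Framework 3.5, and reference Microsoft.SharePoint.dll; the site URL below is a hypothetical placeholder:

    using System;
    using Microsoft.SharePoint;

    class FarmSmokeTest
    {
        static void Main()
        {
            // Hypothetical URL of a site collection on the new development VM.
            using (SPSite site = new SPSite("http://sp2010-dev"))
            using (SPWeb web = site.OpenWeb())
            {
                // If this prints the site title and its lists, the farm, the
                // content database and the object model are all wired up correctly.
                Console.WriteLine("Connected to: " + web.Title);
                foreach (SPList list in web.Lists)
                {
                    Console.WriteLine(" - " + list.Title);
                }
            }
        }
    }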

    Read the article

  • Recover Data Like a Forensics Expert Using an Ubuntu Live CD

    - by Trevor Bekolay
    There are lots of utilities to recover deleted files, but what if you can’t boot up your computer, or the whole drive has been formatted? We’ll show you some tools that will dig deep and recover the most elusive deleted files, or even whole hard drive partitions. We’ve shown you simple ways to recover accidentally deleted files, even a simple method that can be done from an Ubuntu Live CD, but for hard disks that have been heavily corrupted, those methods aren’t going to cut it. In this article, we’ll examine four tools that can recover data from the most messed up hard drives, regardless of whether they were formatted for a Windows, Linux, or Mac computer, or even if the partition table is wiped out entirely. Note: These tools cannot recover data that has been overwritten on a hard disk. Whether a deleted file has been overwritten depends on many factors – the quicker you realize that you want to recover a file, the more likely you will be able to do so. Our setup To show these tools, we’ve set up a small 1 GB hard drive, with half of the space partitioned as ext2, a file system used in Linux, and half the space partitioned as FAT32, a file system used in older Windows systems. We stored ten random pictures on each hard drive. We then wiped the partition table from the hard drive by deleting the partitions in GParted. Is our data lost forever? Installing the tools All of the tools we’re going to use are in Ubuntu’s universe repository. To enable the repository, open Synaptic Package Manager by clicking on System in the top-left, then Administration > Synaptic Package Manager. Click on Settings > Repositories and add a check in the box labelled “Community-maintained Open Source software (universe)”. Click Close, and then in the main Synaptic Package Manager window, click the Reload button. Once the package list has reloaded, and the search index rebuilt, search for and mark for installation one or all of the following packages: testdisk, foremost, and scalpel. Testdisk includes TestDisk, which can recover lost partitions and repair boot sectors, and PhotoRec, which can recover many different types of files from tons of different file systems. Foremost, originally developed by the US Air Force Office of Special Investigations, recovers files based on their headers and other internal structures. Foremost operates on hard drives or drive image files generated by various tools. Finally, scalpel performs the same functions as foremost, but is focused on enhanced performance and lower memory usage. Scalpel may run better if you have an older machine with less RAM. Recover hard drive partitions If you can’t mount your hard drive, then its partition table might be corrupted. Before you start trying to recover your important files, it may be possible to recover one or more partitions on your drive, recovering all of your files with one step. Testdisk is the tool for the job. Start it by opening a terminal (Applications > Accessories > Terminal) and typing in: sudo testdisk If you’d like, you can create a log file, though it won’t affect how much data you recover. Once you make your choice, you’re greeted with a list of the storage media on your machine. You should be able to identify the hard drive you want to recover partitions from by its size and label. TestDisk asks you select the type of partition table to search for. In most cases (ext2/3, NTFS, FAT32, etc.) you should select Intel and press Enter. Highlight Analyse and press enter. 
In our case, our small hard drive has previously been formatted as NTFS. Amazingly, TestDisk finds this partition, though it is unable to recover it. It also finds the two partitions we just deleted. We are able to change their attributes, or add more partitions, but we’ll just recover them by pressing Enter. If TestDisk hasn’t found all of your partitions, you can try doing a deeper search by selecting that option with the left and right arrow keys. We only had these two partitions, so we’ll recover them by selecting Write and pressing Enter. Testdisk informs us that we will have to reboot. Note: If your Ubuntu Live CD is not persistent, then when you reboot you will have to reinstall any tools that you installed earlier. After restarting, both of our partitions are back to their original states, pictures and all. Recover files of certain types For the following examples, we deleted the 10 pictures from both partitions and then reformatted them. PhotoRec Of the three tools we’ll show, PhotoRec is the most user-friendly, despite being a console-based utility. To start recovering files, open a terminal (Applications > Accessories > Terminal) and type in: sudo photorec To begin, you are asked to select a storage device to search. You should be able to identify the right device by its size and label. Select the right device, and then hit Enter. PhotoRec asks you select the type of partition to search. In most cases (ext2/3, NTFS, FAT, etc.) you should select Intel and press Enter. You are given a list of the partitions on your selected hard drive. If you want to recover all of the files on a partition, then select Search and hit enter. However, this process can be very slow, and in our case we only want to search for pictures files, so instead we use the right arrow key to select File Opt and press Enter. PhotoRec can recover many different types of files, and deselecting each one would take a long time. Instead, we press “s” to clear all of the selections, and then find the appropriate file types – jpg, gif, and png – and select them by pressing the right arrow key. Once we’ve selected these three, we press “b” to save these selections. Press enter to return to the list of hard drive partitions. We want to search both of our partitions, so we highlight “No partition” and “Search” and then press Enter. PhotoRec prompts for a location to store the recovered files. If you have a different healthy hard drive, then we recommend storing the recovered files there. Since we’re not recovering very much, we’ll store it on the Ubuntu Live CD’s desktop. Note: Do not recover files to the hard drive you’re recovering from. PhotoRec is able to recover the 20 pictures from the partitions on our hard drive! A quick look in the recup_dir.1 directory that it creates confirms that PhotoRec has recovered all of our pictures, save for the file names. Foremost Foremost is a command-line program with no interactive interface like PhotoRec, but offers a number of command-line options to get as much data out of your had drive as possible. For a full list of options that can be tweaked via the command line, open up a terminal (Applications > Accessories > Terminal) and type in: foremost –h In our case, the command line options that we are going to use are: -t, a comma-separated list of types of files to search for. In our case, this is “jpeg,png,gif”. -v, enabling verbose-mode, giving us more information about what foremost is doing. -o, the output folder to store recovered files in. 
In our case, we created a directory called "foremost" on the desktop. -i, the input that will be searched for files. This can be a disk image in several different formats; however, we will use a hard disk, /dev/sda. Our foremost invocation is: sudo foremost -t jpeg,png,gif -o foremost -v -i /dev/sda Your invocation will differ depending on what you're searching for and where you're searching for it. Foremost is able to recover 17 of the 20 files stored on the hard drive. Looking at the files, we can confirm that these files were recovered relatively well, though we can see some errors in the thumbnail for 00622449.jpg. Part of this may be due to the ext2 filesystem. Foremost recommends using the -d command-line option for Linux file systems like ext2. We'll run foremost again, adding the -d command-line option to our foremost invocation: sudo foremost -t jpeg,png,gif -d -o foremost -v -i /dev/sda This time, foremost is able to recover all 20 images! A final look at the pictures reveals that the pictures were recovered with no problems. Scalpel Scalpel is another powerful program that, like Foremost, is heavily configurable. Unlike Foremost, Scalpel requires you to edit a configuration file before attempting any data recovery. Any text editor will do, but we'll use gedit to change the configuration file. In a terminal window (Applications > Accessories > Terminal), type in: sudo gedit /etc/scalpel/scalpel.conf scalpel.conf contains information about a number of different file types. Scroll through this file and uncomment lines that start with a file type that you want to recover (i.e. remove the "#" character at the start of those lines). Save the file and close it. Return to the terminal window. Scalpel also has a ton of command-line options that can help you search quickly and effectively; however, we'll just define the input device (/dev/sda) and the output folder (a folder called "scalpel" that we created on the desktop). Our invocation is: sudo scalpel /dev/sda -o scalpel Scalpel is able to recover 18 of our 20 files. A quick look at the files scalpel recovered reveals that most of our files were recovered successfully, though there were some problems (e.g. 00000012.jpg). Conclusion In our quick toy example, TestDisk was able to recover two deleted partitions, and PhotoRec and Foremost were able to recover all 20 deleted images. Scalpel recovered most of the files, but it's very likely that playing with the command-line options for scalpel would have enabled us to recover all 20 images. These tools are lifesavers when something goes wrong with your hard drive. If your data is on the hard drive somewhere, then one of these tools will track it down!

    Read the article

  • 8 Reasons Why Even Microsoft Agrees the Windows Desktop is a Nightmare

    - by Chris Hoffman
    Let’s be honest: The Windows desktop is a mess. Sure, it’s extremely powerful and has a huge software library, but it’s not a good experience for average people. It’s not even a good experience for geeks, although we tolerate it. Even Microsoft agrees about this. Microsoft’s Surface tablets with Windows RT don’t support any third-party desktop apps. They consider this a feature — users can’t install malware and other desktop junk, so the system will always be speedy and secure. Malware is Still Common Malware may not affect geeks, but it certainly continues to affect average people. Securing Windows, keeping it secure, and avoiding unsafe programs is a complex process. There are over 50 different file extensions that can contain harmful code to keep track of. It’s easy to have theoretical discussions about how malware could infect Mac computers, Android devices, and other systems. But Mac malware is extremely rare, and has  generally been caused by problem with the terrible Java plug-in. Macs are configured to only run executables from identified developers by default, whereas Windows will run everything. Android malware is talked about a lot, but Android malware is rare in the real world and is generally confined to users who disable security protections and install pirated apps. Google has also taken action, rolling out built-in antivirus-like app checking to all Android devices, even old ones running Android 2.3, via Play Services. Whatever the reason, Windows malware is still common while malware for other systems isn’t. We all know it — anyone who does tech support for average users has dealt with infected Windows computers. Even users who can avoid malware are stuck dealing with complex and nagging antivirus programs, especially since it’s now so difficult to trust Microsoft’s antivirus products. Manufacturer-Installed Bloatware is Terrible Sit down with a new Mac, Chromebook, iPad, Android tablet, Linux laptop, or even a Surface running Windows RT and you can enjoy using your new device. The system is a clean slate for you to start exploring and installing your new software. Sit down with a new Windows PC and the system is a mess. Rather than be delighted, you’re stuck reinstalling Windows and then installing the necessary drivers or you’re forced to start uninstalling useless bloatware programs one-by-one, trying to figure out which ones are actually useful. After uninstalling the useless programs, you may end up with a system tray full of icons for ten different hardware utilities anyway. The first experience of using a new Windows PC is frustration, not delight. Yes, bloatware is still a problem on Windows 8 PCs. Manufacturers can customize the Refresh image, preventing bloatware rom easily being removed. Finding a Desktop Program is Dangerous Want to install a Windows desktop program? Well, you’ll have to head to your web browser and start searching. It’s up to you, the user, to know which programs are safe and which are dangerous. Even if you find a website for a reputable program, the advertisements on that page will often try to trick you into downloading fake installers full of adware. While it’s great to have the ability to leave the app store and get software that the platform’s owner hasn’t approved — as on Android — this is no excuse for not providing a good, secure software installation experience for typical users installing typical programs. 
Even Reputable Desktop Programs Try to Install Junk Even if you do find an entirely reputable program, you'll have to keep your eyes open while installing it. It will likely try to install adware, add browser toolbars, change your default search engine, or change your web browser's home page. Even Microsoft's own programs do this — when you install Skype for Windows desktop, it will attempt to modify your browser settings to use Bing, even if you've specifically chosen another search engine and home page. With Microsoft setting such an example, it's no surprise so many other software developers have followed suit. Geeks know how to avoid this stuff, but there's a reason program installers continue to do this. It works and tricks many users, who end up with junk installed and settings changed. The Update Process is Confusing On iOS, Android, and Windows RT, software updates come from a single place — the app store. On Linux, software updates come from the package manager. On Mac OS X, typical users' software updates likely come from the Mac App Store. On the Windows desktop, software updates come from… well, every program has to create its own update mechanism. Users have to keep track of all these updaters and make sure their software is up-to-date. Most programs now have their act together and automatically update by default, but users who have old versions of Flash and Adobe Reader installed are vulnerable until they realize their software isn't automatically updating. Even if every program updates properly, the sheer mess of updaters is clunky, slow, and confusing in comparison to a centralized update process. Browser Plugins Open Security Holes It's no surprise that other modern platforms like iOS, Android, Chrome OS, Windows RT, and Windows Phone don't allow traditional browser plugins, or only allow Flash and build it into the system. Browser plugins provide a wealth of different ways for malicious web pages to exploit the browser and open the system to attack. Browser plugins are one of the most popular attack vectors because of how many users have out-of-date plugins and how many plugins, especially Java, seem to be designed without taking security seriously. Oracle's Java plugin even tries to install the terrible Ask toolbar when installing security updates. That's right — the security update process is also used to cram additional adware into users' machines so unscrupulous companies like Oracle can make a quick buck. It's no wonder that most Windows PCs have an out-of-date, vulnerable version of Java installed. Battery Life is Terrible Windows PCs have bad battery life compared to Macs, iOS devices, and Android tablets, all of which Windows now competes with. Even Microsoft's own Surface Pro 2 has bad battery life. Apple's 11-inch MacBook Air, which has very similar hardware to the Surface Pro 2, offers double its battery life when web browsing. Microsoft has been fond of blaming third-party hardware manufacturers for their poorly optimized drivers in the past, but there's no longer any room to hide. The problem is clearly Windows. Why is this? No one really knows for sure. Perhaps Microsoft has kept on piling Windows component on top of Windows component and many older Windows components were never properly optimized. Windows Users Become Stuck on Old Windows Versions Apple's new OS X 10.9 Mavericks upgrade is completely free to all Mac users and supports Macs going back to 2007. Apple has also announced their intention that all new releases of Mac OS X will be free. 
In 2007, Microsoft had just shipped Windows Vista. Macs from the Windows Vista era are being upgraded to the latest version of the Mac operating system for free, while Windows PCs from the same era are probably still using Windows Vista. There’s no easy upgrade path for these people. They’re stuck using Windows Vista and maybe even the outdated Internet Explorer 9 if they haven’t installed a third-party web browser. Microsoft’s upgrade path is for these people to pay $120 for a full copy of Windows 8.1 and go through a complicated process that’s actaully a clean install. Even users of Windows 8 devices will probably have to pay money to upgrade to Windows 9, while updates for other operating systems are completely free. If you’re a PC geek, a PC gamer, or someone who just requires specialized software that only runs on Windows, you probably use the Windows desktop and don’t want to switch. That’s fine, but it doesn’t mean the Windows desktop is actually a good experience. Much of the burden falls on average users, who have to struggle with malware, bloatware, adware bundled in installers, complex software installation processes, and out-of-date software. In return, all they get is the ability to use a web browser and some basic Office apps that they could use on almost any other platform without all the hassle. Microsoft would agree with this, touting Windows RT and their new “Windows 8-style” app platform as the solution. Why else would Microsoft, a “devices and services” company, position the Surface — a device without traditional Windows desktop programs — as their mass-market device recommended for average people? This isn’t necessarily an endorsement of Windows RT. If you’re tech support for your family members and it comes time for them to upgrade, you may want to get them off the Windows desktop and tell them to get a Mac or something else that’s simple. Better yet, if they get a Mac, you can tell them to visit the Apple Store for help instead of calling you. That’s another thing Windows PCs don’t offer — good manufacturer support. Image Credit: Blanca Stella Mejia on Flickr, Collin Andserson on Flickr, Luca Conti on Flickr     

    Read the article

  • Data Integration 12c Raising the Big Data Roof at Oracle OpenWorld

    - by Tanu Sood
Author: Dain Hansen, Director, Oracle It was an exciting OpenWorld 2013 for us in the Data Integration track. Our theme this year was all about 'being future ready' - previewing one of our biggest releases this year: Oracle Data Integration 12c. Just this week we followed up with this preview by announcing the general availability of the 12c release for Oracle's key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations to keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. Mark Hurd's keynote on day one set the tone for the Data Integration sessions. Mark focused on big data analytics and changing consumer expectations. Real-time insight, especially, is a key theme for Oracle overall and for the data integration products. In Mark Hurd's keynote we heard from key customers, such as Airbus and Thomson Reuters, how real-time analysis of operational data, including machine data, creates value and in some cases even saves lives. Thomas Kurian gave a deeper look into Oracle's big data and fast data solutions. In the lead Data Integration track session, Brad Adelberg, VP of Development, presented Oracle's Data Integration 12c product strategy based on key trends from the initial OpenWorld keynotes. Brad talked about how Oracle's data integration products address the new data integration requirements that evolved with cloud computing, big data, and changing consumer expectations, and how they set the key themes in our products' road map. Brad explained why and how fast time-to-value, high performance, and future-ready solutions are the top focus areas for product development. If you were not able to attend OpenWorld or this session, I recommend reading the white paper: Five New Data Integration Requirements and How to Meet them with Oracle Data Integration, which provides an in-depth look into how Oracle addresses the new trends in the DI market. Following Brad's session, Nick Wagner provided an in-depth review of Oracle GoldenGate's latest features and roadmap. Nick discussed how Oracle GoldenGate's tight integration with Oracle Database sets the product apart from the competition. We also heard that heterogeneity is still a major focus for GoldenGate's development and there will be more news on that front when there is a major release. 
After GoldenGate's product strategy session, Denis Gray from the PM team presented the Oracle Data Integrator product strategy session, talking about the latest and greatest on ODI. Another good session was delivered by a long-time GoldenGate user, Comcast. Jason Hurd and Amit Patel of Comcast talked about the various use cases in which they deploy Oracle GoldenGate throughout their enterprise, from database upgrades and feeding reporting systems to active-active database synchronization. The Comcast team shared many good tips on how to use GoldenGate both for zero-downtime upgrades and for active-active replication with conflict management requirements. Another important goal we had this year for the Data Integration track at OpenWorld was hearing from our customers. We ended day 1 on just that, with a wonderful award ceremony for the Oracle Excellence Awards for Oracle Fusion Middleware Innovation. The ceremony was held in the Yerba Buena Center for the Arts. Congratulations to Royal Bank of Scotland and Yalumba Wine Company, the winners in the Data Integration category. You can find more information on the award and the winners in our previous blog post: 2013 Oracle Excellence Awards for Fusion Middleware Innovation… Selected for their innovative use of Oracle's Data Integration products, the winners for the Data Integration category are Royal Bank of Scotland and The Yalumba Wine Company. Congratulations!!! Royal Bank of Scotland's Market and International Banking division provides clients across the globe with seamless trading and competitive pricing, underpinned by a deep knowledge of risk management across the full spectrum of financial products. They handle millions of transactions daily to keep the lifeblood of their clients' businesses flowing – whether through payment management solutions or through bespoke trade finance solutions. Royal Bank of Scotland is leveraging Oracle GoldenGate and Oracle Data Integrator along with Oracle Business Intelligence Enterprise Edition and the Oracle Database for a variety of solutions. Mainly, Oracle GoldenGate and Oracle Data Integrator are used to feed their data warehouse – providing a real-time data integration solution that feeds transactional data to their analytics system in minutes to enable improved decision making with timely, accurate data for their business users. Oracle Data Integrator's in-database transformation capabilities and its ability to integrate with Oracle GoldenGate for real-time data capture are the foundation of this implementation. With this solution, changes are available in the analytics systems the same day they are deployed on the operational system, with 100% data quality guaranteed. Additionally, the solution has helped to reduce their operational database size from 150GB to 10GB. Impressive! Now what if I told you this solution was built in 3 months and had a less than 6 month return on investment? That's outstanding! The Yalumba Wine Company is situated in the Barossa Valley of Australia. 
It is the oldest family-owned winery in Australia, with a unique way of aging their wines in specially crafted 100 liter barrels. Did you know that "Yalumba" is Aboriginal for "all the land around"? The Yalumba Wine Company is growing rapidly, and needed to introduce a more modern standard to its existing manufacturing processes to meet globalization demands, overall time-to-market goals, and operational efficiency objectives for product development. The Yalumba Wine Company worked with a partner, Bristlecone, to develop a unique solution whereby Oracle Data Integrator is leveraged to pull data from Salesforce.com and JD Edwards, in addition to their other pre-existing source systems, for consumption into their data warehouse. They have emphasized the overall ease of developing integration workflows with Oracle Data Integrator. The solution has brought better visibility for the business users, faster data loading and transformation into their data warehouse with rapid incorporation of new data sources, and a solid, future-proof foundation for their organization. Moving forward, they plan on leveraging more from Oracle's Data Integration portfolio. Terrific! In addition to these two customers, on Tuesday we featured many other important Oracle Data Integrator and Oracle GoldenGate customers. The Tuesday GoldenGate panel included Land O'Lakes, Smuckers, and Veolia Water. Besides giving us yummy nutrition and healthy water, these companies have another aspect in common: they all use GoldenGate to boost their ERP applications. Please read the recap by Irem Radzik. On Wednesday, the ODI panel included Barry Ralston and Ryan Weber of Infinity Insurance, Paul Stracke of Paychex Inc., and Ian Wall of Vertex Pharmaceuticals, for a session filled with interesting projects, use cases and approaches to leveraging Oracle Data Integrator. Please read the recap by Sandrine Riley for more. Thanks to everyone who joined us, and we hope to stay connected! To hear more about our Data Integration 12c products, join us in an upcoming webcast. Follow us at www.twitter.com/ORCLGoldenGate or go to our website at www.oracle.com/goto/dataintegration

    Read the article

  • Securing an ADF Application using OES11g: Part 1

    - by user12587121
    Future releases of the Oracle stack should allow ADF applications to be secured natively with Oracle Entitlements Server (OES). In a sequence of postings here I explore one way to achieve this with the current technology, namely OES 11.1.1.5 and ADF 11.1.1.6. ADF Security Basics ADF Basics The Application Development Framework (ADF) is Oracle's preferred technology for developing GUI based Java applications.  It can be used to develop a UI for Swing applications or, more typically in the Oracle stack, for Web and J2EE applications.  ADF is based on and extends the Java Server Faces (JSF) technology.  To get an idea, Oracle provides an online demo to showcase ADF components. ADF can be used to develop just the UI part of an application, where, for example, the data access layer is implemented using some custom Java beans or EJBs.  However, ADF also has its own data access layer, ADF Business Components (ADF BC), which allows rapid integration of data from databases and web service interfaces into the ADF UI components.  In this way ADF helps implement the MVC approach to building applications with UI and data components. The canonical tutorial for ADF is to open JDeveloper, define a connection to a database, drag and drop a table from the database view onto a UI page, build and deploy.  One has an application up and running very quickly, with the ability to quickly integrate changes to, for example, the DB schema. ADF allows web pages to be created graphically, and components like tables, forms, text fields, graphs and so on to be easily added to a page.  On top of JSF, Oracle has added drag-and-drop tooling with JDeveloper and declarative binding of the UI to the data layer, be it a database, web service or Java beans.  An important addition is the bounded task flow, which is a reusable set of pages and transitions.  ADF adds some steps to the page lifecycle defined in JSF and adds extra widgets, including powerful visualizations. It is worth pointing out that the Oracle WebCenter product (portal, content management and so on) is based on and extends ADF. ADF Security ADF comes with its own security mechanism that is exposed by JDeveloper at development time and in the WLS Console and Enterprise Manager (EM) at run time. The security elements that need to be addressed in an ADF application are: authentication, and authorization of access to web pages, task flows, components within the pages, and data being returned from the model layer. One typically relies on WLS to handle authentication, and because of this users and groups will also be handled by WLS.  Typically in a Dev environment, users and groups are stored in the WLS embedded LDAP server. One has a choice when enabling ADF security (Application->Secure->Configure ADF Security) about whether to turn on ADF authorization checking or not. In the case where authorization is enabled for ADF, one defines a set of roles in which we place users and then we grant access to these roles to the different ADF elements (pages, task flows or elements in a page). An important notion here is the difference between Enterprise Roles and Application Roles. The idea behind an enterprise role is that it is defined in terms of users and LDAP groups from the WLS identity store; "enterprise" in the sense that these are things available for use by all applications that use that store.  The other kind of role is an Application Role, and the idea is that a given application will make use of enterprise roles and users to build up a set of roles for its own use.  
These application roles will be available only to that application. The general idea here is that the enterprise roles are relatively static (for example an Employees group in the LDAP directory) while application roles are more dynamic, possibly depending on time, location, accessed resource and so on. One of the things that OES adds is that we can define these dynamic membership conditions in Role Mapping Policies. To make this concrete, here is how one assigns these rights at design time in JDeveloper, which puts them into a file called jazn-data.xml. When the ADF app is deployed to a WLS, this JAZN security data is pushed to the system-jazn-data.xml file of the WLS deployment for the policies and application roles, and to the WLS backing LDAP for the users and enterprise roles.  Note the difference here: after deploying the application we will see the users and enterprise roles show up in the WLS LDAP server, but the policies and application roles are defined in the system-jazn-data.xml file.  Consult the embedded WLS LDAP server to manage users and enterprise roles by going to the domain console and then Security Realms->myrealm->Users and Groups. For production environments (or, in future, to share this data with OES) one would then perform the operation of "reassociating" this security policy and application role data to a DB schema (or an LDAP).  This is done in the EM console by reassociating the Security Provider.  This blog posting has more explanations and references on this reassociation process. If ADF Authentication and Authorization are enabled, then the Security Policies for a deployed application can be managed in EM.  Our goal, however, is to manage security policies for the application via OES and its console. Security Requirements for an ADF Application With this quick tour of ADF security we can see that to secure an ADF application we would expect to be able to take care of at least the following items: (1) authentication, including a user and user-group store; (2) authorization for page access; (3) authorization for bounded task flow access (a bounded task flow has only one point of entry, so if we protect that entry point by calling out to OES then all the pages in the flow are protected); and (4) authorization for viewing data coming from the data access layer. In the next posting we will describe a sample ADF application and the required security policies. 
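To make the role-based checks above concrete, here is a minimal sketch (not part of the original posting) of how a managed bean in an ADF application might interrogate the security context before exposing a privileged action. It assumes the standard oracle.adf.share security API of ADF 11g; the bean name and the "sales_managers" application role are hypothetical stand-ins for a role granted in jazn-data.xml.

import oracle.adf.share.ADFContext;
import oracle.adf.share.security.SecurityContext;

public class ApprovalBean {

    // True only for authenticated users that hold the given application role.
    public boolean isApprovalAllowed() {
        SecurityContext sec = ADFContext.getCurrent().getSecurityContext();
        if (!sec.isAuthenticated()) {
            return false;                          // anonymous user: keep the action hidden
        }
        return sec.isUserInRole("sales_managers"); // application role granted in jazn-data.xml
    }
}

The same kind of check can typically be expressed declaratively in a page, for example by binding a component's rendered or disabled attribute to the EL expression #{securityContext.userInRole['sales_managers']}, so that the UI and the security policy stay aligned.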
References: ADF Dev Guide: Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework, "Enabling ADF Security in a Fusion Web Application"; Oracle tutorial on securing a sample ADF application (appears to require ADF 11.1.2).

    Read the article

  • Cisco Unified Communication integration for Microsoft Lync crashes on Remote Desktop services 2008 R2!

    - by user66267
    Hi everybody. I have deployed Office Communications Server 2007 R2 and Communicator 2007 R2 and integrated them with Cisco Unified Communications Manager 7.1 in my network. I also use Remote Desktop Services 2008 R2 servers for thin client computers. I installed the Cisco UC integration client for Communicator 2007 R2 (Ver. 8.0.3) or the Cisco UC integration client for Microsoft Lync; either works fine on PCs but not on the Remote Desktop servers. I have three Remote Desktop servers in a farm with load balancing enabled, and all other applications on these RDP servers work fine for 120 active users. Sometimes when I start the Cisco UC client on a Remote Desktop server I get the following error: "The Port Required for callbacks from Cisco unified client framework could not be read, please retry". I also found the following log, which I think may point to the cause: 2011-01-05 08:24:21,489 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.controller.SingleInstanceManager] [SingleInstanceManager.acquireMutex(0)] - Acquiring Mutex... 2011-01-05 08:24:21,512 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.IPC.PipeServer] [PipeServer.start(0)] - Starting Pipe Server 2011-01-05 08:24:21,516 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.controller.SingleInstanceManager] [SingleInstanceManager.acquireMutex(0)] - Mutex Acquired... 2011-01-05 08:24:25,437 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.process.ProcessUtil] [ProcessUtil.isOtherPRTProcessRunning(0)] - No other instance(s) of ProblemReportingTool.exe found 2011-01-05 08:24:25,438 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.controller.Controller] [Controller.Main(0)] - ******************************* 2011-01-05 08:24:25,439 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.controller.Controller] [Controller.Main(0)] - **Launching CUCSF Problem Reporting Tool v0.8.3.2** 2011-01-05 08:24:25,440 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.controller.Controller] [Controller.Main(0)] - ******************************* 2011-01-05 08:24:25,441 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.controller.Controller] [Controller.Main(0)] - Raw input: -reason=Launched by the user from CUCIMOC ver 8.5.105.17095 -file=C:\Users\MA899~1.SAD\AppData\Local\Temp\36\CUCIMOCInstaller.txt 2011-01-05 08:24:25,445 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.controller.Controller] [Controller.Main(0)] - Current culture: English (United States) 2011-01-05 08:24:25,448 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.controller.ResourceUtil] [ResourceUtil.init(0)] - Loading string resources from file 2011-01-05 08:24:25,455 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.context.CLIUtil] [CLIUtil.parse(0)] - Argument -reason Launched by the user from CUCIMOC ver 8.5.105.17095 received 2011-01-05 08:24:25,456 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.context.CLIUtil] [CLIUtil.parse(0)] - Argument -file C:\Users\MA899~1.SAD\AppData\Local\Temp\36\CUCIMOCInstaller.txt received 2011-01-05 08:24:25,457 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.controller.Controller] [Controller.startup(0)] - Launching GUI... 
2011-01-05 08:24:25,536 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.controller.ResourceUtil] [ResourceUtil.getResourceFileString(0)] - Retrieving Key: com.cisco.uc.csf.prt.PROG.PleaseWaitText from resource file 2011-01-05 08:24:25,545 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.controller.ResourceUtil] [ResourceUtil.getResourceFileString(0)] - Retrieving Key: com.cisco.uc.csf.prt.PF.OKButtonText from resource file 2011-01-05 08:24:25,548 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.controller.ResourceUtil] [ResourceUtil.getResourceFileString(0)] - Retrieving Key: com.cisco.uc.csf.prt.PF.CancelButtonText from resource file 2011-01-05 08:24:25,549 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.controller.ResourceUtil] [ResourceUtil.getResourceFileString(0)] - Retrieving Key: com.cisco.uc.csf.prt.PF.ErrorMsgText1 from resource file 2011-01-05 08:24:25,549 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.controller.ResourceUtil] [ResourceUtil.getResourceFileString(0)] - Retrieving Key: com.cisco.uc.csf.prt.PF.Title from resource file 2011-01-05 08:24:25,552 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.controller.ResourceUtil] [ResourceUtil.getResourceFileString(0)] - Retrieving Key: com.cisco.uc.csf.prt.PF.WindowTitle from resource file 2011-01-05 08:24:25,553 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.controller.ResourceUtil] [ResourceUtil.getResourceFileString(0)] - Retrieving Key: com.cisco.uc.csf.prt.PF.AgreeText from resource file 2011-01-05 08:24:25,553 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.controller.ResourceUtil] [ResourceUtil.getResourceFileString(0)] - Retrieving Key: com.cisco.uc.csf.prt.PF.PrivacyText from resource file 2011-01-05 08:24:25,554 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.controller.ResourceUtil] [ResourceUtil.getResourceFileString(0)] - Retrieving Key: com.cisco.uc.csf.prt.PF.PrivacyTitle from resource file 2011-01-05 08:24:25,555 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.controller.ResourceUtil] [ResourceUtil.getResourceFileString(0)] - Retrieving Key: com.cisco.uc.csf.prt.PF.PrivacyLinkText from resource file 2011-01-05 08:24:25,555 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.controller.ResourceUtil] [ResourceUtil.getResourceFileString(0)] - Retrieving Key: com.cisco.uc.csf.prt.PF.DescriptionTitle from resource file 2011-01-05 08:24:25,629 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.SysInfoManager] [SysInfoManager..ctor(0)] - Starting SysInfoManager... 
2011-01-05 08:24:25,634 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.WindowsUtilsInfo] [WindowsUtilsInfo.startWindowsUtilsThreads(0)] - Launching worker thread: systeminfo.exe 2011-01-05 08:24:25,669 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.WindowsUtilsInfo] [WindowsUtilsInfo.startWindowsUtilsThreads(0)] - Launching worker thread: tasklist.exe 2011-01-05 08:24:25,672 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.WindowsUtilsInfo] [WindowsUtilsInfo.startWindowsUtilsThreads(0)] - Launching worker thread: ipconfig.exe 2011-01-05 08:24:25,676 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.WindowsUtilsInfo] [WindowsUtilsInfo.startWindowsUtilsThreads(0)] - Launching worker thread: netstat.exe 2011-01-05 08:24:25,684 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.WindowsUtilsInfo] [WindowsUtilsInfo.startWindowsUtilsThreads(0)] - Launching worker thread: net.exe 2011-01-05 08:24:25,926 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.SysInfoManager] [SysInfoManager.launchHardwareInfoThread(0)] - Launching worker thread: HardwareInfo 2011-01-05 08:24:25,928 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.CSFDirectoryInfo] [HardwareInfo.getHardWareInfo(0)] - Gathering CPU data 2011-01-05 08:24:26,149 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.SysInfoManager] [SysInfoManager.launchCSFDirectoryInfoThread(0)] - Gathering CSF Directory Listing 2011-01-05 08:24:26,153 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.CSFDirectoryInfo] [CSFDirectoryInfo.getCSFInstallPath(0)] - Retrieving CSF Install Directory 2011-01-05 08:24:26,159 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.CSFDirectoryInfo] [CSFDirectoryInfo.getCSFInstallPath(0)] - CSF Install Path: C:\Program Files (x86)\Common Files\Cisco Systems\Client Services Framework 2011-01-05 08:24:26,162 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.SysInfoManager] [SysInfoManager.launchWMIInfoThread(0)] - Launching worker thread: WMIInfo 2011-01-05 08:24:26,164 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.CSFDirectoryInfo] [HardwareInfo.getWMIInfo(0)] - Gathering Audio info... 
2011-01-05 08:24:26,168 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.SysInfoManager] [SysInfoManager.launchRegistryAndEnvironmentalVarInfoThread(0)] - Launching worker thread: Registry & Environment Variables 2011-01-05 08:24:26,173 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.RegistryEnvironmentInfo] [RegistryEnvironmentInfo.generateRegString(0)] - Gathering Registry data under: Software\Cisco Systems, Inc.\Client Services Framework\AdminData\ 2011-01-05 08:24:26,180 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.RegistryEnvironmentInfo] [RegistryEnvironmentInfo.generateRegString(0)] - Gathering Registry data under: Software\Policies\Cisco Systems, Inc.\Client Services Framework\AdminData\ 2011-01-05 08:24:26,182 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.RegistryEnvironmentInfo] [RegistryEnvironmentInfo.generateRegString(0)] - Gathering Registry data under: Software\Cisco Systems, Inc.\Unified Communications\CUCSF 2011-01-05 08:24:26,183 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.RegistryEnvironmentInfo] [RegistryEnvironmentInfo.generateRegString(0)] - Gathering Registry data under: Software\JavaSoft\Java Runtime Environment 2011-01-05 08:24:26,184 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.RegistryEnvironmentInfo] [RegistryEnvironmentInfo.generateRegString(0)] - Gathering Registry data under: Software\JavaSoft\Java Runtime Environment\1.6 2011-01-05 08:24:26,186 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.RegistryEnvironmentInfo] [RegistryEnvironmentInfo.generateRegString(0)] - Gathering Registry data under: Software\JavaSoft\Java Runtime Environment\1.6.0_17 2011-01-05 08:24:26,188 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.RegistryEnvironmentInfo] [RegistryEnvironmentInfo.generateRegString(0)] - Gathering Registry data under: Software\JavaSoft\Java Runtime Environment\1.6.0_17\MSI 2011-01-05 08:24:26,190 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.RegistryEnvironmentInfo] [RegistryEnvironmentInfo.gatherRegistryAndEnvInfo(0)] - Gathering Environment Variables data 2011-01-05 08:24:26,283 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.CSFDirectoryInfo] [HardwareInfo.getWMIInfo(0)] - Gathering Video driver info... 2011-01-05 08:24:26,750 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.SysInfoManager] [SysInfoManager.writeFile(0)] - Creating file: DirectoryInfo.txt 2011-01-05 08:24:26,759 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.CSFDirectoryInfo] [HardwareInfo.getWMIInfo(0)] - Gathering Monitor info... 2011-01-05 08:24:34,483 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.file.FileUtil] [FileUtil.gatherFiles(0)] - Config Dir C:\Users\m.sadeghi\AppData\Roaming\Cisco\Unified Communications\ 2011-01-05 08:24:34,530 [WARN ] [com.cisco.uc.ucsf.ProblemReportingTool.file.FileUtil] [FileUtil.addFile(0)] - C:\Users\MA899~1.SAD\AppData\Local\Temp\36\CUCIMOCInstaller.txt not found 2011-01-05 08:24:34,561 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.file.FileUtil] [FileUtil.addSystemInfo(0)] - Waiting for worker threads... 2011-01-05 08:24:38,180 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.CSFDirectoryInfo] [HardwareInfo.getHardWareInfo(0)] - Gathering Resolution data 2011-01-05 08:24:55,565 [ERROR] [com.cisco.uc.ucsf.ProblemReportingTool.file.FileUtil] [FileUtil.addSystemInfo(0)] - One or more worker threads have not returned in a timely manner. Forcing quit. 
2011-01-05 08:24:55,568 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.sysinfo.SysInfoManager] [SysInfoManager.writeFile(0)] - Creating file: SystemInfo.txt 2011-01-05 08:24:55,577 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.file.FileUtil] [FileUtil.removePrivateFiles(0)] - Checking for files to be excluded 2011-01-05 08:24:55,578 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.file.FileUtil] [FileUtil.removePrivateFiles(0)] - Excluding: d11bfd8f-9745-41db-a35b-200389e65583.dat 2011-01-05 08:24:55,579 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.file.FileUtil] [FileUtil.removePrivateFiles(0)] - Excluding: cacerts 2011-01-05 08:24:55,580 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.file.FileUtil] [FileUtil.removePrivateFiles(0)] - Excluding: Voicemail.2639.20110103081119+0330.wav 2011-01-05 08:24:55,581 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.file.FileUtil] [FileUtil.removePrivateFiles(0)] - Excluding: Voicemail.farhad.20101224165510+0330.wav 2011-01-05 08:24:55,581 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.file.FileUtil] [FileUtil.removePrivateFiles(0)] - Excluding: Voicemail.postmaster.20101224165906+0330.wav 2011-01-05 08:24:55,582 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.file.FileUtil] [FileUtil.removePrivateFiles(0)] - Excluding: VoicemailBeep.wav 2011-01-05 08:24:55,583 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.file.FileUtil] [FileUtil.removePrivateFiles(0)] - Excluding: secModeNone 2011-01-05 08:24:55,586 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Preparing to create zip file... 2011-01-05 08:24:55,588 [INFO ] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - 60 files found 2011-01-05 08:24:55,589 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying .CSFExit.loc to temp folder. 2011-01-05 08:24:55,595 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CSF.loc to temp folder. 2011-01-05 08:24:55,597 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CsfAddress.dat to temp folder. 2011-01-05 08:24:55,600 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CSFLogSetting.dat to temp folder. 2011-01-05 08:24:55,634 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CSFSecurityKey.dat to temp folder. 2011-01-05 08:24:55,637 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CommunicationHistory.xml to temp folder. 2011-01-05 08:24:55,641 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying MehdiSadeghi.cnf.xml to temp folder. 2011-01-05 08:24:55,751 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying jtapi.jar to temp folder. 2011-01-05 08:24:55,812 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CiscoJtapi.index to temp folder. 2011-01-05 08:24:55,820 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CiscoJtapi01.log to temp folder. 2011-01-05 08:24:55,887 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CiscoJtapi02.log to temp folder. 2011-01-05 08:24:55,968 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CiscoJtapi03.log to temp folder. 
2011-01-05 08:24:55,972 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CiscoJtapi04.log to temp folder. 2011-01-05 08:24:56,008 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CiscoJtapi05.log to temp folder. 2011-01-05 08:24:56,038 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CiscoJtapi06.log to temp folder. 2011-01-05 08:24:56,079 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CiscoJtapi07.log to temp folder. 2011-01-05 08:24:56,100 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CiscoJtapi08.log to temp folder. 2011-01-05 08:24:56,140 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CiscoJtapi09.log to temp folder. 2011-01-05 08:24:56,215 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CiscoJtapi10.log to temp folder. 2011-01-05 08:24:56,296 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying Core.log to temp folder. 2011-01-05 08:24:56,319 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying Core.log.1 to temp folder. 2011-01-05 08:24:56,498 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying Core.log.2 to temp folder. 2011-01-05 08:24:56,708 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying Core.log.3 to temp folder. 2011-01-05 08:24:56,912 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying Core.log.4 to temp folder. 2011-01-05 08:24:57,105 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying Core.log.5 to temp folder. 2011-01-05 08:24:57,292 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying Core.log.6 to temp folder. 2011-01-05 08:24:57,505 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying tracker.log to temp folder. 2011-01-05 08:24:57,523 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying VideoEngineEncryptedTrace.txt to temp folder. 2011-01-05 08:24:57,542 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying VoiceEngineDebugTrace.txt to temp folder. 2011-01-05 08:24:57,545 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying VoiceEngineTrace.txt to temp folder. 2011-01-05 08:24:57,548 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying operationreport.log to temp folder. 2011-01-05 08:24:57,551 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying voicemailbox.dat to temp folder. 2011-01-05 08:24:57,554 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying voicemailfolder.dat to temp folder. 2011-01-05 08:24:57,558 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying UIPrefs.xml to temp folder. 2011-01-05 08:24:57,562 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying uc-client.log to temp folder. 
2011-01-05 08:24:57,569 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying uc-client.log.1 to temp folder. 2011-01-05 08:24:57,752 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying uc-client.log.10 to temp folder. 2011-01-05 08:24:58,099 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying uc-client.log.2 to temp folder. 2011-01-05 08:24:58,302 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying uc-client.log.3 to temp folder. 2011-01-05 08:24:58,517 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying uc-client.log.4 to temp folder. 2011-01-05 08:24:58,697 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying uc-client.log.5 to temp folder. 2011-01-05 08:24:58,899 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying uc-client.log.6 to temp folder. 2011-01-05 08:24:59,100 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying uc-client.log.7 to temp folder. 2011-01-05 08:24:59,303 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying uc-client.log.8 to temp folder. 2011-01-05 08:24:59,500 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying uc-client.log.9 to temp folder. 2011-01-05 08:24:59,895 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying Cisco.ClickToCall.Common.Core.dll.config to temp folder. 2011-01-05 08:24:59,915 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying ClickToCall.pref to temp folder. 2011-01-05 08:24:59,918 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CiscoClickToCall.dll.config to temp folder. 2011-01-05 08:24:59,928 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CiscoClickToCallContacts.dll.config to temp folder. 2011-01-05 08:24:59,948 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying CiscoPersonName.dll.config to temp folder. 2011-01-05 08:24:59,980 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying userData.properties to temp folder. 2011-01-05 08:24:59,988 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying userData.properties.backup to temp folder. 2011-01-05 08:24:59,990 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying cisco-uc-client.log4net.config to temp folder. 2011-01-05 08:24:59,994 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying cisco-uc-tab.log4net.config to temp folder. 2011-01-05 08:25:00,011 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying LocalSettings.xml to temp folder. 2011-01-05 08:25:00,025 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying Description.txt to temp folder. 2011-01-05 08:25:00,028 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying LaunchInfo.txt to temp folder. 
2011-01-05 08:25:00,031 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying DirectoryInfo.txt to temp folder. 2011-01-05 08:25:00,034 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying SystemInfo.txt to temp folder. 2011-01-05 08:25:00,036 [DEBUG] [com.cisco.uc.ucsf.ProblemReportingTool.file.Zip] [Zip.zipMultipleFiles(0)] - Copying csf-prt.log to temp folder.

    Read the article


< Previous Page | 154 155 156 157 158 159 160 161 162 163 164 165  | Next Page >