Search Results

Search found 1935 results on 78 pages for 'digital signatures'.


  • Books are Dead! Long Live the Books!

    - by smisner
    We live in interesting times with regard to the availability of technical material. We have lots of free written material online in the form of vendor documentation, forums, blogs, and Twitter. And we have written material that we can buy in the form of books, magazines, and training materials. Online videos and training – some free and some not free – are also an option. All of these formats are useful for one need or another. As an author, I pay particular attention to the demand for books, and for now I see no reason to stop authoring books. I assure you that I don’t get rich from the effort, and fortunately that is not my motivation. As someone who likes to refer to books frequently, I am still a big believer in books and have evidence from book sales that there are others like me. If I can do my part to help others learn about the technologies I work with, I will continue to produce content in a variety of formats, including books. (You can view a list of all of my books on the Publications page of my site and my online training videos at Pluralsight.)

    As a consumer of technical information, I prefer books because a book typically can get into a topic much more deeply than a blog post, and can provide more context than vendor documentation. It comes with a table of contents and a (hopefully accurate) index that helps me zero in on a topic of interest, and of course I can use the Search feature in digital form. Some people suggest that technology books are outdated as soon as they get published. I guess it depends on where you are with technology. Not everyone is able to upgrade to the latest and greatest version at release. I do assume, however, that the SQL Server 7.0 titles in my library have little value for me now, but I’m certain that the minute I discard the book, I’m going to want it for some reason! Meanwhile, as electronic books overtake physical books in sales, my husband is grateful that I can continue to build my collection digitally rather than physically, as the books have a way of taking over significant square footage in our house!

    Blog posts, on the other hand, are useful for describing the scenarios that come up in real-life implementations that wouldn’t fit neatly into a book. As many years as I have been working with the Microsoft BI stack, I still run into new problems that require creative thinking. Likewise, people who work with BI and other technologies that I use share what they learn through their blogs. Internet search engines help us find information in blogs that simply isn’t available anywhere else. Another great thing about blogs is the connection to community and the dialog that can ensue between people with common interests.

    With the trend towards electronic formats for books, I imagine that we’ll see books continue to adapt to incorporate different forms of media and better ways to keep the information current. At the moment, I wish I had a better way to help readers with my last two Reporting Services books. In the case of the Microsoft® SQL Server™ 2005 Reporting Services Step by Step book, I have heard of many cases of readers having problems with the sample database that shipped on CD – either the database was missing or it was corrupt. So I’ve provided a copy of the database on my site for download from http://datainspirations.com/uploads/rs2005sbsDW.zip.

    Then for the Microsoft® SQL Server™ 2008 Reporting Services Step by Step book, we decided to avoid the database problem by using the AdventureWorks2008 samples that Microsoft published on CodePlex (although code samples are still available on CD). We had this silly idea that the URL for the download would remain constant, but it seems that expectation was ill-founded. Currently, the sample database is found at http://msftdbprodsamples.codeplex.com/releases/view/37109, but I have no idea how long that will remain valid.

    My latest books (#9 and #10 – milestones I never anticipated), Building Integrated Business Intelligence Solutions with SQL Server 2008 R2 and Office 2010 (McGraw Hill, 2011) and Business Intelligence in Microsoft SharePoint 2010 (Microsoft Press, 2011), will not ship with a CD, but will provide all code samples for download at a site maintained by the respective publishers. I expect that the URLs for the book downloads will remain valid, but there are lots of references to other sites that can change or disappear over time. Does that mean authors shouldn’t reference such sites? Personally, I think the benefits to be gained from including links are greater than the risks of the links becoming invalid at some point.

    Do you think the time for technology books has come to an end? Is the delivery of books in electronic format enough to keep them alive? If technological barriers were no object, what would make a book more valuable to you than other formats through which you can obtain information?

    Read the article

  • CodePlex Daily Summary for Thursday, May 15, 2014

    CodePlex Daily Summary for Thursday, May 15, 2014

    Popular Releases:

    - Visualizers: Visualizer: This dll is the core of the project; just copy and paste it into the folder %Microsoft Visual Studio%\Common7\Packages\Debugger\Visualizers, where %Microsoft Visual Studio% is the Visual Studio installation folder.
    - QuickMon: Version 3.10: Adds the ability to see the 'history' of Collector states (including details of errors or warnings at that time). The history size is configurable (default is switched off), and the Windows Service completely ignores keeping history (no UI or user to access it anyway). The Collector stats window now displays this history, plus multiple collector stats windows can be opened at the same time. Additionally, fixed a bug in the event log collector that reported an 'Error' state when an 'out of bounds' ...
    - xFunc: xFunc 2.15.4: Fixed bug in Processor.cs.
    - TFS Planning and Disaster Recovery Avoidance Guide: v1.4.BETA - TFS, DR and Azure IaaS Planning Guides: Welcome to the TFS Planning and DR Avoidance Guidance. What is new? A crisper, more compact style which is easier to consume on multiple devices without sacrificing any content. Also included are the new TFS on Azure IaaS guide and supplementary guides. Note: the capacity planning workbook and posters are included in the Everything Zip package. Quality bar: the detail documentation has been reviewed by Visual Studio ALM Rangers and has been through an independent technical review ...
    - WinAudit: WinAudit Freeware v3.0: WinAudit.exe v3.0 MD5: 88750CCF49FF7418199B2645755830FA. Known issues: 1. Report creation can be very slow when right-to-left (Hebrew) characters are present. 2. Emsisoft Anti-Malware may stop and/or quarantine WinAudit. This happens when WinAudit attempts to obtain a list of running programmes. You will need to set an exception rule in Emsisoft to allow WinAudit to run.
    - MVCwCMS - ASP.NET MVC CMS: MVCwCMS 2.2.2: Updated CKFinder config. For installation instructions, visit the documentation page: https://mvcwcms.codeplex.com/documentation
    - TerraMap (Terraria World Map Viewer): TerraMap 1.0.4: Added support for the new Terraria v1.2.4 update (new items, walls, and tiles). Fixed Issue 35206: Highlight/Find doesn't work for Demon Altars. Fixed finding Demon Hearts/Shadow Orbs. Added the ability to find Enchanted Swords (in the stone) and Water Bolt books. Fixed the installer not uninstalling older versions. The setup file will make sure .NET 4 is installed, install TerraMap, create desktop and start menu shortcuts, add a .wld file association, and launch TerraMap. If you prefer the zip ...
    - WPF Localization Extension: v2.2.1: Issues #9277, #9292, #9311, #9312, #9313, #9314.
    - CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.1.41167 Release: This release includes the following significant features: Oculus Rift support, stereoscopic 3D display support, variable walking / flying speed, Xbox 360 Controller support, and Kinect for Windows support. Based on the Firestorm viewer 4.6.5 codebase. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-1-41167-release Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http:/...
    - ExtJS based ASP.NET Controls: FineUI v4.0.6: FineUI is a set of ExtJS-based ASP.NET controls (no JavaScript, no CSS, no UpdatePanel, no ViewState, no WebServices), supporting IE 8.0+, Chrome, Firefox, Opera and Safari, released under the Apache License v2.0 (note: ExtJS itself is licensed under GPL v3, http://www.sencha.com/license). Links: http://fineui.com/ http://fineui.com/bbs/ http://fineui.com/demo/ http://fineui.com/doc/ http://fineui.codeplex.com/
    - Office App Model Samples: Office App Model Samples v2.0.
    - Readable Passphrase Generator: KeePass Plugin 0.13.0: Added "mutators", which add uppercase and numbers to passphrases (to help comply with upper, lower, number complexity rules). Additional API methods which help consuming the generator from third-party C# projects. 13,160 words in the default dictionary (~600 more than the previous release).
    - CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.25.0: The MemberInfo/MethodInfo popup is now positioned properly to fit the screen. In the MethodInfo popup, method signatures are word-wrapped. Implemented a Debug text value visualizer. Pinning sub-values from the Watch Panel.
    - Wrapper for the PAYMILL API: Paymill API Wrapper: Add Description in Preauthorization.
    - How to develop an autodialer / predictive dialer in C#: VoIP AutoDialer in C#: This is the downloadable source code for this example project, intended to help you develop your own VoIP autodialer application in C#.
    - R.NET: R.NET 1.5.12: A beta release towards R.NET 1.6. You are encouraged to use 1.5.12 now and give feedback. See the documentation for setup and usage instructions. Main changes: the C stack limit was not disabled on Windows; for reasons possibly peculiar to R, this means that non-concurrent access to R from multiple threads was not stable. This is now fixed, with the fix validated with a unit test. Thanks to Odugen, skyguy94, and previously others (evolvedmicrobe, tomasp) fo...
    - SEToolbox: SEToolbox 01.029.006 Release 1: Fix to allow keyboard search on the load dialog (type the first few letters of your save). Fixed check for new release. Changed the way ship details are loaded to alleviate load time for worlds with very large ships (100,000+ blocks). Fixed the image importer, which was incorrectly listing 'Asteroid' as an import option. Minor changes to menus (text and appearance) for clarity and OS consistency. Added reading of the world palette for the color dialog editor. WIP on subsystem editor. Can now multiselec...
    - Thaumcraft4 Research: Alpha 2 release, lots of awesome improvements: Performance improvements, asynchronous running of the search, and a search result cap (useful for long routes, so it won't try to find 3000 results). Statistics added.
    - Tiny Wifi Host: Tiny Wifi Host 3.0.0.0: Tiny Wifi Hotspot Creator (Portable) v3, size 50KB-140KB. New features: friendly names for connected devices instead of MAC addresses (double-click a selected device to enter a friendly name); saves device names to devices.xml; better error reporting and solutions; a warning sound when the number of connected devices exceeds a set limit (useful when only a certain number of devices must be connected at a time). Many bug fixes. NoAudio files do not include connect, disconnect and warning audio to dec...
    - Media Companion: Media Companion MC3.597b: Thank you for being patient, again. There are a number of fixes in this release, and some new features added. Most are self-explanatory, so check out the options in Preferences. New features: Movie - allow saving Title and Sort Title in Title Case format; Movie - allow saving fanart.jpg if the movie is in a folder; TV - display episode source, getting the episode source from the episode filename. Fixed: Movie - added Fill Tags from plot keywords to the Batch Rescraper; Movie - fixed TMDB s...

    New Projects:

    - BLADE View Engine - An independent RAZOR alternative: An alternative parser and view engine to ASP.NET Razor that does not depend on ASP.NET or .NET 4.5.
    - BPL++: A basic object-oriented programming language built upon a virtual machine using C#.
    - Caedisi - A Cellular Automata Editor and Simulator for Network Decontamination: A research tool that explores the use of cellular automata to decontaminate a network attacked or infected by a virus.
    - Cosmos Software Distrobution 1.0: No content until the project is released!
    - ETCQuality: ETC QUALITY STAT SYSTEM.
    - F.A.Q.: A project about frequently asked questions.
    - Help Viewer Redirector: Enables use of Help Viewer 2.0 or 2.1 with SQL Server instead of Help Viewer 1.0.
    - Jet.Payment.Cielo: This project integrates with the Cielo e-commerce service, developed in C#. We created this helper for our own needs; help us improve it.
    - Learning with Pati: Learning JavaScript with Pati.
    - LeRenard: A collection of solutions (from core helpers and extension methods to libraries), all written in C#, to help build applications.
    - MCMP.NET: Allows ASP.NET applications to be added as contexts using mod_cluster.
    - PowerShell Deployment Automation Framework: This project contains resources related to my blog at www.powershellcoach.com.
    - pravda-f
    - sdir: Colorful, sorted and optionally rich directory enumeration for Windows.
    - SienSchoofsProject: School project for .NET Expert.
    - Sorting collection of any type by several fields: Sorting a collection of any type by several fields, using Reflection and Expression Trees.
    - TPS BarCode Scanner: This is going to be a collaboration platform for the TPS team. We want to develop a simple, low-cost bar code generator and scanner system. This project is currentl...
    - VerifyDomainOutlookAddIn
    - Ynote Packages: A collection of tools which play a crucial role in extending Ynote Classic and understanding its true potential.

    Read the article

  • Windows for IoT, continued

    - by Valter Minute
    Originally posted on: http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2014/08/05/windows-for-iot-continued.aspx

    I received a lot of interesting feedback on my previous blog post, and I tried to find some time to do some additional tests. Bert Kleinschmidt pointed out that pins 2, 3 and 10 of the Galileo are connected directly to the SOC, while pin 13, the one used for the sample sketch, is controlled via an I2C I/O expander. I changed my code to use pin 2 instead of 13 (just changing the variable assignment at the beginning of the code) and latency was greatly reduced. Now each pulse lasts for 1.44ms, 44% more than the expected time, but far better than the result we got using pin 13. I also used SetThreadPriority to increase the priority of the thread that was running the sketch to THREAD_PRIORITY_HIGHEST, but that didn't change the results. When I was using the I2C-controlled pin I tried the same, and the timings got far worse (increasing more than 10 times), so I did not comment on that part at the time, wanting to investigate the issue a bit more in detail. It seems that increasing the priority of the application thread negatively impacts the I2C communication.

    I also tried the Linux-based implementation (using a different Galileo board, since the one provided by MS seems to use a different firmware), and the results of running the sample blink sketch, modified to use pin 2 and blink the led for 1ms, are similar to those we got on the same board running Windows. Here the difference between expected time and measured time is worse, getting around 3.2ms instead of 1 (320% compared to 150% using Windows, but far from the 100.1% we got with the 8-bit Arduino). Both systems were not under load during the test; maybe loading some applications that use part of the CPU time would make those timings even less reliable, but I think that those numbers are enough to draw some conclusions. It may not be worth running a full OS if what you need is Arduino compatibility. The Arduino UNO is probably the best Arduino you can find to perform this kind of development.

    The Galileo running the Linux-based stack or running Windows for IoT is targeted to be a platform for "Internet of Things" devices, whatever that means. At the moment I don't see the "I" part of IoT. We have low-level interfaces (SPI, I2C, the GPIO pins) that can be used to connect sensors, but the support for connectivity is limited, and the amount of work required to deliver some data to the cloud (using a secure HTTP request or a message queuing system like AMQP or MQTT) is still big, and the rich OS underneath seems not to provide any help doing that. Why should I have to use sockets, without access to all the high-level connectivity features we have on "full" Windows? I know that it's possible to use some third-party libraries, try to build them using the Windows for IoT SDK, etc., but this means re-inventing the wheel every time, and it can also lead to some IP concerns if used for products meant to be closed-source. I hope that MS and Intel (and others) will focus less on the "coolness" of running (some) Arduino sketches and more on providing a better platform to people who really want to design devices that leverage internet connectivity and the cloud's processing power to deliver better products and services. Providing a reliable set of connectivity services would be a great start. Providing support for .NET would be even better, leaving native code available for hardware access etc.

    I know that those components may require additional storage and memory, etc., so making the OS componentizable (or, at least, providing a way to install additional components) would be a great way to let developers pick the parts of the system they need to develop their solution, knowing that they will integrate well together. I can understand that the Arduino and Raspberry Pi* success may have attracted the attention of marketing departments worldwide, and almost any new development board these days is promoted as "XXX's response to Arduino" or "YYYY's alternative to Raspberry Pi", but this is misleading and prevents companies from focusing on how to deliver good products and how to integrate "IoT" features with their existing offer to provide, in the end, a better product or service to their customers. Marketing is important, but it can't decide the key features of a product (the OS) that is going to be used to develop full products for end customers, integrating it with hardware and application software. I really like the "hackable" nature of open-source devices and like to see that companies are getting more and more open in releasing information, providing "hackable" devices and supporting developers with documentation, good samples etc. On the other side, being able to run a sketch designed for an 8-bit microcontroller on a full-featured application processor may sound cool and an easy upgrade path for people who have just experimented with sensors etc. on Arduino, but it's not, in my humble opinion, the main path to follow for people who want to deliver real products.

    *Shameless self-promotion: if you are looking for a good book in Italian about the Raspberry Pi, try mine: http://www.amazon.it/Raspberry-Pi-alluso-Digital-LifeStyle-ebook/dp/B00GYY3OKO
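    To put the "amount of work" in perspective, here is a minimal sketch of what posting a sensor reading to a cloud endpoint over a secure HTTP request could look like if the device exposed the full .NET networking stack (which, as noted above, Windows for IoT did not at the time). The endpoint URL and JSON payload are hypothetical placeholders:

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class CloudSender
        {
            static void Main()
            {
                // Hypothetical ingestion endpoint; replace with a real service URL.
                var request = (HttpWebRequest)WebRequest.Create("https://example-iot-endpoint.net/telemetry");
                request.Method = "POST";
                request.ContentType = "application/json";

                // A made-up sensor reading, serialized by hand to avoid extra dependencies.
                byte[] payload = Encoding.UTF8.GetBytes("{\"pin\":2,\"pulseMs\":1.44}");
                request.ContentLength = payload.Length;

                using (Stream body = request.GetRequestStream())
                {
                    body.Write(payload, 0, payload.Length);
                }

                // TLS, certificate validation and HTTP framing are all handled by the
                // framework here; this is the kind of high-level help the post asks for.
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine("Server answered: " + response.StatusCode);
                }
            }
        }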

    Read the article

  • Drive Online Engagement with Intuitive Portals and Websites

    - by kellsey.ruppel
    As more and more business is being conducted via online channels, engaging users and making them more productive and efficient through these online channels is becoming critical. These users could be customers, partners or employees, and while the respective channels through which they interact might be different, these users do increasingly interact with your business through the Web, mobile devices, and various social mediums. Businesses need a user engagement strategy and solution that allows them to deliver targeted and personalized content and applications to users through the various online mediums and touch points.

    The customer experience today is made up of an ongoing set of interactions with organizations across many channels, online and offline. The Direct channel (including sales reps, email and mail) is an important point of contact, as is the Contact Center. Contact Centers rely on the phone as a means of interacting with customers, and now more than ever, the Web as well. However, the online organization is often managed separately from the Contact Center organization within a business. In-store is an important channel for retailers, offering Point-of-Service for human interactions, and kiosks which enable self-service. Kiosks are a Web-enabled touch point, but in-store kiosks are often managed by the head of retail operations rather than the online organization. And of course the online channel, which includes customer interactions with an organization via digital means (the website, mobile websites, and social networking sites), has risen to paramount importance in recent years in the customer experience.

    Historically, all of these channels have been managed separately. The result of all of this fragmentation is that the customer touch points with an organization are siloed. Their interactions online are not known and respected in their dealings in-store. Their calls to the contact center are not taken as input into what the website offers them when they arrive. Think of how many times you’ve fallen victim to this: your experience with the company call center is different than the experience in-store, and your experience with the company website on your desktop computer is different than your experience on your iPad. I think you get the point.

    But the customer isn’t the only one we need to look at here, as employees and the IT organization have challenges as well when it comes to online engagement. There are many common tools and technologies that organizations have been using to try to engage users, whether customers, employees or partners. Some have adopted tools ranging from blog and wiki technologies (some hosted, some open source, sometimes embedded in platforms) to tagging, file sharing and content management, or composite applications for self-service and activity streams. Basically, there are many different tools and technologies that each address different aspects of user engagement. One of the challenges with this is that implementing, for example, a file sharing and basic collaboration solution may meet the needs of the business user for one aspect of user engagement, but it may not be the best solution to engage with customers and partners, or it may not fit IT standards such as integrating with single sign-on tools or the corporate website.

    Often, the scenario is that businesses have to acquire multiple pieces and parts, as well as build custom applications, to meet their needs, leaving customers and partners with a more fragmented way of interacting with the company. Every organization has some sort of enterprise balancing act between the needs of the business user and the needs and restrictions enforced by enterprise IT groups.

    As we’ve been discussing, we all know that the expectations for online engagement have changed since the days of the static, one-size-fits-all website. With these changes have come some very difficult organizational challenges as well. Today, as a business user, you want to engage with your customers, and your customers expect you to know who they are. They expect you to recall the details they’ve provided to you on your website, to your CSRs and to your sales people. They expect you to remember their purchases, their preferences and their problems. And they expect you to know who they are, equally well, across channels, including your web presence.

    This creates a host of challenges for today’s business users. Delivering targeted, relevant content online is now essential for converting prospects into customers and for engendering long-term loyalty. Business users need the ability to leverage customer data from different sources to fuel their segmentation and targeting strategies and to easily set up, manage and optimize online campaigns. Also critical, they need the ability to accomplish these things on the fly, at the speed of the marketplace, while making iterative improvements.

    These changing expectations put a host of demands on the IT organization as well. The web presence must be able to scale to support the delivery of personalized and targeted content to thousands of site visitors without sacrificing performance. And integration between systems becomes more important as well, as organizations strive to obtain one view of the customer culled from WCM data, CRM data and more.

    So then, how do you solve these challenges and meet the growing demands of your users? You need a solution that:
    - Unifies every customer interaction across all channels
    - Personalizes the products and content that interest the customer, tailored to the device
    - Delivers targeted promotions to the right customer
    - Engages employees and improves their productivity
    - Provides self-service access to applications
    - Includes embedded, in-context social capabilities

    So how do you achieve this level of online engagement, deliver a complete customer experience and engage your employees? The answer: Oracle WebCenter. If you want to learn how to get there, we encourage you to attend this Thursday's webcast, Drive Online Engagement with Intuitive Portals and Websites, where we'll talk about how you can transform your portal experience and optimize online engagement, making your portals more interactive and more engaging across multiple channels. Register today!

    Read the article

  • Content in Context: The right medicine for your business applications

    - by Lance Shaw
    For many of you, your companies have already invested in a number of applications that are critical to the way your business is run: HR, Payroll, Legal, Accounts Payable, and so on. While they might need an upgrade in some cases, they are all there, handling the lifeblood of your business. But are they really running as efficiently as they could be? For many companies, the answer is no. The problem has to do with the important information caught up within documents and paper. It’s everywhere except where it truly needs to be – readily available right within the context of the application itself. When the right information cannot be easily found, business processes suffer significantly.

    The importance of this struck me recently when I went to meet my new doctor and get a routine physical. Walking into the office lobby, I couldn't help but notice rows and rows of manila folders in racks from floor to ceiling, filled with documents and sensitive, personal information about various patients like myself. As I looked at all that paper and all that history, two things immediately popped into my head: “How do they find anything?” and then the even more alarming, “So much for information security!” It sure looked to me like all those documents could be accessed by anyone with a key to the building.

    Now, the truth is that the offices of many general practitioners look like this all over the United States and the world. But it had me thinking: is the same thing going on in just about every company around the world, involving a wide variety of important business processes? Probably so. Think about all the various processes going on in your company right now. Invoice payments are being processed through Accounts Payable, contracts are being reviewed by Procurement, and Human Resources is reviewing job candidate submissions and doing background checks. All of these processes, and many more like them, rely on access to forms and documents, whether they are paper or digital. Now consider that it is estimated that employees spend nearly 9 hours a week searching for information and not finding it. That is a lot of very well paid employees spending more than one day per week not doing their regular job while they search for or re-create what already exists.

    Back in the doctor’s office, I saw this trend exemplified as well. First, I had to fill out a new patient form, even though my previous doctor had transferred my records over months previously. After filling out the form, I was later introduced to my new doctor, who then interviewed me and asked me the exact same questions that I had answered on the form. I understand that there is value in the interview process, and it was great to meet my new doctor, but this simple process could have been so much more efficient if the information already on file could have been brought directly together with the new patient information I had provided. Instead of having a highly paid medical professional re-enter the same information into the records database, the form I filled out could have been immediately scanned into the system, associated with my previous information, discrepancies identified, and the entire process streamlined significantly.

    We won’t solve the health records management issues that exist in the United States in this blog post, but this example illustrates how the automation of information capture and classification can eliminate a lot of repetitive and costly human entry and re-creation, even in a simple process like new patient on-boarding. In a similar fashion, by taking a fresh look at the various processes in place today in your organization, you can likely spot points along the way where automating the capture of, and access to, the right information could bring significant improvement.

    As you evaluate how content flows through your organization's processes, take a look at how departments and regions share information between the applications they are using. Business applications are often implemented on an individual department basis to solve specific problems, but a holistic approach to overall information management is not taken at the same time. The end result over the years is disparate applications with separate information repositories, and in many cases these contain duplicate information or, worse, slightly different versions of the same information.

    This is where Oracle WebCenter Content comes into the story. More and more companies are realizing that they can significantly improve their existing application processes by automating the capture of paper, forms and other content. This makes the right information immediately accessible in the context of the business process, and making the same information accessible across departmental systems has helped many organizations realize significant cost savings.

    Here on the Oracle WebCenter team, one of our primary goals is to help customers find new ways to be more effective and more cost-efficient, and to manage information as effectively as possible. We have a series of three webcasts occurring over the next few weeks that are focused on the integration of enterprise content management within the context of business applications. We hope you will join us for one or all three and that you will find them informative. Click here to learn more about these sessions and to register for them.

    There are many aspects of information management to consider as you look at integrating content management within your business applications. We've barely scratched the surface here, but look for upcoming blog posts where we will discuss more specifics on the value of delivering documents, forms and images directly within applications like Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards EnterpriseOne, Siebel CRM and many others.

    What do you think? Are your important business processes as healthy as they can be? Do you have any insights to share on the value of delivering content directly within critical business processes? Please post a comment and let us know the value you have realized, the lessons learned, and what specific areas you are interested in.

    Read the article

  • Eloqua Experience 2013: Mystique, Modern Marketing and Masterful Engagement

    - by Mike Stiles
    The following is a guest post from Erick Mott, a social business leader at Oracle Eloqua.

    There’s a growing gap between 20th-century marketing and a modern marketing way of doing business. I can’t think of a better example of modern marketing in action than what more than 2,000 people experienced in San Francisco at #EE13: customer obsession, multichannel content, and real-time engagement all coming together at one extraordinary event. This was my first Eloqua Experience as a new Oracle Eloqua employee. In the weeks prior, I heard about the mystique but didn’t know what to expect. What I’ve come to understand with more clarity is that everything we do revolves around customer success, and we operate and educate at all times with these five tenets in mind:
    1. Targeting: Really Know Your Buyer
    2. Engagement: Create a 1:1 Relationship
    3. Conversion: Visualize Guided Thinking
    4. Analysis: Learn What’s Working
    5. Marketing Technology: Enable and Extend the Cloud

    Product News from Eloqua Experience 2013

    We made some announcements that John Stetic, VP of Products, Oracle Eloqua, covers in this brief ‘Modern Marketing Minute’ video recorded after Wednesday’s keynote; they are summarized below, too:

    - Oracle Eloqua AdFocus: While understanding the impact of a specific marketing channel was formerly relegated to marketers’ wish lists, the channels we now focus on are digital, social, and mobile. AdFocus gives marketers a single platform to dynamically create, manage and measure display ads alongside owned and earned media. AdFocus enables marketers to target only the key accounts or prospects they want to reach with display ads, as well as provide creative content or personalized ad copy based on their persona and activities.
    - Oracle Eloqua Profiler: The details of what we now know about customers have expanded into a universal customer profile, which can be used to create highly targeted segments. Marketers can now take data that’s not even stored in Eloqua to help target and score prospects for a complete, multichannel view of the customer. Profiler gives sales reps one detailed view of the prospect, extending views beyond Oracle Eloqua asset activity (emails, forms, page views) to any external assets stored in Oracle Eloqua.
    - Marketing Resource Management: New capabilities create more secure and controlled access to marketing resources and data. New integrations provide greater insight into campaign resources and management through a central marketing calendar and simplify resource management.
    - Integrated Sales and Marketing Funnel: An integrated sales and marketing funnel view gives marketing and sales users, cross-functional teams, and executive management a consistent and clear view of pipeline performance. It also quickly provides users with historical metrics across different time spans and conditions.
    - Eloqua AppCloud: More than 20 new AppCloud partners have been added to the community, which now includes 100+ apps. Eloqua AppCloud now provides modern marketers with an even broader range of marketing applications that help expand and enrich sales and marketing efforts, easily accessible in the Topliners Community.
    - Social Capabilities: Recent integration between Oracle Eloqua and Oracle Social Relationship Management (SRM) delivers a comprehensive, scalable and integrated modern marketing solution. New capabilities include better tracking of social activities for a more complete customer profile, and engaging Facebook custom audiences with AdFocus to deliver ads and meaningful experiences through trusted social networks.

    Biggest and Best Eloqua Experience

    There’s a lot of talk in the industry about the Marketing Cloud. At Oracle Eloqua, we have been on a mission of delivering the most advanced and integrated modern marketing technology on the planet. It’s not just a concept but a reality with proven execution, as seen first-hand this week in San Francisco. In this video, Kevin Akeroyd, SVP of Oracle Eloqua, provides some highlights of what made this year’s Eloqua Experience exceptional, including Steve Woods’ presentation about the journey of modern marketers and Andrea Ward’s conversation with Vince Gilligan, creator of the Breaking Bad television series.

    The 2013 Markie Awards

    The Oracle Eloqua Marketing Cloud was best exemplified for me as 19 Markies were awarded to customers for their exceptional creativity and results as modern marketers. Wow, what a night to remember, with so many committed and talented people working to create an extraordinary experience!

    To learn more about how to become a modern marketer, check out these resources. We look forward to seeing you next year at Eloqua Experience.

    More on Erick: 20 years’ experience at Oracle, Ektron, Sitecore, Lyris, Habeas, Nokia, creatorbase, Mark Monitor, Cisco Systems, GlobalFluency, Sun Microsystems, Philips NV, Elm Products and CBS TV. Patent holder with agency, Fortune 500, media, and startup company expertise. @mikestiles

    Read the article

  • Why won't USB 3.0 external hard drive run at USB 3.0 speeds?

    - by jgottula
    I recently purchased a PCI Express x1 USB 3.0 controller card (containing the NEC USB 3.0 controller) with the intent of using a USB 3.0 external hard drive with my Linux box. I installed the card in an empty PCIe slot on my motherboard, connected the card to a power cable, strung a USB 3.0 cable between one of the new ports and my external HDD, and connected the HDD to a wall socket for power. Booting the system, the drive works 100% as intended, with the one exception of throughput: rather than using SuperSpeed 4.8 Gbps connectivity, it seems to be falling back to High Speed 480 Mbps USB 2.0-style throughput. Disk Utility shows it as a 480 Mbps device, and running a couple Disk Utility and dd benchmarks confirms that the drive fails to exceed ~40 MB/s (the approximate limit of USB 2.0), despite it being an SSD capable of far more than that.

    When I connect my USB 3.0 HDD, dmesg shows this:

        [ 3923.280018] usb 3-2: new high speed USB device using ehci_hcd and address 6

    where I would expect to find this:

        [ 3923.280018] usb 3-2: new SuperSpeed USB device using xhci_hcd and address 6

    My system was running on kernel 2.6.35-25-generic at the time. Then, I stumbled upon this forum thread by an individual who found that a bug, which was present in kernels prior to 2.6.37-rc5, could be the culprit for this type of problem. Consequently, I installed the 2.6.37-generic mainline Ubuntu kernel to determine if the problem would go away. It didn't, so I tried 2.6.38-rc3-generic, and even the 2.6.38 nightly from 2010.02.01, to no avail. In short, I'm trying to determine why, with USB 3.0 support in the kernel, my USB 3.0 drive fails to run at full SuperSpeed throughput. See the comments under this question for additional details.

    Output that might be relevant to the problem (when booting from 2.6.38-rc3):

    Relevant lines from dmesg:

        [   19.589491] xhci_hcd 0000:03:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
        [   19.589512] xhci_hcd 0000:03:00.0: setting latency timer to 64
        [   19.589516] xhci_hcd 0000:03:00.0: xHCI Host Controller
        [   19.589623] xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 12
        [   19.650492] xhci_hcd 0000:03:00.0: irq 17, io mem 0xf8100000
        [   19.650556] xhci_hcd 0000:03:00.0: irq 47 for MSI/MSI-X
        [   19.650560] xhci_hcd 0000:03:00.0: irq 48 for MSI/MSI-X
        [   19.650563] xhci_hcd 0000:03:00.0: irq 49 for MSI/MSI-X
        [   19.653946] xHCI xhci_add_endpoint called for root hub
        [   19.653948] xHCI xhci_check_bandwidth called for root hub

    Relevant section of sudo lspci -v:

        03:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03) (prog-if 30)
            Flags: bus master, fast devsel, latency 0, IRQ 17
            Memory at f8100000 (64-bit, non-prefetchable) [size=8K]
            Capabilities: [50] Power Management version 3
            Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+
            Capabilities: [90] MSI-X: Enable+ Count=8 Masked-
            Capabilities: [a0] Express Endpoint, MSI 00
            Capabilities: [100] Advanced Error Reporting
            Capabilities: [140] Device Serial Number ff-ff-ff-ff-ff-ff-ff-ff
            Capabilities: [150] #18
            Kernel driver in use: xhci_hcd
            Kernel modules: xhci-hcd

    Relevant section of sudo lsusb -v:

        Bus 012 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Device Descriptor:
          bLength                18
          bDescriptorType         1
          bcdUSB               3.00
          bDeviceClass            9 Hub
          bDeviceSubClass         0 Unused
          bDeviceProtocol         3
          bMaxPacketSize0         9
          idVendor           0x1d6b Linux Foundation
          idProduct          0x0003 3.0 root hub
          bcdDevice            2.06
          iManufacturer           3 Linux 2.6.38-020638rc3-generic xhci_hcd
          iProduct                2 xHCI Host Controller
          iSerial                 1 0000:03:00.0
          bNumConfigurations      1
          Configuration Descriptor:
            bLength                 9
            bDescriptorType         2
            wTotalLength           25
            bNumInterfaces          1
            bConfigurationValue     1
            iConfiguration          0
            bmAttributes         0xe0
              Self Powered
              Remote Wakeup
            MaxPower              0mA
            Interface Descriptor:
              bLength                 9
              bDescriptorType         4
              bInterfaceNumber        0
              bAlternateSetting       0
              bNumEndpoints           1
              bInterfaceClass         9 Hub
              bInterfaceSubClass      0 Unused
              bInterfaceProtocol      0 Full speed (or root) hub
              iInterface              0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x81  EP 1 IN
                bmAttributes            3
                  Transfer Type            Interrupt
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0004  1x 4 bytes
                bInterval              12
        Hub Descriptor:
          bLength                 9
          bDescriptorType        41
          nNbrPorts               4
          wHubCharacteristic 0x0009
            Per-port power switching
            Per-port overcurrent protection
            TT think time 8 FS bits
          bPwrOn2PwrGood         10 * 2 milli seconds
          bHubContrCurrent        0 milli Ampere
          DeviceRemovable      0x00
          PortPwrCtrlMask      0xff
        Hub Port Status:
          Port 1: 0000.0100 power
          Port 2: 0000.0100 power
          Port 3: 0000.0100 power
          Port 4: 0000.0100 power
        Device Status:     0x0003
          Self Powered
          Remote Wakeup Enabled

    Full, non-verbose lsusb:

        Bus 012 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 011 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 010 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 009 Device 003: ID 04d9:0702 Holtek Semiconductor, Inc.
        Bus 009 Device 002: ID 046d:c068 Logitech, Inc. G500 Laser Mouse
        Bus 009 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 006: ID 174c:5106 ASMedia Technology Inc.
        Bus 003 Device 004: ID 0bda:0151 Realtek Semiconductor Corp. Mass Storage Device (Multicard Reader)
        Bus 003 Device 002: ID 058f:6366 Alcor Micro Corp. Multi Flash Reader
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 006: ID 1687:0163 Kingmax Digital Inc.
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 002: ID 046d:081b Logitech, Inc.
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Full output: full dmesg, full lspci, full lsusb

    Read the article

  • Some Early Considerations

    - by Chris Massey
    Following on from my previous post, I want to say "thank you" to everyone who has got in touch and got involved – you are pioneers! An update on where we are right now: paper prototypes v1. To be more specific, we’ve picked two of the ideas that seem to have more pros than cons, turned them into Balsamiq mockups, and are getting them fleshed out with realistic content. We’ll initially make these available to the aforementioned pioneers (thank you again), roll in the feedback, and then open up to get more data on what works and what doesn’t. If you’ve got any questions about this (or what we’re working on right now), feel free to ask me in the comments below. I’ve had a few people express an interest in the process we’re going through, and I’m more than happy to share details more frequently as we go along – not least because you, dear reader, will help us stay on target and create something Good. To start with, here’s a quick flashback to bring you all up to speed.

    A Brief Retrospective

    As you may already know, we’re creating a new publishing asset specifically focused on providing great content for web developers. We don’t yet know exactly what this thing will look like, or exactly how it will work, but we know we want to create something that is usefully different. For my part, I’m seriously excited at the prospect of building a genuinely digital publishing system (as opposed to what most publishing is these days, which is print-style publishing that just happens to be on the web). The main challenge at this point is working out our build-measure-assess loop to speed up our experimental turn-around, and that’ll get better as we run more trials.

    Of course, there are a few things we’ve been pondering at this early conceptual stage:
    - Do we publish about heterogeneous technology stacks from day 1, or do we start with ASP.NET (which we’re familiar with) and branch out later? There are challenges with either approach.
    - What publishing "modes" are already being well handled? For example, the likes of Pluralsight, TekPub, and Treehouse have pretty much nailed video training (debate about price, if you like), and unless we think we can do it faster / better / cheaper (unlikely, for the record), we should leave them to it.
    - Where should we base whatever we create? Should we create a completely new asset under a new name, graft something onto Simple-Talk (like the labs), or just build something directly into Simple-Talk? It sounds trivial, but it does have at least some impact on infrastructure and how we manage the different types of content we (will) have.
    - Are there any obvious problems or niches that we think we could address really well, or should we just throw ideas out and see what readers respond to?
    - What kind of users do we want to provide for? This actually deserves a little bit of unpacking…

    Why are you here?

    We currently divide readers into (broadly) these categories:
    - Category 1: I know nothing about X, and I’d like to learn about it.
    - Category 2: I know something about X, but I’d like to learn how to do something specific with it.
    - Category 3: Ah man, I have a problem with X, and I need to fix it now.

    Now that I think about it, I might also include a 4th class of reader:
    - Category 4: I’m looking for something interesting to engage my brain.

    These are clearly task-based categorizations, and depending on which task you’re performing when you arrive here, you’re going to need different types of content, or will have specific discovery needs. One of the questions at the back of my mind whenever I consider a new idea is “How many of the categories will this satisfy?” As an example, typical video training is very well suited to categories 1, 2, and 4. StackOverflow is very well suited to category 3, and serves as a sign-posting system to the rest. Clearly it’s not necessary to satisfy every category’s need to be useful and popular, but being aware of what behavior readers might be exhibiting when they arrive will help us tune our ideas appropriately.

    < / Flashback >

    We don’t have clean answers to most of these considerations – they’re things we’re aware of, and each idea we look at is going to be best suited to a different mix of the options I’ve described. Our first experimental loop will be coming full circle in the next few days, so we should start to see how the different possibilities vary between ideas. Feel free to chime in with questions and suggestions about anything I’ve just brain-dumped, or at any stage as we go along. If you see anything that intrigues or enrages you, or just have an idea you’d like to share, I’d love to hear from you.

    Read the article

  • Revisiting the Generations

    - by Row Henson
    I was asked earlier this year to contribute an article to the IHRIM publication – Workforce Solutions Review. My topic focused on the reality of the Gen Y population 10 years after their entry into the workforce. Below is an excerpt from that article:

    It seems like yesterday that we were all talking about the entry of the Gen Y'ers into the workforce and what a radical change that would bring to how we attract, retain, motivate, reward, and engage this new, younger segment of the workforce. We all heard and read that these youngsters would be more entrepreneurial than their predecessors – the Gen X'ers – who were said to be more loyal to their profession than their employer. And we heard that these “youngsters” would certainly be far less loyal to their employers than the Baby Boomers or even earlier Traditionalists. It was also predicted that – at least for the developed parts of the world – they would be more interested in work/life balance than financial reward; they would need constant and immediate reinforcement and recognition, and we would be lucky to have them in our employment for two to three years. And to keep them longer than that, we would need to promote them often so they would be continuously learning, since their long-term (10-year) goal would be to own their own business or be an independent consultant.

    Well, it occurred to me recently that the first of the Gen Y'ers are now in their early 30s, and it is time to look back on some of these predictions. Many really believed the Gen Y'ers would enter the workforce with an attitude – expecting everything to be easy for them, and demanding that their employers meet their expectations or they would move to the next employer – and I believe that we can now say that, generally, this has not been the case. Speaking from personal experience, I have mentored a number of Gen Y'ers and initially felt that, with a 40-year career in Human Resources and Human Resources Technology, I could share a lot with them. I found out very quickly that I was learning at least as much from them! Some of the amazing attributes I found in these under-30s were their fearlessness, the ease with which they were able to multi-task, their amazing energy and their great technical savvy. They were very comfortable collaborating with colleagues from both inside the company and peers outside their organization to problem-solve quickly. Most were eager to learn and willing to work hard.

    This brings me to the generation that will follow the Gen Y'ers – the Generation Z'ers – those born after 1998. We have come full circle. If we look at the Silent Generation or Traditionalists, we find a workforce that preceded the television and even very early telephones. We Baby Boomers (and I fall right squarely in this category) remember the invention of the television and telephone – but laptop computers and personal digital assistants (PDAs) were a thing of “Star Trek” and other science fiction movies and publications. Certainly, the Gen X'ers and Gen Y'ers grew up with the comfort of these devices, just as we did with calculators. But what of those under the age of 10 – how will the workplace look in 15 more years, and what type of workforce will be required to operate in the mobile, global, virtual world? I spoke to a friend recently who had her four-year-old granddaughter for a visit. She said she found her in the den in front of the TV, trying to use her hand to get the screen to move! So, you see – we have come full circle.

    The under-70 Traditionalist grew up in a world without TV, and the Generation Z'er may never remember the TV we knew just a few years ago. As with every generation, we spend much time generalizing about their characteristics. The most important thing to remember is that every generation – just like every individual – is different. The important thing for those of us in Human Resources to remember is that one size doesn’t fit all. What motivates one employee to come to work for you, stay there and be productive is very different from what the next employee is looking for, and the organization that can provide this fluidity and flexibility will be the survivor for generations to come. And finally, just when we think we have it figured out, a multitude of external factors such as the economy, world politics, industries, and technologies we haven’t even thought about will come along and change those predictions. As I reach retirement age, I do so believing that our organizations are in good hands with the generations to follow – energetic, collaborative and capable of working hard while still understanding the need for balance at work, at home and in the community!

    Read the article

  • Configuring WCF to Handle a Signature on a SOAP Message from an Oracle Server

    - by AlEl
    I'm trying to use WCF to consume a web service provided by a third-party's Oracle Application Server. I pass a username and password and as part of the response the web service returns a standard security tag in the header which includes a digest and signature. With my current setup, I successfully send a request to the server and the web service sends the expected response data back. However, when parsing the response, WCF throws a MessageSecurityException, with an InnerException.Message of "Supporting token signatures not expected."

    My guess is that WCF wants me to configure it to handle the signature and verify it. I have a certificate from the third party that hosts the web service that I should be able to use to verify the signature. It's in the form of:

        -----BEGIN CERTIFICATE-----
        [certificate garble]
        -----END CERTIFICATE-----

    Here's a sample header from a response that makes WCF throw the exception:

        <?xml version="1.0" encoding="UTF-8"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Header>
            <wsse:Security soap:mustUnderstand="1"
                xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
                xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
              <dsig:Signature xmlns="http://www.w3.org/2000/09/xmldsig#" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#">
                <dsig:SignedInfo>
                  <dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
                  <dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
                  <dsig:Reference URI="#_51IUwNWRVvPOcz12pZHLNQ22">
                    <dsig:Transforms>
                      <dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
                    </dsig:Transforms>
                    <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
                    <dsig:DigestValue>[DigestValue here]</dsig:DigestValue>
                  </dsig:Reference>
                  <dsig:Reference URI="#_dI5j0EqxrVsj0e62J6vd6w22">
                    <dsig:Transforms>
                      <dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
                    </dsig:Transforms>
                    <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
                    <dsig:DigestValue>[DigestValue here]</dsig:DigestValue>
                  </dsig:Reference>
                </dsig:SignedInfo>
                <dsig:SignatureValue>[Signature Value Here]</dsig:SignatureValue>
                <dsig:KeyInfo>
                  <wsse:SecurityTokenReference xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
                    <wsse:Reference URI="#BST-9nKWbrE4LRv6maqstrGuUQ22"
                        ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3"/>
                  </wsse:SecurityTokenReference>
                </dsig:KeyInfo>
              </dsig:Signature>
              <wsse:BinarySecurityToken
                  ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3"
                  EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary"
                  wsu:Id="BST-9nKWbrE4LRv6maqstrGuUQ22"
                  xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
                [Security Token Here]
              </wsse:BinarySecurityToken>
              <wsu:Timestamp wsu:Id="_dI5j0EqxrVsj0e62J6vd6w22"
                  xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
                  xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
                <wsu:Created>2010-05-26T18:46:30Z</wsu:Created>
              </wsu:Timestamp>
            </wsse:Security>
          </soap:Header>
          <soap:Body wsu:Id="_51IUwNWRVvPOcz12pZHLNQ22"
              xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
            [Body content here]
          </soap:Body>
        </soap:Envelope>

    My binding configuration looks like:

        <basicHttpBinding>
          <binding name="myBinding" closeTimeout="00:01:00" openTimeout="00:01:00"
                   receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false"
                   bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
                   maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
                   messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
                   useDefaultWebProxy="true">
            <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                          maxBytesPerRead="4096" maxNameTableCharCount="16384" />
            <security mode="TransportWithMessageCredential">
              <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
              <message clientCredentialType="UserName" algorithmSuite="Default" />
            </security>
          </binding>
        </basicHttpBinding>

    I'm new at WCF, so I'm sorry if this is a bit of a dumb question. I've been trying to Google solutions, but there seem to be so many different ways to configure WCF that I'm getting overwhelmed. Thanks in advance!

    Read the article

  • Configuring a WCF Client to Use UserName Credentials On the Request and Check Certificate Credential

    - by AlEl
    I'm trying to use WCF to consume a web service provided by a third-party's Oracle Application Server. I pass a username and password in a UsernameToken as part of the request and as part of the response the web service returns a standard security tag in the header which includes a digest and signature. With my current setup, I successfully send a request to the server and the web service sends the expected response data back. However, when parsing the response WCF throws a MessageSecurityException, with an InnerException.Message of "Supporting token signatures not expected." My guess is that WCF wants me to configure it to handle the signature and verify it. I have a certificate from the third party that hosts the web service that I should be able to use to verify the signature, although I'm not sure if I'll need it. Here's a sample header from a response that makes WCF throw the exception: <?xml version="1.0" encoding="UTF-8"?> <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Header> <wsse:Security soap:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <dsig:Signature xmlns="http://www.w3.org/2000/09/xmldsig#" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#"> <dsig:SignedInfo> <dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> <dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/> <dsig:Reference URI="#_51IUwNWRVvPOcz12pZHLNQ22"> <dsig:Transforms> <dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> </dsig:Transforms> <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/> <dsig:DigestValue> [DigestValue here] </dsig:DigestValue> </dsig:Reference> <dsig:Reference URI="#_dI5j0EqxrVsj0e62J6vd6w22"> <dsig:Transforms> <dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> </dsig:Transforms> <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/> <dsig:DigestValue> [DigestValue here] </dsig:DigestValue> </dsig:Reference> </dsig:SignedInfo> <dsig:SignatureValue> [Signature Value Here] </dsig:SignatureValue> <dsig:KeyInfo> <wsse:SecurityTokenReference xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <wsse:Reference URI="#BST-9nKWbrE4LRv6maqstrGuUQ22" ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3"/> </wsse:SecurityTokenReference> </dsig:KeyInfo> </dsig:Signature> <wsse:BinarySecurityToken ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" wsu:Id="BST-9nKWbrE4LRv6maqstrGuUQ22" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> [Security Token Here] </wsse:BinarySecurityToken> <wsu:Timestamp wsu:Id="_dI5j0EqxrVsj0e62J6vd6w22" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <wsu:Created>2010-05-26T18:46:30Z</wsu:Created> </wsu:Timestamp> </wsse:Security> </soap:Header> <soap:Body wsu:Id="_51IUwNWRVvPOcz12pZHLNQ22" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> [Body content here] </soap:Body> </soap:Envelope> My binding 
configuration looks like: <basicHttpBinding> <binding name="myBinding" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <security mode="TransportWithMessageCredential"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> I think that basically what I have to do is configure WCF to use UserName client credentials in the request and Certificate client credentials in the response. I don't know how to do this though. I'm new at WCF, so I'm sorry if this is a bit of a dumb question. I've been trying to Google solutions, but there seem to be so many different ways to configure WCF that I'm getting overwhelmed. Thanks in advance!
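One direction that may be worth sketching (an assumption on my part, not a verified fix for this exact fault): WCF can present UserName credentials on the request while validating the service's signing certificate through the ClientCredentials behavior. A minimal sketch, assuming a generated proxy called MyServiceClient and the third party's certificate saved as thirdparty.cer:

    using System.Security.Cryptography.X509Certificates;
    using System.ServiceModel.Security;

    MyServiceClient client = new MyServiceClient("myBinding");   // hypothetical generated proxy

    // UserName token on the request, as the binding above already requires
    client.ClientCredentials.UserName.UserName = "user";
    client.ClientCredentials.UserName.Password = "secret";

    // Certificate the client trusts for the service's signature on the response
    client.ClientCredentials.ServiceCertificate.DefaultCertificate =
        new X509Certificate2("thirdparty.cer");
    client.ClientCredentials.ServiceCertificate.Authentication.CertificateValidationMode =
        X509CertificateValidationMode.ChainTrust;

If that alone doesn't clear "Supporting token signatures not expected," the next step people commonly report is dropping from basicHttpBinding to a CustomBinding so the security binding element itself can be tuned to tolerate a signed response.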

    Read the article

  • Is there an equivalent to Java's ClassFileTransformer in .NET? (a way to replace a class)

    - by Alix
    I've been searching for this for quite a while with no luck so far. Is there an equivalent to Java's ClassFileTransformer in .NET? Basically, I want to create a class CustomClassFileTransformer (which in Java would implement the interface ClassFileTransformer) that gets called whenever a class is loaded, and is allowed to tweak it and replace it with the tweaked version. I know there are frameworks that do similar things, but I was looking for something more straightforward, like implementing my own ClassFileTransformer. Is it possible?

EDIT #1. More details about why I need this: Basically, I have a C# application and I need to monitor the instructions it wants to run in order to detect read or write operations to fields (operations Ldfld and Stfld) and insert some instructions before the read/write takes place. I know how to do this (except for the part where I need to be invoked to replace the class): for every method whose code I want to monitor, I must:

1. Get the method's MethodBody using MethodBase.GetMethodBody()
2. Transform it to a byte array with MethodBody.GetILAsByteArray(). The byte[] it returns contains the bytecode.
3. Analyse the bytecode as explained here, possibly inserting new instructions or deleting/modifying existing ones by changing the contents of the array.
4. Create a new method and use the new bytecode to create its body, with MethodBuilder.CreateMethodBody(byte[] il, int count), where il is the array with the bytecode.

I put all these tweaked methods in a new class and use the new class to replace the one that was originally going to be loaded.

An alternative to replacing classes would be somehow getting notified whenever a method is invoked. Then I'd replace the call to that method with a call to my own tweaked method, which I would tweak only the first time it is invoked and then I'd put it in a dictionary for future uses, to reduce overhead (for future calls I'll just look up the method and invoke it; I won't need to analyse the bytecode again). I'm currently investigating ways to do this and LinFu looks pretty interesting, but if there was something like a ClassFileTransformer it would be much simpler: I just rewrite the class, replace it, and let the code run without monitoring anything. An additional note: the classes may be sealed. I want to be able to replace any kind of class; I cannot impose restrictions on their attributes.

EDIT #2. Why I need to do this at runtime. I need to monitor everything that is going on so that I can detect every access to data. This applies to the code of library classes as well. However, I cannot know in advance which classes are going to be used, and even if I knew every possible class that may get loaded it would be a huge performance hit to tweak all of them instead of waiting to see whether they actually get invoked or not.

POSSIBLE (BUT PRETTY HARDCORE) SOLUTION. In case anyone is interested (and I see the question has been faved, so I guess someone is), this is what I'm looking at right now. Basically I'd have to implement the profiling API and I'll register for the events that I'm interested in, in my case whenever a JIT compilation starts. An extract of the blogpost:

In your ICorProfilerCallback2::ModuleLoadFinished callback, you call ICorProfilerInfo2::GetModuleMetadata to get a pointer to a metadata interface on that module. QI for the metadata interface you want. Search MSDN for "IMetaDataImport", and grope through the table of contents to find topics on the metadata interfaces. Once you're in metadata-land, you have access to all the types in the module, including their fields and function prototypes. You may need to parse metadata signatures and this signature parser may be of use to you. In your ICorProfilerCallback2::JITCompilationStarted callback, you may use ICorProfilerInfo2::GetILFunctionBody to inspect the original IL, and ICorProfilerInfo2::GetILFunctionBodyAllocator and then ICorProfilerInfo2::SetILFunctionBody to replace that IL with your own.

The great news: I get notified when a JIT compilation starts and I can replace the bytecode right there, without having to worry about replacing the class, etc. The not-so-great news: you cannot invoke managed code from the API's callback methods, which makes sense but means I'm on my own parsing the IL code, etc, as opposed to being able to use Cecil, which would've been a breeze. I don't think there's a simpler way to do this without using AOP frameworks (such as PostSharp). If anyone has any other idea please let me know. I'm not marking the question as answered yet.
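For what it's worth, the reflection half of the plan above works as described; a minimal sketch (names are illustrative, not from the question):

    using System;
    using System.Reflection;

    class Target { private int x; private int Get() { return x; } }

    class Program
    {
        static void Main()
        {
            MethodInfo mi = typeof(Target).GetMethod("Get",
                BindingFlags.Instance | BindingFlags.NonPublic);
            // Raw IL bytes of the method body; 0x7B is ldfld and 0x7D is stfld,
            // each followed by a 4-byte metadata token identifying the field.
            byte[] il = mi.GetMethodBody().GetILAsByteArray();
            foreach (byte b in il)
                Console.Write("{0:X2} ", b);
        }
    }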

    Read the article

  • ASP.net MVC 2.0 using the same form for adding and editing.

    - by Chevex
    I would like to use the same view for editing a blog post and adding a blog post. However, I'm having an issue with the ID. When adding a blog post, I have no need for an ID value to be posted. When model binding binds the form values to the BlogPost object in the controller, it will auto-generate the ID in the Entity Framework entity. When I am editing a blog post I DO need a hidden form field to store the ID in so that it accompanies the next form post. Here is the view I have right now.

<% using (Html.BeginForm("CommitEditBlogPost", "Admin")) { %> <% if (Model != null) { %> <%: Html.HiddenFor(x => x.Id)%> <% } %> Title:<br /> <%: Html.TextBoxFor(x => x.Title, new { Style = "Width: 90%;" })%> <br /> <br /> Summary:<br /> <%: Html.TextAreaFor(x => x.Summary, new { Style = "Width: 90%; Height: 50px;" }) %> <br /> <br /> Body:<br /> <%: Html.TextAreaFor(x => x.Body, new { Style = "Height: 250px; Width: 90%;" })%> <br /> <br /> <input type="submit" value="Submit" /> <% } %>

Right now checking if the model is coming in NULL is a great way to know if I'm editing a blog post or adding one, because when I'm adding one it will be null as it hasn't been created yet. The problem comes in when there is an error and the entity is invalid. When the controller renders the form after an invalid model, the Model != null evaluates to false, even though we are editing a post and there is clearly a model. If I render the hidden input field for ID when adding a post, I get an error stating that the ID can't be null. Any help is appreciated.

EDIT: I went with OJ's answer for this question; however, I discovered something that made me feel silly and I wanted to share it just in case anyone was having a similar issue. The page that adds/edits blog posts does not even need a hidden field for the id, ever. The reason is because when I go to add a blog I do a GET to this relative URL BlogProject/Admin/AddBlogPost This URL does not contain an ID and the action method just renders the page. The page does a POST to the same URL when adding the blog post. The incoming BlogPost entity has a null Id and it is generated by EF during save changes. The same thing happens when I edit blog posts. The URL is BlogProject/Admin/EditBlogPost/{Id} This URL contains the id of the blog post, and since the page is posting back to the exact same URL the id goes with the POST to the action method that executes the edit.

The only problem I encountered with this is that the action methods cannot have identical signatures. [HttpGet] public ViewResult EditBlogPost(int Id) { } [HttpPost] public ViewResult EditBlogPost(int Id) { } The compiler will yell at you if you try to use these two methods above. It is very convenient that the Id will be posted back when doing a Html.BeginForm() with no arguments for action or controller. So rather than change the name of the POST method I just modified the arguments to include a FormCollection. Like this:

[HttpPost] public ViewResult EditBlogPost(int Id, FormCollection formCollection) { // You can then use formCollection as the IValueProvider for UpdateModel() // and TryUpdateModel() if you wish. I mean, you might as well use the // argument since you're taking it. }

The formCollection variable is filled via model binding with the same content that Request.Form would be by default. You don't have to use this collection for UpdateModel() or TryUpdateModel(), but I did just so I didn't feel like that collection was pointless, since it really was just there to make the method signature different from its GET counterpart.
Thanks for the help guys!
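A compilable shape of the pair described above, in case it helps someone skimming (the repository and BlogPost save logic are hypothetical; the rest is stock MVC 2):

    using System.Web.Mvc;

    public class AdminController : Controller
    {
        [HttpGet]
        public ActionResult EditBlogPost(int id)
        {
            return View(blogRepository.GetById(id));   // hypothetical repository
        }

        [HttpPost]
        public ActionResult EditBlogPost(int id, FormCollection form)
        {
            BlogPost post = blogRepository.GetById(id);
            // form carries the same data as Request.Form; handing it to
            // TryUpdateModel keeps the extra parameter from being dead weight
            if (TryUpdateModel(post, form))
            {
                blogRepository.Save(post);
                return RedirectToAction("Index");
            }
            return View(post);
        }
    }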

    Read the article

  • Multiple (variant) arguments overloading in Java: What's the purpose?

    - by fortran
    Browsing google's guava collect library code, I've found the following: // Casting to any type is safe because the list will never hold any elements. @SuppressWarnings("unchecked") public static <E> ImmutableList<E> of() { return (ImmutableList<E>) EmptyImmutableList.INSTANCE; } public static <E> ImmutableList<E> of(E element) { return new SingletonImmutableList<E>(element); } public static <E> ImmutableList<E> of(E e1, E e2) { return new RegularImmutableList<E>( ImmutableList.<E>nullCheckedList(e1, e2)); } public static <E> ImmutableList<E> of(E e1, E e2, E e3) { return new RegularImmutableList<E>( ImmutableList.<E>nullCheckedList(e1, e2, e3)); } public static <E> ImmutableList<E> of(E e1, E e2, E e3, E e4) { return new RegularImmutableList<E>( ImmutableList.<E>nullCheckedList(e1, e2, e3, e4)); } public static <E> ImmutableList<E> of(E e1, E e2, E e3, E e4, E e5) { return new RegularImmutableList<E>( ImmutableList.<E>nullCheckedList(e1, e2, e3, e4, e5)); } public static <E> ImmutableList<E> of(E e1, E e2, E e3, E e4, E e5, E e6) { return new RegularImmutableList<E>( ImmutableList.<E>nullCheckedList(e1, e2, e3, e4, e5, e6)); } public static <E> ImmutableList<E> of( E e1, E e2, E e3, E e4, E e5, E e6, E e7) { return new RegularImmutableList<E>( ImmutableList.<E>nullCheckedList(e1, e2, e3, e4, e5, e6, e7)); } public static <E> ImmutableList<E> of( E e1, E e2, E e3, E e4, E e5, E e6, E e7, E e8) { return new RegularImmutableList<E>( ImmutableList.<E>nullCheckedList(e1, e2, e3, e4, e5, e6, e7, e8)); } public static <E> ImmutableList<E> of( E e1, E e2, E e3, E e4, E e5, E e6, E e7, E e8, E e9) { return new RegularImmutableList<E>( ImmutableList.<E>nullCheckedList(e1, e2, e3, e4, e5, e6, e7, e8, e9)); } public static <E> ImmutableList<E> of( E e1, E e2, E e3, E e4, E e5, E e6, E e7, E e8, E e9, E e10) { return new RegularImmutableList<E>(ImmutableList.<E>nullCheckedList( e1, e2, e3, e4, e5, e6, e7, e8, e9, e10)); } public static <E> ImmutableList<E> of( E e1, E e2, E e3, E e4, E e5, E e6, E e7, E e8, E e9, E e10, E e11) { return new RegularImmutableList<E>(ImmutableList.<E>nullCheckedList( e1, e2, e3, e4, e5, e6, e7, e8, e9, e10, e11)); } public static <E> ImmutableList<E> of( E e1, E e2, E e3, E e4, E e5, E e6, E e7, E e8, E e9, E e10, E e11, E e12, E... others) { final int paramCount = 12; Object[] array = new Object[paramCount + others.length]; arrayCopy(array, 0, e1, e2, e3, e4, e5, e6, e7, e8, e9, e10, e11, e12); arrayCopy(array, paramCount, others); return new RegularImmutableList<E>(ImmutableList.<E>nullCheckedList(array)); } And although it seems reasonable to have overloads for empty and single arguments (as they are going to use special instances), I cannot see the reason behind having all the others, when just the last one (with two fixed arguments plus the variable argument instead the dozen) seems to be enough. As I'm writing, one explanation that pops into my head is that the API pre-dates Java 1.5; and although the signatures would be source-level compatible, the binary interface would differ. Isn't it?
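For what it's worth, the explanation usually given (and the allocation half of it holds in C# with params exactly as in Java with varargs) is that a call through the variadic overload materializes a fresh array on every invocation, while the fixed-arity overloads cost nothing extra; the binary-compatibility guess is plausible on top of that. A hedged sketch of the same trade-off in C# terms:

    using System;

    static class Lists
    {
        // Fixed-arity overloads: no array is created at the call site.
        public static int[] Of(int e1) { return new[] { e1 }; }
        public static int[] Of(int e1, int e2) { return new[] { e1, e2 }; }

        // Variadic fallback: the compiler builds an int[] for every call.
        public static int[] Of(params int[] elements) { return (int[])elements.Clone(); }
    }

    class Demo
    {
        static void Main()
        {
            Console.WriteLine(Lists.Of(1, 2).Length);    // binds to (int, int): no params array
            Console.WriteLine(Lists.Of(1, 2, 3).Length); // only this call pays for an array
        }
    }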

    Read the article

  • What to filter when providing very limited open WiFi to a small conference or meeting?

    - by Tim Farley
    Executive Summary

The basic question is: if you have a very limited bandwidth WiFi to provide Internet for a small meeting of only a day or two, how do you set the filters on the router to avoid one or two users monopolizing all the available bandwidth? For folks who don't have the time to read the details below, I am NOT looking for any of these answers: Secure the router and only let a few trusted people use it; Tell everyone to turn off unused services & generally police themselves; Monitor the traffic with a sniffer and add filters as needed. I am aware of all of that. None are appropriate for reasons that will become clear. ALSO NOTE: There is already a question concerning providing adequate WiFi at large (500 attendees) conferences here. This question concerns SMALL meetings of less than 200 people, typically with less than half that using the WiFi. Something that can be handled with a single home or small office router.

Background

I've used a 3G/4G router device to provide WiFi to small meetings in the past with some success. By small I mean single-room conferences or meetings on the order of a barcamp or Skepticamp or user group meeting. These meetings sometimes have technical attendees there, but not exclusively. Usually less than half to a third of the attendees will actually use the WiFi. The maximum meeting size I'm talking about is 100 to 200 people. I typically use a Cradlepoint MBR-1000, but many other devices exist, especially all-in-one units supplied by 3G and/or 4G vendors like Verizon, Sprint and Clear. These devices take a 3G or 4G internet connection and fan it out to multiple users using WiFi. One key aspect of providing net access this way is the limited bandwidth available over 3G/4G. Even with something like the Cradlepoint, which can load-balance multiple radios, you are only going to achieve a few megabits of download speed and maybe a megabit or so of upload speed. That's a best case scenario. Often it is considerably slower. The goal in most of these meeting situations is to allow folks access to services like email, web, social media, chat services and so on. This is so they can live-blog or live-tweet the proceedings, or simply chat online or otherwise stay in touch (with both attendees and non-attendees) while the meeting proceeds. I would like to limit the services provided by the router to just those services that meet those needs.

Problems

In particular I have noticed a couple of scenarios where particular users end up abusing most of the bandwidth on the router, to the detriment of everyone. These boil down to two areas: Intentional use. Folks looking at YouTube videos, downloading podcasts to their iPod, and otherwise using the bandwidth for things that really aren't appropriate in a meeting room where you should be paying attention to the speaker and/or interacting. At one meeting that we were live-streaming (over a separate, dedicated connection) via UStream, I noticed several folks in the room that had the UStream page up so they could interact with the meeting chat - apparently oblivious that they were wasting bandwidth streaming back video of something that was taking place right in front of them. Unintentional use. There are a variety of software utilities that will make extensive use of bandwidth in the background, that folks often have installed on their laptops and smartphones, perhaps without realizing. Examples: Peer-to-peer downloading programs such as Bittorrent that run in the background; Automatic software update services.
These are legion, as every major software vendor has their own, so one can easily have Microsoft, Apple, Mozilla, Adobe, Google and others all trying to download updates in the background. Security software that downloads new signatures such as anti-virus, anti-malware, etc. Backup software and other software that "syncs" in the background to cloud services.

For some numbers on how much network bandwidth gets sucked up by these non-web, non-email type services, check out this recent Wired article. Apparently web, email and chat all together are less than one quarter of the Internet traffic now. If the numbers in that article are correct, by filtering out all the other stuff I should be able to increase the usefulness of the WiFi four-fold.

Now, in some situations I've been able to control access using security on the router to limit it to a very small group of people (typically the organizers of the meeting). But that's not always appropriate. At an upcoming meeting I would like to run the WiFi without security and let anyone use it, because it happens that at the meeting location the 4G coverage in my town is particularly excellent. In a recent test I got 10 Megabits down at the meeting site. The "tell people to police themselves" solution mentioned at top is not appropriate because of (a) a largely non-technical audience and (b) the unintentional nature of much of the usage as described above. The "run a sniffer and filter as needed" solution is not useful because these meetings typically only last a couple of days, often only one day, and have a very small volunteer staff. I don't have a person to dedicate to network monitoring, and by the time we get the rules tweaked completely the meeting will be over.

What I've Got

First thing, I figured I would use OpenDNS's domain filtering rules to filter out whole classes of sites. A number of video and peer-to-peer sites can be wiped out using this. (Yes, I am aware that filtering via DNS technically leaves the services accessible - remember, these are largely non-technical users attending a 2-day meeting. It's enough.) I figured I would start with these selections in OpenDNS's UI. I figure I will probably also block DNS (port 53) to anything other than the router itself, so that folks can't bypass my DNS configuration. A savvy user could get around this, because I'm not going to put a lot of elaborate filters on the firewall, but I don't care too much. Because these meetings don't last very long, it's probably not going to be worth the trouble. This should cover the bulk of the non-web traffic, i.e. peer-to-peer and video, if that Wired article is correct. Please advise if you think there are severe limitations to the OpenDNS approach.

What I Need

Note that OpenDNS focuses on things that are "objectionable" in some context or another. Video, music, radio and peer-to-peer all get covered. I still need to cover a number of perfectly reasonable things that we just want to block because they aren't needed in a meeting. Most of these are utilities that upload or download legit things in the background.
Specifically, I'd like to know port numbers or DNS names to filter in order to effectively disable the following services: Microsoft automatic updates Apple automatic updates Adobe automatic updates Google automatic updates Other major software update services Major virus/malware/security signature updates Major background backup services Other services that run in the background and can eat lots of bandwidth I also would like any other suggestions you might have that would be applicable. Sorry to be so verbose, but I find it helps to be very, very clear on questions of this nature, and I already have half a solution with the OpenDNS thing.

    Read the article

  • How to remove not required Elements from generated XML via jaxb

    - by Dangling Piyush
    I want to know if there is any way to remove not-required elements from the generated XML using JAXB. I have my XSD element definition as follows.

<xsd:element name="Title" maxOccurs="1" minOccurs="0"> <xsd:annotation> <xsd:documentation> A name given to the digital record. </xsd:documentation> </xsd:annotation> <xsd:simpleType> <xsd:restriction base="xsd:string"> <xsd:minLength value="1"></xsd:minLength> </xsd:restriction> </xsd:simpleType> </xsd:element>

As you can see, it is not a mandatory element because minOccurs="0". But if it is not empty, the length should be at least 1: <xsd:minLength value="1"></xsd:minLength> At the time of marshalling, if I leave the Title field blank it throws a SAXException because of the min-length restriction. So what I want to do is to remove the whole occurrence of <Title/> from the generated XML. Right now I have removed the min-length restriction, so it is adding the <Title> element as an empty <Title></Title>. But I do not want it like this. Any help is appreciated. I am using JAXB 2.0 for marshalling.

UPDATE: Following is my variable definition: private JAXBContext jaxbContext; private Unmarshaller unmarshaller; private SchemaFactory factory; private Schema schema; private Marshaller marshaller;

Marshalling code. jaxbContext = JAXBContext.newInstance(ERecordType.class); marshaller = jaxbContext.createMarshaller(); factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI); schema = factory.newSchema((new File(xsdLocation))); marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true); ERecordType e = new ERecordType(); e.setCataloging(rc); /** * Validate Against Schema. */ marshaller.setSchema(schema); /** * Marshal will throw an exception if XML not validated against * schema. */ marshaller.marshal(e, System.out);
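A note that may save someone the schema fight (untested against this exact class, but standard JAXB behavior): the marshaller simply skips a bean property that is null, while an empty string comes out as an empty element. So the min-length restriction can stay in the XSD; setting the field to null rather than "" (e.g. e.setTitle(null) on the generated type, assuming a Title property) omits <Title/> entirely, and the document still validates because minOccurs="0" makes the element optional.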

    Read the article

  • How to remove music/videos DRM protection and convert to Mobile Devices such as iPod, iPhone, PSP, Z

    - by tonywesley
    The music/video files you purchased from online music stores like iTunes, Yahoo Music or Wal-Mart are under DRM protection. So you can't convert them to the formats supported by your own mobile devices such as a Nokia phone, Creative Zen player, iPod, PSP, Walkman, Zune… You also can't share your purchased music/videos with your friends. The following step-by-step tutorial is dedicated to instructing music lovers in how to convert DRM-protected music/videos for mobile devices.

Method 1: If you only want to remove DRM protection from your protected music, this method will not cost you money. Step 1: Burn your protected music files to a CD-R/RW disc to make an audio CD. Step 2: Find a free CD ripper to convert the audio CD tracks back to MP3, WAV, WMA, M4A, AAC, RA…

Method 2: This guide will show you how to crack DRM on protected wmv, wma, m4p, m4v, m4a, aac files and convert them to unprotected WMV, MP4, MP3, WMA or any video and audio formats you like, such as AVI, MP4, Flv, MPEG, MOV, 3GP, m4a, aac, wmv, ogg, wav... I have been using Media Converter software; it is the quickest and easiest solution to remove DRM from WMV, M4V, M4P, WMA, M4A, AAC, M4B, AA files by quick recording. It captures the audio and video stream at the bottom of the operating system, so the output quality is lossless and the conversion speed is fast. The process is as follows. Step 1: Download and install the software. Step 2: Run the software and click the "Add…" button to load WMA or M4A, M4B, AAC, WMV, M4P, M4V, ASF files. Step 3: Choose output formats. If you want to convert protected audio files, please select the "Convert audio to" list; if you want to convert protected video files, please select the "Convert video to" list. Step 4: You can click the "Settings" button to customize preferences for the output files. Click the "Settings" button below the "Convert audio to" list for protected audio files; click the "Settings" button below the "Convert video to" list for protected video files. Step 5: Start removing DRM and converting your DRM-protected music and videos by clicking the "Start" button.

What is DRM? DRM, which is most commonly found in movies and music files, doesn't mean just basic copy protection of video, audio and ebooks; it basically means full protection for digital content, ranging from delivery to the end user's ways of using the content. We can remove the DRM from video and audio files legally by quick recording.

    Read the article

  • File sizing issue in DOS/FAT

    - by Heather
    I've been tasked with writing a data collection program for a Unitech HT630, which runs a proprietary DOS operating system that can run executables compiled for 16-bit MS DOS with some restrictions. I'm using the Digital Mars C/C++ compiler, which is working well thus far. One of the application requirements is that the data file must be human-readable plain text, meaning the file can be imported into Excel or opened by Notepad. I'm using a variable length record format much like CSV that I've successfully implemented using the C standard library file I/O functions. When saving a record, I have to calculate whether the updated record is larger or smaller than the version of the record currently in the data file. If larger, I first shift all records immediately after the current record forward by the size difference calculated before saving the updated record. EOF is extended automatically by the OS to accommodate the extra data. If smaller, I shift all records backwards by my calculated offset. This is working well, however I have found no way to modify the EOF marker or file size to ignore the data after the end of the last record. Most of the time records will grow in size because the data collection program will be filling some of the empty fields with data when saving a record. Records will only shrink in size when a correction is made on an existing entry, or on a normal record save if the descriptive data in the record is longer than what the program reads in memory. In the situation of a shrinking record, after the last record in the file I'm left with whatever data was sitting there before the shift. I have been writing an EOF delimiter into the file after a "shrinking record save" to signal where the end of my records are and space-filling the remaining data, but then I no longer have a clean file until a "growing record save" extends the size of the file over the space-filled area. The truncate() function in unistd.h does not work (I'm now thinking this is for *nix flavors only?). One proposed solution I've seen involves creating a second file and writing all the data you wish to save into that file, and then deleting the original. Since I only have 4MB worth of disk space to use, this works if the file size is less than 2MB minus the size of my program executable and configuration files, but would fail otherwise. It is very likely that when this goes into production, users would end up with a file exceeding 2MB in size. I've looked at Ralph Brown's Interrupt List and the interrupt reference in IBM PC Assembly Language and Programming and I can't seem to find anything to update the file size or similar. Is reducing a file's size without creating a second file even possible in DOS?
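One lead that may be what's being missed here (it is in Ralph Brown's list under INT 21h, function 40h, so worth re-reading that entry): DOS's Write File or Device call is documented to truncate or extend the file to the current file position when asked to write zero bytes (CX=0). So seeking to the byte just past the last record and issuing a zero-length write on that handle, via an inline INT 21h or whatever low-level handle write the Digital Mars runtime exposes, should cut the file off in place, with no second file and no space-filling needed.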

    Read the article

  • how to develop a program to minimize errors in human transcription of hand written surveys

    - by Alex. S.
    I need to develop custom software to do surveys. Questions may be multiple choice or, in a very few cases, free text. I was asked to design a subsystem to check if there is any error in the manual data entry for the multiple choice part. We're trying to speed up the user data entry process and to minimize human input differences between the digital forms and the original questionnaires. The surveys are filled with handwritten marks and text by human interviewers, so it's possible to find hard-to-read marks, or the user could accidentally select a different value in some question, and we would like to avoid that. The software must include some automatic control to detect possible typing differences. Each answer of the multiple choice questions has the same probability of being selected.

This question has two parts:

The GUI. The simplest thing I have in mind is to implement the most usable design of the questions display: use large and readable fonts and space the choices generously. Is there something else? For faster input, I would like to use drop-down lists (favoring keyboard over mouse). Given that the questions are grouped in sections, I would like to show the answers selected for the questions of that section, but this could slow down the process. Any other ideas?

The error checking subsystem. What else can I do to minimize or to check human typos in the multiple choice questions? Is this a solvable problem? Is there some statistical methodology to check that the values entered by the users are the same as those on the hand-filled forms? For example, let's suppose the survey has 5 questions, and each has 4 options. Let's say I have n survey forms filled in on paper by interviewers, and they're ready to be entered into the software; how then do I minimize the accidental differences that the manual transcription of the n surveys can introduce, without having to double-check everything in the 5 questions of the n surveys? My first suggestion is that at the end of the processing of all the hand-filled forms, the software could choose some forms randomly to make a double check of the responses in a few instances, but on what criteria can I make this selection? Would this validation be enough to cover everything in a significant way?

The actual survey is nation level and it has 56 pages with over 200 questions in total, so it will be a lot of handwritten pages by many people, and the intention is to reduce the likelihood of errors and to optimize speed in the data entry process. The surveys must be filled in on paper first, given the complications of taking laptops or handhelds with the interviewers.
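On the statistical side, one standard framing for the random double-check (my framing, not from the question): if a form is mis-keyed with probability p and you re-key n randomly chosen forms, the chance of catching at least one error is 1 - (1 - p)^n, so n >= ln(0.05) / ln(1 - p) re-keyed forms gives roughly 95% confidence of surfacing at least one problem whenever the true error rate is p or worse (about 60 forms for p = 5%, about 300 for p = 1%). Note what the sample buys: evidence about the overall error rate, not correctness of the unsampled forms, which is why full double entry (two independent typists, compare, reconcile mismatches) remains the usual gold standard for critical fields.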

    Read the article

  • Subband decomposition using Daubechies filter

    - by misha
    I have the following two 8-tap filters:

h0 ['-0.010597', '0.032883', '0.030841', '-0.187035', '-0.027984', '0.630881', '0.714847', '0.230378']
h1 ['-0.230378', '0.714847', '-0.630881', '-0.027984', '0.187035', '0.030841', '-0.032883', '-0.010597']

Here they are on a graph: I'm using it to obtain the approximation (lower subband of an image). This is a(m,n) in the following diagram: I got the coefficients and diagram from the book Digital Image Processing, 3rd Edition, so I trust that they are correct. The star symbol denotes one dimensional convolution (either over rows or over columns). The down arrow denotes downsampling in one dimension (either over rows, or columns). My problem is that the filter coefficients for h0 and h1 sum to greater than 1 (approximately 1.4, or sqrt(2) to be exact). Naturally, if I convolve any image with the filter, the image will get brighter. Indeed, here's what I get (expected result on right): Can somebody suggest what the problem is here? Why should it work if the convolution filter coefficients sum to greater than 1? I have the source code, but it's quite long so I'm hoping to avoid posting it here. If it's absolutely necessary, I'll put it up later.

EDIT What I'm doing is:

1. Decompose into subbands
2. Filter one of the subbands
3. Recompose subbands into original image

Note that the point isn't just to have a displayable subband-decomposed image -- I have to be able to perfectly reconstruct the original image from the subbands as well. So if I scale the filtered image in order to compensate for my decomposition filter making the image brighter, this is what I will have to do:

1. Decompose into subbands
2. Apply intensity scaling
3. Filter one of the subbands
4. Apply inverse intensity scaling
5. Recompose subbands into original image

Step 2 performs the scaling. This is what @Benjamin is suggesting. The problem is that then step 4 becomes necessary, or the original image will not be properly reconstructed. This longer method will work. However, the textbook explicitly says that no scaling is performed on the approximation subband. Of course, it's possible that the textbook is wrong. However, it's more likely that I'm misunderstanding something about the way this all works -- this is why I'm asking this question.
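One observation that may resolve the paradox (offered as standard wavelet bookkeeping, not a diagnosis of the code): for an orthonormal family like Daubechies, the filters are normalized so the squares of the coefficients sum to 1, which forces the plain sum (the DC gain) of the lowpass filter to be sqrt(2) rather than 1. That factor is deliberate: the matching synthesis filter contributes another sqrt(2), so the full analysis-plus-synthesis chain has unit gain, which is exactly why the textbook applies no scaling to the approximation subband. The approximation only looks too bright when viewed directly, by sqrt(2) per 1-D pass, i.e. a factor of 2 after both a row and a column pass; dividing by 2 just for display, while reconstructing from the unscaled coefficients, gives both a sensible-looking subband image and perfect reconstruction.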

    Read the article

  • Calling CryptUIWizDigitalSign from .NET on x64

    - by Joe Kuemerle
    I am trying to digitally sign files using the CryptUIWizDigitalSign function from a .NET 2.0 application compiled as AnyCPU. The call works fine when running on x86 but fails on x64; it also works on an x64 OS when compiled as x86. Any idea on how to better marshal the structures or make the call from x64? The Win32Exception returned is "Error encountered during digital signing of the file ..." with a native error code of -2146762749. The relevant portions of the code are: [StructLayout(LayoutKind.Sequential)] public struct CRYPTUI_WIZ_DIGITAL_SIGN_INFO { public Int32 dwSize; public Int32 dwSubjectChoice; [MarshalAs(UnmanagedType.LPWStr)] public string pwszFileName; public Int32 dwSigningCertChoice; public IntPtr pSigningCertContext; [MarshalAs(UnmanagedType.LPWStr)] public string pwszTimestampURL; public Int32 dwAdditionalCertChoice; public IntPtr pSignExtInfo; } [DllImport("Cryptui.dll", CharSet=CharSet.Unicode, SetLastError=true)] public static extern bool CryptUIWizDigitalSign(int dwFlags, IntPtr hwndParent, string pwszWizardTitle, ref CRYPTUI_WIZ_DIGITAL_SIGN_INFO pDigitalSignInfo, ref IntPtr ppSignContext); CRYPTUI_WIZ_DIGITAL_SIGN_INFO digitalSignInfo = new CRYPTUI_WIZ_DIGITAL_SIGN_INFO(); digitalSignInfo.dwSize = Marshal.SizeOf(digitalSignInfo); digitalSignInfo.dwSubjectChoice = 1; digitalSignInfo.dwSigningCertChoice = 1; digitalSignInfo.pSigningCertContext = pSigningCertContext; digitalSignInfo.pwszTimestampURL = timestampUrl; digitalSignInfo.dwAdditionalCertChoice = 0; digitalSignInfo.pSignExtInfo = IntPtr.Zero; digitalSignInfo.pwszFileName = filepath; CryptUIWizDigitalSign(1, IntPtr.Zero, null, ref digitalSignInfo, ref pSignContext); And here is how the SigningCertContext is retrieved (minus various error handling): public IntPtr GetCertContext(String pfxfilename, String pswd) { IntPtr hMemStore = IntPtr.Zero; IntPtr hCertCntxt = IntPtr.Zero; IntPtr pProvInfo = IntPtr.Zero; uint provinfosize = 0; try { byte[] pfxdata = PfxUtility.GetFileBytes(pfxfilename); CRYPT_DATA_BLOB ppfx = new CRYPT_DATA_BLOB(); ppfx.cbData = pfxdata.Length; ppfx.pbData = Marshal.AllocHGlobal(pfxdata.Length); Marshal.Copy(pfxdata, 0, ppfx.pbData, pfxdata.Length); hMemStore = Win32.PFXImportCertStore(ref ppfx, pswd, CRYPT_USER_KEYSET); pswd = null; if (hMemStore != IntPtr.Zero) { Marshal.FreeHGlobal(ppfx.pbData); while ((hCertCntxt = Win32.CertEnumCertificatesInStore(hMemStore, hCertCntxt)) != IntPtr.Zero) { if (Win32.CertGetCertificateContextProperty(hCertCntxt, CERT_KEY_PROV_INFO_PROP_ID, IntPtr.Zero, ref provinfosize)) pProvInfo = Marshal.AllocHGlobal((int)provinfosize); else continue; if (Win32.CertGetCertificateContextProperty(hCertCntxt, CERT_KEY_PROV_INFO_PROP_ID, pProvInfo, ref provinfosize)) break; } } } finally { if (pProvInfo != IntPtr.Zero) Marshal.FreeHGlobal(pProvInfo); if (hMemStore != IntPtr.Zero) Win32.CertCloseStore(hMemStore, 0); } return hCertCntxt; }
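A cheap first check that seems worth running (a guess at the failure mode, not a confirmed diagnosis): every IntPtr in that struct doubles to 8 bytes on x64, and natural alignment can pad the struct differently than the native header expects, so dwSize may be right on one architecture and wrong on the other. Comparing sizes, with the struct declaration from the question in scope:

    using System;
    using System.Runtime.InteropServices;

    class SizeCheck
    {
        static void Main()
        {
            // 4 on x86, 8 on x64
            Console.WriteLine("IntPtr.Size = {0}", IntPtr.Size);
            // compare this against sizeof(CRYPTUI_WIZ_DIGITAL_SIGN_INFO)
            // reported by a native 64-bit C/C++ compile of cryptuiapi.h
            Console.WriteLine("managed size = {0}",
                Marshal.SizeOf(typeof(CRYPTUI_WIZ_DIGITAL_SIGN_INFO)));
        }
    }

Since the DllImport sets SetLastError=true, calling Marshal.GetLastWin32Error() right after a failed CryptUIWizDigitalSign call is also more reliable than the wrapped exception text.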

    Read the article

  • Generating Unordered List with PHP + CodeIgniter from a MySQL Database

    - by Tim
    Hello Everyone, I am trying to build a dynamically generated unordered list in the following format using PHP. I am using CodeIgniter but it can just be normal php. This is the end output I need to achieve. <ul id="categories" class="menu"> <li rel="1"> Arts &amp; Humanities <ul> <li rel="2"> Photography <ul> <li rel="3"> 3D </li> <li rel="4"> Digital </li> </ul> </li> <li rel="5"> History </li> <li rel="6"> Literature </li> </ul> </li> <li rel="7"> Business &amp; Economy </li> <li rel="8"> Computers &amp; Internet </li> <li rel="9"> Education </li> <li rel="11"> Entertainment <ul> <li rel="12"> Movies </li> <li rel="13"> TV Shows </li> <li rel="14"> Music </li> <li rel="15"> Humor </li> </ul> </li> <li rel="10"> Health </li> And here is my SQL that I have to work with. -- -- Table structure for table `categories` -- CREATE TABLE IF NOT EXISTS `categories` ( `id` mediumint(8) NOT NULL auto_increment, `dd_id` mediumint(8) NOT NULL, `parent_id` mediumint(8) NOT NULL, `cat_name` varchar(256) NOT NULL, `cat_order` smallint(4) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ; So I know that I am going to need at least 1 foreach loop to generate the first level of categories. What I don't know is how to iterate inside each loop and check for parents and do that in a dynamic way so that there could be an endless tree of children. Thanks for any help you can offer. Tim
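The shape of the solution is language-neutral: pull all the rows once, bucket them by parent_id, then recurse starting from parent 0. A rough sketch of that recursion (written in C# only for compactness; the two functions translate almost line-for-line into a CodeIgniter helper, and every name here is illustrative):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;

    class Category
    {
        public int Id;
        public int ParentId;
        public string Name;
    }

    class Menu
    {
        // Emit a nested <ul> by grouping rows on ParentId and recursing;
        // top-level categories are assumed to have ParentId == 0, and a
        // real query would ORDER BY cat_order.
        static string RenderTree(ILookup<int, Category> byParent, int parentId)
        {
            if (!byParent.Contains(parentId)) return "";
            StringBuilder sb = new StringBuilder("<ul>");
            foreach (Category c in byParent[parentId])
            {
                sb.AppendFormat("<li rel=\"{0}\">{1}", c.Id, c.Name);
                sb.Append(RenderTree(byParent, c.Id)); // children of any depth
                sb.Append("</li>");
            }
            sb.Append("</ul>");
            return sb.ToString();
        }

        static void Main()
        {
            var rows = new List<Category>
            {
                new Category { Id = 1, ParentId = 0, Name = "Arts &amp; Humanities" },
                new Category { Id = 2, ParentId = 1, Name = "Photography" },
                new Category { Id = 3, ParentId = 2, Name = "3D" },
            };
            Console.WriteLine(RenderTree(rows.ToLookup(c => c.ParentId), 0));
        }
    }

Because each level issues no further queries, one SELECT of the whole table is enough, and the recursion depth simply matches the category depth, so the tree can nest endlessly.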

    Read the article

  • Replace text in XSL using wildcards

    - by JosephThomas
    This is similar to an earlier problem I was having which you guys solved in less than a day. I am working with XML files that are generated by a digital video camera. The camera allows the user to save all of the camera's settings to an SD card so that the settings can be recalled or loaded into another camera. The XSL stylesheet I am writing will allow users to view the camera's settings, as saved to the SD card, in a web browser. While most of the values in the XML file -- as formatted by my stylesheet -- make sense to humans, some do not. What I would like to do is have the stylesheet display text that is based on the value in the XML file but more easily understood by humans. A typical value that can be written to the XML file is "_23_970" which represents the camera's frame rate. This would be better displayed as 23.970 (or 023.970). The first underscore is a sort of placeholder to make a space for values over 099.999. The second underscore, obviously, represents the decimal. My previous (similar) question involved replacing predictable text, and the solution was matching templates. In this case, however, the camera can be set to any one of 119,999 frame rates (I think I did that math correctly). The approach, I would guess, is to pass a value to the displayed webpage that keeps the numeric values (each digit), replaces the second underscore with a decimal, and replaces the first underscore with either an nbsp or a zero (whichever is easier). If the first character in the string is a "1" (the camera can run at frame rates up to 120.000) then the one should be passed on to the page displayed by the stylesheet. I have read other posts here regarding wildcards, but couldn't find one that answered this question.

EDIT: Sorry for leaving out important info. I fared better on my first try at asking a question! I guess I got complacent. Anyhow . . . I should have shown you the code that displays the text in the XSL file as is: <tr> <xsl:for-each select="Settings/Groups/Recording"> <tr><td class="title_column">Frame Rate</td><td><xsl:value-of select="RecOutLinkSpeed"/></td></tr> </xsl:for-each> </tr> I should also have given you the URL for the sample file I have been working with: http://josephthomas.info/Alexa/Setup_120511_140322.xml
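Assuming the stored value is always seven characters (a pad or hundreds digit, two digits, an underscore standing in for the decimal point, then three digits), plain XPath 1.0 string functions should cover this without extra templates; something along the lines of concat(translate(substring(RecOutLinkSpeed, 1, 3), '_', '0'), '.', substring(RecOutLinkSpeed, 5, 3)) inside the xsl:value-of turns _23_970 into 023.970 and passes 120_000 through as 120.000 (swap the '0' in translate() for a space to pad with a blank instead of a leading zero). The fixed-width assumption comes only from the two examples in the question, so it is worth checking against the sample file before relying on it.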

    Read the article

  • SSL certificates and types for securing your websites and applications

    - by Mit Naik
    Need to share some information regarding SSL certificates and their types, which SSL certificates are widely used, etc. There are several SSL certificates available in the market today in order to secure your domains, multiple subdomains, your applications and code too. A few of the details are mentioned below.

Cheap SSL certificates available today include the standard RapidSSL certificate, Thawte SSL 123, etc., which are basic-level certificates. Most of these cheap SSL certificates are domain-validated only and don't provide the greatest trust for your customers. This means you shouldn't use cheap SSL certificates on e-commerce stores or other public-facing sites that require people to trust the site.

EV CERTIFICATES: I found the GeoTrust True BusinessID with EV certificate, which is one of the cheapest EV certificates available in the market today; you can also find Thawte and Verisign EV versions of certificates. It's designed to prevent phishing attacks better than normal SSL certificates. What makes an EV Certificate so special? An SSL Certificate Provider has to do some extensive validation to give you one, including: Verifying that your organization is legally registered and active, Verifying the address and phone number of your organization, Verifying that your organization has exclusive right to use the domain specified in the EV Certificate, Verifying that the person ordering the certificate has been authorized by the organization, Verifying that your organization is not on any government blacklists.

SSL WILDCARD CERTIFICATES: SSL Wildcard Certificates are big money-savers. An SSL Wildcard Certificate allows you to secure an unlimited number of first-level sub-domains on a single domain name. For example, if you need to secure the following websites: * www.yourdomain.com * secure.yourdomain.com * product.yourdomain.com * info.yourdomain.com * download.yourdomain.com * anything.yourdomain.com and all of these websites are hosted on multiple servers, you can purchase and install one Wildcard certificate issued to *.yourdomain.com to secure all these sites.

SAN CERTIFICATES: SAN certificates are interesting and are helpful if you want to secure multiple domains by generating a single CSR; you can install the same certificate on your additional sites without generating new CSRs for all the additional domains.

CODE SIGNING CERTIFICATES: A code signing certificate is a file containing a digital signature that can be used to sign executables and scripts in order to verify your identity and ensure that your code has not been tampered with since it was signed. This helps your users to determine whether your software can be trusted. A code signing certificate allows you to sign code using a private and public key system similar to how an SSL certificate secures a website. When you request a code signing certificate, a public/private key pair is generated. The certificate authority will then issue a code signing certificate that contains the public key. A certificate for code signing needs to be signed by a trusted certificate authority so that the operating system knows that your identity has been validated. You could still use the code signing certificate to sign and distribute malicious software, but you will be held legally accountable for it. You can sign many different types of code.
The most common types include Windows applications such as .exe, .cab, .dll, .ocx, and .xpi files (using an Authenticode certificate), Apple applications (using an Apple code signing certificate), Microsoft Office VBA objects and macros (using a VBA code signing certificate), .jar files (using a Java code signing certificate), .air or .airi files (using an Adobe AIR certificate), and Windows Vista drivers and other kernel-mode software (using a Vista code certificate). In reality, a code signing certificate can sign almost all types of code as long as you convert the certificate to the correct format first. Also, I found the URL below, which provides good suggestions on purchasing the best SSL certificate for securing your site, whether for a financial institution, bank, hosting provider, ISP, retail merchant, etc. Please vote and provide comments or any additional suggestions regarding SSL certificates.
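As a concrete anchor for the Windows half of this (flags quoted from memory; run signtool /? to confirm the exact set on your SDK version): once an Authenticode certificate is in hand, the signing step itself is typically a one-liner with signtool from the Windows SDK, along the lines of signtool sign /f mycert.pfx /p mypassword /t http://timestamp.example.com myapp.exe, where the timestamp server URL is whatever your CA publishes. The /t timestamp matters more than it looks: it keeps the signature valid after the certificate itself expires.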

    Read the article

  • A faulty Caviar Blue hard drive?

    - by Glister
    We have a small "homemade" server running fully updated Debian Wheezy (amd64). One hard drive installed: WDC WD6400AAKS. The motherboard is ASUS M4N68T V2. The usual load: CPU: an average of 20%. Each week about 50GB of additional space is occupied: about 47GB of uploaded files and 3GB of MySQL data. I'm afraid that the hard drive may be about to fail. I saw Pre-fail in a few places when I ran:

root@SERVER:/tmp# smartctl -a /dev/sda
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-4-amd64] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family: Western Digital Caviar Blue Serial ATA
Device Model: WDC WD6400AAKS-XXXXXXX
Serial Number: WD-XXXXXXXXXXXXXXXXXXX
LU WWN Device Id: 5 0014ee XXXXXXXXXXXXX
Firmware Version: 01.03B01
User Capacity: 640,135,028,736 bytes [640 GB]
Sector Size: 512 bytes logical/physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: 8
ATA Standard is: Exact ATA specification draft version not indicated
Local Time is: Mon Oct 28 18:55:27 2013 UTC
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x85) Offline data collection activity was aborted by an interrupting command from host. Auto Offline Data Collection: Enabled.
Self-test execution status: ( 247) Self-test routine in progress... 70% of test remaining.
Total time to complete Offline data collection: (11580) seconds.
Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported. General Purpose Logging supported.
Short self-test routine recommended polling time: ( 2) minutes.
Extended self-test routine recommended polling time: ( 136) minutes.
Conveyance self-test routine recommended polling time: ( 5) minutes.
SCT capabilities: (0x303f) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
3 Spin_Up_Time 0x0027 157 146 021 Pre-fail Always - 5108
4 Start_Stop_Count 0x0032 098 098 000 Old_age Always - 2968
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 200 200 051 Old_age Always - 0
9 Power_On_Hours 0x0032 079 079 000 Old_age Always - 15445
10 Spin_Retry_Count 0x0032 100 100 051 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 100 051 Old_age Always - 0
12 Power_Cycle_Count 0x0032 098 098 000 Old_age Always - 2950
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 426
193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 2968
194 Temperature_Celsius 0x0022 111 095 000 Old_age Always - 36
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 160 000 Old_age Always - 21716
200 Multi_Zone_Error_Rate 0x0008 200 200 051 Old_age Offline - 0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 15444 -

Error SMART Read Selective Self-Test Log failed: scsi error aborted command
Smartctl: SMART Selective Self Test Log Read Failed

root@SERVER:/tmp#

In one tutorial I read that Pre-fail is an indication of coming failure; in another tutorial I read that this is not true. Can you guys help me decode the output of smartctl? It would also be nice to share suggestions on what I should do if I want to ensure data integrity (about 50GB of new data each week, up to 2TB for the whole period I'm interested in). Maybe I will go with 2x2TB Caviar Black in RAID4?
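To decode the worrying part (a standard smartctl reading, though worth checking against vendor docs): Pre-fail in the TYPE column labels the attribute's class, not an active prediction; an attribute only flags imminent failure when its normalized VALUE drops to its THRESH, and every Pre-fail attribute here (Raw_Read_Error_Rate, Spin_Up_Time, Reallocated_Sector_Ct) sits far above threshold with WHEN_FAILED empty, plus zero reallocated and zero pending sectors. The two readings that do stand out are UDMA_CRC_Error_Count at 21716, which usually points at the SATA cable or connector rather than the platters, and the aborted selective self-test log read. Reseating or swapping the SATA cable and then running a long self-test would be a cheap next step. On data integrity: with two drives you would effectively be looking at RAID1 anyway (RAID4 needs at least three disks), and mirroring only protects against single-drive failure, so a backup on a separate machine matters more than the RAID level.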

    Read the article

< Previous Page | 68 69 70 71 72 73 74 75 76 77 78  | Next Page >