Search Results

Search found 1726 results on 70 pages for 'digital compass'.


  • Organization & Architecture UNISA Studies – Chap 5

    - by MarkPearl
    Learning Outcomes: describe the operation of a memory cell; explain the difference between DRAM and SRAM; discuss the different types of ROM; explain the concepts of a hard failure and a soft error; describe SDRAM organization.
    Semiconductor Main Memory. The two traditional forms of RAM used in computers are DRAM and SRAM – that is, RAM is divided into two technologies, dynamic and static.
    DRAM (Dynamic RAM). Dynamic RAM is made with cells that store data as charge on capacitors. The presence or absence of charge in a capacitor is interpreted as a binary 1 or 0. Because capacitors have a natural tendency to discharge, dynamic RAM requires periodic charge refreshing to maintain data storage. The term dynamic refers to the tendency of the stored charge to leak away, even with power continuously applied. Although the DRAM cell is used to store a single bit (0 or 1), it is essentially an analogue device: the capacitor can store any charge value within a range, and a threshold value determines whether the charge is interpreted as a 1 or a 0.
    SRAM (Static RAM). SRAM is a digital device that uses the same logic elements used in the processor. In SRAM, binary values are stored using traditional flip-flop logic configurations. SRAM will hold its data as long as power is supplied to it; unlike DRAM, no refresh is required to retain data.
    SRAM vs. DRAM. A DRAM cell is simpler and smaller than an SRAM cell, so DRAM is denser and less expensive than SRAM. The cost of the refreshing circuitry for DRAM needs to be considered, but if the machine requires a large amount of memory, DRAM turns out to be cheaper than SRAM. SRAMs are somewhat faster than DRAM, so SRAM is generally used for cache memory and DRAM for main memory.
    Types of ROM. Read Only Memory (ROM) contains a permanent pattern of data that cannot be changed. ROM is non-volatile, meaning no power source is required to maintain the bit values in memory. While it is possible to read a ROM, it is not possible to write new data into it. An important application of ROM is microprogramming; other applications include library subroutines for frequently wanted functions, system programs, and function tables. A ROM is created like any other integrated circuit chip, with the data actually wired into the chip as part of the fabrication process. To reduce fabrication costs we have PROMs, which are non-volatile and written only once, after fabrication. Another variation of ROM is read-mostly memory, which is useful for applications in which read operations are far more frequent than write operations but for which non-volatile storage is required. There are three common forms of read-mostly memory: EPROM, EEPROM, and flash memory.
    Error Correction. Semiconductor memory is subject to errors, which can be classed into two categories: a hard failure is a permanent physical defect such that the affected memory cell or cells cannot reliably store data; a soft error is a random event that alters the contents of one or more memory cells without damaging the memory (common causes include power supply issues). Most modern main memory systems include logic for both detecting and correcting errors.
    Error detection works as follows: when data is to be written into memory, a calculation is performed on the data to produce a code, and both the code and the data are stored; when the previously stored word is read out, the code is used to detect and possibly correct errors. The error checking provides one of three possible results: no errors are detected and the fetched data bits are sent out; an error is detected and it is possible to correct it, in which case the data bits plus error-correction bits are fed into a corrector that produces a corrected set of bits to be sent out; or an error is detected but it is not possible to correct it, in which case the condition is reported.
    Hamming Code. See the wiki for a detailed explanation. We will probably need to know how to compute a Hamming code – refer to the textbook (pg. 188 – 189).
    Advanced DRAM Organization. One of the most critical system bottlenecks when using high-performance processors is the interface to main memory; this interface is the most important pathway in the entire computer system. The basic building block of main memory remains the DRAM chip. In recent years a number of enhancements to the basic DRAM architecture have been explored, and some of these are now on the market, including SDRAM (Synchronous DRAM), DDR-DRAM, and RDRAM.
    SDRAM (Synchronous DRAM). SDRAM exchanges data with the processor synchronized to an external clock signal and running at the full speed of the processor/memory bus without imposing wait states. SDRAM employs a burst mode to eliminate the address setup time and row and column line precharge time after the first access; in burst mode a series of data bits can be clocked out rapidly after the first bit has been accessed. SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism, and it performs best when transferring large blocks of data serially. There is now an enhanced version of SDRAM known as double data rate SDRAM (DDR-SDRAM) that overcomes the once-per-cycle limitation of SDRAM.
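    As a rough illustration of the "store a code with the data" idea above, here is a minimal Hamming(7,4) sketch in Java (the class and method names are ours, not from the course notes or textbook): four data bits are protected by three parity bits, and the syndrome computed on read-out points at the single flipped bit.

        // Minimal Hamming(7,4) illustration: 4 data bits + 3 parity bits.
        // Codeword positions 1..7 hold p1 p2 d1 p3 d2 d3 d4 (index 0 unused).
        public class Hamming74 {

            static int[] encode(int[] d) {             // d = {d1, d2, d3, d4}, each 0 or 1
                int[] c = new int[8];
                c[3] = d[0]; c[5] = d[1]; c[6] = d[2]; c[7] = d[3];
                c[1] = c[3] ^ c[5] ^ c[7];             // p1 covers positions 1, 3, 5, 7
                c[2] = c[3] ^ c[6] ^ c[7];             // p2 covers positions 2, 3, 6, 7
                c[4] = c[5] ^ c[6] ^ c[7];             // p3 covers positions 4, 5, 6, 7
                return c;
            }

            // Recompute the parity checks on read-out; 0 means no error detected,
            // any other value is the position of the flipped bit.
            static int syndrome(int[] c) {
                int s1 = c[1] ^ c[3] ^ c[5] ^ c[7];
                int s2 = c[2] ^ c[3] ^ c[6] ^ c[7];
                int s3 = c[4] ^ c[5] ^ c[6] ^ c[7];
                return s1 + 2 * s2 + 4 * s3;
            }

            public static void main(String[] args) {
                int[] word = encode(new int[] {1, 0, 1, 1});
                word[6] ^= 1;                          // simulate a soft error at position 6
                int pos = syndrome(word);
                if (pos != 0) word[pos] ^= 1;          // a single-bit error can be corrected
                System.out.println("error detected at position " + pos + " and corrected");
            }
        }

    A double-bit error would defeat this simple scheme; the codes used in real memory systems (e.g. SEC-DED) add an extra parity bit so that double errors are at least detected.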

    Read the article

  • 3 Trends for SMBs around Social, Mobile, and Sensor

    - by Socially_Aware_Enterprise
    While I am often talking to big companies or discussing enterprise solutions, there are times when individuals ask me about small or medium sized business trends. Interestingly, the enterprise Social, Mobile, and Sensor initiatives I regularly discuss are in fact relevant even to the Mom and Pop storefront. The ecosystem of new service players in the Social-Mobile-Sensor space generally emerges by developing partnerships with enterprises, bringing economies of scale to their services for the larger market. And of course Oracle has an entire division dedicated to delivering products and support that help emerging companies compete without the need to open an industrial-strength credit line. So here are some trends that we are helping large enterprises to deploy today, but that small and medium businesses should be able to take advantage of by the end of this year and into 2015. 1) The typical small business is generally "Localized", but the ability to be "Hyper-Localized" will come as location-based services become ubiquitous. Many small businesses have one or several storefronts, and these are typically within a single regional economic footprint. While the internet provides global reach, it will be the businesses that invest in social, mobile and local that will win in the end. Of course I am a huge SoMoLo evangelist. Content and targeting with platforms for Geo-Fencing, Geo-Conquesting and Path-Matching to HHI are all going to be accessible to SMBs, if not through mobile apps, then via mobile messaging in the social networks that offer it. Expect to be able to target Facebook messaging not by city, but by store or mall… This makes being able to be "Hyper-Local" even more important. And with new proximity services coming online more than ever before, SMBs will operate and service customers with pinpoint accuracy, right down to where they stand in an aisle. Geo-Conquesting will be huge for small players, letting them place ads when customers pass through competitors' regions; car dealers are doing this now. iBeacons, too, are now very cheap and getting easier to put in retail stores. The ability for sales to happen anywhere in the store via a mobile phone or tablet is huge, as it gives the small shop the flexibility not to have to "Guard the Register" as more or most transactions become digital. Thus, M-Commerce and T-Commerce will change the job of cashier dramatically. 2) Intra-Brand Advocacy: the idea now is that rather than just depending on your trusty social media manager and his team, you are going to push more and more individuals with expertise inside the organization to help manage, reach out on, and utilize social channels to handle the incoming questions and answers customers need. While for years CRM was the tool of the enterprise, today CRMs enable this "Salesforce et al" capability to trickle throughout the company. This puts greater pressure on organizing roles, but also flattens out the organization. Internal collaboration around topics and customer needs is going to be the key for SMBs to finally get serious about customer experiences. Their customers are online and in social networks. This includes not just B2C SMBs but B2B companies as well. Don't believe me? 
    To find the players, just use the hashtag #SocialSelling and you will see… 3) The visual networks will begin to move from content aggregators to content collaboration platforms, which means Pinterest, Instagram, Vine, and others will begin to add more of the features brands want – marketing platforms first, rather than the unique brand partnerships they offer today – and this will open ways for SMBs to engage with clear brand messaging and metrics, eventually providing more "Collaboration" between brand and consumer. Don't think for a minute Facebook bought Oculus Rift so you could see your timeline in 3-D. The social networks I advise customers to invest in are the ones that are intrinsically audio and visual. Players from SoundCloud to Pinterest are deploying ways for brands to harness their interactive visual or audio based social networks to sell ad units, aka brand messaging. While the social media revolution was going on, the emphasis was on the social; today it is more and more about the media in social, which enterprises – and soon small and medium businesses – will be connected to. 

    Read the article

  • JavaOne 2012 Sunday Strategy Keynote

    - by Janice J. Heiss
    At the Sunday Strategy Keynote, held at the Masonic Auditorium, Hasan Rizvi, EVP, Middleware and Java Development, stated that the theme for this year's JavaOne is: “Make the future Java” -- meaning that Java continues in its role as the most popular, complete, productive, secure, and innovative development platform. But it also means, he qualified, the process by which we make the future Java -- an open, transparent, collaborative, and community-driven evolution. "Many of you have bet your businesses and your careers on Java, and we have bet our business on Java," he said. Rizvi detailed the three factors they consider critical to the success of Java--technology innovation, community participation, and Oracle's leadership/stewardship. He offered a scorecard in these three realms over the past year--with OS X and Linux ARM support on Java SE, open sourcing of JavaFX by the end of the year, the release of the Java Embedded Suite 7.0 middleware platform, and multiple releases on the Java EE side. The JCP process continues, with new JSR activity, and JUGs show a 25% increase in participation since last year. Oracle, meanwhile, continues its commitment to both technology and community development/outreach--with four regional JavaOne conferences last year in various parts of the world, as well as the release of Java Magazine, with over 120,000 current subscribers. Georges Saab, VP Development, Java SE, next reviewed features of Java SE 7--the first major revision to the platform under Oracle's stewardship, which has included near-monthly update releases offering hundreds of fixes, performance enhancements, and new features. Saab indicated that developers, ISVs, and hosting providers have all been rapid adopters of the platform. He also noted that Oracle's entire Fusion middleware stack is supported on SE 7. The supported platforms for SE 7 have also increased--from Windows, Linux, and Solaris, to OS X, Linux ARM, and the emerging ARM micro-server market. "In the last year, we've added as many new platforms for Java as were added in the previous decade," said Saab. Saab also explored the upcoming JDK 8 release--including Project Lambda, Project Nashorn (a modern implementation of JavaScript running on the JVM), and others. He noted that Nashorn functionality had already been used internally in NetBeans 7.3, and announced that they were planning to contribute the implementation to OpenJDK. Nandini Ramani, VP Development, Java Client, ME and Card, discussed the latest news pertaining to JavaFX 2.0--releases on Windows, OS X, and Linux, release of the FX Scene Builder tool, the JavaFX WebView component in NetBeans 7.3, and an OpenJFX project in OpenJDK. Nandini announced, as of Sunday, the availability for download of JavaFX on Linux ARM (developer preview), as well as Scene Builder on Linux. She noted that for next year's JDK 8 release, JavaFX will offer 3D, as well as third-party component integration. Avinder Brar, Senior Software Engineer, Navis, and Dierk König, Canoo Fellow, next took the stage and demonstrated all that JavaFX offers, with a feature-rich, animation-rich, real-time cargo management application that employs Canoo's just open-sourced Dolphin technology. Saab also explored Java SE 9 and beyond--Jigsaw modularity, the Penrose Project for interoperability with OSGi, improved multi-tenancy for Java in the cloud, and Project Sumatra. 
    Phil Rogers, HSA Foundation President and AMD Corporate Fellow, explored heterogeneous computing platforms that combine the CPU and the parallel processor of the GPU into a single piece of silicon and shared memory—a hardware technology driven by such advanced functionalities as HD video, face recognition, and cloud workloads. Project Sumatra is an OpenJDK project targeted at bringing Java to such heterogeneous platforms--with hardware and software experts working together to modify the JVM for these advanced applications and platforms. Ramani next discussed the latest with Java in the embedded space--"the Internet of things" and M2M--declaring this to be "the next IT revolution," with Java as the ideal technology for the ecosystem. Last week, Oracle released Java ME Embedded 3.2 (for micro-controllers and low-power devices), and Java Embedded Suite 7.0 (a middleware stack based on Java SE 7). Axel Hansmann, VP Strategy and Marketing, Cinterion, explored his company's use of Java in M2M, and their new release of EHS5, the world's smallest 3G-capable M2M module, running Java ME Embedded. Hansmann explained that Java offers them the ability to create a "simple to use, scalable, coherent, end-to-end layer" for such diverse edge devices. Marc Brule, Chief Financial Officer, Royal Canadian Mint, also explored the fascinating use-case of JavaCard in his country's MintChip e-cash technology--deployable on smartphones, USB devices, computers, tablets, or the cloud. In parting, Ramani encouraged developers to download the latest releases of Java Embedded, and try them out. Cameron Purdy, VP, Fusion Middleware Development and Java EE, summarized the latest developments and announcements in the Enterprise space--greater developer productivity in Java EE 6 (with more on the way in EE 7), portability between platforms, vendors, and even cloud-to-cloud portability. The earliest version of the Java EE 7 SDK is now available for download--in GlassFish 4--with WebSocket support, better JSON support, and more. The final release is scheduled for April of 2013. Nicole Otto, Senior Director, Consumer Digital Technology, Nike, explored her company's Java technology driven enterprise ecosystem for all things sports, including the NikeFuel accelerometer wrist band. Looking beyond Java EE 7, Purdy mentioned NoSQL database functionality for EE 8, the concurrency utilities (possibly in EE 7), some of the Avatar projects in EE 7, some in EE 8, multi-tenancy for the cloud, supporting SaaS applications, and more. Rizvi ended by introducing Dr. Robert Ballard, oceanographer and National Geographic Explorer in Residence--part of Oracle's philanthropic relationship with the National Geographic Society to fund K-12 education around ocean science and conservation. Ballard is best known for having discovered the wreckage of the Titanic. He offered a fascinating video and overview of the cutting edge technology used in such deep-sea explorations, noting that in his early days, high-bandwidth exploration meant that you’d go down in a submarine and "stick your face up against the window." Now, it's a remotely operated technology telepresence--"I think of my Hercules vehicle as my equivalent of a Na'vi. When I go beneath the sea, I actually send my spirit." Using high-bandwidth satellite links, such amazing explorations can now occur via smartphone, laptop, or whatever platform. Ballard’s team regularly offers live feeds and programming out to schools and the world, spanning 188 countries--with educators embedded as part of the expeditions. 
It's technology at its finest, inspiring the next generation of scientists and explorers!

    Read the article

  • Books are Dead! Long Live the Books!

    - by smisner
    We live in interesting times with regard to the availability of technical material. We have lots of free written material online in the form of vendor documentation, forums, blogs, and Twitter. And we have written material that we can buy in the form of books, magazines, and training materials. Online videos and training – some free and some not free – are also an option. All of these formats are useful for one need or another. As an author, I pay particular attention to the demand for books, and for now I see no reason to stop authoring books. I assure you that I don’t get rich from the effort, and fortunately that is not my motivation. As someone who likes to refer to books frequently, I am still a big believer in books and have evidence from book sales that there are others like me. If I can do my part to help others learn about the technologies I work with, I will continue to produce content in a variety of formats, including books. (You can view a list of all of my books on the Publications page of my site and my online training videos at Pluralsight.) As a consumer of technical information, I prefer books because a book typically can get into a topic much more deeply than a blog post, and can provide more context than vendor documentation. It comes with a table of contents and a (hopefully accurate) index that helps me zero in on a topic of interest, and of course I can use the Search feature in digital form. Some people suggest that technology books are outdated as soon as they get published. I guess it depends on where you are with technology. Not everyone is able to upgrade to the latest and greatest version at release. I do assume, however, that the SQL Server 7.0 titles in my library have little value for me now, but I’m certain that the minute I discard the book, I’m going to want it for some reason! Meanwhile, as electronic books overtake physical books in sales, my husband is grateful that I can continue to build my collection digitally rather than physically, as the books have a way of taking over significant square footage in our house! Blog posts, on the other hand, are useful for describing the scenarios that come up in real-life implementations that wouldn’t fit neatly into a book. For as many years as I have been working with the Microsoft BI stack, I still run into new problems that require creative thinking. Likewise, people who work with BI and other technologies that I use share what they learn through their blogs. Internet search engines help us find information in blogs that simply isn’t available anywhere else. Another great thing about blogs is the connection to community and the dialog that can ensue between people with common interests. With the trend towards electronic formats for books, I imagine that we’ll see books continue to adapt to incorporate different forms of media and better ways to keep the information current. At the moment, I wish I had a better way to help readers with my last two Reporting Services books. In the case of the Microsoft® SQL Server™ 2005 Reporting Services Step by Step book, I have heard many cases of readers having problems with the sample database that shipped on CD – either the database was missing or it was corrupt. So I’ve provided a copy of the database on my site for download from http://datainspirations.com/uploads/rs2005sbsDW.zip. 
Then for the Microsoft® SQL Server™ 2008 Reporting Services Step by Step book, we decided to avoid the database problem by using the AdventureWorks2008 samples that Microsoft published on Codeplex (although code samples are still available on CD). We had this silly idea that the URL for the download would remain constant, but it seems that expectation was ill-founded. Currently, the sample database is found at http://msftdbprodsamples.codeplex.com/releases/view/37109 but I have no idea how long that will remain valid. My latest books (#9 and #10, which are milestones I never anticipated), Building Integrated Business Intelligence Solutions with SQL Server 2008 R2 and Office 2010 (McGraw Hill, 2011) and Business Intelligence in Microsoft SharePoint 2010 (Microsoft Press, 2011), will not ship with a CD, but will provide all code samples for download at a site maintained by the respective publishers. I expect that the URLs for the downloads for the books will remain valid, but there are lots of references to other sites that can change or disappear over time. Does that mean authors shouldn’t make reference to such sites? Personally, I think the benefits to be gained from including links are greater than the risks of the links becoming invalid at some point. Do you think the time for technology books has come to an end? Is the delivery of books in electronic format enough to keep them alive? If technological barriers were no object, what would make a book more valuable to you than other formats through which you can obtain information?

    Read the article

  • Windows for IoT, continued

    - by Valter Minute
    Originally posted on: http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2014/08/05/windows-for-iot-continued.aspx
    I received much interesting feedback on my previous blog post, and I tried to find some time to do some additional tests. Bert Kleinschmidt pointed out that pins 2, 3 and 10 of the Galileo are connected directly to the SOC, while pin 13, the one used for the sample sketch, is controlled via an I2C I/O expander. I changed my code to use pin 2 instead of 13 (just changing the variable assignment at the beginning of the code) and latency was greatly reduced. Now each pulse lasts for 1.44ms, 44% more than the expected time, but way better than the result we got using pin 13. I also used SetThreadPriority to increase the priority of the thread that was running the sketch to THREAD_PRIORITY_HIGHEST, but that didn't change the results. When I was using the I2C-controlled pin I tried the same and the timings got way worse (increasing more than 10 times), so I did not comment on that part, wanting to investigate the issue in a bit more detail. It seems that increasing the priority of the application thread negatively impacts the I2C communication. I also tried the Linux-based implementation (using a different Galileo board, since the one provided by MS seems to use a different firmware), and the results of running the sample blink sketch modified to use pin 2 and blink the LED for 1ms are similar to those we got on the same board running Windows. Here the difference between expected time and measured time is worse, getting around 3.2ms instead of 1ms (320%, compared to 150% using Windows, but far from the 100.1% we got with the 8-bit Arduino). Neither system was under load during the test; loading some applications that use part of the CPU time would probably make those timings even less reliable, but I think that those numbers are enough to draw some conclusions. It may not be worth running a full OS if what you need is Arduino compatibility. The Arduino UNO is probably the best Arduino you can find to perform this kind of development. The Galileo running the Linux-based stack or running Windows for IoT is targeted to be a platform for "Internet of Things" devices, whatever that means. At the moment I don't see the "I" part of IoT. We have low-level interfaces (SPI, I2C, the GPIO pins) that can be used to connect sensors, but the support for connectivity is limited, the amount of work required to deliver some data to the cloud (using a secure HTTP request or a message queuing system like AMQP or MQTT) is still big, and the rich OS underneath does not seem to provide any help doing that. Why should I have to use sockets, without access to all the high-level connectivity features we have on "full" Windows? I know that it's possible to use some third-party libraries, try to build them using the Windows for IoT SDK etc., but this means re-inventing the wheel every time and can also lead to some IP concerns if used for products meant to be closed-source. I hope that MS and Intel (and others) will focus less on the "coolness" of running (some) Arduino sketches and more on providing a better platform to people who really want to design devices that leverage internet connectivity and cloud processing power to deliver better products and services. Providing a reliable set of connectivity services would be a great start. Providing support for .NET would be even better, leaving native code available for hardware access etc. 
I know that those components may require additional storage and memory etc., so making the OS componentizable (or, at least, providing a way to install additional components) would be a great way to let developers pick the parts of the system they need to develop their solution, knowing that they will integrate well together. I can understand that the success of the Arduino and the Raspberry Pi* may have attracted the attention of marketing departments worldwide, and almost any new development board these days is promoted as "XXX response to Arduino" or "YYYY alternative to Raspberry Pi", but this is misleading and prevents companies from focusing on how to deliver good products and how to integrate "IoT" features with their existing offering to provide, in the end, a better product or service to their customers. Marketing is important, but it can't decide the key features of a product (the OS) that is going to be used to develop full products for end customers, integrating it with hardware and application software. I really like the "hackable" nature of open-source devices and like to see that companies are getting more and more open in releasing information, providing "hackable" devices and supporting developers with documentation, good samples etc. On the other hand, being able to run a sketch designed for an 8-bit microcontroller on a full-featured application processor may sound cool and an easy upgrade path for people who have just experimented with sensors etc. on Arduino, but it's not, in my humble opinion, the main path to follow for people who want to deliver real products.   *Shameless self-promotion: if you are looking for a good book in Italian about the Raspberry Pi, try mine: http://www.amazon.it/Raspberry-Pi-alluso-Digital-LifeStyle-ebook/dp/B00GYY3OKO

    Read the article

  • Drive Online Engagement with Intuitive Portals and Websites

    - by kellsey.ruppel
    As more and more business is being conducted via online channels, engaging users and making them more productive and efficient through these online channels is becoming critical. These users could be customers, partners or employees, and while the respective channels through which they interact might be different, these users do increasingly interact with your business through the Web, mobile devices, or now through various social media. Businesses need a user engagement strategy and solution that allows them to deliver targeted and personalized content and applications to users through the various online mediums and touch points. The customer experience today is made up of an ongoing set of interactions with organizations across many channels, online and offline. The Direct channel (including sales reps, email and mail) is an important point of contact, as is the Contact Center. Contact Centers rely on the phone as a means of interacting with customers, and also, more now than ever, the Web as well. However, the online organization is often managed separately from the Contact Center organization within a business. In-store is an important channel for retailers, offering Point-of-Service for human interactions, and kiosks which enable self-service. Kiosks are a Web-enabled touch point, but in-store kiosks are often managed by the head of retail operations, rather than the online organization. And of course the online channel, including customer interactions with an organization via digital means -- on the website, mobile websites, and social networking sites -- has risen to paramount importance in recent years in the customer experience. Historically all of these channels have been managed separately. The result of all of this fragmentation is that the customer touch points with an organization are siloed. Their interactions online are not known and respected in their dealings in-store. Their calls to the contact center are not taken as input into what the website offers them when they arrive. Think of how many times you’ve fallen victim to this. Your experience with the company call center is different than the experience in-store. Your experience with the company website on your desktop computer is different than your experience on your iPad. I think you get the point. But the customer isn’t the only one we need to look at here, as employees and the IT organization have challenges as well when it comes to online engagement. There are many common tools and technologies that organizations have been using to try and engage users, whether it’s customers, employees or partners. Some have adopted different blog and wiki technologies (some hosted, some open source, sometimes embedded in platforms), as well as things like tagging, file sharing and content management, or composite applications for self-service applications and activity streams. Basically, there are so many different tools & technologies that each address different aspects of user engagement. Now, one of the challenges with this is that if we look at each individual tool – for example, just implementing a file sharing and basic collaboration solution – it may meet the needs of the business user for one aspect of user engagement, but it may not be the best solution to engage with customers and partners, or it may not fit with IT standards such as integrating with their single sign-on tools or their corporate website. 
    Often, the scenario is that businesses are having to acquire multiple pieces and parts as well as build custom applications to meet their needs, leaving customers and partners with a more fragmented way of interacting with the company. Every organization has some sort of enterprise balancing act between the needs of the business user and the needs and restrictions enforced by enterprise IT groups. As we’ve been discussing, we all know that the expectations for online engagement have changed since the days of the static, one-size-fits-all website. With these changes have come some very difficult organizational challenges as well. Today, as a business user, you want to engage with your customers, and your customers expect you to know who they are. They expect you to recall the details they’ve provided to you on your website, to your CSRs and to your sales people. They expect you to remember their purchases, their preferences and their problems. And they expect you to know who they are, equally well, across channels, including your web presence. This creates a host of challenges for today’s business users. Delivering targeted, relevant content online is now essential for converting prospects into customers and for engendering long term loyalty. Business users need the ability to leverage customer data from different sources to fuel their segmentation and targeting strategies and to easily set up, manage and optimize online campaigns. Also critical, they need the ability to accomplish these things on the fly, at the speed of the marketplace, while making iterative improvements. These changing expectations put a host of demands on the IT organization as well. The web presence must be able to scale to support the delivery of personalized and targeted content to thousands of site visitors without sacrificing performance. And integration between systems becomes more important as well, as organizations strive to obtain one view of the customer culled from WCM data, CRM data and more. So then, how do you solve these challenges and meet the growing demands of your users? You need a solution that:
    - Unifies every customer interaction across all channels
    - Personalizes the products and content that interest the customer, tailored to the device
    - Delivers targeted promotions to the right customer
    - Engages and improves employee productivity
    - Provides self-service access to applications
    - Includes embedded, in-context social capabilities
    So how, then, do you achieve this level of online engagement, deliver a complete customer experience and engage your employees? The answer: Oracle WebCenter. If you want to learn how to get there, we encourage you to attend this webcast on Thursday, Drive Online Engagement with Intuitive Portals and Websites, where we'll talk about how you are able to transform your portal experience and optimize online engagement -- making your portals more interactive and more engaging across multiple channels. Register today!

    Read the article

  • Content in Context: The right medicine for your business applications

    - by Lance Shaw
    For many of you, your companies have already invested in a number of applications that are critical to the way your business is run. HR, Payroll, Legal, Accounts Payable – and while they might need an upgrade in some cases, they are all there and handling the lifeblood of your business. But are they really running as efficiently as they could be? For many companies, the answer is no. The problem has to do with the important information caught up within documents and paper. It’s everywhere except where it truly needs to be – readily available right within the context of the application itself. When the right information cannot be easily found, business processes suffer significantly. The importance of this struck me recently when I went to meet my new doctor and get a routine physical. Walking into the office lobby, I couldn't help but notice rows and rows of manila folders in racks from floor to ceiling, filled with documents and sensitive, personal information about various patients like myself. As I looked at all that paper and all that history, two things immediately popped into my head: “How do they find anything?” and then the even more alarming, “So much for information security!” It sure looked to me like all those documents could be accessed by anyone with a key to the building. Now the truth is that the offices of many general practitioners look like this all over the United States and the world. But it had me thinking: is the same thing going on in just about any company around the world, involving a wide variety of important business processes? Probably so. Think about all the various processes going on in your company right now. Invoice payments are being processed through Accounts Payable, contracts are being reviewed by Procurement, and Human Resources is reviewing job candidate submissions and doing background checks. All of these processes and many more like them rely on access to forms and documents, whether they are paper or digital. Now consider that it is estimated that employees spend nearly 9 hours a week searching for information and not finding it. That is a lot of very well paid employees spending more than one day per week not doing their regular job while they search for or re-create what already exists. Back in the doctor’s office, I saw this trend exemplified as well. First, I had to fill out a new patient form, even though my previous doctor had transferred my records over months previously. After filling out the form, I was later introduced to my new doctor, who then interviewed me and asked me the exact same questions that I had answered on the form. I understand that there is value in the interview process and it was great to meet my new doctor, but this simple process could have been so much more efficient if the information already on file could have been brought directly together with the new patient information I had provided. Instead of having a highly paid medical professional re-enter the same information into the records database, the form I filled out could have been immediately scanned into the system, associated with my previous information, discrepancies identified, and the entire process streamlined significantly. 
We won’t solve the health records management issues that exist in the United States in this blog post, but this example illustrates how the automation of information capture and classification can eliminate a lot of repetitive and costly human entry and re-creation, even in a simple process like new patient on-boarding. In a similar fashion, by taking a fresh look at the various processes in place today in your organization, you can likely spot points along the way where automating the capture of and access to the right information could significantly improve things. As you evaluate how content and process flow through your organization, take a look at how departments and regions share information between the applications they are using. Business applications are often implemented on an individual department basis to solve specific problems, but a holistic approach to overall information management is not taken at the same time. The end result over the years is disparate applications with separate information repositories, and in many cases these contain duplicate information, or worse, slightly different versions of the same information. This is where Oracle WebCenter Content comes into the story. More and more companies are realizing that they can significantly improve their existing application processes by automating the capture of paper, forms and other content. This makes the right information immediately accessible in the context of the business process, and making the same information accessible across departmental systems has helped many organizations realize significant cost savings. Here on the Oracle WebCenter team, one of our primary goals is to help customers find new ways to be more effective, more cost-efficient and manage information as effectively as possible. We have a series of three webcasts occurring over the next few weeks that are focused on the integration of enterprise content management within the context of business applications. We hope you will join us for one or all three and that you will find them informative. Click here to learn more about these sessions and to register for them. There are many aspects of information management to consider as you look at integrating content management within your business applications. We've barely scratched the surface here, but look for upcoming blog posts where we will discuss more specifics on the value of delivering documents, forms and images directly within applications like Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards Enterprise One, Siebel CRM and many others. What do you think? Are your important business processes as healthy as they can be? Do you have any insights to share on the value of delivering content directly within critical business processes? Please post a comment and let us know the value you have realized, the lessons learned and what specific areas you are interested in.

    Read the article

  • Eloqua Experience 2013: Mystique, Modern Marketing and Masterful Engagement

    - by Mike Stiles
    The following is a guest post from Erick Mott, a social business leader at Oracle Eloqua. There’s a growing gap between 20th century marketing and a modern marketing way of doing business. I can’t think of a better example of modern marketing in action than what more than 2,000 people experienced in San Francisco at #EE13: customer obsession, multichannel content, and real-time engagement all coming together at one extraordinary event. This was my first Eloqua Experience as a new Oracle Eloqua employee. In weeks prior, I heard about the mystique but didn’t know what to expect. What I’ve come to understand with more clarity is that everything we do revolves around customer success, and we operate and educate at all times with these five tenets in mind:
    1. Targeting: Really Know Your Buyer
    2. Engagement: Create a 1:1 Relationship
    3. Conversion: Visualize Guided Thinking
    4. Analysis: Learn What’s Working
    5. Marketing Technology: Enable and Extend the Cloud
    Product News from Eloqua Experience 2013. We made some announcements that John Stetic, VP of Products, Oracle Eloqua, covers in this brief ‘Modern Marketing Minute’ video recorded after Wednesday’s keynote; they are summarized below, too: Oracle Eloqua AdFocus: While understanding the impact of a specific marketing channel was formerly relegated to marketers’ wish lists, the channels we now focus on are digital, social, and mobile. AdFocus gives marketers a single platform to dynamically create, manage and measure display ads alongside owned and earned media. AdFocus enables marketers to target only the key accounts or prospects you want to reach with display ads, as well as provide creative content or personalized ad copy based on their persona and activities. Oracle Eloqua Profiler: The details of what we now know about customers have expanded into a universal customer profile, which can be used to create highly targeted segments. Marketers now can take data that’s not even stored in Eloqua to help target and score prospects for a complete, multichannel view of the customer. Profiler gives sales reps one detailed view of the prospect, extending views beyond Oracle Eloqua asset activity (emails, forms, page views) to any external assets stored in Oracle Eloqua. Marketing Resource Management: New capabilities create more secure and controlled access to marketing resources and data. New integrations provide greater insight into campaign resources and management through a central marketing calendar and simplify resource management. Integrated Sales and Marketing Funnel: An integrated sales and marketing funnel view gives marketing and sales users, cross-functional teams, and executive management a consistent and clear view of pipeline performance. It also quickly provides users with historical metrics across different time spans and conditions. Eloqua AppCloud: More than 20 new AppCloud partners have been added to the community, which now includes 100+ apps. Eloqua AppCloud now provides modern marketers with an even broader range of marketing applications that help expand and enrich sales and marketing efforts, easily accessible in the Topliners Community. Social Capabilities: Recent integration between Oracle Eloqua and Oracle Social Relationship Management (SRM) delivers a comprehensive, scalable and integrated modern marketing solution. New capabilities include better tracking of social activities for a more complete customer profile, and the ability to engage Facebook custom audiences with AdFocus to deliver ads and meaningful experiences through trusted social networks. 
Biggest and Best Eloqua Experience. There’s a lot of talk in the industry about the Marketing Cloud. At Oracle Eloqua, we have been on a mission of delivering the most advanced and integrated modern marketing technology on the planet. It’s not just a concept but reality with proven execution, as seen first-hand this week in San Francisco. In this video, Kevin Akeroyd, SVP of Oracle Eloqua, provides some highlights of what made this year’s Eloqua Experience exceptional, including Steve Woods’ presentation about the journey of modern marketers and Andrea Ward’s conversation with Vince Gilligan, creator of the Breaking Bad television series. The 2013 Markie Awards. The Oracle Eloqua Marketing Cloud was best exemplified for me as 19 Markies were awarded to customers for their exceptional creativity and results as modern marketers. Wow, what a night to remember with so many committed and talented people working to create an extraordinary experience! To learn more about how to become a modern marketer, check out these resources. We look forward to seeing you next year at Eloqua Experience. More on Erick: 20 years of experience at Oracle, Ektron, Sitecore, Lyris, Habeas, Nokia, creatorbase, Mark Monitor, Cisco Systems, GlobalFluency, Sun Microsystems, Philips NV, Elm Products and CBS TV. Patent holder with agency, Fortune 500, media, and startup company expertise. @mikestiles

    Read the article

  • Why won't USB 3.0 external hard drive run at USB 3.0 speeds?

    - by jgottula
    I recently purchased a PCI Express x1 USB 3.0 controller card (containing the NEC USB 3.0 controller) with the intent of using a USB 3.0 external hard drive with my Linux box. I installed the card in an empty PCIe slot on my motherboard, connected the card to a power cable, strung a USB 3.0 cable between one of the new ports and my external HDD, and connected the HDD to a wall socket for power. Booting the system, the drive works 100% as intended, with the one exception of throughput: rather than using SuperSpeed 4.8 Gbps connectivity, it seems to be falling back to High Speed 480 Mbps USB 2.0-style throughput. Disk Utility shows it as a 480 Mbps device, and running a couple Disk Utility and dd benchmarks confirms that the drive fails to exceed ~40 MB/s (the approximate limit of USB 2.0), despite it being an SSD capable of far more than that. When I connect my USB 3.0 HDD, dmesg shows this:
    [ 3923.280018] usb 3-2: new high speed USB device using ehci_hcd and address 6
    where I would expect to find this:
    [ 3923.280018] usb 3-2: new SuperSpeed USB device using xhci_hcd and address 6
    My system was running on kernel 2.6.35-25-generic at the time. Then, I stumbled upon this forum thread by an individual who found that a bug, which was present in kernels prior to 2.6.37-rc5, could be the culprit for this type of problem. Consequently, I installed the 2.6.37-generic mainline Ubuntu kernel to determine if the problem would go away. It didn't, so I tried 2.6.38-rc3-generic, and even the 2.6.38 nightly from 2010.02.01, to no avail. In short, I'm trying to determine why, with USB 3.0 support in the kernel, my USB 3.0 drive fails to run at full SuperSpeed throughput. See the comments under this question for additional details. Output that might be relevant to the problem (when booting from 2.6.38-rc3):
    Relevant lines from dmesg:
    [ 19.589491] xhci_hcd 0000:03:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
    [ 19.589512] xhci_hcd 0000:03:00.0: setting latency timer to 64
    [ 19.589516] xhci_hcd 0000:03:00.0: xHCI Host Controller
    [ 19.589623] xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 12
    [ 19.650492] xhci_hcd 0000:03:00.0: irq 17, io mem 0xf8100000
    [ 19.650556] xhci_hcd 0000:03:00.0: irq 47 for MSI/MSI-X
    [ 19.650560] xhci_hcd 0000:03:00.0: irq 48 for MSI/MSI-X
    [ 19.650563] xhci_hcd 0000:03:00.0: irq 49 for MSI/MSI-X
    [ 19.653946] xHCI xhci_add_endpoint called for root hub
    [ 19.653948] xHCI xhci_check_bandwidth called for root hub
    Relevant section of sudo lspci -v:
    03:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03) (prog-if 30)
      Flags: bus master, fast devsel, latency 0, IRQ 17
      Memory at f8100000 (64-bit, non-prefetchable) [size=8K]
      Capabilities: [50] Power Management version 3
      Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+
      Capabilities: [90] MSI-X: Enable+ Count=8 Masked-
      Capabilities: [a0] Express Endpoint, MSI 00
      Capabilities: [100] Advanced Error Reporting
      Capabilities: [140] Device Serial Number ff-ff-ff-ff-ff-ff-ff-ff
      Capabilities: [150] #18
      Kernel driver in use: xhci_hcd
      Kernel modules: xhci-hcd
    Relevant section of sudo lsusb -v:
    Bus 012 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Device Descriptor:
      bLength 18
      bDescriptorType 1
      bcdUSB 3.00
      bDeviceClass 9 Hub
      bDeviceSubClass 0 Unused
      bDeviceProtocol 3
      bMaxPacketSize0 9
      idVendor 0x1d6b Linux Foundation
      idProduct 0x0003 3.0 root hub
      bcdDevice 2.06
      iManufacturer 3 Linux 2.6.38-020638rc3-generic xhci_hcd
      iProduct 2 xHCI Host Controller
      iSerial 1 0000:03:00.0
      bNumConfigurations 1
      Configuration Descriptor:
        bLength 9
        bDescriptorType 2
        wTotalLength 25
        bNumInterfaces 1
        bConfigurationValue 1
        iConfiguration 0
        bmAttributes 0xe0 Self Powered Remote Wakeup
        MaxPower 0mA
        Interface Descriptor:
          bLength 9
          bDescriptorType 4
          bInterfaceNumber 0
          bAlternateSetting 0
          bNumEndpoints 1
          bInterfaceClass 9 Hub
          bInterfaceSubClass 0 Unused
          bInterfaceProtocol 0 Full speed (or root) hub
          iInterface 0
          Endpoint Descriptor:
            bLength 7
            bDescriptorType 5
            bEndpointAddress 0x81 EP 1 IN
            bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data
            wMaxPacketSize 0x0004 1x 4 bytes
            bInterval 12
    Hub Descriptor:
      bLength 9
      bDescriptorType 41
      nNbrPorts 4
      wHubCharacteristic 0x0009 Per-port power switching Per-port overcurrent protection
      TT think time 8 FS bits
      bPwrOn2PwrGood 10 * 2 milli seconds
      bHubContrCurrent 0 milli Ampere
      DeviceRemovable 0x00
      PortPwrCtrlMask 0xff
    Hub Port Status:
      Port 1: 0000.0100 power
      Port 2: 0000.0100 power
      Port 3: 0000.0100 power
      Port 4: 0000.0100 power
    Device Status: 0x0003 Self Powered Remote Wakeup Enabled
    Full, non-verbose lsusb:
    Bus 012 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 011 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 010 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 009 Device 003: ID 04d9:0702 Holtek Semiconductor, Inc.
    Bus 009 Device 002: ID 046d:c068 Logitech, Inc. G500 Laser Mouse
    Bus 009 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 006: ID 174c:5106 ASMedia Technology Inc.
    Bus 003 Device 004: ID 0bda:0151 Realtek Semiconductor Corp. Mass Storage Device (Multicard Reader)
    Bus 003 Device 002: ID 058f:6366 Alcor Micro Corp. Multi Flash Reader
    Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 006: ID 1687:0163 Kingmax Digital Inc.
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 002: ID 046d:081b Logitech, Inc.
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Full output: full dmesg full lspci full lsusb

    Read the article

  • Some Early Considerations

    - by Chris Massey
    Following on from my previous post, I want to say "thank you" to everyone who has got in touch and got involved – you are pioneers! An update on where we are right now: paper prototypes v1. To be more specific, we’ve picked two of the ideas that seem to have more pros than cons, turned them into Balsamiq mockups, and are getting them fleshed out with realistic content. We’ll initially make these available to the aforementioned pioneers (thank you again), roll in the feedback, and then open up to get more data on what works and what doesn’t. If you’ve got any questions about this (or what we’re working on right now), feel free to ask me in the comments below. I’ve had a few people express an interest in the process we’re going through, and I’m more than happy to share details more frequently as we go along – not least because you, dear reader, will help us stay on target and create something Good. To start with, here’s a quick flashback to bring you all up to speed. A Brief Retrospective. As you may already know, we’re creating a new publishing asset specifically focused on providing great content for web developers. We don’t yet know exactly what this thing will look like, or exactly how it will work, but we know we want to create something that is useful and different. For my part, I’m seriously excited at the prospect of building a genuinely digital publishing system (as opposed to what most publishing is these days, which is print-style publishing which just happens to be on the web). The main challenge at this point is working out our build-measure-assess loop to speed up our experimental turn-around, and that’ll get better as we run more trials. Of course, there are a few things we’ve been pondering at this early conceptual stage: Do we publish about heterogeneous technology stacks from day 1, or do we start with ASP.NET (which we’re familiar with) and branch out later? There are challenges with either approach. What publishing "modes" are already being well-handled? For example, the likes of Pluralsight, TekPub, and Treehouse have pretty much nailed video training (debate about price, if you like), and unless we think we can do it faster / better / cheaper (unlikely, for the record), we should leave them to it. Where should we base whatever we create? Should we create a completely new asset under a new name, graft something onto Simple-Talk (like the labs), or just build something directly into Simple-Talk? It sounds trivial, but it does have at least some impact on infrastructure and on how we manage the different types of content we (will) have. Are there any obvious problems or niches that we think we could address really well, or should we just throw ideas out and see what readers respond to? What kind of users do we want to provide for? This actually deserves a little bit of unpacking… Why are you here? We currently divide readers into (broadly) these categories:
    Category 1: I know nothing about X, and I’d like to learn about it.
    Category 2: I know something about X, but I’d like to learn how to do something specific with it.
    Category 3: Ah man, I have a problem with X, and I need to fix it now.
    Now that I think about it, I might also include a 4th class of reader:
    Category 4: I’m looking for something interesting to engage my brain.
    These are clearly task-based categorizations, and depending on which task you’re performing when you arrive here, you’re going to need different types of content, or will have specific discovery needs. 
One of the questions that’s at the back of my mind whenever I consider a new idea is “How many of the categories will this satisfy?” As an example, typical video training is very well suited to categories 1, 2, and 4. StackOverflow is very well suited to category 3, and serves as a sign-posting system to the rest. Clearly it’s not necessary to satisfy every category of need to be useful and popular, but being aware of what behavior readers might be exhibiting when they arrive will help us tune our ideas appropriately. < / Flashback > We don’t have clean answers to most of these considerations – they’re things we’re aware of, and each idea we look at is going to be best suited to a different mix of the options I’ve described. Our first experimental loop will be coming full circle in the next few days, so we should start to see how the different possibilities vary between ideas. Feel free to chime in with questions and suggestions about anything I’ve just brain-dumped, or at any stage as we go along. If you see anything that intrigues or enrages you, or just have an idea you’d like to share, I’d love to hear from you.

    Read the article

  • Revisiting the Generations

    - by Row Henson
    I was asked earlier this year to contribute an article to the IHRIM publication – Workforce Solutions Review. My topic focused on the reality of the Gen Y population 10 years after their entry into the workforce. Below is an excerpt from that article: It seems like yesterday that we were all talking about the entry of the Gen Y'ers into the workforce and what a radical effect that would have on how we attract, retain, motivate, reward, and engage this new, younger segment of the workforce. We all heard and read that these youngsters would be more entrepreneurial than their predecessors – the Gen X'ers – who were said to be more loyal to their profession than their employer. And, we heard that these “youngsters” would certainly be far less loyal to their employers than the Baby Boomers or even earlier Traditionalists. It was also predicted that – at least for the developed parts of the world – they would be more interested in work/life balance than financial reward; they would need constant and immediate reinforcement and recognition, and we would be lucky to have them in our employment for two to three years. And, to keep them longer than that, we would need to promote them often so they would be continuously learning, since their long-term (10-year) goal would be to own their own business or be an independent consultant. Well, it occurred to me recently that the first of the Gen Y'ers are now in their early 30s and it is time to look back on some of these predictions. Many really believed the Gen Y'ers would enter the workforce with an attitude – expecting everything to be easy for them and expecting their employers to meet their demands, or they would move on to the next employer – and I believe that we can now say that, generally, this has not been the case. Speaking from personal experience, I have mentored a number of Gen Y'ers and initially felt that, with a 40-year career in Human Resources and Human Resources Technology, I could share a lot with them. I found out very quickly that I was learning at least as much from them! Some of the amazing attributes I found in these under-30s were their fearlessness, the ease with which they were able to multi-task, amazing energy and great technical savvy. They were very comfortable with collaborating with colleagues from both inside the company and peers outside their organization to problem-solve quickly. Most were eager to learn and willing to work hard. This brings me to the generation that will follow the Gen Y'ers – the Generation Z'ers – those born after 1998. We have come full circle. If we look at the Silent Generation or Traditionalists, we find a workforce that preceded the television and even very early telephones. We Baby Boomers (and I fall squarely in this category) remember the invention of the television and telephone – but laptop computers and personal digital assistants (PDAs) were a thing of “Star Trek” and other science fiction movies and publications. Certainly, the Gen X'ers and Gen Y'ers grew up with the comfort of these devices just as we did with calculators. But what of those under the age of 10 – how will the workplace look in 15 more years, and what type of workforce will be required to operate in the mobile, global, virtual world? I spoke to a friend recently who had her four-year-old granddaughter for a visit. She said she found her in the den in front of the TV trying to use her hand to get the screen to move! So, you see – we have come full circle. 
The under-70 Traditionalist grew up in a world without TV and the Generation Z'er may never remember the TV we knew just a few years ago. As with every generation – we spend much time generalizing on their characteristics. The most important thing to remember is every generation – just like every individual – is different. The important thing for those of us in Human Resources to remember is that one size doesn’t fit all. What motivates one employee to come to work for you and stay there and be productive is very different than what the next employee is looking for and the organization that can provide this fluidity and flexibility will be the survivor for generations to come. And, finally, just when we think we have it figured out, a multitude of external factors such as the economy, world politics, industries, and technologies we haven’t even thought about will come along and change those predictions. As I reach retirement age – I do so believing that our organizations are in good hands with the generations to follow – energetic, collaborative and capable of working hard while still understanding the need for balance at work, at home and in the community!

    Read the article

  • How to remove not required Elements from generated XML via jaxb

    - by Dangling Piyush
    I want to know if there is any way to remove optional (not required) elements from the XML generated by JAXB. My XSD element definition is as follows:

      <xsd:element name="Title" maxOccurs="1" minOccurs="0">
        <xsd:annotation>
          <xsd:documentation>A name given to the digital record.</xsd:documentation>
        </xsd:annotation>
        <xsd:simpleType>
          <xsd:restriction base="xsd:string">
            <xsd:minLength value="1"/>
          </xsd:restriction>
        </xsd:simpleType>
      </xsd:element>

    As you can see, it is not a mandatory element because minOccurs="0", but if it is not empty its length must be at least 1 (<xsd:minLength value="1"/>). At marshalling time, if I leave the Title field blank a SAXException is thrown because of the min-length restriction. What I want is to omit the whole <Title/> occurrence from the generated XML. Right now I have removed the min-length restriction, but then the <Title> element is emitted empty (<Title></Title>), which I do not want either. Any help is appreciated. I am using JAXB 2.0 for marshalling. UPDATE: Following are my variable definitions and marshalling code:

      private JAXBContext jaxbContext;
      private Unmarshaller unmarshaller;
      private SchemaFactory factory;
      private Schema schema;
      private Marshaller marshaller;

      jaxbContext = JAXBContext.newInstance(ERecordType.class);
      marshaller = jaxbContext.createMarshaller();
      factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
      schema = factory.newSchema(new File(xsdLocation));
      marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
      ERecordType e = new ERecordType();
      e.setCataloging(rc);
      /* Validate against the schema. */
      marshaller.setSchema(schema);
      /* Marshal will throw an exception if the XML does not validate against the schema. */
      marshaller.marshal(e, System.out);
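
    One way to get what is being asked for here is to make sure the Title property is null (rather than an empty string) before marshalling: for a non-nillable element declared with minOccurs="0", JAXB simply skips a property that is null, so no <Title/> is written at all and the minLength restriction never fires. A minimal sketch (the getTitle/setTitle accessors on the cataloging object are assumptions here, named after the XSD element):

      // Hypothetical helper: blank titles become null so JAXB omits the element entirely.
      private static String emptyToNull(String value) {
          return (value == null || value.trim().length() == 0) ? null : value;
      }

      // Before marshalling (rc is the cataloging object passed to e.setCataloging(rc)):
      rc.setTitle(emptyToNull(rc.getTitle()));
      marshaller.marshal(e, System.out);

    With the property left as null, the generated document still validates against the original schema, so the minLength restriction can stay in the XSD.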

    Read the article

  • How to remove music/videos DRM protection and convert to Mobile Devices such as iPod, iPhone, PSP, Z

    - by tonywesley
    The music/video files you purchase from online music stores like iTunes, Yahoo Music or Wal-Mart are under DRM protection, so you can't convert them to the formats supported by your own mobile devices such as a Nokia phone, Creative Zen player, iPod, PSP, Walkman, Zune… You also can't share your purchased music/videos with your friends. The following step-by-step tutorial shows music lovers how to convert DRM-protected music/videos for mobile devices.
    Method 1: If you only want to remove DRM protection from your protected music, this method will not cost you anything.
    Step 1: Burn your protected music files to a CD-R/RW disc to make an audio CD.
    Step 2: Use a free CD ripper to convert the audio CD tracks back to MP3, WAV, WMA, M4A, AAC, RA…
    Method 2: This guide will show you how to strip DRM from protected wmv, wma, m4p, m4v, m4a, aac files and convert them to unprotected WMV, MP4, MP3, WMA or any video and audio format you like, such as AVI, MP4, FLV, MPEG, MOV, 3GP, m4a, aac, wmv, ogg, wav... I have been using Media Converter software; it is the quickest and easiest solution for removing DRM from WMV, M4V, M4P, WMA, M4A, AAC, M4B, AA files by quick recording. It captures the audio and video streams at the operating-system level, so the output quality is lossless and the conversion speed is fast. The process is as follows.
    Step 1: Download and install the software.
    Step 2: Run the software and click the "Add…" button to load WMA or M4A, M4B, AAC, WMV, M4P, M4V, ASF files.
    Step 3: Choose the output format. To convert protected audio files, select from the "Convert audio to" list; to convert protected video files, select from the "Convert video to" list.
    Step 4: Click the "Settings" button to customize preferences for the output files: the "Settings" button below the "Convert audio to" list for protected audio files, or the one below the "Convert video to" list for protected video files.
    Step 5: Start removing DRM and converting your DRM-protected music and videos by clicking the "Start" button.
    What is DRM? DRM, which is most commonly found in movie and music files, doesn't mean just basic copy protection of video, audio and ebooks; it means full protection for digital content, ranging from delivery to the ways the end user can use the content. We can remove the DRM from video and audio files legally by quick recording.

    Read the article

  • File sizing issue in DOS/FAT

    - by Heather
    I've been tasked with writing a data collection program for a Unitech HT630, which runs a proprietary DOS operating system that can run executables compiled for 16-bit MS DOS with some restrictions. I'm using the Digital Mars C/C++ compiler, which is working well thus far. One of the application requirements is that the data file must be human-readable plain text, meaning the file can be imported into Excel or opened by Notepad. I'm using a variable length record format much like CSV that I've successfully implemented using the C standard library file I/O functions. When saving a record, I have to calculate whether the updated record is larger or smaller than the version of the record currently in the data file. If larger, I first shift all records immediately after the current record forward by the size difference calculated before saving the updated record. EOF is extended automatically by the OS to accommodate the extra data. If smaller, I shift all records backwards by my calculated offset. This is working well, however I have found no way to modify the EOF marker or file size to ignore the data after the end of the last record. Most of the time records will grow in size because the data collection program will be filling some of the empty fields with data when saving a record. Records will only shrink in size when a correction is made on an existing entry, or on a normal record save if the descriptive data in the record is longer than what the program reads in memory. In the situation of a shrinking record, after the last record in the file I'm left with whatever data was sitting there before the shift. I have been writing an EOF delimiter into the file after a "shrinking record save" to signal where the end of my records are and space-filling the remaining data, but then I no longer have a clean file until a "growing record save" extends the size of the file over the space-filled area. The truncate() function in unistd.h does not work (I'm now thinking this is for *nix flavors only?). One proposed solution I've seen involves creating a second file and writing all the data you wish to save into that file, and then deleting the original. Since I only have 4MB worth of disk space to use, this works if the file size is less than 2MB minus the size of my program executable and configuration files, but would fail otherwise. It is very likely that when this goes into production, users would end up with a file exceeding 2MB in size. I've looked at Ralph Brown's Interrupt List and the interrupt reference in IBM PC Assembly Language and Programming and I can't seem to find anything to update the file size or similar. Is reducing a file's size without creating a second file even possible in DOS?
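
    For what it's worth, DOS can truncate a file in place: INT 21h function 40h ("write to file or device") with CX=0 truncates (or extends) the file to the current file-pointer position, so no second file is needed. Below is a minimal sketch, assuming Digital Mars ships the usual 16-bit <dos.h> interface (union REGS / intdos) and the handle-based functions in <io.h>; adjust the header and function names if its runtime differs:

      #include <stdio.h>    /* SEEK_SET */
      #include <io.h>       /* open, lseek, close (DOS handle functions) */
      #include <dos.h>      /* union REGS, intdos */

      /* Truncate the file behind 'handle' at 'new_size' bytes by seeking there and
         issuing a zero-length DOS write (INT 21h, AH=40h, CX=0). If the record data
         is managed through a stdio FILE*, fflush() it and pass fileno(fp) here. */
      static int dos_truncate(int handle, long new_size)
      {
          union REGS r;
          if (lseek(handle, new_size, SEEK_SET) == -1L)
              return -1;
          r.h.ah = 0x40;        /* write to file or device            */
          r.x.bx = handle;      /* DOS file handle                    */
          r.x.cx = 0;           /* zero bytes written = truncate here */
          intdos(&r, &r);
          return r.x.cflag ? -1 : 0;   /* carry flag set means DOS returned an error in AX */
      }

    After a "shrinking record save" you would call this with the offset just past the last record, so EOF lands exactly where the data ends and the space-filled tail disappears.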

    Read the article

  • how to develop a program to minimize errors in human transcription of hand written surveys

    - by Alex. S.
    I need to develop custom software to run surveys. Questions are mostly multiple choice, with free text in a very few cases. I was asked to design a subsystem to check for errors in the manual data entry for the multiple-choice part. We're trying to speed up data entry and to minimize differences between the digital forms and the original questionnaires. The surveys are filled in with handwritten marks and text by human interviewers, so there can be hard-to-read marks, and the operator could also accidentally select a different value for some question; we would like to avoid that. The software must include some automatic control to detect possible transcription differences. Each answer of a multiple-choice question has the same probability of being selected. This question has two parts:
    1. The GUI. The simplest thing I have in mind is to make the question display as usable as possible: large, readable fonts and generous spacing between the choices. Is there something else? For faster input, I would like to use drop-down lists (favoring keyboard over mouse). Given that the questions are grouped in sections, I would like to show the answers already selected for the questions of that section, but this could slow down the process. Any other ideas?
    2. The error-checking subsystem. What else can I do to minimize or to catch human typos in the multiple-choice questions? Is this a solvable problem? Is there some statistical methodology to check that the values entered by the operators match the hand-filled forms? For example, suppose the survey has 5 questions and each has 4 options, and I have n survey forms filled in on paper by interviewers, ready to be entered into the software. How do I minimize the accidental differences that manual transcription of the n surveys can introduce, without having to double-check all 5 questions of all n surveys? My first thought is that, at the end of processing all the hand-filled forms, the software could pick some forms at random and double-check their responses, but on what criteria should I make this selection? Would that validation cover everything in a statistically significant way? The actual survey is national in scope and has 56 pages with over 200 questions in total, so there will be a lot of handwritten pages from many people, and the intention is to reduce the likelihood of errors and to optimize speed in the data entry process. The surveys must be filled in on paper first, given the complications of taking laptops or handhelds along with the interviewers.
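
    As a concrete illustration of the usual safeguard (double data entry on a random sample of forms), here is a minimal sketch in Python; the 10% sample rate and the data layout are placeholders rather than recommendations, and how large the sample really needs to be depends on the error rate you are willing to tolerate (an acceptance-sampling table is the usual way to pick that number):

      import random

      def pick_forms_for_recheck(form_ids, sample_fraction=0.10, seed=None):
          """Simple random sample of forms that will be keyed a second time."""
          rng = random.Random(seed)
          k = max(1, int(round(len(form_ids) * sample_fraction)))
          return rng.sample(list(form_ids), k)

      def transcription_mismatches(first_pass, second_pass):
          """Question ids whose two independent transcriptions disagree for one form."""
          return [q for q, answer in first_pass.items() if second_pass.get(q) != answer]

      # Example: re-key 10% of the forms and send any disagreement to manual review.
      # to_recheck = pick_forms_for_recheck(all_form_ids, 0.10)
      # for form_id in to_recheck:
      #     diffs = transcription_mismatches(entries_pass1[form_id], entries_pass2[form_id])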

    Read the article

  • Subband decomposition using Daubechies filter

    - by misha
    I have the following two 8-tap filters: h0 ['-0.010597', '0.032883', '0.030841', '-0.187035', '-0.027984', '0.630881', '0.714847', '0.230378'] h1 ['-0.230378', '0.714847', '-0.630881', '-0.027984', '0.187035', '0.030841', '-0.032883', '-0.010597'] Here they are on a graph: I'm using it to obtain the approximation (lower subband of an image). This is a(m,n) in the following diagram: I got the coefficients and diagram from the book Digital Image Processing, 3rd Edition, so I trust that they are correct. The star symbol denotes one dimensional convolution (either over rows or over columns). The down arrow denotes downsampling in one dimension (either over rows, or columns). My problem is that the filter coefficients for h0 and h1 sum to greater than 1 (approximately 1.4 or sqrt(2) to be exact). Naturally, if I convolve any image with the filter, the image will get brighter. Indeed, here's what I get (expected result on right): Can somebody suggest what the problem is here? Why should it work if the convolution filter coefficients sum to greater than 1? I have the source code, but it's quite long so I'm hoping to avoid posting it here. If it's absolutely necessary, I'll put it up later. EDIT What I'm doing is: Decompose into subbands Filter one of the subbands Recompose subbands into original image Note that the point isn't just to have a displayable subband-decomposed image -- I have to be able to perfectly reconstruct the original image from the subbands as well. So if I scale the filtered image in order to compensate for my decomposition filter making the image brighter, this is what I will have to do: Decompose into subbands Apply intensity scaling Filter one of the subbands Apply inverse intensity scaling Recompose subbands into original image Step 2 performs the scaling. This is what @Benjamin is suggesting. The problem is that then step 4 becomes necessary, or the original image will not be properly reconstructed. This longer method will work. However, the textbook explicitly says that no scaling is performed on the approximation subband. Of course, it's possible that the textbook is wrong. However, what's more possible is I'm misunderstanding something about the way this all works -- this is why I'm asking this question.
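
    For reference, those sums are exactly what an orthonormal filter bank is supposed to give: the analysis lowpass has DC gain sqrt(2), so in a separable 2-D decomposition the approximation band comes out roughly twice as bright per level, and the matching synthesis bank removes that factor again during reconstruction, which is presumably why the textbook says not to rescale the stored subband. A quick sanity check in Python (numpy assumed, coefficients as quoted above):

      import numpy as np

      # The two 8-tap analysis filters quoted above (Daubechies family)
      h0 = np.array([-0.010597, 0.032883, 0.030841, -0.187035,
                     -0.027984, 0.630881, 0.714847, 0.230378])
      h1 = np.array([-0.230378, 0.714847, -0.630881, -0.027984,
                      0.187035, 0.030841, -0.032883, -0.010597])

      print(h0.sum())          # ~1.414214 == sqrt(2): DC gain of the lowpass
      print(h1.sum())          # ~0.0     : the highpass rejects DC entirely
      print((h0 ** 2).sum())   # ~1.0     : unit energy (orthonormality condition)

      # For display only, divide the approximation a(m,n) by 2 (sqrt(2) per dimension);
      # keep the stored subband unscaled so the synthesis bank still reconstructs exactly.
      # display = a / 2.0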

    Read the article

  • Calling CryptUIWizDigitalSign from .NET on x64

    - by Joe Kuemerle
    I am trying to digitally sign files using the CryptUIWizDigitalSign function from a .NET 2.0 application compiled to AnyCPU. The call works fine when running on x86 but fails on x64, it also works on an x64 OS when compiled to x86. Any idea on how to better marshall or call from x64? The Win32exception returned is "Error encountered during digital signing of the file ..." with a native error code of -2146762749. The relevant portion of the code are: [StructLayout(LayoutKind.Sequential)] public struct CRYPTUI_WIZ_DIGITAL_SIGN_INFO { public Int32 dwSize; public Int32 dwSubjectChoice; [MarshalAs(UnmanagedType.LPWStr)] public string pwszFileName; public Int32 dwSigningCertChoice; public IntPtr pSigningCertContext; [MarshalAs(UnmanagedType.LPWStr)] public string pwszTimestampURL; public Int32 dwAdditionalCertChoice; public IntPtr pSignExtInfo; } [DllImport("Cryptui.dll", CharSet=CharSet.Unicode, SetLastError=true)] public static extern bool CryptUIWizDigitalSign(int dwFlags, IntPtr hwndParent, string pwszWizardTitle, ref CRYPTUI_WIZ_DIGITAL_SIGN_INFO pDigitalSignInfo, ref IntPtr ppSignContext); CRYPTUI_WIZ_DIGITAL_SIGN_INFO digitalSignInfo = new CRYPTUI_WIZ_DIGITAL_SIGN_INFO(); digitalSignInfo = new CRYPTUI_WIZ_DIGITAL_SIGN_INFO(); digitalSignInfo.dwSize = Marshal.SizeOf(digitalSignInfo); digitalSignInfo.dwSubjectChoice = 1; digitalSignInfo.dwSigningCertChoice = 1; digitalSignInfo.pSigningCertContext = pSigningCertContext; digitalSignInfo.pwszTimestampURL = timestampUrl; digitalSignInfo.dwAdditionalCertChoice = 0; digitalSignInfo.pSignExtInfo = IntPtr.Zero; digitalSignInfo.pwszFileName = filepath; CryptUIWizDigitalSign(1, IntPtr.Zero, null, ref digitalSignInfo, ref pSignContext)); And here is how the SigningCertContext is retrieved (minus various error handling) public IntPtr GetCertContext(String pfxfilename, String pswd) IntPtr hMemStore = IntPtr.Zero; IntPtr hCertCntxt = IntPtr.Zero; IntPtr pProvInfo = IntPtr.Zero; uint provinfosize = 0; try { byte[] pfxdata = PfxUtility.GetFileBytes(pfxfilename); CRYPT_DATA_BLOB ppfx = new CRYPT_DATA_BLOB(); ppfx.cbData = pfxdata.Length; ppfx.pbData = Marshal.AllocHGlobal(pfxdata.Length); Marshal.Copy(pfxdata, 0, ppfx.pbData, pfxdata.Length); hMemStore = Win32.PFXImportCertStore(ref ppfx, pswd, CRYPT_USER_KEYSET); pswd = null; if (hMemStore != IntPtr.Zero) { Marshal.FreeHGlobal(ppfx.pbData); while ((hCertCntxt = Win32.CertEnumCertificatesInStore(hMemStore, hCertCntxt)) != IntPtr.Zero) { if (Win32.CertGetCertificateContextProperty(hCertCntxt, CERT_KEY_PROV_INFO_PROP_ID, IntPtr.Zero, ref provinfosize)) pProvInfo = Marshal.AllocHGlobal((int)provinfosize); else continue; if (Win32.CertGetCertificateContextProperty(hCertCntxt, CERT_KEY_PROV_INFO_PROP_ID, pProvInfo, ref provinfosize)) break; } } finally { if (pProvInfo != IntPtr.Zero) Marshal.FreeHGlobal(pProvInfo); if (hMemStore != IntPtr.Zero) Win32.CertCloseStore(hMemStore, 0); } return hCertCntxt; }
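
    If the P/Invoke route keeps misbehaving on x64, one pragmatic fallback (a workaround, not a fix for the marshalling itself) is to shell out to signtool.exe from the Windows SDK, which performs the same Authenticode signing with no 32/64-bit interop involved. A rough sketch; the paths, password handling and error reporting are placeholders:

      using System.Diagnostics;

      static int SignWithSignTool(string signToolPath, string pfxPath,
                                  string password, string timestampUrl, string filePath)
      {
          ProcessStartInfo psi = new ProcessStartInfo();
          psi.FileName = signToolPath;   // full path to signtool.exe from the Windows SDK
          psi.Arguments = string.Format(
              "sign /f \"{0}\" /p \"{1}\" /t \"{2}\" \"{3}\"",
              pfxPath, password, timestampUrl, filePath);
          psi.UseShellExecute = false;
          psi.RedirectStandardOutput = true;

          using (Process p = Process.Start(psi))
          {
              string log = p.StandardOutput.ReadToEnd();  // read before waiting to avoid blocking
              p.WaitForExit();
              return p.ExitCode;                          // 0 means the file was signed
          }
      }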

    Read the article

  • Generating Unordered List with PHP + CodeIgniter from a MySQL Database

    - by Tim
    Hello Everyone, I am trying to build a dynamically generated unordered list in the following format using PHP. I am using CodeIgniter but it can just be normal php. This is the end output I need to achieve. <ul id="categories" class="menu"> <li rel="1"> Arts &amp; Humanities <ul> <li rel="2"> Photography <ul> <li rel="3"> 3D </li> <li rel="4"> Digital </li> </ul> </li> <li rel="5"> History </li> <li rel="6"> Literature </li> </ul> </li> <li rel="7"> Business &amp; Economy </li> <li rel="8"> Computers &amp; Internet </li> <li rel="9"> Education </li> <li rel="11"> Entertainment <ul> <li rel="12"> Movies </li> <li rel="13"> TV Shows </li> <li rel="14"> Music </li> <li rel="15"> Humor </li> </ul> </li> <li rel="10"> Health </li> And here is my SQL that I have to work with. -- -- Table structure for table `categories` -- CREATE TABLE IF NOT EXISTS `categories` ( `id` mediumint(8) NOT NULL auto_increment, `dd_id` mediumint(8) NOT NULL, `parent_id` mediumint(8) NOT NULL, `cat_name` varchar(256) NOT NULL, `cat_order` smallint(4) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ; So I know that I am going to need at least 1 foreach loop to generate the first level of categories. What I don't know is how to iterate inside each loop and check for parents and do that in a dynamic way so that there could be an endless tree of children. Thanks for any help you can offer. Tim
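
    One common approach (sketched below, not tested against this exact schema) is to load all rows once, build the tree in memory from parent_id, and render it with a recursive function. The query is shown only as a comment because the model/controller split is a project choice; top-level rows are assumed to have parent_id = 0:

      <?php
      // Build a nested array from the flat adjacency list (id / parent_id).
      function build_tree(array $rows, $parent_id = 0)
      {
          $branch = array();
          foreach ($rows as $row) {
              if ((int) $row['parent_id'] === (int) $parent_id) {
                  $row['children'] = build_tree($rows, $row['id']);
                  $branch[] = $row;
              }
          }
          return $branch;
      }

      // Render the nested array as the <ul>/<li rel="..."> markup shown above.
      function render_list(array $nodes, $attrs = ' id="categories" class="menu"')
      {
          if (empty($nodes)) {
              return '';
          }
          $html = "<ul{$attrs}>";
          foreach ($nodes as $node) {
              $html .= '<li rel="' . $node['id'] . '">'
                     . htmlspecialchars($node['cat_name'])
                     . render_list($node['children'], '')
                     . '</li>';
          }
          return $html . '</ul>';
      }

      // Usage, e.g. in a CodeIgniter controller or view:
      // $rows = $this->db->order_by('cat_order')->get('categories')->result_array();
      // echo render_list(build_tree($rows));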

    Read the article

  • Replace text in XSL using wildcards

    - by JosephThomas
    This is similar to an earlier problem I was having which you guys solved in less than a day. I am working with XML files that are generated by a digital video camera. The camera allows the user to save all of the camera's settngs to an SD card so that the settings can be recalled or loaded into another camera. The XSL stylesheet I am writing will allow users to view the camera's settings, as saved to the SD card in a web browser. While most of the values in the XML file -- as formatted by my stylesheet -- make sense to humans, some do not. What I would like to do is have the stylesheet display text that is based on the value in the XML file but more easily understood by humans. A typical value that can be written to the XML file is "_23_970" which represents the camera's frame rate. This would be better displayed as 23.970 (or 023.970). The first underscore is a sort of place holder to make a space for values over 099.999. The second underscore, obviously represents the decimal. My previous (similar) question involved replacing predictable text, and the solution was matching templates. In this case, however, the camera can be set at any one of 119,999 frame rates (I think I did that math correctly). The approach, I would guess, is to pass a value to the displayed webpage that keeps the numeric values (each digit), replaces the second underscore with a decimal, and replaces the first underscore with either an nbsp or a zero (whichever is easier). If the first character in the string is a "1" (the camera can run at frame rates up to 120.000) then the one should be passed on to the page displayed by the stylesheet. I have read other posts here regarding wildcards, but couldn't find one that answered this question. EDIT: Sorry for leaving out important info. I fared better on my first try at asking a question! I guess I got complacent. Anyhow . . . I should have shown you the code that displays the text in the XSL file as is: <tr> <xsl:for-each select="Settings/Groups/Recording"> <tr><td class="title_column">Frame Rate</td><td><xsl:value-of select="RecOutLinkSpeed"/></td></tr> </xsl:for-each> </tr> I should also have given you the URL for the sample file I have been working with: http://josephthomas.info/Alexa/Setup_120511_140322.xml
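
    Since the stored value appears to be fixed-width (three characters of integer part padded with "_", one "_" as the decimal point, then three decimals), plain XSLT 1.0 string functions are enough and no wildcard matching is needed. A sketch under that fixed-width assumption (swap the '0' in translate() for a space if a blank pad is preferred):

      <!-- "_23_970" becomes "023.970", "120_000" becomes "120.000" -->
      <xsl:template name="format-frame-rate">
        <xsl:param name="raw"/>
        <xsl:value-of select="concat(translate(substring($raw, 1, 3), '_', '0'),
                                     '.',
                                     substring($raw, 5, 3))"/>
      </xsl:template>

      <!-- called from the existing row -->
      <td class="title_column">Frame Rate</td>
      <td>
        <xsl:call-template name="format-frame-rate">
          <xsl:with-param name="raw" select="RecOutLinkSpeed"/>
        </xsl:call-template>
      </td>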

    Read the article

  • SSL certificates and types for securing your websites and applications

    - by Mit Naik
    I need to share some information regarding SSL certificates and their types: which SSL certificates are widely used, and so on. There are several SSL certificates available in the market today in order to secure your domains, multiple subdomains, and your applications and code too. A few of the details are mentioned below.

    Cheap SSL certificates available today include the standard RapidSSL certificate, Thawte SSL 123, and similar basic-level certificates. Most of these cheap SSL certificates are domain-validated only and don't provide the greatest trust for your customers. This means you shouldn't use cheap SSL certificates on e-commerce stores or other public-facing sites that require people to trust the site.

    EV certificates: I found GeoTrust True BusinessID with EV, which is one of the cheapest EV certificates available in the market today; you can also find Thawte and VeriSign EV versions. EV is designed to prevent phishing attacks better than normal SSL certificates. What makes an EV certificate so special? An SSL certificate provider has to do some extensive validation to give you one, including: verifying that your organization is legally registered and active, verifying the address and phone number of your organization, verifying that your organization has the exclusive right to use the domain specified in the EV certificate, verifying that the person ordering the certificate has been authorized by the organization, and verifying that your organization is not on any government blacklists.

    SSL wildcard certificates: SSL wildcard certificates are big money-savers. An SSL wildcard certificate allows you to secure an unlimited number of first-level sub-domains on a single domain name. For example, if you need to secure www.yourdomain.com, secure.yourdomain.com, product.yourdomain.com, info.yourdomain.com, download.yourdomain.com and anything.yourdomain.com, and all of these websites are hosted across multiple server boxes, you can purchase and install one wildcard certificate issued to *.yourdomain.com to secure all these sites.

    SAN certificates: these are interesting certificates and are helpful if you want to secure multiple domains by generating a single CSR; you can install the same certificate on your additional sites without generating new CSRs for all the additional domains.

    Code signing certificates: a code signing certificate is a file containing a digital signature that can be used to sign executables and scripts in order to verify your identity and ensure that your code has not been tampered with since it was signed. This helps your users determine whether your software can be trusted. A code signing certificate allows you to sign code using a private and public key system, similar to how an SSL certificate secures a website. When you request a code signing certificate, a public/private key pair is generated. The certificate authority will then issue a code signing certificate that contains the public key. A certificate for code signing needs to be signed by a trusted certificate authority so that the operating system knows that your identity has been validated. You could still use the code signing certificate to sign and distribute malicious software, but you would be held legally accountable for it. You can sign many different types of code. The most common types include Windows applications such as .exe, .cab, .dll, .ocx, and .xpi files (using an Authenticode certificate), Apple applications (using an Apple code signing certificate), Microsoft Office VBA objects and macros (using a VBA code signing certificate), .jar files (using a Java code signing certificate), .air or .airi files (using an Adobe AIR certificate), and Windows Vista drivers and other kernel-mode software (using a Vista code certificate). In reality, a code signing certificate can sign almost all types of code as long as you convert the certificate to the correct format first.

    I also found a URL which offers good suggestions on purchasing the best SSL certificates for securing your site, depending on whether you are a financial institution, bank, hosting provider, ISP, retail merchant, etc. Please vote and provide comments or any additional suggestions regarding SSL certificates.

    Read the article

  • A faulty Caviar Blue hard drive?

    - by Glister
    We have a small "homemade" server running fully updated Debian Wheezy (amd64). One hard drive is installed: a WDC WD6400AAKS. The motherboard is an ASUS M4N68T V2. The usual load: CPU at an average of 20%; each week about 50GB of additional space is occupied (about 47GB of uploaded files and 3GB of MySQL data). I'm afraid that the hard drive may be about to fail. I saw Pre-fail in a few places when I ran:

      root@SERVER:/tmp# smartctl -a /dev/sda
      smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-4-amd64] (local build)
      Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

      === START OF INFORMATION SECTION ===
      Model Family:     Western Digital Caviar Blue Serial ATA
      Device Model:     WDC WD6400AAKS-XXXXXXX
      Serial Number:    WD-XXXXXXXXXXXXXXXXXXX
      LU WWN Device Id: 5 0014ee XXXXXXXXXXXXX
      Firmware Version: 01.03B01
      User Capacity:    640,135,028,736 bytes [640 GB]
      Sector Size:      512 bytes logical/physical
      Device is:        In smartctl database [for details use: -P show]
      ATA Version is:   8
      ATA Standard is:  Exact ATA specification draft version not indicated
      Local Time is:    Mon Oct 28 18:55:27 2013 UTC
      SMART support is: Available - device has SMART capability.
      SMART support is: Enabled

      === START OF READ SMART DATA SECTION ===
      SMART overall-health self-assessment test result: PASSED
      General SMART Values:
      Offline data collection status:  (0x85) Offline data collection activity was aborted by an interrupting command from host. Auto Offline Data Collection: Enabled.
      Self-test execution status:      ( 247) Self-test routine in progress... 70% of test remaining.
      Total time to complete Offline data collection: (11580) seconds.
      Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported.
      SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.
      Error logging capability: (0x01) Error logging supported. General Purpose Logging supported.
      Short self-test routine recommended polling time:      (   2) minutes.
      Extended self-test routine recommended polling time:   ( 136) minutes.
      Conveyance self-test routine recommended polling time: (   5) minutes.
      SCT capabilities: (0x303f) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported.

      SMART Attributes Data Structure revision number: 16
      Vendor Specific SMART Attributes with Thresholds:
      ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
        1 Raw_Read_Error_Rate     0x002f 200   200   051    Pre-fail Always  -           0
        3 Spin_Up_Time            0x0027 157   146   021    Pre-fail Always  -           5108
        4 Start_Stop_Count        0x0032 098   098   000    Old_age  Always  -           2968
        5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
        7 Seek_Error_Rate         0x002e 200   200   051    Old_age  Always  -           0
        9 Power_On_Hours          0x0032 079   079   000    Old_age  Always  -           15445
       10 Spin_Retry_Count        0x0032 100   100   051    Old_age  Always  -           0
       11 Calibration_Retry_Count 0x0032 100   100   051    Old_age  Always  -           0
       12 Power_Cycle_Count       0x0032 098   098   000    Old_age  Always  -           2950
      192 Power-Off_Retract_Count 0x0032 200   200   000    Old_age  Always  -           426
      193 Load_Cycle_Count        0x0032 200   200   000    Old_age  Always  -           2968
      194 Temperature_Celsius     0x0022 111   095   000    Old_age  Always  -           36
      196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
      197 Current_Pending_Sector  0x0032 200   200   000    Old_age  Always  -           0
      198 Offline_Uncorrectable   0x0030 200   200   000    Old_age  Offline -           0
      199 UDMA_CRC_Error_Count    0x0032 200   160   000    Old_age  Always  -           21716
      200 Multi_Zone_Error_Rate   0x0008 200   200   051    Old_age  Offline -           0

      SMART Error Log Version: 1
      No Errors Logged

      SMART Self-test log structure revision number 1
      Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
      # 1  Short offline     Completed without error  00%        15444            -

      Error SMART Read Selective Self-Test Log failed: scsi error aborted command
      Smartctl: SMART Selective Self Test Log Read Failed
      root@SERVER:/tmp#

    In one tutorial I read that Pre-fail is an indication of coming failure; in another tutorial I read that this is not true. Can you guys help me decode the output of smartctl? It would also be nice if you could share suggestions on what I should do to ensure data integrity (about 50GB of new data each week, up to 2TB for the whole period I'm interested in). Maybe I will go with 2x2TB Caviar Black in RAID4?

    Read the article

  • Do I need to be worried about these SMART drive temperatures?

    - by Steve Lorimer
    I have 5 hard drives in a machine sitting in a cupboard. /dev/sda is a 500GB Seagate drive, and is the boot disk. /dev/sd{b,c,d,e} are 2TB drives in a raid6 configuration. smartctl is showing significantly higher temperatures (like ~140 degrees celsius) on the raid drives than the boot drive. Do I need to be worried? /dev/sdb and /dev/sde are new Western Digital Black drives (new=1 week) /dev/sdc and /dev/sdd are 5 year old Hitachi drives /dev/sda [SAT], Temperature_Celsius changed from 40 to 39 /dev/sdc [SAT], Temperature_Celsius changed from 142 to 146 /dev/sdc [SAT], Temperature_Celsius changed from 146 to 142 /dev/sdd [SAT], Temperature_Celsius changed from 142 to 146 /dev/sda [SAT], Airflow_Temperature_Cel changed from 61 to 62 /dev/sda [SAT], Temperature_Celsius changed from 39 to 38 /dev/sde [SAT], Temperature_Celsius changed from 107 to 108 /dev/sdb [SAT], Temperature_Celsius changed from 108 to 109 /dev/sdc [SAT], Temperature_Celsius changed from 146 to 150 /dev/sdc [SAT], Temperature_Celsius changed from 146 to 150 /dev/sda [SAT], Airflow_Temperature_Cel changed from 62 to 61 /dev/sda [SAT], Temperature_Celsius changed from 38 to 39 Update: Adding detailed drive information as per request: /dev/sda =========================== smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build) Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org === START OF INFORMATION SECTION === Model Family: Seagate Pipeline HD 5900.2 Device Model: ST3500312CS Serial Number: 5VV47HXA LU WWN Device Id: 5 000c50 02aad5ad6 Firmware Version: SC13 User Capacity: 500,107,862,016 bytes [500 GB] Sector Size: 512 bytes logical/physical Rotation Rate: 5900 rpm Device is: In smartctl database [for details use: -P show] ATA Version is: ATA8-ACS T13/1699-D revision 4 SATA Version is: SATA 2.6, 1.5 Gb/s (current: 1.5 Gb/s) Local Time is: Tue Jun 3 10:54:11 2014 EST SMART support is: Available - device has SMART capability. SMART support is: Enabled /dev/sdb =========================== smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build) Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org === START OF INFORMATION SECTION === Device Model: WDC WD2003FZEX-00Z4SA0 Serial Number: WD-WMC1F1398726 LU WWN Device Id: 5 0014ee 003b8bd25 Firmware Version: 01.01A01 User Capacity: 2,000,398,934,016 bytes [2.00 TB] Sector Sizes: 512 bytes logical, 4096 bytes physical Rotation Rate: 7200 rpm Device is: Not in smartctl database [for details use: -P showall] ATA Version is: ACS-2 (minor revision not indicated) SATA Version is: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s) Local Time is: Tue Jun 3 10:54:11 2014 EST SMART support is: Available - device has SMART capability. 
SMART support is: Enabled /dev/sdc =========================== smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build) Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org === START OF INFORMATION SECTION === Model Family: Hitachi Deskstar 7K3000 Device Model: Hitachi HDS723020BLA642 Serial Number: MN1220F30WSTUD LU WWN Device Id: 5 000cca 369cc9f5d Firmware Version: MN6OA580 User Capacity: 2,000,398,934,016 bytes [2.00 TB] Sector Size: 512 bytes logical/physical Rotation Rate: 7200 rpm Device is: In smartctl database [for details use: -P show] ATA Version is: ATA8-ACS T13/1699-D revision 4 SATA Version is: SATA 2.6, 6.0 Gb/s (current: 3.0 Gb/s) Local Time is: Tue Jun 3 10:54:11 2014 EST SMART support is: Available - device has SMART capability. SMART support is: Enabled /dev/sdd =========================== smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build) Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org === START OF INFORMATION SECTION === Model Family: Hitachi Deskstar 7K3000 Device Model: Hitachi HDS723020BLA642 Serial Number: MN1220F30WST4D LU WWN Device Id: 5 000cca 369cc9f48 Firmware Version: MN6OA580 User Capacity: 2,000,398,934,016 bytes [2.00 TB] Sector Size: 512 bytes logical/physical Rotation Rate: 7200 rpm Device is: In smartctl database [for details use: -P show] ATA Version is: ATA8-ACS T13/1699-D revision 4 SATA Version is: SATA 2.6, 6.0 Gb/s (current: 1.5 Gb/s) Local Time is: Tue Jun 3 10:54:11 2014 EST SMART support is: Available - device has SMART capability. SMART support is: Enabled /dev/sde =========================== smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build) Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org === START OF INFORMATION SECTION === Device Model: WDC WD2003FZEX-00Z4SA0 Serial Number: WD-WMC1F1483782 LU WWN Device Id: 5 0014ee 3002d235c Firmware Version: 01.01A01 User Capacity: 2,000,398,934,016 bytes [2.00 TB] Sector Sizes: 512 bytes logical, 4096 bytes physical Rotation Rate: 7200 rpm Device is: Not in smartctl database [for details use: -P showall] ATA Version is: ACS-2 (minor revision not indicated) SATA Version is: SATA 3.0, 6.0 Gb/s (current: 1.5 Gb/s) Local Time is: Tue Jun 3 10:54:11 2014 EST SMART support is: Available - device has SMART capability. SMART support is: Enabled

    Read the article

  • TV not detected by Windows/VGA - when there is a WHDI device in the signal chain

    - by ashwalk
    I'm at my wit's end with this one... I had an EVGA GTS 250, and I used to plug its HDMI out into a WHDI sender, which transmitted to its corresponding WHDI receiver 15ft away, which then connected to a Samsung LN40D LCD TV through another HDMI cable:
    PC/VGA < [hdmi cable] < WHDI sender < [air] < WHDI receiver < [hdmi cable] < TV
    It was perfect, stable, with no perceivable latency. I just plugged everything in the first time and it worked instantly. It sent 5.1 audio, and Windows/nVidia Control Center detected the TV by its name. The WHDI device is this one: http://goo.gl/Q8iWI5 Now I bought an EVGA GTX 650, and WHDI doesn't work anymore. Both Windows and nVidia Control Center won't detect the TV, only the monitor that's connected via DVI. The TV screen shows "TX202913 connected. Check video signal." on top of a black screen. The device is not the problem itself, just the fact that it's not allowing a direct connection between PC and TV; I would bet that if I put an AVR in its place I'd have this issue too. The HDMI on this new card works with other monitors. If I put the older card back, WHDI works normally. I have googled this for 5 months on and off. Once I bumped into a page that showed how to force a display device to always-on through a registry edit. Once I restarted Windows, the TV (through WHDI) displayed my expanded or duplicated desktop at 1024x768 ONLY, and listed the display as "digital display". I could not change the resolution and it wouldn't play back audio (the option was available in the nVidia Control Center HDMI audio options, but did not work). This proves that there is no conflict between the devices, except that software-wise, Windows cannot, for the life of it, understand that there's a TV there to send video/audio to. Since this won't do (no audio, poor video), I reverted this regedit. It's also not an EDID problem within the TV, since it works when connected directly. The last weird bit of this saga is that today I was reminded of Windows' "Add Device" dialog, gave it a go, and a "Samsung Generic UPNP TV" showed up, which I promptly installed the drivers for, rising to a climax of... ...NOTHING HAPPENING. As far as I can tell, it really didn't change anything other than using up a few KB on my main disk. I should also say that I looked a LOT into handshake problems and nothing applied either. Do any of you have an idea of what may be going on? I can't stand the thought of having a US$200 device not working because of the addition of a newer graphics card, when the much older one had no issues. There is absolutely NO REASON for this to happen. There is NO documentation on WHDI online. Apparently no one buys this stuff. For the same reason, no one responded to this same plea for help on the NVidia and EVGA forums. Worst case, this can be a warning about this setup for people in the future. Thanks in advance.

    Read the article

  • Issues with Verizon's "Network Extender" device talking on my home network.

    - by Logan
    I recently switched my phone service from AT&T to Verizon, and I get somewhat spotty service in my house. I called them and they sent me a "network extender" device for free. It's a femtocell that connects to my home network. The directions that come with it are very dumbed down; they basically just say to connect it to your router and put it near a window (so it can get a GPS signal, since it has to make sure it's within the correct area before operating). The problem I'm having is that the network light on it stays red. The troubleshooting information that came with it tells me this means there is a bad network connection. It's connected through an ASUS router running DD-WRT. No other devices on my network have a problem with it, including a Western Digital WDLIVE device, mine and my wife's cell phones (via wifi), a Wii, and an Xbox. If I connect the device directly to my cable modem, the light goes blue (which means good) and it starts working. So this tells me that it's definitely a configuration issue with my router. Verizon basically washed their hands of me when I connected it to my cable modem, and told me that it's a router issue and to try a different router. Because normal people just have extra routers lying around their houses... When I connect it to the router, I can watch the DHCP Clients list on the status page, and the MAC of the network extender quickly fills up the clients list, grabbing every available DHCP address. It's like it grabs an address, can't connect to the internet, releases it, grabs another, then another, then another. So in the DHCP server settings I assigned a static IP to its MAC. This made it quit doing what it was doing before, but it's still not working. I found the ports I needed to open on Verizon's website, and opened them in the port forwarding config on my router. This still didn't help. So I tried setting the network extender device's IP as the DMZ IP on the router. This still did no good. I called Verizon back and got the tech to write up a report which he passed on to a "senior network tech" who called me back a few hours ago. This guy told me that while an ASUS router isn't listed as a supported device, he's not really sure why it's not working. He suggested restoring the stock ASUS firmware and trying again. I have a very hard time believing it's DD-WRT doing this, since every other device is working just fine with it. But it's also not the Network Extender, since it works just fine when connected directly to the modem. At this point I'm out of ideas, and the next step is to restore the stock firmware on my router and then go to Walmart and get a Linksys WRT54G to try. Is there anything else I could try before going that drastic?
    Cliffs:
    - Network extender won't work behind the router; it works when connected directly to the cable modem.
    - Extender goes nuts when allowed to pick its own DHCP address; I had to assign it a static IP.
    - Won't work when the correct ports are forwarded to it.
    - Won't work with a DMZ address.

    Read the article

  • Add a small RAID card? Will it help overall stability and performance of my nine hard drives?

    - by Ray
    Hi, will I get any genuine extra performance and RAID stability if I insert a basic RAID card into a PCI-E x1 slot? I am considering the Adaptec 1220SA: 2-port SATA, PCI-Express (x1), RAID 0/1. OK, it only supports two SATA drives. The purpose is to help support the eight internal hard drives (1TB each), a DVD drive and an external e-SATA-connected 2TB hard drive, by dealing with two of the internal hard drives.
    My current configuration of eight internal 1TB Barracuda (7200.12) SATA hard drives, one external 2TB SATA Western Digital Green drive (e-SATA) and one DVD drive can already be supported by the Intel P55 and JMicron controllers on the ASUS motherboard: the Intel P55 controls six HDDs (configured as three x RAID 1), and the JMicron controls two HDDs as one RAID 1, as well as the DVD drive and the external SATA drive via the motherboard's e-SATA port.
    Bigger picture details: I have an ASUS motherboard designed for the LGA1156 processor, and it includes the Intel P55 Express chipset and the JMicron controller. I am using the Intel Core i7-870 processor and have 8GB of DDR3 (1333) memory (four x 2GB Corsair DIMMs). Enough overall power. The power supply, a Corsair AX850, is more than sufficient for the system; it will never need the full 850 watts (future: a second graphics card). The RAID card would provide hardware RAID 1 for two of the eight internal drives. It would either reduce the load on the Intel P55 firmware RAID support, or replace the JMicron controller's RAID 1 set. I am busy installing the above configuration using Windows 7 Ultimate 64-bit as the OS. The RAID card is a last-minute addition to the plan. Is it worth spending the extra R700 - R900 on the Adaptec 1220SA or an equivalent RAID card? I cannot afford to spend yet another R2000 - R3000 on a RAID card that would support many SATA2 hard drives with better RAID, for example RAID 5.
    My issue and assumptions: I am trusting that the Intel P55 chipset can properly handle six drives configured as three x RAID 1. I am assuming that the JMicron can handle, using its red SATA ports, one RAID 1 (two HDDs). The DVD drive connects to the JMicron optical SATA port 1 (white port 1); white port 2 is not used. The e-SATA connection runs from the JMicron through the motherboard to an on-board (rear panel) e-SATA port. Am I being a little hopeful in only using the on-board Intel P55 and the JMicron? Is it a waste of money to install a RAID card that handles two SATA2 drives? Or is it wise to take a little pressure off the Intel P55? Obviously I am interested in data security, hence RAID 1, not RAID 0. RAID 5 would be nice. The CPU, an Intel Core i7-870, will provide the clout.
    Context for the nine drives: I am using virtualisation with Windows 7 Ultimate, with bootable VMs. The operating system gets a mirror, loaded apps get a mirror, the current design data is kept in another mirror, and another mirror is for backups and/or VM territory. The external 2TB drive (via e-SATA) is the next layer of data security, and finally I use off-site data security. Thanks.

    Read the article

< Previous Page | 60 61 62 63 64 65 66 67 68 69 70  | Next Page >