Search Results

Search found 11778 results on 472 pages for 'mark ms smith'.


  • Sending an Email from 2 Mail Servers

    - by Ted Smith
    We are currently attempting to move away from using a "local" mail (Exchange) server to a cloud-based offering for all our automated emails. The problem is that we send and receive thousands of emails a day and uptime is quite critical, so the business does not want to put all its eggs in one basket: if we use a cloud-based offering (Mailgun), they would like a backup in case it goes down. So my question is: would it be possible to set up multiple A, TXT and CNAME records pointing to multiple IP addresses, so that if one mail server goes down we can automatically start sending emails from the failover (without them being blocked by a reverse DNS lookup)? I know we will still need to adjust the MX record for incoming emails, but not receiving email for a short time (1-2 hours) is acceptable. Does this make sense?
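
    For what it's worth, the usual pattern here is a single SPF (TXT) record that authorizes both senders, plus prioritized MX records for inbound mail; a minimal zone-file sketch, where example.com, the fallback IP and the backup relay name are purely illustrative:

      ; authorize both the primary provider and the fallback relay to send as example.com
      example.com.   IN TXT  "v=spf1 include:mailgun.org ip4:203.0.113.10 -all"
      ; incoming mail: the lower preference value is tried first
      example.com.   IN MX   10 mxa.mailgun.org.
      example.com.   IN MX   20 backup-mx.example.com.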

    Read the article

  • Lessons From OpenId, Cardspace and Facebook Connect

    - by mark.wilcox
    I think Johannes Ernst summarized pretty well, in a broad sense, what happened with OpenID, Cardspace and Facebook Connect. However, I'm more interested in the lessons we can take away from it. First, the "Apple Lesson": if user-centric identity is going to happen, it will require not only technology but also a strong marketing campaign. I'm calling this the "Apple Lesson" because it's very similar to how the Apple iPad succeeded where the rest of the tablet market did not. The iPad is not only a very good technology product, it was backed by a very good marketing plan. I know most people do not want to think about marketing here, but the fact is that nobody could really articulate why user-centric identity mattered in a way that the average person cared about. Second, the "Facebook Lesson": Facebook Connect solves a number of interesting problems in a way that is easy for both consumers and service providers. For a consumer it's simple to log in without any redirects. And while Facebook isn't perfect on privacy, no other major consumer-focused service on the Internet provides as much control over sharing identity information. From a developer perspective it is very easy to implement the SSO and fetch other identity information (if the user has given permission). This could only happen because a major company decided to make a singular focus of making it happen. Third, the "Developers Lesson": the Facebook Social Graph API is by far the simplest API for accessing identity information, which is another reason why you're seeing such rapid growth in Facebook-enabled websites. By using a combination of URLs and JavaScript, the power a single HTML page now gives a developer writing web applications is simply amazing. For example, it doesn't get much simpler than "http://api.facebook.com/mewilcox" for accessing identity. And while I can't yet share too much publicly about the specifics, the Social Graph API had a profound impact on me in designing our next-generation APIs.

    Read the article

  • Upcoming Directory Services Live Webcast - Improve Time-to-Market and Reduce Cost with Oracle Directory Services

    - by mark.wilcox
    We're doing another live webcast on May 27. Here are the details:

    Live Webcast: Improve Time-to-Market and Reduce Cost with Oracle Directory Services
    Event Date: Thursday, May 27, 2010
    Event Time: 10:00 AM Pacific Time / 1:00 PM Eastern Time

    Organizations can spend up to 60% of their IT budgets on operational activities.
    • Are you being asked to do more, with fewer resources?
    • Have you had to lead a cost-cutting exercise in your IT department?
    • Do you have licenses for software and wonder whether you are getting the most out of those resources?
    • Do you want to be an Identity Hero inside your organization?
    Oracle brings leadership in Directory Services to help organizations identify ways to leverage Oracle Virtual Directory to reduce costs in their enterprise. This presentation will explore ways to use Oracle Virtual Directory to federate faster and to create architectures that meet aggressive time constraints for identity projects or mergers and acquisitions in a cost-conscious environment.

    Read the article

  • Create Virtual Image of Laptop before Formatting

    - by Simon Mark Smith
    I have a three-year-old laptop running Windows XP that I used for business. Although I have not used the laptop in over a year, I now want to re-commission it with a fresh install of Windows 7. Before I do the fresh install, I want to create a virtual image of the laptop that I can keep and potentially run on my desktop machine, should I ever need to access any of the old files/projects it currently contains. I know that most people will say to just copy the files over to your desktop, but my concern is the configuration of the laptop. I used to use it for development, and it has older versions of Visual Studio, SQL Server, ActiveX controls, etc. than I currently use, so I really want to preserve the environment, not just the files. So really I am asking: what is the best tool-set/method to achieve this? I understand there are free VM tools available, but I have never done this before and would appreciate any help.
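
    One commonly used free route for this kind of physical-to-virtual capture is Sysinternals Disk2vhd, run on the laptop itself; a hedged sketch (the output path is illustrative and should point at an external drive, not the disk being captured):

      rem capture the laptop's C: drive into a VHD that VirtualBox or Virtual PC can attach
      disk2vhd.exe C: E:\old-laptop.vhd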

    Read the article

  • data replication from a production web server back to the staging web server

    - by Dennis Smith
    We have two web servers: development/staging and production. Code and some documentation are moved from the staging area to production, either through on-demand jobs or nightly via a global replication job. The production server, of course, sits isolated in a DMZ. There is some content that gets uploaded to the live server that needs to be replicated back to staging. Our security team is locking the network down (and they should) and restricting access to the live server. What are the best suggestions for replicating "live" data back to "stage", and for backing up the live server as well?
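
    One approach that tends to fit a locked-down DMZ is to have staging initiate the transfer and pull from the live server over SSH, so the DMZ host never opens connections into the internal network; a hedged sketch (host and paths are illustrative):

      # run on the staging box: mirror the live upload directory, deleting stale files
      rsync -avz --delete liveuser@live.example.com:/var/www/uploads/ /var/www/uploads/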

    Read the article

  • Limited resource practice problems?

    - by Mark
    I'm applying to some big companies, and the area I keep getting burned on is problems involving limited memory, disk space or throughput. These large companies process gigabytes of data every second (or more), and they need efficient ways of managing all that data. I have no experience with this, as none of the projects I have worked on have grown that large. Is there a good place to learn about or practice these sorts of problems? Most of the practice-problem sites I've encountered only have problems where you have to solve something efficiently (usually involving prime numbers), but none of them limit your resources.
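
    For a concrete example of the genre (illustrative, not from any particular company): find the 10 largest values in a stream far too big to hold in memory, using a bounded min-heap so the working set stays at O(k):

      import heapq

      def top_k(stream, k=10):
          heap = []                      # min-heap holding the k largest seen so far
          for value in stream:
              if len(heap) < k:
                  heapq.heappush(heap, value)
              elif value > heap[0]:      # bigger than the smallest of our current top-k
                  heapq.heapreplace(heap, value)
          return sorted(heap, reverse=True)

      print(top_k(iter(range(1_000_000))))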

    Read the article

  • Software engineering and independence

    - by Mark
    I tend to think very independently, often coming up with unconventional, sometimes unorthodox ways of solving problems. I do not like deferring to authority, such as having to code software a certain way or follow strict guidelines/formats. Do you think the software engineering/development field would be very tough for someone like me who prefers autonomy? If so, what fields of computer science do allow for that?

    Read the article

  • Cool Tools You Can Use: Validation Templates for PeopleSoft Contracts Processes

    - by Mark Rosenberg
    This is the first in a series of postings we'll be making under the heading of Cool Tools You Can Use. Our PeopleSoft product management team identified the need for this series after reflecting on the many conversations we have each year with our PeopleSoft community members. During these conversations, we were discovering that customers and implementation partners were often not aware that solutions exist to the problems they were trying to address, and that the solutions were readily available at no additional charge. Thus, the Cool Tools You Can Use series will describe the business challenge we've heard, the PeopleSoft solution to the challenge, and how you can learn more about the solution, so that everyone can be sure to make full use of what PeopleSoft applications have to offer. The first cool tool we'll look at is the Validation Template for PeopleSoft Contracts Process Requests, which was first released in December 2013 as part of PeopleSoft Contracts 9.2 Update Image 4. The business issue our customers highlighted to us is the need to tightly control, but easily configure and manage, the scope of data that any user can process when initiating a process. Control of each user's span of impact is essential to reducing billing reconciliation issues, passing span-of-authority audits, and reducing (or even eliminating) the frequency of unexpected process results.

    [Figure: Setting Up the Validation Template for a PeopleSoft Contracts Process]

    With the validation template, organizations can easily and quickly ensure the software restricts the scope of transactions a user can affect, and it gives organizations the confidence that business processes are being governed effectively. Additionally, this control of PeopleSoft Contracts process requests can be applied, maintained and adjusted from a web browser, thereby enabling analysts to administer the rules without having to engage software developers to customize the software. During validation template setup, an analyst specifies the combinations of fields that must contain values when a user tries to set up a run control and initiate a PeopleSoft Contracts process from a process request page. For example, for the Process Limits component, an organization could require that users enter a valid combination of values for the business unit, contract, and contract type fields, or a value in the contract administrator field. Until the user enters a valid combination of entries on the process request page, he cannot launch the process. With the validation template activated for process request pages, organizations can be confident that PeopleSoft Contracts users will not accidentally begin generating invoices or triggering other revenue management processes for transactions beyond their scope of authority. To learn more about the Validation Template, please review the Defining Validation Templates section of the PeopleSoft Contracts PeopleBooks.

    Read the article

  • .NET Libraries Cost More Than Windows?

    - by Kevin Mark
    When looking into libraries to make my programming life a little easier, I've (almost) always been disappointed by the prices offered. For instance, Actipro's WPF Studio is $650. I suppose that's worth it if you plan to make money from the use of those controls. But take a look at, say, Windows: Windows 7 Ultimate is about $220. I consider Windows to be a far more complex and "worth-it" product/purchase than a library that runs on it. Why the significant difference in pricing? Do libraries really need to be so expensive, or do they need to charge more in order to make a decent sum of money?

    Read the article

  • Where do I connect the two red SATA cables on my motherboard?

    - by Dennis Smith
    I have a Compaq Presario PC, model SR5110NX. The processor is an AMD Athlon 64 3800+. It has 512 MB of RAM and a 40 GB hard drive, and I'm running Windows XP Professional on it. I have two SATA drives: one is black and the other is white. I also have two little red cables with letters and numbers on them; on one side of the cable it says "HP P/N:5188-2897 0720". My motherboard is an MCP61PM-HM Rev 1.0B. Where do I connect the two SATA connectors?

    Read the article

  • Loading dependencies for custom puppet functions

    - by Ben Smith
    I have written a custom Puppet function, which is working fine, that depends on the cloudservers gem (a Rackspace client library). This is fine if I have pre-installed the gem on a server before running Puppet, but it totally breaks if I have not installed the gem, as the function seems to be run during the 'compilation' sweep, well before my package definition is realised. Here's what my .pp looks like, with get_hosts being the function that requires the cloudservers gem:

      package { "rubygems":
        ensure   => installed,
        provider => "gem";
      }

      package { "cloudservers":
        ensure   => installed,
        provider => "gem",
        require  => Package["rubygems"];
      }

      class hosts::us {
        $hosts = get_hosts("us")
        hostentry { $hosts: }
      }

      define hostentry() {
        $parts   = split($name, ',')
        $address = $parts[0]
        $ip      = $parts[1]
        $aliases = $parts[2]
        host { $address:
          ip           => $ip,
          host_aliases => $aliases,
        }
      }

    Is there a way to stop the function being run so early, or at least to have its run depend upon the library being installed? Alternatively, is there a way I can add dependencies somewhere in the functions folder that will be available to the function?
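
    One mitigation worth noting: custom functions execute on the master at compile time, so the gem ultimately has to exist there, but the require can at least be deferred until the function is actually invoked, with a clear failure message; a minimal sketch (function body elided, file path per the stock custom-function layout):

      # <module>/lib/puppet/parser/functions/get_hosts.rb
      Puppet::Parser::Functions.newfunction(:get_hosts, :type => :rvalue) do |args|
        begin
          require 'cloudservers'   # deferred: not loaded when the file is first scanned
        rescue LoadError => e
          raise Puppet::ParseError, "get_hosts needs the cloudservers gem on the master: #{e}"
        end
        # ... query Rackspace and return "address,ip,aliases" strings ...
      end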

    Read the article

  • Can't get Ubuntu to work with Windows 8

    - by John Mark High
    I've been trying to dual boot Ubuntu with Windows 8, but so far I haven't been able to. The laptop I'm using is an HP Pavilion g6-2240sa pre-installed with Windows 8. I've made the bootable USB with Ubuntu 12.10, and it installs, but when I restart, the computer boots straight into Windows with no GRUB boot options. I can get into Ubuntu once by doing an advanced restart and booting from the Ubuntu partition. I can use Ubuntu fine, but once I restart or shut down, I do the advanced restart again and the Ubuntu partition is now gone and I have to reinstall. I used this tutorial to install Ubuntu: http://www.youtube.com/watch?v=wNCSbTyUzoM After I reinstalled and still had no GRUB boot menu, I used Boot-Repair to reinstall it. Once I rebooted, the computer went straight to Windows again and the Ubuntu partition was gone. Can I dual boot Windows 8 and Ubuntu 12.10 with GRUB, so I can pick which OS to boot when the computer starts, and without the partition going AWOL? Thanks in advance.
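
    One check worth running from the one-time Ubuntu session (assuming the machine boots in UEFI mode, as most Windows 8 laptops do) is to list the firmware boot entries, to see whether an "ubuntu" entry exists and where it sits in the boot order:

      sudo efibootmgr -v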

    Read the article

  • Compiling Assembly Manually [migrated]

    - by John Smith
    I am having trouble translating a specific line of assembly into machine code for the Nios II. I have successfully compiled these lines:

      START_TIMER = 0xF68C
      r0 = 0x0
      r8 = 0x8
      label = 50000

      addi r8, r8, %lo(label)    - 01000 01000 1100001101010000 000100
      subi r8, r8, 1             - 01000 01000 1111111111111111 000100
      bne  r8, r0, START_TIMER   - 01000 00000 1111011010001100 011110

    The line I am having trouble with is this one:

      orhi r8, r0, %hiadj(label)

    As explained in the Nios II handbook, "%lo" means "Extract bits [15..0] of immed32" and "%hiadj" means "Extract bits [31..16] and adds bit 15 of immed32". However, 50000 in binary is 1100001101010000, which is a 16-bit number; as far as I can see, it has no bits set in positions 16 through 31. I tried 0000000000000001, but it's incorrect. What am I doing wrong?
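
    To make the arithmetic concrete, here is the handbook's extraction written out in Python (a quick sketch added for illustration):

      def lo(imm32):
          return imm32 & 0xFFFF                                   # bits [15..0]

      def hiadj(imm32):
          return ((imm32 >> 16) + ((imm32 >> 15) & 1)) & 0xFFFF   # bits [31..16] plus bit 15

      label = 50000
      print(format(lo(label), "016b"))      # 1100001101010000
      print(format(hiadj(label), "016b"))   # 0000000000000001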

    Read the article

  • How do I get Windows 7 wallpaper to display the company logo properly?

    - by David Silva Smith
    Windows 7 is not displaying our company background properly. Curves show pixelation and straight lines are jagged. I'm working with a scalable vector graphics (SVG) image that I've exported to the same resolution (pixel dimensions, to be technical) as the desktop, which is 1440x900. I have tried exporting the image as a .png, .jpg, and .bmp. All of these look correct in an image viewing program, such as Windows Photo Viewer and Paint, but when I set the Windows background to these images, curves show pixelation and straight lines are jagged. Reading online, it seems that behind the scenes, Windows is converting the image to a .jpg with low quality compression, which is causing the issue. I've tried setting the image as a background through Internet Explorer, saving it as a .jpg, and putting the file in the Windows photo directory as suggested in some online forums, but none of those solutions have fixed my issue.

    Read the article

  • Use Entitlements To Secure LDAP-enabled Applications With Oracle Virtual Directory and Oracle Entitlements Server

    - by mark.wilcox
    I stumbled on an interesting article that shows how the author used OVD to expose OES security to protect a portal that only understood LDAP group-based authorization. This is great because it shows how you can use OES today to build central policies that can be used without needing to rewrite all of your applications, in particular if you just want to leverage rule-based groups.

    Read the article

  • Correctly indexing multiple domains with same content in Google and others

    - by AJweb
    I have a client with a dozen territorial domains, like mydomain.co.uk, mydomain.fr, mydomain.de, etc. Most of these domains hold the same dynamic content (a shop) in a different language, but some, like .co.uk and .com, have the same language and content, except for some content customized per country/domain on the front page, the contact page and a few other pages. I am aware that we should use the canonical meta tag to mark that duplicated content, but we want the .co.uk site to be present in the UK (indexed in google.co.uk) and the .com site to be present in the US and other countries, or at least that is the goal. Is there anything we can do to "help" Google determine the geographical meaning of each domain? If we mark the .com and .co.uk sites with the canonical tag, do you know how Google will decide which one to show for a given search?
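
    One mechanism designed for exactly this situation is the rel="alternate" hreflang annotation, which tells Google which regional variant to show rather than collapsing the duplicates into a single page; a hedged sketch, to be emitted in the head of every variant (URLs illustrative):

      <link rel="alternate" hreflang="en-gb" href="http://mydomain.co.uk/" />
      <link rel="alternate" hreflang="en-us" href="http://mydomain.com/" />
      <link rel="alternate" hreflang="fr"    href="http://mydomain.fr/" />
      <link rel="alternate" hreflang="de"    href="http://mydomain.de/" />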

    Read the article

  • Zooming options terminology

    - by Mark
    I've come up with four different ways to fit an image inside a viewing region, but I'm having trouble coming up with names for them. Perhaps someone can suggest some?

    1. Fit the image in the viewing region, but do not enlarge it if it is smaller.
    2. Size the image so it fits snugly inside the viewing region, enlarging if necessary; the image is as large as possible while still fitting within the viewing region.
    3. Size the image so that it fills the entire viewing region; the image will be the same size as or bigger than the viewing region.
    4. 1:1 ratio; one pixel in the image corresponds to one pixel on screen.

    All zooming options maintain aspect ratio. Stretching is just ugly, so it's not an option :)
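
    Whatever names these end up with, the four options reduce to four scale factors; a quick sketch for reference (the mode names are my own placeholders):

      def scale_factor(mode, img_w, img_h, view_w, view_h):
          fit  = min(view_w / img_w, view_h / img_h)   # largest scale still inside the view
          fill = max(view_w / img_w, view_h / img_h)   # smallest scale covering the view
          if mode == "scale_down_to_fit":              # option 1: never enlarge
              return min(fit, 1.0)
          if mode == "fit":                            # option 2
              return fit
          if mode == "fill":                           # option 3
              return fill
          return 1.0                                   # option 4: actual size (1:1)

      print(scale_factor("fit", 800, 600, 400, 400))   # 0.5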

    Read the article

  • Weird graphical errors in console and on computer shut down

    - by Mark A.
    I am all new to Ubuntu (and Linux in general), and I am experiencing some strange graphics on my screen. Console #1 (Ctrl+Alt+F1): [screenshot] Exactly the same happens on all the other consoles (2-6), and the consoles don't seem to work. I see the same thing when I hibernate or shut down my computer, but not when I suspend it. I was thinking that it may have something to do with the SiS 671 video driver workaround that I use: http://ubuntuforums.org/showpost.php?p=11476910&postcount=773 Any ideas how to fix this?

    Read the article

  • Can't complete dropbox installation from behind proxy in Ubuntu 11.10

    - by Mark Jones
    Problem: my PC on campus sits behind a proxy (requiring authentication) and I can't set up Dropbox. I am convinced that this is a proxy issue, as I can't set up Ubuntu One either (but I don't use Ubuntu One, so that is not a problem). I have looked at the Ubuntu One fix, but it seems to modify settings explicitly related to Ubuntu One. I can install the nautilus-dropbox package (compiled from source, from the .deb package on the website, and from the Software Centre), but once I click OK in the "Dropbox Installation" dialog box (prompting me to download the proprietary daemon), the installation just freezes with the OK button pressed. When I look at its process in System Monitor, its waiting channel is inet_wait_for_connect. I have set the following proxy directives thus far (where ** is my password):

    • Added mj22:**@proxy.waikato.ac.nz:80 information to the network proxy settings under Network in Settings.
    • Added http_host and http_port variables under gconf-editor-system-proxy.
    • Added 'host', 'authentication_password' and 'authentication_user', and ticked 'use authentication' and 'use_http_proxy', under gconf-editor-system-http_proxy.
    • Added export http_proxy="http://mj22:**@proxy.waikato.ac.nz:80/" to /etc/bash.bashrc.
    • Added Acquire::http::proxy "http://mj22:**@proxy.waikato.ac.nz:80/"; to /etc/apt/apt.conf (which is what I imagine is letting Software Centre retrieve packages).

    I have also added the equivalent ftp and https lines for the above entries. I get the internet fine and Software Centre can download packages, but that's it.

    Related issues: the Software Centre can't fetch reviews (but can download packages). When trying to add an online account in GNOME 3, a dialog pops up with "Error getting a Request Token: Cannot connect to proxy (proxy.waikato.ac.nz)".

    Updates: after some time (10 minutes-ish), Dropbox shows an error dialog that reads: "Trouble connecting to Dropbox servers. Maybe your internet connection is down, or you need to set your http_proxy environment variable." Is there a way I can see what environment variables are currently set?
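
    To answer that last question directly: the environment as the current shell sees it can be dumped with env and filtered for proxy entries:

      env | grep -i proxy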

    Read the article

  • How to keep balance / Unlock items / achievement rules

    - by Mark Knol
    I'm working on an engine for a game, to learn JavaScript and just because it's fun. I'm a Flash developer; I know how to build websites. Making games is a different challenge, and JavaScript is a challenge, but I'd love to learn how to structure the code and which patterns are common. I don't mind if the game never gets finished; I'm mostly interested in the programming part of it. I don't have a particular end result in mind, so I'll see where it takes me. I currently have a system where you can buy items. The items cost a specified amount of gold, silver, diamonds, etc. When you have selected and bought an item, it takes time before you get rewarded. When the time is over, you are rewarded with other properties (gold, energy, diamonds). For example, you can buy an apple for 50 gold; it takes a minute; you get rewarded with 75 energy. Or if you take a run, it costs 50 energy, takes 5 minutes, and the reward is 25 gold and 25 silver. These definitions are what I call actions. I already have a system where this works, and I can define as many actions with as many properties as I want. The definitions I have look kind of like this:

      {id:101, category:544, onInit:{gold:-75}, onComplete:{energy:75}, time:2000, name:"Apple", locked: false}
      {id:102, category:544, onInit:{gold:-135}, onComplete:{energy:145}, time:2000, name:"Banana", locked: false}
      {id:106, category:302, onInit:{energy:-50, power: -25}, onComplete:{gold:100, diamonds:2}, time:10000, name:"Run", locked: false}
      {id:107, category:302, onInit:{energy:-70, silver: -55}, onComplete:{gold:100}, time:10000, name:"Dance", locked: false}
      {id:108, category:302, onInit:{energy:-230, power: -355}, onComplete:{gold:70, silver:70}, time:10000, name:"Fitness", locked: false}

    Now I would love to add a system where I can lock/unlock actions using achievement rules. Let's say if you buy 10 apples, you unlock a new action, like bananas, which costs more and rewards more. In the future I may also want to restrict achievements and actions to levels. I am kind of stuck on how to structure this, and I have two questions:

    1. Which patterns are used to define achievements? How/where are they defined? Should they be part of the action, or should they live in a separate controller? Is it a good idea to register all completed actions with it? I think I want multiple types of achievement rules, and I'd love to hear some ideas on how to develop that (see the sketch below).
    2. How do you create/find a good balance, so the user does not get stuck and cannot cheat by repeating a pattern of actions to get too many rewards? I know there is no simple answer, and I'm lacking a good game concept, but I wonder if anyone has created such a game and how you dealt and played with it.
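
    As one possible shape for question 1 (my own sketch, not settled design): keep achievements out of the action objects, log completed action ids in a separate controller, and make each achievement a rule function over that log:

      // achievement rules live apart from the action definitions; ids reference them
      var completedLog = [];
      var achievements = [
        // illustrative rule: 10 completed Apples (id 101) unlocks a new action, id 103
        { unlocks: 103, rule: function (log) { return countOf(log, 101) >= 10; } }
      ];

      function countOf(log, actionId) {
        var n = 0;
        for (var i = 0; i < log.length; i++) { if (log[i] === actionId) { n++; } }
        return n;
      }

      // called by the engine whenever an action's timer completes
      function onActionComplete(actionId, actionsById) {
        completedLog.push(actionId);
        for (var i = 0; i < achievements.length; i++) {
          var a = achievements[i];
          if (a.rule(completedLog)) { actionsById[a.unlocks].locked = false; }
        }
      }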

    Read the article

  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control?  Read on to find out. At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012.  This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features.  I won’t attempt to describe them all here, but I will applaud Microsoft for making major improvements.  One of my favorite changes is the way database elements are broken down.  Previously every little thing was in its own file.  For example, indexes were each in their own file.  I always hated that.  Now, SSDT uses a pattern similar to Red-Gate’s and puts the indexes and keys into the same file as the overall table definition. Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a “semantic-aware” search and replace.  Funny, it reminds me of SQL Prompt’s Smart Rename feature.  But I’m not writing this just to criticize Microsoft and argue that they are late to the party with this feature set.  Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use. First, the basics Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, GIT, Vault, and Team Foundation Server.  Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control).  If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you.  Like BIDS, SSDT is a Visual Studio Project Type that comes with SQL Server, and if you don’t already have Visual Studio installed, it will install the shell for you.  If you already have Visual Studio 2010 installed, then it will just add this as an available project type.  On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS.  Both tool sets store their database model in script files.  In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly. For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts.  How you value those two features will likely make your decision for you. Unified Check-In If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT.  Because it is just another project in Visual Studio, it can be added to your existing Solution, and you can then do a complete, or unified single check-in of all changes whether they are application or database changes.  
This is simply not possible with SQL Source Control because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two.  You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build that is triggered from the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build.  Of course, the automated build that is triggered from the second check-in which contains the “other half” of your changes should pass and so the amount of time that the build was broken may be very, very short, but if that is very, very important to you, then SQL Source Control just won’t work; you’ll have to use SSDT. Refactoring and Migrations If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along, but you can’t have data suddenly disappearing from your target system, then you’ll probably want to go with SQL Source Control.  As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss.  Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts.  There is no way to insert your own script in the middle to override the default behavior of the tool.  In version 3.0 of SQL Source Control (Early Access version now available) you have that ability to create your own custom migration script to take the place of the commands that the tool would have done, and ensure the preservation of your data.  Or, even if the default tool behavior would have worked, but you simply know a better way then you can take control and do things your way instead of theirs. You Decide In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night) and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked-in.  Therefore having a unified check-in, while handy, is not critical for us.  As for migration scripts, these are critically important to us.  We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database.  Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can’t detect on their own.  Therefore, the ability to create a custom migration script to override the tool’s default behavior is very important to us.  And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.
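
    To make the migration-script point concrete, here is the classic example (mine, not from the tools' documentation) of a refactoring that a pure schema comparison would script as a DROP plus an ADD, silently losing data, while a hand-written migration preserves it:

      -- rename CustomerName to FullName without losing the column's contents
      ALTER TABLE dbo.Customer ADD FullName NVARCHAR(200) NULL;
      GO
      UPDATE dbo.Customer SET FullName = CustomerName;
      GO
      ALTER TABLE dbo.Customer DROP COLUMN CustomerName;
      GO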

    Read the article

  • Artec TV18R Tuner not working :(

    - by Antillar Maximus
    Can somebody kindly help me set up my TV tuner stick?

    Make and Model: Artec TV18R USB TV Tuner
    OS: Ubuntu 11.10 64-bit
    Hardware: MSI MS1656-US (GX640 barebones): i5 580M, 2x Intel X25M SSDs, Radeon HD 5850M, Realtek sound

    Logs and messages:

      antillar@antillar-MS-1656:/usr/share/dvb/atsc$ sudo lsusb
      Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
      Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
      Bus 002 Device 003: ID ffc0:001f
      Bus 002 Device 005: ID 05d8:8117 Ultima Electronics Corp.

    So it seems that the tuner is being detected, but the light on it does not turn on, and I am not sure what is going on.

      antillar@antillar-MS-1656:/usr/share/dvb/atsc$ sudo lspci
      [sudo] password for antillar:
      00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 02)
      00:01.0 PCI bridge: Intel Corporation Core Processor PCI Express x16 Root Port (rev 02)
      00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)
      00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 05)
      00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 05)
      00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 05)
      00:1c.2 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 3 (rev 05)
      00:1c.4 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 5 (rev 05)
      00:1d.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)
      00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a5)
      00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 05)
      00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 4 port SATA AHCI Controller (rev 05)
      00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 05)
      01:00.0 VGA compatible controller: ATI Technologies Inc Broadway PRO [Mobility Radeon HD 5800 Series]
      01:00.1 Audio device: ATI Technologies Inc Juniper HDMI Audio [Radeon HD 5700 Series]
      02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03)
      04:00.0 FireWire (IEEE 1394): JMicron Technology Corp. IEEE 1394 Host Controller
      04:00.1 System peripheral: JMicron Technology Corp. SD/MMC Host Controller
      04:00.2 SD Host controller: JMicron Technology Corp. Standard SD Host Controller
      04:00.3 System peripheral: JMicron Technology Corp. MS Host Controller
      04:00.4 System peripheral: JMicron Technology Corp. xD Host Controller
      05:00.0 Network controller: Intel Corporation Ultimate N WiFi Link 5300
      ff:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 02)
      ff:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 02)
      ff:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 02)
      ff:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 02)
      ff:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 02)
      ff:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 02)

    I have been searching for a solution to this all weekend without any luck. I remember reading that ehci_hcd is the problem, but I'm not sure if disabling it is a good idea; I don't want my USB ports to go back to v1.1. Thank you!
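
    A generic next step that usually narrows these problems down: watch the kernel log right after re-plugging the stick, to see whether a DVB driver actually claims the 05d8:8117 device or whether it merely enumerates:

      dmesg | tail -n 30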

    Read the article
