Search Results

Search found 115 results on 5 pages for 'tj fischer'.

Page 1/5 | 1 2 3 4 5  | Next Page >

  • Disabling a faulty card reader on a laptop (PB TJ-75) under Linux

    - by Gab
    My problem is that my laptop [PB TJ-75] has a faulty Alcor card reader. It's 100% sure: the device is dead and unusable whatever the OS is, and it cannot be disabled in the BIOS (latest: Vendor: Phoenix Technologies LTD, Version: V1.26, Release Date: 05/04/2010). If I could remove it from the main board easily, and if the system would then never look for it again, I'd be very happy. Is that possible? Has anyone ever tried it? Or maybe I could replace the BIOS with a more open one that lets you disable the card reader. Does such a thing exist?

    Here's what I've tried so far to disable it. In Windows 7, I choose 'disable' in Device Manager and that's OK; otherwise the device keeps appearing and disappearing and a lot of resources are used. In Lubuntu 13.04, I got extra boot time, with the message 'sdb, assuming drive cache, etc.'. I tried other distros (ISOs booted by GRUB): I can boot Puppy, GParted, and Redobackup apparently without any problem, but I cannot boot Debian (live or install); I also tried CrunchBang and Tails, and I get a loop of 'usb device, scsi n+1 blabla'. I tried "nousb", no result. I blacklisted EHCI, no result, then the usb_storage module, which gave better boot time in Lubuntu (just the message "...data transfer failed") and better shutdown time too; but then there is no way to use USB storage media. In Debian, it ends at a BusyBox prompt.

    Is it possible to just disable that Alcor card reader? Does it have a specific module? Is there a special kernel boot option that I missed? Does it need kernel recompiling, and if yes, how do I do that with ISOs? Writing a driver which says everything is OK (out of my comprehension for the moment)? Disabling the device by vendor ID? What is the best way?
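    One route for the "disable by vendor ID" idea, sketched here as an untested udev rule: USB devices expose an `authorized` attribute, and setting it to 0 at hotplug time makes the kernel ignore the device. 058f is Alcor Micro's USB vendor ID; the exact product ID is left out here (check with `lsusb`), so as written this would deauthorize every Alcor device on the machine.

    ```
    # /etc/udev/rules.d/99-disable-alcor.rules -- a hedged sketch, not tested on this laptop.
    # Match the faulty reader by vendor (and ideally also idProduct, from `lsusb`)
    # and tell the kernel not to authorize it.
    ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="058f", ATTR{authorized}="0"
    ```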

    Read the article

  • In a ListView, a ViewStub cannot be found after the previous one is inflated

    - by user2958132
    I am using a list item layout, and in that layout there is a ViewStub where I want to put an image. I don't have the source of the list item layout; I just know there are some TextViews and ViewStubs in it. My purpose is to find the ViewStub first, set my own layout on it, and work with that. However, some of the ViewStubs cannot be found.

        public class TJAdapter extends CursorAdapter {
            ....
            public void bindView(View view, Context context, Cursor cursor) {
                ViewStub contentstub = (ViewStub) view.findViewById(R.id.content_stub);
                if (contentstub == null) {
                    LOG.error("TJ,contentstub is null");
                } else {
                    LOG.error("TJ,contentstub is not null");
                    contentstub.setLayoutResource(R.layout.icon_image);
                    View iconImage = contentstub.inflate();
                }
                ....
            }

            public View newView(Context context, Cursor cursor, ViewGroup parent) {
                final View view = mInflater.inflate(R.layout.list_item, parent, false);
                bindView(view, context, cursor);
                return view;
            }
        }

    And the log output looks like this:

        TJ,bindView is called
        TJ,contentstub is not null
        TJ,bindView is called
        TJ,contentstub is null
        TJ,bindView is called
        TJ,contentstub is not null
        TJ,bindView is called
        TJ,contentstub is not null
        TJ,bindView is called
        TJ,contentstub is null
        TJ,bindView is called
        TJ,contentstub is not null

    I spent a lot of time on it and have no idea why this happens. Can somebody help?
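    For context, a minimal sketch of why the stub can "disappear" (the inflated id below is an assumption, not from the original code): ViewStub.inflate() removes the stub from the hierarchy and replaces it with the inflated layout, so on a recycled row findViewById(R.id.content_stub) returns null even though the row is fine. Setting an inflatedId gives you a handle to look up instead:

    ```java
    ViewStub stub = (ViewStub) view.findViewById(R.id.content_stub);
    if (stub != null) {
        // First time this row layout is bound: inflate the stub.
        stub.setLayoutResource(R.layout.icon_image);
        stub.setInflatedId(R.id.icon_image_inflated); // hypothetical id
        View iconImage = stub.inflate();
    } else {
        // Recycled row: the stub is gone, but its inflated replacement remains.
        View iconImage = view.findViewById(R.id.icon_image_inflated);
    }
    ```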

    Read the article

  • Any optimization advice for this code (computing tied ranks of a vector)?

    - by user1748356
    The following function (performance-critical) computes tied ranks of a vector:

        mergeSort(x, inds, ci); // sorts vector x of length ci, also returning the sort keys of x in inds
        int tj = 0;
        double xi = x[0];
        for (int j = 1; j < ci; ++j) {
            if (x[j] > xi) {
                double rankvalue = 0.5 * (j - 1 + tj);
                for (int k = tj; k < j; ++k) {
                    ranks[inds[k]] = rankvalue;
                }
                tj = j;
                xi = x[j];
            }
        }
        double rankvalue = 0.5 * (ci - 1 + tj);
        for (int k = tj; k < ci; ++k) {
            ranks[inds[k]] = rankvalue;
        }

    The problem is that mergeSort(), the supposed performance bottleneck at O(N log N), is several times faster than the rest of the code, which is O(N). That suggests there is room for huge improvement in the other part. Any advice?
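    Since the O(N) part is slower than the sort, the memory access pattern is the usual suspect: ranks[inds[k]] scatters writes randomly across the output array, which is cache-hostile. Here is a self-contained baseline to benchmark against, a sketch using std::sort rather than the asker's mergeSort, but assigning the same 0-based average ranks:

    ```cpp
    #include <algorithm>
    #include <numeric>
    #include <vector>

    std::vector<double> tiedRanks(const std::vector<double>& x) {
        const int n = static_cast<int>(x.size());
        std::vector<int> inds(n);
        std::iota(inds.begin(), inds.end(), 0);
        std::sort(inds.begin(), inds.end(),
                  [&](int a, int b) { return x[a] < x[b]; });

        std::vector<double> ranks(n);
        int tj = 0; // start of the current run of ties
        for (int j = 1; j <= n; ++j) {
            // At the end of the array, or when the value changes, close the run.
            if (j == n || x[inds[j]] > x[inds[tj]]) {
                double rankvalue = 0.5 * (j - 1 + tj); // average 0-based position
                for (int k = tj; k < j; ++k) ranks[inds[k]] = rankvalue;
                tj = j;
            }
        }
        return ranks;
    }
    ```

    For example, x = {10, 20, 20, 30} yields ranks {0, 1.5, 1.5, 3}.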

    Read the article

  • Benchmarking the Linux Flash player and Google Chrome's built-in Flash player

    - by Fischer
    I use Xubuntu 14.04 64-bit; I installed the Flash player from the Software Center, and xubuntu-restricted-extras too. Are there any benchmarks of the Linux Flash player versus Google Chrome's built-in Flash player? I just want to see their performance, because in theory Google's Flash player should be more up to date and perform better than the one we use in Firefox (that's what I read everywhere). I have the latest version of Chrome installed, and Firefox Next, and I found that Flash videos in Chrome are laggy and take a long time to load, while the same videos load much faster in Firefox. I tend to prefer watching Flash videos in Firefox, especially the long ones, because it loads them so much faster. I can't believe these results on my PC, so is there any way to benchmark Flash player performance in both browsers? I want to know whether it's the Flash player, the browsers, or something else.

    Read the article

  • Issue with GoDaddy DNS manager

    - by Fischer
    I'm using domains.live.com to set up email for a domain registered with GoDaddy. The domains.live.com configuration page says to add this string: Value: v=spf1 include:hotmail.com ~all. GoDaddy's DNS manager isn't accepting it; it gives an error. Something is wrong, either with the string or with the DNS manager, and I would like to know how to fix it. Notes: the "more information" link is dead, GoDaddy no longer gives support by email, and there is no Microsoft support.
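    For reference, a sketch of how that SPF string is normally published, in standard zone-file syntax (the domain name and TTL are placeholders): it goes in a TXT record on the bare domain, with nothing else in the value field.

    ```
    example.com.  3600  IN  TXT  "v=spf1 include:hotmail.com ~all"
    ```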

    Read the article

  • Laravel - public layout not needed in every function

    - by fischer
    I have just recently started working with Laravel. Great framework so far! However, I have a question. I am using a layout template like this:

        public $layout = 'layouts.private';

    This is set in my Base_Controller:

        public function __construct() {
            // Styles
            Asset::add('reset', 'css/reset.css');
            Asset::add('main', 'css/main.css');

            // Scripts
            Asset::add('jQuery', 'http://ajax.googleapis.com/ajax/libs/jquery/1.8.1/jquery.min.js');

            // Switch layout template according to the user's auth credentials.
            if (Auth::check()) {
                $this->layout = 'layouts.private';
            } else {
                $this->layout = 'layouts.public';
            }

            parent::__construct();
        }

    However, I now get an error exception when I try to access functions in my different controllers which should not call any view, i.e. when a user is logging in:

        class Login_Controller extends Base_Controller {

            public $restful = true;

            public function post_index() {
                $user = new User();
                $credentials = array(
                    'username' => Input::get('email'),
                    'password' => Input::get('password'),
                );

                if (Auth::attempt($credentials)) {
                } else {
                }
            }
        }

    The error I get is that I have not set the content of the variables used in my public $layout. But since no view is needed in this function, how do I tell Laravel not to include the layout here? The best solution that I myself have come across (don't know if this is a bad way?) is to unset($this->layout); from post_index()... To sum up my question: how do I tell Laravel not to include public $layout in certain functions where a view is not needed? Thanks in advance, fischer
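    Building on the unset() idea the asker found, a minimal sketch (Laravel 3 era, matching the Base_Controller above; the redirect targets are assumptions): drop the layout in actions that only redirect, so Laravel never tries to render it.

    ```php
    class Login_Controller extends Base_Controller {

        public $restful = true;

        public function post_index()
        {
            // This action only redirects, so skip the layout entirely.
            unset($this->layout);

            $credentials = array(
                'username' => Input::get('email'),
                'password' => Input::get('password'),
            );

            if (Auth::attempt($credentials)) {
                return Redirect::to('dashboard');   // hypothetical route
            }
            return Redirect::to('login')->with('login_errors', true);
        }
    }
    ```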

    Read the article

  • How to keep menu in a single place without using frames

    - by TJ Ellis
    This is probably a duplicate, but I can't find the answer anywhere (maybe I'm searching for the wrong thing?), so I'm going to go ahead and ask. What is the accepted standard practice for creating a menu that is stored in a single file but included on every page across a site? Back in the day, one used frames, but this seems to be taboo now. I can get things laid out just the way I want, but copy/pasting across every page is a pain. I have seen PHP-based solutions, but my cheap-o free hosting doesn't support PHP (which is admittedly a pain, but it's a fairly simple webpage...). Any ideas for doing this that do not require server-side scripting?
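    Two common no-PHP approaches, sketched with placeholder file names: Server Side Includes, if the host enables them (pages usually need the .shtml extension), or a small client-side include where a script writes the menu markup.

    ```html
    <!-- SSI, processed by the server: -->
    <!--#include virtual="/menu.html" -->

    <!-- Client-side fallback; /menu.js is a hypothetical script that
         document.write()s the menu markup into the page: -->
    <script src="/menu.js"></script>
    ```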

    Read the article

  • Moodle 2 pages loading up to 2000% faster

    - by TJ
    On average our Moodle 2 pages were loading in 2.8 seconds; now they load in as little as 0.12 seconds, which is roughly 23 times faster! What was it, I hear you say? Well, it was the database connection, or more correctly the database library. I was using FreeTDS (http://docs.moodle.org/22/en/Installing_MSSQL_for_PHP), but now I'm using the new Microsoft Drivers 3.0 for PHP for SQL Server (http://www.microsoft.com/en-us/download/details.aspx?id=20098). I'm in a Windows Server IT department, and in both our live and development environments we have Moodle 2.2.3, IIS 7.5, and PHP 5.3.10 running on two Windows Server 2008 R2 servers using MS Network Load Balancing.

    Since moving to Moodle 2, the pages have always loaded much more slowly than they did in Moodle 1.9, and I've been chasing this issue for quite a while. I had previously tried the Microsoft Drivers for PHP for SQL Server 2.0, but my testing showed it was slower than the FreeTDS driver. Then yesterday I found Microsoft had released the new version, Microsoft Drivers 3.0 for PHP for SQL Server, so I thought I'd give it a run, and wow, what a difference it made. I have more testing to do, but so far it's looking good; I have scheduled multi-user load testing for early next week (fingers crossed).

    To make the change, all I needed to do was:
    - download the drivers
    - copy the relevant files to PHP\ext (for us they were php_pdo_sqlsrv_53_nts.dll and php_sqlsrv_53_nts.dll)
    - install the Microsoft SQL Server 2012 Native Client x64 (http://www.microsoft.com/en-us/download/details.aspx?id=29065)
    - add to PHP.ini: extension=php_pdo_sqlsrv_53_nts.dll and extension=php_sqlsrv_53_nts.dll
    - remove from PHP.ini: extension=php_dblib.dll
    - change in PHP.ini: mssql.textlimit = 20971520 and mssql.textsize = 20971520
    - change Moodle config.php: $CFG->dbtype = 'sqlsrv'; and 'dbpersist' => true
    - then reboot and test...

    I've browsed courses, backed up/restored some really large and complicated courses, deleted courses, etc., all good. Still more testing to do, but hey, this is a good start... Hope this helps anyone experiencing the same slowness.
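    Consolidating the PHP.ini changes described above into one sketch (the dll names are the NTS 5.3 builds named in the post; other PHP builds use different file names):

    ```ini
    ; Added:
    extension=php_pdo_sqlsrv_53_nts.dll
    extension=php_sqlsrv_53_nts.dll
    ; Removed:
    ;extension=php_dblib.dll
    ; Changed:
    mssql.textlimit = 20971520
    mssql.textsize = 20971520
    ```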

    Read the article

  • How do I access the "Deny" message from a Lidgren client?

    - by TJ Mott
    I'm using the Lidgren v3 network library for a UDP client/server networking model. On the server end, I'm initializing a NetServer object with the NetIncomingMessageType.ConnectionApproval message type enabled. The client is able to connect successfully, and the first packet it sends is a login packet containing a username and password supplied by the user. The server receives that and does some black magic to authenticate, and everything works up to that point. If the login fails, the server calls NetIncomingMessage.SenderConnection.Deny("Invalid Login Credentials"). I want to know how to properly receive this deny message on the client. I'm getting the message; it shows up with a message type of NetIncomingMessageType.StatusChanged. If I call ReadString on that message, I get a corrupted version of the string I passed to the Deny method on the server. The type of corruption varies; I've seen odd characters in there, but in every case it's truncated and is way shorter than the string I entered. Any ideas? The official documentation is sparse on this topic. I could use pointers from anyone who has successfully used the Lidgren library and uses the Accept or Deny methods. Also, if I don't do any authentication and just Approve() the connection every time, everything works just fine and I'm getting reliable two-way UDP traffic. (And lastly, Stack Exchange said I don't have enough reputation to use the "Lidgren" tag....???)
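    A sketch of the client read loop as it appears in the Lidgren v3 samples (client here is assumed to be the NetClient instance): a StatusChanged message carries a status byte before the reason string, so calling ReadString() first reads misaligned data, which would explain the truncation.

    ```csharp
    NetIncomingMessage msg;
    while ((msg = client.ReadMessage()) != null)
    {
        switch (msg.MessageType)
        {
            case NetIncomingMessageType.StatusChanged:
                // Read the status byte first, *then* the reason string.
                var status = (NetConnectionStatus)msg.ReadByte();
                string reason = msg.ReadString(); // "Invalid Login Credentials" on Deny
                break;
        }
        client.Recycle(msg);
    }
    ```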

    Read the article

  • MVC & Windows Authentication

    - by TJ
    Changed the ValidateUser function:

        public bool ValidateUser(string userName, string password)
        {
            bool validation;
            try
            {
                LdapConnection ldc = new LdapConnection(new LdapDirectoryIdentifier((string)null, false, false));
                NetworkCredential nc = new NetworkCredential(userName, password, "domainname.com");
                ldc.Credential = nc;
                ldc.AuthType = AuthType.Negotiate;
                ldc.Bind(nc); // user has authenticated at this point, as the credentials were used to log in to the DC
                string myvar = ldc.SessionOptions.DomainName;
                validation = true;
            }
            catch (LdapException)
            {
                validation = false;
            }
            return validation;
        }
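    For completeness, a hedged sketch of how this might be wired up from an MVC login action (the controller shape and route names are assumptions, not from the original post; FormsAuthentication is System.Web.Security):

    ```csharp
    [HttpPost]
    public ActionResult Login(string userName, string password)
    {
        if (ValidateUser(userName, password))
        {
            // Issue the forms-auth cookie for the now-verified domain user.
            FormsAuthentication.SetAuthCookie(userName, false);
            return RedirectToAction("Index", "Home");
        }
        ModelState.AddModelError("", "Invalid user name or password.");
        return View();
    }
    ```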

    Read the article

  • Using jQuery to open a popup window and print

    - by TJ Kirchner
    Hello. A while back I created a lightbox plugin using jQuery that loads a URL specified in a link into a lightbox. The code is really simple:

        $('.readmore').each(function(i) {
            $(this).popup();
        });

    and the link looks like this:

        <a class='readmore' href='view-details.php?Id=11'>TJ Kirchner</a>

    The plugin can also accept arguments for width, height, a different URL, and more data to pass through. The problem I'm facing right now is printing the lightbox. I set it up so that the lightbox has a print button at the top of the box. That link opens a new window and prints that window, all controlled by the lightbox plugin. Here's what that code looks like:

        $('.printBtn').bind('click', function() {
            var url = options.url + ((options.url.indexOf('?') < 0 && options.data != "") ? '?' : '&') + options.data;
            var thePopup = window.open(url, "Member Listing", "menubar=0,location=0,height=700,width=700");
            thePopup.print();
        });

    The problem is that the script doesn't seem to wait until the window has loaded; it wants to print the moment the window appears. As a result, if I click "cancel" in the print dialog box, it pops up again and again until the window loads. The first time I tried printing, I got a blank page; that might be because the window didn't finish loading. I need to find a way to alter the previous code block to wait until the window loads and then print. I feel like there should be an easy way to do this, but I haven't found it yet. Either that, or I need to find a better way to open a popup window and print from the lightbox script in the parent window, without altering the webpage code in the popup window.
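    One way to make the click handler wait, a sketch assuming the popup is same-origin (otherwise its load event is not accessible from the parent page): attach to the popup's load event and print from there.

    ```javascript
    $('.printBtn').bind('click', function () {
        var url = options.url +
            ((options.url.indexOf('?') < 0 && options.data != "") ? '?' : '&') +
            options.data;
        var thePopup = window.open(url, "Member Listing",
            "menubar=0,location=0,height=700,width=700");
        // Defer printing until the popup's document has finished loading.
        thePopup.onload = function () {
            thePopup.print();
        };
    });
    ```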

    Read the article

  • Increase RGB components every Hour (r), Minute (g), Second (b) for digital clock

    - by TJ Fertterer
    So I am taking my first JavaScript class (total noob), and one of the assignments is to modify a digital clock by assigning the color red to hours, green to minutes, and blue to seconds, then increasing the respective color component when it changes. I have successfully assigned a hex color value (e.g. "#850000") to each element (hours, minutes, seconds), but my brain is fried trying to figure out how to increase the brightness when hours, minutes, or seconds change, i.e. red goes up to "#870000" when changing from 1:00:00 PM to 2:00:00 PM. I've searched everywhere with no help on how to do this successfully. Here is what I have so far, and any help would be greatly appreciated. TJ

        <script type="text/javascript">
        <!--
        function updateClock() {
            var currentTime = new Date();
            var currentHours = currentTime.getHours();
            var currentMinutes = currentTime.getMinutes();
            var currentSeconds = currentTime.getSeconds();

            // Pad the minutes with leading zeros, if required
            currentMinutes = (currentMinutes < 10 ? "0" : "") + currentMinutes;

            // Pad the seconds with leading zeros, if required
            currentSeconds = (currentSeconds < 10 ? "0" : "") + currentSeconds;

            // Choose either "AM" or "PM" as appropriate
            var timeOfDay = (currentHours < 12) ? "AM" : "PM";

            // Convert the hours component to 12-hour format
            currentHours = (currentHours > 12) ? currentHours - 12 : currentHours;

            // Convert an hours component of "0" to "12"
            currentHours = (currentHours == 0) ? 12 : currentHours;

            // Get hold of the html elements by their ids
            var hoursElement = document.getElementById("hours");
            document.getElementById("hours").style.color = "#850000";
            var minutesElement = document.getElementById("minutes");
            document.getElementById("minutes").style.color = "#008500";
            var secondsElement = document.getElementById("seconds");
            document.getElementById("seconds").style.color = "#000085";
            var am_pmElement = document.getElementById("am_pm");

            // Put the clock sections' text into the elements' innerHTML
            hoursElement.innerHTML = currentHours;
            minutesElement.innerHTML = currentMinutes;
            secondsElement.innerHTML = currentSeconds;
            am_pmElement.innerHTML = timeOfDay;
        }
        // -->
        </script>
        </head>
        <body onload="updateClock(); setInterval('updateClock()', 1000)">
            <h1 align="center">The JavaScript digital clock</h1>
            <h2 align="center">Thomas Fertterer - Lab 2</h2>
            <div id='clock' style="text-align: center">
                <span id="hours"></span>:
                <span id='minutes'></span>:
                <span id='seconds'></span>
                <span id='am_pm'></span>
            </div>
        </body>
        </html>
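    One way to get the increasing brightness, a hedged sketch (the base value 0x85 and the per-unit step sizes are assumptions; the assignment may specify others): compute each channel from the current time and rebuild the three color strings with toString(16).

    ```javascript
    // Turn a base brightness plus N time units into a two-digit hex channel.
    function channelHex(base, step, units) {
        var v = Math.min(base + step * units, 255);   // clamp at full brightness
        return (v < 16 ? "0" : "") + v.toString(16);  // always two hex digits
    }

    function clockColors(now) {
        var h = channelHex(0x85, 2, now.getHours());   // red grows each hour
        var m = channelHex(0x85, 1, now.getMinutes()); // green grows each minute
        var s = channelHex(0x85, 1, now.getSeconds()); // blue grows each second
        return {
            hours:   "#" + h + "0000",
            minutes: "#00" + m + "00",
            seconds: "#0000" + s
        };
    }

    // Inside updateClock(), something like:
    //   var c = clockColors(currentTime);
    //   hoursElement.style.color = c.hours;
    //   minutesElement.style.color = c.minutes;
    //   secondsElement.style.color = c.seconds;
    ```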

    Read the article

  • Issues with Rsync on a NAS

    - by Daniel Fischer
    I'm trying to rsync a few external hard drives over to my new NAS, a Synology DS412+, but I'm noticing it's stupid slow. I was trying it by mounting the backup folder over AFP on a Mac; I was told this may be the wrong way to do it. I recently turned on "network backup" on the Synology and am now running rsync over ssh like this:

        rsync -ar --progress . admin@localip:/backup/path

    Is this the right way to do it now? Will it be faster? Is there something else I can do to make it faster? Edit: now that I run it, I'm getting a ton of "failed to set permissions" and "failed to set times" errors. What do I do?
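    On the flags, a hedged sketch (the paths are the asker's placeholders): -a already implies -r, and the "failed to set permissions/times" errors typically mean the receiving side will not let rsync set ownership or modes. Telling rsync not to try is a common workaround; if the "failed to set times" errors persist, adding --no-times also silences those, at the cost of slower re-syncs.

    ```
    rsync -a --no-perms --no-owner --no-group --progress . admin@localip:/backup/path
    ```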

    Read the article

  • Why do my speakers get distorted randomly on Windows 7?

    - by Daniel Fischer
    I have a studio monitor setup: two KRK 6's and a Focusrite Saffire Pro 24 (a FireWire interface). Every few hours my speakers sound distorted, and my workaround has been to go to the sound levels, open Properties for the Saffire audio device, go to Advanced, and toggle the Default Format to 16-bit and then back to 24-bit. Why does it screw up every few hours? Sometimes one speaker doesn't output at all, and the same process resets it, but that's rarer. Is this an OS issue or a Focusrite driver issue?

    Read the article

  • How do I gain admin privileges on a D-Link router if my ISP is not allowing me to?

    - by Fischer
    So I switched to a new ISP yesterday. They gave me a D-Link router; I can't use my old router. I want to change the wireless password, so I went to 192.168.1.1. I can log in with the username and password user/user, but not as admin. The catalog says the default username and password are admin/admin; I tried that and it didn't work. I tried admin with no password and many other combinations; none worked. I asked some other users, and they said the ISP is blocking users from logging in as admin and blocking the reset button. They also said there's a hack where you do something like: cmd, then telnet "router ip", and then something like dumpcfg. Could you please give a better explanation of how to gain admin privileges on your own router if your ISP is not letting you do so by default?

    Read the article

  • Windows Installer service could not be started

    - by Fischer
    I'm on Windows 7 64-bit, using the admin account. I try to install some programs, but can't. I troubleshot the error I got, and it turned out that the Windows Installer service is stopped. I tried to start it, and it says the Windows Installer service could not be started because it is disabled or has no enabled devices associated with it (Error 1058). How do I fix it? Note that the laptop has an expired BitDefender installed on it. I don't know if it's causing the problem or not; I just thought it was worth mentioning, since I've had many problems with misconfigured or expired antiviruses before. The MSIServer service could not be started either, and I did try running as administrator. The laptop is not mine; I'm just trying to fix it.
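    If the service really is disabled, a hedged sketch of the usual fix, run from an elevated Command Prompt (msiserver is the Windows Installer service name; note that sc requires the space after "start="):

    ```
    sc config msiserver start= demand
    net start msiserver
    ```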

    Read the article

  • 32-bit Ubuntu or 64-bit w/Intel Atom D510 w/4GB RAM?

    - by T.J. Crowder
    (I've seen this question and some related ones, and perhaps this is a duplicate, although part of my question is specific to the Atom D510.) I'm going to be installing Ubuntu on a new silent desktop as my latest (and hopefully last) attempt to switch from Windows to Linux for at least most everyday tasks. The new machine is entirely passively cooled, but as a consequence, not astonishingly powerful: an Atom D510 (dual-core, 1.6GHz, HT) on Intel's D510MO board. That's fine, I won't use it for gaming, (much) video editing, etc. It's a 64-bit processor and I'm maxing the board out at 4GB of RAM (hey, that 1.6 CPU needs all the help it can get), which naturally raises the question of whether to install Ubuntu 64-bit or 32-bit (and if the latter, either live with the missing RAM, or do the PAE kernel dance).

    Although I've used Linux on servers for years, I'm very nearly a Linux desktop newbie and am not currently in the mood to fight driver wars and such. So if I'm setting myself up for failure with 64-bit, I'll live with the missing ~0.8GB or fiddle with PAE. But if 64-bit is entirely "ready," great, I'm there. So: Do most mainstream apps (now) play nicely with 64-bit Linux? I can't help but notice the "AMD" in the ISO image filename ubuntu-10.04-desktop-amd64.iso, and I know AMD led the way on this stuff; does Ubuntu 64-bit play nicely with Intel processors? Just generally, would you recommend one or the other? (And if anyone has any experience with Ubuntu specifically on the D510 [32-bit or 64-bit] which might lead me one way or t'other, that would be useful.) Thanks in advance.

    Read the article

  • QoS for Cisco Router to Prioritize Voice and Interactive Traffic

    - by TJ Huffington
    I have a Cisco 891W NATing voice and data to the internet over a 10 Mbit down / 2 Mbit up connection. Voice traffic gets degraded when I upload large files, and pings time out as well. I tried to configure a QoS policy, but it's basically not doing anything; voice traffic still degrades when upload bandwidth gets saturated. Here is my current configuration:

        class-map match-any QoS-Transactional
         match protocol ssh
         match protocol xwindows
        class-map match-any QoS-Voice
         match protocol rtp audio
        class-map match-any QoS-Bulk
         match protocol secure-nntp
         match protocol smtp
         match protocol tftp
         match protocol ftp
        class-map match-any QoS-Management
         match protocol snmp
         match protocol dns
         match protocol secure-imap
        class-map match-any QoS-Inter-Video
         match protocol rtp video
        class-map match-any QoS-Voice-Control
         match access-group name Voice-Control
        policy-map QoS-Priority-Output
         class QoS-Voice
          priority percent 25
          set dscp ef
         class QoS-Inter-Video
          bandwidth remaining percent 10
          set dscp af41
         class QoS-Transactional
          bandwidth remaining percent 25
          random-detect dscp-based
          set dscp af21
         class QoS-Bulk
          bandwidth remaining percent 5
          random-detect dscp-based
          set dscp af11
         class QoS-Management
          bandwidth remaining percent 1
          set dscp cs2
         class QoS-Voice-Control
          priority percent 5
          set dscp ef
         class class-default
          fair-queue
        interface FastEthernet8
         bandwidth 1024
         bandwidth receive 20480
         ip address dhcp
         ip nat outside
         ip virtual-reassembly
         duplex auto
         speed auto
         auto discovery qos
         crypto map mymap
         max-reserved-bandwidth 80
         service-policy output QoS-Priority-Output
        crypto map mymap 10 ipsec-isakmp
         set peer 1.2.3.4 default
         set transform-set ESP-3DES-SHA
         match address 110
         qos pre-classify
        !

    fa8 is my connection to the internet. Voice traffic goes over a VPN ("mymap") to the SIP server; that's why I specified "qos pre-classify", which I believe is the way to classify traffic over the VPN. However, even when I ping a public IP while saturating upload bandwidth, the latency is exceptionally high. Is this configuration correct? Are there any suggestions that might make this work for my setup? Thanks in advance.

    Read the article

  • How easy is it to migrate a Linux VM image from one VM env to another?

    - by T.J. Crowder
    If I stick to one of the standard, well-supported VM disk images (like a raw image, or VDI, VMDK, ...), are Linux VMs typically easy to move between VM environments? E.g., between (say) VirtualBox and KVM, or VMWare and Xen? I'm talking here of fully virtualized environments, not paravirtualization requiring support within the guest OS. It seems to me that the kernels in most Linux distributions these days are configured to...keep an open mind and detect things at boot time, so you don't have the issue that you sometimes have moving a Windows VM from one virtualization system to another (I'm thinking particularly of HAL issues that Windows has, like ACPI vs. non-ACPI; I've also just had Windows VMs generally acting strangely when moved from VMWare to VirtualBox, for instance). I'm looking for a general answer, but if it helps, specifically I'm mostly going to be doing this with Ubuntu 8.04 LTS and 10.04 LTS guests. But that could change.
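    A hedged sketch of the mechanics when the formats do differ (file names are placeholders): qemu-img converts between the common disk image formats, so a move is often just a conversion plus a new VM definition pointing at the result.

    ```
    # VMware VMDK to raw (readable by KVM, Xen, and VirtualBox):
    qemu-img convert -f vmdk -O raw disk.vmdk disk.raw

    # Or straight to qcow2 for KVM:
    qemu-img convert -f vmdk -O qcow2 disk.vmdk disk.qcow2
    ```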

    Read the article

  • Remote NX login to Ubuntu, Gnome can't mount CD/DVD drive

    - by T.J. Crowder
    Even though I'm sitting next to it, I log into my Ubuntu 10.04 LTS system via NX Free Edition from another system at the moment (this is temporary, not worth buying a KVM for). Curiously, though, when I do that Gnome's auto-mounting fails for CD/DVD media (I haven't tried other kinds) with a "Not Authorized" error. For instance here's what happens when I put the Ubuntu 10.04 LTS installation CD in: This does not happen if I log into it locally (not via NX) with the same user account. When using NX, I can mount the media if I go to mount directly: tjc@midnight:~$ sudo mkdir /media/dvd tjc@midnight:~$ sudo mount -r -t iso9660 /dev/sr0 /media/dvd tjc@midnight:~$ ls /media/dvd autorun.inf casper dists install isolinux md5sum.txt pics pool preseed README.diskdefines ubuntu wubi.exe ...which, along with the "not authorized" error, suggests some kind of permissions problem to me (doh). What I find odd is that the same user is involved in both cases (local and via NX). I'm new to Ubuntu on the desktop (used it and other distros on servers for years), so I'm afraid I don't know how this auto-mounting is happening. I think it's handled by the gvfs package and its daemon, but that's about as far as I got (and perhaps I've taken a left turn even getting that far). Although I can work around it with mount, does anyone know how I might get auto-mounting to work?

    Read the article

  • Intel D510MO board - 1000Mbit LAN or not?

    - by T.J. Crowder
    The Intel forum seems to be down (signing in fails with connection refused), but perhaps someone here knows the answer. The Intel D510MO product page says that the LAN is 10/100/1000, but when I look at the NM10 chipset it uses, it says it's just 10/100 (and the detailed PDF spec here backs that up pretty definitively). I don't immediately see anything saying the D510MO has a different LAN controller than the NM10's onboard one, and it would seem odd if it did given the purpose of the board (low power, small footprint; integrated). Does this board support 1000Mbit LAN or not? Anyone have direct knowledge of it? Thanks in advance.

    Read the article

  • How to get rid of a stubborn 'removed' device in mdadm

    - by T.J. Crowder
    One of my server's drives failed, so I removed the failed drive from all three relevant arrays, had the drive swapped out, and then added the new drive to the arrays. Two of the arrays worked perfectly. The third added the drive back as a spare, and there's an odd "removed" entry in the mdadm details. I tried both

        mdadm /dev/md2 --remove failed
        mdadm /dev/md2 --remove detached

    as suggested here and here, neither of which complained, but neither of which had any effect, either. Does anyone know how I can get rid of that entry and get the drive added back properly? (Ideally without resyncing a third time; I've already had to do it twice and it takes hours. But if that's what it takes, that's what it takes.) The new drive is /dev/sda, and the relevant partition is /dev/sda3. Here's the detail on the array:

        # mdadm --detail /dev/md2
        /dev/md2:
                Version : 0.90
          Creation Time : Wed Oct 26 12:27:49 2011
             Raid Level : raid1
             Array Size : 729952192 (696.14 GiB 747.47 GB)
          Used Dev Size : 729952192 (696.14 GiB 747.47 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 2
            Persistence : Superblock is persistent
            Update Time : Tue Nov 12 17:48:53 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 1
                   UUID : 2fdbf68c:d572d905:776c2c25:004bd7b2 (local to host blah)
                 Events : 0.34665

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       19        1      active sync   /dev/sdb3
               2       8        3        -      spare   /dev/sda3

    If it's relevant, it's a 64-bit server. It normally runs Ubuntu, but right now I'm in the data centre's "rescue" OS, which is Debian 7 (wheezy). The "removed" entry was there the last time I was in Ubuntu (it won't currently boot from the disk), so I don't think it's some Ubuntu/Debian conflict (and they are, of course, closely related).

    Update: Having done extensive tests with test devices on a local machine, I'm just plain getting anomalous behavior from mdadm with this array. For instance, with /dev/sda3 removed from the array again, I did this:

        mdadm /dev/md2 --grow --force --raid-devices=1

    That got rid of the "removed" device, leaving me just with /dev/sdb3. Then I nuked /dev/sda3 (wrote a file system to it, so it didn't have the RAID superblock anymore), then:

        mdadm /dev/md2 --grow --raid-devices=2

    ...which gave me an array with /dev/sdb3 in slot 0 and "removed" in slot 1, as you'd expect. Then

        mdadm /dev/md2 --add /dev/sda3

    ...added it, as a spare again. (Another 3.5 hours down the drain.) So with the rebuilt spare in the array, given that mdadm's man page says

        RAID-DEVICES CHANGES
        ... When the number of devices is increased, any hot spares that
        are present will be activated immediately.

    ...I grew the array to three devices, to try to activate the "spare":

        mdadm /dev/md2 --grow --raid-devices=3

    What did I get? Two "removed" devices, and the spare. And yet when I do this with a test array, I don't get this behavior. So I nuked /dev/sda3 again, used it to create a brand-new array, and am copying the data from the old array to the new one:

        rsync -r -t -v --exclude 'lost+found' --progress /mnt/oldarray/* /mnt/newarray

    This will, of course, take hours. Hopefully when I'm done, I can stop the old array entirely, nuke /dev/sdb3, and add it to the new array. Hopefully, it won't get added as a spare!
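    For the record, a hedged sketch of the sequence that usually clears a stuck spare (it assumes /dev/sda3 holds nothing you need and that one more resync is acceptable): wipe the member's stale md metadata before re-adding it, so the old slot assignment can't be resurrected.

    ```
    mdadm /dev/md2 --remove /dev/sda3
    mdadm --zero-superblock /dev/sda3
    mdadm /dev/md2 --add /dev/sda3
    ```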

    Read the article

  • Why is port 444 open on this router?

    - by TJ Thind
    I have a Cisco RV110W. I ran nmap at it from the outside and nmap reports that the router has tcp port 444 open. Yet there are no port forwarding rules specifying this port. It should as far as I can tell, be closed. There's even a service listening to that port which I can connect to through telnet. I threw some SNPP commands at it but the service doesn't respond to any of them so I don't believe it's SNPP. Does anyone have any idea why this particular router has tcp port 444 open? I haven't been able to find anything in the manual or on Cisco's website.

    Read the article

1 2 3 4 5  | Next Page >