Search Results

Search found 22 results on 1 page for 'ernie'.

Page 1/1

  • Ubuntu Minimal in the new Intel NUC Haswell

    - by Ernie
    I have one of the new Haswell NUCs that just came out - the D34010WYK (https://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=23089&lang=eng&OSVersion=OS%20Independent&DownloadType=Documentation). I tried to load Ubuntu minimal 13.10 x64 from the mini.iso, and it cannot properly detect a network during install - my guess is a driver issue. I tried 12.04 LTS and it doesn't even see the NIC at all. Is there anything I could do to get one of the current versions to see the network properly? I did try the latest Trusty nightly desktop and it installed without issue. Is there a minimal version of that? Thanks, Ernie
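
    A minimal diagnostic sketch, assuming the NUC's onboard NIC is the Intel part served by the e1000e driver (an assumption worth confirming with the first command), run from the working Trusty desktop session:

        lspci | grep -i ethernet     # identify the NIC the kernel sees
        sudo modprobe e1000e         # try loading the assumed Intel driver
        dmesg | grep -i e1000e       # check whether it bound to the device

    If the desktop kernel supports the NIC but the 13.10 mini.iso kernel does not, a netboot/minimal image from the newer series is the likely way forward.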

    Read the article

  • Keeps asking for the CD - how do I force it to stop asking for the CD?

    - by Ernie
    I have installed 12.04 and am attempting to update the wireless drivers (I have a Dell Inspiron 6400), but it keeps asking for an installation CD that I do not have. How do I force it to stop asking for the CD and go to the network instead? The commands I am running are:

        ~$ sudo apt-get update
        ~$ sudo apt-get install firmware-b43-installer
        ~$ sudo apt-get remove bcmwl-kernel-source
        ~$ sudo reboot

    It asks for the CD at step 2.
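
    A minimal sketch of the usual fix, assuming the installation CD is still listed as a package source: comment out the cdrom line so apt falls back to the network mirrors.

        sudo sed -i '/^deb cdrom:/s/^/# /' /etc/apt/sources.list   # disable the CD source
        sudo apt-get update                                        # refresh from the network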

    Read the article

  • Wireless will not connect on an Asus U56? [closed]

    - by ernie
    I have an ASUS U56. I could connect to the internet without delay or problems in Ubuntu 11.04. Now, running Ubuntu 11.10, I have not been able to connect. I can see the network and the computer tries to connect, but never makes the connection. I have been trying to fix this problem since the release date of 11.10. I had to install the older 2.6 kernel to get the wifi to work. Any other solutions?

        ernest@ernest-U56E:~$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 14:da:e9:25:6c:d0
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:53

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        wlan0     Link encap:Ethernet  HWaddr 40:25:c2:3f:46:d4
                  inet addr:192.168.xx.xxx  Bcast:192.168.10.255  Mask:255.255.255.0
                  inet6 addr: fe80::4225:c2ff:fe3f:46d4/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:44962 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:32287 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:35614976 (35.6 MB)  TX bytes:5400816 (5.4 MB)

        ernest@ernest-U56E:~$
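
    A short sketch for narrowing this down, assuming the U56E's wifi is an Atheros chipset served by the ath9k driver (an assumption; the first command confirms it):

        lspci | grep -i -E 'network|wireless'   # identify the wifi chipset
        sudo rmmod ath9k                        # unload the assumed driver
        sudo modprobe ath9k nohwcrypt=1         # reload with hardware crypto disabled, a
                                                # common workaround for associations that
                                                # start but never complete on this driver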

    Read the article

  • What's the most efficient way to reclaim disk space after deleting lots of data from a database on Sybase ASE 15?

    - by Ernie Longmire
    As I understand it, based on some research but zero real-world experience with Sybase ASE, the only way to reclaim disk space once it's been allocated to a database is to export that database, create a new DB with the same schema, and reload all the exported data to the new database. Is this correct, or is there some other method? Then: assuming the above is correct and a full export-recreate-reload is required, what's the most efficient way to do that? Are there tools that will automate all or part of that process? I'm being told we would have to write separate bcp export and import commands for each and every object in the database, which if true sounds easily scriptable by someone who knows Sybase ASE well enough. (I don't.) This seems to me like a really basic housekeeping task, and it feels like I'm missing something obvious.
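
    For scale, the per-table commands being described would look roughly like this - a sketch with placeholder names, assuming character-mode bcp suits the data:

        # export one table from the old database, load it into the rebuilt one
        bcp mydb..customers out customers.bcp -c -Usa -P"$PASS" -SOLDSERVER
        bcp mydb_new..customers in customers.bcp -c -Usa -P"$PASS" -SNEWSERVER

    Generating one such pair per user table (type 'U' in sysobjects) is the scriptable part; tables with identity columns, text/image data, or triggers need extra care.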

    Read the article

  • NFS headaches with FreeBSD 4.9

    - by Ernie
    Once upon a time, this used to work, and I kept the configuration the same, but... now nothing. I'm just trying to get an NFS server set up on a FreeBSD 4.9 server. The process should be about as complicated as this: add this entry to /etc/exports:

        /var/home /var/vpopmail/domains -maproot=root XXX.XX.XX.XXX

    Execute this:

        portmap
        nfsd -u -t -n 4
        mountd -r

    Then this should work, regardless of network and firewall issues:

        showmount -e localhost

    But showmount -e localhost fails with the following error:

        RPC: Port mapper failure
        showmount: can't do exports rpc

    And even if I kill off the NFS daemon and try rpcinfo -p localhost, I get this error:

        rpcinfo: can't contact portmapper: rpcinfo: RPC: Unable to receive; errno = Connection reset by peer

    The portmapper is still running. Why does nothing work, as if it weren't?

    Edit to add: FYI, sockstat gives me this:

        $ sockstat | egrep "(nfsd|portmap)"
        root     nfsd       86310    3 udp4   *:2049                *:*
        root     nfsd       86310    4 udp4   *:973                 *:*
        root     portmap    45920    0 tcp4   *:111                 *:*

    Then, at a later time (say, 5 minutes), it's as if nfsd isn't acting as a server:

        $ sockstat | egrep "(nfsd|portmap)"
        root     portmap    45920    0 tcp4   *:111                 *:*

    But the nfs daemon is still running:

        $ ps ax | grep nfsd
        86311  ??  I      0:00.00 nfsd: server (nfsd)
        86312  ??  I      0:00.00 nfsd: server (nfsd)
        86313  ??  I      0:00.00 nfsd: server (nfsd)
        86314  ??  I      0:00.00 nfsd: server (nfsd)
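
    One thing worth trying - a minimal sketch, assuming the daemons have simply lost their portmapper registrations and need to come up again in order:

        killall mountd nfsd portmap 2>/dev/null   # stop everything
        portmap                                   # portmapper first
        nfsd -u -t -n 4                           # NFS daemons register with it
        mountd -r                                 # then mountd
        rpcinfo -p localhost                      # expect portmapper, nfs, and mountd listed

    Also worth noting: the sockstat output shows portmap bound on tcp4 only, while showmount and rpcinfo talk to the portmapper over UDP by default - a portmapper missing its UDP socket would produce exactly these RPC errors.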

    Read the article

  • Getting started with webserver clustering

    - by Ernie
    I work for a small ISP, and we host about 250 domains and all the stuff that goes along with that: DNS, mail, spam filtering, and backups. Currently, we have separate DNS servers (two of them) and mail servers (outgoing mail is actually on the secondary DNS server, but was previously on its own server). In the past, this was done as an insurance measure. The last thing we need is for some doofus (usually yours truly) to hose a server, taking out DNS and mail right along with it, or for spammers to jam our incoming SMTP server, preventing outgoing mail from being sent too. In the past, this was a problem, and our servers were set up the way they are now to combat it.

    However, clustering solutions like Sun's Cobalt RaQ (in days of olde) and Virtualmin appear to cater to an all-in-one approach, then deal with failures through redundant servers. I have avoided this thus far, but we've been using Virtualmin on our web server for a while now, and I'd like to expand into using it for a high-availability cluster. Our networking partner has recently built a datacenter that has eliminated all of our other bugaboos like network, cooling, and power issues, so now the only thing left to go wrong is me hosing a server, which happened earlier this month.

    One of the bigger reasons we've avoided going this route is that our hardware requirements aren't particularly high: one server easily handles all the sites we host (most of them are flat sites). Also, load-balancing routers tend to be expensive and complicated. All that I'm really expecting to do is build a two-node cluster for redundancy, so that when I hose a server (however rare that might be), we're not out for 8-12 hours while I rebuild it. What I need to know is how to get started, and whether I'm really in a position to bother with this kind of thing at all.
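
    On the "load-balancing routers are expensive" point: for plain redundancy, a floating IP shared between two nodes is the lightweight alternative. A minimal keepalived sketch, where every name and address is a placeholder and keepalived itself is just one common choice:

        # /etc/keepalived/keepalived.conf on the primary node
        vrrp_instance WEB_1 {
            state MASTER              # the standby node says BACKUP here
            interface eth0
            virtual_router_id 51
            priority 100              # standby uses a lower value, e.g. 90
            virtual_ipaddress {
                192.0.2.10            # the service IP that DNS points at
            }
        }

    If the master gets hosed, the backup claims 192.0.2.10 within seconds - no special router required.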

    Read the article

  • How do you prevent spam being sent by your users?

    - by Ernie
    So, for the third time in about two weeks (maybe less), one of our customers has had their password compromised, and a spammer was sending mail with their account through our webmail. As a result, our outgoing mail server has been listed at Spamhaus, and a lot of our outgoing mail is being rejected. I can't think of any way to prevent this from happening (although the webmail server now submits through the local sendmail binary instead of over SMTP, that just limits the scope of the problem), yet the big ISPs never seem to have a problem like this.
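
    One common defence is per-account monitoring for sending bursts. A sketch, assuming Postfix-style logs with sasl_username entries - the log path, format, and threshold are all assumptions to adapt:

        # flag any authenticated account that sent more than 200 messages today
        awk '/sasl_username=/ {
               match($0, /sasl_username=[^,]*/)
               n[substr($0, RSTART + 14, RLENGTH - 14)]++
             }
             END { for (u in n) if (n[u] > 200) print u, n[u] }' /var/log/mail.log

    Run hourly from cron, this gives a chance to lock or rate-limit a compromised account before the blacklists notice; a policy daemon enforcing a hard per-user cap is the sturdier version of the same idea.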

    Read the article

  • Is an open license enough?

    Ernie Leseberg blog: "Does having an open license for a software project, have all the advantages negated if the development process is basically closed to the outside world?"

    Read the article

  • How do you gracefully upgrade mission critical systems to wildly disparate systems?

    - by Ernie
    In the span of the 12+ years of my career, I have yet to overcome this hurdle, and I suspect the answer simply isn't easy or even possible, so I ask everyone here for their experience. Say that you're running into egregious problems that can only be fixed by moving from one platform to another - either because the platform chosen years ago turned out to be a mistake, or because you've simply grown beyond what the system was originally designed for. You know for certain that the cruft that has built up over time will invariably mean that it will be nearly impossible to test for all the things that will certainly lead to tech support hell - which we all know leads to the loss of customers. Not that customers aren't already complaining about the egregious problems that already exist!

    The best possible way that I've discovered so far is to devise a plan for the changeover, test it on a few clients, then a dozen, then a hundred, then finally finish the changeover for everyone and pray that you've worked out all the bugs with those first hundred and twenty, and that the animal by-products will not hit the ventilation system in the most spectacular fashion possible. However, that doesn't mean that it won't anyway. So say that you're moving from Exchange to Exim (or even just from Sendmail to Exim). How do you handle it?

    Read the article

  • Switch to IPv6 and get rid of NAT? Are you kidding?

    - by Ernie
    So our ISP has set up IPv6 recently, and I've been studying what the transition should entail before jumping into the fray. I've noticed three very important issues:

    1. Our office NAT router (an old Linksys BEFSR41) does not support IPv6. Nor does any newer router, AFAICT.

    2. The book I'm reading about IPv6 tells me that it makes NAT "unnecessary" anyway. If we're supposed to just get rid of this router and plug everything directly into the Internet, I start to panic. There's no way in hell I'll put our billing database (with lots of credit card information!) on the internet for everyone to see. Even if I were to propose setting up Windows' firewall on it to allow only 6 addresses to have any access to it at all, I still break out in a cold sweat. I don't trust Windows, Windows' firewall, or the network at large enough to be even remotely comfortable with that.

    3. There are a few old hardware devices (i.e., printers) that have absolutely no IPv6 capability at all. And likely a laundry list of security issues that date back to around 1998. And likely no way to actually patch them in any way. And no funding for new printers.

    I hear that IPv6 and IPSEC are supposed to make all this secure somehow, but without physically separated networks that make these devices invisible to the Internet, I really can't see how. I can likewise really see how any defences I create will be overrun in short order. I've been running servers on the Internet for years now and I'm quite familiar with the sort of things necessary to secure those, but putting something private on the network like our billing database has always been completely out of the question. What should I be replacing NAT with, if we don't have physically separate networks?
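
    The short answer to "what replaces NAT" is a stateful firewall that denies inbound traffic by default - the protection NAT provided was really its state table, not the address rewriting. A minimal ip6tables sketch (interfaces and exceptions are assumptions to adapt):

        # default-deny inbound; allow loopback, replies to outbound traffic, and ICMPv6
        ip6tables -P INPUT DROP
        ip6tables -P FORWARD DROP
        ip6tables -A INPUT -i lo -j ACCEPT
        ip6tables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        ip6tables -A INPUT -p ipv6-icmp -j ACCEPT   # IPv6 does not function without ICMPv6

    Devices like the printers and the billing database can additionally live on a VLAN with only non-routed (unique local) addresses, so they have no global address at all - logically invisible to the Internet, even without physically separate networks.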

    Read the article

  • FTP error when doing file transfer

    - by Ernie
    I'm running vsftpd version 3.0.2 over FTPES, and I'm having a bit of trouble with file transfers. It seems to work fine when I'm on the LAN, but not from an external IP address. I have the control port and data ports open on my server's software firewall and my router's firewall. When I'm using the service from an external IP address, a file transfer will sometimes complete, but it times out and the client always reports: "426 Failure writing network stream". I've tried several clients. I suspect the data connection is being broken somewhere, either at the router or by some server policy; maybe because I'm using passive FTP? Suggestions?
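
    A minimal sketch of the passive-mode settings worth pinning down in vsftpd.conf - the port range and address are placeholders, and the firewall must forward the same range:

        pasv_enable=YES
        pasv_min_port=40000        # fix the data ports so the firewall can match them
        pasv_max_port=40100
        pasv_address=203.0.113.5   # the public IP advertised to external clients

    With TLS in the picture, the router cannot read the PASV reply to open data ports on the fly, so an explicitly forwarded range is effectively required - a plausible reason transfers stall only from outside.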

    Read the article

  • Drupal - get program name

    - by ernie
    I would like to get the program name in Drupal (actually, the name in the URL that calls the program or function). Example:

        http://localhost/drupal6/my_widget

    This works:

        $myURL = basename($_SERVER['REQUEST_URI']);  // $myURL = 'my_widget'

    but not when I have additional parameters:

        http://localhost/drupal6/my_widget/parm1/parm2
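
    A minimal sketch of the usual Drupal 6 approach: arg() splits the internal path, so the first component survives any trailing parameters (this assumes the standard path rather than a URL alias):

        <?php
        // arg(0) returns 'my_widget' for my_widget, my_widget/parm1, my_widget/parm1/parm2
        $program = arg(0);
        // the full internal path, if ever needed:
        $path = $_GET['q'];   // e.g. 'my_widget/parm1/parm2'
        ?>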

    Read the article

  • Drupal filter_form form input

    - by ernie
    This Drupal form snippet gives me a textarea where the user can switch the filter to Full HTML/WYSIWYG mode. My question: how can I default it to Full HTML mode?

        function MY_MODULE_admin() {
          $form = array();
          $form['format'] = filter_form($form->format);

          // MY_MODULE - ** Image 1 **
          $form['MY_MODULE_image_1'] = array(
            '#type' => 'textarea',
            '#title' => t('Image 1'),
            '#default_value' => variable_get('setup_image_1', 'image_1.jpg'),
            '#description' => "Current value = " . variable_get('setup_image_1', 'image_1.jpg'),
            '#required' => TRUE,
          );
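
    A minimal sketch of the usual answer in Drupal 6: filter_form()'s first argument is the input format ID, so passing the Full HTML format's ID makes it the default. The ID is site-specific - 2 is only the common stock value, verifiable at admin/settings/filters:

        // default the format selector to Full HTML (format ID assumed to be 2)
        $form['format'] = filter_form(2);

    As an aside, $form->format in the snippet above is an object access on an array; it comes back empty, which is why the selector currently falls back to the site default.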

    Read the article

  • dedicated template for a Drupal module

    - by ernie
    I have a Drupal module that I want to present in a clean page - with no headers, menus, footer, etc. I think all I need is a version of page.tpl.php that contains the HTML page headers and <?php print $content ?>. How can I point my module to such a page?
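
    A sketch of one common Drupal 6 approach: suggest an alternative page template from the theme layer whenever the module's path is active (the theme name, path, and template name are placeholders):

        // in the active theme's template.php
        function mytheme_preprocess_page(&$vars) {
          // use page-clean.tpl.php, a stripped-down copy of page.tpl.php,
          // whenever this module's page is being viewed
          if (arg(0) == 'my_module_path') {
            $vars['template_files'][] = 'page-clean';
          }
        }

    page-clean.tpl.php then needs little more than the html/head boilerplate and <?php print $content ?>.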

    Read the article

  • where to find Ubercart translation files

    - by ernie
    I am trying to update the language-specific text files ("po files") for Ubercart, but it is unclear who maintains these files, and where. Several places are cited, but I am not sure which one is maintained:

        http://ftp.drupal.org/files/translations/6.x/ubercart/
        http://l10n.privnet.biz/translation_group/

    I would also like a description of how to do this in Drupal. In Drupal (link: admin/build/translate/import) there are several text groups to select. Do I have to repeat the import for each group?

    Read the article

  • construct test environment for web application on PC - directory issues

    - by ernie
    I have a site that physically has this directory structure:

        - public_html
          -- conf          > contains file conf.php
        - SiteFiles
          -- LiveSite      > contains file ConfLive.php

    The directory public_html/conf/ contains a file called conf.php; this file contains the following include:

        include_once('/home/mydir/SiteFiles/LiveSite/conf/ConfLive.iphp');

    I want to copy this application to a test PC to test it. The test PC uses XAMPP Apache. The "root" directory on the test machine is C:\xampp\htdocs\. My questions:

    1. Where is the logical path "/home/mydir/" defined?
    2. What steps should I take to get this to work on my test machine, preferably by server configuration and not by changing the application?

    Thanks. (PS: maybe this question is better posed on the Server Fault site.)
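
    On question 1: /home/mydir is just the account's home directory in the live host's filesystem - nothing in Apache or PHP defines it, which is why the absolute include breaks on the test PC. A configuration-only sketch for Windows, assuming Apache/PHP run from the C: drive (where a leading "/" resolves against the drive root), with placeholder paths:

        :: map the live path onto the copied site files with a directory junction
        mkdir C:\home
        mklink /J C:\home\mydir C:\xampp\sitefiles-test

    Here C:\xampp\sitefiles-test is assumed to contain the copied SiteFiles tree. The one-line application change, if permitted, would be to derive the prefix at runtime (e.g. from dirname($_SERVER['DOCUMENT_ROOT'])) instead of hard-coding /home/mydir.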

    Read the article

  • Drupal - assign menu to block based on node type

    - by ernie
    I want to assign a specific menu to a left-sidebar block, based on the node type of the page currently being displayed. I think it should look something like this, but I am stuck:

        function my_module_nodeapi(&$node, $op) {
          switch ($op) {
            case 'view':
              if ($node->type == "large_reptiles") {
                //menu_set_active_menu_name('menu_reptile_menu');
                //menu_set_active_item('menu_reptile_menu');
              }
              break;
          }
        }
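
    An alternative sketch that avoids hook_nodeapi entirely: in Drupal 6, the menu's block can be given PHP-mode visibility so it only shows up for that node type (menu_get_object() is core; the type name follows the snippet above):

        <?php
        // block visibility snippet for the reptile menu's sidebar block
        $node = menu_get_object();
        return $node && $node->type == 'large_reptiles';
        ?>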

    Read the article

  • mysql statement with nested SELECT - how to improve performance

    - by ernie
    This statement appears inefficient, because only one out of 10 records is selected and only 1 in 100 entries contains comments. What can I do to improve it?

        $query = "SELECT A, B, C,
                    (SELECT COUNT(*) FROM comments
                     WHERE comments.nid = header_file.nid) AS my_comment_count
                  FROM header_file
                  WHERE A = 'admin'";

    Edit: I want header records even if no comments are found.
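
    A minimal rewrite sketch: the correlated subquery becomes a LEFT JOIN with an aggregate, which still returns header rows that have no comments. It assumes an index on comments.nid, which either form needs to be fast:

        SELECT h.A, h.B, h.C, COUNT(c.nid) AS my_comment_count
        FROM header_file h
        LEFT JOIN comments c ON c.nid = h.nid
        WHERE h.A = 'admin'
        GROUP BY h.nid, h.A, h.B, h.C;

    COUNT(c.nid) counts only matching rows, so headers without comments come back with 0 instead of disappearing.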

    Read the article
