Search Results

Search found 14316 results on 573 pages for 'reference manual'.


  • Create new folder for new sender name and move message into new folder

    - by Dave Jarvis
    Background
    I'd like to have Outlook 2010 automatically move e-mails into folders designated by the person's name. For example:
      - Click Rules
      - Click Manage Rules & Alerts
      - Click New Rule
      - Select "Move messages from someone to a folder"
      - Click Next
    The following dialog is shown:
    Problem
    The next part usually looks as follows:
      - Click people or public group
      - Select the desired person
      - Click specified
      - Select the desired folder
    Question
    How would you automate those problematic manual tasks? Here's the logic for the new rule I'd like to create:
      - Receive a new message.
      - Extract the name of the sender.
      - If it does not exist, create a new folder under Inbox.
      - Move the new message into the folder assigned to that person's name.
    I think this will require a VBA macro.
    Related Links
      - http://www.experts-exchange.com/Software/Office_Productivity/Groupware/Outlook/A_420-Extending-Outlook-Rules-via-Scripting.html
      - http://msdn.microsoft.com/en-us/library/office/ee814735.aspx
      - http://msdn.microsoft.com/en-us/library/office/ee814736.aspx
      - http://stackoverflow.com/questions/11263483/how-do-i-trigger-a-macro-to-run-after-a-new-mail-is-received-in-outlook
      - http://en.kioskea.net/faq/6174-outlook-a-macro-to-create-folders
      - http://blogs.iis.net/robert_mcmurray/archive/2010/02/25/outlook-macros-part-1-moving-emails-into-personal-folders.aspx
    Update #1
    The code might resemble something like:
        Public WithEvents myOlApp As Outlook.Application

        Sub Initialize_handler()
            Set myOlApp = CreateObject("Outlook.Application")
        End Sub

        Private Sub myOlApp_NewMail()
            Dim myInbox As Outlook.MAPIFolder
            Dim myItem As Outlook.MailItem
            Set myInbox = myOlApp.GetNamespace("MAPI").GetDefaultFolder(olFolderInbox)
            Set mySenderName = myItem.SenderName
            On Error GoTo ErrorHandler
            Set myDestinationFolder = myInbox.Folders.Add(mySenderName, olFolderInbox)
            Set myItems = myInbox.Items
            Set myItem = myItems.Find("[SenderName] = " & mySenderName)
            myItem.Move myDestinationFolder
        ErrorHandler:
            Resume Next
        End Sub
    Update #2
    Split the code as follows. Sent a test message and nothing happened. The instructions for actually triggering a macro when a new message arrives are a little light on details (for example, no mention is made regarding ThisOutlookSession and how to use it). Thank you.


  • Installing SATA dvd burner on machine with no spare SATA ports/connectors

    - by Faheem Mitha
    Greetings. I have the following motherboard:
        Tyan Thunder K8WE S2895A2NRF Motherboard - extended ATX - nForce Pro 2200/2050 - Socket 940 - UDMA133, Serial ATA-300 (RAID) - 2 x Gigabit Ethernet - FireWire - 6-1 channel audio
    This is part of a computer that was assembled in the winter of 2006/2007. The user manual says the following with regard to SATA:
        Integrated SATA II Generation 1 Controllers (from nForce Professional 2200)
          - Two integrated dual-port SATA II controllers
          - Four SATA connectors support up to four drives
          - 3 Gb/s per direction per channel
          - NvRAID v2.0 support
          - Supports RAID 0, 1, 0+1 and JBOD
    I just purchased a SATA DVD burner. Here is the page for the product: http://www.amazon.ca/gp/product/B002QGDWLK/
    The problem I am facing is that I already have 4 SATA drives installed. I don't want to remove any of them. However, I want the DVD burner above installed as well. The person I am consulting with here (Bombay, India) tells me that my four available SATA ports are filled, and that my only option is to install a SATA card into the one free PCI slot on the motherboard. However, he says that with this setup I will not be able to boot from the DVD drive. Are these statements correct, and what are my other options, if any?
    Even if the statements in the last paragraph are true, I suppose I could use one of the motherboard connectors/ports that are currently being used by the hard drives for the DVD drive, and use the "add-on" connector with one of the hard drives. Not all of the 4 hard drives need to be bootable.
    BTW, despite having read through http://en.wikipedia.org/wiki/Serial_ATA#Cables.2C_connectors.2C_and_ports I am fuzzy on the differences between connectors, cables and ports. Thanks in advance.


  • Smart Array P400 battery failure

    - by RobIII
    The P400 Smart Array controller in my HP DL380 G5 is indicating "Battery Status: Failed, Replace Battery 1" in the System Management Homepage. The IML also indicates:
        POST Error: 1794-Drive Array - Array Accelerator Battery Charge Low
    I do have several replacement batteries (actually, several controllers including batteries) lying around at work but have never had to actually replace one. I am wondering if the battery replacement (swap) can be done hot, or if I need to take down my server to do the battery replacement. I have replaced other spare parts like fans and drives before, and all of those can be replaced hot. I just don't know about the battery of the P400 Smart Array controller. Any help would be appreciated.
    EDIT1: For those interested, straight from the horse's mouth. With thanks to Frands Hansen for pointing it out here.
    EDIT2: In the end I did just power down, to be on the safe side and because the manual says so, and replaced the battery. Couldn't be easier. Unplugged the old one at the end of the cable (not the end connected to the controller) and reconnected a spare one. The replaced battery has now been in the server for about an hour and is currently (still) recharging. I'm assuming all will end well.


  • Using VirtualBox bridged networking, connection from the guest OS to Oracle XE running on the host fails

    - by Licheng
    I am trying to make a JDBC connection from a VirtualBox Ubuntu guest OS to an Oracle XE database running on the host. However, the connection is refused. Here are the details of my environment:
      - VirtualBox: 4.1.4
      - Host OS: Windows 7
      - Guest OS: Ubuntu Server 11.04
      - Networking mode: bridged network
      - Oracle XE database running on the host
    Issue: WebLogic Server runs on the Ubuntu VirtualBox guest. It attempts to connect to an Oracle XE database running on the host OS (Windows 7) with listening port 1521. From the guest OS (Ubuntu), I am able to ping the host computer. However, when I configured a JDBC data source on the WebLogic server on the guest OS to connect to the Oracle XE, the connection took a long time and eventually I received an "IO Exception: The Network Adapter could not establish the connection". When I tried "telnet host-ip 1521", no connection was established.
    With bridged networking, I can make bi-directional connections between the host and the guest OS (e.g. connections through ssh and ftp). Is there anything I missed in the setup of bridged networking and the guest/host OS? Note that I was able to make the same connection within a normal networking environment (i.e. not using VirtualBox).
    I am not sure whether bridged networking is a good option for the work described above. Should I use host-only networking mode? If so, are there any specific configurations I need to perform? I read through the VirtualBox document on setting up the host-only network; however, it lacks details. I followed the procedures described in the manual and couldn't even connect to the host. Could some experts here enlighten me on this issue? Much appreciated. Licheng
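
    Since ping works but TCP port 1521 does not, a quick check from the guest narrows things down; a minimal sketch (host-ip is a placeholder for the Windows host's address):
        # From the Ubuntu guest: does anything answer on the listener port at all?
        nc -zv host-ip 1521
    If this times out while ping succeeds, the usual suspects are the Windows firewall blocking 1521 or the XE listener being bound only to localhost on the host, rather than the bridged networking itself.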


  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI fail over implementation set up, so if one of my storage units fails the other takes over immediately (it also runs the NFS shares). When fail over occurs, volumes are exported, the IP is switched to the other machine and the targets are reconfigured. The fail over of the storage system itself works just fine. I use NexentaStor for my filer.
    When I do a test (manual) fail over of my storage, the following occurs (note: I run the admin VMs on NFS and customer VMs on iSCSI):
      - All NFS-based VMs remain up and working perfectly through the failover and after.
      - All VMs running on iSCSI eventually report the following: an error about not being able to write to a particular block, an error about journaling not working, and then the file system goes read-only.
    To get the VMs working again I have to do the following:
      1. Force shutdown of the "broken" VMs.
      2. Detach the iSCSI SR.
      3. Re-attach the iSCSI SR.
      4. Boot the VM on a different server (5 in my pool).
    If I don't boot on a different server I get this error:
        Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!")
    The only way I have found to fix that error is to reboot the entire server it was running on previously, which is obviously a huge pain. Currently multipathing is NOT enabled (but it can be, and the same thing still occurs). I have edited much of the /etc/iscsid.conf file to work with the timeout settings, but to no avail.
    In short, my storage fails over properly but XenServer does not keep the connection alive. As a thought, the error that shows up in step 4 above might be the ultimate cause, and fixing that would fix everything? Any help would be appreciated more than you know.
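
    For reference, the recovery sequence described above can also be driven from the XenServer command line; a hedged sketch (UUIDs and host names are placeholders, and the PBD is the per-host attachment of the iSCSI SR):
        xe vm-shutdown force=true uuid=<vm-uuid>
        xe pbd-list sr-uuid=<sr-uuid>            # find the PBD for the affected host
        xe pbd-unplug uuid=<pbd-uuid>            # detach the iSCSI SR on that host
        xe pbd-plug uuid=<pbd-uuid>              # re-attach it
        xe vm-start uuid=<vm-uuid> on=<other-host>
    Scripting it does not fix the underlying VDI lock, but it makes the workaround less painful while the iscsid timeouts are being tuned.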


  • Windows Scheduled Startup Task doesn't appear to be fully working but why?

    - by Devtron
    I originally tried to use Group Policy to enforce a startup script to run at startup. My startup script is a .CMD file, which calls 10 .exe files. Using Group Policy I could never get this to work, so I looked into using Scheduled Tasks. And here I am.
    I have tried two different versions of my script (for syntax purposes). I originally thought my syntax could be bad, so I tried a few approaches. Neither works.
    My #1 .CMD file approach commands look similar to this:
        start "this is my title" /D "C:\Somepathhere\myExecutable.exe" "..\..\published\wc_task.wfc"
    My #2 .CMD file approach commands look similar to this (it invokes a shortcut file):
        rundll32 shell32.dll,ShellExec_RunDLL "C:\Somepathhere\bin\Virtual Workflow.lnk" ^
    Both of these scripts work fine if I manually run them, either by running the .CMD file, or even by manually forcing the Scheduled Tasks MSC console to "Run" the task. The manual process seems to work fine, but automated it does not. My scheduled task is set for startup and uses "highest privileges" to execute as Admin.
    At the end of my .CMD script, I added a line to write to a text file, just to prove that the script was being run. That command looks like this:
        echo foo > C:\foo.txt
    When I reboot my server and Scheduled Tasks kicks in, I never get my ten .EXE files to run, but I do get C:\foo.txt on my drive. What gives?


  • How can I manually install pecl_http on Ubuntu 9.10?

    - by Richard
    This is essentially a repost of http://stackoverflow.com/questions/4159369/ubuntu-9-04-pecl-extension-downloads-but-does-not-install. Hoping maybe someone can help me here.
    I've done this:
        sudo apt-get install php-pear
        sudo apt-get install php5-dev
        sudo apt-get install libcurl3-openssl-dev
    which installs fine. However, the next step:
        sudo pecl install pecl_http
    doesn't install the extension, but merely downloads it. There are no error messages. So I have unpacked it and built it myself per http://php.net/manual/en/install.pecl.phpize.php. Essentially:
        cd pecl_http
        phpize
        ./configure
        make
        make install
    I also ran "make test" to check all is OK, and it failed one test: HttpRequest, which is kind of fundamental to this package. And indeed this doesn't work:
        $r = new HttpRequest('http://www.google.com');
        $r->send;
        echo $r->getResponseCode();
    No request is sent, the response code is zero, but no errors either. How can I get this damn thing installed? Is this a bug? Am I doing something wrong? Any alternatives, workarounds? Help appreciated. Thanks
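
    One step the build instructions above leave implicit is telling PHP to load the freshly built module; a minimal sketch, assuming the stock Ubuntu 9.10 php.ini locations and that "make install" dropped http.so into the default extension directory:
        echo "extension=http.so" | sudo tee -a /etc/php5/cli/php.ini
        echo "extension=http.so" | sudo tee -a /etc/php5/apache2/php.ini
        php -m | grep -i http        # should list "http" if the module loads
    Also note that HttpRequest::send is a method, so the test snippet needs $r->send() with parentheses before getResponseCode() can return anything useful.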


  • OpenLDAP, how to allow both secure (TLS) and unsecured (normal) connections?

    - by Mikael Roos
    Installed OpenLDAP 2.4 on FreeBSD 8.1. It works for ordinary connections OR for TLS connections. I can change which by (un)commenting the following lines in slapd.conf:
        # Enable TLS
        #security ssf=128
        # Disable TLS
        security ssf=0
    Is there a way to allow clients to connect using TLS OR no TLS? Can the LDAP server be configured to support both TLS connections and non-TLS connections? I tried to find the information in the manual, but failed:
      - http://www.openldap.org/doc/admin24/access-control.html#Granting%20and%20Denying%20access%20based%20on%20security%20strength%20factors%20(ssf)
      - http://www.openldap.org/doc/admin24/tls.html#Server%20Configuration
    I also tried to read up on 'security' in the manual page for ldap.conf, but didn't find the info there either. I guess I need to configure the server with some negotiation mechanism: "try to use TLS if the client has it, otherwise continue without TLS".
    Connecting with a client (when slapd.conf is configured to use TLS):
        gm# ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts
        ldap_bind: Confidentiality required (13)
                additional info: TLS confidentiality required
        gm# ldapsearch -Z -x -b '' -s base '(objectclass=*)' namingContexts
    (this works, -Z makes a TLS connection)
    So, can I have my LDAP server supporting client connections using TLS and ordinary (no-TLS) connections? Thanx in advance.
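
    For what it's worth, the global "security ssf=128" directive is exactly what forbids the plain connections: it sets a minimum strength for every operation. A hedged sketch of a slapd.conf that accepts both kinds of connection but still protects the sensitive attributes (the ACL below is an illustration, not taken from the post):
        # no global "security ssf=..." line at all, so plain binds are allowed;
        # require an encrypted session only where it matters, e.g. passwords:
        access to attrs=userPassword
                by self ssf=128 write
                by * ssf=128 auth
                by * none
    With no global requirement, both of the ldapsearch invocations shown above (with and without -Z) should succeed, and TLS remains available to any client that asks for it.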


  • Adding PHP to Apache

    - by user528451
    Where I work, we use ancient technology that belongs in a museum. Further, I have to get everything done through system admins. They are telling me that in order to get PHP, they will need to upgrade the operating system as well as the Apache version.
        lcas100[67]% uname -a
        Linux lcas100 2.6.9-11.ELsmp #1 SMP Fri May 20 18:26:27 EDT 2005 i686 i686 i386 GNU/Linux
        lcas100[68]% cat /etc/*-release
        LSB_VERSION="1.3"
        Red Hat Enterprise Linux AS release 4 (Nahant)
        lcas100[75]% /ots/apache/bin/httpd -v
        Server version: Apache/1.3.31 (Unix)
        Server built:   Nov 3 2004 18:47:31
    This doesn't make sense to me because apparently Apache 1.3.x supports PHP: http://php.net/manual/en/install.unix.apache.php
    Furthermore, we have another machine that runs PHP and is running the exact same OS and OS version. The reason I want it on the former machine is because it is mounted on a different file system. Lastly, they tell me that all software the Apache webserver runs will need to be reinstalled/recompiled (assuming an Apache upgrade WAS needed). I am not even sure about this. Are they full of it? Thanks
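
    As a point of reference, PHP has historically been added to an existing Apache 1.3 tree with apxs and no OS upgrade at all; a hedged sketch, assuming the /ots/apache install includes apxs and that a PHP 5.2-era source tarball is acceptable on RHEL 4:
        cd php-5.2.17
        ./configure --with-apxs=/ots/apache/bin/apxs --with-mysql
        make
        sudo make install
        # then in /ots/apache/conf/httpd.conf:
        #   AddType application/x-httpd-php .php
    Whether building from source is acceptable policy-wise is a separate question, but technically Apache 1.3.31 does not force an upgrade.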


  • deploy LAMP config to new boxes with low/no effort

    - by user1444233
    I'm spending a lot of time setting up new CentOS 6 instances. I use a VCS (Subversion) for most of the config files and all of the webapp source files (GitHub), but even with excellent package managers (like yum, npm, easy_install, etc.) it still takes time. I'd like to get to the point where I could try out a new potential web host by just signing up for an account, logging in and automatically sucking my standardised config onto the box.
    I know there is a set of tools that can help:
      - Puppet
      - Chef
      - Vagrant
    and a set of services that sell solutions:
      - Jumpbox: http://www.jumpbox.com/
      - BitNami Cloud: http://bitnami.org/cloud
    I don't mind investing time in learning a new tool, but as a no-budget start-up, I'm keen to keep monthly costs down. My biggest concern is that time spent on the server config is time away from the codebase, and that's where I think my team and I should be investing our energy, at least until we get funded and scale up a bit. I'd be grateful for some recommendations on which way to jump on config:
      - Stick with SSH and manual deploys, at least until you get big.
      - Bite the bullet and learn, say, Puppet. You may only use it 8-10 times, but it pays to have such an easily tunable server bootstrap.
      - Don't bother, just pay the $100/month for a standard config service. It'll cost you $1000/year, but you should focus on the code.
    Other questions in this domain: I use quite a complex stack (Drupal, Zend Server, MySQL, PHP, MongoDB, Python, Django), but are there standard(ish) setups that include these or that I could build upon more quickly? Are the configs optimised for small, medium, large VPS (1GB, 4GB, 16GB)? How secure are they?
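
    A minimal sketch of what the second option above (the configuration-management route) usually boils down to in practice: one short bootstrap script that a fresh box can fetch and run. The repository URL, the manifest path and the assumption that EPEL (or the Puppet Labs repo) is already enabled are all placeholders, not a recommendation of a specific layout:
        #!/bin/sh
        # bootstrap a blank CentOS 6 box from the versioned config
        yum -y install git puppet
        git clone https://github.com/example/server-config.git /opt/server-config
        puppet apply --modulepath=/opt/server-config/modules \
            /opt/server-config/manifests/site.pp
    Once something like this exists, "trying out a new host" is one login and one command, which is close to the workflow described above.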


  • Plesk 10 - creating and using vhost.conf

    - by MrFidge
    I'm having some issues setting up and using a vhost.conf for one of my domains. So far none of the domains have required any extra configuration, but now I need to use a PEAR module, so I'm looking to include /usr/share/pear in the PHP settings for the domain.
    The vhost file was created in /var/www/vhosts/domain.com/conf/vhost.conf:
        <Directory /var/www/vhosts/domain.com/httpdocs>
            php_admin_value include_path ".:/usr/share/pear"
        </Directory>
    I then restart Plesk using:
        /usr/local/psa/admin/sbin/websrvmng --reconfigure-vhost --vhost-name=domain.com
    Or, as Plesk says that command is obsolete in Plesk 10, I've tried using:
        /usr/local/psa/admin/sbin/httpdmng --reconfigure-domain domain.com
    And for good luck I've restarted Apache too each time.
    Net result: none of the PEAR includes work unless I edit the include_path in /etc/php.ini! Any tips on how to get this MOFO working? I've had a look through the documentation, but TBH I just don't have time to read 40 pages of Plesk manual for one line of code; this can't be that hard, surely! Thanks for any pointers, H
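
    Two cheap checks that usually explain this kind of "my vhost.conf is ignored" situation; a hedged sketch (the generated-config paths vary between Plesk builds, hence the broad grep):
        # did httpdmng actually merge vhost.conf into the generated Apache config?
        grep -R "include_path" /var/www/vhosts/domain.com/conf/ 2>/dev/null
        grep -R "vhost.conf" /etc/httpd/conf* /var/www/vhosts/domain.com/conf/ 2>/dev/null
        # confirm what PHP reports for that vhost, e.g. with a throwaway phpinfo() page
        echo '<?php phpinfo();' > /var/www/vhosts/domain.com/httpdocs/info.php
    Also worth keeping in mind: php_admin_value only applies when PHP runs as an Apache module; if the domain is configured for FastCGI/CGI in Plesk, the include_path has to go into a per-domain php.ini instead of vhost.conf.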


  • Apache doesn't immediately notice a change in the document root

    - by Tom
    We use Capistrano for website deployments and our Apache document root is a symlink to a particular code release. The deployment procedure switches the symlink from the old release to the new release as the final step of the deployment.
    We are migrating our webservers from real servers running RHEL 5.6 to Amazon EC2 virtual machines running Ubuntu 11.10, and the new servers are suffering from a problem where Apache doesn't immediately notice the change to its document root when the symlink is switched. It can take a second or so (and I think I've even seen it take a couple of minutes). It's kind of like Apache has cached the physical path of the symlink for some time. Does anyone know of some Apache settings I could look at to get it to "scan" for changes to its served files quicker?
    Thoughts:
      - I read that the disks on virtual machines are much slower (since they are network attached storage). Perhaps the filesystem cache somehow works differently too? If so, is there anything that can be done?
      - The website runs PHP code. Perhaps there is some PHP config difference between RHEL and Ubuntu? I checked realpath_cache_ttl but both servers have it commented out, e.g.:
            ; Duration of time, in seconds for which to cache realpath information for a given
            ; file or directory. For systems with rarely changing files, consider increasing this
            ; value.
            ; http://www.php.net/manual/en/ini.core.php#ini.realpath-cache-ttl
            ;realpath_cache_ttl = 120
      - We do use the APC opcode cache but don't think it's the issue due to experimentation. The PHP code is in different file paths for each deployment and we ensure stat=1.
    Here is a similar question that is very interesting: 294107 - but it doesn't provide an answer for me. One solution would be to reload Apache every time we modify the document root symlink. I'll do this if we can't find another solution.
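
    One experiment that costs nothing and often narrows this down: make the symlink switch itself atomic, so there is never a moment when the old path has been unlinked but the new one is not yet in place. A minimal sketch, assuming a releases/current layout like Capistrano's defaults (the paths are placeholders):
        # build the new link under a temporary name, then rename it over the old one
        ln -s /var/www/releases/20121008 /var/www/current_tmp
        mv -T /var/www/current_tmp /var/www/current
    If the delay persists even with an atomic rename, a graceful reload after each deploy is a fairly cheap fallback compared to a full restart:
        sudo service apache2 reload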


  • System fans connected to a Gigabyte Z77-D3H motherboard do not increase in speed

    - by Andrew
    The motherboard (Gigabyte Z77-D3H) controls my 3-pin CPU fan just fine. My system fans are a 3-pin fan (plugged into SYS_FAN1) and a 4-pin fan (plugged into SYS_FAN3). All 3 of the system fan headers are 4-pin, but the user manual states that SYS_FAN1 is really a 3-pin header (that it can control the speed of a 3-pin fan) and the 4th pin is just a reserve. All my fans have a max RPM of 2000.
    Normally, all the fans run around 1000 RPM when I'm not doing anything intensive. This proves that the motherboard can set the speed. However, when I run Folding@Home and my temperatures increase (around 70C), only the CPU fan increases to around 2000 RPM. The system fans stay around 1000 RPM. Through the BIOS I am able to disable the system fan control, and the system fans then run at max RPM (meaning the motherboard was doing something). I've updated the BIOS to the latest version and tried out SpeedFan, but neither helped my situation.
    What I'd like is for the system fans to ramp up their RPMs as needed. Any thoughts?
    Tl;dr: My system (case) fans, but not my CPU fan, are always stuck around 1000 RPM out of 2000, no matter the temperature.


  • Start daemon after specific samba share is mounted

    - by getack
    I asked this question on AskUbuntu, but it's not getting any traction there, so I'll try here as well.
    I have a homebrew headless NAS running 12.04. In it I have a bunch of disks that are presented as a Samba share thanks to Greyhole. If I want to do anything to the files within this share, I must do it through Greyhole so that everything is updated properly. Thus, the share must be mounted locally and then accessed from there if I want to work on the files from the local machine. I do this mounting automatically thanks to these instructions. I also have Deluge installed, which takes care of all my torrenting needs. Deluge's default download location is in this share, so that all the downloads are immediately available to the rest of the network.
    Obviously, for everything to work, the share must be mounted, otherwise Deluge is going to have a problem downloading to it. The problem is, it seems like Deluge is starting before the shares are mounted when the system boots. So downloading/seeding does not continue automatically after boot. I have to log in and force a manual rescan and start on each torrent, otherwise all the torrents just hang on the error.
    Is there a way I can make Deluge start after the shares are properly mounted? I looked into Upstart's emits functionality but I cannot seem to get it to work properly. Any advice?
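
    One blunt but dependable approach that sidesteps Upstart's event plumbing entirely is to have the Deluge daemon wait for the mount point itself; a hedged sketch (the mount point and the "deluged" service name are assumptions about this particular setup):
        #!/bin/sh
        # wait-for-share-then-deluge: poll until the Greyhole share is mounted,
        # then start the Deluge daemon
        until mountpoint -q /mnt/greyhole; do
            sleep 5
        done
        service deluged start
    Run from rc.local (or wrapped in a simple Upstart job), this keeps Deluge from ever seeing the share's empty mount point at boot.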


  • SharePoint Backup/Restore without stsadm

    - by Kevin
    Due to problems we found with the restore of sites/site collections using stsadm (our tasks generated from workflows were not restored), we've taken a different route for backup/restore.
    We plan a major customization to our SharePoint site and want to take a backup so we can roll back in case the install fails. In our system testing (not production) environment, we've backed up the 12 hive, the virtual directories that IIS points to SharePoint, and the SharePoint databases in SQL (using SQL Server to do the db backups).
    We have custom event handlers and workflows built with Visual Studio, and deploy the DLLs to the GAC as version 2 (signed and versioned in Visual Studio). So when we deploy, the GAC will contain 2 versions of the workflows: version 1 and version 2. During the deploy we use SP stsadm features to install/activate the WFs. We also go to each library and add the new, version 2 WFs. This automatically sets the version 1 WFs to "Not Allow" new instances (which is what we want) and the version 2 as active. Perfect so far.
    When we've completed the install, we then assume a failure and attempt to restore to the same machines (SharePoint on one server, SQL on another). We start by uninstalling the version 2 WFs from the GAC, reset IIS (to clear the cache of these ver. 2 WF DLLs), restore the 12-hive and virtual directory folders, then restore the SQL dbs. This is all just as manual as you read it; no stsadm here.
    All seems to work after our restore; it appears the restore was successful. The mods we made to column names, data changes, etc. during the install are all reverted back to the original pre-install state. With one exception: when we run a workflow, it always fails, and the logs in the 12-hive indicate the WF is still trying to use version 2 of the DLL (System.IO file not found error).
    We think we've backed up and restored all the moving pieces of SharePoint, but we're missing something here. Does anybody have any ideas why the version 2 WF DLLs are still being referenced even though we restored all the folders and dbs of SharePoint? Thanks, Kevin


  • Fedora 9 not recognizing hard drive

    - by Andrew Jones
    I am installing Fedora 9 on a PC (specifications at the bottom) and have had a lot of trouble with it recognising the hard drive. To get the Fedora installer to recognize it in the first place I had to pass "ata_generic.all_generic_ide=1 pci=nomsi" to the kernel, after which it installed OK. However, now when I boot the installed OS, I get a "could not find filesystem '/dev/root'" error. I tried passing the same arguments to the kernel at boot as I did when installing, but to no avail. I have tried using the default LVM layout and defining manual ones, but it made no difference.
    There is no option in the BIOS to enable AHCI or anything like that; in fact the BIOS is very limited in most respects. I can get into the system by using the installation CD in rescue mode (with those extra kernel parameters), but I'm not sure what to do once in there...
    Unfortunately, using a more recent version of Fedora or even another Linux distribution altogether isn't an option because of outside constraints - which is annoying, since I know for a fact Ubuntu works fine on this setup. I have not been using Linux that long, so treat me like an idiot - I am one. Any help would be greatly appreciated, thanks!
    System spec:
      - Intel Atom Z530 CPU @ 1.6 GHz
      - Intel US15W chipset
      - 1 GB DDR2
      - 160 GB SATA harddisk (Samsung HM16HI)
      - 1000 Mbit/s Ethernet port
      - Phoenix BIOS
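
    Since the installer only found the disk with those extra parameters, the installed initrd very likely lacks the same generic ATA driver. A heavily hedged sketch of what can be tried from the rescue environment (the kernel version is a placeholder; "ls /boot" inside the chroot shows the real one, and whether ata_generic is really the module this chipset needs is itself an assumption):
        chroot /mnt/sysimage /bin/bash
        echo 'alias scsi_hostadapter ata_generic' >> /etc/modprobe.conf
        mkinitrd -f /boot/initrd-2.6.25-14.fc9.i686.img 2.6.25-14.fc9.i686
        # make the boot arguments permanent on the installed kernel line as well
        sed -i 's/\(kernel .*vmlinuz[^ ]*\)/\1 ata_generic.all_generic_ide=1 pci=nomsi/' /boot/grub/grub.conf
        exit
    This is the Fedora-9-era recipe for forcing a driver into the initrd; it is worth confirming with lsmod, while booted from the rescue CD, which storage module is actually in use.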


  • Recommendation for Document Management Solution

    - by BillN
    We've just been informed by our software vendor that the custom document management system they'd written is no longer in development and will not be supported in the future. So we are looking at new document management systems.
    Requirements:
      - Multiple input vectors; we receive documents via e-mail, fax, scanning, and from the originating application.
      - Ability to redact or obscure data. Customers may fax an order with CC data; we want to attach the image of the order form to the order record, but the CC data needs to be protected. Same with Tax IDs. Certain users should be able to see the redacted data, but access should be logged.
      - Version control on documents. We'd like Product Development and Marketing to be able to track various versions of documents like packaging designs, but ensure that users have the latest approved version.
      - AD integration; my users don't need another password.
      - Ability to integrate with other apps. Our current system offers function keys in the order-entry system that will spawn the viewer application and open the correct document.
      - Mass import facility; we have half a terabyte of existing documents in the old system that we would like to import.
      - Retention policy. I'd like a way to have the system comply with the corporate retention policy, so that when a document of a certain type reaches a certain age, it gets deleted, or at least marked for manual deletion.
    We are a Windows Server and HP-UX shop. Does anybody have any experience with document management systems that they would like to share? Thanks.


  • SQL Server installation error 0x84B40000

    - by Kurtevich
    I have a problem installing SQL Server 2008 R2. A long time ago I had it installed, and then uninstalled it. It was left in "Add/remove programs", but I didn't pay attention to that. I had 2005 installed. And now there is a need to install 2008. I removed 2005 and started installing 2008, but it says that space on C: is not enough. That's when I found out that "Add/remove programs" shows it occupying more than 4 gigabytes, though I had uninstalled it.
    So I click "Remove", it shows all those many screens and validations, shows that removal completed, but the size of the Program Files folder is still more than 4 GB. I removed from "Add/remove programs" everything that had "SQL Server" in its name, but that main "SQL Server 2008" item is still there, still 4 GB, and uninstalling it does nothing. Because the installation of SQL Server did not show existing instances, and I don't see any running services related to SQL Server (well, almost any; more details at the end), I thought that this folder contains just some leftover stuff and data, and deleted it manually. Then I agreed to removing the item in "Add/remove programs" and everything looks clean.
    Now every time I try to install SQL Server (even in the minimum configuration), I receive the following error:
        SQL Server Setup has encountered the following error: The specified credentials that were provided for the SQL Server service are not valid. To continue, provide a valid account and password for the SQL Server service. Error code 0x84B40000.
    What is this service mentioned here? This error looks like I'm trying to add features to an existing server and it can't log in. But the setup didn't ask me for any credentials, except one username that couldn't be changed.
    Here are the services shown that could be related, both disabled and pointing to non-existing executables:
      - SQL Active Directory Helper Service
      - SQL Full-text Filter Daemon Launcher (MSSQLSERVER)
    I understand that this must be because of my manual deletion, but is there a way to clean it up now?


  • How to set up ProxMox 1.9 on VPN?

    - by Gnudiff
    Disclaimer: I have only rudimentary knowledge of VPNs. I would love to learn about them properly; however, at the moment I really need to make stuff work on short notice.
    I am trying to set up a ProxMox virtualization platform in an existing network. The network currently consists of several servers which have the free edition of VMware. There is some sort of VPN defined in the switch. In order for the VMware management interface to be accessible, a checkbox needs to be ticked in the network settings for VPN and the VPN id entered.
    I didn't notice any such configuration option during the ProxMox installation, so my ProxMox VE on the same physical server, using the same manual IP settings (ip/nm/gw), is not accessible. As I understand it, I should touch ProxMox's underlying Debian config in /etc/network/interfaces, but I have no idea what I should aim for: do I specify the settings for eth0, or do I make a virtual interface? How do I make it accessible for both ProxMox VE and the underlying future VMs?
    I read through the ProxMox installation guide, but unfortunately it presumes a better understanding of VPNs than I have. A config template or similar would be appreciated. Thanks in advance.


  • Installing Firefox with extensions/addons manually (not really auto-install)?

    - by BrownChiLD
    I've been reading around with regards to creating Firefox installers, bundling it with addons, using scripts and CLI lines, and a whole bunch of stuff... but it seems that going through this route is just too complicated and time consuming.
    Since I don't mind a bit of manually copying files and stuff, I was planning to do the following on my test machine:
      1. Install Firefox on a machine AND configure it the way I want it.
      2. Install addons AND set the configurations for them.
      3. Set advanced configurations for Firefox (about:config).
    Then once I'm all set, I simply copy the contents of the Firefox profiles folder (for this particular test it's ....\AppData\Local\Mozilla\Firefox\Profiles\6m0mef0s.default).
    For deployment, all I have to do is:
      1. Install the same version (offline installer) of the Firefox I used.
      2. Overwrite the contents of the new profiles folder (randomly named by the Firefox installer as usual).
    This should set all my configs and addons, right? Or what other folders do I have to back up and copy manually into the new profiles folder? I don't think I need to tinker with any registries, right?
    Anyway, if this works, though it's a bit manual, it's a whole lot simpler and more straightforward than fiddling with installers and packages etc.
    PS: I do this a lot with other simple (and some complex) software that I use and it seems to work fine for years. I'm just not sure with Firefox and how it's structured.


  • How do I apply multiple subnets to a server with one NIC?

    - by Cosban
    I am trying to route multiple IPs through one physical NIC on my dedicated server for use with Proxmox KVM VMs. I have a dedicated server which is currently running Debian 4.4.5-8 with 3 available ip addresses for use, which will be displayed as 176.xxx.xxx.196 (main), 176.xxx.xxx.198 (on same subnet as main) and 5.xxx.xxx.166 (different subnet). I am currently trying to route the third IP address with the dedi for use with a vps that I have set up using proxmox v2.x but am having a really, really hard time doing so. Virtual interfaces binding the additional IP addresses work as expected, ruling out external routing problems. The provider has given the following information for the IP addresses on the main subnet: gateway: 176.xxx.xxx.193 netmask: 255.255.255.224 broadcast: 176.xxx.xxx.223 As well as the following information for the IP address on the second subnet: gateway: 5.xxx.xxx.161 netmask: 255.255.255.248 broadcast: 5.xxx.xxx.167 Everything I've tried with /etc/network/interfaces has either not worked, or has rendered the network completely useless. This is the current state of the file, which has the secondary IP address working on the same subnet as well as IPv6 working, but not the second subnet. # Nativen IPv6 Schnittstelle iface eth0 inet6 manual # Bridge IPv4 Schnittstelle (176.xxx.xxx.193/27) auto vmbr0 iface vmbr0 inet static address 176.xxx.xxx.196 netmask 255.255.255.224 gateway 176.xxx.xxx.193 broadcast 176.xxx.xxx.223 bridge_ports eth0 bridge_stp off bridge_fd 0 bridge_maxwait 0 post-up ip addr add 176.xxx.xxx.198/27 dev vmbr0 auto vmbr1 iface vmbr1 inet static address 5.xxx.xxx.166 netmask 255.255.255.248 gateway 5.xxx.xxx.161 broadcast 5.xxx.xxx.167 bridge_ports eth0 bridge_stp off bridge_fd 0 bridge_maxwait 0 post-up ip addr add 5.xxx.xxx.166/27 dev vmbr1 # Bridge IPv6 Schnittstelle (Reichweite: xxxx:xxxx:xxxx:xxxx:xxxx:xxxx::/64) iface vmbr0 inet6 static address xxxx:xxxx:xxxx:xxxx:xxxx:xxxx netmask 64 up ip -6 route add xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0 down ip -6 route del xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0 up ip -6 route add default via xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0 down ip -6 route del default via xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0


  • MySQL fails to start

    - by John Naegle
    I'm running a Ubuntu 12.04 LTS Virtual Machine. Last week, the VM stopped unexpectedly now mysql will not start on the VM. These two events may be related, they may not be. When I try to connect: $ mysql ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) Then: $ sudo service mysql start start: Job failed to start And $ dmesg [ 1838.218400] type=1400 audit(1374633238.253:50): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=18473 comm="apparmor_parser" [ 1838.358656] init: mysql main process (18477) terminated with status 1 [ 1838.358695] init: mysql main process ended, respawning [ 1839.269303] init: mysql post-start process (18478) terminated with status 1 And $ service mysql status mysql stop/waiting I think this means mysql is crashing when it starts: $ sudo mysqld start 130723 21:51:24 InnoDB: Assertion failure in thread 3064211200 in file fut0lst.ic line 83 InnoDB: Failing assertion: addr.page == FIL_NULL || addr.boffset >= FIL_PAGE_DATA InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html InnoDB: about forcing recovery. 02:51:24 UTC - mysqld got signal 6 ; Per the manual, I went to the data directory (/var/lib/mysql) and ran this: myisamchk --silent --force */*.MYI Then: $ sudo mysqld ... InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html InnoDB: for more information. ... Is my database corrupt? What can I do to recover? Re-install mysql? Something less drastic? I'm fine with losing the database, I just want a working system.


  • Does a 3ware "ECC-ERROR" matter on a JBOD when I have ZFS?

    - by Stefan Lasiewski
    I have a FreeBSD 8.x machine running ZFS and with a 3ware 9690SA controller. The 3ware controller shows an ECC-ERROR with one of the disks: //host> /c0 show VPort Status Unit Size Type Phy Encl-Slot Model ------------------------------------------------------------------------------ p0 OK u0 279.39 GB SAS 0 - SEAGATE ST3300657SS p1 OK u0 279.39 GB SAS 1 - SEAGATE ST3300657SS p2 OK u1 931.51 GB SAS 2 - SEAGATE ST31000640SS p3 ECC-ERROR u2 931.51 GB SAS 3 - SEAGATE ST31000640SS p4 OK u3 931.51 GB SAS 4 - SEAGATE ST31000640SS /c0 show events shows no ECC errors in it's recent history. ZFS does not currently detect any errors. zpool status says No known data errors My question: Is this ECC-ERROR something that I need to be concerned about? According to the 3ware CLI 9.5.2 Manual, an ECC-ERROR means that the 3ware controller caught a read-error for one or more sectors on this drive. This sometimes occurs when a RAID array is recovering from a failed disk. I believe that ECC-ERRORS can also be detected when the 3ware Controller verifies each disk. None of the drives have failed and thus there was no drive rebuild, so I assume that 3ware discovered a bad sector when it ran it's weekly auto-verify scan of the disks. Is this a safe assumption? According to our logs, ZFS has not detected any bad sectors on this drive. ZFS can work around read errors -- if ZFS detects a bad sector on the drive, it will simply mark that sector as bad and never use it again. From the ZFS perspective one bad sector isn't a big deal, although it might indicate that the drive is starting to go bad.


  • Debian Linux bridging router intermittently dropping packets

    - by nomen
    My old Asus router died a few weeks ago, so I thought I'd set up my Debian box to deal with routing my home network. I have a few complications, but I adapted my configuration from a previously working configuration, and I don't see why I am having intermittent problems. But I am having them! Every so often, my SSH connections to the router (and to the Xen virtual machines hosted by the router) just drop. I am unable to use the router's dns server. I can't ping the router. Etc. (I can provide more details, but I'm not sure what will be helpful) /etc/network/interfaces: # The loopback network interface auto lo iface lo inet loopback # Gigabit ethernet, internal network auto eth0 allow-hotplug eth0 iface eth0 inet manual # USB ethernet, internet auto eth1 allow-hotplug eth1 iface eth1 inet dhcp # Xen Bridge auto xlan0 iface xlan0 inet static bridge_ports eth0 address 10.47.94.1 netmask 255.255.255.0 As I understand it, this is sufficient to create the network interfaces, and even do some switching between Xen hosts and my eth0 interface. I installed and configured Shorewall to manage routing: /etc/shorewall/zones fw firewall net ipv4 lan ipv4 /etc/shorewall/interfaces net eth1 detect dhcp,tcpflags,nosmurfs,routefilter,logmartians lan xlan0 detect dhcp,tcpflags,nosmurfs,routefilter,logmartians,routeback,bridge /etc/shorewall/policy net all DROP info fw net ACCEPT info all all REJECT info /etc/shorewall/rules DNS(ACCEPT) fw net DNS(ACCEPT) lan fw ... and so on, these all work, when the router is accepting traffic at all. /etc/shorewall/masq eth1 10.47.94.0/24 Can anybody help?


  • Which linux distributions offer seamless support for UEFI and an LVM root out of the box?

    - by Jannik Jochem
    My new ultrabook (an Asus UX32VD) requires UEFI in order to boot from the internal harddisk. I use an LVM partition which contains my root fs and dual-boot Windows 8. I somehow managed to get this working on Sabayon Linux, however the overall process was pretty painful, and system upgrades keep breaking my configuration because everything depends on a hand-configured kernel and a hand-crafted GRUB2 configuration. This causes a lot of hassle and distractions for me, so I am considering to switch to a different distribution. However, I cannot find any concrete resources that precisely document the state of UEFI support in the popular distributions. As an example, the length of the Ubuntu wiki page on UEFI suggests that installing on UEFI systems is a non-trivial process, and this AskUbuntu thread on encrypted LVM on UEFI systems suggests that LVM might also be a problem. I know that this question seems somewhat open-ended, so I'll formulate concrete questions: Are there any Linux distributions with an installer that supports installing to an LVM root in a UEFI boot setting where Windows 8 is dual-booted? Which distributions support UEFI without having to jump through hoops in order to bootstrap into a UEFI-booted system or requiring manual configuration of the boot manager?

    Read the article
