Search Results

Search found 19499 results on 780 pages for 'transaction log'.

  • How to route traffic from a VM (Parallels) over an OpenVPN connection on the host (OS X)

    - by withakay
    Scenario: I have a Mac running Lion that is connected to an OpenVPN server. I have a Windows XP VM (running on Parallels, but I don't think this is important). I want to route traffic from the XP VM via the host Mac's OpenVPN connection so that I can log on to a domain.
    The remote network is 172.16.0.0/23 (255.255.254.0). OpenVPN is configured to supply addresses in the 10.100.101.0/24 range and sets up the routing to 172.16.0.0 using the gateway 10.100.101.1/32. My local network is 192.16.1.0/24.
    NOTE: I do not want to install OpenVPN inside the XP virtual machine, as I would have to use a passwordless key for OpenVPN to connect before logon. Anyone got any ideas?
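
    A minimal sketch of one approach, not from the question itself: turn the Mac into a NAT gateway for the VM. It assumes the tunnel interface is tun0 and that the VM's default gateway points at the Mac (both assumptions), and uses the Lion-era tools (natd/ipfw were removed in later OS X releases).

        sudo sysctl -w net.inet.ip.forwarding=1    # let the Mac forward packets
        sudo natd -interface tun0                  # NAT outbound traffic onto the tunnel
        sudo ipfw add divert natd ip from any to any via tun0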

  • frequent abnormal shutdowns/system crashes

    - by user110353
    It's been almost 5 days since I installed Ubuntu, and this is about the sixth time my laptop has crashed entirely and shut down abnormally. It heats up, and I have to wait 20-odd minutes before I can turn it on again. A message appears saying that my PC crashed due to overheating, which may damage my hard disk. The crashes happen when I open some application that freezes my PC, not even giving me enough time to go to System Monitor and end the process. Sometimes the culprit application is Everpad, sometimes TeamViewer, sometimes something else. This is very serious. The last crash occurred at 09:14:40; kindly click here to view the system log. I want to stick with Ubuntu and this laptop, as I had serious issues with Windows and nearly went out to dump my laptop and purchase a more powerful system. Below are my hardware/OS specs. Kindly advise on how to resolve this issue.
    Ubuntu 12.10, kernel 3.5.0-18-generic, GNOME 3.6.0
    Memory: 2.0 GB
    Processor: Genuine Intel CPU [email protected] x 2
    Available disk space: 63.7 GB
    Thanks in advance
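
    A hedged first step, not mentioned in the question: watch the CPU temperature directly while the machine is under load. On Ubuntu the lm-sensors package does this.

        sudo apt-get install lm-sensors
        sudo sensors-detect     # probe for sensor chips; the defaults are usually safe
        watch -n 5 sensors      # live temperature readout every 5 seconds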

  • Apollo Linux boot into single-user mode

    - by Spirit
    We have a device that runs Apollo Linux, and I have to boot it into single-user mode so that I can run fsck to check the hard drive for errors. I've been googling this for the past hour, and so far I haven't found any specific method for doing that on this version of Linux. The device was formerly known as an NFX Cinxi One, now re-branded as BlackStratus LOG Storm. If any of you have experience with this one, you may know it is a device used to collect logs from other servers. I know the above info isn't much, but it is everything I can provide up until now, since tomorrow I have to follow up closely on this problem.
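
    On most Linux systems single-user mode is reached from the boot loader; whether this appliance's build honours that is an assumption. A sketch, with a hypothetical device name:

        # At the GRUB menu, press 'e' on the boot entry and append to the kernel line:
        single
        # In the resulting root shell, check the filesystem while it is unmounted or read-only:
        fsck -f /dev/sda1    # hypothetical device; confirm with 'fdisk -l' first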

  • Password not working for sudo ("Authentication failure")

    - by Souta
    Before I mention anything further, DO NOT give me a response saying that the terminal won't show password input. I'm AWARE of that. I'm typing my user password in (not a Caps Lock issue), and for some reason it still says 'Authentication failure'. Is there some other password (one I'm not aware of) I'm supposed to be using other than my user password? I've had this Ubuntu before, on another hard drive, and I didn't have this problem. (And it was the same Ubuntu, Ubuntu 12.04 LTS.)
    ai@AiNekoYokai:~$ groups
    ai adm cdrom sudo dip plugdev lpadmin sambashare
    ai@AiNekoYokai:~$ lsb_release -rd
    Description: Ubuntu 12.04 LTS
    Release: 12.04
    ai@AiNekoYokai:~$ pkexec cat /etc/sudoers
    #
    # This file MUST be edited with the 'visudo' command as root.
    #
    # Please consider adding local content in /etc/sudoers.d/ instead of
    # directly modifying this file.
    #
    # See the man page for details on how to write a sudoers file.
    #
    Defaults        env_reset
    Defaults        secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    # Host alias specification
    # User alias specification
    # Cmnd alias specification
    # User privilege specification
    root    ALL=(ALL:ALL) ALL
    # Members of the admin group may gain root privileges
    %admin  ALL=(ALL) ALL
    # Allow members of group sudo to execute any command
    %sudo   ALL=(ALL:ALL) ALL
    # See sudoers(5) for more information on "#include" directives:
    #includedir /etc/sudoers.d
    I can log in with my password, but it's not accepted as valid for authentication; that is pretty much my issue. (Although I haven't gone into recovery mode.) I've run:
    ai@AiNekoYokai:~$ ls /etc/sudoers.d
    README
    And also reinstalled sudo with:
    pkexec apt-get update
    pkexec apt-get --purge --reinstall install sudo
    pkexec usermod -a -G admin $USER    (says group admin does not exist)
    su $USER worked for me; however, my password still does not do much (in the sense of not working for other things). I changed my password with pkexec passwd $USER and was able to change it no problem. gksudo xclock was something I was able to get into, no problem (the clock showed):
    ai@AiNekoYokai:~$ gksudo xclock
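
    One hedged extra check, not in the transcript above: ask whether the account's password field is actually in a usable state.

        ai@AiNekoYokai:~$ pkexec passwd -S ai
        # Output like "ai P 01/01/2012 0 99999 7 -1" means a usable password is set;
        # "L" instead of "P" means the account is locked, "NP" means no password.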

  • Why are some web clients requesting a page named "cache"?

    - by Toto
    We see errors like this in the Apache error log:
    [Thu May 17 14:32:35 2012] [error] [client 192.168.1.1] File does not exist: /home/www-data/mywebsite.com/r/cache, referer: http://www.mywebsite.com/r/1010
    It is strange because:
    There is no reference in the code or URLs to a folder or file named "cache".
    The folder/file "cache" does not exist.
    The client randomly tries to access a "cache" folder everywhere on the website, always following this pattern: for a request /level1/.../levelwhatever/filename, the referer is that URL and the request is /level1/.../levelwhatever/cache.
    We run LAMP (Debian stable: PHP 5.3.3-7+squeeze9; we also use APC 3.1.3p1), plus Google Analytics and AdSense. We do not know how to reproduce the problem.
    Note: I replaced the user's IP in the log excerpt for privacy.
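
    A hedged way to identify the probing client (the log name and path are illustrative): tag requests ending in "cache" and record their User-Agent.

        SetEnvIf Request_URI "/cache$" cache_probe
        CustomLog /var/log/apache2/cache-probe.log "%h \"%{User-Agent}i\" \"%r\" \"%{Referer}i\"" env=cache_probe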

  • DNS propagation

    - by Paddington
    I have 1 primary DNS server (ns1.mydomain.com) running on Fedora and 2 secondary ones (ns2 and ns3). DNS changes made on my web servers go first to the primary name server and then propagate to the secondary servers. After making a DNS change for a domain on the web server, I can't see the new DNS information on ns1 when I run:
    dig @ns1 A blahblah.com
    I then looked at the master records on the name server (which runs named), in the directory /var/named/run-root/var/named/masters, and I see the A record has been updated appropriately. Tailing the logs (/var/log/messages) shows no errors. What could be the issue?
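
    A hedged checklist for BIND (zone name taken from the question's example): named only picks up an edited zone file when told to reload it, and it only counts as changed once the SOA serial has been bumped.

        # after editing the zone file, increment the SOA serial, then:
        rndc reload blahblah.com           # or plain 'rndc reload' for all zones
        dig @ns1 blahblah.com SOA +short   # the serial should now show the new value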

  • Cannot install Wireless LAN Service on Windows 2012 RTM offline

    - by user1763118
    I'm having trouble installing the Wireless LAN Service offline on a freshly installed Windows Server 2012 RTM. I tried "Install-WindowsFeature Wireless-Networking" in non-GUI mode, and Server Manager in GUI mode, to enable the Wireless LAN Service, but both show a "failure configuring windows updates" message after the installation restarts the system. I checked the event log, and I think messages like "The WLAN AutoConfig service depends on the following service: nativewifip. This service might not be installed" are the source of the issue. Google shows it is a service called "Native WiFi Filter", but I cannot find anywhere to install that service. I don't have an Ethernet adapter for that computer, so I have to install everything offline before the Wi-Fi works.
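
    A hedged sketch of one offline route: point the feature installer at the installation media instead of Windows Update. The drive letter and image index are assumptions.

        # PowerShell on the server, with the installation ISO/DVD mounted as D:
        Install-WindowsFeature Wireless-Networking -Source wim:D:\sources\install.wim:4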

  • I keep getting an "OpenSSL version header not found" error when compiling OpenSSH on Debian Squeeze

    - by Romoku
    I built OpenSSL 1.0.0d with:
    ./config shared no-threads zlib
    It installed fine to the default /usr/local/ssl. I then downloaded OpenSSH 5.8p2 and ran ./configure, but it keeps giving me an "OpenSSL version header not found" error, even when I set --with-ssl-dir=. I've tried it with the arguments /usr/local/ssl/include, /usr/local/ssl/include/openssl, /usr/include, and /usr/local/ssl/lib. I looked in config.log and found:
    error: openssl/opensslv.h: no such file or directory
    which makes little sense, since I pointed OpenSSH to where it is stored. My /etc/ld.so.conf contains:
    include /usr/local/ssl/lib
    I'm at a loss at this point.
    Answer (maybe): Because I am an idiot. "include /usr/local/ssl/lib" is incorrect; "/usr/local/ssl/lib" on its own is correct, and it needs to come before the first include.
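
    A sketch of the corrected setup, using the paths from the question:

        # /etc/ld.so.conf -- the library directory is a path entry, not an include:
        /usr/local/ssl/lib
        include /etc/ld.so.conf.d/*.conf
        # then rebuild the linker cache and re-run configure:
        sudo ldconfig
        ./configure --with-ssl-dir=/usr/local/ssl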

  • Is a Hostname A Entry necessary?

    - by Citizen
    When I log into WHM, I get this message: "The server was unable to lookup an A entry for its hostname (server226.taxi.com). This is generally because the entry was never added. However, this could also be the result of your nameserver(s) being down. If you would like to attempt to automatically add the entry, click here." If I click "here" it does nothing, and I still get the message. I'm not hosting websites for other people, just internal projects on our server. Does not having a hostname A record affect SEO or anything like that, or is it just a convenience when setting up nameservers or something like that?
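
    For reference, a sketch of the record WHM is asking for, in a BIND-style zone file for taxi.com (the IP is a placeholder):

        server226   IN  A   203.0.113.10   ; the server's real public IP goes here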

  • Find hosted directories/ports in Jetty/Apache

    - by Paul Creasey
    Hi, I first asked this on SO, but I didn't get a response and I think it is probably more appropriate here. Say I have a directory being hosted by Jetty or Apache (I'd like an answer for both). I know the URL, including the port, and I can log into the server. How can I find the directory that is being served on a given port? I'd also like to go the other way: I have a folder on the server which I know is being hosted, but I don't know the port, so I can't find it in a web browser. How can I find a list of directories that are being hosted? This has been bugging me for ages but I've never bothered to ask before! Thanks.
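
    A hedged sketch covering both directions (the paths assume a Debian-style Apache layout, and the port is an example):

        lsof -i :8080                       # port -> the process listening on it
        apachectl -S                        # Apache: vhosts, ports, and their config files
        grep -Ri DocumentRoot /etc/apache2  # Apache: directories being served
        # Jetty: deployed contexts normally live under $JETTY_HOME/webapps
        # (plus $JETTY_HOME/contexts for XML context descriptors)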

  • Visual Studio SP1 Fatal Installation Error

    - by user39593
    I have Visual Studio 2008 Professional installed and want to install SP1. When I try to install SP1, the following appears in the log:
    MSI (s) (20:E4) [15:40:00:165]: Product: Microsoft Visual Studio 2008 Professional Edition - ENU - Update 'KB945140' could not be installed. Error code 1603. Additional information is available in the log file C:\Users\bjbell\AppData\Local\Temp\Microsoft Visual Studio 2008 SP1_20100609_151708728-Microsoft Visual Studio 2008 Professional Edition - ENU-MSP0.txt.
    My machine is running Windows 7 Enterprise 64-bit.

  • How to secure svn+ssh checkout users?

    - by vvanscherpenseel
    All our SVN repositories are hosted on a dedicated machine to which all the developers have access. Every now and then we need to check out a repository on a machine we don't own or operate ourselves. Currently we all use our own system (SSH) accounts for this, but instead I would like to use a generic 'checkoutsvn' user. This user would only be used for checking out from a repository and should not be allowed to log in to the system (no shell access). I tried setting the account's default shell to /sbin/nologin, but then SVN fails, as svn+ssh apparently requires shell access. How do you do this? Is there a good solution for it?
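
    One hedged approach: leave the account with a normal shell, but have the SSH key force the svnserve tunnel command, so that key can only ever run svnserve. The repository root and key material are placeholders.

        # ~checkoutsvn/.ssh/authorized_keys
        command="/usr/bin/svnserve -t -r /var/svn --tunnel-user=checkoutsvn",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... checkout-key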

  • /etc/environment and cron

    - by clorz
    Hi, I've got two machines, Fedora and CentOS, and a cron job:
    0-59 * * * * env > /home/me/env.log
    On CentOS I can see that /etc/environment affects the output, while on Fedora it does not. I want Fedora to behave like CentOS. What do I need to change to make that happen?
    /etc/pam.d/crond on Fedora:
    auth       sufficient pam_rootok.so
    auth       required   pam_env.so
    auth       include    system-auth
    account    required   pam_access.so
    account    include    system-auth
    session    required   pam_loginuid.so
    session    include    system-auth
    /etc/pam.d/crond on CentOS:
    auth       sufficient pam_env.so
    auth       required   pam_rootok.so
    auth       include    system-auth
    account    required   pam_access.so
    account    include    system-auth
    session    required   pam_loginuid.so
    session    include    system-auth
    /etc/security/pam_env.conf is the same on both systems and consists of commented-out lines. Even if I make the /etc/pam.d/cron.d files the same, the problem still persists.
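
    One detail visible in the listings above (an observation, not a confirmed fix): on CentOS pam_env.so is first in the auth stack, while on Fedora the sufficient pam_rootok.so precedes it, so the stack can stop before pam_env.so ever reads /etc/environment. A sketch of the Fedora file reordered to match; note the filename is crond, not cron.d.

        # /etc/pam.d/crond (Fedora), with pam_env.so moved ahead of pam_rootok.so
        auth       sufficient pam_env.so
        auth       required   pam_rootok.so
        auth       include    system-auth
        account    required   pam_access.so
        account    include    system-auth
        session    required   pam_loginuid.so
        session    include    system-auth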

  • Revert a VM client from within the VM client - is it possible?

    - by Saariko
    I am creating a test VM client for our QA department. Once it's installed, my options are: use a snapshot, or use a non-persistent disk for the hard disk. With either option, I can give the QA_DEP role the ability to log in to vCenter and revert or power off the client, so they can return to a clean machine. My question: is it possible to have that ability without using vCenter? What if I want the client to return to its initial/clean state on every reboot? The clients are not going to be heavily loaded.
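
    For the snapshot route, a hedged sketch of reverting on the ESXi host itself rather than through vCenter; it assumes shell/SSH access to the host, and the IDs vary per host. For the reboot case, a non-persistent disk already discards changes at power-off with no vCenter involvement.

        vim-cmd vmsvc/getallvms                               # find the VM's numeric ID
        vim-cmd vmsvc/snapshot.get <vmid>                     # list its snapshot IDs
        vim-cmd vmsvc/snapshot.revert <vmid> <snapshotid> 0   # revert to that snapshot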

  • After deleting a local machine's offline file cache, the same user's "My Documents" no longer redirects to the network location.

    - by stead1984
    One of my apprentices was tasked with clearing out unused local profiles and clearing the offline file cache. After he cleared the offline file cache and rebooted the machine, he would log in as himself and no longer have his "My Documents" redirected to the set network location. Moreover, this then seemed to affect ANY other networked machine he logged into, except his own laptop. All our standard workstations run Windows XP Service Pack 3; the apprentice's laptop runs Windows 7 Professional. I can understand how clearing the offline file cache after deleting old local profiles could cause this issue on one machine, but I draw a complete blank as to why it would affect all networked machines. It's a strange one, so this question may be a little hard to understand; please ask if anything needs clarifying.
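
    A hedged place to look, not from the question: folder redirection lands in the per-user registry, which a roaming profile would carry to every machine the user logs into. Run as the affected user on an affected machine:

        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" /v Personal
        :: "Personal" is the My Documents path; if the profile now stores a local path
        :: here, it would follow the user to each machine that loads the profile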

  • Upgraded from 11.04 to 11.10, there was an error, now the system won't initialize

    - by Eric
    This morning the system gave me a message that my version (11.04) was no longer supported, and I took the 'upgrade' option (to 11.10). While installing the various components I got a message to the effect that there was an error and the system may have become unusable. Among the messages:
    E: Sub-process /usr/bin/dpkg received a segmentation fault ... returned an error code (1)
    I was given an option to do several things, one of which seemed to mean it would attempt to roll back to the previous version (the default), which I took. After the process ran, it said the upgrade had finished, but with errors. I attempted to open a console so I could run ubuntu-bug update-manager /var/log/dist-upgrade, per the instructions in the error message, but the console failed during initialization. I restarted the machine, and the boot has stopped with the following on screen, each step followed by [ OK ]:
    * Starting bluetooth
    * Stopping save kernel messages
    * Starting CUPS printing spooler/server
    * PulseAudio configured per-user sessions
    saned disabled: edit /etc/default/saned
    * Starting up Cisco VPN daemon
    * Starting anac(h)ronistic cron
    * Stopping anac(h)ronistic cron
    What are my options? Any help appreciated!
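
    A hedged first-aid sketch for a half-finished upgrade, run from a recovery console or TTY (it assumes the disk and network are healthy):

        sudo dpkg --configure -a      # finish any half-configured packages
        sudo apt-get -f install       # repair broken dependencies
        sudo apt-get update && sudo apt-get dist-upgrade   # resume the upgrade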

  • Advice on off-site backup of Hyper-V Failover Cluster

    - by Paul McCowat
    We are currently setting up a Server 2008 R2 box which will be off-site, connected over a leased line with VPN. At the main site are 2 x Hyper-V hosts in a failover cluster with a PowerVault MD3000i iSCSI SAN. We are using BackupAssist for local backups; each host backs up itself and its guests nightly, creating a 500 GB backup which is copied to a 2 TB rotated NAS drive. Files and SQL DBs are also backed up / log-shipped etc.
    We are looking for the best way to back up the Hyper-V VMs and copy them off-site so that the OSes are at most a month old and the data a day old. The full backups are too large to transfer between backup windows, so the options discussed so far are:
    Take rotating individual backups of the VMs each day and copy them over (day 1 the SQL VM, day 2 the Exchange VM, etc.); this would require more storage.
    Look into Hyper-V snapshots; however, we don't believe these are supported in clustering.
    Third-party replication tools.

  • Simple, externally hosted server status page for users

    - by Chris
    I am looking for any kind of script (ASP, PHP, or any other web language) that gives me the ability to log outages and show the current state of the network for our organisation. This would be similar to any major telco's "Network Status" page; I just want to tell the users out there whether the systems are up and running, with a history of recent outages. This would be for our remote users, so they could go to a web page (hosted externally from our main site) and see that we are currently having problems with our network. What are other people out there using?

  • Oracle EZConnect in Mediawiki

    - by raindog308
    Mediawiki supports Oracle, and I'm trying to configure it in the installer. The installer says you can use EZConnect, something like:
    user/pass@//server.example.com/dbname
    or, since the installer has fields elsewhere for user/pass:
    server.example.com/dbname
    The installer includes a link to the EZConnect docs: http://docs.oracle.com/cd/E11882_01/network.112/e10836/naming.htm. All the examples in that doc include a forward slash, but every combination I've tried results in an error like this:
    Invalid database TNS "sever.example.com/service_name". Use only ASCII letters (a-z, A-Z), numbers (0-9), underscores (_) and dots (.).
    I can't find any examples of EZConnect that don't include a forward slash. That error comes from Mediawiki, not Oracle: I'm tailing the listener log and no connection is made; Mediawiki returns the error without trying to connect. I'm using PHP OCI8 with the Oracle Instant Client. I don't have a tnsnames.ora set up for this client, which is kind of the point of EZConnect. I did write a test PHP script that connects via oci_connect just fine. Has anyone configured Mediawiki to use Oracle with EZConnect? If so, what did you use in the installer?
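
    Since the installer's validation only accepts letters, digits, underscores and dots, one hedged workaround (an assumption, not a documented Mediawiki fix) is to fall back to a TNS alias, which satisfies that rule, and define the connection in tnsnames.ora:

        # $TNS_ADMIN/tnsnames.ora (the alias name is illustrative)
        WIKIDB =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = server.example.com)(PORT = 1521))
            (CONNECT_DATA = (SERVICE_NAME = dbname))
          )
        # then enter WIKIDB as the database TNS in the installer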

  • IIS 7.5 with PHP 5.3, displaying errors on page

    - by dreamlax
    I'm running Windows Server 2008 R2 with IIS 7.5 and PHP 5.3 (configured via FastCGI). In my php.ini I have:
    log_errors = On
    display_errors = Off
    error_log = syslog    (also tried an actual file with appropriate permissions)
    Each time a page contains an error, it is never logged anywhere, but it is displayed on the page (unless I turn log_errors off). I'm guessing that the stderr from php-cgi.exe is being put on the page instead of being logged where it is supposed to go. Is there a setting somewhere that lets me log these errors properly?
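
    If the stderr guess is right, the php.ini directive fastcgi.logging controls whether PHP writes errors to the FastCGI channel at all; a hedged sketch (the log path is a placeholder):

        fastcgi.logging = 0    ; keep errors off the FastCGI stderr stream
        log_errors = On
        error_log = "C:\inetpub\temp\php-errors.log"   ; a plain file is more predictable than syslog on Windows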

  • Intermittent HTTP 401 errors

    - by forthrin
    I am using an intranet solution which requires basic HTTP login. However, there is an intermittent error that requires me to log in again, and then the server says "Forbidden" whether I give the correct login information or not. To add insult to injury, Safari (and Chrome) seems to show the login dialog for every included resource in the HTML, and it's impossible to cancel this modal dialog sequence, so the whole browser is blocked until I've pressed Esc some 30-odd times. After an hour, I may regain access without having really done anything. My questions: What could cause intermittent 401 errors? Why do the browsers show the login dialog 30 times per page load (presumably once for every included resource in the HTML from the same domain)?
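
    A hedged way to take the browser out of the equation and watch the authentication exchange directly (the URL and credentials are placeholders):

        curl -v --user myname:mypass http://intranet.example.com/page -o /dev/null
        # -v prints the headers: check whether the 401/403 responses still carry a
        # WWW-Authenticate challenge and whether the Authorization header was sent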

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations; other filesystems have similar provisions to protect their metadata. You can easily prove that the rootblock pointer in the ZFS uberblock points to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum. A number of devices offer block-level dedup, either as an option or as part of their inner workings. When you store three identical blocks on such a device and it dedups internally, it may reduce your redundant metadata to a single block on the non-volatile storage. When that block is corrupted, you have essentially three corrupted copies: three hit with one bullet.

    This is indeed an interesting problem: a device doing deduplication doesn't know whether a block is important metadata or just a data block. This is the reason I like deduplication the way it's done in ZFS: it's an integrated part of the filesystem, so important parts don't get deduplicated away. A disk accessed through a block-level interface knows nothing about the importance of a block; a metadata block looks no different to its inner mechanism than a normal data block, so there is no way to tell that it is important and that its redundancies must not fall prey to some clever deduplication mechanism. Robin discusses this in regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader: it is relevant whenever you use a device with block-level deduplication. On most implementations you have to activate it explicitly, whereas certain devices do it by default or by design without your knowledge. Given that storage administration and server administration are often separate groups with different business objectives, I would ask your storage guys whether they have quietly activated dedup on their boxes in order to show a better dedup ratio.

    The problem is even more interesting with ZFS. You may use ditto blocks to store multiple copies of important data in the pool to increase redundancy, even when your pool consists of only one disk or a striped set of disks. But when your device dedups internally, it may remove that redundancy before it hits the non-volatile storage, and you've won nothing; you've just spent your disk quota on the LUNs in the SAN and made your disk admin happy because of the good dedup ratio. You only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one. Yet another reason to spend some extra thought when putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.

    However, I have one problem with the article's specific mention of ZFS: you can only be hit by this problem when you use the deduplicating device for the pool itself, and in the specifically mentioned case of SSDs this isn't the usual use case. Most SSD deployments with ZFS are hybrid storage pools, where rotating rust is used for the pool and SSDs serve as L2ARC/sZIL. And there it simply doesn't matter. When you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt: you have to fall back to the last known good transaction group on the device. On the other side, when a block in L2ARC is corrupt, you simply read it from the pool, which in hybrid-storage-pool implementations is the already mentioned rust. In conjunction with ZFS this is more interesting when using a storage array that can dedup and whose LUNs you use for your pool; but as mentioned before, on those devices it's a user-made decision, so it's less probable that you are deduplicating your redundancies unknowingly. Filesystems lacking a capability similar to hybrid storage pools are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device. In the end, though, Robin is correct: it's yet another reason why protecting your data by creating redundancies across several disks (by mirror or parity RAID) is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.
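
    As a concrete illustration of the ditto blocks mentioned above, ZFS exposes them through the copies property (the dataset name is illustrative):

        zfs set copies=2 tank/important   # store two copies of every block, on top of any RAID redundancy;
        zfs get copies tank/important     # on a multi-disk pool the copies land on different disks where possible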

  • How do I stop ssh-agent from forgetting my password after I login to the screen session from SSH?

    - by Shwouchk
    I have a screen session open in an lxterminal window. The first time I SSH somewhere, an ssh-agent window opens and asks me for my private-key passphrase, and after that ssh goes right on. However, if I log in to this machine from outside and attach to the screen session, ssh-agent now asks me for my passphrase in the terminal every time I connect. Is there a way to avoid this and let it keep using the X agent, or at least have the non-X agent remember the passphrase?
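
    What usually breaks here is that $SSH_AUTH_SOCK inside the old screen session points at a stale socket. A common hedged workaround (the file names are conventions, not requirements): keep a stable symlink to whatever the current agent socket is.

        # ~/.bashrc (or a login script) on the machine running screen:
        if [ -S "$SSH_AUTH_SOCK" ] && [ "$SSH_AUTH_SOCK" != "$HOME/.ssh/agent_sock" ]; then
            ln -sf "$SSH_AUTH_SOCK" "$HOME/.ssh/agent_sock"
        fi
        export SSH_AUTH_SOCK="$HOME/.ssh/agent_sock"
        # windows inside screen inherit the symlinked path, so re-attaching from a
        # new SSH login keeps the (forwarded or X) agent reachable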

  • Never-ending issues with GRUB (Ubuntu 14.04 on ASUS with Win8 dual boot)

    - by Mariana
    This is the most frustrating issue I have ever run into using Ubuntu and Windows on the same machine. I have an ASUS K46CB, 6 GB RAM and preinstalled Windows 8.1 64-bit. I have successfully installed Ubuntu 14.04 LTS, also 64-bit. To do so, I followed this tutorial wherever possible. I only failed on the disable-secure-boot part: there is no 'Secure Boot', or even a UEFI mention, in my BIOS! Screenshots from other BIOSes of the same model show the option under Boot, but in mine there is absolutely none. Because of this, I cannot boot into Ubuntu; the computer loads straight into Windows. I tried running Boot-Repair, but got an error (I can show the log, but it's pretty long). Does anyone know how to fix this issue?
    UPDATE: I reinstalled Ubuntu. Same problem, it goes straight to Windows. Boot-Repair informs me that I am using Windows in Legacy mode. It executed with no errors this time, but after restarting GRUB was still missing. I still can't turn off Secure Boot.
    UPDATE: I tried using Boot-Repair to install GRUB on a 1 MB boot-grub partition. It still boots straight to Windows. I feel like punching something.
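
    One hedged workaround used on machines that insist on loading the Windows boot manager; it assumes the installs are actually UEFI (Boot-Repair's "Legacy mode" note may contradict that). From an elevated Windows command prompt, point the firmware's Windows entry at GRUB:

        bcdedit /set "{bootmgr}" path \EFI\ubuntu\grubx64.efi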

  • Ubuntu 12.04 menu bar, Nautilus, terminal, and GTK themes not working after installation of GIMP 2.8

    - by Chris
    I installed GIMP 2.8 from this PPA: ppa:otto-kesselgulasch/gimp. After that, my system began having problems. This is my thought process in trying to fix what's happened, in the order it happened:
    I noticed the menu bar at the top changed from an opaque black to perfectly clear, and the titles of applications and the hidden buttons reacted slowly. No big deal; I restarted to see if that fixed it.
    It didn't. In fact, when the logon screen came up, the password field was grey and boxy, like a default Windows 98 theme (that's the best I can describe it), as were all the option buttons for GTK programs.
    I opened a terminal to try to reinstall GTK, but the terminal was just a black screen with no ability to input commands.
    I went to a TTY and reinstalled gtk3 and gtk2 (I have both on my system; I don't think they're in conflict, they hadn't been beforehand). I restarted. Nothing doing.
    On login, Nautilus isn't placing icons on my desktop. I click the launcher; it flashes, but no window opens. I try to open it with Alt+F2: nothing.
    I purged ubuntu-desktop, restarted, and reinstalled ubuntu-desktop. Nothing.
    I have no clue what to do at this point, so I'm asking for any help diagnosing the problem and fixing it.
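
    A hedged first step when a PPA breaks system packages: roll its packages back to the official Ubuntu archive versions with ppa-purge.

        sudo apt-get install ppa-purge
        sudo ppa-purge ppa:otto-kesselgulasch/gimp   # downgrades the PPA's packages to the archive versions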
