Search Results

Search found 19212 results on 769 pages for 'side projects'.


  • I've got a very brazen POP3 attack. How do I protect the server?

    - by Ken Tang
    Today I had a brazen attack on my POP3 (Dovecot) server, and the mail log has filled up (over 200 MB) with this kind of information:

        Nov 11 09:28:14 lax dovecot: pop3-login: Aborted login (auth failed, 1 attempts): user=<shawn>, method=PLAIN, rip=200.233.152.111, lip=myip
        Nov 11 09:28:14 lax dovecot: pop3-login: Aborted login (auth failed, 1 attempts): user=<shop>, method=PLAIN, rip=200.233.152.111, lip=myip
        Nov 11 09:28:14 lax dovecot: pop3-login: Aborted login (auth failed, 1 attempts): user=<sitetest>, method=PLAIN, rip=200.233.152.111, lip=myip
        Nov 11 09:28:14 lax dovecot: pop3-login: Aborted login (auth failed, 1 attempts): user=<solar>, method=PLAIN, rip=200.233.152.111, lip=myip
        Nov 11 09:28:15 lax dovecot: pop3-login: Aborted login (auth failed, 1 attempts): user=<services>, method=PLAIN, rip=200.233.152.111, lip=myip

    I blocked the attacker's IP with:

        iptables -A INPUT -s 200.233.152.111 -j DROP

    but the attack can resume at any time from other IPs. My question is: is there any way to stop anyone but me from connecting to my POP3 server? My IP is dynamic (assigned by my ISP), so I don't know how to make the POP3 server recognize that it is really me connecting. Thank you in advance!
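    One approach the post itself does not mention, sketched here on the assumption that fail2ban is available for the distribution in use: have fail2ban watch the Dovecot log and ban any IP after a handful of failed logins, so new attacking addresses get dropped automatically instead of by hand.

        # install and enable fail2ban (Debian/Ubuntu shown; use the distro's package manager)
        apt-get install fail2ban

        # contents of /etc/fail2ban/jail.local -- minimal sketch; the filter
        # name and log path may need adjusting for this Dovecot/syslog setup
        [dovecot]
        enabled  = true
        port     = pop3,pop3s
        filter   = dovecot
        logpath  = /var/log/mail.log
        maxretry = 5
        bantime  = 3600

        # pick up the new jail
        service fail2ban restart

    This does not fully answer the "only me" requirement, but with a dynamic client IP an allow-list is hard to maintain; automatic banning plus strong passwords is the usual compromise.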


  • GPO startup script not copying files

    - by marcwenger
    I created a GPO startup script to execute for computers in a specific AD container. The script takes a file from the AD netlogon share and places it in a directory on the computer. A user with the right permissions (i.e. myself) can execute the script just fine and the file copies. But it doesn't work on startup: the file does not copy over from the AD server. The startup script should run as LocalSystem (am I right?). So the question is, why do the files not copy on startup? Could it be: permissions of the LocalSystem user? Reading the registry being problematic at startup? Obtaining files from the AD netlogon folder being problematic at startup? Or am I missing it completely?

    My test machine does have the registry key and local directories described in the script. I myself have standard user permissions on the test machine. The AD server is Windows 2008, the test client is Windows XP SP3 (and soon to be Windows 7, where I assume permissions issues will be inevitable).

        Dim wShell, fso, oraHome, tnsHome, key, srcDir
        Set wShell = WScript.CreateObject("WScript.Shell")
        Set fso = CreateObject("Scripting.FileSystemObject")
        key = "HKLM\Software\Oracle\Oracle_Home"
        On Error Resume Next
        oraHome = wShell.RegRead(key)
        If Err.Number = 0 Then
            tnsHome = oraHome + "\" + "network\admin\"
            srcDir = wShell.ExpandEnvironmentStrings("%logonserver%") + "\netlogon\UpdatedFiles\"
            fso.CopyFile srcDir + "file1.ext", tnsHome, True
        End If

    Side note: to ensure that the script is properly deployed, I purposely put some errors in the script, and on the next startup the error message appeared. So I know the GPO is deployed properly.


  • Troubleshooting an overheating CPU

    - by Jeff Fry
    My father and I just recently put together a new PC. Specs below. From the very beginning, on boot it will often complain that the CPU is too hot. If I sit in the BIOS and watch the CPU, it'll drop back down from red to blue (<72C), at which point I've tended to just boot into Windows... and haven't had any problems. In fact, I've played a couple of hours straight of Skyrim at max settings and not had any visible issues. That said, I've occasionally walked away and come back to find that it's crashed. Yesterday it crashed (while idle) twice in 12 hours, which shifted the balance from busy-with-life to nervous-I'm-about-to-melt-something. I just installed Core Temp, which is showing my 4 cores fluctuating between 70-98C. I'm guessing at this point that the CPU fan may be incorrectly installed or defective. My first thought is to either (a) add water cooling (which the case supports) and/or (b) replace the CPU fan with an after-market one. That said, I'm very open to suggestions. A note: while I certainly don't want to burn money here, I have a baby coming any day now and am still unpacking from a recent move, so if I have a choice between an option that costs money and another that takes a while... I'll happily spend a bit extra. Side question: should I be nervous to even have this on at this point? Let me know if there's something useful I could add to my report. Otherwise, I'm looking forward to your suggestions! Thanks.

    CPU: Intel i7-2600 with stock fan
    Other hardware:
        ASUS P8Z68-V Pro motherboard
        64 GB SSD boot drive
        4 older SATA HDs
        GIGABYTE ATI Radeon HD6950 1 GB DDR5
        8 GB Kingston T1 Series RAM
        Corsair 650W Gold Certified power supply
        Antec P280 case


  • What permissions do I need to move a folder?

    - by isme
    In the root of my drive there exists a folder called SourceControl that contains all the working copies of all my programming projects. I would like to move the folder to my user directory (\Users\Me), but something about the permissions on the folder forbids me. I don't remember how I created the folder.

    When I execute the move command:

        MOVE \SourceControl \Users\Me

    I receive the following error:

        Access is denied.

    I have resolved a similar problem in the past using the Takeown utility to assign ownership of the file to me, so I tried this command next:

        TAKEOWN /F \SourceControl

    It returns the following error:

        ERROR: The current logged on user does not have ownership privileges on the file (or folder) "C:\SourceControl".

    I've just learned about the Icacls utility, which can inspect and modify file permissions. I used this command to inspect the permissions on the folder:

        ICACLS \SourceControl

    It produced this list:

        \SourceControl BUILTIN\Administrators:(I)(F)
                       BUILTIN\Administrators:(I)(OI)(CI)(IO)(F)
                       NT AUTHORITY\SYSTEM:(I)(F)
                       NT AUTHORITY\SYSTEM:(I)(OI)(CI)(IO)(F)
                       BUILTIN\Users:(I)(OI)(CI)(RX)
                       NT AUTHORITY\Authenticated Users:(I)(M)
                       NT AUTHORITY\Authenticated Users:(I)(OI)(CI)(IO)(M)

    I think this means that normal user accounts, like mine, have permission only to read and execute (RX) here, while administrator accounts have full control (F). I used Icacls to confer full control of the directory to my user account with this command:

        ICACLS \SourceControl /grant:r Me:F

    The command produces this output:

        processed file: \SourceControl
        Successfully processed 1 files; Failed processing 0 files

    Now inspection of the permissions produces this output:

        \SourceControl Domain\Me:(F)
                       BUILTIN\Administrators:(I)(F)
                       BUILTIN\Administrators:(I)(OI)(CI)(IO)(F)
                       NT AUTHORITY\SYSTEM:(I)(F)
                       NT AUTHORITY\SYSTEM:(I)(OI)(CI)(IO)(F)
                       BUILTIN\Users:(I)(OI)(CI)(RX)
                       NT AUTHORITY\Authenticated Users:(I)(M)
                       NT AUTHORITY\Authenticated Users:(I)(OI)(CI)(IO)(M)

    But after this the move command still fails with the same error. Is it possible to move this folder without invoking administrator rights? If not, how should I do it as administrator?


  • Linux: Can't overwrite files on samba store

    - by jonescb
    I'm using CentOS 5.5 with smbclient 3.0.33-3.28-el5 (the latest version in the repo), and I can't overwrite files on my Samba store. I am not the admin of the Samba server, so there isn't anything I can do server side, but I do have write permission to the share. The server runs Windows, either XP or Server 2003; I'm not sure which. I can delete a file and then copy the new version over, but I can't overwrite it. Using the cp command I get this error:

        [jonescb@localhost ~]$ cp foo.txt /mnt/si_storage/foo.txt
        cp: cannot create regular file `/mnt/si_storage/foo.txt': No such file or directory

    And if I edit a file on the server using vim, I can save it once, but if I save it again I get this:

        "/mnt/si_storage/foo.txt" E212: Can't open file for writing

    This is my /etc/fstab entry for the Samba server:

        //192.168.1.2/SI_STORAGE /mnt/si_storage cifs username=myuser,password=mypass 0 0

    Edit: I can overwrite files just fine from my XP machine. The CentOS box is the only one having problems.
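    A hedged way to narrow this down, not from the original post: try the same overwrite through smbclient's user-space client instead of the kernel cifs mount. If smbclient can overwrite the file, the problem sits in the cifs mount or its options rather than on the server. The share, user and file names are taken from the fstab line and error messages above.

        smbclient //192.168.1.2/SI_STORAGE -U myuser
        smb: \> put foo.txt        # first upload
        smb: \> put foo.txt        # overwrite attempt -- does this one fail too?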


  • How to remove Media Center from Windows 8

    - by Arabella
    I bought 2 Windows 8 upgrades, one for my PC and one for my notebook. I added Windows Media Center to my notebook using the free offer in November (side note: the key was emailed to me within 5 minutes; I see many people have been complaining that it takes a few days). Today I decided to add WMC to my PC as well, so I went to the Microsoft website, just like last time, and received the email within a few minutes. Once I added WMC, entered the key and the computer rebooted, my activation was broken:

        This product key is already being used on another PC. Try a different key or buy a new one.

    After rereading the product key email, I realised that the WMC key was exactly the same as the one I had received in November for my notebook (I used the same email address, i.e. my Microsoft account Outlook address, for both). I didn't think this would be a problem, as Microsoft's feature pack page states that the offer "...is limited to five licenses per customer per promotion."

    So then I decided I'd just remove WMC from my PC and go back to Windows 8 Pro. I turned off the WMC feature and the PC restarted, but activation is still broken because my key has been replaced. I then tried to activate with my original Pro key. The error it gave was that this key cannot be used with this version of Windows, since it is now Windows 8 Pro with Media Center and no longer plain Windows 8 Pro. I've searched a bit and it seems the only way to remove it is to do a clean install. I tried the Windows 8 Downgrade Helper, which told me I was already running Win 8 Pro when I tried to downgrade, and that I was running Win 8 Pro with Media Center when I tried the other option.

    To sum up: how do I remove Windows Media Center from Windows 8 Pro without having to do a clean install?


  • All FireFTP passwords gone after auto-update

    - by GitaarLAB
    For the last six months (since the Firefox madness started and they keep on taking control of my PC) I've been terrified to touch Firefox. The problem is that I've been using it in my business (since once upon a time it was a trustworthy application with useful extensions like FireFTP), and that installation (and its plugins) holds four years of information. Firefox keeps destroying my important data by messing up (or blocking, or worse, auto-updating) my plug-ins, even crashing my computer as a result. Today Firefox killed FireFTP by (again) auto-updating it without my permission (and I did my best to disable that nonsense in about:config). The result: none of the (over 100) FireFTP accounts can be logged on to; they all suddenly ask for a password. I do not have the time to find all of the passwords and reconfigure FireFTP again.

    How can I undo the mess Firefox created once again? That is, where are the passwords, and how do I downgrade? As a side question, how can I make Firefox behave again? I'm the boss of my computer, not them! How can I once and for all take back control and completely kill every kind of auto-update feature?


  • Tables in the SQL Server "master" database, will they cause problems?

    - by pepoluan
    Folks, please be kind to me... I'm just an 'accidental' DBA because our DBA resigned, so I'm a total newbie at this. You see, I have this application, "ESET Remote Administration Server" (ERAS), that stores its logs and analysis in (originally) a local Access database. The decision was made to migrate its database to a SQL Server 2008 R2 machine. ESET (the maker of the software) helpfully provides tools to perform such a migration; unfortunately, being the DBA neophyte that I am, I didn't realize that I first had to create my own database (on the SQL Server side) and assign that database as the 'default' database for ERAS' ODBC connection. Now the migration tool has successfully created a whole bunch of tables inside the "master" database.

    My questions: Should I leave things as they are, or should I re-migrate the ERAS database to a different database? If you suggest I perform a re-migration, my plan is to (1) create a new instance, (2) create a new database within the new instance, (3) create a new ODBC System DSN on the ERAS server pointing to the new DB from step 2, and (4) use ESET's migration tool to migrate from the current DSN to the new DSN. Do you think I missed a step there?

    Thanks beforehand for any guidance.
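    Before deciding, it can help to see exactly which tables the migration dropped into master. A small sketch using the sqlcmd client (the server name is a placeholder); sys.tables carries an is_ms_shipped flag that separates user-created tables from the system-supplied ones.

        sqlcmd -S localhost -Q "SELECT name FROM master.sys.tables WHERE is_ms_shipped = 0 ORDER BY name;"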


  • Nginx terminating SSL for WordPress

    - by Mike
    I have a bit of a problem. We run a WordPress blog behind an nginx proxy and are looking to terminate SSL on the nginx side. Our current nginx config is:

        upstream admin_nossl {
            server 192.168.100.36:80;
        }

        server {
            listen 192.168.71.178:443;
            server_name host.domain.com;
            ssl on;
            ssl_certificate /etc/nginx/wild.domain.com.crt;
            ssl_certificate_key /etc/nginx/wild.domain.com.key;
            ssl_session_timeout 5m;
            ssl_protocols SSLv2 SSLv3 TLSv1;
            ssl_prefer_server_ciphers on;
            ssl_session_cache shared:SSL:10m;
            ssl_ciphers RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;

            location / {
                proxy_read_timeout 2000;
                proxy_next_upstream error;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_max_temp_file_size 0;
                proxy_pass http://admin_nossl;
                break;
            }
        }

    It just does not seem to work. I can hit https://host.domain.com, but it quickly switches back to non-secure from what I can see. Any pointers?
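    One thing the config above never does is tell the backend that the original request was HTTPS, so WordPress keeps generating plain-http URLs and the browser appears to "switch back". A hedged sketch of the usual fix, not from the original post: forward the scheme from nginx and teach WordPress to trust it (the wp-config.php lines are the commonly used snippet for this, added here as an assumption).

        # in the location / block on the nginx side
        proxy_set_header X-Forwarded-Proto https;

        # near the top of wp-config.php on the backend
        if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
            $_SERVER['HTTPS'] = 'on';
        }

    The WordPress site URL setting may also need to be https://host.domain.com so that generated links stay on the secure scheme.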


  • Server not accepting uploads

    - by Tatu Ulmanen
    I'm having a strange problem with my VPS: I can download files from it, I can use PuTTY to connect to it, and all behaves normally. But sometimes, when I try to upload a file to the server or save a file via SFTP, the connection inexplicably fails. I am using jEdit to edit files remotely via SFTP. When it works, it works fine. When it doesn't, I get an error message:

        Cannot save: java.io.IOException: inputstream is closed
        Cannot save: java.io.IOException: 4:

    I can see that a temporary save file (#file.php#save#) is created on the server with a filesize of 0. So the connection works, but when it comes to sending the actual data, something fails. The same thing happens with WinSCP, but the error is different:

        Copying file fatally failed.
        Copying files to remote side failed.

    And I can always browse the server with PuTTY without a problem. I see nothing abnormal in any log files. auth.log shows this when I try to save:

        sshd[32638]: Accepted password for - from - port 62272 ssh2
        sshd[32638]: pam_unix(sshd:session): session opened for user - by (uid=0)
        sshd[32640]: subsystem request for sftp
        sshd[32638]: pam_unix(sshd:session): session closed for user -

    When I wait for a while (say, an hour), everything works fine again. It can't be a temporary ban, as I am still allowed to connect to the server, right? I know this may not be enough info to solve the problem, but I am grateful for any clues or bits of information that might help me. What are the possible causes for this kind of behaviour, what log files can I check for clues, etc.? I'm running out of ideas!
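    A hedged first check, not taken from the post: zero-byte temp files on writes while reads keep working is a classic symptom of a full filesystem, exhausted inodes, or a per-user quota on the VPS, so it is worth ruling those out the next time the failure is actually happening.

        df -h            # any filesystem at 100%?
        df -i            # inodes can run out even with free space
        quota -s         # per-user quota, if quotas are enabled on the VPS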


  • SFTP, SCP, or secure WebDAV: which is the most suitable?

    - by Xavier Maillard
    Hi, currently I am hosting a WebDAV share set up to store files I need wherever I am. It is available via HTTPS. The thing is, I do not need all the HTTP machinery; my nginx HTTP server is only there for this WebDAV folder, and I am not sure I made the best choice. My requirements on the client side are:

    - secure transfers
    - mountable as a network drive at work, with 'near realtime sync'
    - usable from any OS I might use (including my Android phone)

    At first I chose WebDAV since it passes through my work proxy (which refuses everything that is not HTTP/S on port 80 or 443). Today I am not satisfied with the setup, and even though nginx's memory footprint is pretty small, its WebDAV support is not really "clean" or complete. What would you recommend between SFTP, SCP and the current WebDAV solution? I think SFTP is the closest solution, but I still have to find out how to pass it through my proxy ;) SCP seems quite limited from what I have read about it (only file transfers, if I read right). Cheers
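    A sketch of the "SFTP through the work proxy" part, under two assumptions that are not in the original post: the proxy allows CONNECT to port 443, and corkscrew (or an equivalent proxy-tunnel helper) can be installed on the client. The server would listen for SSH on 443, and the share can then be mounted with sshfs. Host names and the proxy port are placeholders.

        # ~/.ssh/config on the client
        Host myserver-via-proxy
            HostName myserver.example.com
            Port 443
            ProxyCommand corkscrew proxy.example.com 3128 %h %p

        # mount the remote directory as a "network drive"
        mkdir -p ~/mnt/data
        sshfs myserver-via-proxy:/data ~/mnt/data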


  • MacBook Pro (2007 - ATI-Card) Screen goes half black then shuts down the entire computer

    - by BluePerry
    Hey, when I'm starting my MBP I get about 10-60 seconds into the boot process before the screen goes half black (it's like applying a black-to-transparent gradient from the left side of the screen to the middle). After about 5 seconds the entire screen blacks out and the MBP shuts down. There have been issues with heat and some minor artifacts on the screen in the past, but nothing serious as far as I know; everything could be resolved by rebooting or letting the MBP cool down a bit. I would like to identify the problem now. There are 3 possibilities I can think of right now:

    1. My graphics card is defective and shuts down after a few seconds (but if that's true, I find the HALF black screen kind of hard to explain).
    2. My LCD panel is defective and sends back wrong signals, so the system shuts down (I'm not sure about that).
    3. My fan and/or thermal sensors are defective and are forcing the graphics card to shut down.

    Can anyone tell whether I'm right, or whether there is yet another reason for this? I'd be thankful for any hints or tips. Cheers, Per


  • Slow connection to Linux MySQL from Windows only (XAMPP)

    - by Josh
    I'm having a problem with a PHP project (using the Kohana 3.2 framework) on my Windows 7 64-bit machine connecting to the database. The development database is stored on an Ubuntu Linux server on the local network. Other development machines running OS X and Linux connect fine; there are no other Windows development machines to test with. I can access MySQL fine using MySQL Workbench, and other projects (which I believe to be less database-heavy) run mostly OK, only occasionally getting timeout messages. In this particular project I'm constantly getting "Maximum execution time of 30 seconds exceeded" when functions such as mysql_query() are run. Specifically, the Kohana file where the timeout occurs is MODPATH\database\classes\kohana\database\mysql.php [ 186 ].

    My local set-up is:

    - Windows 7 Professional 64-bit
    - XAMPP 1.7.7 (PHP 5.3.8)

    The output of uname -a on the Linux server is:

        Linux peach 2.6.38-11-server #50-Ubuntu SMP Mon Sep 12 21:34:27 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

    I've tried the following, with no success:

    - disabling the Windows firewall
    - switching between a persistent and a normal connection
    - adding skip-name-resolve to my.cnf
    - increasing wait_timeout
    - enabling bind-address

    I've run out of ideas now, and have no idea how to debug an odd issue like this. Has anyone come across this before, or have any idea how I could find the root of the issue, or what might be the problem?


  • Possible Solution for Setting up a Linux VPN Server to Encrypt WLAN Traffic of Macs and iPhones on Public WLANs

    - by GorillaPatch
    I would like to set up a VPN server on Debian Linux to encrypt wireless traffic coming from my Mac or iOS device. I would like to use a certificate-based solution; setting up a PKI infrastructure and managing certificates is OK for me.

    1. Which server to pick? Looking around the internet and here on Stack Overflow I found the following possible solutions: strongSwan IPsec and racoon. Which solution is feasible for a Linode running Debian Squeeze?

    2. How to configure the network? If I understood correctly, a VPN has a virtual network interface as an endpoint on the server side. Naively I would think that I need a DHCP server running on the server to assign a dynamic private IP (like a class C 192.168.x.x address) to the connecting clients. Next I think I would need to set up masquerading to NAT the incoming VPN traffic out through the real interface directly connected to the internet. Is this the right way to go? Do you have any configuration examples?

    I often see VPN configurations used to connect to your home network, but that is not what I am looking for: I have a server on the internet and want to use it as a proxy to encrypt traffic in insecure network environments like public WLANs.
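    A minimal sketch of the NAT/forwarding half of question 2, independent of which IPsec daemon is picked. The client address pool (10.0.8.0/24) and the public interface name (eth0) are assumptions; substitute whatever pool the VPN actually hands out.

        # allow the box to route packets at all
        sysctl -w net.ipv4.ip_forward=1          # persist via /etc/sysctl.conf

        # masquerade VPN clients out of the internet-facing interface
        iptables -t nat -A POSTROUTING -s 10.0.8.0/24 -o eth0 -j MASQUERADE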


  • vSphere Promiscuous mode only receiving packets one way from network switch

    - by steve.lippert
    We have two network switches: a PoE switch (SwitchA) to power our phones and users' computers, and a non-PoE switch (SwitchB) for the rest of the network. Each switch is set up to do port mirroring to support our VoIP recording system. SwitchA does port mirroring on specific ports if we need to record a user. SwitchB mirrors one port to monitor our work-at-home users (internet comes in from a managed router to the switch, then back out to our firewall). These two port-mirroring setups feed into one VMware vSphere 4.1 server, which has four physical NICs in total; the other two NICs feed into an unmanaged switch for connecting to the rest of the network. Once inside the vSphere server, all network ports go into a vSwitch, and then one of the guests (Windows 2008 R2) sniffs them and does its thing.

    Everything is working fine and dandy from SwitchB. But from SwitchA we only receive one side of the VoIP packets (traffic going out to the phone, nothing coming in from the phone). Troubleshooting steps I have taken so far:

    - I hooked up my laptop to the monitor port on SwitchB and I see both sides of the packets.
    - I swapped which network interface is plugged into the monitor port on SwitchA.
    - Because everything feeds into one vSwitch / vNetwork and both sides of the conversation arrive just fine from SwitchB, I believe everything is configured correctly on the vSphere server/guest.

    What could be causing one-way packets to arrive on my guest machine from only one interface, but not the other? Could a bad cable be causing the problems from SwitchB?


  • Print over remote CUPS server, but just show a subset of the printers.

    - by jdm
    I'd like to print from my Ubuntu laptop (Karmic) to some networked printers. Our organisation uses a CUPS server with several hundred printers. What I know I can do is:

        CUPS_SERVER=printers.company.com acroread document.pdf

    and then Adobe Reader shows me all available printers to select from. However, it takes a couple of minutes to display the large list, which is really annoying. (The desktop PCs here suffer from this too.) The other option is to add a new printer with an address like

        ipp://printers.company.com/printer/bldg1_hp8150

    to the Ubuntu printer configuration (i.e. the local CUPS server). However, it asks me for a driver. I don't want to (and sometimes can't) specify a driver, since some printers don't appear in the list. I'd like to let the remote CUPS server handle the driver part (like it does when I set CUPS_SERVER) and do no more preprocessing or "driver stuff" on my side. The ideal thing would be if I could somehow add the remote printer list to my local CUPS server and apply a filter, so that it would display only printers matching bldg1_*. This feature was available in KDE 3.x, but I can't find something similar in Ubuntu/GNOME. Any suggestions?
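    A per-printer workaround that can be sketched from the command line (not from the original post): define just the queues you actually need locally as raw queues, so local CUPS does no filtering and the remote server keeps doing the driver work. The URI is taken from the example above; repeat once per printer.

        lpadmin -p bldg1_hp8150 -E -v ipp://printers.company.com/printer/bldg1_hp8150 -m raw
        lpstat -p bldg1_hp8150          # confirm the queue is enabled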


  • Multiple Rails apps on same subdomain?

    - by Derek
    I recently decided to try out Rails. When working with PHP, I simply had all of my PHP projects in the same directory; for example, I might have http://ubuntu/app1, http://ubuntu/app2, etc. I created a subdomain for Rails (http://ruby.ubuntu), installed Rails and Passenger, and everything is working. However (I may be wrong, but) it looks like I can only have one Rails app per subdomain? My VirtualHost is as follows:

        <VirtualHost *:80>
            ServerName ruby.ubuntu
            ServerAdmin webmaster@localhost

            DocumentRoot /var/www/ruby/blog/public
            <Directory /var/www/ruby/blog/public>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
                RailsEnv development
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    All of my PHP and misc. files are stored in /var/www/main. I want to be able to store all of my Rails apps in /var/www/ruby. I tried changing DocumentRoot to /var/www/ruby, but I don't think it's as simple as that: when I browse to a Rails app's Welcome Aboard page and click on "About my application's environment," I get a 404 page, whereas when the DocumentRoot is set to the public directory, I get the expected result. I don't want to have to create a new subdomain every time I create a new project. Is there any way I can store all of my apps in /var/www/ruby, and browsing to http://ruby.ubuntu will let me access all of my Rails apps there? That way, if I want to create a new app, all I have to do is rails new app, with no Apache .htaccess or VirtualHost configuration required.
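    Passenger's "deploy to a sub-URI" setup is the usual answer to this kind of layout. The sketch below is an assumption-laden outline, not the poster's config: it assumes classic Passenger directives (RackBaseURI for Rack/Rails 3 apps; older Rails 2 apps use RailsBaseURI, newer Passenger releases use PassengerBaseURI), a hypothetical second app called wiki, and a separate docroot holding symlinks to each app's public directory.

        # create a docroot of symlinks, one per app's public/ directory
        mkdir -p /var/www/ruby_root
        ln -s /var/www/ruby/blog/public /var/www/ruby_root/blog
        ln -s /var/www/ruby/wiki/public /var/www/ruby_root/wiki     # hypothetical second app

        # Apache VirtualHost sketch
        <VirtualHost *:80>
            ServerName ruby.ubuntu
            DocumentRoot /var/www/ruby_root
            RackBaseURI /blog
            RackBaseURI /wiki
            <Directory /var/www/ruby_root>
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    The apps would then be reachable at http://ruby.ubuntu/blog and http://ruby.ubuntu/wiki. A new app still needs one ln -s plus one RackBaseURI line, which is less work than a new VirtualHost but not quite zero configuration.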


  • Weird rendering artefact in vim (terminal, not MacVim)

    - by Tobi Lehman
    Running Mac OS X, using either Terminal.app or iTerm2, there is a strange artefact in the character rendering that I have a hard time explaining and an even harder time understanding. I'll start with a video of my screen so that you can see an example of it in action. From the video you can see a few ways it is weird; for example, sometimes when I hit a letter in insert mode, the character is printed twice. When I go into normal mode, the artefact remains. When I re-enter insert mode, hitting backspace copies the characters on the left to the position under the cursor. This has happened on OS X Lion and Mountain Lion, under both Terminal.app and iTerm 2. It never happens in MacVim. Also, I use GNU/Linux on my other machine and have never had this happen there, so I am pretty sure it is strictly a Mac OS X issue, but I do not know how to fix it. For a while I've been working around it by using MacVim most of the time, but I prefer working in a terminal. Does anyone know what is happening here, and if so, how can I fix it?

    EDIT: I tried using the MacVim Vim executable, and I still get strange artefacts, but they are localized to the left side of the screen.


  • KVM Hosting: How to efficiently replicate guests

    - by javano
    I have three KVM servers, each with one guest VM running directly on its local storage (so each guest essentially gets a dedicated box worth of computing power). In the event of a host failure I would like the guests replicated to at least one of the other hosts, so I can spin the guest up there until the failing host is fixed.

    I am curious about KVM cloning. I can clone a VM live or when it's suspended/shut down. Obviously suspended VMs will naturally be quicker to clone, but these three VMs comprise three parts of a single solution, so I don't want any one of them ever shut down. How can I efficiently clone these VMs between servers? I have had a couple of ideas, but are these insane, or is there a better method I have missed for my scenario?

    1. Set up a DRBD partition between box 1 and box 2 where VM 1 runs, so it is replicated between box 1 and box 2; repeat between boxes 2 & 3, and boxes 3 & 1. (This could be insane; I have never used DRBD, only read about it.)
    2. Just use the standard KVM CLI clone options to perform live clones. (I'm dubious about this because I don't know how long it will take or what the performance impact will be.)
    3. Run a copy of each VM on at least one other host, and have the guest on one host export its data to the matching guest on another host, where it can import that data (scripting this on the guest).
    4. Some other way? Ideas welcome!

    Side note: these servers have 4x 15k SAS drives in RAID 10, so they aren't rocketing fast, and as I mentioned, each VM runs from the host's local storage, no NAS or SAN etc. That is why I am asking about guest replication. Also, this isn't about disaster recovery; guests will be exporting their data to a NAS over a VPN, so I am looking at how I can have them quickly spun up in a host-failure situation.
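    For scale, a hedged sketch of what idea 2 boils down to when done by hand with the libvirt tools. The disk path, image name and host names are assumptions, and a suspended-and-copied image is only as consistent as a machine that lost power, so the in-guest services must be able to tolerate that.

        # on host1: pause the guest briefly, copy its disk and definition to host2
        virsh suspend vm1
        rsync -a --sparse /var/lib/libvirt/images/vm1.img root@host2:/var/lib/libvirt/images/
        virsh dumpxml vm1 > /tmp/vm1.xml
        scp /tmp/vm1.xml root@host2:/tmp/
        virsh resume vm1

        # on host2: register the standby copy, but do not start it
        virsh define /tmp/vm1.xml

    How long the guest stays suspended depends almost entirely on how much of the image rsync has to resend, which is why block-level replication (idea 1) is usually what people end up with when the pause has to stay short.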


  • How to recover my invisible HD again?

    - by pattulus
    I have done this several times before, but this time something bad happened. What I did: I installed Windows 7 on a 32 GB partition on the HD in slot 2 of my Mac Pro. Windows 7 also created a 105 MB partition... I knew this beforehand, but what I didn't know was that this partition would end up on the HD in slot 4. My home folder, my private videos and some other stuff are on that 1 TB drive.

    What I have found out so far: I'm currently logged in as another admin, since my OS partition and the two other HDs aren't harmed.

    Disk Utility: only shows the 105 MB NTFS partition on this 1 TB volume. It isn't showing my old 1 TB partition/ex-HD named "storehouse". Only the partition tab tells me that there is now 1 TB of empty, free, unpartitioned space.

    Data Rescue II: shows the volume as it used to be, with its old name "storehouse". A quick scan and a thorough scan were both done in 1 second, which leads me to the conclusion that nothing was actually deleted (hope!). Data Rescue doesn't even mention the damn "System Reserved" partition.

    Drive Genius: also shows the old partition and doesn't mention the new one. But looking at the info, it tells me under "content": FDisk_partition_scheme (instead of Apple_partition_scheme). Well, d'oh...

    Tech Tools: doesn't show the volume at all; otherwise I might have been tempted to press rebuild/repair.

    What to do next? I think the best approach is to buy another 1 TB HD and let Disk Warrior clone my old one to it, just to be on the safe side. But what is the best thing to do after that?


  • Change Outlook's default calendar to iCloud (meeting requests end up in wrong calendar)

    - by flohei
    The scenario: I've got a main computer (Windows 7, Office 2010) which is used to manage contacts, meetings, etc. in Outlook. Now I've added an iPad and an iPhone, syncing via iCloud. I moved all appointments and contacts from the old PST file to the iCloud data file, and all the data syncs nicely. The email account I'm using in Outlook is an IMAP account, which opens up another data file, bringing us to a total of three data files in Outlook's side bar.

    The problem: when one of our clients sends us a meeting request via email, it shows up in the IMAP inbox. When we open it, it automatically gets added to Outlook's default calendar (the one in the original PST). Is there any way to have it added to the iCloud calendar instead? Basically we could get rid of the original PST completely, since we don't use it at all anymore, but the settings do not allow me to remove this PST file and set the iCloud one as default. Thanks!


  • Nginx + WordPress + HHVM: Why isn't Batcache working? Would Varnish help even more?

    - by javipas
    I've heard great things about HHVM, so I've set up a copy of a WordPress blog (on another domain) with Nginx (with the PageSpeed module) and HHVM. The benefits are already obvious: on the same config, load times are between two and three times faster.

    Now I'm trying to speed things up a little more, so I've also installed Memcached and Batcache. I installed the memcached package, copied object-cache.php (Pastebin) into the root folder of the WordPress blog, then installed the Batcache plugin and copied the advanced-cache.php (Pastebin) file into the wp-content folder. I've also added the line

        define('WP_CACHE', true);

    to the wp-config.php file. It doesn't seem to work, though. If I quickly reload the page several times, Batcache should serve the cached page, but it doesn't. That's easy to check by reloading (Cmd+R in Chrome on OS X) the page several times and then viewing the page's source: under the <head> section I should see some Batcache stats, but they aren't there. I wonder if someone could give me a hint on this.

    On a side note, I don't know if I could add some other component to push performance even further. I'm thinking about Varnish, but I'm not sure whether it would be useless, just another way of doing what I'm already doing. Any other component worth considering? (I'll test a CDN for images, minifying JS and some other tricks as well, but I'm asking from the server perspective.)
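    Two hedged checks, neither taken from the original post. First, WordPress only honours object-cache.php when it sits in wp-content (not the blog root), and the stock memcached drop-in generally expects the PHP "memcache" extension (note: not "memcached"); whether the HHVM build provides a compatible class is worth verifying. Second, memcached itself can be watched to see whether WordPress writes anything to it while the page is reloaded (assumes memcached on the default local port):

        echo stats | nc 127.0.0.1 11211 | egrep 'curr_items|get_hits|get_misses'

    If curr_items and get_hits stay at zero across reloads, the object cache layer is never reaching memcached, and Batcache (which stores its pages through that layer) cannot work either.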


  • Why doesn't NFS recognize a new UID?

    - by user76177
    I have two servers running RHEL 6, and I have root access to both. The main server, which I will call "server", is a database server. The application server, which I will call "client", mounts a directory from server via NFS. There is a user, appuser, on both client and server. However, appuser's UID on client is 502, and appuser's UID on server is 506. Both users need read and write access to the NFS share. To facilitate this, I made the share owned by appuser on server. Running id appuser yields:

        uid=506(appuser)

    Of course, client does not recognize that ownership, since appuser has a different UID on client. So I did the following:

    1. Changed the UID of the user in /etc/passwd on client to 506.
    2. Changed ownership of appuser's $HOME on client back to appuser so that I could log in.

    Now, when I look at the NFS share from the client side, I see that it is owned by 502, the OLD id for appuser on client. I can't change ownership of the NFS share from client, since that volume physically resides on server. I need the NFS share to show appuser as the owner from both server and client. What step have I missed after changing the appuser UID on client? (Note: I have not rebooted client, or anything else.)
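    A hedged place to look next, not from the original post: NFS compares numeric UIDs, not names, and both the client's cached file attributes and any files still carrying the old UID can keep 502 visible after /etc/passwd is edited. A quick sketch (mount point and export path are placeholders):

        # compare raw numeric owners on both machines
        ls -ln /mnt/share            # on client
        ls -ln /exported/dir         # on server

        # drop the client's cached attributes for the mount
        umount /mnt/share && mount /mnt/share

        # fix any local client files still owned by the old UID
        find / -xdev -uid 502 -exec chown -h 506 {} +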


  • SSD for swap on Ubuntu server

    - by grs
    Currently I am reading SSD reviews, and I wonder how much I would benefit from moving my 24 GB of swap from a 7200 rpm HDD to an SSD. Has anyone implemented swap space on an SSD? Is this generally a good idea? On a side note: I read that ext4 performs much better if the journal is on an SSD. Does anyone have such a setup? Thanks!

    Edit: to answer the questions posted. I hit swap occasionally, relatively rarely. I know what swap is for and that getting more RAM is the better fix. When the server begins to swap, its performance degrades (no surprise). The idea is that if I have a few memory-hungry processes running, I could improve overall system performance at that time by using an SSD for swap instead of slower rotational media. In the end, I want to be able to log in faster and check the server's state while it is swapping, instead of waiting at the login prompt. And from what I see, SSD is cheaper per GB than RAM.

    Would I get better server performance during swapping (as rare as it is) using an SSD compared to an HDD? And where would 10k or 15k rpm HDDs rate in this scenario? Thank you all for your quick and prompt answers!
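    If the experiment goes ahead, a minimal sketch of adding SSD-backed swap alongside the existing HDD swap; /dev/sdb1 and /dev/sda5 are placeholders for the SSD and HDD swap partitions. The pri= values make the kernel prefer the SSD and fall back to the HDD swap only when the SSD swap is full.

        mkswap /dev/sdb1
        swapon -p 10 /dev/sdb1

        # /etc/fstab
        /dev/sdb1   none   swap   sw,pri=10   0 0
        /dev/sda5   none   swap   sw,pri=1    0 0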


  • Windows Search not searching in files

    - by Cylindric
    I am trying to get Windows Search working on my Windows Server 2008 SP2 file server so I can search file contents. I have added the Windows Search Service role to the server and, using right-click > Properties in Explorer, set some folders to "Index this location". The problem is that neither on the server nor remotely can I search within the files.

    I also seem to get some inconsistencies in the GUIs. For example, the "Indexing Options" panel shows me just 6 locations indexed, but if I click "Modify" I see nearly everything ticked. The "SearchTest" folder under "infrastructure" has the "Index this location" option ticked, but the "Projects" folder does not; I assume this is why some entries are greyed out and some are not, yet they are all ticked.

    The "SearchTest" folder contains some files that have nothing but the text PurpleOrange in them, so I should be able to find those.

    So, to summarise:

    - Which locations are actually indexed: the ones in the "Index these locations" list, the ones ticked, or the ones not greyed out in the list?
    - How do I get to the state where I can type PurpleOrange in the search box and see those files?

