Search Results

Search found 5623 results on 225 pages for 'prevent deletion'.

Page 178/225

  • VMWare vSphere 5: 4 pNICs for iSCSI vs. 2 pNICs

    - by gravyface
    New SAN for me, never used before: it's an IBM DS3512, dual controller with a quad 1GbE NIC per controller, that a client bought and needs help setting up. The hosts (x2) have 8 pNICs each, and while I usually reserve 2 pNICs per host for iSCSI (and 2 for VM traffic, 2 for management, 2 for vMotion, staggered across adapters), the extra ports on the SAN have me wondering whether storage I/O would be significantly improved with 2 additional NICs per host, or whether the limitations of the vmkernel/initiator would prevent the additional multipaths from ever being realized.

    I'm not seeing a lot of 4-pNIC iSCSI implementations per host; 2 is the de facto standard from what I've read/seen online. I could and probably will do some I/O testing, but I'm wondering if there's a "wall" that someone else discovered long ago (i.e. before 10GbE) that makes a 4-NIC iSCSI setup per host somewhat pointless.

    Just to clarify: I'm not looking for a how-to, but an explanation (link to paper, VMware recommendation, benchmark, etc.) as to why 2-NIC configurations are the norm vs. 4-NIC iSCSI configurations, i.e. storage vendor limitations, VMkernel/initiator limitations, etc.
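    For what it's worth, this is roughly how I'd planned to count the paths actually in use once any extra vmkernel ports are bound. It's only a sketch, using the esxcli namespaces as I understand them on ESXi 5, so treat the exact subcommands as an assumption:

        # which vmkernel ports are bound to the software iSCSI initiator
        esxcli iscsi networkportal list

        # how many paths each device has, and which of them are active
        esxcli storage nmp device list
        esxcli storage core path list

    If the per-LUN path count ends up bounded by the array's controller ports or the path selection policy anyway, the extra pNICs presumably wouldn't add usable bandwidth, which is exactly the kind of limit I'm asking about.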

    Read the article

  • Restoring a fresh home folder in a shared user domain environment

    - by Cocoabean
    I am using a tool called pGINA that adds another credential provider to my Windows 7 clients so we can authenticate campus users via campus LDAP. We have the default Windows credential providers set up to authenticate against our Active Directory, but we have students in our classes who don't have entries in our AD, and we need to know who they are to allow them internet access. Once these LDAP users log in via pGINA, they are all redirected to the same AD account, a 'kiosk' account with GPOs in place to prevent anything malicious.

    My concern is that my users will accidentally save personal login information or files in that shared profile, and another user may log in later and have access to a previous user's Gmail account, since the AppData folder on each computer is shared by anyone logging into the kiosk user. I've looked into MS's 'roll-your-own' SteadyState but it didn't seem to have what I wanted. I tried to write a PowerShell script to copy a pre-saved clean version of the profile from a network share, but I kept running into issues with CredSSP delegation and accessing the share via the UNC path. Others have recommended something like DeepFreeze, but I'd like to do it without 3rd-party tools if possible.

    Read the article

  • .htaccess hacked - I've deleted the code and file - what next?

    - by user1762595
    My website was hacked recently. I think I've found the code that was added to the .htaccess file, deleted it, and then added rules to prevent the .htaccess file from being accessed again. I've also deleted the PHP file that the hacked code refers to (common.php). What do I need to do next? I'm not a programmer or website developer, but I really wanted to see if I could fix the problem myself, as I've spent quite a few hours trying and don't give up easily.

    Here is the hacked code that I deleted:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{HTTP_USER_AGENT} (google|yahoo) [OR]
        RewriteCond %{HTTP_REFERER} (google|yahoo)
        RewriteCond %{REQUEST_URI} /$ [OR]
        RewriteCond %{REQUEST_FILENAME} (shtml|html|htm|php|xml|phtml|asp|aspx)$ [NC]
        RewriteCond %{REQUEST_FILENAME} !common.php
        RewriteCond /home/httpd/vhosts/bluestardive.com/httpdocs/common.php -f
        RewriteRule ^.*$ /common.php [L]
        </IfModule>

    The following code has to stay in the .htaccess file because it rewrites my URLs to SEO-friendly ones (the website errors without it), but has this code been hacked as well?

        # Apache search queries statistic module
        RewriteEngine On
        AddHandler php5-fastcgi .php .php5
        # <contrexx>
        # <core_modules__alias>
        RewriteRule ^about-us$ /index.php?page=883 [L,NC]
        RewriteRule ^ausfluge-und-aktivitaten$ /index.php?page=800 [L,NC]
        RewriteRule ^bluestardive-news$ /index.php?page=919 [L,NC]
        RewriteRule ^bookings$ /index.php?page=911 [L,NC]
        RewriteRule ^diveresort$ /index.php?page=879 [L,NC]
        RewriteRule ^diving$ /index.php?page=880 [L,NC]
        RewriteRule ^excursions-and-activities$ /index.php?page=881 [L,NC]
        RewriteRule ^galerie$ /index.php?section=gallery [L,NC]
        RewriteRule ^oceannight$ http://www.bluestardive.com/index.php?page=906 [L,NC]
        RewriteRule ^philosophy$ /index.php?page=846 [L,NC]
        RewriteRule ^reservation$ /index.php?page=917 [L,NC]
        RewriteRule ^reservierung$ /index.php?page=918 [L,NC]
        RewriteRule ^resort$ /index.php?page=798 [L,NC]
        # </core_modules__alias>
        # </contrexx>

    Many thanks for any help. Claire
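    In case it helps anyone advising, this is roughly what I've been running over SSH to look for other files the attacker might have dropped or modified. It's only a sketch; the 14-day window and the search patterns are my guesses, not anything I know about this particular hack:

        # find files modified in the last 14 days under the docroot
        find /home/httpd/vhosts/bluestardive.com/httpdocs -type f -mtime -14 -ls

        # look for common obfuscation patterns in PHP files
        grep -rIl --include='*.php' -E 'base64_decode|eval\(|gzinflate' \
            /home/httpd/vhosts/bluestardive.com/httpdocs

        # list every .htaccess under the docroot, in case others were injected
        find /home/httpd/vhosts/bluestardive.com/httpdocs -name .htaccess -ls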

    Read the article

  • Strange File-Server I/O Spikes - What Is Causing This?

    - by CruftRemover
    I am currently having a problem with a small Linux server that is providing file-sharing services to four Windows 7 32-bit clients. The server is an AMD Phenom X3 with two Western Digital 10EADS (1TB) drives, attached to a Gigabyte GA-MA770T-UD3 mainboard and running Ubuntu Server 10.04.1 LTS.

    The client machines are taking an extremely long time to access/transfer data on the file server. Applications often become non-responsive while trying to open files located remotely, or one program attempting to open a file and having to wait will prevent other software from accessing network resources at all. Other examples include one image taking 20 seconds or more to open, and in one instance a user waited 110 seconds for Microsoft Word 2007 to save a document.

    I had initially thought the problem was network-related, but this appears not to be the case. All cables and switches have been tested (one cable was replaced) for verification. This was additionally confirmed when shutting down all client machines and rebooting the server resulted in the hard-drive light staying on solid during the startup process. For the first 15 minutes during boot, logon and after logging on (with no client machines attached), the system displayed a load average of 4 or higher. Symptoms included waiting several minutes for the logon prompt to appear, and then several minutes for the password prompt to appear after typing in a user name. After logon, it also took upwards of 45 seconds for the 'smartctl' man page to appear after the command 'man smartctl' was issued. After 15 minutes of this behaviour, the load average dropped to around 0.02 and the machine behaved normally.

    I have also considered that the problem is hard-drive-related; however, diagnostic programs reveal no drive problems. Western Digital DLG, SpinRite and SMARTUDM show no abnormal characteristics - the drives are in perfect health as far as the hardware is concerned. I have thus far been completely unable to track down the cause of this problem, so any help is greatly appreciated.

    Requested information:

    Output of 'free': hxxp://pastebin.com/mfsJS8HS (stupid spam filter)

    The command 'hdparm -d /dev/sda1' reports:

        HDIO_GET_DMA failed: Inappropriate ioctl for device

    (The BIOS is set to AHCI - I probably should have mentioned that.)
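    For anyone who asks, these are the checks I plan to run during the next high-load window. It's a sketch, assuming iotop and the sysstat tools are installed (they aren't part of the default 10.04 server install):

        # watch per-disk utilisation and await while the load is high
        iostat -x 5

        # see which processes are actually generating the I/O
        sudo iotop -o

        # check for ATA/controller errors and any md resync activity
        dmesg | grep -iE 'ata|error|reset'
        cat /proc/mdstat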

    Read the article

  • Preventing endless forwarding with two routers

    - by jarmund
    The network in question looks basically like this:

                                /----Inet1
                               /
        H1---[111.0/24]---GW1---[99.0/24]
                                      \----GW2-----Inet2

    Device explanation:

    H1: Host with IP 192.168.111.47
    GW1: Linux box with IPs 192.168.111.1 and 192.168.99.2, as well as its own route to the internet.
    GW2: Generic wireless router with IP 192.168.99.1 and its own route to the internet.
    Inet1 & Inet2: Two possible routes to the internet

    In short: H1 has more than one possible route to the internet. H1 is supposed to access the internet only via GW2 when that link is up, so GW1 has some policy-based routing set up just for H1:

        ip rule add from 192.168.111.47 table 991
        ip route add default via 192.168.99.1 table 991

    While this works as long as GW2 has a direct link to the internet, the problem occurs when that link is down. What happens then is that GW2 forwards the packet back to GW1, which again forwards it back to GW2, creating an endless loop of TCP ping-pong. The preferred result would be that the packet was simply dropped.

    Is there something that can be done with iptables on GW1 to prevent this? Basically, an iptables-friendly version of "If the packet comes from GW2, but originated from H1, drop it."

    Note 1: It is preferable not to change anything on GW2.
    Note 2: H1 needs to be able to talk to both GW1 and GW2, and vice versa, but only GW2 should lead to the internet.

    TL;DR: H1 should only be allowed internet access via GW2, but still needs to be able to talk to both GW1 and GW2.

    EDIT: The interfaces on GW1 are br0.105 for the '99' network and br0.111 for the '111' network. The solution may or may not be obnoxiously simple, but I have not been able to produce the proper iptables syntax myself, so help would be most appreciated.

    PS: This is a follow-up question from this question
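    To make it concrete, this is the kind of rule I imagine, as a sketch only (I haven't verified that matching on the incoming interface like this is the right approach): drop any forwarded packet that claims to come from H1 but arrives on the '99' interface, since the only way that can happen is GW2 bouncing it back.

        # on GW1: a packet sourced from H1 should never arrive on br0.105;
        # if it does, it is GW2 handing it back to us, so drop it
        iptables -A FORWARD -i br0.105 -s 192.168.111.47 -j DROP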

    Read the article

  • Stronger laptop_mode in Linux

    - by Vi
    Can I have a stronger laptop mode in Linux? I want to spin down the hard drive and prevent it from spinning up even if something wants to read data that isn't in cache. In general I want to have these modes:

    1. Normal
    2. Current laptop mode
    3. Stronger laptop mode: spin up only when something needs to read data that isn't cached (and cache it). No spin-ups to write anything unless there is real memory pressure (exception: an explicit "sync" command in the console). The kernel is allowed to keep processes in D-sleep for 10 seconds for this.
    4. Forced laptop mode: do not spin up, period. Keep offending processes in D-sleep until I turn this mode off. As if there were a bomb instead of a hard drive.

    I also want to have access times tracked (mount -o atime), but I don't want the hard drive to be spun up only to update them.

    Are there settings or kernel patches that can get closer to this? Maybe I should write a special I/O scheduler for "forced laptop mode"? E.g.

        echo suspend > /sys/block/sda/queue/scheduler

    to lock the drive and

        echo cfq > /sys/block/sda/queue/scheduler

    to unlock it again?
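    For reference, the closest I've gotten with the stock knobs is below. It's a sketch, and the values are guesses rather than anything I've tuned carefully:

        # let dirty data sit in RAM much longer before being written back
        echo 5     > /proc/sys/vm/laptop_mode
        echo 60000 > /proc/sys/vm/dirty_writeback_centisecs
        echo 60000 > /proc/sys/vm/dirty_expire_centisecs
        echo 90    > /proc/sys/vm/dirty_ratio
        echo 80    > /proc/sys/vm/dirty_background_ratio

        # aggressive APM and a spindown timeout on the drive (-S 120 = 120 x 5 s = 10 min)
        hdparm -B 1 -S 120 /dev/sda

    This still doesn't stop reads of uncached data from waking the disk, which is exactly the gap between "current" and "stronger" above.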

    Read the article

  • saving data from a failing drive

    - by intuited
    An external 3½" HDD seems to be in danger of failing — it's making ticking sounds when idle. I've acquired a replacement drive, and want to know the best strategy to get the data off of the dubious drive with the best chance of saving as much as possible.

    There are some directories that are more important than others. However, I'm guessing that picking and choosing directories is going to reduce my chances of saving the whole thing. I would also have to mount it, dump a file listing, and then unmount it in order to be able to effectively prioritize directories. Adding in the fact that it's time-consuming to do this, I'm leaning away from this approach.

    I've considered just using dd, but I'm not sure how it would handle read errors or other problems that might prevent only certain parts of the data from being rescued, or which could be overcome with some retries, but not so many retries that they jeopardize saving other parts of the drive. I guess ideally it would do a single pass to get as much as possible and then go back to retry anything that was missed due to errors.

    Is it possible that copying more slowly — e.g. pausing every x MB/GB — would be better than just running the operation full tilt, for example to avoid any overheating issues?

    For the "where is your backup" crowd: this actually is my backup drive, but it also contains some non-critical and bulky stuff, like music, that isn't backed up anywhere else. The drive has not exhibited any clear signs of failure other than this somewhat ominous sound. I did have to fsck a few errors recently — orphaned inodes, incorrect free block/inode counts, inode bitmap differences, zero dtime on deleted inodes; about 20 errors in all. The filesystem of the partition is ext3.
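    The "single pass first, retries later" workflow I'm describing sounds a lot like what GNU ddrescue does, so for reference here is roughly how I'd expect to run it. This is a sketch I haven't tested, and the device name and paths are placeholders:

        # first pass: grab everything readable, skip over bad areas quickly
        ddrescue -n /dev/sdX /mnt/newdrive/old-disk.img /mnt/newdrive/rescue.map

        # second pass: go back and retry the bad areas a few times
        ddrescue -r3 /dev/sdX /mnt/newdrive/old-disk.img /mnt/newdrive/rescue.map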

    Read the article

  • Periodic internet connection drops

    - by user9647
    My setup is a DSL modem and a D-Link DI-524M router. I'm also using a Witopia VPN which runs through OpenVPN. I've been having trouble with the internet connection dropping very frequently. It comes back shortly, without even a router/modem/computer restart. This happens as frequently as every ten minutes. Occasionally (not often) it will last as long as an hour or two without dropping. When it drops, I can get it back almost immediately by clicking Reconnect in the OpenVPN GUI and letting that do its thing.

    It's worth noting that I'm in China. Calling support is a bit difficult because of that. Also, I don't really understand all of the router's software, although I've got it generally figured out.

    I've tried a bunch of stuff to diagnose and/or fix the problem. No success with any of the following:

    - I've power-cycled both the modem and the router.
    - I've tried an ethernet connection to the router.
    - I've connected without the VPN.
    - I've disabled IEEE authentication on all connections.
    - I've checked for viruses.
    - I've tried lifting it off the ground so as to prevent overheating.

    Read the article

  • iptables: How to combine DNAT and SNAT to use a secondary IP address?

    - by Que_273
    There are lots of questions on here about iptables DNAT/SNAT setups, but I haven't found one that solves my current problem.

    I have services bound to the IP address of eth0 (e.g. 192.168.0.20) and I also have an IP address on eth0:0 (192.168.0.40) which is shared with another server. Only one server is active, so this alias interface comes and goes depending on which server is active. In order to get traffic accepted by the service, a DNAT rule is used to change the destination IP:

        iptables -t nat -A PREROUTING -d 192.168.0.40 -p udp --dport 7100 -j DNAT --to-destination 192.168.0.20

    I also wish all outbound traffic from this service to appear to come from the shared IP, so that return responses will work in the event of an active-standby failover:

        iptables -t nat -A POSTROUTING -p udp --sport 7100 -j SNAT --to-source 192.168.0.40

    My problem is that the SNAT rule is not always run. Inbound traffic creates a connection-tracking entry like this:

        [root]# conntrack -L -p udp
        udp 17 170 src=192.168.0.185 dst=192.168.0.40 sport=7100 dport=7100 src=192.168.0.20 dst=192.168.0.185 sport=7100 dport=7100 [ASSURED] mark=0 secmark=0 use=2

    which means the POSTROUTING chain is not run and outbound traffic leaves with the real IP address as the source.

    I am thinking I can set up a NOTRACK rule in the raw table to prevent conntracking for this port number, but is there a better or more efficient way to make this work?

    Edit - Alternative question: Is there a way (in CentOS/Linux) to have an interface that can be bound to but not used, such that it can be attached to the network or detached when a shared IP address is swapped between servers?
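    For the record, the NOTRACK variant I have in mind looks roughly like this. It's a sketch, and I haven't confirmed how it interacts with the DNAT rule above (untracked packets bypass the nat table entirely, which may or may not be what I want):

        # raw table rules to skip connection tracking for this port
        iptables -t raw -A PREROUTING -p udp --dport 7100 -j NOTRACK
        iptables -t raw -A OUTPUT     -p udp --sport 7100 -j NOTRACK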

    Read the article

  • Suspending/Screen Going Off When Still In Use (Ubuntu & Arch)

    - by luke
    I have a laptop (HP Pavilion g6) that was running Ubuntu and for a while now (at least 6 months) has had a problem of randomly suspending whilst still in use, with a full battery and still being charged.

    Originally the problem was with Ubuntu, so I first attempted to disable suspend using every way I could find (GUI settings + dconf editor). This didn't work and it still kept suspending, so I ended up switching to Arch Linux. Unfortunately, not long after switching to Arch Linux I ended up experiencing the same problems. So yet again I modified the settings, this time in /etc/systemd/logind.conf, to prevent it from suspending, and this time it worked, kind of. Now I am experiencing the screen going off, and I have to change to a different tty (by using ctrl-alt-fx, which was something I also found I had to do sometimes when waking up from suspend in Ubuntu) to get the screen to come back on.

    The strange thing is this only happens when running a Linux distro, and only occasionally (e.g. it may happen once or twice a week at most). But when it does happen, it can happen multiple times in a row. And it only seems to happen when I am using it. This may just mean that it hasn't yet happened when I am not, but generally if I leave it to run something or play a video it doesn't occur; it's only when I am actively using it, regardless of which program I am using (e.g. it has occurred when using Firefox, vim, even when using a VirtualBox VM).

    At first I thought it could be the CPU temperature, but after monitoring it I discovered it occurred a lot of the time when my CPU was below 50 °C. I then checked /var/log/* but could not see anything related to it suspending, only a few standard entries from when it was woken up. I am really out of ideas and really hoping someone can help. Thanks in advance.
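    For anyone who wants to point at specific logind keys, these are the ones I believe I set to "ignore" (this is from memory, so treat the exact lines as an approximation rather than my actual config):

        # /etc/systemd/logind.conf (excerpt)
        HandleSuspendKey=ignore
        HandleLidSwitch=ignore
        IdleAction=ignore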

    Read the article

  • How to install (old) packages for Ubuntu 9.04?

    - by wchrisjohnson
    Based on some excellent feedback by Mark here (http://serverfault.com/questions/285598/should-i-clone-a-physical-server-to-create-a-vm-for-a-staging-server), today I was able to use the VMware Converter to clone my production server for a staging server. However, the NIC won't come up no matter what I do.

    I attempted to install VMware Tools, as I suspect that the fact that it is not installed might be preventing the NIC from working. (I have the NIC set as a vmxnet3 card in the VM settings.) The install failed because several dependencies were missing, as well as the Linux headers. Given that Ubuntu 9.04 has been EOL'd, the packages I need in order to get VMware Tools to install are no longer available, and I doubt the Ubuntu 9.04 install CD has them on it.

    What are my options? I'd rather not upgrade the version of Ubuntu yet, as the point of the VM right now is to maintain parity with the production server. Might I have better luck resetting the driver to use vmxnet2 instead of vmxnet3?

    Thanks in advance! Chris
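    One thing I'm about to try, mentioned here in case it's a dead end: I've read that packages for EOL'd releases get moved to the old-releases archive, so the sketch below points sources.list at it (repository lines per my understanding for 9.04 = jaunty, not verified):

        # /etc/apt/sources.list pointed at the archive for EOL'd releases
        deb http://old-releases.ubuntu.com/ubuntu/ jaunty main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ jaunty-updates main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ jaunty-security main restricted universe multiverse

        # then pull the build deps for VMware Tools
        sudo apt-get update
        sudo apt-get install build-essential linux-headers-$(uname -r)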

    Read the article

  • PHP unable to allocate memory.

    - by AlReece45
    On my way to the office this morning, every website on our shared VPS started giving the same error (several times, not the typical memory_limit error, which is fatal):

        Warning: Unknown: Unable to allocate memory for pool. in Unknown on line 0

    The shared server is a 64-bit OpenVZ container running cPanel. There are only ~6 VPSes on the host-- this is the largest one at only 4GB. The host itself has 24GB RAM. As the graphs below show, the memory usage on the host and VPS are both rather low. CPU usage/disk/host all seem to be normal. RlimitMem was set to 583653034, yet the memory usage is about the same as it usually is.

    Apache 2.2, PHP 5.2 (mod_php).

    Restarting Apache has corrected the problem for now. However, I'd like to prevent it from happening again, and I'm not sure what was limiting the memory. There seems to be plenty of memory: what caused this error?

    VPS Memory Usage
    Host Memory Usage

    APC Information:

        apc.ttl=0
        apc.shm_size=0
        apc.mmap_file_mask=(blank)
        1 Segment(s) with 32.0 MBytes (mmap memory, pthread mutex locking)
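    Since "Unable to allocate memory for pool" is the message APC emits when its shared-memory segment fills up, the workaround I'm considering is simply giving APC a bigger pool and a non-zero TTL. A sketch of the php.ini changes, with values picked out of the air rather than sized properly:

        ; APC settings (shm_size is plain MB on older APC; newer versions accept 64M)
        apc.shm_size=64
        apc.ttl=7200
        apc.user_ttl=7200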

    Read the article

  • How to make Thunderbird play nice with Google mail

    - by Christi
    Thunderbird and Gmail aren't exactly the best of friends. Gmail's tags mean that Thunderbird often downloads multiple copies of a single mail. Anything tagged in Gmail will appear in a folder related to that tag, in the "All Mail" folder, and possibly in the "Inbox" and "Sent Mail" folders too. Thus a mail with multiple tags could potentially be stored more than four times in a local Thunderbird cache. This can make searching difficult, and is obviously wasteful of disk space.

    The best solution I have come up with is as follows. First, operate a zero-inbox policy (i.e. use the inbox for processing live mail only and archive everything else), which eliminates an extra copy in the inbox. Secondly, configure Thunderbird not to sync the "Sent Mail" folder - this is a bit of a pain, since I actually find it quite useful to be able to look through just the mails I've sent, but a search can duplicate this functionality. In this way, most of the duplicates are removed, and only tagged mail is stored locally more than once.

    Ideally, however, I'd like only one copy of each mail to be stored locally. I am surprised Thunderbird doesn't store mail keyed by some sort of hash to prevent precisely this problem - but it wouldn't be compatible with the way the folders are mirrored in a local directory structure, I suppose. Can anyone think of a better way to get Thunderbird to cache a Google mail account locally and efficiently?

    Read the article

  • MS Excel 2010 on Windows XP - when I open a workbook the data is formatted differently than when I saved it

    - by Justin
    I haven't been able to find an answer to this. I have multiple files that I use regularly in Excel that now have a cell format of "Date". Every single cell in the entire workbook (all sheets) is now formatted as "Date". The problem is that I lost my formatting for percents, numbers, years, etc., and now everything is converted to dates (xx/xx/xxxx).

    I am able to open previously saved versions of a file (prior to the problem) and the cells are formatted as I intended them to be (percents, numbers, general, as well as dates). Since this has happened on a couple of different files recently, I am wondering how this is happening and how I can prevent it from happening in the future. I cannot cure the problem just by highlighting the entire sheet and converting back to General, because I lose all my percent and number formatting.

    Example (correct formatting):

        Month  Year  Working Days  MTD POS  Curr Rem
        May    2012      22           0     1,553,549
        June   2012      22           0     1,516,903
        June   2011      22           0     1,555,512
        June   2010      22           0     1,584,704

    Example (incorrect formatting):

        Month  Year                    Working Days                 MTD POS                      Curr Rem
        June   Tuesday, July 04, 1905  Wednesday, January 04, 1900  Wednesday, January 18, 1900  213,320
        July   Tuesday, July 04, 1905  Wednesday, January 04, 1900  Monday, January 16, 1900     314,261
        July   Monday, July 03, 1905   Wednesday, January 04, 1900  Sunday, January 15, 1900     447,759
        July   Sunday, July 02, 1905   Wednesday, January 04, 1900  Monday, January 16, 1900     321,952

    Sorry for the mess. Any suggestions?

    Read the article

  • Is there a Mac utility that does low level drive integrity check and repair?

    - by Puzzled Late at Night
    The PGP Whole Disk Encryption for Mac OS X Quick Start User Guide, version 10.0, contains the following remarks:

        PGP Corporation deliberately takes a conservative stance when encrypting drives, to prevent loss of data. It is not uncommon to encounter Cyclic Redundancy Check (CRC) errors while encrypting a hard disk. If PGP WDE encounters a hard drive with bad sectors, PGP WDE will, by default, pause the encryption process. This pause allows you to remedy the problem before continuing with the encryption process, thus avoiding potential disk corruption and lost data. To avoid disruption during encryption, PGP Corporation recommends that you start with a healthy disk by correcting any disk errors prior to encrypting.

    and

        As a best practice, before you attempt to use PGP WDE, use a third-party scan disk utility that has the ability to perform a low-level integrity check and repair any inconsistencies with the drive that could lead to CRC errors. These software applications can correct errors that would otherwise disrupt encryption.

    The PGP WDE Windows user guide suggests SpinRite or Norton Disk Doctor. What recourse do I have on the Mac?
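    The closest built-in equivalents I'm aware of are the filesystem check behind Disk Utility and a SMART self-test, though neither is a surface scan in the SpinRite sense. A sketch, assuming smartmontools is installed (e.g. via MacPorts or Homebrew), that disk0 is the target drive, and that the volume name is the default "Macintosh HD":

        # verify/repair the filesystem (repair generally can't run on the live boot volume)
        diskutil verifyVolume "/Volumes/Macintosh HD"
        diskutil repairVolume "/Volumes/Macintosh HD"

        # run an extended SMART self-test, then read back the results
        smartctl -t long /dev/disk0
        smartctl -a /dev/disk0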

    Read the article

  • Apache Virtual Host Issue

    - by Nik
    I think I hate Apache now, but on with the issue. It might be a configuration error on my end or just my inability to see what's right in front of me, but I'm trying to configure a sub-domain in Apache and, no matter what, it always redirects the sub-domain to the web root of the main domain. My configuration is posted below (and yes, the domain name information was purposefully modified):

        <VirtualHost *>
            DocumentRoot /var/www/root/
            ServerName example.com
            <Directory /var/www/root/>
                allow from all
                Options +Indexes
            </Directory>
        </VirtualHost>

        <Directory /usr/share/squirrelmail>
            Options Indexes FollowSymLinks
            <IfModule mod_php5.c>
                php_flag register_globals off
            </IfModule>
            <IfModule mod_dir.c>
                DirectoryIndex index.php
            </IfModule>
            # access to configtest is limited by default to prevent information leak
            <Files configtest.php>
                order deny,allow
                deny from all
                allow from 127.0.0.1
            </Files>
        </Directory>

        # users will prefer a simple URL like http://webmail.example.com
        <VirtualHost *>
            DocumentRoot /usr/share/squirrelmail/
            ServerName squirrelmail.example.com
        </VirtualHost>

    Read the article

  • Allowing Sharepoint to relay email through Exchange

    - by dunxd
    I have written a SharePoint 2007 web part that sends a field from a form to a specified email address. I have got the form working as I require, but at present it can only send to internal email addresses. SharePoint's email functions use SMTP to send to our Exchange 2003 server, but because our Exchange server is configured to prevent relaying, if the To: address is not at a local domain, it won't deliver the mail.

    I don't want to open up our Exchange server to be a completely open relay. What I want is to allow my SharePoint servers to send mail to addresses outside our domain. The following seem possible:

    - Allow all mail sent from one of the SharePoint servers to be relayed
    - Allow all mail from a web application pool account to be relayed (I am not sure that the application pool authenticates to the SMTP server, though)
    - A combination of the two

    Can anyone advise on the best way of doing this? Is setting up a dedicated SMTP server on the Exchange server (not a separate physical server) the right way of going about this?

    EDIT: Note this is for Exchange 2003. There is a post on setting this up in Exchange 2007 which appears to have recognised the frequent requirement to do what I need. It doesn't give much detail on 2003, though. Can anyone expand?

    Read the article

  • How do I rsync an entire folder based on the existence of a specific file type in that folder

    - by inquam
    I have a server set up that receives movies into a folder, and I then serve these movies using DLNA. But all kinds of files end up in that initial folder: pictures, music, documents, etc. I thought I'd fix this by running the following inside that folder:

        rsync -rvt --include='*/' --include='*.avi' --include='*.mkv' --exclude='*' . ../Movies/

    This works: it scans the given folder and moves all the found movies with the given extensions to the Movies folder. But I wonder if there is any way to tell rsync that, if a folder is found that contains a movie with one of the given extension types, it should sync the entire folder, including other files such as .srt. This is to make it easier for me to get subtitles moved along with the movie. (I have a solution figured out via a script made in PHP (yes, I actually do most of my scripting on Linux in PHP, a habit that stuck a long time ago), but if rsync can handle it from the start, that would be super.)

    Also, I have noticed that this rsync invocation actually copies all the root folders in the given folder: if no movie is in a folder, it will create an empty folder. How do I prevent rsync from doing this and save myself the trouble of deleting all the empty folders in Movies?
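    For the empty-folder part, the option I'm about to test is --prune-empty-dirs (-m); for the "whole folder if it contains a movie" part, a find loop is the best sketch I've come up with so far. Both are untested, so treat this as a rough idea rather than a working recipe:

        # skip directories that would end up empty after the filters are applied
        rsync -rvt -m --include='*/' --include='*.avi' --include='*.mkv' --exclude='*' . ../Movies/

        # copy every directory that contains at least one movie, subtitles and all
        find . -type f \( -name '*.avi' -o -name '*.mkv' \) -printf '%h\n' | sort -u |
            while read -r dir; do
                rsync -rvt "$dir" ../Movies/
            done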

    Read the article

  • Does Google sometimes ignore "special" characters, possibly depending on your location or font type settings? [closed]

    - by RLH
    TLDR: Google tends to ignore special characters in my search strings. Is there anything that I can do about it, and is it possibly happening because Google makes certain assumptions based on my default text-encoding settings and my location?

    I just posted this question over at StackOverflow. I had found a C preprocessor operator that I'd never seen before. As I should, I Googled it and tried to find out more. I attempted various search terms, all variations of "C Operator ##" (sometimes with and sometimes without the double quotes). Google didn't bring back anything of use, so I posted my question on SO. As you can see from the comments, someone mentioned a search string (ironically, one which I did try to search) and stated that I could have even hit the "I'm Feeling Lucky" button and gotten my answer. The problem is I did search that, and the results that I received were far more basic; even after following the top results and searching the resulting pages, I could find nothing referencing the string "##".

    I'm not posting this question to complain, but it does provide an empirical example of something I've seen before that really bugs me: Google often ignores special characters in my search strings, and the results are often useless. As a developer I often need to search for string values containing non-alphanumeric characters. Some characters (like the underscore or hyphen) can be used without trouble. However, other characters (such as the ampersand, caret, tilde and pound sign) are often ignored in my query strings. Is there a way to prevent this from happening so that I can get meaningful results from Google?

    NOTE: I stay logged into Google and I live in the US. I wonder if Google detects some form of text-encoding setting or derives my results from certain localized, text-based assumptions. Regardless, I would like Google to search for what I give it. Is there anything that I can do to improve my results?

    Read the article

  • Unwanted forced authentication after server restart (Win 2k3)

    - by Felthragar
    We're running a Windows Server 2003 R2 Standard 64-bit edition server. On this server we're running a file server and the ability to allow remote login to our network through VPN. We do not currently use a domain setup; all user accounts are local accounts on the server. Each employee is given a unique account to log in to the server. The password is a randomly generated 16-character string, which makes it hard to remember, so what we've done is basically have the password stored on the client machine (standard "Remember Me" functionality). This has worked well.

    However, last night our server automatically restarted after an automatic update. After that, some of our employees, myself included, had to re-authenticate with the server, submitting our credentials again. Then again, some others did not have to re-authenticate. Do you have any idea why this is? Is there a setting to prevent it? I've checked the logs but couldn't find anything of interest; then again, I'm not really sure what I'm looking for. Thanks in advance; I'll try to answer any additional questions you may have.

    Edit: When I say "login" or "authenticate", I mean through the standard Windows SMB protocol.

    Edit 2: OK, new day. Tonight the server restarted again, and the same two clients that had to re-authenticate yesterday had to re-authenticate today as well. The rest did not.

    Read the article

  • Mail server checklist

    - by Jeff
    We recently ran into some issues with our mail server setup, so I'm preparing a list of actions that we should enforce and use in order to maintain a proper email solution within our company. We have around 80 Exchange users, and send mass emails out almost on a monthly basis to 20,000+ customers each time.

    The checklist I currently have:

    1) McAfee MX Logic 'cloud' anti-spam functionality for incoming messages.
    2) Antivirus on each computer in the company.
    3) Antivirus on the Exchange and DNS servers.
    4) Set up an SPF record.
    5) Set up DKIM.
    6) Set up DomainKeys.
    7) Set up Sender ID.
    8) Submit the SPF record to Microsoft, Yahoo, etc. for 'whitelist' purposes.
    9) Configure size limits for messages in Exchange to safe numbers.
    10) I have 2 outside IPs for my email server, in case one gets blacklisted; switch to the backup.
    11) My internet site rests on a different IP than the mail server.
    12) All mass emails for the company are sent through a 3rd-party company (listtrak.com).
    13) Set up the domain aliases media, enews, and bounce for the 3rd-party mass mail software.
    14) Verify the setup using [email protected]
    15) Configure group policy and our opendns.org account to prevent unwanted actions and website viewing.

    Mass emails:

    1) Schedule them to send different amounts at different times (1,000 at 10am, 1,000 at 4pm, 1,000 at 10am the next day).
    2) Set up user preferences; let customers decide what they want to receive, etc. (their interests).
    3) Send a more steady flow of email, maybe 100 a week with top new products instead of 20,000 every other month.

    If anyone has suggestions or additions/subtractions to this checklist, they are greatly appreciated. Thank you.
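    As a quick self-check for items 4 through 8, I've been using dig from a Linux box to confirm the DNS records actually resolve. A sketch only: example.com and the 'selector1' DKIM selector are placeholders for our real domain and selector.

        # SPF / Sender ID (published as TXT records at the zone apex)
        dig +short TXT example.com

        # DKIM / DomainKeys selector record
        dig +short TXT selector1._domainkey.example.com

        # DomainKeys policy record
        dig +short TXT _domainkey.example.com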

    Read the article

  • Nginx, Apache, MySQL, Memcache on a server with 4G RAM. How do I optimize to have enough memory?

    - by TomSawyer
    I have one dedicated server with Nginx as a proxy in front of Apache, plus Memcache and MySQL, and 4G of RAM. Lately the number of visitors to my site hasn't increased, but the server always gets overloaded at certain times of day (9AM - 3PM). RAM in use increases second by second until it is full; at that moment the server becomes overloaded, and I have to kill the Apache and MySQL services and reboot to get free memory. That's the cycle.

    Here is my RAM in use at the moment: 160 (nginx), 220 (apache), 512 (memcache), 924 (mysql). Here are the process counts: 4 (nginx), 14 (apache), 5 (memcache), 20 (mysql). And here is my my.cnf. Can someone help me optimize it?

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        skip-locking
        skip-networking
        skip-name-resolve
        # enable log-slow-queries
        log-slow-queries = /var/log/mysql-slow-queries.log
        long_query_time=3
        max_connections=200
        wait_timeout=64
        connect_timeout = 10
        interactive_timeout = 25
        thread_stack = 512K
        max_allowed_packet=16M
        table_cache=1500
        read_buffer_size=4M
        join_buffer_size=4M
        sort_buffer_size=4M
        read_rnd_buffer_size = 4M
        max_heap_table_size=256M
        tmp_table_size=256M
        thread_cache=256
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=16M
        thread_concurrency=8
        myisam_sort_buffer_size=128M
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0

        [mysqldump]
        quick
        max_allowed_packet=16M

        [mysql]
        no-auto-rehash

        [isamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [myisamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [mysqlhotcopy]
        interactive-timeout

        [mysql.server]
        user=mysql
        basedir=/var/lib

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
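    For context on why I suspect MySQL, this is the back-of-the-envelope worst case I calculated from the my.cnf above. It is only a rough upper bound (each buffer is allocated per need, not up front), so treat the numbers as an illustration:

        # per-connection buffers from my.cnf:
        #   read_buffer 4M + join_buffer 4M + sort_buffer 4M + read_rnd_buffer 4M
        #   + thread_stack 0.5M  = roughly 16.5M per connection
        # worst case if all max_connections=200 threads allocate their buffers:
        echo "$(( 200 * 17 )) MB"   # about 3.4 GB for connection buffers alone
        # plus query_cache_size 16M, tmp/max_heap tables up to 256M each,
        # memcached's 512M, Apache and nginx: comfortably past 4G under load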

    Read the article

  • Web service for checking out / leasing a token

    - by JP Slavinsky
    I run a web site on AWS that has a number of web servers (say 4 of them) running behind a load balancer. For this particular web site, I have one New Relic license key for doing instrumentation. At any one time, I only want one of the 4 web servers to be using the key. If that server goes offline, I want one of the remaining web servers to be able to begin using the license key.

    Does anyone know of a service that would let me manage this process? The service would not particularly need to store the key itself, but rather just manage the fact that only one web server can lease out the right to use the key at any time: something where the web servers would have to come back every few minutes and renew their lease, and if they don't, it becomes available to someone else. I just realized I could maybe accomplish a hacked version of this using a file on S3, but that doesn't prevent race conditions, etc., and is definitely hackish. Any thoughts welcome. FWIW, this site is built on Ruby on Rails.

    Thanks! JP

    Read the article

  • Euro character messed up during FTP transfer

    - by djechelon
    My customer is using a very outdated e-commerce management system on my hosting service. The vendor no longer provides any support for the product.

    Brief explanation: the shop website, which claims to run on a LAMP stack, is built by an old Visual Basic Windows application running on MS Access. The user constructs the shop, defines the HTML template, adds products and categories, etc. Then the VB exe builds the PHP pages (one for each template page) and the SQL script to run on MySQL. It also uploads everything via FTP and runs the installation/upgrade script on its own.

    The problem:

    Browsing the website, many products' descriptions are cut off before the euro sign. For example, what was supposed to be "Product price €1000" becomes "Product price".

    The analysis:

    - MySQL contains the description cut off just before the € sign, so it's not PHP's fault.
    - The Access databases contain the full descriptions with the € sign, so it's not the webmaster writing bad descriptions or eDisplay cutting them.
    - The SQL script that will run once the site gets uploaded, as stored on my local machine before upload, contains the € sign.
    - The same script, after being FTPed by eDisplay and opened with nano over SSH, shows the € sign messed up like this: ^À
    - The vsftpd log reports (obfuscated for privacy):

          Sat Dec 15 11:16:57 2012 22 xxx.xxx.128.13 1112727 /srv/www/domains/xxxxxx.it/htdocs/db.sql b _ i r xxxxxxx ftp 0 * c

      which seems to be a binary transfer (and also a huge security vulnerability, because you can download the whole database over unauthenticated HTTP).
    - The eDisplay internal FTP client provides no option for ASCII/binary transfer modes.
    - [Add] Manually uploading the SQL file via SFTP also mangles the euro sign.
    - [Add2] Manually uploading with the Xftp client in explicit ASCII mode doesn't fix it either.

    It looks like the file gets uploaded as binary. Perhaps on the customer's previous host it all worked fine because that was a Windows host.

    The server: it's an Azure virtual machine running openSUSE 12.2 with both vsftpd and openSSH.

    The question: without asking the customer to manually upload files using FileZilla, or replacing € with &euro; (he refuses), what can I do on the server side to prevent vsftpd from screwing up the euro sign?

    Read the article

  • install Oracle’s VirtualBox

    - by Shamith c
    I am trying to install Oracle's VirtualBox. I used:

        sudo dpkg -i virtualbox-4.2_4.2.4-81684\~Ubuntu\~quantal_i386.deb

    and I get the following errors:

        (Reading database ... 226237 files and directories currently installed.)
        Preparing to replace virtualbox-4.2 4.2.4-81684~Ubuntu~quantal (using virtualbox-4.2_4.2.4-81684~Ubuntu~quantal_i386.deb) ...
        Unpacking replacement virtualbox-4.2 ...
        dpkg: dependency problems prevent configuration of virtualbox-4.2:
         virtualbox-4.2 depends on libc6 (>= 2.15); however:
          Version of libc6 on system is 2.13-20ubuntu5.
         virtualbox-4.2 depends on libqtcore4 (>= 4:4.8.0); however:
          Version of libqtcore4 on system is 4:4.7.4-0ubuntu8.1.
         virtualbox-4.2 depends on libqtgui4 (>= 4:4.8.0); however:
          Version of libqtgui4 on system is 4:4.7.4-0ubuntu8.1.
        dpkg: error processing virtualbox-4.2 (--install):
         dependency problems - leaving unconfigured
        Processing triggers for ureadahead ...
        Processing triggers for shared-mime-info ...

    How do I solve this?
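    Reading the messages, the .deb appears to be built for a newer Ubuntu (quantal, which wants libc6 >= 2.15) than the installed system has, so this sketch shows the two directions I would try: let apt resolve what it can, or fetch the VirtualBox build that matches the installed release instead. The download page is the standard VirtualBox Linux downloads page; which exact build matches is something I'd still have to check.

        # confirm which Ubuntu release this system actually is
        lsb_release -a

        # ask apt to resolve the missing dependencies for the half-installed package
        sudo apt-get -f install

        # if libc6 >= 2.15 is not available for this release, the quantal build will
        # never install; fetch the .deb built for the release reported above from
        # https://www.virtualbox.org/wiki/Linux_Downloads instead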

    Read the article
