Search Results

Search found 1872 results on 75 pages for 'tom dalling'.


  • BGP Multihomed/Multi-location best practice

    - by Tom O'Connor
    We're in the process of designing a new iteration of our network in which we improve resiliency by adding a second datacentre with an identical configuration of servers to our primary location. To achieve network connectivity, we're looking into a couple of possible methods. See earlier questions http://serverfault.com/questions/86736/best-way-to-improve-resilience and http://serverfault.com/questions/101582/dns-round-robin-failover-and-load-balancing I'm pretty convinced that BGP is the right way to go about this, and this question is not about RRDNS. 1) If we have 2 locations, do we announce the same IP address block from both locations? 2) Suppose we did this, but had a management SSH interface on x.x.x.50 in datacentre A and on x.x.x.150 in datacentre B. What is the best-practice mechanism for achieving this? If I were nearest to A, all my traffic would go to x.50; but if I attempted to connect to x.150, I'd not be able to, because that address would only be valid at B, not at A. Is the best solution to announce 2 different netblocks, one at each location (bringing back the need for RRDNS), or to announce a single block and run some form of VPN between the two sites for management traffic?
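
    For illustration, the "single block plus inter-site VPN for management" option could be as simple as a GRE tunnel between the two datacentres. A minimal sketch, assuming Linux routers; the public endpoints, tunnel addressing and interface name are placeholders, not details from the question:

        # Datacentre A side (mirror at B with the addresses swapped)
        ip tunnel add dc-mgmt mode gre local 192.0.2.1 remote 198.51.100.1 ttl 255
        ip link set dc-mgmt up
        ip addr add 10.255.0.1/30 dev dc-mgmt
        # Reach B's management address (x.x.x.150 in the question) via the tunnel
        ip route add x.x.x.150/32 dev dc-mgmt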

    Read the article

  • After Effects: Mac to PC

    - by Tom
    Is it possible to open an After Effects file that was created on a Mac with a PC? I don't know which version of AE was used on the Mac side, but I want to open it with CS3 on a PC laptop.

    Read the article

  • Windows NT workstation on AD domain

    - by Tom
    We run a Windows NT workstation connected to special manufacturing equipment that everyone is deathly afraid to touch. It has custom software and special cards inside the machine, making a rebuild impossible. The problem is, we are migrating from an NT domain to an AD domain, and this workstation still needs access to storage on the network (AD computers). How should I go about doing this after we get rid of our NT domain controller? Upgrading to 2000 is not an option (so says management). I know, I know, if it dies we are in trouble. But that's management's choice; we just need to get rid of this NT domain.
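
    One wrinkle worth noting, purely as an assumption to verify: 2008 R2 domain controllers reject NT4-era secure-channel cryptography by default, so an NT4 machine joined to the AD domain needs the "Allow cryptography algorithms compatible with Windows NT 4.0" policy enabled. A sketch of the underlying registry change on the DCs (in practice this would be set via Group Policy):

        rem Assumption: re-allow NT4-style secure channel crypto (see KB942564)
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters /v AllowNT4Crypto /t REG_DWORD /d 1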

    Read the article

  • Error adding 4TB LUN (Raw Device Mapping) to ESX4 VM

    - by Tom Gardiner
    Hi guys, I'm trying to map an existing 4TB LUN from a Fibre Channel SAN through to a VM in my ESX4 environment. It keeps telling me that the VMDK file size exceeds the maximum size supported by the datastore. I've tried physical compatibility mode, and also both virtual styles. I'm a little confused by this, as we had the same LUN mapped through to another VM when we were running ESX 3.5... I've also noticed that some of my other raw mappings are generating extremely large VMDK files on the ESX servers. Does anyone know if this change in behaviour is intentional? And if so, why? It doesn't seem to me that, if the LUN is mapped directly to the VM, its size should be relevant. We're running 4.0.0 build 236512 and 4.0.0 build 219382, and I've not had any success on either. Any insight or advice would be much appreciated! TG
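
    For reference, a sketch of how a physical compatibility mode RDM pointer is normally created from the service console with vmkfstools; the device ID and datastore paths here are placeholders, not values from the question:

        # -z = physical (pass-through) RDM; the .vmdk created is only a pointer file,
        # so it reports the LUN's size while using almost no datastore space
        vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXX /vmfs/volumes/datastore1/myvm/myvm_rdm.vmdk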

    Read the article

  • Hints on diagnosing performance issue in OpenBSD firewall

    - by Tom
    My OpenBSD 4.6 pf firewall has started having really bad performance in the past few weeks. I've isolated the firewall (as opposed to the WAN connection, switch, cable, etc.) as the problem, but need a hint on how to further diagnose or fix it. The facts:

    - The normal setup is: DSL Modem - FW Ext. NIC - FW Int. NIC - Switch - Laptop
    - The normal setup described above gives only 25 Kbps!
    - Plugging the laptop straight into the DSL modem gives a 1 Mbps connection (full speed, as advertised), so the DSL connection seems to be OK.
    - Plugging the laptop directly into the firewall's internal NIC (bypassing the switch) also gives only 25 Kbps, so the switch does not seem to be the problem.
    - I've replaced the ethernet cables, but it didn't help.

    Here's the weird thing: reloading the ruleset (/sbin/pfctl -Fa -f /etc/pf.conf) causes the laptop's connection to go up to 1 Mbps (i.e. full speed) for a few minutes before it gradually degrades back down to 25 Kbps again. Any ideas on what's wrong or how I could further diagnose the problem?
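
    A diagnostic hint, offered as a sketch rather than a definitive answer: pfctl -Fa flushes the state table as well as the rules, so the "fast for a few minutes, then degrades" pattern is consistent with states accumulating toward a limit. A few low-impact checks:

        # Current state count and rate counters vs. the configured limits
        pfctl -si
        pfctl -sm
        # Interface-level errors on both NICs
        netstat -i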

    Read the article

  • Ubuntu - why would /var/log/dmesg stop updating after boot? It does not show the panic/cpu_hung errors which the console shows

    - by Tom G
    So I have an Ubuntu 10.04 server VM on a host, with the latest 2.6.38-15-server kernel. /var/log/dmesg records only the bootup and stops updating after that. It will not show the trace/cpu_hung errors I am trying to troubleshoot. /var/log/dmesg.0, dmesg.1: nothing. I did a string search for the text that displays on the console during the crash, and NOTHING gets logged anywhere in /var/log/*. I have to call the provider and ask them to take a screenshot of the console, since nothing shows in dmesg. Why would /var/log/dmesg not record kernel panics and the like?
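
    Background, stated as a general assumption rather than a diagnosis: on Ubuntu, /var/log/dmesg is a one-shot snapshot written at boot; runtime kernel messages flow through rsyslog into kern.log, and a hard panic often dies before anything reaches disk. One common workaround is netconsole, which streams kernel messages over UDP to another machine. A sketch with placeholder addresses and MAC:

        # Format: local-port@local-ip/interface,remote-port@remote-ip/remote-mac
        modprobe netconsole netconsole=6665@10.0.0.5/eth0,6666@10.0.0.9/00:16:3e:aa:bb:cc
        # On the receiving machine (OpenBSD-netcat syntax; traditional nc wants -p):
        nc -l -u 6666 | tee panic.log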

    Read the article

  • When you add a new node to a cluster, what do you do to the default services?

    - by Tom
    Every time I acquire a new server or reload an existing server (CentOS 6.x), I have to decide which services to leave running in chkconfig. It seems the datacenter staff aren't using a single edition of CentOS, and sometimes the default running services differ. I'm always inclined to turn off every service I've never heard of, but then I think: if it's not broken, don't fix it. How do you deal with the default services in a new installation?
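
    For comparison across machines, a quick way to snapshot what chkconfig has enabled; a sketch, with an example service name:

        # List everything set to start in the usual multi-user runlevel
        chkconfig --list | grep '3:on'
        # Disable a service you've decided you don't need, now and at boot
        chkconfig bluetooth off && service bluetooth stop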

    Read the article

  • How does one get rid of fishy behavior in Windows?

    - by Tom Wijsman
    After I booted my computer this morning, water suddenly flooded from the top of the screen, after which some fish dropped into it. Now I can barely see what I am doing because the water distorts the view. Sometimes the fish follow the cursor, so I need to move it away or wait for the fish to mind their own business. This makes it very annoying to use my system.

    What have I tried?

    - Rebooting the system. This caused the water to drain from the desktop. Upon reboot, the screen was refilled with water and fish.
    - Attaching another monitor. Same problem; it fills that monitor as well and gives me extra fish.
    - Clicking the fish. Makes them turn direction.
    - Right-clicking the fish. Changes the colour of the fish; not really useful.
    - I'm locked out of changing the background or screen saver settings. Hence, I had to post the lady below...
    - Safe mode doesn't save me from the fish. It does give me another background there, but I can't screenshot easily.
    - Other user accounts experience this as well. The Guest account seems to experience more fish than the other accounts.
    - Using HijackThis, OTL Timekeeper List, Sysinternals Autoruns, RootkitRevealer, ShellExView and similar tools, I can't seem to find any entries that could be it; the Sysinternals tools show everything as verified.
    - I'm suspecting this to be a driver problem, but randomly removing drivers doesn't seem to alleviate it. Removing the graphics drivers makes my screen black. While that could be considered a solution, it's not what I want.
    - Changing the time/date settings does not seem to affect the fish either. Setting the time a few years into the future, I would have expected the fish to be dead. But the same fish are still there... They simply won't die!
    - Tried to get used to them. They really bother me; it looks like they require food. I don't know how to give them food, but apparently they get it elsewhere during reboot...
    - Tried disabling my mouse pointer and using the keyboard. This works; they now swim around more randomly. They do pay attention to big changes on the screen, so I need to type slowly, or otherwise I can't see what I'm typing exactly.
    - Held my laptop upside down. This seems to affect the water and fish, but the water stays in the screen. They seem super resistant to water sickness and confusion, though...

    What does the problem look like?

    What do I need? A way to get rid of these fish on my screen forever; they are really annoying me a lot and I'm about to crack the screen to see if that makes them escape. Do you have any idea why this problem is occurring?

    What are my considerations? Buying a USB fish tank could make the fish leave the screen; I am uncertain, though, whether the fish could leave the screen through the USB cable. Using the FISh (programming language), which seems to provide EXPRESSIVE POWER and EFFICIENT EXECUTION, I can however not find any examples of how to remove fish.

    What are my specifications? I'm using a Sony Vaio Fishy laptop. Sony VAIO VGN-Fishy, VAIO. Processor: 1337 MHz, Intel Core 2 Duo, T5432, 1 MB, Intel PM965 Express, 667 MHz. Memory: 1024 MB, DDR2-SDRAM, 667 MHz, 2 x 1024 MB, 4 GB. Disk Drive: 50 GB, Serial ATA, 5400 RPM. Storage Media: Memory Stick™, Memory Stick PRO™. Display: 15.4 ", 1280 x 800 pixels, LCD. Video: GeForce 8400M GT, 128 MB. Optical Drive: DVD±R/RW DL, 24 x, 24 x, 24 x, 6 x, 4 x, 6 x, 4 x, 5 x, 5 x, 8 x, 8 x, 8 x, 8 x, 6 x, 6 x, 24 x, 24 x, 24 x, 16 x. Camera: 1.3 MP, 30 fps. Networking: 2.0+EDR. Keyboard: Touchpad, AZERTY.
    Operating System/Software: Windows Vista Home Premium. Security: Kensington. Weight & Dimensions: 98.8 oz (2800 g), 14 " (355.8 mm), 10 " (254.4 mm), 0.98 " (24.9 mm). Other features: 100 BASE-TX/10 BASE-T, 802.11a/b/g/n/Draft n, V92/V.90, fishes. Plz! Help me...

    Read the article

  • Puppet won't execute command

    - by tom
    Puppet 0.25.4 on Ubuntu point-blank refuses to execute the following command:

        exec {"initiate replica set":
          command => "echo 'rs.initiate()' | mongo",
          path    => ["/usr/bin","/usr/sbin","/bin"],
          user    => "root",
          require => Class["mongodb"]
        }

    I can execute the command as root myself, so I'm guessing it's perhaps an issue with the shell. Unfortunately, upgrading Puppet isn't an option (and causes other issues anyway). I've tried specifying explicit paths to the binaries instead of relying on the path parameter, and also changing the command to: bash -c \"echo 'rs.initiate()' | mongo\" Still doesn't work. Any ideas? I get an error message saying something like "failed to change from notrun to 0".
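
    One way to narrow this down, offered as a sketch: re-run the exact command outside Puppet, but with the stripped-down environment a Puppet run sees. The PATH below mirrors the manifest's path parameter; everything else is an assumption:

        # Empty environment, explicit shell, same PATH as the exec resource
        env -i PATH=/usr/bin:/usr/sbin:/bin /bin/sh -c "echo 'rs.initiate()' | mongo"
        echo "exit status: $?"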

    Read the article

  • How to improve the HTML formatting in Evolution mail client

    - by Tom
    I have a question about viewing HTML emails in the Evolution mail client. Basically, I am receiving some emails that look lovely in Thunderbird but not in Evolution, because Evolution's HTML rendering isn't as advanced. Does anyone know how to improve the HTML rendering of Evolution? E.g. a plugin, tip, code patch, etc. The closest I've got is to right-click the email, "Save As...", save it as an HTML file, then open it in Firefox. Not exactly streamlined! Which emails can't it display well? We use the Subversion revision control system, which is set up to send an email whenever someone commits, via svnnotify, all nicely coloured via the --handler HTML::ColorDiff -d parameters. When Evolution fails to use the colours, I find it very hard to read the raw diff.

    Read the article

  • What causes winsock 10055 errors? How should I troubleshoot?

    - by Tom Kerr
    I'm investigating some issues with winsock 10055 errors on a chain of custom applications (some of which we control, some not) and was hoping to get some advice on techniques to troubleshoot the problem:

        No buffer space available. An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.

    From research, non-paged pool and ports seem to be the only resources which can cause this error. Is there another resource which might cause 10055 errors? Currently, we have perfmon counters set up on the applications, and non-paged pool usage looks low in most circumstances. Open TCP connections look low, and I am unaware of another way to monitor ports. Since it only happens in production, we are unable to use more invasive counters. Is there some other tool or procedure you would recommend to diagnose which application is causing the issue?
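
    Ephemeral-port exhaustion is one candidate that plain connection counts don't surface directly; a low-impact sketch using stock Windows tools (treat the counter name and any threshold as assumptions to adapt):

        rem Tally sockets stuck in TIME_WAIT; ports aren't reusable until these expire
        netstat -an | find /c "TIME_WAIT"
        rem Non-paged pool can be trended with the perfmon counter "Memory\Pool Nonpaged Bytes"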

    Read the article

  • Fedora 11 server won't boot from SATA disk, won't boot from CD, BIOS configuration problems

    - by Tom
    Hi all, Yesterday our fc11 file/print server didn't boot, and had stopped on the BIOS page with a configuration problem. (With a distinct lack of foresight) I reset the BIOS settings to default without recording the message, and booted the server. The server ran until it was rebooted this morning, when it failed to mount the root partition from the SATA disk. It also failed to boot from a known-good diagnostics CD. After a few more tries, it now fails part way through the Phoenix - AwardBIOS screen where it lists the SATA/IDE devices, showing garbage for the identity of one of the disks, which should actually read "none". It looks like the motherboard has gone kaput. The motherboard is an EVGA NF790i; are there any diagnostic tools that I can use to determine this? (I would prefer not to send the motherboard back, only to discover that it is the RAM or the CPU.) PS: I can't get it to boot from the memtest disk, so I can't run that diagnostic. Thanks!

    Read the article

  • apply search criteria to whole cells

    - by Tom
    I am trying to use MATCH() and/or LOOKUP() in LibreOffice Calc. I have the "Search criteria = and <> must apply to whole cells" and "Enable regular expressions in formulas" options selected. The item I am looking for is a multi-word string. How do I provide a search criterion as a cell reference (E3) to LOOKUP(E3, B1:B20, C1:C20) and have it match the value in B1:B20 EXACTLY? Right now it is matching partial strings rather than matching the cell values exactly.
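
    For comparison, a sketch of the exact-match form of this lookup (ranges as in the question): MATCH with match type 0 searches for the first exact match without requiring sorted data, and INDEX returns the value from the corresponding row. Note that with "Enable regular expressions in formulas" switched on, a text criterion may still be interpreted as a regex, so that option may need disabling:

        =INDEX(C1:C20, MATCH(E3, B1:B20, 0))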

    Read the article

  • Join ActiveDirectory (Win 2k8R2) to OpenDirectory (Snow Leopard)

    - by Tom O'Connor
    The vast majority of questions regarding the interoperability of Active and Open Directory involve getting Mac clients to see an AD and authenticate against it. What we'd like to do is get a Windows 7 workstation to auth completely against Open Directory. We tried setting it up as an NT4-type PDC, and that doesn't work satisfactorily. We tried using pGina and the LDAP backend, which allows authentication but has no support for authorization; as a result, if we mount an NFS share, the user has the rights to do anything they damn well please. Not ideal for security (totally bloody unacceptable, actually). We tried using a Samba server (a newer version than on the Open Directory server) as an intermediary, so that it knows about the LDAP server on the OD server but uses Samba 4 instead of v3. That didn't work either. We could log in but couldn't mount, and if we did, we had the same rights as with pGina. If we right-click the mounted drive in Windows and look at the NFS UID, it returns -2, not the correct (mapped) UID. So the final plan I've got is to use an Active Directory inside a Windows 2008 R2 virtual machine. What I want to achieve is to have the Active Directory sync its user data from Open Directory (read-only would be fine). That way, we'd have the ability to connect Windows 7 clients to a "virtual domain" which would actually just grab information from OD's LDAP. All the information I've found is about how to go the other way. Does anyone know how we can do this?

    Read the article

  • Read access to Active Directory property (uSNCreated)

    - by Tom Ligda
    I have an issue with read access to the uSNCreated property when doing LDAP searches. If I do an LDAP search as a user that is a member of the Domain Admins group (UserA), I can see the uSNCreated property for every user. The problem is that if I do an LDAP search as a user (UserB) that is not a member of the Domain Admins group, I can see the uSNCreated property for some users (UserGroupA) but not for others (UserGroupB). When I compare the users in UserGroupA to the users in UserGroupB, I see a crucial difference in the "Security" tab: the users in UserGroupA have "Include inheritable permissions from this object's parent" unchecked, while the users in UserGroupB have that option checked. I also noticed that the users in UserGroupA were created earlier, and the users in UserGroupB were created recently. It's difficult to quantify, but I estimate the border in creation time between UserGroupA and UserGroupB is about 6 months ago. What can cause user creation to default to having that security property checked as opposed to unchecked? A while back (maybe around 6 months ago?) I changed the domain functional level from Windows Server 2003 to Windows Server 2008 R2. Would that have had this effect? (I can't exactly downgrade the domain functional level to test it.) Is this security property actually the cause of the issue with read access to the uSNCreated property on LDAP searches? It seems correlated, but I'm not sure about causation. What I want in the end is for all authenticated users to have read access to the uSNCreated property for all users when doing an LDAP search. I would also be OK with granting read access for that property to an AD group; then I could control access by adding members to the group.
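
    For the group-based route, a hedged sketch of granting read access to a single property with dsacls; the container DN and group name are placeholders, and whether this behaves as expected for a system attribute like uSNCreated is an assumption worth verifying in a lab first:

        dsacls "CN=Users,DC=example,DC=com" /I:S /G "EXAMPLE\AttrReaders:RP;uSNCreated;user"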

    Read the article

  • MySQL 5.5 (Percona) assertion failure log.. what would cause this?

    - by Tom Geee
    256GB, 64-core AMD running Ubuntu 12.04 with Percona MySQL 5.5.28. Below is the assertion failure. We just had a second assertion failure (different "in file", position, etc.) while running a large set of inserts. After the first failure, MySQL would only restart after a reboot, having continuously looped on the same error while trying to recover. I decided to do a mysqlcheck with -o for optimize. Since these are all InnoDB tables (very large tables, 60+GB), this would do an alter table on all tables. In the middle of this, the below assertion failure happened again:

        121115 22:30:31 InnoDB: Assertion failure in thread 140086589445888 in file btr0pcur.c line 452
        InnoDB: Failing assertion: btr_page_get_prev(next_page, mtr) == buf_block_get_page_no(btr_pcur_get_block(cursor))
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
        InnoDB: about forcing recovery.
        03:30:31 UTC - mysqld got signal 6 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help
        diagnose the problem, but since we have already crashed, something is
        definitely wrong and this may fail.
        Please help us make Percona Server better by reporting any
        bugs at http://bugs.percona.com/

        key_buffer_size=536870912
        read_buffer_size=131072
        max_used_connections=404
        max_threads=500
        thread_count=90
        connection_count=90
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1618416 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        Thread pointer: 0x14edeb710
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 7f687366ce80 thread_stack 0x30000
        /usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x7b52ee]
        /usr/sbin/mysqld(handle_fatal_signal+0x484)[0x68f024]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f9cbb23fcb0]
        /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7f9cbaea6425]
        /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7f9cbaea9b8b]
        /usr/sbin/mysqld[0x858463]
        /usr/sbin/mysqld[0x804513]
        /usr/sbin/mysqld[0x808432]
        /usr/sbin/mysqld[0x7db8bf]
        /usr/sbin/mysqld(_Z13rr_sequentialP11READ_RECORD+0x1d)[0x755aed]
        /usr/sbin/mysqld(_Z17mysql_alter_tableP3THDPcS1_P24st_ha_create_informationP10TABLE_LISTP10Alter_infojP8st_orderb+0x216b)[0x60399b]
        /usr/sbin/mysqld(_Z20mysql_recreate_tableP3THDP10TABLE_LIST+0x166)[0x604bd6]
        /usr/sbin/mysqld[0x647da1]
        /usr/sbin/mysqld(_ZN24Optimize_table_statement7executeEP3THD+0xde)[0x64891e]
        /usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x1168)[0x59b558]
        /usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x30c)[0x5a132c]
        /usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1620)[0x5a2a00]
        /usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x14f)[0x63ce6f]
        /usr/sbin/mysqld(handle_one_connection+0x51)[0x63cf31]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f9cbb237e9a]
        /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f9cbaf63cbd]

        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort.
        Query (7f6300004b60): is an invalid pointer
        Connection ID (thread ID): 876
        Status: NOT_KILLED

        You may download the Percona Server operations manual by visiting
        http://www.percona.com/software/percona-server/. You may find information
        in the manual which will help you identify the cause of the crash.

        121115 22:31:07 [Note] Plugin 'FEDERATED' is disabled.
        121115 22:31:07 InnoDB: The InnoDB memory heap is disabled
        121115 22:31:07 InnoDB: Mutexes and rw_locks use GCC atomic builtins ..

    Then it recovered, without a reboot this time. From the log, what would cause this? I am currently running a dump to see if the problem resurfaces.

    Edit: the data partition is all in /, since this is a hosted, defaulted file system, unfortunately:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/vda3       742G  445G  260G  64% /
        udev            121G  4.0K  121G   1% /dev
        tmpfs            49G  248K   49G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            121G     0  121G   0% /run/shm
        /dev/vda1        99M   54M   40M  58% /boot

    my.cnf:

        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        skip-name-resolve
        innodb_file_per_table
        default_storage_engine=InnoDB
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /data/mysql
        tmpdir = /tmp
        skip-external-locking
        key_buffer = 512M
        max_allowed_packet = 128M
        thread_stack = 192K
        thread_cache_size = 64
        myisam-recover = BACKUP
        max_connections = 500
        table_cache = 812
        table_definition_cache = 812
        #query_cache_limit = 4M
        #query_cache_size = 512M
        join_buffer_size = 512K
        innodb_additional_mem_pool_size = 20M
        innodb_buffer_pool_size = 196G
        #innodb_file_io_threads = 4
        #innodb_thread_concurrency = 12
        innodb_flush_log_at_trx_commit = 1
        innodb_log_buffer_size = 8M
        innodb_log_file_size = 1024M
        innodb_log_files_in_group = 2
        innodb_max_dirty_pages_pct = 90
        innodb_lock_wait_timeout = 120
        log_error = /var/log/mysql/error.log
        long_query_time = 5
        slow_query_log = 1
        slow_query_log_file = /var/log/mysql/slowlog.log

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]

        [isamchk]
        key_buffer = 16M
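
    For reference, the procedure on the manual page the log links to amounts to the sketch below. This is generic guidance, not something specific to this server; the level should start at 1 and only be raised if the dump still crashes:

        # /etc/mysql/my.cnf, under [mysqld] -- temporary, remove once dumped
        innodb_force_recovery = 1

        # then restart and dump everything while the server is up
        mysqldump --all-databases > /root/full_dump.sql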

    Read the article

  • If I change Windows admin user password, then I can't login to Outlook, why?

    - by Tom
    I am seeing strange behaviour with Windows 7 and Outlook 2010. If I change the password of User1 (an admin user), log in, and start Outlook, it asks for the password and keeps saying "password incorrect". I can log in using the same password on the web client. If I change User1's password back to the previous one, Outlook starts without any prompting and I'm able to send and receive emails. Is there any link between the user account, its password, and the PST file's password?

    Read the article

  • Intermittent 404 on select assets, LAMP stack

    - by Tom Lagier
    We have a LAMP-stack WordPress server that is serving most assets correctly. However, one plugin's CSS file and several images are returning soft 404s roughly 20% of the time. I can't find any reference to the 404 in the access logs, but the browser is definitely receiving a 404 response from somewhere (WordPress, I would assume). When I use an alias URL that does not match the site URL but does resolve to the asset path, the resource loads correctly 100% of the time. However, using the site URL only resolves the select, problematic assets 20% of the time. You can test one of the problematic assets here: http://www.mreco.org/wp-content/uploads/2014/05/zero-cost.jpg However, the alias link always resolves correctly: http://mr-eco.wordpress.promocampaigns.com/wp-content/uploads/2014/05/zero-cost.jpg Stranger, if I attempt to access outdated content that definitely does not exist on the server at the live URL, it returns the content roughly 50% of the time. Using the alias link, it 404s 100% of the time - the correct behavior. The error log and PHP error log are clean. A sample access log (pulled from grep 'zero-cost.jpg' /var/log/httpd/mr-eco-access_log) from several refreshes of the live direct link (where I am not seeing any 404s):

        10.166.202.202 - - [28/May/2014:20:27:41 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.166.202.202 - - [28/May/2014:20:27:42 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.166.202.202 - - [28/May/2014:20:27:43 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.166.202.202 - - [28/May/2014:20:27:43 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.176.201.37 - - [28/May/2014:20:27:56 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 200 57027

    Chrome's dev tools list the following network activity before displaying the 404 page content:

        zero-cost.jpg /wp-content/uploads/2014/05 GET 404 Not Found text/html Other 15.9 KB 73.2 KB 953 ms 947 ms

    My Apache configuration is standard; I've listed the virtual host entry and .htaccess file below, and can provide other parts of the Apache config if necessary.

    Virtual host:

        <VirtualHost *:80>
            DocumentRoot /var/www/public_html/mr-eco.wordpress.promocampaigns.com
            ServerName www.mreco.org
            ServerAlias mreco.org mr-eco.wordpress.promocampaigns.com
            ErrorLog logs/mr-eco-error_log
            CustomLog logs/mr-eco-access_log common
            <Directory /var/www/public_html/mr-eco.wordpress.promocampaigns.com>
                AllowOverride All
                SetOutputFilter DEFLATE
            </Directory>
        </VirtualHost>

    .htaccess:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    I have checked for multiple A records and can confirm that there is a single A record pointing at the domain:

        ;; ANSWER SECTION:
        mreco.org.  60  IN  A  50.18.58.174

    I'm fairly new to systems administration, and at a complete loss as to what could cause this. In the past, inconsistently 404ing assets have been caused by out-of-sync instances behind a load balancer; in this case, it is a single instance behind the load balancer. Because of the inconsistency, it feels like a caching issue. We don't make use of Apache caching, and as far as I know WordPress should not be caching either.

    What I've done so far:

    - Reset WordPress permalinks
    - Disabled WordPress plugins
    - Re-generated the WordPress .htaccess file
    - Swapped ServerName and ServerAlias directives
    - Cleared the browser cache
    - Confirmed the disk location of the resources
    - Checked PHP, access, and error logs
    - Confirmed correct DNS setup (can post if necessary)

    I'm at a total loss. Thanks for helping me out!
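
    One quick way to quantify the intermittency from the command line; a sketch using the asset URL from the question:

        # Hit the asset 20 times and tally the response codes
        for i in $(seq 1 20); do
          curl -s -o /dev/null -w '%{http_code}\n' http://www.mreco.org/wp-content/uploads/2014/05/zero-cost.jpg
        done | sort | uniq -c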

    Read the article

  • can 'Percona MySQL Data Recovery' be used to recover dropped tables if the datadir filesystem is mounted as /

    - by Tom Geee
    According to Percona: "Unmount the filesystem or make it read-only if... you have filesystem corruption OR you have dropped tables in innodb_file_per_table format." If I have innodb_file_per_table enabled and accidentally dropped a table while the datadir is mounted within the / partition, can data still be recovered? Obviously you can't work with an unmounted root filesystem. Our VPS host has a default filesystem layout which we cannot customize; I'm asking in case of any future scenario. Edit: would mounting the / filesystem over NFS onto another system as read-only be a workaround? TIA.
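
    For the "make it read-only" half of Percona's advice when the datadir lives on /, a minimal sketch (assuming downtime is acceptable; the remount may refuse while files are still open for writing):

        # Stop MySQL first so nothing holds the datadir open
        /etc/init.d/mysql stop
        # Remount the root filesystem read-only to stop further overwrites
        mount -o remount,ro /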

    Read the article

  • What was scientifically shown to support productivity when organizing/accessing files and folders?

    - by Tom Wijsman
    I have gathered terabytes of data, but it has become a habit to store files and folders in the same folder; that folder could be seen as a kind of Inbox where most (non-installation) files enter my system. This way I end up with a big collection of files that are hard to organize properly. I mostly end up making folders that match the file type, but then I still have several gigabytes of data per folder, which doesn't make it efficient enough that I can use the folder productively. I'd rather do a few clicks than have to search through the files, whether with some software product or by looking through the folder. Often the file names themselves are not proper, so it would be easier to recognize them if there were few in a folder rather than thousands. Scaling in the structure of directory trees in a computer cluster summarizes this problem as follows:

    The processes of storing and retrieving information are rapidly gaining importance in science as well as society as a whole [1, 2, 3, 4]. A considerable effort is being undertaken, firstly to characterize and describe how publicly available information, for example in the world wide web, is actually organized, and secondly, to design efficient methods to access this information.

    [1] R. M. Shiffrin and K. Börner, Proc. Natl. Acad. Sci. USA 101, 5183 (2004).
    [2] S. Lawrence and C. L. Giles, Nature 400, 107-109 (1999).
    [3] R. F. i Cancho and R. V. Solé, Proc. R. Soc. London, Ser. B 268, 2261 (2001).
    [4] M. Sigman and G. A. Cecchi, Proc. Natl. Acad. Sci. USA 99, 1742 (2002).

    It goes on to explain how the data is usually organized by taking general looks at it, but judging by the abstract and conclusion it doesn't arrive at a conclusion or approach which results in a productive organization of a directory hierarchy. So, in essence, this is a problem for which I haven't found a solution yet, and I would love to see a scientific solution to it. Upon searching further, I don't seem to find anything useful or any free papers that approach this problem, so it might be that I'm looking in the wrong place. I've also noted that there are different ways to term this problem, which lead to different search results; perhaps a paper is out there, but I'm not using the same terms that paper uses? They often use more scientific terms. I once heard a story about an advocate with a laptop who simply outperformed an advocate who had tons of papers, which shows how proper organization leads to productivity; but that story didn't share details of how the advocate used the laptop or how he had organized his data. In any case, it was way more useful than how most of us organize our data these days... Don't just advise me on how I should organize my data; I'm not looking for suggestions here. I would love to see statistics or scientific measurement approaches that help me confirm that a given organization does help me reach my goal.

    Read the article

  • How can I debug user mode driver failures in Windows 8

    - by Tom
    I have a 32 GB SD card. Whenever I insert this card into my newly upgraded Windows 8 laptop, the OS stops responding normally. Metro apps won't work. The system may or may not log in. Desktop apps may or may not be able to do things. When I remove the card and restart, all is fine. As soon as I put the card back in, the system starts misbehaving again. I've run Windows Update, so I have the latest drivers from Microsoft. This does not occur with the 8 GB cards I have; unfortunately I only have one 32 GB card, so I can't test with others. From examining the system event log, I've determined this is happening due to a user-mode driver failure. How can I best debug this issue from here? How can I figure out which driver this is related to? Will there be a Dr. Watson crash dump somewhere? Details:

        - System
          - Provider [Name] Microsoft-Windows-DriverFrameworks-UserMode [Guid] {2E35AAEB-857F-4BEB-A418-2E6C0E54D988}
          - EventID 10110, Version 1, Level 1, Task 64, Opcode 0, Keywords 0x2000000000000000
          - TimeCreated [SystemTime] 2012-10-29T00:51:57.532718300Z
          - EventRecordID 40417
          - Execution [ProcessID] 1056 [ThreadID] 3796
          - Channel System, Computer thebrain
          - Security [UserID] S-1-5-18
        - UserData
          - UMDFHostProblem [lifetime] {811E3DC4-FBC6-420B-ABCC-AD7505A36F3B}
          - Problem [code] 3 [detectedBy] 2
          - ExitCode 3
          - Operation [code] 259, Message 72448, Status 4294967295

    Edit 1: I tried using DebugView from Sysinternals (you can get it here: http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx). That gave me some information (screenshot not preserved), but it is not especially helpful. Then I tried connecting WinDbg to WUDFHost.exe (the process that seems to host user-mode drivers) to see if it could catch the error. Get it here: http://msdn.microsoft.com/en-US/windows/hardware/hh852363 Instructions: http://msdn.microsoft.com/en-US/library/windows/hardware/ff554716(v=vs.85).aspx That didn't help much. It didn't catch any exceptions as I'd hoped (which would at least point me to the cause of the crash). Here's the stack of one of the threads:

    Read the article

  • Why does "commit" appear in the mysql slow query log?

    - by Tom
    In our MySQL slow query logs I often see lines that just say "COMMIT". What causes a commit to take time? Another way to ask this question is: "How can I reproduce getting a slow commit; statement with some test queries?" From my investigation so far I have found that if there is a slow query within a transaction, then it is the slow query that gets output into the slow log, not the commit itself.

    Testing, in the mysql command-line client:

        mysql> begin;
        Query OK, 0 rows affected (0.00 sec)

        mysql> UPDATE members SET myfield=benchmark(9999999, md5('This is to slow down the update')) WHERE id = 21560;
        Query OK, 0 rows affected (2.32 sec)
        Rows matched: 1  Changed: 0  Warnings: 0

    At this point (before the commit) the UPDATE is already in the slow log.

        mysql> commit;
        Query OK, 0 rows affected (0.01 sec)

    The commit happens fast; it never appeared in the slow log. I also tried an UPDATE which changes a large amount of data, but again it was the UPDATE that was slow, not the COMMIT. However, I can reproduce a slow ROLLBACK that takes 46s and gets output to the slow log:

        mysql> begin;
        Query OK, 0 rows affected (0.00 sec)

        mysql> UPDATE members SET myfield=CONCAT(myfield,'TEST');
        Query OK, 481446 rows affected (53.31 sec)
        Rows matched: 481446  Changed: 481446  Warnings: 0

        mysql> rollback;
        Query OK, 0 rows affected (46.09 sec)

    I understand why rollback has a lot of work to do and therefore takes some time. But I'm still struggling to understand the COMMIT situation - i.e. why it might take a while.

    Read the article

  • T42 Thinkpad, USB boot, CF for programs and storage?

    - by Tom K.
    Is this feasible? I have a Thinkpad T40. I'd like to get one of the tiny USB drives (like the Verbatim Store 'n' Stay series) of sufficient size to handle basic Linux OS booting (8GB?), then put a 16GB or larger memory card (SD or CF) in the PCMCIA slot with an appropriate adapter, for additional application programs and data storage. I know I could get a used or refurbished HD, but I've had reliability issues in the past, and I don't believe that new IDE drives are available.

    Read the article

  • Enterprise Tape Backup solutions

    - by Tom O'Connor
    I'm currently attempting to re-architect a backup solution where I work. We've got 2 NAS devices: one in the office, one in the datacentre. The servers in the DC back up to the DC NAS, which is then replicated to the office NAS. The office NAS exports shares as CIFS and NFS; this bit is fine. At some point I'll have to expand our storage capacity; currently we've got about 1.4TB of storage space, which is about 96% full. Previously, the tape backup was a script that ran tar a few times and squirted data onto a tape. It worked, but was by no means a perfect solution: restores are a bit of a pest, and adding new data to the backup requires editing the script as root. It's just all a bit non-ideal. I've been evaluating a number of "enterprise"-ready backup solutions, such as Yosemite Backup from Barracuda, Acronis Backup/Restore, and something from Arkeia. In the process of evaluating these, I've found 2 big problems:

    - Not all of them allow backup of mounted devices (such as an NFS-mounted NAS).
    - Many of these applications don't like our tape device.

    For the most part, (1) is essential. Our NAS has a feeble processor and can't run applications like backup agents. I suspect that the biggest problem is the tape device, which is an HP C7438A DAT72 connected via USB. Questions:

    - Has anyone else got a USB DAT72 device working with similar software?
    - Is there a better way to back up data from an "appliance" NAS device on which you can't run an agent?
    - Would I be totally out of my mind to specify a cheap HP or Dell server with a couple of 1TB hard disks and a SAS card to then talk to an HP Ultrium (or similar) device? The biggest drawback to this would be cost (400-ish for the server, 200 for the SAS connectivity and 1700 for an LTO4 device).

    Notes: I'd love to be able to say that I'd get rid of tapes entirely and use some form of hard-disk backup. In a previous job we had LaCie USB drives, which were decidedly unreliable.
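
    For context, the legacy script being replaced presumably amounted to something like this sketch; the share paths are placeholders, and /dev/st0 and /dev/nst0 are the typical Linux rewinding and non-rewinding tape nodes (an assumption for this setup):

        # Rewind, write one tar archive per share without rewinding in between, then eject
        mt -f /dev/st0 rewind
        tar -cf /dev/nst0 /srv/share1
        tar -cf /dev/nst0 /srv/share2
        mt -f /dev/nst0 offline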

    Read the article
