Search Results

Search found 3312 results on 133 pages for 'david michel'.

  • inodes and tree-depth in ext2

    - by David Hagan
    I have an ext2 filesystem with a maximum number of inodes per directory (somewhere around 32k), and also a maximum number of inodes in the entire filesystem (somewhere around 350m). Because I'm using this filesystem as a datastore for a service that has in excess of 32k objects, I'm distributing those objects between multiple subdirectories (like a dictionary separates A-K and L-Z). My question is this: Is there any significance to the tree depth when I'm building these inodes? Is there a significant difference or limitation that's going to affect my service if I choose "/usr/www/service/data/a_k/aardvark" over "/data/a_k/aardvark"?
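
    A quick way to check how much inode headroom the filesystem actually has, and what its per-directory parameters are, is a shell session like this (the device name /dev/sda1 is a placeholder, not taken from the question):

        df -i /usr/www/service/data                  # used/free inodes for the whole filesystem
        sudo tune2fs -l /dev/sda1 | grep -i inode    # per-filesystem inode parameters

    As to depth: extra path components cost one (heavily cached) directory lookup each, so tree depth itself is unlikely to matter measurably; the per-directory entry count is the limit to watch.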

  • Is .htaccess slowing down my dedicated server?

    - by David Robles
    First of all, I consider myself more a programmer than a server guy. I have a website that receives about 3,000 visits per day, which I think is well below the capacity of a dedicated server. However, I've noticed that connections to the server are pretty slow, e.g. loading images, connecting via SSH, etc. I recently configured .htaccess to prevent hotlinking of the images on my server (i.e. .jpg, .gif and .png), and I was wondering if that could be slowing down my website. This is the configuration I have:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteEngine on
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://www.mysite.com/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http://www.mysite.com$ [NC]
        RewriteRule .*\.(jpg|jpeg|gif|png|bmp|swf)$ http://www.google.com/ [R,NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    I found this code on Google and just copied it into .htaccess, since I'm not an Apache expert. It works, but I don't know whether it is the best way to do it. How can I tell if this is the reason the server is slow? Are there any tools to monitor it? What would you do? Thanks in advance!
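
    One low-tech way to test whether the .htaccess rules are the bottleneck is to benchmark the same URL with the rules enabled and then disabled; a sketch using ApacheBench (the URL and request counts are made up for illustration):

        ab -n 500 -c 10 http://www.mysite.com/    # baseline with the rewrite rules active
        mv .htaccess .htaccess.off                # .htaccess is re-read per request, so this applies immediately
        ab -n 500 -c 10 http://www.mysite.com/    # re-run, compare requests/second, then restore the file

    If the two runs are close, the slowdown is elsewhere; 3,000 visits/day is a trivial load, so disk, DNS, or network are likelier suspects.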

  • Testing realistic loads for new versions of existing web app

    - by David Cournapeau
    Assuming I have a relatively complex web application, I am interested in testing the performance of a new version using traffic that is as realistic as possible. The traffic is relatively complex (session-based, with lots of internal logic that depends on incoming requests), and the webapp depends on many servers (databases, frontends, etc.). I can think of two basic directions:

    1. Recording every incoming request with its timestamp in production, in a centralized manner, and replaying it from N clients to reproduce a load as close as possible to the original. Issue: because we have many servers, building the centralized log is not trivial.

    2. Having a system that duplicates requests to a staging area, so that I could "plug" a dev version of my webapp into it at any time without affecting production (a sketch follows below). Issue: I have not found much information about this approach except this, which suggests to me that it may not be the best solution. On the other hand, it is realistic by definition.

    What is the standard way of doing this kind of testing? I have not found much information about load testing with complex, realistic traffic.
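
    For the second direction, one tool that implements exactly this duplicate-to-staging idea is GoReplay (not something from the original question; named here as a possible fit). A minimal sketch, with the staging host as a placeholder:

        # mirror live port-80 traffic to a staging copy of the webapp; production responses are unaffected
        gor --input-raw :80 --output-http "http://staging.internal:8080"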

  • What happens to running processes when I lose a remote connection to a *nix box?

    - by David Marble
    I occasionally lose my remote SSH connection to my VPS. I use screen for long-running processes, but I am wondering what happens to the other processes that were running (aside from those inside a screen session) when I lose the connection to the box. When I re-establish a connection, what has happened to the bash and sshd processes that were running when the connection dropped? Today I lost the connection repeatedly and noticed many more bash and sshd processes than usual. If there are processes hanging around, do I need to kill them? How can I determine which processes were abandoned by my previous sessions? Thanks for any replies!
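
    A hedged first pass at finding survivors: processes orphaned by a dropped session are re-parented to init (PID 1), so listing your own processes whose parent is 1 will show them (detached screen sessions will also appear here, since they re-parent the same way):

        ps -u "$USER" -o pid,ppid,tty,stat,cmd | awk '$2 == 1'   # candidates left over from dead sessions
        who                                                      # logins sshd still thinks are active

    Stale sshd/bash pairs from broken connections normally die on their own once TCP keepalives give up; anything long-lived that you didn't mean to keep can be killed by PID.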

  • Keyboard layout for mathematical/Greek symbols

    - by David
    I've been wondering about this for a long time but never thought to ask: I do a lot of scientific work, so there are many times when it would be really handy to be able to type mathematical symbols or Greek letters which, for the most part, aren't part of the ASCII character set, like "µ", "±", "…" and similar. Is there a keyboard layout (for Linux) that maps simple key combinations to these kinds of characters? (Assuming all the encoding and font issues are worked out properly.) I know I could create one myself, but it'd be a lot easier if someone has already done the work, or at least if there's a partial solution I could modify.
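
    One partial, ready-made solution is to keep a second (Greek) layout loaded and toggle it with a key combination; a sketch using setxkbmap (the layouts and toggle key here are choices, not requirements):

        setxkbmap -layout us,gr -option grp:alt_shift_toggle   # Alt+Shift switches between US and Greek

    For math symbols beyond Greek letters, a Compose key plus a custom ~/.XCompose file covers arbitrary Unicode:

        setxkbmap -option compose:ralt                         # Right Alt becomes the Compose key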

  • Optimizing Disk I/O & RAID on Windows SQL Server 2005

    - by David
    I've been monitoring our SQL Server for a while and, using Task Manager and Perfmon, have noticed that I/O hits 100% every so often. I have normally been able to correlate these spikes with SUSPENDED processes in SQL Server Management Studio when I execute "exec sp_who2". The RAID controller is managed by LSI MegaRAID Storage Manager. We have the following setup:

    System drive (Windows) on RAID 1, with two 280GB drives
    SQL on RAID 10 (2 mirrored drives of 280GB in two different spans)

    This is a database that is hammered during the day but is pretty inactive at night. The DB size is currently about 13GB, and it is used by approximately 200 (and growing) users a day. I have a couple of ideas I'm toying around with:

    1. Checking the indexes and reindexing some tables.

    2. Adding an additional RAID 1 (with two new, smaller HDs) and moving SQL's log data file (LDF) onto the new RAID.

    For #2, my question is this: would we really be increasing disk performance (I/O) by moving data off of the RAID 10 onto a RAID 1? RAID 10 obviously has better performance than RAID 1, and SQL must write to the transaction logs before writing to the database. But on the flip side, we would be reducing the amount of data written to the RAID 10, which is where all of the "meat" is, thereby increasing that RAID's performance for read requests. Is there any way to find out what our current limiting factor is, the drives or the RAID controller (one way to measure is sketched below)? If the limiting factor is the drives, then maybe adding the additional RAID 1 makes sense. But if the limiting factor is the controller itself, then I think we're approaching this the wrong way. Finally, are we just wasting our time? Should we instead be focusing our efforts on #1 (reindexing tables, reducing network latency where possible, etc.)?
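
    On the measurement question: SQL Server keeps per-file I/O stall counters that show whether the engine is actually waiting on the disks. A sketch querying them with sqlcmd (the local server "." and Windows authentication are assumptions):

        sqlcmd -S . -E -Q "SELECT DB_NAME(database_id) AS db, file_id, num_of_reads, io_stall_read_ms, num_of_writes, io_stall_write_ms FROM sys.dm_io_virtual_file_stats(NULL, NULL)"

    High stall times relative to the operation counts point at the drives; low stalls while Perfmon still shows 100% would point away from them.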

  • How can I peer into a Windows user's RDP session for support, where I initiate the support session?

    - by David Bullock
    I've used both WebEx and GoToAssist, but neither of them has a story to tell for 'unattended' access to a user's desktop unless the user is on the machine's primary console. Unattended in the sense that they phone me and I then appear in their session, rather than them visiting a website, entering their details, and waiting for me. This is a common use case, since the users' machines are virtual desktops and the session broker connects each user via RDP; they never have a session with their desktop unless it's a remote desktop session. At the moment, if I use either of the said products to get an unattended support session going, all I can see is the login screen of the physical console, telling me that a remote session is in progress. Are there alternative tools which will make me happy?
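
    If the session hosts run a recent enough Windows, the built-in RDP shadowing can attach to an existing remote session without a third-party agent; a sketch (the host name and session ID are placeholders, and these flags need Server 2012 / Windows 8 or later on the target):

        REM list the sessions and their IDs on the target host
        query session /server:vdihost01
        REM attach to session 2 with input control (add /noConsentPrompt where policy allows)
        mstsc /v:vdihost01 /shadow:2 /control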

  • Funny/clever/creative HTTP error pages

    - by David
    I'm sure many of us have seen or heard of amusing ways to express the standard HTTP errors, like "404 Lost in Cyberspace" instead of "404 File Not Found". What are some of the funniest/cleverest/most creative error pages or error messages you've seen, or can think of? (Somewhat similar to this question on StackOverflow) I know this isn't a specific question with a single answer but it is relevant to site admins who want to keep their visitors happy (or terrified, if you prefer ;-) I'll certainly be looking for inspiration for my own website's error pages.

  • ColdFusion multiserver instance hangs

    - by David Sedeño
    I have a ColdFusion 8 multiserver setup with IIS on Windows 2008 Standard SP2. When one instance "hangs" (I can't connect to the instance from FusionReactor), the web server throws a "503 Service Unavailable". The remaining instance seems to work OK in FusionReactor, but the website only serves the 503. I have to restart the JVM processes and IIS to get the website working again. The JVM processes have the option -Xmx2048m, and the instances have 2.5GB allocated. Maybe the JVM process reaches the 2GB limit and stops working? Could it be a problem between IIS and the CF instances? I'm new to the CF debugging process; how can I find out why the instance hangs? Thanks
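
    When the instance hangs, a JVM thread dump usually shows what it is blocked on; a sketch, assuming a JDK's tools are available on the server (ColdFusion ships only a JRE, so that is an assumption):

        jps -lv                     # find the PID of the hung ColdFusion/JRun JVM
        jstack <pid> > threads.txt  # capture the dump while the hang is in progress

    It is also worth comparing heap use in FusionReactor against the -Xmx ceiling: a JVM spending all its time in full garbage collections just below the limit looks exactly like a hang.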

  • IIS6 Virtual Directory 500 Error on Remote Share

    - by David Boike
    We have our servers at the server farm in a domain. Let's call it LIVE. Our developer computers live in a completely separate corporate domain, miles and miles away. Let's call it CORP. We have a large central storage unit (Unix) that houses images and other media needed by many webservers in the server farm. The IIS application pools run as (let's say) LIVE\MediaUser and use those credentials to connect to a central storage share as a virtual directory, retrieve the images, and serve them as if they were local on each server. The problem is in development, on my development machine: I log in as CORP\MyName, and my IIS 6 application pool runs as Network Service. I can't run it as a user from the LIVE domain because my machine isn't (and cannot be) joined to that domain. I try to create a virtual directory, point it to the same network directory, click Connect As, uncheck the "Always use the authenticated user's credentials when validating access to the network directory" checkbox so that I can enter the login info, enter the credentials for LIVE\MediaUser, click OK, verify the password, etc. This doesn't work. I get "HTTP Error 500 - Internal server error" from IIS. The IIS log file reports sc-status = 500, sc-substatus = 16, and sc-win32-status = 1326. The documentation says this means "UNC authorization credentials are incorrect", and the Win32 status means "Logon failure: unknown user name or bad password." This would be all well and good if it were anywhere close to accurate. I double- and triple-checked the credentials and tried multiple known-good logins. The IIS manager allows me to view the file tree in its window; it's only the browser that kicks me out. I even tried going to the virtual directory's Directory Security tab and, under Authentication and Access Control, using the same LIVE domain username for the anonymous access credential. No luck. I'm not trying to run any ASP, ASP.NET, or other dynamic anything out of the virtual directory. I just want IIS to be able to load static images, CSS, and JS files. If anyone has some bright ideas I would be most appreciative!
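
    One way to test the LIVE\MediaUser credentials from the CORP machine, independently of IIS, is runas with /netonly, which exists precisely for accounts in domains the local machine is not joined to (the share path below is a placeholder):

        REM start a shell that presents LIVE\MediaUser for network access only
        runas /netonly /user:LIVE\MediaUser cmd
        REM inside that shell, try the share directly
        dir \\storage\media

    If that fails too, the problem is the credentials or trust path; if it succeeds, the problem is how IIS 6 is storing or presenting them.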

  • HP LaserJet 1320 printing black boxes instead of text

    - by David Gard
    I have an HP LaserJet 1320 running the HP PCL5 64-bit Universal Driver (Windows 7 Professional, 64-bit). When printing, anything other than body text is blacked out. For example, on an email, where 'To...', 'From...', 'CC...' and 'Subject' are usually shown at the top, there is just a black box. And on Word documents, anything to do with Track Changes is also blacked out. I have tried restarting the Print Spooler and reinstalling the printer, but this does not help. Does anybody know why this is happening?

  • DNS pointing to different IPs from different parts of the world

    - by David
    I have a domain name that, for some reason, is pointing to different servers depending on where you are located in the world. What is odd is that I have another domain with the same DNS servers, and it points to the same server regardless of your location (which is the way it's meant to work). Any ideas why the first domain is pointing to different IP addresses for different people?
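
    A quick way to separate "the authoritative servers disagree" from "resolvers are caching stale answers" is to query each authoritative server directly and compare (the domain and server names below are placeholders):

        dig +short NS example.com                  # find the authoritative servers
        dig +short example.com @ns1.example.net    # ask each one directly
        dig +short example.com @8.8.8.8            # compare with a public resolver

    If the authoritative servers themselves return different A records, the zone data differs between them; if they agree, the discrepancy is happening at the resolver/cache layer.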

  • What is Performance Monitor telling me when my page faults / second are high?

    - by David Robison
    I have a Windows 7 x64 computer that is having performance issues. After some investigation, I have discovered that the page faults / second on it, as reported by Performance Monitor, are really high. Everything else seems to be normal. Resource Monitor reports no hard faults and lots of available memory. Is this a potential cause for problems, or is it a red herring? If it is something that could be causing problems, what should I do next to figure out what is causing it? Here is a screen shot of the Performance Monitor. Notice that the average page faults / second is 75,887. On another computer that does not have problems, this number is closer to 3,000. Here is a screen shot of the Resource Monitor, sorted by hard faults / second, which is currently 0 for all processes.

  • DKIM for email through Google Apps domain with external outbound relay

    - by David Gardiner
    I'd like to enable the new DKIM (DomainKeys Identified Mail) email authentication feature for a domain hosted in Google Apps. Some of my users use an external SMTP gateway (such that when they send email, it doesn't go through smtp.gmail.com). I have an SPF record configured for the domain, and it allows the external SMTP gateways as valid SMTP hosts. (I realise SPF is different from DKIM.) Will enabling DKIM adversely affect mail sent through the external gateway? E.g., are the externally sent emails at risk of being marked as spam because they will not have a DKIM signature, or will DKIM only positively benefit emails sent through Google's SMTP server?
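
    Once DKIM is switched on, a quick sanity check is to confirm the selector record is published and compare it with the existing SPF record (Google Apps' default selector is "google"; the domain is a placeholder):

        dig +short TXT google._domainkey.example.com   # the DKIM public key Google signs with
        dig +short TXT example.com                     # the existing SPF record alongside it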

  • Cannot drop a table in SQL 2005

    - by David George
    I have a SQL Server 2005 SP3 box on which one of my developers created a temp table that we cannot seem to remove, because the table's name somehow ended up containing brackets:

        SELECT Name, object_id
        FROM sys.objects
        WHERE Name LIKE '%#example%'

    Results:

        Name        object_id
        [#example]  123828384

    Does anyone know how we can get rid of this? Thanks!
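
    Inside a bracket-quoted identifier, only "]" needs escaping, by doubling it, so the literal name [#example] can be addressed like this (a sketch; the database name and connection flags are placeholders):

        sqlcmd -S . -E -d MyDatabase -Q "DROP TABLE [[#example]]]"

    An equivalent that avoids hand-escaping is to let QUOTENAME() build the delimited identifier from the name in sys.objects.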

  • How do I deliver mail for wildcard addresses to a particular user/alias/program?

    - by David M
    I need to configure sendmail so that mail delivered to wildcard addresses is accepted for delivery and then delivered to a user, alias, or directly to a script. I can rewrite the envelope/headers any number of ways, but I don't know how to accept the wildcard address when it's provided in RCPT TO:. Everything I've tried so far winds up with a "550 user unknown" error. So here's a specific example: I want to be able to handle any address that consists of a series of digits, followed by a dot, followed by a word, and then pipe it to a script. If the headers get rewritten, that's OK, but I need the envelope to contain the actual Delivered-To address. Here's the sort of SMTP session I need:

        220 blah.foo.com ESMTP server ready; Thu, 22 Apr 2010 20:41:08 -0700 (PDT)
        HELO blort.foo.com
        250 blah.foo.com Hello blort.foo.com [10.1.2.3], pleased to meet you
        MAIL FROM: <[email protected]>
        250 2.1.0 <[email protected]>... Sender ok
        RCPT TO: <[email protected]>
        250 2.1.5 <[email protected]>... Recipient ok

    I tried some stuff with regex maps, but I never got past 550 user unknown.

  • Public Facing Recursive DNS Servers - iptables rules

    - by David Schwartz
    We run public-facing recursive DNS servers on Linux machines, and they have been used in DNS amplification attacks. Are there any recommended iptables rules that would help mitigate these attacks? The obvious solution is just to limit outbound DNS packets to a certain traffic level, but I was hoping to find something a little more clever, so that an attack only ends up blocking traffic to the victim IP address (a sketch follows below). I've searched for advice and suggestions, but they all seem to be "don't run public-facing recursive name servers". Unfortunately, we are backed into a situation where things that are not easy to change will break if we don't, due to decisions made more than a decade ago, before these attacks were an issue.
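
    A sketch of the per-victim idea using iptables' hashlimit match, which keeps a separate rate counter per destination address (the thresholds here are guesses to tune, not recommendations):

        # let normal reply rates through, drop the excess per destination IP
        iptables -A OUTPUT -p udp --sport 53 \
            -m hashlimit --hashlimit-name dns-replies --hashlimit-mode dstip \
            --hashlimit-above 20/second --hashlimit-burst 100 -j DROP

    A host being used as an amplification target gets its replies clipped, while everyone else's queries are unaffected.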

  • Sourcing local .bashrc and .vimrc without copying them to the remote machine

    - by David Strejc
    Does anyone have an idea or hack for how to source my local dotfiles on remote machines without scp'ing them there? (I will probably need more of them, so the solution should work with many files.) Is something like scp'ing .bashrc to a /tmp folder on the remote machine and then exporting a BASHRC environment variable the best solution? I need this because of our company policy and our fast cloud-server deployment and redeployment, and I don't want to touch the .bashrc files on the remote machines, so that my colleagues can keep using the default environment, which doesn't suit me.
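
    One known trick that avoids touching the remote machine's dotfiles at all is to inline the rc file into the ssh command itself, base64-encoded so quoting survives; a sketch for a single file (assumes GNU base64 locally and remotely):

        ssh -t host "t=\$(mktemp); echo $(base64 -w0 ~/.bashrc) | base64 -d > \$t; bash --rcfile \$t; rm -f \$t"

    The $(base64 ...) part expands on the local side, so the file's contents travel inside the command string; the sshrc project packages the same idea for whole directories of dotfiles.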

  • Advantages of a 135W vs. 90W AC adapter?

    - by David
    I purchased a Lenovo T420S laptop with a 90W/20V AC adapter P/N 42t4426 through my IT department. Subsequently, I ordered a replacement adapter and received a 135W/20V AC adapter P/N 45N0054. The 135W version is about twice as big and heavy as the 90W version. Is there an advantage to the 135W version like faster battery recharging? Are there any negative effects (other than weight), like reduced battery life?

  • keepalived issues on xen domU

    - by David Cournapeau
    Hi, I cannot manage to run keepalived correctly on Xen domUs. I am following this link for configuration, and it works great on some local VMs (running under KVM). If I set up the exact same configuration on Xen domUs, it does not work: the servers do not see each other, and both decide to be master (10.120.100.99 being the virtual IP):

        $ sudo ip addr sh eth0   # host1
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 00:16:3e:78:f5:31 brd ff:ff:ff:ff:ff:ff
            inet 10.120.100.104/24 brd 10.120.100.255 scope global eth0
            inet 10.120.100.99/32 scope global eth0
            inet6 fe80::216:3eff:fe78:f531/64 scope link
               valid_lft forever preferred_lft forever

        $ sudo ip addr sh eth0   # host2
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 00:16:3e:51:36:20 brd ff:ff:ff:ff:ff:ff
            inet 10.120.100.105/24 brd 10.120.100.255 scope global eth0
            inet 10.120.100.99/32 scope global eth0
            inet6 fe80::216:3eff:fe51:3620/64 scope link
               valid_lft forever preferred_lft forever

    Is there a way I could debug this? It seems some people are able to use keepalived on Xen, judging from some mailing lists, but without much info on their configs.
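
    The usual failure mode here is that VRRP advertisements (multicast to 224.0.0.18, IP protocol 112) are not crossing the Xen bridge, so each node believes it is alone. A first check is to watch for the peer's packets on each host:

        tcpdump -ni eth0 'ip proto 112 or host 224.0.0.18'   # should show advertisements from the peer

    If nothing arrives from the other node, the bridge/dom0 is dropping multicast; fixing that, or switching keepalived to unicast peers (the unicast_peer block, in versions that support it), are the two usual ways out.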

  • Which databases support parallel processing across multiple servers?

    - by David
    I need a database engine that can utilize multiple servers to process a single SQL query in parallel. So far I know that this is possible with some engines, though none of them are feasible for me, either because of pricing or missing features. The engines currently known to me are:

    MS SQL (Enterprise)
    DB2 (Enterprise)
    Oracle (Enterprise)
    GridSQL
    Greenplum

    Which other engines have this feature? Do you have any experience with using it?

    Edit: I have now proposed a method for creating one myself; any input is welcome.

    Edit: I have found another one: Informix Extended Parallel Server.

  • Plesk hosting on MediaTemple DV

    - by David
    Hi there, we have a MediaTemple dedicated virtual server running Plesk. The problem we're having is that changing the permissions of files on the server so they are writable by the web server user (apache) conflicts with the ability to upload and overwrite those files via the FTP user. Here's an example: I upload a file as the user "serverftp", and it owns the new file in the httpdocs folder. I then change the ownership of an image upload folder to the apache user so that I can upload images via a PHP script. Uploading to or changing that folder as the serverftp user is then locked out. Speaking to tech support didn't get very far, because there are some strange group permissions going on, and it would involve me adding every single domain FTP user to the pcantl group or something similar. I'm wondering how I can easily change things so that I don't have this problem anymore.
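
    One common pattern (a sketch, assuming Plesk's psacln group; check the actual groups with `id serverftp` and the user Apache runs as) is to give the upload directory a shared group, group-write permission, and the setgid bit so new files inherit the group:

        chown -R serverftp:psacln httpdocs/images
        chmod -R g+w httpdocs/images
        find httpdocs/images -type d -exec chmod g+s {} +   # new files keep the directory's group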
