Search Results

Search found 4520 results on 181 pages for 'notice'.


  • Laptops with easy heat sink service?

    - by Niten
    Can you recommend a current laptop model with easy heat sink access – or better yet, a removable air intake filter – making it easy to periodically clean out the dust and lint that always packs up in these things? Every laptop I've owned has eventually overheated on account of a clogged heat sink. (I suppose it doesn't help that I have a cat who loves to hang out where I'm working, or that my laptop is almost always running.) One of the things I really love about my current system, a Dell Inspiron 1420n, is how easy it is to service its cooling system: whenever I notice the fan starting to work harder and the CPU temperature climbing higher than it should be, I merely have to unscrew a single panel from the bottom of the machine, clean out the heat sink, and then I'm good for another few months. Which current models of the "business laptop" variety offer similar easy cooling system service? I'm looking for something roughly along the lines of: 14- or 15-inch display Nehalem-based CPU Solid construction – magnesium chassis or better (like the Inspiron) TPM (for BitLocker) ideal, but not mandatory Docking adapter ideal, but not mandatory Good battery life For example, the ThinkPad T410 would have been my top choice, but it seems like it would be a serious chore to service its heat sink. For the current MacBook Pros it looks downright impossible. No matter how nice the laptop is in other respects, it'll be of no use to me when it's overheating. So, any suggestions? Thanks in advance... (I'm constantly surprised that customers and manufacturers don't pay more attention to this feature, at least in the business laptop subcategory. In the last couple months I've fixed two friends' laptops which were also overheating due to clogged cooling systems; clearly I'm not the only one affected by this.)

    Read the article

  • Windows 7 - "A disk read error occured. Press Ctrl + Alt + Del to restart"

    - by Senthil
    Problem: When I switch on my PC, after BIOS POST, a cursor blinks for about 5 seconds and then I get this error message: A disk read error occurred. Press Ctrl + Alt + Del to restart. I am able to go into the BIOS, but the Windows loader doesn't even start. This message is shown after my motherboard logo comes and goes. Symptoms: I DID notice my system freezing for minutes at a time for the past two days. Also, in the past two days, it stopped halfway through the Windows boot process. I had to do a hard reset a couple of times to get it working. But since this morning, I only get this error message. Configuration: Operating System: Windows 7 Ultimate 32-bit only. Hard disk: 1 Physical Disk - 80GB SATA Partitions: Two (2) - C: and D: File System: NTFS No drive encryption or compression is turned on. After searching the net, I have found people mentioning these possible causes: a physically failing hard disk, a corrupt MBR, a bad sector. I am planning to buy a new hard disk, install Windows on it and continue. But I need data from the old hard disk. The data I want is on the D: drive, outside any Windows user folder, and is not encrypted, compressed or protected in any way. I think if someone/something can get the disk working again and knows NTFS, the data can hopefully be read. What steps should I follow to recover files from the defective disk? Update: I bought a new disk, installed Windows on it and added the defective one as a slave. Then I was able to read the data from the defective hard disk. Though chkdsk found lots of errors, the files I wanted were not affected and I got them back :) I am not using that hard disk anymore, though it seems to be working at the moment.
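
    A minimal recovery sketch for the "attach as slave" approach described above. The drive letters (E: for the old data partition, D:\rescue for the destination) and the log path are assumptions, not from the original post:

        rem read-only filesystem check first, then bulk-copy what you need
        chkdsk E:
        robocopy E:\ D:\rescue /E /R:1 /W:1 /XJ /LOG:C:\rescue.log

    robocopy's /R:1 /W:1 keeps it from stalling for minutes on unreadable files, /XJ skips junction points, and the log shows exactly which files were skipped so you know what to retry.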

    Read the article

  • How can I erase the traces of Folder Redirection from the Default Domain Policy

    - by bruor
    I've taken over from an IT outsourcer and have run into a struggle now that we're starting a migration to Windows 7. Someone decided that they would set up Folder Redirection in the Default Domain Policy. I've since configured redirection in another policy at an OU level. No matter what I do, the Windows 7 systems pick up the Default Domain Policy folder redirection settings only. I keep getting entries in the event log showing that the previously redirected folders "need to be redirected" with a status of 0x80000004. From what I can tell this just means that it's redirecting them locally. Is there a way I can wipe that section of the GPO clean so it's no longer there? I'm hesitant to try to reset the default domain policy to complete defaults. ***UPDATE 6-26 I found that the following condition occurred and was causing the grief here. I've already implemented the new policies for clients, and for some reason, XP was working great, but 7 was refusing to process. The DDP was enforced. Because of this, and the fact that the folder redirection policies were set to redirect back to the local profile upon removal, it was forcing clients to pick up its "redirect to local" settings. Steps to recreate the issue: -Create a new test OU and policy. -Create some folder redirection settings, set them to redirect to local upon removal -Remove settings on that GPO -Refresh your view of the GPO and check the settings. -You'll notice that the settings show "not configured" entries for folder redirection. -Enforce this GPO -Create another sub-OU -Create a GPO linked to this sub-OU and configure some folder redirection settings. -Watch as the enforced GPO's "not configured" setting overrides the policy you just defined. I've had to relink the DDP to all OUs that have "block inheritance" enabled, and disable the "enforced" option on the DDP as a workaround. I'd love to re-enable enforcement of the DDP, but until I can erase the traces of folder redirection settings from the DDP, I think I'm stuck.
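
    A hedged diagnostic sketch (the GroupPolicy PowerShell module is assumed to be available on a management box, and the SYSVOL path shown is only the usual location of the Folder Redirection client-side extension data, so verify it in your own domain and back the policy up before touching anything):

        rem see exactly which GPO is winning for a test user/machine
        gpresult /h C:\temp\gpreport.html /f

        # from PowerShell: back up the DDP, then dump what it still carries
        Import-Module GroupPolicy
        Backup-GPO -Name "Default Domain Policy" -Path C:\temp\gpo-backup
        Get-GPOReport -Name "Default Domain Policy" -ReportType Html -Path C:\temp\ddp.html

        # Folder Redirection settings typically live in SYSVOL, e.g.:
        #   \\<domain>\SYSVOL\<domain>\Policies\{DDP-GUID}\User\Documents & Settings\fdeploy.ini

    If the HTML report still shows redirection entries even though the editor says "not configured", stale redirection data left behind in SYSVOL is a likely culprit.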

    Read the article

  • Apache2 on Raspbian: Multiviews is enabled but not working [closed]

    - by Christian L
    I recently moved web servers, from an Ubuntu server set up by my brother (I have sudo) to a Raspbian server set up by myself. On the other server MultiViews worked out of the box, but on the Raspbian server it does not seem to work, although it seems to be enabled out of the box there as well. What I am trying to do is to get it to find my.doma.in/mobile.php when I enter my.doma.in/mobile in the address field. I am using the same sites-available file as I did before; the file looks like this: <VirtualHost *:80> ServerName my.doma.in ServerAdmin [email protected] DocumentRoot /home/christian/www/do <Directory /> Options FollowSymLinks AllowOverride All </Directory> <Directory /home/christian/www/do> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> From what I have read in various places while googling this issue, I found that the negotiation module had to be enabled, so I tried to enable it: sudo a2enmod negotiation This gave me the result: Module negotiation already enabled I have read through /etc/apache2/apache2.conf and did not find anything in particular that seemed to be helping me there, but please do ask if you think I should post it. Any ideas on how to solve this by getting MultiViews to work?
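
    If content negotiation keeps ignoring the .php variant, a mod_rewrite fallback gives the same extensionless URLs; this is a sketch placed in the <Directory /home/christian/www/do> block (or an .htaccess file there), not a statement about why MultiViews itself is misbehaving:

        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME}.php -f
            RewriteRule ^(.*)$ $1.php [L]
        </IfModule>

    Run sudo a2enmod rewrite and restart Apache; since AllowOverride All is already set for that directory, the rules can live in .htaccess as well.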

    Read the article

  • Please, help writing a MIB

    - by facha
    I have a problem with an snmpwalk query returning SNMP variables in a non-uniform way: .1.3.6.1.2.1.10.127.1.3.3.1.2.215 -> Hex-STRING: 24 37 4C 0C 65 0E .1.3.6.1.2.1.10.127.1.3.3.1.2.216 -> Hex-STRING: 24 37 4C 0B A2 DA .1.3.6.1.2.1.10.127.1.3.3.1.2.217 -> STRING: "$7L f:" .1.3.6.1.2.1.10.127.1.3.3.1.2.218 -> STRING: "$7L k2" As you can see, some variables are of a STRING type, others are Hex-STRING. So, I'm trying to write a simple MIB to force them all to come out as Hex-STRING. This is how far I've gotten: TEST-MIB DEFINITIONS ::= BEGIN PhysAddress ::= TEXTUAL-CONVENTION DISPLAY-HINT "1x:" STATUS current SYNTAX OCTET STRING test OBJECT-TYPE SYNTAX PhysAddresss MAX-ACCESS read-only STATUS current ::= { 1 3 6 1 2 1 10 127 1 3 3 1 2 } END However, snmpwalk doesn't seem to notice my textual convention (even though the "test" variable is being recognized). I still get a mixture of STRINGs and Hex-STRINGs. Could anybody point out where my mistake is? snmpwalk -v2c -cpublic 192.168.1.2 TEST-MIB::test ... TEST-MIB::test.216 = Hex-STRING: 24 37 4C 0B A2 DA TEST-MIB::test.217 = STRING: "$7L f:"
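
    A cleaned-up version of that MIB, as a sketch: the posted one has a spelling mismatch (PhysAddress vs PhysAddresss), lacks the IMPORTS and DESCRIPTION clauses SMIv2 wants, and an unconstrained OCTET STRING gives the parser little reason to render hex. The subtree names below are illustrative anchors for the same numeric OID, not official DOCSIS definitions:

        TEST-MIB DEFINITIONS ::= BEGIN

        IMPORTS
            OBJECT-TYPE, transmission   FROM SNMPv2-SMI
            TEXTUAL-CONVENTION          FROM SNMPv2-TC;

        -- anchor .1.3.6.1.2.1.10.127.1.3.3.1 by name rather than a bare numeric list
        docsIfMib       OBJECT IDENTIFIER ::= { transmission 127 }
        cmStatusEntry   OBJECT IDENTIFIER ::= { docsIfMib 1 3 3 1 }

        TestPhysAddress ::= TEXTUAL-CONVENTION
            DISPLAY-HINT "1x:"
            STATUS       current
            DESCRIPTION  "Six octets displayed as colon-separated hex"
            SYNTAX       OCTET STRING (SIZE (6))

        test OBJECT-TYPE
            SYNTAX      TestPhysAddress
            MAX-ACCESS  read-only
            STATUS      current
            DESCRIPTION "Modem MAC address, forced to hex display"
            ::= { cmStatusEntry 2 }

        END

    Then load it explicitly, e.g. snmpwalk -M +/path/to/mibs -m +TEST-MIB -v2c -cpublic 192.168.1.2 TEST-MIB::test, so net-snmp actually picks up the textual convention.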

    Read the article

  • How can I password protect & let cgi-bin to work?

    - by jaaaaaaax
    This is taken from sites-available directory. It's a virtual host setting for apache. Accessing myiphere/cgi-bin/ throws 403. The directory setting for /var/www2/ drwxrwxrwx 8 www-data www-data NameVirtualHost myiphere <VirtualHost myiphere> ServerAdmin webmaster@localhost DocumentRoot /var/www2/ <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www2/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined ServerSignature On Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory>
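
    A hedged sketch of adding HTTP Basic auth while keeping CGI execution; the /etc/apache2/.htpasswd path and user name are examples, not from the original config. Note also that requesting /cgi-bin/ itself (rather than a specific script) may return 403 simply because Indexes is off for that directory:

        # one-time: create the password file
        htpasswd -c /etc/apache2/.htpasswd someuser

        # add inside the <Directory /var/www2/> block
        # (and inside <Directory "/usr/lib/cgi-bin"> too, if the scripts should also be protected):
            AuthType Basic
            AuthName "Restricted area"
            AuthUserFile /etc/apache2/.htpasswd
            Require valid-user

        # then reload Apache
        /etc/init.d/apache2 reload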

    Read the article

  • Erase just the free space on my hard drive

    - by Patriot
    I'm about to give away an older computer with just the Windows XP operating system intact and all other programs uninstalled. However, upon peeking at the "free space" with software called "Recuva", I notice lots of deleted things that could be recoverable. Some of these include sensitive data files, PDFs, and other personal items that I would not want retrieved. I ran a program called "Eraser" to try and overwrite that data, but it failed to do an adequate job. I also tried to do the job with "Glary Utilities" but it failed too. Short of installing a new, very cheap hard drive and re-installing the bare-bones operating system, I'm out of ideas. EDIT - WOW!!! I was not really expecting this many GREAT ideas. My next question is this: if I go the DBAN route and truly wipe the hard drive, then restore my disc image (I use Acronis True Image), will it also restore the free-space data? Does imaging just copy readable data? I have an old image of when the OS was first installed.
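
    Windows itself has a built-in free-space wiper that is worth a try before reaching for DBAN; this is a sketch (cipher.exe ships with XP Professional, availability on Home editions may vary), run from a Command Prompt:

        cipher /w:C:\

    It makes three passes (zeros, ones, random data) over the unallocated space of the volume containing the given path, leaving live files alone, so expect it to take a while on a large drive.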

    Read the article

  • Ubuntu in VirtualBox File Modified Time in Future and PHP slow file operations

    - by user1750
    For some reason, some of my files have a last modified date in the future. In addition to this, file operations in PHP are SUPER slow. For example, rebuilding the Symfony2 cache can take over 40 seconds (it takes 1-2 seconds on my MacBook Pro). Notice the time for ListingsCRUDController.php. It just says "2012". In order to see the date more clearly I ran ls --time-style="full-iso" -l For some reason it shows that this file's last modified date is ~5 hours into the future. System time: To make things more confusing, the system will intermittently speed up. Suddenly, my app will start serving requests in 1-2 seconds (down from 40 seconds) for no apparent reason. I mean I don't do anything to my code/system config - it just changes. Also, during a slow PHP request, the php5-fpm process (nginx) uses 100% of the CPU for the duration of the request. This is the second VM this has happened on and I need to know why it's doing this. It has become unusable. Information About My Setup VirtualBox 4.2.0 Host: MacBook Pro Guest: Ubuntu Server 12.04 Package dkms is installed Timezones match for Ubuntu and PHP. Things I've Tried Both Apache and Nginx. APC enabled and disabled. Xdebug enabled and disabled. 1 processor up to 4 processors. 1GB memory up to 4GB memory. I've installed Ubuntu using the regular kernel and the VM kernel.
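
    A hedged first check is whether the guest clock has drifted ahead of the host and which files got stamped in the future; the /var/www path is a placeholder for wherever the project lives, and the ntpdate/ntp packages are assumed installable (with Guest Additions installed, VBoxService normally keeps the guest clock synced to the host):

        date                                            # compare against the host's clock
        find /var/www -newermt now                      # files whose mtime is in the future
        sudo ntpdate -u pool.ntp.org                    # one-off resync
        sudo apt-get install ntp                        # keep it synced from now on
        find /var/www -newermt now -exec touch -c {} +  # optional: reset the bogus timestamps

    Future-dated files matter here because Symfony's cache freshness checks compare file mtimes, so a clock that jumps around may explain both the 40-second rebuilds and the sudden, unexplained speed-ups.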

    Read the article

  • Convert raw IMAP server data into local folders, then upload partial dataset to new IMAP server?

    - by Manca Weeks
    I am transitioning a company with about 30 IMAP accounts, loaded with data (about 77GB total), to a new email host. The majority of the data will be converted into a local archive and distributed to the company computers as a static reference data set. The server side folders the users absolutely cannot do without being on the server will be uploaded back to the new server. I used Mac OS X Mail (Snow Leopard 10.6.6) to download the content. I notice some messages have the name [xxx].partial.emlx, which leads me to believe they have not been downloaded all the way. I have root access to the mail server data and could download the IMAP server data via FTP. I am not sure what utility to use to convert that data to local Mail.app mailboxes. Furthermore, I would appreciate any input on the best way to upload a portion of the data to the new server (GoDaddy), preserving the original dates of the messages. edit OK - forget the raw server data. I found a script that apparently does pretty good archiving IMAP folders to local mbx files. My main quest now is to batch upload a mailbox hierarchy to the new IMAP server without having to start-stop and deal with similar issues. Anyone know of a utility (hopefully for OS X, but if not, I'll fire up my XP virtual system...) that would be capable of this? Thanks, M
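
    For the batch-upload step, imapsync is one commonly used tool that walks a whole folder hierarchy in one run and preserves message internal dates. This is a sketch with placeholder hosts, folder names and credentials, and it assumes imapsync can be installed (MacPorts/CPAN on the Mac, or any Linux box with IMAP access to both sides):

        imapsync \
          --host1 mail.oldhost.example --user1 someuser --password1 'oldsecret' \
          --host2 imap.newhost.example --user2 someuser --password2 'newsecret' \
          --folder 'ServerSideStuff' --folder 'ServerSideStuff/Clients' \
          --syncinternaldates

    Repeating --folder limits the run to just the folders that genuinely need to live on the new server, and --syncinternaldates keeps the original message dates intact.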

    Read the article

  • Problems sending email using .Net's SmtpClient

    - by Jason Haley
    I've been looking through questions on Stack Overflow and Server Fault but haven't found the same problem mentioned - though that may be because I just don't know enough about how email works to understand that some of the questions are really the same as mine ... here's my situation: I have a web application that uses .NET's SmtpClient to send email. The configuration of the SmtpClient uses an SMTP server, username and password. The SmtpClient code executes on a server that has an IP address not in the domain the SMTP server is in. In most cases the emails go without a problem - but not AOL (and maybe others - but that is one we know for sure right now). When I look at the headers in the message that was kicked back from AOL it has one fewer line than the successful messages Hotmail gets: AOL Bad Message: Received: from WEBSVRNAME ([##.###.###.###]) by domainofsmtp.com with MailEnable ESMTP; Mon, 18 Jun 2012 09:48:24 -0500 MIME-Version: 1.0 From: "[email protected]" <[email protected]> ... Good Hotmail Message: Received: from mail.domainofsmtp.com ([###.###.###.###]) by subdomainsof.hotmail.com with Microsoft SMTPSVC(6.0.3790.4900); Thu, 21 Jun 2012 09:29:13 -0700 Received: from WEBSVRNAME ([##.###.###.###]) by domainofsmtp.com with MailEnable ESMTP; Thu, 21 Jun 2012 11:29:03 -0500 MIME-Version: 1.0 From: "[email protected]" <[email protected]> ... Notice that the Hotmail message headers have an additional line. I'm confused as to why the web server's name and IP address are even in the headers, since I thought I was using the SmtpClient to go through the SMTP server (hence the need for the username and password of a valid email box). I've read about SPF, DKIM and SenderID, but at this point I'm not sure if I would need to do something with the web server (and its IP/domain) or the domain the SMTP is coming from. Has anyone had to do anything similar before? Am I using the SMTP server as a relay? Any help on how to describe what I'm doing would also help.
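
    The extra Received: header is normal relay behaviour: the SMTP server records which machine handed it the message, and receivers like AOL check whether that handoff is authorized. Publishing an SPF record for the From: domain is the usual first step; this is an illustrative DNS TXT record only, with a documentation IP standing in for the real web server address:

        ; TXT record on the zone of the From: domain (illustration only)
        example.com.  3600  IN  TXT  "v=spf1 mx a:mail.domainofsmtp.com ip4:203.0.113.10 ~all"

    The mx/a/ip4 mechanisms list every host allowed to send for the domain; ~all soft-fails everything else, which is a safer starting point than -all while testing.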

    Read the article

  • puppet master --compile logs errors to stdout

    - by danny
    I see a bug about this that was accepted and then closed a year ago: http://projects.puppetlabs.com/issues/3670 but I'm using puppet 2.7.14 and am getting the same issue. I'm trying to use "puppet solo" (i.e. just running puppet apply on each server to be configured) as I only have 2 or 3 servers in this project and adding another server as a puppetmaster would be completely overkill. Unless I'm mistaken, the best way to apply a node manually to a server is to do: puppet master --compile=mynode > catalog.json puppet apply --catalog catalog.json But the puppet master command outputs a couple of warnings and notices to stdout, mixed in with the desired json content. And it uses colored output so I can't just pipe it through egrep -v '^warning:' EDIT: I guess it's not too big of a deal to use grep - since puppet 2.7 pretty-prints the actual content and the warnings don't ever start with spaces, piping the output through egrep '^( |{|})' works So my questions are basically: Is there a better way than this to apply a puppet node without using a puppetmaster? I can't really find any good references online to using puppet without a puppetmaster, even though that seems like a perfectly reasonable thing to do for a small project. Is there a setting or flag that I'm missing that will get puppet master to stop being an asshole and send its errors to stderr instead of stdout? Or do I really have to turn off color logging, then grep to exclude warning: and notice: lines?
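
    A variation on the workaround already mentioned, written out as a sketch: turn off colour so the log lines are plain text, then strip anything that looks like a log-level prefix before feeding the catalog to puppet apply (the node name is a placeholder):

        puppet master --compile=mynode.example.com --color=false \
          | egrep -v '^(warning|notice|info|err):' > catalog.json
        puppet apply --catalog catalog.json

    If that still leaves stray lines, the whitelist approach from the question, egrep '^( |{|})', is equivalent in spirit and keeps only the pretty-printed JSON.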

    Read the article

  • Linux machine can't find its tape drive

    - by Kyle Hodgson
    I have an older HP NetServer LPr with what is apparently a Symbios SCSI card connecting to a Quantum SuperLoader 3 that is DLT based. From time to time, we seem to lose the connection to the autoloader. It's usually due to flaky power, but not totally sure why; sometimes when this happens the Autoloader's LED's are orange and it needs to be power cycled. The annoying workaround currently is to reboot the machine. As it is our production VPN and DNS server in addition to being our backup server, this is less than optimal. In Debian (Sarge) is there not some command one can type to get the card to notice that it has the autoloader connected again? dcr1:/proc# grep -i symbios /proc/pci SCSI storage controller: LSI Logic / Symbios Logic 53c895 (rev 1). dcr1:/proc# uname -a Linux dcr1 2.4.27-3-686 #1 Tue Dec 5 21:03:54 UTC 2006 i686 GNU/Linux dcr1:/proc# mt status mt: /dev/tape: No such device dcr1:/proc# ls -l /dev/tape lrwxrwxrwx 1 root root 8 2007-02-07 16:01 /dev/tape -> /dev/st0 dcr1:/proc# That mt status command will show the actual st0 status when things are working correctly. The No such device message is usually the second clue that we need to reboot - the first clue is usually that the backups didn't run.
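
    On a 2.4 kernel the usual no-reboot trick is to ask the SCSI layer to drop and re-probe the device through /proc/scsi/scsi. This is a sketch: the host/channel/id/lun numbers below are placeholders, so read the real ones out of /proc/scsi/scsi first, and it assumes the Symbios card itself is still healthy after the autoloader is power-cycled:

        cat /proc/scsi/scsi                                   # note the Host/Channel/Id/Lun of the SuperLoader
        echo "scsi remove-single-device 0 0 5 0" > /proc/scsi/scsi
        echo "scsi add-single-device 0 0 5 0"    > /proc/scsi/scsi
        mt -f /dev/st0 status                                 # the tape driver should reattach as st0 again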

    Read the article

  • QuickTime Player sounds much better than iTunes

    - by Gene Goykhman
    I am playing a 320 kbps encoded music MP3 in iTunes and the sound is substantially worse than the exact same file played back in QuickTime Player (Mac OS X 10.8.5). I have maxed out system volume and iTunes playback volume. I have disabled all the audio processing features in iTunes (equalization, sound enhancer, etc.) The audio coming from iTunes still sounds resampled and/or processed, whereas QuickTime Player appears to be playing it "as is". Even when I Get Info on the MP3 file in Finder and play it back directly from the Get Info window it sounds good. It's just iTunes that seems to be mangling the song. I can notice a difference on virtually all my music, so it's not just one particular MP3. I suspect the issue is that iTunes is doing some kind of audio processing but I can't find a way to turn it off. This is the newest iTunes (11.1), but the problem has probably been going on for a while... I just switched to decent earbuds and started noticing the difference. What's the best way to force iTunes to play back the file as-is, or as close as possible to how QuickTime Player/Finder would play it?

    Read the article

  • HTTP Upload Problems

    - by jfoster
    We are running a marketplace on ColdFusion8 and IIS with a widely geographically distributed user base and have been receiving complaints of issues with some HTTP uploads. Most of the complaints are coming from geographically distant locations from our main datacenter on the US east coast. I've attempted to upload the same 70MB file from a US West coast test server to both our main site and a backup running the same code on a different network route and I saw the same issues fairly consistently in both places, so I've ruled out the code, route, and internal network errors. I've also tested uploads using both the native cf upload tag and a third party tool called SaFileUp. I saw the same issues with both upload tools, so I also don't think this is necessarily a ColdFusion problem. I don't have any problems uploading the test file from the East coast to other east coast servers, so I'm beginning to think that the distance between our users and our equipment is a factor. I've also found that smaller files are more likely to succeed than large ones (< 10MB) I tried the test upload with both IE and FF and did notice a difference in the way that the browsers seemed to handle packet errors. IE seemed to have a tough time continuing an upload after dropped / bad packets, whereas FF seemed to have the ability to gracefully resume an upload after experiencing packet problems. Has anyone experienced similar issues? Is there anything we can do on our side to make uploads more forgiving to packet loss or resumable after an error? A different upload tool etc… Do we need upload servers in more than one location to shorten the network routes between clients and servers? Does anyone think that switching uploads to SSL will help (no layer7 packet sniffing may lead to a smoother upload). Thanks.

    Read the article

  • Best way to mount 3-4 monitors like this?

    - by jasondavis
    I just purchased 2 HP 2009m widescreen monitors. They are not the biggest thing on the block, they are like 19-20" and only around $150-200, so I think they are perfect. I bought 2 of them just to make sure I like them, with the full intention of purchasing more to make either a triple or quad display. Now I am stuck trying to decide: if I purchase 1 more to have a triple display, I would then like to just wrap the third monitor to either the right or left side; I could most likely do this without a mount pretty easily. If I decide to go with 2 more monitors to make a quad display, then I would like to add the 2 new monitors directly above the 2 that I have now, so it would make a grid 2 wide and 2 high. I have posted a few photos below to show them now with the 2 I have; you will notice that I have them tilted inwards to make more of a "V" shape instead of them being side by side and "STRAIGHT". Now if I decide to make the grid of 4, then I will need to buy or build a stand to hold them all tightly together (no whitespace or gap between the grid of monitors), but I would like to still have both rows angled inwards to make the slight "V". Do you know of any existing stands I could purchase that would hold all 4 monitors without forcing them to be straight, i.e. while keeping the "V" shape? Any tips appreciated please; also, they do have holes in the back for VESA. a few photos... (they are from an iPhone and the lighting made them not very good, but you can see what I am working with here)

    Read the article

  • Symbolic link not allowed or link target not accessible

    - by TK Kocheran
    I can't seem to get a symlink working in my Apache VirtualHost, no matter what I try and I see the following error in the error log: Symbolic link not allowed or link target not accessible: /var/www/carddesigner I can browse the actual symlink from Linux with no problems whatsoever: $ ls -l /var/www | grep "carddesigner" lrwxrwxrwx 1 rfkrocktk rfkrocktk 64 2011-02-28 16:52 carddesigner -> /home/rfkrocktk/Documents/Projects/Work/carddesigner/build/main/ Additionally, I've made sure that the my VirtualHost allows the FollowSymLinks option: /etc/apache2/sites-enabled/000-localhost: <VirtualHost 127.0.0.1:80> ServerAdmin ########## DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Deny from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel debug CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> RewriteEngine On RewriteLog "/var/log/apache2/mod_rewrite.log" RewriteLogLevel 9 </VirtualHost> I can't seem to find any other configuration files that seem to override this and/or prevent symlinks from being loaded. Any ideas? Here are my permissions on the actual referenced files: $ ls -l ~/Documents/Projects/Work/carddesigner/build/main total 12 drwxrwxrwx 5 rfkrocktk rfkrocktk 4096 2011-02-28 16:11 advanced drwxrwxrwx 2 rfkrocktk rfkrocktk 4096 2011-02-28 16:10 core drwxrwxrwx 2 rfkrocktk rfkrocktk 4096 2011-02-28 16:10 simple Seems like the permissions are good to go, right?
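
    A hedged checklist for this setup (assuming Apache runs as www-data, the Debian/Ubuntu default): every directory in the path to the link target needs the execute/traverse bit for the Apache user, and testing as that user shows exactly where the walk fails:

        namei -m /home/rfkrocktk/Documents/Projects/Work/carddesigner/build/main   # show perms of each path component
        chmod o+x /home/rfkrocktk /home/rfkrocktk/Documents \
                  /home/rfkrocktk/Documents/Projects /home/rfkrocktk/Documents/Projects/Work \
                  /home/rfkrocktk/Documents/Projects/Work/carddesigner \
                  /home/rfkrocktk/Documents/Projects/Work/carddesigner/build
        sudo -u www-data ls /var/www/carddesigner/                                 # should list the target contents, not fail

    If SymLinksIfOwnerMatch were in effect for /var/www the link owner and target owner would also have to match, but the vhost shown uses plain FollowSymLinks there, so traverse permissions are the more likely issue.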

    Read the article

  • Ruby on Rails (Redmine) on Apache - 503 Error

    - by andrewtweber
    I am running a Ruby on Rails application called Redmine. It's been working fine, but today it's giving a 503 Service Temporarily Unavailable error. (It was initially set up by an employee who is now gone.) I check the error log and it says: [Mon Nov 21 11:03:30 2011] [error] (111)Connection refused: proxy: HTTP: attempt to connect to 127.0.0.1:3000 (127.0.0.1) failed [Mon Nov 21 11:03:30 2011] [error] ap_proxy_connect_backend disabling worker for (127.0.0.1) Here's a chunk of my Apache config <VirtualHost *:80> ServerName redmine.{domain}.com RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f RewriteRule ^/(.*)$ balancer://redminecluster%{REQUEST_URI} [P,QSA,L] </VirtualHost> <Proxy balancer://redminecluster> BalancerMember http://127.0.0.1:3000 </Proxy> I found this link: http://www.redmine.org/boards/2/topics/20561 which suggests I simply need to "start the redmine server." I've tried /etc/init.d/redmine start which gives me this output => Booting Mongrel => Rails 2.3.11 application starting on http://0.0.0.0:3000 The contents of /etc/init.d/redmine: cd /var/redmine sudo ruby script/server -d -e production One thing I immediately notice is that it says 0.0.0.0 instead of 127.0.0.1. In addition, running top or ps -ef shows no record of a "mongrel" or "redmine" process. I've also tried restarting Apache before and after starting redmine. Not sure where to go from here.
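
    Since nothing is listening on port 3000, a hedged sketch of bringing Mongrel back up and confirming it, using the paths from the init script in the question (the log tail is simply the usual place the real failure reason shows up):

        cd /var/redmine
        sudo ruby script/server -e production -p 3000 -d   # -d daemonizes; binding 0.0.0.0 still answers on 127.0.0.1
        sleep 2 && netstat -tlnp | grep :3000              # confirm Mongrel is actually listening
        tail -n 50 log/production.log                      # if it exits immediately, look here (DB down, missing gem, etc.)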

    Read the article

  • Files not being copied to AFP volume when copying through the Finder

    - by cefstat
    I am trying to copy files from my Macbook's hard disk to my NAS. The latter is a ReadyNAS Duo and is mounted as an AFP volume. The files are about 5MB each and I copy them by selecting in a Finder window all the files that I need and then dropping them onto the destination directory. Almost always some of the files do not get copied to the NAS. For example, if I select 200 files and then start the copying, everything looks at the beginning normal (while the copying takes place the Finder window for the destination directory is updated to show 200 files while it was empty before), but after the copying ends the destination directory shows less than 200 files (let's say 190). If I copy again the same 200 files to the NAS, without replacing already copied files, the remaining 10 files are usually copied correctly. In a few cases, I have to repeat the process a third time. Notice that the Finder does not give any warning that some of the files have not been copied at any stage. I am wondering if this a known problem with AFP and the Finder and/or if there is something that I can do to solve this problem.
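
    As a workaround that also verifies what actually arrived, rsync over the mounted AFP volume re-copies only what is missing; the source folder and mount point below are placeholders for the real paths:

        rsync -av --progress ~/Pictures/batch/ /Volumes/ReadyNAS/backup/
        rsync -avc --progress ~/Pictures/batch/ /Volumes/ReadyNAS/backup/   # second pass with -c (checksum) to catch short or failed copies

    The second pass reports any file it has to resend, which doubles as a list of what the Finder silently dropped.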

    Read the article

  • Can an image based backup potentially corrupt data?

    - by ServerAdminGuy45
    I'm considering doing image-based backups (Acronis) on production Windows systems during non-peak hours. I'm just wondering if they can potentially lead to application data corruption. Let's say that I have a database that is getting hit pretty hard. Could I potentially have the beginning blocks of the database committed to the image, data inserted into the db (which changes the beginning blocks of the DB on the server but not the image), and then the later blocks of data committed to the image (leading to an inconsistent state)? Here's an example of what I'm trying to illustrate. Imagine a simple data structure which has a number in the front which represents the number of "a"s in a file. The number and data are delimited by a "-". For example: 4-ajjjjjjjajuuuuuuuaoffffa If an "a" is changed, the data structure resets the number in the beginning of the file, such as: 3-ajjjjjjjajuuuuuuuboffffa I assume Acronis writes block by block, being a straight-up image, so here is what I'm envisioning happening with my database: t0: 4-ajjjjjjjajuuuuuuuaoffffa ^pointer is here t1: 4-ajjjjjjjajuuuuuuuaoffffa ^pointer is here (all data before this is committed to the image) t2: 4-ajjjjjjjajuuuuuuuboffffa ^pointer is here (all data before this is committed to the image) Also notice how one of the "a"s changes to a "b". There are only 3 "a"s now. t3: 4-ajjjjjjjajuuuuuuuboffffa ^pointer is here (all data before this is committed to the image) The final image now reads "4-ajjjjjjjajuuuuuuuboffffa", while the true data is "3-ajjjjjjjajuuuuuuuboffffa", leading to a corrupt "database". Basically, changes further down the chain of blocks could be reflected in the image while important header and synchronization data had already been committed. The out-of-date header information doesn't accurately reflect the structure of the blocks to come.

    Read the article

  • .htaccess error "not allowed here" for all for all instructions

    - by andres descalzo
    I am using Debian Lenny and Apache 2. I changed the default .htaccess file with: AllowOverride AuthConfig But I always get the error message not allowed here when putting any instructions in the .htaccess file. EDIT: file default: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/ <Directory /> Options FollowSymLinks Order allow,deny Allow from all AllowOverride All </Directory> <Directory /var/www/> Options Indexes FollowSymLinks Includes #AllowOverride All #AllowOverride Indexes AuthConfig Limit FileInfo AllowOverride AuthConfig Order allow,deny Allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> .htaccess: #Options +FollowSymlinks # Prevent Directoy listing Options -Indexes # Prevent Direct Access to files <FilesMatch "\.(tpl|ini)"> Order deny,allow Deny from all </FilesMatch> # SEO URL Settings RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA] PHP info: apache2handler Apache Version = Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny10 with Suhosin-Patch Apache API Version = 20051115 Server Administrator = webmaster@localhost Hostname:Port = hw-linux.homework:80 User/Group = www-data(33)/33 Max Requests = Per Child: 0 - Keep Alive: on - Max Per Connection: 100 Timeouts = Connection: 300 - Keep-Alive: 15 Virtual Server = Yes Server Root = /etc/apache2 Loaded Modules = core mod_log_config mod_logio prefork http_core mod_so mod_alias mod_auth_basic mod_authn_file mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user mod_autoindex mod_cgi mod_deflate mod_dir mod_env mod_mime mod_negotiation mod_php5 mod_rewrite mod_setenvif mod_status
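
    The directives in that .htaccess need more override classes than AuthConfig alone grants: the Options lines need the Options class, the FilesMatch/Order/Deny block needs Limit, and the mod_rewrite directives need FileInfo. A hedged sketch of the <Directory /var/www/> block, followed by an Apache reload:

        <Directory /var/www/>
            Options Indexes FollowSymLinks Includes
            AllowOverride AuthConfig FileInfo Indexes Limit Options
            Order allow,deny
            Allow from all
        </Directory>

    Then sudo /etc/init.d/apache2 reload; AllowOverride All also works if you are comfortable letting .htaccess override everything.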

    Read the article

  • supervisord launches with wrong setuid

    - by friendzis
    I am trying to test a pilot system with nginx connecting to a uwsgi-served application controlled by supervisord, running on Ubuntu Server. The application is written in Python with Flask in a virtualenv, although I'm not sure if that is relevant. To test the system I have created a simple hello world with Flask. I want nginx and uwsgi both to run as the www-data user. If I launch uwsgi "manually" from a root shell I can see uwsgi processes running as the appropriate user (www-data). However, if I let supervisor launch the application, something strange happens: uwsgi processes are running under my user (friendzis). Consequently, the socket file gets created under the wrong user and nginx cannot communicate with my application. note: the Linux server runs as a Hyper-V VM, under Windows Server 2008. Relevant configuration: [uwsgi] socket = /var/www/sockets/cowsay.sock chmod-socket = 666 abstract-socket = false master = true workers = 2 uid = www-data gid = www-data chdir = /var/www/cowsay/cowsay pp = /var/www/cowsay/cowsay pyhome = /var/www/cowsay module = cowsay callable = app supervisor [program:cowsay] command = /var/www/cowsay/bin/uwsgi -s /var/www/sockets/cowsay.sock -w cowsay:app directory = /var/www/cowsay/cowsay user = www-data autostart = true autorestart = true stdout_logfile = /var/www/cowsay/log/supervisor.log redirect_stderr = true stopsignal = QUIT I'm sure I'm missing some minor detail, but I'm unable to notice it. Would appreciate any suggestions.
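
    Two things stand out in the configs shown, offered as a hedged diagnosis rather than a definitive fix: the supervisor command line never loads the [uwsgi] ini (so its uid/gid/master settings are ignored), and supervisord can only switch a program to www-data if supervisord itself was started as root. A sketch of a tighter program block, with the ini path assumed to be wherever that [uwsgi] section lives:

        ; /etc/supervisor/conf.d/cowsay.conf - sketch
        [program:cowsay]
        command = /var/www/cowsay/bin/uwsgi --ini /var/www/cowsay/uwsgi.ini
        directory = /var/www/cowsay/cowsay
        user = www-data
        autostart = true
        autorestart = true
        stdout_logfile = /var/www/cowsay/log/supervisor.log
        redirect_stderr = true
        stopsignal = QUIT

    Check who owns the daemon with ps -o user,pid,cmd -C supervisord; if it shows friendzis rather than root, restarting supervisord from root (or via its init script) is what lets the user = www-data line take effect.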

    Read the article

  • SQL server queries are really slow only on first run

    - by JoelFan
    Somewhat strange problem... when I start my .NET app for the first time after rebooting my machine, the SQL Server queries are really slow... when I pause the debugger, I notice that it's hanging on getting the response from the query. This only happens when connecting to a remote SQL Server (2008)... if I connect to one on my local machine, it's fine. Also, if I restart the app, it works fast, even off the remote SQL Server, and subsequent runs are also fine. The only problem is when I connect to a remote SQL Server for the first time after rebooting my machine. What's more, I have even noticed this same exact behavior with a 3rd party app (also .NET) that also connects to a remote SQL Server. Another piece of info... this has only started happening since I upgraded my machine from XP to Win7 (64 bit). Also, other developers on my team who upgraded to Win7 are seeing the same behavior (both with the app we're developing and the 3rd party .NET app). (copied from http://stackoverflow.com/questions/2014814/sql-server-queries-are-really-slow-only-on-first-run )

    Read the article

  • Slow transfer speed between two servers

    - by Linux Guy
    I have two servers; both network cards' speed is 10Gbps. The inbound bandwidth between the two servers is 10Gbps; the outbound internet bandwidth is 500Mbps. Both servers use public IP addresses on the public and private networks. Both servers transfer and connect on the nginx port, and server B is used for streaming media, like YouTube-style video streams. I checked the transfer speed using the iperf utility. From Server A to Server B # iperf -c 0.0.0.1 -p 8777 ------------------------------------------------------------ Client connecting to 0.0.0.1, TCP port 8777 TCP window size: 85.3 KByte (default) ------------------------------------------------------------ [ 3] local 0.0.0.0 port 38895 connected with 0.0.0.1 port 8777 [ ID] Interval Transfer Bandwidth [ 3] 0.0-10.8 sec 528 KBytes 399 Kbits/sec My Current Connections in Server B # netstat -an|grep ":8777"|awk '/tcp/ {print $6}'|sort -nr| uniq -c 2072 TIME_WAIT 28 SYN_RECV 1 LISTEN 189 LAST_ACK 139 FIN_WAIT2 373 FIN_WAIT1 3381 ESTABLISHED 34 CLOSING Server A Network Card Information Settings for eth0: Supported ports: [ TP ] Supported link modes: 100baseT/Full 1000baseT/Full 10000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Advertised link modes: 10000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: Yes Speed: 10000Mb/s Duplex: Full Port: Twisted Pair PHYAD: 0 Transceiver: external Auto-negotiation: on MDI-X: Unknown Supports Wake-on: d Wake-on: d Current message level: 0x00000007 (7) drv probe link Link detected: yes Server B Network Card Information Settings for eth2: Supported ports: [ FIBRE ] Supported link modes: 10000baseT/Full Supported pause frame use: No Supports auto-negotiation: No Advertised link modes: 10000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: No Speed: 10000Mb/s Duplex: Full Port: Direct Attach Copper PHYAD: 0 Transceiver: external Auto-negotiation: off Supports Wake-on: d Wake-on: d Current message level: 0x00000007 (7) drv probe link Link detected: yes The problem is: as you can see from the iperf output, the transfer speed from server A to server B is slow. When I restart the network service the connection is OK; after about 2 minutes, it gets slow again. How could I troubleshoot the slow speed issue and fix it on server B? Note: if there are any other commands I should execute on the servers for more information that might help resolve the problem, let me know in the comments.
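
    A hedged next step is to rule out TCP window limits and NIC offload quirks before blaming the path; the iperf flags and sysctls below are standard, but the buffer values are purely illustrative, not recommendations:

        # bigger window and parallel streams: does throughput scale at all?
        iperf -c 0.0.0.1 -p 8777 -w 4M -P 4 -t 30

        # raise kernel socket buffer ceilings (illustrative 16MB values)
        sysctl -w net.core.rmem_max=16777216
        sysctl -w net.core.wmem_max=16777216
        sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
        sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

        # inspect offload settings on the 10G card while the slowdown is happening
        ethtool -k eth2

    The "fine for two minutes after a network restart" pattern is also worth correlating with the connection counts above: thousands of sockets in TIME_WAIT/FIN_WAIT can point at ephemeral-port or conntrack pressure rather than raw link speed.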

    Read the article

  • Error while configuring kerberos5 using MacPorts

    - by ario
    While trying to install libmemcached via MacPorts, I hit the following issue: libmemcached @0.40 +universal ---> Computing dependencies for libmemcached ---> Dependencies to be installed: cyrus-sasl2 kerberos5 ---> Configuring kerberos5 Error: org.macports.configure for port kerberos5 returned: configure failure: command execution failed Error: Failed to install kerberos5 It tells me to look in the log for details. Here's the last bit of the log file: :info:configure checking for setupterm in -lcurses... no :info:configure checking for setupterm in -lncurses... no :info:configure checking for tgetent... no :info:configure configure: error: Could not find tgetent; are you missing a curses/ncurses library? :info:configure configure: error: /bin/sh './configure' failed for appl/telnet :info:configure Command failed: cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_net_kerberos5/kerberos5/work/krb5-1.7.2/src" && ./configure --prefix=/opt/local --disable-dependency-tracking --mandir=/opt/local/share/man :info:configure Exit code: 1 :error:configure org.macports.configure for port kerberos5 returned: configure failure: command execution failed :debug:configure Error code: NONE :debug:configure Backtrace: configure failure: command execution failed while executing "$procedure $targetname" :info:configure Warning: targets not executed for kerberos5: org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install :error:configure Failed to install kerberos5 :debug:configure Registry error: kerberos5 not registered as installed & active. invoked from within "registry_active ${subport}" invoked from within "$workername eval registry_active \${subport}" :notice:configure Please see the log file for port kerberos5 for details: /opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_ports_net_kerberos5/kerberos5/main.log It seems to say it's missing ncurses. Looks like it's there though, since if I run port installed I see these: ncurses @5.7_0 ncurses @5.9_1 (active) ncursesw @5.7_0 Any ideas on how to get around this error?

    Read the article

  • HAProxy - HTTP and SSL pass-through config

    - by Bill
    I've currently got an HaProxy LB solution in place and everything is working fine however we are having an issue with a very few clients who cannot get to our site via HTTPS (SSL) they can browse our site in Http but as soon as they click on an absolute HTTPS link they are taken to our home page instead. Wondering if anyone can look at our config below and see if there's something awry. I believe we are on HaProxy 1.2.17 global log 127.0.0.1 local0 log 127.0.0.1 local1 notice #log loghost local0 info maxconn 6144 #debug #quiet user haproxy group haproxy defaults log global mode http option httplog option dontlognull retries 3 redispatch maxconn 2000 contimeout 5000 clitimeout 50000 srvtimeout 50000 stats auth # admin password stats uri /monitor listen webfarm # bind :80,:443 bind :443 mode tcp balance source #cookie SERVERID insert indirect #option httpclose #option forwardfor #option httpchk HEAD /check.cfm HTTP/1.0 server webA 111.10.10.1 #server webB 111.10.10.2 server webB 111.10.10.3 server webC 111.10.10.4 listen webfarmhttp :80 mode http balance source # option httpclose option forwardfor # option httpchk HEAD /check.cfm HTTP/1.0 option httpchk /check.cfm server webA 111.10.10.1 #server webB 111.10.10.2 server webB 111.10.10.3 server webC 111.10.10.4 listen monitor :8443 mode http balance roundrobin #cookie SERVERID insert indirect option httpclose option forwardfor #option httpchk HEAD /check.txt HTTP/1.0 #option httpchk HEAD /check.cfm HTTP/1.0 server webA 111.10.10.1 server webB 111.10.10.2
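
    One hedged explanation for only "a very few clients" breaking on HTTPS: the SSL listener is mode tcp with balance source, so persistence depends entirely on the client's source IP, and users behind large proxy farms or mobile carriers can change IPs mid-session and land on a different backend. Cookie persistence is not possible on a passthrough TCP listener, but newer HAProxy (1.4/1.5+) can pin clients with a stick table instead of a plain source hash; this sketch is not valid on 1.2.17 and assumes an upgrade:

        listen webfarm_ssl
            bind :443
            mode tcp
            balance roundrobin
            stick-table type ip size 200k expire 30m
            stick on src
            server webA 111.10.10.1:443 check
            server webB 111.10.10.3:443 check
            server webC 111.10.10.4:443 check

    The other common route is terminating SSL in front of HAProxy (or, on 1.5+, with the ssl keyword on the bind line) so the HTTPS farm can use the same cookie persistence as the HTTP one.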

    Read the article
