Search Results

Search found 61297 results on 2452 pages for 'open files'.


  • Browser history, by window

    - by Alex
    I use Chrome, but can switch to another browser if the feature I am about to describe is available. Browsing history in Chrome, AFAIK, is sorted exclusively chronologically. Very often, however, I will be working on a particular task on my laptop and have many tabs open in a single Chrome window for that task. Before finishing that task, I may need to work on something else, so I will open another window, minimize the first one, and start researching an entirely different issue. Over the course of a day, I may end up with 10-15 windows with many tabs each. This raises two issues: (a) memory usage and (b) quickly switching between the most relevant two or three windows. I solve these two problems like any regular guy probably would: by closing windows. I want to be able to reopen specific windows that I have closed, such that the tabs that were open in that window at the time it was closed will reopen. Ideally, closed windows would be sorted by the time they were closed and identified by the tabs that were open (even more ideally, I would be able to name these windows, either contemporaneously or in the history menu). Now that I describe this, what I am really asking is: does any browser offer the ability to "save and close" windows? (This is distinct from an option to auto-restore tabs upon reopening the browser.) Thank you.

    Read the article

  • Symantec Endpoint Protection Virus Definitions

    - by Gus Denton
    I have done some Googling but I cannot get a definitive answer, certainly not from the Symantec KB. I have a virtualised Win 2003 R2 server, 32-bit. It has been provisioned to me with the Symantec Endpoint Protection 11.0.62xxx CLIENT (not a definitions server). The directory C:\Program Files\Common Files\Symantec Shared\VirusDefs is 750MB. It doesn't contain .tmp directories, so it is NOT a case of corrupt definitions. It does contain directories named with a date pattern, YYYYMMDD.xxx. Some of these folders are 12 months old and I would like to recover the space. The Symantec forums are full of this stuff, but a lot of the postings contain links back to documents that are not specific to the Endpoint Protection client. It appears that I should be able to delete the older folders, restart the service, and all will be OK; however, there is a warning about having LiveUpdate Administrator installed. Firstly, I have no idea if I have this installed: how do I check? Secondly, can I just ditch these old files and restart? Regards, Gus Denton, Learning and Teaching, Uni of New South Wales, Sydney, Australia. For those trying to assist me, thank you. I have followed some instructions found on the Symantec site and assumed that the response from Nixphoe would resolve my issue. It appears that, as I am on a provisioned VM from a central IT unit, I cannot run the Symantec commands (smc -stop) from the Run prompt, as my admin credentials do not get me in. Basically, I need to claw back some disk space from the C: drive, which is being filled up with WSUS patches and Symantec files. I have managed to delete one Symantec cache through the LiveUpdate control panel and recovered 470MB. I suppose my last question, for those more experienced than myself, is: can I simply remove, say, the two oldest virus definition folders without completely foobarring Endpoint Protection and the server? Regards, Gus

    Read the article

  • Running git-svn with cron results in garbage in .git

    - by Paul
    I've setup a git-svn repo with cron to fetch from the svn repo daily. I have a script to do the fetching, and this is what is invoked by cron. Everything is fine with the repo, and the script works fine when executed manually. However, when it runs under cron, empty files get dropped into the .git directory. The files have names that look like they are some base64 output, e.g. juTrvjP6m8 and kcKf3hu3b4. Two of these files show up for every cron run. I thought these might be commit hashes, but they're not, git-show says it's an unknown revision. I set-up the repo as follows: git svn init http://svn.ip.addr/repo git svn fetch svn-remote My script looks like this: cd /gitsvn/dir git svn fetch svn-remote git svn push pub The last line pushes the repo to a separate (bare) public repo from which others can clone. I'm piping the output from the cron job to a file, which looks like this: fatal: unable to run 'git-svn' Counting objects: 21, done. Delta compression using up to 2 threads. Compressing objects: 100% (10/10), done. Writing objects: 100% (11/11), 59.08 KiB, done. Total 11 (delta 8), reused 0 (delta 0) To /gitpub/repo.git 360faf5..a153b0d trunk -> trunk The line "fatal: unable to run 'git-svn'" is alarming, but the fetch seems to go ahead anyway. Any suggestions? Where are these empty garbage files coming from, and how to stop them? Am I in for bigger problems in the future? BTW, I'm using git 1.6.3.3.
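    A hedged guess at the cause: cron runs the script with a minimal environment, and "fatal: unable to run 'git-svn'" is the kind of error git emits when it cannot locate its git-svn helper, which could also explain the stray empty files appearing only on cron runs. Below is a sketch of a cron-safe wrapper; the PATH value and the log file are placeholders, and the final line is written as a plain git push on the assumption that the existing "git svn push pub" line was meant to push the fetched history to the bare public repo.

    ```bash
    #!/bin/sh
    # Hedged sketch of a cron-safe wrapper. cron provides a very small
    # environment, so set PATH explicitly; adjust it to wherever git and
    # git-svn actually live on this machine.
    PATH=/usr/local/bin:/usr/bin:/bin
    export PATH

    cd /gitsvn/dir || exit 1

    # Capture everything so failed runs (and any files they leave behind)
    # are easy to trace; the log path is a placeholder.
    git svn fetch svn-remote >> /var/log/git-svn-cron.log 2>&1
    git push pub             >> /var/log/git-svn-cron.log 2>&1
    ```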

    Read the article

  • How do I find information about a particular trojan? "W32/Smalltroj.XVGT", as reported by Norman

    - by Lasse V. Karlsen
    I tried checking the Norman antivirus page, Virus-descriptions, but sadly it seems Norman has intentionally obfuscated their search results (I tried clicking on W, and it seems they just list viruses with a W somewhere in the description, instead of, more typically, all viruses with a name starting with W). Is there a common virus list somewhere, or is it, as I suspect, that every antivirus manufacturer is free to come up with their own identification tags for each virus? Several "vshost32.exe" files, related to Microsoft Visual Studio 2008, have been quarantined on our server today, probably related to a test deployment of some internal software. Some developer machines that have grabbed the latest version of our program have also had the same files quarantined. Now, these files should not have been deployed in the first place, so I'll be looking into that, but whenever any developer now builds a program locally and attempts to debug, the same file is placed in the build output directory, and promptly quarantined. Does anyone have any clues as to how I can go about verifying this before I pointedly ask the antivirus software to go take a hike on this particular virus? Edit: I've copied one of the quarantined files manually to a machine over the network that doesn't have antivirus installed, and compared the file on that machine with a local copy (on that machine) of the vshost32.exe template file, and they're bit-for-bit identical. I guess this is a false positive. I would still like to know if it is possible for me to verify this in any other way, though, since next time such a trojan might be reported in a compiled file that we won't have a pristine copy of.

    Read the article

  • file association in Windows 8

    - by Robith Nuriel Haq
    Associating a file with a program should be easy in Windows. However, I find it rather difficult when I'm working on Windows 8. Associating a file with a desktop application that I install on my computer is easy, because the whole operation is entirely the same as in previous Windows releases. What I find rather difficult is associating a file with a Metro app that I download from the Store. So far, I have been using Multimedia 8 (a Metro app) to open my video files. However, this app cannot handle particular files, like *.dat, that can easily be associated with desktop video applications such as Media Player Classic and the like. When I try to associate my DAT files with Multimedia 8, there is indeed a "look for another app on this PC" option at the bottom of the "open with" pop-up. But alas, I cannot figure out how to locate the Multimedia 8 app to which I want to associate my DAT files (as well as other video files that are not yet associated with this Metro app). If any of you knows how to locate those Metro apps, please tell me. Many thanks.

    Read the article

  • What could cause a WMV to not play to completion in a browser?

    - by Ty W
    A realtor has had videos created for a community she is selling homes for, the people who made the videos gave them to us in WMV format. I can play these videos without any problem in Windows Media Player, VLC, and Quicktime (via Flip4Mac). I can play the videos from their location at videohomeguide.com in my browser without any trouble. However when I upload the files to our server the video stops at about the 1 minute mark in Safari and FireFox on Mac OS X Snow Leopard. I'm not sure if Windows browsers have the same issue because they are loaded using Windows Media Player. http://carolepaul.com/images/uploads/cottageslsjamestown.wmv <- our server, will fail at 1:09ish. http://www.videohomeguide.com/media/cottageslsjamestown.wmv <- should play to completion (3:27ish) The files generate the same MD5 hash on my desktop and on our server. I used WGET to transfer the files, always downloading from videohomeguide.com. Since the files are identical and are playable using VLC/WMP/Quicktime, and playable in the browsers from videohomeguide.com it seems to me that it is some sort of server config... maybe incorrect headers sent to the browsers? Here are the headers sent and received by FireFox on OS X: http://carolepaul.com/images/uploads/cottageslsjamestown.wmv GET /images/uploads/cottageslsjamestown.wmv HTTP/1.1 Host: carolepaul.com User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.2) Gecko/20100316 Firefox/3.6.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive HTTP/1.1 200 OK Date: Mon, 29 Mar 2010 20:43:20 GMT Server: Apache/1.3.41 (Unix) PHP/5.2.6 FrontPage/5.0.2.2635 mod_psoft_traffic/0.2 mod_ssl/2.8.31 OpenSSL/0.9.8b Last-Modified: Wed, 02 Dec 2009 18:08:46 GMT Etag: "1e7919c-198eadc-4b16ad2e" Accept-Ranges: bytes Content-Length: 26798812 Keep-Alive: timeout=10, max=200 Connection: Keep-Alive Content-Type: video/x-ms-wmv
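    A quick, hedged way to test the server-config theory is to compare the headers both hosts send for the same file and to check whether the failing one honours range requests, which some players rely on when streaming long clips (diagnostic only; the URLs are the ones above):

    ```bash
    # Headers from the failing and the working server, then the difference.
    curl -sI http://carolepaul.com/images/uploads/cottageslsjamestown.wmv > ours.txt
    curl -sI http://www.videohomeguide.com/media/cottageslsjamestown.wmv > theirs.txt
    diff ours.txt theirs.txt

    # A 206 Partial Content response here means byte-range requests work.
    curl -sI -H 'Range: bytes=0-1023' \
        http://carolepaul.com/images/uploads/cottageslsjamestown.wmv | head -n 5
    ```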

    Read the article

  • mysql cmd prompt import data.sql

    - by udhaya
    I want to import an SQL dump using the command prompt. I first open the Windows command prompt, navigate to the xampp/mysql/bin folder and run mysql; this is what I get: D:\Program Files\xampp\mysql\bin>mysql ERROR 1045 (28000): Access denied for user 'ODBC'@'localhost' (using password: NO) D:\Program Files\xampp\mysql\bin>mysql -u root -p -h localhost dev1base < dev1base.sql Enter password: D:\Program Files\xampp\mysql\bin> D:\Program Files\xampp\mysql\bin>mysql -u root Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 104 Server version: 5.0.51a Source distribution Type 'help;' or '\h' for help. Type '\c' to clear the buffer. mysql> mysql> -h localhost dev1base < dev1base.sql -> -> -> ->
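    For what it's worth, a minimal sketch of the redirect form run from cmd.exe itself rather than from inside the mysql> client (database and file names are the ones from the question; -p prompts for the root password, which is empty on a default XAMPP install):

    ```
    REM Run these at the cmd.exe prompt, not at the mysql> prompt.
    cd /d "D:\Program Files\xampp\mysql\bin"

    REM Import the dump into the existing dev1base database; when prompted
    REM for the password, just press Enter if root has no password.
    mysql -u root -p dev1base < dev1base.sql
    ```

    The second attempt in the transcript above is actually this form; a silent return to the prompt usually means the import succeeded, since errors are printed to the console. Anything typed after the mysql> prompt is read as SQL, which is why the last attempt only produces continuation arrows.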

    Read the article

  • Apache rewrite rules and special characters

    - by Massimo
    I have a server where some files have an actual %20 in their name (they are generated by an automated tool which handles spaces this way, and I can't do anything about this); this is not a space: it's "%" followed by "2" followed by "0". On this server, there is an Apache web server, and there are some web pages which link to those files, using their name in URLs like http://servername/file%20with%20a%20name%20like%20this.html; those pages are also generated by the same tool, so I (again!) can't do anything about that. A full search-and-replace on all files, pages and URLs is out of the question here. The problem: when Apache gets called with a URL like the one above, it (correctly) translates the "%20"s into spaces, and then of course it can't find the files, because they don't have actual spaces in their names. How can I solve this? I discovered that by using a URL like http://servername/file%2520name.html it works nicely, because then Apache translates "%25" into a "%" sign, and thus the correct filename gets built. I tried using an Apache rewrite rule, and I can successfully replace spaces with hyphens with a syntax like this: RewriteRule (.*)\ (.*) $1-$2 The problem: when I try to replace them with a "%2520" sequence, this just doesn't happen. If I use RewriteRule (.*)\ (.*) $1%2520$2 then the resulting URL is http://servername/file520name.html; I've tried "%25" too, but then I only get a "5"; it looks like the initial "%2" gets somehow discarded. The questions: How can I build such a regexp to replace spaces with "%2520"? Is this the only way I can deal with this issue (other than a full search-and-replace which, as I said, can't be done), or do you have any better idea?

    Read the article

  • Is it possible to have a wireless in-house NAS with wireless data transfer rates of equivalent to SATA speeds?

    - by techaddict
    Basically, I would like to know if it is possible to set up a NAS in my house, accessed wirelessly, that can reach real-life data transfer speeds equivalent to USB 3.0 or an internal SATA hard drive. I have been wanting to do this for some time (a couple of years now). This is what I want to do: plug in a number of hard drives in an array, somewhere in my house, to be left plugged in and never have to be monitored. Ideally several terabytes. Whenever I am home, my computer and laptop should be configured to automatically find the NAS, as easily as plugging in an external hard drive - except completely wirelessly. Data transfer needs to be as seamless and quick as having added another internal hard drive to my laptop. Moreover, data should be able to be accessed without having to copy it over - I should be able to wirelessly access the NAS, browse files, and open files directly from the NAS. For example, say I wanted to open a video - I should be able to play the video that is located on the NAS, directly from the NAS, completely wirelessly. If I wanted to open a .pdf file, I should be able to open it and read it directly from the NAS, as if it were located on my physical internal hard drive. Cost is important as well. Please tell me what equipment I need for this to be possible - I know there are geniuses out there who can tell me whether it is.

    Read the article

  • How to use Zune software to listen to podcasts with generic MP3 player?

    - by Bevan
    I listen to a bunch of podcasts - a great way to fill the otherwise mindless space of a daily commute. My MP3 player is a Transonic brand, appears on my computer as a generic storage device. I've been using iTunes to download the podcasts, and manually moving the files out of the disk folder onto my player, but this is pretty tedious. iTunes also fails to recognise that the files are gone and leaves them in the list. (Actually, iTunes for windows is pretty much a dog, but that's a different rant.) The Zune software is 99% of what I want in a podcast downloader - performs well, looks nice, downloads reliably and so on. Some features - like only downloading the next five unheard episodes of a podcast - are superb. However, if I manually move the files across to the MP3 player, the Zune software concludes that the file has never been downloaded, and downloads it again. This leads me to my question: What is a good way to use the Zune software to download podcasts for listening on a generic MP3 player? Are there any addons for the Zune software to make this easier? Registry hacks? Can I configure the Zune software to not download the same episode multiple times? Is there a way for the Zune software to populate my MP3 player directly, instead of having to copy files?

    Read the article

  • Web based file search in the lan?

    - by Magnetic_dud
    I would like to search files on my LAN easily (over 500k files on SMB shares; it would take ages any other way). I just need to do a quick search on file names; I don't care about content indexing at all, as most of my files are in a proprietary format and the file name is descriptive enough. But date range filters are a must for me. I just need a quick search like voidtools' Everything can do, but in a networked way. The files are on a WHS box (lol, Videos and Music share names are not appropriate for a company, but a license for that Win2003-based OS is cheaper than an XP Home one!). I tried: Lansearch Pro: it is not good for me, as I need a quick index. Network Search Engine: it would be perfect, but does not offer a date range filter. Microsoft Search Server 2008 Express, but it is horrible! First, it does NOT index filenames, and then my Core2Duo is not powerful enough to run it smoothly. Google Desktop with a proxy on localhost to make it run on the LAN, but I don't like the hacked result. The preinstalled Windows Search 4.0, but it totally sucks at choosing the relevance of data - uninstalled. Docco... what's that? I am considering trying: IBM OmniFind, DocFetcher (can it work as a client? I have not investigated yet), Strigi (it looks like it can work as a client, right?). Any ideas/suggestions?

    Read the article

  • Geographically distributed file system with preferred locality

    - by dpb
    Hi all, I'm building an application that needs to distribute a standard file server across a few sites over a WAN. Basically, each site needs to write a lot of miscellaneous files of varying size (some in the 100s of MB range, but most small), and the application is written such that collisions aren't a problem. I'd like to have a system set up that meets the following qualifications: Each site can store files in a shared "namespace"; that is, all the files would show up in the same filesystem. Each site would not send data over the WAN unless necessary; i.e., there would be local storage on each side of the WAN that would be "merged" into the same logical filesystem. Linux and free ($$$) is a must. Basically, something like a central NFS share would meet most of the requirements; however, it would not allow the locally written data to stay local, and all data from remote sides of the WAN would be copied locally all the time. I have looked into Lustre, and have run some successful tests with it; however, it appears to distribute files fairly uniformly across the distributed storage. I have dug through the documentation and have not found anything that will automatically "prefer" local storage over remote storage. Even something that went with the lowest-latency storage would be fine; it would work most of the time, which would meet this application's requirements. Any ideas?

    Read the article

  • How to make new file permission inherit from the parent directory?

    - by Wai Yip Tung
    I have a directory called data. I am running a script under the user id 'robot'; robot writes to the data directory and updates files inside it. The idea is that data is open for both me and robot to update. So I set up the permissions and owner group like this: drwxrwxr-x 2 me robot-grp 4096 Jun 11 20:50 data where both me and robot belong to 'robot-grp'. I changed the permissions and the owner group recursively, like the parent directory. I regularly upload new files into the data directory using rsync. Unfortunately, newly uploaded files do not inherit the parent directory's permissions as I hoped. Instead they look like this: -rw-r--r-- 1 me users 6 Jun 11 20:50 new-file.txt When robot tries to update new-file.txt, it fails due to lack of file permissions. I'm not sure if setting umask helps; in any case, the new files do not really follow it. $ umask -S u=rwx,g=rx,o=rx I'm often confounded by Unix file permissions. Do I even have the right plan? I'm using Debian lenny.
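    A hedged sketch of the usual combination on Linux, assuming the filesystem is mounted with ACL support (the local path and host below are placeholders): the setgid bit makes new files inherit the group, a default ACL grants the group write access regardless of the uploader's umask, and rsync is told to let the destination defaults win instead of copying the source modes.

    ```bash
    # Children of data inherit the robot-grp group ownership.
    chgrp robot-grp data
    chmod g+s data

    # Default ACL: anything created inside data gets group rw (rwx on
    # directories), independent of the creating process's umask.
    # Requires the filesystem to be mounted with the acl option.
    setfacl -d -m g:robot-grp:rwX data

    # Upload without copying the source permissions verbatim, so the
    # destination's defaults (and the ACL above) decide the final mode.
    rsync -rtv --no-perms --chmod=ugo=rwX ./local-data/ me@server:/path/to/data/
    ```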

    Read the article

  • apache2 mysql authentication module and SHA1 encryption

    - by Luca Rossi
    I found myself in a setup on where I need to enable some authentication method using mysql. I already have an user scheme. That user scheme is working like a charm with MD5 password and CRYPT, but when I turn to SHA1sum it says: [Fri Oct 26 00:03:20 2012] [error] Unsupported encryption type: Sha1sum No useful debug informations on log files. This is my setup and some info: debian6 apache and ssl installed packages: root@sistemichiocciola:/etc/apache2/mods-available# dpkg --list | grep apache ii apache2 2.2.16-6+squeeze8 Apache HTTP Server metapackage ii apache2-mpm-prefork 2.2.16-6+squeeze8 Apache HTTP Server - traditional non-threaded model ii apache2-utils 2.2.16-6+squeeze8 utility programs for webservers ii apache2.2-bin 2.2.16-6+squeeze8 Apache HTTP Server common binary files ii apache2.2-common 2.2.16-6+squeeze8 Apache HTTP Server common files ii libapache2-mod-auth-mysql 4.3.9-13+b1 Apache 2 module for MySQL authentication ii libapache2-mod-php5 5.3.3-7+squeeze14 server-side, HTML-embedded scripting language (Apache 2 module) root@sistemichiocciola:/etc/apache2/sites-enabled# dpkg --list | grep ssl ii libssl-dev 0.9.8o-4squeeze13 SSL development libraries, header files and documentation ii libssl0.9.8 0.9.8o-4squeeze13 SSL shared libraries ii openssl 0.9.8o-4squeeze13 Secure Socket Layer (SSL) binary and related cryptographic tools ii openssl-blacklist 0.5-2 list of blacklisted OpenSSL RSA keys ii ssl-cert 1.0.28 simple debconf wrapper for OpenSSL my vhost setup: AuthMySQL On Auth_MySQL_Host localhost Auth_MySQL_User XXX Auth_MySQL_Password YYY Auth_MySQL_DB users AuthName "Sistemi Chiocciola Sezione Informatica" AuthType Basic # require valid-user require group informatica Auth_MySQL_Encryption_Types Crypt Sha1sum AuthBasicAuthoritative Off AuthUserFile /dev/null Auth_MySQL_Password_Table users Auth_MYSQL_username_field email Auth_MYSQL_password_field password AuthMySQL_Empty_Passwords Off AuthMySQL_Group_Table http_groups Auth_MySQL_Group_Field user_group Have I missed a package/configuration or something?

    Read the article

  • nginx server over https using up all available file handles

    - by mmr
    Hi all, So I have an nginx server that's working over https with Sinatra. When I try to download a jnlp file in a configuration that works fine over Mongrel and http (no s), the nginx server fails to serve the file with a 504 error. Subsequent checking of the logs states that this error is due to overflowing the available number of file handles, ie, "24: too many open files". Running sudo lsof -p <nginx worker pid> gets me a huge list of files, all looking like: nginx 1771 nobody 11u IPv4 10867997 0t0 TCP localhost:44704->localhost:https (ESTABLISHED) nginx 1771 nobody 12u IPv4 10868113 0t0 TCP localhost:https->localhost:44704 (ESTABLISHED) nginx 1771 nobody 13u IPv4 10868114 0t0 TCP localhost:44705->localhost:https (ESTABLISHED) nginx 1771 nobody 14u IPv4 10868191 0t0 TCP localhost:https->localhost:44705 (ESTABLISHED) nginx 1771 nobody 15u IPv4 10868192 0t0 TCP localhost:44706->localhost:https (ESTABLISHED) nginx 1771 nobody 16u IPv4 10868255 0t0 TCP localhost:https->localhost:44706 (ESTABLISHED) nginx 1771 nobody 17u IPv4 10868256 0t0 TCP localhost:44707->localhost:https (ESTABLISHED) nginx 1771 nobody 18u IPv4 10868330 0t0 TCP localhost:https->localhost:44707 (ESTABLISHED) nginx 1771 nobody 19u IPv4 10868331 0t0 TCP localhost:44708->localhost:https (ESTABLISHED) nginx 1771 nobody 20u IPv4 10868434 0t0 TCP localhost:https->localhost:44708 (ESTABLISHED) Increasing the number of files that can be opened is no help, because then nginx just blows right past that limit. And no wonder, it looks like it's in some kind of loop to pull all available files. Any idea what's going on, and how to fix it?
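    Two hedged diagnostics, assuming a Linux host (the placeholder PID notation is the same as in the question). The paired localhost:44704->localhost:https and localhost:https->localhost:44704 entries are also worth a second look: they are what you would expect if nginx were proxying requests back to its own https listener, and no file-handle limit survives that kind of loop.

    ```bash
    # What limit does the running worker actually have? worker_rlimit_nofile
    # in nginx.conf only matters if it shows up here.
    grep -i 'open files' /proc/<nginx worker pid>/limits

    # How many of the worker's descriptors are looped-back TLS connections?
    # A number that keeps climbing between runs points at the proxy target.
    sudo lsof -p <nginx worker pid> | grep -c 'localhost:https'
    ```

    If that second number grows steadily, it is worth checking what the proxy_pass/upstream target in the server block actually points at before raising any limits.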

    Read the article

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem : I need to archive to tar files a lot ( up to 60 TB) of big files (usually 30 to 40 GB each). I would like to make checksums ( md5, sha1, whatever) of these files before archiving; however not reading every file twice (once for checksumming, twice for tar'ing) is more or less a necessity to achieve a very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I'd need some way to read a file, feeding a checksumming tool on one side, and building a tar to tape on the other side, something along : tar cf - files | tee tarfile.tar | md5sum - Except that I don't want the checksum of the whole archive (this sample shell code does just this) but a checksum for each individual file in the archive. I've studied GNU tar, Pax, Star options. I've looked at the source from Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning ?
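    One hedged sketch, assuming a GNU tar recent enough to have the --to-command extract option and its TAR_FILENAME variable: the source files are read from disk exactly once, tee duplicates the archive stream, and a second tar unpacks the duplicate straight into a small checksumming helper instead of onto disk. The tape device and the /tmp paths are placeholders.

    ```bash
    #!/bin/bash
    set -e

    # Helper run by tar for every archive member: the member's contents
    # arrive on stdin and tar exports its name as $TAR_FILENAME.
    cat > /tmp/sum-one.sh <<'EOF'
    #!/bin/sh
    sum=$(md5sum)                                  # prints "HASH  -"
    echo "${sum%% *}  $TAR_FILENAME" >> /tmp/archive-manifest.md5
    EOF
    chmod +x /tmp/sum-one.sh

    # One read of the source data: one copy goes to tape, the duplicate is
    # consumed in memory by the second tar, which never touches the disks.
    tar cf - /data/to/archive \
      | tee >(tar xf - --to-command=/tmp/sum-one.sh) \
      > /dev/nst0
    ```

    The manifest ends up in the usual md5sum format, so it can later be verified with md5sum -c against a restored tree. Whether the extra pipe keeps up with LTO-4 line speed is something to benchmark, which is why this is only a sketch.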

    Read the article

  • How do I import large sql file to local LAMP (xampp) environment

    - by mraslton
    I have used Linux to import a large MySQL dump file (into a new database), but I am new to how the process works in a local LAMP environment using XAMPP, as XAMPP does not support SSH. I've downloaded the large_dump_file.sql from the Linux server to my local system. I'm using Windows XP and have used XAMPP to set up LAMP. I am able to access the local_database via phpMyAdmin, but the dump file is too large to import using that app. I'm trying to import the file via the command prompt, but so far with no success. At the prompt: cd .. cd .. cd xampp cd mysql cd bin I've found that mysqlimport is used to import .csv and .txt files, and mysql is used to import .sql files, but I can't find documentation as to whether or not to use the -u and -p options, so I've tried many variations of the command with no luck. What would be the proper command? I've modified the hosts, virtual-hosts conf, and Apache config files. Do I need to change any other config files on my local system? Thanks.
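    For reference, a minimal sketch of the command itself, run from the xampp\mysql\bin directory you navigated to (the database and dump file names are the ones from the question; the path to the dump is a placeholder). The -u and -p options simply name the MySQL user and ask for its password:

    ```
    REM Import the dump into the existing database; -p prompts for the
    REM password (press Enter if the XAMPP root password is empty).
    mysql -u root -p local_database < C:\path\to\large_dump_file.sql

    REM If a very large dump aborts part-way, raising the client packet
    REM limit for this one run sometimes helps.
    mysql -u root -p --max_allowed_packet=256M local_database < C:\path\to\large_dump_file.sql
    ```

    No Apache or hosts changes should be needed for a command-line import; those files only affect how the sites are served.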

    Read the article

  • What could cause the file command in Linux to report a text file as data?

    - by Jonah Bishop
    I have a couple of C++ source files (one .cpp and one .h) that are being reported as type data by the file command in Linux. When I run the file -bi command against these files, I'm given this output (same output for each file): application/octet-stream; charset=binary Each file is clearly plain-text (I can view them in vi). What's causing file to misreport the type of these files? Could it be some sort of Unicode thing? Both of these files were created in Windows-land (using Visual Studio 2005), but they're being compiled in Linux (it's a cross-platform application). Any ideas would be appreciated. Update: I don't see any null characters in either file. I found some extended characters in the .cpp file (in a comment block), removed them, but file still reports the same encoding. I've tried forcing the encoding in SlickEdit, but that didn't seem to have an effect. When I open the file in vim, I see a [converted] line as soon as I open the file. Perhaps I can get vim to force the encoding?
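    A few hedged checks that usually narrow this down (GNU tools assumed; source.cpp stands in for the real file name). file(1) reports "data" once it meets bytes that do not fit any text encoding it recognises, such as a stray control character or an invalid UTF-8 sequence:

    ```bash
    # Look at the first bytes: a BOM or UTF-16 encoding is obvious in hex.
    hexdump -C source.cpp | head -n 3

    # List lines containing bytes outside printable ASCII and whitespace;
    # a single stray byte can make file(1) call the whole thing data.
    LC_ALL=C grep -n '[^[:print:][:space:]]' source.cpp | head

    # Check whether the file is well-formed UTF-8; iconv fails loudly if not.
    iconv -f UTF-8 -t UTF-8 source.cpp > /dev/null && echo "valid UTF-8"
    ```

    The [converted] flag vim shows on load usually means it detected a fileencoding different from your 'encoding' setting and converted on the fly; :set fileencoding? after opening shows what it guessed.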

    Read the article

  • Should be simple: existing laptop with local user and outlook 2007 migrate on same computer to domain user with outlook 2007 emails intact

    - by bifpowell
    I have Dell Laptop with windows 7 64 bit and for the last year it's been just a machine with an account like: machine\john there are files in folders and stuff in c:\users\john and john uses outlook 2007 as a pop3 client and has identifiable local appdata pst files. Now I installed a server and want to have everything be domain-centric so I added this laptop to the domain with admin credentials and then logged in as a domain user as: domain\john.smith Now I want to duplicate machine\john (outlook emails mostly) to domain\john.smith. In the past I used the Files and Settings Xfer Wizard and done. I tried that here and it crunched away for a while, made the file, but the restore had no effect - it ran for a while, had a progress bar, but it's like nothing happened at all afterwards. I've rebooted the machine, logged in as domain administrator as the first user to log on after the restart and tried: c:\users\john xcopy c:\users\john c:\users\john.smith /V /C /F /H /K /Y /E ...and it copies some of it, but when it gets to c:\users\john.smith\appdata\local\application data it chokes "Access denied, unable to create directory" I also tried logging in as domain\john.smith and copying the entire directory that the PSTs are in from machine\john and a lot of the mail was there when I launched outlook after replacing the PSTs, but not all of them??? I got errors about files in use when doing this method, which I figure must be why not all the old emails are in the inbox?... There must be some extremely simple way to do what must be a very common requirement. Any guidance appreciated.
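    On the copy failure itself: "c:\users\john\AppData\Local\Application Data" is not a real folder on Windows 7 but a hidden junction kept for backwards compatibility, and junctions are a classic way to make xcopy fail or loop. A hedged sketch using robocopy (built into Windows 7), with the paths from the question; run it from an elevated prompt with Outlook closed in both profiles so the .pst files are not locked:

    ```
    REM /E copies subdirectories, /XJ skips junction points (the usual cause
    REM of the "Application Data" access-denied error), /COPY:DAT keeps data,
    REM attributes and timestamps, /R:1 /W:1 avoids long retries on locked files.
    robocopy C:\Users\john C:\Users\john.smith /E /XJ /COPY:DAT /R:1 /W:1 /LOG:C:\migrate.log
    ```

    Afterwards, point Outlook under domain\john.smith at the copied .pst files (File > Open > Outlook Data File in Outlook 2007) rather than relying on the transfer wizard to rebuild the mail profile.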

    Read the article

  • OSX 10.6 goes unresponsive

    - by mjb
    This behavior continues to perplex me. My MBP, running 10.6.7, stops responding to all Apple-based software. Whatever software I have open remains open (Terminal, iTunes, Safari), but if I try to use the F-shortcuts or launch any OSX-based software not already open (System Preferences for example) it just bounces in the dock then never launches. I also cannot reboot without hard rebooting. I left terminal open, so I see the following in /var/log/system.log Jun 25 19:39:02 mjb-2 com.apple.ReportCrash.Root[59432]: 2011-06-25 19:39:02.585 ReportCrash[59432:7f1f] Saved crash report for CoreServicesUIAgent[59576] version ??? (???) to /Library/Logs/DiagnosticReports/CoreServicesUIAgent_2011-06-25-193902_localhost.crash Jun 25 19:39:02 mjb-2 com.apple.ReportCrash.Root[59432]: 2011-06-25 19:39:02.586 ReportCrash[59432:b10f] Saved crash report for quicklookd[59571] version ??? (???) to /Library/Logs/DiagnosticReports/quicklookd_2011-06-25-193902_localhost.crash Two requests: (1) please don't send this off to the Apple area so it can die a slow painful rotting death of tumbleweed. (2) Suggest what I should kill -9 or logs to look at to cut this sh*$ out. Cheers, mjb

    Read the article

  • SQL 2008 SP1 crashing almost daily

    - by matijake
    Hey, almost every day our new DB crashes. It is a virtual server residing on the same hardware as 5 other servers, two of them being identical MS SQL 2008 SP1 and two Oracle 11g's, so I can pretty much rule out hardware issues. The server has a dedicated local LUN, 4 vCPUs and 8GB memory with a 2GB Windows swap file. It runs 4 instances. The primary instance is limited to 5GB memory with parallelism set to 4, running on MS SQL 2008 SP1 @ Windows Server 2008 Enterprise R2 x64. Only that primary instance is crashing. After it crashes nothing can connect to it, and it's even impossible to shut it down through the service manager. What I found in the logs is: ***Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\LOG\SQLDump0081.txt SqlDumpExceptionHandler: Process 4788 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process. The whole log can be seen at: http://kabl.org/files/SQLDump0081.txt and a second crash log, made a second later, at: http://kabl.org/files/SQLDump0082.txt I have analyzed the mini crash dump with Microsoft tools, but with no promising results. If it can help, here it is: http://kabl.org/files/SQLDump0081.mdmp Any ideas are greatly welcome, since it is becoming quite a pain in the ass to restart the server almost every day :) Regards, -Matija

    Read the article

  • Can I recover a rm -rf-ed Mercurial repository?

    - by WishCow
    I made the mistake of wiping out my entire project directory with a quick "rm -rf project". Of course, the .hg directory went with it. I had about 15-20 changesets that I had not pushed to anyone, and I would really, really like to get those back. The system is an Ubuntu machine, the partition where the delete happened is ext3, and the project consists mostly of PHP files. I know about the guideline not to write to the disk in question. The first idea was to use the tool named scalpel to get the PHP files back, diff them against the current version from the repo, and somehow carve the changes out. While it succeeded, it did not recover the file names (or there is a switch I'm missing), so I'm left with a few thousand sequentially named .php files, and combing through them is not an option. Can a kind soul please save me and suggest a way to: a) get the repo back, or b) get the files back, with filenames? For those wondering how I did such a stupid thing: I was working on a file in Vim which I wanted to remove from the repository: :!hg rm % This complained that the file is in a subrepository, so I specified the following: :!hg rm % -R engine which complained that the file has modifications, use -f to force. And this is when, somehow, I made up the following command: :!rm -rf % -R engine Somehow, seeing "force" makes me do a rm -rf by reflex.
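    If the partition has seen little write activity since the rm, an ext3-aware undelete tool can sometimes bring back names as well as contents, because it reads old inode data out of the journal. A hedged sketch with extundelete (ext3grep is a similar alternative); the device, mount point and project path are placeholders, and the filesystem must be unmounted or remounted read-only first:

    ```bash
    # Stop all writes to the affected filesystem before anything else.
    mount -o remount,ro /mountpoint      # or: umount /dev/sdXN

    # Try to restore the project directory, filenames included; recovered
    # files are written under ./RECOVERED_FILES in the current directory,
    # which must live on a *different* filesystem.
    extundelete /dev/sdXN --restore-directory path/to/project

    # Fallback: restore everything deleted after a given time
    # (seconds since the epoch), then pick through the results.
    extundelete /dev/sdXN --after $(date -d '2 hours ago' +%s) --restore-all
    ```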

    Read the article

  • JSP Content Issue in Tomcat

    - by gautam vegeta
    There is one application where I work that still uses manual builds, i.e. manually moving the servlet classes and JSP files from Dev to QA and finally to Prod. This is the method used in this application and it can't be changed, for some weird reasons; by the way, this is not the problem. We recently did a manual build where we transferred JSP files from QA to Prod, and we noticed that the served content did not correspond to the updated JSPs but was the same as the JSP files that were present on the server prior to the deployment. We did not restart Tomcat, since JSP files are normally recompiled automatically when they are updated. This problem persisted even six hours after deployment, even allowing for clock differences that might cause some delay. To fix it, we had to go into every JSP file individually, type something, save it, delete that change and save again; then it worked perfectly. In the end the JSP content before and after was never changed; we did this only to change the modification date. If this is a timestamp problem, how is it possible? The old JSP files that were on the server prior to deployment were at least one month old, and the ones being deployed were definitely newer than that. Why did this happen? It did not happen with the same type of deployments earlier. How can we prevent it from happening in the future?
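    A plausible explanation, given that touching the files fixed it: the copied JSPs kept modification times older than the timestamps of the already-compiled versions in Tomcat's work directory, so Jasper saw nothing newer to recompile; clock skew between the build machine and the server can produce the same effect. If it happens again, two hedged workarounds on the Tomcat side (paths are placeholders for your installation and context name):

    ```bash
    # Option 1: give the freshly-copied JSPs a current modification time so
    # Jasper treats them as newer than its compiled .java/.class copies.
    find /path/to/tomcat/webapps/yourapp -name '*.jsp' -exec touch {} +

    # Option 2: discard the compiled JSPs for the app; they are regenerated
    # on the next request (best done while the app or Tomcat is stopped).
    rm -rf /path/to/tomcat/work/Catalina/localhost/yourapp/*
    ```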

    Read the article

  • 403 Forbidden error on Mac OSX - Apache and nginx

    - by tlianza
    Hi All, There are a million questions like this on Google, but I haven't found a solution to my problem. The default Apache install on my Mac is giving 403 Forbidden errors for everything (default directory, user home directory, virtual server, etc). After sifting through the config files, I figured I'd give nginx a try. Nginx serves files fine from it's home directory, but it won't serve files from a subfolder of my user directory. I've configured a simple virtual host, and requesting index.html returns a 403-forbidden. The error message in nginx's log file is pretty clear - it can't read the file: 2011/01/04 16:13:54 [error] 96440#0: *11 open() "/Users/me/Documents/workspace/mobile/index.html" failed (13: Permission denied), client: 127.0.0.1, server: local.test.com, request: "GET /index.html HTTP/1.1", host: "local.test.com" I've opened up this directory to everyone: drwxrwxrwx 6 me admin 204B Dec 31 20:49 mobile And all the files in it: $ ls -lah mobile/ total 24 drwxrwxrwx 6 me admin 204B Dec 31 20:49 . drwxr-xr-x 71 me me 2.4K Dec 31 20:41 .. -rw-r--r--@ 1 me me 6.0K Jan 2 18:58 .DS_Store -rwxrwxrwx 1 me admin 2.1K Jan 4 14:22 index.html drwxrwxrwx 5 me admin 170B Dec 31 20:45 nbproject drwxrwxrwx 5 me admin 170B Jan 2 18:58 script And yet, I cannot figure out why the nginx process cannot read index.html. It's running as the "nobody" user, but the permissions are set such that anyone can read them.
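    Given that the leaf files are world-readable, the thing still worth checking is the path above them: every parent directory needs execute (search) permission for the worker's user, and on OS X ~/Documents is typically mode 700. A hedged check, using the path from the error log:

    ```bash
    # Show the permissions of every directory on the path; any component
    # without o+x (or group access for the worker) blocks the nobody user.
    d=/Users/me/Documents/workspace/mobile
    while [ "$d" != "/" ]; do ls -ld "$d"; d=$(dirname "$d"); done

    # Or just try the read directly as the worker's user.
    sudo -u nobody cat /Users/me/Documents/workspace/mobile/index.html > /dev/null && echo readable
    ```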

    Read the article
