Search Results

Search found 3615 results on 145 pages for 'cron daily'.

Page 126/145

  • Ubuntu on VPS becomes unresponsive: BUG: soft lockup - CPU#0 stuck for 22s

    - by Bhante Nandiya
    We have a VPS running Ubuntu, on Xen. The problem is this: about once a day, for about 20-50 minutes, at a random time, the server becomes completely unresponsive to the outside world. After this period it becomes responsive again, as if nothing had happened; it doesn't lose uptime, it doesn't restart. It just starts responding again as if it had been in suspended animation. These outages occur under conditions of non-exceptional memory and CPU, for example 70% mem, 5% CPU. I have stopped all non-essential services so the usage is very even. These outages don't particularly occur during times of increased memory/CPU use (during daily tasks); they sometimes occur at times of very low CPU use (<2%), but in the past they also occurred during swapping. These blackouts have been occurring both under Ubuntu 12.04 LTS and Ubuntu 14.04 LTS - no change at all (I upgraded Ubuntu specifically to see if it helped this problem). It is possible to log into our web host's site and use their administration console to see error messages from during this time. Presumably these messages are from the Xen virtualization; the main message goes like this: BUG: soft lockup - CPU#0 stuck for 22s! [ksoftirqd/0:3] (repeats many times) SysRq : Emergency Sync (Sometimes this is the only message in the console.) Others seen previously under different load situations include: BUG: soft lockup - CPU#0 stuck for 22s! [swapper/0:0] (repeated many times) or: INFO: rcu_sched detected stall on CPU 0 (t=15000 jiffies) (repeated many times, with t getting bigger). From googling around I've tried various kernel parameters such as nohz=off and acpi=off, to no avail. All tech support has said is that other Ubuntu installations are not suffering the same problem. Anyone got any ideas or experience with this problem?

    Read the article

  • Windows 7 scheduled task returns 0x2

    - by demmith
    I have identical scheduled tasks running in Windows XP Pro and Windows 7. The XP Pro one runs fine; the Windows 7 one always returns 0x2 (which means "The system cannot find the file specified"; however, executing from the command line is no problem) in the Last Run Result column of the Task Scheduler UI. The scheduled task executes a .bat file daily. The .bat file contains a call to execute a Perl script. As I stated in the previous paragraph, it executes under XP without any trouble, but under Windows 7, no dice. The task under Windows 7 is set to "run whether the user is logged on or not." In this case it is me; I am the only user of the system. It is also set to "Run with highest privileges," and it is not hidden. The .bat file executes perfectly well from the command line - it calls the Perl script as expected and the Perl script does its thing. I have searched far and wide looking for an appropriate answer to this issue. So far I have found nothing. What the devil is going on with this Win7 scheduled task? I am ready to pull my hair out.

    Read the article

  • Windows Small Business Server 2003: SQL timeout in Server Performance Report

    - by tetranz
    I'm the volunteer IT admin at a small school. We have SBS 2003 with about ten desktops. The server performance report is emailed to me daily. It is set up with a wizard in the Monitoring and Performance part of the "Server Management" console. It often fails with a "The page cannot be displayed" error. The event log shows:

        Event Type: Error
        Event Source: ServerStatusReports
        Event Category: None
        Event ID: 1
        Date: 1/16/2011
        Time: 6:03:14 AM
        User: N/A
        Computer: ALPHA
        Description: Server Status Report:
        URL: http://localhost/monitoring/perf.aspx?reportMode=1&allHours=1
        Error Message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
        Stack Trace:
          at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, TdsParserState state)
          at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, TdsParserState state)
          at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning()
          at System.Data.SqlClient.TdsParser.ReadNetlib(Int32 bytesExpected)
          [plus lots more stack trace]

    This has been happening for years :) I've never really solved it. It seems to be related to WSUS. When it happens, I run the Update Services "Server Cleanup Wizard". That takes a long time to run. If I haven't run it for a while it can take 10 hours. I also run the WsusDBMaintenance.sql script (from TechNet, I think), which reindexes the database etc. Those two things seem to get it working again for a while. Recently the "while" has become a couple of weeks. My searching online has revealed lots of people having this problem but no real solution. Does anyone have any good ideas about this? I have to wonder if something in the WSUS SQL schema is not indexed properly. The time that the Server Cleanup Wizard takes seems ridiculous. Thanks

    Read the article

  • Sync, share and backup policy using NAS

    - by Cue
    Trying to come up with a way to keep in sync while sharing and keeping a backup of my music/photos and movies. Currently I have an iMac in Greece and an MBP with me in the UK. As a result I've ended up with 2 iPhoto and iTunes libraries, not to mention documents scattered here and there, user settings, etc. I also like to have a backup in case of a drive failure or the need to clean install. It seems that iPhoto and iTunes don't work really well with networked libraries. The way I think about it is to have a NAS where I keep my iTunes and iPhoto libraries, but also rsync daily to my MBP to have a local copy. That way my files are shared across the network as well as acting as a backup. In addition I get to have my files wherever I take my MBP, but also have the ability to clean install. The tricky part comes from keeping in sync the iMac, which is miles away. Again I'm considering a mirror setup (NAS, rsync to the iMac) as well as an rsync between the two NAS boxes. It pretty much resembles the way Dropbox works, sans the requirement to go through their servers, but I'm no "superuser" and don't really know if it is even feasible to have such a setup. Looks like there are so many things that can go wrong.

    Read the article
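
    A minimal sketch of the one-way mirror step this setup describes, assuming rsync is installed and the NAS share is mounted locally; the paths and the sync direction here are hypothetical placeholders, not a recommendation for any particular NAS:

        import subprocess

        # Hypothetical paths: a local iTunes library and a mounted NAS share.
        SRC = "/Users/me/Music/iTunes/"
        DST = "/Volumes/nas/iTunes/"

        def mirror(src: str, dst: str) -> None:
            # -a preserves timestamps/permissions; --delete makes dst an exact
            # mirror of src (drop it if you only want to add/update files).
            subprocess.run(["rsync", "-a", "--delete", src, dst], check=True)

        if __name__ == "__main__":
            mirror(SRC, DST)

    The same call, reversed, gives the "local copy from the NAS" direction; running two-way syncs this way is where conflicts start, which is the part Dropbox-style tools handle for you.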

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slow. The link between the target machine and the BE server is gigabit, and any other type of job runs 1-3GB/min. These working-set jobs start out at perhaps 40MB/min and over the course of the backup job slowly drop so low that the BE job rate display in "current jobs" goes blank. Since we usually are only doing changed files for one day, the job is usually small and finishes overnight and we don't worry about the slowness, but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode, sent them that log and a VXgather from the BE host, and they had no fix/workaround. To give an idea, I have the mentioned working-set job running for the last 3 1/2 hours and it's backed up just under 10 MEGAbytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless...

    Read the article

  • 750Gig Hard Drive shows full with only 315Gigs used

    - by Chris Kelly
    I have a Win7 laptop with a 750Gig C: drive. It came partitioned with 714Gig usable from the manufacturer. I installed programs, music files, etc. up to 285 gigs. As of a few weeks ago it showed 285 Gigs used. Two weeks of house guests later and it shows the HD is full. I deleted some files, but it still shows 652 Gigs used on this drive while there are only 285 Gigs of files on the drive. Relevant details: I am Administrator on the laptop and have fair knowledge of what I am doing. I did not restore from backup, restore from a mirror, upgrade HDs or do anything else that would have touched the partition structure. Just daily use as an imaging machine and for the web. I have checked partitions under the disk administrator - no change, still partitioned with 714Gigs usable. I have looked through the C drive by hand, showing hidden files and folders - no change. I have used JDiskReport to double-check - it shows I have only 285 Gigs on the C drive. I triple-checked with TreeSize run as Administrator and it also shows 285 Gigs on the C drive - yet Windows 7 still shows the drive as almost full. I used the Windows 7 utilities to check for disk errors, and defragged the drive. No errors shown and no change after the defrag.

    Read the article
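
    One way to reproduce the discrepancy described above from a script: compare what the filesystem reports as used space against the total size of files a directory walk can actually see. This is only a rough sketch (the drive letter is a placeholder); space held by snapshots, shadow copies, or files the account cannot read shows up only in the first number, which is exactly the kind of gap tools like TreeSize can't explain on their own.

        import os
        import shutil

        DRIVE = "C:\\"   # placeholder

        def visible_bytes(root: str) -> int:
            total = 0
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    try:
                        total += os.path.getsize(os.path.join(dirpath, name))
                    except OSError:
                        pass   # files we cannot stat are simply skipped
            return total

        usage = shutil.disk_usage(DRIVE)   # what Windows reports for the volume
        seen = visible_bytes(DRIVE)        # what a plain file walk can see
        print(f"reported used: {usage.used / 2**30:.1f} GiB")
        print(f"files visible: {seen / 2**30:.1f} GiB")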

  • Dual hard drive Windows 7 system, modified the registry to get programs to install on second drive, now IE doesn't work

    - by paul
    I have a dual hard drive Windows 7 system; Windows is installed on an SSD (C:) and I modified the registry to try to force programs to install on the second HDD (another letter). The registry edits are pretty simple, just a few keys in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion to change the drive letter. For the most part the system is very fast and works great, but IE doesn't work anymore. With IE10, it opens for a flash with a white window, then closes. I tried installing IE11, which opens a white window for a few seconds, doesn't respond, then crashes. I've tried all the solutions I could find. This includes resetting the IE settings, "uninstalling" and re-installing IE (which is just turning it on and off in "Turn Windows Features on or off"), copying the Program Files\Internet Explorer files onto both/either drives, changing the registry keys back to use C:, lots of rebooting, and safe mode. Nothing has worked. I don't see errors in the event viewer, but I might not know what to look for. Any ideas on how to get IE running? I don't need IE for daily browsing, I just need it for cross-browser testing on sites I build and for the rare occasion a page only works in IE. I don't really want to use a virtual machine, but would be ok with something standalone like Tredosoft's, but I'm not aware of something like that for current versions of IE.

    Read the article
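
    For reference, the per-machine install locations live as values under the key mentioned above. A read-only sketch using Python's standard winreg module that just prints them back; the value names listed are the ones commonly present under that key, but treat them as assumptions and check your own registry before editing anything:

        import winreg

        KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion"
        # Common value names for install locations; verify on your own machine.
        NAMES = ["ProgramFilesDir", "ProgramFilesDir (x86)", "CommonFilesDir", "ProgramW6432Dir"]

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
            for name in NAMES:
                try:
                    value, _type = winreg.QueryValueEx(key, name)
                    print(f"{name} = {value}")
                except FileNotFoundError:
                    print(f"{name} not present")

    Comparing these values against a machine where IE still works is a low-risk way to see which redirections the broken install is actually picking up.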

  • What kind of server configuration is best for a chatting app? [closed]

    - by mohabitar
    I'm just now starting to go deeper into the world of cloud hosting and databases, and am getting overwhelmed by how deep this information goes. It's all a little too much to consume in a short amount of time. I get a lot of pricing information, but I'm unable to determine what that means to me. I'm making what you might compare to an email app. Users can send messages to one another. I just don't understand, out of the several options, what would be ideal for an app like this, where users would be constantly sending and receiving text data. With Amazon DynamoDB, I have to specify a pre-defined throughput with number of reads and writes per second. Sure I can just type 50, but I'm not exactly sure what 50 writes per second represents. I'm trying to determine what would be the most cost efficient solution, and I want to know what a throughput of 50 reads/writes/second compares to. Is that a high number? What is a good throughput number for a message sending app with say 50,000 daily users? I'm just providing specific numbers so I can understand what these throughput numbers represent. 100 transactions/second to me seems like a small number since I'm not familiar with this stuff, so I'm just looking to bring everything in context. What would 100 read/write/second be useful for? Are there any average example values available? And I'm not sure what each service is good for. For a message sending app, is there any reason I'd want to choose say Amazon DynamoDB over Google App Engine? Any insight would be greatly appreciated.

    Read the article
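
    A back-of-the-envelope way to turn "writes per second" into something concrete, assuming (purely for illustration) that 50,000 daily users each send about 20 messages a day and that peak traffic runs roughly 10x the 24-hour average; both assumptions are placeholders to plug your own numbers into:

        daily_users = 50_000           # from the question
        msgs_per_user_per_day = 20     # assumption, for illustration only
        peak_factor = 10               # assumption: peak vs. 24h average

        writes_per_day = daily_users * msgs_per_user_per_day    # 1,000,000
        avg_writes_per_sec = writes_per_day / 86_400             # ~11.6
        peak_writes_per_sec = avg_writes_per_sec * peak_factor   # ~116

        print(f"average: {avg_writes_per_sec:.1f} writes/s, peak: {peak_writes_per_sec:.0f} writes/s")

    So a provisioned figure like 50 writes/second sits between that average and that peak; whether it is "a lot" depends entirely on the message rate and peak factor you assume.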

  • I need to build a mini computer lab with 6 PCs

    - by chiurox
    Hello everyone, This weekend I need to build a mini computer lab with 5-6 PCs. The purpose is a small computer class that will be taking place. They're mostly going to be doing office and daily kind of stuff, so nothing high performance. However, I hope it will last for at least 2 more years. I already have some parts for 2 PCs; one is a P4 3.0 and another is a Celeron 2.4GHz, both socket 478. For the other 4 PCs, I'm wondering if I should buy individual parts and put it all together or get workstations like those Dell OptiPlex machines with P4s. To put things into perspective, I am not currently in the US. I'm in South America and prices here are ridiculous. Another really important thing is that I need to share an internet connection between a total of 8 PCs at this place. Right now I only have a crappy wireless router; should I get another one, or go with a switch? I'm not experienced in this matter. Thanks guys!

    Read the article

  • What can be done to improve time synchronization on networks with sporadic internet access?

    - by anregen
    I'm looking for advice setting up time servers for a very non-typical network. I support many closed networks that have occasional access to the internet. A network would get access most days for a few hours, but would frequently go 1-3 weeks blacked-out. The computers/servers on this network are mostly *nix-based, but not all the same flavor. The entire network is mobile, so when it connects, it will have very different hops/latency to internet time servers. The servers on the closed network are powered-off frequently (at least daily). Right now, my gut tells me to use NTP (because I hate re-learning all the stuff that someone else already got working pretty well). But I have several issues, and am looking for someone with experience in this type of strange situation. I currently have no solution in place, I'm simply letting the internal clocks drift. This results in errors of ~600s in a majority of networks. I have seen mismatch worse than 10,000s. Is there something "better" than NTP in this situation? I know NTP likes to have very frequent, consistent access to servers that give nearly identical answers. I won't have that. How many internal NTP servers should I configure, so that during periods of internet blackout, I have internal time that is consistent within the closed network? There is no human access. No matter how large the mismatch, the server(s) must attempt to correct itself. Discrete steps are very bad. No matter how large the mismatch, the correction must be "slewed", not "stepped". I understand that this could take many hours to correct.

    Read the article

  • Disk fragmentation when dealing with many small files

    - by Zorlack
    On a daily basis we generate about 3.4 million small JPEG files. We also delete about 3.4 million 90-day-old images. To date, we've dealt with this content by storing the images in a hierarchical manner. The hierarchy is something like this: /Year/Month/Day/Source/ This hierarchy allows us to effectively delete days' worth of content across all sources. The files are stored on a Windows 2003 server connected to a 14-disk SATA RAID6. We've started having significant performance issues when writing to and reading from the disks. This may be due to the performance of the hardware, but I suspect that disk fragmentation may be a culprit as well. Some people have recommended storing the data in a database, but I've been hesitant to do this. Another thought was to use some sort of container file, like a VHD or something. Does anyone have any advice for mitigating this kind of fragmentation? Additional info: the average file size is 8-14KB. Format information from fsutil:

        NTFS Volume Serial Number : 0x2ae2ea00e2e9d05d
        Version : 3.1
        Number Sectors : 0x00000001e847ffff
        Total Clusters : 0x000000003d08ffff
        Free Clusters : 0x000000001c1a4df0
        Total Reserved : 0x0000000000000000
        Bytes Per Sector : 512
        Bytes Per Cluster : 4096
        Bytes Per FileRecord Segment : 1024
        Clusters Per FileRecord Segment : 0
        Mft Valid Data Length : 0x000000208f020000
        Mft Start Lcn : 0x00000000000c0000
        Mft2 Start Lcn : 0x000000001e847fff
        Mft Zone Start : 0x0000000002163b20
        Mft Zone End : 0x0000000007ad2000

    Read the article
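
    Given the /Year/Month/Day/Source/ layout described above, expiry can be done by removing whole day directories rather than millions of individual files. A rough sketch of that pruning step; the archive root and the 90-day retention are placeholders, and nothing here addresses the fragmentation question itself:

        import datetime
        import shutil
        from pathlib import Path

        ROOT = Path(r"D:\images")   # placeholder for the archive root
        KEEP_DAYS = 90

        cutoff = datetime.date.today() - datetime.timedelta(days=KEEP_DAYS)

        # Day directories sit three levels deep: Year/Month/Day.
        for day_dir in ROOT.glob("*/*/*"):
            if not day_dir.is_dir():
                continue
            try:
                year, month, day = (int(p) for p in day_dir.parts[-3:])
                stamp = datetime.date(year, month, day)
            except ValueError:
                continue   # skip anything that isn't a Year/Month/Day directory
            if stamp < cutoff:
                shutil.rmtree(day_dir)   # drops every source under that day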

  • Need a helpful/managed VPS to help transition from shared hosting

    - by Xeoncross
    I am looking for a VPS that can help me transition out of a shared hosting environment. My main OS is Ubuntu, although I am still new to the Linux world. I spend most of my day programming PHP applications using a git-over-SSH workflow. I want PHP, SSH, git, MySQL/PostgreSQL and Apache to work well. Someday, after I figure out server management, I'll move on to http://nginx.org/ or something. I don't really understand 1) Linux firewalls, 2) mail servers, or 3) a proper daily package/lib update flow. I need a host that can help with these so I don't get hit with a security hole. (I monitor Apache access logs, so I think I can take it from there.) I want to know if there is a sub-$50/m VPS that can help me learn (or do for me) these three main things I need to run a server. I can't leave my shared hosts (plural shows my need!) until I am sure my sites will be safe despite my incompetence. To clarify again, I need the most helpful, supportive, walk-me-through, check-up-on-me, be-there-when-I-need-you VPS I can get. Learning isn't a problem when there is someone to turn to. ;)

    Read the article

  • RSS "Newspaper" / Google Reader replacement

    - by Sean D
    With the impending demise of Google Reader I've been looking at ways to replace it. I've decided that what might be cool is to get an email every morning with all the updates from the last twenty-four hours, maybe in the style of a newspaper. That's not a very original idea, since sites like http://fivefilters.org/pdf-newspaper/ and http://feedjournal.com/ already do this, but they both have various drawbacks. In particular, both require a single feed, just take the last n items, and require clicking around on their website. The Pro option for FeedJournal seems almost like it would do the job, but the project seems to be dead and there's no way to buy it. Before I hack together something crazy, I'd like to know if there's a better solution to my problem. In short: I want to replace Google Reader with a daily PDF email; how should I do this? edit: I didn't award the bounty because nobody solved the problem (not that I'm assuming it has a solution). Answers like "well, for the way I do things this wouldn't work" aren't actually helpful, even if they are well-meaning.

    Read the article
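
    A rough sketch of the "hack something together" route, assuming the third-party feedparser package and a local SMTP relay; it mails a plain-text digest of entries published in the last 24 hours rather than a PDF, so it only approximates what is being asked for (feed URLs and addresses are placeholders):

        import smtplib
        import time
        from email.message import EmailMessage

        import feedparser   # third-party: pip install feedparser

        FEEDS = ["http://example.com/feed1.xml", "http://example.com/feed2.xml"]   # placeholders
        cutoff = time.time() - 24 * 3600

        lines = []
        for url in FEEDS:
            feed = feedparser.parse(url)
            for entry in feed.entries:
                published = entry.get("published_parsed") or entry.get("updated_parsed")
                if published and time.mktime(published) >= cutoff:
                    lines.append(f"{feed.feed.get('title', url)}: {entry.get('title', '(no title)')}\n{entry.get('link', '')}\n")

        msg = EmailMessage()
        msg["Subject"] = "Daily feed digest"
        msg["From"] = "digest@example.com"
        msg["To"] = "me@example.com"
        msg.set_content("\n".join(lines) or "Nothing new in the last 24 hours.")

        with smtplib.SMTP("localhost") as smtp:   # assumes a local mail relay
            smtp.send_message(msg)

    Run it from cron each morning; rendering the digest as a PDF would be an extra step on top of this.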

  • Which components should I invest in for a backup machine?

    - by Senthil
    I am a freelance developer. I have a PC, a laptop and an old testing and file server machine. I might add one or two more in the future. I want to have an on-site backup machine that can handle backups of ALL these machines - file backups, MySQL backups, backups of a subversion repository, etc. When building the machine, which components should I invest more in? Examples: the cabinet should have lots of room for expansion; hard disk size should be large, but I guess hard disk speed need not be high (?). But other components like RAM, PSU, processor, network card, cooling, etc. - how much relative importance do these have in a backup machine? Which of these components should be high-end or large, and which ones need not be? Some idea of the load: there will be TBs of data. File backups and subversion repository backups will be done at least daily, MySQL backups weekly. Assume 3 machines at the moment and somewhere around 10 machines in the future.

    Read the article

  • Expire Files In A Folder: Delete Files After x Days

    - by Brett G
    I'm looking to make a "Drop Folder" in a windows shared drive that is accessible to everyone. I'd like files to be deleted automagically if they sit in the folder for more than X days. However, it seems like all methods I've found to do this, use the last modified date, last access time, or creation date of a file. I'm trying to make this a folder that a user can drop files in to share with somebody. If someone copies or moves files into here, I'd like the clock to start ticking at this point. However, the last modified date and creation date of a file will not be updated unless someone actually modifies the file. The last access time is updated too frequently... it seems that just opening a directory in windows explorer will update the last access time. Anyone know of a solution to this? I'm thinking that cataloging the hash of files on a daily basis and then expiring files based on hashes older than a certain date might be a solution.... but taking hashes of files can be time consuming. Any ideas would be greatly appreciated! Note: I've already looked at quite a lot of answers on here... looked into File Server Resource Monitor, powershell scripts, batch scripts, etc. They still use the last access time, last modified time or creation time... which, as described, do not fit the above needs.

    Read the article
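
    A sketch of the cataloguing idea floated at the end of the question, but recording each file's first-seen time instead of hashing it; that avoids the cost of hashing entirely and is independent of the unreliable timestamps. Run it daily from a scheduled task. The share path, state-file location, and retention period are placeholders:

        import json
        import os
        import time
        from pathlib import Path

        DROP = Path(r"\\server\share\dropfolder")        # placeholder share
        STATE = Path(r"C:\scripts\dropfolder_seen.json")  # placeholder state file
        MAX_AGE_DAYS = 14

        seen = json.loads(STATE.read_text()) if STATE.exists() else {}
        now = time.time()

        current = {p.name for p in DROP.iterdir() if p.is_file()}

        # Forget entries for files already gone, record newly appeared files.
        seen = {name: ts for name, ts in seen.items() if name in current}
        for name in current:
            seen.setdefault(name, now)

        # Expire anything that has been sitting in the folder too long.
        for name, first_seen in list(seen.items()):
            if now - first_seen > MAX_AGE_DAYS * 86400:
                try:
                    (DROP / name).unlink()
                    del seen[name]
                except OSError:
                    pass   # in use or already gone; try again on the next run

        STATE.write_text(json.dumps(seen))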

  • Looking for a smart sync tool that exactly copies the structure of the source to that of the destination

    - by ????????
    I am a Windows 7 user and am looking for a sync tool that exactly copies the structure of the source to that of the destination. The scenario is as follows: I have a laptop on which I save my daily projects. I always bring this laptop wherever I go to work, either at my school or at my house. At school I have a room in which I put an external hard drive to back up my laptop data. Upon arriving at school I always sync the external data with my laptop. Before leaving the school, I also do the synchronization if I have made some modifications to my laptop data. I do the same thing when I arrive at and leave my house. Every month I reformat my laptop because of some trial software. I am using SyncToy 2.1 to sync the external drives with my laptop. The problem is as follows. Before formatting my laptop, I sync all external drives with my laptop. For example, my laptop structure:

        c:\data\file1.ext
        c:\data\folder1\file2.ext

    Both external drives' structure:

        d:\data\file1.ext
        d:\data\folder1\file2.ext

    After reformatting my laptop, I move file1.ext to folder1. So my laptop structure becomes:

        c:\data\folder1\file1.ext
        c:\data\folder1\file2.ext

    If I sync the external drives with my laptop, I get:

        d:\data\file1.ext
        d:\data\folder1\file1.ext
        d:\data\folder1\file2.ext

    I want the tool to also remove d:\data\file1.ext. How do I do this?

    Read the article
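
    What is being asked for is usually called a "mirror" or "echo" sync: copy new and changed files, and delete anything on the destination that no longer exists on the source. A bare-bones sketch of that logic, with placeholder paths; real tools add safety checks, logging, and conflict handling on top of this:

        import shutil
        from pathlib import Path

        SRC = Path(r"C:\data")   # laptop
        DST = Path(r"D:\data")   # external drive

        def mirror(src: Path, dst: Path) -> None:
            # 1. Delete destination entries that no longer exist in the source
            #    (children sort after parents, so reverse order deletes them first).
            for path in sorted(dst.rglob("*"), reverse=True):
                rel = path.relative_to(dst)
                if not (src / rel).exists():
                    if path.is_dir():
                        shutil.rmtree(path)
                    else:
                        path.unlink()
            # 2. Copy new or changed files from source to destination.
            for path in src.rglob("*"):
                rel = path.relative_to(src)
                target = dst / rel
                if path.is_dir():
                    target.mkdir(parents=True, exist_ok=True)
                elif (not target.exists()
                      or path.stat().st_mtime > target.stat().st_mtime
                      or path.stat().st_size != target.stat().st_size):
                    target.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(path, target)

        mirror(SRC, DST)

    This is the same behaviour that robocopy's /MIR switch and SyncToy's "Echo" mode are meant to provide, so either of those is likely a better fit than a hand-rolled script for daily use.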

  • Concatenating gzipped Apache logs

    - by markdrayton
    We rotate and compress our Apache logs each day, but it's become apparent that this isn't frequent enough. An uncompressed log is about 6G, which is getting close to filling our log partition (yep, we'll make it bigger in the future!) as well as taking a lot of time and CPU to compress each day. We have to produce a gzipped log for each day for our stats processing. Obviously we could move our logs to a partition with more space, but I also want to spread the compression overhead throughout the day. Using Apache's rotatelogs we can rotate and compress the log more often -- hourly, say -- but how can I concatenate all the hourly compressed logs into a running compressed log for the day, without decompressing the previous logs? I don't want to uncompress 24 hours' worth of data and recompress it, because that has all the disadvantages of our current solution. Gzip doesn't seem to offer any append or concatenate option, but perhaps I've missed something obvious. This question suggests straight shell concatenation "works" in that the archive can be decompressed, but the fact that gzip -l doesn't work seems a bit dodgy. Alternatively, perhaps this is still a bad way to do things. Other suggestions are welcome -- our only constraints are our relatively small log partitions and the need to provide a daily compressed log.

    Read the article
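
    The gzip format does allow members to be concatenated: the result decompresses as a single stream with zcat/gunzip, although gzip -l only reports the last member's sizes, which is the oddity mentioned above. A sketch of appending an hourly file to the running daily file by plain byte concatenation (the filenames are placeholders):

        import shutil

        HOURLY = "access.log.2010051412.gz"   # placeholder: one rotated, compressed hour
        DAILY = "access.log.20100514.gz"      # placeholder: the running daily archive

        # Appending one gzip file to another yields a valid multi-member gzip file;
        # no decompression or recompression is needed.
        with open(HOURLY, "rb") as src, open(DAILY, "ab") as dst:
            shutil.copyfileobj(src, dst)

    Python's own gzip module also reads multi-member files, so downstream stats code can open the daily file directly.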

  • How To Completely Move Users/Program Files/Program Files (x86)/ProgramData (Folders) To Another Partition(s) On Windows 8?

    - by Enigma83
    I am attempting to move the folders Users, Program Files, Program Files (x86), and ProgramData (at the root of the C drive) to at least 2 other partitions, preferably on a fresh install. I have read that there are methods for doing this post-install, but it seems like it would be a bit more tedious to do things that way. I want to move the 2 Program Files folders to another partition on the same HDD, and Users/ProgramData will go to yet another partition on the same HDD. I have done a bit of research on this and read up on approaches that involve booting into Audit Mode, using the RoboCopy command to copy folders after booting from my Windows 8 USB drive, creating NTFS junctions/symbolic links, and Registry edits, as well as accomplishing this automatically by creating an unattend (answer) file which Windows Setup processes automatically before the user is ever booted in for the first time. I tried this morning and now have a basic installation in which programs like Internet Explorer fail to open and certain files can't be found/opened (even if I click on them directly); an example is Regedit. Also, I can't run the Command/DOS (CMD) prompt as Administrator (or otherwise, as any other user), can't activate the real Administrator account, or open any of the Administrative Tools (despite having added them to my Start Screen). So far I have only tried RoboCopy-ing Program Files and Program Files (x86), creating junction points for them, and editing the Registry in the relevant locations. This is what I'm left with now. I also found a blog article which describes how to do this for Windows 7. So, where should I go from here and where can I find more information? And how can this be done without disabling the Metro apps, which I've read will stop working if you move ProgramData? Once I have everything moved, where do I install programs to? Do I tell them to install to C:\Program Files or C:\Program Files (x86), or to the junctioned/symbolic-linked partition/drive? I plan to test in VMware virtual machines from here on until things are working correctly, while using a baseline default install for daily tasks.

    Read the article

  • What is a good replacement for StumbleUpon's Share feature?

    - by Mofoo
    I've been using Firefox + StumbleUpon's Share feature with my friends for years now. It is a perfect way of sharing links directly with your friends. You first need to be following each other, and then on the SU toolbar you can "Share" with your list of friends. You can even include a personal message. The friend will receive a notification with the number of pending shares in their toolbar (bold & red). They click the Stumble button and it will navigate to the site plus show a yellow bar with your message. I literally use it daily. But then Chrome came along and beat the tar out of Firefox (and other browsers) in terms of usability and performance. But it doesn't (and never will, according to Google) allow toolbars. StumbleUpon's solution in Chrome is a fake toolbar (HTML) that gets injected into the page you're viewing. It's buggy and performance is low. Overall it's not an acceptable solution. I'm looking for a replacement that is just as easy to send/receive links with. I was thinking of Twitter DMs and using a bookmarklet, but I wanted to survey the collective for other options. Thanks in advance for your input!

    Read the article

  • Help with Backup Scheme for Backup Exec 12.5

    - by Jemartin
    I'm in the process of implementing a new backup scheme, and I would say that I'm kind of new to it. So here's my question. I'm currently using Backup Exec 12.5 on Windows Server 2008 w/Hyper-V, and an IBM Adic Scalar 24. I currently back up our mail server, SQL DB, board server (Linux Red Hat), FTP, etc. to a near-line store which is local on our SAN; the dailies go there as well as fulls. I would like to start a weekly full to tape on Saturdays; it takes about 2-3 days to complete the entire full to tape, due to backing up from our co-lo as well. I have read up on the father/son rotation, but here's my issue with it: I don't use tapes every day; only for the weekly full to tape will I be using them. So if there are 4 weeks in a month, would I rotate in this order: June WK1 = 7 tapes, June WK2 = 7 tapes, June WK3 = 7 tapes, June WK4 = 7 tapes, with WK4 being the last set for June, which I would keep as the month tape. Then for July, WK1 = June's WK1 tapes, July WK2 = June's WK2 tapes, and July WK4 = June's WK4 tapes once their month is up - or would I use a set of new tapes for the last week in July? All tapes are being taken off site as well.

    Read the article
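
    One common way to lay out the grandfather-father-son idea for a weekly-only schedule is to key the tape set off the week of the month and retire the last set of each month as the monthly copy. A toy sketch of that bookkeeping; the four-set layout and the labels are assumptions for illustration, not Backup Exec's own media-set logic:

        import calendar
        import datetime

        def tape_set_for(date: datetime.date) -> str:
            """Label the weekly-full tape set used on a given Saturday."""
            days_in_month = calendar.monthrange(date.year, date.month)[1]
            if date.day + 7 > days_in_month:
                # Last Saturday of the month: retire this set as the monthly archive.
                return f"MONTHLY-{date:%Y-%m}"
            week_of_month = (date.day - 1) // 7 + 1
            return f"WEEKLY-SET-{week_of_month}"   # reused month after month

        # Example: label every Saturday in June and July 2011.
        day = datetime.date(2011, 6, 1)
        while day <= datetime.date(2011, 7, 31):
            if day.weekday() == 5:   # Saturday
                print(day, tape_set_for(day))
            day += datetime.timedelta(days=1)

    Under that arrangement the mid-month sets are reused each month and only the monthly sets accumulate, so new tapes are bought for the monthly copies rather than for every last week.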

  • Long running stats process - thoughts on language choice?

    - by Josh
    I am on a LAMP stack for a website I am managing. There is a need to roll up usage statistics (a variety of things related to our desktop product), and I initially tackled the problem with PHP (being that I had a bunch of classes to work with the data already). All worked well on my dev box, which was using 5.3. Long story short, 5.1 memory management seems to suck a lot worse, and I've had to do a lot of fooling to get the long-term roll-up scripts to run in a fixed memory space. Our server guys are unwilling to upgrade PHP at this time. I've since moved my dev server back to 5.1 so I don't run into this problem again... For mining MySQL databases to roll up statistics for different periods and resolutions, potentially running a process that does this all the time in the future (as opposed to on a cron schedule), what language choice do you recommend? I was looking at Python (I know it more or less), Java (don't know it that well), or sticking it out with PHP (which I know quite well). Thanks for any suggestions. Josh

    Read the article

  • Shell script to decrypt and move files from one directory to another?

    - by KittyYoung
    So, I have a directory, and in it are several files. I'm trying to decrypt those files and then move them to another directory. I can't seem to figure out how to set the output filename and move it. So, the directory structure looks like this:

        /Applications/MAMP/bin/decryptandmove.sh
        /Applications/MAMP/bin/passtext.txt
        /Applications/MAMP/bin/encrypted/test1.txt.pgp
        /Applications/MAMP/bin/encrypted/test2.txt.pgp
        /Applications/MAMP/htdocs/www/decrypted/

    For all of the files found in the encrypted directory, I'm trying to decrypt them and then move them into www/decrypted/. I don't know what the filenames in the encrypted directory will be ahead of time (this script will eventually run via cron), so I wanted to just output each decrypted file with the same filename, but without the .pgp. So, the result would be:

        /Applications/MAMP/bin/decryptandmove.sh
        /Applications/MAMP/bin/passtext.txt
        /Applications/MAMP/bin/encrypted/
        /Applications/MAMP/htdocs/decrypted/test1.txt.pgp
        /Applications/MAMP/htdocs/decrypted/test2.txt.pgp

    So, this is all I have so far, and it doesn't work... FILE and FILENAME are both wrong... I haven't even gotten to the moving part of it yet.... Help? I've written exactly one shell script ever, and it was so simple, a monkey could have done it... Feeling out of depth here...

        pass_phrase=`cat passtext.txt|awk '{print $1}'`
        for FILE in '/Applications/MAMP/bin/encrypted/'; do
            FILENAME=$(basename $FILE .pgp)
            gpg --passphrase $pass_phrase --output $FILENAME --decrypt $FILE
        done

    Read the article
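
    Since the shell details are the sticking point, here is the same job sketched in Python instead, calling gpg via subprocess. It decrypts each *.pgp into the destination directory and removes the processed original (adjust that last step if you'd rather keep or relocate the encrypted files). The gpg options mirror the ones in the snippet above, and --batch/--yes are assumptions that may need adjusting for the GnuPG version in use:

        import subprocess
        from pathlib import Path

        BIN = Path("/Applications/MAMP/bin")
        SRC = BIN / "encrypted"
        DST = Path("/Applications/MAMP/htdocs/www/decrypted")

        # First whitespace-separated field of passtext.txt, like the awk call above.
        pass_phrase = (BIN / "passtext.txt").read_text().split()[0]

        DST.mkdir(parents=True, exist_ok=True)

        for encrypted in SRC.glob("*.pgp"):
            output = DST / encrypted.stem          # "test1.txt.pgp" -> "test1.txt"
            subprocess.run(
                ["gpg", "--batch", "--yes",
                 "--passphrase", pass_phrase,
                 "--output", str(output),
                 "--decrypt", str(encrypted)],
                check=True,
            )
            encrypted.unlink()   # remove the original once decryption succeeded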

  • Java FileLock for Reading and Writing

    - by bobtheowl2
    I have a process that will be called rather frequently from cron to read a file that has certain move-related commands in it. My process needs to read and write to this data file - and keep it locked to prevent other processes from touching it during this time. A completely separate process can be executed by a user to (potentially) write/append to this same data file. I want these two processes to play nice and only access the file one at a time. The nio FileLock seemed to be what I needed (short of writing my own semaphore-type files), but I'm having trouble locking it for reading. I can lock and write just fine, but when attempting to create the lock when reading I get a NonWritableChannelException. Is it even possible to lock a file for reading? It seems like a RandomAccessFile is closer to what I need, but I don't see how to implement that. Here is the code that fails:

        FileInputStream fin = new FileInputStream(f);
        FileLock fl = fin.getChannel().tryLock();
        if(fl != null) {
            System.out.println("Locked File");
            BufferedReader in = new BufferedReader(new InputStreamReader(fin));
            System.out.println(in.readLine());
            ...

    The exception is thrown on the FileLock line:

        java.nio.channels.NonWritableChannelException
            at sun.nio.ch.FileChannelImpl.tryLock(Unknown Source)
            at java.nio.channels.FileChannel.tryLock(Unknown Source)
            at Mover.run(Mover.java:74)
            at java.lang.Thread.run(Unknown Source)

    Looking at the JavaDocs, it says: "Unchecked exception thrown when an attempt is made to write to a channel that was not originally opened for writing." But I don't necessarily need to write to it. When I try creating a FileOutputStream, etc. for writing purposes, it is happy until I try to open a FileInputStream on the same file.

    Read the article

  • Would anyone tell me how to fetch the media:thumbnail element's attribute from a JSON feed?

    - by ash
    I made a Yahoo pipe that pulls up the Atom feed in JSON format; however, I can fetch and display all the elements in my HTML page except for that element's attribute. Would anyone tell me how to fetch the media:thumbnail element's attribute from a JSON feed? I am pasting the HTML page's JavaScript. If you save the HTML page and then view it in a browser, you will see that all the necessary elements get output to the page except for media:thumbnail, as I cannot display the attribute of media:thumbnail when the feed is formatted as JSON. I am also pasting some portion of the JSON feed so that you can have an idea of what I am talking about. Please tell me how to retrieve the attribute from the media:thumbnail element of a JSON feed using plain JavaScript, with no server-side code or JavaScript library. Thank you.

        function getFeed(feed){
            var newScript = document.createElement('script');
            newScript.type = 'text/javascript';
            newScript.src = 'http://pipes.yahoo.com/pipes/pipe.run?_id=40616620df99780bceb3fe923cecd216&_render=json&_callback=piper';
            document.getElementsByTagName("head")[0].appendChild(newScript);
        }
        function piper(feed){
            var tmp='';
            for (var i=0; i<feed.value.items.length; i++){
                tmp+=feed.value.items[i].title+'<br>';
                tmp+=feed.value.items[i].author.name+'<br>';
                tmp+=feed.value.items[i].published+'<br>';
                if (feed.value.items[i].description) {
                    tmp+=feed.value.items[i].description+'<br>';
                }
                tmp+='<hr>';
            }
            document.getElementById('rssLayer').innerHTML=tmp;
        }

    Some portion of the JSON feed that gets generated by the Yahoo pipe:

        piper({"count":2,"value":{"title":"myPipe","description":"Pipes Output",
        "link":"http:\/\/pipes.yahoo.com\/pipes\/pipe.info?_id=f7f4175d493cf1171aecbd3268fea5ee",
        "pubDate":"Fri, 02 Apr 2010 17:59:22 -0700","generator":"http:\/\/pipes.yahoo.com\/pipes\/","callback":"piper",
        "items": [{
        "rights":"Attribution - Noncommercial - No Derivative Works",
        "link":"http:\/\/vodo.net\/mixtape1",
        "y:id":{"value":null,"permalink":"true"},
        "content":{"content":"We're proud to be releasing this first VODO MIXTAPE. Actual tape might be a thing of the past, but before P2P, mixtapes were the most popular way of sharing popular culture the world had known -- and once called the 'most widely practiced American art form'. We want to resuscitate the spirit of the mixtape for this VODO MIXTAPE series: compilations of our favourite shorts, the weird, the wild and the wonky, all brought together in a temporary and uncomfortable company.","type":"text"},
        "author": {"name":"Various"},
        "description":"We're proud to be releasing this first VODO MIXTAPE. Actual tape might be a thing of the past, but before P2P, mixtapes were the most popular way of sharing popular culture the world had known -- and once called the 'most widely practiced American art form'. We want to resuscitate the spirit of the mixtape for this VODO MIXTAPE series: compilations of our favourite shorts, the weird, the wild and the wonky, all brought together in a temporary and uncomfortable company.",
        "media:thumbnail": { "url":"http:\/\/vodo.net\/\/thumbnails\/Mixtape1.jpg" },
        "published":"2010-03-08-09:20:20 PM",
        "format": { "audio_bitrate":null, "width":"608", "xmlns":"http:\/\/xmlns.transmission.cc\/FileFormat", "channels":"2", "samplerate":"44100.0", "duration":"3092.36", "height":"352", "size":"733925376.0", "framerate":"25.0", "audio_codec":"mp3", "video_bitrate":"1898.0", "video_codec":"XVID", "pixel_aspect_ratio":"16:9" },
        "y:title":"Mixtape #1: VODO's favourite short films",
        "title":"Mixtape #1: VODO's favourite short films",
        "id":null,
        "pubDate":"2010-03-08-09:20:20 PM",
        "y:published":{"hour":"3","timezone":"UTC","second":"0","month":"4","minute":"10","utime":"1270264200","day":"3","day_of_week":"6","year":"2010" }},
        {"rights":"Attribution - Noncommercial - No Derivative Works","link":"http:\/\/vodo.net\/gilbert","y:id":{"value":"cd6584e06ea4ce7fcd34172f4bbd919e295f8680","permalink":"true"},
        "content":{"content":"A documentary short about Gilbert, the Beacon Hill \"town crier.\" For the last 9 years, since losing his job and becoming homeless, Gilbert has delivered the weather, sports, and breaking headlines from his spot on the Boston Common. Music (used with permission) in this piece is called \"Blue Bicycle\" by Dusseldorf-based pianist \/ composer Volker Bertelmann also known as Hauschka. Artistic Statement: This is the first in a series of profiles of people who I think are interesting, and who I see on almost a daily basis. I don't want to limit the series to people who live \"on the fringe,\" but it would be appropriate to say that most of the people I interview are eclectic, eccentric, and just a little bit unique. The art is in the viewing - but I hope to turn my lens on individuals that don't always color in the lines, whether they can help it or not.","type":"text"},
        "author":{"name":"Nathaniel Hansen"},
        "description":"A documentary short about Gilbert, the Beacon Hill \"town crier.\" For the last 9 years, since losing his job and becoming homeless, Gilbert has delivered the weather, sports, and breaking headlines from his spot on the Boston Common. Music (used with permission) in this piece is called \"Blue Bicycle\" by Dusseldorf-based pianist \/ composer Volker Bertelmann also known as Hauschka. Artistic Statement: This is the first in a series of profiles of people who I think are interesting, and who I see on almost a daily basis.
The art is in the viewing - but I hope to turn my lens on individuals that don't always color in the lines, whether they can help it or not.","media:thumbnail":{"url":"http:\/\/vodo.net\/\/thumbnails\/gilbert.jpeg"},"published":"2010-03-03-10:37:05 AM","format":{"audio_bitrate":null,"width":"624","xmlns":"http:\/\/xmlns.transmission.cc\/FileFormat","channels":"2","samplerate":null,"duration":"373.673","height":"352","size":"123321266.0","framerate":null,"audio_codec":"mp3","video_bitrate":null,"video_codec":"XVID","pixel_aspect_ratio":"16:9"},"y:title":"Gilbert","title":"Gilbert","id":"cd6584e06ea4ce7fcd34172f4bbd919e295f8680","pubDate":"2010-03-03-10:37:05 AM","y:published":{"hour":"3","timezone":"UTC","second":"0","month":"4","minute":"10","utime":"1270264200","day":"3","day_of_week":"6","year":"2010" }} ] }})

    Read the article
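
    A note on the snippet above: the markup originally inside the piper() loop was mangled when the question was posted and has been approximated with <br> separators. On the question itself, a property name containing a colon cannot be reached with dot syntax, so the thumbnail URL has to be read with a bracket-style lookup on each item - in the JavaScript above that would be feed.value.items[i]["media:thumbnail"].url. The same lookup expressed in Python terms, using a trimmed stand-in for the pipe output:

        import json

        # A trimmed stand-in for the feed shown above.
        feed = json.loads('''
        {"value": {"items": [
          {"title": "Mixtape #1",
           "media:thumbnail": {"url": "http://vodo.net//thumbnails/Mixtape1.jpg"}}
        ]}}
        ''')

        for item in feed["value"]["items"]:
            # The colon in the key means it must be accessed by its string name.
            print(item["media:thumbnail"]["url"])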

  • Optimising RSS parsing on App Engine to avoid high CPU warnings

    - by Danny Tuppeny
    I'm pulling some RSS feeds into a datastore in App Engine to serve up to an iPhone app. I use cron to schedule updating the RSS every x minutes. Each task only parses one RSS feed (which has 15-20 items). I frequently get warnings about high CPU usage in the App Engine dashboard, so I'm looking for ways to optimise my code. Currently, I use minidom (since it's already there on App Engine), but I suspect it's not very efficient! Here's the code:

        dom = minidom.parseString(urlfetch.fetch(url).content)
        if dom:
            items = []
            for node in dom.getElementsByTagName('item'):
                item = RssItem(
                    key_name = self.getText(node.getElementsByTagName('guid')[0].childNodes),
                    title = self.getText(node.getElementsByTagName('title')[0].childNodes),
                    description = self.getText(node.getElementsByTagName('description')[0].childNodes),
                    modified = datetime.now(),
                    link = self.getText(node.getElementsByTagName('link')[0].childNodes),
                    categories = [self.getText(category.childNodes) for category in node.getElementsByTagName('category')]
                )
                items.append(item)
            db.put(items)

        def getText(self, nodelist):
            rc = ''
            for node in nodelist:
                if node.nodeType == node.TEXT_NODE:
                    rc = rc + node.data
            return rc

    There isn't much going on, but the scripts often take 2-6 seconds CPU time, which seems a bit excessive for looping through 20ish items and reading a few attributes. What can I do to make this faster? Is there anything particularly bad in the above code, or should I change to another way of parsing? Are there any libraries (that work on App Engine) that would be better, or would I be better off parsing the RSS myself?

    Read the article
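
    One commonly suggested alternative to minidom here is the C-accelerated ElementTree parser in the standard library, which avoids building a full DOM. A hedged sketch of the same extraction, producing plain dicts so it stays self-contained; the field names match the question's RssItem so the results can be fed into it and db.put() exactly as before:

        import xml.etree.cElementTree as ET

        def parse_items(content):
            """Return a list of dicts with the same fields the question's RssItem uses."""
            items = []
            for node in ET.fromstring(content).findall('.//item'):
                items.append({
                    'key_name':    node.findtext('guid', ''),
                    'title':       node.findtext('title', ''),
                    'description': node.findtext('description', ''),
                    'link':        node.findtext('link', ''),
                    'categories':  [c.text or '' for c in node.findall('category')],
                })
            return items

        # e.g. items = parse_items(urlfetch.fetch(url).content), then build RssItem
        # objects from each dict and batch them into a single db.put() as before.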
