Search Results

Search found 2860 results on 115 pages for 'javaone feed'.


  • HP LaserJet 4200 madness

    - by C-dizzle
    I have an HP LaserJet 4200 printer that is acting really weird. I can print from the internet just fine, no problem... but when I try to print a document from Microsoft Word, for example, I always get a "Manual Feed Tray 1" prompt. I have racked my brain on this and cannot figure it out. It's a networked printer and only an administrator account can make changes to it. I have also reset the printer back to factory settings and uninstalled and reinstalled it on the PCs and the server it is connected to. Running out of hair since I'm pulling it all out... please help.

    Read the article

  • Standard for feeding test data to a Nagios plugin?

    - by chiborg
    I'm developing a Nagios plugin in Perl (no Nagios::Plugin, just plain Perl). The error condition I'm checking for normally comes from the output of a command called inside the plugin. However, it would be very inconvenient to create the error condition for real, so I'm looking for a way to feed test output to the plugin to see if it works correctly. The easiest way I have found so far is a command-line option that optionally reads input from a file instead of calling the command:

        if ($opt_f) {
            open(FILE, $opt_f);
            @output = <FILE>;
            close FILE;
        } else {
            @output = `my_command`;
        }

    Are there other, better ways to do this?

    Read the article

  • How do you persist installed software & configurations on an Amazon EC2 instance?

    - by Richard
    I've gotten a base Debian AMI up and running and now I need to know the best way to maintain it. I've run the updates (aptitude update/upgrade) and installed/configured my software (Apache, Ruby, etc.), but if I reboot the instance or start a new one I'll have to do all this work over again. How do you persist these kinds of things across a reboot? Do you build a new AMI every time you adjust some tiny piece of the system? Or is there some way to feed it a script on startup that configures it in "real time"? I know I could go all the way with a Reductive Labs Puppet-style setup, but that's a bit too much for my needs right now (1-2 servers). Any best practices on this? Update: I found a bit of information on using User-Data to run scripts at instance boot time.
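
    Not part of the original question, but a minimal sketch of the User-Data approach mentioned in the update, using boto3; the AMI ID, key pair name, and bootstrap commands are hypothetical placeholders, and it assumes the AMI runs cloud-init (or an equivalent) that executes user-data scripts at first boot:

        import boto3

        # Shell script passed as user-data; cloud-init runs it on first boot.
        # The package list is a placeholder for whatever the instance needs.
        USER_DATA = """#!/bin/bash
        aptitude update && aptitude -y safe-upgrade
        aptitude -y install apache2 ruby
        """

        ec2 = boto3.client("ec2", region_name="us-east-1")
        ec2.run_instances(
            ImageId="ami-00000000",      # hypothetical Debian AMI ID
            InstanceType="t2.micro",
            MinCount=1,
            MaxCount=1,
            KeyName="my-key",            # hypothetical key pair name
            UserData=USER_DATA,
        )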

    Read the article

  • How do I start Silverlight on Netflix in full screen automatically?

    - by KronoS
    I know that there is this great big button that I can click for full screen on the bottom left of the screen. What I'm doing, though, is launching the direct movie/TV episode in Chrome from XBMC using XBMCflicks. It's like pasting the video's HTML address into the address bar and going directly to the video feed. Here's my dilemma: I want a fully working HTPC with no need for a keyboard/mouse, which means I don't want to have to click on the 'full screen' button. Is there a way to run Chrome and Silverlight automatically in full screen mode when launched?

    Read the article

  • How to remove a large number of files/folders in Linux

    - by user1745713
    We are using Hadoop to split a table into smaller files to feed to Mahout, but in the process we created a huge number of _temporary logs. We have an NFS mount for the Hadoop volume, so we can use all the Linux commands to delete folders and files, but we just can't get them to be deleted. Here's what I've tried so far:

        hadoop fs -rmr /.../_temporary    (hangs for hours and does nothing)

    On the NFS mount:

        rm -rf /.../_temporary               (hangs for hours and does nothing)
        find . -name '*.*' -type f -delete   (same as above)

    The folders look like this (there are 38 of these folders inside _temporary):

        drwxr-xr-x 319324 user user 319322 Oct 24 12:12 _attempt_201310221525_0404_r_000000_0

    The contents of these are actually folders, not files; each one of those 319322 folders has exactly one file inside. Not sure why it does the logging this way. Any help is appreciated.
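
    Not part of the original question, but as a rough illustration of one workaround: when rm -rf struggles over NFS with this many entries, a script that walks the tree bottom-up and removes entries directory by directory (ideally run on the machine exporting the filesystem rather than over NFS) at least shows progress. A minimal Python sketch, with the path as a placeholder:

        import os

        ROOT = "/path/to/_temporary"   # placeholder path

        removed = 0
        # Walk bottom-up so each directory is empty by the time we rmdir it.
        for dirpath, dirnames, filenames in os.walk(ROOT, topdown=False):
            for name in filenames:
                os.remove(os.path.join(dirpath, name))
                removed += 1
            for name in dirnames:
                os.rmdir(os.path.join(dirpath, name))
            if removed and removed % 10000 == 0:
                print("removed %d files so far" % removed)
        os.rmdir(ROOT)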

    Read the article

  • Plugging a 3.3V 50-pin laptop HDD into USB?

    - by barlop
    I have a 50-pin laptop hard drive, 1.8" wide. This 50-pin connector concerns me. Even if I get an adaptor, how can I know which side of the connector takes the power? I don't want to plug it in the wrong way, and I don't have an adaptor yet. Could people link me to adaptors too? But the main question is which side to plug it into when I get the adaptor. I want to be sure; I do not want to blow the HDD. For the 3.3V I have a plan: connecting green and black and using the orange cable (3.3V) to feed power. I am not too worried about that bit. But as I said, the main thing is I want to know which side is 3.3V. The hard drive is a MK6006GAH.

    Read the article

  • yum update with shared cache

    - by Sammitch
    We've got a big batch of RHEL6 machines that are due for patching, and for some reason the process here does not involve a local repo. I'm new here, I've asked why ("it just didn't work"), and I don't have enough time to make it work before the window that's already scheduled. So the usual method is to install yum-downloadonly and run yum update --downloadonly --downloaddir=/mnt/cifs_share and then yum update /mnt/cifs_share/*.rpm, which just does not look right to me since not all of these machines have the same set of installed packages. The method I tried today was mounting the share to /var/cache/yum/x86_64/6Server/rhel-x86_64-server-6/packages/, which worked, but then yum automatically deleted everything once it finished. I've looked over the yum man page, but I don't see any flag I can feed it to stop it from deleting everything, nor a flag like up2date's --tmpdir=/mnt/cifs_share. Can anyone out there help me kludge this together until I can get a local repository working?
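
    Not an answer from the original post, but one possible kludge, assuming the stock yum shipped with RHEL 6: the cache cleanup is governed by the keepcache and cachedir options in /etc/yum.conf, so pointing the cache at the share and telling yum to keep what it downloads might look roughly like this:

        [main]
        cachedir=/mnt/cifs_share/yum-cache/$basearch/$releasever
        keepcache=1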

    Read the article

  • Recommendations: Good Network MFP Printer/Scanner

    - by Joeme
    Hi, we have a small office that is expanding. At the moment we have one HP J6424 MFP, shared using its built-in network port. It is now becoming a headache: we have daily problems with people not being able to print or scan, with jobs just sitting in the queue, or with the scanner not being detected. Sometimes people can print but not scan, sometimes scan but not print, sometimes a bit of both. We are also pretty much constantly printing or scanning, or trying to! I would like to get a laser MFP (mono is fine) which works well for scanning and printing over the network with multiple users. Alternatively, any recommendations for network scanners (sheet feed and/or duplex a bonus)? Clients are Windows 7 and Mac. Thanks very much!

    Read the article

  • How can I load one image over the network to multiple computers on boot?

    - by user754730
    A few years ago I saw this in a company, but I don't know how it was built. There was one computer (I don't know if it ran Windows Server or plain Windows 7 - the server) and three other computers (Windows 7 - the clients). As soon as the Windows 7 clients were started, they all booted the same image (I don't know if it was the same image file or just the same state) over the network and could be worked on normally. As soon as a machine was shut down, all the changes made to the system were erased. How could I build a system like this, so that I have one image file which I keep up to date and then feed to the other machines in my network? It would look like this, basically:

    Read the article

  • Making audio CDs en masse - Linux-based solutions?

    - by The Journeyman geek
    My mom sings and gives away CDs to people. Invariably it falls to me to burn the CDs for her, and burning 50-100 CDs on a single drive is a pain. I DO have a handful of CD burners and a slightly geriatric old PIII 450. This is what I want to be able to do: either point an application at a folder of WAVs or MP3s, say how many copies I need on the CLI (since then I can SSH into the system and use it headless), and feed two or more CD burners discs until it's done, OR pop a single CD into a master drive and have its contents duplicated to two or more burners. I'd rather have it running on Linux, be command-line based, and be as little work as possible - almost automatic short of telling it how many copies I want would be ideal. I'm sure I'll have people wondering about legality - my mom sings her own music, and it's classical, and older than copyright law, so that's a non-issue. I just want a way to make this chore a little easier, short of telling my mom to do it herself.
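
    Not from the original post, but a rough sketch of the home-grown route, assuming wodim (the cdrecord fork) is installed, the WAV files are already CD-audio format (44.1 kHz, 16-bit stereo), and the burner device names are placeholders; it prompts for a blank disc and burns the same track list on each burner in turn until the requested number of copies is done:

        import glob
        import subprocess
        import sys

        BURNERS = ["/dev/sr0", "/dev/sr1"]   # placeholder device names

        wav_dir, copies = sys.argv[1], int(sys.argv[2])
        tracks = sorted(glob.glob(wav_dir + "/*.wav"))

        burned = 0
        while burned < copies:
            for dev in BURNERS:
                if burned >= copies:
                    break
                input("Insert a blank CD in %s and press Enter... " % dev)
                # -audio writes the WAVs as audio tracks; -v shows progress.
                subprocess.run(["wodim", "-v", "dev=" + dev, "-audio"] + tracks, check=True)
                burned += 1
                print("Finished copy %d of %d" % (burned, copies))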

    Read the article

  • How to stop PowerShell mangling command-line options for a program executed from the shell?

    - by kem
    From the PowerShell prompt, when I try to run a program and feed it a command-line option, PowerShell ends up mangling the option. Why does this happen? Is there any way to stop it besides enclosing the option in quotes? For example, from the PowerShell prompt:

        PS Microsoft.PowerShell.Core\FileSystem::\\mach\share> .\myprog.exe -file=input.txt

    myprog.exe ends up getting two arguments: 1) -file=input and 2) .txt. I need to run it like .\myprog.exe "-file=input.txt" or .\myprog.exe '-file=input.txt' to force it to be one argument. No other shell does this.

    Read the article

  • How can I insert the quoted price of gold from kitco.com into my Excel spreadsheet?

    - by Frank Computer
    Kitco.com provides a realtime price quote for gold and other metals. I have a spreadsheet which makes calculations based on the price of gold, and I would like this realtime value to be updated automatically on my Excel sheet. I tried 'get external data' from a website, but that didn't work. Any ideas? EDIT ADDED: Kitco has a gadget called KCAST which displays realtime quotes in the Windows taskbar. I tried capturing those values from the taskbar, but that didn't work either. Maybe if Kitco provided an API or feed, it could be done?
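
    Not from the original question, but one workaround sketch: if a machine-readable quote can be found (the URL and the regex below are purely hypothetical placeholders; Kitco's actual feeds may differ or need permission), a small script can fetch the price and write it into the cell the spreadsheet's formulas reference, using the third-party openpyxl library:

        import re
        import urllib.request

        from openpyxl import load_workbook

        QUOTE_URL = "https://example.com/gold-spot.txt"   # hypothetical quote source
        WORKBOOK = "gold_calculations.xlsx"               # hypothetical workbook name

        # Fetch the page and pull out the first number that looks like a price.
        text = urllib.request.urlopen(QUOTE_URL).read().decode("utf-8")
        match = re.search(r"\d+\.\d+", text)
        if match is None:
            raise SystemExit("could not find a price in the page")
        price = float(match.group(0))

        wb = load_workbook(WORKBOOK)
        ws = wb.active
        ws["B2"] = price   # the cell the calculations reference (an assumption)
        wb.save(WORKBOOK)
        print("wrote gold price", price, "to", WORKBOOK)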

    Read the article

  • Need recommendations for a hardy scanner that has a robust feeder tray

    - by JohnyD
    In the early days of our company all our information came in on paper and all of what we sold was on paper. Because of this we literally rent an old bank vault to house the millions of sheets of paper that, some say, still contain relevant information. That being said, I'm looking into purchasing some hardware capable of scanning all these documents and converting them to PDF. Being new at this level of digitization, I would like to ask for recommendations for accomplishing this task. Most of this material exists as separate bound studies/articles/etc. Someone would have to remove the bindings, load many pages at a time, and have the scanner feed them all through and convert them to a single PDF (one PDF per study/article/etc.). If you have any recommendations I would very much appreciate hearing about them, thanks.

    Read the article

  • How do you pick what server setup you need?

    - by ed209
    I recently started receiving a pubsub data feed from Etsy. It averages around 250 notifications per minute, but obviously, when the USA wakes up, that spikes quite heavily. I want to be able to deal with those spikes (about 3 per day); the rest of the day is fine. What's the best method of getting the right server configuration? My current approach is to keep upgrading until the server stops dying... the next leap is:

        Processor: AMD Phenom II X6-1055T HEXA Core
        RAM: 4GB DDR2 SDRAM
        HD1: SATA Drive (7,200 rpm) (+500 GB 7200 RPM SATA hard drive)
        HD2: SATA Backup Drive (+500 GB SATA (7,200 rpm))
        OS: Linux OS (+CentOS 5 64-bit)
        Bandwidth: 6000GB Monthly Transfer (3000 in + 3000 out) (+100M uplink port)

    What's the best approach to working out what sort of server setup you need?

    Read the article

  • Preloading RSS contents in Thunderbird before actually reading them

    - by Berry Tsakala
    I have Thunderbird 3.x and I'm subscribed to several RSS feeds. How can I tell Thunderbird to load/download any new RSS items in the background? The usual behavior with RSS feeds is that it downloads the headers, or a few introductory lines of the content, but only when I click a feed item does it start loading "for real". I really want to receive the feeds and not have to wait for them to load, the same way I receive emails in any email client - all messages fully downloaded at once. There could be several reasons for wanting this, BTW - e.g. if I have a short connection time, I'd rather connect, sync everything at once, and read it later; or, on a slow wifi connection, it's annoying to wait for each and every message when the computer was sitting idle while I read. Thanks

    Read the article

  • RSS downloader script

    - by The Digital Ninja
    I have a Synology NAS at my house that is powered by Linux. I'm looking to set up a cron script to check a group of RSS feeds and auto-download new video podcasts to a shared folder. I can do most of the scripting, such as deleting files older than 3 weeks and the wget parts, but I'm not sure how to parse the RSS feed and check dates to only grab the latest. I figured it's best not to re-invent the wheel, and surely someone out there has a command-line RSS downloader or some such script. Any ideas?
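
    Not from the original post, but a minimal sketch of the parsing step, assuming the third-party feedparser module is available on the NAS; the feed URLs, destination folder, and the 3-week cutoff are placeholders to match the question:

        import os
        import time
        import urllib.request

        import feedparser   # third-party module: pip install feedparser

        FEEDS = ["https://example.com/podcast.rss"]   # placeholder feed URLs
        DEST = "/volume1/video/podcasts"              # placeholder shared folder
        CUTOFF = time.time() - 21 * 86400             # only keep items from the last 3 weeks

        for url in FEEDS:
            feed = feedparser.parse(url)
            for entry in feed.entries:
                stamp = entry.get("published_parsed")
                if stamp is None or time.mktime(stamp) < CUTOFF:
                    continue
                for enclosure in entry.get("enclosures", []):
                    target = os.path.join(DEST, os.path.basename(enclosure["href"]))
                    if not os.path.exists(target):   # skip files already downloaded
                        urllib.request.urlretrieve(enclosure["href"], target)
                        print("downloaded", target)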

    Read the article

  • FPS lags with new Acer Aspire 5755G

    - by Calvin
    The title is kind of self-explanatory: my new laptop lags and has FPS drops. For example, my FPS in StarCraft 2 hovers around 20 and constantly drops to 1 on low settings, when I know it should run smoothly on high settings. I've updated my Nvidia driver and set the preferred global setting to the 'High-performance Nvidia processor'. Here are some screen shots: Screen Shot One - Screen Shot Two - Screen Shot Three. I'm not sure how to fix this problem; any feedback would be nice!

    Read the article

  • Podcast online sync service

    - by Hannes de Jager
    I download and listen to podcasts on two different computers. I would like to somehow sync the metadata such that when I subscribe to a feed, it gets subscribed on my other computer also, and when I've downloaded a podcast, it gets marked as downloaded on the other machine as well. Is there a software + web service combination that will allow me to do this - one that saves its state to the cloud and updates from it too? One of my machines is Windows and the other one Linux.

    Read the article

  • Help recovering lost text from a refreshed Chrome webpage (it was in the clipboard as well)? [closed]

    - by tobeannounced
    Possible Duplicate: Chrome: where is the location to save browse temporary files. Ok, so here's what happened - and yes, it was pretty stupid of me: I wrote up and submitted a post on Stack Overflow. It was not suited to Stack Overflow, as someone pointed out to me, so I deleted the post (this was a few hours ago). I copied the text into a new question page on Super User, but didn't submit it yet. I then accidentally refreshed the webpage that had the text, and the question has now been deleted from Stack Overflow. I have Lazarus installed, however the Chrome version doesn't have many features and the text was not recoverable from there. I do not have a clipboard manager, but the text was copied to my clipboard - so is there any way to get this back (Windows 7)? Although the post on Stack Overflow was deleted, I suppose it would have existed in my cache - could I recover it from there? Would the post on Stack Overflow exist in an RSS feed anywhere? Many thanks, and I hope I can find this - I am sure the solution will prove valuable for me (and others) again in the not too distant future.

    Read the article

  • Share the DVB card on Windows 7 [closed]

    - by Bashar Kernel
    I have two computers connected to a router, and I have a DVB card in one of them. I want to use the one DVB card to feed both of them. I read about it and I understand that I need to share the DVB adapter over the LAN using Internet Connection Sharing. But when I use connection sharing, I lose my internet access. I tried to use a "Bridge Connection", but then I also lost my internet access. Can anyone tell me how to fix this problem? And how do I view the channels (for example, how do I use VLC)?

    Read the article

  • Software for Company internal Website [closed]

    - by LordT
    Hope this is the right Stack Exchange site to ask this: we have a group of webpages/services at work (SE startup), ranging from SVN, Trac, and continuous integration to link collections and a DMS. Nearly everything has an RSS feed to get the info I need, with the exception of SVN. I'm looking for some kind of software that can integrate these well on a kind of start page. The most recent changes, upcoming events, etc. should be clearly visible, as well as an option to search (the search will be provided by a different tool). A news area should be included as well. Currently, I'm pondering doing this with either WordPress or TWiki, although WordPress seems to be the simpler solution in terms of getting something good-looking quickly. Authentication should be handled by HTTP Basic Auth, which we already have in place and working well. I would normally consider SharePoint a viable option for this, but we're exclusively Mac and Linux, and I won't put up a Windows server just for this.

    Read the article

  • Improve efficiency when using parallel to read from compressed stream

    - by Yoga
    This is another question extending the previous one [1]. I have a compressed file and stream it into a Python program, e.g.:

        bzcat data.bz2 | parallel --no-notice -j16 --pipe python parse.py > result.txt

    parse.py can read from stdin continuously and print to stdout. My EC2 instance has 16 cores, but the top command shows a load average of only 3 to 4. In ps, I am seeing a lot of stuff like:

        sh -c 'dd bs=1 count=1 of=/tmp/7D_YxccfY7.chr 2>/dev/null';

    I know I could improve performance by using -a in.txt, but in my case I am streaming from bz2 (I cannot extract it since I don't have enough disk space). How can I improve the efficiency in my case? [1] Gnu parallel not utilizing all the CPU
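
    Not part of the original question, but for reference, a minimal sketch of the kind of line-oriented parse.py the pipeline above assumes: it reads records from stdin and writes results to stdout as it goes, which is what lets GNU parallel split the decompressed stream into chunks across workers (the actual parsing logic is a placeholder):

        import sys

        def parse(line):
            # Placeholder for the real parsing logic in the original parse.py.
            fields = line.rstrip("\n").split("\t")
            return ",".join(fields)

        for line in sys.stdin:
            sys.stdout.write(parse(line) + "\n")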

    Read the article

  • ipfw is blocking access to a script

    - by user225551
    I need to figure out how to troubleshoot why ipfw is blocking my script. The script parses some RSS feeds over the net. I have about 10 different feed URLs. About 5 or 6 of these URLs return XML; the others time out. If I turn the firewall off, they all work. The issue I'm having is that I have no idea what port I need to open for the URLs that are timing out. Is there a command that will show me this?
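
    Not from the original question, but as a starting point: the port a feed needs is whatever its URL specifies, or the scheme default (80 for http, 443 for https). A small sketch that prints the host, port, and resolved addresses for each URL (the list itself is a placeholder), which can then be checked against the ipfw rules:

        import socket
        from urllib.parse import urlparse

        FEED_URLS = [                      # placeholder list of feed URLs
            "http://example.com/feed.rss",
            "https://example.org:8080/rss.xml",
        ]

        for url in FEED_URLS:
            parts = urlparse(url)
            port = parts.port or (443 if parts.scheme == "https" else 80)
            addrs = sorted({info[4][0] for info in socket.getaddrinfo(parts.hostname, port)})
            print("%s -> host=%s port=%d addrs=%s" % (url, parts.hostname, port, ", ".join(addrs)))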

    Read the article

  • File filtering - Python 3.2 [closed]

    - by user71261
    I'm trying to write a short file-filtering program in Python that will find a desired string. I've got it worked out logically, but my command feed is sending me an error message for the print statement. This is how it works as of now:

        filename = input('give file name: ')
        n = input('give desired string: ')
        f = open
        line = f.readline()
        while line:
            if n in line:
                print line
            line = f.readline()

    Error statement:

        Traceback (most recent call last):
          File "<string>", line 7, in <fragment>
        Syntax Error: print line: <string>, line 718

    I know this is a simple problem but the answer is not obvious to me. Please help.
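
    Not part of the original question, but for reference, a corrected sketch of the same script under Python 3: the reported error comes from print being a function in Python 3 (the Python 2 statement form is a syntax error), and, at least as pasted, open is never actually called with the filename:

        filename = input('give file name: ')
        n = input('give desired string: ')

        # Actually open the file (the snippet above assigned the open function itself).
        with open(filename) as f:
            for line in f:
                if n in line:
                    print(line, end='')   # print() is a function in Python 3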

    Read the article

  • Need help with setting up comet code

    - by Saif Bechan
    Does anyone know of a way, or think it's possible, to connect Node.js with the Nginx HTTP push module to maintain a persistent connection between server and browser? I am new to Comet, so I just don't understand the publishing etc.; maybe someone can help me with this. What I have set up so far is the following. I downloaded the jQuery.comet plugin and set up the following basic code.

    Client JavaScript:

        <script type="text/javascript">
            function updateFeed(data) {
                $('#time').text(data);
            }
            function catchAll(data, type) {
                console.log(data);
                console.log(type);
            }
            $.comet.connect('/broadcast/sub?channel=getIt');
            $.comet.bind(updateFeed, 'feed');
            $.comet.bind(catchAll);
            $('#kill-button').click(function() {
                $.comet.unbind(updateFeed, 'feed');
            });
        </script>

    What I understand from this is that the client will keep listening to the URL /broadcast/sub?channel=getIt, and when there is a message it will fire updateFeed. Pretty basic and understandable IMO.

    Nginx HTTP push module config:

        default_type application/octet-stream;
        sendfile on;
        keepalive_timeout 65;
        push_authorized_channels_only off;

        server {
            listen 80;
            location /broadcast {
                location = /broadcast/sub {
                    set $push_channel_id $arg_channel;
                    push_subscriber;
                    push_subscriber_concurrency broadcast;
                    push_channel_group broadcast;
                }
                location = /broadcast/pub {
                    set $push_channel_id $arg_channel;
                    push_publisher;
                    push_min_message_buffer_length 5;
                    push_max_message_buffer_length 20;
                    push_message_timeout 5s;
                    push_channel_group broadcast;
                }
            }
        }

    OK, now this tells nginx to listen on port 80 for any calls to /broadcast/sub and to give back any responses sent to /broadcast/pub. Pretty basic also. This part is not so hard to understand, and is well documented over the internet. Most of the time there is a Ruby or PHP file behind this that does the broadcasting. My idea is to have Node.js do the broadcasting to /broadcast/pub. I think this will let me have persistent streaming data from the server to the client without breaking the connection. I tried the long-polling approach with looping the request, but I think this will be more efficient. Or is this not going to work?

    Node.js file: Now, to create the Node.js part, I'm lost. First of all, I don't know how to make Node.js work in this way. The setup I used for long polling is as follows:

        var sys = require('sys'),
            http = require('http');
        http.createServer(function (req, res) {
            res.writeHead(200, {'Content-Type': 'text/html'});
            res.write(new Date());
            res.close();
            seTimeout('', 1000);
        }).listen(8000);

    This listens on port 8000 and just writes to the response variable. For long polling my nginx config looked something like this:

        server {
            listen 80;
            server_name _;
            location / {
                proxy_pass http://mydomain.com:8080$request_uri;
                include /etc/nginx/proxy.conf;
            }
        }

    This just redirected port 80 to 8000, and this worked fine. Does anyone have an idea on how to have Node.js act in a way Comet understands? That would be really nice and you would help me out a lot.

    Resources used:
    - An example where this is done with Ruby instead of Node.js
    - jQuery.comet
    - Nginx HTTP push module homepage
    - Faye: a Comet client and server for Node.js and Rack (to use Faye I would have to install its Comet client, but I want to use the one supplied with Nginx - that's why I don't just use Faye; the one nginx uses is much more optimized)

    Extra:
    - Persistent connections
    - Going evented with Node.js

    Read the article
