Search Results

Search found 2442 results on 98 pages for 'feed'.


  • Serving and caching content from Amazon S3 with Tomcat

    - by Rob
    Hi all, we're looking to serve a range of content using Amazon S3 as the store for the content and Tomcat to host the web application. The content is divided into free and paid-for content. We intend to authenticate users when they access the web application running in Tomcat, and based on that authentication we can tell whether a user has access to paid-for content or only the free stuff. So I envision the flow of a request being something like this: (1) authenticated request to Tomcat; (2) if the user is a "paid" user, display links to premium content; (3) requests for paid content are directed back through Tomcat to prevent direct access by non-paying users; (4) Tomcat makes the request to S3 through a web cache to keep our costs down; (5) the content is returned to the user. As we have to pay for each request to S3, I'd ideally like to cache content locally to the Tomcat instance after it has been requested for the first time, to keep costs to a minimum and to speed things up. I would also like to be able to invalidate this cache when we publish fresh content to S3. So, to confirm my proposal: Client Request - Tomcat - Web Cache - S3. To invalidate the cache, I was thinking of using something like PubSubHubbub, with the cache waiting for updates to a feed listing the content it should invalidate. I'd appreciate some general feedback on this approach, as I've no real experience of caching and I'm sure I've made some invalid assumptions. I'd also appreciate any recommendations for caching technologies. Thanks.
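
    For illustration, here is a rough sketch of the read-through caching I have in mind, written in Python just to show the pattern; the bucket name, key layout and cache directory are made up, and the real thing would live inside the Tomcat webapp (e.g. as a servlet or filter) rather than in a separate script:

      # Sketch of a read-through cache: serve from local disk, hit S3 only on a miss.
      # Assumes boto3 is installed and AWS credentials are configured; names are placeholders.
      import os
      import boto3

      CACHE_DIR = "/var/cache/content"     # hypothetical local cache directory
      BUCKET = "example-content-bucket"    # hypothetical bucket name

      s3 = boto3.client("s3")

      def get_content(key):
          """Return the bytes for `key`, using the local cache when possible."""
          local_path = os.path.join(CACHE_DIR, key.replace("/", "_"))
          if os.path.exists(local_path):               # cache hit: no S3 request, no cost
              with open(local_path, "rb") as f:
                  return f.read()
          obj = s3.get_object(Bucket=BUCKET, Key=key)  # cache miss: one paid S3 request
          data = obj["Body"].read()
          os.makedirs(CACHE_DIR, exist_ok=True)
          with open(local_path, "wb") as f:            # store locally for next time
              f.write(data)
          return data

      def invalidate(key):
          """Drop the cached copy when fresh content is published (e.g. on a feed update)."""
          local_path = os.path.join(CACHE_DIR, key.replace("/", "_"))
          if os.path.exists(local_path):
              os.remove(local_path)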

    Read the article

  • How to download Vim script on the command-line?

    - by HaiYuan Zhang
    Whenever I want to install a new Vim script on the Linux server I'm working on, my typical workflow is as follows: surf the plugin's homepage on Vim online using FireXXXX, download the right version of the plugin to my laptop by clicking some highlighted link, then upload the downloaded plugin from my laptop to the Linux server using WinSCP, which is really inconvenient. I don't know what the magic is behind this: for the same hyperlink, clicking it in a web browser lets me download the file, but using Wget with that hyperlink on the Linux command line ends up with nothing but an error. Otherwise I could just get the link in the web browser and then use Wget or some similar tool to actually do the downloading. I try new cool Vim scripts quite often, so you can imagine my dismay at having to repeat this tedious process all the time. What are some tips which would let me download Vim scripts in a more "professional" way? Post edit: My problem is not finding a tool like Wget or cURL. The problem I met is quite specific: how to use these tools to download a Vim script. Let's take http://www.vim.org/scripts/script.php?script_id=30 as an example. It's the normal place where one can get the script, at least for me, but I can't find a working URL on this page that I can feed to Wget.
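
    To make it concrete, the sort of thing I'd like to automate is sketched below in Python; it assumes the script page links to something like download_script.php?src_id=..., which I have not verified for every plugin page:

      # Sketch: fetch a vim.org script page and grab the first download link it offers.
      # The href pattern below is an assumption about the page's HTML.
      import re
      import sys
      import urllib.request

      def download_first_script(page_url):
          html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")
          m = re.search(r'href="(download_script\.php\?src_id=\d+)"', html)
          if not m:
              sys.exit("no download link found on %s" % page_url)
          dl_url = "http://www.vim.org/scripts/" + m.group(1)
          with open("plugin_download", "wb") as f:
              f.write(urllib.request.urlopen(dl_url).read())

      if __name__ == "__main__":
          download_first_script("http://www.vim.org/scripts/script.php?script_id=30")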

    Read the article

  • httpd 2.2.15 + suPHP + suExec + php5 = permission and information ?

    - by Prix
    Hi, I am currently playing around with suExec, suPHP and php5 on my Apache on Slackware 13.1. Everything is installed and working properly, but now I'd like to go further into the directory permissions and the suPHP settings and options available. Initially I was planning to leave suPHP disabled unless a virtual host explicitly enables it, but that does not seem to work; see this sample mod_php.conf, which is included in my httpd.conf: # # mod_php & mod_suPHP - PHP Hypertext Preprocessor module # # Load the PHP module: LoadModule php5_module lib/httpd/modules/libphp5.so # Load the suPHP module: LoadModule suphp_module lib/httpd/modules/mod_suphp.so <IfModule mod_php5.c> # Tell Apache to feed all *.php files through PHP. If you'd like to # parse PHP embedded in files with different extensions, comment out # these lines and see the example below. <FilesMatch \.php$> SetHandler application/x-httpd-php </FilesMatch> </IfModule> <IfModule mod_suphp.c> # This option tells mod_suphp if a PHP-script requested on this server (or # VirtualHost) should be run with the PHP-interpreter or returned to the # browser "as it is". suPHP_Engine off </IfModule> With this first sample, suPHP and PHP do not work; if I comment out the php5 handler part but leave the module loaded, it runs just fine... So my first question is: how could I possibly make this setup work? I want suPHP disabled and php5 used by default, and if a virtual host has suPHP enabled, php5 should be disabled and suPHP used for that host. If any information is lacking here, please let me know and I will update with whatever additional information you may need. Thanks in advance.
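
    What I had in mind per virtual host is roughly the following untested sketch; the directive names come from the mod_php and mod_suphp documentation, and the hostname and paths are placeholders:

      # Hypothetical VirtualHost that switches from mod_php to suPHP (untested sketch)
      <VirtualHost *:80>
          ServerName suphp.example.com
          DocumentRoot /srv/www/suphp.example.com

          # Turn mod_php off for this vhost only...
          php_admin_flag engine off

          # ...and let suPHP handle .php files here instead
          suPHP_Engine on
          suPHP_AddHandler x-httpd-php
          AddHandler x-httpd-php .php
      </VirtualHost>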

    Read the article

  • Split MPEG video from command line?

    - by Tim
    I have a homemade DVD that I'm effectively trying to insert chapters into and rearrange - the original author burned it as one long chapter, and I'd like to rip it into smaller pieces and re-encode it into a new DVD. I ripped the DVD with the following command: mplayer dvd:// -dvd-device /dev/sr2 -dumpstream -dumpfile raw.vob I'm running Gentoo Linux with mplayer version 1.0-rc2_p20090731 (the latest available in Portage). I have a list of times that the chapters are supposed to span (for example 30:11-33:25), so my first thought was to rip the entire DVD and use mpgtx to cut out certain pieces of the file. My issue is that running mpgtx -i on the file reports quite a few timestamp jumps: Time stamps jumped from 59.753789 to 0.001622 at position 1d29800 Time stamps jumped from 204963823030450.343750 to 31.165900 at position 2d4f800 Time stamps jumped from 60.077878 to 0.001622 at position 43cc000 Time stamps jumped from 60.024233 to 0.001622 at position 65c5000 Time stamps jumped from 204963823068631.718750 to 52.549244 at position 7fd1000 I've tried to fix the indexes using: mencoder raw.vob -oac copy -ovc copy -forceidx -o fixed.vob -of mpeg But mpgtx will still report timestamp issues. My immediate question: is there a way to take the ripped movie I have and correct its timestamps so I can cut it with mpgtx? If I can get that one issue out of the way, building the rest of the DVD will be smooth sailing. If it's not possible to fix the timestamps on this file: is there a better way to rip small chunks of the DVD into separate files for recompilation later? I'd very much like this to be done on Linux, and it'd be even better if I could script it somehow (feed in a list of start and end positions, or start times and durations, and get out a series of ripped files). If need be, I also have a Mac OS X machine available, but no Windows. Edit: I wound up finding another solution involving HandBrake and ffmpeg (with help from this question), but the question stands. Edit again: Turns out my other solution didn't quite work - the audio desynchronized by about five seconds, in about half of my cut mpgs - so I'm back to square one. Anyone?
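
    The sort of scripting I mean would be something like this Python sketch that shells out to ffmpeg with a list of (start, duration) pairs; the cut points below are placeholders and the flags assume a stock ffmpeg build:

      # Sketch: cut a ripped VOB into chapter-sized pieces with ffmpeg.
      # The (start, duration) pairs are example values only.
      import subprocess

      SOURCE = "raw.vob"
      CHAPTERS = [                      # (start time, duration) per chapter, hh:mm:ss
          ("00:30:11", "00:03:14"),
          ("00:33:25", "00:05:00"),
      ]

      for i, (start, length) in enumerate(CHAPTERS, 1):
          out = "chapter%02d.mpg" % i
          subprocess.check_call([
              "ffmpeg", "-ss", start, "-i", SOURCE,
              "-t", length,
              "-acodec", "copy", "-vcodec", "copy",   # no re-encode; re-encode here if A/V drifts
              out,
          ])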

    Read the article

  • Cache-control for permanent 301 redirects nginx

    - by gansbrest
    I was wondering if there is a way to control the lifetime of redirects in Nginx. We would like to cache 301 redirects in the CDN for a specific amount of time, let's say 20 minutes, and the CDN is controlled by the standard caching headers. By default there is no Cache-Control or Expires directive with the Nginx redirect, which could cause the redirect to be cached for a really long time. By giving the redirect a specific lifetime, the system has a chance to correct itself, since even "permanent" redirects change from time to time. The other thing is that those redirects are included from the server block, which according to the Nginx documentation is evaluated before locations. I tried to add add_header Cache-Control "max-age=1200, public"; to the bottom of the redirects file, but the problem is that Cache-Control gets added twice - one comes, say, from the backend script and the other is added by the add_header directive. In Apache there is the environment-variable trick to control headers for rewrites: RewriteRule /taxonomy/term/(\d+)/feed /taxonomy/term/$1 [R=301,E=expire:1] Header always set Cache-Control "store, max-age=1200" env=expire But I'm not sure how to accomplish this in Nginx.
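
    What I'm after, per redirect, would look roughly like the untested sketch below; I'm assuming add_header's default status list covers the 301 produced by return, and because return is answered by Nginx itself there is no backend response to contribute a second Cache-Control header:

      # Hypothetical location-based redirect with an explicit 20-minute lifetime (untested)
      location = /old-page {
          add_header Cache-Control "max-age=1200, public";   # should apply to the 301 as well
          return 301 http://example.com/new-page;
      }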

    Read the article

  • How to download a Vim script on the command line?

    - by HaiYuan Zhang
    Whenever I want to install a new Vim script on the Linux server I'm working on, my typical workflow is as follows: surf the plugin's homepage on Vim online using FireXXXX, download the right version of the plugin to my laptop by clicking some highlighted link, then upload the downloaded plugin from my laptop to the Linux server using WinSCP, which is really inconvenient. I don't know what the magic is behind this: for the same hyperlink, clicking it in a web browser lets me download the file, but using Wget with that hyperlink on the Linux command line ends up with nothing but an error. Otherwise I could get the link in the web browser and then use Wget or some similar tool to actually do the downloading. I try new cool Vim scripts quite often, so you can imagine my dismay when I have to repeat the tedious process all the time. So if any of you knows some tips that would let me download Vim scripts in a more "professional" way, I'd appreciate it a lot. Post edit: My problem is not finding a tool like Wget or cURL. The problem I met is quite specific: how to use these tools to download a Vim script. Let's take http://www.vim.org/scripts/script.php?script_id=30 as an example; it's the normal place where one can get the script, at least for me, but I can't find a working URL on this page that I can feed to Wget.

    Read the article

  • Disconnect from PHP after output generated

    - by Oli
    I have a LEMP stack: Nginx sitting in front of PHP-FPM. Because some of the sites are heavy and there's opcode caching, PHP is set up so that there are only 5 child processes running, the aim being that each child can deal with any request in less than half a second and then move on to the next request. One problem I've found is that if a big chunk of HTML is being sent out and the user has a slow connection, that PHP process stays occupied until they've finished downloading. Because of my current setup, I have a pretty unforgiving timeout inside PHP where the script is killed after 20 seconds. This is to make sure everybody gets a turn, but on a slow connection it can mean the user gets cut off with a 504 Gateway Timeout. I was wondering if there was some sort of buffering solution that I could implement within or just behind Nginx that sends the request through and then... well... buffers the content into its own cache and feeds it to the client as and when they can download it, the aim being that the underlying PHP process is freed up. What I'm asking for doesn't have to be PHP-specific; anything that deals with FastCGI, or indeed any Nginx upstream, might have a similar issue to this.
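
    The closest thing I've found to that idea so far is Nginx's own FastCGI response buffering; something like the sketch below is what I'm considering (the directive names are from the Nginx docs, the values are guesses). The intent is that Nginx reads the whole PHP response into memory or a temp file and frees the PHP-FPM child while it trickles the bytes out to the slow client:

      # Hypothetical tuning so Nginx, not PHP-FPM, babysits slow downloads (values are guesses)
      location ~ \.php$ {
          include fastcgi_params;
          fastcgi_pass unix:/var/run/php-fpm.sock;   # socket path is a placeholder

          fastcgi_buffer_size 64k;          # first chunk / headers
          fastcgi_buffers 32 64k;           # ~2 MB of in-memory buffering per request
          fastcgi_max_temp_file_size 50m;   # spill bigger responses to disk instead of holding PHP
      }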

    Read the article

  • Is there any way to synchronize Outlook RSS Feeds with BlackBerry?

    - by nvuono
    Does anyone know how I can view the contents of my Outlook 2007 RSS Feeds from a corporate-issued BlackBerry? Our Inbox and Calendar are already integrated with corporate exchange servers but it looks like nobody cares too much about the RSS Feeds. Is there some setting on my Blackberry or in Outlook I could possibly tweak to include these updates? I know there are many standalone RSS readers available for blackberry (Google Reader for example) but I mention Outlook RSS Feeds specifically in my question because I am subscribing to a number of RSS feeds I've setup on my intranet for various version control systems that would be inaccessible to an external RSS reader. It seems like I might have to setup some sort of email commit notifications if I want anything from my blackberry but I much prefer the 'pull' method of an RSS feed viewer over receiving streams of emails. Please feel free to suggest any alternatives! Edit: I've additionally tried moving my "SVN Repository" folder directly into my Mailbox instead of keeping it as a child of the RSS Feeds folder. This allows me to view the SVN Repository folder on my blackberry where previously the RSS Feeds folder and all children were hidden but unfortunately it never seems to get populated with the items that are displaying in Outlook. I've even made a fresh commit to make sure that the SVN Repository folder still works correctly in Outlook from outside the RSS Feeds folder but no luck on the BlackBerry end of things. BlackBerry Model Details: BlackBerry 8310 smartphone (EDGE) v4.2.2.170 Platform 2.5.0.30

    Read the article

  • Best shortcut in Total Commander

    - by life-warrior
    So, what's your favourite TC shortcut or shortcut combination? Which ones do you use, and for what purpose? Among my most often used: Ctrl-Left (or Ctrl-Right) - open the archive or folder under the cursor in the opposite tab. Ctrl-Shift-Enter, Alt-F8, Ctrl-X - copy the full file path to the clipboard. Shift-F6, Shift-End (if needed), Ctrl-C - copy only the file name, without the path. Select files, Ctrl-M - multi-rename, for example to remove "DVDrip" from file names. Ctrl-\ - go to the root directory. Ctrl-D, then a highlighted letter - go to the favourite directory marked with that letter. For example, name a downloads directory "&Downloads" in favourites, and the letter after the ampersand will be highlighted. Alt-F7, feed to listbox, Ctrl-A, Mark (menu) - Save selection to file - creates a file listing all files and directories inside the current one, with full paths. Ctrl-[3-6] - sort files by name (3), extension (4), date (5), size (6). For example: sort by name when you need movies and soundtracks with the same name but different extensions grouped together; sort by extension when you need to find EXEs in the Windows directory; sort by date when you need to find the latest file downloaded into your directory; sort by size when you need to delete the largest files to free space.

    Read the article

  • Using a named pipe to simulate a serial port on a VMware virtual machine (linux host and client)

    - by Dave M
    Trying to write a python program to create a simulated data stream and feed it, through a named pipe, to a VMware virtual machine. The host is running Ubuntu 11.10 and VMware player 5.0.0. The Vm is running Ubuntu netbook 10.04. I am able to get the pipe working on the local machine but I am not able to get the pipe to pass data through the virtual serial port to the programs running on the virtual machine. #!/usr/bin/python import os # # Create a named pipe that will be used as the serial port on a VMware virtual machine SerialPipe = '/tmp/gpsd2NMEA' try: os.unlink(SerialPipe) except: pass os.mkfifo(SerialPipe) # # Open the named pipe NMEApipe = os.open(SerialPipe, os.O_RDWR|os.O_NONBLOCK) # # Write a string to the named pipe NMEAtime = "235959" os.write(NMEApipe, str( '%s\n' % NMEAtime )) Test to see if the python program is working on the host machine (displays 235959 if data is passing through the pipe) $ cat /tmp/gpsd2NMEA 235959 Serial port as defined in the VMware .vmx file: serial0.present = "TRUE" serial0.startConnected = "TRUE" serial0.fileType = "pipe" serial0.fileName = "/tmp/gpsd2NMEA" serial0.pipe.endPoint = "client" serial0.autodetect = "FALSE" serial0.tryNoRxLoss = "TRUE" serial0.yieldOnMsrRead = "TRUE" Test to see if the serial port in the VM is receiving data $ cat /dev/ttyS0 or $ minicom -D /dev/ttyS0 or $ stty -F /dev/ttyS0 cs8 -parenb -cstopb 115200 $ echo < /dev/ttyS0 None of these display any data from the python program.

    Read the article

  • What is the sysadmin's dream network printer? 6-8k pg/mo. Xerox, OkiData, Lexmark and HP are all fail

    - by Jacob
    How do I find out which printer brand and/or type doesn't suck? This information is hard to find, and manufacturers' websites won't reveal any issues with certain printers. After 10 years of dealing with network shared printers, I can't say that I have been impressed with any of the printer brands I've seen. Brother's little laser MFPs have been close to ideal for low volume, but that is it, period. OkiData, Lexmark, HP, Xerox solid ink printers - they all sucked in one way or another. Currently I'm looking to replace a Xerox ColorQube 8570 because it fails to print on a regular basis. Sometimes it doesn't even boot VxWorks fully; it just hangs at 2% or whatever. I've used Xerox 8860MFPs and they sucked just as badly. I won't talk about inkjets here; that's most likely not what I'm looking for. We currently spend about $4k on paper and ink per year for this printer, at up to 6-8k pages per month, letter size, mostly black and white, low color usage. I want the printer to feed paper correctly, not crash and burn when a PDF isn't to its taste (my favorite Xerox problem here), and to come with decent drivers for Windows and OS X. Print quality is not of the utmost importance, but paper does get sent to customers.

    Read the article

  • How can I proxy multiple LDAP servers, and still have grouping of users on the proxy?

    - by Chris
    I have 2 problems that I'm hoping to find a common solution to. First, I need to find a way to have multiple LDAP servers (Windows ADs across multiple domains) feed into a single source for authentication. This is also needed to get applications that can't natively talk to more than one LDAP server to work. I've read this can be done with OpenLDAP. Are there other solutions? Second, I need to be able to add those users to groups without being able to make any changes to the LDAP servers I'm proxying. Lastly, this all needs to work on Windows Server 2003/2008. I work for a very large organization, and creating multiple groups and having large numbers of users added to, moved between, and removed from them is no small task. This normally requires tons of paperwork and a lot of time. Time is the one thing we don't normally have; dodging the paperwork is just a plus. I have very limited experience in all this, so I'm not even sure what I'm asking will make sense. Atlassian Crowd comes close to what we need, but falls short of having its own LDAP front end. Can anyone provide any advice or product names? Thanks for any help you can provide.
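
    For comparison, if I ended up gluing the authentication half together myself instead of proxying, I imagine it would look something like this Python sketch using the third-party ldap3 library; the host names and the DOMAIN\user format are placeholders, and it only shows the "try each AD in turn" idea, not the grouping part:

      # Sketch: authenticate a user against several AD domains by trying each in turn.
      # Requires the ldap3 package; hosts and the account format are placeholders.
      from ldap3 import Server, Connection, NTLM

      AD_SERVERS = [
          ("dc1.domain-a.example", "DOMAINA"),
          ("dc1.domain-b.example", "DOMAINB"),
      ]

      def authenticate(username, password):
          """Return the domain that accepted the credentials, or None."""
          for host, domain in AD_SERVERS:
              server = Server(host, use_ssl=True)
              conn = Connection(server,
                                user="%s\\%s" % (domain, username),
                                password=password,
                                authentication=NTLM)
              if conn.bind():              # True when the bind (login) succeeds
                  conn.unbind()
                  return domain
          return None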

    Read the article

  • Weirdly high ping on direct ethernet connection?

    - by Antriel
    I bought a new Lenovo IdeaCentre H430 PC and I'm having a problem with high pings. It runs Windows 7 with an on-board Realtek NIC; fresh install, fully updated, drivers installed from the included CD. When I start pinging the router (direct 1 Gb Ethernet connection, 1 hop), pings start at <1ms (which is fine) and after a while they jump to 300-1000ms. I loaded up a live Ubuntu to test whether the problem might be in the hardware. It's not; in Ubuntu pings were always <1ms. I also noticed that when I start using the connection somehow, pings go down to 1ms, but they go back up when I stop using it (tested by accessing a live camera feed on the LAN). Power options are set to maximum performance. I disabled Interrupt Moderation on the NIC; it didn't help. I tested it in safe mode with networking, and the problem is there too. It slows down our client-server based programs and I have no idea what's causing it. All I could google up was that disabling Interrupt Moderation would help, but it didn't. Has anyone had similar problems? tl;dr: the computer gives high pings to the router when idle and normal pings when the network is under load, and this slows down our software significantly.

    Read the article

  • How can I connect my Xbox to my Mac on my network

    - by codecowboy
    I have a wireless router/modem (Router 1) in my living room. This is connected to the internet (cable). Wireless is disabled as the router has a terrible wireless range. My Xbox is connected via ethernet to Router 1. Another LAN output from Router 1 connects to a powerline adapter. Router 1 acts as a DHCP server on 192.168.0.x and has the IP 192.168.0.1 In a second room I have Router 2. This has the powerline feed from Router 1 going into the WAN socket. This router runs the Tomato Firmware and acts as a wireless router for the rest of the house using the IP range 192.168.1.x. Router 2 IP is 192.168.1.1. My Mac is connected to Router 2 using a LAN cable and has the IP 192.168.0.133. Several mobile devices need wireless access. I want an ethernet connection to my Mac, not wireless. I should be able to use software like Connect360 to share media from my Mac to the XBox but the XBox does not see my Mac. I can ping 192.168.0.1 from the Mac. Is this possible using my current setup? If so, how?

    Read the article

  • Wordpress Forbidden page

    - by ffffff
    When I open a post preview without logging in (so I have no permission to view it), the returned page has an empty body. The response HTML is this: <html xmlns="http://www.w3.org/1999/xhtml" lang="ja" xml:lang="ja"> <head profile="http://purl.org/net/ns/metaprof"> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta http-equiv="Content-Script-Type" content="text/javascript" /> <meta name="generator" content="WordPress 2.9.2" /> <meta name="author" content="blog" /> <link rel="alternate" type="application/atom+xml" href="http://blog.example.com/feed/atom/" title="Atom cite contents" /> <link rel="start" href="http://blog.example.com" title="blog Home" /> <link rel="stylesheet" type="text/css" href="http://blog.example.com/wp-content/themes/blog/style.css" /> <meta name="description" content="blog" /> <title>blog - </title> </head> <body class="individual single"> </div> </body> </html> Do you have any solutions?

    Read the article

  • Converting Audio To Video Output and Attaching Text?

    - by ZeeMan
    I am currently working on a project, and before I get started I thought it'd be nice to check with the Stack Overflow community and see if they can help me with this. The idea: I have about a thousand MP3 files that I need to convert into video files to be uploaded to YouTube for my work. Here is where it gets tricky: I also need to attach the text associated with each audio file to the video as an image. I was thinking .ppt. The problem: I can do this one audio file at a time, but it would take me a zillion years. lol!! The question: can I create some kind of program, using let's say XML or JavaScript or XHTML or some other programming language, to do mass content creation where all I have to do is feed it the information? Possibly a script? Or is it possible to create an example .ppt file and then hack it so that I can have it reproduce itself with different information? The note: thanks in advance for helping out! Regards, ZeeMan!!!
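
    For what it's worth, the direction I'm leaning toward is a small script that loops over the MP3s and shells out to ffmpeg, pairing each audio file with a pre-rendered slide image; a rough Python sketch follows (the file layout and the ffmpeg flags are assumptions on my part and need a reasonably recent ffmpeg):

      # Sketch: turn each MP3 plus a matching slide image into an MP4 ready for upload.
      # Assumes audio/NNN.mp3 has its text rendered as slides/NNN.png beforehand.
      import glob
      import os
      import subprocess

      for mp3 in glob.glob("audio/*.mp3"):
          base = os.path.splitext(os.path.basename(mp3))[0]
          slide = os.path.join("slides", base + ".png")
          out = os.path.join("video", base + ".mp4")
          subprocess.check_call([
              "ffmpeg", "-loop", "1", "-i", slide,   # repeat the still image
              "-i", mp3,
              "-shortest",                           # stop when the audio ends
              "-acodec", "aac", "-vcodec", "libx264",
              out,
          ])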

    Read the article

  • IPTables forward from only one ip on my server

    - by user1307079
    I was able to get my server to forward connections on a certain port to a different IP, but when I add -d to specify an IP to forward from, it does not work. This is what I am trying: iptables -t nat -A PREROUTING -d 173.208.230.107 -p tcp --dport 80 -j DNAT --to-destination 38.105.20.226:80 (checked afterwards with iptables -t nat -nvL). It works fine without the -d. Here is my ifconfig dump: em1 Link encap:Ethernet HWaddr 00:A0:D1:ED:D0:54 inet addr:173.208.230.106 Bcast:173.208.230.111 Mask:255.255.255.248 inet6 addr: fe80::2a0:d1ff:feed:d054/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:100058 errors:0 dropped:0 overruns:0 frame:0 TX packets:18941701 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:12779711 (12.1 MiB) TX bytes:825498499 (787.2 MiB) Memory:fbde0000-fbe00000 em1:9 Link encap:Ethernet HWaddr 00:A0:D1:ED:D0:54 inet addr:173.208.230.107 Bcast:173.208.230.111 Mask:255.255.255.248 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Memory:fbde0000-fbe00000 em1:10 Link encap:Ethernet HWaddr 00:A0:D1:ED:D0:54 inet addr:173.208.230.108 Bcast:173.208.230.111 Mask:255.255.255.248 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Memory:fbde0000-fbe00000 em1:11 Link encap:Ethernet HWaddr 00:A0:D1:ED:D0:54 inet addr:173.208.230.109 Bcast:173.208.230.111 Mask:255.255.255.248 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Memory:fbde0000-fbe00000 em1:12 Link encap:Ethernet HWaddr 00:A0:D1:ED:D0:54 inet addr:173.208.230.110 Bcast:173.208.230.111 Mask:255.255.255.248 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Memory:fbde0000-fbe00000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

    Read the article

  • PHP application failed to connect after a network plugged back in

    - by tntu
    My data center appears to have had some issues with their network, and so my server has suffered from on-and-off network connectivity for about an hour. After the connection was completely re-established, my code still kept reporting the same issue over and over until I restarted the service. The code is a simple PHP script that loops forever, checking the Apple feedback server, then sleeping for a few minutes and starting all over again. Now, I understand the error being generated while the network is down, but once it came back up, why did it continue until I restarted the code? Does PHP have something that needs to be re-initialized? Messages log: Dec 20 08:57:22 server kernel: r8169: eth0: link down Dec 20 08:57:28 server kernel: r8169 0000:06:00.0: eth0: link up Dec 20 08:57:29 server kernel: r8169: eth0: link down Dec 20 08:57:33 server kernel: r8169 0000:06:00.0: eth0: link up Dec 20 08:57:33 server kernel: r8169: eth0: link down Dec 20 08:57:37 server kernel: r8169 0000:06:00.0: eth0: link up Dec 20 08:57:38 server kernel: r8169: eth0: link down Dec 20 08:57:44 server kernel: r8169 0000:06:00.0: eth0: link up Dec 20 08:57:44 server kernel: r8169: eth0: link down Dec 20 08:57:52 server kernel: r8169 0000:06:00.0: eth0: link up Dec 20 08:57:52 server kernel: r8169: eth0: link down Dec 20 09:10:58 server kernel: r8169 0000:06:00.0: eth0: link up PHP Error: PHP Warning: stream_socket_client(): php_network_getaddresses: getaddrinfo failed: Name or service not known in /home/push/feedback.php on line 36 Code Line 36: $apns = stream_socket_client('ssl://feedback.sandbox.push.apple.com:2196', $errcode, $errstr, 60, STREAM_CLIENT_CONNECT, $stream_context);

    Read the article

  • 'Future-proof' Live Audio Capture & Broadcast [migrated]

    - by maxpowers
    I'm looking to implement some live audio broadcasting functionality within a Ruby on Rails site for a client, and was hoping I could get some input from people who have tackled this type of thing before. Essentially what I need to do is capture and record a user's audio (via microphone, line in, etc.), then stream that to 1,000+ listeners with very little latency, sub-2-second if possible. So it looks like we've got 3 parts: web-based audio capture (likely with Flash or JS); a server to accept the audio feed and stream it to listeners (likely Icecast or Wowza); and the actual audio player (maybe HTML5 with Flash as a fallback? Maybe this jPlayer fork). Does RTMP make sense here? Or maybe HTTP? What's the most 'future-proof' way to make this happen? I'm building with mobile in mind, but still want to be able to stream to anyone. I've found lots of potentially helpful threads and software, but I'm struggling to get an idea of how it all fits together. I'm a front-end guy and way out of my comfort zone here, so if anyone has insights to offer, I'd love to hear them.

    Read the article

  • Port(s) not forwarding?

    - by user11189
    I have cable internet service through Charter Communications and feed two desktop computers through a Linksys RP614v3 router. One system is my wife's, running WinXP Home Edition, and the other is mine, running Vista Home Premium (SP1). I have port forwarding configured in the Linksys so I can access the Vista system remotely using TightVNC. Initially it worked great, and I was able to remotely tend email and access local files while out of town for work. Lately the cable internet service appears to flicker intermittently, and after it comes back my Mailwasher program loses the ability to access the net and I've been unable to make the remote connection. When I reset the port forwarded for email in the router control panel, Mailwasher functionality returns, but as I'm home when that happens, I have no easy way to check remote access until the next time I'm on the road or at work. I'm at my wit's end -- the TightVNC client connects fine from my wife's system behind the same modem/router setup, but I don't know how to maintain whatever gets reset when I fiddle with the control panel, and the need to do so at all is new. I accessed it fine, off and on, for a week while out of town a month ago, and now I can't leave home and access it from work an hour later.

    Read the article

  • Can you see something wrong in my .htaccess?

    - by AlexV
    OK, after much searching, trial and error I've managed to create an .htaccess that does what I wanted (see explanations and questions after the code block): <IfModule mod_rewrite.c> RewriteEngine On #1 If the requested file is not url-mapper.php (to avoid .htaccess loop) RewriteCond %{REQUEST_FILENAME} (?<!url-mapper\.php)$ #2 If the requested URI does not end with an extension OR if the URI ends with .php* RewriteCond %{REQUEST_URI} !\.(.*) [OR] RewriteCond %{REQUEST_URI} \.php.*$ [NC] #3 If the requested URI is not in an excluded location RewriteCond %{REQUEST_URI} !^/seo-urls\/(excluded1|excluded2)(/.*)?$ #Then serve the URI via the mapper RewriteRule .* /seo-urls/url-mapper.php?uri=%{REQUEST_URI} [L,QSA] </IfModule> This is what the .htaccess should do: #1 checks that the requested file is not url-mapper.php (to avoid infinite redirect loops). This file will always be at the root of the domain. #2 the .htaccess must only catch URLs that don't end with an extension (www.foo.com -- catch | www.foo.com/catch-me -- catch | www.foo.com/dont-catch.me -- don't catch) and URLs ending with .php* files (.php, .php4, .php5, .php123...). #3 some directories (and their children) can be excluded from the .htaccess (in this case /seo-urls/excluded1 and /seo-urls/excluded2). Finally, the .htaccess feeds the mapper with a hidden GET parameter named uri containing the requested URI. Even though I've tested it and everything works, I want to know if what I'm doing is correct (and if it's the "best" way to do it). I've learned a lot with this "project", but I still consider myself a beginner at .htaccess and regular expressions, so I want to triple-check it here before putting it into production...

    Read the article

  • Problems with Net::FTP slowing down

    - by c0bra
    I'm running into an issue using Net::FTP (latest version, 2.77) to transfer files to a remote host where a process is waiting to take each file and feed it into some other system. The remote process does this every 5 minutes but ignores files that were modified in the last 0.2 seconds (that's right, 1/5th of a second). The problem is that for some reason the transfer seems to halt or slow down, no data is transferred for several seconds, and during this time the process picks up the incomplete file and removes it. The weird thing is that when I transfer the file manually with the ftp binary, it seems to go through fine. I've tried messing with all of the Net::FTP switches (Active/Passive, different block sizes) and nothing seems to help. What's also weird is that the file seems to transfer fairly quickly at first and, on occasion, a bit into the transfer: 300-500k will go up immediately, but then it slows down to where the file size is only increasing by 2,896 bytes every several seconds. It doesn't seem to happen when I try sending the file to a different remote host, but since a regular manual ftp transfer works with this host, I don't know what to think. Some combination of Net::FTP and possibly a slow or wonky connection?

    Read the article

  • RSS "Newspaper" / Google Reader replacement

    - by Sean D
    With the impending demise of Google Reader I've been looking at ways to replace it. I've decided that what might be cool is to get an email every morning with all the updates from the last twenty-four hours, maybe in the style of a newspaper. That's not a very original idea, since sites like http://fivefilters.org/pdf-newspaper/ and http://feedjournal.com/ already do this, but they both have various drawbacks. In particular, both require a single feed, will just take the last n items, and require clicking around on their website. The Pro option for FeedJournal seems almost like it would do the job, but the project seems to be dead and there's no way to buy it. Before I hack together something crazy, I'd like to know if there's a better solution to my problem. In short: I want to replace Google Reader with a daily PDF email; how should I do this? edit: I didn't award the bounty because nobody solved the problem (not that I'm assuming it has a solution). Answers like "well, for the way I do things this wouldn't work" aren't actually helpful, even if they are well-meaning.
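
    The "something crazy" I'd otherwise hack together would be along these lines, run once a day from cron: a Python sketch using the third-party feedparser package plus a plain SMTP mail-out (the feed list, addresses and local mail server are placeholders, and it sends plain text rather than a PDF):

      # Sketch: collect the last 24h of entries from a few feeds and email them as one digest.
      import time
      import smtplib
      import feedparser
      from email.mime.text import MIMEText

      FEEDS = ["http://example.com/feed1.rss", "http://example.com/feed2.rss"]
      cutoff = time.time() - 24 * 3600

      lines = []
      for url in FEEDS:
          for e in feedparser.parse(url).entries:
              published = getattr(e, "published_parsed", None)
              if published and time.mktime(published) >= cutoff:
                  lines.append("%s\n%s\n" % (e.title, e.link))

      msg = MIMEText("\n".join(lines) or "Nothing new today.")
      msg["Subject"] = "Daily feed digest"
      msg["From"] = "digest@example.com"
      msg["To"] = "me@example.com"

      s = smtplib.SMTP("localhost")
      s.send_message(msg)
      s.quit()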

    Read the article

  • How to avoid duplicates when copying files that have been renamed at the destination

    - by Benoitt
    I have to get pictures, with their extensions, from a folder (with subfolders) that is updated automatically. These files have to be copied into a folder where a PHP-based website will edit them (renaming them and creating an XML file) so they can be downloaded and integrated into an XML feed. Because of the script's rename step, when I perform the copy again all the files get duplicated, since the script has already renamed the original ones. I've tried a few things with rsync, but I'm looking for something more powerful, because I can't copy files based on an external "history". #!/bin/bash find '/home/name/picture' -name '*.jpg' | while read FILE ; do rsync --backup --backup-dir=incremental --suffix=.old "$FILE" /var/www/media ; done wget --spider 'http://myscript.php' ; #exit 0 PS: As a little addition, I'd like to replace '.' with a space just after the *.jpg copy; my PHP script has problems working out the extension when a file name contains extra dots. I'm thinking about a command with find, like I did before, combined with sed? Is that a good idea?
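
    One direction I'm considering is keeping my own little manifest of what has already been shipped, so the copy step skips anything it has seen before even after the PHP script renames it on the destination side; a rough Python sketch (the paths and the manifest location are made up):

      # Sketch: copy each new *.jpg exactly once, remembering what was copied in a manifest,
      # and replace dots in the base name with spaces on the way. Paths are placeholders.
      import os
      import shutil

      SRC = "/home/name/picture"
      DST = "/var/www/media"
      MANIFEST = "/home/name/.copied_pictures"

      seen = set()
      if os.path.exists(MANIFEST):
          with open(MANIFEST) as f:
              seen = set(line.strip() for line in f)

      with open(MANIFEST, "a") as manifest:
          for root, _dirs, files in os.walk(SRC):
              for name in files:
                  if not name.lower().endswith(".jpg"):
                      continue
                  src_path = os.path.join(root, name)
                  if src_path in seen:
                      continue      # already shipped once; later renames at the destination no longer matter
                  base, ext = os.path.splitext(name)
                  shutil.copy2(src_path, os.path.join(DST, base.replace(".", " ") + ext))
                  manifest.write(src_path + "\n")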

    Read the article
