Search Results

Search found 330 results on 14 pages for 'tibco rv'.

Page 7/14 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Asterisk is hacking itself [duplicate]

    - by Shirker
    This question already has an answer here: How do I deal with a compromised server? 11 answers I've got some strange logs on my Asterisk box (and a lot of extensions were tried): chan_sip.c: Failed to authenticate device 6006<sip:[email protected]>;tag=f106f3fe but IP XX.XX.XX.39 is its OWN IP! cat /etc/asterisk/* | grep 6006 returns nothing. asterisk -rv reports Asterisk 11.4.0. How is it possible that it hacks itself? And how could I trace where it comes from?

    Read the article

  • .Net Rocks Visual Studio 2010 Road Trip coming to Raleigh, NC May 6th

    - by Jim Duffy
    Listen up .NET developers within 50 miles of Research Triangle Park, NC!  Take out that red, blue, green, black or any other color Sharpie marker you fancy and circle May 6th! Fellow Microsoft Regional Directors Carl Franklin and Richard Campbell are going to be bringing the .Net Rocks Visual Studio 2010 Road Trip to town. What’s that you say, you’ve never been to a .Net Rocks Road Trip event and don’t know what to expect? Let me help with that. I stol… uhhh… I mean I was “inspired” by some content I found on the event information page. “Carl and Richard are loading up the DotNetMobile (a 30 foot RV) and driving to your town again to show off their favorite bits of Visual Studio 2010 and .NET 4.0! Richard talks about Web load testing and Carl talks about Silverlight 4.0 and multimedia. And to make the night even more fun, we’re going to bring a mystery rock star from the Visual Studio world to the event and interview them for a special .NET Rocks Road Trip show series. Along the way we’ll be giving away some great prizes, showing off some awesome technology and having a ton of laughs. So come out to the most fun you can have in a geeky evening – and learn a few things along the way about web load testing and Silverlight 4!“   I know I’ll be there so what are you waiting for? Head over to the event registration page and sign up today! Have a day. :-|

    Read the article

  • Trying to scrape a page with php/curl, trouble with cookies, post vars, and hidden fields.

    - by Patrick
    I'm trying to use cURL to scrape the results from the page http://adlab.msn.com/Online-Commercial-Intention/Default.aspx. My understanding is that when I visit this page, it places cookie info and sets a few variables. I enter my query, select the query radio option, and click go. The problem is that it does not work the way it does through the website when I try to reproduce it with the code below. I've tweaked several things, but I'm posting here in hopes someone can find what I'm missing. This is my code as it stands now: include("simple_html_dom.php"); $ckfile = tempnam ("/tmp", "CURLCOOKIE"); $query = $_GET['query']; $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, "http://adlab.msn.com/Online-Commercial-Intention/default.aspx"); curl_setopt($ch, CURLOPT_HEADER, 1); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); curl_setopt($ch, CURLOPT_COOKIEJAR, $ckfile); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.3) Gecko/20070309 Firefox/2.0.0.3"); $pagetext1 = curl_exec($ch); curl_exec($ch); $html2 = str_get_html($pagetext1); $viewstate = $html2->find('input[id=__VIEWSTATE]', 1)->plaintext; echo $query."<br>".$viewstate."<br>"; $params = array( '__EVENTTARGET' => "", '__EVENTARGUMENT' => "", '__LASTFOCUS' => "", '__VIEWSTATE' => "$viewstate", 'MyMaster%3ADemoPageContent%3AtxtQuery' => "$query", 'MyMaster%3ADemoPageContent%3Alan' => "QueryRadio", 'MyMaster%3ADemoPageContent%3AgoButton.x' => "17", 'MyMaster%3ADemoPageContent%3AgoButton.y' => "12", 'MyMaster%3ADemoPageContent%3AidQuery' => "$query", 'MyMaster%3AHiddenKeywordTextBox' => "", ); $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, 'http://adlab.msn.com/Online-Commercial-Intention/default.aspx'); curl_setopt($ch, CURLOPT_POST, 1); curl_setopt($ch, CURLOPT_REFERER, 'http://adlab.msn.com/Online-Commercial-Intention/default.aspx'); curl_setopt($ch, CURLOPT_POSTFIELDS, '$params'); curl_setopt($ch, CURLOPT_HEADER, 1); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); curl_setopt ($ch, CURLOPT_COOKIEFILE, $ckfile); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.3) Gecko/20070309 Firefox/2.0.0.3"); $pagetext = curl_exec($ch); curl_exec($ch); // echo $ckfile; $html = str_get_html($pagetext); $ret = $html->find('.intentionLabel', 1)->plaintext; echo $ret."<br><br><br>"; echo $pagetext;
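
    A hedged sketch rather than a verified fix: two things in the code above look suspect. First, CURLOPT_POSTFIELDS is given the single-quoted string '$params', so curl posts the literal text "$params" instead of the array contents. Second, the field names are pre-URL-encoded ("%3A" instead of ":"), so they get encoded a second time when the body is built. Assuming the form's field names really contain literal colons (worth confirming against the rendered HTML):

      $params = array(
          '__EVENTTARGET'                       => '',
          '__EVENTARGUMENT'                     => '',
          '__LASTFOCUS'                         => '',
          '__VIEWSTATE'                         => $viewstate,
          'MyMaster:DemoPageContent:txtQuery'   => $query,
          'MyMaster:DemoPageContent:lan'        => 'QueryRadio',
          'MyMaster:DemoPageContent:goButton.x' => '17',
          'MyMaster:DemoPageContent:goButton.y' => '12',
          'MyMaster:DemoPageContent:idQuery'    => $query,
          'MyMaster:HiddenKeywordTextBox'       => '',
      );
      // Encode the array once and pass the variable itself, not a quoted variable name.
      curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($params));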

    Read the article

  • After resolving and calling host via ipv6 with curl, next ipv4 request fails and vice versa

    - by Ranty
    I need to request the same host with different methods (using IPv4 and IPv6) from the same script. First request is always successful (no matter IPv4 or IPv6), but the second always fails with curl error 45 (Couldn't bind to IP). I'm using the following curl config: function v4_google() { $ch = curl_init(); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_FORBID_REUSE, 1); curl_setopt($ch, CURLOPT_FRESH_CONNECT, 1); curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 5.1; rv:15.0) Gecko/20100101 Firefox/15.0.1'); curl_setopt($ch, CURLOPT_URL, 'http://www.google.com/search?q=Moon'); curl_setopt($ch, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4); curl_setopt($ch, CURLOPT_INTERFACE, 'My IPv4 address'); $c = curl_exec($ch); $result = !curl_errno($ch) ? 'Success' : '<b>' . curl_error($ch) . '; #' . curl_errno($ch) . '</b>'; echo '<p>v4_google:<br>Response length: ' . mb_strlen($c) . '<br>Result: ' . $result . '</p>'; curl_close($ch); } function v6_google() { $ch = curl_init(); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_FORBID_REUSE, 1); curl_setopt($ch, CURLOPT_FRESH_CONNECT, 1); curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 5.1; rv:15.0) Gecko/20100101 Firefox/15.0.1'); curl_setopt($ch, CURLOPT_URL, 'http://www.google.com/search?q=Moon'); curl_setopt($ch, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V6); curl_setopt($ch, CURLOPT_INTERFACE, 'My IPv6 address'); $c = curl_exec($ch); $result = !curl_errno($ch) ? 'Success' : '<b>' . curl_error($ch) . '; #' . curl_errno($ch) . '</b>'; echo '<p>v6_google:<br>Response length: ' . mb_strlen($c) . '<br>Result: ' . $result . '</p>'; curl_close($ch); } v6_google(); v4_google(); So long story short, if I query v6_google(); first, then all consecutive calls of v4_google(); are failing with the curl error 45 (Couldn't bind to IP). And vice versa. As you can see, I separated code into different functions and added CURLOPT_FORBID_REUSE and CURLOPT_FRESH_CONNECT plus curl_close($ch), but it didn't help at all. It looks like curl is caching the resolving method of every host you request, and even if you specify another resolving method the next time you call that host, the cached one is used instead. I would appreciate any help with this issue.
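
    A hedged guess at a workaround rather than a confirmed fix: if the failure comes from a cached DNS resolution being reused for the wrong address family, telling curl not to cache lookups forces a fresh resolution on every call. Both options below are standard libcurl/PHP constants; they would sit next to the other curl_setopt() calls in each function.

      // Assumption: stale DNS caching is what trips up the second request.
      curl_setopt($ch, CURLOPT_DNS_USE_GLOBAL_CACHE, false); // no process-wide DNS cache
      curl_setopt($ch, CURLOPT_DNS_CACHE_TIMEOUT, 0);        // do not keep per-handle lookups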

    Read the article

  • Why doesn't Firefox cache my images and CSS

    - by Richard A
    I am using IIS7 and have already set up the following, but when I run Firefox it does not seem to cache any of my images, even with "remember history" set. <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.webServer> <staticContent> <clientCache cacheControlCustom="public" cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" /> </staticContent> </system.webServer> </configuration> However, when I use Firebug it still shows Firefox not caching images and CSS: Cache-Control public,max-age=604800 Content-Type text/css Content-Encoding gzip Last-Modified Mon, 27 Jun 2011 03:53:22 GMT Accept-Ranges bytes Etag "507968c27d34cc1:0" Vary Accept-Encoding Server Microsoft-IIS/7.5 X-Powered-By ASP.NET Date Mon, 27 Jun 2011 13:06:41 GMT Content-Length 5067 Request Headers Host www.xx.com User-Agent Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1 Accept text/css,*/*;q=0.1 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip, deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive Referer http://www.xx.com/ Cookie __utma=62996397.135679654.1309106351.1309159743.1309164158.8; __utmz=62996397.1309106351.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utmc=62996397

    Read the article

  • Congratulations to 2012 Innovation Award winners in BPM category

    - by Manoj Das
    Last year many of our customers went live on BPM 11g. It is my extreme pleasure to congratulate two of them – Amadeus and Navistar – for being awarded Oracle Fusion Middleware Innovation Award at Oracle OpenWorld 2012. We invited our customers to submit their most innovative BPM implementations that have delivered substantiated value to them. This year we saw more than 20 submissions from our customers seeing significant business value from their live BPM 11g deployments. The submissions came from across the world, spanning various industry verticals including manufacturing, healthcare, logistics, Hi-Tech, Public Sector, Education and covering many process usage patterns. Award submissions were evaluated based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. Amadeus Team Receiving Innovation Award from Hasan Rizvi Congratulations to Amadeus and Navistar and their teams on being recognized from among some very strong submissions and more importantly for the business value delivered. It is an honor to be part of your success and to play a small role in the innovation you drive. Navistar is a leading truck manufacturing company which produces International® brand commercial and military trucks, MaxxForce® brand diesel engines, IC Bus™ brand school and commercial buses, and Navistar RV brands of recreational vehicles. The company also provides truck and diesel engine service parts. Amadeus is a leading transaction processor for the global travel and tourism industry, providing transaction processing power and technology solutions to both travellers and travel providers. Both Navistar and Amadeus have leveraged Oracle BPM Suite to improve visibility into their business and made their business more agile and efficient. We congratulate them again and wish them continued success in their business and future BPM initiatives.

    Read the article

  • Possible iPhone animation timing/rendering bug?

    - by David
    Hi all, I have been working on an iPhone app for several weeks. Now I have encountered an animation problem that I can't figure out how to resolve. Maybe you can help. Here are the details (a little long, bear with me): Basically, the effect I want to achieve is: when the user clicks a button, a loading view pops up, hiding the whole screen; then the app does a lot of heavy computation, which takes a few seconds. Once the computation is done, some result views (something like checkers on a checkerboard) are rendered under the loading view. Once all result views are rendered, I use an animation to remove the loading view and show the result views to the user. Here is what I do: when the user clicks a button, run this code: [UIView beginAnimations:nil context:nil]; [UIView setAnimationDuration:1.0]; [UIView setAnimationBeginsFromCurrentState:YES]; [UIView setAnimationTransition:UIViewAnimationTransitionCurlDown forView:self.view cache:YES]; [UIView setAnimationDelegate:self]; [UIView setAnimationDidStopSelector:@selector(loadingViewInserted:finished:context:)]; // use a really high index number so it will always be on top [self.view insertSubview:loadingViewController.view atIndex:1000]; [UIView commitAnimations]; In the "loadingViewInserted" function, another function is called to do the heavy computation work. Once the computation is done, a lot of result views (like checkers on a checkerboard) are rendered under the loading view. for(int colIndex = 1; colIndex <= result.columns; colIndex++) { for(int rowIndex = 1; rowIndex <= result.rows; rowIndex++) { ResultView *rv = [ResultView resultViewWithData:results[colIndex][rowIndex]]; [self.view addSubview:rv]; } } Once all result views are added, the following animation is invoked to remove the loading view: [UIView beginAnimations:nil context:nil]; [UIView setAnimationDuration:1.0]; [UIView setAnimationBeginsFromCurrentState:YES]; [UIView setAnimationTransition:UIViewAnimationTransitionCurlUp forView:self.view cache:YES]; [loadingViewController.view removeFromSuperview]; [UIView commitAnimations]; By doing this, most of the time (maybe 90%) it does exactly what I want. However, sometimes I see a weird result: the loading view shows up first as expected; then, before it disappears, some result views, which are supposed to be under the loading view, suddenly appear on top of the loading view, and some of them are partially rendered. Then the loading view curls up and everything looks normal again. The weird situation lasts for less than a second, but that is already bad enough to screw up the UI. I have tried all kinds of things to fix this (using another thread to remove the loading view, making the loading view non-transparent), but none of them works. The only thing that makes it a little better is to hide all the result views first and, after the last animation finishes, unhide all result views in its callback. But this loses the nice effect that when the loading view curls up, the results are already there. At this point, I really think this is a bug in the iPhone OS (I compile it with OS 3.0). Or maybe you can point out what I have done wrong (or could do differently). (Thanks for finishing this long post, :-) )

    Read the article

  • Big Data: Size isn’t everything

    - by Simon Elliston Ball
    Big Data has a big problem; it’s the word “Big”. These days, a quick Google search will uncover terabytes of negative opinion about the futility of relying on huge volumes of data to produce magical, meaningful insight. There are also many clichéd but correct assertions about the difficulties of correlation versus causation, in massive data sets. In reading some of these pieces, I begin to understand how climatologists must feel when people complain ironically about “global warming” during snowfall. Big Data has a name problem. There is a lot more to it than size. Shape, Speed, and…err…Veracity are also key elements (now I understand why Gartner and the gang went with V’s instead of S’s). The need to handle data of different shapes (Variety) is not new. Data developers have always had to mold strange-shaped data into our reporting systems, integrating with semi-structured sources, and even straying into full-text searching. However, what we lacked was an easy way to add semi-structured and unstructured data to our arsenal. New “Big Data” tools such as MongoDB, and other NoSQL (Not Only SQL) databases, or a graph database like Neo4J, fill this gap. Still, to many, they simply introduce noise to the clean signal that is their sensibly normalized data structures. What about speed (Velocity)? It’s not just high frequency trading that generates data faster than a single system can handle. Many other applications need to make trade-offs that traditional databases won’t, in order to cope with high data insert speeds, or to extract quickly the required information from data streams. Unfortunately, many people equate Big Data with the Hadoop platform, whose batch driven queries and job processing queues have little to do with “velocity”. StreamInsight, Esper and Tibco BusinessEvents are examples of Big Data tools designed to handle high-velocity data streams. Again, the name doesn’t do the discipline of Big Data any favors. Ultimately, though, does analyzing fast moving data produce insights as useful as the ones we get through a more considered approach, enabled by traditional BI? Finally, we have Veracity and Value. In many ways, these additions to the classic Volume, Velocity and Variety trio acknowledge the criticism that without high-quality data and genuinely valuable outputs then data, big or otherwise, is worthless. As a discipline, Big Data has recognized this, and data quality and cleaning tools are starting to appear to support it. Rather than simply decrying the irrelevance of Volume, we need as a profession to focus how to improve Veracity and Value. Perhaps we should just declare the ‘Big’ silent, embrace these new data tools and help develop better practices for their use, just as we did the good old RDBMS? What does Big Data mean to you? Which V gives your business the most pain, or the most value? Do you see these new tools as a useful addition to the BI toolbox, or are they just enabling a dangerous trend to find ghosts in the noise?

    Read the article

  • Oracle Tutor: XPDL conversion (and why you should care)

    - by mary.keane
    You may have noticed that the Oracle Business Process Converter feature in Tutor 14 supports "XPDL" conversion to Oracle Business Process Analysis Suite (BPA), Oracle Business Process Management Suite (BPM), and Oracle Tutor, and you may have briefly wondered "what is XPDL?" before you moved on to the Visio import feature (a very popular feature in Tutor 14). This posting is for those who do not yet understand (or care) about XPDL and process modeling. Many of us (and I'm including myself) have spent years working in the process definition arena: we've written procedures, designed systems and software to help others write procedures, and have been responsible for embedding policies and procedures into training material for employees. We've worked with tools such as Oracle Tutor, Microsoft Visio, Microsoft Word, and UPK. Most of us have never worked with "modeling tools" before, and we certainly never had to understand BPMN. It's a brave new world in this arena, and companies desperately need people with policy and procedural system expertise to be able to work with system analysts so there is a seamless transfer of knowledge from IT to employees. When working with applications, a picture is worth a thousand words, so eventually you're going to need to understand and be able to work with business process models. XPDL is an acronym for XML Process Definition Language, and it is an interchange format for business process models. It allows you to take a BPMN model that was developed in one workflow application such as BizAgi and import it into another workflow application or a true BPMN management system such as Oracle BPM. Specifically, the XPDL format contains the graphical information of a model as well as any executable information. By using a common format, models can be moved from a basic modeling application used by business owners to applications used by system architects. Over 80 applications support the XPDL format, including MetaStorm ProVision, BEA ALBPM, BizAgi, and Tibco. I mention these applications because we have provided XSLT mapping files specifically for these vendors. Oracle Business Process Converter was designed with user extensibility in mind, and thus users can add their own XML files so that additional XPDL models from other vendors can be converted to BPM, BPA, and Oracle Tutor. Instructions on how to add your own files can be found in Appendix 4 of the Oracle Business Converter manual. Let's take a visual look at how this works. Here is an example of a model developed in BizAgi: This model can be created by the average business user without a large learning curve, and it's a good start for the system analyst who will be adding web services as well as for the business manager who manages the process described in the model. By exporting this model as XPDL, the information can be converted into Oracle BPA and Oracle BPM as well as converted to Oracle Tutor to become the framework for a procedure. Through this conversion feature, one graphic illustration of a business process can be used by a system analyst, business analyst, business manager, and employee, as seen below. Model Converted to Tutor Procedure Below is the task section of the procedure after conversion from an XPDL file. Model converted to BPA Model converted to BPM End users still want step by step instructions on how to perform their jobs, so procedures (Oracle Tutor) and application simulations (UPK) are still a critical piece of the solution.
But IT professionals need graphic descriptions of how the applications work, regardless of whether there are any tasks involving humans. Now there is a way to convert procedures (Oracle Tutor docx files) and basic models (XPDL files) so that business managers and system analysts can share process information. References Wikipedia XPDL. Workflow Management Coalition, XPDL Support and Resources Oracle Business Process Converter manual, Oracle Tutor 14 Oracle Business Process Management 11g If you have any XPDL conversion stories to share, we'd love to hear from you. Best wishes for the coming new year, Mary Keane, Senior Development Manager, Oracle Tutor and BPM

    Read the article

  • Weird "406 not acceptable" error

    - by Horace Loeb
    When I try to hit this action via Javascript, I get a 406 Not Acceptable error: def show @annotation = Annotation.find_by_id(params[:id]) respond_to do |format| format.html { if @annotation.blank? redirect_to root_path else redirect_to inline_annotation_path(@annotation) end } format.js { if params[:format] == "raw" render :text => @annotation.body.to_s else render :text => @annotation.body.to_html end } end end This is from jQuery, but I'm doing the right beforeSend stuff: $.ajaxSetup({ beforeSend: function(xhr) { xhr.setRequestHeader("Accept", "text/javascript"); }, cache: false }); Here are my request headers: Host localhost:3000 User-Agent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 Accept text/javascript Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 300 Connection keep-alive X-Requested-With XMLHttpRequest Content-Type application/x-www-form-urlencoded

    Read the article

  • DefaultIfEmpty seems to work in LINQ to Entities

    - by Rand
    I'm new to LINQ and LINQ to Entities, so I might have gone wrong in my assumptions, but I've been unknowingly trying to use DefaultIfEmpty in L2E. For some reason, if I turn the result set into a List, the DefaultIfEmpty() works; I don't know if I've inadvertently crossed over into LINQ to Objects territory. The code below works; can anyone tell me why? And if it does work, great, then it'll be of help to other people. var results = (from u in rv.tbl_user .Include("tbl_pics") .Include("tbl_area") .Include("tbl_province") .ToList() where u.tbl_province.idtbl_Province == prov select new { u.firstName, u.cellNumber, u.tbl_area.Area, u.ID,u.tbl_province.Province_desc, pic = (from p3 in u.tbl_pics where p3.tbl_user.ID == u.ID select p3.pic_path).DefaultIfEmpty("defaultpic.jpg").First() }).ToList();

    Read the article

  • Browser Detection Python / mod_python?

    - by cka
    I want to keep some statistics about users and locations in a database. For instance, I would like to store "Mozilla", "Firefox", "Safari", "Chrome", "IE", etc., as well as the versions, and possibly the operating system. What I am trying to locate from Python is this string: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.14) Gecko/2009090216 Ubuntu/9.04 (jaunty) Firefox/3.0.14 Is there an efficient way to use Python or mod_python to detect the HTTP user agent/browser?

    Read the article

  • Setting user agent of a java URLConnection

    - by diglettpotato
    I'm trying to parse a webpage using Java with URLConnection. I try to set up the user-agent like this: java.net.URLConnection c = url.openConnection(); c.setRequestProperty("User-Agent", "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.4; en-US; rv:1.9.2.2) Gecko/20100316 Firefox/3.6.2"); But the resulting user agent is the one I specify, with "Java/1.5.0_19" appended to the end. Is there a way to truly set the user agent without this addition?

    Read the article

  • C# - Determine if class initialization causes infinite recursion?

    - by John M
    I am working on porting a VB6 application to C# (WinForms 3.5), and while doing so I'm trying to break the functionality up into various classes (i.e. database class, data validation class, string manipulation class). Right now, when I attempt to run the program in Debug mode, the program pauses and then crashes with a StackOverflowException. VS 2008 suggests infinite recursion as the cause. I have been trying to trace what might be causing this recursion, and right now my only hypothesis is the class initializations (which I do in the header(?) of each class). My thought is this: mainForm initializes classA, classA initializes classB, classB initializes classA .... Does this make sense or should I be looking elsewhere? UPDATE1 (a code sample): mainForm namespace john { public partial class frmLogin : Form { stringCustom sc = new sc(); stringCustom namespace john { class stringCustom { retrieveValues rv = new retrieveValues(); retrieveValues namespace john { class retrieveValues { stringCustom sc = new stringCustom();

    Read the article

  • Curl redirect not working?

    - by Sushant Panigrahi
    I'm using the following code: $agent= 'Mozilla/5.0 (Windows; U; Windows NT 5.1; pl; rv:1.9) Gecko/2008052906 Firefox/3.0'; $ch = curl_init(); curl_setopt($ch, CURLOPT_USERAGENT, $agent); curl_setopt($ch, CURLOPT_URL, "www.example.com"); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_HEADER, 0); $output = curl_exec($ch); echo $output; But it redirects to http://localhost/aide.do?sht=_aide_cookies_ instead of to the requested page. Can anyone help me solve my problem, please?
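
    A sketch under one assumption: the redirect to aide.do?sht=_aide_cookies_ looks like the site bouncing clients that do not return its cookies. Giving curl a cookie jar and letting it follow the redirect chain is a common way past that kind of check; the temp-file path below is illustrative.

      $agent  = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; pl; rv:1.9) Gecko/2008052906 Firefox/3.0';
      $ckfile = tempnam(sys_get_temp_dir(), 'COOKIE'); // illustrative cookie-jar location
      $ch = curl_init();
      curl_setopt($ch, CURLOPT_USERAGENT, $agent);
      curl_setopt($ch, CURLOPT_URL, 'http://www.example.com');
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
      curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);   // follow the redirect chain
      curl_setopt($ch, CURLOPT_COOKIEJAR, $ckfile);  // save cookies set along the way
      curl_setopt($ch, CURLOPT_COOKIEFILE, $ckfile); // send them back on each hop
      $output = curl_exec($ch);
      curl_close($ch);
      echo $output;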

    Read the article

  • Nginx is sending proxy-saved content in gzip format

    - by Sandeep Manne
    Hi, I used the config given in http://www.webtatic.com/blog/2008/04/page-level-caching-with-nginx/ for page-level caching of PHP content. The problem is that the cached page is saved in gzip format and the same gzip content is returned to the browser. I need the output to look like this: "12:15:37 12:15:47" (that is what comes the first time, when the page is not yet cached); after that, if the request is resent, it returns ‹??????34²26±24à23Œ¸¸?`Î9”??? (a gzip response; when I ran it through zcat it decompressed fine). Response Headers Server nginx/0.8.34 Date Wed, 17 Mar 2010 07:04:58 GMT Content-Type text/html Last-Modified Wed, 17 Mar 2010 07:04:20 GMT Transfer-Encoding chunked Connection keep-alive Vary Accept-Encoding Expires Wed, 17 Mar 2010 07:04:58 GMT Cache-Control max-age=0 Content-Encoding gzip Request Headers Host localhost User-Agent Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.18) Gecko/2010021501 Ubuntu/9.04 (jaunty) Firefox/3.0.18 GTB6 Accept text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 300 Connection keep-alive

    Read the article

  • Regex browser version match

    - by Matic
    I have a string: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2 (.NET CLR 3.5.30729) I want to know what version of Firefox is in the string (3.5.2). My current regex is Firefox\/[0-9]\.[0-9]\.[0-9] and it returns Firefox/3.5.2. I only want it to return 3.5.2, the Firefox version, not the other versions in the string. I already know the browser is Firefox.
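
    The excerpt does not say which language is being used, so as an illustration in PHP: capturing the version digits in a group returns just "3.5.2" while still anchoring on the "Firefox/" prefix, and the looser quantifier also survives two-part versions such as 3.6.

      $ua = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2 (.NET CLR 3.5.30729)';
      if (preg_match('/Firefox\/([0-9]+(?:\.[0-9]+)*)/', $ua, $m)) {
          echo $m[1]; // prints "3.5.2"; $m[0] would still be the full "Firefox/3.5.2"
      }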

    Read the article

  • PHP File Serving Script: Unreliable Downloads?

    - by JGB146
    This post started as a question on ServerFault ( http://serverfault.com/questions/131156/user-receiving-partial-downloads ) but I determined that our php script was the culprit. So I'm issuing an updated question here about what I believe is the actual issue. I am using a php script to verify permissions and then serve up a file for users of my website to download. Most of the time, this works, but recently one user has been seeing problems with larger downloads. He is only getting ~80% of downloads for files that are 100MB in size. Also, all downloads from this script fail to report a filesize. Further, tests revealed that the same user COULD reliably download each of the failed files if given a direct link (at which point the filesize is reported). Here's the relevant snippet of code that we are using to serve the file: header("Content-type:$contenttype"); $len = filesize($filename); header("Content-Length: $len"); header("Content-Disposition: attachment; filename=".$title.".".$ext); readfile($filename); Note that $contenttype, $filename, $title, and $ext are all set correctly before we get here. These have been triple-checked. None of them are the problem. Also, $len does provide the correct filesize. While researching this issue, I came across this post: http://stackoverflow.com/questions/1334471/content-length-header-always-zero It seems that I am encountering the same issue. When I use the script, I get chunked encoding on the file and no size is set for content-length. I'm hypothesizing that something is going wrong on the large downloads, leading him to get a zero-length chunk before the end of the file. Here's what the headers look like for a direct request: http://www.grinderschool.com/videos/zfff5061b65ae00e8b21/KillsAids021.wmv GET /videos/zfff5061b65ae00e8b21/KillsAids021.wmv HTTP/1.1 Host: www.grinderschool.com User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: http://www.grinderschool.com/phpBB3/viewtopic.php?f=14&p=29468 Cookie: style_cookie=printonly; phpbb3_7c544_u=2; phpbb3_7c544_k=44b832912e5f887d; phpbb3_7c544_sid=e8852df42e08cc1b2250300c2897f78f; __utma=174624884.2719561324781918700.1251850714.1270986325.1270989003.575; __utmz=174624884.1264524375.411.12.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=low%20stakes%20poker%20videos; phpbb3_cmviy_k=; phpbb3_cmviy_u=2; phpbb3_cmviy_sid=d8df5c0943863004ca40ef9c392d371d; __utmb=174624884.4.10.1270989003; __utmc=174624884 Pragma: no-cache Cache-Control: no-cache HTTP/1.1 200 OK Date: Sun, 11 Apr 2010 12:57:41 GMT Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_auth_passthrough/2.1 FrontPage/5.0.2.2635 Last-Modified: Sun, 04 Apr 2010 12:51:06 GMT Etag: "eb42d6-7d9b843-48368aa6dc280" Accept-Ranges: bytes Content-Length: 131708995 Keep-Alive: timeout=10, max=30 Connection: Keep-Alive Content-Type: video/x-ms-wmv And here's what they look like for the request answered by my script: http://www.grinderschool.com/download_video_test.php?t=KillsAids021&format=wmv GET /download_video_test.php?t=KillsAids021&format=wmv HTTP/1.1 Host: www.grinderschool.com User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729) Accept: 
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Cookie: style_cookie=printonly; phpbb3_7c544_u=2; phpbb3_7c544_k=44b832912e5f887d; phpbb3_7c544_sid=e8852df42e08cc1b2250300c2897f78f; __utma=174624884.2719561324781918700.1251850714.1270986325.1270989003.575; __utmz=174624884.1264524375.411.12.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=low%20stakes%20poker%20videos; phpbb3_cmviy_k=; phpbb3_cmviy_u=2; phpbb3_cmviy_sid=d8df5c0943863004ca40ef9c392d371d; __utmb=174624884.4.10.1270989003; __utmc=174624884 HTTP/1.1 200 OK Date: Sun, 11 Apr 2010 12:58:02 GMT Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_auth_passthrough/2.1 FrontPage/5.0.2.2635 X-Powered-By: PHP/5.2.11 Content-Disposition: attachment; filename=KillsAids021.wmv Vary: Accept-Encoding Content-Encoding: gzip Keep-Alive: timeout=10, max=30 Connection: Keep-Alive Transfer-Encoding: chunked Content-Type: video/x-ms-wmv So the question is...what can I do to make downloads from the script work properly? Again, for 99% of users, it works as is (though I find it annoying now that no filesize is reported and thus that no time estimate can be computed about the download).
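
    One commonly suggested remedy, offered here as a hedged sketch rather than a confirmed fix for this site: the script's response arrives gzipped and chunked, which is what happens when output compression or buffering is layered on top of readfile(), and it is exactly what hides the Content-Length header. Switching compression and buffering off before emitting the headers lets the declared length survive ($contenttype, $filename, $title and $ext as in the original snippet):

      if (function_exists('apache_setenv')) {
          @apache_setenv('no-gzip', '1');         // ask mod_deflate to skip this response
      }
      @ini_set('zlib.output_compression', 'Off'); // no PHP-level gzip either
      while (ob_get_level() > 0) {
          ob_end_clean();                         // discard any active output buffers
      }
      header("Content-type: $contenttype");
      header('Content-Length: ' . filesize($filename));
      header('Content-Disposition: attachment; filename="' . $title . '.' . $ext . '"');
      readfile($filename);
      exit;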

    Read the article

  • A problem with socket programming in Perl

    - by isu
    I wrote this code: #!/usr/local/bin/perl use strict; use LWP::UserAgent; my $ua = new LWP::UserAgent(agent => 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.5) Gecko/20060719 Firefox/1.5.0.5'); $ua->proxy([qw(http https)] => 'http://203.185.28.228:1080' #that is just socks:port); my $response = $ua->get("http://www.google.com"); print $response->code,' ', $response->message,"\n"; but when I execute it I get this error: 500 Can't connect to 203.185.28.228:1080 (connect: timeout) What am I supposed to do?

    Read the article

  • Why does Java force user-agent through simple Socket IO?

    - by Zombies
    I am using nothing but raw socket I/O. There isn't one HttpURLConnection nor any HTTP client libs in my project. When I run it through Wireshark I see something very revealing: GET / HTTP/1.1 User-Agent: Java/1.6.0_15 Host: www.google.com Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2 Connection: keep-alive Here is the crazy part: I never put ANY of that in my original request. My original request was: "GET http://www.google.com/ HTTP/1.1\r\n" + "Host: www.google.com\r\n" + "User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.8) Gecko/20100214 Ubuntu/9.10 (karmic) Firefox/3.5.8\r\n" + "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n" + "Accept-Language: en-us,en;q=0.5\r\n" + "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\n" + "Keep-Alive: 300\r\n" + "\r\n"; I am using the default Sun JVM.

    Read the article

  • What is the meaning of this code?

    - by spific
    I have this code: use strict; use LWP::UserAgent; use warnings; my $ua = new LWP::UserAgent(agent => 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.5) Gecko/20060719 Firefox/1.5.0.5'); $ua->proxy([qw(http https)] => 'http://59.39.92.148:1080'); my $response = $ua->get("http://www.google.com"); print $response->code,' ', $response->message,"\n"; Does this code mean "open www.google.com with a SOCKS proxy"? Please explain it for me.

    Read the article

  • How to click on a button in Python

    - by Ciobanu Alexandru
    I'm trying to build a bot for clicking the "skip-ad" button on a page. So far I have managed to use Mechanize to load a web-driver browser and to connect to the page, but the Mechanize module does not support JS directly, so now I need something like Selenium, if I understand correctly. I am also a beginner in programming, so please be specific. How can I use Selenium, or if there is any other solution, please explain the details. This is the inner HTML code for the button: <a id="skip-ad" class="btn btn-inverse" onclick="open_url('http://imgur.com/gallery/tDK9V68', 'go'); return false;" style="font-weight: bold; " target="_blank" href="http://imgur.com/gallery/tDK9V68"> … </a> And this is my source so far: #!/usr/bin/python # FILENAME: test.py import mechanize import os, time from random import choice, randrange prox_list = [] #list of common UAS to apply to each connection attempt to impersonate browsers user_agent_strings = [ 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1468.0 Safari/537.36', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1', 'Opera/9.80 (Windows NT 6.0) Presto/2.12.388 Version/12.14', 'Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52', 'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:23.0) Gecko/20131011 Firefox/23.0', 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; Media Center PC 6.0; InfoPath.3; MS-RTC LM 8; Zune 4.7', 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; Zune 4.0; Tablet PC 2.0; InfoPath.3; .NET4.0C; .NET4.0E)', 'Mozilla/5.0 (compatible; MSIE 10.6; Windows NT 6.1; Trident/5.0; InfoPath.2; SLCC1; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET CLR 2.0.50727) 3gpp-gba UNTRUSTED/1.0', 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0; chromeframe/11.0.696.57)', 'Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; InfoPath.1; SV1; .NET CLR 3.8.36217; WOW64; en-US)', 'Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; GTB7.4; InfoPath.2; SV1; .NET CLR 3.3.69573; WOW64; en-US)' ] def load_proxy_list(target): #loads and parses the proxy list file = open(target, 'r') count = 0 for line in file: prox_list.append(line) count += 1 print "Loaded " + str(count) + " proxies!" load_proxy_list('proxies.txt') #for i in range(1,(len(prox_list) - 1)): # depreceated for overloading for i in range(1,30): br = mechanize.Browser() #pick a random UAS to add some extra cover to the bot br.addheaders = [('User-agent', choice(user_agent_strings))] print "----------------------------------------------------" #This is bad internet ethics br.set_handle_robots(False) #choose a proxy proxy = choice(prox_list) br.set_proxies({"http": proxy}) br.set_debug_http(True) try: print "Trying connection with: " + str(proxy) #currently using: BTC CoinURL - Grooveshark Broadcast br.open("http://cur.lv/4czwj") print "Opened successfully!" #act like a nice little drone and view the ads sleep_time_on_link = randrange(17.0,34.0) time.sleep(sleep_time_on_link) except mechanize.HTTPError, e: print "Oops Request threw " + str(e.code) #future versions will handle codes properly, 404 most likely means # the ad-linker has noticed bot-traffic and removed the link # or the used proxy is terrible.
We will either geo-locate # proxies beforehand and pick good hosts, or ditch the link # which is worse case scenario, account is closed because of botting except mechanize.URLError, e: print "Oops! Request was refused, blacklisting proxy!" + str(e) prox_list.remove(proxy) del br #close browser entirely #wait between 5-30 seconds like a good little human sleep_time = randrange(5.0, 30.0) print "Waiting for %.1f seconds like a good bot." % (sleep_time) time.sleep(sleep_time)

    Read the article

  • Why can't I submit the form via PHP's cURL?

    - by home
    $ua_s = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en; rv:1.8.1.14) Gecko/20080404 Firefox/2.0.0.14'; $c = curl_init($the_url); curl_setopt($c, CURLOPT_RETURNTRANSFER, true); curl_setopt($c, CURLOPT_FOLLOWLOCATION, 1); curl_setopt($c, CURLOPT_USERAGENT, $ua_s); curl_setopt($c, CURLOPT_POST, true); curl_setopt($c, CURLOPT_POSTFIELDS, $post_string); $cont = curl_exec($c); curl_close($c); I send all the needed fields but it fails to submit properly. I wrote an HTML form to test it, and all is well when it is submitted in a browser.
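
    The excerpt never shows how $post_string is assembled, and hand-built bodies with unencoded values are a frequent cause of silently rejected form posts. A minimal sketch with hypothetical field names (replace them with whatever the real form actually defines), letting http_build_query() do the encoding:

      $fields = array(
          'name'    => 'someone',            // hypothetical field names for illustration
          'comment' => 'text & symbols work' // values get URL-encoded automatically
      );
      $post_string = http_build_query($fields);
      curl_setopt($c, CURLOPT_POSTFIELDS, $post_string);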

    Read the article

  • Which video codecs have the most content and are thus most popular at present/in the future?

    - by goldenmean
    Hi, I want to find out if I can get some data on the percentage-wise distribution of video content across the different video codecs currently used for video encoding. I know there are different applications/use-case scenarios that use different encoders, but I want to consider all of that and arrive at an overall usage number (%). My guess (highest to lowest % of content) is: H.264 (AVC), DivX, MPEG2, VP6. Where do H.263, MPEG4, VC-1, RV, Theora, etc. fit in here? How might this look in the future? PS: I would like this to be community wiki to get a wider range of inputs; if someone with privileges can do it for me, please do. Thank you. -AD

    Read the article

< Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >