Search Results

Search found 6834 results on 274 pages for 'dojo require'.

Page 115/274

  • Can I get the original page source (vs current DOM) with phantomjs/casperjs?

    - by supercoco
    I am trying to get the original source for a particular web page. The page executes some scripts that modify the DOM as soon as it loads. I would like to get the source before any script or user changes any object in the document. With Chrome or Firefox (and probably most browsers) I can either look at the DOM (debug utility F12) or look at the original source (right-click, view source). The latter is what I want to accomplish. Is it possible to do this with phantomjs/casperjs? Before getting to the page I have to log in. This is working fine with casperjs. If I browse to the page and render the results I know I am on the right page. casper.thenOpen('http://'+customUrl, function(response) { this.page.render('example.png'); // *** Renders correct page (current DOM) *** console.log(this.page.content); // *** Gets current DOM *** casper.download('view-source:'+customUrl, 'b.html', 'GET'); // *** Blank page *** console.log(this.getHTML()); // *** Gets current DOM *** this.debugPage(); // *** Gets current DOM *** utils.dump(response); // *** No BODY *** casper.download('http://'+customUrl, 'a.html', 'GET'); // *** Not logged in ?! *** }); I've tried this.download(url, 'a.html') but it doesn't seem to share the same context since it returns HTML as if I was not logged in, even if I run with cookies casperjs test.casper.js --cookies-file=cookies.txt. I believe I should keep analyzing this option. I have also tried casper.open('view-source:url') instead of casper.open('http://url') but it seems it doesn't recognize the url since I just get a blank page. I have looked at the raw HTTP Response I get from the server with a utility I have and the body of this message (which is HTML) is what I need but when the page loads in the browser the DOM has already been modified. I tried: casper.thenOpen('http://'+url, function(response) { ... } But the response object only contains the headers and some other information but not the body. Any ideas? Here is the full code: phantom.casperTest = true; phantom.cookiesEnabled = true; var utils = require('utils'); var casper = require('casper').create({ clientScripts: [], pageSettings: { loadImages: false, loadPlugins: false, javascriptEnabled: true, webSecurityEnabled: false }, logLevel: "error", verbose: true }); casper.userAgent('Mozilla/5.0 (Macintosh; Intel Mac OS X)'); casper.start('http://www.xxxxxxx.xxx/login'); casper.waitForSelector('input#login', function() { this.evaluate(function(customLogin, customPassword) { document.getElementById("login").value = customLogin; document.getElementById("password").value = customPassword; document.getElementById("button").click(); }, { "customLogin": customLogin, "customPassword": customPassword }); }, function() { console.log('Can\'t login.'); }, 15000 ); casper.waitForSelector('div#home', function() { console.log('Login successful.'); }, function() { console.log('Login failed.'); }, 15000 ); casper.thenOpen('http://'+customUrl, function(response) { this.page.render('example.png'); // *** Renders correct page (current DOM) *** console.log(this.page.content); // *** Gets current DOM *** casper.download('view-source:'+customUrl, 'b.html', 'GET'); // *** Blank page *** console.log(this.getHTML()); // *** Gets current DOM *** this.debugPage(); // *** Gets current DOM *** utils.dump(response); // *** No BODY *** casper.download('http://'+customUrl, 'a.html', 'GET'); // *** Not logged in ?! *** });
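
    A hedged sketch of one way to grab the untouched markup: because the page context shares the logged-in session's cookies, re-requesting the URL from inside evaluate() with CasperJS's __utils__.sendAJAX helper returns the server's HTML before any script has run on it. Note this is a second request rather than the exact bytes of the first response, and the output file name below is made up for illustration.

        casper.thenOpen('http://'+customUrl, function() {
            // Hedged sketch: re-fetch the raw markup from inside the page context,
            // which shares the logged-in cookies. __utils__.sendAJAX is CasperJS's
            // injected client-side helper; 'original.html' is a made-up file name.
            var rawHtml = this.evaluate(function(url) {
                return __utils__.sendAJAX(url, 'GET', null, false); // synchronous GET
            }, { "url": 'http://'+customUrl });
            require('fs').write('original.html', rawHtml, 'w');
        });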

    Read the article

  • Why Do Standards Place Limits on Data Transfer Rates?

    - by Mehrdad
    This is a rather general question about hardware and standards: Why do they place limits on data transfer rates, and disallow manufacturers from exceeding those rates? (E.g. 54 Mbit/s for Wireless G, 150 Mbit/s for Wireless N, ...) Why not allow for some sort of handshake protocol, whereby the devices being connected agree upon the maximum throughput that they each support, and use that value instead? Why does there have to be a hard-coded limit, which would require a new standard for every single improvement to a data rate?

    Read the article

  • Forwarding 80 to 443 on Nagios woes

    - by Ethabelle
    I perhaps just need some extra insight because I don't see where I'm going wrong. I used an SSL cert to secure our Nagios server. We want to specifically require all traffic over Nagios (like 2 users, lol) to use SSL. So I thought, oh, mod_rewrite + RewriteRule in .htaccess, right? So I went into the DocumentRoot and did a vi .htaccess (one didn't already exist) and then I put in the following rule: RewriteEngine On RewriteCond %{SERVER_PORT} 80 RewriteRule ^(.*)$ https://our.server.org/$1 [R,L] This does absolutely nothing. Does nada. Why? Note: AllowOverride All in httpd.conf is on. Also, I verified that the LoadModule line for mod_rewrite is not commented out... but note, I couldn't find the mod_rewrite module installed, so I copied it over from another server and placed it in modules/mod_rewrite.so. It was weird because it was enabled in the httpd.conf file, but then didn't exist in modules... I'm a baddie :(
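
    If the .htaccess route keeps misbehaving, one hedged alternative (assuming you can edit the main Apache config) is to skip mod_rewrite entirely and do the redirect in the port-80 virtual host with mod_alias, which ships compiled in:

        # Hedged sketch - redirect everything on port 80 to HTTPS without mod_rewrite.
        # ServerName is taken from the rule in the question.
        <VirtualHost *:80>
            ServerName our.server.org
            Redirect permanent / https://our.server.org/
        </VirtualHost>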

    Read the article

  • Configuring htaccess to show authentication prompt only for subdomain

    - by Philipp Lenssen
    How do I write the htaccess so that it will only require authentication when on admin.example.com, but not on www.example.com (like by using some if-else clause)? Background: I have a site running in two modes: The admin mode should be reached at something like admin.example.com, whereas the normal mode would be www.example.com -- but both should point to the same directory & scripts within them (the scripts then turn on certain editing features by checking if the script is accessed from the admin subdomain). Edit: I can now see this has been asked and answered at StackOverflow, though I can't get the top answer to work for me...
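
    One commonly used pattern (hedged sketch, Apache 2.2 syntax, with a hypothetical htpasswd path): flag requests whose Host header is the non-admin site and let those through without credentials, while everything else must authenticate.

        # Hedged sketch: only admin.example.com gets the password prompt.
        SetEnvIfNoCase Host ^www\.example\.com$ no_auth_needed=1

        AuthType Basic
        AuthName "Admin area"
        # hypothetical path to the password file
        AuthUserFile /path/to/.htpasswd
        Require valid-user

        Order Deny,Allow
        Deny from all
        Allow from env=no_auth_needed
        # either the host-based allow or a valid login is enough
        Satisfy any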

    Read the article

  • How do I remove a URL from Google without having to have a Google E-mail Account

    - by PP
    Really simple question. I do not want a Google account. I just want Google to stop making requests every 2 minutes for a URL it should never have known about (apparently Google harvests URLs from search requests as well as private e-mails, not just from actual web pages). But when I search Google help for removing URLs it appears I have to use their "webmaster tools" which require logging into a GMail account! How do I tell Google not to index my URL without becoming a customer? Note: I already return 404 for the URLs in question using a rewrite rule - this appears to make zero difference to the crawler which continually attempts to fetch the page every 2 minutes.
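
    Two account-free options, sketched for Apache since a rewrite rule is already in place (the path below is hypothetical): ask the crawler to stop fetching the URL via robots.txt, or answer 410 Gone instead of 404, which Google treats as a stronger removal signal.

        # Option 1 - robots.txt at the site root (hypothetical path):
        #     User-agent: Googlebot
        #     Disallow: /the/unwanted/path
        # Option 2 - serve 410 Gone instead of 404 via mod_alias:
        Redirect gone /the/unwanted/path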

    Read the article

  • Sticky sessions on load balancers with HTTP and HTTPS

    - by javano
    How do sticky sessions relate to HTTP and HTTPS? If I place a load balancer in front of some web app servers that run a front end that supports HTTPS, will the sessions remain "sticky" on a typical load balancer that lists "sticky sessions" as one of its supported features? I understand that question is partly open ended; to clarify, would I require a load balancer that supports sticky HTTPS sessions specifically, or is "sticky sessions" a principle that functions agnostic of the HTTP payload, be it encrypted or not? Thank you.
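
    As an illustration of the distinction (hedged HAProxy sketch with made-up names and addresses): stickiness that keys on something outside the payload, such as the client IP, works even when the balancer never decrypts the traffic, whereas cookie-insertion stickiness requires the balancer to terminate TLS first.

        frontend https_in
            bind *:443
            # pass-through: the balancer never decrypts
            mode tcp
            default_backend app_tls

        backend app_tls
            mode tcp
            # stick by client IP - payload-agnostic
            balance source
            server app1 10.0.0.11:443 check
            server app2 10.0.0.12:443 check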

    Read the article

  • How to monitor a folder for changes, and execute a command if it does, on Windows?

    - by Camilo Martin
    There are similar questions for Linux and Mac, but I'm after a Windows solution here. The problem is as follows: I want to write several (js) script files in a folder, and have a program monitor that folder for file changes and new files being added, and run a command whenever that happens (to compile them all into one single file). The solution has to: Monitor both file changes and new files being added, in a folder. Run a command only if there is any change. It would be best if it either is a built-in solution (like a JScript or VBscript snippet), or something that does not require installation.
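
    A minimal, installation-free sketch using JScript under Windows Script Host, along the lines hinted at in the question (folder path and compile command are hypothetical): it polls the folder, rebuilds a name-plus-timestamp snapshot each pass, and runs the command whenever the snapshot changes. Run it with cscript //nologo watch.js.

        // Hedged sketch: poll the folder every two seconds and run a command on change.
        // Folder and command are hypothetical placeholders.
        var fso     = new ActiveXObject("Scripting.FileSystemObject");
        var shell   = new ActiveXObject("WScript.Shell");
        var folder  = "C:\\scripts";
        var command = "cmd /c combine.bat";

        function snapshot() {
            var parts = [];
            for (var e = new Enumerator(fso.GetFolder(folder).Files); !e.atEnd(); e.moveNext())
                parts.push(e.item().Name + "|" + e.item().DateLastModified);
            return parts.join("\n");
        }

        var last = snapshot();
        while (true) {
            WScript.Sleep(2000);
            var now = snapshot();
            if (now !== last) {       // a file changed, appeared or disappeared
                shell.Run(command, 0, true);
                last = now;
            }
        }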

    Read the article

  • Experience with MQ File Transfer Edition?

    - by mfinni
    We've got several processes that move files across servers - SFTP, FTP, SCP; Windows, Linux, AIX; there is a workflow component (usually require a control file with filenames and hash values to move a batch of related files). The action is often initiated on our servers to get the files, so we need to make sure they're done being written. We have some homegrown scripts to do this, but they don't always work properly, and troubleshooting, maintenance, and log review is not easy this way. There's a lot of servers, and our scripts don't have central logging or a dashboard/console/etc. We're looking into commercial products to do this. Has anyone used MQ File Transfer Edition? Another team in our company is using Aspera, does anyone have any thoughts on that, or other favored products? I have no idea what our budget is for this, yet. Just trying to get a handle on the product space from the perspective of other admins.

    Read the article

  • Putting a Windows DC, Exchange in a DMZ

    - by blsub6
    I have one guy at my company telling me that I should put FF:TMG in between my main Internet-facing firewall (Cisco 5510) and the internal network, and keep my Exchange server and DC on the internal network. I have another guy telling me that I should put the Exchange server and DC in a DMZ. I don't particularly like the idea of having my mailboxes and DC's usernames/passwords in a DMZ, and I think that Windows authentication would require me to open up so many ports between my DMZ and my internal network that it would be a moot point to have it out there anyway. What are some thoughts? How do you have it set up?

    Read the article

  • Color Calibrate Dual Monitor XP SP2

    - by Laramie
    This topic has been touched on before but not really answered. I have a dual monitor system and the colors differ wildly. I currently live in Buenos Aires, where color correction hardware costs premium prices. I do some graphic design, but don't require a pro-level calibration. That said, I'd like my monitors to be set as close to "true color" as possible. I've located the useful and free Monitor Calibration Wizard, but it seems to adjust the entire system internally at startup. I could use the Microsoft Color Control Panel Applet to set a different ICC or ICM profile for each monitor, but the Monitor Calibration Wizard outputs its own format for profiles.

    Read the article

  • Looking at desktop virtualization, but some users need 3D support. Is HP Remote Graphics a viable solution?

    - by Ryan Thompson
    My company is looking at desktop virtualization, and is planning to move all of the desktop compute resources into the server room or data center, and provide users with thin clients for access. In most cases, a simple VNC or Remote Desktop solution is adequate, but some users are running visualizations that require 3D capability--something that VNC and Remote Desktop cannot support. Rather than making an exception and providing desktop machines for these users, complicating our rollout and future operations, we are considering adding servers with GPUs, and using HP's Remote Graphics to provide access from the thin client. The demo version appears to work acceptably, but there is a bit of a learning curve, it's not clear how well it would work for multiple simultaneous sessions, and it's not clear if it would be a good solution to apply to non-3D sessions. If possible, as with the hardware, we want to deploy a single software solution instead of a mishmash. If anyone has had experience managing a large installation of HP Remote Graphics, I would appreciate any feedback you can provide.

    Read the article

  • R: extracting "clean" UTF-8 text from a web page scraped with RCurl

    - by SlowLearner
    Using R, I am trying to scrape a web page and save the text, which is in Japanese, to a file. Ultimately this needs to be scaled to tackle hundreds of pages on a daily basis. I already have a workable solution in Perl, but I am trying to migrate the script to R to reduce the cognitive load of switching between multiple languages. So far I am not succeeding. Related questions seem to be this one on saving csv files and this one on writing Hebrew to an HTML file. However, I haven't been successful in cobbling together a solution based on the answers there. The pages are from Yahoo! Japan Finance and my Perl code looks like this. use strict; use HTML::Tree; use LWP::Simple; #use Encode; use utf8; binmode STDOUT, ":utf8"; my @arr_links = (); $arr_links[1] = "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7203"; $arr_links[2] = "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7201"; foreach my $link (@arr_links){ $link =~ s/"//gi; print("$link\n"); my $content = get($link); my $tree = HTML::Tree->new(); $tree->parse($content); my $bar = $tree->as_text; open OUTFILE, ">>:utf8", join("","c:/", substr($link, -4),"_perl.txt") || die; print OUTFILE $bar; } This Perl script produces a CSV file that looks like the screenshot below, with proper kanji and kana that can be mined and manipulated offline: My R code, such as it is, looks like the following. The R script is not an exact duplicate of the Perl solution just given, as it doesn't strip out the HTML and leave the text (this answer suggests an approach using R but it doesn't work for me in this case) and it doesn't have the loop and so on, but the intent is the same. require(RCurl) require(XML) links <- list() links[1] <- "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7203" links[2] <- "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7201" txt <- getURL(links, .encoding = "UTF-8") Encoding(txt) <- "bytes" write.table(txt, "c:/geturl_r.txt", quote = FALSE, row.names = FALSE, sep = "\t", fileEncoding = "UTF-8") This R script generates the output shown in the screenshot below. Basically rubbish. I assume that there is some combination of HTML, text and file encoding that will allow me to generate in R a similar result to that of the Perl solution but I cannot find it. The header of the HTML page I'm trying to scrape says the charset is utf-8 and I have set the encoding in the getURL call and in the write.table function to utf-8, but this alone isn't enough. The question How can I scrape the above web page using R and save the text as CSV in "well-formed" Japanese text rather than something that looks like line noise? Edit: I have added a further screenshot to show what happens when I omit the Encoding step. I get what look like Unicode codes, but not the graphical representation of the characters. So it may be some kind of locale-related issue, but in the exact same locale the Perl script does provide useful output. So this is still puzzling.
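
    A hedged R sketch of the approach that usually fixes this: drop the Encoding(txt) <- "bytes" step, parse the fetched page as UTF-8 with htmlParse, pull the visible text out via XPath, and write through a UTF-8 connection (the XPath node selection is an assumption; the file path mirrors the question).

        require(RCurl)
        require(XML)
        url <- "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7203"
        raw <- getURL(url, .encoding = "UTF-8")
        # parse as UTF-8 instead of marking the string as raw bytes
        doc <- htmlParse(raw, encoding = "UTF-8", asText = TRUE)
        # node selection is an assumption - grab all visible text under <body>
        txt <- xpathSApply(doc, "//body//text()", xmlValue)
        con <- file("c:/geturl_r.txt", open = "w", encoding = "UTF-8")
        writeLines(txt, con)
        close(con)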

    Read the article

  • web based source control management software [closed]

    - by tom smith
    Hi. Not sure if this is the right place, but hopefully someone might have thoughts on a solution/vendor. I'm starting to spec out a project that will require multiple (50-100) developers to be able to manipulate source files/scripts for a large-scale project. The idea is to have each app go through a dev/review/test process, where the users can select (or be assigned) the role they're going to have for the given app. I'm looking for web-based version control, issue tracking, user roles/access, workflow functionality, etc. Ideally, the process will also allow the reviewed/valid app to then be exported to a separate system for testing on the test server/environment. This can be hosted on our servers, or we can do the colo process. I've checked out Atlassian/CollabNet, but any thoughts you can provide would be appreciated as well. Thanks.

    Read the article

  • How does AuthzSVNAccessFile work?

    - by grigy
    I have set up an SVN repo with WebDAV access. For some reason it does not let me check out. Here is my httpd.conf part: <Location /svn> DAV svn SVNParentPath /home/svn/repositories AuthzSVNAccessFile /home/svn/dav_svn.authz Satisfy Any Require valid-user AuthType Basic AuthName "Subversion Repository" AuthUserFile /home/svn/dav_svn.passwd </Location> I have two repositories named "first" and "second" and the content of dav_svn.authz is: [first:/] doe = rw * = r [second:/] doe = rw grig = rw * = r When I try to check out the second with user doe, I get this in error_log: user doe: authentication failure for "/svn/second": Password Mismatch In order to understand what the problem can be, I would like to better understand how the AuthzSVNAccessFile is supposed to work.
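
    One hedged observation: the "Password Mismatch" error is raised by the Basic-auth stage against AuthUserFile before AuthzSVNAccessFile is ever consulted, so the authz rules above are probably not the culprit. Re-setting the user's entry is a quick check (the server name in the checkout URL is a placeholder):

        # re-create doe's entry in the password file, then retry the checkout
        htpasswd -m /home/svn/dav_svn.passwd doe
        svn co http://yourserver/svn/second --username doe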

    Read the article

  • Apache SSL for login and NON-SSL for everything else (.htaccess)

    - by The Devil
    Hey, I've almost figured it out on my own but there's something I'm missing. I want to set a couple of directories and files to require SSL, and everything else that's not related to those files and dirs to point back to http. So far I have this: RewriteEngine on RewriteBase / # Force ssl for login & admin RewriteCond %{HTTPS} !on RewriteRule ^/?(admin(.*)|login\.php)$ https://%{SERVER_NAME}/$1 [R,NC,L] # Force non-ssl for others RewriteCond %{HTTPS} on RewriteRule ^/?(admin(.*)|login\.php)$ http://%{SERVER_NAME}/$1 [R,NC,L] I'm sure I'm doing something wrong but I just can't figure it out.... Where have I gone wrong? Thanks in advance!
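
    A hedged guess at the problem: the second rule's pattern still only matches admin/login, so ordinary pages never get redirected back to http. Negating the pattern (and using %{REQUEST_URI}, since a negated pattern provides no backreferences) is one way to express "everything else":

        RewriteEngine on
        RewriteBase /

        # Force ssl for login & admin
        RewriteCond %{HTTPS} !on
        RewriteRule ^/?(admin(.*)|login\.php)$ https://%{SERVER_NAME}/$1 [R,NC,L]

        # Force non-ssl for everything that is NOT admin/login
        # (negated pattern, so the path comes from REQUEST_URI rather than $1)
        RewriteCond %{HTTPS} on
        RewriteRule !^/?(admin(.*)|login\.php)$ http://%{SERVER_NAME}%{REQUEST_URI} [R,NC,L]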

    Read the article

  • How to split audio into multiple channels from optical S/PDIF or 1/8"?

    - by Josh M.
    I have a motherboard which has an optical S/PDIF output or 1/8". I'd like to "split" that signal into the appropriate channels so that I can then connect that to the wires behind my car's headunit which, in turn, run to the amp. The factory Bose amp just takes a single connector with a million wires running out of it, so that's why I would need to separate the signal into separate channels. On the other end there are four RCA connectors: front left, front right, rear left, rear right. The sub-woofer signal does not require an additional connection. Edit: Revised to include S/PDIF or 1/8".

    Read the article

  • Adjusting color on one monitor in Nvidia Surround

    - by Chris Stauffer
    I'm currently running three 2560x1440 screens using Nvidia Surround. Two monitors are Yamakasi Catleaps (cheap Korean jobs) and the third is the Achievia Shimian (also Korean). The Catleaps have great color reproduction, however the Shimian is exceptionally blue tinted. With normal monitors the required correction would require minimal effort to accomplish. But these Korean monitors do not have hardware controls to do it. For those who are unfamiliar, Nvidia Surround basically takes all three monitors and makes one big "monitor" out of all of them (Xinerama for GNU/Linux folk), at a resolution of 7680x1440 in my situation. Therefore, adjusting the color profile in the Nvidia control panel changes the settings for ALL of the monitors simultaneously. Thus, I am looking for some software to adjust the Shimian (perhaps by just selecting the pixels that that monitor encompasses). Does anyone know of such a program?

    Read the article

  • Getting iTunes to play third party AAC files

    - by Redmastif
    I have a library filled with some old MP3 files and I'm in the process of changing them all to AAC for the better sound quality. Obviously I can't just create AAC versions of the files I already have because they would sound worse (lossy compression converted to more lossy compression), so I'm going to their source and downloading them in a lossless form and using a third party to make them into AAC. Apparently iTunes will not handle AAC files that aren't made with iTunes. Is there a way around this? I've looked at third party programs and would be willing to use them, but since they all require the iTunes/iPod/iEverything driver, I don't know if they would still reject my files or not. Also, before you jump on my back about pirating, these files are from old CDs that I lost years ago. I paid for them. Thanks.

    Read the article

  • What ports to open for mail server?

    - by radman
    Hi, I have just finished setting up a Postfix mail server on a Linux (Ubuntu) platform. I have it sending and receiving email and it is not an open relay. It also supports secure SMTP and IMAP. Now this is a pretty beginner question, but should I be leaving port 25 open (since secure SMTP is preferred)? If so, then why? Also what about port 587? Also, should I require any authentication on either of these ports? Please excuse my ignorance in this area :P
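
    The usual split, sketched below for Postfix (hedged; these lines mirror the stock master.cf examples): keep port 25 open because other mail servers deliver to you there without authentication, and offer port 587 (submission) to your own users only, requiring TLS plus SASL authentication.

        # master.cf - hedged sketch of a submission service on 587
        submission inet n       -       n       -       -       smtpd
          -o smtpd_tls_security_level=encrypt
          -o smtpd_sasl_auth_enable=yes
          -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject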

    Read the article

  • Another Call to a member function get_segment() on a non-object question

    - by hogofwar
    I get the above error when calling this code: <? class Test1 extends Core { function home(){ ?> This is the INDEX of test1 <? } function test2(){ echo $this->uri->get_segment(1); //this is where the error comes from ?> This is the test2 of test1 testing URI <? } } ?> I get the error where commentated. This class extends this class: <?php class Core { public function start() { require("funk/funks/libraries/uri.php"); $this->uri = new uri(); require("funk/core/loader.php"); $this->load = new loader(); if($this->uri->get_segment(1) != "" and file_exists("funk/pages/".$this->uri->get_segment(1).".php")){ include("funk/pages/". $this->uri->get_segment(1).".php"); $var = $this->uri->get_segment(2); if ($var != ""){ $home= $this->uri->get_segment(1); $Index= new $home(); $Index->$var(); }else{ $home= $this->uri->get_segment(1); $Index = new $home(); $Index->home(); } }elseif($this->uri->get_segment(1) and ! file_exists("funk/pages/".$this->uri->get_segment(1).".php")){ echo "404 Error"; }else{ include("funk/pages/index.php"); $Index = new Index(); $Index->home(); //$this->Index->index(); echo "<!--This page was created with FunkyPHP!-->"; } } } ?> And here is the contents of uri.php: <?php class uri { private $server_path_info = ''; private $segment = array(); private $segments = 0; public function __construct() { $segment_temp = array(); $this->server_path_info = preg_replace("/\?/", "", $_SERVER["PATH_INFO"]); $segment_temp = explode("/", $this->server_path_info); foreach ($segment_temp as $key => $seg) { if (!preg_match("/([a-zA-Z0-9\.\_\-]+)/", $seg) || empty($seg)) unset($segment_temp[$key]); } foreach ($segment_temp as $k => $value) { $this->segment[] = $value; } unset($segment_temp); $this->segments = count($this->segment); } public function segment_exists($id = 0) { $id = (int)$id; if (isset($this->segment[$id])) return true; else return false; } public function get_segment($id = 0) { $id--; $id = (int)$id; if ($this->segment_exists($id) === true) return $this->segment[$id]; else return false; } } ?> i have asked a similar question to this before but the answer does not apply here. I have rewritten my code 3 times to KILL and Delimb this godforsaken error! but nooooooo....
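
    A hedged reading of the error: Core::start() sets $this->uri on the original Core object, but the page object created with new $home() never receives that property, so $this->uri is null inside Test1::test2(). One minimal fix is to hand the already-built objects over before calling the method; the property assignments below are an illustration, not the only way to wire it up.

        // Hedged sketch - share the objects built in Core::start() with the page object.
        $Index = new $home();
        $Index->uri  = $this->uri;   // the uri instance the page methods expect
        $Index->load = $this->load;  // same for the loader, if the pages use it
        $Index->$var();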

    Read the article

  • Use argdo with search pattern to delete line while suppressing errors and requiring confirmation in Vim

    - by richardh
    I use gVim 7.3.46 on Win 7. It is pretty straightforward to use argdo to search args files for a pattern and replace it while suppressing errors and requiring confirmation. :argdo %s/pattern/replace/gec | update However, I would like to delete entire lines that contain the pattern. I use the following. :argdo %/pattern/d | update But I can't suppress errors or require confirmation. Is there a way to do this? Thanks! (Also, is there a way to set "more" off? Thanks!)
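
    A hedged sketch that stays close to the substitute command already working for replacements: make the match swallow the whole line including its newline, keep the c flag for confirmation, keep the e flag to suppress "pattern not found" errors, and turn off the more-prompt between buffers.

        " hedged sketch - 'pattern' is the question's own placeholder
        :set nomore
        :argdo %s/.*pattern.*\n//gec | update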

    Read the article

  • Web Server Users - Best Practice

    - by Toby
    I was wondering what is considered best practice when several developers/administrators require access to the same web server. Should there be one non-root user with a secure username and password unique to the web server which everyone logs in as, or should there be a username for each person? I am leaning towards a username for each person to aid in logging etc.; however, does the same user then keep the same credentials across several servers, or should at least their password change depending on the server they are on? Should any non-root user of the system be added to the sudoers file, or is it best practice to leave everyone off it and only let root perform certain tasks? Any help would be greatly appreciated.
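
    If it helps, a hedged sketch of the per-person approach (names are hypothetical): one personal account per admin, a shared group, and sudo rights granted to that group through a drop-in file edited with visudo rather than by editing /etc/sudoers directly.

        groupadd webadmins
        useradd -m -G webadmins alice
        useradd -m -G webadmins bob
        visudo -f /etc/sudoers.d/webadmins
        # contents of the drop-in, broad here for illustration - narrow to specific commands if preferred:
        #   %webadmins ALL=(ALL) ALL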

    Read the article

  • USB 3 adapter for a Dell 2850 with PCI (or PCI-X) ports

    - by Don Dickinson
    Does anyone know if there is a plain PCI (or PCI-X) USB 3 adapter? I understand the bandwidth of PCI < USB 3, but it still beats the heck out of USB 2. I have some older Dell 2850s that do not have the PCIe ports that most USB 3 adapters require. I'd really like to get USB 3 in those servers. I searched the Internet but didn't see any; the local computer store said they only had PCIe adapters. Thanks in advance, Don

    Read the article

  • What is the easiest way to do a direct file transfer of an extremely large file over the Internet?

    - by Kenneth Cochran
    I would like to transfer a 20+ GB file to a friend. I would like it to:
    1. Be fast
    2. Ensure data integrity
    3. Not require opening ports in either end's firewall
    4. Be free
    5. Not broadcast the file's existence to everyone on the Internet
    I've looked at several technologies and nothing seems to fit:
    - Gnutella, BitTorrent, et al. satisfy 1, 2 and 4
    - JetBytes... 1, 3, 4 and 5
    - Yahoo Messenger, AIM, etc. 3, 4 and 5
    - FTP, SFTP... 1?, 4 and 5
    - rsync... 1, 2, 4 and 5
    For a file this size, speed and data integrity are the most important. No one wants a 20 GB file to fail an MD5 check after spending two days downloading it. Is there anything that meets all these requirements?
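
    Whatever transport ends up being chosen, the integrity requirement can be handled separately; a hedged illustration with a made-up file name:

        sha256sum huge-file.iso > huge-file.iso.sha256     # sender publishes the checksum
        sha256sum -c huge-file.iso.sha256                  # receiver verifies after download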

    Read the article
