Search Results

Search found 42662 results on 1707 pages for 'page init'.


  • WGet a Page that Requires Logging in

    - by Synetech inc.
    I'm trying to figure out a way to use wget or a similar tool so that I can schedule a web page to be downloaded regularly, as a sort of updating log. The problem is that the page requires that I be logged in; otherwise I get a different, generic page. Further, the page does not take login information as GET parameters in the URL: it uses POST to log in on the login page, and cookies to save the login information that's read by the regular page. I'm currently using GNU Wget 1.10.2 for Windows. I've tried using wget's cookie functionality but have had mixed results, usually skewing towards it not working. Can anyone please advise on a way to accomplish this? Thanks a lot.
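
    A hedged sketch of the usual two-step wget recipe: POST the login form once, keeping the session cookies, then replay them on each scheduled fetch. The URLs and form-field names below are placeholders; the real ones have to be read out of the login page's HTML source:

        rem log in once and save the session cookies
        wget --save-cookies cookies.txt --keep-session-cookies --post-data "username=me&password=secret" http://example.com/login
        rem scheduled fetch, replaying the saved cookies
        wget --load-cookies cookies.txt -O log-page.html http://example.com/page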

  • Async trigger for an update panel refreshes entire page when triggering too much in too short of a time

    - by Matt
    I have a search button tied to an update panel as a trigger, like so:

        <asp:Panel ID="CRM_Search" runat="server">
          <p>Search:&nbsp;<asp:TextBox ID="CRM_Search_Box" CssClass="CRM_Search_Box" runat="server"></asp:TextBox>
          <asp:Button ID="CRM_Search_Button" CssClass="CRM_Search_Button" runat="server" Text="Search" OnClick="SearchLeads" /></p>
        </asp:Panel>

        <asp:UpdatePanel ID="UpdatePanel1" runat="server">
          <Triggers>
            <asp:AsyncPostBackTrigger ControlID="CRM_Search_Button" />
          </Triggers>
          <ContentTemplate>
            /* Content Here */
          </ContentTemplate>
        </asp:UpdatePanel>

    In my JavaScript I use jQuery to tie the search box's keyup to the search button's click:

        $($(".CRM_Search_Box")[0]).keyup(function () {
            $($(".CRM_Search_Button")[0]).click();
        });

    This works perfectly, except when I start typing too fast. As soon as I type too fast (my guess is if it's any faster than the data actually returns), the entire page refreshes (doing a full postback?) instead of just the update panel. I've also found that instead of typing, if I just click the button really fast, it starts doing the same thing. Is there any way to prevent this? Possibly prevent second requests until the first has completed? If I'm not on the right track, does anyone have any other ideas? Thanks, Matt
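
    A hedged sketch of the "block the second request until the first completes" idea, using the ASP.NET AJAX client API that ships with the ScriptManager an UpdatePanel requires (the control names are the ones from the question):

        var prm = Sys.WebForms.PageRequestManager.getInstance();
        $($(".CRM_Search_Box")[0]).keyup(function () {
            // ignore keystrokes while a partial postback is still in flight;
            // re-triggering mid-request is what appears to force the full refresh
            if (!prm.get_isInAsyncPostBack()) {
                $($(".CRM_Search_Button")[0]).click();
            }
        });

    A small keystroke debounce (a setTimeout reset on each keyup) would also cut the request rate, and is worth combining with the guard above.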

  • Showing all rows for keys with more than one row

    - by Leif Neland
    Table kal:

        id    integer  primary key
        init  char(4)  indexed
        job   char(4)

        id | init | job
        ---+------+------
         1 | aa   | job1
         2 | aa   | job2
         3 | bb   | job1
         4 | cc   | job3
         5 | cc   | job5

    I want to show all rows where init has more than one row:

        id | init | job
        ---+------+------
         1 | aa   | job1
         2 | aa   | job2
         4 | cc   | job3
         5 | cc   | job5

    I tried:

        select * from kal
        where init in (select init from kal group by init having count(init) >= 2);

    Actually, the table has 60000 rows, and the query was count(init) < 40, but it takes a humongous time; phpMyAdmin and my patience run out. Both

        select init from kal group by init having count(init) >= 2

    and

        select * from kal where init in ('aa','bb','cc')

    run in "no time", less than 0.02 seconds. I've tried different subqueries, but they all take "infinite" time, more than a few minutes; I've actually never let them finish. Leif
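
    A hedged sketch of a common rewrite: older MySQL executes IN (subquery) as a dependent subquery that is re-run for every row, while a join against the grouped result is evaluated once (same table and columns as above):

        select k.*
        from kal k
        join (select init from kal
              group by init
              having count(*) >= 2) dup
          on dup.init = k.init;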

  • Refresh page in browser without resubmitting form

    - by Michael
    I'm an ASP.NET developer, and I usually find myself leaving the webpage that I'm working on open in my browser (Chrome is my browser of choice, but this question is relevant for any browser). My workflow typically goes like this: I write code, I rebuild my project in Visual Studio, and then I flip back to my browser with Alt-Tab and hit F5 to refresh the page. This is fine and dandy if a form hasn't been submitted since the page was opened. But if I've been clicking around on ASP.NET form controls, the page has posted form data a number of times, so hitting F5 causes the browser to (sensibly) pop up a confirmation message, e.g., "Confirm Form Resubmission: The page that you're looking for used information that you entered...". Sometimes I do want to resubmit the form, but more often than not I just want to start over with the page rather than resubmit form data. The way I usually get around this is to add some query string data to the URL so that the browser sees it as a fresh page request, e.g., page.aspx becomes page.aspx? (or vice versa). My question is: is there a better way to quickly request a fresh version of a webpage (without submitting form data) in any of the major browsers? It seems like a no-brainer to me for web development, but maybe I'm missing something. What I'd love to see is something like the last item in this list:

        F5       refresh page
        Ctrl-F5  refresh page (and force cache refresh)
        Alt-F5   request a fresh copy of the page without resubmitting the form
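
    In the meantime, a hedged sketch of a bookmarklet workaround: re-assigning the location makes the browser issue a plain GET for the current URL, with no POST body to resubmit (worth verifying per browser, since this relies on navigation behavior, not a documented shortcut):

        javascript:void(location.href = location.pathname + location.search);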

  • loading remote page into DOM with javascript

    - by scoobydoo
    I am trying to write a web widget which will allow users to display customized information (from my website) in their own web page. The mechanism I want to use for creating the web widget is JavaScript. So basically, I want the end user to copy something like this into their HTML page to get my widget displayed:

        <script type="text/javascript">
        /* javascript here to fetch page from remote url and insert into DOM */
        </script>

    I have two questions:

    1. How do I write the JavaScript to fetch the page from the remote URL? Ideally this will be plain JavaScript (i.e., not using jQuery etc., since I don't want to force the user to load third-party scripts that may conflict with other scripts on their page).
    2. The page I am fetching contains inline JavaScript which is executed in a body.onLoad event, as well as other functions which are used in response to user actions. My questions are:
       i) Will the body.onLoad event be triggered for the retrieved document?
       ii) If the retrieved page is dumped directly into the DOM, the document will contain two <body> sections, which is no longer valid (X)HTML. However, I need the body.onLoad event to be triggered for the page to be set up correctly, and I also need the other functions in the retrieved page, so the retrieved page can respond to user interaction.

    Any suggestions/tips on how I can solve these problems?
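
    A minimal sketch of the usual plain-JavaScript pattern, with every name and URL below hypothetical. Two caveats bear directly on the questions above: markup injected via innerHTML does not execute its inline <script> tags, and body.onLoad will not fire again, so widgets normally serve an HTML fragment plus an explicit init function rather than a full page. (Also, XMLHttpRequest is same-origin; a widget served from another domain typically uses a <script src> include that document.writes its markup, or JSONP.)

        // assumes the user's page contains <div id="mywidget"></div>
        var xhr = new XMLHttpRequest();
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                document.getElementById("mywidget").innerHTML = xhr.responseText;
                // run the widget's own setup instead of relying on body.onLoad
                if (window.myWidgetInit) { myWidgetInit(); }
            }
        };
        xhr.open("GET", "/widget-fragment.html", true);  // same-origin URL
        xhr.send();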

  • What is the recommended minimum object size for gzip performance benefits?

    - by utt73
    I'm working on improving page display speed, and one of the methods is to gzip content from the webserver. Google recommends:

        Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger.

    We serve our content through Akamai, using their network as a proxy and CDN. What they've told me:

        Following up on your question regarding what is the minimum size at which Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes.

    My reply: What is the reason Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? (see below) Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are <860 bytes.

    Akamai's response:

        The reason 860 bytes is the minimum size for compression is twofold: (1) the overhead of compressing an object under 860 bytes outweighs the performance gain; (2) objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them.

    So I'm here for some fact checking. Is packet size really the end of the reasoning behind the 860-byte limit? Why would high-traffic sites push this down to the 150-byte limit... just to save on bandwidth costs (since CDNs base their charges on bandwidth offloaded from origin), or is there a performance gain in doing so?
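
    A quick, hedged way to see the fixed cost at the low end: a gzip stream carries roughly 18 bytes of header and trailer before any compressed data, so tiny payloads can indeed come out larger. A local check (numbers will vary with content):

        printf 'a typical tiny ajax reply, ~40 bytes long..' > small.txt
        wc -c < small.txt            # size of the original
        gzip -c small.txt | wc -c    # often larger than the input at this size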

  • PHP profiling on a production server, or other options

    - by absentx
    Alright, I need some help here. I am commonly asked to speed up certain sections of some websites that I program for, but I have yet to figure out how to use a good PHP diagnosis/profiling tool. Some things to consider:

    - The sites I am working on are already built; getting a testing server set up locally is just a huge pain. I have to rewrite include paths and so many other things. This is a results-oriented deal, and spending days getting a site fully working on a testing platform so I can debug one page probably isn't an option.
    - I can write tons of PHP, but I have no clue how to interact with or configure servers. Every tutorial I read about setting up Xdebug or XHProf seems to involve getting something installed on a production server that I don't have access to, or have no clue how to work with.

    So are there any solutions out there that will show me where my PHP is slow, without having to do all sorts of server work that I just don't know how to do? XHProf seems the closest to usable for me, but from what I can tell it still has to be installed on a server. If anyone can just point me in the right direction on this I would be very grateful. Maybe getting these things put on the server isn't a big deal... but I have never worked with server command lines or anything like that. I suppose I should start sometime, but I really have no idea where. Plus I realize that profiling on a live platform is not the greatest idea either, but I feel I am in a tough spot: I have speed issues to solve, and setting up a local environment, while a great idea, just doesn't seem practical at the moment.
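
    A hedged, nothing-to-install fallback until a real profiler is available: microtime() checkpoints logged from plain PHP code (the function name and labels here are made up for illustration):

        <?php
        // crude checkpoint timer; sprinkle checkpoint('label') around suspect code
        function checkpoint($label) {
            static $last = null;
            $now = microtime(true);
            if ($last !== null) {
                error_log(sprintf('%s: %.1f ms', $label, ($now - $last) * 1000));
            }
            $last = $now;
        }

        checkpoint('start');
        // ... the section being timed ...
        checkpoint('after expensive query');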

  • SQL SERVER – 2012 – All Download Links in Single Page – SQL Server 2012

    - by pinaldave
    SQL Server 2012 RTM has just been announced, and recently I wrote about all the SQL Server 2012 certifications on a single page. As feedback, I received suggestions to have a single page where everything about SQL Server 2012 is listed. I will keep this page updated as new downloads are announced.

    - Microsoft SQL Server 2012 Evaluation: Microsoft SQL Server 2012 enables a cloud-ready information platform that will help organizations unlock breakthrough insights across the organization.
    - Microsoft SQL Server 2012 Express: a powerful and reliable free data management system that delivers a rich and reliable data store for lightweight Web sites and desktop applications.
    - Microsoft SQL Server 2012 Feature Pack: a collection of stand-alone packages which provide additional value for Microsoft SQL Server 2012.
    - Microsoft SQL Server 2012 Report Builder: provides a productive report-authoring environment for IT professionals and power users. It supports the full capabilities of SQL Server 2012 Reporting Services.
    - Microsoft SQL Server 2012 Master Data Services Add-in for Microsoft Excel: gives multiple users the ability to update master data in a familiar tool without compromising the data's integrity in Master Data Services.
    - Microsoft SQL Server 2012 Performance Dashboard Reports: Reporting Services report files designed to be used with the Custom Reports feature of SQL Server Management Studio.
    - Microsoft SQL Server 2012 PowerPivot for Microsoft Excel 2010: provides ground-breaking technology; fast manipulation of large data sets, streamlined integration of data, and the ability to effortlessly share your analysis through Microsoft SharePoint.
    - Microsoft SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Technologies 2010: allows you to integrate your reporting environment with the collaborative SharePoint 2010 experience.
    - Microsoft SQL Server 2012 Semantic Language Statistics: the Semantic Language Statistics Database is a required component for the Statistical Semantic Search feature in Microsoft SQL Server 2012.
    - Microsoft SQL Server 2012 FileStream Driver (Windows Logo Certification): catalog file for the Microsoft SQL Server 2012 FileStream Driver that is certified for Windows Server 2008 R2. It meets Microsoft standards for compatibility and recommended practices with the Windows Server 2008 R2 operating systems.
    - Microsoft SQL Server StreamInsight 2.0: Microsoft StreamInsight is Microsoft's Complex Event Processing technology to help businesses create event-driven applications and derive better insights by correlating event streams from multiple sources with near-zero latency.
    - Microsoft JDBC Driver 4.0 for SQL Server: a Type 4 JDBC driver that provides database connectivity through the standard JDBC application program interfaces (APIs) available in Java Platform, Enterprise Edition 5 and 6.
    - Data Quality Services Performance Best Practices Guide: this guide focuses on a set of best practices for optimizing performance of Data Quality Services (DQS).
    - Microsoft Drivers 3.0 for SQL Server for PHP: provide connectivity to Microsoft SQL Server from PHP applications.
    - Product Documentation for Microsoft SQL Server 2012 for firewall and proxy restricted environments: the Microsoft SQL Server 2012 setup installs only the Help Viewer…install any documentation. All of the SQL Server documentation is available online.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • nginx error page and internal directives not working as expected

    - by Romain
    I'd like to set up my nginx server to return a specific error page on HTTP 50x status codes, and I'd like this page to be unavailable via a direct request from users (e.g., http://mysite/internalerror). For that I'm using nginx's internal directive, but I must be missing something: when I put that directive on my /internalerror location, nginx returns a custom 404 error (which isn't even my own 404 error page) when a page crashes. So, to summarize, here's what seems to happen:

    1. GET /Home
    2. nginx passes the query to Python
    3. I simulate an application bug to get the 502 error code
    4. nginx tries to return /InternalError from its error_page rule
    5. because of the internal rule, it finally falls back to a custom 404 error code. Why? The documentation says error_page directives are not affected by internal: http://wiki.nginx.org/HttpCoreModule#internal

    Here's an extract from nginx.conf with a few comments to point things out:

        error_page 404 /NotFound;
        # HTTP 500 error page declaration
        error_page 500 502 503 504 =500 /InternalError;

        location / {
            try_files /Maintenance.html $uri @pythonbackend;
        }

        location @pythonbackend {
            include uwsgi_params;
            uwsgi_pass unix:///tmp/uwsgi.sock;
        }

        # This internal location works OK and returns my own 404 error page
        location ~* \.(py|pyc)$ {
            internal;
        }

        # This one also works fine
        location /__Maintenance.html {
            internal;
        }

        # This one doesn't work and returns nginx's 404 error page
        # when I trigger an error somewhere on my site
        location ~* /internalerror {
            internal;
        }

    Thanks very much for your help!!
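
    A hedged guess at what's happening, not a confirmed fix: after the internal redirect to /InternalError, the matching location has no content handler and no root/alias, so nginx looks for the file under its compiled-in default root and serves its stock 404. A sketch that gives the location something to serve (the root path and file layout are assumptions):

        location = /InternalError {
            internal;
            # assumed: a static file at /var/www/mysite/errors/InternalError
            root /var/www/mysite/errors;
        }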

  • runit - unable to open supervise/ok: file does not exist

    - by Alexandr Kurilin
    I'm trying to figure out why runit will not boot or give me the status for the managed applications. Running on Ubuntu 12.04. I created /service and /etc/sv/myapp (with a run script, a config file, and a log folder with its own run script inside it), then created a symlink from /service/ to /etc/sv/myapp. When I run sudo sv s /service/* I get the following error message:

        warning: /service/myapp: unable to open supervise/ok: file does not exist

    Some of my googling revealed that rebooting the svscan service might supposedly fix this, but killing it and running svscanboot didn't make a difference. Any suggestions? Am I missing a step here somewhere?
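
    A hedged first check: that error usually means no runsv process is attached to the service, i.e. runsvdir isn't scanning the directory the symlink lives in. On Debian/Ubuntu the runit package typically scans /etc/service rather than /service (an assumption worth verifying on the box):

        # is runsvdir running, and which directory is it scanning?
        ps ax | grep '[r]unsvdir'

        # if it scans /etc/service, link the service there instead
        sudo ln -s /etc/sv/myapp /etc/service/myapp
        sudo sv status /etc/service/myapp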

  • Rebuilding /etc/rc?.d/ links

    - by timday
    A regular filesystem check on a Debian Lenny system triggered an fsck, and that nuked a handful of links in the /etc/rc?.d hierarchy (unfortunately I didn't keep a list). The system seems to boot and run normally, but I'm worried it's storing up trouble for the future. Is there an easy (fairly automatic) way of rebuilding this piece of the system? As I understand it, the links are generally manipulated by package postinst scripts using update-rc.d (and I haven't made any changes from the installed defaults). Without any better ideas, my plan is one of:

    1. Diff a listing with another similar system to identify which packages need their links repaired.
    2. Wait until the system is upgraded to Squeeze (hopefully not too long :^) and assume the mass package upgrade will restore all the missing links.
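
    A minimal sketch of the diff approach (option 1 above); run on both the damaged box and a healthy one, then compare the output files:

        # capture the symlink layout of every runlevel directory
        find /etc/rc?.d -type l -printf '%p -> %l\n' | sort > rc-links.txt

        # then, with the file copied over from the healthy machine:
        diff rc-links-healthy.txt rc-links.txt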

  • Apache2 does not start at boot time - Ubuntu 12.04

    - by renard
    I've installed apache2 on an Ubuntu 12.04 machine, but I can't get the server to start at boot time, although I've tried many things:

    - reinstalling apache2
    - running update-rc.d apache2 enable

    Each time, apache2 is not started after boot: I get nothing from pgrep apache2 or ps -e | grep apache2.

    UPDATE: I've also tried the logging tweak suggested in another related post, but no entries at all can be found in /var/log/syslog. What could I do to make it work? Thanks for your help. Cheers
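
    A hedged checklist for this symptom, using commands standard on 12.04:

        # are there S-links for apache2 in the default runlevel?
        ls -l /etc/rc2.d/ | grep apache2

        # recreate the links from the init script's LSB header if missing
        sudo update-rc.d -f apache2 remove
        sudo update-rc.d apache2 defaults

        # and check whether the init script itself fails when run by hand
        sudo /etc/init.d/apache2 start
        cat /var/log/apache2/error.log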

  • placing shell script under systemd control

    - by Calvin Cheng
    Assuming I have a shell script like this:

        #!/bin/sh
        # cherrypy_server.sh

        PROCESSES=10
        THREADS=1        # threads per process
        BASE_PORT=3035   # the first port used

        # you need to make the PIDFILE dir and insure it has the right permissions
        PIDFILE="/var/run/cherrypy/myproject.pid"

        WORKDIR=`dirname "$0"`
        cd "$WORKDIR"

        cp_start_proc() {
            N=$1
            P=$(( $BASE_PORT + $N - 1 ))
            ./manage.py runcpserver daemonize=1 port=$P pidfile="$PIDFILE-$N" threads=$THREADS request_queue_size=0 verbose=0
        }

        cp_start() {
            for N in `seq 1 $PROCESSES`; do
                cp_start_proc $N
            done
        }

        cp_stop_proc() {
            N=$1
            #[ -f "$PIDFILE-$N" ] && kill `cat "$PIDFILE-$N"`
            [ -f "$PIDFILE-$N" ] && ./manage.py runcpserver pidfile="$PIDFILE-$N" stop
            rm -f "$PIDFILE-$N"
        }

        cp_stop() {
            for N in `seq 1 $PROCESSES`; do
                cp_stop_proc $N
            done
        }

        cp_restart_proc() {
            N=$1
            cp_stop_proc $N
            #sleep 1
            cp_start_proc $N
        }

        cp_restart() {
            for N in `seq 1 $PROCESSES`; do
                cp_restart_proc $N
            done
        }

        case "$1" in
            "start")   cp_start ;;
            "stop")    cp_stop ;;
            "restart") cp_restart ;;
            *)         "$@" ;;
        esac

    From the bash script, we can essentially do 3 things:

    1. start the cherrypy server by calling ./cherrypy_server.sh start
    2. stop the cherrypy server by calling ./cherrypy_server.sh stop
    3. restart the cherrypy server by calling ./cherrypy_server.sh restart

    How would I place this shell script under systemd's control as a cherrypy.service file (with the obvious goal of having systemd start up the cherrypy server when a machine has been rebooted)? Reference systemd service file example here: https://wiki.archlinux.org/index.php/Systemd#Using_service_file
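
    A minimal sketch of a unit that wraps the script as-is (the script path is a placeholder; since the script daemonizes ten separate processes and exits, Type=oneshot with RemainAfterExit=yes is the cautious choice here rather than Type=forking):

        # /etc/systemd/system/cherrypy.service
        [Unit]
        Description=CherryPy server pool
        After=network.target

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/opt/myproject/cherrypy_server.sh start
        ExecStop=/opt/myproject/cherrypy_server.sh stop

        [Install]
        WantedBy=multi-user.target

    Then systemctl enable cherrypy.service for boot-time startup, and systemctl start cherrypy.service to run it immediately.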

  • Backup broken PostgreSQL 8.4 without pg_dump

    - by Daniil
    So, I have a problem: PostgreSQL 8.4 won't start or restart, and gives no output. It worked for 3 months, until the hosting provider rebooted the server. Now it is completely broken: it won't start, and produces no output or log.

        pg_dump: [archiver (db)] connection to database "postgres" failed: No such file or directory
            Is the server running locally and accepting
            connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

    Now I want to back up my database (or just get the PostgreSQL socket up) so I can reinstall PostgreSQL. How?
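
    A hedged sketch of two things to try, using the standard Debian/Ubuntu 8.4 paths (verify them on the box):

        # start the postmaster by hand, in the foreground, to finally see an error
        sudo -u postgres /usr/lib/postgresql/8.4/bin/postgres \
            -D /var/lib/postgresql/8.4/main \
            -c config_file=/etc/postgresql/8.4/main/postgresql.conf

        # and/or take a file-level backup while the server is stopped;
        # the data directory can be restored against the same 8.4 binaries
        sudo tar czf pg84-backup.tar.gz \
            /var/lib/postgresql/8.4/main /etc/postgresql/8.4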

  • Any way to make a service not autostart in Ubuntu/Debian, but leave K00 scripts in place?

    - by Evgenyt
    I need to have only stop scripts in rcN.d (runlevels 0, 1, 6) for apache2, so that I always start it myself, but when a reboot occurs the server will shut down apache2 properly; and when I change runlevels (2-3) the server doesn't touch the apache daemon (leaving it in the state it is in). Basically, I just need a legal way to remove apache2's startup symlinks from rc2.d through rc5.d, with tools like update-rc.d. I can just remove those symlinks by hand, but I'm not sure that's the proper way to do this.
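
    A hedged sketch using update-rc.d's legacy start/stop syntax (works on sysv-rc setups of that era; systems on dependency-based boot may instead need the script's LSB header edited):

        # wipe the existing links, then create only K links in runlevels 0, 1 and 6
        sudo update-rc.d -f apache2 remove
        sudo update-rc.d apache2 stop 80 0 1 6 .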

  • Apollo Linux: boot into single-user mode

    - by Spirit
    We have a device that runs Apollo Linux, and I have to boot it into single-user mode so that I can run fsck to check the hard drive for errors. I've been googling this for the past hour, and so far I haven't found any specific method for doing that on this version of Linux. The device was formerly known as an NFX Cinxi One, now re-branded as BlackStratus LOG Storm. If any of you have experience with this one, you may know it is a device used to collect logs from other servers. I know the above info isn't much, but it is everything I can provide up until now, since tomorrow I have to follow up closely on this problem.
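
    Absent device-specific docs, the generic Linux route is a hedged starting point: interrupt the bootloader, append a single-user flag to the kernel line, then run fsck with the filesystem not mounted read-write (the device name below is an assumption):

        # at the GRUB/LILO prompt, append to the kernel command line:
        #   single        (or, as a last resort: init=/bin/sh)

        # once in the single-user shell:
        mount -o remount,ro /
        fsck -f /dev/sda1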

  • Unable to change IP address for eth0 without restart in Ubuntu

    - by Rodnower
    I have Ubuntu 12.04.1 installed. I tried to change the IP address of the interface eth0 in /etc/network/interfaces from 192.168.1.3 to 192.168.1.4:

        auto lo
        iface lo inet loopback
        pre-up iptables-restore < /etc/iptables.up.rules

        auto eth0
        iface eth0 inet static
            address 192.168.1.4
            gateway 192.168.1.1
            netmask 255.255.255.0
            network 192.168.1.0
            broadcast 192.168.1.255

    I checked with sudo service networking status; when I issue sudo service networking restart, I get this response:

        stop: Unknown instance:
        networking stop/waiting

    And the IP remains 192.168.1.3:

        eth0  Link encap:Ethernet  HWaddr 00:1e:33:71:cd:a4
              inet addr:192.168.1.3  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::21e:33ff:fe71:cda4/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:3861 errors:0 dropped:0 overruns:0 frame:0
              TX packets:3291 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:3423285 (3.4 MB)  TX bytes:521854 (521.8 KB)
              Interrupt:45 Base address:0x4000

    Only after a restart does the IP change. Any ideas?
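
    A hedged sketch of applying the change without a reboot: on 12.04 the networking job is an upstart task (hence the "Unknown instance" message), so bounce the interface instead, or apply the address directly (addresses taken from the question):

        sudo ifdown eth0 && sudo ifup eth0

        # or, applying the address immediately without rereading the config:
        sudo ip addr del 192.168.1.3/24 dev eth0
        sudo ip addr add 192.168.1.4/24 dev eth0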

  • stop apache from asking for SSL password each restart

    - by acidzombie24
    Using instructions from this site, but varying them just a little, I created a CA using -newca, copied cacert.pem to my computer, and imported it as a trusted issuer in IE. I then did -newreq and -sign (note: I do /full/path/CA.sh -cmd and not sh CA.sh -cmd) and moved the cert and key to Apache. I visited the site in IE and from .NET code, and it appears trusted; great (unless I write www. in front, which is expected). But every time I restart Apache I need to type in my password for the site(s?). How can I make it so I DO NOT need to type in the password?
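
    The prompt comes from a passphrase-protected private key. A hedged sketch of the usual fix, stripping the passphrase (filenames and paths are assumptions; keep the protected original somewhere safe, since the resulting key is stored unencrypted):

        cd /etc/apache2/ssl                          # wherever the key lives
        sudo cp server.key server.key.protected      # keep the encrypted original
        sudo openssl rsa -in server.key.protected -out server.key
        sudo chmod 600 server.key
        sudo service apache2 restart                 # should no longer prompt

    Alternatively, Apache's SSLPassPhraseDialog directive can supply the passphrase from a program at startup, which avoids keeping an unencrypted key on disk.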

  • Progressive Enhancement vs. Single Page Apps

    - by SeanPlusPlus
    I just got back from a conference in Boston called An Event Apart. A really popular theme amongst the speakers was the idea of progressive enhancement: a site's content should go in the HTML, and JavaScript should only be used to enhance behavior. The arguments that the speakers gave for progressive enhancement were very compelling. Not only is it a solid pattern for supporting older browsers and devices on low-bandwidth networks, but HTML fails much more gracefully than JavaScript (i.e., unsupported markup is simply ignored, while if a browser throws an exception executing your script, you are hosed). Jeremy Keith gave a particularly insightful talk about this. But what about single-page web apps built with frameworks like Backbone and Angular? The whole design behind these frameworks seems to push the developer toward moving content out of the HTML and into something like a JSON API. I cannot seem to reconcile these two design patterns: progressive enhancement vs. single-page web apps. Are there instances when one is better than the other? Or are they not even antagonistic technologies, and I am missing something here with my mental model?
