Search Results

Search found 1381 results on 56 pages for 'reload'.

Page 48/56 | < Previous Page | 44 45 46 47 48 49 50 51 52 53 54 55  | Next Page >

  • The right approach to loading dynamic content into a UITableView in iOS

    - by OS.
    OK, I've read tons of bits and pieces on the subject of loading dynamic content (from the web) into a UITableView and the problem of calculating cell heights up front. I've tried different simple implementations but the problem persists... Assuming I need to read a JSON file from the web and parse it into 'item' objects, each with a variable-size image and various text labels, here is what I believe would be the right approach to avoid a long hang time while everything is loading:
    1. On app load, read the JSON file and parse it into an items array.
    2. Provide only a small part of the items array to the table view (about 10 items) - since I need to load the images associated with each item to calculate cell height, I don't want the view to go through the whole items list and load all images; this hangs the app until every image is loaded.
    3. Display the table view with the available cells (assuming I load a few 'spare' ones, the user can even scroll to more items).
    4. In the background, using Grand Central Dispatch, download images for all/some of the remaining items and then reload the table view with the new data.
    5. Repeat step 4 if the item list is very long.
    Step 2 is necessary since I have no way to calculate the cell height without loading the images first, and since the table view first calculates the height of all cells, it may take a very long time to download all images for all items. Would you say this is the right approach? Am I missing something?
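    A minimal sketch of step 4, assuming an items array of model objects with an imageURL property and a plain UITableView outlet (the Item class and property names are illustrative, not taken from the question):

        // Hypothetical helper: download images off the main thread, then reload on the main queue.
        - (void)loadRemainingImages {
            dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
                for (Item *item in self.pendingItems) {
                    NSData *data = [NSData dataWithContentsOfURL:item.imageURL];
                    item.image = [UIImage imageWithData:data];   // cell height can now be computed
                }
                dispatch_async(dispatch_get_main_queue(), ^{
                    [self.visibleItems addObjectsFromArray:self.pendingItems];
                    [self.tableView reloadData];                 // UIKit work stays on the main thread
                });
            });
        }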

    Read the article

  • How to solve an error when using NSThread?

    - by ChandreshKanetiya
    I have the following error message in the console when using NSThread: "Tried to obtain the web lock from a thread other than the main thread or the web thread. This may be a result of calling to UIKit from a secondary thread. Crashing now..." I have submitted my sample code here:

        - (void)viewDidLoad {
            appDeleg = (NewAshley_MedisonAppDelegate *)[[UIApplication sharedApplication] delegate];
            [[self tblView1] setRowHeight:80.0];
            [super viewDidLoad];
            self.title = @"Under Ground";
            [UIApplication sharedApplication].networkActivityIndicatorVisible = YES;
            [NSThread detachNewThreadSelector:@selector(CallParser) toTarget:self withObject:nil];
        }

        -(void)CallParser {
            Parsing *parsing = [[Parsing alloc] init];
            [parsing DownloadAndParseUnderground];
            [parsing release];
            [self Update_View];
            //[myIndicator stopAnimating];
            [UIApplication sharedApplication].networkActivityIndicatorVisible = NO;
        }

    Here "DownloadAndParseUnderground" is the method that downloads data from the RSS feed, and:

        -(void) Update_View {
            [self.tblView1 reloadData];
        }

    When Update_View is called, the table view reloads its data, and cellForRowAtIndexPath raises the error and does not display the custom cell:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            CustomTableviewCell *cell = (CustomTableviewCell *) [tblView1 dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                [[NSBundle mainBundle] loadNibNamed:@"customCell" owner:self options:nil];
                cell = objCustCell;
                objCustCell = nil;
            }
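    A minimal sketch of one common fix, assuming the crash comes from touching UIKit (the table view and the activity indicator) on the detached thread - push that work back to the main thread:

        -(void)CallParser {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];   // a detached thread needs its own pool (pre-ARC)
            Parsing *parsing = [[Parsing alloc] init];
            [parsing DownloadAndParseUnderground];
            [parsing release];
            // UIKit is not thread-safe: hop back to the main thread before touching the table view
            // (move the networkActivityIndicatorVisible = NO call into Update_View as well).
            [self performSelectorOnMainThread:@selector(Update_View) withObject:nil waitUntilDone:NO];
            [pool release];
        }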

    Read the article

  • Delay PHP execution until JavaScript cookie set?

    - by Adam184
    I am trying to delay PHP execution until a cookie is set through JavaScript. The code is below; I trimmed the createCookie JavaScript function for simplicity (I've tested the function itself and it works).

        <?php if(!isset($_COOKIE["test"])) { ?>
        <script type="text/javascript">
        $(function() {
            // createCookie script
            createCookie("test", 1, 3600);
        });
        </script>
        <?php
        // Reload the page to ensure cookie was set
        if(!isset($_COOKIE["test"])) {
            header("Location: http://localhost/asdf.php/");
        }
        } ?>

    At first I had no idea why this didn't work; however, after using microtime() I figured out that the PHP after the <script> block was executing before the jQuery ready function. I reduced my code significantly to show a simple, answerable version. I am well aware that I am able to use setcookie() in PHP; the requirements for the cookie are client-side. I understand mixing PHP and JavaScript like this is incorrect, but any help on how to make this work would be greatly appreciated (is there a PHP delay? I tried sleep(), which didn't work, and I didn't think it would, since the script output would be delayed as well).
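    For context: PHP finishes on the server before the browser runs any JavaScript, so the second isset() check can never see a cookie that the script above it sets. A minimal sketch of the usual workaround - set the cookie in JavaScript and reload, so the next request (and its PHP) carries it; this assumes the trimmed createCookie works as described:

        <script type="text/javascript">
        $(function() {
            if (document.cookie.indexOf("test=") === -1) {   // cookie not present yet
                createCookie("test", 1, 3600);               // set it client-side, as required
                window.location.reload();                    // the next request sends the cookie to PHP
            }
        });
        </script>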

    Read the article

  • Add values to an array after isset()

    - by user1656692
    I'm trying to add elements to an array after subsequent trials, but so far only one value is being added to the array. I've Googled and searched Stack Overflow, and I seem to be getting only half the picture, unless I'm implementing it wrong. There are about 40 files which will need to be submitted one after another, and then a value from each trial is stored in the database. So far, this is what I've done:

        $_SESSION['task2'] = array();

        //Submit Task 1
        if (isset($_POST['submit_task_01'])) {
            $trial1_ac_sec = cleanInput($_POST['clockInputTask_01ac']);
            $trial1_est_sec = cleanInput($_POST['clockInputTask_01']);
            $trial1_ac = round(($trial1_ac_sec * 42.67), 2);
            $trial1_est = round(($trial1_est_sec * 42.67), 2);
            $trial1_judgErr = $trial1_ac - $trial1_est;
            $trial_1error = round($trial1_judgErr, 2);
            array_push($_SESSION['task2'], $trial_1error);
            header("location: Trial_2.php");
        }

        //Submit Task 2
        if (isset($_POST['submit_task_02'])) {
            $trial2_ac_sec = cleanInput($_POST['clockInputTask_02ac']);
            $trial2_est_sec = cleanInput($_POST['clockInputTask_02']);
            $trial2_ac = round(($trial2_ac_sec * 42.67), 2);
            $trial2_est = round(($trial2_est_sec * 42.67), 2);
            $trial2_judgErr = $trial2_ac - $trial2_est;
            $trial_2error = round($trial2_judgErr, 2);
            array_push($_SESSION['task2'], $trial_2error);
            header("location: newEmptyPHPWebPage.php");
        }

    ... and so on, up until 40. I'm just wondering what I am doing wrong. I know that each submission reloads the page and the previous data won't be available, so I thought I'd create an array in the session and then push data into it, but that doesn't seem to work. If anyone has any ideas on what I can do, I'll greatly appreciate it. Thank you.
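    If that first line runs at the top of every request, it re-creates the array and throws away the previously pushed values before the new one is added. A minimal sketch of the usual guard (assuming session_start() is, or can be, called on each page):

        <?php
        session_start();                          // must run before $_SESSION is touched
        if (!isset($_SESSION['task2'])) {
            $_SESSION['task2'] = array();         // initialise once, not on every submission
        }
        // each submit handler can then keep appending:
        // array_push($_SESSION['task2'], $trial_1error);
        ?>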

    Read the article

  • Handle submission of forms created dynamically having same class

    - by user1504383
    I am creating a form for users to comment on each post displayed through a loop, and the form for commenting is also in the same loop. Now I want each comment to be submitted via jQuery Ajax, but each time it only takes the first form into account. Here is my code:

        while($row=mysql_fetch_array($result)) { ?>
            <?=$row['title']?>
            <h4>Add commment </h4>
            <form class="add_comment" method="post">
                <div style="display:none;"><input type="text" name="id" class="id" value="<?=$row['id']?>"/></div>
                <input type="text" name="comment" class="comment"/>
                <input type="submit" name="submit" value="add" class="submit"/>
            </form>
        <?php } ?>

    And my jQuery goes here:

        $("form.add_comment").submit(function(event) {
            event.preventDefault();
            var comment = $('.comment').attr('value');
            var id = $('.id').attr('value');
            $.ajax({
                type: "POST",
                url: "/add_comment",
                data: "comment="+comment+"&id="+id,
                success: function() {
                    location.reload();
                }
            });
            return false;
        });

    I understand the error - it selects the first one by default - but I couldn't fix it. Please help.
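    The selectors '.comment' and '.id' match every comment field on the page, so jQuery always returns the value from the first form. A minimal sketch of scoping the lookup to the form that was actually submitted:

        $("form.add_comment").submit(function(event) {
            event.preventDefault();
            var $form   = $(this);                        // only the form that fired the event
            var comment = $form.find('.comment').val();
            var id      = $form.find('.id').val();
            $.ajax({
                type: "POST",
                url: "/add_comment",
                data: { comment: comment, id: id },
                success: function() {
                    location.reload();
                }
            });
        });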

    Read the article

  • jCarousel jQuery ajax loading 1000 records

    - by user1714862
    I'm using jCarousel to present a vertical scrolling list of +-1000 names. I am using Ajax to load the data 100 records at a time; when all the data has loaded I just let the jCarousel loop in the DOM. I have the Ajax and the loop working, but I would like to make the code work no matter how large the total record count becomes.
    1) I'd like to eliminate the fixed number 1201 and use a variable.
    2) I currently loop on every record I see (carousel.first) to check whether it matches one of my reload positions (albeit the loop is only 12x, it still seems a little "loopy").
    Any suggestions on improving this?

        function mycarousel_itemLoadCallback(carousel, state) {
            //if (carousel.has(carousel.first, carousel.last)) {
            //return;
            //}
            var getCount = 100;  // Number of records to grab at a time
            var maxCount = 1201; // total possible number of records
            var visible = 9;     // the number of records you can see in the window, so this creates a pre-load by this number of records
            for (var i = 1; i < maxCount; i += getCount) {
                if (carousel.first === 1 || carousel.first === (i - visible)) {
                    var getFrom = i;
                    var getTo = getFrom + (getCount - 1);
                    //alert('TOP Record ='+carousel.first+'\n Now GET '+getFrom+'-'+getTo);
                    jQuery.get('#ajaxscript#',
                        { first: getFrom, last: getTo },
                        function(xml) {
                            mycarousel_itemAddCallback(carousel, getFrom, getTo, xml);
                        },
                        'xml'
                    );
                    break;
                }
            }
        };
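    A sketch of one way to drop both the fixed 1201 and the 12x loop: the trigger positions (1, 101-9, 201-9, ...) can be recognised with modular arithmetic, and the total can come from a variable supplied by the server (window.totalRecordCount is an illustrative name, not part of jCarousel):

        function mycarousel_itemLoadCallback(carousel, state) {
            var getCount = 100;                                  // records per Ajax request
            var visible  = 9;                                    // pre-load this many records ahead
            var maxCount = window.totalRecordCount || 1201;      // assumption: total count published by the server

            var getFrom;
            if (carousel.first === 1) {
                getFrom = 1;                                     // initial load
            } else if ((carousel.first + visible) % getCount === 1) {
                getFrom = carousel.first + visible;              // same trigger points as the loop, without the loop
            } else {
                return;                                          // nothing to fetch at this position
            }
            if (getFrom >= maxCount) { return; }

            var getTo = getFrom + (getCount - 1);
            jQuery.get('#ajaxscript#', { first: getFrom, last: getTo }, function(xml) {
                mycarousel_itemAddCallback(carousel, getFrom, getTo, xml);
            }, 'xml');
        }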

    Read the article

  • Why doesn't Ajax work unless I refresh or use location.href?

    - by Connor Tang
    I am working on an HTML project which will eventually be packaged with PhoneGap. I am trying to encode the data from an HTML form into JSON format, then use Ajax to send it to a PHP file that resides on the server, and receive the response to do something else. Now I use <a href='login.html'> in my index.html to open the login page. In my login page, I have this to send my data to the server:

        <script>
        $(document).ready(function(e) {
            $('#loginform').submit(function(){
                var jData = {
                    "email": $('#emailLogin').val(),
                    "password": $('#Password').val()};
                $.ajax({
                    url: 'PHP/login.php',
                    type:'POST',
                    data: jData,
                    dataType: 'json',
                    async: false,
                    error: function(xhr,status){
                        //reload();
                        location.href='index.html';
                        alert('Wrong email and password');
                    },
                    success: function(data){
                        if(data[1] == 1){
                            var Id_user = data[0];
                            location.href='loginSuccess.html';
                        }
                    }
                });
            });
        });
        </script>

    But I found that it doesn't work; it stays on the login page. I tried to enter data and submit again, and still nothing happens. Only after I refresh the login page and enter the data again does it give an error message or go to the loginSuccess page. However, when I use

        <script>
        function loadLogin(){
            location.href='login.html';
        }
        </script>

    to open the login page, everything works well. So what causes this? How can I modify this piece of code to make it better?
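    Whatever swallows the first attempt, one thing the handler does need is to cancel the form's native submission; otherwise the browser's own (non-Ajax) submit reloads login.html and races the $.ajax call. A minimal sketch with the default submit suppressed and the synchronous flag dropped, assuming the same ids and PHP endpoint as above:

        $(document).ready(function() {
            $('#loginform').submit(function(event) {
                event.preventDefault();            // stop the native submit from reloading login.html
                var jData = {
                    email: $('#emailLogin').val(),
                    password: $('#Password').val()
                };
                $.post('PHP/login.php', jData, function(data) {
                    if (data[1] == 1) {
                        location.href = 'loginSuccess.html';
                    } else {
                        alert('Wrong email and password');
                    }
                }, 'json');
            });
        });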

    Read the article

  • Use jQuery inside data returned by Ajax using "innerHTML"

    - by me.again
    Hi, I want to use jQuery inside data returned by Ajax via innerHTML. Look here:

        <a href=\"#\" onclick=\"$.post('". $url ."', {'t' : 't'}, function(data){ $('content_rows').attr('innerHTML',data);}); " . $this->js_rebind .";return false;\">" $text .'</a>';

    This link moves between pages via Ajax by reloading the div that contains the data, without - of course - reloading the whole page. Like this:

        <div id="content_rows">
            rows from mysql database
        </div>

    Now everything is OK, but I use the detailsRow plugin like this:

        <script type="text/javascript">
        $(document).ready(function() {
            $('#rows').detailsRow('admin/blog/detailsRow',{ data:{"id":"id"} , dataType: "script" });
        });
        </script>

    This plugin makes every TR/row in the table show more details on click (+/-); see http://webworkflow.co.uk/plugins/detailsRow/. The plugin works fine on the first page (before the div is reloaded by jQuery), but after the div is reloaded - on the other pages, and also when I go back to the first page - it doesn't work. I put the code inside the content_rows div like this:

        <div id="content_rows">
            <script type="text/javascript">
            $(document).ready(function() {
                $('#rows').detailsRow('admin/blog/detailsRow',{ data:{"id":"id"} , dataType: "script" });
            });
            </script>
        </div>

    but that doesn't work either. Sorry, I'm a beginner in jQuery. Thanks.
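    One common pattern here: whatever the plugin bound to the old rows is lost when the div's HTML is replaced, so the plugin has to be re-initialised after the new rows are inserted, i.e. inside the $.post callback. A minimal sketch reusing the exact detailsRow call from above (whether the plugin needs extra teardown is an assumption to verify):

        $.post(url, {'t': 't'}, function(data) {
            $('#content_rows').html(data);   // swap in the new rows
            // re-attach the plugin to the freshly inserted table, same call as on first page load
            $('#rows').detailsRow('admin/blog/detailsRow', { data: {"id": "id"}, dataType: "script" });
        });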

    Read the article

  • Rails uses wrong class in belongs_to

    - by macsniper
    I have an application managing software tests and a class called TestResult:

        class TestResult < ActiveRecord::Base
          belongs_to :test_case, :class_name => "TestCase"
        end

    I'm migrating from Rails 1.x to 2.3.5. In Rails 1.x everything works fine. When trying to access the association in Rails 2.3.5, I get the following error:

        NoMethodError: undefined method 'find' for ActiveRecord::TestCase:Class
            from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/associations/belongs_to_association.rb:49:in 'send'
            from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/associations/belongs_to_association.rb:49:in 'find_target'
            from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/associations/association_proxy.rb:239:in 'load_target'
            from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/associations/association_proxy.rb:112:in 'reload'
            from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1250:in 'test_case'

    My question is: how can I tell Rails to use my TestCase class instead of ActiveRecord::TestCase? The TestCase class:

        class TestCase < ActiveRecord::Base
          set_table_name "test_case"
          has_many :test_results
          belongs_to :component, :foreign_key => "subsystem_id"
          belongs_to :domain, :foreign_key => "area_id"
          belongs_to :use_case, :foreign_key => "use_case_id"
        end
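    One workaround sometimes suggested for this kind of constant collision (Rails 2.3 resolves "TestCase" to ActiveRecord::TestCase before reaching the app's top-level model) is to anchor the class name at the top-level namespace; whether this resolves correctly under Rails 2.3's constant lookup is something to verify rather than a guarantee:

        class TestResult < ActiveRecord::Base
          # "::TestCase" asks for the top-level constant instead of ActiveRecord::TestCase
          belongs_to :test_case, :class_name => "::TestCase"
        end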

    Read the article

  • CSS - background-size: cover; not working in Firefox

    - by Jayant Bhawal
    body{
        background-image: url("./content/site_data/bg.jpg");
        background-size: cover;
        background-repeat: no-repeat;
        font-family: 'Lobster', cursive;
    }

    Check http://demo.jayantbhawal.in in Firefox, NOT in widescreen mode. The code works in Chrome (Android + PC) and even the stock Android browser, but NOT in Firefox (Android + PC). Is there any good alternative to it? Why is it not working anyway? I've Googled this issue a lot of times, but no one else seems to have this problem. Is it just me? In any case, how do I fix it? There are quite a few questions on SO about it too, but none of them provide a legitimate solution, so can someone just tell me if they have background-size: cover; issues in Firefox too? So basically tell me three things:
    1. Why is it happening?
    2. What is a good alternative to it?
    3. Is this happening to you too? On Firefox browsers, of course.
    Chrome Version 35.0.1916.114 m
    Firefox Version 29.0.1
    Note: I may already be trying to fix it, so at times you may see a totally weird page. Wait a bit and reload.

    Read the article

  • Using AJAX to POST data to PHP database, then refresh

    - by cb74656
    Currently I have these buttons:

        <ul>
            <li><button onclick="display('1')">1</button></li>
            <li><button onclick="display('2')">2</button></li>
            <li><button onclick="display('3')">3</button></li>
        </ul>

    When pressed, each calls a JavaScript function and displays PHP output based on which button is pressed, using Ajax. I figured this out all on my own. The Ajax gets a PHP file with a Postgres query that outputs a table of data to a div. Now I want to be able to add new data via a form and have the table refresh (without reloading the page, y'know?). I've tried a couple of things and hit roadblocks every time. My initial idea was to have the form submit the data using a JavaScript function and Ajax, then call my display() function after the query to reload the content. I just can't figure it out using Google-fu. Based on my current idea, I'd like help with the following:
    1. How do I pass the form data to a JavaScript function?
    2. How do I use POST to pass that data to PHP using Ajax?
    I'm super new to JavaScript and Ajax. I've looked into jQuery as it seems like that's the way to go, but I can't figure it out. If there's a better way to do this, I'm open to suggestions. Please forgive any misuse of nomenclature.
    EDIT: Once I solve this problem, I'll have all the tools needed to finish the project preliminarily.
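    A minimal jQuery sketch of both steps: serialize the form, POST it, then call the existing display() to refresh the table. The form id and the PHP endpoint name are illustrative assumptions; display() is the function already mentioned above:

        // Assumes a form like <form id="addForm"> ... </form> and a PHP endpoint add_data.php
        $('#addForm').submit(function(event) {
            event.preventDefault();                       // stay on the page
            $.post('add_data.php', $(this).serialize(), function() {
                display('1');                             // re-run the existing display() to refresh the table
            });
        });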

    Read the article

  • Most efficient way to create a "slider" timeline in HTML, CSS, and JavaScript?

    - by ZapChance
    Alright, so here's my dilemma. I've got these two "slides" lined up, one ready to be passed into view. I have it working and all, but I can scroll over to the right to see the second slide! How can I make it so you can only view the one slide? JavaScript used:

        function validate(form) {
            var user = form.username.value;
            var pass = form.password.value;
            if(user === "test") {
                if(pass === "pass") {
                    var hideoptions = {"direction" : "left", "mode" : "hide"};
                    var showoptions = {"direction" : "left", "mode" : "show"};
                    /*$("#loginView").toggle("slide", hideoptions, 1000, function() {
                        $("#loginView").css("margin-left", "100%");
                    });
                    $("#mainView").toggle("slide", showoptions, 1000, function() {
                        $("#mainView").css("margin-left", 0);
                    });*/
                    $("#loginView").animate({ marginLeft: "-100%" }, 1000);
                    $("#mainView").animate({ marginLeft: "0" }, 1000);
                } else {
                    alert("nope");
                }
            } else {
                alert("nope 2");
            }
        }

    As you can see at http://jsfiddle.net/D7Va3/1/ (JSFiddle), once you enter "test" and "pass" and click enter, the tiles slide. But if you reload, you can see that you can scroll to the right of the screen and view the second slide prematurely, which is just not going to work for me. I still need to achieve the same seamless transition, but you must only be able to view one slide at a time. Also, I plan to expand with more slides, so if you're feeling lucky today, I'd really love to see an example of how I could support multiple frames. I'm very new to JavaScript (yet I know syntax rules and general coding knowledge from other languages), so the better you explain, the more knowledgeable I can be, and I'd be very grateful for that. Thanks in advance!
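    A common way to keep the off-screen slide from being reachable is to clip all slides inside a fixed viewport; a minimal sketch (the class names are illustrative, not taken from the fiddle):

        /* viewport wraps #loginView, #mainView, and any future slides */
        .slide-viewport {
            position: relative;
            overflow: hidden;      /* anything pushed past the edge is clipped, not scrollable */
            width: 100%;
        }
        .slide-viewport .slide {
            position: absolute;
            top: 0;
            width: 100%;           /* each slide fills the viewport; animate margin/left to swap them */
        }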

    Read the article

  • UIWebView loading/rendering error after resize

    - by user1343869
    I have a screen with two UIWebViews. The user can drag the views left and right to make the right or left view bigger (respectively) and the other one smaller (like a UISplitView, but customized and self-made). I'm loading .html pages from strings and local .css files. After resizing the UIWebView, if I load a new page there will be a black or white stripe on the right side of the UIWebView. This stripe is part of the web view (not a space between the views), and if I scroll the web view up and then down, the stripe vanishes and the page is presented correctly. This issue occurs only in iOS 6 and only on the device (it doesn't occur in the simulator). Some notes:
    - The .css file contains elements with fixed position. Changing to absolute position didn't solve the problem but changed it: the black stripe occurred during the drag.
    - The slower the drag, the bigger the stripe.
    - After the resize the page is presented correctly; only when I load a new page is the stripe shown.
    - The time between resizing the web view and loading a page doesn't matter; it can be straight away or after a couple of minutes.
    Now, as a workaround, I create a new UIWebView and copy the old properties to the new one. But then I need to reload the presented page, which causes a white blink... Any idea why this happens, and how to fix it?

    Read the article

  • How can I get sessions to work if I'm using Google App Engine + Django 1.1?

    - by user341642
    Is there a way for me to get sessions working? I know Django has built in session management, and GAE has some tools for it if you're using their watered down version of Django 0.96, but is there a way to get sessions to work if you're trying to use GAE w/ Django 1.1 (i.e. use_library() call). I assume using a db-backed session doesn't work, and a file system backed one won't work b/c we don't have access to the filesystem if we deploy to the Google production servers. This kinda worked (as in didn't crap out) when I used SessionMiddleware backed by a local-memory backed cache and a non-persistent cache (i.e. setting SESSION_ENGINE to django.contrib.sessions.backends.cache). But the session never seems to persist in this case, no matter how I set the timeouts. A new session key is generated on every page reload. Maybe this is b/c the GAE assumes complete statelessness with each request and blows away my local cache? Apologies in advance, I'm pretty new to Python. Any suggestions would be greatly appreciated.

    Read the article

  • My game plays itself?

    - by sc8ing
    I've been making a canvas game in HTML5 and am new to a lot of it. I would like to use solely javascript to create my elements too (it is easier to embed into other sites this way and I feel it is cleaner). I had my game reloading in order to play again, but I want to be able to keep track of the score and multiple other variables without having to put each into the URL (which I have been doing) to make it seem like the game is still going. I'm also going to add "power ups" and other things that need to be "remembered" by the script. Anyways, here's my question. When one player kills another, the game will play itself for a while and make my computer very slow until I reload the page. Why is it doing this? I cleared the interval for the main function (which loops the game and keeps everything running) and this used to make everything stop moving - it no longer does. What's wrong here? This is my game: http://dl.dropbox.com/u/11168436/game/game.html Controls: Move the skier with arrow keys and shoot with M (you shoot in the direction you were last moving in). The snowboarder is moved with ESDF and shoots with Q. Thanks for your time.
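    One frequent cause of a canvas game "running on its own" and getting slower is that each restart calls setInterval again without clearing the previous timer, so several game loops end up stacked. A minimal sketch of guarding the loop (the function names are illustrative):

        var gameLoopId = null;   // keep the handle so the loop can be stopped and restarted

        function startGame() {
            if (gameLoopId !== null) {
                clearInterval(gameLoopId);                   // never let two loops run at once
            }
            gameLoopId = setInterval(mainLoop, 1000 / 30);   // mainLoop = the existing update/draw function
        }

        function endGame() {
            clearInterval(gameLoopId);
            gameLoopId = null;
            // reset score and other state here instead of reloading the page with URL parameters
        }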

    Read the article

  • Need help to solve a SharedPreferences problem

    - by HFherasen
    I am working on an app with one EditText field where you can write something; it then gets saved and added to a list (a TextView). I save the content of the EditText this way:

        saved += "*" + editTextFelt.getText().toString() + ". \n";

    saved is a String. Everything works fine - I can even reload the app and it's still displayed in the TextView - but if I try to write something and save it, everything that was there before disappears. Anyone know why? It's kind of confusing, and I have to get it to work! Thanks!

    CODE: the init method:

        sp = getSharedPreferences(fileName, 0);
        betaView = (TextView)findViewById(R.id.betaTextView);

    I've got a button to send the text, which looks like:

        public void onClick(View v) {
            switch(v.getId()){
                case R.id.btnSend:
                    saved += "*" + editTextFelt.getText().toString() + ". \n";
                    SharedPreferences.Editor editor = sp.edit();
                    editor.putString("SAVED", saved);
                    editor.commit();
                    betaView.setText(sp.getString("SAVED", "Empty"));
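    A likely culprit, going by the code shown: after the app restarts, the in-memory String saved starts out empty, so the first new commit overwrites everything stored earlier. A minimal sketch of seeding it from SharedPreferences in the init method:

        sp = getSharedPreferences(fileName, 0);
        betaView = (TextView) findViewById(R.id.betaTextView);
        saved = sp.getString("SAVED", "");      // load what was persisted last time, otherwise the next commit wipes the list
        betaView.setText(sp.getString("SAVED", "Empty"));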

    Read the article

  • Change form submission (enter to tab)

    - by user1298883
    I have a really basic form (code below) with a bunch of back-panel PHP. A scanner is being used to input the data, but instead of sending a tab after each item, it sends an "enter" command. Is it viable to add JavaScript so that Enter instead tabs to the next form field, and on the last form field submits the form? I have found a few scripts online, but none that I have tried have worked in Firefox/Chrome. CODE:

        <html><head><title>Barcode Generation</title></head><body>
        <fieldset style="width: 300px;">
        <form action="generator.php" method="post">
        Invoice Number:<input type="text" name="invoice" /><br />
        Model Number:<input type="text" name="model" /><br />
        Serial Number:<input type="text" name="serial" /><br />
        <input type="hidden" name="reload" value="true" />
        <input type="submit" />
        </form><br /><a href=null>en espanol</a></fieldset>
        </body></html>
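    A minimal plain-JavaScript sketch of the Enter-to-Tab behaviour, assuming the three visible text inputs above; Enter on the last field is left alone so it still submits:

        <script type="text/javascript">
        document.addEventListener('keydown', function (e) {
            if (e.keyCode !== 13) return;                       // only intercept Enter
            var fields = document.querySelectorAll('form input[type="text"]');
            for (var i = 0; i < fields.length - 1; i++) {       // skip the last field: Enter there submits
                if (e.target === fields[i]) {
                    e.preventDefault();                         // stop the premature submit
                    fields[i + 1].focus();                      // behave like Tab
                    return;
                }
            }
        });
        </script>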

    Read the article

  • nginx php5-fpm "File not found" -- FastCGI sent in stderr: "Primary script unknown"

    - by jmfayard
    So I'm trying, for the first time, to run the nginx web server with php5-fpm on a Debian Wheezy server. Hitting a PHP file simply displays "File not found". I have done my research (wasted a lot of hours actually ;). There are a lot of people with similar problems, yet I didn't succeed in correcting it with what worked for them. I still have the same error:

        $ tail /var/log/nginx/access.log /var/log/nginx/error.log /var/log/php5-fpm.log | less
        ==> /var/log/nginx/error.log <==
        2013/10/26 21:36:00 [error] 6900#0: *1971 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream,

    I have tried a lot of things; it's hard to remember what. I have put my config files on GitHub: my /etc/nginx/nginx.conf and my /etc/php5/fpm/php-fpm.conf. Currently, the nginx.conf configuration uses this:

        server {
            server_name mydomain.tld;
            root /srv/data1/test;
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    /etc/php5/fpm/pool.d/www.conf contains:

        listen = 127.0.0.1:9000

    I have tried the unix socket version, same thing:

        fastcgi_pass unix:/var/run/php5-fpm.sock;

    I made sure the server is started:

        $ netstat -alnp | grep LISTEN
        tcp  0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 6913/php-fpm.conf)
        tcp  0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 4785/mysqld
        tcp  0 0 0.0.0.0:842    0.0.0.0:* LISTEN 2286/inetd
        tcp  0 0 0.0.0.0:111    0.0.0.0:* LISTEN 2812/rpcbind
        tcp  0 0 0.0.0.0:80     0.0.0.0:* LISTEN 5710/nginx
        tcp  0 0 0.0.0.0:22     0.0.0.0:* LISTEN 2560/sshd
        tcp  0 0 0.0.0.0:443    0.0.0.0:* LISTEN 5710/nginx
        tcp6 0 0 :::111         :::*      LISTEN 2812/rpcbind
        unix 2 [ ACC ] STREAM LISTENING 323648 6574/tmux /tmp//tmux-1000/default
        unix 2 [ ACC ] STREAM LISTENING 619072 6790/fcgiwrap /var/run/fcgiwrap.socket
        unix 2 [ ACC ] SEQPACKET LISTENING 323 464/udevd /run/udev/control
        unix 2 [ ACC ] STREAM LISTENING 610686 2812/rpcbind /var/run/rpcbind.sock
        unix 2 [ ACC ] STREAM LISTENING 318633 4785/mysqld /var/run/mysqld/mysqld.sock

    Every time I modify the nginx.conf file, I make sure to reload it with:

        nginx -t && nginx -s reload && echo "nginx configuration reloaded"

    and the same for php5-fpm:

        /etc/init.d/php5-fpm restart

    Thanks for your help :-)
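    "Primary script unknown" is php-fpm reporting that it could not open the path nginx handed it as SCRIPT_FILENAME. A sketch of making that path visible per request, so it can be compared with what actually exists under /srv/data1/test (the log_format name and log path are illustrative):

        # inside the http{} block: log the path passed to php-fpm for each request
        log_format scriptlog '$remote_addr "$request" -> "$document_root$fastcgi_script_name"';

        server {
            server_name mydomain.tld;
            root /srv/data1/test;

            location ~ \.php$ {
                access_log /var/log/nginx/php_script.log scriptlog;
                try_files $uri =404;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                # declare SCRIPT_FILENAME after the include so nothing in fastcgi_params takes precedence
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }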

    Read the article

  • tightvnc authentication failure

    - by broiyan
    When I run a TightVNC client to establish a VNC session, I sometimes receive an error message that suggests there are repeated failed VNC login attempts or a brute-force attack. The message dialog title is "unsupported security type" and the text content is "too many authentication failures, try another connection? yes/no". This problem goes away if I reboot the Ubuntu server, reload the VNC server program, and try again. From that point, it will work for multiple VNC sessions. My VNC sessions are typically about 20 minutes. At some time in the future the problem may recur, so it seems correlated to the time the server has been up or the time TightVNC has been loaded. Typically it takes only a day or so before the problem comes back. I am using TightVNC 1.3 on a server running Ubuntu 12.04. The version of vncserver is rather dated because that seems to be all that is available from TightVNC for Linux servers. On the client side I use the newest Java-based VNC client (version 2.5) for both Windows access and Ubuntu access. All my VNC sessions are via SSH. I am the only user and I will typically use only the same client computer. How can I stop this problem from recurring?

    Edit: I found the log file. This is a small excerpt of what I am seeing. Essentially, various IPs, not my own, are attempting to connect. What is the practical solution for this?

        05/06/12 20:07:32 Got connection from client 69.194.204.90
        05/06/12 20:07:32 Non-standard protocol version 3.4, using 3.3 instead
        05/06/12 20:07:32 Too many authentication failures - client rejected
        05/06/12 20:07:32 Client 69.194.204.90 gone
        05/06/12 20:07:32 Statistics:
        05/06/12 20:07:32   framebuffer updates 0, rectangles 0, bytes 0
        05/06/12 20:24:56 Got connection from client 79.161.16.40
        05/06/12 20:24:56 Non-standard protocol version 3.4, using 3.3 instead
        05/06/12 20:24:56 Too many authentication failures - client rejected
        05/06/12 20:24:56 Client 79.161.16.40 gone
        05/06/12 20:24:56 Statistics:
        05/06/12 20:24:56   framebuffer updates 0, rectangles 0, bytes 0
        05/06/12 20:29:27 Got connection from client 109.230.246.54
        05/06/12 20:29:27 Non-standard protocol version 3.4, using 3.3 instead
        05/06/12 20:29:28 rfbVncAuthProcessResponse: authentication failed from 109.230.246.54
        05/06/12 20:29:28 Client 109.230.246.54 gone
        05/06/12 20:29:28 Statistics:
        05/06/12 20:29:28   framebuffer updates 0, rectangles 0, bytes 0
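    Since every legitimate session already runs over SSH, one common remedy is to stop exposing the VNC port to the Internet at all, so the stray clients that exhaust TightVNC's failure counter never reach it; a sketch, assuming display :1 / TCP port 5901:

        # Drop VNC traffic that does not come from the local SSH tunnel
        iptables -A INPUT -p tcp --dport 5901 ! -s 127.0.0.1 -j DROP

        # If the vncserver wrapper passes it through to Xvnc, -localhost binds to the loopback interface only
        vncserver -localhost :1

        # From the client machine, reach the server through the existing SSH access
        ssh -L 5901:localhost:5901 user@server    # then point the VNC viewer at localhost:5901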

    Read the article

  • pfsense peer-to-peer OpenVPN not connecting

    - by John P
    I'm trying to setup a peer-to-peer OpenVPN between two pfsense servers running 2.0.1-RELEASE, but the client keeps getting the connection dropped, with a status of "reconnecting; ping-restart" and nothing appears to be routing between them. Both these firewalls are also doing PPTP VPNs that are working correctly.

    FW01 ("server")
    LAN: 10.1.1.2/24
    WAN: xx.xx.126.34/27
    ServerMode: Peer to Peer (Shared Key)
    Protocol: UDP
    DeviceMode: tun
    Interface: WAN
    Port 1194
    Tunnel: 10.0.8.1/30
    Local Network: 10.1.1.0/24
    Remote Network: 192.168.1.0/24
    Firewall Rule in OpenVPN tab: UDP * * * * * none

    FW03 (client)
    LAN: 192.168.1.2/24
    WAN: xx.xx.9.66/27
    ServerMode: Peer to Peer (Shared Key)
    Protocol: UDP
    DeviceMode: tun
    Interface: WAN
    Server Host: xx.xx.126.34
    Tunnel: -- also tried 10.1.8.0/24
    Remote Network: 10.1.1.0/24

    Client Logs:

    System Log
        Apr 6 18:00:08 kernel: ... Restarting packages.
        Apr 6 18:00:13 check_reload_status: Starting packages
        Apr 6 18:00:19 php: : Restarting/Starting all packages.
        Apr 6 18:00:56 kernel: ovpnc1: link state changed to DOWN
        Apr 6 18:00:56 check_reload_status: Reloading filter
        Apr 6 18:00:57 check_reload_status: Reloading filter
        Apr 6 18:00:57 kernel: ovpnc1: link state changed to UP
        Apr 6 18:00:57 check_reload_status: rc.newwanip starting ovpnc1
        Apr 6 18:00:57 check_reload_status: Syncing firewall
        Apr 6 18:01:02 php: : rc.newwanip: Informational is starting ovpnc1.
        Apr 6 18:01:02 php: : rc.newwanip: on (IP address: ) (interface: ) (real interface: ovpnc1).
        Apr 6 18:01:02 php: : rc.newwanip: Failed to update IP, restarting...
        Apr 6 18:01:02 php: : send_event: sent interface reconfigure got ERROR: incomplete command. all reload reconfigure restart newip linkup sync

    Client OpenVPN log
        Apr 6 18:39:14 openvpn[12177]: Inactivity timeout (--ping-restart), restarting
        Apr 6 18:39:14 openvpn[12177]: SIGUSR1[soft,ping-restart] received, process restarting
        Apr 6 18:39:16 openvpn[12177]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
        Apr 6 18:39:16 openvpn[12177]: Re-using pre-shared static key
        Apr 6 18:39:16 openvpn[12177]: Preserving previous TUN/TAP instance: ovpnc1
        Apr 6 18:39:16 openvpn[12177]: UDPv4 link local (bound): [AF_INET]64.94.9.66
        Apr 6 18:39:16 openvpn[12177]: UDPv4 link remote: [AF_INET]64.74.126.34:1194

    Server OpenVPN log
        Apr 6 14:40:36 openvpn[22117]: UDPv4 link remote: [undef]
        Apr 6 14:40:36 openvpn[22117]: UDPv4 link local (bound): [AF_INET]xx.xx.126.34:1194
        Apr 6 14:40:36 openvpn[21006]: /usr/local/sbin/ovpn-linkup ovpns1 1500 1557 10.1.8.1 10.1.8.2 init
        Apr 6 14:40:36 openvpn[21006]: /sbin/ifconfig ovpns1 10.1.8.1 10.1.8.2 mtu 1500 netmask 255.255.255.255 up
        Apr 6 14:40:36 openvpn[21006]: do_ifconfig, tt-ipv6=0, tt-did_ifconfig_ipv6_setup=0
        Apr 6 14:40:36 openvpn[21006]: TUN/TAP device /dev/tun1 opened
        Apr 6 14:40:36 openvpn[21006]: Control Channel Authentication: using '/var/etc/openvpn/server1.tls-auth' as a OpenVPN static key file
        Apr 6 14:40:36 openvpn[21006]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
        Apr 6 14:40:36 openvpn[21006]: OpenVPN 2.2.0 amd64-portbld-freebsd8.1 [SSL] [LZO2] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Aug 11 2011
        Apr 6 14:40:36 openvpn[17171]: SIGTERM[hard,] received, process exiting
        Apr 6 14:40:36 openvpn[17171]: /usr/local/sbin/ovpn-linkdown ovpns1 1500 1557 10.1.8.1 10.1.8.2 init
        Apr 6 14:40:36 openvpn[17171]: ERROR: FreeBSD route delete command failed: external program exited with error status: 1
        Apr 6 14:40:36 openvpn[17171]: event_wait : Interrupted system call (code=4)
        Apr 6 14:06:32 openvpn[17171]: Initialization Sequence Completed
        Apr 6 14:06:32 openvpn[17171]: UDPv4 link remote: [undef]
        Apr 6 14:06:32 openvpn[17171]: UDPv4 link local (bound): [AF_INET]xx.xx.126.34:1194

    Read the article

  • multiple webapps in tomcat -- what is the optimal architecture?

    - by rvdb
    I am maintaining a growing base of mainly Cocoon-2.1-based web applications [http://cocoon.apache.org/2.1/], deployed in a Tomcat servlet container [http://tomcat.apache.org/] and proxied with an Apache HTTP server [http://httpd.apache.org/docs/2.2/]. I am conceptually struggling with the best way to deploy multiple web applications in Tomcat. Since I'm not a Java programmer and we don't have any sysadmin staff, I have to figure out myself what is the most sensible way to do this. My setup has evolved through two scenarios and I'm considering a third for maximal separation of the distinct webapps.

    [1] 1 Tomcat instance, 1 Cocoon instance, multiple webapps

        -tomcat
         |_ webapps
            |_ webapp1
            |_ webapp2
            |_ webapp[n]
            |_ WEB-INF (with Cocoon libs)

    This was my first approach: just drop all web applications inside a single Cocoon webapps folder inside a single Tomcat container. This seemed to run fine, and I did not encounter any memory issues. However, it poses a maintainability drawback, as some Cocoon components are subject to updates, which often affect the webapp coding. Hence, updating Cocoon becomes unwieldy: since all webapps share the same pool of Cocoon components, updating one of them would require the code in all web applications to be updated simultaneously. In order to isolate the web applications, I moved to the second scenario.

    [2] 1 Tomcat instance, each webapp in its dedicated Cocoon environment

        -tomcat
         |_ webapps
            |_ webapp1
            |  |_ WEB-INF (with Cocoon libs)
            |_ webapp2
            |  |_ WEB-INF (with Cocoon libs)
            |_ webapp[n]
               |_ WEB-INF (with Cocoon libs)

    This approach separates all webapps into their own Cocoon environment, run inside a single Tomcat container. In theory, this works fine: all webapps can be updated independently. However, it soon results in PermGenSpace errors. It seemed that I could manage the problem by increasing memory allocation for Tomcat, but I realise this isn't a structural solution, and that overloading a single Tomcat in this way is prone to future memory errors. This set me thinking about the third scenario.

    [3] multiple Tomcat instances, each with a single webapp in its dedicated Cocoon environment

        -tomcat
         |_ webapps
            |_ webapp1
               |_ WEB-INF (with Cocoon libs)
        -tomcat
         |_ webapps
            |_ webapp2
               |_ WEB-INF (with Cocoon libs)
        -tomcat
         |_ webapps
            |_ webapp[n]
               |_ WEB-INF (with Cocoon libs)

    I haven't tried this approach, but am thinking of the $CATALINA_BASE variable. A single Tomcat distribution can be instantiated multiple times with different $CATALINA_BASE environments, each pointing to a Cocoon instance with its own webapp. I wonder whether such an approach could avoid the structural memory-related problems of approach [2], or will the same issues apply? On the other hand, this approach would complicate management of the Apache HTTP frontend, as it will require the AJP connectors of the different Tomcat instances to listen on different ports. Hence, Apache's worker configuration has to be updated and reloaded whenever a new webapp (in its own Tomcat instance) is added. And there seems to be no way to reload worker.properties without restarting the entire Apache HTTP server. Is there perhaps another / more dynamic way of 'modularizing' multiple Tomcat-served webapps, or can one of these scenarios be refined? Any thoughts, suggestions, advice much appreciated. Ron
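    A minimal sketch of how one extra $CATALINA_BASE instance for scenario [3] is usually laid out and started (paths, ports, and memory flags are illustrative):

        # One shared Tomcat install (CATALINA_HOME), one directory per webapp instance (CATALINA_BASE)
        export CATALINA_HOME=/opt/tomcat
        export CATALINA_BASE=/srv/tomcat-webapp1

        mkdir -p $CATALINA_BASE/conf $CATALINA_BASE/logs $CATALINA_BASE/temp $CATALINA_BASE/webapps $CATALINA_BASE/work
        cp $CATALINA_HOME/conf/server.xml $CATALINA_HOME/conf/web.xml $CATALINA_BASE/conf/
        # edit $CATALINA_BASE/conf/server.xml: give this instance its own HTTP, AJP and shutdown ports

        export JAVA_OPTS="-XX:MaxPermSize=128m"      # per-instance PermGen headroom
        $CATALINA_HOME/bin/startup.sh                # startup.sh honours CATALINA_BASE from the environment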

    Read the article

  • Ubuntu web server 11.10 ftp/server issue

    - by Nate
    I was wondering if I could get some help with FTP - at least I'm pretty sure it has to do with FTP, although it could be something else; I'm not 100% sure. Now, fair warning, I'm no Ubuntu dominator, I'm pretty newb. Anyway, I've attempted to build a web server to test PHP and whatnot for a site I'm building. Everything works: the PHP, the SQL, etc. By the way, I built this in VMware, so it's virtual, over a network, so I can access stuff from anywhere. I'm in a college right now, so yeah. The one problem I have is this. I go into the terminal and do ifconfig to find my IP. I get it, go to a browser on a different machine, and type that IP in. I get the "index of /" page, where I can browse the website I'm making. I can click through folders and whatnot. I can click on things and they open up. Now let's say I'm working on my desktop and open up an FTP client and drag and drop something into there, then go to the IP in the browser again and try to open it. I either get "Server error: The website encountered an error while retrieving http://my_server_ip/phpinfo.php. It may be down for maintenance or configured incorrectly. Here are some suggestions: Reload this webpage later." or "Forbidden: You don't have permission to access /html.html on this server." But let's say I make the file on the server itself and try - bam, magic, it works. I'm sure I set the permissions to let everyone open and view the files, but maybe I didn't? I'm not sure, and this is where I was hoping I could get some help. By the way, I followed a tutorial on changing the www folder (Apache) from /var/www to home/"user"/www. I can't recall how I did that, but it's there and my FTP goes to the home/"user"/www folder. But yeah, any and all help is appreciated. Like I said, I'm really new to this, but I do enjoy attempting to make these servers and learning how they work, so it's not like making this web server is a project for a class; it's just assisting me in testing stuff for another class and possibly other websites later on down the road. Anyway, anyone who decides to help, thanks so much, I'd really appreciate it. Nate. P.S. I'm using Ubuntu 11.10 desktop edition with a LAMP server
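    Pages created directly on the server working while FTP-uploaded ones return Forbidden or a server error usually points at ownership or permission bits on the uploaded files rather than at Apache itself; a sketch of what to check, assuming the web root is /home/user/www and Apache runs as www-data:

        ls -l /home/user/www                      # who owns the uploaded files, and with what mode?

        sudo chown -R user:www-data /home/user/www
        find /home/user/www -type f -exec chmod 644 {} \;    # files readable by the web server
        find /home/user/www -type d -exec chmod 755 {} \;    # directories traversable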

    Read the article

  • Nginx Retry of Requests ( Nginx - Haproxy Combination )

    - by vaibhav
    I wanted to ask about nginx retrying requests. I have nginx running at the front, which sends requests to HAProxy, which then passes them on to the web server where the request is processed. I am reloading my HAProxy config dynamically to provide elasticity. The problem is that requests are dropped when I reload HAProxy, so I wanted a solution where I can just retry those from nginx. I looked through proxy_connect_timeout and proxy_next_upstream in the http module, and max_fails and fail_timeout in the server module. I initially had only one server in the upstream block, so I just put it in twice, and now fewer requests are getting dropped (only when I have the same server twice in the upstream; if I have the same server 3-4 times, drops increase). So, firstly, I wanted to know: when a request cannot establish a connection from nginx to HAProxy during the reload, it seems that the connection is treated as an error and the request is dropped straight away. How can I specify either the time after a failure at which I want nginx to retry the request upstream, or the time before which nginx treats it as a failed request? (I have tried increasing proxy_connect_timeout - it didn't help - as well as max_fails and fail_timeout, and also putting the same upstream server twice; that gave the best results so far.)

    Nginx conf file:

        upstream gae_sleep {
            server 128.111.55.219:10000;
        }

        server {
            listen 8080;
            server_name 128.111.55.219;
            root /var/apps/sleep/app;

            # Uncomment these lines to enable logging, and comment out the following two
            #access_log /var/log/nginx/sleep.access.log upstream;
            error_log /var/log/nginx/sleep.error.log;
            access_log off;
            #error_log /dev/null crit;
            rewrite_log off;
            error_page 404 = /404.html;
            set $cache_dir /var/apps/sleep/cache;

            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://gae_sleep;
                client_max_body_size 2G;
                proxy_connect_timeout 30;
                client_body_timeout 30;
                proxy_read_timeout 30;
            }

            location /404.html {
                root /var/apps/sleep;
            }

            location /reserved-channel-appscale-path {
                proxy_buffering off;
                tcp_nodelay on;
                keepalive_timeout 55;
                proxy_pass http://128.111.55.219:5280/http-bind;
            }
        }
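    A sketch of the retry-related knobs, on the assumption that two upstream entries for the same HAProxy are kept deliberately so nginx has a "next" server to try while HAProxy restarts:

        upstream gae_sleep {
            # same HAProxy twice: a failed attempt on the first entry can be retried on the second
            server 128.111.55.219:10000 max_fails=0;   # max_fails=0: never mark the backend as down over brief reload failures
            server 128.111.55.219:10000 max_fails=0;
        }

        location / {
            proxy_pass http://gae_sleep;
            # retry on connection errors and timeouts instead of returning an error immediately
            proxy_next_upstream error timeout;
            proxy_connect_timeout 5;     # fail an individual attempt quickly, then move to the next entry
        }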

    Read the article

  • Python Django sites on Apache+mod_wsgi with nginx proxy: highly fluctuating performance

    - by Halfgaar
    I have an Ubuntu 10.04 box running several dozen Python Django sites using mod_wsgi (embedded mode; the faster mode, if properly configured). Performance fluctuates wildly: sometimes fast, sometimes several seconds of delay. The Smokeping graphs are all over the place. Recently I also added an nginx proxy for the static content, in the hope it would cure the fluctuating performance. But even though it reduced the number of requests Apache has to process significantly, it didn't help with the main problem. Clicking around on websites while running htop shows that sometimes requests are almost instant, whereas sometimes they cause Apache to consume 100% CPU for a few seconds. I really don't understand where this fluctuation comes from.

    I have configured mpm_worker for Apache like this:

        StartServers          1
        MinSpareThreads      50
        MaxSpareThreads      50
        ThreadLimit          64
        ThreadsPerChild      50
        MaxClients           50
        ServerLimit           1
        MaxRequestsPerChild   0
        MaxMemFree         2048

    One server with 50 threads, max 50 clients. Munin and apache2ctl -t both show a consistent presence of workers; they are not destroyed and created all the time. Yet it behaves as if they were. This tells me that once a sub-interpreter is created it should remain in memory, yet it seems sites have to reload all the time. I also have an nginx+gunicorn box, which performs quite well. I would really like to know why Apache is so random. This is a virtual host config:

        <VirtualHost *:81>
            ServerAdmin [email protected]
            ServerName example.com
            DocumentRoot /srv/http/site/bla
            Alias /static/ /srv/http/site/static
            Alias /media/ /srv/http/site/media
            WSGIScriptAlias / /srv/http/site/passenger_wsgi.py
            <Directory />
                AllowOverride None
            </Directory>
            <Directory /srv/http/site>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    Ubuntu 10.04, Apache 2.2.14, mod_wsgi 2.8, nginx 0.7.65

    Edit: I've put some code in the settings.py file of a site that writes the date to a tmp file whenever it's loaded. I can now see that the site is not randomly reloaded all the time, so Apache must be keeping it in memory. So that's good, except it doesn't bring me closer to an answer...

    Edit: I just found an error that might also be related to this:

        File "/usr/lib/python2.6/subprocess.py", line 633, in __init__
            errread, errwrite)
        File "/usr/lib/python2.6/subprocess.py", line 1049, in _execute_child
            self.pid = os.fork()
        OSError: [Errno 12] Cannot allocate memory

    The server has 600 of 2000 MB free, which should be plenty. Is there a limit that is set on Apache or WSGI somewhere?
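    One configuration often suggested for a box with dozens of independent Django sites is mod_wsgi daemon mode, so each site runs in its own fixed pool of processes instead of competing for the embedded interpreters inside the single worker child; a sketch (the process group name and the process/thread counts are illustrative):

        <VirtualHost *:81>
            ServerName example.com
            WSGIDaemonProcess example_com processes=2 threads=15 display-name=%{GROUP}
            WSGIProcessGroup example_com
            WSGIScriptAlias / /srv/http/site/passenger_wsgi.py
            # ...remaining Alias and <Directory> blocks unchanged...
        </VirtualHost>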

    Read the article

  • BIND split-view DNS config problem

    - by organicveggie
    We have two DNS servers: one external server controlled by our ISP and one internal server controlled by us. I'd like internal requests for foo.example.com to map to 192.168.100.5 and external requests to continue to map to 1.2.3.4, so I'm trying to configure a view in BIND. Unfortunately, BIND fails when I attempt to reload the configuration. I'm sure I'm missing something simple, but I can't figure out what it is.

        options {
            directory "/var/cache/bind";
            forwarders {
                8.8.8.8;
                8.8.4.4;
            };
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

        zone "." {
            type hint;
            file "/etc/bind/db.root";
        };

        zone "localhost" {
            type master;
            file "/etc/bind/db.local";
        };

        zone "127.in-addr.arpa" {
            type master;
            file "/etc/bind/db.127";
        };

        zone "0.in-addr.arpa" {
            type master;
            file "/etc/bind/db.0";
        };

        zone "255.in-addr.arpa" {
            type master;
            file "/etc/bind/db.255";
        };

        view "internal" {
            zone "example.com" {
                type master;
                notify no;
                file "/etc/bind/db.example.com";
            };
        };

        zone "example.corp" {
            type master;
            file "/etc/bind/db.example.corp";
        };

        zone "100.168.192.in-addr.arpa" {
            type master;
            notify no;
            file "/etc/bind/db.192";
        };

    I have excluded the entries in the view for allow-recursion and recursion in an attempt to simplify the configuration. If I remove the view and just load the example.com zone directly, it works fine. Any advice on what I might be missing?
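    The usual stumbling block here is a hard rule in named: as soon as one view is defined, every zone (including the root hint, localhost, and reverse zones) must live inside a view, so the zones left at the top level make the reload fail. A sketch of the shape this tends to take (the match-clients values are illustrative):

        // Once one view exists, named requires *all* zones to be inside views.
        view "internal" {
            match-clients { 192.168.0.0/16; localhost; };
            zone "example.com" {
                type master;
                notify no;
                file "/etc/bind/db.example.com";
            };
            zone "example.corp" {
                type master;
                file "/etc/bind/db.example.corp";
            };
            zone "100.168.192.in-addr.arpa" {
                type master;
                notify no;
                file "/etc/bind/db.192";
            };
            // the root hint, localhost, and reverse zones move in here too
        };

        view "external" {
            match-clients { any; };
            // zones visible to the outside world, if this server answers external queries at all
        };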

    Read the article
