Search Results

Search found 42302 results on 1693 pages for 'start screen'.


  • RequestBuilder timeouts and browser connection limits per domain.

    - by WesleyJohnson
    This is specifically about GWT's RequestBuilder, but it should apply to general XHR as well. My company is having me build a near-realtime chat application over HTTP. Yes, I do realize there are better ways to do chat applications, but this is what they want. Eventually we want it working on the iPad/iPhone as well, so Flash is out, which rules out websockets and comet as well, I think? Anyway, I'm running into issues where I've set GWT's RequestBuilder timeout to 10 seconds and we get very random and sporadic timeouts. We've got error handling and emailing on the server side and never get any errors, which suggests the underlying XHR request that RequestBuilder is built on never gets to the server and times out after 10 seconds. We're using these requests to poll the server for new messages rather often, for sending new messages to the server, and also for polling (less frequently) for other parts of the application. What I'm afraid of is that we're running into the browser's limit on concurrent connections to the same domain (2 for IE by default?). Now my question is: if I construct a RequestBuilder and call its send() method, and the browser blocks it from sending until one of the 2 connections per domain is free, does the timeout still start while the request is being blocked, or will it not start until the browser actually releases the underlying XHR? I hope that's clear; if not, please let me know and I'll try to explain more.
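
    For reference, a minimal sketch (not from the original post; the endpoint and handler bodies are placeholders) of how a timed poll is typically wired up with RequestBuilder from com.google.gwt.http.client. Whether that 10-second clock also ticks while the browser holds the XHR in its per-domain connection queue is exactly what is being asked here:

        // classes from com.google.gwt.http.client
        RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, "/chat/poll"); // hypothetical endpoint
        builder.setTimeoutMillis(10000); // the 10-second timeout described above
        try {
            builder.sendRequest(null, new RequestCallback() {
                public void onResponseReceived(Request request, Response response) {
                    // handle new chat messages
                }
                public void onError(Request request, Throwable exception) {
                    if (exception instanceof RequestTimeoutException) {
                        // fires after 10s, possibly while the XHR was still queued by the browser
                    }
                }
            });
        } catch (RequestException e) {
            // the request could not be initiated at all
        }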

    Read the article

  • rehabilitating a button

    - by Michele Petraroli
    if ($('.click').one('click')) {
        $('.click').click(function () {
            $('.mainContent').animate({"height": "+=620px"}, 800, 'easeInBack');
            $('.eneButton').animate({"top": "+=310px"}, 1500, 'easeInOutExpo');
            $('.eneButton').animate({"left": "-=310px"}, 1500, 'easeInOutExpo');
            $('.giardButton').animate({"top": "+=620px"}, 2000, 'easeInOutExpo');
            $('.giardButton').animate({"left": "-=620px"}, 2000, 'easeInOutExpo');
            $('.click').off('click');
        });
    }

    if ($('.close').one('click')) {
        $('.close').click(function () {
            $('.content, .sec').fadeOut(250);
            $('.eneButton').animate({"left": "+=310px"}, 1500, 'easeInOutExpo');
            $('.eneButton').animate({"top": "-=310px"}, 1500, 'easeInOutExpo');
            $('.giardButton').animate({"left": "+=620px"}, 2000, 'easeInOutExpo');
            $('.giardButton').animate({"top": "-=620px"}, 2000, 'easeInOutExpo');
            $('.mainContent').animate({"height": "-=620px"}, 3500, 'easeInBack');
            $('.click').on('click');
        });
    }

    The animation works fine in both directions, but I need users to be able to restart it after it has been closed. As you can see from the code, one click starts the animation, then you select from a list of categories, which you can close by clicking an "X" like in Windows; when you do that, the animation runs in reverse until everything looks like it did at the beginning. But if I click again, the animation no longer starts. Any clue?

    Read the article

  • How to manage lifecycle in a ViewGroup-derived class?

    - by Scott Smith
    I had a bunch of code in an activity that displays a running graph of some external data. As the activity code was getting kind of cluttered, I decided to extract this code and create a GraphView class:

        public class GraphView extends LinearLayout {
            public GraphView(Context context, AttributeSet attrs) {
                super(context, attrs);
                LayoutInflater inflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
                inflater.inflate(R.layout.graph_view, this, true);
            }

            public void start() {
                // Perform initialization (bindings, timers, etc) here
            }

            public void stop() {
                // Unbind, destroy timers, yadda yadda
            }
            . . .
        }

    Moving stuff into this new LinearLayout-derived class was simple. But there was some lifecycle management code associated with creating and destroying timers and event listeners used by this graph (I didn't want this thing polling in the background if the activity was paused, for example). Coming from an MS Windows background, I kind of expected to find overridable onCreate() and onDestroy() methods or something similar, but I haven't found anything of the sort in LinearLayout (or any of its inherited members). Having to leave all of this initialization code in the Activity, and then having to pass it into the view, seemed like it defeated the original purpose of encapsulating all of this code into a reusable view. I ended up adding two additional public methods to my view: start() and stop(). I make these calls from the activity's onResume() and onPause() methods respectively. This seems to work, but it feels like I'm using duct tape here. Does anyone know how this is typically done? I feel like I'm missing something...
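
    One common alternative, sketched here as a suggestion rather than a confirmed fix for this particular app: the framework does give every View attach/detach callbacks, which can stand in for the missing onCreate()/onDestroy() when the timers only need to run while the view is on screen:

        public class GraphView extends LinearLayout {
            // constructor as above...

            @Override
            protected void onAttachedToWindow() {
                super.onAttachedToWindow();
                // start timers / register listeners here
            }

            @Override
            protected void onDetachedFromWindow() {
                // stop timers / unregister listeners here
                super.onDetachedFromWindow();
            }
        }

    Note that attach/detach does not track Activity pause/resume, so explicit start()/stop() calls from onResume()/onPause() may still be needed for that part.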

    Read the article

  • Facebook style messaging system schema design

    - by Jamie
    Hi all, I'm looking to implement a Facebook-style messaging system (threaded messages) on a site of mine. Do you think this schema markup looks okay? Doctrine schema.yml:

        UserMessage:
          tableName: user_message
          actAs: [Timestampable]
          columns:
            id: { type: integer(10), primary: true, autoincrement: true }
            sender_id: { type: integer(10), notnull: true }
            sender_read: { type: boolean, default: 1 }
            subject: { type: string(255), notnull: true }
            message: { type: string(1000), notnull: true }
            hash: { type: string(32), notnull: true }
          relations:
            UserMessageRecipient as Recipient:
              type: many
              local: id
              foreign: message_id
            UserMessageReply as Reply:
              type: many
              local: id
              foreign: message_id

        UserMessageReply:
          tableName: user_message_reply
          columns:
            id: { type: integer(10), primary: true, autoincrement: true }
            user_message_id as message_id: { type: integer(10), notnull: true }
            message: { type: string(1000), notnull: true }
            sender_id: { type: integer(10), notnull: true }
          relations:
            UserMessage as Message:
              local: message_id
              foreign: id
              type: one

        UserMessageRecipient:
          tableName: user_message_recipient
          actAs: [Timestampable]
          columns:
            id: { type: integer(10), primary: true, autoincrement: true }
            user_message_id as message_id: { type: integer(10), notnull: true }
            recipient_id: { type: integer(10), notnull: true }
            recipient_read: { type: boolean, default: 0 }

    When a new reply is made, I'll make sure the "recipient_read" boolean for each recipient is set to false, and of course I'll make sure sender_read is set to false too. I'm using a hash for the URL: http://example.com/user/messages/aadeb18f8bdaea49882ec4d2a8a3c062 (as the id will be starting from 1, I don't wish to expose http://example.com/user/messages/1; yes, I could start incrementing from a bigger number, but I'd prefer to start at 1). Is this a good way to go about it? Your thoughts and suggestions would be hugely appreciated. Thanks guys!

    Read the article

  • ServiceStack razor default page

    - by Tom
    Say I have two pages: /NotADefault.cshtml (page A) and /Views/Default.cshtml (page B). Question 1: When I run the app, page A always gets called implicitly as the start-up default page, no matter what I name it. Page B is only called when I explicitly request localhost/Views/Default. How do I make page B (the one in the Views folder) my default page? Question 2: I also have NotADefaultService.cs and DefaultService.cs, one service class behind each page. However, when page A is called, NotADefaultService.cs never gets called; only DefaultService.cs gets called, when page B is called. My observation is that only pages in the Views folder get their back-end service class invoked; outside the Views folder it doesn't work. Combining Q1 and Q2, how do I either: Option 1, get the back-end service class working under the / root, outside the Views folder? Or Option 2, make /Views/Default.cshtml my default page at start-up, where its service class can be hit?

    Read the article

  • [LaTeX] positions of page numbers, position of chapter headings, chapters AND Table of Contents, Ref

    - by kaikanmonaco
    I am writing my PhD thesis (120+ pages) in LaTeX, the deadline is approaching, and I am struggling with layout problems. I am using the book document class. I am posting both problems in this one thread because I am not sure whether their solutions are related. The problems are:
    1.) The page numbers are mostly located at the top right of each page (this is correct and where I want them to be). However, on the first page of chapters and on the first page of what I call "special chapters", the page number is located bottom-centered. By "special chapters" I mean: Table of Contents, List of Figures, List of Tables, References, Index. My university will not accept the thesis like this. The page number must ALWAYS be top right on each page, even if the page is the first page of a chapter or the first page of something like the Table of Contents. How can I fix this?
    2.) On the first page of chapters and "special chapters" (Table of Contents, ...), the chapter title is located far too low on the page. This is the standard layout of the book class, I think. However, the chapter title must start at the very top of the page, i.e. at the same height as the normal text on the pages that follow. I mean the chapter title, not the header. For example, if there is a chapter called "Chapter 1 Dynamics of foobar under mechanical stress", then that text has to start from the top of the page, but right now it starts several centimeters below the top. How can I fix this? I have tried all kinds of things to no effect; I'd be very thankful for a solution! Thanks.

    Read the article

  • WPF app startup problems

    - by Dave
    My brain is all over the map trying to fully understand Unity right now. So I decided to just dive in and start adding it in a branch to see where it takes me. Surprisingly enough (or maybe not), I am stuck just getting my darn Application to load properly. It seems like the right way to do this is to override OnStartup in App.cs. I've removed my StartupUri from App.xaml so it doesn't create my GUI XAML. My App.cs now looks something like this:

        public partial class App : Application
        {
            private IUnityContainer container { get; set; }

            protected override void OnStartup(StartupEventArgs e)
            {
                container = new UnityContainer();
                GUI gui = new GUI();
                gui.Show();
            }

            protected override void OnExit(ExitEventArgs e)
            {
                container.Dispose();
                base.OnExit(e);
            }
        }

    The problem is that nothing happens when I start the app! I put a breakpoint at the container assignment, and it never gets hit. What am I missing? App.xaml is currently set to ApplicationDefinition, but I'd expect this to work because some sample Unity + WPF code I'm looking at (from Codeplex) does the exact same thing, except that it works! I've also started the app by single-stepping, and it eventually hits the first line in App.xaml. When I step into this line, that's when the app just starts "running", but I don't see anything (and my breakpoint isn't hit). If I do the exact same thing in the sample application, stepping into App.xaml puts me right into OnStartup, which is what I'd expect to happen. Argh! Is it a Bad Thing to just put the Unity construction in my GUI's Window_Loaded event handler? Does it really need to be at the App level?

    Read the article

  • Scalability 101: How can I design a scalable web application using PHP?

    - by Legend
    I am building a web-application and have a couple of quick questions. From what I have learnt, one should not worry about scalability when initially building the app and should only start worrying when the traffic increases. However, this being my first web-application, I am not quite sure if I should take an approach where I design things in an ad-hoc manner and later "fix" them. I have been reading stories about how people start off with an app that gets millions of users in a week or two. Not that I will face the same situation, but I can't help but wonder: how do these people do it? Currently, I bought a shared hosting account on Lunarpages and that got me started in building and testing the application. However, I am interested in learning how to build the same application in a scalable manner using the cloud, for instance Amazon's EC2. From my understanding, I can see a couple of components:
    - A load balancer that first receives requests and then decides where to route each request.
    - A server replica that handles the request, processes it, updates the database if required, and sends the response back to the client.
    - A caching mechanism like memcached that kicks in when a similar request comes in and returns objects from the cache.
    - A black box that handles database replication.
    Specifically, I am trying to do the following:
    - Setting up a load balancer (my homework revealed that HAProxy is one such load balancer).
    - Setting up replication so that databases can be synchronized.
    - Using memcached.
    - Configuring Apache to work with multiple web servers.
    - Partitioning the application to use Amazon EC2 and Amazon S3 (my application will need a great deal of storage).
    Finally, how can I avoid burning myself when using Amazon services? Because this is just a learning phase, I can probably get by with 2-3 servers, a simple load balancer and replication, but I want to avoid accidentally paying loads of money. I am able to find resources on individual topics but am unable to find something that starts off from the big picture. Can someone please help me get started?

    Read the article

  • typeahead.js remote with subset matching

    - by rebelde
    Instead of returning to the server after each additional letter is typed, I want it to only go to the server once, get all matching words, and filter the downloaded data after that. We are having trouble making this work. We are successfully using "remote" to wait until two letters are typed, but we can't get it to stop going to the server as additional letters are typed. Steps: 1. After two letters are typed, retrieve all matching words that start with those two letters. 2. When a third and additional letters are typed, don't go to the server again, just filter from the previous list that was sent. An example: "mo" is typed in. All 100 words that start with "mo" are returned. (Only 10 are shown.) "mor" - now with a third letter, we don't go back to the server. We just find the 20 words that match from within the previous set of words. Can anybody make this work? In real life (using YUI2), we do this and then go back to the server if somebody types in a space after the word. At that point, we know to retrieve additional words. Thanks!

    Read the article

  • A member variable's hashCode() value is different

    - by Jacques René Mesrine
    There's a piece of code that looks like this. The problem is that during bootup, two initializations take place: (1) some method does reflection on ForumRepository and performs a newInstance() purely to invoke #setCacheEngine; (2) another method following that invokes #start(). I am noticing that the hashCode of the #cache member variable is sometimes different in some weird scenarios. Since only one piece of code invokes #setCacheEngine, how can the hashCode change during runtime (I am assuming that a different instance will have a different hashCode)? Is there a bug here somewhere?

        public class ForumRepository implements Cacheable {
            private static CacheEngine cache;
            private static ForumRepository instance;

            public void setCacheEngine(CacheEngine engine) {
                cache = engine;
            }

            public synchronized static void start() {
                instance = new ForumRepository();
            }

            public synchronized static void addForum( ... ) {
                cache.add( .. );
                System.out.println( cache.hashCode() ); // snipped
            }

            public synchronized static void getForum( ... ) {
                ...
                cache.get( .. );
                System.out.println( cache.hashCode() ); // snipped
            }
        }
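
    A possible diagnostic, offered as a sketch rather than a known fix: because hashCode() may be overridden on the cache engine, logging the identity hash code together with the defining class loader is a more reliable way to tell whether a second engine instance, or a second copy of the class loaded by another class loader, is involved:

        public void setCacheEngine(CacheEngine engine) {
            cache = engine;
            // identityHashCode ignores any hashCode() override on the engine
            System.out.println("engine identity=" + System.identityHashCode(engine)
                + " repoClassLoader=" + ForumRepository.class.getClassLoader());
        }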

    Read the article

  • How to catch exceptions from processes in C#

    - by kitofr
    Hi all, I have an acceptance runner program here that looks something like this:

        public Result Run(CommandParser parser)
        {
            var result = new Result();
            var watch = new Stopwatch();
            watch.Start();
            try
            {
                _testConsole.Start();
                parser.ForEachInput(input =>
                {
                    _testConsole.StandardInput.WriteLine(input);
                    return _testConsole.TotalProcessorTime.TotalSeconds < parser.TimeLimit;
                });
                if (TimeLimitExceeded(parser.TimeLimit))
                {
                    watch.Stop();
                    _testConsole.Kill();
                    ReportThatTestTimedOut(result);
                }
                else
                {
                    result.Status = GetProgramOutput() == parser.Expected ? ResultStatus.Passed : ResultStatus.Failed;
                    watch.Stop();
                }
            }
            catch (Exception)
            {
                result.Status = ResultStatus.Exception;
            }
            result.Elapsed = watch.Elapsed;
            return result;
        }

    The _testConsole is a Process adapter that wraps a regular .NET process into something more workable. I do, however, have a hard time catching any exceptions from the started process (i.e. the catch statement is pointless here). I'm using something like this to set up the process:

        _process = new Process
        {
            StartInfo =
            {
                FileName = pathToProcess,
                UseShellExecute = false,
                CreateNoWindow = true,
                RedirectStandardInput = true,
                RedirectStandardOutput = true,
                RedirectStandardError = true,
                Arguments = arguments
            }
        };

    Any ideas?

    Read the article

  • MS Exam 70-536 - How to throw and handle exception from thread?

    - by Max Gontar
    Hello! In MS Exam 70-536 .NET Foundation, Chapter 7 "Threading", Lesson 1 "Creating Threads", there is this passage: "Be aware that because the WorkWithParameter method takes an object, Thread.Start could be called with any object instead of the string it expects. Being careful in choosing your starting method for a thread to deal with unknown types is crucial to good threading code. Instead of blindly casting the method parameter into our string, it is a better practice to test the type of the object, as shown in the following example:"

        ' VB
        Dim info As String = o as String
        If info Is Nothing Then
            Throw InvalidProgramException("Parameter for thread must be a string")
        End If

        // C#
        string info = o as string;
        if (info == null)
        {
            throw InvalidProgramException("Parameter for thread must be a string");
        }

    So I've tried this, but the exception is not handled properly (no console exception entry; the program is terminated). What is wrong with my code (below)?

        class Program
        {
            static void Main(string[] args)
            {
                Thread thread = new Thread(SomeWork);
                try
                {
                    thread.Start(null);
                    thread.Join();
                }
                catch (InvalidProgramException ex)
                {
                    Console.WriteLine(ex.Message);
                }
                finally
                {
                    Console.ReadKey();
                }
            }

            private static void SomeWork(Object o)
            {
                String value = (String)o;
                if (value == null)
                {
                    throw new InvalidProgramException("Parameter for " +
                        "thread must be a string");
                }
            }
        }

    Thanks for your time!

    Read the article

  • Regular Expressions .NET

    - by Fosa
    I need a regular expression that enforces the following rules on a string:
    - The string consists of a minimum of 8 and a maximum of 20 characters.
    - The characters may be letters of the alphabet or special characters; in other words, any character except whitespace.
    - The complete string must contain at least one number.
    - The string cannot start with a number or an underscore.
    - The last two characters of the string must be identical, but it doesn't matter whether those last identical characters are capital or non-capital (case insensitive).

    Must match all of:
    +234567899
    a_1de*Gg
    xy1Me*__
    !41deF_hij2lMnopq3ss
    C234567890123$^67800
    *5555555
    sDF564zer""
    !!!!!!!!!4!!!!!!!!!!
    abcdefghijklmnopq9ss

    May not match:
    Cannot be less than 8 or more than 20 chars: a_1+Eff, B41def_hIJ2lmnopq3stt
    Cannot contain a whitespace: A_4 e*gg, b41def_Hij2l nopq3ss
    Cannot start with a number or an underscore: __1+Eff, 841DEf_hij2lmnopq3stt
    Cannot end on two different characters: a_1+eFg, b41DEf_hij2lmnopq3st
    Cannot be without a number in the string: abCDefghijklmnopqrss, abcdef+++dF, !!!!!!!!!!!!!!!!!!!!

    This is what I have so far, but I'm really breaking my head on this. If you don't know the complete answer, that's not a problem; I just want to be pointed in the right direction:
    ([^0-9_])(?=.*\d)(\S{8,20})(?i:[\S])\1
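
    Not a definitive answer, just one candidate pattern with a tiny test harness (written in Java here, since that is the language used for examples in this listing; the pattern should carry over to .NET, though the case-insensitive backreference behaviour is worth re-testing there):

        import java.util.regex.Pattern;

        public class PasswordRuleCheck {
            // 8-20 non-space chars, at least one digit, no leading digit/underscore,
            // last two chars identical ignoring case
            static final Pattern P =
                Pattern.compile("^(?=\\S{8,20}$)(?=\\S*\\d)[^\\s\\d_]\\S*?(\\S)(?i:\\1)$");

            public static void main(String[] args) {
                String[] shouldMatch = { "+234567899", "a_1de*Gg", "xy1Me*__", "abcdefghijklmnopq9ss" };
                String[] shouldFail  = { "a_1+Eff", "__1+Eff", "a_1+eFg", "abCDefghijklmnopqrss" };
                for (String s : shouldMatch) System.out.println(s + " -> " + P.matcher(s).matches());
                for (String s : shouldFail)  System.out.println(s + " -> " + P.matcher(s).matches());
            }
        }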

    Read the article

  • perl multithreading issue for autoincrement

    - by user3446683
    I'm writing a multi-threaded Perl script and storing the output in a CSV file. I'm trying to insert a serial number (sl. no.) field for each row entered, but as I'm using threads, the serial numbers overlap in most rows. Below is an idea of my code snippet.

        for ( my $count = 1 ; $count <= 10 ; $count++ ) {
            my $t = threads->new( \&sub1, $count );
            push( @threads, $t );
        }
        foreach (@threads) {
            my $num = $_->join;
        }

        sub sub1 {
            my $num   = shift;
            my $start = '...';    # distributing data based on an internal logic
            my $end   = '...';    # distributing data based on an internal logic
            my $next;
            for ( my $x = $start ; $x <= $end ; $x++ ) {
                my $count = $x + 1;
                # part of code from which I get @data which has name and age
                my $j = 0;
                if ( $x != 0 ) {
                    $count = $next;
                }
                foreach (@data) {
                    # j is required here for some extra code
                    flock( OUTPUT, LOCK_EX );
                    print OUTPUT $count . "," . $name . "," . $age . "\n";
                    flock( OUTPUT, LOCK_UN );
                    $j++;
                    $count++;
                }
                $next = $count;
            }
            return $num;
        }

    I need the count to be incremented correctly, since it is the serial number for the rows that will be inserted in the CSV file. Any help would be appreciated.

    Read the article

  • Bruteforcing Blackberry PersistentStore?

    - by Haoest
    Hello, I am experimenting with BlackBerry's Persistent Store, and I have gotten nowhere so far, which is good, I guess. I have written a short program that attempts to iterate from 0 up to a specific upper bound to search for persisted objects. BlackBerry seems to intentionally slow the loop. Check this out:

        String result = "result: \n";
        int ub = 3000;
        Date start = Calendar.getInstance().getTime();
        for (int i = 0; i < ub; i++) {
            PersistentObject o = PersistentStore.getPersistentObject(i);
            if (o.getContents() != null) {
                result += (String) o.getContents() + "\n";
            }
        }
        result += "end result\n";
        result += "from 0 to " + ub + " took "
            + (Calendar.getInstance().getTime().getTime() - start.getTime()) / 1000 + " seconds";

    Going from 0 to 3000 took 20 seconds. Is this enough to conclude that brute-forcing is not a practical method to breach the BlackBerry? In general, how secure is the BlackBerry Persistent Store?
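
    For a sense of scale, a small back-of-the-envelope sketch (assuming, per the RIM API, that persistent object keys are 64-bit long values, so the search space is 2^64): at the roughly 150 lookups per second measured above, a full sweep is far beyond practical:

        public class BruteForceEstimate {
            public static void main(String[] args) {
                double lookupsPerSecond = 3000.0 / 20.0;   // ~150/s, from the measurement above
                double keySpace = Math.pow(2, 64);         // keys are 64-bit longs
                double seconds = keySpace / lookupsPerSecond;
                double years = seconds / (365.25 * 24 * 3600);
                System.out.printf("full sweep: ~%.2e years%n", years);  // on the order of billions of years
            }
        }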

    Read the article

  • How do I compare two PropertyInfos or methods reliably?

    - by Rob Ashton
    The same applies to methods: I am given two instances of PropertyInfo (or MethodInfo) which have been extracted from the class they sit on via GetProperty, GetMember, etc. (or from a MemberExpression, maybe). I want to determine whether they are in fact referring to the same property or the same method, so (propertyOne == propertyTwo) or (methodOne == methodTwo). Clearly that isn't going to actually work: you might be looking at the same property, but it might have been extracted from different levels of the class hierarchy (in which case, generally, propertyOne != propertyTwo). Of course, I could look at DeclaringType and re-request the property, but this starts getting a bit confusing when you start thinking about:
    - properties/methods declared on interfaces and implemented on classes
    - properties/methods declared on a base class (virtually) and overridden on derived classes
    - properties/methods declared on a base class and overridden with 'new' (in the IL world this is nothing special, iirc)
    At the end of the day, I just want to be able to do an intelligent equality check between two properties or two methods. I'm 80% sure that the above bullet points don't cover all of the edge cases, and while I could just sit down, write a bunch of tests and start playing about, I'm well aware that my low-level knowledge of how these concepts are actually implemented is not excellent, and I'm hoping this is an already-answered topic and I just suck at searching. The best answer would give me a couple of methods that achieve the above, explaining what edge cases have been taken care of and why :-)

    Read the article

  • iOS 7: Best way to implement a textview that presents previous input but is easy to clear

    - by Frank R.
    I'm porting a Mac app to the iPhone and I've run into an unexpected problem. On the Mac there's a text field that is automatically pre-selected (= first responder) when a dialog shows up. The text field shows the text you entered in the field the last time and the text is pre-selected so that if you just start typing it gets cleared away. If you want to edit the existing text instead you just hit the forwards or backwards arrow. On the iPhone this behavior seems very hard to implement. The text view shows up with the old text and I can even get it to pre-select but whatever I do the result is not quite right. When I use [aTextView setMarkedText: myText selectedRange: newRange]; the text does show up as marked and if I just start typing the old text goes away. However there's no equivalent to the cursor keys on iOS, so I cannot NOT erase the text.. which is hardly the point. What kind of iOS idiom would be appropriate for giving the option to either edit or overwrite existing text? Best regards, Frank

    Read the article

  • Where are possible locations of queueing/buffering delays in Linux multicast?

    - by Matt
    We make heavy use of multicasting messaging across many Linux servers on a LAN. We are seeing a lot of delays. We basically send an enormous number of small packages. We are more concerned with latency than throughput. The machines are all modern, multi-core (at least four, generally eight, 16 if you count hyperthreading) machines, always with a load of 2.0 or less, usually with a load less than 1.0. The networking hardware is also under 50% capacity. The delays we see look like queueing delays: the packets will quickly start increasing in latency, until it looks like they jam up, then return back to normal. The messaging structure is basically this: in the "sending thread", pull messages from a queue, add a timestamp (using gettimeofday()), then call send(). The receiving program receives the message, timestamps the receive time, and pushes it in a queue. In a separate thread, the queue is processed, analyzing the difference between sending and receiving timestamps. (Note that our internal queues are not part of the problem, since the timestamps are added outside of our internal queuing.) We don't really know where to start looking for an answer to this problem. We're not familiar with Linux internals. Our suspicion is that the kernel is queuing or buffering the packets, either on the send side or the receive side (or both). But we don't know how to track this down and trace it. For what it's worth, we're using CentOS 4.x (RHEL kernel 2.6.9).

    Read the article

  • How to display buttons after enterframe event is over in Corona?

    - by user1463542
    I am trying to display two buttons called countd_again and main_menu after my enterFrame event is over. However, I can't see those buttons after the enterFrame event finishes. Can you check my code, please? I also want to add new event listeners for the buttons using director and the scene field.

        module(..., package.seeall)

        function new()
            local localGroup = display.newGroup();
            display.setStatusBar(display.HiddenStatusBar)

            local background = display.newImage("background.png")
            start = os.time()
            cnt = 1

            local countd_again = display.newImage("yeniden.png")
            countd_again.x = 100
            countd_again.y = 100
            countd_again.isVisible = false;
            countd_again.alpha = 0;
            countd_again.scene = "helloWorld";

            local main_menu = display.newImage("anamenu.png")
            main_menu.x = 100
            main_menu.y = 300
            main_menu.isVisible = false;
            main_menu.alpha = 0;
            main_menu.scene = "helloWorld"

            -- listener function
            local function onEveryFrame( event )
                if (cnt ~= 0) then
                    cnt = 3 - (os.time() - start)
                    minute = math.floor(cnt / 60)
                    second = cnt % 60
                    --print(minute, second)
                    minTxt = display.newText(minute, 50, 50, nil, 100)
                    secTxt = display.newText(second, 250, 50, nil, 100)
                    transition.to(minTxt, {time=100, alpha=0})
                    transition.to(secTxt, {time=100, alpha=0})
                else
                    Runtime:removeEventListener("enterFrame", onEveryFrame)
                    countd_again.isVisible = true;
                    main_menu.isVisible = true;
                    transition.to(countd_again, {time=500, alpha=1});
                    transition.to(main_menu, {time=500, alpha=1});
                    countd_again:addEventListener("touch", changeScene)
                    main_menu:addEventListener("touch", changeScene)
                end
            end

            -- assign the above function as an "enterFrame" listener
            Runtime:addEventListener( "enterFrame", onEveryFrame )

            function changeScene(e)
                if (e.phase == "ended") then
                    director:changeScene(e.target.scene);
                end
            end

            countd_again:addEventListener("touch", changeScene)
            main_menu:addEventListener("touch", changeScene)

            localGroup:insert(countd_again)
            localGroup:insert(main_menu)
            localGroup:insert(background)

            return localGroup;
        end

    Read the article

  • php code works with mamp but not on ubuntu server

    - by user355510
    Hello, I have started looking at a Twitter PHP library, http://github.com/abraham/twitteroauth, but I can't get it to work on my Ubuntu server; on my Mac with MAMP it works without any problems. This is the code that refuses to work on my server but works under MAMP (yes, I have edited the config file):

        <?php
        /* Start session and load library. */
        session_start();
        require_once('twitteroauth/twitteroauth.php');
        require_once('config.php');

        /* Build TwitterOAuth object with client credentials. */
        $connection = new TwitterOAuth(CONSUMER_KEY, CONSUMER_SECRET);

        /* Get temporary credentials. */
        $request_token = $connection->getRequestToken(OAUTH_CALLBACK);

        /* Save temporary credentials to session. */
        $_SESSION['oauth_token'] = $token = $request_token['oauth_token'];
        $_SESSION['oauth_token_secret'] = $request_token['oauth_token_secret'];

        /* If last connection failed don't display authorization link. */
        switch ($connection->http_code) {
            case 200:
                /* Build authorize URL and redirect user to Twitter. */
                $url = $connection->getAuthorizeURL($token);
                header('Location: ' . $url);
                break;
            default:
                /* Show notification if something went wrong. */
                echo 'Could not connect to Twitter. Refresh the page or try again later.';
        }

    PHP sessions are enabled on my Ubuntu server; I know this because the following code works:

        <?php
        session_start();
        $_SESSION["secretword"] = "hello there";
        $secretword = $_SESSION["secretword"];
        ?>
        <html>
        <head>
        <title>A PHP Session Example</title>
        </head>
        <body>
        <?php echo $secretword; ?>
        </body>
        </html>

    Read the article

  • Servlet that starts a thread only once for every visitor

    - by user858749
    Hey, I want to implement a Java servlet that starts a thread only once for every single user; even on refresh it should not start again. My last approach brought me some trouble, so here is the code. Any suggestions for the layout of the servlet?

        public class LoaderServlet extends HttpServlet {
            // The thread to load the needed information
            private LoaderThread loader;
            // The last.fm account
            private String lfmaccount;

            public LoaderServlet() {
                super();
                lfmaccount = "";
            }

            @Override
            protected void doPost(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                if (loader != null) {
                    response.setContentType("text/plain");
                    response.setHeader("Cache-Control", "no-cache");
                    PrintWriter out = response.getWriter();
                    out.write(loader.getStatus());
                    out.flush();
                    out.close();
                } else {
                    loader = new LoaderThread(lfmaccount);
                    loader.start();
                    request.getRequestDispatcher("WEB-INF/pages/loader.jsp").forward(request, response);
                }
            }

            @Override
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                if (lfmaccount.isEmpty()) {
                    lfmaccount = request.getSession().getAttribute("lfmUser").toString();
                }
                request.getRequestDispatcher("WEB-INF/pages/loader.jsp").forward(request, response);
            }
        }

    The JSP uses Ajax to regularly post to the servlet and get the status. The thread just runs for about 3 minutes, crawling some last.fm data.
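
    Since a single servlet instance is shared by all users, per-user state such as the loader generally cannot live in a servlet field. A hedged sketch of one common alternative (the "loader" attribute name is made up for illustration): key the thread off the user's HttpSession, so a refresh reuses the running loader instead of starting a new one:

        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            HttpSession session = request.getSession();
            synchronized (session) {
                // one loader per session, not per servlet instance
                LoaderThread loader = (LoaderThread) session.getAttribute("loader");
                if (loader == null) {
                    loader = new LoaderThread((String) session.getAttribute("lfmUser"));
                    loader.start();
                    session.setAttribute("loader", loader);
                }
            }
            request.getRequestDispatcher("WEB-INF/pages/loader.jsp").forward(request, response);
        }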

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.

    Read the article

  • Extended slice that goes to beginning of sequence with negative stride

    - by recursive
    Bear with me while I explain my question. Skip down to the bold heading if you already understand extended slice list indexing. In python, you can index lists using slice notation. Here's an example: >>> A = list(range(10)) >>> A[0:5] [0, 1, 2, 3, 4] You can also include a stride, which acts like a "step": >>> A[0:5:2] [0, 2, 4] The stride is also allowed to be negative, meaning the elements are retrieved in reverse order: >>> A[5:0:-1] [5, 4, 3, 2, 1] But wait! I wanted to see [4, 3, 2, 1, 0]. Oh, I see, I need to decrement the start and end indices: >>> A[4:-1:-1] [] What happened? It's interpreting -1 as being at the end of the array, not the beginning. I know you can achieve this as follows: >>> A[4::-1] [4, 3, 2, 1, 0] But you can't use this in all cases. For example, in a method that's been passed indices. My question is: Is there any good pythonic way of using extended slices with negative strides and explicit start and end indices that include the first element of a sequence? This is what I've come up with so far, but it seems unsatisfying. >>> A[0:5][::-1] [4, 3, 2, 1, 0]

    Read the article

  • problem in case of window service

    - by prateeksaluja20
    Hello friends, I made a Windows service and added a project installer. It only contains this code, inside the timer tick event (the interval is 60 seconds); I just wanted to try running a Windows service:

        System.Diagnostics.Process.Start(@"C:\Windows\system32\notepad.exe");

    First, for serviceProcessInstaller1 I changed its account setting to Local System. Second, for serviceInstaller1 I changed its startup type to Automatic. Then I created a setup: I added another project, right-clicked, chose Add Project Output, added the primary output, and pressed OK. Then I right-clicked the project, chose View, then Custom Actions, right-clicked Install, chose Add Custom Action, selected Application Folder and added the primary output; I did the same thing for the remaining options (Commit, Rollback, Uninstall). After that I built the setup and it built successfully. Then I installed it; it installed properly into Program Files, creating one .exe file and one install file. The problem is that when I look for the service in "services.msc", it is not there; the service is not showing up. I have tried but can't find the answer. Please help me solve this problem.

    Read the article

  • Make function declarations based on function definitions

    - by Clinton Blackmore
    I've written a .cpp file with a number of functions in it, and now need to declare them in the header file. It occurred to me that I could grep the file for the class name, and get the declarations that way, and it would've worked well enough, too, had the complete function declaration before the definition (return code, name, and parameters, but not function body) been on one line. It seems to me that this is something that would be generally useful, and must've been solved a number of times. I am happy to edit the output and not worried about edge cases; anything that gives me results that are right 95% of the time would be great. So, if, for example, my .cpp file had:

        i2cstatus_t NXTI2CDevice::writeRegisters(
            uint8_t  start_register,   // start of the register range
            uint8_t  bytes_to_write,   // number of bytes to write
            uint8_t* buffer = 0)       // optional user-supplied buffer
        {
            ...
        }

    and a number of other similar functions, getting this back:

        i2cstatus_t NXTI2CDevice::writeRegisters(
            uint8_t  start_register,   // start of the register range
            uint8_t  bytes_to_write,   // number of bytes to write
            uint8_t* buffer = 0)

    for inclusion in the header file, after a little editing, would be fine. Getting this back:

        i2cstatus_t writeRegisters(
            uint8_t start_register,
            uint8_t bytes_to_write,
            uint8_t* buffer);

    or this:

        i2cstatus_t writeRegisters(uint8_t start_register, uint8_t bytes_to_write, uint8_t* buffer);

    would be even better.

    Read the article
