Search Results

Search found 18003 results on 721 pages for 'nidhinzz own'.


  • Object Oriented Programming in AS3

    - by Jordan
    I'm building a game in AS3 that has balls moving and bouncing off the walls. When the user clicks, an explosion appears, and any ball that hits that explosion explodes too. Any ball that then hits that explosion explodes, and so on. My question is: what would be the best class structure for the balls? I have a level system to control levels and such, and I've already come up with working ways to code the balls. Here's what I've done.

    My first attempt was to create a class each for Movement, Bounce, Explosion and finally Orb. These all extended each other in the order I just named them. I got it working, but having Bounce extend Movement and Explosion extend Bounce just doesn't seem very object oriented, because what if I wanted to add a Box class that didn't move but did explode? I would need a separate class for that explosion.

    My second attempt was to create Movement, Bounce and Explosion without extending anything. Instead I passed a reference to the Orb class into each. Each class stores that reference and does what it needs to do based on events dispatched by the Orb, such as update, which is broadcast from the Orb every enter frame. This drives the movement and bounce, and also the explosion when the time comes. This attempt worked as well, but it just doesn't feel right.

    I've also thought about using interfaces, but because they are more of an outline for classes, I feel like code reuse goes out the window, as each class would need its own code for a specific task even if that task is exactly the same. I feel as if I'm searching for some form of multiple inheritance, which AS3 does not support. Can someone explain a better way of doing what I'm attempting? Am I being too "object oriented" by having classes for Movement, Bounce, Explosion and Orb? Are interfaces the way to go? Any feedback is appreciated!
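
    The second attempt is essentially composition, which is the usual answer when single inheritance gets in the way. A minimal AS3 sketch of that idea (MovementBehavior and ExplosionBehavior are hypothetical names, not classes from the question):

        package game {
            import flash.display.Sprite;
            import flash.events.Event;

            // Composition sketch: an Orb *has* the behaviours it needs instead of
            // inheriting a Movement -> Bounce -> Explosion chain.
            public class Orb extends Sprite {
                private var movement:MovementBehavior;   // moving + bouncing
                private var explosion:ExplosionBehavior; // exploding on contact

                public function Orb() {
                    movement  = new MovementBehavior(this);
                    explosion = new ExplosionBehavior(this);
                    addEventListener(Event.ENTER_FRAME, onEnterFrame);
                }

                private function onEnterFrame(e:Event):void {
                    if (movement)  movement.update();
                    if (explosion) explosion.update();
                    // A Box that explodes but never moves would simply
                    // never create the movement behaviour.
                }
            }
        }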

    Read the article

  • Returning large collections from WCF Service

    - by Nate Bross
    I'm trying to determine the best approach for building a WCF service, and the area I'm struggling with most is returning lists of objects. The built-in maxMessageSize of 64k seems pretty high, and I really don't want to bump it up (quick googling finds hundreds of places bumping maxMessageSize up to the multi-gigabyte range, which seems foolish). But when I'm returning a collection of objects (~150 items), I am exceeding the default 64k.

    I'm almost to the point of returning my own class which implements IEnumerable and has properties for hasNext, hasPrevious and PageSize so that I can implement paging on the client side - but this seems like a lot of code. The other option is to jack up the maxMessageSize and hope for the best, but that feels wrong. All other aspects of my service are working great; it's just returning large collections where I'm having issues.

    For background, there are two types of consumers of this service: UI applications, which will be primarily web and/or WPF applications, and data processing applications - .NET console apps and maybe some other non-UI apps. For the UI applications I would like to keep them responsive and keep the message size low; for the console apps it doesn't matter as much, as they are just pulling data down to do processing and push it back up to the service.
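
    A rough sketch of the paging approach described above, with made-up names (PagedResult, IWidgetService and Widget are illustrative, not the original service's types):

        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class PagedResult<T>
        {
            [DataMember] public List<T> Items { get; set; }
            [DataMember] public int PageIndex { get; set; }
            [DataMember] public int PageSize { get; set; }
            [DataMember] public int TotalCount { get; set; }

            // computed locally, not serialized
            public bool HasNext { get { return (PageIndex + 1) * PageSize < TotalCount; } }
        }

        [ServiceContract]
        public interface IWidgetService
        {
            // Return one page per call instead of the whole collection,
            // so each response stays well under the default message size.
            [OperationContract]
            PagedResult<Widget> GetWidgets(int pageIndex, int pageSize);
        }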

    Read the article

  • Is there a version of the removeElement function in Go for the vector package like Java has in its Vector class?

    - by Brian T Hannan
    I am porting some Java code over to Google's Go language, and I have converted all of it except for one part I am stuck on, after an amazingly smooth port. My Go code looks like this, and the section I am asking about is commented out:

        func main() {
            var puzzleHistory *vector.Vector;
            puzzleHistory = vector.New(0);

            var puzzle PegPuzzle;
            puzzle.InitPegPuzzle(3, 2);
            puzzleHistory.Push(puzzle);

            var copyPuzzle PegPuzzle;
            var currentPuzzle PegPuzzle;
            currentPuzzle = puzzleHistory.At(0).(PegPuzzle);

            isDone := false;
            for !isDone {
                currentPuzzle = puzzleHistory.At(0).(PegPuzzle);
                currentPuzzle.findAllValidMoves();
                for i := 0; i < currentPuzzle.validMoves.Len(); i++ {
                    copyPuzzle.NewPegPuzzle(currentPuzzle.holes, currentPuzzle.movesAlreadyDone);
                    copyPuzzle.doMove(currentPuzzle.validMoves.At(i).(Move));

                    // There is no function in Go's Vector that will remove an element like Java's Vector
                    //puzzleHistory.removeElement(currentPuzzle);

                    copyPuzzle.findAllValidMoves();
                    if copyPuzzle.validMoves.Len() != 0 {
                        puzzleHistory.Push(copyPuzzle);
                    }
                    if copyPuzzle.isSolutionPuzzle() {
                        fmt.Printf("Puzzle Solved");
                        copyPuzzle.show();
                        isDone = true;
                    }
                }
            }
        }

    If there is no such function available, which I believe there isn't, does anyone know how I would go about implementing such a thing on my own?
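
    There doesn't seem to be a direct equivalent; the usual route is to find the index yourself and delete it (if I remember right, the old container/vector type had an index-based Delete you could pair with a manual scan, and that package has since been dropped from the standard library altogether). A sketch of the same idea against a plain slice, in current Go (Go 1.18+ for the generics; names are illustrative):

        package main

        import "fmt"

        // removeElement deletes the first occurrence of target from s, preserving
        // order, and reports whether anything was removed - the slice-based
        // equivalent of Java's Vector.removeElement.
        func removeElement[T comparable](s []T, target T) ([]T, bool) {
            for i, v := range s {
                if v == target {
                    return append(s[:i], s[i+1:]...), true
                }
            }
            return s, false
        }

        func main() {
            history := []string{"a", "b", "c"}
            history, ok := removeElement(history, "b")
            fmt.Println(history, ok) // [a c] true
        }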

    Read the article

  • Wireshark Dissector: How to Identify Missing UDP Frames?

    - by John Dibling
    How do you identify missing UDP frames in a custom Wireshark dissector? I have written a custom dissector for the CQS feed (reference page). One of our servers sees gaps when receiving this feed. According to Wireshark, some UDP frames are never received. I know that the frames were sent, because all of our other servers are gap-free. A CQS frame consists of multiple messages, each having its own sequence number. My custom dissector provides the following data to Wireshark:

        cqs.frame_gaps         - the number of gaps within a UDP frame (always zero)
        cqs.frame_first_seq    - the first sequence number in a UDP frame
        cqs.frame_expected_seq - the first sequence number expected in the next UDP frame
        cqs.frame_msg_count    - the number of messages in this UDP frame

    I am displaying each of these values in custom columns, as shown in the screenshot. I tried adding code to my dissector that simply saves the last-processed sequence number (as a local static) and flags gaps when the dissector processes a frame where current_sequence != (previous_sequence + 1). This did not work, because the dissector can be called in random-access order, depending on where you click in the GUI. So you could process frame 10, then frame 15, then frame 11, etc. Is there any way for my dissector to know whether the frame that came before it (or the frame that follows) is missing? The dissector is written in C. (See also a companion post on serverfault.com.)

    Read the article

  • How to search a PDF in Acrobat Reader AND jump to a certain page via parameter?

    - by agez
    Hi, we are using Lucene within a web application to search a great number of PDF documents. The workflow is like this:

    1. A user enters a search term.
    2. A list of search results is presented to the user. Each search result represents one PDF document and shows the user on which pages the search term was found. Each of these pages is presented as a hyperlink.
    3. If the user now clicks on such a hyperlink, he jumps directly to that page. But now the user has the problem that the search term isn't highlighted on the page, so he has to look for it on the page on his own.

    What we want is a way to highlight the search term on the specific page in the PDF. The open parameters for Acrobat Reader allow either searching a PDF document (with hit highlighting) OR jumping to a specific page, but the combination of both parameters - which is what we would need - doesn't work. Does anyone have an idea how jumping to a page and highlighting a search term in a PDF document could work together? I had a look at the Acrobat SDK but don't see how we can use it (it's terribly documented). Cheers, Helmut
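
    For reference, the open parameters being combined look roughly like this (the URL and search term are made up; as far as I recall Adobe's "Parameters for Opening PDF Files" notes, parameters after the # are joined with &):

        http://example.com/docs/manual.pdf#page=12
        http://example.com/docs/manual.pdf#search="invoice"
        http://example.com/docs/manual.pdf#page=12&search="invoice"    (the combination that doesn't behave as hoped)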

    Read the article

  • Hiring my first employee

    - by Ady
    A few years ago I moved to a new job, having been programming for 2 years using C#; however, this new company was mainly using VB6. I made the case for .NET and won, but one of the concessions I had to make was to use VB.NET and not C# (understandable, as most of the other developers were already using VB). Three years later it was time to move on, but when applying for jobs I couldn't get past the recruitment agents. I realised that when they were looking at the basic requirements (5 years of experience), they could not add 2 and 3 together to make 5. They were looking for 5 years in VB or C#, not across both. Frustrated, I decided to combine my skills with a designer friend and start my own company. After two years of hard graft we are now looking for our first employee (a programmer), and this question has hit me again, but now I see the employer's perspective: why take the risk of someone getting up to speed when you have thousands of applicants to choose from? So my question is this: if I define the requirements too narrowly, I could miss the really great candidates, but if they are too broad it's going to take ages to go through them all. This will be our first employee, so the choice needs to be good; I can't afford to make a mistake and employ someone naff. Another option would be to choose a bright university graduate and train them up (less of a risk because we can pay them less). What have others done in this situation, and what would you recommend I do?

    Read the article

  • Yet another "What is this code doing"-type of Perl code

    - by Mike
    I have inherited some code from a guy whose favorite pastime was to shorten every line to its absolute minimum (and sometimes only to make it look cool). His code is hard to understand, but I have managed to understand (and rewrite) most of it. Now I have stumbled on a piece of code which, no matter how hard I try, I cannot understand.

        my @heads = grep {s/\.txt$//} OSA::Fast::IO::Ls->ls($SysKey,'fo','osr/tiparlo',qr{^\d+\.txt$}) || ();
        my @selected_heads = ();
        for my $i (0..1) {
            $selected_heads[$i] = int rand scalar @heads;
            for my $j (0..@heads-1) {
                last if (!grep $j eq $_, @selected_heads[0..$i-1]);
                $selected_heads[$i] = ($selected_heads[$i] + 1) % @heads; #WTF?
            }
            my $head_nr = sprintf "%04d", $i;
            OSA::Fast::IO::Cp->cp($SysKey,'',"osr/tiparlo/$heads[$selected_heads[$i]].txt","$recdir/heads/$head_nr.txt");
            OSA::Fast::IO::Cp->cp($SysKey,'',"osr/tiparlo/$heads[$selected_heads[$i]].cache","$recdir/heads/$head_nr.cache");
        }

    From what I can understand, this is supposed to be some kind of randomizer, but I have never seen a more complex way to achieve randomness. Or are my assumptions wrong? At least, that's what this code is supposed to do: select 2 random files and copy them.

    === NOTES === The OSA framework is a framework of our own. Its modules are named after their UNIX counterparts and do some basic testing so that the application does not need to bother with that.
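
    As a point of comparison (not part of the original code), picking two distinct random indices is usually written with a shuffle rather than the probe-and-bump loop above; a minimal Perl sketch, assuming @heads has at least two entries:

        use List::Util qw(shuffle);

        # take the first two indices of a shuffled index list
        my @selected_heads = (shuffle 0 .. $#heads)[0, 1];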

    Read the article

  • Tips for fixing bad coding/dev habits?

    - by dfafa
    I want to become a better coder, so I have decided to sign up for a computing science program - maybe a formal education can assist me. I started working on smaller projects to learn, but currently I have really bad coding/dev habits which are hindering my productivity as the codebase increases. I have highlighted them below; perhaps someone could make suggestions (or point me to resources) or a more efficient method. Most stuff that I made in the past were web apps.

    - I usually develop with PuTTY + nano... I just love the minimalist feel.
    - I use WinSCP and develop directly on my private web server... too lazy to do it on localhost and upload it later.
    - I don't use version control... which one do I need? Sometimes Ctrl+Z doesn't work well.
    - When I run out of ideas for naming variables, I use swear words instead.
    - I swear a lot when I get stuck... how do I deal with the anger issue?
    - My code looks ugly, with comments everywhere.
    - I would rather use procedural coding; I find "thinking" in OO difficult and time consuming.
    - I "write first, think later" and refactor code only if I am getting paid for it.
    - I dislike configuring Linux distros, Apache, MySQL, scaling, designing graphics and layouts.
    - I do not like writing tests.
    - I like working alone and do not like sharing code.
    - I have an econ degree.
    - I dislike reading other people's code and would rather write it on my own.

    It seems my only true desire is to translate my ideas into a working prototype as fast as possible... it seems like I am very uninterested in the other details. Could it be that I am not cut out to be a coder after all? Is going back to study comp sci a bad idea?

    Read the article

  • Architecture for new ASP.NET web application

    - by Anders Abel
    I'm maintaining an application which currently is just a web service (built with WCF) and a database backend. The web service is built in layers, with a LINQ-to-SQL data access part and core functionality in its own assembly, and on top of that the web service assembly which contains the WCF code. The core assembly also handles all business logic rules (very few, actually). The customer now wants a web interface for the application instead of just accessing it through other applications which consume the web service. I'm quite lost on modern web application design, so I would like some advice on what architecture and frameworks to use for the web application. The web application will use the same core assembly with business rules and the same LINQ-to-SQL data access layer as the web service. Some concepts I've thought about are:

    - ASP.NET MVC
    - WebForms
    - AJAX controls - possibly letting the AJAX controls access the existing web service through JSON

    Are there any more concepts I should look into? Which one is the best for a fresh project? The development tools are Visual Studio 2008 Team Edition for Developers targeting .NET 3.5. An upgrade to Visual Studio 2010 Premium (or maybe even Ultimate) is possible if it gives any benefits.

    Read the article

  • Filtering most out of XML with XSL?

    - by Gnudiff
    I need to transform a lot of XML files (Fedora export) into a different kind of XML. I am trying to do it with XSL stylesheets and checking the result with the msxsl transformer. Suppose I have an XML file like this (assuming there are actually other nodes inside AAA, OBJ, and all the other nodes), Source.xml:

        <DOC>
          <AAA>
            <STUFF>example</STUFF>
            <OBJ>
              <OBJVERS id="A1" CREATED="2008-02-18T13:28:08.245Z"/>
              <OBJVERS id="A2" CREATED="2008-02-19T10:42:41.965Z"/>
              <OBJVERS id="A13" CREATED="2009-03-16T12:43:11.703Z"/>
            </OBJ>
          </AAA>
          <FFF/>
          <GGG/>
          <DDD>
            <FILE />
          </DDD>
        </DOC>

    which I need to turn into something like this (Target.xml):

        <MYOBJ>
          <ELEM>contents of the OBJVERS with the biggest id OR creation date (whichever is easier to do) go here</ELEM>
          <IMAGE>contents of the <FILE> node go here</IMAGE>
        </MYOBJ>

    The main problem I have, since I am new to XSL (and for this particular task do not have enough time to learn it properly), is that I can't understand how to tell the XSL processor not to process anything else; I keep getting output from other nodes such as <STUFF>, for example.

    Update: basically, I solved this problem in the meantime. I will post my own answer and close the question.
    Update 2: OK, Andrew's answer works too, so I am just accepting it. :)
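
    A minimal XSLT 1.0 sketch of such a transform (element names follow the sample above; the ELEM content is picked here by sorting OBJVERS on its CREATED attribute, one of the two options mentioned). Because the only template matches /DOC explicitly and never applies templates broadly, the built-in rules that leak text from nodes like STUFF never fire:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="xml" indent="yes"/>

          <xsl:template match="/DOC">
            <MYOBJ>
              <ELEM>
                <!-- pick the OBJVERS with the latest CREATED value -->
                <xsl:for-each select="AAA/OBJ/OBJVERS">
                  <xsl:sort select="@CREATED" order="descending"/>
                  <xsl:if test="position() = 1">
                    <xsl:copy-of select="."/>
                  </xsl:if>
                </xsl:for-each>
              </ELEM>
              <IMAGE>
                <!-- copy whatever the FILE node contains -->
                <xsl:copy-of select="DDD/FILE/node()"/>
              </IMAGE>
            </MYOBJ>
          </xsl:template>
        </xsl:stylesheet>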

    Read the article

  • How do programming languages bind identifiers to functions

    - by sub
    I'm talking about C and/or C++ here, as these are the only languages I know of that are used for writing interpreters where the following could be a problem: if we have an interpreted language X, how can a library written for it add functions to the language which can then be called from within programs written in that language? PHP example: substr( $str, 5, 10 ); How is the function substr added to the "function pool" of PHP so it can be called from within scripts? It is easy for PHP to store all registered function names in an array and search through it as a function is called in a script. However, as there obviously is no eval in C(++), how can the function then be called? I assume PHP doesn't have 100MB of code like:

        if( identifier == "substr" ) {
            return PHP_SUBSTR(...);
        } else if( ... ) {
            ...
        }

    Ha ha, that would be pretty funny. I hope you have understood my question so far. How do interpreters solve this problem? How can I solve this for my own experimental toy interpreter?
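
    Interpreters generally keep a lookup table that maps each registered name to a function pointer, filled in when a module registers its functions, rather than a giant if/else chain. A toy C sketch of the idea (all names invented for illustration; a real engine would use a hash table instead of a linear scan):

        #include <stddef.h>
        #include <stdio.h>
        #include <string.h>

        typedef double (*builtin_fn)(double a, double b);

        static double builtin_add(double a, double b) { return a + b; }
        static double builtin_mul(double a, double b) { return a * b; }

        /* the "function pool": name -> function pointer */
        struct builtin { const char *name; builtin_fn fn; };

        static const struct builtin registry[] = {
            { "add", builtin_add },
            { "mul", builtin_mul },
        };

        static builtin_fn lookup(const char *name)
        {
            for (size_t i = 0; i < sizeof registry / sizeof registry[0]; i++)
                if (strcmp(registry[i].name, name) == 0)
                    return registry[i].fn;
            return NULL;
        }

        int main(void)
        {
            builtin_fn f = lookup("add");   /* resolving the identifier at run time */
            if (f)
                printf("%g\n", f(2, 3));    /* prints 5 */
            return 0;
        }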

    Read the article

  • How to inherit from a non-prototype object

    - by Andres Jaan Tack
    The node-binary binary parser builds its object with the following pattern:

        exports.parse = function parse (buffer) {
            var self = {...}
            self.tap = function (cb) {...};
            self.into = function (key, cb) {...};
            ...
            return self;
        };

    How do I inherit my own, enlightened parser from this? Is this pattern designed intentionally to make inheritance awkward? My only successful attempt thus far at inheriting all the methods of binary.parse(<something>) is to use _.extend as:

        var clever_parser = function(buffer) {
            if (this instanceof clever_parser) {
                this.parser = binary.parse(buffer); // I guess this is super.constructor(...)
                _.extend(this.parser, this);        // Really?
                return this.parser;
            } else {
                return new clever_parser(buffer);
            }
        }

    This has failed my smell test, and that of others. Is there anything about this that makes it dangerous?
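
    For what it's worth, one alternative sketch: rather than inheriting, wrap the object the factory returns and delegate to it (cleverParse and the logging inside into are made up for illustration). Note the catch: the library's own chainable methods close over its internal self and so keep returning the inner object, which is exactly what makes this factory pattern awkward to extend:

        var binary = require('binary');

        function cleverParse(buffer) {
            var inner = binary.parse(buffer);   // the object the factory builds
            var self = Object.create(inner);    // unknown properties fall through to inner

            // override/extend behaviour, deferring to the wrapped object
            self.into = function (key, cb) {
                console.log('entering', key);           // extra behaviour
                return inner.into.call(inner, key, cb); // then the original
            };

            return self;
        }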

    Read the article

  • How to best configure a central repository/multiple central repositories for Mercurial?

    - by Mario
    I am new to Mercurial and trying to figure out if it could replace SVN. Everyone I work with has used SVN, CVS and VSS (shiver), so this could be quite a large change. I have been very interested after reading about its merge and branch capability, but have a few reservations. We are currently on SVN and have one central repository. From my reading, it seems as though there is no ONE central repository for all projects when using Mercurial. NOTE: we consider each project a separate logical set of code, or a Visual Studio solution; it runs on its own. We have around 60 separate projects in our one central SVN repository. After reading about Mercurial, it seems to me that I have to create 60 separate central repositories on the server, one for each of these projects.

    Question #1: Should I create a single repository for each project? If yes, then I am worried about configuring and hosting 60 separate central Mercurial servers. I started thinking I could configure one file, but it seems as if each repository must be individually configured using the "C:\...\MyRepository\.hg\hgrc" file (Windows install). It also seems as if I have to run 60 servers (hg serve), I would assume on different ports.

    Question #2: If the answer to question 1 is yes, there should be a single central repository for each project, then how have people managed many repositories?

    Finally, I haven't looked into moving all history and changes from one SVN repository to a bunch of separate Mercurial repositories, but I would appreciate any comments from someone who has done this (or if it is even possible).
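
    On the hosting side, a single hgweb/hgwebdir front end can publish every repository found under a directory from one process and one port, driven by one config file; a hedged sketch (paths are illustrative, and the exact front-end wiring depends on whether it runs as CGI, WSGI or behind hg serve):

        # hgweb.config
        [paths]
        / = /srv/hg/repos/*      # publish every repository under this directory

        [web]
        allow_push = *
        push_ssl = true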

    Read the article

  • Reducing Time Complexity in Java

    - by Koeneuze
    Right, this is from an older exam which I'm using to prepare for my own exam in January. We are given the following method:

        public static void Oorspronkelijk() {
            String bs = "Dit is een boodschap aan de wereld";
            int max = -1;
            char let = '*';
            for (int i = 0; i < bs.length(); i++) {
                int tel = 1;
                for (int j = i + 1; j < bs.length(); j++) {
                    if (bs.charAt(j) == bs.charAt(i)) tel++;
                }
                if (tel > max) {
                    max = tel;
                    let = bs.charAt(i);
                }
            }
            System.out.println(max + " keer " + let);
        }

    The questions are:

    1. What is the output? Since the code is just an algorithm to determine the most frequently occurring character, the output is "6 keer " (6 times space).
    2. What is the time complexity of this code? Fairly sure it's O(n²), unless someone thinks otherwise?
    3. Can you reduce the time complexity, and if so, how? Well, you can. I've received some help already and managed to get the following code:

        public static void Nieuw() {
            String bs = "Dit is een boodschap aan de wereld";
            HashMap<Character, Integer> letters = new HashMap<Character, Integer>();
            char max = bs.charAt(0);
            for (int i = 0; i < bs.length(); i++) {
                char let = bs.charAt(i);
                if (!letters.containsKey(let)) {
                    letters.put(let, 0);
                }
                int tel = letters.get(let) + 1;
                letters.put(let, tel);
                if (letters.get(max) < tel) {
                    max = let;
                }
            }
            System.out.println(letters.get(max) + " keer " + max);
        }

    However, I'm uncertain of the time complexity of this new code: is it O(n) because you only use one for loop, or does the use of the HashMap's get method make it O(n log n)? And if someone knows an even better way of reducing the time complexity, please do tell! :)
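
    For what it's worth, HashMap's get and put are amortized O(1) (O(log n) lookups would be a TreeMap), so the rewrite above is already O(n) on average. A variant that makes the O(n) bound obvious with no hashing at all counts into a plain array indexed by the char value (a sketch, trading 64K ints of memory for simplicity):

        // Single pass, O(n): one counter slot per possible char value.
        public static void CharCount() {
            String bs = "Dit is een boodschap aan de wereld";
            int[] counts = new int[Character.MAX_VALUE + 1];
            char best = bs.charAt(0);
            for (int i = 0; i < bs.length(); i++) {
                char c = bs.charAt(i);
                counts[c]++;
                if (counts[c] > counts[best]) {
                    best = c;
                }
            }
            System.out.println(counts[best] + " keer " + best);
        }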

    Read the article

  • Calculate an Internet (aka IP, aka RFC791) checksum in C#

    - by Pat
    Interestingly, I can find implementations of the Internet checksum in almost every language except C#. Does anyone have an implementation to share? Remember, the Internet Protocol specifies that: "The checksum field is the 16 bit one's complement of the one's complement sum of all 16 bit words in the header. For purposes of computing the checksum, the value of the checksum field is zero." More explanation can be found from Dr. Math. There are some efficiency pointers available, but that's not really a large concern for me at this point. Please include your tests! (Edit: valid comment regarding testing someone else's code - but I am going off of the protocol and don't have test vectors of my own, and I would rather unit test it than put it into production to see if it matches what is currently being used! ;-)

    Edit: Here are some unit tests that I came up with. They test an extension method which iterates through the entire byte collection. Please comment if you find fault in the tests.

        [TestMethod()]
        public void InternetChecksum_SimplestValidValue_ShouldMatch()
        {
            IEnumerable<byte> value = new byte[1]; // should work for any-length array of zeros
            ushort expected = 0xFFFF;
            ushort actual = value.InternetChecksum();
            Assert.AreEqual(expected, actual);
        }

        [TestMethod()]
        public void InternetChecksum_ValidSingleByteExtreme_ShouldMatch()
        {
            IEnumerable<byte> value = new byte[] { 0xFF };
            ushort expected = 0xFF;
            ushort actual = value.InternetChecksum();
            Assert.AreEqual(expected, actual);
        }

        [TestMethod()]
        public void InternetChecksum_ValidMultiByteExtrema_ShouldMatch()
        {
            IEnumerable<byte> value = new byte[] { 0x00, 0xFF };
            ushort expected = 0xFF00;
            ushort actual = value.InternetChecksum();
            Assert.AreEqual(expected, actual);
        }
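
    Here is a sketch of an extension method written so that it satisfies the three tests above. Note the assumptions baked in: bytes are paired high-byte-first and an odd trailing byte is zero-padded, which is what the expected values in the tests imply rather than anything the RFC dictates about byte order:

        using System.Collections.Generic;

        public static class ChecksumExtensions
        {
            // One's-complement sum of 16-bit words (RFC 791 / RFC 1071 style).
            public static ushort InternetChecksum(this IEnumerable<byte> data)
            {
                uint sum = 0;
                int high = -1;

                foreach (byte b in data)
                {
                    if (high < 0) { high = b; continue; }
                    sum += (uint)((high << 8) | b);   // pair bytes high-byte-first
                    high = -1;
                }
                if (high >= 0) sum += (uint)(high << 8);  // zero-pad an odd trailing byte

                while ((sum >> 16) != 0)                  // fold carries back into 16 bits
                    sum = (sum & 0xFFFF) + (sum >> 16);

                return (ushort)~sum;                      // one's complement of the sum
            }
        }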

    Read the article

  • Email Tracking - GMail

    - by Abs
    Hello all, I am creating my own email tracking system for email marketing tracking. I have been able to determine each person's email client by using the HTTP referrer, but for some reason Gmail does not send an HTTP_REFERER at all! So I am trying to find another way of identifying when Gmail requests a transparent image from my server. I get the following headers (print_r($_SERVER)):

        DOCUMENT_ROOT = /usr/local/apache/htdocs
        GATEWAY_INTERFACE = CGI/1.1
        HTTP_ACCEPT = */*
        HTTP_ACCEPT_CHARSET = ISO-8859-1,utf-8;q=0.7,*;q=0.3
        HTTP_ACCEPT_ENCODING = gzip,deflate,sdch
        HTTP_ACCEPT_LANGUAGE = en-GB,en-US;q=0.8,en;q=0.6
        HTTP_CONNECTION = keep-alive
        HTTP_COOKIE = __utmz=156230011.1290976484.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utma=156230011.422791272.1290976484.1293034866.1293050468.7
        HTTP_HOST = xx.xxx.xx.xxx
        HTTP_USER_AGENT = Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.237 Safari/534.10
        PATH = /bin:/usr/bin
        QUERY_STRING = i=MTA=
        REDIRECT_STATUS = 200
        REMOTE_ADDR = xx.xxx.xx.xxx
        REMOTE_PORT = 61296
        REQUEST_METHOD = GET

    Is there anything of use in that list? Or is there something else I can do to actually get the HTTP referrer? If not, how are other ESPs managing to find out whether Gmail was used to view an email? By the way, I'd appreciate it if we can hold back on whether this is ethical or not, as many ESPs do this already; I just don't want to pay for their service and want to do it internally. Thanks all for any implementation advice.

    Update: Just thought I would update this question and make it clearer in light of the bounty. I would like to find out when a user opens my email when it is sent to a Gmail inbox. Assume I have the usual transparent image tracking and the user does not block images. I would like to do this with the single request and the header details I get when the transparent image is requested.
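
    Since the referrer simply isn't sent, about all the pixel endpoint can do is log what does arrive: the per-recipient id from the query string (the i=MTA= above looks like a base64-encoded recipient id), the user agent and the address. A hedged PHP sketch of such an endpoint (log path and parameter handling are illustrative):

        <?php
        // Log the open using whatever the client sends, then serve a 1x1 GIF.
        $recipient = isset($_GET['i']) ? base64_decode($_GET['i']) : 'unknown';

        $line = sprintf("%s\t%s\t%s\t%s\n",
            date('c'),
            $recipient,
            $_SERVER['REMOTE_ADDR'],
            isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '-'
        );
        file_put_contents('/var/log/email-opens.log', $line, FILE_APPEND);

        header('Content-Type: image/gif');
        header('Cache-Control: no-cache, no-store, must-revalidate');
        echo base64_decode('R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7');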

    Read the article

  • A linked list with multiple heads in Java

    - by Emile
    Hi, I have a list in which I'd like to keep several head pointers. I've tried to create multiple ListIterators on the same list, but this forbids me from adding new elements to the list (see ConcurrentModificationException). I could create my own class, but I'd rather use a built-in implementation ;) To be more specific, here is an inefficient implementation of the two basic operations, and then the one which doesn't work:

        class MyList<E> {
            private int[] _heads;
            private List<E> _l;

            public MyList(int nbHeads) {
                _heads = new int[nbHeads];
                _l = new LinkedList<E>();
            }

            public void add(E e) {
                _l.add(e);
            }

            public E next(int head) {
                return _l.get(_heads[head++]); // ugly
            }
        }

        class MyList<E> {
            private Vector<ListIterator<E>> _iters;
            private List<E> _l;

            public MyList(int nbHeads) {
                _iters = new Vector<ListIterator<E>>(nbHeads);
                _l = new LinkedList<E>();
                for (ListIterator<E> iter : _iters)
                    iter = _l.listIterator();
            }

            public void add(E e) {
                _l.add(e);
            }

            public E next(int head) {
                // ConcurrentModificationException because of the add()
                return _iters.get(head).next();
            }
        }

    Emile

    Read the article

  • Our Flash Streaming Player Occasionally Stutters like a Skipping CD after a Period of Time

    - by Jonathan Fritz
    We offer a streaming player for a number of our clients, who are responsible for providing us with their own audio streams. We have written a very simple Flash player that can play all of the streams that we support (Icecast/SHOUTcast/Live365/MP3 over HTTP/etc.). Unfortunately, we have found that when listening, our player sometimes begins to stutter (like a skipping CD), sometimes after only 10 minutes and sometimes after an hour of listening. We have noticed this behaviour in Firefox on both Linux and Windows. Does anybody know anything about this problem? We know that Flash isn't ideal for infinite streams of audio, but it's about all that we can find that works on every platform out there. If anybody can suggest a solution to our problem, I'll be your friend forever. Here is a link to the live player: http://cr-jf.jfritz.02.dev.wecreate.com/streaming/player_v5/ Note that you'll need to test in a browser that isn't IE, because we use WMP in IE, and that the JavaScript on the page will cause the player to unload and reload once an hour because of memory issues. Because I can only put one hyperlink in a post, I'll add a link to the player source code as a comment. Thanks all!

    Read the article

  • Targeted Simplify in Mathematica

    - by Timo
    I generate very long and complex analytic expressions of the general form: (...something not so complex...)(...ditto...)(...ditto...)...lots... When I try to use Simplify, Mathematica grinds to a halt, I am assuming because it tries to expand the brackets and/or simplify across different brackets. The brackets, while containing long expressions, are easily simplified by Mathematica on their own. Is there some way I can limit the scope of Simplify to a single bracket at a time?

    Edit: some additional info and progress. Using the advice from you guys, I have now started using something in the vein of

        In[1]:= trouble = Log[(x + I y) (x - I y) + Sqrt[(a + I b) (a - I b)]];

        In[2]:= Replace[trouble, form_ /; (Head[form] == Times) :> Simplify[form], {3}]

        Out[2]= Log[Sqrt[a^2 + b^2] + (x - I y) (x + I y)]

    Changing Times to an appropriate head like Plus or Power makes it possible to target the simplification quite accurately. The problem/question that remains, though, is the following: Simplify will still descend deeper than the level specified to Replace, e.g.

        In[3]:= Replace[trouble, form_ /; (Head[form] == Plus) :> Simplify[form], {1}]

        Out[3]= Log[Sqrt[a^2 + b^2] + x^2 + y^2]

    simplifies the square root as well. My plan was to iteratively use Replace from the bottom up, one level at a time, but this clearly will result in a vast amount of repeated work by Simplify and ultimately result in exactly the same bogging down of Mathematica I experienced at the outset. Is there a way to restrict Simplify to a certain level (or levels)? I realize that this sort of restriction may not produce optimal results, but the idea here is to get something that is "good enough".

    Read the article

  • Refreshing Facebook session from an iframe application

    - by zombat
    I've got a Facebook iframe application that is completely external. By this I mean that once a user accesses the canvas URL to load the application, all the links in the iframe app go to my servers, and the canvas page never gets refreshed unless the user navigates to somewhere else on Facebook and comes back (or does a browser refresh). On the initial load of the app where Facebook creates the iframe, I get passed all the usual parameters like fb_sig_user which allows me to create an internal app session based on the facebook user. This app session (which is not the Facebook session, it's my own app session) is all I need to allow the user to work with the app. The problem comes an hour later. If the user leaves the computer, or uses the app for more than an hour, the Facebook session expires. There are some app pages which require fetching friend information, and once the FB session has expired, these pages break, throwing out errors such as "Error: Session key invalid or no longer valid". My question is whether there is a way to refresh the user's Facebook session from within an iframe application to keep it from expiring an hour later. Do any of the API calls do this? Is there a Facebook Connect trick to ping something? Is there any definitive method to keep it alive? I haven't been able to find any examples that specifically address this.

    Read the article

  • "string" != "string"

    - by Misiur
    Hi. I'm building a template system of my own. I want to change

        <title>{site('title')}</title>

    into execution of the function "site" with the parameter "title". Here's the relevant private function:

        private function replaceFunc($subject)
        {
            foreach ($this->func as $t) {
                $args = explode(", ", preg_replace('/\{'.$t.'\(\'([a-zA-Z,]+)\'\)\}/', '$1', $subject));
                $subject = preg_replace('/\{'.$t.'\([a-zA-Z,\']+\)\}/', call_user_func_array($t, $args), $subject);
            }
            return $subject;
        }

    Here's site:

        function site($what)
        {
            global $db;
            $s = $db->askSingle("SELECT * FROM ".DB_PREFIX."config");
            switch ($what) {
                case 'title':
                    return 'Title of page';
                    break;
                case 'version':
                    return $s->version;
                    break;
                case 'themeDir':
                    return 'lolmao';
                    break;
                default:
                    return false;
            }
        }

    I've tried to compare $what (which in this case is "title") with "title". The MD5 hashes are different, strcmp gives -1, and both "==" and "===" return false. What is wrong? (The type of $what is string. You can't change call_user_func_array into call_user_func, because later I'll be using multiple arguments.)
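
    A quick way to see what actually differs (just a debugging sketch, not part of the original code): dump the length and raw bytes of both values - stray quotes, whitespace or other invisible characters captured by the regex show up immediately:

        var_dump($what, strlen($what), bin2hex($what));
        var_dump('title', strlen('title'), bin2hex('title'));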

    Read the article

  • Why can't I wrap the ServletRequest when trying to capture JSP Output

    - by Patrick Cornelissen
    I am trying to dispatch from a servlet request handler to the JSP processor and capture the content of it. I am providing wrapper instances for the ServletRequest and ServletResponse; they implement the corresponding HttpServletRequest/-Response interfaces, so they should be drop-in replacements. All methods currently delegate to the original ServletRequest object (I am planning to modify some of them soon). Additionally I have introduced some new methods. (If you want to see the code: http://code.google.com/p/gloudy/source/browse/trunk/gloudyPortal/src/java/org/gloudy/gloudlet/impl/RenderResponseImpl.java) The HttpServletResponse wrapper uses its own output streams to capture the output. When I try to call

        request.getRequestDispatcher("/WEB-INF/views/test.jsp").include(request, response);

    with my request and response wrappers, the method returns and no content has been captured. When I tried to pass the original request object, it worked! But that's not what I need in the long run...

        request.getRequestDispatcher("/WEB-INF/views/test.jsp").include(request.getServletRequest(), response);

    This works; getServletRequest() returns the original request, as given by the servlet container. Does anyone know why this is not working with my wrappers?
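
    For comparison (a sketch, not the code from the linked project): the servlet API's own HttpServletRequestWrapper/HttpServletResponseWrapper base classes exist exactly for this kind of drop-in wrapping, and containers tend to cope better with them than with hand-rolled implementations of the interfaces. Capturing JSP output written via getWriter() might look roughly like this, with the wrapped response passed to include(...) and getCapturedOutput() read afterwards:

        import java.io.CharArrayWriter;
        import java.io.PrintWriter;
        import javax.servlet.http.HttpServletResponse;
        import javax.servlet.http.HttpServletResponseWrapper;

        // Captures everything the JSP writes through getWriter() into a buffer.
        // (Output sent through getOutputStream() would need a similar override.)
        public class CapturingResponse extends HttpServletResponseWrapper {

            private final CharArrayWriter buffer = new CharArrayWriter();
            private final PrintWriter writer = new PrintWriter(buffer);

            public CapturingResponse(HttpServletResponse response) {
                super(response);
            }

            @Override
            public PrintWriter getWriter() {
                return writer;
            }

            public String getCapturedOutput() {
                writer.flush();
                return buffer.toString();
            }
        }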

    Read the article

  • Can I stop the dbml designer from adding a connection string to the dbml file?

    - by drs9222
    We have a custom function AppSettings.GetConnectionString() which is always called to determine the connection string that should be used. How this function works is unimportant to the discussion; it suffices to say that it returns a connection string and I have to use it. I want my LINQ to SQL DataContext to use this, so I removed all connection string information from the dbml file and created a partial class with a default constructor like this:

        public partial class SampleDataContext
        {
            public SampleDataContext() : base(AppSettings.GetConnectionString())
            {
            }
        }

    This works fine until I use the designer to drag and drop a table onto the diagram. The act of dragging a table onto the diagram will do several unwanted things:

    - A Settings file will be created.
    - An app.config file will be created.
    - My dbml file will have the connection string embedded in it.

    All of this is done before I even save the file! When I save the diagram, the designer file is recreated and it will contain its own default constructor which uses the wrong connection string. Of course this means my DataContext now has two default constructors and I can't build any more! I can undo all of these bad things, but it is annoying: I have to manually remove the connection string and the new files after each change. Is there any way I can stop the designer from making these changes without asking?

    EDIT: The requirement to use AppSettings.GetConnectionString() was imposed on me rather late in the game. I used to use something very similar to what the designer generates for me. There are quite a few places that call the default constructor. I am aware that I could change them all to create the data context in another way (using a different constructor, static method, factory, etc.). That kind of change would only be slightly annoying, since it would only have to be done once. However, I feel that it is sidestepping the real issue: the dbml file and configuration files would still contain an incorrect, if unused, connection string, which at best could confuse other developers.

    Read the article

  • using mod-rewrite to redirect requests for jquery.js to GoogleAPI cache

    - by Aditya Advani
    Hi all, our Linux server with Apache 2.x and Plesk 8.x hosts a number of e-commerce websites. To take advantage of browser caching, we would like to use Google's hosted copy of jquery.js. Hence in the vhost.conf file of each site we can use the following rewrite rule:

        RewriteCond %{REQUEST_FILENAME} jquery.min.js [nc]
        RewriteRule . http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js [L]

    And in vhost_ssl.conf:

        RewriteCond %{REQUEST_FILENAME} jquery.min.js [nc]
        RewriteRule . https://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js [L]

    OK, these rules work fine in the individual vhost.conf files of each domain. However, we host over 200 domains; I would like the rules to work globally, but cannot seem to get them to work in the httpd.conf file. The challenges are the following:

    1. Get the RewriteRule to work in httpd.conf.
    2. Detect whether HTTPS is on, and if it is and the request is for a secure page, rewrite to ...
    3. Each individual domain will still have its own custom mod_rewrite rules. Which rules take precedence - global or per-domain? Do they combine?
    4. Is it OK if I have the "RewriteEngine On" directive in the global httpd.conf and then again in the vhost.conf?

    Please let me know what your suggestions are. Desperate for a solution to this problem.
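
    A hedged sketch of what a single shared rule set might look like - kept in one file that each vhost Includes, using the %{HTTPS} variable (available in Apache 2.2+) to pick the scheme; the pattern and flags here are illustrative and not tested on a Plesk setup:

        # shared-jquery-rewrite.conf, pulled into each vhost with an Include directive
        RewriteEngine On

        RewriteCond %{REQUEST_URI} jquery(\.min)?\.js$ [NC]
        RewriteCond %{HTTPS} =on
        RewriteRule .* https://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js [R=301,L]

        RewriteCond %{REQUEST_URI} jquery(\.min)?\.js$ [NC]
        RewriteCond %{HTTPS} !=on
        RewriteRule .* http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js [R=301,L]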

    Read the article

  • Does Github.com have to create a merge commit when you merge from a fork?

    - by Nishant
    I cloned the master and started doing my work. Due to permissions, I push my branch to my fork. I then send a pull request to the master, and someone with permission does the merge. I notice that GitHub.com creates a merge commit snapshot, which to me looks like just a diff of the entire changes - not actually necessary, but helpful in the sense that I can look at the merge commit alone to see the entire diff. I can see the same SHA hashes as on my own branch, hence it looks like the merge is an extra commit which probably isn't necessary, since it's a fast-forward?

        master            - a
        myfork (computer) - a -> b -> c
        myfork (github)   - a -> b -> c

    The pull request from myfork to master (which it says it can automatically merge) shows the entire diff, and then when I merge it, master shows up as a -> b -> c -> d. The d is a merge commit which I think is not really required, because it is a fast-forward? Can someone explain why this happens? I think this would be the same scenario if I rebased onto master after master had gone ahead, but that has not happened: master is still at a when I merge.
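
    As far as I know, the web UI's merge button always records a merge commit, even when a fast-forward would do; a fast-forward only happens if the merge is performed locally, along these lines (repository URL and branch name are placeholders):

        # on a clone that has push permission to the upstream repository
        git checkout master
        git fetch https://github.com/yourname/yourfork.git topic-branch
        git merge --ff-only FETCH_HEAD   # moves master straight to c, no extra commit d
        git push origin master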

    Read the article
