Search Results

Search found 13725 results on 549 pages for 'browser fingerprinting'.

Page 494/549

  • Sorting by some field and fetching whole tree from DB

    - by Niaxon
    Hello everyone, I am trying to build a file browser in tree form and am having trouble sorting it. I use PHP and MySQL for this. I've created a mixed (nested set + adjacency) table 'element' with the following fields: element_id, left_key, right_key, level, parent_id, element_name, element_type (enum: 'folder','file'), element_size. Let's not discuss right now whether it would be better to move the element information (name, type, size) into another table. The function that scans a specified directory and fills the table works correctly. Note that I add elements to the tree in a specific order: folders first, and only then files. After that I can easily fetch and display the whole table on the page using a simple query: SELECT * FROM element WHERE 1=1 ORDER BY left_key With the result of that query and another function I can generate the correct HTML (<ul><li>... and so on). Now back to the question (finally, huh?). I am struggling to add sorting functionality. For example, I want to order my result by size. Here I need to keep the whole hierarchy of the tree in mind, plus the rule: folders first, files later. I believe I can do that by generating a recursive query in PHP: SELECT * FROM element WHERE parent_id = {$parentId} ORDER BY element_type (so folders would be first), size (or name, for example) After that, for each result which is a folder, I would send another query to get its contents. It's also possible to fetch the whole tree by left_key and then sort it in PHP as an array, but I guess that would be worse :) I wonder if there is a better and more efficient way to do such a thing?

    Read the article

  • Redirecting a page to an ad page for five seconds, then redirecting on to the final page

    - by XcodeDev
    Hey, I am trying to redirect a page to another page, and that was working successfully. However, I am now trying to redirect the first page to a page with adverts, and that page should then redirect to the final page after five seconds. I am trying to do that like this: <?php include('ads.php'); ?> <?php sleep(2); $url = $_GET['url']; header("Location: ".$url.""); exit; ?> It is showing the advert in ads.php perfectly, but it is not redirecting after the delay. I am getting this error in my web browser: Warning: Cannot modify header information - headers already sent by (output started at /home/nucleusi/public_html/adverts/ads.php:1) in /home/nucleusi/public_html/adverts/index.php on line 7 A typical link I would be redirecting to would be this: http://nucleusiphone.com/adverts/index.php/?url=http%3A%2F%2Fitunes.apple.com%2Fmx%2Falbum%2Fstill-got-the-blues%2Fid14135178%3Fi%3D14135158 Thanks in advance. PS. I don't know any PHP, so any code helps! (A client-side workaround is sketched after this entry.)

    Read the article
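
A note on the redirect question above: header() can only send a redirect before any output has been produced, and including ads.php already starts the output, which is exactly what the warning says. One common workaround is to let the ad page redirect on the client side after the delay. The sketch below assumes the target URL is still passed in the ?url= query parameter, as in the example link; it is an illustration, not the poster's confirmed fix.

```javascript
// Hedged sketch: emit this <script> from index.php after including ads.php,
// instead of calling header("Location: ...") once output has already been sent.
(function () {
  // Read the ?url= parameter of the current page (it arrives URL-encoded)
  var match = window.location.search.match(/[?&]url=([^&]+)/);
  var target = match ? decodeURIComponent(match[1]) : null;
  if (target) {
    setTimeout(function () {
      window.location.href = target;   // redirect once the adverts have been shown
    }, 5000);                          // 5000 ms = the five-second delay from the question
  }
}());
```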

  • Spring Controller's URL request mapping not working as expected

    - by Atharva
    I have created a mapping in web.xml something like this: <servlet> <servlet-name>dispatcher</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>dispatcher</servlet-name> <url-pattern>/about/*</url-pattern> </servlet-mapping> In my controller I have something like this: import org.springframework.stereotype.Controller; @Controller public class MyController{ @RequestMapping(value="/about/us", method=RequestMethod.GET) public ModelAndView myMethod1(ModelMap model){ //some code return new ModelAndView("aboutus1.jsp",model); } @RequestMapping(value="/about", method=RequestMethod.GET) public ModelAndView myMethod2(ModelMap model){ //some code return new ModelAndView("aboutus2.jsp",model); } } And my dispatcher-servlet.xml has a view resolver like: <mvc:annotation-driven/> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver" p:viewClass="org.springframework.web.servlet.view.JstlView" p:prefix="/WEB-INF/jsp/" p:suffix=".jsp"/> To my surprise, the request .../about/us is not reaching myMethod1 in the controller; the browser shows a 404 error. I put a logger inside the method, but it isn't printing anything, meaning it's not being executed. .../about works fine! What can be done to make the .../about/us request work? Any suggestions?

    Read the article

  • Why Can't Businesses Upgrade their Browsers from IE6/IE7?

    - by viatropos
    I have read a lot these past few weeks about IE6, seeing if it is really that bad to get pages looking right in it. I only learned HTML and CSS this past year, so I've been spoiled by starting with basically CSS3 and HTML5, and I can do some really cool stuff very fast. I'm no IE6 master and I don't have years of experience with IE. So I thought it would take a little time to pick up all the IE6/7 hacks people have discovered and just implement them. But it's way harder than that (or maybe just way too much work). I'd have to either completely rebuild my design using "Internet Explorer 'Principles'", or cut out a lot of the neat things I can do using more recent technologies. For a million and one other reasons, everyone who builds things online seems to think IE should die. My question is: why can't businesses upgrade their browsers? When I work with businesses, they almost always resist the first time I ask, but five seconds later I'll show them what it looks like on my computer and talk about how great the latest stuff is (how much more secure later browsers are, all the famous IE security cases, how much smoother and faster the new browsers are, how the IE team has basically missed the boat entirely, how much more smoothly business processes run, etc.), and they get excited! And within a few seconds they're up and running with Chrome or something. So are there reasons some businesses cannot upgrade? What are those reasons? The main one I can think of is that they have an old version of Windows. But a) wasn't there a legal case about this? And b) somebody must have figured out how to install Chrome or Firefox on ancient versions of Windows by now.

    Read the article

  • Why should I abstract my data layer?

    - by Gazillion
    OOP principles were difficult for me to grasp because for some reason I could never apply them to web development. As I developed more and more projects I started understanding how some parts of my code could use certain design patterns to make them easier to read, reuse, and maintain, so I started to use them more and more. The one thing I still can't quite comprehend is why I should abstract my data layer. Basically, if I need to print a list of items stored in my DB to the browser, I do something along the lines of: $sql = 'SELECT * FROM table WHERE type = "type1"'; $result = mysql_query($sql); while($row = mysql_fetch_assoc($result)) { echo '<li>'.$row['name'].'</li>'; } I'm reading all these how-tos and articles preaching about the greatness of PDO, but I don't understand why. I don't seem to be saving any LoC, and I don't see how it would be more reusable, because all the functions that I call above just seem to be encapsulated in a class but do the exact same thing. The only advantage I'm seeing in PDO is prepared statements. I'm not saying data abstraction is a bad thing; I'm asking these questions because I'm trying to design my current classes correctly, and they need to connect to a DB, so I figured I'd do this the right way. Maybe I'm just reading bad articles on the subject :) I would really appreciate any advice, links, or concrete real-life examples on the subject!

    Read the article

  • Is there any possibility of rendering differences between Firefox 3 and 3.5, and IE 7 and 8, even if no CSS features specific to the newer versions are used?

    - by metal-gear-solid
    I'm making a site for a European client, and he said Firefox 3 and IE 7 and 8 have more users than other desktop browsers in Europe: http://gs.statcounter.com/#browser_version-eu-monthly-200812-201001-bar I only have IE 7 and Firefox 3.5.7 installed on my PC. Should I download portable Firefox 3.0 and test in it too, even if I'm not using any new CSS property/selector that is only supported in Firefox 3.5, or would testing in 3.5.7 be enough? And for IE, would testing in IE 7 be enough, or should I check my site in IE8 (downloading the VPC image of IE8 and testing in a VM) even if I'm not using any new CSS property/selector that is only supported in IE8? Or is it necessary to use <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" /> in <head>? But what will happen when the user switches compatibility mode to IE 8's default rendering mode? Can we make a site compatible with both IE 7 and 8 without using <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />? If yes, what in particular do we need to do or consider in the CSS to make the site identical in both?

    Read the article

  • Preventing anchor jumping on page load (been asked a dozen times, but no luck from what I've read)

    - by jasenmp
    I'm currently working with a WP theme that can be found here: sanjay.dmediastudios.com I'm using 'smooth scroll' on my page, and I'm attempting to have the page smoothly scroll to the requested section when coming from an external link (for instance, coming from the blog page takes you to sanjay.dmediastudios.com/#portfolio). From there I want the page to start at the top and THEN scroll to the portfolio section. What's happening is that it briefly displays the portfolio section (the anchor jump) and THEN resets to the top and scrolls down. It's driving me nuts :( Here is the code I'm using (one possible approach is sketched after this entry). Click function for smooth scroll: $(function() { $('.menu li a').click(function() { if (location.pathname.replace(/^\//, '') == this.pathname.replace(/^\//, '') && location.hostname == this.hostname) { var target = $(this.hash); target = target.length ? target : $('[name=' + this.hash.slice(1) + ']'); if (target.length) { $root.animate({ scrollTop: target.offset().top - 75 }, 800, 'swing'); return false; } } }); //end of click function }); The page load function: $(window).on("load", function() { if (location.hash) { // do the test straight away window.scrollTo(0, 0); // execute it straight away setTimeout(function() { window.scrollTo(0, 0); // run it a bit later also for browser compatibility }, 1); } var urlHash = window.location.href.split("#")[1]; if (urlHash && $('#' + urlHash).length) $('html,body').animate({ scrollTop: $('#' + urlHash).offset().top - 75 }, 800, 'swing'); }); Any help would be MUCH appreciated.

    Read the article
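
A possible approach for the anchor-jump question above (an assumption on my part, not the theme's confirmed fix): remove the hash from the address bar before the browser has a chance to perform its native jump, keep the value around, and animate to it once the page has loaded.

```javascript
// Run as early as possible (e.g. inline in <head>), before layout and before
// any load handlers: strip the fragment so the browser never jumps on its own.
var initialHash = window.location.hash;
if (initialHash && window.history && history.replaceState) {
  history.replaceState(null, '', window.location.pathname + window.location.search);
}

jQuery(window).on('load', function () {
  if (!initialHash) return;
  var target = jQuery(initialHash);            // e.g. jQuery('#portfolio')
  if (target.length) {
    jQuery('html, body').animate({
      scrollTop: target.offset().top - 75      // same 75px menu offset as in the question
    }, 800, 'swing');
  }
});
```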

  • Prompting for authentication from a wxPython program and passing it along to IIS?

    - by MetaHyperBolic
    I have a client (written in Python, with a wxPython front end in dead-simple wizard style) which communicates with a website running IIS. A Python script receives requests and does the usual client-server dance. I would have written this as a browser application, but for the requirement that certain things happen on the local PC that the web can't help with (file manipulation, interfacing with certain USB hardware, etc.). Right now, I am simply using the logon credentials, compounded as a string from os.environ['USERDOMAIN'] and os.environ['USERNAME'], to pass along to the server, which connects to Active Directory and enumerates the members of the group, looking for those logon credentials. It's an ugly hack, but it works. Obviously, I could make people log out of the generic helper accounts and log back into Windows using specific accounts. However, I wondered how feasible it would be to provide some kind of logon prompt wherein the user can type in a name and password, with some kind of authorization token then passed on to IIS. This seems like something I would not want to build myself, given that amateurs almost always make huge security mistakes. Now you can see why I wish this were purely web-based. What's a good way to handle this?

    Read the article

  • CakePHP 1.3: weird behavior in Firefox when using $this->Html->link ...

    - by ion
    Greetings, I am getting a very weird and unpredictable result in Firefox when using the following syntax: $this->Html->link($this->Html->div('p-cpt',$project['Project']['name']) . $this->Html->div('p-img',$this->Html->image('/img/projects/'.$project['Project']['slug'].'/project.thumb.jpg', array('alt'=>$project['Project']['name'],'width'=>100,'height'=>380))),array('controller' => 'projects', 'action' => 'view', $project['Project']['slug']),array('title' => $project['Project']['name'], 'escape' => false),false); OK, I know it is big, but bear with me. The point is to get the following output: <a href="x" title="x"> <div class="p-ctp">Name</div> <div class="p-img"><img src="z" width="y" height="a" alt="d" /></div> </a> I'm not sure if this validates correctly, both in CakePHP and in HTML, but it works everywhere else apart from Firefox. You can actually see the result here: http://www.gnomonconstructions.com/projects/browser To reproduce the result, use the form with different categories and press search. At some point it will happen!! Although most of the time it renders the way it should, sometimes it produces invalid output like this: <a href="x" title="x"></a> <div class="p-cpt"> <a href="x" title="x">name</a> </div> <div class="p-img"> <a href="x" title="x"><img src="x" width="x" height="x" alt="x" /></a> </div> It looks like it repeats the link inside each element. To be honest, the only reason I used this syntax is that CakePHP encourages it. Any help will be much appreciated :)

    Read the article

  • Getting hover text with selenium in java

    - by BinaryEmpire
    I am trying to figure out how to get the product availability text from a page like http://www.walmart.com/browse/TV-Video/TVs/_/N-96v3? (once a store has been selected). I selected 76574 as my zip code and went to the "In My Store" tab. The code I have now is: WebElement hoverElement = driver.findElement(By.xpath(".//*[@id='Body_15992428']/span")); WebElement hidden = driver.findElement(By.xpath(".//*[@id='slapInfo_NoVariant_15992428']/div")); Actions builder = new Actions(driver); builder.clickAndHold(hoverElement).build().perform(); System.out.println(hidden.getText()); Edit: I tried profile.setEnableNativeEvents(false); and the text is now displayed in the automated browser window, but I still cannot get to the text I want. It does not throw an exception; it just displays nothing, because the driver thinks the element is still hidden. Does anyone know how to fix this? I keep getting Exception in thread "main" org.openqa.selenium.InvalidElementStateException: Cannot perform native interaction: Could not load native events component. even after I do profile.setEnableNativeEvents(true); Are there any other ways I can get the hidden text, or what am I doing wrong here? Additionally, while I was inspecting the page with Firebug, I saw this code: <script type="text/javascript"> WALMART.$(document).ready(function(){ WALMART.$('#Body_15992428').hover(function(){ WALMART.$('#SeeStoreAvailBubble').wmBubble('update',WALMART.$('#bubbleMsgUpdate_15992428').html()); }); }); </script> I don't really know how to work directly with JavaScript, but is there any way of getting the message text directly from that with a JavaScript executor?

    Read the article

  • Why null reference exception in SetMolePublicInstance?

    - by OldGrantonian
    I get a "null reference" exception in the following line: MoleRuntime.SetMolePublicInstance(stub, receiverType, objReceiver, name, null); The program builds and compiles correctly. There are no complaints about any of the parameters to the method. Here's the specification of SetMolePublicInstance, from the object browser: SetMolePublicInstance(System.Delegate _stub, System.Type receiverType, object _receiver, string name, params System.Type[] parameterTypes) Here are the parameter values for "Locals": + stub {Method = {System.String <StaticMethodUnitTestWithDeq>b__0()}} System.Func<string> + receiverType {Name = "OrigValue" FullName = "OrigValueP.OrigValue"} System.Type {System.RuntimeType} objReceiver {OrigValueP.OrigValue} object {OrigValueP.OrigValue} name "TestString" string parameterTypes null object[] I know that TestString() takes no parameters and returns string, so as a starter to try to get things working, I specified "null" for the final parameter to SetMolePublicInstance. As already mentioned, this compiles OK. Here's the stack trace: Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object. at Microsoft.ExtendedReflection.Collections.Indexable.ConvertAllToArray[TInput,TOutput](TInput[] array, Converter`2 converter) at Microsoft.Moles.Framework.Moles.MoleRuntime.SetMole(Delegate _stub, Type receiverType, Object _receiver, String name, MoleBindingFlags flags, Type[] parameterTypes) at Microsoft.Moles.Framework.Moles.MoleRuntime.SetMolePublicInstance(Delegate _stub, Type receiverType, Object _receiver, String name, Type[] parameterTypes) at DeqP.Deq.Replace[T](Func`1 stub, Type receiverType, Object objReceiver, String name) in C:\0VisProjects\DecP_04\DecP\DeqC.cs:line 38 at DeqPTest.DecCTest.StaticMethodUnitTestWithDeq() in C:\0VisProjects\DecP_04\DecPTest\DeqCTest.cs:line 28 at Starter.Start.Main(String[] args) in C:\0VisProjects\DecP_04\Starter\Starter.cs:line 14 Press any key to continue . . . To avoid the null parameter, I changed the final "null" to "parameterTypes" as in the following line: MoleRuntime.SetMolePublicInstance(stub, receiverType, objReceiver, name, parameterTypes); I then tried each of the following (before the line): int[] parameterTypes = null; // if this is null, I don't think the type will matter int[] parameterTypes = new int[0]; object[] parameterTypes = new object[0]; // this would allow for various parameter types All three attempts produce a red squiggly line under the entire line for SetMolePublicInstance Mouseover showed the following message: The best overloaded method match for 'Microsoft.Moles.Framework.Moles.MoleRuntime.SetMolePublicInstance(System.Delegate, System.Type, object, string, params System.Type[])' has some invalid arguments. I'm assuming that the first four arguments are OK, and that the problem is with the params array.

    Read the article

  • JavaScript function working strangely on the first call in Chrome?

    - by Sohil
    Hi all, the JavaScript code below works fine in all browsers, including Chrome from the second call onwards. function call(val){ url = window.location.href; indexnum = url.lastIndexOf("/"); str = url.slice(indexnum+1); window.location.href = url.replace(str, "sample.php?src_q=") + val; } I am calling this function on the onclick of a link, as below: <?php echo "<a href='#' onclick='javascript:call(\"$fieldvalue\");'>$fieldvalue</a>" ?> Normal behaviour: in every browser, after clicking on the link the newly formed URL is url://localhost/mysite/sample.php?src_q=val Strange behaviour: when I click on the link for the first time in Chrome, the value of the variable val gets replaced by the URL and its value, as follows: http://localhost/mysite/sample.php?src_q=http://localhost/mysite/val This strange behaviour happens on the first click in Chrome; from the second call onwards in the same tab, the value of the variable val works fine and I get the desired URL. I tried to Google it, but couldn't find any explanation. Thanks in advance. (An alternative way of building the URL is sketched after this entry.)

    Read the article
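
A hedged note on the Chrome question above (a guess at a safer construction, not a confirmed diagnosis): url.replace(str, ...) behaves differently depending on what the last path segment of the current address happens to be on that particular load, and the undeclared variables become globals. Building the target URL explicitly avoids both issues.

```javascript
// Sketch: derive the base path once and append the query explicitly,
// instead of patching the current URL with replace().
function call(val) {
  var url = window.location.href;
  // Everything up to and including the last "/" of the current address
  var base = url.substring(0, url.lastIndexOf('/') + 1);
  // e.g. http://localhost/mysite/sample.php?src_q=<val>
  window.location.href = base + 'sample.php?src_q=' + encodeURIComponent(val);
}
```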

  • Is this method of static file serving safe in node.js? (potential security hole?)

    - by MikeC8
    I want to create the simplest node.js server to serve static files. Here's what I came up with: fs = require('fs'); server = require('http').createServer(function(req, res) { res.end(fs.readFileSync(__dirname + '/public/' + req.url)); }); server.listen(8080); Clearly this would map http://localhost:8080/index.html to project_dir/public/index.html, and similarly for all other files. My one concern is that someone could abuse this to access files outside of project_dir/public, something like this, for example: http://localhost:8080/../../sensitive_file.txt I tried this a little bit, and it wasn't working, but it seems my browser was removing the ".." itself, which leads me to believe that someone could still abuse my poor little node.js server. I know there are npm packages that do static file serving, but I'm actually curious to write my own here. So my questions are: Is this safe? If so, why? If not, why not, and what is the "right" way to do this? My one constraint is that I don't want an if clause for each possible file; I want the server to serve whatever files I throw in a directory. (A sketch of a safer variant follows this entry.)

    Read the article
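
On the static-file question above: browsers do normalize ".." out of the address bar, but nothing forces a client to be a browser; curl (with --path-as-is) or a raw socket can send ../ sequences untouched, so the check has to happen on the server. Below is a minimal sketch of one way to do it (an illustration, not the only correct approach): resolve the requested path against the public root and refuse anything that escapes it.

```javascript
var fs = require('fs');
var http = require('http');
var path = require('path');

var root = path.join(__dirname, 'public');

var server = http.createServer(function (req, res) {
  // Decode the URL and drop any query string before treating it as a file path
  var requested = decodeURIComponent(req.url.split('?')[0]);
  var filePath = path.normalize(path.join(root, requested));

  // Reject anything that resolves outside the public directory
  if (filePath !== root && filePath.indexOf(root + path.sep) !== 0) {
    res.statusCode = 403;
    return res.end('Forbidden');
  }

  fs.readFile(filePath, function (err, data) {
    if (err) {
      res.statusCode = 404;
      return res.end('Not found');
    }
    res.end(data);
  });
});

server.listen(8080);
```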

  • How to create an automatic mass form submitter (JavaScript/Ajax script) to be used on third-party websites

    - by Daniel
    I need a script that can handle the following task: take user data from my database and fill in and submit/post that data to forms located on third-party websites. So I want to know whether this is hard to create, or whether somebody knows of an existing script for mass form submission in PHP/JavaScript/Ajax. I run a dancers, hostess and model jobs website, and I would like to find a script which allows the girls to automatically submit to the forms of hundreds of other websites (third-party model agencies) using the model application info they previously specified on my website. 1) First, the girls fill out my agency's very detailed portfolio form; this way I get all the personal model info from them. 2) Second, I would like to allow the models to submit to 100 or more other model agencies' forms (I would find those websites beforehand, collect their field names and values, and with some script connect them to each girl's data already stored on my website, so it can be submitted). I would like to implement this on my WordPress website, where the girls have their portfolios, instead of on separate pages. I would like to offer this service especially to models. It should work like a directory submitter: the script knows the field names and values and fills the forms out itself, but I want it to run online, browser-side, where a girl only needs to fill out a captcha if there is one and click the "submit" button. After a successful submission it should offer the next form to submit. I would be very happy if you know the answer, or if you can redirect me to some article.

    Read the article

  • jQuery .load() call doesn't execute javascript in loaded html file

    - by Mike
    This seems to be a problem related to Safari only. I've tried 4 on mac and 3 on windows and am still having no luck. What I'm trying to do is load an external html file and have the Javascript that is embedded execute. The code I'm trying to use is this: $("#myBtn").click(function() { $("#myDiv").load("trackingCode.html"); }); trackingCode.html looks like this (simple now, but will expand once/if I get this working): <html> <head> <title>Tracking HTML File</title> <script language="javascript" type="text/javascript"> alert("outside the jQuery ready"); $(function() { alert("inside the jQuery ready"); }); </script> </head> <body> </body> </html> I'm seeing both alert messages in IE (6 & 7) and Firefox (2 & 3). However, I am not able to see the messages in Safari (the last browser that I need to be concerned with - project requirements - please no flame wars). Any thoughts on why Safari is ignoring the Javascript in the trackingCode.html file? Eventually I'd like to be able to pass Javascript objects to this trackingCode.html file to be used within the jQuery ready call, but I'd like to make sure this is possible in all browsers before I go down that road. Thanks for your help!

    Read the article

  • How to create an HTTPS proxy?

    - by davidshen84
    Hi, I want to implement a simple SSL web proxy, and I do not want to deal with the network connection handling myself. So I think I can use a web server (like Apache) to help me establish the connection, with my program working like a CGI app on the web server that relays the browser's requests. Below is how I want to implement it: the client makes HTTP/HTTPS requests to the target web site, configured to use my HTTP/HTTPS proxy; Apache gets the request and uses a rewrite rule to redirect it to my CGI app; my app parses the request and makes the request to the real web site; my app gets the response from the real web site, then sends the response back to the client. Currently, HTTP requests seem to work, but HTTPS requests do not work at all. I tried to use curl to make a request to an HTTPS web site through my proxy, and the result is CONNECTION FAILED. My question is: will my idea work? If yes, how do I make the HTTPS requests work? (A sketch of the relevant mechanism follows this entry.)

    Read the article
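
On the HTTPS proxy question above: for HTTPS, a proxy does not re-issue the request at all. The browser sends an HTTP CONNECT request ("CONNECT host:443"), the proxy opens a raw TCP connection to the target, and from then on it only relays encrypted bytes in both directions, because the TLS session is negotiated end to end between the browser and the target site. A CGI script sitting behind Apache cannot open and hold such a tunnel, which is consistent with curl reporting a failed connection. A minimal sketch of the tunnelling step is shown below (Node.js is used purely for illustration; it is not part of the original setup).

```javascript
var http = require('http');
var net = require('net');

var proxy = http.createServer(function (req, res) {
  // Plain HTTP requests could be forwarded here, much as the existing CGI app does
  res.writeHead(501);
  res.end('Only CONNECT is handled in this sketch');
});

// The browser sends "CONNECT host:port HTTP/1.1" for HTTPS targets
proxy.on('connect', function (req, clientSocket, head) {
  var parts = req.url.split(':');                         // req.url looks like "example.com:443"
  var serverSocket = net.connect(parseInt(parts[1], 10) || 443, parts[0], function () {
    clientSocket.write('HTTP/1.1 200 Connection Established\r\n\r\n');
    serverSocket.write(head);
    // From here on the proxy just relays opaque, encrypted bytes
    serverSocket.pipe(clientSocket);
    clientSocket.pipe(serverSocket);
  });
});

proxy.listen(8080);
```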

  • Determine mouse click for all screen resolutions

    - by Hallik
    I have some simple JavaScript that determines where a click happens within the browser: var clickDoc = (document.documentElement != undefined && document.documentElement.clientHeight != 0) ? document.documentElement : document.body; var x = evt.clientX; var y = evt.clientY; var w = clickDoc.clientWidth != undefined ? clickDoc.clientWidth : window.innerWidth; var h = clickDoc.clientHeight != undefined ? clickDoc.clientHeight : window.innerHeight; var scrollx = window.pageXOffset == undefined ? clickDoc.scrollLeft : window.pageXOffset; var scrolly = window.pageYOffset == undefined ? clickDoc.scrollTop : window.pageYOffset; params = '&x=' + (x + scrollx) + '&y=' + (y + scrolly) + '&w=' + w + '&random=' + Date(); All of this data gets stored in a DB. Later I retrieve it and display where all the clicks happened on that page. This works fine if I do all my clicks in one resolution and then display them back in the same resolution, but that is not the case: a wide range of resolutions can be used. In my test case I was clicking on the screen at a resolution of 1260x1080, retrieved all the data, and displayed it in the same resolution. But when I use a different monitor (I tried 1024x768 and 1920x1080), the marks shift to the wrong spot. My question is: if I am storing the width and height of the client and the x/y position of the click, and three different users, all with different screen resolutions, click on the same word, and a fourth user goes to view where all of those clicks happened, how can I plot the x/y positions correctly to show that everyone clicked in the same place, regardless of resolution? If this belongs in a better section, please let me know as well. (A sketch of one normalization approach follows this entry.)

    Read the article
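
For the click-plotting question above, since the client width and height are already stored with each click, one approach is to store clicks as fractions of the recorded page size and scale them back to the viewer's page size when replaying. This is a sketch under the assumption that the page layout scales with the viewport; a fixed-width layout would be better served by coordinates relative to a known element.

```javascript
// Recording side: turn absolute, scroll-adjusted pixels into 0..1 fractions
function normalizeClick(x, y, pageWidth, pageHeight) {
  return { relX: x / pageWidth, relY: y / pageHeight };
}

// Viewing side: scale the stored fractions back up to the current page size
function denormalizeClick(relX, relY, currentWidth, currentHeight) {
  return { x: Math.round(relX * currentWidth), y: Math.round(relY * currentHeight) };
}
```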

  • jQuery attr problem in Firefox

    - by Tomas
    Hello, I'm building a full-screen background-change system with jQuery. When you enter the site it shows a full-screen default background, and when you click a button the background should change. Everything works fine in Opera, but in Firefox nothing happens. I think the problem is with the attr function; please help me find it (one guess is sketched after this entry). You can see all of this at http://www.hiphopdance.lt $(document).ready(function(){ //default actions var now_img="images/bg.jpg"; resize(1600,900,"#bgimg",now_img); $(window).bind("resize", function() { resize(1600,900,"#bgimg"); }); //default actions end //clicks $('li#red').click(function(){ $("img#bgimg").attr({src:'http://www.hiphopdance.lt/images/redbg.jpg'}); resize(1024,683,"#bgimg"); $(window).bind("resize", function() { resize(1024,683,"#bgimg"); }); }); //end clicks //resize function start function resize(img_width,img_height,img_id) { var ratio = img_height / img_width; // Get browser window size var browserwidth = $(window).width(); var browserheight = $(window).height(); // Scale the image if ((browserheight/browserwidth) > ratio){ $(img_id).height(browserheight); $(img_id).width(browserheight / ratio); } else { $(img_id).width(browserwidth); $(img_id).height(browserwidth * ratio); } // Center the image $(img_id).css('left', (browserwidth - $(img_id).width())/2); $(img_id).css('top', (browserheight - $(img_id).height())/2); }; //resize function end });

    Read the article
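
A hedged guess about the Firefox entry above (not verified against the live site): the object form $("img#bgimg").attr({src: ...}) is valid jQuery, but the resize runs immediately, before the new image has loaded, and resizing an image whose dimensions aren't known yet can silently do nothing. Waiting for the image's load event before resizing is one thing worth trying.

```javascript
$('li#red').click(function () {
  $('img#bgimg')
    .one('load', function () {
      // Only resize once the new background has actually loaded
      resize(1024, 683, '#bgimg');   // same resize() helper as in the question
    })
    .attr('src', 'http://www.hiphopdance.lt/images/redbg.jpg');
});
```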

  • Optimize SQL connection?

    - by user1484035
    I am building a multi-page web project in HTML and JavaScript that is constantly reading from AND writing to a SQL database. I can connect to the database and successfully run my project with this type of connection: var connection = new ActiveXObject("ADODB.Connection") ; var connectionstring="Data Source=<server>;Initial Catalog=<catalog>;User ID=<user>; Password=<password>;Provider=SQLOLEDB"; connection.Open(connectionstring); var rs = new ActiveXObject("ADODB.Recordset"); rs.Open("SELECT * FROM table", connection); rs.MoveFirst while(!rs.eof) { document.write(rs.fields(1)); rs.movenext; } rs.close; connection.close; It works great and runs fine, BUT the first five lines (from var connection = to var rs =) cause the whole browser to freeze for a few seconds while it establishes the connection. I need to speed that up, since I am constantly connecting to the database throughout my project. Is there a more effective way of connecting to a SQL database, or is my computer just bad and this should run faster? (An asynchronous alternative is sketched after this entry.)

    Read the article
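
On the entry above: the ADODB ActiveX calls run synchronously on the browser's UI thread, so the page freezes for as long as Connection.Open takes, however fast the machine is. One way around that (a sketch, assuming a small server-side page can be added; getdata.asp below is hypothetical) is to run the query on the server and fetch the rows asynchronously, so the UI stays responsive.

```javascript
// Hypothetical endpoint /getdata.asp runs the SELECT and returns the rows as JSON.
function loadRows(onDone) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/getdata.asp', true);          // true = asynchronous, so no UI freeze
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onDone(JSON.parse(xhr.responseText));       // e.g. [{ "name": "..." }, ...]
    }
  };
  xhr.send();
}

loadRows(function (rows) {
  var list = document.getElementById('items');    // hypothetical <ul id="items"> on the page
  for (var i = 0; i < rows.length; i++) {
    list.innerHTML += '<li>' + rows[i].name + '</li>';
  }
});
```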

  • Choosing approach for an IM client-server app

    - by John
    Update: totally rewrote this to be more succinct. I'm looking at a new application, one part of which will be very similar to standard IM clients, i.e. text chat, the ability to send attachments, and maybe some real-time interaction like a multi-user whiteboard. It will be client-server, i.e. all traffic goes through my central server. That means if I want to support cross-communication with other IM systems, I am still free to pick any protocol for my own client<--server communication - my server can use XMPP or whatever to talk to other systems. Clients are expected to include desktop apps, and probably browser-based clients as well, either through Flex/Silverlight or HTML/AJAX. I see 3 options for my own client-server communication layer: XMPP. The benefits are that clients already exist, as do open-source servers. However, it requires the most up-front research/learning and also appears as if it might raise legal issues due to the GPL. Custom sockets. A server app makes connections with the clients, allowing any text/binary data to be sent very fast. However, this approach requires building said server from scratch, and also makes a JS client tricky. Servlets (or a similar web server). Using a tried and tested Java web stack, clients send HTTP requests, similar to AJAX-based websites. The benefit is that the server is easy to write using well-established technologies, and easy to talk to. But what restrictions would this bring? Is it an appropriate technology for real-time communication? Advice and suggestions are welcome, especially on the pros and cons of using a web-server approach as compared to a socket-based approach.

    Read the article

  • How to retrieve content via .load() or $.get() with this line

    - by Sin
    Hello :) I posted a question a day or two ago about how to retrieve PHP via an Ajax method in a modal I was using. I sort of found out the right way to go about it, but there's still something I'm not doing right (obviously, lol). Here's the section that's giving me issues: jQuery('div that holds content').fadeIn(200).css({ 'width': Number( popWidth ) }); $('').load('/something/somewhere/this #content'); So, I'm using Safari and a local server (MAMP). When I check activity in my browser, it shows that the content is loading with every click, AND the pop-up pops up, but with no content. When I simply retrieve the content via a hidden div, of course, I get it. This is what I'm trying to avoid: right now I have that div stashed as hidden in my footer, and I'd rather just make a call when it's needed, instead of loading it every single time a page is accessed. You can go here to see the whole script I posted in my last question: How to use ajax to show php in a modal pop up. Anyone have any idea? I read that .load() has the ability to grab specific content from a request, but I'm not sure of the major difference between that and $.get(); I've tried both, and I get the same results. I'm using WordPress, and WordPress's own Ajax requests run smoothly, so I know it's not a local problem, it's my coding lol. OK, I'm done typing :) (The difference between the two calls is sketched after this entry.)

    Read the article
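
On the question above, the practical difference between the two calls: .load() must be called on the element that should receive the content; it fetches the URL, and when a selector such as "#content" is appended after the URL it keeps only that fragment (script tags in the response are stripped in that case). $.get() just hands the raw response to a callback and leaves the filtering and inserting to you. A minimal sketch of both, with a placeholder URL and element IDs since the originals were elided in the question:

```javascript
// 1) .load(): fetch, filter to #content, and insert in one step
$('#modal-content').load('/ajax/portfolio-item/ #content', function () {
  $('#modal').fadeIn(200);
});

// 2) $.get(): fetch the whole page, filter it yourself, then insert
$.get('/ajax/portfolio-item/', function (html) {
  var fragment = $('<div>').html(html).find('#content');
  $('#modal-content').html(fragment);
  $('#modal').fadeIn(200);
});
```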

  • Google Drive API invalid_grant after removing access

    - by Sparafusile
    I have been writing a desktop application that uses the Google Drive API v2. I have the following code: var credential = GoogleWebAuthorizationBroker.AuthorizeAsync ( new ClientSecrets { ClientId = ClientID, ClientSecret = ClientSecret }, new[] { DriveService.Scope.Drive }, "user", CancellationToken.None ) .Result; this.Service = new DriveService( new BaseClientService.Initializer() { HttpClientInitializer = credential, ApplicationName = "My Test App", } ); var request = this.Service.Files.List(); request.Q = "title = 'foo' and trashed = false"; var result = request.Execute(); The first time I ran this code it opened a browser and asked me to grant permissions to the App, which I did. Everything worked successfully until I realized I was using the wrong Google account. At that point I logged into the wrong Google account and revoked access to my App. Now, whenever I run the same code it throws an exception: Error:"invalid_grant", Description:"", Uri:"" When I examine the service and request objects, it looks like the oauth_token isn't getting created any more. I know what I did to mess things up, but I can't figure out how to correct it so I can use a different Google account for testing. What do I need to do?

    Read the article

  • Need help/guidance about creating a desktop application with gui

    - by Somebody still uses you MS-DOS
    I'm planning to build a desktop application using Python, to learn some desktop concepts. I'm going to use GTK or Qt; I still haven't decided which one. The fact is, I would like to create an application that can be called from the command line AND used through a GUI, so it would be useful to command-line fans and GUI users alike. It would be interesting to create a web interface too in the future, so it could run on a server somewhere using an HTML interface created with a template language. I'm thinking about two approaches: creating a "model" with a simple interface which is called from a desktop/web implementation; or creating a "model" with an HTML interface and embedding a browser component, so I could reuse all the code in both desktop and web scenarios. My question is: exactly which concepts are involved in this project? What advantages and disadvantages does each approach have? Are both feasible? By "interface" I mean I'm planning to just write some interfaces.py files with function definitions. Is this a bad approach? I would like some book recommendations, or resources covering both options, or source code from projects which share the same GUI/cmd/web goals I'm after. Thanks in advance!

    Read the article

  • Convert asp.net application to windows forms app

    - by rogdawg
    I have written and deployed an ASP.NET application that is pretty complex. It uses XSL transformations to create web forms for a large variety of data objects; the data comes from the database as XML, via a web service. Now I need to create a Windows desktop application that will provide a small subset of the web application's functionality to a user who may not have access to the web (working in remote areas). I will provide the data syncing using the MS Sync Framework, and I will have the desktop app use a local data store. I would like to use the same XSLT files in the desktop app that I use in the web app for form creation, so that if changes are made, the desktop app can update itself when it connects and syncs its data. But I am wondering how to replicate the ASP.NET code-behind logic of my web app in the Windows forms. If I use a browser control to render the XSL transformation result, how can I handle click events, etc., in the form? Also, can I launch other windows as "dialog boxes" from my Windows forms (I do this in my web app using RadControls functionality)? Thanks for any advice you can give.

    Read the article

  • Windows Azure Worldwide availability

    - by Insomniac
    Hi, I've been reviewing the Windows Azure platform for some time and can't find the answer to one very important question: if I deploy my application in the cloud, how will it be reached from different places worldwide? For example, if I have a web application with a database and want it to be accessible to users in the UK, US, China, etc., can I be sure that any user in the world will get almost the same request-processing time? I think of it this way: 1. the user sends a request (navigates in the browser to my web site); 2. the request arrives in the cloud at the nearest location (the MS data center closest to the user?); 3. it is processed by an instance of my web application (in the nearest location, with a request to my centralized DB, which may be far away, but the SQL request goes via Microsoft's internal network, which I believe should be very fast); 4. the response is sent to the user. Please let me know if I'm wrong. Thanks.

    Read the article
