Search Results

Search found 11671 results on 467 pages for 'man pages'.

  • git pull crashes after another member pushes something

    - by naiad
    Here's the story: we have a GitHub account. I clone the repository, then I can work with it, commit things, push things, etc. I use Linux with the command line and git version 1.7.7.3. Then another user, working in Eclipse with the eGit 1.1.0 plugin, pushes something, and it appears on the GitHub web pages as the last commit. But when I try to pull:

        $ git pull
        remote: Counting objects: 13, done.
        remote: Compressing objects: 100% (6/6), done.
        remote: Total 9 (delta 2), reused 7 (delta 0)
        Unpacking objects: 100% (9/9), done.
        error: unable to find 3e6c5386cab0c605877f296642d5183f582964b6
        fatal: object 3e6c5386cab0c605877f296642d5183f582964b6 not found

    "3e6c5386cab0c605877f296642d5183f582964b6" is the hash of the last commit, made by the other user. There's no problem at all browsing it through the web page, but for me it's impossible to pull it. It's strange, because my command-line output mentions that commit hash, so git knows it is the latest commit in the GitHub system, but my git cannot pull it! Maybe the git protocol used in eGit is incompatible with console git 1.7.7.3...

  • Symfony + jQuery UI tabs navigation menu question

    - by Andy Poquette
    I'm trying to use the tabs purely as a navigation menu, loading my pages in the tabs, but being able to bookmark said tabs and have the address bar correspond with the action I'm executing. So here is my simple nav partial:

        <div id='tabs'>
          <ul>
            <li id='home'><a href="<?php echo url_for('post/index') ?>" title="Home">Home</a></li>
            <li id='test'><a href="<?php echo url_for('post/test') ?>" title="Test">Test</a></li>
          </ul>
        </div>

    And the simple tabs initializer:

        $(document).ready(function() {
            $('#tabs').tabs({ spinner: '' });
        });

    My layout just displays $sf_content. I want to display the content for the action I'm executing (currently home or test) in the tab, via Ajax, and have the tab be bookmarkable. I've googled this about 4000 times and I can't find an example of what I'm trying to do. If anyone has a resource they know of, I can look it up, or if you can help, I would greatly appreciate it. Thank you in advance. A sketch of one possible direction follows.
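
    (One possible direction, sketched under assumptions: the layout exposes a <div id="content"> target, navigation is hand-rolled with location.hash instead of the tabs widget's built-in Ajax mode, and a .selected CSS rule styles the active tab. None of this is confirmed against the real app:)

        $(document).ready(function () {
            // Load one action's HTML into the content area and mark its tab.
            function loadTab(name) {
                var link = name ? $('#tabs li#' + name + ' a') : $();
                if (!link.length) { link = $('#tabs li:first a'); } // default tab
                $('#content').load(link.attr('href'));   // Ajax in the action's output
                $('#tabs li').removeClass('selected');
                link.closest('li').addClass('selected'); // assumed .selected CSS rule
            }

            // Rewrite clicks so the address bar gets a bookmarkable #fragment.
            $('#tabs a').click(function () {
                var name = $(this).closest('li').attr('id');
                window.location.hash = name;
                loadTab(name);
                return false; // suppress the normal full page load
            });

            // Honor a bookmarked hash (e.g. /page#test) when first opened.
            loadTab(window.location.hash.replace('#', ''));
        });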

  • Positioning problem in Vista when Aero theme is enabled

    - by Adeel Rehman
    Hi, I have a Windows form which has tab pages. On a tab page, I have a combo box. When I click the combo box, a drop-down opens up just below it to give the impression of a drop-down. The drop-down is a Windows form, and this is how I set its position:

        Popup.Location = control.PointToScreen(parentcontrol.PointToClient(new System.Drawing.Point(0, control.Height)));

    where:

        Popup         = the drop-down that opens below the combo box (a Windows form)
        control       = my combo box (ComboBox)
        parentcontrol = the Windows form on which the control is present (the parent form)

    Problem: the X coordinate is not plotted correctly on the screen with the Aero theme enabled. This works perfectly fine on XP (with some variance in Y due to a TableLayoutPanel, I assume). But when I use the same code on Vista with the Aero theme enabled, the X coordinate wanders away by about 20-30 pixels. If I turn the Aero theme off on Vista, it works fine. I have found that the X and Y coordinates calculated in both cases are the same, but the way Vista draws these coordinates on the screen (when the Aero theme is enabled) is different. Is there any solution to this problem? Thanks, Adeel Rehman.

  • Slowdowns when reading from a URLConnection's InputStream (even with byte[] and buffers)

    - by user342677
    OK, so after spending two days trying to figure out the problem, and reading about a zillion articles, I finally decided to man up and ask for some advice (my first time here). Now to the issue at hand: I am writing a program which will parse API data from a game, namely battle logs. There will be A LOT of entries in the database (20+ million), so the parsing speed of each battle-log page matters quite a bit.

    The pages to be parsed look like this: http://api.erepublik.com/v1/feeds/battle_logs/10000/0 (see the source code if using Chrome; it doesn't display the page right). Each page has 1000 hit entries, followed by a little battle info (the last page will have <1000, obviously). On average, a page contains 175000 characters, UTF-8 encoding, XML format (v 1.0). The program will run locally on a good PC; memory is virtually unlimited (so creating a byte[250000] is quite OK). The format never changes, which is quite convenient. Now, I started off as usual:

        // global vars, class declaration skipped
        public WebObject(String url_string, int connection_timeout, int read_timeout,
                boolean redirects_allowed, String user_agent)
                throws java.net.MalformedURLException, java.io.IOException {
            // Open a URL connection
            java.net.URL url = new java.net.URL(url_string);
            java.net.URLConnection uconn = url.openConnection();
            if (!(uconn instanceof java.net.HttpURLConnection)) {
                throw new java.lang.IllegalArgumentException("URL protocol must be HTTP");
            }
            conn = (java.net.HttpURLConnection) uconn;
            conn.setConnectTimeout(connection_timeout);
            conn.setReadTimeout(read_timeout);
            conn.setInstanceFollowRedirects(redirects_allowed);
            conn.setRequestProperty("User-agent", user_agent);
        }

        public void executeConnection() throws IOException {
            try {
                is = conn.getInputStream(); // global var
                l = conn.getContentLength(); // global var
            } catch (Exception e) {
                // handling code skipped
            }
        }

        // getContentStream and getLength methods, which just return 'is' and 'l', are skipped

    Here is where the fun part began. I ran some profiling (using System.currentTimeMillis()) to find out what takes long and what doesn't. The call to this method takes only 200 ms on average:

        public InputStream getWebPageAsStream(int battle_id, int page) throws Exception {
            String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page;
            WebObject wobj = new WebObject(url, 10000, 10000, true, "Mozilla/5.0 "
                    + "(Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 "
                    + "Firefox/3.6.3 ( .NET CLR 3.5.30729)");
            wobj.executeConnection();
            l = wobj.getContentLength(); // global variable
            return wobj.getContentStream(); // returns the 'is' stream
        }

    200 ms is quite expected from a network operation, and I am fine with it. BUT when I parse the InputStream in any way (read it into a string / use a Java XML parser / read it into another ByteArrayInputStream), the process takes over 1000 ms!

    For example, this code takes 1000 ms IF I pass the stream I got ('is') above from getContentStream() directly to this method:

        public static Document convertToXML(InputStream is)
                throws ParserConfigurationException, IOException, SAXException {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            DocumentBuilder db = dbf.newDocumentBuilder();
            Document doc = db.parse(is);
            doc.getDocumentElement().normalize();
            return doc;
        }

    This code, too, takes around 920 ms IF the initial InputStream 'is' is passed in (don't read into the code itself - it just extracts the data I need by directly counting characters, which can be done thanks to the rigid API feed format):

        public static parsedBattlePage convertBattleToXMLWithoutDOM(InputStream is)
                throws IOException {
            // Point A
            BufferedReader br = new BufferedReader(new InputStreamReader(is));
            LinkedList ll = new LinkedList();
            String str = br.readLine();
            while (str != null) {
                ll.add(str);
                str = br.readLine();
            }
            if (((String) ll.get(1)).indexOf("error") != -1) {
                return new parsedBattlePage(null, null, true, -1);
            }
            // Point B
            Iterator it = ll.iterator();
            it.next(); it.next(); it.next(); it.next();
            String[][] hits_arr = new String[1000][4];
            String t_str = (String) it.next();
            String tmp = null;
            int j = 0;
            for (int i = 0; t_str.indexOf("time") != -1; i++) {
                hits_arr[i][0] = t_str.substring(12, t_str.length() - 11);
                tmp = (String) it.next();
                hits_arr[i][1] = tmp.substring(14, tmp.length() - 9);
                tmp = (String) it.next();
                hits_arr[i][2] = tmp.substring(15, tmp.length() - 10);
                tmp = (String) it.next();
                hits_arr[i][3] = tmp.substring(18, tmp.length() - 13);
                it.next();
                it.next();
                t_str = (String) it.next();
                j++;
            }
            String[] b_info_arr = new String[9];
            int[] space_nums = {13, 10, 13, 11, 11, 12, 5, 10, 13};
            for (int i = 0; i < space_nums.length; i++) {
                tmp = (String) it.next();
                b_info_arr[i] = tmp.substring(space_nums[i] + 4, tmp.length() - space_nums[i] - 1);
            }
            // Point C
            return new parsedBattlePage(hits_arr, b_info_arr, false, j);
        }

    I have tried replacing the default BufferedReader with

        BufferedReader br = new BufferedReader(new InputStreamReader(is), 250000);

    That didn't change much. My second try was to replace the code between A and B with

        Iterator it = IOUtils.lineIterator(is, "UTF-8");

    Same result, except this time A-B was 0 ms and B-C was 1000 ms, so every call to it.next() must have been consuming some significant time (IOUtils is from the apache-commons-io library).

    And here is the culprit: the time taken to parse the stream into strings, be it by an iterator or a BufferedReader, in ALL cases was about 1000 ms, while the rest of the code took 0 ms (i.e. was irrelevant). This means that parsing the stream into a LinkedList, or iterating over it, was for some reason eating up a lot of my system resources. The question was - why? Is it just the way Java is made? No, that's just stupid, so I did another experiment. In my main method, I added after getWebPageAsStream():

        // Point A
        ba = new byte[l]; // 'l' comes from wobj.getContentLength() above
        int offset = 0;
        while (offset < l) {
            // 'is' is the original URLConnection InputStream
            bytesRead = is.read(ba, offset, l - offset);
            if (bytesRead == -1) {
                break;
            }
            offset += bytesRead;
        }
        // Point B
        InputStream is2 = new ByteArrayInputStream(ba);
        // Now just working with 'is2' - the "copied" stream

    The InputStream-to-byte[] conversion took, again, 1000 ms - this is the way many people suggest reading an InputStream, and still it is slow. And guess what: the two parser methods above (convertToXML() and convertBattleToXMLWithoutDOM()), when passed 'is2' instead of 'is', took, in all four cases, under 50 ms to complete.

    I read a suggestion that the stream waits for the connection to close before unblocking, so I tried using HttpComponents HttpClient 4.0 (http://hc.apache.org/httpcomponents-client/index.html) instead, but the initial InputStream took just as long to parse. For example, this code:

        public InputStream getWebPageAsStream2(int battle_id, int page) throws Exception {
            String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page;
            HttpClient httpclient = new DefaultHttpClient();
            HttpGet httpget = new HttpGet(url);
            HttpParams p = new BasicHttpParams();
            HttpConnectionParams.setSocketBufferSize(p, 250000);
            HttpConnectionParams.setStaleCheckingEnabled(p, false);
            HttpConnectionParams.setConnectionTimeout(p, 5000);
            httpget.setParams(p);
            HttpResponse response = httpclient.execute(httpget);
            HttpEntity entity = response.getEntity();
            l = (int) entity.getContentLength();
            return entity.getContent();
        }

    took even longer to process (50 ms more, for just the network part), and the stream parsing times remained the same. Obviously it could be instantiated so as not to create the HttpClient and properties every time (faster network time), but the stream issue won't be affected by that.

    So we come to the central problem: why does the initial URLConnection InputStream (or HttpClient InputStream) take so long to process, while any stream of the same size and content created locally is orders of magnitude faster? I mean, the initial response is already somewhere in RAM, and I can't see any good reason why it is processed so slowly compared with when the same stream is simply created from a byte[]. Considering I have to parse millions of entries and thousands of pages like that, a total processing time of almost 1.5 s/page seems WAY too long. Any ideas?

    P.S. Please ask if any more code is required. The only thing I do after parsing is make a PreparedStatement and put the entries into JavaDB in packs of 1000+; the performance is OK at ~200 ms/1000 entries, and could probably be improved with more caching, but I didn't look into it much.

  • How can I use R (the RCurl/XML packages?) to scrape this webpage?

    - by Tal Galili
    Hi all, I have a (somewhat complex) web-scraping challenge that I wish to accomplish and would love some direction (to whatever level you feel like sharing). Here goes:

    I would like to go through all the "species pages" present in this link: http://gtrnadb.ucsc.edu/

    So for each of them I will go to:

      1. The species page link (for example: http://gtrnadb.ucsc.edu/Aero_pern/)
      2. And then to the "Secondary Structures" page link (for example: http://gtrnadb.ucsc.edu/Aero_pern/Aero_pern-structs.html)

    Inside that link I wish to scrape the data in the page, so that I will have a long list containing this data (for example):

        chr.trna3 (1-77) Length: 77 bp
        Type: Ala Anticodon: CGC at 35-37 (35-37) Score: 93.45
        Seq: GGGCCGGTAGCTCAGCCtGGAAGAGCGCCGCCCTCGCACGGCGGAGGcCCCGGGTTCAAATCCCGGCCGGTCCACCA
        Str: >>>>>>>..>>>>.........<<<<.>>>>>.......<<<<<.....>>>>>.......<<<<<<<<<<<<....

    Each line will have its own list (inside the list for each "trna", inside the list for each animal). I remember coming across the packages RCurl and XML (in R) that can allow for such a task, but I don't know how to use them. So what I would love to have is: 1. Some suggestions on how to build such code. 2. Recommendations on how to learn the knowledge needed for performing such a task. Thanks for any help, Tal

  • ASP.NET MVC on GoDaddy Not Working (Non-Primary Domain Deployment)

    - by JPrescottSanders
    I am trying to get ASP.NET MVC working on GoDaddy and I'm not having much luck. I have read the post on SO that covers the subject, but I must have a slightly different configuration or must be missing something along the way, because the main MVC page comes up but all links seem to fail, and no amount of tweaking the URLs seems to get it to work.

    A little background: I have a single hosting plan with many domains pointed to subfolders of the main domain. Basic ASP.NET Web Forms pages work just fine, but of course I wanted to try to host a sample MVC site in one of these non-primary domains. You can go to the URL here. As you can see, the first page comes up, but if you click on Home or About it doesn't work. Clicking on Home creates the link "http://www.jprescottsanders.com/jps/" and clicking on About creates the link "http://www.jprescottsanders.com/jps/Home/About". As you can see, "jps" sneaks in there; this of course is the subfolder that I place my web app files in.

    I would like to know if this is an MVC-related issue or a GoDaddy issue. I suspect that MVC may want to sit in the root directory of the site, and that when it puts "jps" into the URLs it breaks the routing mechanisms (but this is conjecture). I know Dan said this was possible, so I'm hoping he sees this and helps me get to the bottom of this deployment strategy for MVC.

  • best way to switch between secure and insecure connections without bugging the user

    - by Brian Lang
    The problem I am trying to tackle is simple. I have two pages: the first is a registration page where I take in a few fields from the user; once they submit, it takes them to another page that processes the data, stores it to a database, and, if successful, gives a confirmation message. Here is my issue: the data from the user is sensitive, so I'm using an HTTPS connection to ensure no eavesdropping. After the data is sent to the database, I'd like the confirmation page to do some nifty things like Google Maps navigation (this is for a time-reservation application). The problem is that by using the Google Maps API, I'd be linking to items through an insecure source, which in turn prompts the user with a nasty warning message. I've browsed around; Google has an alternative for enterprise clients, but it costs $10,000 a year. What I am hoping to find is a workaround: use a secure connection to take in the data, and after it is processed, bring the user to a page that isn't secure and allows me to use the Google Maps API. If any of you have a Netflix account, you can see exactly what I would like to do when you sign in: the sign-in is a secure page, which then takes you to your account/queue on an unsecured page. Any suggestions? Thanks!

  • Inheritance of list-style-type property in Firefox (bug in Firebug?)

    - by Marcel Korpel
    Let's have a look at some comments on a page generated by Wordpress (it's not a site I maintain; I'm just wondering what's going on here). As these pages might disappear in the near future, I've put some screenshots online. Here's what I saw: obviously, the list-item markers shouldn't be there. So I decided to look at the source using Firebug. As you can see, Firebug claims that the list-style property (containing none) is inherited from ol.commentlist. But if that's the case, why are the circle and the square visible? When checking the computed style, Firebug shows the list-style-type values correctly. What's the correct behaviour?

    I just did a quick check in Chromium, whose Web Inspector gave a better view of reality (the list-item markers were also displayed in this browser): according to WebKit, the list-style of ol.commentlist isn't inherited, only the default value of list-style-type from the rendering engine. So we may conclude that the output of both browsers is correct, and that Firefox (Firebug) shows an incorrect representation of inherited styles.

    What does the CSS specification say?

        Inheritance will transfer the list-style values from OL and UL elements to LI elements. This is the recommended way to specify list style information.

    Not much about the inheritance of ol properties to ULs. Is Firebug wrong in this respect? By the way, I managed to make the markers disappear by just changing line 312 of style.css to:

        ol.commentlist, li.commentlist, ul.children {

    When the list-style of ul.children is also explicitly set to none, the markers are not painted. You can have a look at screenshots of Firebug and WebKit's Web Inspector in this case, if you like.
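
    (A quick way to double-check which value actually wins, independently of how Firebug displays inheritance, is to query the computed style from a browser console. A small sketch; the selectors are assumptions based on the markup described above:)

        // Compare the computed list-style-type of the <ol> and a nested <ul>.
        var ol = document.querySelector('ol.commentlist');
        var ul = document.querySelector('ol.commentlist ul.children');

        [ol, ul].forEach(function (el) {
            if (!el) { return; } // the assumed selector did not match this page
            var computed = window.getComputedStyle(el, null);
            console.log(el.tagName, computed.getPropertyValue('list-style-type'));
        });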

  • Factory Method Implementation

    - by cedar715
    I was going through the 'Factory method' pages on SO and came across this link, and this comment. The example looked like a variant, and I thought I'd implement it in its original way: to defer instantiation to subclasses. Here is my attempt. Does the following code implement the Factory pattern of the example specified in the link? Please validate, and suggest whether it has to undergo any refactoring.

        public class ScheduleTypeFactoryImpl implements ScheduleTypeFactory {

            @Override
            public IScheduleItem createLinearScheduleItem() {
                return new LinearScheduleItem();
            }

            @Override
            public IScheduleItem createVODScheduleItem() {
                return new VODScheduleItem();
            }
        }

        public class UseScheduleTypeFactory {

            public enum ScheduleTypeEnum {
                CableOnDemandScheduleTypeID,
                BroadbandScheduleTypeID,
                LinearCableScheduleTypeID,
                MobileLinearScheduleTypeID
            }

            public static IScheduleItem getScheduleItem(ScheduleTypeEnum scheduleType) {
                IScheduleItem scheduleItem = null;
                ScheduleTypeFactory scheduleTypeFactory = new ScheduleTypeFactoryImpl();
                switch (scheduleType) {
                    case CableOnDemandScheduleTypeID:
                        scheduleItem = scheduleTypeFactory.createVODScheduleItem();
                        break;
                    case BroadbandScheduleTypeID:
                        scheduleItem = scheduleTypeFactory.createVODScheduleItem();
                        break;
                    case LinearCableScheduleTypeID:
                        scheduleItem = scheduleTypeFactory.createLinearScheduleItem();
                        break;
                    case MobileLinearScheduleTypeID:
                        scheduleItem = scheduleTypeFactory.createLinearScheduleItem();
                        break;
                    default:
                        break;
                }
                return scheduleItem;
            }
        }

  • Editing a User's Likes on Facebook

    - by Ed Marty
    I've been looking at the Facebook API to find some way to edit a user's Likes (that is, add or remove items from https://graph.facebook.com/me/likes/). The API doesn't say anything about it specifically, but does say this:

        You can publish to the Facebook graph by issuing HTTP POST requests to the appropriate connection URLs above.

    where one of the connection URLs is the aforementioned https://graph.facebook.com/me/likes link. However, there's no documentation for the PROFILE_ID/likes POST, and whenever I try to post, it returns the error "invalid post_id". I assume this is because to like something, you post a request to POST_ID/likes. It's a bit inconsistent.

    What I'm trying to do is get the user's profile to add a Page to their likes (by posting using the page's id as an "id" parameter in the post body). However, it seems like there's just no way to edit a user's likes. At the end of the day, I just want to allow a user to click a button in my application (a mobile device application, not a web app) and have them add our Facebook page to their list of pages, and I've found no way of doing that short of presenting our page to them and making them click the "Like" button manually. Many other things are supported without showing the Facebook website, like posting to their wall or making albums, but I can't find anything to do this. Any ideas?

  • Google Chrome Extension - Help needed

    - by Jim-Y
    I'm new to coding Google Chrome extensions, and I have some basic questions. I want to make a Chrome extension, and the scheme is the following:

      - a popup window, containing buttons and result fields (popup.html)
      - when a button is clicked, I want to trigger an event; this event should connect to a web server (I'll write the servlet too) and gather information from the server (XMLHttpRequest())
      - after that, I want my extension to load the gathered information into one of the result fields.

    Simple, isn't it? But I have several problems, right at the beginning. :( I started by reading tutorials, but the main structure of an extension is still foggy to me. For now, I started an app containing a popup.html, a manifest.json, and so on. In popup.html there's a result field and a button:

        <div id="extension_container">
          <div id="header">
            <p id="intro">Result here</p>
            <button type="button" id="button">Click Me!</button>
          </div> <!-- END header -->
          <div id="content">
          </div> <!-- END content -->
        </div> <!-- END extension_container -->

    When the button is clicked, I trigger an event, handled with jQuery, code here:

        <script>
          $(document).ready(function() {
            $("#button").click(function() {
              $("#intro").text("Hello, im added");
              alert("Clicked");
            });
          });
        </script>

    And here comes the problem: in popup.html this doesn't work; if I load it into Chrome as an extension, nothing happens. Otherwise, if I open popup.html in the browser, not as an extension, everything works fine. So I think I have some basic misunderstandings of extension structure, starting with background pages, background JavaScript and so on. :( Could anyone help me?
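
    (For the click -> XHR -> result-field flow the question describes, here is a minimal sketch in plain JavaScript. The servlet URL is an assumption, and the extension's manifest.json would need to list that origin under "permissions" for the cross-origin request to be allowed:)

        // Popup script: on click, fetch text from the server and show it.
        document.addEventListener('DOMContentLoaded', function () {
            document.getElementById('button').addEventListener('click', function () {
                var xhr = new XMLHttpRequest();
                xhr.open('GET', 'http://example.com/my-servlet', true); // assumed endpoint
                xhr.onreadystatechange = function () {
                    if (xhr.readyState === 4 && xhr.status === 200) {
                        // Load the gathered information into the result field.
                        document.getElementById('intro').textContent = xhr.responseText;
                    }
                };
                xhr.send();
            });
        });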

  • Website --> Add Reference also creates files with .pdb extensions

    - by SourceC
    Hello,

    Q1: Any assemblies stored in the Bin directory will automatically be referenced by the web application. We can add an assembly reference via Website -> Add Reference, or simply by copying the DLL into the Bin folder. But I noticed that when we add a reference via Website -> Add Reference, additional files with a .pdb extension are placed inside Bin. If these files are also needed, then why does the reference still work if we only place the referenced DLL into Bin, but not the .pdb files?

    Q2: It appears that if you add a new item to a web project, this class will automatically be added to the project list, and we can reference it from all pages in this project. So are all files added to the project list automatically being referenced? Thanks.

    EDIT: On your second question, you are adding a public class to a namespace, so it will be visible to other classes in that project and in that namespace. I don't know much about assemblies, but I'd assume the reason an item (class) added to the project is visible to other classes in that project is simply that in a web project all classes get compiled into a single assembly, and public classes contained in the same assembly are always visible to each other?

  • Refresh iFrame (Cache Issue)

    - by Ramiz Uddin
    We are getting a weird issue, and we are not sure what exactly causes it. Let me elaborate. Suppose we have two different HTML pages, a.html and b.html, and a little script written in index.html:

        <html>
        <head>
          <script>
            function reloadFrame(iframe, src) {
              iframe.src = src;
            }
          </script>
        </head>
        <body>
          <form>
            <iframe id="myFrame"></iframe>
            <input type="button" value="Load a.html" onclick="reloadFrame(document.getElementById('myFrame'), 'a.html')">
            <input type="button" value="Load b.html" onclick="reloadFrame(document.getElementById('myFrame'), 'b.html')">
          </form>
        </body>
        </html>

    A server component is continuously updating both files, a.html and b.html. The content of both files is successfully updating on the server side - if we open them directly we can see the changes - but the client is getting the older content, which doesn't show the updated changes. Any idea?
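
    (Since the title points at caching, one common workaround - a sketch, not a verified fix for this exact setup - is to make every src unique so the browser cannot serve a stale copy:)

        // Append a throwaway query parameter so each load bypasses the cache.
        // The parameter name '_' is arbitrary; the server simply ignores it.
        function reloadFrame(iframe, src) {
            var separator = (src.indexOf('?') === -1) ? '?' : '&';
            iframe.src = src + separator + '_=' + new Date().getTime();
        }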

  • CakePHP ACL authentication issue - I'm locked out

    - by Baseer
    I've followed the CakePHP Cookbook ACL tutorial, and as of right now I'm just trying to add users using the scaffolding method. I'm trying to go to /users/add, but it always redirects me to the login screen, even though I have added $this->Auth->allow('*'); in beforeFilter() temporarily to allow access to all pages. I've done this in both the UsersController and GroupsController, as the tutorial asked. Below is my code for UsersController, which I think is the most relevant of all the files. Let me know if any other piece of code is required.

        <?php
        class UsersController extends AppController {

            var $name = 'Users';
            var $scaffold;

            function beforeFilter() {
                parent::beforeFilter();
                $this->Auth->allow('*');
            }

            function login() {
                // Auth magic
            }

            function logout() {
                // Leave empty for now.
            }
        }
        ?>

    I think I've pretty much followed the tutorial. Any ideas as to what I may be missing? Thanks - I've been stuck on this for a while. =(

  • Automatic web form testing/filling

    - by Polatrite
    I recently became the lead on getting an inordinate amount of testing done in a very short period of time. We have many different web forms, using custom (Telerik) controls, that need to be tested for proper data validation and sensible handling of the data. Some of the forms are several pages long, with 30-80 different controls for data entry.

    I am looking for a software solution (that is free) that would allow me to automate the process of filling in these forms by designing a script or using a UI. The other requirement is that I can't use any browser but IE6 (terrible, I know). I have previously used AutoHotkey to great success for automatic Windows-form testing, since AutoHotkey's API allows you to directly reference controls on the Windows form. However, AutoHotkey does not have similar support for web forms (everything is just one big "InternetExplorer" control). While I would prefer to script some variance in the data to help serialize each test, it's not necessary, as I could go back through and manually edit a field or two (plus "break" whatever control I'm currently testing) to serialize each test.

    If you've ever seen Spawner (http://forge.mysql.com/projects/project.php?id=214), it's almost exactly the sort of thing I'm looking for (Spawner generates dummy SQL data, as opposed to dummy web-form data) - but I won't be picky; I've got a really short deadline to meet and had this thrust in my lap just today. ;)

    Edit1: One of the challenges of just using AutoHotkey to simulate keyboard input (tabbing through controls) is that some controls don't currently have a tab index (bug), and some controls cause a page reload after modification, resulting in inconsistent control focus (tabbing screwed up). Our application makes heavy use of page reloads to populate fields (select a location, and it auto-populates a city, for example).

  • Rendering LaTeX on third-party websites?

    - by A. Rex
    There are some sites on the web that render LaTeX into some more readable form, such as Wikipedia, some Wordpress blogs, and MathOverflow. They may use images, MathML, jsMath, or something like that. There are other sites on the web where LaTeX appears inline and is not rendered, such as the arXiv, various math forums, or my email. In fact, it is quite common to see an arXiv paper's abstract with raw LaTeX in it, e.g. this paper. Is there a plugin available for Firefox, or would it be possible to write one, that renders LaTeX within pages that do not provide a rendering mechanism themselves? (The LaTeX would be enclosed within dollar signs, e.g. $\pi$. See the arXiv link above.) Some notes:

      - It may be impossible to render some of the code, because authors often copy-paste code directly from their source TeX files, which may contain things like "\cite{foo}" or undefined commands. These should be left alone.
      - This question is a repost of a question from MathOverflow that was closed for not being related to math. There is one answer there, which is helpful, but perhaps Stack Overflow can provide better answers.
      - I program a lot, but Javascript is not my specialty, so comments along the lines of "look at this library" are not particularly helpful to me (but may be to others).
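
    (As a starting point for writing such an add-on, a minimal sketch of the core job: find $...$ spans in text nodes and tag them for some renderer. The latex-src class and the idea of a later rendering pass are assumptions, not a real renderer:)

        // Walk the page's text nodes and wrap anything that looks like inline
        // TeX ($...$) so a renderer (jsMath, images, MathML...) could process it.
        function markLatex(root) {
            var walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT, null, false);
            var targets = [];
            while (walker.nextNode()) {
                if (/\$[^$]+\$/.test(walker.currentNode.nodeValue)) {
                    targets.push(walker.currentNode); // collect first; don't mutate while walking
                }
            }
            targets.forEach(function (node) {
                var span = document.createElement('span');
                span.innerHTML = node.nodeValue.replace(/\$([^$]+)\$/g,
                        '<span class="latex-src">$$$1$$</span>');
                node.parentNode.replaceChild(span, node);
            });
        }

        markLatex(document.body);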

  • Drupal with clean urls turned on is putting question marks in URL

    - by aussiegeek
    I have a Drupal site with clean URLs turned on. The pages load correctly, but then the URL is rewritten, which I really don't want to happen. My .htaccess is:

        <IfModule mod_rewrite.c>
          RewriteEngine on

          # If your site can be accessed both with and without the 'www.' prefix, you
          # can use one of the following settings to redirect users to your preferred
          # URL, either WITH or WITHOUT the 'www.' prefix. Choose ONLY one option:
          #
          # To redirect all users to access the site WITH the 'www.' prefix,
          # (http://example.com/... will be redirected to http://www.example.com/...)
          # adapt and uncomment the following:
          # RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
          # RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]
          #
          # To redirect all users to access the site WITHOUT the 'www.' prefix,
          # (http://www.example.com/... will be redirected to http://example.com/...)
          # uncomment and adapt the following:
          # RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
          # RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

          # Modify the RewriteBase if you are using Drupal in a subdirectory or in a
          # VirtualDocumentRoot and the rewrite rules are not working properly.
          # For example if your site is at http://example.com/drupal uncomment and
          # modify the following line:
          # RewriteBase /drupal
          #
          # If your site is running in a VirtualDocumentRoot at http://example.com/,
          # uncomment the following line:
          RewriteBase /

          # Rewrite URLs of the form 'x' to the form 'index.php?q=x'.
          RewriteCond %{REQUEST_URI} !(connect|administration)
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteCond %{REQUEST_URI} !=/favicon.ico
          RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
        </IfModule>

  • Setup SSL (self signed cert) with tomcat

    - by Danny
    I am mostly following this page: http://tomcat.apache.org/tomcat-6.0-doc/ssl-howto.html

    I used this command to create the keystore, and answered the prompts:

        keytool -genkey -alias tomcat -keyalg RSA -keystore /etc/tomcat6/keystore

    Then I edited my server.xml file and uncommented/edited this line:

        <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                   maxThreads="150" scheme="https" secure="true"
                   clientAuth="false" sslProtocol="TLS"
                   keystoreFile="/etc/tomcat6/keystore"
                   keystorePass="tomcat" />

    Then I go to the web.xml file for my project and add this into the file:

        <security-constraint>
          <web-resource-collection>
            <web-resource-name>Security</web-resource-name>
            <url-pattern>/*</url-pattern>
          </web-resource-collection>
          <user-data-constraint>
            <transport-guarantee>CONFIDENTIAL</transport-guarantee>
          </user-data-constraint>
        </security-constraint>

    When I try to run my webapp I am met with this:

        Unable to connect
        Firefox can't establish a connection to the server at localhost:8443.
          * The site could be temporarily unavailable or too busy. Try again in a few moments.
          * If you are unable to load any pages, check your computer's network connection.

    If I comment out the lines I've added to my web.xml file, the webapp works fine. My log file in /var/lib/tomcat6/logs says nothing. I can't figure out if this is a problem with my keystore file, my server.xml file, or my web.xml file... Any assistance is appreciated. I am using Tomcat 6 on Ubuntu.

  • jQuery show based on checkbox value at page load.

    - by Jacob Huggart
    I have an ASP.NET MVC web app, and on one of the pages there is a set of main checkboxes with sub-checkboxes underneath them. The sub-checkboxes should only show up when the corresponding main checkbox is checked. I have the following code, which works just fine as long as none of the checkboxes are checked when the page loads:

        $("input[id$=Suffix]").change(function() {
            prefix = this.id;
            if (!$(this).hasClass("checked")) {
                $("tr[id^=" + prefix + "]").show();
                $(this).addClass("checked");
            } else {
                $("tr[id^=" + prefix + "]").hide();
                $(this).removeClass("checked");
            }
        });

    Now I need to check a database for the values of the main checkboxes. I get the values and can check the boxes on page load. But when the page comes up, the sub-checkboxes are not displayed even when the main checkbox is checked. Also, if the main checkbox is checked when the page loads, the sub-checkboxes are only displayed when the main checkbox is unchecked (obviously, because the above function only acts on .change()). What do you all suggest I try? If you need further explanation, feel free to ask. By the way, all of this is in $(document).ready().
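
    (One sketch of a fix, keeping the id conventions of the code above: drive the show/hide from the checkbox's real checked state instead of a marker class, and run the same routine once at load time as well as on every change:)

        $(document).ready(function () {
            // Show or hide the sub-checkbox rows from the box's actual state,
            // so database-driven initial values are honored on page load.
            function syncRows() {
                var prefix = this.id;
                if ($(this).is(':checked')) {
                    $("tr[id^=" + prefix + "]").show();
                } else {
                    $("tr[id^=" + prefix + "]").hide();
                }
            }

            $("input[id$=Suffix]")
                .change(syncRows) // keep reacting to user clicks
                .each(syncRows);  // and apply once at load time
        });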

  • How to convert non-Latin-encoded text into UTF-8, or make both coexist on the same page?

    - by Yallaa
    Good day, I have a script that scrapes the title/description of remote pages and prints those values into a corresponding charset=UTF-8 encoded page. Here is the problem: whenever a remote page uses a non-Latin character encoding (Arabic, Russian, Chinese, Japanese, etc.), the imported values print as garbled text. I've tried passing those values through either the iconv or mb_convert_encoding converters, but without much success. Then I tried detecting the remote encoding first and changing my presentation page's encoding to the remote one instead of the current UTF-8, which works okay for the imported values, but the other existing UTF-8 content of that language on the page gets garbled instead.

    Example: I try to import those values from a Russian windows-1251 encoded page into my UTF-8 encoded page, which has mixed English/Russian content. I change the imported non-UTF-8 string into UTF-8 using either iconv or mb_convert_encoding. I tried:

        $RemoteValue = iconv($RemoteEncoding, 'UTF-8', $RemoteValue);

    or

        $RemoteValue = mb_convert_encoding($RemoteValue, "UTF-8", $RemoteEncoding);

    or

        $RemoteValue = mb_convert_encoding($RemoteValue, "UTF-8", "auto");

    without success. If I detect that the remote page is windows-1251 encoded and I change my presentation page (which already has UTF-8 encoded mixed-language content) to match the remote page, then the Japanese UTF-8 content on the existing page gets garbled...

    Can two differently encoded strings coexist on the same page (e.g. UTF-8 and windows-1251)? Am I using the converters correctly? Any hints as to why they don't work? Is there any better way to do this? Thank you for your help.

  • asp.net multilingual website culture settings

    - by Hemant Kothiyal
    Hi, in an ASP.NET multilingual website in English (UK) and Swedish, I have three resource files:

      1. en-GB.resx
      2. sv-SE.resx
      3. A culture-neutral file.

    I have created one base class, and all pages inherit from that class. There I write the following lines to set the UI culture and culture:

        Thread.CurrentThread.CurrentUICulture = CultureInfo.CurrentUICulture.Name;
        Thread.CurrentThread.CurrentCulture = CultureInfo.CurrentCulture.Name;

    Question: Suppose my browser language is Swedish (sv-SE). Then this code works, because it finds the CurrentUICulture and CurrentCulture values as sv-SE. Now suppose the browser language is Swedish (sv) only; in that case the values will be set as CurrentUICulture = sv and CurrentCulture = sv-SE. The problem is that the user then sees all text from the culture-neutral resource file, which I kept in English, while all decimal separators, currency, and other formats appear Swedish. It looks confusing to the user. What would be the right approach? I am thinking of the following solutions; please correct me:

      1. I can create an extra resource file for sv as well.
      2. I can check the value of CurrentUICulture in the base class, and if it is sv, replace it with sv-SE.

    Please tell me which one is the right approach, or is there any other good way of doing this?

  • Problems with jQuery plugin Tablesorter 2.0

    - by gunther-von-goetzen-sanchez
    I downloaded the jQuery plugin Tablesorter 2.0 from http://tablesorter.com/jquery.tablesorter.zip and modified the example-pager.html which is in the tablesorter/docs directory. I wrote additional rollover effects:

        $(function() {
            $("table")
                .tablesorter({ widthFixed: true, widgets: ['zebra'] })
                .tablesorterPager({ container: $("#pager") });

            $("tr").mouseover(function () {
                $(this).addClass('over');
            });
            $("tr").mouseout(function () {
                $(this).removeClass('over');
            });
            $("tr").click(function () {
                $(this).addClass('marked');
            });
            $("tr").dblclick(function () {
                $(this).removeClass('marked');
            });
        });

    And of course I modified the themes/blue/style.css file:

        /* Additional code */
        table.tablesorter tbody tr.odd td {
            background-color: #D1D1F0;
        }
        table.tablesorter tbody tr.even td {
            background-color: #EFDEDE;
        }
        table.tablesorter tbody tr.over td {
            background-color: #FBCA33;
        }
        table.tablesorter tbody tr.marked td {
            background-color: #FB4133;
        }
        /* Additional code END */

    All this works fine, BUT when I go to further pages, i.e. page 2, 3 or 4, the effects are gone! I really appreciate your help.
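
    (A plausible cause - an assumption, not verified against this pager version - is that the pager swaps table rows in and out, so handlers bound directly to the original tr elements are lost with them. Binding one delegated handler on the table survives the row replacement; a sketch, assuming jQuery 1.4.2+ for .delegate():)

        // Bind once on the table; events bubbling up from any current or
        // future <tr> are handled, so rows swapped in by the pager keep
        // the rollover and marking effects.
        $("table")
            .delegate("tr", "mouseover", function () { $(this).addClass("over"); })
            .delegate("tr", "mouseout",  function () { $(this).removeClass("over"); })
            .delegate("tr", "click",     function () { $(this).addClass("marked"); })
            .delegate("tr", "dblclick",  function () { $(this).removeClass("marked"); });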

  • Best pattern to load enumerated values from DAL using WCF RIA Services

    - by Dale Halliwell
    I would like to be able to load several RIA entity sets in a single call, without chaining/nesting several small LoadOperations so that they load sequentially. I have several pages with a number of comboboxes on them. These comboboxes are populated with static values from a database (for example, status values). Right now I preload these values in my VM with one method that strings together a series of LoadOperations, one for each type that I want to load. For example:

        public void LoadEnums()
        {
            context.Load(context.GetMyStatusValues1Query()).Completed += (s, e) =>
            {
                this.StatusValues1 = context.StatusValues1;
                context.Load(context.GetMyStatusValues2Query()).Completed += (s1, e1) =>
                {
                    this.StatusValues2 = context.StatusValues2;
                    context.Load(context.GetMyStatusValues3Query()).Completed += (s2, e2) =>
                    {
                        this.StatusValues3 = context.StatusValues3;
                        // ... and so on
                    };
                };
            };
        }

    While this works fine, it seems a bit nasty. Also, I would like to know when the last LoadOperation completes, so that I can load whatever entity I want to work on afterwards, and so that these enumerated values resolve properly in form elements like comboboxes and listboxes. (I think) I can't do this easily above without creating a delegate and calling that on the completion of the last LoadOperation. So my question is: does anyone out there know a better pattern to use, ideally one where I can load all my static entity sets in a single LoadOperation?

  • Change column height as other column gets longer

    - by Infiniti Fizz
    Hi, I have tried a few things to solve this problem, but I can't seem to get it working. I have two columns as the main part of my website, right and left. On some pages there is a lot of text in the left column, making it very long; the problem is that the right column doesn't elongate with the left column. Both columns have the same background colour, and a footer is displayed across the width of both columns after the columns finish.

    My first thought was to put both columns inside a div with the same background colour as them; then if the left column became 1500px long in total and the right column stayed at around 600px (due to the elements inside it), this wouldn't show, as the new outer div would elongate along with the left column. But for some reason this didn't work. Could it be because the columns are floated? Does anyone have any other ideas?

    Here is the website (obviously not finished yet): Beansheaf Hotel. I have chosen a page where there is a lot of text in the left column, so the problem is apparent. Thanks in advance, InfinitiFizz

  • Redirect to another action in an interceptor in struts 2

    - by user292662
    I am currently in the process of learning Struts 2, and I am building a simple application where unverified users are redirected to a login form. I have a login form and action that take the user's credentials, verify them, and store a User object in the session. However, I am now trying to prevent access to pages before the login has taken place, and I am trying to do this with an interceptor. My problem is that I have written an interceptor that checks whether the User object has been saved in the session, but if it has not, I want to redirect to the login page, and I can't find any way of doing this without bypassing Struts and using the HttpServletResponse.sendRedirect method.

    Configuration:

        <package name="mypackage" extends="struts-default" namespace="/admin">
          <interceptors>
            <interceptor name="login" class="my.LoginInterceptor" />
          </interceptors>
          <default-interceptor-ref name="login"/>
          <action name="login" class="my.LoginAction">
            <result name="input">/admin/login.jsp</result>
            <result name="success" type="redirect">/admin</result>
          </action>
          <action name="private" class="my.PrivateAction">
            <result>/admin/private.jsp</result>
          </action>
        </package>

    The interceptor code:

        @Override
        public String intercept(ActionInvocation inv) throws Exception {
            Map<String, Object> session = inv.getInvocationContext().getSession();
            Object user = session.get("user");
            if (user == null) {
                // redirect to the 'login' action here
            } else {
                return inv.invoke();
            }
        }
