Search Results

  • Spoofing UserAgent in Opera

    - by PoweRoy
    I'm trying to spoof Opera (under Linux) to be another browser, in this case iPad, for some testing purposes. Now I know sites can check which browser is accessing them, for example in PHP with $useragent = $_SERVER['HTTP_USER_AGENT']; and in JavaScript with navigator.userAgent (or navigator.platform). In Firefox you can use an addon to easily switch your useragent and other relevant information, but in Opera it seems a bit hard to do. First, in opera.ini you can do: [User Agent] Spoof UserAgent ID=1 But this is limited to a predefined list of UserAgents. No room for custom ones. Also in opera.ini: [ISP] Id=iPad This will add iPad to the User Agent of Opera. It's a start and works most of the time on the sites. In opera.ini you can also set a 'User JavaScript File' to load a custom JavaScript file before loading a website: [User Prefs] User JavaScript File=/opera_dir/userjs/load.js In load.js you can do: navigator.userAgent = "Mozilla/5.0 (iPad; U; CPU OS 3_2 like Mac OS X; en-us) AppleWebKit/531.21.10 (KHTML, like Gecko) Version/4.0.4 Mobile/7B334b Safari/531.21.10" Because this file gets executed before loading the website I can modify the UserAgent, but this won't work when a site checks the UserAgent via PHP; it only works for sites checking with JavaScript. So here's my question: is there another way of spoofing a complete custom UserAgent?
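
    If the goal is only to fool JavaScript-based checks, a hedged sketch of what the User JavaScript file could look like is below; defining a getter tends to hold up better against page scripts than a plain assignment. The UA string is just an example value, and this does not change the HTTP User-Agent header that PHP sees (only Opera's built-in spoof settings affect that).

        // Hedged sketch for /opera_dir/userjs/load.js (Opera User JavaScript).
        (function () {
          var fakeUA = "Mozilla/5.0 (iPad; U; CPU OS 3_2 like Mac OS X; en-us) " +
                       "AppleWebKit/531.21.10 (KHTML, like Gecko) Version/4.0.4 " +
                       "Mobile/7B334b Safari/531.21.10";
          // Override the read-only properties with getters so page scripts see the fake values.
          navigator.__defineGetter__('userAgent', function () { return fakeUA; });
          navigator.__defineGetter__('platform',  function () { return 'iPad'; });
        })();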

    Read the article

  • How to best transfer large payloads of data using wsHttp with WCF with message security

    - by jpierson
    I have a case where I need to transfer large amounts of serialized object graphs (via NetDataContractSerializer) using WCF over wsHttp. I'm using message security and would like to continue to do so. Using this setup I would like to transfer a serialized object graph which can sometimes approach around 300MB or so, but when I try to do so I've started seeing an exception of type System.InsufficientMemoryException appear. After a little research it appears that by default in WCF a result to a service call is contained within a single message which contains the serialized data, and this data is buffered on the server until the whole message is completely written. Thus the memory exception is being caused by the fact that the server is running out of memory resources that it is allowed to allocate because that buffer is full. The two main recommendations that I've come across are to use streaming or chunking to solve this problem, however it is not clear to me what that involves and whether either solution is possible with my current setup (wsHttp/NetDataContractSerializer/Message Security). So far I understand that to use streaming, message security would not work because message encryption and decryption need to work on the whole set of data and not a partial message. Chunking however sounds like it might be possible, however it is not clear to me how it would be done with the other constraints that I've listed. If anybody could offer some guidance on what solutions are available and how to go about implementing it I would greatly appreciate it. Related resources: Chunking Channel How to: Enable Streaming Large attachments over WCF Custom Message Encoder Another spotting of InsufficientMemoryException I'm also interested in any type of compression that could be done on this data, but it looks like I would probably be best off doing this at the transport level once I can transition into .NET 4.0 so that the client will automatically support the gzip headers, if I understand this properly.
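
    One way to picture the chunking option: keep the ordinary buffered wsHttp endpoint (so message security stays intact) and split the serialized graph into fixed-size segments that are uploaded through repeated calls and reassembled on the server. The sketch below only illustrates that shape; IChunkedTransfer and its operation are hypothetical, not a built-in WCF contract.

        // Hedged sketch of manual chunking over an ordinary wsHttp operation.
        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IChunkedTransfer
        {
            [OperationContract]
            void UploadChunk(Guid transferId, int sequence, byte[] chunk, bool isLast);
        }

        public static class ChunkingClient
        {
            // Splits the serialized payload into segments small enough to stay
            // well under the configured message/buffer size limits.
            public static void Send(IChunkedTransfer proxy, byte[] payload, int chunkSize)
            {
                Guid transferId = Guid.NewGuid();
                for (int offset = 0, seq = 0; offset < payload.Length; offset += chunkSize, seq++)
                {
                    int size = Math.Min(chunkSize, payload.Length - offset);
                    byte[] chunk = new byte[size];
                    Array.Copy(payload, offset, chunk, 0, size);
                    proxy.UploadChunk(transferId, seq, chunk, offset + size >= payload.Length);
                }
            }
        }

    The service would buffer or stream the segments (keyed on transferId) to disk and deserialize only once the final chunk arrives.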

    Read the article

  • Why do I get garbage output when printing an int[]?

    - by Kat
    My program is supposed to count the occurrences of each character in a file, ignoring upper and lower case. The method I wrote is: public int[] getCharTimes(File textFile) throws FileNotFoundException { Scanner inFile = new Scanner(textFile); int[] lower = new int[26]; char current; int other = 0; while(inFile.hasNext()){ String line = inFile.nextLine(); String line2 = line.toLowerCase(); for (int ch = 0; ch < line2.length(); ch++) { current = line2.charAt(ch); if(current >= 'a' && current <= 'z') lower[current-'a']++; else other++; } } return lower; } The counts are printed out using: for(int letter = 0; letter < 26; letter++) { System.out.print((char) (letter + 'a')); System.out.println(": " + ts.getCharTimes(file)); } Where ts is a TextStatistic object created earlier in my main method. However, when I run my program, instead of printing how often each character occurs, it prints: a: [I@f84386 b: [I@1194a4e c: [I@15d56d5 d: [I@efd552 e: [I@19dfbff f: [I@10b4b2f And I don't know what I'm doing wrong.
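
    Output like [I@f84386 is just the default toString() of an int[] (type tag plus hash code): the loop concatenates the whole array instead of one element of it. A minimal sketch of the printing loop, assuming ts and file are as in the original code:

        // Call getCharTimes once, keep the returned array, and index into it.
        int[] counts = ts.getCharTimes(file);
        for (int letter = 0; letter < 26; letter++) {
            System.out.print((char) (letter + 'a'));
            System.out.println(": " + counts[letter]);
        }
        // For a quick dump of the whole array:
        // System.out.println(java.util.Arrays.toString(counts));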

    Read the article

  • Implementing DDD and drawing the line between an Entity and a Value Object

    - by William
    I am implementing an EMR project. I would like to apply a DDD-based approach to the problem. I have identified the "Patient" as being the core object of the system. I understand Patient would be an entity object as well as an aggregate. I have also identified that every patient must have a "Doctor" and "Medical Records". The medical records would encompass Labs, XRays, Encounter.... I believe those would be entity objects as well. Let us take an Encounter for example. My implementation currently has a few fields as "String" properties, which are the complaint, assessment and plan. The other items necessary for an Encounter are vitals. I have implemented vitals as a value object. Given that it will be necessary to retrieve vitals without having to retrieve each Encounter, do vitals become part of the Encounter aggregate and the Patient aggregate? I am assuming I could view the Encounter as an aggregate, because other items are spawned from the Encounter like prescriptions, lab orders, xrays. Is the approach I am taking in identifying my entities and aggregates right? In the case of vitals, they are specific to a patient, but outside of that there is not any other identity associated with them.

    Read the article

  • Javascript/Greasemonkey: search for something then set result as a value

    - by thewinchester
    Ok, I'm a bit of a n00b when it comes to JS (I'm not the greatest programmer) so please be gentle - especially if my question's been asked already somewhere and I'm too stupid to find the right answer. Self-deprecation out of the way, let's get to the question. Problem There is a site me and a large group of friends frequently use which doesn't display all the information we may like to know - in this case an airline bookings site and the class of travel. While the information is buried in the code of the page, it isn't displayed anywhere to the user. Using a Greasemonkey script, I'd like to liberate this piece of information and display it in a suitable format. Here's the pseudocode of what I'm looking to do. Search dom for specified element define variables Find a string of text If found Set result to a variable Write contents to page at a specific location (before a specified div) If not found Do nothing I think I've achieved most of it so far, except for the key bits of: Searching for the string: The page needs to search for the following piece of text in the page HEAD: mileageRequest += "&CLASSES=S,S-S,S-S"; The content I need to extract and store is between the second equals (=) sign and the closing quote ("). The contents of this area can be any letter between A-Z. I'm not fussed about splitting it up into an array so I could use the elements individually at this stage. Writing the result to a location: Taking that found piece of text and writing it to another location. Code so far This is what I've come up with so far, with the missing bits highlighted. buttons = document.getElementById('buttons'); ''Search goes here var flightClasses = document.createElement("div"); flightClasses.innerHTML = '<div id="flightClasses"> ' + '<h2>Travel classes</h2>' + 'For the above segments, your flight classes are as follows:' + 'write result here' + '</div>'; main.parentNode.insertBefore(flightClasses, buttons); If anyone could help me, or point me in the right direction to finish this off I'd appreciate it.
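
    A hedged sketch of the two missing pieces, assuming the mileageRequest line lives in an inline script in the head and that the ids used here ('buttons', 'flightClasses') match the real page: scan the page's script elements with a regex, capture everything between CLASSES= and the closing quote, then build the div and insert it before the buttons element.

        // Hedged sketch; element ids and the exact script text are assumptions.
        var buttons = document.getElementById('buttons');
        var classes = null;

        // Search: look through inline scripts for the mileageRequest line.
        var scripts = document.getElementsByTagName('script');
        for (var i = 0; i < scripts.length; i++) {
            var m = /mileageRequest\s*\+=\s*"&CLASSES=([^"]*)"/.exec(scripts[i].text || '');
            if (m) { classes = m[1]; break; }   // e.g. "S,S-S,S-S"
        }

        // Write: insert the result before the buttons div.
        if (classes && buttons) {
            var flightClasses = document.createElement('div');
            flightClasses.innerHTML = '<div id="flightClasses">' +
                '<h2>Travel classes</h2>' +
                'For the above segments, your flight classes are as follows: ' +
                classes + '</div>';
            buttons.parentNode.insertBefore(flightClasses, buttons);
        }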

    Read the article

  • Rails: Single Table Inheritance and models subdirectories

    - by Chris
    I have a card-game application which makes use of Single Table Inheritance. I have a class Card, and a database table cards with column type, and a number of subclasses of Card (including class Foo < Card and class Bar < Card, for the sake of argument). As it happens, Foo is a card from the original printing of the game, while Bar is a card from an expansion. In an attempt to rationalise my models, I have created a directory structure like so: app/ + models/ + card.rb + base_game/ + foo.rb + expansion/ + bar.rb And modified environment.rb to contain: Rails::Initializer.run do |config| config.load_paths += Dir["#{RAILS_ROOT}/app/models/**"] end However, when my app reads a card from the database, Rails throws the following exception: ActiveRecord::SubclassNotFound (The single-table inheritance mechanism failed to locate the subclass: 'Foo'. This error is raised because the column 'type' is reserved for storing the class in case of inheritance. Please rename this column if you didn't intend it to be used for storing the inheritance class or overwrite Card.inheritance_column to use another column for that information.) Is it possible to make this work, or am I doomed to a flat directory structure?
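
    One hedged workaround, assuming a Rails 2.x setup as the environment.rb suggests: STI only works if the subclass constants are actually loaded when ActiveRecord meets a row whose type is 'Foo', and files in non-standard subdirectories are not always autoloaded on demand. Eagerly requiring them from card.rb is a blunt but simple way to guarantee that.

        # Hedged sketch (app/models/card.rb): eagerly load the STI subclasses so
        # 'Foo' and 'Bar' are defined before ActiveRecord instantiates them.
        class Card < ActiveRecord::Base
        end

        %w(base_game expansion).each do |dir|
          Dir[File.join(RAILS_ROOT, 'app', 'models', dir, '*.rb')].each do |file|
            require_dependency file
          end
        end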

    Read the article

  • ASP.Net: Adding client side onClick to a HyperlinkField in GridView

    - by Nir
    I have an existing GridView which contains the field "partner name". It is sortable by partner name. Now I need to change the Partner Name field and in some condition make it clickable and alert() something. The existing code is: <asp:GridView ID="gridViewAdjustments" runat="server" AutoGenerateColumns="false" AllowSorting="True" OnSorting="gridView_Sorting" OnRowDataBound="OnRowDataBoundAdjustments" EnableViewState="true"> <asp:BoundField DataField="PartnerName" HeaderText="Name" SortExpression="PartnerName"/> I've added the column: <asp:hyperlinkfield datatextfield="PartnerName" SortExpression="PartnerName" headertext="Name" ItemStyle-CssClass="text2"/> which enables me to control the CSS and sort. However, I can't find how to add a client-side JavaScript function to it. I found that adding: <asp:TemplateField HeaderText="Edit"> <ItemTemplate> <a id="lnk" runat="server">Edit</a> enables me to access "lnk" by id and add to its attributes. However, I lose the Sort ability. What's the correct solution in this case? Thanks.
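
    A hedged sketch of one way to keep both: a TemplateField carries its own SortExpression, so sorting still goes through gridView_Sorting, while the anchor inside the template can have a client-side onclick (which you can still tweak per row in OnRowDataBoundAdjustments for the conditional case). The alert text here is a placeholder.

        <%-- Hedged sketch: sortable column with a client-side onclick. --%>
        <asp:TemplateField HeaderText="Name" SortExpression="PartnerName" ItemStyle-CssClass="text2">
          <ItemTemplate>
            <a href="javascript:void(0);"
               onclick='alert("<%# Eval("PartnerName") %>"); return false;'>
              <%# Eval("PartnerName") %>
            </a>
          </ItemTemplate>
        </asp:TemplateField>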

    Read the article

  • Using svnadmin dump to revert the latest revision committed

    - by Wux
    What I need is for the latest (mistaken) revision to be reverted, and for the repository not to store it in any way. That is, I'm trying to erase the latest revision out of existence, NOT trying to fix things by going back to the latest-1 revision. In other words, I want to avoid the repository growing in size. Suppose the head revision is 100. I know that the suggested answer is: svnadmin dump -r0:80 old-repo | svnadmin load --force-uuid new-repo. What I'm confused about is why not svnadmin dump -r81:100 old-repo Why the first and not the second solution? I suppose svnadmin dump will erase the repository completely, keeping only revisions 0 - 80 in a dump file? Is my understanding of "taking a part out of the repository into a dump file" for svnadmin dump completely wrong? (That is, revisions 81 - 100 are still there.) Sincere apologies if this has been asked. I did spend some time searching, though I found nothing specific about this. A link to a topic, in case I missed it, would be greatly appreciated.
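
    For what it's worth, a hedged sketch of the usual cycle (paths are placeholders): svnadmin dump only reads the repository and writes the requested range to a dump stream, so dumping -r0:80 into a freshly created repository gives a repo whose history simply ends at r80, while -r81:100 would give you a dump containing exactly the revisions you want to throw away.

        # Hedged sketch; keep a backup of old-repo until the new one is verified.
        svnadmin create new-repo
        svnadmin dump -r0:80 old-repo | svnadmin load --force-uuid new-repo
        # Then swap the new repository into place of the old one, e.g.:
        # mv old-repo old-repo.bak && mv new-repo old-repo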

    Read the article

  • Migrating complex SVN branch hierarchy to Mercurial

    - by Christian Hang
    Our team has been using SVN for managing an application of decent size, and over time a rather complex hierarchy of branches and tags has built up, which follows the basic standard layout for SVN repositories, but is more nested: |-trunk |-branches | |-releases | | |-releaseA | | `-releaseB | `-features | |-featureX | `-featureY |-tags |-releaseA | |-beta | `-RTP `-releaseB |-beta `-RTP (The feature branches are obviously temporary branches but we have to take them into consideration as it won't be feasible to close all of them at once in the near future.) For several reasons, but primarily because merges have been becoming an increasing pain, we are considering switching to Mercurial. The main problem we are currently facing is migrating the existing code base without losing our history. I've tried several migration tools (e.g., yasvn2hg, hg convert and svn2hg) with yasvn2hg being the most promising, but none of them seem to be able to deal with nested hierarchies; they all assume that branches and tags are organized in one flat directory respectively. The choice between named branches or clones as the conversion target of old SVN branches is not a limiting factor in this case, as either solution would be appreciated. We are currently experimenting with both options and how they would fit into our current processes but haven't decided on one yet. I'd obviously be interested in recommendations or experiences with similar setups concerning that issue as well. So, what is the best way to convert a nested SVN branch hierarchy like this to Mercurial? Converting one branch at a time into a separate repository would be quite annoying, and I am not sure it would be the right approach in the first place, depending on how the tools handle historic merges and whether they need to be aware of all other branches.
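
    One hedged, untested idea to experiment with: hg convert's Subversion source lets you point trunk/branches/tags at non-standard paths via --config convert.svn.*, so one pass per nested branches directory (with a branchmap to rename branches if needed) might get further than the flat-layout assumption of the other tools. Whether several passes can safely target the same destination repository is exactly the kind of thing to verify on a throw-away copy first; URLs and file names below are placeholders.

        # Hedged sketch, untested against this layout.
        hg convert --config convert.svn.trunk=trunk \
                   --config convert.svn.branches=branches/releases \
                   --config convert.svn.tags=tags \
                   http://svn.example.com/repo hg-repo

        # Second pass for the feature branches (verify that incremental conversion into
        # the same destination behaves as expected before relying on it):
        hg convert --config convert.svn.branches=branches/features \
                   --branchmap branchmap.txt \
                   http://svn.example.com/repo hg-repo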

    Read the article

  • Should I use Entity Framework instead of raw ADO.NET?

    - by user110182
    I am new to CSLA and Entity Framework. I am creating a new CSLA / Silverlight application that will replace a 12-year-old Win32 C++ system. The old system uses a custom DCOM business object library and uses ODBC to get to SQL Server. The new system will not immediately replace the old system -- they must coexist against the same database for years to come. At first I thought EF was the way to go since it is the latest and greatest. After making a small EF model and only 2 CSLA editable root objects (I will eventually have hundreds of objects as my DB has 800+ tables) I am seriously questioning the use of EF. In the current system I often need to do fine-detail performance tuning of the queries, which I can do because of 100% control of the generated SQL. But it seems in EF that so much happens behind the scenes that I lose that control. Articles like http://toomanylayers.blogspot.com/2009/01/entity-framework-and-linq-to-sql.html don't help my impression of EF. People seem to like EF because of LINQ to EF, but since my criteria is passed between client and server as a criteria object, it seems like I could build queries just as easily without LINQ. I understand that in WCF RIA there is query projection (or something like that) where I can do client-side LINQ which moves to the server before translation into actual SQL, so in that case I can see the benefit of EF, but not in CSLA. If I use raw ADO.NET, will I regret my decision 5 years from now? Has anyone else made this choice recently and which way did you go?

    Read the article

  • MySQL Connection Timeout Issue - Grails Application on Tomcat using Hibernate and ORM

    - by gav
    Hi guys, I have a small Grails application running on Tomcat in Ubuntu on a VPS. I use MySQL as my datastore and everything works fine unless I leave the application alone for more than half a day (8 hours?). I did some searching and apparently this is the default wait_timeout in mysql.cnf, so after 8 hours the connection will die, but Tomcat won't know, and when the next user tries to view the site they will see the connection failure error. Refreshing the page will fix this but I want to get rid of the error altogether. For my version of MySQL (5.0.75) I have only my.cnf and it doesn't contain such a parameter; in any case changing this parameter doesn't solve the problem. This Blog Post seems to be reporting a similar error, but I still don't fully understand what I need to configure to get this fixed, and I am also hoping that there is a simpler solution than another third-party library. The machine I'm running on has 256MB RAM and I'm trying to keep the number of programs/services running to a minimum. Is there something I can configure in Grails / Tomcat / MySQL to get this to go away? Thanks in advance, Gav From my Catalina.out: 2010-04-29 21:26:25,946 [http-8080-2] ERROR util.JDBCExceptionReporter - The last packet successfully received from the server was 102,906,722 milliseconds$ 2010-04-29 21:26:25,994 [http-8080-2] ERROR errors.GrailsExceptionResolver - Broken pipe java.net.SocketException: Broken pipe at java.net.SocketOutputStream.socketWrite0(Native Method) ... 2010-04-29 21:26:26,016 [http-8080-2] ERROR util.JDBCExceptionReporter - Already closed. 2010-04-29 21:26:26,016 [http-8080-2] ERROR util.JDBCExceptionReporter - Already closed. 2010-04-29 21:26:26,017 [http-8080-2] ERROR servlet.GrailsDispatcherServlet - HandlerInterceptor.afterCompletion threw exception org.hibernate.exception.GenericJDBCException: Cannot release connection at java.lang.Thread.run(Thread.java:619) Caused by: java.sql.SQLException: Already closed. at org.apache.commons.dbcp.PoolableConnection.close(PoolableConnection.java:84) at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.close(PoolingDataSource.java:181) ... 1 more
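
    The stack trace shows commons-dbcp handing Hibernate a connection that MySQL already closed. A common fix that needs no extra library is to have the pool validate connections before lending them out (and/or evict idle ones before wait_timeout hits). Below is a hedged sketch of a Spring bean override in grails-app/conf/spring/resources.groovy; the property names follow commons-dbcp's BasicDataSource, so check them against the version you actually ship, and the connection details are placeholders.

        // Hedged sketch: validate pooled connections so stale ones are replaced.
        beans = {
            dataSource(org.apache.commons.dbcp.BasicDataSource) {
                driverClassName = "com.mysql.jdbc.Driver"
                url = "jdbc:mysql://localhost/mydb?autoReconnect=true"
                username = "user"
                password = "secret"
                validationQuery = "SELECT 1"
                testOnBorrow = true
                testWhileIdle = true
                timeBetweenEvictionRunsMillis = 1000 * 60 * 30   // well under wait_timeout
            }
        }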

    Read the article

  • solr JOIN query

    - by Sfairas
    I need to run a JOIN query on a Solr index. I've got two XML files that I have indexed, person.xml and subject.xml. Person: <doc> <field name="id">P39126</field> <field name="family">Smith</field> <field name="given">John</field> <field name="subject">S1276</field> <field name="subject">S1312</field> </doc> Subject: <doc> <field name="id">S1276</field> <field name="topic">Abnormalities, Human</field> </doc> I need to only display information from the person doc, but each query should match fields in both person and subject. In the case where the query matches only the subject doc I need to display all docs from the person that have a matching id. Is this possible to do without running two separate queries? Something like a JOIN query would do the job. Any help?
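
    Two hedged directions, neither verified against this exact schema: if a Solr version that ships the {!join} query parser is an option, a query along the lines below would return person documents whose referenced subject matches the topic; otherwise the classic answer is to denormalize, i.e. copy the subject's topic onto each person document at index time so one query over persons suffices.

        # Hedged sketch; requires a Solr release with the join query parser.
        # Matches subject docs by topic, then returns person docs whose "subject"
        # field contains a matching subject id.
        http://localhost:8983/solr/select?q={!join from=id to=subject}topic:"Abnormalities, Human"&fl=id,family,given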

    Read the article

  • Session State with MVP and Application Controller patterns

    - by Graham Bunce
    Hi, I've created an MVP (passive view) framework for development and decided to go for an Application Controller pattern to manage the navigation between views. This is targeted at WinForms, ASP.NET and WPF interfaces. Although I'm not 100% convinced that these view technologies are really swappable, that's my aim at the moment, so my MVP framework is quite lightweight. What I'm struggling to fit in is the concept of a "Business Conversation" that needs state information to be either (a) maintained for the lifetime of the View or, more likely, (b) maintained across several views for the lifetime of a use case (business conversation). I want state management to be part of the framework as I don't want developers to worry about it. All they need to do is "start" a conversation, "Register" objects, and the framework does the rest until they "end" the conversation. Has anybody got any thoughts (patterns) on how to fit this into MVP? I was thinking it may be part of the Application Controller responsibility (delegating to a Conversation Manager object) as it knows about current state in order to send the user to the next view.... but then I thought it may be up to the Presenter to start and end the conversation, so then it comes down to the presenters to manage conversations and the objects registered for that conversation. Unfortunately that means presenters can't be used in different conversations... so that idea doesn't seem right. As you can see, I don't think there is an easy answer (and I've looked for a while). So anybody else got any thoughts?
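
    Purely as an illustration of the Application-Controller-owns-it option (every name below is invented to make the shape concrete): the controller starts and ends conversations as it navigates, and presenters only ever talk to the current conversation, so they stay reusable across conversations.

        // Hedged, illustrative sketch only.
        using System.Collections.Generic;

        public class Conversation
        {
            private readonly Dictionary<string, object> _state = new Dictionary<string, object>();
            public void Register(string key, object value) { _state[key] = value; }
            public T Get<T>(string key) { return (T)_state[key]; }
        }

        public class ConversationManager
        {
            public Conversation Current { get; private set; }
            public Conversation Start() { return Current = new Conversation(); }
            public void End() { Current = null; }
        }

        // The Application Controller calls Start()/End() around a use case while it
        // navigates; presenters just use Current.Register(...) / Current.Get<T>(...).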

    Read the article

  • IP address spoofing using Source Routing

    - by iamrohitbanga
    With IP options we can specify the route we want an IP packet to take while connecting to a server. If we know that a particular server provides some extra functionality based on the IP address can we not utilize this by spoofing an IP packet so that the source IP address is the privileged IP address and one of the hosts on the Source Routing is our own. So if the privileged IP address is x1 and server IP address is x2 and my own IP address is x3. I send a packet from x1 to x2 which is supposed to pass through x3. x1 does not actually send the packet. It is just that x2 thinks the packet came from x1 via x3. Now in response if x2 uses the same routing policy (as a matter of courtesy to x1) then all packets would be received by x3. Will the destination typically use the same IP address sequences as specified in the routing header so that packets coming from the server pass through my IP where I can get the required information? Can we not spoof a TCP connection in the above case? Is this attack used in practice?

    Read the article

  • Improve XPath efficiency for repeated, parameterized queries

    - by Chris Allan
    Hi, I am repeatedly performing the following XPath query (though parameterized by 'keywordText') around 40,000 times: String query = SystemGlobal.YAHOO_KEYWORDSSUBNODE + "/" + SystemGlobal.YAHOO_KEYWORDNODE + "[" + SystemGlobal.YAHOO_ATTRKEYPHRASE + "='" + keywordText + "']"; CachedXPathAPI cachedXPathAPI = new CachedXPathAPI(); NodeIterator nl = cachedXPathAPI.selectNodeIterator(doc.getElementsByTagName(SystemGlobal.YAHOO_KEYWORDSROOT).item(0), query); Node n; if ((n = nl.nextNode()) != null) { keyword.setKeywordId(Long.parseLong(cachedXPathAPI.selectSingleNode(n, SystemGlobal.YAHOO_ATTRKEYID).getTextContent())); keyword.setKeyPhrase(cachedXPathAPI.selectSingleNode(n, SystemGlobal.YAHOO_ATTRKEYPHRASE).getTextContent()); keyword.setStatus(mapStatus(cachedXPathAPI.selectSingleNode(n, SystemGlobal.YAHOO_ATTRSTATUS).getTextContent())); keyword.setCampaignId(Long.parseLong(cachedXPathAPI.selectSingleNode(n, "../../" + SystemGlobal.YAHOO_ATTRCAMPAIGNID).getTextContent())); keyword.setAdGroupId(Long.parseLong(cachedXPathAPI.selectSingleNode(n, "../" + SystemGlobal.YAHOO_ATTRADGROUPID).getTextContent())); On the first run of the script, all 40,000 runs of this piece of code will have nl.nextNode() == null, and everything runs quite quickly. However, on the following runs, when nl.nextNode() != null, then things slow down a lot - this takes around an additional 40min to run (whereas the first run takes maybe 1 minute). Oh, and the doc is constructed like so: InputSource in = new InputSource(new FileInputStream(filename)); DocumentBuilderFactory dfactory = DocumentBuilderFactory.newInstance(); dfactory.setNamespaceAware(true); doc = dfactory.newDocumentBuilder().parse(in); I tried including the following lines reportEvaluator = new XPathEvaluatorImpl(reportDoc); reportResolver = reportEvaluator.createNSResolver(reportDoc); and rather creating a NodeIterator, instead creating an XPathResult: XPathResult result = (XPathResult)reportEvaluator.evaluate(query, doc.getElementsByTagName(SystemGlobal.YAHOO_KEYWORDSROOT).item(0), reportResolver, XPathResult.UNORDERED_NODE_ITERATOR_TYPE, null); however this ran even slower Is there a way in which I can speed up the running of this script? I have seen references to precompiled queries, though I haven't seen many actual details. Also, as seen in the code, I am using CachedXPathAPI, though the benefit for this case is not so great. Any help is much appreciated! Chris Allan
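
    A hedged alternative to tuning the XPath itself: since all ~40,000 lookups key on the keyphrase, walking the keyword nodes once and building a HashMap turns each lookup into a constant-time get. The constant names below are taken from the question; getChildText is a hypothetical helper that returns the text content of the named child element of a node, and the snippet needs the usual java.util and org.w3c.dom imports.

        // Hedged sketch: build the index once, then look up keywords in O(1).
        private Map<String, Node> indexKeywords(Document doc) {
            Map<String, Node> byPhrase = new HashMap<String, Node>();
            NodeList nodes = doc.getElementsByTagName(SystemGlobal.YAHOO_KEYWORDNODE);
            for (int i = 0; i < nodes.getLength(); i++) {
                Node node = nodes.item(i);
                byPhrase.put(getChildText(node, SystemGlobal.YAHOO_ATTRKEYPHRASE), node);
            }
            return byPhrase;
        }

        // Per keyword, instead of the XPath evaluation:
        Node n = keywordsByPhrase.get(keywordText);
        if (n != null) {
            keyword.setKeyPhrase(getChildText(n, SystemGlobal.YAHOO_ATTRKEYPHRASE));
            // ... populate the remaining fields as in the original code ...
        }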

    Read the article

  • Segmentation fault on MPI, runs properly on OpenMP

    - by Bellman
    Hi, I am trying to run a program on a computer cluster. The structure of the program is the following: PROGRAM something ... CALL subroutine1(...) ... END PROGRAM SUBROUTINE subroutine1(...) ... DO i=1,n CALL subroutine2(...) ENDDO ... END SUBROUTINE SUBROUTINE subroutine2(...) ... CALL subroutine3(...) CALL subroutine4(...) ... END SUBROUTINE The idea is to parallelize the loop that calls subroutine2. The main program basically only makes the call to subroutine1 and only its arguments are declared. I use two alternatives. On the one hand, I write OpenMP clauses around the loop. On the other hand, I add an IF conditional branch around the call and I use MPI to share the results. In the OpenMP case, I add CALL KMP_SET_STACKSIZE(402653184) at the beginning of the main program and I can run it with 8 threads on an 8 core machine. When I run it (on the same 8 core machine) with MPI (either using 8 or 1 processors) it crashes just when it makes the call to subroutine3 with a segmentation fault (signal 11) error. If I comment out subroutine4, then it doesn't crash (notice that it crashed just when calling subroutine3 and it works when commenting out subroutine4). I compile with mpif90 using MPICH2 libraries and the following flags: -O3 -fpscomp logicals -openmp -threads -m64 -xS. The machine has EM64T architecture and I use a Debian Linux distribution. I set ulimit -s hard before running the program. Any ideas on what is going on? Has it something to do with stack size? Thanks in advance
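
    A hedged guess consistent with those symptoms: the OpenMP build gets its big stack from KMP_SET_STACKSIZE, but the MPI processes only inherit whatever stack limit they were launched with, so large automatic arrays in subroutine3/subroutine4 can overflow it. Two things worth trying (the -heap-arrays flag assumes an Intel Fortran mpif90, which the other flags suggest; binary names are placeholders):

        # Make sure every MPI rank runs with an enlarged stack, not just your login shell:
        mpiexec -n 8 bash -c 'ulimit -s unlimited; exec ./myprog'

        # Or recompile so large temporaries go on the heap instead of the stack:
        mpif90 -O3 -fpscomp logicals -heap-arrays -threads -m64 -xS ... -o myprog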

    Read the article

  • Swig typecast to derived class?

    - by Zack
    I notice that Swig provides a whole host of functions to allow for typecasting objects to their parent classes. However, in C++ one can produce a function like the following: A * getAnObject() { if(someBoolean) return (A *) new B; else return (A *) new C; } Where "A" is the parent of classes "B" and "C". One can then typecast the pointer returned into being a "B" type or "C" type at one's convenience like: B * some_var = (B *) getAnObject(); Is there some way I can typecast an object I've received from a generic-pointer-producing function at run-time in the scripting language using the wrappers? (In my case, Lua?) I have a function that could produce one of about a hundred possible classes, and I'd like to avoid writing an enormous switch structure that I'd have to maintain in C++. At the point where I receive the generic pointer, I also have a string representation of the data type I'd like to cast it to. Any thoughts? Thanks! -- EDIT -- I notice that SWIG offers to generate copy constructors for all of my classes. If I had it generate those, could I do something like the following?: var = myModule.getAnObject(); -- Function that returns an object type-cast down to a pointer of the parent class, as in the function getAnObject() above. var = myModule.ClassThatExtendsBaseClass(var); -- A copy constructor that SWIG theoretically creates for me and have var then be an instance of the inheriting class that knows it's an instance of the inheriting class?
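
    One commonly used approach, as a hedged sketch (it assumes A is polymorphic, e.g. has a virtual destructor, so dynamic_cast applies): expose tiny C++ downcast helpers through the .i file and pick the right one from Lua using the type-name string you already have. The generated copy constructors probably won't achieve this on their own, since the argument is still typed as the base class on the Lua side.

        /* Hedged sketch for the SWIG interface file. */
        %inline %{
          B *as_B(A *base) { return dynamic_cast<B *>(base); }
          C *as_C(A *base) { return dynamic_cast<C *>(base); }
          /* ...one helper per subclass, or generate them with a %define macro... */
        %}

    In Lua, something like local obj = mymodule.as_B(mymodule.getAnObject()) then yields a wrapper typed as B (or a null pointer if the cast fails).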

    Read the article

  • HTML5 video (mp4 and ogv) problems in Safari and Firefox - but Chrome is all good

    - by qryss
    Hi folks, I have the following code: <video width="640" height="360" controls id="video-player" poster="/movies/poster.png"> <source src="/movies/640x360.m4v" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'> <source src="/movies/640x360.ogv" type='video/ogg; codecs="theora, vorbis"'> </video> I'm using Rails (Mongrel in development and Mongrel+Apache in production). Chrome (Mac and Win) can play either file (tested by one then the other source tags) whether locally or from my production servers. Safari (Mac and Win) can play the mp4 file fine locally but not from production. Firefox 3.6 won't play the video in either OS. I just get a grey cross in the middle of the video player area. I've made sure that both Mongrel and Apache in each case have the right MIME types set. From Chrome's results I know there is nothing inherently wrong with my video files or the way the files are being asked for or delivered. Anyone got any clues? Or even a clue as to how to diagnose the problem? For Firefox I looked at https://developer.mozilla.org/En/Using_audio_and_video_in_Firefox where it refers to an 'error' event and an 'error' attribute. It seems the 'error' event is thrown pretty well straightaway and at that time there is no error attribute. Very helpful... :( Help enormously appreciated! Thanks in advance... Chris
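
    Given that Chrome is more forgiving about Content-Type than Safari and Firefox, it is worth confirming what the production server actually sends for each file (rather than what the config intends), then adding the types if they are missing. A hedged sketch, with the host name as a placeholder:

        # Check the headers the production server really returns:
        curl -I http://your-production-host/movies/640x360.ogv
        curl -I http://your-production-host/movies/640x360.m4v

        # Apache config / .htaccess entries if the types are missing or wrong:
        AddType video/mp4 .mp4 .m4v
        AddType video/ogg .ogv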

    Read the article

  • CKEditor instance already exists

    - by jackboberg
    I am using jQuery dialogs to present forms (fetched via AJAX). On some forms I am using a CKEditor for the textareas. The editor displays fine on the first load. When the user cancels the dialog, I am removing the contents so that they are loaded fresh on a later request. The issue is, once the dialog is reloaded, the CKEditor claims the editor already exists: uncaught exception: [CKEDITOR.editor] The instance "textarea_name" already exists. The API includes a method for destroying existing editors, and I have seen people claiming this is a solution: if (CKEDITOR.instances['textarea_name']) { CKEDITOR.instances['textarea_name'].destroy(); } CKEDITOR.replace('textarea_name'); This is not working for me, as I receive a new error instead: TypeError: Result of expression 'i.contentWindow' [null] is not an object. This error seems to occur on the "destroy()" rather than the "replace()". Has anyone experienced this and found a different solution? Is it possible to 're-render' the existing editor, rather than destroying and replacing it? UPDATED Here is another question dealing with the same problem, but he has provided a downloadable test case.
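
    A hedged variant that several people report helps when the dialog's DOM has already been thrown away: pass true to destroy() so the editor does not try to write back to the (now missing) textarea/iframe, and fall back to dropping the stale instance entry if destroy itself still throws.

        // Hedged sketch: tear down the old instance without touching removed DOM nodes.
        var existing = CKEDITOR.instances['textarea_name'];
        if (existing) {
            try {
                existing.destroy(true);   // true = skip updating the original element
            } catch (e) {
                delete CKEDITOR.instances['textarea_name'];   // last resort: drop the stale entry
            }
        }
        CKEDITOR.replace('textarea_name');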

    Read the article

  • Fluent nhibernate: Enum in composite key gets mapped to int when I need string

    - by Quintin Par
    By default the behaviour of FNH is to map enums to their string values in the db. But when mapping an enum as part of a composite key, the property gets mapped as int. E.g. in this case: public class Address : Entity { public Address() { } public virtual AddressType Type { get; set; } public virtual User User { get; set; } Where AddressType is: public enum AddressType { PRESENT, COMPANY, PERMANENT } The FNH mapping is: mapping.CompositeId().KeyReference(x => x.User, "user_id").KeyProperty(x => x.Type); The schema creation of this mapping results in: create table address ( Type INTEGER not null, user_id VARCHAR(25) not null, and the hbm as: <composite-id mapped="true" unsaved-value="undefined"> <key-property name="Type" type="Company.Core.AddressType, Company.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"> <column name="Type" /> </key-property> <key-many-to-one name="User" class="Company.Core.CompanyUser, Company.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"> <column name="user_id" /> </key-many-to-one> </composite-id> Whereas the AddressType should have been generated as type="FluentNHibernate.Mapping.GenericEnumMapper`1[[Company.Core.AddressType, How do I instruct FNH to map it as the default string enum generic mapper?

    Read the article

  • Detect the language & django locale-url

    - by mamcx
    I want to deploy a website in English & Spanish, detect the user's browser language, and redirect to the correct locale site. My site is www.elmalabarista.com. I installed django-localeurl, but I discovered that the language is not correctly detected. These are my middlewares: MIDDLEWARE_CLASSES = ( 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.locale.LocaleMiddleware', 'multilingual.middleware.DefaultLanguageMiddleware', 'middleware.feedburner.FeedburnerMiddleware', 'lib.threadlocals.ThreadLocalsMiddleware', 'middleware.url.UrlMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'maintenancemode.middleware.MaintenanceModeMiddleware', 'middleware.redirect.RedirectMiddleware', 'openidconsumer.middleware.OpenIDMiddleware', 'django.middleware.doc.XViewMiddleware', 'middleware.ajax_errors.AjaxMiddleware', 'pingback.middleware.PingbackMiddleware', 'localeurl.middleware.LocaleURLMiddleware', 'multilingual.flatpages.middleware.FlatpageFallbackMiddleware', 'django.middleware.common.CommonMiddleware', ) But the site ALWAYS goes to US despite the fact that my OS & browser setup is Spanish. LANGUAGES = ( ('en', ugettext('English')), ('es', ugettext('Spanish')), ) DEFAULT_LANGUAGE = 1 Then, I hacked the locale-url middleware and did this: def process_request(self, request): locale, path = self.split_locale_from_request(request) if request.META.has_key('HTTP_ACCEPT_LANGUAGE'): locale = utils.supported_language(request.META['HTTP_ACCEPT_LANGUAGE'].split(',')[0]) locale_path = utils.locale_path(path, locale) if locale_path != request.path_info: if request.META.get("QUERY_STRING", ""): locale_path = "%s?%s" % (locale_path, request.META['QUERY_STRING']) return HttpResponseRedirect(locale_path) request.path_info = path if not locale: locale = settings.LANGUAGE_CODE translation.activate(locale) request.LANGUAGE_CODE = translation.get_language() However, this detects the language fine but redirects the "en" URLs to "es". So it is impossible to navigate in English. UPDATE: This is the final code (after the input from Carl Meyer) with a fix for the case of "/": def process_request(self, request): locale, path = self.split_locale_from_request(request) if (not locale) or (locale==''): if request.META.has_key('HTTP_ACCEPT_LANGUAGE'): locale = utils.supported_language(request.META['HTTP_ACCEPT_LANGUAGE'].split(',')[0]) else: locale = settings.LANGUAGE_CODE locale_path = utils.locale_path(path, locale) if locale_path != request.path_info: if request.META.get("QUERY_STRING", ""): locale_path = "%s?%s" % (locale_path, request.META['QUERY_STRING']) return HttpResponseRedirect(locale_path) request.path_info = path translation.activate(locale) request.LANGUAGE_CODE = translation.get_language()

    Read the article

  • How do I copy a python function to a remote machine and then execute it?

    - by Hugh
    I'm trying to create a construct in Python 3 that will allow me to easily execute a function on a remote machine. Assuming I've already got a Python TCP server that will run the functions it receives, running on the remote server, I'm currently looking at using a decorator like @execute_on(address, port) This would create the necessary context required to execute the function it is decorating and then send the function and context to the TCP server on the remote machine, which then executes it. Firstly, is this somewhat sane? And if not, could you recommend a better approach? I've done some googling but haven't found anything that meets these needs. I've got a quick and dirty implementation for the TCP server and client, so I'm fairly sure that'll work. I can get a string representation of the function (e.g. func) being passed to the decorator by import inspect string = inspect.getsource(func) which can then be sent to the server where it can be executed. The problem is, how do I get all of the context information that the function requires to execute? For example, if func is defined as follows, import MyModule def func(): result = MyModule.my_func() MyModule will need to be available to func either in the global context or func's local context on the remote server. In this case that's relatively trivial but it can get so much more complicated depending on when and how import statements are used. Is there an easy and elegant way to do this in Python? The best I've come up with at the moment is using the ast library to pull out all import statements, using the inspect module to get string representations of those modules and then reconstructing the entire context on the remote server. Not particularly elegant and I can see lots of room for error. Thanks for your time
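
    For what it's worth, a minimal sketch of the send-source-and-exec idea (trusted network only, no sandboxing, and it deliberately dodges the hard part by making the caller list the modules the function needs; everything here is illustrative, not an existing library API):

        # Hedged sketch: ship a function's source plus the module names it needs,
        # exec it on the server, call it, and return repr() of the result.
        import inspect
        import json
        import socket

        def call_remote(func, host, port, modules=()):
            payload = json.dumps({
                "source": inspect.getsource(func),
                "name": func.__name__,
                "modules": list(modules),
            }).encode()
            with socket.create_connection((host, port)) as sock:
                sock.sendall(payload)
                sock.shutdown(socket.SHUT_WR)
                return sock.makefile().read()

        # Server side, inside the existing TCP server's request handler:
        def run_payload(raw_bytes):
            req = json.loads(raw_bytes)
            namespace = {}
            for name in req["modules"]:
                namespace[name] = __import__(name)   # rebuild a minimal "context"
            exec(req["source"], namespace)
            return repr(namespace[req["name"]]())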

    Read the article

  • serializing type definitions?

    - by Dave
    I'm not positive I'm going about this the right way. I've got a suite of applications that have varying types of output (custom defined types). For example, I might have a type called Widget: Class Widget Public name as String End Class Throughout the course of operation, when a user experiences a certain condition, the application will take the output instance of Widget that the user received, serialize it, and log it to the database, noting the name of the type. Now, I have other applications that do something similar, but instead of dealing with Widget, it could be some totally random other type with different attributes; again I serialize the instance, log it to the db, and note the name of the type. I have maybe a half dozen different types and don't anticipate too many additional ones in the future. After all this is said and done, I have an admin interface that looks through these logs and lets the user view the contents of the data that's been logged. The Admin app has a reference to all the types involved, and with some basic switch-case logic hinged upon the name of the type, it will cast the data back into its original type and pass it on to some handlers that have basic display logic to spit the data back out in a readable format (one display handler for each type). NOW... all this is well and good... until one day, my model changed. The Widget class has now deprecated the name attribute and added a bunch of other attributes. I will of course get type mismatches on the admin side when I try to reconstitute this data. I was wondering if there was some way, at runtime, I could perhaps reflect through my code and get a snapshot of the type definition at that precise moment, serialize it, and store it along with the data so that I could somehow use this to reconstitute it in the future?
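
    A hedged sketch of the "snapshot the type at log time" idea, in VB.NET to match the Widget example (names here are invented): reflect over the instance's public fields and properties, serialize that shape next to the data, and let the admin side render from the stored snapshot instead of casting to whatever the class looks like today.

        ' Hedged sketch: capture a simple name/type map of the instance's public members
        ' at log time, and store it (serialized) alongside the data itself.
        Imports System
        Imports System.Collections.Generic
        Imports System.Reflection

        Public Module TypeSnapshot
            Public Function Describe(ByVal instance As Object) As Dictionary(Of String, String)
                Dim t As Type = instance.GetType()
                Dim shape As New Dictionary(Of String, String)()
                For Each f As FieldInfo In t.GetFields(BindingFlags.Public Or BindingFlags.Instance)
                    shape(f.Name) = f.FieldType.FullName
                Next
                For Each p As PropertyInfo In t.GetProperties(BindingFlags.Public Or BindingFlags.Instance)
                    shape(p.Name) = p.PropertyType.FullName
                Next
                Return shape
            End Function
        End Module

    Alternatively, versioning the types (WidgetV1, WidgetV2, ...) and keeping the old versions around achieves much the same thing with less machinery.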

    Read the article

  • Dynamic Programming Recursion and a sprinkle of Memoization

    - by Auburnate
    I have this massive array of ints from 0-4 in this triangle. I am trying to learn dynamic programming with Ruby and would like some assistance in calculating the number of paths in the triangle that meet three criterion: You must start at one of the zero points in the row with 70 elements. Your path can be directly above you one row (if there is a number directly above) or one row up heading diagonal to the left. One of these options is always available The sum of the path you take to get to the zero on the first row must add up to 140. Example, start at the second zero in the bottom row. You can move directly up to the one or diagonal left to the 4. In either case, the number you arrive at must be added to the running count of all the numbers you have visited. From the 1 you can travel to a 2 (running sum = 3) directly above or to the 0 (running sum = 1) diagonal to the left. 0 41 302 2413 13024 024130 4130241 30241302 241302413 1302413024 02413024130 413024130241 3024130241302 24130241302413 130241302413024 0241302413024130 41302413024130241 302413024130241302 2413024130241302413 13024130241302413024 024130241302413024130 4130241302413024130241 30241302413024130241302 241302413024130241302413 1302413024130241302413024 02413024130241302413024130 413024130241302413024130241 3024130241302413024130241302 24130241302413024130241302413 130241302413024130241302413024 0241302413024130241302413024130 41302413024130241302413024130241 302413024130241302413024130241302 2413024130241302413024130241302413 13024130241302413024130241302413024 024130241302413024130241302413024130 4130241302413024130241302413024130241 30241302413024130241302413024130241302 241302413024130241302413024130241302413 1302413024130241302413024130241302413024 02413024130241302413024130241302413024130 413024130241302413024130241302413024130241 3024130241302413024130241302413024130241302 24130241302413024130241302413024130241302413 130241302413024130241302413024130241302413024 0241302413024130241302413024130241302413024130 41302413024130241302413024130241302413024130241 302413024130241302413024130241302413024130241302 2413024130241302413024130241302413024130241302413 13024130241302413024130241302413024130241302413024 024130241302413024130241302413024130241302413024130 4130241302413024130241302413024130241302413024130241 30241302413024130241302413024130241302413024130241302 241302413024130241302413024130241302413024130241302413 1302413024130241302413024130241302413024130241302413024 02413024130241302413024130241302413024130241302413024130 413024130241302413024130241302413024130241302413024130241 3024130241302413024130241302413024130241302413024130241302 24130241302413024130241302413024130241302413024130241302413 130241302413024130241302413024130241302413024130241302413024 0241302413024130241302413024130241302413024130241302413024130 41302413024130241302413024130241302413024130241302413024130241 302413024130241302413024130241302413024130241302413024130241302 2413024130241302413024130241302413024130241302413024130241302413 13024130241302413024130241302413024130241302413024130241302413024 024130241302413024130241302413024130241302413024130241302413024130 4130241302413024130241302413024130241302413024130241302413024130241 30241302413024130241302413024130241302413024130241302413024130241302 241302413024130241302413024130241302413024130241302413024130241302413 1302413024130241302413024130241302413024130241302413024130241302413024 02413024130241302413024130241302413024130241302413024130241302413024130
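
    Since the aim is to learn DP in Ruby, here is a hedged sketch of the memoized recursion; the indexing conventions are assumptions (tri is an array of integer rows, tri[0] being the single-element top row, and from (row, col) you may move up to (row-1, col) or diagonally to (row-1, col-1)).

        # Hedged sketch: count paths whose visited numbers sum to `target`,
        # memoizing on (row, col, running sum).
        def count_paths(tri, row, col, sum, target, memo = {})
          key = [row, col, sum]
          return memo[key] if memo.key?(key)
          return memo[key] = (sum == target ? 1 : 0) if row == 0
          total = 0
          [[row - 1, col], [row - 1, col - 1]].each do |r, c|
            next if c < 0 || c >= tri[r].length        # stay inside the triangle
            total += count_paths(tri, r, c, sum + tri[r][c], target, memo)
          end
          memo[key] = total
        end

        # Start from every zero in the bottom row and add up the path counts:
        bottom = tri.length - 1
        starts = (0...tri[bottom].length).select { |c| tri[bottom][c] == 0 }
        answer = starts.inject(0) { |acc, c| acc + count_paths(tri, bottom, c, tri[bottom][c], 140) }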

    Read the article
