  • Does my fat-client application belong in the MVC pattern?

    - by boatingcow
    The web-based application I’m currently working on is growing arms and legs! It’s basically an administration system which helps users to keep track of bookings, user accounts, invoicing etc. It can also be accessed via a couple of different websites using a fairly crude API. The fat-client design loosely follows the MVC pattern (or perhaps MVP) with a php/MySQL backend, Front Controller, several dissimilar Page Controllers, a liberal smattering of object-oriented and procedural Models, a confusing bunch of Views and templates, some JavaScripts, CSS files and Flash objects.

    The programmer in me is a big fan of the principle of “Separation of Concerns”, and on that note, I’m currently trying to figure out the best way to separate and combine the various concerns as the project grows and more people contribute to it. The problem we’re facing is that although JavaScript (or Flash with ActionScript) is normally written with the template, hence part of the View and decoupled from the Controller and Model, we find that it actually encompasses the entire MVC pattern...

    - Swap an image with an onmouseover event - that’s Behaviour.
    - Render a datagrid - we’re manipulating the View.
    - Send the result of reordering a list via AJAX - now we’re in Control.
    - Check a form field to see if an email address is in a valid format - we’re consulting the Model.

    Is it wise to let the database people write up the validation Model with jQuery? Can the php programmers write the necessary Control structures in JavaScript? Can the web designers really write a functional AJAX form for their View? Should there be a JavaScript overlord for every project? If the MVC pattern could be applied to the people instead of the code, we would end up with this:

    - Model - the database boffins - “SELECT * FROM mind WHERE interested IS NULL”
    - Control - pesky programmers - “class Something extends NothingAbstractClass{…}”
    - View - traditionally the domain of the graphic/web designer - “”
    - …and a new layer: Behaviour - interaction and feedback designer - “CSS3 is the new black…”

    So, we’re refactoring and I’d like to stick to best-practice design, but I’m not sure how to proceed. I don’t want to reinvent the wheel, so would anyone have any hints or tips as to what pattern I should be looking at, or any code samples from someone who’s already done the dirty work? As the programmer guy, how can I rewrite the app for backend and front end whilst keeping the two separate? And before you ask, yes I’ve looked at Zend, CodeIgniter, Symfony, etc., and no, they don’t seem to cross the boundary between server logic and client logic!

  • Need to set cursor position to the end of a contentEditable div, issue with selection and range objects

    - by DavidR
    I'm forgetting about cross-browser compatibility for the moment, I just want this to work. What I'm doing is trying to modify a script (and you probably don't need to know this) located at typegreek.com. The basic script is found here. Basically, when you type in characters, it converts the character you are typing into Greek characters and prints it onto the screen. What I'm trying to do is to get it to work on contentEditable divs (it only works for textareas).

    My issue is with this one function: the user types a key, it gets converted to a Greek key, and goes to a function; it gets sorted through some ifs, and where it ends up is where I can add div support. Here is what I have so far; myField is the div, myValue is the Greek character.

        //Get selection object...
        var userSelection
        if (window.getSelection) {userSelection = window.getSelection();}
        else if (document.selection) {userSelection = document.selection.createRange();}

        //Now get the cursor position information...
        var startPos = userSelection.anchorOffset;
        var endPos = userSelection.focusOffset;
        var cursorPos = endPos;

        //Needed later when reinserting the cursor...
        var rangeObj = userSelection.getRangeAt(0)
        var container = rangeObj.startContainer

        //Now take the content from pos 0 -> cursor, add in myValue,
        //then insert everything after myValue to the end of the line.
        myField.textContent = myField.textContent.substring(0, startPos)
            + myValue
            + myField.textContent.substring(endPos, myField.textContent.length);

        //Now the issue is, this updates the string, and returns the cursor
        //to the beginning of the div, so that at the next keypress, the
        //character is inserted into the beginning of the div.
        //So we need to reinsert the cursor where it was.

        //Re-evaluate the cursor position, taking into account the added character.
        var cursorPos = endPos + myValue.length;

        //Set the character position.
        rangeObj.setStart(container, cursorPos)

    Now, this works only as long as I don't type more than the size of the original text. Say I had 30 characters in the div beforehand. If I type more than that 30, it adds character 31, but places the cursor back at 30. I can type character 32 at pos. 31, then character 33 at pos. 32, but if I try to put character 34 in, it adds the character, and sets the cursor back at 32. The issue is that the function for adding the new character screws up if cursorPos is greater than what is defined in the range. Any ideas?
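
    (A minimal sketch of one common way to park the caret at the end of a contentEditable element, matching the title of this question; the function name placeCaretAtEnd is my own, and it assumes window.getSelection support, in keeping with the "forgetting about cross-browser compatibility" framing above.)

        // Collapse a fresh range to the very end of the element's contents,
        // then make it the active selection.
        function placeCaretAtEnd(el) {
            el.focus();
            var range = document.createRange();
            range.selectNodeContents(el);   // range spans all of el's children
            range.collapse(false);          // false = collapse to the end point
            var sel = window.getSelection();
            sel.removeAllRanges();
            sel.addRange(range);
        }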

  • Fixed point math in c#?

    - by x4000
    Hi there, I was wondering if anyone here knows of any good resources for fixed-point math in C#? I've seen things like this (http://2ddev.72dpiarmy.com/viewtopic.php?id=156) and this (http://stackoverflow.com/questions/79677/whats-the-best-way-to-do-fixed-point-math), and a number of discussions about whether decimal is really fixed point or actually floating point (update: responders have confirmed that it's definitely floating point), but I haven't seen a solid C# library for things like calculating cosine and sine.

    My needs are simple -- I need the basic operators, plus cosine, sine, arctan2, PI... I think that's about it. Maybe sqrt. I'm programming a 2D RTS game, which I have largely working, but the unit movement when using floating-point math (doubles) has very small inaccuracies over time (10-30 minutes) across multiple machines, leading to desyncs. This is presently only between a 32 bit OS and a 64 bit OS; all the 32 bit machines seem to stay in sync without issue, which is what makes me think this is a floating-point issue.

    I was aware of this as a possible issue from the outset, and so have limited my use of non-integer position math as much as possible, but for smooth diagonal movement at varying speeds I'm calculating the angle between points in radians, then getting the x and y components of movement with sin and cos. That's the main issue. I'm also doing some calculations for line segment intersections, line-circle intersections, circle-rect intersections, etc., that also probably need to move from floating point to fixed point to avoid cross-machine issues.

    If there's something open source in Java or VB or another comparable language, I could probably convert the code for my uses. The main priority for me is accuracy, although I'd like as little speed loss over present performance as possible. This whole fixed-point math thing is very new to me, and I'm surprised by how little practical information on it there is on google -- most stuff seems to be either theory or dense C++ header files. Anything you could do to point me in the right direction is much appreciated; if I can get this working, I plan to open-source the math functions I put together so that there will be a resource for other C# programmers out there.

    UPDATE: I could definitely make a cosine/sine lookup table work for my purposes, but I don't think that would work for arctan2, since I'd need to generate a table with about 64,000x64,000 entries (yikes). If you know any programmatic explanations of efficient ways to calculate things like arctan2, that would be awesome. My math background is all right, but the advanced formulas and traditional math notation are very difficult for me to translate into code.
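
    (On the arctan2 table-size worry: one standard textbook reduction, noted here because it removes the 2-D table entirely, is that atan2 can be computed from a one-argument arctan plus sign/quadrant corrections, so a single 1-D lookup table for arctan suffices:)

        \operatorname{atan2}(y,x) =
        \begin{cases}
          \arctan(y/x)       & x > 0 \\
          \arctan(y/x) + \pi & x < 0,\; y \ge 0 \\
          \arctan(y/x) - \pi & x < 0,\; y < 0 \\
          +\pi/2             & x = 0,\; y > 0 \\
          -\pi/2             & x = 0,\; y < 0
        \end{cases}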

  • .NET asmx web services: serialize object property as string property to support versioning

    - by mcliedtk
    I am in the process of upgrading our web services to support versioning. We will be publishing our versioned web services like so:

        http://localhost/project/services/1.0/service.asmx
        http://localhost/project/services/1.1/service.asmx

    One requirement of this versioning is that I am not allowed to break the original wsdl (the 1.0 wsdl). The challenge lies in how to shepherd the newly versioned classes through the logic that lies behind the web services (this logic includes a number of command and adapter classes). Note that upgrading to WCF is not an option at the moment.

    To illustrate this, let's consider an example with Blogs and Posts. Prior to the introduction of versioning, we were passing concrete objects around instead of interfaces. So an AddPostToBlog command would take in a Post object instead of an IPost.

        // Old AddPostToBlog constructor.
        public AddPostToBlog(Blog blog, Post post) {
            // constructor body
        }

    With the introduction of versioning, I would like to maintain the original Post while adding a PostOnePointOne. Both Post and PostOnePointOne will implement the IPost interface (they are not extending an abstract class because that inheritance breaks the wsdl, though I suppose there may be a way around that via some fancy xml serialization tricks).

        // New AddPostToBlog constructor.
        public AddPostToBlog(Blog blog, IPost post) {
            // constructor body
        }

    This brings us to my question regarding serialization. The original Post class has an enum property named Type. For various cross-platform compatibility issues, we are changing our enums in our web services to strings. So I would like to do the following:

        // New IPost interface.
        public interface IPost {
            object Type { get; set; }
        }

        // Original Post object.
        public class Post {
            // The purpose of this attribute would be to maintain how
            // the enum currently is serialized even though now the
            // type is an object instead of an enum (internally the
            // object actually is an enum here, but it is exposed as
            // an object to implement the interface).
            [XmlMagic(SerializeAsEnum)]
            object Type { get; set; }
        }

        // New version of Post object.
        public class PostOnePointOne {
            // The purpose of this attribute would be to force
            // serialization as a string even though it is an object.
            [XmlMagic(SerializeAsString)]
            object Type { get; set; }
        }

    The XmlMagic refers to an XmlAttribute or some other part of the System.Xml namespace that would allow me to control the type of the object property being serialized (depending on which version of the object I am serializing). Does anyone know how to accomplish this?

  • Seg Fault when using std::string on an embedded Linux platform

    - by Brad
    Hi, I have been working for a couple of days on a problem with my application running on an embedded Arm Linux platform. Unfortunately the platform precludes me from using any of the usual useful tools for finding the exact issue. When the same code is run on a PC running Linux, I get no such error.

    In the sample below, I can reliably reproduce the problem by uncommenting the string, list or vector lines. Leaving them commented results in the application running to completion. I expect that something is corrupting the heap, but I cannot see what? The program will run for a few seconds before giving a segmentation fault.

    The code is compiled using an arm-linux cross compiler:

        arm-linux-g++ -Wall -otest fault.cpp -ldl -lpthread
        arm-linux-strip test

    Any ideas greatly appreciated.

        #include <stdio.h>
        #include <pthread.h>   // added: required for the pthread calls below
        #include <vector>
        #include <list>
        #include <string>

        using namespace std;

        /////////////////////////////////////////////////////////////////////////////
        class TestSeg
        {
            static pthread_mutex_t _logLock;
        public:
            TestSeg()
            {
            }

            ~TestSeg()
            {
            }

            static void* TestThread( void *arg )
            {
                int i = 0;
                while ( i++ < 10000 )
                {
                    printf( "%d\n", i );
                    WriteBad( "Function" );
                }
                pthread_exit( NULL );
            }

            static void WriteBad( const char* sFunction )
            {
                pthread_mutex_lock( &_logLock );
                printf( "%s\n", sFunction );
                //string sKiller;       // <---------------------------------- Bad
                //list<char> killer;    // <---------------------------------- Bad
                //vector<char> killer;  // <---------------------------------- Bad
                pthread_mutex_unlock( &_logLock );
                return;
            }

            void RunTest()
            {
                int threads = 100;
                pthread_t _rx_thread[threads];
                for ( int i = 0 ; i < threads ; i++ )
                {
                    pthread_create( &_rx_thread[i], NULL, TestThread, NULL );
                }
                for ( int i = 0 ; i < threads ; i++ )
                {
                    pthread_join( _rx_thread[i], NULL );
                }
            }
        };

        pthread_mutex_t TestSeg::_logLock = PTHREAD_MUTEX_INITIALIZER;

        int main( int argc, char *argv[] )
        {
            TestSeg seg;
            seg.RunTest();
            pthread_exit( NULL );
        }

  • CSS: Chrome and Safari seem to 'add' border to width, while IE, Firefox & Opera don't

    - by Michiel
    Hey guys, I'm trying to achieve cross-browser consistency for my website, but I have been trying all day now and it's driving me nuts (0.38 am here in Europe now..). It's about this page: http://www[insert-dot-here]geld[insert-dash-here]surfen[insert-dot-here]nl/uitbetalingen.html (please note that I prefer this URL not to be made crawlable for seo-bots).

    If you view this page in IE, Firefox or Opera, everything is fine, but in Chrome and Safari the tables are a little out of line (as you'll probably clearly notice). What seems to be the problem: it appears to me that in Chrome and Safari the left and right border (2px in total) are added to the set table width, while in the other browsers the border is considered part of the width. The (most) relevant CSS lines are the following ones (from the tabel.css file, also available through the page's source file):

        table.uitbetaling {
            margin: 11px 18px 10px 19px;
            border: 1px solid #8ccaee;
            width: 498px;
            padding: 0;
        }
        table.uitbetaling img, table.uitbetaling td {
            margin: 0;
            border: 0;
            padding: 0;
            width: 496px;
        }
        table.uitbetaling tr {
            margin: 0;
            border: 0;
            padding: 0 1px 0 0;
        }

    So basically I have used a table structure to organize images, like this (the class of the table is 'uitbetaling'):

        <table>
            <tr><td><img /></td></tr>
            <tr><td><img /></td></tr>
            ...
            <tr><td><img /></td></tr>
        </table>

    If, here, I set the width of 'table.uitbetaling' and 'table.uitbetaling img, table.uitbetaling td' to the same value (e.g. both 496 or 498), the 'problem' in Chrome and Safari is solved; however, in Firefox the right-side border is then blank, because the right-side border can't 'fit' in anymore. 'img' and 'td' must be at least 2px more narrow than 'table.uitbetaling' for the right border to be visible in Firefox. Is there any way to solve this? Thanks so much in advance for your insights!!

  • Error inflating class android.widget.CompoundButton

    - by snctln
    [Disclaimer: This has been cross-posted on the Android Developers Google Group.]

    I am trying to use a CompoundButton in a project I am working on. Every time I try and use it by declaring it in my layout xml file, I receive the error:

        01-04 12:27:46.471: ERROR/AndroidRuntime(1771): Caused by: android.view.InflateException: Binary XML file line #605: Error inflating class android.widget.CompoundButton

    After fighting the error for half an hour I decided to try a minimalistic example. I am using the latest eclipse developer tools, and targeting Android 2.2, making the minimum sdk required 2.2 (8). Here is the activity java code:

        package com.example.CompoundButtonExample;

        import android.app.Activity;
        import android.os.Bundle;

        public class CompoundButtonExampleActivity extends Activity {
            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
            }
        }

    Here is the layout xml code:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:orientation="vertical"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent" >
            <TextView
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"
                android:text="@string/hello" />
            <CompoundButton
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"
                android:text="@string/hello" />
        </LinearLayout>

    Here is the manifest xml code:

        <?xml version="1.0" encoding="utf-8"?>
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="com.example.CompoundButtonExample"
            android:versionCode="1"
            android:versionName="1.0">
            <application android:icon="@drawable/icon" android:label="@string/app_name">
                <activity android:name=".CompoundButtonExampleActivity"
                          android:label="@string/app_name">
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
            </application>
            <uses-sdk android:minSdkVersion="8" />
        </manifest>

    As you can see, it is just the default hello world project that eclipse creates for you when you start a new Android project. It only differs in the fact that I add a "CompoundButton" to the main layout in a vertical LinearLayout. Can anyone confirm this bug? Or tell me what I am doing wrong?

  • HTML/CSS set div to height of sibling

    - by Paul
    I have 2 divs contained in a third. One of the contained divs is floated left, the other floated right. I would like the 2 sibling divs to always be at the same height, but am having a problem with this. So far I am only viewing the page in Firefox, and figured I'd worry about any cross-browser issues after I get it working in at least one browser.

    Here is the markup:

        <div id="main-container" class="border clearfix">
            <div id="left-div" class="border">
                ...
            </div>
            <div id="right-div" class="border">
                ...
            </div>
        </div>

    Here is the CSS:

        #main-container {
            position: relative;
            min-height: 500px;
        }
        #left-div {
            position: relative;
            float: left;
            width: 700px;
            min-height: inherit;
        }
        #right-div {
            position: relative;
            float: right;
            width: 248px;
            min-height: inherit;
            height: inherit;
        }
        .clearfix:after {
            content: " ";
            display: block;
            height: 0;
            clear: both;
            visibility: hidden;
        }
        .clearfix {
            display: inline-block;
            _height: 1%;
            clear: both;
        }
        .clearfix {
            display: block;
            clear: both;
        }
        .border {
            border: solid 1px #000;
        }

    If the content in the #left-div is longer than 500px, the #right-div does not expand to match. In an example I tried, Firefox said the computed height of the #main-container was 804px, the computed height of the #left-div was 800px, and the computed height of the #right-div was 586.2px, as it had expanded to fit its own content. I understand I might be going about this the wrong way, and if this is a duplicate question then I apologize, but I wasn't quite sure what to search under.
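
    (One script-based workaround, sketched here for illustration against the ids above; it assumes it runs after the page has loaded and that content doesn't reflow afterwards. CSS-only techniques exist as well.)

        // Force the shorter column to match the taller one.
        function equalizeHeights() {
            var left = document.getElementById('left-div');
            var right = document.getElementById('right-div');
            // offsetHeight includes padding and borders
            var tallest = Math.max(left.offsetHeight, right.offsetHeight);
            left.style.height = tallest + 'px';
            right.style.height = tallest + 'px';
        }
        window.onload = equalizeHeights;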

  • CSS Transform offset varies with text length

    - by Mr. Polywhirl
    I have set up a demo that has 5 floating <div>s with rotated text of varying length. I am wondering if there is a way to have a CSS class that can handle centering of all text regardless of length. At the moment I have to create a class for each length of characters in my stylesheet. This could get too messy. I have also noticed that the offsets get screwed up if I increase or decrease the size of the wrapping <div>. I will be adding these classes to divs through jQuery, but I still have to set up each class for cross-browser compatibility.

        .transform3 {
            -webkit-transform-origin: 65% 100%;
            -moz-transform-origin: 65% 100%;
            -ms-transform-origin: 65% 100%;
            -o-transform-origin: 65% 100%;
            transform-origin: 65% 100%;
        }
        .transform4 {
            -webkit-transform-origin: 70% 110%;
            -moz-transform-origin: 70% 110%;
            -ms-transform-origin: 70% 110%;
            -o-transform-origin: 70% 110%;
            transform-origin: 70% 110%;
        }
        .transform5 {
            -webkit-transform-origin: 80% 120%;
            -moz-transform-origin: 80% 120%;
            -ms-transform-origin: 80% 120%;
            -o-transform-origin: 80% 120%;
            transform-origin: 80% 120%;
        }
        .transform6 {
            -webkit-transform-origin: 85% 136%;
            -moz-transform-origin: 85% 136%;
            -ms-transform-origin: 85% 136%;
            -o-transform-origin: 85% 136%;
            transform-origin: 85% 136%;
        }
        .transform7 {
            -webkit-transform-origin: 90% 150%;
            -moz-transform-origin: 90% 150%;
            -ms-transform-origin: 90% 150%;
            -o-transform-origin: 90% 150%;
            transform-origin: 90% 150%;
        }

    Note: The offset values I set were eye-balled.

    Update: Although I would like this handled with a stylesheet, I believe that I will have to calculate the transformations of the CSS in JavaScript. I have created the following demo to demonstrate dynamic transformations. In this demo, the user can adjust the font-size of the .v_text class, and as long as the length of the text does not exceed the .v_text_wrapper height, the text should be vertically aligned in the center, but be aware that I have to adjust the magicOffset value. Well, I just noticed that this does not work in iOS...
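
    (Since the update above already points toward JavaScript, here is an illustrative sketch of setting transform-origin from script instead of keeping one class per text length. The vendor-prefix list mirrors the classes above; the length-to-percentage mapping at the end is entirely made up.)

        // Set transform-origin on one element, covering the prefixed variants.
        function setTransformOrigin(el, x, y) {
            var value = x + '% ' + y + '%';
            var props = ['webkitTransformOrigin', 'MozTransformOrigin',
                         'msTransformOrigin', 'OTransformOrigin', 'transformOrigin'];
            for (var i = 0; i < props.length; i++) {
                el.style[props[i]] = value;   // unsupported names are ignored
            }
        }

        // Example: derive the origin from the text length instead of
        // hand-tuning a .transformN class for every possible length.
        var el = document.querySelector('.v_text');
        var len = el.textContent.length;
        setTransformOrigin(el, 50 + 5 * len, 90 + 10 * len);  // hypothetical mapping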

  • Understanding PTS and DTS in video frames

    - by theateist
    I had fps issues when transcoding from avi to mp4 (x264). Eventually the problem was in the PTS and DTS values, so lines 12-15 were added before the av_interleaved_write_frame function:

        1.  AVFormatContext* outContainer = NULL;
        2.  avformat_alloc_output_context2(&outContainer, NULL, "mp4", "c:\\test.mp4");
        3.  AVCodec *encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
        4.  AVStream *outStream = avformat_new_stream(outContainer, encoder);
        5.  // outStream->codec initiation
        6.  // ...
        7.  avformat_write_header(outContainer, NULL);
        8.  // reading and decoding packet
        9.  // ...
        10. avcodec_encode_video2(outStream->codec, &encodedPacket, decodedFrame, &got_frame);
        11.
        12. if (encodedPacket.pts != AV_NOPTS_VALUE)
        13.     encodedPacket.pts = av_rescale_q(encodedPacket.pts, outStream->codec->time_base, outStream->time_base);
        14. if (encodedPacket.dts != AV_NOPTS_VALUE)
        15.     encodedPacket.dts = av_rescale_q(encodedPacket.dts, outStream->codec->time_base, outStream->time_base);
        16.
        17. av_interleaved_write_frame(outContainer, &encodedPacket);

    After reading many posts I still do not understand:

    1. outStream->codec->time_base = 1/25 and outStream->time_base = 1/12800. The 1st one was set by me, but I cannot figure out why and who set 12800? I noticed that before line (7) outStream->time_base = 1/90000, and right after it it changes to 1/12800. Why?
    2. When I transcode from avi to avi, meaning changing line (2) to avformat_alloc_output_context2(&outContainer, NULL, "avi", "c:\\test.avi"), then before and after line (7) outStream->time_base always remains 1/25, and not like in the mp4 case. Why?
    3. What is the difference between the time_base of outStream->codec and outStream?
    4. To calc the pts, av_rescale_q does: takes 2 time_bases, multiplies their fractions in cross, and then computes the pts. Why does it do it this way? As I debugged, encodedPacket.pts has a value incremental by 1, so why change it if it does have a value?
    5. At the beginning the dts value is -2, and after each rescaling it still has a negative number, but despite this the video played correctly! Shouldn't it be positive?
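
    (A note on question 4: the cross-multiplication is plain unit conversion done in integer arithmetic; this is generic timestamp rescaling, stated as I understand av_rescale_q's documented behaviour. Rescaling a tick count $a$ from time base $b = b_{num}/b_{den}$ to time base $c = c_{num}/c_{den}$ computes)

        a' = a \cdot \frac{b}{c} = \frac{a \cdot b_{num} \cdot c_{den}}{b_{den} \cdot c_{num}}

    (so the same instant in seconds is preserved: pts 1 in a 1/25 base becomes pts 512 in a 1/12800 base, because 12800/25 = 512.)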

  • Lazy loading the addthis script? (or lazy loading external js content dependent on already fired events)

    - by Keith Bentrup
    I want to have the addthis widget available for my users, but I want to lazy load it so that my page loads as quickly as possible. However, after trying it via a script tag and then via my lazy loading method, it appears to only work via the script tag. In the obfuscated code, I see something that looks like it's dependent on the DOMContentLoaded event (at least for firefox). Since the DOMContentLoaded event has already fired, the widget doesn't render properly.

    What to do? I could just use a script tag (slower)... or could I fire (in a cross-browser way) the DOMContentLoaded (or equivalent) event? I have a feeling this may not be possible b/c I believe that (like jQuery) there are multiple tests of the content ready event, and so multiple simulated events would have to occur. Nonetheless, this is an interesting problem b/c I have seen a couple widgets now assume that you are including their stuff via static script tags. It would be nice if they wrote code that was more useful to developers concerned about speed, but until then, is there a work around?? And/or are any of my assumptions wrong?

    Edit: Because the 1st answer to the question seemed to miss the point of my problem, I wanted to clarify the situation. This is about a specific problem. I'm not looking for yet another lazy load script or a check-if-some-dependencies-are-loaded script. Specifically this problem deals with:

    - external widgets that you do not have control over and may or may not be obfuscated
    - delaying the load of the external widgets until they are needed or at least til substantially after everything else has been loaded, including other deferred elements
    - b/c of how the widget was written, precludes existing, typical lazy loading paradigms

    While it's esoteric, I have seen it happen with a couple widgets - where the widget developers assume that you're just willing to throw in another script tag at the bottom of the page. I'm looking to save those 500-1000 ms** though, as numerous studies by yahoo, google, and amazon show it to be important to your user's experience.

    **My testing with hammerhead and personal experience indicates that this will be my savings in this case.
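
    (For reference, a generic sketch of the kind of loader being described, written here for illustration with a placeholder URL. It shows only the injection pattern; it does not solve the already-fired DOMContentLoaded problem this question is actually about.)

        // Inject an external script once it's actually needed.
        function lazyLoadScript(src, onLoaded) {
            var done = false;
            function fire() { if (!done) { done = true; onLoaded(); } }
            var s = document.createElement('script');
            s.src = src;                          // e.g. the addthis script URL
            s.async = true;
            s.onload = fire;                      // most browsers
            s.onreadystatechange = function () {  // older IE
                if (s.readyState === 'loaded' || s.readyState === 'complete') fire();
            };
            document.getElementsByTagName('head')[0].appendChild(s);
        }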

  • Unique element ID, even if element doesn't have one

    - by Robert J. Walker
    I'm writing a GreaseMonkey script where I'm iterating through a bunch of elements. For each element, I need a string ID that I can use to reference that element later. The element itself doesn't have an id attribute, and I can't modify the original document to give it one (although I can make DOM changes in my script). I can't store the references in my script because when I need them, the GreaseMonkey script itself will have gone out of scope. Is there some way to get at an "internal" ID that the browser uses, for example? A Firefox-only solution is fine; a cross-browser solution that could be applied in other scenarios would be awesome.

    Edit: "If the GreaseMonkey script is out of scope, how are you referencing the elements later?" The GreaseMonkey script is adding events to DOM objects. I can't store the references in an array or some other similar mechanism because when the event fires, the array will be gone because the GreaseMonkey script will have gone out of scope. So the event needs some way to know about the element reference that the script had when the event was attached. And the element in question is not the one to which it is attached.

    "Can't you just use a custom property on the element?" Yes, but the problem is on the lookup. I'd have to resort to iterating through all the elements looking for the one that has that custom property set to the desired id. That would work, sure, but in large documents it could be very time consuming. I'm looking for something where the browser can do the lookup grunt work.

    "Wait, can you or can you not modify the document?" I can't modify the source document, but I can make DOM changes in the script. I'll clarify in the question.

    "Can you not use closures?" Closures did turn out to work, although I initially thought they wouldn't. See my later post. It sounds like the answer to the question "Is there some internal browser ID I could use?" is "No."
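
    (Since the closing note says closures did work, a small sketch of that idea; the selectors and names here are invented for illustration.)

        // Each handler closes over the exact element it needs later, so no
        // string ID and no document-wide lookup are required: the element
        // reference survives inside the closure even after the script returns.
        var els = Array.prototype.slice.call(document.querySelectorAll('.item'));
        els.forEach(function (relatedEl, i) {
            var attachTarget = document.querySelectorAll('.trigger')[i];
            attachTarget.addEventListener('click', function () {
                // relatedEl is still reachable here, long after the
                // GreaseMonkey script that created this handler has exited.
                relatedEl.style.outline = '2px solid red';
            }, false);
        });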

  • rails named_scope ignores eager loading

    - by Craig
    Two models (Rails 2.3.8):

    - User: username & disabled properties; User has_one :profile
    - Profile: full_name & hidden properties

    I am trying to create a named_scope that eliminates the disabled=1 and hidden=1 User-Profiles. The User model is usually used in conjunction with the Profile model, so I attempt to eager-load the Profile model (:include => :profile). I created a named_scope on the User model called 'visible':

        named_scope :visible, {
            :joins => "INNER JOIN profiles ON users.id=profiles.user_id",
            :conditions => ["users.disabled = ? AND profiles.hidden = ?", false, false]
        }

    I've noticed that when I use the named_scope in a query, the eager-loading instruction is ignored.

    Variation 1 - User model only:

        # UserController
        @users = User.find(:all)

        # User's index view
        <% for user in @users %>
          <p><%= user.username %></p>
        <% end %>

        # generates a single query:
        SELECT * FROM `users`

    Variation 2 - use Profile model in view; lazy-load Profile model:

        # UserController
        @users = User.find(:all)

        # User's index view
        <% for user in @users %>
          <p><%= user.username %></p>
          <p><%= user.profile.full_name %></p>
        <% end %>

        # generates multiple queries:
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 1) ORDER BY full_name ASC LIMIT 1
        SHOW FIELDS FROM `profiles`
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 2) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 3) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 4) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 5) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 6) ORDER BY full_name ASC LIMIT 1

    Variation 3 - eager-load Profile model:

        # UserController
        @users = User.find(:all, :include => :profile)

        # view; no changes

        # two queries:
        SELECT * FROM `users`
        SELECT `profiles`.* FROM `profiles` WHERE (`profiles`.user_id IN (1,2,3,4,5,6))

    Variation 4 - use named_scope, including eager-loading instruction:

        # UserController
        @users = User.visible(:include => :profile)

        # view; no changes

        # generates multiple queries:
        SELECT `users`.* FROM `users` INNER JOIN profiles ON users.id=profiles.user_id WHERE (users.disabled = 0 AND profiles.hidden = 0)
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 1) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 2) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 3) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 4) ORDER BY full_name ASC LIMIT 1

    Variation 4 does return the correct number of records, but also appears to be ignoring the eager-loading instruction. Is this an issue with cross-model named scopes? Perhaps I'm not using it correctly. Is this sort of situation handled better by Rails 3?

  • What does Ruby have that Python doesn't, and vice versa?

    - by Lennart Regebro
    There are a lot of discussions of Python vs Ruby, and I find them all completely unhelpful, because they all turn around why feature X sucks in language Y, or claim language Y doesn't have X, although in fact it does. I also know exactly why I prefer Python, but that's also subjective, and wouldn't help anybody choosing, as they might not have the same tastes in development as I do.

    It would therefore be interesting to list the differences, objectively. So no "Python's lambdas suck". Instead explain what Ruby's lambdas can do that Python's can't. No subjectivity. Example code is good! Don't have several differences in one answer, please. And vote up the ones you know are correct, and down those you know are incorrect (or are subjective). Also, differences in syntax are not interesting. We know Python does with indentation what Ruby does with brackets and ends, and that @ is called self in Python.

    UPDATE: This is now a community wiki, so we can add the big differences here.

    Ruby has a class reference in the class body

    In Ruby you have a reference to the class (self) already in the class body. In Python you don't have a reference to the class until after the class construction is finished. An example:

        class Kaka
          puts self
        end

    self in this case is the class, and this code would print out "Kaka". There is no way to print out the class name or in other ways access the class from the class definition body in Python.

    All classes are mutable in Ruby

    This lets you develop extensions to core classes. Here's an example of a rails extension:

        class String
          def starts_with?(other)
            head = self[0, other.length]
            head == other
          end
        end

    Ruby has Perl-like scripting features

    Ruby has first class regexps, $-variables, the awk/perl line by line input loop and other features that make it more suited to writing small shell scripts that munge text files or act as glue code for other programs.

    Ruby has first class continuations

    Thanks to the callcc statement. In Python you can create continuations by various techniques, but there is no support built in to the language.

    Ruby has blocks

    With the "do" statement you can create a multi-line anonymous function in Ruby, which will be passed in as an argument into the method in front of do, and called from there. In Python you would instead do this either by passing a method or with generators.

    Ruby:

        amethod { |here|
          many=lines+of+code
          goes(here)
        }

    Python:

        def function(here):
            many=lines+of+code
            goes(here)

        amethod(function)

    Interestingly, the convenience statement in Ruby for calling a block is called "yield", which in Python will create a generator.

    Ruby:

        def themethod
          yield 5
        end

        themethod do |foo|
          puts foo
        end

    Python:

        def themethod():
            yield 5

        for foo in themethod():
            print foo

    Although the principles are different, the result is strikingly similar.

    Python has built-in generators (which are used like Ruby blocks, as noted above)

    Python has support for generators in the language. In Ruby you could use the generator module, which uses continuations to create a generator from a block. Or, you could just use a block/proc/lambda! Moreover, in Ruby 1.9 Fibers are, and can be used as, generators. docs.python.org has this generator example:

        def reverse(data):
            for index in range(len(data)-1, -1, -1):
                yield data[index]

    Contrast this with the above block examples.

    Python has flexible name space handling

    In Ruby, when you import a file with require, all the things defined in that file will end up in your global namespace. This causes namespace pollution. The solution to that is Ruby's modules. But if you create a namespace with a module, then you have to use that namespace to access the contained classes.

    In Python, the file is a module, and you can import its contained names with from themodule import *, thereby polluting the namespace if you want. But you can also import just selected names with from themodule import aname, another, or you can simply import themodule and then access the names with themodule.aname. If you want more levels in your namespace you can have packages, which are directories with modules and an __init__.py file.

    Python has docstrings

    Docstrings are strings that are attached to modules, functions and methods and can be introspected at runtime. This helps for creating such things as the help command and automatic documentation.

        def frobnicate(bar):
            """frobnicate takes a bar and frobnicates it

            >>> bar = Bar()
            >>> bar.is_frobnicated()
            False
            >>> frobnicate(bar)
            >>> bar.is_frobnicated()
            True
            """

    Python has more libraries

    Python has a vast amount of available modules and bindings for libraries.

    Python has multiple inheritance

    Ruby does not ("on purpose" -- see Ruby's website, see here how it's done in Ruby). It does reuse the module concept as a sort of abstract classes.

    Python has list/dict comprehensions

    Python:

        res = [x*x for x in range(1, 10)]

    Ruby:

        res = (0..9).map { |x| x * x }

    Python:

        >>> (x*x for x in range(10))
        <generator object <genexpr> at 0xb7c1ccd4>
        >>> list(_)
        [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

    Ruby:

        p = proc { |x| x * x }
        (0..9).map(&p)

    Python:

        >>> {x:str(y*y) for x,y in {1:2, 3:4}.items()}
        {1: '4', 3: '16'}

    Ruby:

        >> Hash[{1=>2, 3=>4}.map{|x,y| [x,(y*y).to_s]}]
        => {1=>"4", 3=>"16"}

    Python has decorators

    Things similar to decorators can be created in Ruby, and it can also be argued that they aren't as necessary as in Python.

  • Best Practices / Patterns for Enterprise Protection/Remediation of SSNs (Social Security Numbers)

    - by Erik Neu
    I am interested in hearing about enterprise solutions for SSN handling. (I looked pretty hard for any pre-existing post on SO, including reviewing the terrific SO automated "Related Questions" list, and did not find anything, so hopefully this is not a repeat.)

    First, I think it is important to enumerate the reasons systems/databases use SSNs (note: these are reasons for the de facto current state; I understand that many of them are not good reasons):

    1. Required for interaction with external entities. This is the most valid case, where external entities your system interfaces with require an SSN. This would typically be government, tax and financial.
    2. SSN is used to ensure system-wide uniqueness.
    3. SSN has become the default foreign key used internally within the enterprise, to perform cross-system joins.
    4. SSN is used for user authentication (e.g., log-on).

    The enterprise solution that seems optimum to me is to create a single SSN repository that is accessed by all applications needing to look up SSN info. This repository substitutes a globally unique, random 9-digit number (ASN) for the true SSN. I see many benefits to this approach. First of all, it is obviously highly backwards-compatible; all your systems "just" have to go through a major, synchronized, one-time data-cleansing exercise, where they replace the real SSN with the alternate ASN. Also, it is centralized, so it minimizes the scope for inspection and compliance. (Obviously, as a negative, it also creates a single point of failure.)

    This approach would solve issues 2 and 3, without ever requiring lookups to get the real SSN. For issue #1, authorized systems could provide an ASN, and be returned the real SSN. This would of course be done over secure connections, and the requesting systems would never persist the full SSN. Also, if the requesting system only needs the last 4 digits of the SSN, then that is all that would ever be passed. Issue #4 could be handled the same way as issue #1, though obviously the best thing would be to move away from having users supply an SSN for log-on.

    There are a couple of papers on this:

    - UC Berkeley: http://bit.ly/bdZPjQ
    - Oracle Vault: bit.ly/cikbi1

  • Understanding CLR 2.0 Memory Model

    - by Eloff
    Joe Duffy gives 6 rules that describe the CLR 2.0+ memory model (its actual implementation, not any ECMA standard). I'm writing down my attempt at figuring this out, mostly as a way of rubber ducking, but if I make a mistake in my logic, at least someone here will be able to catch it before it causes me grief.

    - Rule 1: Data dependence among loads and stores is never violated.
    - Rule 2: All stores have release semantics, i.e. no load or store may move after one.
    - Rule 3: All volatile loads are acquire, i.e. no load or store may move before one.
    - Rule 4: No loads and stores may ever cross a full-barrier (e.g. Thread.MemoryBarrier, lock acquire, Interlocked.Exchange, Interlocked.CompareExchange, etc.).
    - Rule 5: Loads and stores to the heap may never be introduced.
    - Rule 6: Loads and stores may only be deleted when coalescing adjacent loads and stores from/to the same location.

    I'm attempting to understand these rules.

        x = y
        y = 0 // Cannot move before the previous line according to Rule 1.

        x = y
        z = 0
        // equates to this sequence of loads and stores before possible re-ordering
        load y
        store x
        load 0
        store z

    Looking at this, it appears that the load 0 can be moved up to before load y, but the stores may not be re-ordered at all. Therefore, if a thread sees z == 0, then it also will see x == y. If y was volatile, then load 0 could not move before load y; otherwise it may. Volatile stores don't seem to have any special properties; no stores can be re-ordered with respect to each other (which is a very strong guarantee!). Full barriers are like a line in the sand which loads and stores can not be moved over.

    No idea what rule 5 means.

    I guess rule 6 means if you do:

        x = y
        x = z

    then it is possible for the CLR to delete both the load of y and the first store to x.

        x = y
        z = y
        // equates to this sequence of loads and stores before possible re-ordering
        load y
        store x
        load y
        store z
        // could be re-ordered like this
        load y
        load y
        store x
        store z
        // rule 6 applied means this is possible?
        load y
        store x // but don't pop y from stack (or first duplicate item on top of stack)
        store z

    What if y was volatile? I don't see anything in the rules that prohibits the above optimization from being carried out. This does not violate double-checked locking, because the lock() between the two identical conditions prevents the loads from being moved into adjacent positions, and according to rule 6, that's the only time they can be eliminated. So I think I understand all but rule 5 here. Anyone want to enlighten me (or correct me or add something to any of the above)?

  • How do I pass a variable number of parameters along with a callback function?

    - by Bungle
    I'm using a function to lazy-load the Sizzle selector engine (used by jQuery):

        var sizzle_loaded;

        // load the Sizzle script
        function load_sizzle(module_name) {
            var script;

            // load Sizzle script and set up 'onload' and 'onreadystatechange' event
            // handlers to ensure that external script is loaded before dependent
            // code is executed
            script = document.createElement('script');
            script.src = 'sizzle.min.js';
            script.onload = function() {
                sizzle_loaded = true;
                gather_content(module_name);
            };
            script.onreadystatechange = function() {
                if ((script.readyState === 'loaded' || script.readyState === 'complete')
                        && !sizzle_loaded) {
                    sizzle_loaded = true;
                    gather_content(module_name);
                }
            };

            // append script to the document
            document.getElementsByTagName('head')[0].appendChild(script);
        }

    I set the onload and onreadystatechange event handlers, as well as the sizzle_loaded flag, to call another function (gather_content()) as soon as Sizzle has loaded. All of this is needed to do it in a cross-browser way. Until now, my project only had to lazy-load Sizzle at one point in the script, so I was able to just hard-code the gather_content() function call into the load_sizzle() function.

    However, I now need to lazy-load Sizzle at two different points in the script, and call a different function each time once it's loaded. My first instinct was to modify the function to accept a callback function:

        var sizzle_loaded;

        // load the Sizzle script
        function load_sizzle(module_name, callback) {
            var script;

            // load Sizzle script and set up 'onload' and 'onreadystatechange' event
            // handlers to ensure that external script is loaded before dependent
            // code is executed
            script = document.createElement('script');
            script.src = 'sizzle.min.js';
            script.onload = function() {
                sizzle_loaded = true;
                callback(module_name);
            };
            script.onreadystatechange = function() {
                if ((script.readyState === 'loaded' || script.readyState === 'complete')
                        && !sizzle_loaded) {
                    sizzle_loaded = true;
                    callback(module_name);
                }
            };

            // append script to the document
            document.getElementsByTagName('head')[0].appendChild(script);
        }

    Then, I could just call it like this:

        load_sizzle(module_name, gather_content);

    However, the other callback function that I need to use takes more parameters than gather_content() does. How can I modify my function so that I can specify a variable number of parameters, to be passed with the callback function? Or, am I going about this the wrong way? Ultimately, I just want to load Sizzle, then call any function that I need to (with any arguments that it needs) once it's done loading. Thanks for any help!
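
    (A minimal sketch of the apply-based approach to this, reusing the names above; the IE onreadystatechange branch from the question is elided for brevity. The callback's arguments travel as an array and are spread back out with Function.prototype.apply.)

        // 'args' is an array of whatever the callback expects.
        function load_sizzle(callback, args) {
            var script = document.createElement('script');
            script.src = 'sizzle.min.js';
            script.onload = function () {
                sizzle_loaded = true;
                callback.apply(null, args);  // spread the array into real parameters
            };
            document.getElementsByTagName('head')[0].appendChild(script);
        }

        // usage:
        load_sizzle(gather_content, [module_name]);
        load_sizzle(other_callback, [module_name, 'extra', 42]);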

  • ASP.NET application partially reading external configuration

    - by Trent
    I have an ASP.NET web app and am attempting to reference an external config (using enterprise application blocks configuration) for some of the configuration, but it is not entirely working. I previously had all of the configuration info in the web.config (and it was working), but we are wanting to share some of this configuration information between multiple apps.

    When I put the configurationSource tag in the web.config, and read the configuration through the WebConfigurationManager object, it loads some of the external config info (Logging) but not the connectionStrings and not the custom section I created. So it's reading it (logging is working), but some dots aren't being connected and my connection strings aren't coming through. Again, it worked when it was all in the web.config. Any idea what needs to change to be able to reference an external configuration source and have it all come through?

    Code that accesses the web.config:

        Configuration webConfig = System.Web.Configuration.WebConfigurationManager.OpenWebConfiguration("~");
        ConnectionStringSettingsCollection connectionStrings = System.Web.Configuration.WebConfigurationManager.ConnectionStrings;

    web.config:

        <configuration>
          <configSections>
            <section name="enterpriseLibrary.ConfigurationSource" type="Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ConfigurationSourceSection, Microsoft.Practices.EnterpriseLibrary.Common, Version=4.1.0.0, Culture=neutral, PublicKeyToken=74025d8738dfe4ce" />
            <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
              <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
                <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
                <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
                  <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere" />
                  <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
                  <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
                  <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
                </sectionGroup>
              </sectionGroup>
            </sectionGroup>
          </configSections>
          <enterpriseLibrary.ConfigurationSource selectedSource="File Configuration Source">
            <sources>
              <add name="File Configuration Source"
                   type="Microsoft.Practices.EnterpriseLibrary.Common.Configuration.FileConfigurationSource, Microsoft.Practices.EnterpriseLibrary.Common, Version=4.1.0.0, Culture=neutral, PublicKeyToken=74025d8738dfe4ce"
                   filePath="C:\MSEAB\MSEAB.config" />
            </sources>
          </enterpriseLibrary.ConfigurationSource>
          ...
          ...
        </configuration>

    External MSEAB.config:

        <configuration>
          <configSections>
            <section name="loggingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.LoggingSettings, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=74025d8738dfe4ce" />
            <section name="dataConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data, Version=4.1.0.0, Culture=neutral, PublicKeyToken=74025d8738dfe4ce" />
            <sectionGroup name="customSectionGroup">
              <section name="customSection" type="app.customSection" allowLocation="true" allowDefinition="Everywhere" />
            </sectionGroup>
          </configSections>
          <loggingConfiguration name="Logging Application Block" tracingEnabled="true"
              defaultCategory="General" logWarningsWhenNoCategoriesMatch="true">
            ...
          </loggingConfiguration>
          <connectionStrings>
            <clear />
            <add name="DB.DEV" connectionString="User ID=user;Password=pwd;Data Source=DV408;" providerName="Oracle.DataAccess.Client"/>
            <add name="DB.TEST" connectionString="User ID=user;Password=pwd;Data Source=TS408;" providerName="Oracle.DataAccess.Client"/>
            ...
          </connectionStrings>
          <customSectionGroup>
            <customSection notificationemail="[email protected]" dirPath="C:\Dir"
                initialrowlimit="500" maxrowlimit="1500" adminadgroup="_admins">
            </customSection>
          </customSectionGroup>
        </configuration>

  • JavaScript: How to get text from all descendents of an element, disregarding scripts?

    - by Bungle
    My current project involves gathering text content from an element and all of its descendents, based on a provided selector. For example, when supplied the selector #content and run against this HTML:

        <div id="content">
            <p>This is some text.</p>
            <script type="text/javascript">
                var test = true;
            </script>
            <p>This is some more text.</p>
        </div>

    my script would return (after a little whitespace cleanup):

        This is some text. var test = true; This is some more text.

    However, I need to disregard text nodes that occur within <script> elements. This is an excerpt of my current code:

        // get text content of all matching elements
        for (x = 0; x < selectors.length; x++) {
            matches = Sizzle(selectors[x], document);
            for (y = 0; y < matches.length; y++) {
                match = matches[y];
                if (match.innerText) {          // IE
                    content += match.innerText + ' ';
                } else if (match.textContent) { // other browsers
                    content += match.textContent + ' ';
                }
            }
        }

    It's a bit overly simplistic in that it just returns all text nodes within the element (and its descendents) that matches the provided selector. The solution I'm looking for would return all text nodes except for those that fall within script elements. It doesn't need to be especially high-performance, but I do need it to ultimately be cross-browser compatible.

    I'm assuming that I'll need to somehow loop through all children of the element that matches the selector and accumulate all text nodes other than ones within <script> elements; it doesn't look like there's any way to identify JavaScript once it's already rolled into the string accumulated from all of the text nodes. I can't use jQuery (for performance/bandwidth reasons), although you may have noticed that I do use its Sizzle selector engine, so jQuery's selector logic is available. Thanks in advance for any help!
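
    (A sketch of the recursive walk described in the last paragraph; getTextSkippingScripts is an invented name.)

        // Recursively collect text nodes, skipping <script> subtrees entirely.
        function getTextSkippingScripts(node) {
            var text = '';
            for (var child = node.firstChild; child; child = child.nextSibling) {
                if (child.nodeType === 3) {              // text node
                    text += child.nodeValue;
                } else if (child.nodeType === 1 &&       // element node
                           child.nodeName !== 'SCRIPT') {
                    text += getTextSkippingScripts(child);
                }
            }
            return text;
        }

        // usage with the Sizzle matches from the excerpt above:
        // content += getTextSkippingScripts(match) + ' ';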

  • CLR 4.0 inlining policy? (maybe bug with MethodImplOptions.NoInlining)

    - by ControlFlow
    I've been testing some new CLR 4.0 behavior in method inlining (cross-assembly inlining) and found some strange results:

    Assembly ClassLib.dll:

        using System.Diagnostics;
        using System;
        using System.Reflection;
        using System.Security;
        using System.Runtime.CompilerServices;

        namespace ClassLib
        {
            public static class A
            {
                static readonly MethodInfo GetExecuting =
                    typeof(Assembly).GetMethod("GetExecutingAssembly");

                public static Assembly Foo(out StackTrace stack) // 13 bytes
                {
                    // explicit call to GetExecutingAssembly()
                    stack = new StackTrace();
                    return Assembly.GetExecutingAssembly();
                }

                public static Assembly Bar(out StackTrace stack) // 25 bytes
                {
                    // reflection call to GetExecutingAssembly()
                    stack = new StackTrace();
                    return (Assembly) GetExecuting.Invoke(null, null);
                }

                public static Assembly Baz(out StackTrace stack) // 9 bytes
                {
                    stack = new StackTrace();
                    return null;
                }

                public static Assembly Bob(out StackTrace stack) // 13 bytes
                {
                    // call of non-inlinable method!
                    return SomeSecurityCriticalMethod(out stack);
                }

                [SecurityCritical, MethodImpl(MethodImplOptions.NoInlining)]
                static Assembly SomeSecurityCriticalMethod(out StackTrace stack)
                {
                    stack = new StackTrace();
                    return Assembly.GetExecutingAssembly();
                }
            }
        }

    Assembly ConsoleApp.exe:

        using System;
        using ClassLib;
        using System.Diagnostics;

        class Program
        {
            static void Main()
            {
                Console.WriteLine("runtime: {0}", Environment.Version);
                StackTrace stack;
                Console.WriteLine("Foo: {0}\n{1}", A.Foo(out stack), stack);
                Console.WriteLine("Bar: {0}\n{1}", A.Bar(out stack), stack);
                Console.WriteLine("Baz: {0}\n{1}", A.Baz(out stack), stack);
                Console.WriteLine("Bob: {0}\n{1}", A.Bob(out stack), stack);
            }
        }

    Results:

        runtime: 4.0.30128.1
        Foo: ClassLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
            at ClassLib.A.Foo(StackTrace& stack)
            at Program.Main()
        Bar: ClassLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
            at ClassLib.A.Bar(StackTrace& stack)
            at Program.Main()
        Baz:
            at Program.Main()
        Bob: ClassLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
            at Program.Main()

    So the questions are:

    1. Why does the JIT not inline the Foo and Bar calls the way it does Baz? They are lower than 32 bytes of IL and are good candidates for inlining.
    2. Why does the JIT inline the call to Bob, and the inner call to SomeSecurityCriticalMethod, which is marked with the [MethodImpl(MethodImplOptions.NoInlining)] attribute?
    3. Why does GetExecutingAssembly return a valid assembly when it is called by the inlined Baz and SomeSecurityCriticalMethod methods? I'd expect it to perform a stack walk to detect the executing assembly, but the stack will contain only the Program.Main() call and no methods of the ClassLib assembly, so ConsoleApp should be returned.

  • Recommendations for a free GIS library supporting raster images

    - by gspr
    Hi. I'm quite new to the whole field of GIS, and I'm about to make a small program that essentially overlays GPS tracks on a map together with some other annotations. I primarily need to allow scanned (thus raster) maps (although it would be nice to support proper map formats and something like OpenStreetMap in the long run).

    My first exploratory program uses Qt's graphics view framework and overlays the GPS points by simply projecting them onto the tangent plane to the WGS84 ellipsoid at a calibration point. This gives half-decent accuracy, and actually looks good. But then I started wondering. To get the accuracy I need (i.e. remove the "half" in "half-decent"), I have to correct for the map projection. While the math is not a problem in itself, supporting many map projections feels like needless work. Even though a few projections would probably be enough, I started thinking about just using something like the PROJ.4 library to do my projections.

    But then, why not take it all the way? Perhaps I might as well use a full-blown map library such as Mapnik (edit: Quantum GIS also looks very nice), which will probably pay off when I start to want even more fancy annotations or some other symptom of featuritis.

    So, finally, to the question: What would you do? Would you use a full-blown map library? If so, which one? Again, it's important that it supports using (and zooming in and out with) raster maps and has pretty overlay features. Or would you just keep it simple, and go with Qt's own graphics view framework together with something like PROJ.4 to handle the map projections? I appreciate any feedback!

    Some technicalities: I'm writing in C++ with a Qt-based GUI, so I'd prefer something that plays relatively nicely with those. Also, the library must be free software (as in FOSS), and at least decently cross-platform (GNU/Linux, Windows and Mac, at least).

    Edit: OK, it seems I didn't do quite enough research before asking this question. Both Quantum GIS and Mapnik seem very well suited for my purpose. The former especially so, since it's based on Qt.

  • (resolved) empty response body in ajax (or 206 Partial Content)

    - by Nikita Rybak
    Hi guys, I'm feeling completely stupid because I've spent two hours solving a task which should be very simple and which I have solved many times before. But now I'm not even sure in which direction to dig.

    I fail to fetch static content using ajax from local servers (Apache and Mongrel). I get responses 200 and 206 (depending on the server), empty response text (although the Content-Length header is always correct), and firebug shows the request in red. The javascript is very generic; I'm getting the same results even here: http://www.w3schools.com/ajax/tryit.asp?filename=tryajax_first (just change the document location to 'http://localhost:3000/whatever'). So, it's probably not the cause. Well, now I'm out of ideas. I can also post http headers, if it'll help. Thanks!

    Response headers:

        Connection      close
        Date            Sat, 01 May 2010 21:05:23 GMT
        Last-Modified   Sun, 18 Apr 2010 19:33:26 GMT
        Content-Type    text/html
        Content-Length  7466

    Request headers:

        Host             localhost:3000
        User-Agent       Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept           text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language  en-us,en;q=0.5
        Accept-Encoding  gzip,deflate
        Accept-Charset   ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive       115
        Connection       keep-alive
        Referer          http://www.w3schools.com/ajax/tryit_view.asp
        Origin           http://www.w3schools.com

    Response headers:

        Date            Sat, 01 May 2010 21:54:59 GMT
        Server          Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_jk/1.2.28
        Etag            "3d5cbdb-fb4-4819c460d4a40"
        Accept-Ranges   bytes
        Content-Length  4020
        Cache-Control   max-age=7200, public, proxy-revalidate
        Expires         Sat, 01 May 2010 23:54:59 GMT
        Content-Range   bytes 0-4019/4020
        Keep-Alive      timeout=5, max=100
        Connection      Keep-Alive
        Content-Type    application/javascript

    Request headers:

        Host             localhost
        User-Agent       Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept           text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language  en-us,en;q=0.5
        Accept-Encoding  gzip,deflate
        Accept-Charset   ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive       115
        Connection       keep-alive
        Origin           null

    UPDATED: I've found the problem: it was about cross-domain requests. I knew that there are restrictions, but thought they were relaxed for the local filesystem and local servers. (And expected a more descriptive error message, anyway.) Thanks everybody!

    Read the article

  • What's the best Scala build system?

    - by gatoatigrado
    I've seen questions about IDEs here -- Which is the best IDE for Scala development? and What is the current state of tooling for Scala? -- but I've had mixed experiences with IDEs. Right now, I'm using the Eclipse IDE with the automatic workspace refresh option, and KDE 4's Kate as my text editor. Here are some of the problems I'd like to solve:

    1. Use my own editor. IDEs are really geared at everyone using their components. I like Kate better, but the refresh system is very annoying (it doesn't use inotify; rather, maybe a 10s polling interval). The reason I don't use the built-in text editor is because broken auto-complete functionality causes the IDE to hang for maybe 10s.
    2. Rebuild only modified files. The Eclipse build system is broken. It doesn't know when to rebuild classes. I find myself going to Project > Clean almost half of the time. Worse, it seems that even after it has finished building my project, a few minutes later it will pop up with some bizarre error (edit: these errors appear to be things that were previously solved with a project clean, but then come back up...). Finally, setting "Preferences / Continue launch if project contains errors" to "prompt" seems to have no effect for Scala projects (i.e. it always launches even if there are errors).
    3. Build customization. I can use the "nightly" release, but I'll want to modify and use my own Scala builds, not the compiler that's built into the IDE's plugin. It would also be nice to pass [e.g.] -Xprint:jvm to the compiler (to print out lowered code).
    4. Fast compiling. Though Eclipse doesn't always build right, it does seem snappy -- even more so than fsc.

    I looked at Ant and Maven, though I haven't employed either yet (I'll also need to spend time solving #3 and #4). I wanted to see if anyone has other suggestions before I spend time getting a suboptimal build system working. Thanks in advance!

    UPDATE - I'm now using Maven, passing a project as a compiler plugin to it; a sketch of the plugin configuration is below. It seems fast enough; I'm not sure what kind of jar caching Maven does. A current repository for Scala 2.8.0 is available [link]. The archetypes are very cool, and cross-platform support seems very good. However, about compile issues, I'm not sure if fsc is actually fixed, or whether my project is stable enough (e.g. class names aren't changing) -- running it manually doesn't bother me as much. If you'd like to see an example, feel free to browse the pom.xml files I'm using [github].

    UPDATE 2 - From benchmarks I've seen, Daniel Spiewak is right that buildr is faster than Maven (and, if one is doing incremental changes, Maven's 10-second latency gets annoying), so if one can craft a compatible build file, then it's probably worth it...
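
    For reference, a minimal sketch of the kind of maven-scala-plugin stanza the update refers to; the version number is an assumption, and the <args> block is where options like -Xprint:jvm can be handed straight to scalac:

        <plugin>
          <groupId>org.scala-tools</groupId>
          <artifactId>maven-scala-plugin</artifactId>
          <version>2.13.1</version>  <!-- assumed version; pin to your setup -->
          <executions>
            <execution>
              <goals>
                <goal>compile</goal>
                <goal>testCompile</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <args>
              <!-- extra options handed straight to scalac -->
              <arg>-Xprint:jvm</arg>
            </args>
          </configuration>
        </plugin>

    Pointing the plugin at a locally built compiler covers desideratum #3; whether Maven's rebuild latency satisfies #4 is, as the benchmarks suggest, more debatable.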

    Read the article

  • Escaping Code for Different Shells

    - by Jon Purdy
    Question: What characters do I need to escape in a user-entered string to securely pass it into shells on Windows and Unix? What shell differences and version differences should be taken into account? Can I use printf "%q" somehow, and is that reliable across shells?

    Backstory (a.k.a. Shameless Self-Promotion): I made a little DSL, the Vision Web Template Language, which allows the user to create templates for X(HT)ML documents and fragments, then automatically fill them in with content. It's designed to separate template logic from dynamic content generation, in the same way that CSS is used to separate markup from presentation. In order to generate dynamic content, a Vision script must defer to a program written in a language that can handle the generation logic, such as Perl or Python. (Aside: using PHP is also possible, but Vision is intended to solve some of the very problems that PHP perpetuates.) In order to do this, the script makes use of the @system directive, which executes a shell command and expands to its output. (Platform-specific generation can be handled using @unix or @windows, which only expand on the proper platform.) The problem is obvious, I should think:

    test.htm:
    <!-- ... -->
    <form action="login.vis" method="POST">
    <input type="text" name="USERNAME"/>
    <input type="password" name="PASSWORD"/>
    </form>
    <!-- ... -->

    login.vis:
    #!/usr/bin/vision
    # Think USERNAME = ";rm -f;"
    @system './login.pl' { USERNAME; PASSWORD }

    One way to safeguard against this kind of attack is to set proper permissions on scripts and directories, but Web developers may not always set things up correctly, and the naive developer should get just as much security as the experienced one. The solution, logically, is to include a @quote directive that produces a properly escaped string for the current platform:

    @system './login.pl' { @quote : USERNAME; @quote : PASSWORD }

    But what should @quote actually do? It needs to be both cross-platform and secure, and I don't want to create terrible problems with a naive implementation. Any thoughts?
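
    For the Unix half, one widely used recipe (a sketch of what @quote could expand to for POSIX shells; cmd.exe needs its own, different treatment): wrap the value in single quotes and turn every embedded single quote into '\''. Inside single quotes the shell treats every other character literally.

        # A minimal sketch for POSIX sh: print the argument wrapped so that
        # none of its characters are interpreted by the shell. Only ' itself
        # needs treatment, and it becomes '\'' (close, escaped quote, reopen).
        quote_posix() {
            printf "'%s'" "$(printf '%s' "$1" | sed "s/'/'\\\\''/g")"
        }

        # Example: the hostile USERNAME from above comes out inert.
        quote_posix ";rm -f;"   # prints ';rm -f;'

    As for printf "%q": it does similar work, but it is a bash feature rather than POSIX, so relying on it would tie @quote to one particular shell.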

    Read the article

  • Embedding mercurial revision information in Visual Studio c# projects automatically

    - by Mark Booth
Original Problem

    In building our projects, I want the Mercurial id of each repository to be embedded within the product(s) of that repository (the library, application or test application). I find it makes it so much easier to debug an application being run by customers 8 time zones away if you know precisely what went into building the particular version of the application they are using. As such, every project (application or library) in our systems implements a way of getting at the associated revision information. I also find it very useful to be able to see if an application has been compiled with clean (unmodified) changesets from the repository. 'hg id' usefully appends a + to the changeset id when there are uncommitted changes in a repository, so this allows us to easily see if people are running a clean or a modified version of the code. My current solution is detailed below, and fulfills the basic requirements, but there are a number of problems with it.

    Current Solution

    At the moment, to each and every Visual Studio solution, I add the following "Pre-build event command line" commands:

    cd $(ProjectDir)
    HgID

    I also add an HgID.bat file to the project directory:

    @echo off
    type HgId.pre > HgId.cs
    For /F "delims=" %%a in ('hg id') Do <nul >>HgId.cs set /p = @"%%a"
    echo ; >> HgId.cs
    echo } >> HgId.cs
    echo } >> HgId.cs

    along with an HgId.pre file, which is defined as:

    namespace My.Namespace {
    /// <summary> Auto generated Mercurial ID class. </summary>
    internal class HgID {
    /// <summary> Mercurial version ID [+ is modified] [Named branch]</summary>
    public const string Version =

    When I build my application, the pre-build event is triggered on all libraries, creating a new HgId.cs file (which is not kept under revision control) and causing the library to be re-compiled with the new 'hg id' string in 'Version'.

    Problems with the current solution

    The main problem is that since HgId.cs is re-created at each pre-build, every time we need to compile anything, all projects in the current solution are re-compiled. Since we want to be able to easily debug into our libraries, we usually keep many libraries referenced in our main application solution. This can result in build times which are significantly longer than I would like. Ideally I would like the libraries to compile only if the contents of the HgId.cs file have actually changed, as opposed to having been re-created with exactly the same contents.

    The second problem with this method is its dependence on specific behaviour of the Windows shell. I've already had to modify the batch file several times, since the original worked under XP but not Vista, the next version worked under Vista but not XP, and finally I managed to make it work with both. Whether it will work with Windows 7, however, is anyone's guess, and as time goes on I see it more likely that contractors will expect to be able to build our apps on their Windows 7 boxen.

    Finally, I have an aesthetic problem with this solution: batch files and bodged-together template files feel like the wrong way to do this.

    My actual questions

    How would you solve/how are you solving the problem I'm trying to solve? What better options are out there than what I'm currently doing?

    Rejected Solutions to these problems

    Before I implemented the current solution, I looked at Mercurial's Keyword extension, since it seemed like the obvious solution. However, the more I looked at it and read people's opinions, the more I came to the conclusion that it wasn't the right thing to do.
    I also remember the problems that keyword substitution caused me in projects at previous companies (just the thought of ever having to use SourceSafe again fills me with a feeling of dread *8'). Also, I don't particularly want to have to enable Mercurial extensions to get the build to complete. I want the solution to be self-contained, so that it isn't easy for the application to be accidentally compiled without the embedded version information just because an extension isn't enabled or the right helper software hasn't been installed. I also thought of writing this in a better scripting language, one where I would only write the HgId.cs file if the content had actually changed, but all of the options I could think of would require my co-workers, contractors and possibly customers to install software they might not otherwise want (for example, Cygwin). Any other options people can think of would be appreciated.

    Update

    Partial solution

    Having played around with it for a while, I've managed to get the HgId.bat file to only overwrite the HgId.cs file if it changes:

    @echo off
    type HgId.pre > HgId.cst
    For /F "delims=" %%a in ('hg id') Do <nul >>HgId.cst set /p = @"%%a"
    echo ; >> HgId.cst
    echo } >> HgId.cst
    echo } >> HgId.cst
    fc HgId.cs HgId.cst >NUL
    if %errorlevel%==0 goto :ok
    copy HgId.cst HgId.cs
    :ok
    del HgId.cst

    Problems with this solution

    Even though HgId.cs is no longer being re-created every time, Visual Studio still insists on compiling everything every time. I've tried looking for solutions and tried checking "Only build startup projects and dependencies on Run" in Tools|Options|Projects and Solutions|Build and Run, but it makes no difference. The second problem also remains, and now I have no way to test whether it will work with Vista, since that contractor is no longer with us. If anyone can test this batch file on a Windows 7 and/or Vista box, I would appreciate hearing how it went.

    Finally, my aesthetic problem with this solution is even stronger than it was before, since the batch file is more complex and there is now more to go wrong. If you can think of any better solutions, I would love to hear about them.
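
    One self-contained alternative that takes the fragile shell logic out of the picture: a tiny C# console helper, built once and invoked from the pre-build event in place of HgID.bat, so the only prerequisites are .NET and hg themselves. A hypothetical sketch following the same HgId.pre / HgId.cs scheme as above:

        using System.Diagnostics;
        using System.IO;

        class HgIdTool
        {
            static void Main()
            {
                // Run "hg id" in the current (project) directory and capture
                // its single line of output, e.g. "1a2b3c4d5e6f+ default".
                var psi = new ProcessStartInfo("hg", "id")
                {
                    UseShellExecute = false,
                    RedirectStandardOutput = true
                };
                string id;
                using (var p = Process.Start(psi))
                {
                    id = p.StandardOutput.ReadToEnd().Trim();
                    p.WaitForExit();
                }

                // Rebuild the HgId.cs source in memory from the template...
                string source = File.ReadAllText("HgId.pre")
                              + "@\"" + id + "\";\r\n    }\r\n}\r\n";

                // ...and only write it out when the content has changed, so
                // the file's timestamp stays stable between builds.
                if (!File.Exists("HgId.cs") || File.ReadAllText("HgId.cs") != source)
                    File.WriteAllText("HgId.cs", source);
            }
        }

    This doesn't cure Visual Studio's urge to rebuild referenced projects regardless, but it removes the XP/Vista/Windows 7 shell differences from the equation.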

    Read the article
