Search Results

Search found 8268 results on 331 pages for 'difference'.


  • Auditing front end performance on web application

    - by user1018494
    I am currently trying to performance-tune the UI of a company web application. The application will only ever be accessed by staff, so the connection between server and client will always be considerably faster than it would be over the internet. I have been using performance auditing tools such as YSlow and Google Chrome's profiling tool to highlight areas worth investigating. However, these tools are written with the internet in mind. For example, a Google Chrome audit of the application currently suggests the following:

    Network Utilization
        Combine external CSS (Red warning)
        Combine external JavaScript (Red warning)
        Enable gzip compression (Red warning)
        Leverage browser caching (Red warning)
        Leverage proxy caching (Amber warning)
        Minimise cookie size (Amber warning)
        Parallelize downloads across hostnames (Amber warning)
        Serve static content from a cookieless domain (Amber warning)

    Web Page Performance
        Remove unused CSS rules (Amber warning)
        Use normal CSS property names instead of vendor-prefixed ones (Amber warning)

    Are any of these bits of advice redundant given the connection speed and usage pattern? The users will be using the application frequently throughout the day, so it doesn't matter if the initial hit is large (when they first visit the page and build their cache) as long as a minimal amount of work is done on subsequent page views. For example, is it worth the effort of combining all of our CSS and JavaScript files? It may speed up the initial page view, but how much of a difference will it really make on subsequent page views throughout the working day? I've tried searching for this, but all I keep finding is the standard internet-facing performance advice. Any advice on what to focus my performance-tweaking efforts on in this scenario, or other auditing tool recommendations, would be much appreciated.

    Read the article

  • Inconsistent behavior working with "Flex on Rails" example.

    - by kmontgom
    I'm experimenting with Flex and Rails right now (Rails is cool). I'm following the examples in the book "Flex on Rails", and I'm getting some puzzling and inconsistent behavior. Here's the Flex MXML:

        <mx:HTTPService id="index" url="http://localhost:3000/people.xml" resultFormat="e4x" />

        <mx:DataGrid dataProvider="{index.lastResult.person}" width="100%" height="100%">
          <mx:columns>
            <mx:DataGridColumn headerText="First Name" dataField="first-name"/>
            <mx:DataGridColumn headerText="Last Name" dataField="last-name"/>
          </mx:columns>
        </mx:DataGrid>

        <mx:Script>
          <![CDATA[
            import mx.controls.Alert;

            private function main():void {
              Alert.show( "In main()" );
            }
          ]]>
        </mx:Script>

    When I run the app from my IDE (Amethyst beta, also cool), the DataGrid appears but is not populated. The Alert.show() also triggers. When I go to a web browser and manually enter the URL (http://localhost:3000/people.xml), the Mongrel console shows the request coming through and the browser shows the response. No exceptions or other error messages occur. What's the difference? Do I need to alter some OS setting? I'm using Win7 on an x64 machine.
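
    For reference (and not necessarily the cause of the inconsistency here): an HTTPService declared in MXML does not issue its request until send() is called, so a grid bound to index.lastResult stays empty unless something triggers the call. A minimal sketch, assuming the request should fire as soon as the application starts:

        <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
                        creationComplete="index.send()">
            <!-- creationComplete fires the request once the app has initialized;
                 until send() runs, index.lastResult is null and the grid stays empty -->
            <mx:HTTPService id="index" url="http://localhost:3000/people.xml"
                            resultFormat="e4x"/>
        </mx:Application>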

    Read the article

  • SSIS - Skip Missing Files

    - by Greg
    I have an SSIS 2008 package that calls about 10 other SSIS packages (legacy issues, don't ask). Each of those child packages loads a specific file into a table, but sometimes one or more of these input files will be missing. How can I let a child package fail (because a file is missing) but let the rest of the parent package keep running? I've tried increasing the maximum error count on the parent package, on the tasks in the parent package that call each child, and in the child packages themselves. None of that seemed to make any difference. I still get this error when I run it with a file missing:

        SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (2) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.

    Edit: FailPackageOnFailure and FailParentOnFailure are already set to false everywhere.
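
    One approach often suggested for this kind of setup (sketched below with made-up variable names, not a confirmed fix) is to avoid the failure entirely: test for the file in a Script Task and route around the Execute Package task with a precedence constraint when the file is absent, rather than fighting MaximumErrorCount:

        // Inside an SSIS Script Task (C#). User::InputFilePath and User::FileExists are
        // hypothetical package variables listed in the task's ReadOnly/ReadWrite variables.
        public void Main()
        {
            string path = Dts.Variables["User::InputFilePath"].Value.ToString();
            Dts.Variables["User::FileExists"].Value = System.IO.File.Exists(path);
            Dts.TaskResult = (int)ScriptResults.Success;
        }

    A precedence constraint expression such as @[User::FileExists] == TRUE on the path to the Execute Package task would then simply skip the child when its file is missing.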

    Read the article

  • Entity Framework: Auto-updating foreign key when setting a new object reference

    - by Adrian Grigore
    Hi, I am porting an existing application from LINQ to SQL to Entity Framework 4 (default code generation). One difference I noticed between the two is that a foreign key property is not updated when resetting the object reference. Now I need to decide how to deal with this. For example, suppose you have two entity types, Company and Employee, where one Company has many Employees. In LINQ to SQL, setting the company also sets the company id:

        var company = new Company { ID = 1 };
        var employee = new Employee();
        Debug.Assert(employee.CompanyID == 0);
        employee.Company = company;
        Debug.Assert(employee.CompanyID == 1); // Works fine!

    In Entity Framework (and without using any code template customization) this does not work:

        var company = new Company { ID = 1 };
        var employee = new Employee();
        Debug.Assert(employee.CompanyID == 0);
        employee.Company = company;
        Debug.Assert(employee.CompanyID == 1); // Throws, since CompanyID was not updated!

    How can I make EF behave the same way as LINQ to SQL? I had a look at the default code generation T4 template, but I could not figure out how to make the necessary changes.
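
    Not a T4 answer, but one low-tech workaround worth mentioning (a sketch only; Employee, Company, CompanyID and ID are the names from the question, and when EF performs its own fix-up still depends on the association type) is to keep the scalar in sync yourself from a partial class:

        // Sketch: the generated entity classes are partial, so helper logic can live alongside them.
        public partial class Employee
        {
            public void SetCompany(Company company)
            {
                Company = company;
                CompanyID = company.ID;   // keep the scalar FK in step with the reference
            }
        }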

    Read the article

  • Time with and without OpenMP

    - by was
    I have a question. I tried to improve a well-known program in C, the Fox algorithm for matrix multiplication (link to the original MPI-only version: http://web.mst.edu/~ercal/387/MPI/ppmpi_c/chap07/fox.c). The initial program used only MPI, and I tried to insert OpenMP into the matrix multiplication routine in order to improve the computation time. (This program runs on a cluster whose machines have 2 cores, so I created 2 threads.) The problem is that there is no difference in time with and without OpenMP; sometimes the time with OpenMP is even equal to or greater than the time without it. I tried multiplying two 600x600 matrices.

        void Local_matrix_multiply(
                LOCAL_MATRIX_T* local_A  /* in  */,
                LOCAL_MATRIX_T* local_B  /* in  */,
                LOCAL_MATRIX_T* local_C  /* out */) {
            int i, j, k;
            chunk = CHUNKSIZE; // 100

            #pragma omp parallel shared(local_A, local_B, local_C, chunk, nthreads) private(i,j,k,tid) num_threads(2)
            {
                /*
                tid = omp_get_thread_num();
                if (tid == 0) {
                    nthreads = omp_get_num_threads();
                    printf("O Pollaplasiamos pinakwn ksekina me %d threads\n", nthreads);
                }
                printf("Thread %d use the matrix: \n", tid);
                */

                #pragma omp for schedule(static, chunk)
                for (i = 0; i < Order(local_A); i++)
                    for (j = 0; j < Order(local_A); j++)
                        for (k = 0; k < Order(local_B); k++)
                            Entry(local_C,i,j) = Entry(local_C,i,j)
                                               + Entry(local_A,i,k)*Entry(local_B,k,j);
            } // end pragma omp parallel
        } /* Local_matrix_multiply */
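
    Two things that are cheap to verify first (a sketch, not a diagnosis): that the file is really compiled with OpenMP enabled (e.g. mpicc -fopenmp with GCC; without the flag the pragmas are silently ignored), and whether two MPI ranks are already placed on each 2-core node, in which case two extra OpenMP threads per rank would just oversubscribe the cores. A small timing wrapper, using the program's own LOCAL_MATRIX_T type and an invented name, makes the per-process cost visible:

        #include <omp.h>
        #include <stdio.h>

        /* Call this instead of Local_matrix_multiply() directly to see how long the
           multiply takes in this process and how many threads OpenMP can actually use. */
        void Timed_local_multiply(LOCAL_MATRIX_T* a, LOCAL_MATRIX_T* b, LOCAL_MATRIX_T* c) {
            double t0 = omp_get_wtime();
            Local_matrix_multiply(a, b, c);
            double t1 = omp_get_wtime();
            printf("multiply: %f s, omp_get_max_threads() = %d\n", t1 - t0, omp_get_max_threads());
        }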

    Read the article

  • Why does Rails screw up timezones when I am editing a resource?

    - by DJTripleThreat
    Steps to reproduce:

        prompt> rails test_app
        prompt> cd test_app
        prompt> script/generate scaffold date_test my_date:datetime
        prompt> rake db:migrate

    Now edit your app/views/date_tests/edit.html.erb:

        <h1>Editing date_test</h1>
        <% form_for(@date_test) do |f| %>
          <%= f.error_messages %>
          <p>
            RIGHT!<br/>
            <%= text_field_tag @date_test, f.object.my_date %>
          </p>
          <p>
            WRONG!<br />
            <%= f.text_field :my_date %>
          </p>
          <p>
            <%= f.submit 'Update' %>
          </p>
        <% end %>
        <%= link_to 'Show', @date_test %> |
        <%= link_to 'Back', date_tests_path %>

    Now edit your config/environment.rb:

        # add this
        config.time_zone = 'Central Time (US & Canada)'

    This recreates the problem I am having in my actual app. The problem in my app is that I'm storing a date in a hidden field and rendering a "user friendly" version. Creating a resource works fine, but as soon as I try to edit it, the time changes (it adds the difference between my current time zone configuration and UTC). Go to http://localhost:3000/date_tests/new and save the time, then go to re-edit it: you will have two different representations of the date/time, one of which will save incorrectly and one of which will save correctly.
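
    If the culprit is that f.text_field renders the attribute's raw before-type-cast (UTC) value while the value is re-interpreted in the configured zone on save, one workaround worth trying (a sketch only, not a confirmed fix) is to pass the zone-converted value explicitly:

        <p>
          <%# render the TimeWithZone value rather than the raw database string %>
          <%= f.text_field :my_date, :value => f.object.my_date %>
        </p>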

    Read the article

  • XForms relation of 'constraint' and 'required' properties

    - by Danny
    As a reference, the most similar question already asked is: http://stackoverflow.com/questions/8667849/making-xforms-enforce-the-constraint-and-type-model-item-properties-only-when-fi The difference is that I cannot use the 'relevant' property, since I do want the field to be visible and accessible. I'm attempting to make an XForms form with the following properties: It displays a text field named 'information' (for the sake of the example). This field must not be required, since it may not be necessary to enter data (or the data will be entered at a later time). However, if data is entered in this field, it must adhere to the specified constraint. I cannot mark the field as not relevant, since this would hide the field, and some data may need to be entered in it. The trouble now is that even when the field has no data in it, the constraint is still enforced (i.e. even though it is not marked as 'required'). I have taken a look at the XForms 1.1 specification, but it does not seem to describe how the properties 'required' and 'constraint' should interact. The only option I see is to add a clause to the constraint such that an empty value is allowed, e.g. ". = '' or <actual constraint>". However, I don't like this; it feels like a workaround to add it to every such field. Is there any other way to express that non-required fields should not need to match the constraint for that field? (Am I missing something?)
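
    For what it's worth, a sketch of that empty-or-valid workaround expressed as a bind (the xf prefix is assumed to be bound to the XForms namespace, and the string-length check merely stands in for whatever the real constraint is):

        <xf:bind nodeset="information"
                 required="false()"
                 constraint=". = '' or string-length(.) &lt;= 80"/>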

    Read the article

  • Why can't decimal numbers be represented exactly in binary?

    - by Barry Brown
    There have been several questions posted to SO about floating-point representation. For example, the decimal number 0.1 doesn't have an exact binary representation, so it's dangerous to use the == operator to compare it to another floating-point number. I understand the principles behind floating-point representation. What I don't understand is why, from a mathematical perspective, the numbers to the right of the decimal point are any more "special" than the ones to the left. For example, the number 61.0 has an exact binary representation because the integral portion of any number is always exact. But the number 6.10 is not exact. All I did was move the decimal one place and suddenly I've gone from Exactopia to Inexactville. Mathematically, there should be no intrinsic difference between the two numbers -- they're just numbers. By contrast, if I move the decimal one place in the other direction to produce the number 610, I'm still in Exactopia. I can keep going in that direction (6100, 610000000, 610000000000000) and they're still exact, exact, exact. But as soon as the decimal crosses some threshold, the numbers are no longer exact. What's going on?

    Edit: to clarify, I want to stay away from discussion about industry-standard representations, such as IEEE, and stick with what I believe is the mathematically "pure" way. In base 10, the positional values are: ... 1000 100 10 1 1/10 1/100 ... In binary, they would be: ... 8 4 2 1 1/2 1/4 1/8 ... There are also no arbitrary limits placed on these numbers. The positions increase indefinitely to the left and to the right.
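
    A worked version of the usual answer, kept in the same base-agnostic spirit: a fraction in lowest terms has a terminating expansion in base b exactly when every prime factor of its denominator divides b, which is why moving the point left keeps you in Exactopia while moving it right may not:

        \[
          6.1 = \tfrac{61}{10}, \qquad 10 = 2 \cdot 5 .
        \]
        \[
          \tfrac{p}{q} \text{ (lowest terms) terminates in base } b
          \iff \text{every prime factor of } q \text{ divides } b .
        \]
        \[
          5 \nmid 2 \;\Rightarrow\; \tfrac{1}{10} \text{ repeats in binary: }
          0.1_{10} = 0.0\overline{0011}_{2},
          \qquad \text{while } 610 = 2 \cdot 5 \cdot 61 \text{ is an integer, exact in any base.}
        \]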

    Read the article

  • BasicDBObject or QueryBuilder, and some newbie questions about Java and MongoDB

    - by Kevin Xu
    Hi, I'm a fresh newbie to MongoDB.

    Q1: Between

        query = new BasicDBObject();
        query.put("i", new BasicDBObject("$gt", 13));

    and

        query = new QueryBuilder().put("i").greaterThan(13).get();

    is there any difference inside the system?

    Q2: I've created a class

        class findkv extends BasicDBObject {
            // op is gt gte lt lte
            public findkv(String fieldname, String op, Object tvalue) {
                if (op == "")
                    this.put(fieldname, tvalue);
                else
                    this.put(fieldname, new BasicDBObject(op, tvalue));
            }
        }

    Shall I use it, or shall I just use the original functions?

    Q3: I've used the mongo shell for a few weeks and am used to it; I find writing in the mongo shell faster and shorter. Which has more advantages, writing in the shell or in Java? I need to dump the data from MongoDB to MySQL.

    Q4: An "if (statement==true) return else dowhat;" doesn't seem to compile. I know I can write "if (statement!=true) dowhat else return", but can I still write it in the first style?

    Q5: My Eclipse is "Eclipse Java EE IDE for Web Developers, Version: Juno Release, Build id: 20120614-1722". I'd like to install Perl support (I haven't learned Perl yet). I chose to install it from the update site http://e-p-i-c.sf.net/updates/testing, but it doesn't work. Is there a way to install Perl support in Eclipse manually?
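
    On Q1, assuming the standard com.mongodb Java driver classes, both forms should end up as the same BSON query document, so the choice is mostly about readability; a sketch:

        // assuming: import com.mongodb.BasicDBObject; import com.mongodb.DBObject; import com.mongodb.QueryBuilder;
        DBObject q1 = new BasicDBObject("i", new BasicDBObject("$gt", 13));
        DBObject q2 = QueryBuilder.start("i").greaterThan(13).get();
        // both render the same BSON: { "i" : { "$gt" : 13 } }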

    Read the article

  • What is different about C++ math.h abs() compared to my abs()

    - by moka
    I am currently writing some GLSL-like vector math classes in C++, and I just implemented an abs() function like this:

        template<class T>
        static inline T abs(T _a) {
            return _a < 0 ? -_a : _a;
        }

    I compared its speed to the default C++ abs from math.h like this:

        clock_t begin = clock();
        for(int i = 0; i < 10000000; ++i) {
            float a = abs(-1.25);
        }
        clock_t end = clock();
        unsigned long time1 = (unsigned long)((float)(end - begin) / ((float)CLOCKS_PER_SEC / 1000.0));

        begin = clock();
        for(int i = 0; i < 10000000; ++i) {
            float a = myMath::abs(-1.25);
        }
        end = clock();
        unsigned long time2 = (unsigned long)((float)(end - begin) / ((float)CLOCKS_PER_SEC / 1000.0));

        std::cout << time1 << std::endl;
        std::cout << time2 << std::endl;

    Now the default abs takes about 25ms while mine takes 60. I guess there is some low-level optimisation going on. Does anybody know how math.h abs works internally? The performance difference is nothing dramatic, but I am just curious!
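
    One plausible part of the answer (a sketch, not the actual libm source): for floating-point types the library or compiler typically reduces fabs to clearing the IEEE-754 sign bit, often a single instruction, rather than branching the way the template above does. Note too that with a constant argument like -1.25 the compiler may fold one of the two calls away entirely, which can skew a micro-benchmark. The branch-free trick, with an invented name:

        #include <cstdint>
        #include <cstring>

        // Branch-free absolute value for double by masking off the sign bit.
        static inline double fabs_bits(double x) {
            std::uint64_t u;
            std::memcpy(&u, &x, sizeof u);      // bit-copy avoids aliasing issues
            u &= ~(std::uint64_t(1) << 63);     // clear the sign bit
            std::memcpy(&x, &u, sizeof u);
            return x;
        }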

    Read the article

  • Help a CRUD programmer think about an "approval workflow"

    - by gerdemb
    I've been working on a web application that is basically a CRUD application (Create, Read, Update, Delete). Recently, I've started working on what I'm calling an "approval workflow". Basically, a request is generated for a material and then sent for approval to a manager. Depending on what is requested, different people need to approve the request or perhaps send it back to the requester for modification. The approvers need to keep track of what to approve and what has been approved, and the requesters need to see the status of their requests. As a "CRUD" developer, I'm having a hard time wrapping my head around how to design this. What database tables should I have? How do I keep track of the state of the request? How should I notify users of actions that have happened to their requests? Is there a design pattern that could help me with this? Should I be drawing state machines in my code? I think this is a generic programming question, but if it makes any difference I'm using Django with MySQL.
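
    Since the question asks specifically about tables and state, one common shape for this (just a sketch with invented names, not the design) is a request table with an explicit status column plus one row per approval step, so the state machine lives in data rather than in scattered flags:

        from django.db import models
        from django.contrib.auth.models import User

        class MaterialRequest(models.Model):
            STATUS_CHOICES = (
                ('pending',  'Pending approval'),
                ('changes',  'Returned for changes'),
                ('approved', 'Approved'),
                ('rejected', 'Rejected'),
            )
            requester = models.ForeignKey(User, related_name='requests')
            material  = models.CharField(max_length=200)
            status    = models.CharField(max_length=20, choices=STATUS_CHOICES, default='pending')
            created   = models.DateTimeField(auto_now_add=True)

        class ApprovalStep(models.Model):
            request  = models.ForeignKey(MaterialRequest, related_name='steps')
            approver = models.ForeignKey(User, related_name='approvals')
            decision = models.CharField(max_length=20, blank=True)   # approve / return / reject
            decided  = models.DateTimeField(null=True, blank=True)
            comment  = models.TextField(blank=True)

    Notifications then fall out naturally: whenever a row's status or decision changes, email the requester or the next approver.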

    Read the article

  • Emptying the datastore in GAE

    - by colwilson
    I know what you're thinking, 'O not that again!', but here we are, since Google has not yet provided a simpler method. I have been using a queue-based solution which worked fine:

        import datetime
        import logging
        from google.appengine.ext import db, deferred
        from models import *

        DELETABLE_MODELS = [Alpha, Beta, AlphaBeta]

        def initiate_purge():
            for e in DELETABLE_MODELS:
                deferred.defer(delete_entities, e, 'purging', _queue = 'purging')

        class NotEmptyException(Exception):
            pass

        def delete_entities(e, queue):
            try:
                q = e.all(keys_only=True)
                db.delete(q.fetch(200))
                ct = q.count(1)
                if ct > 0:
                    raise NotEmptyException('there are still entities to be deleted')
                else:
                    logging.info('processing %s completed' % queue)
            except Exception, err:
                deferred.defer(delete_entities, e, queue, _queue = queue)
                logging.info('processing %s deferred: %s' % (queue, err))

    All this does is queue a request to delete some data (once for each class) and then, if the queued process either fails or knows there is still some stuff to delete, it re-queues itself. This beats the heck out of hitting refresh in a browser for 10 minutes. However, I'm having trouble deleting AlphaBeta entities; there are always a few left at the end. I think it is because that model contains reference properties:

        class AlphaBeta(db.Model):
            alpha = db.ReferenceProperty(Alpha, required=True, collection_name='betas')
            beta = db.ReferenceProperty(Beta, required=True, collection_name='alphas')

    I have tried deleting the indexes relating to these entity types, but that did not make any difference. Any advice would be appreciated, please.

    Read the article

  • Difference between BackgroundWorker.ReportProgress() and Control.Invoke()

    - by ohadsc
    What is the difference between options 1 and 2 in the following?

        private void BGW_DoWork(object sender, DoWorkEventArgs e)
        {
            for (int i = 1; i <= 100; i++)
            {
                string txt = i.ToString();
                if (Test_Check.Checked)
                    //OPTION 1
                    Test_BackgroundWorker.ReportProgress(i, txt);
                else
                    //OPTION 2
                    this.Invoke((Action<int, string>)UpdateGUI, new object[] {i, txt});
            }
        }

        private void BGW_ProgressChanged(object sender, ProgressChangedEventArgs e)
        {
            UpdateGUI(e.ProgressPercentage, (string)e.UserState);
        }

        private void UpdateGUI(int percent, string txt)
        {
            Test_ProgressBar.Value = percent;
            Test_RichTextBox.AppendText(txt + Environment.NewLine);
        }

    Looking at Reflector, Control.Invoke() appears to use:

        this.FindMarshalingControl().MarshaledInvoke(this, method, args, 1);

    whereas BackgroundWorker.ReportProgress() appears to use:

        this.asyncOperation.Post(this.progressReporter, args);

    (I'm just guessing these are the relevant function calls.) If I understand correctly, BGW posts its progress report request to the WinForms window, whereas Control.Invoke uses a CLR mechanism to invoke on the right thread. Am I close? And if so, what are the repercussions of using either? Thanks
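
    One practical difference worth knowing (a sketch of the usual distinction, using the question's own UpdateGUI): ReportProgress, like Control.BeginInvoke, is an asynchronous post to the UI thread's synchronization context, while Control.Invoke blocks the worker thread until the UI thread has finished running the delegate:

        // Asynchronous: returns immediately, UpdateGUI runs later on the UI thread.
        Test_BackgroundWorker.ReportProgress(i, txt);               // requires WorkerReportsProgress = true
        this.BeginInvoke((Action<int, string>)UpdateGUI, i, txt);

        // Synchronous: the worker waits here until the UI thread has run UpdateGUI.
        this.Invoke((Action<int, string>)UpdateGUI, i, txt);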

    Read the article

  • Wrapper Classes for Backward compatibility in Java

    - by Casebash
    There is an interesting article here on maintaining backwards compatibility in Java. In the wrapper class section, I can't actually understand what the wrapper class accomplishes. In the following code from MyApp, WrapNewClass.checkAvailable() could be replaced by Class.forName("NewClass"):

        static {
            try {
                WrapNewClass.checkAvailable();
                mNewClassAvailable = true;
            } catch (Throwable ex) {
                mNewClassAvailable = false;
            }
        }

    Consider when NewClass is unavailable. In the code where we use the wrapper (see below), all we have done is replace a class that doesn't exist with one that exists, but which can't be compiled as it uses a class that doesn't exist.

        public void diddle() {
            if (mNewClassAvailable) {
                WrapNewClass.setGlobalDiv(4);
                WrapNewClass wnc = new WrapNewClass(40);
                System.out.println("newer API is available - " + wnc.doStuff(10));
            } else {
                System.out.println("newer API not available");
            }
        }

    Can anyone explain why this makes a difference? I assume it has something to do with how Java compiles code - which I don't know much about.
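
    For orientation, a sketch of what the wrapper presumably looks like (method names taken from the snippet above, the rest invented): every direct reference to NewClass lives inside WrapNewClass, so MyApp itself never mentions NewClass and can always be loaded and verified; on platforms that verify classes eagerly, the NoClassDefFoundError is raised only when WrapNewClass is first touched, which happens inside the try/catch.

        // WrapNewClass.java -- the only class compiled against NewClass directly.
        public class WrapNewClass {
            private final NewClass mInstance;

            /* Calling this forces WrapNewClass to be loaded; if NewClass is missing,
               that load fails and the caller's catch(Throwable) swallows the error. */
            public static void checkAvailable() { }

            public WrapNewClass(int arg)           { mInstance = new NewClass(arg); }
            public int doStuff(int x)              { return mInstance.doStuff(x); }
            public static void setGlobalDiv(int d) { NewClass.setGlobalDiv(d); }
        }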

    Read the article

  • pure/const functions in C++

    - by Albert
    Hi, I'm thinking of using pure/const functions more heavily in my C++ code (the pure/const attributes in GCC). However, I am curious how strict I should be about it and what could possibly break. The most obvious case is debug output (in whatever form: it could go to cout, to some file, or through some custom debug class). I will probably have a lot of functions which don't have any side effects apart from this sort of debug output. Whether or not the debug output is made has absolutely no effect on the rest of my application. Another case I'm thinking of is the use of my own SmartPointer class. In debug mode, my SmartPointer class has a global register where it does some extra checks. If I use such an object in a pure/const function, it does have some slight side effects (in the sense that some memory will probably be different) which should not have any real side effects (in the sense that the behaviour is in no way different). The same goes for mutexes and other things. I can think of many complex cases where there are side effects in the sense that some memory will be different, maybe even some threads are created or some filesystem manipulation is made, but there is no computational difference: all those side effects could very well be left out, and I would even prefer that. How does it work out in practice? If I mark such functions as pure/const, could it break anything (assuming the code is otherwise correct)?
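
    For concreteness, a sketch of the GCC syntax and of why the debug output is exactly what is at risk (the function names are illustrative only): the attributes license the compiler to merge, reorder, or drop calls whose results are unused, so a call kept around only for its logging may silently disappear.

        // Declarations only -- the definitions live elsewhere.
        int checksum(const char *buf, int len) __attribute__((pure));  // may read memory, no writes
        int clamp(int v, int lo, int hi)       __attribute__((const)); // depends on arguments only

        void demo(const char *buf, int len) {
            clamp(len, 0, 100);              // result unused: a 'const' call like this may be removed
            int a = checksum(buf, len);
            int b = checksum(buf, len);      // may be folded into a single call (a == b assumed)
            (void)a; (void)b;
        }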

    Read the article

  • leak in fgets when assigning to buffer

    - by monkeyking
    I'm having problems understanding why the following code leaks in one case and not in the other. The difference is

        while(NULL != fgets(buffer, length, file))            // doesn't leak
        while(NULL != (buffer = fgets(buffer, length, file))) // leaks

    I thought they would be the same. Full code below.

        #include <stdio.h>
        #include <stdlib.h>

        #define LENS 10000

        void no_leak(const char* argv){
            char *buffer = (char *) malloc(LENS);
            FILE *fp=fopen(argv,"r");
            while(NULL!=fgets(buffer,LENS,fp)){
                fprintf(stderr,"%s",buffer);
            }
            fclose(fp);
            fprintf(stderr,"%s\n",buffer);
            free(buffer);
        }

        void with_leak(const char* argv){
            char *buffer = (char *) malloc(LENS);
            FILE *fp=fopen(argv,"r");
            while(NULL!=(buffer=fgets(buffer,LENS,fp))){
                fprintf(stderr,"%s",buffer);
            }
            fclose(fp);
            fprintf(stderr,"%s\n",buffer);
            free(buffer);
        }
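
    The observation that makes the difference visible (a sketch of the reasoning, not a full answer): fgets returns NULL at end-of-file, so in the assigning version the final iteration overwrites buffer with NULL; free(buffer) then frees a null pointer and the block from malloc is never released. Keeping the return value in a separate variable preserves the original pointer:

        char *ret;
        while (NULL != (ret = fgets(buffer, LENS, fp))) {  /* buffer keeps pointing at the malloc'd block */
            fprintf(stderr, "%s", buffer);
        }
        free(buffer);                                      /* still the original pointer, so no leak */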

    Read the article

  • Resque: Slow worker startup and Forking

    - by David John
    I'm currently moving my application from a Linode setup to EC2. Redis is currently installed on a remote instance, with various worker instances interacting with the queue. That's all going fantastically. My problem is the amount of time it takes for a worker to be 'instantiated', and slow forking. Starting a worker will usually take between 30 seconds and a minute (from god.rb starting the worker rake task to the worker actively starting work on the queue). I could live with that, but I've not experienced such a wait time on my current Linode production box, so I believe it's a symptom of a bigger problem. The next issue is that jobs that took a second or less in my previous environment now seem to take about 5 to 10 times longer. I'm assuming this must be some sort of issue with my Ubuntu install on EC2? One notable difference is that I'm running REE 1.8.7-2010.01 in my new setup, and REE 1.8.6 on the old Linode boxes. Anyone else experienced these issues?

    Read the article

  • c# "==" operator : compiler behaviour with different structs

    - by Moe Sisko
    Code to illustrate:

        public struct MyStruct
        {
            public int SomeNumber;
        }

        public string DoSomethingWithMyStruct(MyStruct s)
        {
            if (s == null) return "this can't happen";
            else return "ok";
        }

        private string DoSomethingWithDateTime(DateTime s)
        {
            if (s == null) return "this can't happen"; // XX
            else return "ok";
        }

    Now, DoSomethingWithMyStruct fails to compile with: "Operator '==' cannot be applied to operands of type 'MyStruct' and '<null>'". This makes sense, since it doesn't make sense to attempt a reference comparison with a struct, which is a value type. On the other hand, DoSomethingWithDateTime compiles, but with the compiler warning "Unreachable code detected" at the line marked XX. Now, I'm assuming that there is no compiler error here because the DateTime struct overloads the == operator. But how does the compiler know that the code is unreachable? For example, does it look inside the code which overloads the == operator? (This is using Visual Studio 2005, in case that makes a difference.) Note: I'm more curious than anything about the above. I don't usually try to use == on structs and nulls.
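
    A sketch of the usual explanation (worth checking against the C# spec section on lifted operators): the compiler does not look inside the user-defined operator. Because DateTime overloads ==, comparing a DateTime with the null literal is compiled as the lifted comparison over Nullable<DateTime>; since s is statically known to be non-nullable, the comparison can never be true, hence the unreachable branch.

        DateTime s = DateTime.Now;

        // What "s == null" effectively becomes once the operator is lifted:
        bool never = (DateTime?)s == (DateTime?)null;   // statically always false for a non-nullable s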

    Read the article

  • Fails proceeding after POSTing to web server

    - by OverTheRainbow
    Hello. According to this question, it seems the error "Too many automatic redirections were attempted" is caused by forgetting to use a CookieContainer when connecting to a web server that uses cookies to keep track of the user. However, even though I used "request.CookieContainer = MyCookieContainer", I'm still getting into an endless loop that is terminated by VB Express with this error message.

        Imports System.IO
        Imports System.Net
        'Remember to add reference to System.Web DLL
        Imports System.Web
        Imports System.Text

        Public Class Form1
            Const ConnectURL = "http://www.acme.com/logon.php"

            Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
                Dim request As HttpWebRequest = WebRequest.Create(ConnectURL)

                'Build POST data
                request.Method = "POST"
                request.ContentType = "application/x-www-form-urlencoded"
                Dim Data As New StringBuilder
                Data.Append("Account=" + HttpUtility.UrlEncode("jdoe"))
                Data.Append("&Password=" + HttpUtility.UrlEncode("test"))
                Dim byteData() As Byte
                byteData = UTF8Encoding.UTF8.GetBytes(Data.ToString())
                request.ContentLength = byteData.Length

                Dim postStream As Stream = Nothing
                Try
                    postStream = request.GetRequestStream()
                    postStream.Write(byteData, 0, byteData.Length)
                Finally
                    If Not postStream Is Nothing Then postStream.Close()
                End Try

                'Dim MyCookieContainer As New CookieContainer
                Dim MyCookieContainer As CookieContainer = New CookieContainer()
                request.CookieContainer = MyCookieContainer

                'Makes no difference
                'request.KeepAlive = True
                'request.AllowAutoRedirect = True

                Dim response As HttpWebResponse
                'HERE
                '"Too many automatic redirections were attempted"
                response = request.GetResponse()
                Dim reader As StreamReader = New StreamReader(response.GetResponseStream())
                RichTextBox1.Text = reader.ReadToEnd
            End Sub
        End Class

    This is probably a newbie issue, but I don't know what else to try. Any idea? Thank you for any hint.
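
    One detail that stands out in the listing (offered as something to try, not a confirmed diagnosis): the CookieContainer is attached after GetRequestStream() has already started the request, so the session cookie from the logon response may never be captured and every redirect then looks like a fresh, unauthenticated visit. Attaching the container immediately after creating the request would rule that out:

        Dim request As HttpWebRequest = CType(WebRequest.Create(ConnectURL), HttpWebRequest)
        request.CookieContainer = New CookieContainer() ' attach before anything is sent
        request.Method = "POST"
        request.ContentType = "application/x-www-form-urlencoded"
        ' then build and write the POST body as before, and finally call GetResponse()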

    Read the article

  • Visual studio erroneous errors when building a website?

    - by Curtis White
    Visual Studio 2008 shows a lot of erroneous errors in the error list when building a website (not a web project). These errors are usually corrected (removed) when I rebuild the site a couple of times, but they cost me time. Is there any way to hide the erroneous errors? Update: I've decided to look into this to see if I could reproduce it. This is the exact behavior I am seeing: using the website model, I type some invalid syntax on a page. The error list fills up with errors. I correct the error, and the error list does not update. I build the project and the error list still shows the errors, but the build reports that it completed. I build the project a second time and the error list is cleared. My question is: is there any way to make the error list clear on the first build? I thought it might have something to do with page build vs. website build, but it seems to make no difference. I am not using any third-party DLLs on this website.

    Read the article

  • Practice of checking 'trueness' or 'equality' in conditional statements - does it really make sense?

    - by Senthil
    I remember many years back, when I was in school, one of my computer science teachers taught us that it was better to check for 'trueness' or 'equality' of a condition and not the negative stuff like 'inequality'. Let me elaborate: if a piece of conditional code can be written by checking whether an expression is true or false, we should check the 'trueness'. Example: finding out whether a number is odd. It can be done in two ways:

        if ( num % 2 != 0 )
        {
            // Number is odd
        }

    or

        if ( num % 2 == 1 )
        {
            // Number is odd
        }

    When I was beginning to code, I knew that num % 2 == 0 implies the number is even, so I just put a ! there to check if it is odd. But he was like "Don't check NOT conditions. Have the practice of checking the 'trueness' or 'equality' of conditions whenever possible." And he recommended that I use the second piece of code. I am not for or against either, but I just wanted to know - what difference does it make? Please don't reply "Technically the output will be the same" - we ALL know that. Is it a general programming practice, or is it his own programming practice that he is preaching to others?

    Read the article

  • C# / IronPython Interop with shared C# Class Library

    - by Adam Haile
    I'm trying to use IronPython as an intermediary between a C# GUI and some C# libraries, so that it can be scripted post-compile-time. I have a class library DLL that is used by both the GUI and the Python code and is something along the lines of this:

        namespace MyLib
        {
            public class MyClass
            {
                public string Name { get; set; }

                public MyClass(string name)
                {
                    this.Name = name;
                }
            }
        }

    The IronPython code is as follows:

        import clr
        clr.AddReferenceToFile(r"MyLib.dll")
        from MyLib import MyClass

        ReturnObject = MyClass("Test")

    Then, in C#, I would call it as follows:

        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();
        ScriptSource source = engine.CreateScriptSourceFromFile("Script.py");
        source.Execute(scope);
        MyClass mc = scope.GetVariable<MyClass>("ReturnObject");

    When I call this last bit of code, source.Execute(scope) returns successfully, but the GetVariable call throws the following exception:

        Microsoft.Scripting.ArgumentTypeException: expected MyClass, got MyClass

    So, you can see that the class names are exactly the same, but for some reason it thinks they are different. The DLL is in a different directory than the .py file (I just didn't bother to write out all the path setup stuff), so could it be that the IronPython interpreter sees these objects as different because it's somehow loading them in a different context or scope?
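
    A commonly suggested way to rule out the "two copies of the same assembly" explanation (a sketch; it assumes the host and the script really should share one MyLib) is to hand the engine the assembly the host has already loaded, instead of letting the script load its own copy from disk:

        ScriptEngine engine = Python.CreateEngine();
        // Make the already-loaded MyLib visible to IronPython, so both sides
        // share a single System.Type for MyClass.
        engine.Runtime.LoadAssembly(typeof(MyLib.MyClass).Assembly);

        ScriptScope scope = engine.CreateScope();
        engine.CreateScriptSourceFromFile("Script.py").Execute(scope);
        MyClass mc = scope.GetVariable<MyClass>("ReturnObject");

    The script would then keep the "from MyLib import MyClass" line but drop the clr.AddReferenceToFile call, which is what pulls in the second copy.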

    Read the article

  • What use does the JavaScript forEach method have (that map can't do)?

    - by JohnMerlino
    Hey all, the only difference I see between map and forEach is that map returns an array and forEach does not. However, I don't even understand the last line of the forEach method, "func.call(scope, this[i], i, this);". For example, aren't "this" and "scope" referring to the same object, and aren't this[i] and i referring to the current value in the loop? I noticed in another post someone said "Use forEach when you want to do something on the basis of each element of the list. You might be adding things to the page, for example. Essentially, it's great for when you want 'side effects'." I don't know what is meant by side effects.

        Array.prototype.map = function(fnc) {
            var a = new Array(this.length);
            for (var i = 0; i < this.length; i++) {
                a[i] = fnc(this[i]);
            }
            return a;
        }

        Array.prototype.forEach = function(func, scope) {
            scope = scope || this;
            for (var i = 0, l = this.length; i < l; i++)
                func.call(scope, this[i], i, this);
        }

    Finally, are there any real uses for these methods in JavaScript (since we aren't updating a database) other than to manipulate numbers like this:

        alert([1,2,3,4].map(function(x){ return x + 1; })); // this is the only example I ever see of map in JavaScript

    Thanks for any reply.
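
    A small illustration of the distinction the quoted advice is making (the price-list element is invented for the example): map is for computing a new array from the old one, while forEach is for doing something with each element, the "side effect" being any change outside the callback's return value, such as touching the DOM:

        var prices = [1, 2, 3, 4];

        // map: build a new array out of the return values
        var withTax = prices.map(function (p) { return p * 1.2; });

        // forEach: the return value is ignored; we only care about the effect
        prices.forEach(function (p, i) {
            var li = document.createElement('li');
            li.textContent = 'Item ' + (i + 1) + ': $' + p.toFixed(2);
            document.getElementById('price-list').appendChild(li);   // the "side effect"
        });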

    Read the article

  • Need help with strange Class#getResource() issue

    - by Andreas_D
    I have some legacy code that reads a configuration file from an existing jar, like:

        URL url = SomeClass.class.getResource("/configuration.properties");
        // some more code here using the url variable
        InputStream in = url.openStream();

    Obviously it worked before, but when I execute this code, the URL is valid yet I get an IOException on the third line saying it can't find the file. The URL is something like "file:jar:c:/path/to/jar/somejar.jar!configuration.properties", so it doesn't look like a classpath issue - Java knows pretty well where the file can be found. The above code is part of an Ant task, and it fails while the task is executed. Strangely enough, I copied the code and the jar file into a separate class and it works as expected; the properties file is readable. At some point I changed the code of the Ant task to

        URL url = SomeClass.class.getResource("/configuration.properties");
        // some more code here using the url variable
        InputStream in = SomeClass.class.getResourceAsStream("/configuration.properties");

    and now it works - just until it crashes in another class where a similar access pattern is implemented. Why could it have worked before, and why does it fail now? The only difference I see at the moment is that the old build was done with Java 1.4 while I'm trying it with Java 6 now.

    Workaround: Today I installed Java 1.4.2_19 on the build server and made Ant use it. To my totally frustrating surprise, the problem is gone. It looks to me as if Java 1.4.2 can handle URLs of this type while Java 1.6 can't (at least in my context/environment). I'm still hoping for an explanation, although I'm facing the work of rewriting parts of the code to use Class#getResourceAsStream, which behaves much more stably...
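
    For what it's worth, a well-formed jar resource URL normally has the form jar:file:/path/to/somejar.jar!/configuration.properties (jar: first, and !/ before the entry name), which makes the file:jar:...! form in the failing case look suspicious, and loading via the stream sidesteps URL handling altogether. A sketch of the stream-based version, assuming the usual java.io and java.util imports:

        InputStream in = SomeClass.class.getResourceAsStream("/configuration.properties");
        if (in == null) {
            throw new IllegalStateException("configuration.properties not found on the classpath");
        }
        Properties props = new Properties();
        try {
            props.load(in);     // no jar: URL parsing involved
        } finally {
            in.close();
        }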

    Read the article

  • ASP.NET AJAX weirdness

    - by LoveMeSomeCode
    Ok, I thought I understood these topics well, but I guess not, so hopefully someone here can clear this up. Page.IsAsync seems to be broken: it always returns false. But ScriptManager.IsInAsyncPostBack seems to work, sort of. It returns true during the round trip for controls inside UpdatePanels. This is good; I can tell whether it's a partial postback or a regular one. ScriptManager.IsInAsyncPostBack returns false, however, for async page methods. Why is this? It's not a regular postback; I'm just calling a public static method on the page. It causes a problem because I also realized that if you have a control with AutoPostBack = false, it won't trigger a postback on its own, but if it has an event handler on the page, that event handler code WILL run on the next postback, regardless of how the postback occurred, IF the value has changed. I.e., if I tweak a dropdown and then hit a button, that dropdown's handler code will fire. This is OK, except that it will also happen during page method calls, and I have no way to know the difference. Any thoughts?
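
    For context, a sketch of what a page method is (GetStatus is an invented name; this assumes a ScriptManager with EnablePageMethods="true"): it is a static [WebMethod] invoked directly over XHR as a lightweight web-service style call rather than a normal postback, which is at least consistent with IsInAsyncPostBack reporting false for it:

        // Code-behind: called from script as PageMethods.GetStatus(id, onSuccess)
        [System.Web.Services.WebMethod]
        public static string GetStatus(int id)
        {
            // The method is static: there is no page instance here, so view state,
            // control events, and the usual postback flags are not in play.
            return "status for " + id;
        }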

    Read the article
