Search Results

Search found 6001 results on 241 pages for 'requires'.


  • Python: Best way to check for Python version in program that uses new language features?

    - by Mark Harrison
    If I have a python script that requires at least a particular version of python, what is the correct way to fail gracefully when an earlier version of python is used to launch the script? How do I get control early enough to issue an error message and exit?

    For example, I have a program that uses the ternary operator (new in 2.5) and "with" blocks (new in 2.6). I wrote a simple little interpreter-version checker routine which is the first thing the script would call ... except it doesn't get that far. Instead, the script fails during python compilation, before my routines are even called. Thus the user of the script sees some very obscure syntax error tracebacks - which pretty much require an expert to deduce that it is simply the case of running the wrong version of python.

    Update: I know how to check the version of python. The issue is that some syntax is illegal in older versions of python. Consider this program:

        import sys
        if sys.version_info < (2, 4):
            raise "must use python 2.5 or greater"
        else:
            # syntax error in 2.4, ok in 2.5
            x = 1 if True else 2
            print x

    When run under 2.4, I want this result:

        $ ~/bin/python2.4 tern.py
        must use python 2.5 or greater

    and not this result:

        $ ~/bin/python2.4 tern.py
          File "tern.py", line 5
            x = 1 if True else 2
                  ^
        SyntaxError: invalid syntax

    (channeling for a coworker)
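    The usual way to get control before the parser ever sees the new syntax is to put the version check in a front-end module that uses only old constructs, and import the feature-dependent code after the check passes. A minimal sketch, where tern_main is a hypothetical module holding the 2.5-only code:

        # run.py - contains only syntax every supported interpreter can parse
        import sys

        if sys.version_info < (2, 5):
            sys.stderr.write("must use python 2.5 or greater\n")
            sys.exit(1)

        # tern_main is only compiled here, after the check has already passed
        import tern_main
        tern_main.main()

    Because a module is byte-compiled only when it is first imported, the old interpreter never has to parse the new syntax.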

    Read the article

  • Problem with looping a video in Flash

    - by Blaze
    I am trying to loop a video and I am having some issues with this in flash. You can view the video here: http://www.healthcarepros.net/travel.html

    Here is the specific code for the flash video:

        <script language="javascript">
            if (AC_FL_RunContent == 0) {
                alert("This page requires AC_RunActiveContent.js.");
            } else {
                AC_FL_RunContent(
                    'codebase', 'http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,0,0',
                    'width', '330', 'height', '245',
                    'src', 'healthcare-video', 'quality', 'high',
                    'pluginspage', 'http://www.macromedia.com/go/getflashplayer',
                    'align', 'middle', 'play', 'true', 'loop', 'true',
                    'scale', 'showall', 'wmode', 'window', 'devicefont', 'false',
                    'id', 'healthcare-video', 'bgcolor', '#ffffff',
                    'name', 'healthcare-video', 'menu', 'true',
                    'allowFullScreen', 'false', 'allowScriptAccess', 'sameDomain',
                    'movie', 'healthcare-video', 'salign', ''
                ); //end AC code
            }
        </script>
        <noscript>
            <object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000"
                    codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,0,0"
                    width="330" height="245" id="healthcare-video" align="middle">
                <param name="allowScriptAccess" value="sameDomain" />
                <param name="allowFullScreen" value="false" />
                <param name="loop" value="true" />
                <param name="play" value="true" />
                <param name="movie" value="healthcare-video.swf" />
                <param name="quality" value="high" />
                <param name="bgcolor" value="#ffffff" />
                <embed src="healthcare-video.swf" play="true" flashvars="autoplay=true&play=true"
                       quality="high" bgcolor="#ffffff" width="330" height="245"
                       name="healthcare-video" align="middle" allowScriptAccess="sameDomain"
                       allowFullScreen="false" type="application/x-shockwave-flash"
                       pluginspage="http://www.macromedia.com/go/getflashplayer" />
            </object>
        </noscript>

    Additionally, I have added in the parameter code that calls the loop function but for some reason it still doesn't seem to work, any suggestions?
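    One thing worth knowing about the loop parameter: it loops the SWF's own timeline, so if the video is loaded at runtime (for example through an FLVPlayback component) the player has to be told to restart from script. A sketch in ActionScript 3, assuming an FLVPlayback instance named player - the instance name is hypothetical:

        import fl.video.FLVPlayback;
        import fl.video.VideoEvent;

        // restart the clip whenever playback reaches the end
        player.addEventListener(VideoEvent.COMPLETE, onComplete);

        function onComplete(e:VideoEvent):void {
            player.play();
        }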

    Read the article

  • git doesn't show where code was removed.

    - by Andrew Myers
    So I was tasked with replacing some dummy code that our project requires for historical compatibility reasons but that has mysteriously dropped out sometime since the last release. Since disappearing code makes me nervous about what else might have gone missing but unnoticed, I've been digging through the logs trying to find in what commit this handful of lines was removed.

    I've tried a number of things, including git log -S'add-visit-resource-pcf', git blame, and even git bisect with a script that simply checks for the existence of the line, but have been unable to pinpoint exactly where these lines were removed. I find this very perplexing, particularly since the last log entry (obtained by the above command) before my re-introduction of this code was someone else adding the code as well.

        commit 0b0556fa87ff80d0ffcc2b451cca1581289bbc3c
        Author: Andrew
        Date:   Thu May 13 10:55:32 2010 -0400

            Re-introduced add-visit-resource-pcf, see PR-65034.

        diff --git a/spike/hst/scheduler/defpackage.lisp b/spike/hst/scheduler/defpackage.lisp
        index f8e692d..a6f8d38 100644
        --- a/spike/hst/scheduler/defpackage.lisp
        +++ b/spike/hst/scheduler/defpackage.lisp
        @@ -115,6 +115,7 @@
             #:add-to-current-resource-pcf
             #:add-user-package-nickname
             #:add-value-criteria
        +    #:add-visit-resource-pcf
             #:add-window-to-gs-params
             #:adjust-derived-resources
             #:adjust-links-candidate-criteria-types

        commit 9fb10e25572c537076284a248be1fbf757c1a6e1
        Author: Bob
        Date:   Sun Jan 17 18:35:16 2010 -0500

            update-defpackage for Spike 33.1 Delivery

        diff --git a/spike/hst/scheduler/defpackage.lisp b/spike/hst/scheduler/defpackage.lisp
        index 983666d..47f1a9a 100644
        --- a/spike/hst/scheduler/defpackage.lisp
        +++ b/spike/hst/scheduler/defpackage.lisp
        @@ -118,6 +118,7 @@
             #:add-user-package-nickname
             #:add-value-criteria
             #:add-vars-from-proposal
        +    #:add-visit-resource-pcf
             #:add-window-to-gs-params
             #:adjust-derived-resources
             #:adjust-links-candidate-criteria-types

    This is from one of our package definition files, but the relevant source file shows something similar. Does anyone know what could be going on here, and how I could find the information I want? It's not really that important, but this kind of thing makes me a bit nervous.
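    A detail that often explains exactly this symptom: by default git log does not show diffs for merge commits at all, so the pickaxe never sees a line that disappeared during a conflict resolution, and git log -S reports the earlier add as the "last" change. Two variations worth trying:

        # search every branch, not just the ancestry of the current one
        git log --all -p -S'add-visit-resource-pcf' -- spike/hst/scheduler/defpackage.lisp

        # also diff merge commits against each parent (-m), which is where a
        # line dropped during conflict resolution actually goes missing
        git log --all -m -p -S'add-visit-resource-pcf' -- spike/hst/scheduler/defpackage.lisp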

    Read the article

  • database design suggestion needed

    - by JMSA
    I need to design a table for daily sales of pharmaceutical products. There are hundreds of types of products available {Name, Code}. Thousands of sales-persons are employed to sell those products {Name, Code}. They collect products from different depots {Name, Code}. They work in different Areas - Zones - Markets - Outlets, etc. {all have names and codes}. Each product has various types of prices {Production Price, Trade Price, Business Price, Discount Price, etc.}, and sales-persons are free to choose from those combinations to estimate the sales price.

    The problem is that daily sales require a huge amount of data-entry. Within a couple of years there may be gigabytes of data (if not terabytes). If I need to show daily, weekly, monthly, quarterly and yearly sales reports, there will be various types of SQL queries I shall need. This is my initial design:

        Product                     {ID, Code, Name, IsActive}
        ProductXYZPriceHistory      {ID, ProductID, Date, EffectDate, Price, IsCurrent}
        SalesPerson                 {ID, Code, Name, JoinDate, and so on..., IsActive}
        SalesPersonSalesAreaHistory {ID, SalesPersonID, SalesAreaID, IsCurrent}
        Depot                       {ID, Code, Name, IsActive}
        Outlet                      {ID, Code, Name, AreaID, IsActive}
        AreaHierarchy               {ID, Code, Name, ParentID, AreaLevel, IsActive}
        DailySales                  {ID, ProductID, SalesPersonID, OutletID, Date, PriceID, SalesPrice, Discount, etc...}

    Now, apart from indexing, how can I normalize my DailySales table to have a fine-grained design that I shall not need to change for years to come? Please show me a sample design of only the DailySales data-entry table (from which all types of reports would be queried) on the basis of the above information. I don't need detailed design advice; I just need advice regarding the DailySales table. Is there any way to break this particular table to achieve granularity?
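    For what it's worth, a sketch of the narrow, fixed-grain shape usually aimed for here - one row per product per salesperson per outlet per day, with everything else reachable through keys. Column names and types are assumptions, not a definitive design:

        CREATE TABLE DailySales (
            ID            BIGINT        NOT NULL PRIMARY KEY,
            SalesDate     DATE          NOT NULL,
            ProductID     INT           NOT NULL REFERENCES Product(ID),
            SalesPersonID INT           NOT NULL REFERENCES SalesPerson(ID),
            OutletID      INT           NOT NULL REFERENCES Outlet(ID),
            PriceID       INT           NOT NULL REFERENCES ProductXYZPriceHistory(ID),
            Quantity      INT           NOT NULL,
            UnitPrice     DECIMAL(12,2) NOT NULL,  -- copied from the price row at entry time
            Discount      DECIMAL(12,2) NOT NULL DEFAULT 0
        );

    Because the grain never changes, the weekly/monthly/quarterly/yearly reports are all rollups over SalesDate; growth is then handled by partitioning and indexing rather than restructuring.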

    Read the article

  • A way to specify a different host in an SSH tunnel from the host in use

    - by Tom
    I am trying to set up an SSH tunnel to access Beanstalk (to bypass an annoying proxy server). I can get this to work, but with one caveat: I have to map my Beanstalk host URL (username.svn.beanstalkapp.com) in my hosts file to 127.0.0.1 (and use the IP in place of the domain when setting up the tunnel).

    The reason (I think) is that I am creating the tunnel using the local SSH instance (on Snow Leopard), and if I use localhost or 127.0.0.1 when talking to Beanstalk, it rejects the authorisation credentials. I believe this is because Beanstalk uses the hostname specified in a request to determine which account the username/password combination should be checked against. If localhost is used, I think this information is missing (in some manner which Beanstalk requires) from the requests.

    At the moment I dig the IP for username.svn.beanstalkapp.com, map username.svn.beanstalkapp.com to 127.0.0.1 in my hosts file, then for the tunnel I use the command:

        ssh -L 8080:ip:443 -p 22 -l tom -N 127.0.0.1

    I can tell Subversion that the repo is located at:

        https://username.svn.beanstalkapp.com:8080/repo-name

    This uses my tunnel, and the username and password are accepted. So, my question is whether there is an option when setting up the SSH tunnel which would mean I wouldn't have to use my hosts file workaround?

    Read the article

  • Vim, LaTeX, word-wrapping, and version control

    - by Bkkbrad
    I'm writing a LaTeX document in vim, and I have it hard wrapping at 80 characters to make reading easier. However, this causes problems with tracking changes in version control. For example, inserting "Lorem ipsum" at the beginning of this text:

        1 Dolor sit amet, consectetur adipiscing elit. Phasellus bibendum lobortis lectus
        2 quis porta. Aenean vestibulum magna vel purus laoreet at molestie massa
        3 suscipit. Vestibulum vestibulum, mauris nec convallis ultrices, tellus sapien
        4 ullamcorper elit, dignissim consectetur justo tellus et nunc.

    results in:

        1 Lorum ipsum dolor sit amet, consectetur adipiscing elit. Phasellus bibendum
        2 lobortis lectus quis porta. Aenean vestibulum magna vel purus laoreet at
        3 molestie massa suscipit. Vestibulum vestibulum, mauris nec convallis ultrices,
        4 tellus sapien ullamcorper elit, dignissim consectetur justo tellus et nunc.

    When I review this change in git, it tells me that all the lines of the paragraph have changed because of the wrapping, even though only one semantic change has occurred. One way around this problem is to have every sentence on its own line. This looks the same in the rendered document, but the source now is harder to read, because each line has quite a different line length:

        1 Lorum ipsum dolor sit amet, consectetur adipiscing elit.
        2 Phasellus bibendum lobortis lectus quis porta.
        3 Aenean vestibulum magna vel purus laoreet at molestie massa suscipit.
        4 Vestibulum vestibulum, mauris nec convallis ultrices, tellus sapien ullamcorper elit, dignissim consectetur justo tellus et nunc.

    (If I soft wrap at 80, things still look bad, just in a different way.) Is it possible to have my text on disk with one newline per sentence, but display and edit it in vim as if the text of each paragraph was one long line, soft wrapped at 80 characters? I assume it requires some vim-foo rather than tweaking git or LaTeX.
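    Vim's soft wrapping gets most of the way there. A minimal sketch for a TeX ftplugin or modeline, assuming the file is stored one sentence per line:

        " never insert hard newlines while typing
        setlocal textwidth=0 wrapmargin=0 formatoptions-=t
        " soft-wrap long lines at word boundaries, for display only
        setlocal wrap linebreak nolist
        " navigate by display line rather than file line
        noremap <buffer> j gj
        noremap <buffer> k gk

    This wraps at the window edge rather than exactly at column 80, so a common companion trick is to keep the editing window (or a vertical split) sized to 80 columns.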

    Read the article

  • Using the Microsoft Ajax Minifier with Web Setup project & Source Control

    - by Rob
    I've just started investigating the Microsoft Ajax Minifier 4.0 for use with a Visual Studio 2008 Web Application I work on. It's proven easy enough to hook it into the .csproj file so it produces .min.js files for all scripts; however, I'm stumped as to how to integrate this with the Web Setup project & source control. Essentially what I want to do is have the resultant .min.js files included in the Web Setup project without having them included in source control, because:

    1. Having to check them out prior to the build executing is a pain (the minifier cannot modify them if they're not checked out).
    2. As they're created as a "build artifact", it just seems wrong to have them stored under source control.

    The only option I've managed to come across so far is to explicitly include the .min.js files in the Setup project by right-clicking on the Web Setup project and choosing "Add File", and then having the relevant folder hierarchy duplicated in "File System on Target Machine" so that I can force each file to the correct location. This is neither elegant nor simple/robust, as:

    1. It requires me to manually add every minified js file to the Web Setup project by hand.
    2. I must maintain a copy of the relevant directory structure in both the Web Application project and the Web Setup project.
    3. I must remember to add any new js file's minified version to the Web Setup project.

    Is there a better way of doing this?

    Read the article

  • Ruby and duck typing: design by contract impossible?

    - by davetron5000
    Method signature in Java:

        public List<String> getFilesIn(List<File> directories)

    A similar one in Ruby:

        def get_files_in(directories)

    In the case of Java, the type system gives me information about what the method expects and delivers. In Ruby's case, I have no clue what I'm supposed to pass in, or what I'll expect to receive. In Java, the object must formally implement the interface. In Ruby, the object being passed in must respond to whatever methods are called in the method defined here. This seems highly problematic:

    1. Even with 100% accurate, up-to-date documentation, the Ruby code has to essentially expose its implementation, breaking encapsulation. "OO purity" aside, this would seem to be a maintenance nightmare.
    2. The Ruby code gives me no clue what's being returned; I would have to essentially experiment, or read the code to find out what methods the returned object would respond to.

    Not looking to debate static typing vs duck typing, but looking to understand how you maintain a production system where you have almost no ability to design by contract.

    Update: No one has really addressed the exposure of a method's internal implementation via documentation that this approach requires. Since there are no interfaces, if I'm not expecting a particular type, don't I have to itemize every method I might call so that the caller knows what can be passed in? Or is this just an edge case that doesn't really come up?

    Read the article

  • Splitting EJBs and interfaces into separate module -- deployment fails

    - by Hank
    I'm having trouble following this guide to "extract" my interfaces and entities from my EAR so that I can use them from another web application. I use NetBeans 6.8 and Glassfish 3.0.1.

    - "Java Class Library" project: contains all the entities and interfaces.
    - "Java EE Application" project: the class library is added to the project and packaged into the EAR; contains the EJB implementations, MDBs, tests.
    - "Java Web Application" project: the class library is added to the project and packaged into the WAR; contains the REST interface.

    When I build and deploy the web application, all goes well. When I build the JEE application, I can see the jar file (interfaces, entities) being included. But when I try to deploy the EAR, Glassfish refuses it with a java.lang.NoClassDefFoundError:

        [#|2010-03-28T18:25:59.875+0200|WARNING|glassfishv3.0|javax.enterprise.system.tools.deployment.org.glassfish.deployment.common|_ThreadID=28;_ThreadName=Thread-1;|Error in annotation processing: java.lang.NoClassDefFoundError: mvs/core/StoreServiceLocal|#]

        [#|2010-03-28T18:25:59.876+0200|SEVERE|glassfishv3.0|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=28;_ThreadName=Thread-1;|Exception while deploying the app
        java.lang.IllegalArgumentException: Invalid ejb jar [CoreServer]: it contains zero ejb.
        Note:
        1. A valid ejb jar requires at least one session, entity (1.x/2.x style), or message-driven bean.
        2. EJB3+ entity beans (@Entity) are POJOs and please package them as library jar.
        3. If the jar file contains valid EJBs which are annotated with EJB component level annotations (@Stateless, @Stateful, @MessageDriven, @Singleton), please check server.log to see whether the annotations were processed properly.

    'mvs/core/StoreServiceLocal' is an interface which is defined in the library jar file. What am I doing wrong?
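    The first log line is the real story: annotation processing of the EJB jar failed because the interface class wasn't visible to it, and only then did the deployer conclude "zero ejb". One layout that makes a shared jar visible to every module is the EAR's library directory (lib/ by default in Java EE 5+) - a sketch using the question's own module name, with the library jar name assumed:

        CoreServer.ear
            lib/
                core-library.jar    <- entities + StoreServiceLocal and friends
            CoreServer.jar          <- the EJB module implementing those interfaces
            WebModule.war

    Jars under lib/ sit on the classpath of every module in the EAR, so the deployer can resolve mvs/core/StoreServiceLocal while scanning the EJB jar. In NetBeans terms, that usually means attaching the class library to the EAR project itself rather than only to the EJB module.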

    Read the article

  • Encryption puzzle / How to create a PassStub for a Remote Assistance ticket

    - by Jon Clegg
    I am trying to create a ticket for Remote Assistance. Part of that requires creating a PassStub parameter. From the documentation (http://msdn.microsoft.com/en-us/library/cc240115(PROT.10).aspx):

        PassStub: The encrypted novice computer's password string. When the Remote
        Assistance Connection String is sent as a file over e-mail, to provide
        additional security, a password is used.

    Section 16 details how to create a PassStub:

        In Windows XP and Windows Server 2003, when a password is used, it is
        encrypted using the PROV_RSA_FULL predefined cryptographic provider with
        MD5 hashing and CALG_RC4, the RC4 stream encryption algorithm.

    The PassStub looks like this in the file:

        PassStub="LK#6Lh*gCmNDpj"

    If you want to generate one yourself, run msra.exe in Vista or run the Remote Assistance tool in WinXP. The documentation says this stub is the result of the function CryptEncrypt, with the key derived from the password and encrypted with the session id (those are also in the ticket file). The problem is that CryptEncrypt produces a binary output way larger than the 15-byte PassStub. Also, the PassStub isn't encoded in any way I've seen before.

    Some interesting things about the PassStub encoding: after doing statistical analysis, the 3rd char is always one of !#$&()+-=@^. The only symbols seen everywhere are * and _; otherwise the valid characters are 0-9, a-z, A-Z. There are a total of 75 valid characters, and the stub is always 15 bytes. Running msra.exe with the same password always generates a different PassStub, indicating that it is not a direct hash but includes the rasessionid as they say.

    Another idea I've had is that it is not the direct result of CryptEncrypt, but a result of including the rasessionid in the MD5 hash. In MS-RA (http://msdn.microsoft.com/en-us/library/cc240013(PROT.10).aspx), the "PassStub Novice" is simply hex encoded, and looks to be the right length. The problem is I have no idea how to go from any hash to the way the PassStub looks.

    Read the article

  • linux new/delete, malloc/free large memory blocks

    - by brian_mk
    Hi folks,

    We have a linux system (kubuntu 7.10) that runs a number of CORBA server processes. The server software uses glibc libraries for memory allocation. The linux PC has 4G physical memory; swap is disabled for speed reasons.

    Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator 'new'). The buffer size varies depending upon a number of parameters but is typically around 1.2G bytes. It can be up to about 1.9G bytes. When the request has completed, the buffer is released using 'delete'.

    This works fine for several consecutive requests that allocate buffers of the same size, or if the request allocates a smaller size than the previous. The memory appears to be freed ok - otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard etc.

    The problem arises when a request requires a buffer larger than the previous. In this case, operator 'new' throws an exception. It's as if the memory that has been freed from the first allocation cannot be re-allocated, even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, then the second request for a larger buffer size succeeds; i.e. killing the process appears to fully release the freed memory back to the system.

    Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is being released to the system.

    BTW - I'm not sure if it's relevant to our problem, but the server uses Pthreads that get created and destroyed on each processing request.

    Cheers, Brian.
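    For the mallopt route, a minimal sketch of the two glibc knobs usually reached for (the threshold values here are illustrative, not recommendations). Note, though, that in a 32-bit process the likelier culprit for "a bigger buffer fails after a smaller one succeeded" is virtual address space fragmentation, which no malloc tuning can undo:

        /* build with: gcc -o server server.c */
        #include <malloc.h>

        int main(void)
        {
            /* allocations at or above this size come from mmap(), so free()
               returns them to the kernel instead of growing the heap */
            mallopt(M_MMAP_THRESHOLD, 64 * 1024);

            /* trim the heap back to the kernel once this much sits free on top */
            mallopt(M_TRIM_THRESHOLD, 128 * 1024);

            /* ... run the CORBA server ... */
            return 0;
        }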

    Read the article

  • SQL: script to create country, state tables

    - by pcampbell
    Consider writing an application that requires registration for an entity, where the schema has been defined to require the country and state/prov/county data to be normalized. This is fairly typical stuff here. Naming is also important to reflect; each country has a different name for this entity:

        USA       = states
        Australia = states + territories
        Canada    = provinces + territories
        Mexico    = states
        Brazil    = states
        Sweden    = provinces
        UK        = counties, principalities, and perhaps more!

    Most times when approaching this problem, I have to scratch together a list of good countries, and the states/provs/counties of each. The app may be concerned with a few countries and not others. The process is full of pain. It typically involves one of two approaches:

    1. Opening up some previous DB and creating a CREATE script based on those tables, then running that script in the context of the new system.
    2. Creating a DTS package from database1 to database2, with all the DDL and data included in the transfer.

    My goal now is to script the creation and insert of the countries that I'd be concerned with in the app of the day. When I want to roll out countries X/Y/Z, I'll open CountryX.sql and load its states into the ProvState table.

    Question: do you have a set of scripts in your toolset to create schema and data for countries and states/provinces/counties? If so, would you share your scripts here? (U.K. citizens, please feel free to correct me by way of a comment on the use of counties.)
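    The shape such scripts usually take is two tables plus one self-contained seed file per country - a sketch with assumed names and columns:

        CREATE TABLE Country (
            CountryID      INT          NOT NULL PRIMARY KEY,
            Code           CHAR(2)      NOT NULL UNIQUE,  -- ISO 3166-1 alpha-2
            Name           VARCHAR(100) NOT NULL,
            ProvStateLabel VARCHAR(30)  NOT NULL          -- 'State', 'Province', 'County', ...
        );

        CREATE TABLE ProvState (
            ProvStateID INT          NOT NULL PRIMARY KEY,
            CountryID   INT          NOT NULL REFERENCES Country(CountryID),
            Code        VARCHAR(10)  NOT NULL,
            Name        VARCHAR(100) NOT NULL
        );

        -- CountryCA.sql: one file per country to roll out on demand
        INSERT INTO Country VALUES (2, 'CA', 'Canada', 'Province');
        INSERT INTO ProvState VALUES (10, 2, 'AB', 'Alberta');
        INSERT INTO ProvState VALUES (11, 2, 'BC', 'British Columbia');
        -- ...

    Storing the label ('State' vs 'Province' vs 'County') on the Country row lets the UI use the right word per country without any schema change.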

    Read the article

  • How to generate an ear file from a maven-archetype-webapp artifact?

    - by Mike
    I currently have a project built with the maven-archetype-webapp artifact. The default packaging for this project is war. Is it possible for me to insert the maven-ear-plugin into this webapp pom.xml and generate an ear file that contains this project's war? I tried that, but the war file doesn't get embedded in the generated ear file; it has everything except the war file.

    I've read many Maven-related articles, and perhaps I could use the maven-archetype-j2ee-simple artifact. However, I'm reluctant to use it for two reasons. First, that archetype handles EJBs and all the extra features that I don't use, which makes my project look bloated. Second, it seems to require me to install the web module into the repository before I can create the ear file. Is this the preferred way to create an ear file?

    How do I create an ear file that contains the war file using maven-ear-plugin from my webapp's pom.xml? If this way is not possible, what's the preferred way? I'm sorry if my questions sound a little novice; I realize I have a whole lot more to learn about Maven. Thanks much.
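    The conventional arrangement is a small multi-module build rather than packaging the ear from the war's own pom: the war module stays as generated, and a sibling module with ear packaging declares the war as a dependency. A sketch of that ear module's pom.xml, with hypothetical coordinates:

        <project>
            <modelVersion>4.0.0</modelVersion>
            <groupId>com.example</groupId>
            <artifactId>myapp-ear</artifactId>
            <version>1.0</version>
            <packaging>ear</packaging>

            <dependencies>
                <dependency>
                    <groupId>com.example</groupId>
                    <artifactId>myapp-webapp</artifactId>
                    <version>1.0</version>
                    <type>war</type>
                </dependency>
            </dependencies>

            <build>
                <plugins>
                    <plugin>
                        <groupId>org.apache.maven.plugins</groupId>
                        <artifactId>maven-ear-plugin</artifactId>
                        <configuration>
                            <modules>
                                <webModule>
                                    <groupId>com.example</groupId>
                                    <artifactId>myapp-webapp</artifactId>
                                    <contextRoot>/myapp</contextRoot>
                                </webModule>
                            </modules>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </project>

    The "install the web module first" step the question balks at is just how Maven resolves the war artifact; running the build from a parent aggregator (mvn package at the root) resolves it from the reactor in one pass.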

    Read the article

  • How do I write test code to exercise a C# generic Pair<TKey, TValue> ?

    - by Scott Davies
    Hi, I am reading through Jon Skeet's "C# in Depth", first edition (which is a great book). I'm in section 3.3.3, page 84, "Implementing Generics". Generics always confuse me, so I wrote some code to exercise the sample. The code provided is:

        using System;
        using System.Collections.Generic;

        public sealed class Pair<TFirst, TSecond> : IEquatable<Pair<TFirst, TSecond>>
        {
            private readonly TFirst first;
            private readonly TSecond second;

            public Pair(TFirst first, TSecond second)
            {
                this.first = first;
                this.second = second;
            }

            ...property getters...

            public bool Equals(Pair<TFirst, TSecond> other)
            {
                if (other == null)
                {
                    return false;
                }
                return EqualityComparer<TFirst>.Default.Equals(this.First, other.First) &&
                       EqualityComparer<TSecond>.Default.Equals(this.Second, other.Second);
            }
        }

    My code is:

        class MyClass
        {
            public static void Main(string[] args)
            {
                // Create new pair.
                Pair thePair = new Pair(new String("1"), new String("1"));

                // Compare a new pair to previous pair by generating a second pair.
                if (thePair.Equals(new Pair(new string("1"), new string("1"))))
                    System.Console.WriteLine("Equal");
                else
                    System.Console.WriteLine("Not equal");
            }
        }

    The compiler complains: "Using the generic type 'ManningListing36.Pair' requires 2 type argument(s) CS0305". What am I doing wrong? Thanks, Scott
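    The error is that Pair is generic, so every use needs its two type arguments spelled out - C# cannot infer them from a constructor call. A sketch of a Main that compiles; note that new String("1") also has to go, since C#'s String has no constructor taking a string and "1" is already one:

        class MyClass
        {
            public static void Main(string[] args)
            {
                // Both type arguments are explicit: this is a Pair<string, string>.
                Pair<string, string> thePair = new Pair<string, string>("1", "1");

                // Compare against a second, structurally equal pair.
                if (thePair.Equals(new Pair<string, string>("1", "1")))
                    System.Console.WriteLine("Equal");
                else
                    System.Console.WriteLine("Not equal");
            }
        }

    With the book's EqualityComparer-based Equals, this prints "Equal".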

    Read the article

  • SQL Server 2000 intermittent connection exceptions on production server - specific environment problem

    - by StickyMcGinty
    We've been having intermittent problems causing users to be forcibly logged out of our application. Our set-up is an ASP.NET/C# web application on Windows Server 2003 Standard Edition with SQL Server 2000 on the back end. We've recently performed a major product upgrade on our client's VMware server (we have a guest instance dedicated to us), and whereas we had none of these issues with the previous release, the added complexity that the new upgrade brings to the product has caused a lot of issues. We are running SQL Server 2000 (build 8.00.2039, i.e. SP4) and the IIS/ASP.NET (.NET v2.0.50727) application on the same box, connecting to each other via a TCP/IP connection.

    Primarily, the exceptions being thrown are:

        System.IndexOutOfRangeException: Cannot find table 0.

        System.ArgumentException: Column 'password' does not belong to table Table.
        [This exception occurs in the log in script, even though there is clearly a password column available]

        System.InvalidOperationException: There is already an open DataReader associated with this Command which must be closed first.
        [This one is occurring very regularly]

        System.InvalidOperationException: This SqlTransaction has completed; it is no longer usable.

        System.ApplicationException: ExecuteReader requires an open and available Connection. The connection's current state is connecting.

        System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    And just today, for the first time:

        System.Web.UI.ViewStateException: Invalid viewstate.

    We have load tested the app using the same number of concurrent users as the production server and cannot reproduce these errors. They are very intermittent and occur even when there are only 8/9/10 user connections. My gut is telling me it's ASP.NET - SQL Server 2000 connection issues. We've pretty much ruled out code-level data access layer errors at this stage (we have a development team of 15 experienced developers working on this), so we think it's a specific production server environment issue.
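    For what it's worth, that particular cluster ("already an open DataReader", "current state is connecting", results from the wrong table surfacing in unrelated pages) is the classic signature of a single SqlConnection shared across requests - for example one held in a static field or singleton DAL - rather than an environment fault; it only bites under concurrency, which would also explain why it is intermittent and hard to reproduce. The safe pattern is one connection per operation, letting the pool do the reuse. A sketch with an assumed table and column (needs System.Data.SqlClient):

        // each call gets its own connection; ADO.NET pooling makes open/close cheap
        static string LoadPasswordHash(string connectionString, int userId)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand(
                       "SELECT password FROM Users WHERE id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", userId);
                conn.Open();
                return (string)cmd.ExecuteScalar();
            }
        }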

    Read the article

  • Canonical resource for forms-based design in ASP.NET MVC?

    - by Robert Harvey
    Is there a resource on the web that describes various form scenarios in ASP.NET MVC, and gives example solutions within a sensible, consistent design philosophy? Examples of such scenarios might be:

    - One-to-many forms, like invoice data-entry forms.
    - Foreign-table forms, such as Add New User in a form that requires specifying a user.
    - Forms that require dynamic interaction, using Ajax or JSON.
    - Popup forms.
    - Forms requiring multiple data records to be input, without postbacks.

    Note that there is considerable conceptual and technological overlap among these example scenarios. I am aware that there is a vast patchwork quilt of available technologies and examples out there that provide partial solutions and pieces of solutions, such as jQuery Ajax, CSS, and so forth. But I would like guidance in using these technologies in more effective and consistent ways.

    I am not considering Web Forms integration with an ASP.NET MVC application; I would still like my applications to be pure MVC. Nor am I, at the moment, considering a paid solution like Telerik. But I would like to know if someone has already done some of the work combining these technologies into a consistent, cohesive whole that follows a sensible design philosophy (an open source framework, perhaps?).

    Read the article

  • Windows Workflow Foundation: Recommendations how to design architecture

    - by Petr Felzmann
    We are running several copies of the same ASP.NET application (one per customer) based on our custom framework (libraries). Each application uses its own database (the Initial Catalog in connection string terms). Now we would like to add workflow capability (of course 4.0 ;) to the applications. The particular workflows will be the same for all the applications; only some initial settings of each workflow can vary, e.g. in one application the e-mail will be sent to user X, but in another application to user Y.

    I have several general questions about how to design the architecture:

    1. Can the workflow database be shared by all the applications?
    2. Where should the workflow engine be hosted - inside our custom Windows NT service or inside IIS? What are the criteria for choosing the right host?
    3. How should the workflow engine communicate with the applications? Should each application call some WCF endpoint API configured in the workflow host, or vice versa - should each application provide a WCF endpoint API that the workflow engine calls? How then would the workflow engine identify applications? Both cases probably require some application identifier as a parameter in API calls.
    4. We would also like to store some information in the application databases based on the workflow states. Is this possible?

    Thanks for suggestions!

    Read the article

  • How to lazy load scripts in YUI that accompany ajax html fragments

    - by Chris Beck
    I have a web app with tabs for Messages and Contacts (think gmail). Each tab has a Y.on('click') event listener that retrieves an HTML fragment via Y.io() and renders the fragment via node.setContent(). However, the Contacts tab also requires contact.js to enable an Edit button in the fragment.

    How do I defer the cost of loading contact.js until the user clicks on the Contacts tab? How should contact.js add its listener to the Edit button? The complete function of my tab's on('click') event could serialize Get.script('contact.js') after Y.io('fragment'). However, for better performance, I would prefer to download the script in parallel with the HTML fragment. In that case, I would need to defer adding an event listener to the Edit button until the Edit button is available.

    This seems like a common RIA design pattern. What is the best way to implement this with YUI? How should I get the script? How should I defer sections of the script until elements in the fragment are available in the DOM?
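    A sketch of the parallel version: fire both downloads at once and wire the button only when both have landed. The selectors and the Y.ContactEditor name are hypothetical stand-ins for whatever contact.js actually defines:

        var htmlReady = false, jsReady = false;

        function wireEditButton() {
            if (htmlReady && jsReady) {
                Y.one('#contacts .edit').on('click', Y.ContactEditor.open);
            }
        }

        // fragment and script download in parallel
        Y.io('/contacts/fragment', {
            on: {
                success: function (id, res) {
                    Y.one('#contacts').setContent(res.responseText);
                    htmlReady = true;
                    wireEditButton();
                }
            }
        });

        Y.Get.script('contact.js', {
            onSuccess: function () {
                jsReady = true;
                wireEditButton();
            }
        });

    A variant of the same idea is event delegation: attach one delegated 'click' handler for '.edit' on the tab's container up front, so the order in which the fragment and the script arrive stops mattering.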

    Read the article

  • Best practice - When to evaluate conditionals of function execution

    - by Tesserex
    If I have a function called from a few places, and it requires some condition to be met for anything it does to execute, where should that condition be checked? In my case, it's for drawing - if the mouse button is held down, then execute the drawing logic (this is being done in the mouse movement handler for when you drag).

    Option one says put it in the function so that it's guaranteed to be checked. Abstracted, if you will.

        public function Foo() {
            DoThing();
        }

        private function DoThing() {
            if (!condition) return;
            // do stuff
        }

    The problem I have with this is that when reading the code of Foo, which may be far away from DoThing, it looks like a bug. The first thought is that the condition isn't being checked.

    Option two, then, is to check before calling.

        public function Foo() {
            if (condition) DoThing();
        }

    This reads better, but now you have to worry about checking from everywhere you call it.

    Option three is to rename the function to be more descriptive.

        public function Foo() {
            DoThingOnlyIfCondition();
        }

        private function DoThingOnlyIfCondition() {
            if (!condition) return;
            // do stuff
        }

    Is this the "correct" solution? Or is this going a bit too far? I feel like if everything were like this, function names would start to duplicate their code. About this being subjective: of course it is, and there may not be a right answer, but I think it's still perfectly at home here. Getting advice from better programmers than I is the second best way to learn. Subjective questions are exactly the kind of thing Google can't answer.

    Read the article

  • Rate My Script: Finding Flash Files Embedded in Office Files

    - by Shaun Johnson
    Can anyone improve on this? Requires Sysinternals strings.exe.

        date /T >N:\output.txt
        net use z: /delete
        net use z: \\svr-002\rmstudentwork
        @cd /d "z:\"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.xls  | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.ppt  | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.doc  | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.xlsx | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.pptx | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.docx | findstr \.swf >> "N:\output.txt"
        date /T >>N:\output.txt
        net use z: /delete /yes >>N:\output.txt
        net use z: \\svr-003\rmstudentwork
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.xls  | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.ppt  | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.doc  | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.xlsx | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.pptx | findstr \.swf >> "N:\output.txt"
        "N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe" -s *.docx | findstr \.swf >> "N:\output.txt"
        net use z: /delete /yes

    Basically it mounts a share as a network drive, then runs through the share looking for swf files inside office documents.
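    One obvious tightening is to loop over the servers and extensions instead of repeating the line twelve times - a sketch as a .bat file, with the same paths and output as the original:

        @echo off
        set STRINGS="N:\Scripts and Reg Frags\FindEmbededFlashFiles\strings.exe"
        date /T > N:\output.txt

        for %%S in (svr-002 svr-003) do (
            net use z: /delete /yes
            net use z: \\%%S\rmstudentwork
            cd /d z:\
            for %%E in (xls ppt doc xlsx pptx docx) do (
                %STRINGS% -s *.%%E | findstr \.swf >> "N:\output.txt"
            )
        )

        date /T >> N:\output.txt
        net use z: /delete /yes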

    Read the article

  • Does git clone work through NTLM proxies?

    - by AndreaG
    I've tried both using export http_proxy=http://[username]:[pwd]@[proxy] and git config --global http.proxy http://[username]:[pwd]@[proxy]. I couldn't make it work. It looks like git uses Basic authentication:

        Initialized empty Git repository in /home/.../.git/
        * Couldn't find host github.com in the .netrc file, using defaults
        * About to connect() to github.com port 8080 (#0)
        *   Trying 10....
        * Connected to github.com (10....) port 8080 (#0)
        * Proxy auth using Basic with user '...'
        > GET http://github.com/sunlightlabs/fiftystates.git/info/refs HTTP/1.1
        Proxy-Authorization: Basic MD...
        User-Agent: git/1.6.1.2
        Host: github.com
        Pragma: no-cache
        Accept: */*
        Proxy-Connection: Keep-Alive

        < HTTP/1.1 407 Proxy Authentication Required ( The ISA Server requires authorization to fulfill the request. Access to the Web Proxy filter is denied. )
        < Via: 1.1 ...
        < Proxy-Authenticate: Negotiate
        < Proxy-Authenticate: Kerberos
        < Proxy-Authenticate: NTLM
        < Connection: Keep-Alive
        < Proxy-Connection: Keep-Alive
        < Pragma: no-cache
        < Cache-Control: no-cache
        < Content-Type: text/html
        < Content-Length: 4118
        * The requested URL returned error: 407
        * Closing connection #0
        fatal: http://github.com/sunlightlabs/fiftystates.git/info/refs download error - The requested URL returned error: 407

    Google search returned mixed and probably outdated results. Somewhere it says that curl is (was?) used under the hood, but its options are (were?) hardwired into the code. For example,

        curl --proxy-ntlm --proxy ...:8080 google.com

    works, and I'd like to use the same option with git. I need some more definite answers here: has anybody succeeded in using git through Windows proxies? Which version? Thanks.
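    A common workaround when the proxy will only speak NTLM is to run a small NTLM-aware proxy on localhost and point git at that instead; cntlm is the usual choice. A sketch - the exact flags should be checked against cntlm's man page:

        # cntlm authenticates to the corporate ISA proxy on git's behalf
        cntlm -u username -d DOMAIN -p password -l 3128 proxy.example.com:8080

        # git then only has to talk to the local listener, no NTLM needed
        git config --global http.proxy http://127.0.0.1:3128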

    Read the article

  • java - codesprint2 programming contest answer

    - by arya
    I recently took part in Codesprint 2. I was unable to submit the solution to the following question: http://www.spoj.pl/problems/COINTOSS/ (I have posted the link to the problem statement at SPOJ because the Codesprint link requires login.)

    I checked out one of the successful solutions (url: http://pastebin.com/uQhNh9Rc), and the logic used is exactly the same as mine, yet I am still getting "Wrong Answer". I would be very thankful if someone could please tell me what error I have made, as I am unable to find it. Thank you. My code:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.text.DecimalFormat;
        import java.util.StringTokenizer;

        public class Solution {

            static double solve(int n, int m) {
                if (m == n)
                    return 0;
                if (m == 0)
                    return (Math.pow(2, n + 1) - 2);
                else {
                    double res = 1 + (Double) (solve(n, m + 1) + solve(n, 0)) / 2;
                    return res;
                }
            }

            public static void main(String[] args) throws IOException {
                BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
                int n, m;
                int t = Integer.parseInt(br.readLine());
                StringTokenizer tok;
                String s;
                for (int T = 0; T < t; T++) {
                    s = br.readLine();
                    tok = new StringTokenizer(s);
                    n = Integer.parseInt(tok.nextToken());
                    m = Integer.parseInt(tok.nextToken());
                    DecimalFormat df = new DecimalFormat();
                    df.setMaximumFractionDigits(2);
                    df.setMinimumFractionDigits(2);
                    System.out.println(df.format(solve(n, m)));
                }
            }
        }
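    One suspect worth checking (an observation about the posted code, not a verified verdict on the judge): new DecimalFormat() takes its pattern from the default locale, grouping separators included, so larger answers print as e.g. "2,097,150.00" where a judge typically expects "2097150.00" - and the separator characters themselves vary by locale. Formatting with an explicit pattern sidesteps both problems:

        // replacement for the DecimalFormat lines; additionally needs
        // java.text.DecimalFormatSymbols and java.util.Locale imported
        DecimalFormat df = new DecimalFormat("0.00",
                new DecimalFormatSymbols(Locale.US));
        System.out.println(df.format(solve(n, m)));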

    Read the article

  • php user authentication libraries / frameworks ... what are the options?

    - by es11
    I am using PHP and the codeigniter framework for a project I am working on, and require a user login/authentication system. For now I'd rather not use SSL (it might be overkill, and the fact that I am using shared hosting discourages it). I have considered using OpenID but decided that since my target audience is generally not technical, it might scare users away (not to mention that it requires mirroring of login information etc.). I know that I could write hash-based authentication (such as sha1) since there is no sensitive data being passed (I'd compare the level of sensitivity to that of stackoverflow).

    That being said, before making a custom solution, it would be nice to know if there are any good libraries or packages out there that you have used to provide semi-secure authentication. I am new to codeigniter, but something that integrates well with it would be preferable. Any ideas? (I'm open to criticism of my approach and open to suggestions as to why I might be crazy not to just use SSL.) Thanks in advance.

    Update: I've looked into some of the suggestions. I am curious to try out zend-auth since it seems well supported and well built. Does anyone have experience with using zend-auth in codeigniter (is it too bulky?), and do you have a good reference on integrating it with CI? I do not need any complex authentication schemes, just a simple login/logout/password-management authorization system. Also, dx_auth seems interesting, however I am worried that it is too buggy. Has anybody else had success with this? I realized that I would also like to manage guest users (i.e. users that do not login/register) in a similar way to stackoverflow, so any suggestions that have this functionality would be great.

    Read the article

  • Really simple JSON serialization in .NET

    - by Evgeny
    I have some simple .NET objects I'd like to serialize to JSON and back again. The set of objects to be serialized is quite small and I control the implementation, so I don't need a generic solution that will work for everything. Since my assembly will be distributed as a library, I'd really like to avoid a dependency on some third-party DLL: I just want to give users one assembly that they can reference.

    I've read the other questions I could find on converting to and from JSON in .NET. The recommended solution of JSON.NET does work, of course, but it requires distributing an extra DLL. I don't need any of the fancy features of JSON.NET. I just need to handle a simple object (or even a dictionary) that contains strings, integers, DateTimes, and arrays of strings and bytes. On deserializing I'm happy to get back a dictionary - it doesn't need to create the object again.

    Is there some really simple code out there that I could compile into my assembly to do this simple job? I've also tried System.Web.Script.Serialization.JavaScriptSerializer, but where it falls down is the byte array: I want to base64-encode it, and even registering a converter doesn't let me easily accomplish that, due to the way that API works (it doesn't pass in the name of the field).
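    One way to keep JavaScriptSerializer and dodge its converter API entirely is to hand it a dictionary in which the awkward members are pre-encoded. A sketch with an assumed object shape:

        using System;
        using System.Collections.Generic;
        using System.Web.Script.Serialization;   // reference System.Web.Extensions.dll

        static class JsonSketch
        {
            static string Serialize(string name, DateTime created, byte[] payload)
            {
                var jss = new JavaScriptSerializer();

                // the byte[] becomes a Base64 string before the serializer sees it
                var dict = new Dictionary<string, object>
                {
                    { "Name",    name },
                    { "Created", created.ToString("o") },  // round-trippable form
                    { "Payload", Convert.ToBase64String(payload) },
                };
                return jss.Serialize(dict);
            }

            static Dictionary<string, object> Deserialize(string json)
            {
                // the question is happy to get a dictionary back on the way in
                return new JavaScriptSerializer()
                    .Deserialize<Dictionary<string, object>>(json);
            }
        }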

    Read the article

  • Automated Oracle Schema Migration Tool

    - by Dave Jarvis
    What are some tools (commercial or OSS) that provide a GUI-based mechanism for creating schema upgrade scripts? To be clear, here are the tool responsibilities:

    - Obtain a connection to the recent schema version (called "source").
    - Obtain a connection to the previous schema version (called "target").
    - Compare all schema objects between source and target.
    - Create a script to make the target schema equivalent to the source schema ("upgrade script").
    - Create a rollback script to revert the source schema, used if the upgrade script fails (at any point).
    - Create individual files for schema objects.

    The software must:

    - Use ALTER TABLE instead of DROP and CREATE for renamed columns.
    - Work with Oracle 10g or greater.
    - Create scripts that can be batch executed (via command line).
    - Have a trivial installation process.
    - (Bonus) Create scripts that can be executed with SQL*Plus.

    Here are some examples (from StackOverflow, ServerFault, and Google searches):

    - Change Manager
    - Oracle SQL Developer

    Software that does not meet the criteria, or cannot be evaluated, includes:

    - TOAD
    - PL/SQL Developer - invalid SQL*Plus statements; does not produce ALTER statements.
    - SQL Fairy - no installer; complex installation process; poorly documented.
    - DBDiff - crippled data set evaluation; poor customer support.
    - OrbitDB - crippled data set evaluation.
    - SchemaCrawler - no easily identifiable download version for Oracle databases.
    - SQL Compare - SQL Server, not Oracle.
    - LiquiBase - requires changing the development process; no installer; config files must be edited manually; does not recognize its own baseUrl parameter.

    The only acceptable crippling of the evaluation version is by time. Crippling by restricting the number of tables and views hides possible bugs that are only visible in the software during the attempt to migrate hundreds of tables and views.

    Read the article
