Search Results

Search found 6001 results on 241 pages for 'requires'.

Page 175/241 | < Previous Page | 171 172 173 174 175 176 177 178 179 180 181 182  | Next Page >

  • loading js files and other dependent js files asynchronously

    - by taber
    I'm looking for a clean way to asynchronously load the following types of JavaScript files: a "core" js file (hmm, let's just call it, oh I don't know, "jquery!" haha), x number of js files that depend on the "core" file being loaded, and y number of other unrelated js files. I have a couple of ideas on how to go about it, but I'm not sure what the best way is. I'd like to avoid loading scripts in the document body. So, for example, I want the following four JavaScript files to load asynchronously, appropriately named:

        /js/my-contact-page-js-functions.js                       // unrelated/independent script
        /js/jquery-1.3.2.min.js                                   // the "core" script
        /js/jquery.color.min.js                                   // dependent on jQuery being loaded
        http://thirdparty.com/js/third-party-tracking-script.js   // another unrelated/independent script

    But this won't work, because it's not guaranteed that jQuery is loaded before the color plugin:

        (function() {
            var a = [
                    '/js/my-contact-page-functions.js',
                    '/js/jquery-1.4.2.min.js',
                    '/js/jquery.color.js',
                    'http://cdn.thirdparty.com/third-party-tracking-script.js'
                ],
                d = document,
                h = d.getElementsByTagName('head')[0],
                s, i, l = a.length;
            for (i = 0; i < l; i++) {
                s = d.createElement('script');
                s.type = 'text/javascript';
                s.async = true;
                s.src = a[i];
                h.appendChild(s);
            }
        })();

    Is it pretty much not possible to load jQuery and the color plugin asynchronously, since the color plugin requires that jQuery is loaded first? The first method I was considering is to just combine the color plugin script with the jQuery source into one file. Another idea I had was loading the color plugin like so:

        $(window).ready(function() {
            $.getScript("/js/jquery.color.js");
        });

    Anyone have any thoughts on how you'd go about this? Thanks!
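
    One way to keep everything asynchronous while still ordering the dependent script is to chain the plugin load onto the core script's load event. Here is a minimal sketch using plain DOM script injection (the file names are the ones from the question; the onload handling is simplified and would need onreadystatechange hardening for older IE):

        (function () {
            var head = document.getElementsByTagName('head')[0];

            // Inject a script tag and run an optional callback once it has loaded.
            function load(src, onload) {
                var s = document.createElement('script');
                s.type = 'text/javascript';
                s.async = true;
                s.src = src;
                if (onload) {
                    s.onload = onload; // older IE would need onreadystatechange instead
                }
                head.appendChild(s);
            }

            // Independent scripts can go out in any order.
            load('/js/my-contact-page-functions.js');
            load('http://cdn.thirdparty.com/third-party-tracking-script.js');

            // The color plugin is only requested once jQuery has finished loading.
            load('/js/jquery-1.4.2.min.js', function () {
                load('/js/jquery.color.js');
            });
        })();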

    Read the article

  • using ghostscript in server mode to convert pdfs to pngs

    - by emh
    While I am able to convert a specific page of a PDF to a PNG like so:

        gs -dSAFER -dBATCH -dNOPAUSE -sDEVICE=png16m -dGraphicsAlphaBits=4 -sOutputFile=gymnastics-20.png -dFirstPage=20 -dLastPage=20 gymnastics.pdf

    I am wondering if I can somehow use Ghostscript's JOBSERVER mode to process several conversions without having to incur the cost of starting up Ghostscript each time. From http://pages.cs.wisc.edu/~ghost/doc/svn/Use.htm:

        -dJOBSERVER
        Define \004 (^D) to start a new encapsulated job used for compatibility with Adobe PS Interpreters that ordinarily run under a job server. The -dNOOUTERSAVE switch is ignored if -dJOBSERVER is specified since job servers always execute the input PostScript under a save level, although the exitserver operator can be used to escape from the encapsulated job and execute as if the -dNOOUTERSAVE was specified. This also requires that the input be from stdin, otherwise an error will result (Error: /invalidrestore in --restore--).
        Example usage is:
            gs ... -dJOBSERVER - < inputfile.ps
        -or-
            cat inputfile.ps | gs ... -dJOBSERVER -
        Note: The ^D does not result in an end-of-file action on stdin as it may on some PostScript printers that rely on TBCP (Tagged Binary Communication Protocol) to cause an out-of-band ^D to signal EOF in a stream input data. This means that direct file actions on stdin such as flushfile and closefile will affect processing of data beyond the ^D in the stream.

    The idea is to run Ghostscript in-process. The script would receive a request for a particular page of a PDF and would use Ghostscript to generate the specified image. I'd rather not start up a new Ghostscript process every time.

    Read the article

  • Database structure and source control - best practice

    - by Paddy
    Background I came from several years working in a company where all the database objects were stored in source control, one file per object. We had a list of all the objects that was maintained when new items were added (to allow us to have scripts run in order and handle dependencies) and a VB script that ran to create one big script for running against the database. All the tables were 'create if not exists' and all the SP's etc. were drop and recreate. Up to the present and I am now working in a place where the database is the master and there is no source control for DB objects, but we do use redgate's tools for updating our production database (SQL compare), which is very handy, and requires little work. Question How do you handle your DB objects? I like to have them under source control (and, as we're using GIT, I'd like to be able to handle merge conflicts in the scripts, rather than the DB), but I'm going to be pressed to get past the ease of using SQL compare to update the database. I don't really want to have us updating scripts in GIT and then using SQL compare to update the production database from our DEV DB, as I'd rather have 'one version of the truth', but I don't really want to get into re-writing a custom bit of software to bundle the whole lot of scripts together. I think that visual studio database edition may do something similar to this, but I'm not sure if we will have the budget for it. I'm sure that this has been asked to death, but I can't find anything that seems to quite have the answer I'm looking for. Similar to this, but not quite the same: http://stackoverflow.com/questions/340614/what-are-the-best-practices-for-database-scripts-under-code-control
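
    For reference, the "scripts in source control" style described above usually hinges on every object script being safe to run repeatedly, so the whole folder can be concatenated and executed in order. A minimal sketch of that style for SQL Server (the object names are made up for illustration):

        -- Tables: create only if missing, so re-running the build script is harmless.
        IF OBJECT_ID('dbo.Customer', 'U') IS NULL
        BEGIN
            CREATE TABLE dbo.Customer (
                CustomerId INT IDENTITY(1,1) PRIMARY KEY,
                Name       NVARCHAR(200) NOT NULL
            );
        END
        GO

        -- Procedures: drop and recreate, so the latest version in Git always wins.
        IF OBJECT_ID('dbo.usp_GetCustomer', 'P') IS NOT NULL
            DROP PROCEDURE dbo.usp_GetCustomer;
        GO
        CREATE PROCEDURE dbo.usp_GetCustomer @CustomerId INT
        AS
        BEGIN
            SELECT CustomerId, Name FROM dbo.Customer WHERE CustomerId = @CustomerId;
        END
        GO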

    Read the article

  • How to create X509 self signed certificate for use in Apache Tomcat

    - by DaveJohnston
    I have a Java application that runs on Windows Mobile devices using a 3rd Party JVM. The application communicates with an Apache Tomcat server over HTTP. We have also used HTTPS for some connections and the certificates were created using the Sun keytool utility. First a keystore was created using genkey, then the certificate exported using export and finally that was imported into another keystore using import. The file created by genkey was loaded into the Apache server and the keystore created using import was loaded into the JVM on the PDA. Everything works as expected. I am now working with a new JVM on the PDA and (for whatever reason) I have established that this JVM requires the keystore to be in X509 (DER) format. I started working on this about a month ago and had it working, but stupidly never wrote down the steps I took, and now I can't for the life of me remember what I did. I seem to remember using openssl but other than that I am totally lost. Anything I create now using openssl and try to load into Apache causes an error at startup (Invalid Keystore Format) so I am probably missing something out entirely. Does anyone have any ideas how I should be going about creating this self-signed X509 certificate that can be loaded into Apache server and JVM running on a PDA?
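
    In case it jogs the memory, a plausible keytool/openssl sequence for producing a DER (X.509) copy of the certificate, assuming the existing JKS keystore workflow described above (alias and file names are placeholders; Tomcat itself would keep pointing at the JKS keystore, and the DER file is only for the client side):

        # Create the server keystore as before.
        keytool -genkey -alias tomcat -keyalg RSA -keystore tomcat.jks

        # Export just the certificate in binary DER (X.509) form for the PDA-side JVM.
        keytool -export -alias tomcat -keystore tomcat.jks -file server-cert.der

        # If a PEM copy is also needed, openssl converts between the two encodings.
        openssl x509 -inform DER -in server-cert.der -outform PEM -out server-cert.pem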

    Read the article

  • Update PEAR on MAMP MacOsX

    - by Jevgeni Smirnov
    Currently I am trying to install PHPUnit on my Mac OS X / MAMP server:

        pear config-set auto_discover 1
        pear install pear.phpunit.de/PHPUnit

    Errors which I got during installation:

        Validation Error: This package.xml requires PEAR version 1.9.4 to parse properly, we are version 1.9.2

        pear upgrade pear
        Nothing to upgrade

    UPDATE 1: This is my pear config. I assume that I messed up the local and MAMP installs (I didn't know that MAMP also has PEAR, so I installed a local one). I suppose something is wrong with bin_dir, php_dir and the other paths?

        Keefir-Samolet-iMac:MAMP jevgenismirnov$ pear config-show
        Configuration (channel pear.php.net):
        =====================================
        Auto-discover new Channels          auto_discover     1
        Default Channel                     default_channel   pear.php.net
        HTTP Proxy Server Address           http_proxy
        PEAR server [DEPRECATED]            master_server     pear.php.net
        Default Channel Mirror              preferred_mirror  pear.php.net
        Remote Configuration File           remote_config
        PEAR executables directory          bin_dir           /Users/jevgenismirnov/pear/bin
        PEAR documentation directory        doc_dir           /Users/jevgenismirnov/pear/docs
        PHP extension directory             ext_dir           /Applications/MAMP/bin/php/php5.3.6/lib/php/extensions/no-debug-non-zts-20090626/
        PEAR directory                      php_dir           /Users/jevgenismirnov/pear/share/pear
        PEAR Installer cache directory      cache_dir         /var/folders/k7/xpwbcbrs1xs8tlxjk5mvkwrr0000gp/T//pear/cache
        PEAR configuration file directory   cfg_dir           /Users/jevgenismirnov/pear/cfg
        PEAR data directory                 data_dir          /Users/jevgenismirnov/pear/data
        PEAR Installer download directory   download_dir      /tmp/pear/install
        PHP CLI/CGI binary                  php_bin           /Applications/MAMP/bin/php/php5.3.6/bin/php
        php.ini location                    php_ini
        --program-prefix passed to PHP's ./configure    php_prefix
        --program-suffix passed to PHP's ./configure    php_suffix
        PEAR Installer temp directory       temp_dir          /tmp/pear/install
        PEAR test directory                 test_dir          /Users/jevgenismirnov/pear/tests
        PEAR www files directory            www_dir           /Users/jevgenismirnov/pear/www
        Cache TimeToLive                    cache_ttl         3600
        Preferred Package State             preferred_state   stable
        Unix file mask                      umask             22
        Debug Log Level                     verbose           1
        PEAR password (for maintainers)     password
        Signature Handling Program          sig_bin           /usr/local/bin/gpg
        Signature Key Directory             sig_keydir        /Applications/MAMP/bin/php/php5.3.6/conf/pearkeys
        Signature Key Id                    sig_keyid
        Package Signature Type              sig_type          gpg
        PEAR username (for maintainers)     username
        User Configuration File             Filename          /Users/jevgenismirnov/.pearrc
        System Configuration File           Filename          /Applications/MAMP/bin/php/php5.3.6/conf/pear.conf
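
    One hedged guess, given the two parallel installs: drive everything through the PEAR that ships inside MAMP rather than the copy under the home directory. The path below assumes MAMP's usual layout and would need to match the actual PHP version folder:

        # Guessing MAMP's layout here: it bundles its own PEAR next to its PHP binary,
        # separate from the one installed under ~/pear. Adjust the version folder to suit.
        MAMP_PEAR=/Applications/MAMP/bin/php/php5.3.6/bin/pear

        $MAMP_PEAR channel-update pear.php.net
        $MAMP_PEAR upgrade pear                    # should move it past 1.9.2
        $MAMP_PEAR config-set auto_discover 1
        $MAMP_PEAR install pear.phpunit.de/PHPUnit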

    Read the article

  • Advice needed on best and most efficient practices with developing google apps application...

    - by Ali
    Hi guys, I'm getting my feet wet with developing my order management application for integration with Google Apps. However, there are certain aspects I need to take into consideration before proceeding any further. My application is such that it would upload documents to Google Documents and store contacts in Google Contacts. It requires that a single order can have a number of uploaded documents associated with it, as well as some contacts associated with it. My question, however, is what would be the most efficient way to implement this. I could keep key tables for both contacts and documents which would contain just an ID and a link to the documents/contacts, or their respective identification IDs on Google. Or I could maintain an exact replica of the information in my own database as well as a link to the contact on Google, but wouldn't that be too redundant? I don't want my application to be really slow: I'm afraid that every time I make a call to Google Docs to retrieve a list of documents, or to Google Contacts, it would be really slow on my application, or am I getting worried for no reason? Any advice would be most appreciated.
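
    For what it's worth, the "key table" option can be as small as one row per association, storing only the order id plus the identifier Google hands back; the table and column names below are purely illustrative:

        CREATE TABLE order_google_document (
            order_id       INT          NOT NULL,
            google_doc_id  VARCHAR(255) NOT NULL,  -- the document's resource id / link returned by the Documents List API
            PRIMARY KEY (order_id, google_doc_id)
        );

        CREATE TABLE order_google_contact (
            order_id           INT          NOT NULL,
            google_contact_id  VARCHAR(255) NOT NULL,  -- the contact entry's id URL returned by the Contacts API
            PRIMARY KEY (order_id, google_contact_id)
        );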

    Read the article

  • Package creation issues using SQL Developer

    - by Carter
    So I've never worked with stored procedures, I don't have a whole lot of DB experience in general, and I've been assigned a task that requires I create a package, and I'm stuck. Using SQL Developer, I'm trying to create a package called JUMPTO with this code:

        create or replace package JUMPTO is
          type t_locations is ref cursor;
          procedure procGetLocations(locations out t_locations);
        end JUMPTO;

    When I run it, it spits out this PL/SQL code block:

        DECLARE
          LOCATIONS APPLICATION.JUMPTO.t_locations;
        BEGIN
          JUMPTO.PROCGET_LOCATIONS(
            LOCATIONS = LOCATIONS
          );
          -- Modify the code to output the variable
          -- DBMS_OUTPUT.PUT_LINE('LOCATIONS = ' || LOCATIONS);
        END;

    A tutorial I found said to take out the comment on that second line there. I've tried with and without the comment. When I hit "OK" I get the error:

        ORA-06550: line 2, column 32:
        PLS-00302: component 'JUMPTO' must be declared
        ORA-06550: line 2, column 13:
        PL/SQL: item ignored
        ORA-06550: line 6, column 18:
        PLS-00320: the declaration of the type of this expression is incomplete or malformed
        ORA-06550: line 5, column 3:
        PL/SQL: Statement ignored
        ORA-06512: at line 58

    I really don't have any idea what's going on; this is all completely new territory for me. I tried creating a body that just selected some stuff from the database, but nothing is working the way it seems like it should in my head. Can anyone give me any insight into this?
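
    For comparison, a hand-written anonymous block that exercises the package would normally declare the variable with the package-qualified type and use the => named-parameter syntax, roughly like this (a sketch that assumes the package and its body compiled in your own schema):

        DECLARE
          l_locations JUMPTO.t_locations;   -- the REF CURSOR type from the package spec
        BEGIN
          JUMPTO.procGetLocations(locations => l_locations);
          -- once the body exists: FETCH l_locations INTO ...; CLOSE l_locations;
        END;
        /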

    Read the article

  • svcutil, WSDL, and the generated interfaces not being sufficient for implementation

    - by chtmd
    I have a WSDL file defining a service that I have to implement in WCF. I had read that I could generate the proxy using svcutil from the WSDL file, and that I could then use the generated interfaces to implement the service. Unfortunately, I can't quite seem to find a way to have the interfaces contain the correct attributes to expose the contracts. All operations have the "OperationContractAttribute" attribute, but it appears as though for the service to be exposed, I require the "OperationContract" for each one. Same thing with "ServiceContractAttribute" and "ServiceContract", and I imagine DataContract, but I haven't gotten that far. I could manually make these changes, but I would much prefer a technique where the existing code could be easily used, or better code could be generated for my uses. Is there some way that this can be done? Thanks. EDIT: Command used: svcutil ObjectManagerService.wsdl /n:*,Sample /o:ObjectManagerServiceProxy.cs /nologo Code sample: public interface ObjectManagerSyncPortType { // CODEGEN: Generating message contract since the operation createObject is neither RPC nor document wrapped. [System.ServiceModel.OperationContractAttribute(Action="http://www.sample.com/createObject", ReplyAction="*")] [System.ServiceModel.XmlSerializerFormatAttribute()] Sample.createObjectResponse1 createObject(Sample.createObjectRequest1 request); As best as I can tell/see the WSDL file is entirely self-contained and requires no additional XSD files.

    Read the article

  • Apache2: mod_wsgi or mod_python, which one is better?

    - by Algorist
    Hi, I am planning to write a web service in Python, but I found that WSGI also does a similar thing. Which one should be preferred?

    Thank you,
    Bala

    Update: I am still confused. Please help. "Better" in my sense means:

    1. Bugs will be fixed periodically.
    2. Chosen by most developers.
    3. Additional features, like authentication tokens as in AWS, can be supported out of the box.
    4. No strong dependency on a particular version (I see that wsgi requires Python 2.6).
    5. All Python libraries will work out of the box.
    6. Scalable in the future.
    7. Future upgrades don't cause any issues.

    With my limited experience, these are the features I want. There might be some I am missing.

    Thanks,
    Bala

    Update: I am sorry for all the confusion caused. I just want to expose RESTful web services in the Python language. Is there a good framework?
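
    Just to illustrate what the WSGI side of the comparison looks like: the interface mod_wsgi expects is tiny, which is part of why most newer Python web frameworks target it. A minimal, framework-free sketch:

        # hello.py -- the smallest possible WSGI application; mod_wsgi (or any other
        # WSGI server) calls `application` once per request.
        def application(environ, start_response):
            body = b"Hello from WSGI"
            start_response("200 OK", [
                ("Content-Type", "text/plain"),
                ("Content-Length", str(len(body))),
            ])
            return [body]

        # For local testing without Apache, the standard library ships a reference server.
        if __name__ == "__main__":
            from wsgiref.simple_server import make_server
            make_server("localhost", 8000, application).serve_forever()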

    Read the article

  • Best full text search for mysql?

    - by ConroyP
    We're currently running MySQL on a LAMP stack and have been looking at implementing a more thorough, full-text search on our site. We've looked at MySQL's own full-text search, but it doesn't seem to cope well with large databases, which makes it far too slow for our needs. Our main requirements are:

    - speed returning results
    - simple updating of index

    In addition to the above, our "nice to have"s are:

    - ideally not something that requires adding a module to MySQL
    - plays nicely with PHP (the majority of our dev work is done using PHP)

    There seem to be quite a few healthy open-source projects to add fast, reliable full-text search to MySQL, so I'm basically looking for recommendations/suggestions on what you've found to be the most useful product out there, easiest to set up, etc. So far, the list of ones we've been starting to play around with is:

    - Sphinx: C++ based, used by craigslist, thepiratebay
    - Lucene: Java-based Apache project, powers zeoh.com and zoomf.com
    - Solr: Java-based offshoot of Lucene, used to power searches on Digg, CNet & AOL Channels

    Are there any better ones out there that we haven't come across yet? Can you recommend, or advise against, any of the options we've gathered so far? Thanks for your help!

    Update: @Cletus suggested Google's Custom Search Engine. We recently trialled this on a couple of projects, and it's an almost-perfect fit for our needs. The problem is that entries on our site are updated quite regularly, and unfortunately the speed at which entries go in / get updated in Google's index was just too slow and erratic for us to rely on, even with the addition of sitemaps and requested crawl-rate changes.
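
    On the PHP-friendliness point, the stock sphinxapi.php client that ships with Sphinx boils down to this kind of call (host, port and index name are whatever your sphinx.conf defines; this is a sketch, not a benchmark):

        <?php
        require_once 'sphinxapi.php';            // ships with the Sphinx distribution

        $cl = new SphinxClient();
        $cl->SetServer('localhost', 9312);       // searchd host/port from sphinx.conf
        $cl->SetMatchMode(SPH_MATCH_EXTENDED2);  // full query syntax

        $result = $cl->Query('quick brown fox', 'articles_index');
        if ($result !== false) {
            foreach ($result['matches'] as $docId => $info) {
                // $docId maps back to the primary key of the row in MySQL
            }
        }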

    Read the article

  • Add a custom variable to an email header already within a gmail inbox

    - by Ali
    Hi guys, this may seem odd, but I was wondering if it is possible to add custom header details to emails already in an inbox. Let's say I wish to add something like myvariable = myvalue in the header of the email and then be able to query it somehow. I'm looking at code from IlohaMail, and most of the details, like subject, from, received, etc., are in the headers, and you can search through them. So is it possible to add my own custom variable to an email header and query it in the same way? How can it be done using PHP?

    EDIT: Thanks, I know how you can modify the headers of sent messages, and also how to query for custom variables in message headers; in this case, however, I want to know if it is possible to add a custom variable to a received message already in my inbox. Let me define the situation here. I'm working on a Google Apps solution which requires maintaining references to emails. When an email comes in, we create an order from that email and wish to maintain a reference to that EXACT email by some kind of identifier which would enable us to identify that email. The fact is that we don't want to download the emails into a database and maintain a separate store, as we would like to keep all the emailing on GMAIL. We just need a way to be able to 'link' to a specific email permanently; the UID is just a sequence number and not very reliable. We couldn't find any property of emails that could function as a unique ID or primary key, so we thought we could instead generate a key on our end and store it in a custom variable on the email itself. However, it seems unfortunately that there isn't a way to manipulate the headers of an already existing email. :( Is there any solution to this problem I could use? Any ideas?
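
    One possible workaround, since IMAP offers no way for a client to rewrite the headers of a stored message: key the orders off identifiers Gmail already exposes, such as the Message-ID header. A rough sketch with PHP's imap extension (server string, credentials and $msgNo are placeholders):

        <?php
        // Connect to the Gmail inbox over IMAP.
        $inbox = imap_open('{imap.gmail.com:993/imap/ssl}INBOX', 'user@example.com', 'password');

        // The Message-ID header is globally unique and never changes for a message...
        $overview  = imap_fetch_overview($inbox, $msgNo);
        $messageId = $overview[0]->message_id;

        // ...and the UID is stable within this mailbox as long as UIDVALIDITY doesn't change,
        // so it makes a fast secondary lookup key.
        $uid = imap_uid($inbox, $msgNo);

        // Store (orderId, messageId, uid) in your own table; when the email is needed again,
        // fetch by UID first and fall back to scanning overviews for the Message-ID.
        imap_close($inbox);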

    Read the article

  • Can a call to WaitHandle.SignalAndWait be ignored for performance profiling purposes?

    - by Dan Tao
    I just downloaded the trial version of ANTS Performance Profiler from Red Gate and am investigating some of my team's code. Immediately I notice that there's a particular section of code that ANTS is reporting as eating up to 99% CPU time. I am completely unfamiliar with ANTS or performance profiling in general (that is, aside from self-profiling using what I'm sure are extremely crude and frowned-upon methods such as double timeToComplete = (endTime - startTime).TotalSeconds), so I'm still fiddling around with the application and figuring out how it's used. But I did call the developer responsible for the code in question and his immediate reaction was "Yeah, that doesn't surprise me that it says that; but that code calls SignalAndWait [which I could see for myself, thanks to ANTS], which doesn't use any CPU, it just sits there waiting for something to do." He advised me to simply ignore that code and look for anything ELSE I could find. My question: is it true that SignalAndWait requires NO CPU overhead (and if so, how is this possible?), and is it reasonable that a performance profiler would view it as taking up 99% CPU time? I find this particularly curious because, if it's at 99%, that would suggest that our application is often idle, wouldn't it? And yet its performance has become rather sluggish lately. Like I said, I really am just a beginner when it comes to this tool, and I don't know anything about the WaitHandle class. So ANY information to help me to understand what's going on here would be appreciated.

    Read the article

  • Setting LD_LIBRARY_PATH in Apache PassEnv/SetEnv still cant find library

    - by DoMoreASAP
    I am trying to test the CyberSource third-party implementation. I was able to get the test files running fine from the command line, which requires that on Linux I export the path to the payment libraries to LD_LIBRARY_PATH. To try to test this on my server I have created the Apache config below:

        <VirtualHost 127.0.0.1:12345>
            AddHandler cgi-script .cgi
            AddHandler fcgid-script .php .fcgi
            FCGIWrapper /my/path/to/php_fcgi/bin/php-cgi .php
            AddType text/html .shtml
            AddOutputFilter INCLUDES .shtml
            DocumentRoot /my/path/to/cybersource/simapi-php-5.0.1/
            ProxyPreserveHost on
            <Directory /my/path/to/cybersource/simapi-php-5.0.1>
                SetEnv LD_LIBRARY_PATH /my/path/to/cybersource/LinkedLibraries/lib/
                AllowOverride all
                Options +Indexes
                IndexOptions Charset=UTF-8
            </Directory>
        </VirtualHost>

    I have set the env variable there with the SetEnv command, which seems to be working when I run a page that prints <?php phpinfo(); ?>. However, the test script still won't work when called through the browser. Apache says:

        tail /my/apache/error_log
        [Tue Mar 30 23:11:46 2010] [notice] mod_fcgid: call /my/path/to/cybersource/index.php with wrapper /my/path/to/cybersource/php_fcgi/bin/php-cgi
        PHP Warning: PHP Startup: Unable to load dynamic library '/my/path/to/cybersource/extensionsdir/php5_cybersource.so' - libspapache.so: cannot open shared object file: No such file or directory in Unknown on line 0

    So it can't find the linked file libspapache.so, even though it is in the LD_LIBRARY_PATH that is supposedly defined. I really appreciate the help, thanks so much.
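
    One guess worth checking: SetEnv only populates the per-request environment handed to scripts, but the dynamic linker resolves libspapache.so when mod_fcgid spawns the php-cgi wrapper, which happens before SetEnv applies. mod_fcgid has its own directive for variables that must exist when the wrapper process starts, so something along these lines (the directive name depends on the mod_fcgid version) may be what's missing:

        # Inside the VirtualHost; current mod_fcgid calls this FcgidInitialEnv,
        # older releases used DefaultInitEnv.
        FcgidInitialEnv LD_LIBRARY_PATH /my/path/to/cybersource/LinkedLibraries/lib/

    Exporting LD_LIBRARY_PATH in the environment Apache itself starts from (e.g. its envvars or sysconfig file) is the equivalent fix for non-fcgid setups.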

    Read the article

  • Bind to a method in WPF?

    - by Cameron MacFarland
    How do you bind to an object's method in this scenario in WPF?

        public class RootObject
        {
            public string Name { get; }
            public ObservableCollection<ChildObject> GetChildren() {...}
        }

        public class ChildObject
        {
            public string Name { get; }
        }

    XAML:

        <TreeView ItemsSource="some list of RootObjects">
          <TreeView.Resources>
            <HierarchicalDataTemplate DataType="{x:Type data:RootObject}" ItemsSource="???">
              <TextBlock Text="{Binding Path=Name}" />
            </HierarchicalDataTemplate>
            <HierarchicalDataTemplate DataType="{x:Type data:ChildObject}">
              <TextBlock Text="{Binding Path=Name}" />
            </HierarchicalDataTemplate>
          </TreeView.Resources>
        </TreeView>

    Here I want to bind to the GetChildren method on each RootObject of the tree.

    EDIT: Binding to an ObjectDataProvider doesn't seem to work because I'm binding to a list of items, and the ObjectDataProvider needs either a static method, or it creates its own instance and uses that. For example, using Matt's answer I get:

        System.Windows.Data Error: 33 : ObjectDataProvider cannot create object; Type='RootObject'; Error='Wrong parameters for constructor.'
        System.Windows.Data Error: 34 : ObjectDataProvider: Failure trying to invoke method on type; Method='GetChildren'; Type='RootObject'; Error='The specified member cannot be invoked on target.' TargetException:'System.Reflection.TargetException: Non-static method requires a target.
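
    One low-tech way around it, assuming RootObject can be touched: expose the method through a read-only property and point the template at that, which keeps the XAML a plain {Binding}. A sketch (the property name is made up):

        using System.Collections.ObjectModel;

        public class RootObject
        {
            public string Name { get; }

            // Hypothetical wrapper property: HierarchicalDataTemplate.ItemsSource can
            // bind to this directly, while the method stays available for other callers.
            public ObservableCollection<ChildObject> Children
            {
                get { return GetChildren(); }
            }

            public ObservableCollection<ChildObject> GetChildren()
            {
                // existing implementation ...
                return new ObservableCollection<ChildObject>();
            }
        }

    The template's ItemsSource then becomes ItemsSource="{Binding Children}" in place of the "???".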

    Read the article

  • An alternative to reading input from Java's System.in

    - by dvanaria
    I'm working on the UVa Online Judge problem set archive as a way to practice Java, and as a way to practice data structures and algorithms in general. They give an example input file to submit to the online judge to use as a starting point (it's the solution to problem 100). Input from the standard input stream (java.lang.System.in) is required as part of any solution on this site, but I can't understand the implementation of reading from System.in they give in their example solution. It's true that the input file could consist of any variation of integers, strings, etc., but every solution program requires reading basic lines of text input from System.in, one line at a time. There has to be a better (simpler and more robust) method of gathering data from the standard input stream in Java than this:

        public static String readLn(int maxLg) {
            byte lin[] = new byte[maxLg];
            int lg = 0, car = -1;
            String line = "";
            try {
                while (lg < maxLg) {
                    car = System.in.read();
                    if ((car < 0) || (car == '\n')) {
                        break;
                    }
                    lin[lg++] += car;
                }
            } catch (java.io.IOException e) {
                return (null);
            }
            if ((car < 0) && (lg == 0)) {
                return (null); // eof
            }
            return (new String(lin, 0, lg));
        }

    I'm really surprised by this. It looks like something pulled directly from K&R's "The C Programming Language" (a great book regardless), minus the access level modifier and exception handling, etc. Even though I understand the implementation, it just seems like it was written by a C programmer and bypasses most of Java's object-oriented nature. Isn't there a better way to do this, using the StringTokenizer class, or maybe using the split method of String or the java.util.regex package instead?
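
    For line-oriented input like the judge's, the usual standard-library idiom is to wrap System.in once and read whole lines; a sketch of that route (the tokenising line is just an example of what might be done with each line):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class Main {
            public static void main(String[] args) throws IOException {
                BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
                String line;
                while ((line = in.readLine()) != null) {          // readLine() returns null at EOF
                    String[] tokens = line.trim().split("\\s+");  // split on whitespace if needed
                    // ... process tokens, e.g. Integer.parseInt(tokens[0]) ...
                }
            }
        }

    java.util.Scanner is the other common choice, at the cost of some speed on very large inputs.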

    Read the article

  • sencha dataitem datamap setItems

    - by user1795667
    I'm trying to follow the kitten example given here http://www.sencha.com/blog/dive-into-dataview-with-sencha-touch-2-beta-2#comment_form and I have complex components in which one of the property of my data is a list of objects. And I do find a method for setting a list of objects which is setItems however it does not seem to work. My object array is my model MyApp.Model.Sponsor. Could anyone suggest what I'm missing to get this working? Ext.define('MyListItem', { extend: 'Ext.dataview.component.DataItem', requires: ['Ext.Button','Ext.Img', 'MyApp.model.Sponsors', 'MyApp.model.Sponsor'], xtype: 'mylistitem', config: { sponsor: true, dataMap: { getSponsor: { setItems: 'sponsor' } } }, applySponsor: function(config) { // I put an alert here to see if I get getSponsor() but the object I get here is undefined alert(this.getSponsor()); return Ext.factory(config, MyApp.model.Sponsor, this.getSponsor()); }, updateSponsor: function(newNameButton, oldNameButton) { if (oldNameButton) { this.remove(oldNameButton); } if (newNameButton) { this.add(newNameButton); } }, onSponsorTap: function(button, e) { var sponsors = record.get('sponsor'); //my specific action } }); Ext.define('MyApp.model.Sponsors', { extend: 'Ext.data.Model', xtype:'Sponsors_m', config: { fields: [ {name: 'level', type: 'auto'}, {name: 'id', type: 'int'}, {name: 'sponsor', type: 'Sponsor'} ] } }); Ext.define('MyApp.model.Sponsor', { extend: 'Ext.data.Model', xtype:'Sponsor_m', config: { fields: [ {name: 'name', type: 'auto'}, {name: 'image', type: 'auto'}, {name: 'url', type: 'auto'}, {name: 'description', type: 'auto'} ] } });

    Read the article

  • Backend raising (INotify)PropertyChanged events to all connected clients?

    - by Jörg Battermann
    One of our 'frontend' developers keeps requesting that we backend developers have the backend notify all connected clients (it's a client/server environment) of changes to objects. As in: whenever one user makes a change, all other connected clients must be notified immediately of the change. At the moment our architecture does not have a notification system of that kind, and we don't have a pub/sub model for explicitly chosen objects (e.g. the one the frontend is currently implementing), which would make sense in such a use case IMHO, but obviously requires extra implementation. However, I thought frontends typically check for locks to handle concurrent user changes on the same object, and rather pull for changes / load on demand and in the background, instead of the backend pushing all changes for all objects to all clients constantly, which seems rather excessive to me. However, it is being argued that e.g. the MS Entity Framework does in fact publish (INotify)PropertyChanged not only for local changes, but for all such changes including those from other client connections; I have found no proof or details regarding this, though. Can anyone shed some light on this? Do other ORMs etc. provide broadcast (INotify)PropertyChanged events on entities?

    Read the article

  • What is the best "forgot my password" method?

    - by Edward Tanguay
    I'm programming a community website. I want to build a "forgot my password" feature. Looking around at different sites, I've found they employ one of three options: send the user an email with a link to a unique, hidden URL that allows him to change his password (Gmail and Amazon) send the user an email with a new, randomly generated password (Wordpress) send the user his current password (www.teach12.com) Option #3 seems the most convenient to the user but since I save passwords as an MD5 hash, I don't see how option #3 would be available to me since MD5 is irreversible. This also seems to be insecure option since it means that the website must be saving the password in clear text somewhere, and at the least the clear-text password is being sent over insecure e-mail to the user. Or am I missing something here? So if I can't do option #1, option #2 seems to be the simplest to program since I just have to change the user's password and send it to him. Although this is somewhat insecure since you have to have a live password being communicated via insecure e-mail. However, this could also be misused by trouble-makers to pester users by typing in random e-mails and constantly changing passwords of various users. Option #1 seems to be the most secure but requires a little extra programming to deal with a hidden URL that expires etc., but it seems to be what the big sites use. What experience have you had using/programming these various options? Are there any options I've missed?
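
    For scale, option #1 only needs a few more lines than option #2; a rough PHP sketch (column names, the mail wording and the random source are all placeholders, and on older PHP random_bytes() would be swapped for openssl_random_pseudo_bytes()):

        <?php
        // 1. Generate an unguessable token and store only a hash of it, with an expiry.
        $token = bin2hex(random_bytes(32));
        $stmt  = $pdo->prepare(
            "UPDATE users SET reset_token_hash = ?, reset_expires = DATE_ADD(NOW(), INTERVAL 1 HOUR)
             WHERE email = ?");
        $stmt->execute(array(hash('sha256', $token), $email));

        // 2. Email a one-time link containing the raw token; the password itself never travels.
        mail($email, 'Password reset',
             "Visit https://example.com/reset.php?token=$token to choose a new password.");

        // 3. reset.php looks up hash('sha256', $_GET['token']), checks the expiry,
        //    lets the user set a new password, then clears the token.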

    Read the article

  • Password Cracking Windows Accounts

    - by Kevin
    At work we have laptops with encrypted harddrives. Most developers here (on occasion I have been guilty of it too) leave their laptops in hibernate mode when they take them home at night. Obviously, Windows (i.e. there is a program running in the background which does it for windows) must have a method to unencrypt the data on the drive, or it wouldn't be able to access it. That being said, I always thought that leaving a windows machine on in hibernate mode in a non-secure place (not at work on a lock) is a security threat, because someone could take the machine, leave it running, hack the windows accounts and use it to encrypt the data and steal the information. When I got to thinking about how I would go about breaking into the windows system without restarting it, I couldn't figure out if it was possible. I know it is possible to write a program to crack windows passwords once you have access to the appropriate file(s). But is it possible to execute a program from a locked Windows system that would do this? I don't know of a way to do it, but I am not a Windows expert. If so, is there a way to prevent it? I don't want to expose security vulnerabilities about how to do it, so I would ask that someone wouldn't post the necessary steps in details, but if someone could say something like "Yes, it's possible the USB drive allows arbitrary execution," that would be great! EDIT: The idea being with the encryption is that you can't reboot the system, because once you do, the disk encryption on the system requires a login before being able to start windows. With the machine being in hibernate, the system owner has already bypassed the encryption for the attacker, leaving windows as the only line of defense to protect the data.

    Read the article

  • geocoder.getFromLocationName returns only null

    - by test
    Hello, I have been going out of my mind for the last two days over an IllegalArgumentException I receive in Android code when trying to get coordinates out of an address, or the reverse, getting an address out of longitude and latitude. This is the code; I cannot see an error in it. It is a standard snippet that is easily found with a Google search:

        public GeoPoint determineLatLngFromAddress(Context appContext, String strAddress) {
            Geocoder geocoder = new Geocoder(appContext, Locale.getDefault());
            GeoPoint g = null;
            try {
                System.out.println("str addres: " + strAddress);
                List<Address> addresses = geocoder.getFromLocationName(strAddress, 5);
                if (addresses.size() > 0) {
                    g = new GeoPoint((int) (addresses.get(0).getLatitude() * 1E6),
                                     (int) (addresses.get(0).getLongitude() * 1E6));
                }
            } catch (Exception e) {
                throw new IllegalArgumentException("locationName == null");
            }
            return g;
        }

    These are the permissions from the manifest.xml file:

        <uses-permission android:name="android.permission.INTERNET" />
        <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
        <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
        <uses-permission android:name="android.permission.ACCESS_MOCK_LOCATION" />

    I do have the Google API key declared too:

        <uses-library android:name="com.google.android.maps" />

    From the code snippet above, geocoder is not null, and neither is the address or appContext, and I stumble here:

        geocoder.getFromLocationName(strAddress, 5);

    I did a lot of Google searching and found nothing that worked. The most important piece of information I found is this: "The Geocoder class requires a backend service that is not included in the core android framework." So I am confused now. What do I have to call, import, add, or use in code to make this work? I am using Google API 2.2, API level 8. If somebody has found a solution for this, or a pointer to documentation, something that I didn't discover, please let us know. Thank you for your time.
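
    Given that quoted note about the missing backend service, one hedged fallback is to call Google's public Geocoding web service over HTTP instead of the platform Geocoder. A sketch (assumes org.json, java.net and the Maps GeoPoint already used above; error handling and null checks are omitted for brevity):

        // Fallback when the platform Geocoder has no backend on the device/emulator.
        private GeoPoint geocodeViaHttp(String strAddress) throws Exception {
            String url = "http://maps.googleapis.com/maps/api/geocode/json?sensor=false&address="
                    + URLEncoder.encode(strAddress, "UTF-8");
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();

            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) sb.append(line);
            in.close();

            // results[0].geometry.location holds the lat/lng pair in the JSON response.
            JSONObject location = new JSONObject(sb.toString())
                    .getJSONArray("results").getJSONObject(0)
                    .getJSONObject("geometry").getJSONObject("location");
            return new GeoPoint((int) (location.getDouble("lat") * 1E6),
                                (int) (location.getDouble("lng") * 1E6));
        }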

    Read the article

  • htaccess mod_rewrite check file/directory existence, else rewrite?

    - by devians
    I have a very heavy htaccess mod_rewrite file that runs my application. As we sometimes take over legacy websites, I sometimes need to support old urls to old files, where my application processes everything post htaccess. My ultimate goal is to have a 'Demilitarized Zone' for old file structures, and use mod rewrite to check for existence there before pushing to the application. This is pretty easy to do with files, by using: RewriteCond %{IS_SUBREQ} true RewriteRule .* - [L] RewriteCond %{ENV:REDIRECT_STATUS} 200 RewriteRule .* - [L] RewriteCond Public/DMZ/$1 -F [OR] RewriteRule ^(.*)$ Public/DMZ/$1 [QSA,L] This allows pseudo support for relative urls by not hardcoding my base path (I cant assume I will ever be deployed in document root) anywhere and using subrequests to check for file existence. Works fine if you know the file name, ie http://domain.com/path/to/app/legacyfolder/index.html However, my legacy urls are typically http://domain.com/path/to/app/legacyfolder/ Mod_Rewrite will allow me to check for this by using -d, but it needs the complete path to the directory, ie RewriteCond Public/DMZ/$1 -F [OR] RewriteCond /var/www/path/to/app/Public/DMZ/$1 -d RewriteRule ^(.*)$ Public/DMZ/$1 [QSA,L] I want to avoid the hardcoded base path. I can see one possible solutions here, somehow determining my path and attaching it to a variable [E=name:var] and using it in the condition. Another option is using -U, but the tricky part is stopping it from hijacking every other request when they should flow through, since -U is really easy to satisfy. Any implementation that allows me to existence check a directory is more than welcome. I am not interested in using RewriteBase, as that requires my htaccess to have a hardcoded base path.
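
    One hedged sketch of the "[E=name:var]" idea mentioned above: derive the per-request base path into an environment variable, then build an absolute filesystem path for the -f/-d tests from DOCUMENT_ROOT plus that base, so nothing is hardcoded (untested against the rest of the ruleset, so ordering relative to the subrequest rules may need adjusting):

        # Recover the folder this .htaccess lives under by comparing REQUEST_URI
        # with the path the rule captured, and stash it in BASE.
        RewriteCond %{REQUEST_URI}::$1 ^(/.+)/(.*)::\2$
        RewriteRule ^(.*) - [E=BASE:%1]

        # Serve straight from the DMZ when the requested file or directory exists there.
        RewriteCond %{DOCUMENT_ROOT}%{ENV:BASE}/Public/DMZ/$1 -f [OR]
        RewriteCond %{DOCUMENT_ROOT}%{ENV:BASE}/Public/DMZ/$1 -d
        RewriteRule ^(.*)$ Public/DMZ/$1 [QSA,L]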

    Read the article

  • Create Jinja2 macros that put content in separate places

    - by Brian M. Hunt
    I want to create a table of contents and endnotes in a Jinja2 template. How can one accomplish these tasks? For example, I want to have a template as follows: {% block toc %} {# ... the ToC goes here ... #} {% endblock %} {% include "some other file with content.jnj" %} {% block endnotes %} {# ... the endnotes go here ... #} {% endblock %} Where the some other file with content.jnj has content like this: {% section "One" %} Title information for Section One (may be quite long); goes in Table of Contents ... Content of section One {% section "Two" %} Title information of Section Two (also may be quite long) <a href="#" id="en1">EndNote 1</a> <script type="text/javsacript">...(may be reasonably long) </script> {# ... Everything up to here is included in the EndNote #} Where I say "may be quite/reasonably long" I mean to say that it can't reasonably be put into quotes as an argument to a macro or global function. I'm wondering if there's a pattern for this that may accommodate this, within the framework of Jinja2. My initial thought is to create an extension, so that one can have a block for sections and end-notes, like-so: {% section "One" %} Title information goes here. {% endsection %} {% endnote "one" %} <a href="#">...</a> <script> ... </script> {% endendnote %} Then have global functions (that pass in the Jinja2 Environment): {{ table_of_contents() }} {% include ... %} {{ endnotes() }} However, while this will work for endnotes, I'd presume it requires a second pass by something for the table of contents. Thank you for reading. I'd be much obliged for your thoughts and input. Brian

    Read the article

  • Unit Testing - Validation of ViewModel ASP.NET MVC 2

    - by dean nolan
    I am currently unit testing a service that adds users to a repository. I am using dependency injection to test against a fake repository. The repository has a method CreateUser(User user) which just adds the user to the database, or in this case to a List of Users. The logic for the creation is in the UserServices class. The application has a form for creating a user that requires some properties, such as name and address. This is an MVC 2 app and I will be using the new validation based on data annotations. This makes me wonder about a few things:

    1) Should I annotate a POCO object that will map to the database, or should I create a specific view model that has these annotations and pass this data to the UserServices class?
    2) Should the UserServices class also check this data? Would I be best off constructing a User out of the view model and passing this into the service as a parameter?
    3) The actual unit testing would depend on 2): I either populate a User object and pass that in, or I pass a large list of strings to the method CreateUser.

    Writing this out, I get a basic idea that I should probably annotate the view model only, pass in a User (constructed by the view model if the data is valid), and also just construct the User in the unit test. Is this the best way to go?
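
    Leaning the same way as the closing paragraph: the annotations would sit on a view model the form binds to, and the service receives an already-validated User built from it. A sketch of the view-model half (the names are examples only):

        using System.ComponentModel.DataAnnotations;

        public class CreateUserViewModel
        {
            [Required(ErrorMessage = "Name is required")]
            [StringLength(100)]
            public string Name { get; set; }

            [Required(ErrorMessage = "Address is required")]
            public string Address { get; set; }
        }

        // In the controller: ModelState.IsValid reflects the annotations; only when it is
        // true do you map the view model to the domain User and call UserServices.CreateUser(user).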

    Read the article

  • HTTP Basic Authentication with HTTPService Objects in Adobe Flex/AIR

    - by Bob Somers
    I'm trying to request a HTTP resource that requires basic authorization headers from within an Adobe AIR application. I've tried manually adding the headers to the request, as well as using the setRemoteCredentials() method to set them, to no avail. Here's the code: <mx:Script> <![CDATA[ import mx.rpc.events.ResultEvent; import mx.rpc.events.FaultEvent; private function authAndSend(service:HTTPService):void { service.setRemoteCredentials('someusername', 'somepassword'); service.send(); } private function resultHandler(event:ResultEvent):void { apiResult.text = event.result.toString(); } private function resultFailed(event:FaultEvent):void { apiResult.text = event.fault.toString(); } ]]> </mx:Script> <mx:HTTPService id="apiService" url="https://mywebservice.com/someFileThatRequiresBasicAuth.xml" resultFormat="text" result="resultHandler(event)" fault="resultFailed(event)" /> <mx:Button id="apiButton" label="Test API Command" click="authAndSend(apiService)" /> <mx:TextArea id="apiResult" /> However, a standard basic auth dialog box still pops up prompting the user for their username and password. I have a feeling I'm not doing this the right way, but all the info I could find (Flex docs, blogs, Google, etc.) either hasn't worked or was too vague to help. Any black magic, oh Flex gurus? Thanks. EDIT: Changing setRemoteCredentials() to setCredentials() yields the following ActionScript error: [MessagingError message='Authentication not supported on DirectHTTPChannel (no proxy).'] EDIT: Problem solved, after some attention from Adobe. See the posts below for a full explanation. This code will work for HTTP Authentication headers of arbitrary length. import mx.utils.Base64Encoder; private function authAndSend(service:HTTPService):void { var encoder:Base64Encoder = new Base64Encoder(); encoder.insertNewLines = false; // see below for why you need to do this encoder.encode("someusername:somepassword"); service.headers = {Authorization:"Basic " + encoder.toString()}; service.send(); }

    Read the article

  • Building error in Eclipse with build.xml

    - by Zachary
    I am working on a Java project with Eclipse. This project requires a second project (not mine), named sams in its build-path. The sams is provided with a build.xml file and it should generate some code using Apache CXF when building it. When I use Apache ANT on Eclipse and run the cxf.generated command from its build file I get the following error: Buildfile: C:\Docs\ZacRocha\Desktop\sams\build.xml cxf.generated: [echo] Generating code using Apache CXF wsdl2java... [java] 16-Jun-2010 16:04:08 org.apache.cxf.binding.corba.CorbaConduit prepare [java] SEVERE: Could not resolve target object [java] 16-Jun-2010 16:04:08 org.apache.cxf.binding.corba.CorbaConduit prepare [java] SEVERE: Could not resolve target object [java] WSDLToJava Error: org.apache.cxf.wsdl11.WSDLRuntimeException: Fail to create wsdl definition from : file:/C:/Docs/ZacRocha/Desktop/sams/$%7barchivesoftware.wsdl%7d [java] Caused by : WSDLException: faultCode=PARSER_ERROR: Problem parsing 'file:/C:/Docs/ZacRocha/Desktop/sams/$%7barchivesoftware.wsdl%7d'.: java.io.FileNotFoundException: C:\Docs\ZacRocha\Desktop\sams\${archivesoftware.wsdl} (The system cannot find the file specified) [java] 16-Jun-2010 16:04:10 org.apache.cxf.binding.corba.CorbaConduit prepare [java] SEVERE: Could not resolve target object [java] 16-Jun-2010 16:04:10 org.apache.cxf.binding.corba.CorbaConduit prepare [java] SEVERE: Could not resolve target object [java] WSDLToJava Error: org.apache.cxf.wsdl11.WSDLRuntimeException: Fail to create wsdl definition from : file:/C:/Docs/ZacRocha/Desktop/sams/$%7barchivehardware.wsdl%7d [java] Caused by : WSDLException: faultCode=PARSER_ERROR: Problem parsing 'file:/C:/Docs/ZacRocha/Desktop/sams/$%7barchivehardware.wsdl%7d'.: java.io.FileNotFoundException: C:\Docs\ZacRocha\Desktop\sams\${archivehardware.wsdl} (The system cannot find the file specified) BUILD SUCCESSFUL Total time: 4 seconds I am used to program on Eclipse and I know very little about building with Apache ANT. Can someone tell me where exactly the problem may be? Thanks in advance!

    Read the article
