Search Results

Search found 8893 results on 356 pages for 'stored'.

  • Weird RecordStore behavior when RecordStoreFullException occurs

    - by Michael P
    Hello everyone! I'm developing a small J2ME application that displays bus stop timetables - they are stored as records in MIDP RecordStores. Sometimes a record cannot fit into a single RecordStore; especially on a record update - using the setRecord method - a RecordStoreFullException occurs. I catch the exception and try to write the record to a new RecordStore, deleting the previous one from the old RecordStore. Everything works fine except for deleting the record from the RecordStore where the RecordStoreFullException occurred. If I make an attempt to delete the record that could not be updated, another exception of type InvalidRecordIDException is thrown. This is weird and undocumented in the MIDP javadoc. I have tested it on Sun WTK 2.5.2, MicroEdition SDK 3.0 and the Nokia Series 40 SDK. Furthermore I created code that reproduces this strange behaviour:

        RecordStore rms = null;
        int id = 0;
        try {
            rms = RecordStore.openRecordStore("Test", true);
            byte[] raw = new byte[192*10024]; // Big enough to cause RecordStoreFullException
            id = rms.addRecord(raw, 0, 160);
            rms.setRecord(id, raw, 0, raw.length);
        } catch (Exception e) {
            try {
                int count = rms.getNumRecords();
                RecordEnumeration en = rms.enumerateRecords(null, null, true);
                count = en.numRecords();
                while (en.hasNextElement()) {
                    System.out.println("NextID: " + en.nextRecordId());
                }
                rms.deleteRecord(id); // this won't work!
                rms.setRecord(id, new byte[5], 0, 5); // this won't work too!
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }

    I added the extra enumeration code to show other weird behavior - when the RecordStoreFullException occurs, the count variable will be set to 1 (if the RMS was empty) by both methods - getNumRecords and numRecords. System.out.println will produce NextID: 0! This is not acceptable because a record ID cannot be 0! Could someone explain this strange behavior? Sorry for my bad English.

  • ASP.NET MVC Session Expiration

    - by Andrew Flanagan
    We have an internal ASP.NET MVC application that requires a logon. Log on works great and does what's expected. We have a session expiration of 15 minutes. After sitting on a single page for that period of time, the user has lost the session. If they attempt to refresh the current page or browse to another, they will get a log on page. We keep their request stored so once they've logged in they can continue on to the page that they've requested. This works great.

    However, my issue is that on some pages there are AJAX calls. For example, a user may fill out part of a form, wander off and let their session expire. When they come back, the screen is still displayed. If they simply fill in a box (which will make an AJAX call), the AJAX call will return the logon page (inside whatever div the AJAX call should have returned the actual results into). This looks horrible. I think that the solution is to make the page itself expire (so that when a session is terminated, they are automatically returned to the logon screen without any action on their part). However, I'm wondering if there are opinions/ideas on how best to implement this, specifically in regards to best practices in ASP.NET MVC.
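
    As an illustration of handling the AJAX case, below is a minimal sketch (assuming ASP.NET MVC 2 or later; the attribute name is purely illustrative and not from the question) of returning a bare 401 for unauthorized AJAX requests so client-side script can redirect the whole window instead of rendering the logon page inside a div:

        using System.Web.Mvc;

        // Hypothetical name; HandleUnauthorizedRequest and HttpStatusCodeResult
        // are available from ASP.NET MVC 2 onwards.
        public class AjaxAwareAuthorizeAttribute : AuthorizeAttribute
        {
            protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
            {
                if (filterContext.HttpContext.Request.IsAjaxRequest())
                {
                    // A plain 401 lets the calling script detect the expired session
                    // and redirect the browser to the logon page itself.
                    filterContext.Result = new HttpStatusCodeResult(401);
                }
                else
                {
                    base.HandleUnauthorizedRequest(filterContext);
                }
            }
        }

    The client-side AJAX error handler would then watch for the 401 status and perform a full-page redirect rather than injecting the response into the page.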

  • Preventing Flex application caching in browser (multiple modules)

    - by Simon_Weaver
    I have a Flex application with multiple modules. When I redeploy the application I was finding that modules (which are deployed as separate swf files) were being cached in the browser and the new versions weren't being loaded. So I tried the age-old trick of adding ?version=xxx to all the modules when they are loaded. The value xxx is a global parameter which is actually stored in the host html page:

        var moduleSection:ModuleLoaderSection;
        moduleSection = new ModuleLoaderSection();
        moduleSection.visible = false;
        moduleSection.moduleName = moduleName + "?version=" + MySite.masterVersion;

    In addition I needed to add ?version=xxx to the main .swf that was being loaded. Since this is done by HTML I had to do this by modifying my AC_OETags.js file as below:

        function AC_FL_RunContent(){
            var ret = AC_GetArgs(
                arguments,
                ".swf?mv=" + getMasterVersion(),
                "movie",
                "clsid:d27cdb6e-ae6d-11cf-96b8-444553540000",
                "application/x-shockwave-flash"
            );
            AC_Generateobj(ret.objAttrs, ret.params, ret.embedAttrs);
        }

    This is all fine and works great. I just have a hard time believing that Adobe doesn't already have a way to handle this. Given that Flex is being targeted at designing modular applications for business, I find it especially surprising. What do other people do? I need to make sure my application reloads correctly even if someone has selected 'once per session' for their browser cache checking policy.

  • iPad: Move UIView with animation, then move it back

    - by Joel
    I have the following code in a UIView subclass that will move it off the screen with an animation:

        float currentX = xposition; // variables that store where the UIView is located
        float currentY = yposition;
        float targetX = -5.0f - self.width;
        float targetY = -5.0f - self.height;
        moveX = targetX - currentX; // stored as an instance variable so I can hang on to it and move it back
        moveY = targetY - currentY;
        [UIView beginAnimations:@"UIBase Hide" context:nil];
        [UIView setAnimationDuration:duration];
        self.transform = CGAffineTransformMakeTranslation(moveX, moveY);
        [UIView commitAnimations];

    It works just great. I've changed targetX/Y to 0 to see if it does indeed move to the specified point, and it does. The problem is when I try to move the view back to where it was originally:

        moveX = -moveX;
        moveY = -moveY;
        [UIView beginAnimations:@"UIBase Show" context:nil];
        [UIView setAnimationDuration:duration];
        self.transform = CGAffineTransformMakeTranslation(moveX, moveY);
        [UIView commitAnimations];

    This puts the UIView about 3x as far away as it should be, so the view usually gets put off the right side of the screen (depending on where it is). Is there something about the CGAffineTransformMakeTranslation function that I should know? I read the documentation, and from what I can tell, it should be doing what I'm expecting it to do. Anyone have a suggestion of how to fix this? Thanks.

  • Database Change Management - Setup for Initial Create Scripts, Subsequent Migration Scripts

    - by Martin Aatmaa
    I've got a database change management workflow in place. It's based on SQL scripts (so, it's not a managed code-based solution). The basic setup looks like this:

        Initial/
            Generate Initial Schema.sql
            Generate Initial Required Data.sql
            Generate Initial Test Data.sql
        Migration/
            0001_MigrationScriptForChangeOne.sql
            0002_MigrationScriptForChangeTwo.sql
            ...

    The process to spin up a database is then to run all the Initial scripts, and then run the sequential Migration scripts. A tool takes care of the versioning requirements, etc. My question is: in this kind of setup, is it useful to also maintain this:

        Current/
            Stored Procedures/
                dbo.MyStoredProcedureCreateScript.sql
                ...
            Tables/
                dbo.MyTableCreateScript.sql
                ...
            ...

    By "this" I mean a directory of scripts (separated by object type) that represents the create scripts for spinning up the current/latest version of the database. For some reason, I really like the idea, but I can't concretely justify its need. Am I missing something?

    The advantages would be:
    - For dev and source control, we would have the same object-per-file setup that we're used to
    - For deployment, we can spin up a new DB instance to the latest version either by running the Initial+Migrate scripts, or by running the scripts from Current/
    - For dev, we do not need a DB instance running in order to do development; we can do "offline" development on the Current/ folder

    The disadvantages would be:
    - For each change, we need to update the scripts in the Current/ folder, as well as create a Migration script (in the Migration/ folder)

    Thanks in advance for any input!

  • JPA Lookup Hierarchy Mapping

    - by Andy Trujillo
    Given a lookup table:

        | ID | TYPE         | CODE    | DESCRIPTION          |
        | 1  | ORDER_STATUS | PENDING | PENDING DISPOSITION  |
        | 2  | ORDER_STATUS | OPEN    | AWAITING DISPOSITION |
        | 3  | OTHER_STATUS | OPEN    | USED BY OTHER ENTITY |

    If I have an entity:

        @MappedSuperclass
        @Table(name="LOOKUP")
        @Inheritance(strategy=InheritanceType.SINGLE_TABLE)
        @DiscriminatorColumn(name="TYPE", discriminatorType=DiscriminatorType.STRING)
        public abstract class Lookup {
            @Id
            @Column(name="ID")
            int id;

            @Column(name="TYPE")
            String type;

            @Column(name="CODE")
            String code;

            @Column(name="DESC")
            String description;
            ...
        }

    Then a subclass:

        @Entity
        @DiscriminatorValue("ORDER_STATUS")
        public class OrderStatus extends Lookup { }

    The expectation is to map it:

        @Entity
        @Table(name="ORDERS")
        public class OrderVO {
            ...
            @ManyToOne
            @JoinColumn(name="STATUS", referencedColumnName="CODE")
            OrderStatus status;
            ...
        }

    So that the CODE is stored in the database. I expected that if I wrote:

        OrderVO o = entityManager.find(OrderVO.class, 123);

    the SQL generated using OpenJPA would look something like:

        SELECT t0.id, ..., t1.CODE, t1.TYPE, t1.DESC
        FROM ORDERS t0
        LEFT OUTER JOIN LOOKUP t1
            ON t0.STATUS = t1.CODE AND t1.TYPE = 'ORDER_STATUS'
        WHERE t0.ID = ?

    But the actual SQL that gets generated is missing the AND t1.TYPE = 'ORDER_STATUS'. This causes a problem when the CODE is not unique. Is there a way to have the discriminator included on joins, or am I doing something wrong here?

  • Encrypting using RSA via COM Interop = "The requested operation requires delegation to be enabled on the machine"

    - by Mr AH
    Hi Guys, So I've got this little static method in a .Net class which takes a string, uses some stored public key and returns the encrypted version of that string. This is basically so some user-entered data can be saved encrypted, then retrieved and decrypted at a later date. Pretty basic stuff, and the unit test works fine. However, part of the application is in classic ASP. This then uses a COM visible version of the class to go off and invoke the method on the real class and return the same string to the COM client (classic ASP). I use this kind of stuff all the time, but in this case we have a major problem. As the method is doing something with RSA keys and has to access certain machine information to do so, we get the error: "The requested operation requires delegation to be enabled on the machine." I've searched around a lot, but can't really understand what this means. I assume I am getting this error via COM but not in the unit test because the unit test runs as me (Administrator) and classic ASP runs as IWAM. Anyone know what I need to do to enable IWAM to do this? Or indeed if this is the real problem here?

  • input type file alternative and file upload best practice

    - by Ioxp
    Background: I am working on a file upload page that will extend an existing web portal. This page will allow an end user to upload files from their local computer to our network (the files will not be stored on the web server, rather on a remote workstation). The end user will have the ability to view the data that they have submitted by hyper-linking the files that have been uploaded on this page.

    Question 1: Is there an ASP.NET alternative to the <input type="file" runat="server" /> HTML tag? The reason for asking is I would rather use an image button and display the file as an ASP.NET label on the portal to keep with a consistent style.

    Question 2: I understand that giving the end user the ability to upload files to the server and then turning around to show them the data that they posted poses a security threat. So far I am using the id.PostedFile.ContentType and the file extension to reject the data if it's not an accepted format (i.e. "text/plain", "application/pdf", "application/vnd.ms-excel", or "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"). Also, the location where the files are uploaded to has a sufficient amount of virus and malware protection, so this is not a concern. What additional steps, from the C# point of view, should I take to ensure that the end user can't take advantage and compromise the system through the files they are allowed to upload?
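
    For reference, a minimal sketch of the whitelist check described in Question 2 (the class and method names are illustrative assumptions; the content types are the ones quoted above, and the extensions are assumed to match them):

        using System.IO;
        using System.Linq;
        using System.Web;

        public static class UploadValidator
        {
            // Content types listed in the question; extensions assumed to correspond to them.
            private static readonly string[] AllowedContentTypes =
            {
                "text/plain", "application/pdf", "application/vnd.ms-excel",
                "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
            };
            private static readonly string[] AllowedExtensions = { ".txt", ".pdf", ".xls", ".xlsx" };

            public static bool IsAccepted(HttpPostedFile postedFile)
            {
                string extension = Path.GetExtension(postedFile.FileName).ToLowerInvariant();
                // Both values come from the client, so treat this only as a first filter,
                // not as proof of what the file really contains.
                return AllowedContentTypes.Contains(postedFile.ContentType)
                    && AllowedExtensions.Contains(extension);
            }
        }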

  • NHibernate query with Projections.Cast to DateTime

    - by stiank81
    I'm experimenting with using a string for storing different kinds of data types in a database. When I do queries I need to cast the strings to the right type in the query itself. I'm using .Net with NHibernate, and was glad to learn that there exists functionality for this. Consider the simple class:

        public class Foo
        {
            public string Text { get; set; }
        }

    I successfully use Projections.Cast to cast to numeric values; e.g. the following query correctly returns all Foos whose Text holds an integer between 1 and 10:

        var result = Session.CreateCriteria<Foo>()
            .Add(Restrictions.Between(
                Projections.Cast(NHibernateUtil.Int32, Projections.Property("Text")), 1, 10))
            .List<Foo>();

    Now if I try using this for DateTime I'm not able to make it work no matter what I try. Why?!

        var date = new DateTime(2010, 5, 21, 11, 30, 00);
        AddFooToDb(new Foo { Text = date.ToString() }); // Will add it to the database...

        var result = Session
            .CreateCriteria<Foo>()
            .Add(Restrictions.Eq(
                Projections.Cast(NHibernateUtil.DateTime, Projections.Property("Text")), date))
            .List<Foo>();
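
    A side note rather than a diagnosis: DateTime.ToString() formats the value using the current culture, so the text stored above may not be in a form the database cast or the comparison expects. A minimal sketch of storing the value in a culture-invariant, sortable format instead (AddFooToDb and Foo are the helpers from the question):

        using System.Globalization;

        // Stores "2010-05-21T11:30:00" regardless of the machine's regional settings.
        var date = new DateTime(2010, 5, 21, 11, 30, 00);
        AddFooToDb(new Foo { Text = date.ToString("s", CultureInfo.InvariantCulture) });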

  • Can I convert an ASCII MD5 hashed password into a Unicode MD5 hashed password?

    - by Jimmy Moo Moo
    Hello, I'm looking for help converting an ASCII MD5 hashed password into a Unicode MD5 hashed password. For example, I'll use the string "password". When it's converted to an ASCII byte array, I get a base64 encoded hash of X03MO1qnZdYdgyfeuILPmQ== When it's converted into a Unicode byte array, I get a base64 encoded hash of sIHb6F4ew//D1OfQInQAzQ==

    All my passwords are stored in an MD5 hash that was applied to an ASCII byte array, but I'm trying to migrate my application's user data to a system that stores passwords in an MD5 hash applied to a Unicode byte array. In case it's not clear, my current system uses the following C# code:

        var passwordBytes = Encoding.ASCII.GetBytes("password");
        var hashAlgorithm = HashAlgorithm.Create("MD5");
        var hashBytes = hashAlgorithm.ComputeHash(passwordBytes);

    The system I'm moving to differs in the first line: it uses Encoding.Unicode.GetBytes. Does anybody know how I can convert my passwords? From X03MO1qnZdYdgyfeuILPmQ== into sIHb6F4ew//D1OfQInQAzQ== I'm guessing the answer is that I can't - the encoding is being done before the hashing - but I thought I'd inquire the bright minds of stackoverflow and see if anybody has a way.
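
    For illustration, a small self-contained sketch of the two hashing variants being contrasted; the two base64 values in the comments are the ones quoted in the question:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        class Md5EncodingDemo
        {
            static string Md5Base64(string password, Encoding encoding)
            {
                using (var md5 = HashAlgorithm.Create("MD5"))
                {
                    byte[] hash = md5.ComputeHash(encoding.GetBytes(password));
                    return Convert.ToBase64String(hash);
                }
            }

            static void Main()
            {
                // ASCII bytes (the existing system): X03MO1qnZdYdgyfeuILPmQ==
                Console.WriteLine(Md5Base64("password", Encoding.ASCII));
                // UTF-16 bytes (the target system):  sIHb6F4ew//D1OfQInQAzQ==
                Console.WriteLine(Md5Base64("password", Encoding.Unicode));
            }
        }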

  • Internet Explorer 8 64bit and Selenium Not working.

    - by chobo2
    I am trying to get Selenium tests to run, yet every time I try to run a test that should run IE I get an error on line 863 of htmlutils.js. It says that I should disable my popup blocker. The thing is, I went to IE Tools -> Turn Off Pop-up Blocker, so it is disabled, and I still get this error. Is there something else I need to disable? I actually don't even know what version of Internet Explorer it is running, since I am using the Windows 7 Pro 64bit version. So when I do use IE I use the 64bit version, but my understanding is that if the site or something like that does not support 64bit it falls back to 32bit. So I'm not sure what I need to do to make it work. These are the lines where it fails:

        function openSeparateApplicationWindow(url, suppressMozillaWarning) {
            // resize the Selenium window itself
            window.resizeTo(1200, 500);
            window.moveTo(window.screenX, 0);
            var appWindow = window.open(url + '?start=true', 'selenium_main_app_window');
            if (appWindow == null) {
                var errorMessage = "Couldn't open app window; is the pop-up blocker enabled?"
                LOG.error(errorMessage);
                throw new Error("Couldn't open app window; is the pop-up blocker enabled?");
            }

    Where is this LOG.error message stored? Maybe I can post that too.

  • Using Freepascal\Lazarus JSON Libraries

    - by Gizmo_the_Great
    I'm hoping for a bit of a "simpletons" demo/explanation for using Lazarus/Freepascal JSON parsing. I've asked a question here but all the replies are "read this", and none of them are really helping me get a grasp because the examples are a bit too in-depth and I'm seeking a very simple example to help me understand how it works.

    In brief, my program reads an untyped binary file in chunks of 4096 bytes. The raw data then gets converted to ASCII and stored in a string. It then goes through the variable looking for certain patterns, which, it turned out, are JSON data structures. I've currently coded the parsing the hard way using Pos and ExtractANSIString etc. But I've since learnt that there are JSON libraries for Lazarus & FPC, namely fcl-json, fpjson, jsonparser, jsonscanner etc.

        https://bitbucket.org/reiniero/fpctwit/src
        http://fossies.org/unix/misc/fpcbuild-2.6.0.tar.gz:a/fpcbuild-2.6.0/fpcsrc/packages/fcl-json/src/
        http://svn.freepascal.org/cgi-bin/viewvc.cgi/trunk/packages/fcl-json/examples/

    However, I still can't quite work out HOW I read my string variable and parse it for JSON data and then access those JSON structures. Can anyone give me a very simple example to help get me going? My code so far (without JSON) is something like this:

        try
          SourceFile.Position := 0;
          while TotalBytesRead < SourceFile.Size do
          begin
            BytesRead := SourceFile.Read(Buffer, sizeof(Buffer));
            inc(TotalBytesRead, BytesRead);
            // A custom function to strip out binary garbage leaving just ASCII readable text
            StringContent := StripNonAsciiExceptCRLF(Buffer);
            if Pos('MySearchValue', StringContent) > 0 then
            begin
              // Do the parsing. This is where I need to do the JSON stuff
              ...

  • Schema objects not visible in SQL Server Management Studio 2008

    - by Germ
    I'm experiencing a weird problem with a SQL login. When I connect to the server in Microsoft SQL Server Management Studio (2008) using this account, I cannot see any of the tables, stored procedures etc. that this account should have access to on a particular database. When I connect to the same server within Visual Studio (2008) with the same account, everything is there. When I connect with the same account on a Virtual Machine, everything is there. I've also had a co-worker connect to the server using the same login and he's able to view everything as well. I use Microsoft SQL Server Management Studio all day connecting to different servers and databases and I've never experienced this problem. Does anyone have any suggestions on how I can diagnose this problem? I've checked to make sure I don't have any table filters etc. There are several databases on this server and I'm able to see the correct tables that this account has access to in the other databases just fine. Running this query lists the tables I'm expecting to see:

        SELECT * FROM INFORMATION_SCHEMA.TABLES

  • How to create this MongoMapper custom data type?

    - by Kapslok
    I'm trying to create a custom MongoMapper data type in RoR 2.3.5 called Translatable:

        class Translatable < String
          def initialize(translation, culture="en")
          end
          def languages
          end
          def has_translation(culture)?
          end
          def self.to_mongo(value)
          end
          def self.from_mongo(value)
          end
        end

    I want to be able to use it like this:

        class Page
          include MongoMapper::Document
          key :title, Translatable, :required => true
          key :content, String
        end

    Then implement like this:

        p = Page.new
        p.title = "Hello"
        p.title(:fr) = "Bonjour"
        p.title(:es) = "Hola"
        p.content = "Some content here"
        p.save

        p = Page.first
        p.languages => [:en, :fr, :es]
        p.has_translation(:fr) => true
        en = p.title => "Hello"
        en = p.title(:en) => "Hello"
        fr = p.title(:fr) => "Bonjour"
        es = p.title(:es) => "Hola"

    In MongoDB I imagine the information would be stored like:

        {
          "_id" : ObjectId("4b98cd7803bca46ca6000002"),
          "title" : { "en" : "Hello", "fr" : "Bonjour", "es" : "Hola" },
          "content" : "Some content here"
        }

    So Page.title is a string that defaults to English (:en) when culture is not specified. I would really appreciate any help.

  • ASP.Net Entity Framework Repository & Linq

    - by Chris Klepeis
    My scenario: This is an ASP.NET 4.0 web app programmed via C#. I implement a repository pattern. My repositories all share the same ObjectContext, which is stored in HttpContext.Items. Each repository creates a new ObjectSet of type E. Here's some code from my repository:

        public class Repository<E> : IRepository<E>, IDisposable
            where E : class
        {
            private DataModelContainer _context = ContextHelper<DataModelContainer>.GetCurrentContext();
            private IObjectSet<E> _objectSet;

            private IObjectSet<E> objectSet
            {
                get
                {
                    if (_objectSet == null)
                    {
                        _objectSet = this._context.CreateObjectSet<E>();
                    }
                    return _objectSet;
                }
            }

            public IQueryable<E> GetQuery()
            {
                return objectSet;
            }

    Let's say I have two repositories, one for states and one for countries, and I want to create a LINQ query against both. Note that I use POCO classes with the Entity Framework; State and Country are two of these POCO classes.

        Repository<State> stateRepo = new Repository<State>();
        Repository<Country> countryRepo = new Repository<Country>();
        IEnumerable<State> states = (from s in stateRepo.GetQuery()
                                     join c in countryRepo.GetQuery() on s.countryID equals c.countryID
                                     select s).ToList();
        Debug.WriteLine(states.First().Country.country)

    Essentially, I want to retrieve the state and the related country entity. The query only returns the state data, and I get a null argument exception on the Debug.WriteLine. Lazy loading is disabled in my .edmx - that's the way I want it.
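
    With lazy loading disabled, one way to materialize the related country in the same query is to project both entities explicitly; a sketch against the repositories above (not the author's code):

        // Selecting both entities makes the query itself load the country data,
        // so no lazy-loaded navigation property is needed afterwards.
        var statesWithCountries = (from s in stateRepo.GetQuery()
                                   join c in countryRepo.GetQuery() on s.countryID equals c.countryID
                                   select new { State = s, Country = c }).ToList();

        Debug.WriteLine(statesWithCountries.First().Country.country);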

  • IIS 6 with wildcard mapping and UNC virtual directory problem

    - by El Che
    Hi. On our production servers (Win 2003 with IIS 6 and load balanced with an F5 BIG-IP), we have a problem when introducing wildcard mapping on IIS 6. We use .NET Framework 3.5 SP1. The issue manifests itself in the server only sometimes serving the images stored in a virtual directory pointing to a UNC path. Sometimes the images are displayed, and sometimes not. Removing the wildcard mapping solved this problem. I will need wildcard mapping on the server for future features, so any help/pointers as to whether this is a known problem will be very helpful. In advance, thanks for any help.

    Edit: The exception it fails with is the following:

        Message: Failed to start monitoring changes to '\\ourFileServer\folder1\thumbnails' because the network
        BIOS command limit has been reached. For more information on this error, please refer to Microsoft
        knowledge base article 810886. Hosting on a UNC share is not supported for the Windows XP Platform.
        Source: System.Web
        Data: System.Collections.ListDictionaryInternal
        TargetSite: Void .ctor(System.Web.DirectoryMonitor, System.String, Boolean, UInt32)
        StackTrace:
            at System.Web.DirMonCompletion..ctor(DirectoryMonitor dirMon, String dir, Boolean watchSubtree, UInt32 notifyFilter)
            at System.Web.DirectoryMonitor.StartMonitoring()
            at System.Web.DirectoryMonitor.StartMonitoringFile(String file, FileChangeEventHandler callback, String alias)
            at System.Web.FileChangesMonitor.StartMonitoringFile(String alias, FileChangeEventHandler callback)
            at System.Web.Configuration.WebConfigurationHost.StartMonitoringStreamForChanges(String streamName, StreamChangeCallback callback)
            at System.Configuration.BaseConfigurationRecord.MonitorStream(String configKey, String configSource, String streamname)
            at System.Configuration.BaseConfigurationRecord.InitConfigFromFile()

  • Maven 2 checkstyle plugin version 2.5 - Problem with configLocation

    - by Nils Schmidt
    Hi there, I am using the Checkstyle plugin in Maven 2. I now want to switch my config file from the default one to a) an online file, or b) a local file. I tried the following two things, which both didn't work. Any suggestions?

    A) Local file, which is directly in my project folder next to the pom.xml:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-checkstyle-plugin</artifactId>
          <configuration>
            <configLocation>checkstyle.xml</configLocation>
          </configuration>
        </plugin>

    B) Remote file, that is stored on a server:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-checkstyle-plugin</artifactId>
          <configuration>
            <configLocation>http://stud.hs-heilbronn.de/~nischmid/development/checkstyle-config.xml</configLocation>
          </configuration>
        </plugin>

    Both cases result in an error like this:

        [INFO] An error has occurred in Checkstyle report generation.
        Embedded error: Failed during checkstyle execution
        Could not find resource 'file:checkstyle.xml'.

    Any help would be appreciated!

  • C# inaccurate timer?

    - by Ivan
    Hi there, I'm developing an application and I need to get the current date from a server (it differs from the machine's date). I receive the date from the server and with a simple Split I create a new DateTime:

        globalVars.fec = new DateTime(DateTime.Now.Year, DateTime.Now.Month, DateTime.Now.Day,
                                      int.Parse(infoHour[0]), int.Parse(infoHour[1]), int.Parse(infoHour[2]));

    globalVars is a class and fec is a public static variable so that I can access it anywhere in the application (bad coding, I know...). Now I need a timer checking whether that date is equal to some dates I have stored in a List, and if it is equal I just call a function.

        List<DateTime> fechas = new List<DateTime>();

    Before having to obtain the date from a server I was using the computer's date, so to check if the dates matched I was using this:

        private void timerDatesMatch_Tick(object sender, EventArgs e)
        {
            DateTime tick = DateTime.Now;
            foreach (DateTime dt in fechas)
            {
                if (dt == tick)
                {
                    //blahblah
                }
            }
        }

    Now I have the date from the server, so DateTime.Now can't be used here. Instead I have created a new timer with Interval=1000 and on each tick I'm adding 1 second to globalVars.fec using:

        globalVars.fec = globalVars.fec.AddSeconds(1);

    But the clock isn't accurate, and every 30 mins it loses about 30 seconds. Is there another way of doing what I'm trying to do? I've thought about using Threading.Timer instead, but I need to have access to other threads and non-static functions. Thanks in advance, Ivan
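
    One drift-free alternative, sketched under the assumption that the server time only needs to be read (ServerClock is an illustrative name, not part of the question): record the offset between the server time and the local clock once, then derive the current server time from DateTime.Now on demand instead of accumulating one-second ticks.

        public static class ServerClock
        {
            private static TimeSpan offset;

            // Call once with the DateTime parsed from the server's response.
            public static void Synchronize(DateTime serverTime)
            {
                offset = serverTime - DateTime.Now;
            }

            // Computed when read, so timer jitter no longer accumulates as drift.
            public static DateTime Now
            {
                get { return DateTime.Now + offset; }
            }
        }

    The tick handler could then compare the entries in fechas against ServerClock.Now rather than a hand-advanced globalVars.fec.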

  • Cannot use READPAST in snapshot isolation mode

    - by Marcus
    I have a process which is called from multiple threads and does the following:

    1. Start a transaction
    2. Select a unit of work from the work table by finding the next row where IsProcessed=0, with hints (UPDLOCK, HOLDLOCK, READPAST)
    3. Process the unit of work (C# and SQL stored procedures)
    4. Commit the transaction

    The idea of this is that a thread dips into the pool for the "next" piece of work and processes it, and the locks are there to ensure that a single piece of work is not processed twice (the order doesn't matter). All this has been working fine for months. Until today, that is, when I happened to realise that despite enabling snapshot isolation and making it the default at the database level, the actual transaction creation code was manually setting an isolation level of "ReadCommitted". I duly changed that to "Snapshot", and of course immediately received the "You can only specify the READPAST lock in the READ COMMITTED or REPEATABLE READ" error message. Oops!

    The main reason for locking the row was to "mark the row" in such a way that the mark would be removed when the transaction that applied it was committed, and the lock seemed to be the best way to do this, since this table isn't read otherwise except by these threads. If I were to use the IsProcessed flag as the lock, then presumably I would need to do the update first and then select the row I just updated, but I would need to employ the NOLOCK flag to know whether any other thread had set the flag on a row. All sounds a bit messy. The easiest option would be to abandon snapshot isolation mode altogether, but the design of step #3 requires it. Any bright ideas on the best way to resolve this problem? Thanks Marcus

  • SQL Injection Protection for dynamic queries

    - by jbugeja
    The typical controls against SQL injection flaws are to use bind variables (the cfqueryparam tag), validation of string data, and to turn to stored procedures for the actual SQL layer. This is all fine and I agree; however, what if the site is a legacy one and it features a lot of dynamic queries? Then rewriting all the queries is a herculean task and it requires an extensive period of regression and performance testing. I was thinking of using a dynamic SQL filter and calling it prior to calling cfquery for the actual execution. I found one filter in CFLib.org (http://www.cflib.org/udf/sqlSafe):

        <cfscript>
        /**
         * Cleans string of potential sql injection.
         *
         * @param string String to modify. (Required)
         * @return Returns a string.
         * @author Bryan Murphy ([email protected])
         * @version 1, May 26, 2005
         */
        function metaguardSQLSafe(string) {
            var sqlList = "-- ,'";
            var replacementList = "#chr(38)##chr(35)##chr(52)##chr(53)##chr(59)##chr(38)##chr(35)##chr(52)##chr(53)##chr(59)# , #chr(38)##chr(35)##chr(51)##chr(57)##chr(59)#";
            return trim(replaceList( string , sqlList , replacementList ));
        }
        </cfscript>

    This seems to be quite a simple filter, and I would like to know if there are ways to improve it or to come up with a better solution?

  • Writing an efficient cron job script utilizing Zend_Mail_Storage_Imap.

    - by fireeyedboy
    I'm new to the IMAP protocol and Zend_Mail_Storage, and I'm writing a small PHP script for a cron job that should regularly poll an IMAP account, check for new messages, and send an e-mail if new messages have arrived. As you can imagine, I want to only poll the IMAP account for relevant messages, and I only want to send a new e-mail if new messages have arrived since the last polled new message. So I thought of keeping track of the last message I polled with some unique identifier for a message. But I'm a bit uncertain about whether the methods I want to utilize for this do what I expect them to do. So my questions are:

    - Does the iterator position of Zend_Mail_Storage_Imap actually resemble some IMAP unique identifier for messages, or is it simply an internal position of Zend_Mail_Storage_Abstract? For instance, if I tell it to seek() to message 5 (which I stored from an earlier session), will it indeed seek to the appropriate message on the IMAP server, even if, for instance, messages have been deleted since the last session?
    - Would keeping track of this latest polled message id in a file suffice for a cron job that, say, polls the account every 5 or 10 minutes? Or is this too naive, and should I be using a database for instance? Or is there maybe a much easier way to keep track of such state with Zend_Mail_Storage_Abstract?
    - Also, do I need to poll every IMAP folder? Or is everything accumulated when I poll INBOX?

    If you could shed some light on any of these matters, I'd appreciate it. Thanks in advance.

  • Question regarding the iPhone In App Purchase capabilities

    - by Wesley
    Hi everyone. I am working on releasing an app that will greatly benefit from using the in app purchase model. The app is a sort of book viewer, and the content I would like to make available for purchase will be more books in various languages. Each book is stored in SQLite format, in separate .db files. Now, the way that my developer has set it all up is that he added a line onto the bottom of the Info.plist file called databases. Inside that databases section, I can type in 'es' as the key and the name of the Spanish database as the value, and it populates a UITableView in Spanish, using NSLocale. So it is very easy to set up and implement different databases, which is great, but now I am confused about how I can implement my in app purchase model. Sorry about the long winded intro; here is my question: Is it possible to have an in app purchase add a line to my Info.plist file? Or if not, is it possible to have the purchase reveal a new and updated plist file altogether that I would have already set up correctly? Any help here would be appreciated. Thanks

  • flushing database cache in SWI-Prolog

    - by JPro
    We are using SWI-Prolog to run our test cases. Whenever a test starts, I open the connection to a MySQL database, store the name of the test that is being run, and then close the DB. These tests run for about 2 days continuously. After the tests are done, the results basically get stored in a folder on the server. There is a predicate in another Prolog file that is called to update the results to the MySQL database. The code is simple: I use the odbc library and just call odbc_* predicates to connect and update MySQL by issuing direct queries.

    The actual problem is: if I try to call the predicate from the same Prolog window where the test just got completed, I get an error when updating the DB server, although I do not get any error on the connection. If I close that Prolog session with halt, close all the open Prolog windows, and then open a completely new instance of Prolog and run the predicate, the update goes well. I have a feeling that there is some connection reference to the MySQL DB in the Prolog database. Is there any way to clear the database in Prolog so that I can run the same predicate without closing any existing Prolog windows? Any ideas appreciated. Thanks.

  • GDB Not Skipping Functions without Debug Info

    - by Alan Lue
    I compiled GDB 7 on a Mac OS X Leopard system. When stepping through a C program, GDB fails to step through printf() statements, which probably don't have associated debug information, and starts printing "Cannot find bounds of current function." Here's some output:

        $ /usr/local/bin/gdb try1
        GNU gdb (GDB) 7.1
        Copyright (C) 2010 Free Software Foundation, Inc.
        License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
        This is free software: you are free to change and redistribute it.
        There is NO WARRANTY, to the extent permitted by law.
        Type "show copying" and "show warranty" for details.
        This GDB was configured as "x86_64-apple-darwin10".
        (gdb) list
        1   #include <stdio.h>
        2   static void display(int i, int *ptr);
        3
        4   int main(void) {
        5       int x = 5;
        6       int *xptr = &x;
        7       printf("In main():\n");
        8       printf("   x is %d and is stored at %p.\n", x, &x);
        9       printf("   xptr holds %p and points to %d.\n", xptr, *xptr);
        10      display(x, xptr);
        (gdb) b 6
        Breakpoint 1 at 0x1e8e: file try1.c, line 6.
        (gdb) r
        Starting program: /tmp/try1
        Breakpoint 1, main () at try1.c:6
        6       int *xptr = &x;
        (gdb) n
        7       printf("In main():\n");
        (gdb) n
        0x0000300a in ?? ()
        (gdb) n
        Cannot find bounds of current function
        (gdb) n
        Cannot find bounds of current function

    Any idea what's going on? Alan

  • Storing UTF8 string in a UnicodeString

    - by Mick
    In Delphi 2007 you can store a UTF8 string in a WideString and then pass that on to a Win32 function, e.g.:

        var
          UnicodeStr: WideString;
          UTF8Str: WideString;
        begin
          UnicodeStr := 'some unicode text';
          UTF8Str := UTF8Encode(UnicodeStr);
          Windows.SomeFunction(PWideChar(UTF8Str), ...)
        end;

    Delphi 2007 does not interfere with the contents of UTF8Str, i.e. it is left as a UTF8 encoded string stored in a WideString. But in Delphi 2010 I'm struggling to find a way to do the same thing, i.e. store a UTF8 encoded string in a WideString without it being automatically converted from UTF8. I cannot pass a pointer to a UTF8 string (or RawByteString); e.g. the following will obviously not work:

        var
          UnicodeStr: WideString;
          UTF8Str: UTF8String;
        begin
          UnicodeStr := 'some unicode text';
          UTF8Str := UTF8Encode(UnicodeStr);
          Windows.SomeFunction(PWideChar(UTF8Str), ...)
        end;

    Any help appreciated. Thanks.
