Search Results

Search found 24931 results on 998 pages for 'information visualization'.

Page 835/998 | < Previous Page | 831 832 833 834 835 836 837 838 839 840 841 842  | Next Page >

  • Unable to create PDB file

    - by Ryan Smith
    For some reason this error started popping up today on one of my projects:

        Error 1: Unable to write to output file 'C:\MyProject\Release\MyProject.pdb': Unspecified error

    If I go into advanced compile options and change the project not to generate any debug info, it compiles fine. I have tried setting the permissions on the Release folder to full for everyone, so I assume it's not a permissions issue. I also don't see anything in my log files that would give me more information about the issue. Does anyone know why this error would suddenly start showing up, or a way to fix it? Thanks.

    Update: I have rebooted my machine, restarted VS several times and even completely deleted the existing OBJ file where the issue is happening, and it still gives the same error. This is a simple one-project solution that was working fine just last week. It appears to be an issue with VS building the PDB file itself, because I can delete the PDB files from the Release and Debug folders without any problem; when I rebuild, VS starts creating the file (about 1.4 MB in size) but I still get the error.

    Read the article

  • Help me choose a web development framework/platform that will make me learn something

    - by Sergio Tapia
    I'm having a bit of an information overload these past two days. I'm planning to start my own website that will let local businesses list their items on sale; users can then come in, search for "Abercrombie t-shirt", and see which stores sell it. It's a neat little project I'm really excited about and I'm sure it'll take off, but I'm having problems from the get-go.

    Sure, I could use ASP.NET for it; I'm a bit familiar with it and the IDE for ASP.NET pages is second to none, but I feel this is a great chance for me to learn something new, branch out a bit, and not regurgitate .NET like a robot. I've been looking and asking around, but it's all just noise and I can't make an educated decision. Can you help me choose a framework/platform that will teach me something that's nice to know in the job market, and also nice for me to grow as a professional? So far I've looked at Ruby on Rails, Kohana, CakePHP, CodeIgniter and Symfony, but they are all very esoteric to me, and I have trouble even finding out which IDE to use that will give me auto-complete for each framework's keywords/methods. Thank you for your time.

    Read the article

  • Is there a pattern for initializing objects created with a DI container

    - by Igor Zevaka
    I am trying to get Unity to manage the creation of my objects, and I want to have some initialization parameters that are not known until run-time. At the moment the only way I could think of doing it is to have an Init method on the interface:

        interface IMyIntf
        {
            void Initialize(string runTimeParam);
            string RunTimeParam { get; }
        }

    Then to use it (in Unity) I would do this:

        var IMyIntf = unityContainer.Resolve<IMyIntf>();
        IMyIntf.Initialize("somevalue");

    In this scenario the runTimeParam parameter is determined at run-time based on user input. The trivial case here simply returns the value of runTimeParam, but in reality the parameter will be something like a file name and the Initialize method will do something with the file.

    This creates a number of issues, namely that the Initialize method is exposed on the interface and can be called multiple times. Setting a flag in the implementation and throwing an exception on a repeated call to Initialize seems clunky. At the point where I resolve the interface I don't want to know anything about the implementation of IMyIntf. What I do want, though, is the knowledge that this interface needs certain one-time initialization parameters. Is there a way to somehow annotate (attributes?) the interface with this information and pass those to the framework when the object is created?

    Edit: Described the interface a bit more.
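
    A minimal sketch of one commonly suggested alternative (the names MyIntfFactory and MyIntfImpl are illustrative, not from the question): register a factory abstraction with the container and pass the run-time value through it, so the object is fully constructed in one step and IMyIntf needs no Initialize method at all.

        public interface IMyIntfFactory
        {
            IMyIntf Create(string runTimeParam);
        }

        public class MyIntfFactory : IMyIntfFactory
        {
            public IMyIntf Create(string runTimeParam)
            {
                // The concrete type takes the run-time value in its constructor,
                // so there is no separate initialization step to guard against.
                return new MyIntfImpl(runTimeParam);
            }
        }

    Consumers then take IMyIntfFactory as a constructor dependency and call Create once the user input is available; the container only ever resolves the factory, and the run-time parameter never has to pass through the container itself.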

    Read the article

  • Help me with the simplest program for "Trusted" application

    - by idazuwaika
    Hi, I hope someone from the large community here can help me write the simplest "Trusted" program that I can expand from. I'm using Ubuntu Linux 9.04 with TPM emulator 0.60 from Mario Strasser (http://tpm-emulator.berlios.de/). I have installed the emulator and Trousers, and can successfully run programs from tpm-tools after starting the tpmd and tcsd daemons. I want to start developing my application, but I have problems compiling the code below:

        #include <trousers/tss.h>
        #include <trousers/trousers.h>
        #include <stdio.h>

        TSS_HCONTEXT hContext;

        int main()
        {
            Tspi_Context_Create(&hContext);
            Tspi_Context_Close(hContext);
            return 0;
        }

    After trying to compile with g++ tpm.cpp -o tpmexe I receive the errors:

        undefined reference to 'Tspi_Context_Create'
        undefined reference to 'Tspi_Context_Close'

    What do I have to #include to compile this successfully? Is there anything I'm missing? I'm familiar with C, but not so much with the Linux/Unix programming environment.

    PS: I am a part-time student in a Master in Information Security programme. My involvement with programming has been largely for academic purposes.
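
    For what it's worth, "undefined reference" here is a linker error rather than a missing header, so adding more #include lines will not change it. Assuming TrouSerS installs its TSS library under its usual name, libtspi, the likely fix is to link that library explicitly:

        g++ tpm.cpp -o tpmexe -ltspi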

    Read the article

  • SOAP security in Salesforce

    - by Dean Barnes
    I am trying to change the wsdl2apex code for a web service call whose header currently looks like this:

        <env:Header>
          <Security xmlns="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd">
            <UsernameToken Id="UsernameToken-4">
              <Username>test</Username>
              <Password>test</Password>
            </UsernameToken>
          </Security>
        </env:Header>

    so that it looks like this instead:

        <soapenv:Header>
          <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
            <wsse:UsernameToken wsu:Id="UsernameToken-4" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
              <wsse:Username>Test</wsse:Username>
              <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">Test</wsse:Password>
            </wsse:UsernameToken>
          </wsse:Security>
        </soapenv:Header>

    One problem is that I can't work out how to change the namespaces for elements (or even whether the prefix names matter). A secondary problem is putting the Type attribute onto the Password element. Can anyone provide any information that might help? Thanks

    Read the article

  • How to write a flexible modular program with good interaction possibilities between modules?

    - by PeterK
    I went through answers on similar topics here on SO but couldn't find a satisfying answer. Since I know this is a rather large topic, I will try to be more specific.

    I want to write a program which processes files. The processing is nontrivial, so the best way is to split the different phases into standalone modules which are then used as necessary (since sometimes I will only be interested in the output of module A, sometimes I will need the output of five other modules, etc.). The thing is that I need the modules to cooperate, because the output of one might be the input of another. And I need it to be FAST. Moreover, I want to avoid doing certain processing more than once (if module A creates some data which then needs to be processed by modules B and C, I don't want to run module A twice to create the input for B and C). The information the modules need to share would mostly be blocks of binary data and/or offsets into the processed files. The task of the main program would be quite simple - just parse arguments and run the required modules (and perhaps produce some output, or should that be the task of the modules?).

    I don't need the modules to be loaded at runtime. It's perfectly fine to have libs with a .h file and recompile the program every time there is a new module or some module is updated. The idea of modules is here mainly for code readability and maintainability, and to be able to have more people working on different modules without needing some predefined interface or whatever (on the other hand, I know some "guidelines" on how to write the modules would probably be required). We can assume that the file processing is a read-only operation; the original file is not changed.

    Could someone point me in a good direction on how to do this in C++? Any advice is welcome (links, tutorials, pdf books...).
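
    A minimal sketch, purely as one possible shape for this (all names are illustrative): each phase implements a small Module interface and reads/writes named blocks in a shared context, so module A's output is computed once and reused by B and C, and the main program only decides which modules to run.

        #include <map>
        #include <memory>
        #include <string>
        #include <vector>

        // Intermediate results shared between phases, keyed by name.
        struct Context {
            std::map<std::string, std::shared_ptr<const std::vector<char>>> blocks;
        };

        // Every processing phase implements this interface.
        class Module {
        public:
            virtual ~Module() = default;
            virtual std::string name() const = 0;
            // Reads the blocks it needs from ctx and stores what it produces back into ctx.
            virtual void run(Context& ctx) = 0;
        };

        // The driver runs only the requested modules, in dependency order.
        inline void run_pipeline(const std::vector<std::shared_ptr<Module>>& modules, Context& ctx) {
            for (const auto& m : modules) {
                m->run(ctx);
            }
        }

    Whether the blocks hold raw bytes, offsets or richer types is up to the real design; the point of the sketch is only the "compute once, share via a context" idea.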

    Read the article

  • MS Access SQL - where clause to get year from date based on the year - data located in MS Access form

    - by primus285
    OK, back again. I have a problem getting a drop-down list to populate based on information in two fields. I have the SQL working to select just one year if I put DateValue('01/01/2001') in both places, but now I am trying to get it to grab the year from the MS Access form - another drop-down named "cboYear". I'd hate to have to do something in VB unless necessary. So far I have gotten this to pull up something (it's always incorrect):

        SELECT DISTINCT Database_New.ASEC
        FROM Database_New
        WHERE Database_New.Date >= DateValue('01/01/' & [cboYear])
          And Database_New.Date <= DateValue('12/31/' & [cboYear]);

    and

        SELECT DISTINCT Database_New.ASEC
        FROM Database_New
        WHERE Database_New.Date >= DateValue('01/01/' + [cboYear])
          And Database_New.Date <= DateValue('12/31/' + [cboYear]);

        SELECT DISTINCT Database_New.ASEC
        FROM Database_New
        WHERE Database_New.Date >= DateValue('01/01/' AND [cboYear])
          And Database_New.Date <= DateValue('12/31/' AND [cboYear]);

    Both give errors saying that the expression is too complex to compute. It's probably something simple, but where do I go from here?
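
    One thing that often matters here (offered as a guess, not a verified fix): when the query runs outside the form, Access cannot resolve a bare [cboYear] and will either prompt for it or mis-evaluate it, so the control is usually referenced with its full form path. A sketch, with frmReport standing in for whatever the real form is called:

        SELECT DISTINCT Database_New.ASEC
        FROM Database_New
        WHERE Database_New.Date >= DateValue('01/01/' & [Forms]![frmReport]![cboYear])
          AND Database_New.Date <= DateValue('12/31/' & [Forms]![frmReport]![cboYear]);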

    Read the article

  • This pagination script is doodoo - I need a better one!

    - by ClarkSKent
    Hello, I was looking at the pagination script (posted below) and found it to be gross, and not very good at all, especially when trying to customize it. This is what the main page looks like:

        <?php
        include('config.php');
        $per_page = 9;

        //Calculating no of pages
        $sql = "select * from messages";
        $result = mysql_query($sql);
        $count = mysql_num_rows($result);
        $pages = ceil($count/$per_page)
        ?>
        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.0/jquery.min.js"></script>
        <script type="text/javascript" src="jquery_pagination.js"></script>

        <div id="loading" ></div>
        <div id="content" ></div>

        <ul id="pagination">
        <?php
        //Pagination Numbers
        for($i=1; $i<=$pages; $i++)
        {
            echo '<li id="'.$i.'">'.$i.'</li>';
        }
        ?>
        </ul>

    The top part of the code gets the results from the MySQL db and then uses this information to display the numbers in the body of the page. I am trying to put something like this on a separate page like count_page.php and then just include it. I guess my question is whether there is a better way of doing the above, with better structure - a better way to go through the db, count the results and display the appropriate numbers. The above seems messy. Thanks for any help or suggestions on this.
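
    As a small illustration of one cleanup (the table and file names come from the question, everything else is an assumption): letting MySQL count the rows avoids fetching every message just to call mysql_num_rows, and the count logic becomes short enough to live in its own include such as count_page.php.

        <?php
        // count_page.php - sketch of a leaner page count
        include('config.php');

        $per_page = 9;

        // Let the database do the counting instead of fetching all rows.
        $result = mysql_query("SELECT COUNT(*) AS total FROM messages");
        $row    = mysql_fetch_assoc($result);
        $pages  = ceil($row['total'] / $per_page);
        ?>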

    Read the article

  • MVC 3, View Model for user registration process. Password validation not working properly

    - by sec_goat
    I am trying to create a user registration page using MVC 3 so that I can better understand the process, what's going on behind the scenes, etc. I am running into some issues when trying to use [Compare] to check that the user entered the same password twice. I tried adding the ComparePassword field to my User model first and found that would not work the way I wanted, as I did not have the field in the database, so the obvious answer was to create a view model using the same information, including the ComparePassword field.

    So I now have a User model and a RegistrationViewModel. However, it appears that the [Compare] on the password is not returning anything; for instance, no matter what I put in the two boxes, when I click Create it gives no error, which seems to mean it was successfully validated. I am not sure what I am doing or not doing to make this work properly. I have tried updating jQuery.Validate to the newest version, as there were some bugs reported in older versions, but this has not helped. Below is the code I am working with:

        public class RegistrationViewModel
        {
            [Required]
            [StringLength(15, MinimumLength = 3)]
            [Display(Name = "User Name")]
            [RegularExpression(@"(\S)+", ErrorMessage = " White Space is not allowed in User Names")]
            [ScaffoldColumn(false)]
            public String Username { get; set; }

            [Required]
            [StringLength(15, MinimumLength = 3)]
            [Display(Name = "First Name")]
            public String firstName { get; set; }

            [Required]
            [StringLength(15, MinimumLength = 3)]
            [Display(Name = "Last Name")]
            public String lastName { get; set; }

            [Required]
            [Display(Name = "Email")]
            public String email { get; set; }

            [Required]
            [Display(Name = "Password")]
            [DataType(DataType.Password)]
            public String password { get; set; }

            [Required]
            [DataType(DataType.Password)]
            [Display(Name = "Re-enter Password")]
            [Compare("Password", ErrorMessage = "Passwords do not match.")]
            public String comparePassword { get; set; }
        }
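
    One detail that stands out (an observation to check, not a confirmed diagnosis): [Compare] is given the name of the other property as a string, and here the password property is declared as password with a lower-case p while the attribute refers to "Password". Making the attribute argument match the declared property name exactly is worth trying:

        [Required]
        [DataType(DataType.Password)]
        [Display(Name = "Re-enter Password")]
        [Compare("password", ErrorMessage = "Passwords do not match.")]
        public String comparePassword { get; set; }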

    Read the article

  • Vector insert() causes program to crash

    - by wrongusername
    This is the first part of a function I have that's causing my program to crash:

        vector<Student> sortGPA(vector<Student> student)
        {
            vector<Student> sorted;
            Student test = student[0];
            cout << "here\n";
            sorted.insert(student.begin(), student[0]);
            cout << "it failed.\n";
            ...

    It crashes right at the sorted part, because I can see "here" on the screen but not "it failed." The following error message comes up:

        Debug Assertion Failed!
        (a long path here...)
        Expression: vector emplace iterator outside range
        For more information on how your program can cause an assertion failure,
        see the Visual C++ documentation on asserts.

    I'm not sure what's causing the problem, since I have a similar line of code elsewhere that does not crash (where position returns an int and temp is another declaration of a struct Student):

        student.insert(student.begin() + position(temp, student), temp);

    What can I do to resolve the problem, and how is the first insert different from the second one?
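
    The assertion message points at the iterator argument: in the first call, sorted.insert is handed student.begin(), an iterator that belongs to a different vector than the one being inserted into, whereas the second call passes an iterator into the same vector it modifies. A sketch of the likely intent:

        // Use an iterator that belongs to 'sorted' itself:
        sorted.insert(sorted.begin(), student[0]);
        // or, since 'sorted' is empty at this point, simply:
        sorted.push_back(student[0]);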

    Read the article

  • To what extent should code try to explain fatal exceptions?

    - by Andrzej Doyle
    I suspect that all non-trivial software is likely to hit situations where it runs into an external problem it cannot work around and thus needs to fail. This might be due to bad configuration, an external server being down, a full disk, etc. In these situations, especially if the software is running in non-interactive mode, I expect that all one can really do is log an error and wait for the admin to read the logs and fix the problem. If someone happens to interact with the software in the meantime, e.g. a request comes in to a server that failed to initialize properly, then perhaps an appropriate hint can be given to check the logs, and maybe even the error can be echoed (depending on whether you can tell that they're a technical person as opposed to a business user). For the moment, though, let's not think too hard about that part.

    My question is: to what extent should the software be responsible for trying to explain the meaning of the fatal error? In general, how much competence/knowledge are you allowed to presume of the administrators of the software, and how much troubleshooting information and how many potential resolution steps should you include when logging fatal errors? Of course, anything unique to the runtime context should definitely be logged; but let's assume your software needs to talk to Active Directory via LDAP and gets back the error:

        [LDAP: error code 49 - 80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 525, vece]

    Is it reasonable to assume that the maintainers will be able to Google the error code and work out what it means, or should the software try to parse the error code and log that this is caused by an incorrect user DN in the LDAP config? I don't know if there is a definitive best-practices answer for this, so I'm keen to hear a variety of views.
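
    To make the "parse the error code" option concrete, a small sketch of what that could look like for the Active Directory sub-codes embedded in messages like the one above (only a few well-known codes shown; the mapping is illustrative, not exhaustive):

        // Translate AD's "data NNN" sub-code into an operator-friendly hint before logging.
        static String hintFor(String ldapMessage) {
            if (ldapMessage.contains("data 525")) return "user not found - check the user DN in the LDAP configuration";
            if (ldapMessage.contains("data 52e")) return "invalid credentials - check the bind password";
            if (ldapMessage.contains("data 533")) return "the account is disabled in Active Directory";
            return "unrecognised sub-code - see the raw LDAP error";
        }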

    Read the article

  • Hashing the state of a complex object in .NET

    - by Jan
    Some background information: I am working on a C#/WPF application which is basically about creating, editing, saving and loading a data model. The data model consists of a hierarchy of various objects: there is a "root" object of class A, which has a list of objects of class B, each of which has a list of objects of class C, etc. Around 30 classes are involved in total.

    Now my problem is that I want to prompt the user with the usual "you have unsaved changes, save?" dialog if he tries to exit the program. But how do I know whether the data in the currently loaded model has actually changed? There are of course ways to solve this, e.g. reloading the model from file and comparing it against the one in memory value by value, or making every UI control set a flag indicating the model has been changed. Instead, I want to create a hash value based on the model state on load, generate a new value when the user tries to exit, and compare the two.

    Now the question: inspired by that, I was wondering whether there exists some way to generate a hash value from the (value) state of an arbitrary complex object? Preferably in a generic way, i.e. no need to apply attributes to each involved class/field. One idea could be to use some of .NET's serialization functionality (assuming it will work out of the box in this case) and apply a hash function to the content of the resulting file. However, I guess there exists a more suitable approach. Thanks in advance.
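
    A minimal sketch of the serialization idea mentioned above, hashing an in-memory stream rather than a file (this assumes the model classes can be marked [Serializable]; if attributes are off the table entirely, a different serializer would have to stand in for BinaryFormatter):

        using System;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;
        using System.Security.Cryptography;

        public static class ModelFingerprint
        {
            // Serialize the whole object graph and hash the resulting bytes.
            public static string Compute(object root)
            {
                using (var stream = new MemoryStream())
                {
                    new BinaryFormatter().Serialize(stream, root);
                    using (var sha = SHA256.Create())
                    {
                        return Convert.ToBase64String(sha.ComputeHash(stream.ToArray()));
                    }
                }
            }
        }

    The fingerprint computed on load is then compared with one computed when the user exits; any difference means there are unsaved changes.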

    Read the article

  • JDBC transaction dead-lock solution required?

    - by user49767
    This is a scenario described by my friend, who challenged me to find a solution. He is using an Oracle database and a JDBC connection with read committed as the transaction isolation level. In one transaction he updates a record, executes a select statement and commits the transaction. When everything happens within a single thread, things are fine, but when multiple requests are handled, a deadlock happens:

    Thread-A updates a record. Thread-B updates another record. Thread-A issues a select statement and waits for Thread-B's transaction to commit. Thread-B issues a select statement and waits for Thread-A's transaction to commit. This causes the deadlock.

    Since they use the command pattern, the base framework allows commit to be issued only once (at the end of all the db operations), so they are unable to commit immediately after the select statement. My argument was that Thread-A is only supposed to select records which are already committed, and hence this should not be an issue. But he said that Thread-A will surely wait until Thread-B commits the record. Is that true? What are the ways to avoid the above issue? Is it possible to change the isolation level (without changing the underlying Java framework)?

    A little information about the base framework: it is something similar to a Struts action; each request is handled by one action, the transaction begins before execution and commits after execution.

    Read the article

  • RTTI Dynamic array TValue Delphi 2010

    - by user558126
    Hello, I have a question. I am a newbie with Run Time Type Information in Delphi 2010. I need to set the length of a dynamic array held in a TValue. You can see the code:

        type
          TMyArray = array of integer;
          TMyClass = class
          published
            function Do: TMyArray;
          end;

        function TMyClass.Do: TMyArray;
        begin
          SetLength(Result, 5);
          for i := 0 to 4 do
            Result[i] := 3;
        end;

        .......
        y: TValue;
        Param: array of TValue;
        .........
        y := Methods[i].Invoke(Obj, Param); // Delphi gives me a DynArray type kind; this works, and Param works for any function.
        if Method[i].ReturnType.TypeKind = tkDynArray then // this works...
        begin
          // I want to set the length of y to 10000 - I don't know how to write this.
        end;

    I don't like Generics.Collections.

    Read the article

  • Silverlight WinDbg Memory Release Issue

    - by Chris Newton
    Hi, I have used WinDbg successfully on a number of occasions to track down and fix memory leaks (or more accurately, the CLR's inability to garbage collect a released object), but am stuck with one particular control. The control is displayed within a child window, and when the window is closed a reference to the control remains and it cannot be garbage collected. I have resolved what I believe to be the majority of the issues that could have caused the leak, but the !gcroot output for the affected object does not make clear (to me at least) what is still holding on to it. The output is always the same regardless of the content being presented in the child window:

        DOMAIN(03FB7238):HANDLE(Pinned):79b12f8:Root: 06704260(System.Object[])->
        05719f00(System.Collections.Generic.Dictionary`2[[System.IntPtr, mscorlib],[System.Object, mscorlib]])->
        067c1310(System.Collections.Generic.Dictionary`2+Entry[[System.IntPtr, mscorlib],[System.Object, mscorlib]][])->
        064d42b0(System.Windows.Controls.Grid)->
        064d4314(System.Collections.Generic.Dictionary`2[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]])->
        064d4360(System.Collections.Generic.Dictionary`2+Entry[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]][])->
        064d3860(System.Windows.Controls.Border)->
        064d4218(System.Collections.Generic.Dictionary`2[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]])->
        064d4264(System.Collections.Generic.Dictionary`2+Entry[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]][])->
        064d3bfc(System.Windows.Controls.ContentPresenter)->
        064d3d64(System.Collections.Generic.Dictionary`2[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]])->
        064d3db0(System.Collections.Generic.Dictionary`2+Entry[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]][])->
        064d3dec(System.Collections.Generic.Dictionary`2[[System.UInt32, mscorlib],[System.Windows.DependencyObject, System.Windows]])->
        064d3e38(System.Collections.Generic.Dictionary`2+Entry[[System.UInt32, mscorlib],[System.Windows.DependencyObject, System.Windows]][])->
        06490b04(Insurer.Analytics.SharedResources.Controls.HistoricalKPIViewerControl)

    If anyone has any ideas about what could potentially be the problem, or if you require more information, please let me know. Kind Regards, Chris

    Read the article

  • File descriptor limits and default stack sizes

    - by Charles
    Where I work we build and distribute a library and a couple of complex programs built on that library. All code is written in C and is available on most 'standard' systems like Windows, Linux, AIX, Solaris and Darwin. I started in the QA department, and while running tests recently I have been reminded several times that I need to remember to set the file descriptor limits and default stack sizes higher or bad things will happen. This is particularly the case with Solaris and now Darwin. This is very strange to me, because I am a believer in requiring zero environment fiddling to make a product work. So I am wondering if there are times when this sort of requirement is a necessary evil, or if we are doing something wrong.

    Edit: Great comments that describe the problem and a little background. However, I do not believe I worded the question well enough. Currently we require customers, and hence us the testers, to set these limits before running our code. We do not do this programmatically. And this is not a situation where they MIGHT run out; under normal load our programs WILL run out and seg fault. So, rewording the question: is requiring the customer to change these ulimit values in order to run our software to be expected on some platforms (i.e. Solaris, AIX), or are we as a company making it too difficult for these users to get going?

    Bounty: I added a bounty to hopefully get a little more information on what other companies are doing to manage these limits. Can you set these programmatically? Should we? Should our programs even be hitting these limits, or could this be a sign that things might be a bit messy under the covers? That is really what I want to know; as a perfectionist, a seemingly dirty program really bugs me.
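
    On the "can you set these programmatically?" point, a small hedged sketch of the POSIX route (this is standard setrlimit usage rather than anything from the question; Windows needs a different mechanism, and the hard limit still caps what the process can request):

        /* Raise the file-descriptor soft limit up to the hard limit at startup.
           The analogous resource for stack size is RLIMIT_STACK. */
        #include <stdio.h>
        #include <sys/resource.h>

        static void raise_fd_limit(void)
        {
            struct rlimit rl;
            if (getrlimit(RLIMIT_NOFILE, &rl) == 0) {
                rl.rlim_cur = rl.rlim_max;   /* soft limit up to the hard limit */
                if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
                    perror("setrlimit(RLIMIT_NOFILE)");
            }
        }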

    Read the article

  • SQL Server service broker reporting as off when I have written a query to turn it on

    - by dotnetdev
    I have made a small ASP.NET website that uses SqlCacheDependency. When it runs I get the following error:

        The SQL Server Service Broker for the current database is not enabled, and as a result query notifications
        are not supported. Please enable the Service Broker for this database if you wish to use notifications.

        Description: An unhandled exception occurred during the execution of the current web request. Please review
        the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.InvalidOperationException: The SQL Server Service Broker for the current database
        is not enabled, and as a result query notifications are not supported. Please enable the Service Broker for
        this database if you wish to use notifications.

        Source Error:
        Line 12: System.Data.SqlClient.SqlDependency.Start(connString);

    This is the erroneous line in my global.asax. However, in SQL Server (2005) I enabled Service Broker like so (I connect to and run the SQL Server service when I debug my site):

        ALTER DATABASE mynewdatabase SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE

    And this was successful. What am I missing? I am trying to use SQL cache dependency and have followed all the procedures. Thanks
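
    One quick check worth adding here (a standard SQL Server catalog view, not something from the question): confirm which database actually has the broker enabled, in case the ALTER DATABASE ran against a different database than the one named in connString.

        SELECT name, is_broker_enabled
        FROM sys.databases
        WHERE name = 'mynewdatabase';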

    Read the article

  • Work for huge company or small company that makes products for huge company?

    - by TheGambler
    I'm facing a career decision. I currently work for a huge, huge, huge company (this should bring 3-4 companies to mind) as a programmer, on a really big (3 million lines of code) J2EE web application. I've been out of college for 2 years and I pretty much know that big corporate life isn't for me.

    There is a career opportunity to work for a small company that has a lot of funding and huge potential to sell their main project to the huge company I work for. This company makes kiosk software and works with a nearby kiosk software company. The deal isn't done, but I would come straight in to work on this project. This would be more of an interactive media position and I would do the development in Flash/ActionScript. I personally think this will be more interesting work than what I do now. This company does have other clients right now, just not on the scale of this potential huge client. I was told that my initial salary would be equal to what I'm making now, but if we land this client it could go up a lot more.

    I have a wife and child, which should and will play a part in my decision about whether to take this job. We do, though, have family that we could live with until we got back on our feet if crap hit the fan. So with the information provided, does this sound like a good opportunity, and should I take it? This is a huge decision, and the reason I'm asking the question here is because a lot of people here have probably faced this decision before.

    Read the article

  • AudioRecord problems with non-HTC devices

    - by Marc
    I'm having trouble using AudioRecord. An example using some of the code derived from the splmeter project:

        private static final int FREQUENCY = 8000;
        private static final int CHANNEL = AudioFormat.CHANNEL_CONFIGURATION_MONO;
        private static final int ENCODING = AudioFormat.ENCODING_PCM_16BIT;
        private int BUFFSIZE = 50;
        private AudioRecord recordInstance = null;
        ...
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
        recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC, FREQUENCY, CHANNEL, ENCODING, 8000);
        recordInstance.startRecording();
        short[] tempBuffer = new short[BUFFSIZE];
        int retval = 0;
        while (this.isRunning) {
            for (int i = 0; i < BUFFSIZE - 1; i++) {
                tempBuffer[i] = 0;
            }
            retval = recordInstance.read(tempBuffer, 0, BUFFSIZE);
            ...
            // process the data
        }

    This works on the HTC Dream and the HTC Magic perfectly, without any log warnings/errors, but causes problems on the emulators and on the Nexus One. On the Nexus One it simply never returns useful data. I cannot provide any other useful information, as I'm having a remote friend do the testing. On the emulators (Android 1.5, 2.1 and 2.2), I get weird errors from the AudioFlinger and buffer overflows with the AudioRecordThread. I also get a major slowdown in UI responsiveness (even though the recording takes place in a separate thread from the UI). Is there something apparent that I'm doing incorrectly? Do I have to do anything special for the Nexus One hardware?
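
    One device dependence worth ruling out (a guess based on the snippet, not a verified diagnosis): the AudioRecord is constructed with a hard-coded buffer size of 8000 bytes, and the minimum acceptable size varies by device. AudioRecord.getMinBufferSize reports it, so something like this keeps the constructor valid everywhere:

        int minBuf = AudioRecord.getMinBufferSize(FREQUENCY, CHANNEL, ENCODING);
        recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC,
                FREQUENCY, CHANNEL, ENCODING,
                Math.max(minBuf, 8000));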

    Read the article

  • Google App Engine Java app couldn't find javac?

    - by Frank
    I'm learning to use Google App Engine. I installed it in NetBeans and the project works, but when I clicked on "Deploy To Google App Engine", I got the following error:

        Beginning server interaction for ...
        0% Creating staging directory
        5% Scanning for jsp files.
        8% Compiling jsp files.
        11% Compiling java files.
        Error Details:
        Apr 20, 2010 3:51:23 PM org.apache.jasper.JspC processFile
        INFO: Built File: \PayPal_Monitor.jsp
        java.lang.IllegalStateException: cannot find javac executable based on java.home, tried
        "C:\Program Files (x86)\Java\jre6\bin\javac.exe" and "C:\Program Files (x86)\Java\bin\javac.exe"
        Unable to update app: cannot find javac executable based on java.home, tried
        "C:\Program Files (x86)\Java\jre6\bin\javac.exe" and "C:\Program Files (x86)\Java\bin\javac.exe"
        Please see the logs [C:\Users\NM\AppData\Local\Temp\appcfg3946701335172983337.log] for further information.

    The file "javac.exe" is in C:\Program Files (x86)\Java\jdk1.6.0_18\bin. How can I add it to "java.home"? I'm using Windows Vista, and I tried to add it from "System - Environment Variables", but there is no "java.home" in there. Where can I find it? Frank
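
    For context (a hedged note, since the question only shows the error): java.home is not an environment variable but a JVM system property, set automatically from whichever Java installation runs the tool, so the usual fix is to make the deployment run under the JDK instead of the JRE. Two illustrative knobs, with the JDK path taken from the question; whether each tool honours these exact settings is an assumption to verify:

        rem From a command prompt, before running the App Engine tools:
        set JAVA_HOME=C:\Program Files (x86)\Java\jdk1.6.0_18

        # Or, for NetBeans itself, in etc\netbeans.conf:
        netbeans_jdkhome="C:\Program Files (x86)\Java\jdk1.6.0_18"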

    Read the article

  • Archiver Securing SQLite Data without using Encryption on iPhone

    - by Redrocks
    I'm developing an iPhone app that uses Core Data with a SQLite data store and lots of images in the resource bundle. I want a "simple" way to obfuscate the file structure of the SQLite database and the image files to prevent the casual hacker/unscrupulous developer from gaining access to them.

    When the app is deployed, the database file and image files would be obfuscated. Upon launching the app, it would read in and un-obfuscate the database file, write the un-obfuscated version to the user's "tmp" directory for use by Core Data, and read/un-obfuscate image files as needed. I'd like to apply a simple algorithm to the files that would somehow scramble/manipulate the file data so that the SQLite database data isn't discernible when the db is opened in a text editor, and so that neither the database nor the images are recognized by other applications (SQLite Manager, Photoshop, etc.).

    It seems, from the information I've read, that I could use NSFileManager, NSKeyedArchiver, and NSData to accomplish this, but I'm not sure how to proceed. I've been developing software for many years, but I'm new to everything CocoaTouch, Mac and iPhone, and I've never had to secure/encrypt my data, so this is new. Any thoughts, suggestions, or links to solutions are appreciated.

    Read the article

  • Classic ASP application-wide initializations and object caching

    - by slack3r
    In classic ASP (which I am forced to use), I have a few factory functions, that is, functions that return classes. I use JScript. In one include file I use these factory functions to create some classes that are used throughout the application. This include file is included with the #include directive in all pages. These factory functions do some "heavy lifting" and I don't want them to be executed on every page load. So, to make this clear, I have something like this:

        // factory.inc
        function make_class(arg1, arg2) {
            function klass() {
                //...
            }
            // ... Some heavy stuff
            return klass;
        }

        // init.inc, included everywhere
        <!-- #include FILE="factory.inc" -->
        // ...
        MyClass1 = make_class(myarg01, myarg02);
        MyClass2 = make_class(myarg11, myarg12);
        //...

    How can I achieve the same effect without calling make_class on every page load? I know that I can't cache the classes in the Application object, and I can't use the Application_OnStart hook in Global.asa. I could probably create a scripting component, but I really don't want to do that. So, is there something else I can do? Maybe some way to achieve caching of these classes, which are really objects in JScript?

    PS: [further clarification] In the above code the "heavy stuff" is not so heavy, but I just want to know if there's a way to avoid it being executed all the time. It reads database meta information, builds a table of the primary keys in the database and another table that resolves strings to classes, etc.

    Read the article

  • ISBNs are used as primary key, now I want to add non-book things to the DB - should I migrate to EAN

    - by fish2000
    I built an inventory database where ISBN numbers are the primary keys for the items. This worked great for a while, as the items were books. Now I want to add non-books. Some of the non-books have EANs or ISSNs, some do not.

    It's in PostgreSQL with Django apps for the frontend and JSON API, plus a few supporting Python command-line tools for management. The items in question are mostly books and artist prints, some of which are self-published.

    What is nice about using ISBNs as primary keys is that on top of relational integrity, you get lots of handy utilities for validating ISBNs, automatically looking up missing or additional information on the book items, etc., many of which I've taken advantage of. Some such tools are off-the-shelf (PyISBN, PyAWS etc.) and some are hand-rolled - I tried to keep all of these parts nice and decoupled, but you know how things can get.

    I couldn't find anything online about 'private ISBNs' or 'self-assigned ISBNs', but that's the sort of thing I was interested in doing. I doubt that's what I'll settle on, since there is already an apparent run on ISBN numbers. Should I retool everything for EAN numbers, or migrate off ISBNs as primary keys in general? If anyone has experience working with these systems, I'd love to hear about it; your advice is most welcome.

    Read the article

  • Combining JSON Arrays

    - by George
    I have 3 JSON arrays, each with information listed in the same format:

        Array:
            ID:
            NAME:
            DATA:

            ID:
            NAME:
            DATA:
        etc...

    My goal is to combine all 3 arrays into one array, and sort and display by NAME, by passing the 3 arrays into a function. The function I've tried is below.

    JScript call:

        // to save time I'm just passing the name of the array; I've tried passing
        // the full array name as json[0]['DATA'][array_1][0]['NAME'] as well.
        combineNames(['array_1','array_2']);

    Function:

        function combineNames(names) {
            var allNames = []
            for (i=0;i<names.length;i++) {
                for (j=0;j<json[0]['DATA'][names[i]].length;j++) {
                    allNames.push(json[0]['DATA'][names[i]][j]['NAME']);
                }
            }
            return allNames.sort();
        }

    The above gives me an error that NAME is null or undefined. I've also tried using the array.concat function, which works when I hard-code it:

        var names = [];
        var allNames = [];
        var names = names.concat(json[0]['DATA']['array_1'],json[0]['DATA']['array_2']);
        for (i=0;i<names.length;i++) {
            allNames.push(names[i]['NAME']);
        }
        return allNames.sort();

    But I can't figure out how to pass the arrays into the function (and if possible I would like to just pass in the array name part instead of the whole json[0]['DATA']['array_name'] like I was trying to do in the first function).

    Read the article

  • Recommendations for Continuous integration for Mercurial/Kiln + MSBuild + MSTest

    - by TDD
    We have our source code stored in Kiln/Mercurial repositories; we use MSBuild to build our product and we have unit tests that utilize MSTest (Visual Studio unit tests). What solutions exist to implement a continuous integration machine (i.e. a build machine)? The requirements are:

    - A build should be kicked off when necessary (i.e. code has changed in the repositories we care about)
    - Before the actual build, the latest version of the source code must be acquired from the repository we are building from
    - The build must build the entire product
    - The build must build all unit tests
    - The build must execute all unit tests
    - A summary of success/failure must be sent out after the build has finished; this must include information about the build itself but also about which unit tests failed and which ones succeeded
    - The summary must contain which changesets were in this build that were not yet in the previous successful (!) build
    - The system must be configurable so that it can build from multiple branches (/repositories)

    Ideally, this system would run on a single box (our product isn't that big) without any server components. What solutions are currently available? What are their pros/cons? From the list above, what can be done and what cannot be done? Thanks

    Read the article
