Search Results

Search found 2730 results on 110 pages for 'storing'.

Page 92/110 | < Previous Page | 88 89 90 91 92 93 94 95 96 97 98 99  | Next Page >

  • PHP curl timing mismatch

    - by JonoB
    I am running a PHP script that: (1) queries a local database to retrieve an amount, (2) executes a curl statement to update an external database with the above amount + x, and (3) queries the local database again to insert a new row recording that the curl statement has been executed. One of the problems I am having is that the curl statement takes 2-4 seconds to execute, so if two different users from the same company run the same script at the same time, the execution time of the curl command can cause a mismatch in what should be updated in the external database. This is because the first user's curl statement has not yet returned, so the second user is working off incorrect figures. I am not sure of the best options here, but basically I need to prevent two or more curl statements being run at the same time. I thought of storing a value in the database indicating that a curl statement is executing, and preventing any other curl statements from running until it completes. Once the first curl statement has executed, the database flag is updated and the next one can run. If this field is 'locked', I could loop, sleep for 5 seconds, and then check again whether the flag has been reset. After 3 loops, I would reset the flag automatically (I've never seen the curl take longer than 5 seconds) and continue processing. Are there any other (more elegant) ways of approaching this?
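    One answer-style sketch: rather than a hand-rolled flag column (which can go stale if a request dies), a MySQL advisory lock gives you the mutex plus a built-in wait timeout, and it is released automatically if the connection drops. A minimal sketch, assuming an existing PDO connection and a hypothetical run_curl_update() helper standing in for the real curl call:

        <?php
        // Serialize the curl section with a MySQL advisory lock (GET_LOCK).
        // $pdo is an existing PDO connection; run_curl_update() is hypothetical.
        $got = $pdo->query("SELECT GET_LOCK('external_update', 15)")->fetchColumn();

        if ($got == 1) {
            try {
                $amount = $pdo->query('SELECT amount FROM accounts WHERE id = 1')
                              ->fetchColumn();
                run_curl_update($amount);   // the slow 2-4 second curl call
                $pdo->prepare('INSERT INTO curl_log (executed_at) VALUES (NOW())')
                    ->execute();
            } finally {                     // PHP 5.5+; always release the lock
                $pdo->query("SELECT RELEASE_LOCK('external_update')");
            }
        } else {
            error_log('Timed out waiting for external_update lock');
        }

    The 15-second wait mirrors the 3 x 5-second loop described above, and there is no flag to reset by hand.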

    Read the article

  • DB Design to store custom fields for a table

    - by Fazal
    Hi All, this question came up based on the responses I got to http://stackoverflow.com/questions/2785033/getting-wierd-issue-with-to-number-function-in-oracle. As everyone suggested, storing numeric values in VARCHAR2 columns is not a good practice (which I totally agree with), so I am wondering about a basic design choice our team has made and whether there are better ways to design it. Problem statement: we have many tables where we want to offer a certain number of custom fields. The number of required custom fields is known, but which kind of attribute maps to each column is up to the user. Here is a hypothetical scenario: say you have a laptop product that stores 50 attribute values for every record (only 5 shown below), and each laptop's attributes are defined by the admin who creates it. One user creates a laptop product, say lap1, with attributes String, String, numeric, numeric, String; a second user creates lap2 with attributes String, numeric, String, String, numeric. Currently the data in our design gets persisted as follows:

        Laptop table
        Id  Name  field1  field2  field3  field4  field5
        1   lap1  lappy   lappy   12      13      lappy
        2   lap2  lappy2  13      lappy2  lapp2   12

    This example roughly simulates our requirement and our design. Now, if somebody is looking up lap2 records with a comparison on field2, we need to apply TO_NUMBER:

        select * from laptop where name='lap2' and TO_NUMBER(field2) < 15

    TO_NUMBER fails in some cases, when the query plan decides to apply to_number before the other filter. QUESTIONS: Is this a valid design? What are the alternative ways to solve this problem? One of our team mates suggested creating tables on the fly for such cases - is that a good idea? How do popular ORM tools handle custom fields or flex fields? I hope I was able to make sense in the question. Sorry for such a long text.
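    One common alternative worth sketching (not necessarily right for every workload) is a row-per-attribute table with one typed column per data type, so numbers are stored as numbers and no TO_NUMBER cast is needed. Table and column names below are invented for illustration:

        -- Sketch: one row per (laptop, custom field), with typed value columns.
        CREATE TABLE laptop_attribute (
            laptop_id     NUMBER        NOT NULL,  -- FK to laptop.id
            attr_position NUMBER        NOT NULL,  -- which custom field (1..50)
            string_value  VARCHAR2(200),
            number_value  NUMBER,
            CONSTRAINT pk_laptop_attribute PRIMARY KEY (laptop_id, attr_position)
        );

        -- "field2 < 15 for lap2" becomes a plain numeric comparison:
        SELECT l.*
          FROM laptop l
          JOIN laptop_attribute a ON a.laptop_id = l.id
         WHERE l.name = 'lap2'
           AND a.attr_position = 2
           AND a.number_value < 15;

    The usual trade-off is more joins in exchange for honest types and indexable numeric columns.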

    Read the article

  • Double hashing passwords - client & server

    - by J. Stoever
    Hey, first, let me say, I'm not asking about things like md5(md5(..., there are already topics about it. My question is this: we allow our clients to store their passwords locally. Naturally, we don't want them stored in plain text, so we hmac them locally before storing and/or sending. Now, this is fine, but if this were all we did, then the server would have the stored hmac, and since the client only needs to send the hmac, not the plain text password, an attacker could use the stored hashes from the server to access anyone's account (in the catastrophic scenario where someone gets such access to the database, of course). So our idea was to encode the password on the client once via hmac, send it to the server, and there encode it a second time via hmac and match it against the stored, twice-hmac'ed password. This would ensure that: the client can store the password locally without having to store it as plain text; the client can send the password without having to worry (too much) about other network parties; and the server can store the password without having to worry about someone stealing it from the server and using it to log in. Naturally, all the other things (strong passwords, double salt, etc.) apply as well, but aren't really relevant to the question. The actual question is: does this sound like a solid security design? Did we overlook any flaws with doing things this way? Is there maybe a security pattern for something like this?
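    For concreteness, a minimal sketch of the scheme as described (Python; the key names are invented, and a real deployment would use per-user salts and a password KDF such as bcrypt or PBKDF2 rather than a bare HMAC):

        import hmac
        import hashlib

        CLIENT_KEY = b"client-side-key"   # illustrative only
        SERVER_KEY = b"server-side-key"   # illustrative only

        def client_hash(password):
            # What the client stores locally and sends over the wire.
            return hmac.new(CLIENT_KEY, password.encode(), hashlib.sha256).digest()

        def server_hash(client_digest):
            # What the server persists: HMAC applied a second time.
            return hmac.new(SERVER_KEY, client_digest, hashlib.sha256).digest()

        # Registration: the server stores the double hash.
        stored = server_hash(client_hash("correct horse battery staple"))

        # Login: the client sends client_hash(password); the server re-wraps it
        # and compares in constant time to avoid timing leaks.
        candidate = server_hash(client_hash("correct horse battery staple"))
        assert hmac.compare_digest(stored, candidate)

    The point the sketch makes explicit: a database leak yields only server_hash(x) values, which cannot be replayed as the client-sent x.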

    Read the article

  • How to efficiently store and update binary data in Mongodb?

    - by Rocketman
    I am storing a large binary array within a document. I wish to continually add bytes to this array and sometimes change the value of existing bytes. I was looking for some $append_bytes and $replace_bytes type of modifiers, but it appears that the best I can do is $push for arrays. It seems like this would be doable by performing seek-write type operations if I had access somehow to the underlying bson on disk, but it does not appear to me that there is any way to do this in mongodb (and probably for good reason). If I were instead to just query this binary array, edit or add to it, and then update the document by rewriting the entire field, how costly will this be? Each binary array will be on the order of 1-2MB, and updates occur once every 5 minutes and across 1000s of documents. Worse yet, there is no easy way to spread these out (in time), and they will usually be happening close to one another on the 5 minute intervals. Does anyone have a good feel for how disastrous this will be? Seems like it would be problematic. An alternative would be to store this binary data as separate files on disk, implement a thread pool to efficiently manipulate the files on disk, and reference the filename from my mongodb document. (I'm using python and pymongo, so I was looking at pytables.) I'd prefer to avoid this, though, if possible. Is there any other alternative that I am overlooking here? Thanks in advance.
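    Since the question mentions pymongo, one DB-side alternative worth sketching is a manual "bucket" layout: split each blob into fixed-size chunk documents, so an edit rewrites only the chunk(s) it touches instead of a 1-2MB field. The database and collection names here are invented:

        from pymongo import MongoClient
        from bson.binary import Binary

        CHUNK = 64 * 1024  # 64KB per chunk document

        client = MongoClient()
        chunks = client.app.blob_chunks
        chunks.create_index([("blob_id", 1), ("n", 1)], unique=True)

        def write_bytes(blob_id, offset, data):
            """Overwrite (or append) bytes at `offset`, one chunk at a time."""
            while data:
                n, start = divmod(offset, CHUNK)
                take = min(CHUNK - start, len(data))
                doc = chunks.find_one({"blob_id": blob_id, "n": n})
                buf = bytearray(doc["data"]) if doc else bytearray()
                if len(buf) < start:                  # pad a gap with zeros
                    buf.extend(b"\x00" * (start - len(buf)))
                buf[start:start + take] = data[:take]
                chunks.update_one({"blob_id": blob_id, "n": n},
                                  {"$set": {"data": Binary(bytes(buf))}},
                                  upsert=True)
                offset, data = offset + take, data[take:]

    Appends then touch only the final chunk, and the 5-minute batch moves kilobytes per document rather than megabytes.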

    Read the article

  • Converting contents of a byte array to wchar_t*

    - by Christopher MacKinnon
    I seem to be having an issue converting a byte array (containing the text from a Word document) to an LPTSTR (wchar_t *) object. Every time the code executes, I get a bunch of unwanted Unicode characters returned. I figure it is because I am not making the proper calls somewhere, or not using the variables properly, but I'm not quite sure how to approach this. Hopefully someone here can guide me in the right direction. The first thing that happens is we call into C# code to open up Microsoft Word and convert the text in the document into a byte array:

        byte document __gc[];
        document = word->ConvertToArray(filename);

    The contents of document are as follows: {84, 101, 115, 116, 32, 68, 111, 99, 117, 109, 101, 110, 116, 13, 10}, which ends up being the string "Test Document". Our next step is to allocate the memory to store the byte array in an LPTSTR variable:

        byte __pin * value;
        value = &document[0];
        LPTSTR image;
        image = (LPTSTR)malloc( document->Length + 1 );

    Once we execute the line that allocates the memory, our image variable gets filled with a bunch of unwanted Unicode characters: ????????????????? And then we do a memcpy to transfer over all of the data:

        memcpy(image, value, document->Length);

    which just causes more unwanted Unicode characters to appear: ????????????????? I figure the issue we are having is related either to how we are storing the values in the byte array, or to how we are copying the data from the byte array to the LPTSTR variable. Any help explaining what I'm doing wrong, or anything pointing me in the right direction, will be greatly appreciated.
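    The root mismatch is that the document bytes are narrow characters (one byte each) while LPTSTR in a Unicode build points at wchar_t (two bytes each), so both the malloc size and the raw memcpy are off. A hedged plain-Win32 sketch of the usual conversion, leaving aside the managed-pinning details:

        #include <windows.h>
        #include <cstdlib>

        // Convert narrow (ANSI) bytes to a newly malloc'ed wide string.
        // Caller frees the result with free(); returns NULL on failure.
        wchar_t* BytesToWide(const BYTE* bytes, int byteCount)
        {
            // First call: ask how many wchar_t's the converted text needs.
            int wideLen = MultiByteToWideChar(CP_ACP, 0,
                                              (const char*)bytes, byteCount,
                                              NULL, 0);
            if (wideLen == 0)
                return NULL;

            // Size in wchar_t units, not bytes: the original
            // malloc(document->Length + 1) was short by sizeof(wchar_t).
            wchar_t* wide = (wchar_t*)malloc((wideLen + 1) * sizeof(wchar_t));
            if (!wide)
                return NULL;

            MultiByteToWideChar(CP_ACP, 0, (const char*)bytes, byteCount,
                                wide, wideLen);
            wide[wideLen] = L'\0';   // the source bytes carry no terminator
            return wide;
        }

    (The "????" garbage right after malloc is expected, by the way: malloc returns uninitialized memory.)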

    Read the article

  • Is it possible, in a django template, to check if an object is contained in a list

    - by AlexH
    I'm very new to django, about a week into it. I'm making a site where users enter stuff, then other users can vote on whether they like the stuff or not. I know it's not so novel, but it's a good project to learn a bunch of tools. I have a many-to-many table for storing who likes or dislikes what. Before I render the page, I pull out all the likes and dislikes for the current user, along with the stuff I'm going to show on the page. When I render the page, I go through the list of stuff and print it out one item at a time. I want to show the user which stuff they liked, and which they didn't. So in my django template, I have an object called entry. I also have two lists of objects called likes and dislikes. Is there any way to determine whether entry is a member of either list, inside my django template? I think what I'm looking for is a filter where I can say something like {% if entry|in:likes %} or {% if likes|contains:entry %}. I know I could add a method to my model and check for each entry individually, but that seems like it would be database intensive. Is there a better way to think about this problem?
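    On newer Django versions the template language supports this directly ({% if entry in likes %} works from Django 1.2 onward); on earlier versions, a two-line custom filter gives exactly the entry|in:likes syntax wished for above. A sketch, with the app and module names invented:

        # myapp/templatetags/list_tags.py
        from django import template

        register = template.Library()

        @register.filter(name='in')
        def is_in(value, container):
            """Usage: {% if entry|in:likes %} ... {% endif %}"""
            return value in container

    After adding {% load list_tags %} at the top of the template, the membership test runs against the lists already fetched, with no extra database queries.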

    Read the article

  • How to implement a network protocol?

    - by gotch4
    Here is a generic question. I'm not in search of the best answer; I'd just like you to express your favourite practices. I want to implement a network protocol in Java (but this is a rather general question, I faced the same issues in C++), and this is not the first time, as I have done this before. But I think I am missing a good way to implement it. In fact, usually it's all about exchanging text messages and some byte buffers between hosts, storing the status, and waiting until the next message comes. The problem is that I usually end up with a bunch of switches and more or less complex if statements that react to different statuses / messages. The whole thing usually gets complicated and hard to maintain. Not to mention that sometimes what comes out has some "blind spots", I mean statuses of the protocol that have not been covered and that behave in an unpredictable way. I tried to write down some state machine classes that take care of checking start and end statuses for each action in more or less smart ways. This makes programming the protocol very complicated, as I have to write lines and lines of code to cover every possible situation. What I'd like is something like a good pattern, or a best practice, that is used in programming complex protocols: easy to maintain, easy to extend, and very readable. What are your suggestions?
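    One pattern that tends to tame the switch sprawl is the State pattern: each protocol state is an object (or enum constant) that owns its own message handling and names its successor state, so uncovered state/message pairs fail loudly in one place instead of becoming blind spots. A compact Java sketch with invented states and messages:

        import java.util.EnumMap;
        import java.util.Map;

        public class ProtocolSession {
            enum State { HANDSHAKE, AUTHENTICATED, CLOSED }

            interface Handler {
                State onMessage(String message);   // returns the next state
            }

            private final Map<State, Handler> handlers = new EnumMap<>(State.class);
            private State state = State.HANDSHAKE;

            public ProtocolSession() {
                handlers.put(State.HANDSHAKE, msg ->
                    msg.startsWith("HELLO") ? State.AUTHENTICATED : State.CLOSED);
                handlers.put(State.AUTHENTICATED, msg ->
                    msg.equals("BYE") ? State.CLOSED : State.AUTHENTICATED);
                handlers.put(State.CLOSED, msg -> {
                    throw new IllegalStateException("message after close: " + msg);
                });
            }

            public void receive(String message) {
                state = handlers.get(state).onMessage(message);
            }
        }

    A forgotten transition surfaces as an immediate NullPointerException in receive(), which a unit test catches easily, rather than as a silent fall-through in a switch.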

    Read the article

  • Database schema for a library

    - by ABach
    Hi all - I'm designing a library management system for a department at my university, and I wanted to enlist your eyes on my proposed schema. This post is primarily concerned with how we store multiple copies of each book; something about what I've designed rubs me the wrong way, and I'm hoping you all can point out better ways to tackle things. For dealing with users checking out books, I've devised three tables: book, customer, and book_copy. The relationships between these tables are as follows: every book has many book_copies (to avoid duplicating the book's information while storing the fact that we have multiple copies of that book), and every customer has many book_copies (the other end of the relationship). The tables themselves are designed like this:

        book:      id, title, author, isbn, etc.
        customer:  id, first_name, last_name, email, address, city, state, zip, etc.
        book_copy: id, book_id (FK to book), customer_id (FK to customer),
                   checked_out, due_date, etc.

    Something about this seems incorrect (or at least inefficient) to me - the perfectionist in me feels like I'm not normalizing this data correctly. What say ye? Is there a better, more effective way to design this schema? Thanks!
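    For what it's worth, the usual refinement of this shape is to split the loan (an event) out of book_copy (a physical thing), so a copy doesn't have to carry a nullable customer_id and the checkout history survives returns. A sketch in generic SQL:

        -- book_copy describes the physical item; loan records each checkout.
        CREATE TABLE book_copy (
            id      INTEGER PRIMARY KEY,
            book_id INTEGER NOT NULL REFERENCES book(id)
        );

        CREATE TABLE loan (
            id           INTEGER PRIMARY KEY,
            book_copy_id INTEGER NOT NULL REFERENCES book_copy(id),
            customer_id  INTEGER NOT NULL REFERENCES customer(id),
            checked_out  DATE    NOT NULL,
            due_date     DATE    NOT NULL,
            returned_on  DATE              -- NULL while the copy is still out
        );

        -- A copy is available when it has no open loan:
        SELECT c.id
          FROM book_copy c
         WHERE NOT EXISTS (SELECT 1
                             FROM loan l
                            WHERE l.book_copy_id = c.id
                              AND l.returned_on IS NULL);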

    Read the article

  • Kohana 3.2 - Database Session losing data on new Page Request

    - by reado
    I've set up my dev Kohana server to use an encrypted database as the default session type. I'm also using this in combination with Auth to implement user authentication. Right now my users are able to authenticate correctly, and the authentication keys are being stored in the session. I'm also storing additional data, like the user's firstname and businessname, during the login procedure. When my login function is ready to redirect the user to the user dashboard, I can see all the data correctly when I do Session::instance()->as_array():

        Array ( [auth_user] => NRyk6lA8 [businessname] => Dudetown [firstname] => Matt )

    As soon as I redirect the user to another page, Session::instance()->as_array() is empty. By dumping out the Session::instance() object, I can see that the session ids are still the same. When I look at my database table, though, I don't see any session records being saved, and my session table is empty. My bootstrap.php contains:

        Session::$default = 'database';
        Cookie::$salt = 'asdfasdf';
        Cookie::$expiration = 1209600;
        Cookie::$domain = FALSE;

    and my session.php config file looks like:

        return array(
            'database' => array(
                'name' => 'auth_user',
                'encrypted' => TRUE,
                'lifetime' => 24 * 3600,
                'group' => 'default',
                'table' => 'sessions',
                'columns' => array(
                    'session_id'  => 'session_id',
                    'last_active' => 'last_active',
                    'contents'    => 'contents'
                ),
                'gc' => 500,
            ),
        );

    I've looked high and low for an answer; if anyone has any suggestions, I'm all ears! Thanks!
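    Not a sure diagnosis, but a common Kohana 3.x workaround is to flush the session to the database explicitly before redirecting, since the write normally happens only at shutdown; it is also worth checking that the sessions.contents column is large enough, as encrypted payloads are much larger than plain ones and a too-small column can make the insert fail silently. A sketch of the explicit write in a login action (variable names invented):

        <?php
        // Force the database session to persist before redirecting.
        $session = Session::instance();
        $session->set('firstname', $firstname);
        $session->set('businessname', $businessname);
        $session->write();                       // flush to the sessions table now

        $this->request->redirect('dashboard');   // Kohana 3.2-style redirect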

    Read the article

  • How else can I email a file using ASP.NET?

    - by Jane T
    Hi all, I'm using the code below in an ASP.NET page to send a file via email from our users' home computers to a mailbox that is used for receiving work that needs photocopying. The code works fine when sending a file within our network, but fails when our users are at home and connected via our SSL VPN; there appears to be a bug in our VPN where it doesn't allow the file to be temporarily saved on the webserver before being sent via email. Can anyone offer any other suggestions on how to attach a file to an ASP.NET page and have the file sent via email without storing it on the web server? Many thanks, Jane.

        MailMessage mail = new MailMessage();
        mail.From = txtFrom.Text;
        mail.To = txtTo.Text;
        mail.Cc = txtFrom.Text;
        mail.Subject = txtSubject.Text;
        mail.Body = "test";
        mail.BodyFormat = MailFormat.Html;
        string strdir = "E:\\TEMPforReprographics\\"; //<-------PROBLEM AREA
        string strfilename = Path.GetFileName(txtFile.PostedFile.FileName);
        try
        {
            txtFile.PostedFile.SaveAs(strdir + strfilename);
            string strAttachment = strdir + strfilename;
            mail.Attachments.Add(new MailAttachment(strdir + strfilename));
            SmtpMail.SmtpServer = "172.16.0.88";
            SmtpMail.Send(mail);
            Response.Redirect("Thanks.aspx", true);
        }
        catch
        {
            Response.Write("An error has occurred sending the email or uploading the file.");
        }
        finally
        {
        }
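    One way around the temp file, sketched with the newer System.Net.Mail API (the code above uses the older System.Web.Mail): build the attachment straight from the uploaded file's request stream, so nothing is written to the webserver's disk. The server address and control names are carried over from the question:

        using System.Net.Mail;

        MailMessage mail = new MailMessage(txtFrom.Text, txtTo.Text);
        mail.CC.Add(txtFrom.Text);
        mail.Subject = txtSubject.Text;
        mail.Body = "test";
        mail.IsBodyHtml = true;

        // No SaveAs(): the attachment reads from the posted file's InputStream.
        string fileName = System.IO.Path.GetFileName(txtFile.PostedFile.FileName);
        mail.Attachments.Add(new Attachment(txtFile.PostedFile.InputStream, fileName));

        SmtpClient client = new SmtpClient("172.16.0.88");
        client.Send(mail);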

    Read the article

  • App is fast on 3GS but slow on 3G

    - by Anthony Chan
    Hi all, I'm new to computer coding and have just finished coding an app, which I tested on both a 3G and a 3GS. On the 3GS, it worked normally, just as in the simulator. However, when I tried to run it on the 3G, the app became extremely slow. I'm not sure what the reason is, and I hope someone can shed some light on this. Generally, my app has a couple of view controller classes, with one of them being the title page, one being the main page, one settings, etc. I used a dissolve to transition from the title page to the main page, but even this simple transition performs un-smoothly on a 3G! The other parts of my app involve zooming into some images by scaling them up, switching images by push or dissolve upon receiving touch events, saving photos into the photo library, and storing and retrieving some photos in a folder and some data in a SQLite database - each of these actions is also un-smooth. Compared with some heavy-graphics or heavy-maths apps, I think mine is pretty simple. I have no clue why the app behaves so slowly and un-smoothly that it is barely usable on a 3G. Any help or direction would be much appreciated. Thanks for helping out.

    Read the article

  • Request for advice about class design, inheritance/aggregation

    - by Lasse V. Karlsen
    I have started writing my own WebDAV server class in .NET, and the first class I'm starting with is a WebDAVListener class, modelled after how the HttpListener class works. Since I don't want to reimplement the core http protocol handling, I will use HttpListener for all it's worth, and thus I have a question. What would the suggested way be to handle this: (1) Implement all the methods and properties found inside HttpListener, just changing the class types where it matters (i.e. the GetContext + EndGetContext methods would return a different class for WebDAV contexts), and store and use a HttpListener object internally. (2) Construct WebDAVListener by passing it a HttpListener class to use? (3) Create a wrapper for HttpListener with an interface, and construct WebDAVListener by passing it an object implementing this interface? If going the route of passing a HttpListener (disguised or otherwise) to the WebDAVListener, would you expose the underlying listener object through a property, or would you expect the program that used the class to keep a reference to the underlying HttpListener? Also, in this case, would you expose some of the methods of HttpListener through the WebDAVListener, like Start and Stop, or would you again expect the program that used it to keep the HttpListener reference around for all those things? My initial reaction tells me that I want a combination. For one thing, I would like my WebDAVListener class to look like a complete implementation, hiding the fact that there is a HttpListener object beneath it. On the other hand, I would like to build unit-tests without actually spinning up a networked server, so some kind of mocking ability would be nice to have as well, which suggests the interface-wrapper way. One way I could solve this would be this:

        public WebDAVListener()
            : this(new HttpListenerWrapper()) { }

        public WebDAVListener(IHttpListenerWrapper listener) { }

    And then I would implement all the methods of HttpListener (at least all those that make sense) in my own class, mostly just chaining the calls to the underlying HttpListener object. What do you think? Final question: if I go the way of the interface, assuming the interface maps 1-to-1 onto the HttpListener class and is written just to add support for mocking, is such an interface called a wrapper or an adapter?
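    A sketch of the combination being described, with the member set trimmed to essentials (the real HttpListener surface is much larger):

        using System.Net;

        public interface IHttpListenerWrapper
        {
            void Start();
            void Stop();
            HttpListenerContext GetContext();
        }

        // Production adapter: delegates straight to the real HttpListener.
        public sealed class HttpListenerWrapper : IHttpListenerWrapper
        {
            private readonly HttpListener inner = new HttpListener();

            public void Start() { inner.Start(); }
            public void Stop() { inner.Stop(); }
            public HttpListenerContext GetContext() { return inner.GetContext(); }
        }

        public class WebDAVListener
        {
            private readonly IHttpListenerWrapper listener;

            public WebDAVListener() : this(new HttpListenerWrapper()) { }

            public WebDAVListener(IHttpListenerWrapper listener)  // tests pass a mock
            {
                this.listener = listener;
            }

            public void Start() { listener.Start(); }
            public void Stop() { listener.Stop(); }
        }

    One caveat worth noting: HttpListenerContext has no public constructor, so a fully mockable design usually ends up wrapping the context type behind an interface as well.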

    Read the article

  • Large ListView containing images in Android

    - by Marco W.
    For various Android applications, I need large ListViews, i.e. such views with 100-300 entries. All entries must be loaded in bulk when the application is started, as some sorting and processing is necessary and the application cannot know which items to display first, otherwise. So far, I've been loading the images for all items in bulk as well, which are then saved in an ArrayList<CustomType> together with the rest of the data for each entry. But of course, this is not a good practice, as you're very likely to have an OutOfMemoryException then: The references to all images in the ArrayList prevent the garbage collector from working. So the best solution is, obviously, to load only the text data in bulk whereas the images are then loaded as needed, right? The Google Play application does this, for example: You can see that images are loaded as you scroll to them, i.e. they are probably loaded in the adapter's getView() method. But with Google Play, this is a different problem, anyway, as the images must be loaded from the Internet, which is not the case for me. My problem is not that loading the images takes too long, but storing them requires too much memory. So what should I do with the images? Load in getView(), when they are really needed? Would make scrolling sluggish. So calling an AsyncTask then? Or just a normal Thread? Parametrize it? I could save the images that are already loaded into a HashMap<String,Bitmap>, so that they don't need to be loaded again in getView(). But if this is done, you have the memory problem again: The HashMap stores references to all images, so in the end, you could have the OutOfMemoryException again. I know that there are already lots of questions here that discuss "Lazy loading" of images. But they mainly cover the problem of slow loading, not too much memory consumption.
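    A sketch of the usual middle ground for this exact situation: decode in getView() via a background task, but cache in a size-bounded android.util.LruCache (API 12+, or the support library's version), so the "cache holds references to everything" OOM cannot happen. Class and field names are invented:

        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;
        import android.os.AsyncTask;
        import android.util.LruCache;
        import android.widget.ImageView;

        public class ImageLoader {
            private final LruCache<String, Bitmap> cache;

            public ImageLoader() {
                int budget = (int) (Runtime.getRuntime().maxMemory() / 8);
                cache = new LruCache<String, Bitmap>(budget) {
                    @Override
                    protected int sizeOf(String key, Bitmap value) {
                        return value.getByteCount();  // budget in bytes, not entries
                    }
                };
            }

            // Call from the adapter's getView(): instant if cached, async if not.
            public void load(final String path, final ImageView view) {
                Bitmap hit = cache.get(path);
                if (hit != null) {
                    view.setImageBitmap(hit);
                    return;
                }
                new AsyncTask<Void, Void, Bitmap>() {
                    @Override
                    protected Bitmap doInBackground(Void... ignored) {
                        return BitmapFactory.decodeFile(path);  // off the UI thread
                    }

                    @Override
                    protected void onPostExecute(Bitmap bmp) {
                        if (bmp != null) {
                            cache.put(path, bmp);
                            view.setImageBitmap(bmp);
                        }
                    }
                }.execute();
            }
        }

    Unlike the HashMap<String,Bitmap> variant, old bitmaps are evicted once the byte budget is exceeded; with recycled ListView rows, you would additionally tag the ImageView and check the tag in onPostExecute so a slow decode doesn't land in a reused row.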

    Read the article

  • How to sum up a fetched result's number property based on the object's category?

    - by mr_kurrupt
    I have an NSFetchRequest that is returning all my saved objects (call them Items) and storing them in an NSMutableArray. Each of these Items has a category, an amount, and some other properties. My goal is to check the category of each Item and store the sum of the amounts for objects of the same category. So if I had these Items: Red; 10.00, Blue; 20.00, Green; 5.00, Red; 5.00, Green; 15.00, then I would end up with an array or other type of container that has: Red; 15.00, Blue; 20.00, Green; 20.00. What would be the best way to organize the data in such a manner? I was going to create an object class (call it Totals) that just has the category and amount. As I traverse the fetch results in a for-loop, I would add Items with the same category into a Totals object and store them in an NSMutableArray. The problem I ran into with that is that I'm not sure how to check whether an array contains a Totals object with a specific property - specifically, a category that already exists. So if 'Red' exists, add the amount to it; otherwise create a new Totals object with category 'Red' and the first Item's amount. Thanks.
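    A sketch of the Totals idea with the containment problem sidestepped: key the running sums by category in an NSMutableDictionary instead of searching an array (assumes Item exposes category and amount properties, as described):

        // Sum amounts per category; the dictionary replaces the array search.
        NSMutableDictionary *totals = [NSMutableDictionary dictionary];

        for (Item *item in fetchedItems) {
            NSNumber *running = [totals objectForKey:item.category];
            double sum = [running doubleValue] + [item.amount doubleValue];
            [totals setObject:[NSNumber numberWithDouble:sum]
                       forKey:item.category];
        }

        // totals now holds Red -> 15.0, Blue -> 20.0, Green -> 20.0
        for (NSString *category in totals) {
            NSLog(@"%@: %@", category, [totals objectForKey:category]);
        }

    ([running doubleValue] is 0 when the key is missing, since messaging nil returns 0, so no existence check is needed.)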

    Read the article

  • Browser download file prompt using javascript

    - by aix
    Hi, I was wondering if there is any method to trigger the browser's download-file prompt using JavaScript. My reason: users will be uploading files to a local fileserver which cannot be accessed from the webserver. In other words, both will be on different domains! E.g., let's say the website is hosted on www.xyz.com, but the files reside on a local file server with an address like \\10.10.10.01\Files\file.txt. How am I uploading/transferring files to the local fileserver? Using ACTIVEX (yikes) & VBScript!!!! (don't ask :-) So I am storing the local file path in my database and binding that data to a grid. When the user clicks on the link, the file opens in a window (using JavaScript). The problem is that certain file types, like text, jpg, pdf, etc., open inside the browser window. How would I be able to implement content-type or content-disposition using client-side scripting? Is that even possible? Hoping my description was clear. Any ideas? EDIT: the local file server has a Windows shared folder in which the files are saved.

    Read the article

  • when to clear or make null asp .net mvc models?

    - by SARAVAN
    Hi, I am working in an ASP.NET MVC application. I am using models to store some of the values which I need to preserve between page posts, in the form of data contexts. Say my model looks something like this:

        public SelectedUser SelectedUserDetails
        {
            // get and set use
            // this.datacontext.data.SelectedUser = .....
            // return this.datacontext.data.....
        }

    Now, when does this model need to be cleared? I have many such models with many properties and data contexts, but I don't have an idea of when to clear them. Is there a way, or an event that can be triggered automatically, when the model has not been used for a long time? One way I thought of: when I navigate away from a page which uses my underlying model, I can clear that model if it's no longer used anywhere, and initialise it back as needed. But I would need to clear many models at many points. Is there an automatic way to clear models when they are no longer used? My code can take care of initialising them when I need them, but I don't know when to clear them once I no longer need them. I need this to avoid memory-related issues. Any thoughts or comments?

    Read the article

  • Prevent Javascript Execution in JQuery html()

    - by Mahan
    Well, I have a div that contains some more HTML and a lot of JavaScript in it:

        <div id="mydiv">
            <p>Hello Philippines</p>
            my first time in Philippines is nice
            <script type="text/javascript">alert("how was it became nice?");</script>
            well i experienced a lot of things
            <script type="text/javascript">alert("and how about your Japan's trip?");</script>
            well its more nicer ^^ but both countries are good! hahah
        </div>

    Now what I want is to put the non-JavaScript code and the JavaScript code into two separate variables:

        var html = $("#mydiv").html();

    But my problem is that the JavaScript executes, which stops me from writing the code I want: storing the JavaScript and non-JavaScript parts in two different variables. My questions: how can I stop the JavaScript from executing when the content is put inside the div? And how can I safely store the JavaScript and non-JavaScript code in two different variables? NOTE: I need the stored JavaScript for later execution.
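    A sketch of the splitting step using a detached element and plain DOM: script elements assigned through innerHTML are never executed by the browser, which is what makes this safe. It assumes the div's markup is available as a string named markup (e.g. fetched via AJAX rather than already live in the page):

        // Parse into a detached node; innerHTML never runs the scripts.
        var tmp = document.createElement('div');
        tmp.innerHTML = markup;

        // Pull the script bodies out, removing them from the markup.
        var scripts = [];
        var nodes = tmp.getElementsByTagName('script');
        while (nodes.length) {                    // live list shrinks as we remove
            scripts.push(nodes[0].text);
            nodes[0].parentNode.removeChild(nodes[0]);
        }

        var htmlOnly = tmp.innerHTML;             // variable 1: markup only
        var jsOnly = scripts.join('\n');          // variable 2: script source

        // Later, when the saved JavaScript is wanted:
        // $.globalEval(jsOnly);

    If the content is already in the live page, the alerts have fired on insertion; the split has to happen on the string before it goes into the document.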

    Read the article

  • Most watched videos this week

    - by Jan Hancic
    I have a youtube-like web page where users upload & watch videos. I would like to add a "most watched videos this week" list to my page, but this list should not contain just the videos that were uploaded in the previous week; it should consider all videos. I'm currently recording views in a column, so I have no information on when a video was watched. So now I'm searching for a solution for how to record this data. The first option is the most obvious (and the correct one, as far as I know): have a separate table in which you insert a new line every time you want to record a new view (storing the ID of the video and the timestamp). I'm worried that I would quickly get huge amounts of data in this table, and queries using it would be extremely slow (we get about 3 million views a month). The second solution isn't as flexible but is easier on the database. I would add 7 columns to the "videos" table, one for each day of the week: views_monday, views_tuesday, views_wednesday, ... and increment the value in the correct column based on the day, resetting the current day's column to 0 at midnight. I could then easily get the most watched videos of the week by summing these 7 columns. What do you think: should I bother with the first solution, or will the second one suffice for my case? If you have a better solution, please share! Oh, I'm using MySQL.
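    A middle ground between the two designs is a per-day aggregate table: one counter row per video per day (so ~30 rows per video per month, not one row per view), which keeps a rolling seven days queryable without the raw-log bloat. A MySQL sketch with invented names:

        -- One counter row per video per calendar day.
        CREATE TABLE video_views_daily (
            video_id INT UNSIGNED NOT NULL,
            view_day DATE         NOT NULL,
            views    INT UNSIGNED NOT NULL DEFAULT 0,
            PRIMARY KEY (video_id, view_day)
        );

        -- On each view: a single upsert, no per-view rows.
        INSERT INTO video_views_daily (video_id, view_day, views)
        VALUES (42, CURDATE(), 1)
        ON DUPLICATE KEY UPDATE views = views + 1;

        -- Most watched in the last 7 days:
        SELECT video_id, SUM(views) AS week_views
          FROM video_views_daily
         WHERE view_day >= CURDATE() - INTERVAL 6 DAY
         GROUP BY video_id
         ORDER BY week_views DESC
         LIMIT 10;

    Unlike the seven-column variant, there is no midnight reset to schedule, and old rows can be pruned or rolled up at leisure.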

    Read the article

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, thus the database will commonly hold several millions elements. The tables used are pretty simple : one table for ids, raw content and insertion date; and one table storing tags and their values associated to an id. User search mostly concern tags values, so SELECTs usually consist of JOIN queries on ids on the two tables. To sum it up : 2 tables Lots of INSERT no UPDATE some DELETE, once a day at most some user-generated SELECT with JOIN huge data set What would an optimal server configuration (software and hardware, I assume for example that RAID10 could help) be for my PostgreSQL server, given these requirements ? By optimal, I mean one that allows SELECT queries taking a reasonably little amount of time. I can provide more information about the current setup (like tables, indexes ...) if needed.
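    Not a definitive tuning recipe, but a sketch of the postgresql.conf knobs that usually matter most for this insert-heavy, read-mostly shape (values assume a dedicated machine with 16GB RAM; scale to your hardware):

        # Sketch for a write-once / read-many workload on a 16GB dedicated box.
        shared_buffers = 4GB                 # ~25% of RAM is a common start
        effective_cache_size = 12GB          # what the OS page cache can hold
        work_mem = 64MB                      # per sort/join; the id JOINs use this
        maintenance_work_mem = 1GB           # faster index builds and vacuums
        wal_buffers = 16MB                   # smooths bursts of INSERTs
        checkpoint_completion_target = 0.9   # spread checkpoint I/O over time
        random_page_cost = 1.1               # RAID10 w/ write-back cache, or SSD

    Two structural notes that fit the stated pattern: partitioning both tables by insertion date turns the daily purge into a cheap partition drop instead of a DELETE, and on the hardware side RAID10 mainly buys fsync throughput for the WAL, which is where heavy INSERT traffic hurts first.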

    Read the article

  • [nHibernate] casting string to bool using nHibernate Criteria

    - by code-zoop
    I have an NHibernate query using Criteria, and I am trying to cast a string to bool in the query itself. I have done the same with casting a string to int, and that works well (the "DataField" property is "1" as a string):

        var result = Session
            .CreateCriteria<Car>()
            .Add(Restrictions.Eq(
                Projections.Cast(NHibernateUtil.Int32,
                                 Projections.Property("DataField")), 1))
            .List<Car>();
        tx.Commit();

    But when I try to do the same with bool, I do not get the expected result:

        var result = Session
            .CreateCriteria<Car>()
            .Add(Restrictions.Eq(
                Projections.Cast(NHibernateUtil.Boolean,
                                 Projections.Property("DataField")), true))
            .List<Car>();
        tx.Commit();

    "DataField" is the string "True", but the result is an empty list, where it should contain 100 elements with the "DataField" property string set to "True". I have tried with the strings "true" and "1", but the result is still an empty list. [EDIT] As commented below, I could check for the string "True" or "False", but I would say this is a more general question than just for the Boolean. Note that the idea is to have some sort of key-value representation of the data, where the value can be of different data types. I need the value table to contain all data, so storing the data as strings seems like the cleanest solution! I have been able to use the method above to store both int and double as strings, and to do the cast in the query, but I have not succeeded using the same method for DateTime and Boolean. And for DateTime it is crucial to have the actual DateTime object. How can I make the casts from string to bool, and from string to DateTime, work in the queries? Thanks
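    A workaround sketch, given that several databases have no boolean type for a CAST target (Oracle, for one): compare booleans in their canonical string form, and store DateTimes as ISO-8601 strings, whose lexicographic order matches chronological order, so range filters work without any cast. Purely illustrative:

        // Booleans: plain string match against the canonical form.
        var trueCars = Session
            .CreateCriteria<Car>()
            .Add(Restrictions.Eq("DataField", "True"))
            .List<Car>();

        // DateTimes stored as "2010-05-07T13:45:00" sort chronologically,
        // so a string range query is also a date range query:
        var recent = Session
            .CreateCriteria<Car>()
            .Add(Restrictions.Ge("DataField", "2010-01-01T00:00:00"))
            .List<Car>();

    Parsing back to a real DateTime then happens in application code after the query, which keeps the all-strings value table intact.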

    Read the article

  • How can I determine when an InnoDB table was last changed?

    - by David M
    I've had success in the past storing the (heavily) processed results of a database query in memcached, using the last update time of the underlying tables(s) as part of the cache key. For MyISAM tables, that last changed time is available in SHOW TABLE STATUS. Unfortunately, that's usually NULL for InnoDB tables. In MySQL 4.1, the ctime for an InnoDB in its SHOW TABLE STATUS line was usually its actual last update time, but that doesn't seem to be true for MySQL 5.1. There is a DATETIME field in the table, but it only shows when a row has been modified - it cannot show the deletion time of a row that's not there anymore! So, I really cannot use MAX(update_time). Here's the really tricky part. I have a number of replicas that I do reads from. Can I figure out the state of the table that doesn't rely on when the changes have actually been applied? My conclusion after working on this for a while is that it's not going to be possible to get this information as cheaply as I'd like. I'm probably going to cache data until the time that I expect the table to change (it's updated once a day), and let the query cache help out where it can.
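    One workaround sketch - it adds a little write overhead, but the result replicates, which matters for the read-replica requirement: maintain a tiny side table from triggers, so DELETEs are captured along with inserts and updates. MySQL syntax, with the tracked table name invented:

        -- Per-table last-modified timestamps, kept current by triggers.
        CREATE TABLE table_mtime (
            table_name  VARCHAR(64) PRIMARY KEY,
            modified_at DATETIME NOT NULL
        );

        CREATE TRIGGER orders_touch_ins AFTER INSERT ON orders
        FOR EACH ROW REPLACE INTO table_mtime VALUES ('orders', NOW());

        CREATE TRIGGER orders_touch_upd AFTER UPDATE ON orders
        FOR EACH ROW REPLACE INTO table_mtime VALUES ('orders', NOW());

        CREATE TRIGGER orders_touch_del AFTER DELETE ON orders
        FOR EACH ROW REPLACE INTO table_mtime VALUES ('orders', NOW());

        -- Cache-key ingredient, readable on any replica:
        SELECT modified_at FROM table_mtime WHERE table_name = 'orders';

    Since the timestamp rides along with replication, a lagging replica reports the state of the data it actually has, which is arguably the right answer for a cache key anyway.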

    Read the article

  • Domain Authentication from .NET Client over VPN

    - by Holy Christ
    I am writing a ClickOnce WPF app that will sometimes be used over VPN. The app uses resources available only to domain-authenticated users. Some of the things it does include accessing SSRS reports, accessing LDAP to look up user information, hitting web services, etc. When a user logs in from a machine that is not authenticated on the domain, I need to somehow get his credentials, authenticate him on the domain, and store his credentials. What is the recommended approach for authenticating domain users over VPN? How can I securely store the credentials? I've found several articles, but not much posted recently, and a lot of the solutions seem kind of hacky, or aren't very secure (i.e., storing strings as clear text in memory). It would be cool if I could use the ActiveDirectoryMembershipProvider, but that seems to be geared for use in web apps. EDIT: The above is kind of a workaround. The user must enter their domain credentials to authenticate on the VPN. It would be ideal to access the credentials the user has already entered to log in to the VPN, instead of WindowsIdentity.GetCurrent() (which returns the user logged into the computer). Any ideas on how that could work? We use Juniper Networks to connect to the VPN. Thanks!
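    For the authenticate-and-hold-credentials part, a sketch using APIs available from .NET 3.5: PrincipalContext.ValidateCredentials for the domain check, and SecureString so the password isn't parked in a plain managed string any longer than necessary (the domain name is invented):

        using System;
        using System.DirectoryServices.AccountManagement;
        using System.Runtime.InteropServices;
        using System.Security;

        public static class DomainAuth
        {
            // Validate domain credentials from a non-domain-joined machine.
            public static bool Validate(string user, SecureString password)
            {
                using (var ctx = new PrincipalContext(ContextType.Domain,
                                                      "corp.example.com"))
                {
                    // ValidateCredentials needs a plain string; keep its
                    // lifetime as short as possible and zero the BSTR after.
                    IntPtr bstr = Marshal.SecureStringToBSTR(password);
                    try
                    {
                        string plain = Marshal.PtrToStringBSTR(bstr);
                        return ctx.ValidateCredentials(user, plain);
                    }
                    finally
                    {
                        Marshal.ZeroFreeBSTR(bstr);
                    }
                }
            }
        }

    The plain string still exists briefly on the managed heap during the call, which is a limitation of the API rather than of the approach; capturing what the user already typed into the Juniper VPN client is a different matter and generally isn't exposed to applications.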

    Read the article

  • How do I implement Hibernate pagination using a cursor (so the results stay consistent, despite new records being added)?

    - by hunterae
    Hey all, is there any way to maintain a database cursor using Hibernate between web requests? Basically, I'm trying to implement pagination, but the data being paged is consistently changing (i.e. new records are added into the database). We are trying to set it up such that when you do your initial search (returning a maximum of 5000 results) and you page through the results, the same records always appear on the same page (i.e. we're not re-running the query each time the next and previous page buttons are clicked). The way we're currently implementing this is by selecting 5000 (at most) primary keys from the table we're paging, storing those keys in memory, and then using 20 primary keys at a time to fetch their details from the database. However, we want to get away from having to store these keys in memory and would much prefer a database cursor that we just keep going back to, moving backwards and forwards over it to generate pages. I tried doing this with Hibernate's ScrollableResults, but found that calling methods like next() and previous() would cause an exception if called within a different web request / Hibernate session (no surprise there). Is there any way to reattach a ScrollableResults object to a session, much the same way you would reattach a detached database object to make it persistent? Are there any other approaches for implementing this paging with consistent results, without caching the primary keys?
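    Since a raw database cursor generally can't survive between web requests (the connection goes back to the pool), one DB-side sketch that removes the in-memory key list: persist the result snapshot once, then page against the snapshot. Plain SQL with invented names; ROW_NUMBER and FETCH FIRST are ANSI syntax that may need adapting to your database:

        -- Freeze the search result once, then page over the frozen set.
        CREATE TABLE search_snapshot (
            search_id CHAR(36) NOT NULL,   -- e.g. a UUID handed to the client
            position  INT      NOT NULL,   -- 1..5000, fixed at search time
            record_id BIGINT   NOT NULL,
            PRIMARY KEY (search_id, position)
        );

        -- Executed once, at search time:
        INSERT INTO search_snapshot (search_id, position, record_id)
        SELECT :searchId, ROW_NUMBER() OVER (ORDER BY r.created_at DESC), r.id
          FROM records r
         WHERE r.status = 'ACTIVE'
         FETCH FIRST 5000 ROWS ONLY;

        -- Page 3 (rows 41-60), identical regardless of later inserts:
        SELECT r.*
          FROM search_snapshot s
          JOIN records r ON r.id = s.record_id
         WHERE s.search_id = :searchId
           AND s.position BETWEEN 41 AND 60
         ORDER BY s.position;

    Rows deleted after the snapshot simply drop off their page, and expired search_ids can be purged by a scheduled job.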

    Read the article

  • jQuery.data() works in Mac OS WebKit, but not on iPhone OS?

    - by rpj
    I'm playing around with jQTouch for an iPhone OS app that I've been toying with off and on for a while. I wanted to try my hand at building it as a web app, so I started playing with jQTouch. For reference, here is the page+source (all my code is currently in index.html, so you can just "View Source" to see it all): http://rpj.me/doughapp.com/wd/ Essentially, I'm trying to save pertinent JSON objects retrieved from Google Local into DOM objects using the data() method (in this example, obj is the Google Local object): $('#locPane').data('selected', obj); then later (in a different "pane"), retrieving that object to be used: $('#locPane').bind('pageAnimationEnd', function(e, inf) { var selobj = $(this).data('selected'); // use 'selobj' here ... } In Chromium and Safari on the desktop OS (Snow Leopard in my case), this works perfectly (try it out). However, the same code returns undefined for the call to $(this).data('selected') in the second snippet above. I've also tried $('#' + e.target.id).data('selected') and even the naive $('#locPane').data('selected'). All variants return undefined in the iPhone OS version of WebKit, but not on the desktop. Interestingly, running this in Mobile Safari in the iPhone Simulator fails as well. If you look at the full source, you'll see that I even try to save this object into my global jQTouch object (named jqt in my code). This, too, fails on the mobile platform. Has anyone else ever run into this? I'll admit to not being a web/javascript programmer by trade, so if I'm making an idiot's error, please call me out on it. Thank you in advance for the help! -RPJ Update: I didn't make it clear in the original post, but I'm open to any workaround if it works consistently. Since I'm having trouble storing these objects in general, anything that allows me to keep them around is good enough for now. Thanks!
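    Given the "any workaround" update: one sketch that sidesteps jQuery's element data store entirely is serializing the Google Local object into sessionStorage (worth verifying support on the exact iPhone OS version being targeted; older engines may also need json2.js for native JSON):

        // Store by value when the user picks a location...
        sessionStorage.setItem('selected', JSON.stringify(obj));

        // ...and rehydrate it in the other pane.
        $('#detailPane').bind('pageAnimationEnd', function () {
            var raw = sessionStorage.getItem('selected');
            var selobj = raw ? JSON.parse(raw) : null;
            // use 'selobj' here ...
        });

    The serialize/parse round trip also neatly avoids whatever per-node data() quirk the mobile WebKit build has.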

    Read the article

  • How does SVN store commit time

    - by Salman
    I am working on a project that involves extracting details from an SVN server using SVNKit. My project is already complete and has been working well for a while now. During testing, I noticed something very strange: the commit times my extraction reports always seem to differ from what is in the SVN logs. I couldn't find any code in my project that could be inducing this difference, so now I am looking at how the SVN server stores the commit time itself. As we have developers working in different parts of the world, and thus in different timezones, I was thinking that SVN might store times after converting them to GMT, or to the timezone of the system the SVN server is running on. But that does not seem to be happening. Instead, the times appear to be stored as per the local timezone in which the commit was made. I have been unable to find any substantial document on the internet to support my theory so far. Can anybody briefly explain how SVN stores the commit time for each change? Documentation links referring to this would be of great help.
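    For inspecting what the server actually recorded: the commit time lives in the unversioned svn:date revision property, stored in UTC (note the trailing Z), and clients such as svn log convert it to the local timezone for display - which alone can explain extraction-versus-log mismatches. The repository URL below is a placeholder:

        # Show the stored timestamp for revision 1234, exactly as recorded.
        svn propget svn:date --revprop -r 1234 http://svn.example.com/repo

        # Typical output (ISO 8601, UTC):
        #   2010-05-07T13:45:12.345678Z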

    Read the article

< Previous Page | 88 89 90 91 92 93 94 95 96 97 98 99  | Next Page >