Search Results

Search found 59036 results on 2362 pages for 'fake data'.


  • Routing all data through a VPN tunnel with ppp

    - by Oliver
    I'm trying to create a VPN tunnel that forwards all data from the local machine to the VPN server. I'm using ppp-2.4.5 for this with the following configuration:

        pty "pptp <VPNServer> --nolaunchpppd"
        name <my login name>
        remotename PPTP
        usepeerdns
        require-mppe-128
        file /etc/ppp/options.pptp
        persist
        maxfail 0
        holdoff 5

    I have a script in if-up.d with the following content:

        route del default eth0
        route add default dev ppp0

    Before starting the VPN tunnel my routing looks like:

        Kernel IP routing table
        Destination   Gateway      Genmask          Flags Metric Ref Use Iface
        0.0.0.0       192.168.0.1  0.0.0.0          UG    2      0   0   eth0
        127.0.0.0     127.0.0.1    255.0.0.0        UG    0      0   0   lo
        192.168.0.0   0.0.0.0      255.255.0.0      U     0      0   0   eth0

    After starting the tunnel (via pon) it looks like:

        Kernel IP routing table
        Destination   Gateway      Genmask          Flags Metric Ref Use Iface
        0.0.0.0       0.0.0.0      0.0.0.0          U     0      0   0   ppp0
        12.34.56.1    0.0.0.0      255.255.255.255  UH    0      0   0   ppp0
        127.0.0.0     127.0.0.1    255.0.0.0        UG    0      0   0   lo
        192.168.0.0   0.0.0.0      255.255.0.0      U     0      0   0   eth0

    Now the problem is that the VPN tunnel seems to be looped into itself. If I run ifconfig after a few seconds without any traffic:

        eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
              inet 192.168.0.10  netmask 255.255.0.0  broadcast 192.168.255.255
              ether 00:01:2e:2f:ff:35  txqueuelen 1000  (Ethernet)
              RX packets 39931  bytes 6784614 (6.4 MiB)
              RX errors 0  dropped 90  overruns 0  frame 0
              TX packets 34980  bytes 7633181 (7.2 MiB)
              TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
              device interrupt 20  memory 0xfbdc0000-fbde0000

        ppp0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1496
              inet 12.34.56.78  netmask 255.255.255.255  destination 12.34.56.1
              ppp  txqueuelen 3  (Point-to-Point Protocol)
              RX packets 7  bytes 94 (94.0 B)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 782863  bytes 349257986 (333.0 MiB)
              TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    It says that over 300 MiB have already been sent, although ppp0 has only been online for a few seconds, and the connection isn't working anyway. Can someone please help me fix the routing table, so that traffic from ppp0 is not sent through ppp0 again but instead goes to the remote server?
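
    A detail worth noting in the second routing table: the route to the VPN server's public address also points at ppp0 now, so the tunnel's own carrier packets get fed back into the tunnel. A minimal sketch of an amended if-up.d script that pins that one host route to the physical interface first - hedged, since <VPNServer> stands for the PPTP server's public address from the pty line, and 192.168.0.1 is read off the first routing table:

        #!/bin/sh
        # keep the tunnel's carrier traffic on the physical interface
        route add -host <VPNServer> gw 192.168.0.1 dev eth0
        # only then move the default route onto the tunnel
        route del default eth0
        route add default dev ppp0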


  • Any way to recover ext4 filesystems from a deleted LVM logical volume?

    - by Vegar Nilsen
    The other day I had a proper brain fart moment while expanding a disk on a Linux guest under VMware. I stretched the VMware disk file to the desired size and then did what I usually do on Linux guests without LVM: I deleted the LVM partition and recreated it, starting in the same spot as the old one, but extended to the new size of the disk (to be followed by fsck and resize2fs). And then I realized that LVM doesn't behave the same way as ext2/3/4 on raw partitions...

    After restoring the Linux guest from the most recent backup (taken only five hours earlier, luckily), I'm now curious how I could have recovered from the following scenario. It is, after all, virtually guaranteed that I'll be a dumb ass in the future as well.

    Virtual Linux guest with one disk, partitioned into one /boot (primary) partition (/dev/sda1) of 256MB, and the rest in a logical, extended partition (/dev/sda5). /dev/sda5 is then set up as a physical volume with pvcreate, and one volume group (vgroup00) created on top of it with the usual vgcreate command. vgroup00 is then split into two logical volumes, root and swap, which are used for / and swap, logically. / is an ext4 file system.

    Since I had backups of the broken guest, I was able to recreate the volume group with vgcfgrestore from the backup LVM setup found under /etc/lvm/backup, with the same UUID for the physical volume and all that. After running this I had two logical volumes with the same size as earlier, with 4GB free space where I had stretched the disk.

    However, when I tried to run "fsck /dev/mapper/vgroup00-root" it complained about a broken superblock. I tried to locate backup superblocks by running "mke2fs -n /dev/mapper/vgroup00-root", but none of those worked either. Then I tried to run TestDisk, but when I asked it to find superblocks it only gave an error about not being able to open the file system due to a broken file system.

    So, with the default allocation policy for LVM2 in Ubuntu Server 10.04 64-bit, is it possible that the logical volumes are allocated from the end of the volume group? That would definitely explain why the restored logical volumes didn't contain the expected data. Could I have recovered by recreating /dev/sda5 with exactly the same size and disk position as earlier? Are there any other tools I could have used to find and recover the file system?

    (And clearly, the question is not whether or not I should have done this in a different way from the start, I know that. This is a question about what to do when shit has already hit the fan.)
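
    On the superblock hunt specifically, a sketch of the usual sequence, in case it helps the next reader (the block number is illustrative; mke2fs -n only prints what it would do and writes nothing):

        # list the superblock locations mke2fs would have used - no writes
        mke2fs -n /dev/mapper/vgroup00-root

        # then try each listed backup superblock in turn
        fsck -b 32768 /dev/mapper/vgroup00-root

        # e2fsck also accepts an explicit block size if the geometry changed
        e2fsck -b 32768 -B 4096 /dev/mapper/vgroup00-root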


  • LdapErr: DSID-0C0903AA, data 52e: authenticating against AD '08 with pam_ldap

    - by Stefan M
    I have full admin access to the AD '08 server I'm trying to authenticate against. The error code means invalid credentials, but I wish this were as simple as me typing in the wrong password.

    First of all, I have a working Apache mod_ldap configuration against the same domain:

        AuthType basic
        AuthName "MYDOMAIN"
        AuthBasicProvider ldap
        AuthLDAPUrl "ldap://10.220.100.10/OU=Companies,MYCOMPANY,DC=southit,DC=inet?sAMAccountName?sub?(objectClass=user)"
        AuthLDAPBindDN svc_webaccess_auth
        AuthLDAPBindPassword mySvcWebAccessPassword
        Require ldap-group CN=Service_WebAccess,OU=Groups,OU=MYCOMPANY,DC=southit,DC=inet

    I'm showing this because it works without the use of any Kerberos, which so many other guides out there recommend for system authentication to AD. Now I want to translate this into pam_ldap.conf for use with OpenSSH. The /etc/pam.d/common-auth part is simple:

        auth sufficient pam_ldap.so debug

    This line is processed before any other. I believe the real issue is configuring pam_ldap.conf:

        host 10.220.100.10
        base OU=Companies,MYCOMPANY,DC=southit,DC=inet
        ldap_version 3
        binddn svc_webaccess_auth
        bindpw mySvcWebAccessPassword
        scope sub
        timelimit 30
        pam_filter objectclass=User
        nss_map_attribute uid sAMAccountName
        pam_login_attribute sAMAccountName
        pam_password ad

    Now, I've been monitoring LDAP traffic on the AD host using Wireshark. I've captured a successful session from Apache's mod_ldap and compared it to a failed session from pam_ldap. The first bindRequest is a success using the svc_webaccess_auth account, the searchRequest is a success and returns 1 result. The last bindRequest, using my user, is a failure and returns the above error code. Everything looks identical except for this one line in the filter of the searchRequest, here showing mod_ldap:

        Filter: (&(objectClass=user)(sAMAccountName=ivasta))

    The second one is pam_ldap:

        Filter: (&(&(objectclass=User)(objectclass=User))(sAMAccountName=ivasta))

    My user is named ivasta. However, the searchRequest does not return failure; it does return 1 result. I've also tried this with ldapsearch on the CLI. It's the bindRequest that follows the searchRequest that fails with the above error code 52e. Here is the failure message of the final bindRequest:

        resultcode: invalidcredentials (49)
        80090308: LdapErr: DSID-0C0903AA, comment: AcceptSecurityContext error, data 52e, v1772

    This should mean invalid password, but I've tried with other users and with very simple passwords. Does anyone recognize this from their own struggles with pam_ldap and AD?

    Edit: Worth noting is that I've also tried pam_password crypt, and pam_filter sAMAccountName=User, because this worked when using ldapsearch:

        ldapsearch -LLL -h 10.220.100.10 -x -b "ou=Users,ou=mycompany,dc=southit,dc=inet" -v -s sub -D svc_webaccess_auth -W '(sAMAccountName=ivasta)'

    This works using the svc_webaccess_auth account password. This account has scan access to that OU for use with Apache's mod_ldap.
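
    One way to take PAM out of the equation entirely is to reproduce the failing second bind by hand, binding as the full DN that the searchRequest returned for the user. A hedged sketch - the DN below is a guess at the layout, so substitute whatever the capture actually shows:

        ldapsearch -LLL -h 10.220.100.10 -x \
          -D 'CN=ivasta,OU=Users,OU=mycompany,DC=southit,DC=inet' -W \
          -b 'OU=Companies,MYCOMPANY,DC=southit,DC=inet' '(sAMAccountName=ivasta)'

    If this bind fails too, the problem lies between the user DN and AD; if it succeeds, the problem is in how pam_ldap performs the second bind.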


  • What is the difference between "data hiding" and "encapsulation"?

    - by Software Engeneering Learner
    I'm reading "Java Concurrency in Practice", which says: "Fortunately, the same object-oriented techniques that help you write well-organized, maintainable classes - such as encapsulation and data hiding - can also help you create thread-safe classes."

    Problem #1: I've never heard of data hiding and don't know what it is.

    Problem #2: I always thought that encapsulation is using private vs. public, and is actually the data hiding.

    Can you please explain what data hiding is and how it differs from encapsulation?
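
    By way of illustration, a toy example (not from the book): the class below hides its data, in that the field and its representation are invisible to callers and free to change, while encapsulation is the broader bundling of that state with the only operations allowed to touch it - which is also what gives you one place to enforce thread safety:

        public final class Counter {
            // data hiding: the representation is private and can change freely
            private long count = 0;

            // encapsulation: state is reachable only through these operations,
            // so synchronizing them is enough to make the class thread-safe
            public synchronized void increment() { count++; }
            public synchronized long current()   { return count; }
        }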


  • Correcting a messed-up file tree in an NTFS partition

    - by Fullmooninu
    It's a really messed-up situation, but I'm quite at the end of my options. It's my personal hard drive, so it's very important to me, and yes, I have no backup =(

    The short story:

    1) I have two discs. One with Windows, and another where I had a bit of empty space at the front of the disk so I could install Linux. The rest was occupied by a 1.8TB NTFS partition filled with data.

    2) I installed Linux, and after a while realized there was not enough space for everything, so I tried using GParted and told it to resize the NTFS partition to a smaller size.

    3) The system jammed. I had to reboot, which broke the resizing operation.

    Here's what I did to fix it:

    a) Rebooted into a Linux live system and used TestDisk to deep-analyze the disk and recover the possible partitions. It found several versions of the NTFS partition, probably made during the resizing. I told TestDisk to open every one of them, and only one could list its files. When trying to open the other options in TestDisk, it showed an error message. I assumed the one without errors to be the correct one, and I told TestDisk to recover the partition and write a new MBR.

    b) The partition had errors. Linux has an NTFS fixing tool, and I used it, but the system still had errors.

    c) So I booted into Windows and used chkdsk to correct all errors in the partition.

    d) Everything seems fine, but now, back in Windows, when I open one file, it opens another file, or part of another file. As in, some files took up the position of other files.

    What I think happened is that I recovered an old tree, and not the most current one - one that just happened to be intact, while the most recent one was damaged. As such, the files that were moved during the failed resizing were now, during the automatic correction, wrongly assumed to be in their correct places. So when I open a file, it tries to open another one. Radiohead - Creep.mp3 will open and actually be a bit of another song, or even data from a jpg. Some files seem to be all right, but others seem to have had their position taken by others.

    Does anyone know of something really powerful that can help me solve this?


  • Increase Max Pool Size ERROR when using SYBASE ASE ADO.NET data provider

    - by Brani
    I have made a program in VB.NET (Visual Studio 2003) that connects to a Sybase ASE database using the ADO.NET data provider. Recently, after a hard disk failure, I restored the program's code from a (rather old) backup. But now the connection fails with a message that does not remind me of anything I have seen before.

    Here is the code:

        Dim cn As New AseConnection("Data Source='my_server';Port='5000';UID='sa';PWD='my_pwd';Database='my_db';")
        cn.Open()

    And the error message:

        Sybase.Data.AseClient.AseException - Cannot allocate more connections. Connection pool is at maximum. Increase Max Pool Size

    Can anybody help me?
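
    If the pool really is being exhausted - for instance by a restored code path that opens connections without closing them - the two usual levers are disposing connections deterministically and raising the pool ceiling in the connection string. A sketch; the Max Pool Size keyword is the one the error message refers to, though its default and exact behaviour should be checked against your AseClient version:

        Dim cn As New AseConnection("Data Source='my_server';Port='5000';" & _
            "UID='sa';PWD='my_pwd';Database='my_db';Max Pool Size=100;")
        Try
            cn.Open()
            ' ... work ...
        Finally
            cn.Close() ' always return the connection to the pool
        End Try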


  • How to remove "VsDebuggerCausalityData" data from SOAP message?

    - by scottmarlowe
    I've got a problem where incoming SOAP messages from one particular client are being marked as invalid and rejected by our XML firewall device. It appears extra payload data is being inserted by Visual Studio; we think the extra data may be causing the problem because we're seeing "VsDebuggerCausalityData" in these messages but not in others sent from a different client who is not having a problem. It's a starting point, anyway.

    The question I have is: how can the client remove this extra data and still run from VS? And why is VS putting it in there at all? Thanks.
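
    For context: the header is Visual Studio's debugger instrumenting outgoing web service calls so it can correlate client and server for cross-process stepping, which is why it shows up only from clients running under the debugger. The commonly cited way to suppress it is a diagnostics switch in the client's app.config - worth verifying against your VS version:

        <configuration>
          <system.diagnostics>
            <switches>
              <!-- disables debugger causality propagation, which removes
                   VsDebuggerCausalityData from outgoing SOAP headers -->
              <add name="Remote.Disable" value="1" />
            </switches>
          </system.diagnostics>
        </configuration>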


  • How to convert CFDataRef data to UIImage / NSData - iPhone

    - by sagar
    Hello everyone. I have a little query regarding CFData in Objective-C / iPhone development. Apple's documentation has the following function for CGPDFStream:

        CGPDFStreamCopyData(<#CGPDFStreamRef stream#>, <#CGPDFDataFormat *format#>)

    Its return type is CFDataRef. I have data as a stream, and now I wish to convert it to an image. For that, I think I should follow this way:

        CGPDFDataFormat t = CGPDFDataFormatJPEG2000;
        CFDataRef data = CGPDFStreamCopyData(stream, &t);

    After executing the above statements, I have some reference in the data variable. Now my query is: how do I convert this CFData to NSData or UIImage? I have gone through Apple's documentation but failed to find it. Thanks in advance for sharing your knowledge. Sagar
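
    For what it's worth, CFData and NSData are toll-free bridged, so a plain cast is enough, and UIImage can then be built from the NSData - a minimal sketch, assuming the stream really does hold image data in a format UIImage understands:

        CGPDFDataFormat format = CGPDFDataFormatJPEG2000;
        CFDataRef data = CGPDFStreamCopyData(stream, &format);

        // CFDataRef is toll-free bridged to NSData *
        NSData *imageData = (NSData *)data;
        UIImage *image = [UIImage imageWithData:imageData];

        CFRelease(data); // balance the Copy in CGPDFStreamCopyData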


  • Expose DB2 data as XML / Query DB2 via XML

    - by Anthony Gatlin
    I have a client who has a sort of data warehouse stored in DB2. For a variety of reasons, the data must remain on this platform. The client is considering implementing an open-source CMS (Drupal), which runs on MySQL. The client needs to be able to execute a bunch of pre-defined queries against the DB2 database from the remote application. Drupal appears to interact well with XML data from other systems, so it was suggested that we use something like XML-RPC to execute the queries against DB2.

    I am very familiar with SQL Server and pretty familiar with MySQL, but I have no experience with DB2 and no understanding of its capabilities or limitations. Is there any way that we can use something like XML, XML-RPC, or even HTTP to initiate queries against a DB2 database? Any ideas are appreciated! Thank you!
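
    On the DB2 side, DB2 9 and later ship the SQL/XML publishing functions, so a pre-defined query can hand back XML directly and the transport layer (XML-RPC, plain HTTP, whatever) only has to ferry text. A hedged sketch of the shape such a query might take - table and column names are made up:

        SELECT XMLSERIALIZE(CONTENT
                 XMLELEMENT(NAME "customer",
                   XMLFOREST(c.id AS "id", c.name AS "name", c.total AS "total"))
                 AS CLOB(1K))
        FROM warehouse.customers c
        WHERE c.region = ?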


  • Populating a Tcl treeview with SQLite data

    - by DFM
    Hello: I am building a Tcl application that reads off of a SQLite database. Currently, I can enter data into the database using the Tcl frontend. Now I am trying to figure out how to display the data from the SQLite database in the Tcl frontend. After a little bit of research, I found that the treeview widget would work well for my needs. I now have the following code:

        set z1 [ttk::treeview .c1.t1 -columns {1 2} -show headings]
        $z1 heading #1 -text "First Name"
        $z1 heading #2 -text "Last Name"

        proc Srch {} {
            global z1
            sqlite3 db test.db
            pack $z1
            db close
        }

    When the Srch procedure is executed (button event), the treeview (z1) appears with the headings First Name and Last Name. Additionally, the SQLite database gets connected, then closes. I want to add code that populates the treeview from the SQLite database, between connecting to the database and packing the treeview (z1). Does anyone know the correct syntax to populate a Tcl treeview with data from SQLite?

    Thank you everyone in advance,
    DFM
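
    The sqlite3 Tcl binding can run a script for every result row, with each column bound to a Tcl variable of the same name, so the treeview can be filled inside db eval. A sketch assuming a table named students with columns fname and lname:

        proc Srch {} {
            global z1
            sqlite3 db test.db
            db eval {SELECT fname, lname FROM students} {
                # fname and lname are set by the binding for each row
                $z1 insert {} end -values [list $fname $lname]
            }
            db close
            pack $z1
        }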


  • Mapping JSON data in jqGrid

    - by hunt
    Hi,

    I am using jqGrid 3.6.4 and jQuery 1.4.2. In my sample I am getting the following JSON data format, and I want to map this JSON data into rows of a jqGrid:

        {
          "page": "1",
          "total": 1,
          "records": "6",
          "rows": [
            {
              "head": { "student_name": "Mr S. Jack ", "year": 2007 },
              "sub": [
                {
                  "course_description": "Math ",
                  "date": "22-04-2010",
                  "number": 1,
                  "time_of_add": "2:00",
                  "day": "today"
                }
              ]
            }
          ]
        }

    My jqGrid code is as follows:

        jQuery("#"+subgrid_table_id).jqGrid({
            url: "http://localhost/stud/beta/web/GetStud.php?sid="+sid,
            datatype: "json",
            colNames: ['Stud Name', 'Year', 'Date', 'Number'],
            colModel: [
                {name:'Stud Name', index:'student_name', width:100, jsonmap:"student_name"},
                {name:'Year',      index:'year',         width:100, jsonmap:"year"},
                {name:'Date',      index:'date',         width:100, jsonmap:"date"},
                {name:'Number',    index:'number',       width:100, jsonmap:"number"}
            ],
            height: '100%',
            jsonReader: { repeatitems: false, root: "head" }
        });

    So now the problem is: as my data, i.e. student_name and year, is under "head", jqGrid is unable to locate these two fields. At the same time, the other two column values, i.e. Date and Number, lie under "sub", and I have not been able to map those columns to the jqGrid either. Kindly help me with how to locate these attributes in jqGrid. Thanks
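
    With repeatitems: false, jqGrid can resolve dotted paths in jsonmap, so nested properties can be addressed directly while root stays "rows" (each row object then carries head and sub). A hedged sketch - the sub.0.date form for reaching into the array is the part to verify against your jqGrid build:

        colModel: [
            {name:'student_name', index:'student_name', width:100, jsonmap:'head.student_name'},
            {name:'year',         index:'year',         width:100, jsonmap:'head.year'},
            {name:'date',         index:'date',         width:100, jsonmap:'sub.0.date'},
            {name:'number',       index:'number',       width:100, jsonmap:'sub.0.number'}
        ],
        jsonReader: { repeatitems: false, root: "rows" }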


  • ADO.NET Entity Framework with OLE DB Access Data Source

    - by Tim Long
    Has anyone found a way to make the ADO.NET Entity Framework work with OLE DB or ODBC data sources? Specifically, I need to work with an Access database that for various reasons can't be upsized to SQL. This MSDN page says:

        The .NET Framework includes ADO.NET providers for direct access to Microsoft SQL Server (including Entity Framework support), and for indirect access to other databases with ODBC and OLE DB drivers (see .NET Framework Data Providers). For direct access to other databases, many third-party providers are available as shown below.

    The reference to "indirect access to other databases" is tantalising, but I confess that I am hopelessly confused by all the different names for data access technology.


  • Remote Backup User Data on iPhone

    - by Eric
    I wrote a few iPhone apps using Core Data for persistent storage. Everything is working great, but I would like to add the ability for users to back up their data to a PC (via WiFi to a PC app) or to a web server. This is new to me and I can't seem to figure out where to begin researching the problem. I don't want to overcomplicate the issue if there is an easy way to implement this.

    Is anyone familiar enough with what I am looking to do to point me in the right direction, or give me a high-level overview of what I should be considering? The data is all text and would be perfectly stored in .csv files, if that matters.
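
    Given that the data serializes happily to CSV, one low-friction shape is: export to a string, then POST it to whatever endpoint the PC app or web server exposes. A sketch using the era's synchronous API for brevity (the endpoint URL and the export helper are made up; real code would want the asynchronous API plus authentication):

        NSString *csv = [self exportAllRecordsAsCSV]; // hypothetical helper
        NSMutableURLRequest *req = [NSMutableURLRequest requestWithURL:
            [NSURL URLWithString:@"https://example.com/backup"]];
        [req setHTTPMethod:@"POST"];
        [req setHTTPBody:[csv dataUsingEncoding:NSUTF8StringEncoding]];

        NSURLResponse *resp = nil;
        NSError *err = nil;
        [NSURLConnection sendSynchronousRequest:req
                              returningResponse:&resp
                                          error:&err];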


  • IE8 + jQuery Ajax call giving parsererror for JSON data which seems valid in Firefox

    - by PlanetUnknown
    The Ajax call works fine in Firefox. The data returned is JSON; here is an example from Firefox's Firebug:

        {"noProfiles": "No profiles have been created, lets start now !"}

    When I try to print the error in IE8 (and in compatibility modes as well), it says "parsererror", yet the output seems to be valid JSON. Here is the Ajax call I'm making; any pointers would be great!

        $.ajax({
            type: "GET",
            url: "/get_all_profile_details/",
            data: "",
            dataType: "json",
            beforeSend: function() { alert("before send called"); },
            success: function(jsonData) {
                alert("data received");
            },
            error: function(xhr, txt, err) {
                alert("xhr: " + xhr + "\n textStatus: " + txt + "\n errorThrown: " + err);
            }
        });

    The alerts in the error function above give:

        xhr: <blank>
        textstatus: parsererror
        errorThrown: undefined

    Note: jQuery 1.3.2.
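
    Two things worth ruling out in IE specifically, since it is stricter than Firefox here: a stale cached GET (IE caches Ajax GETs aggressively) and a response body that isn't clean JSON - a BOM or stray whitespace before the opening brace is enough to trip the parser. A hedged tweak:

        $.ajax({
            type: "GET",
            url: "/get_all_profile_details/",
            dataType: "json",
            cache: false, // appends _=<timestamp> so IE can't serve a cached copy
            success: function(jsonData) { alert("data received"); },
            error: function(xhr, txt, err) {
                // inspect what IE actually received before it tried to parse
                alert(xhr.responseText);
            }
        });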


  • Django data migration when changing a field to ManyToMany

    - by Ken H
    I have a Django application in which I want to change a field from a ForeignKey to a ManyToManyField, and I want to preserve my old data. What is the simplest/best process to follow for this? If it matters, I use sqlite3 as my database back-end.

    If my summary of the problem isn't clear, here is an example. Say I have two models:

        class Author(models.Model):
            author = models.CharField(max_length=100)

        class Book(models.Model):
            author = models.ForeignKey(Author)
            title = models.CharField(max_length=100)

    Say I have a lot of data in my database. Now, I want to change the Book model as follows:

        class Book(models.Model):
            author = models.ManyToManyField(Author)
            title = models.CharField(max_length=100)

    I don't want to "lose" all my prior data. What is the best/simplest way to accomplish this?

    Ken
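
    Schema tools aside (this is the era of South), the data-preserving shape is usually: add the ManyToManyField under a new name alongside the old ForeignKey, copy the links over once, then drop the ForeignKey and rename. A sketch of the copy step, assuming the new field is temporarily named authors:

        # one-off script, e.g. run from `manage.py shell`
        from myapp.models import Book

        for book in Book.objects.all():
            # carry the single FK value over into the new many-to-many
            book.authors.add(book.author)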


  • DVCS and data loss?

    - by David Wolever
    After almost two years of using DVCS, it seems that one inherent "flaw" is accidental data loss: I have lost code which wasn't pushed, and I know other people who have as well. I can see a few reasons for this: off-site data duplication (i.e., "commits have to go to a remote host") is not built in, the repository lives in the same directory as the code, and the notion of "hack till you've got something to release" is prevalent. But that's beside the point.

    I'm curious to know: have you experienced DVCS-related data loss? Or have you been using DVCS without trouble? And, relatedly, apart from "remember to push often", is there anything which can be done to minimize the risk?
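
    On the "anything beyond push often" point, one mechanical option is to let the repository push itself: a post-commit hook that mirrors every commit to an off-site remote. A git-flavoured sketch, assuming a remote named backup already exists:

        #!/bin/sh
        # .git/hooks/post-commit
        # mirror the just-committed branch to an off-site remote, in the background
        branch=$(git rev-parse --abbrev-ref HEAD)
        git push --quiet backup "$branch" &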


  • Merging/filling a PDF form file with XML data

    - by Giorgi
    Hello,

    Let's say I have a PDF form file available on a website, which is filled in by users and submitted to the server. On the server side (ASP.NET) I would like to merge the data, which I receive in XML format, with the empty PDF form that was filled in, and save the result. As far as I have found, there are several possible ways of doing this:

    1. Use a PDF form created by Adobe Acrobat and fill it with iTextSharp.
    2. Use a PDF form created by Adobe Acrobat and fill it with the FDF Toolkit .net (which seems to be using iTextSharp internally).
    3. Use pdftk to fill the form.
    4. Use a PDF form file created with Adobe LiveCycle and merge the data with the Form Data Integration Service.

    As I have no experience with this kind of task, can you advise which option would be better/easier and give some additional tips? Thank you in advance.
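
    For options 1 and 2, the core of the iTextSharp route is small. A hedged C# sketch - the field name, paths, and the XML-to-value step are placeholders for whatever your form and schema actually contain:

        // fill an AcroForm PDF with values parsed from the submitted XML
        PdfReader reader = new PdfReader(Server.MapPath("empty-form.pdf"));
        PdfStamper stamper = new PdfStamper(reader,
            new FileStream(outputPath, FileMode.Create));

        AcroFields form = stamper.AcroFields;
        form.SetField("name", valueFromXml); // repeat per form field
        stamper.FormFlattening = true;       // optional: make the result read-only

        stamper.Close();
        reader.Close();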


  • Designing a data model in VS2010 and generating ORM code and the application

    - by Kay Zed
    Simply put: I have a database design in my head, and I now want to use Visual Studio 2010 to create a WPF application around it. Key is to use the VS2010 tools to take as much manual work as possible out of my hands.

    - The database engine is SQLite
    - ORM probably through DBLINQ
    - Use of LINQ
    - The application can create new, empty database instances
    - Easily maintainable (changes in data model possible)

    Q - How do I start designing the database model (visually) in Visual Studio 2010? Should this be an xsd? Do I do this in a separate project?

    Q - Next, how can I make the most use of VS2010 code generation tools to generate a business layer?

    Q - I suppose the business layer will be added as a Data Source (in another project?) and from there it's a rather generic data-binding solution?

    I tried to find clear examples of this, but it's a jungle out there, and the hunt for a solution is NOT converging to one clear method... :_(


  • How to get the data for intra-day candlestick charts for stocks on e.g. Nasdaq

    - by Chris
    Hi,

    As a learning exercise, I want to create candlestick (stock) graphs for stocks using ZedGraph. On Google Finance I can get daily open-high-low-close data, which is perfect for making these graphs, but I want to create intra-day graphs, e.g. open-high-low-close data for an hour (or 5 minutes, or even 1 minute). Is there any way to get that kind of data without having to subscribe to an expensive service? I've heard opentick mentioned in an old SO question, but their site is defunct now.

    I was thinking of polling Google Finance once a minute to get the latest stock price; then, with an hour's worth of 60 prices, I could roughly calculate the open-high-low-close. But this is a bit of an estimation, and I'm open to other suggestions. Cheers all.
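
    The rough-and-ready aggregation described above is at least cheap to build; since ZedGraph implies C#, a sketch of folding a bar's worth of sampled last-trade prices into one OHLC record (highs and lows between samples are invisible, hence the "estimation"):

        // prices: last-trade samples collected over the bar's interval, in order
        static void ToOhlc(IList<double> prices,
                           out double open, out double high,
                           out double low, out double close)
        {
            open = prices[0];
            close = prices[prices.Count - 1];
            high = double.MinValue;
            low = double.MaxValue;
            foreach (double p in prices)
            {
                if (p > high) high = p;
                if (p < low) low = p;
            }
        }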


  • Java HTML Parsing

    - by Richie_W
    Hello everyone. I'm working on an app which scrapes data from a website, and I was wondering how I should go about getting the data. Specifically, I need data contained in a number of div tags which use a specific CSS class. Currently (for testing purposes) I'm just checking each line of HTML for div class = "classname" - this works, but I can't help feeling there is a better solution out there.

    I.e., is there any nice way where I could give a class a line of HTML and have some nice methods like:

        boolean usesClass(String CSSClassname);
        String getText();
        String getLink();

    Many many thanks!
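
    This wish list is pretty much the shape of jsoup's API - a sketch (hedged only in that the selector syntax should be checked against the jsoup version you pick up):

        import org.jsoup.Jsoup;
        import org.jsoup.nodes.Document;
        import org.jsoup.nodes.Element;
        import org.jsoup.select.Elements;

        Document doc = Jsoup.parse(html);            // or Jsoup.connect(url).get()
        Elements divs = doc.select("div.classname"); // every div with that CSS class
        for (Element div : divs) {
            String text = div.text();                    // visible text content
            String link = div.select("a").attr("href");  // first contained link
        }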


  • Matlab - plot multiple data sets on a scatter plot

    - by Mark
    Hey all,

    I have 2 sets of data (Ax, Ay; Bx, By). I'd like to plot both of these data sets on a scatter plot with different colors, but can't seem to get it to work, because it seems scatter() does not work like plot(). Is it possible to do this? I've tried:

        scatter(Ax, Ay, 'g', Bx, By, 'b')

    And:

        scatter(Ax, Ay, 'g')
        scatter(Bx, By, 'b')

    The first way returns an error. The latter only plots the Bx/By data. Many thanks!
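
    For what it's worth, the second form is the closer of the two: scatter draws one data set per call, so the standard trick is hold on between the calls so the second set doesn't replace the first:

        scatter(Ax, Ay, 'g')
        hold on               % keep the first set when the next plot command runs
        scatter(Bx, By, 'b')
        hold off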


  • Using NHibernate with an EAV data model

    - by devonlazarus
    I'm trying to leverage NH to map to a data model that is a loose interpretation of the EAV/CR data model. I have most of it working but am struggling with mapping the Entity.Attributes collection. Here are the tables in question:

        --------------------
        | Entities         |
        --------------------
        | EntityId    PK   |---
        | EntityType       |  |
        --------------------  |
                              |
               ----------------
               |
               V
        --------------------       ------------------       ---------------------------
        | EntityAttributes |       | Attributes     |       | StringAttributes        |
        --------------------       ------------------       ---------------------------
        | EntityId   PK,FK |       | AttributeId PK |  ->   | StringAttributeId PK,FK |
        | AttributeId   FK |  ->   | AttributeType  |       | AttributeName           |
        | AttributeValue   |       ------------------       ---------------------------
        --------------------

    The AttributeValue column is implemented as a sql_variant column, and I've implemented an NHibernate.UserTypes.IUserType for it. I can create an EntityAttribute entity and persist it directly, so that part of the hierarchy is working. I'm just not sure how to map the EntityAttributes collection to the Entity entity.

    Note the EntityAttributes table could (and does) contain multiple rows for a given EntityId/AttributeId combination:

        EntityId  AttributeId  AttributeValue
        --------  -----------  --------------
        1         1            Blue
        1         1            Green

    The StringAttributes row looks like this for this example:

        StringAttributeId  AttributeName
        -----------------  --------------
        1                  FavoriteColor

    How can I effectively map this data model to my Entity domain such that Entity.Attributes("FavoriteColors") returns a collection of favorite colors, typed as System.String?


  • Export ASPNETDB data to another database

    - by raziiq
    Hi there.

    I am developing in Visual Web Developer 2008. I have SQL Express 2005 and SQL Management Studio 2008 installed on my PC, and I purchased an MS SQL 2008 database on DiscountASP.net. The host provides only 1 database, but my project has 2: one is the ASPNETDB that contains the roles, users, etc. (created using the Website Configuration Wizard), and the other is my database containing the data for my website, named MainDB.

    As the host allows only 1 database, I exported ASPNETDB's tables and stored procedures to my MainDB using aspnet_regsql.exe. The problem is that the stored procedures and tables were exported to my MainDB, but the data was not; there are no users in the tables.

    My question is: how do I export everything from ASPNETDB, including stored procedures, tables and data, to my MainDB?
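
    For background: aspnet_regsql.exe installs schema and support objects only, so the rows have to be moved separately. If both databases are reachable from the same SQL Server instance, a cross-database INSERT...SELECT works - a hedged fragment (the membership tables are linked by foreign keys, so load parents first, and real scripts should name the columns explicitly):

        -- order matters: applications, then users, then membership/roles
        INSERT INTO MainDB.dbo.aspnet_Applications
            SELECT * FROM ASPNETDB.dbo.aspnet_Applications;
        INSERT INTO MainDB.dbo.aspnet_Users
            SELECT * FROM ASPNETDB.dbo.aspnet_Users;
        INSERT INTO MainDB.dbo.aspnet_Membership
            SELECT * FROM ASPNETDB.dbo.aspnet_Membership;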


  • How to deal with arrays of data in cookies

    - by peter
    Hi all,

    I want to store data in a cookie, and I am not exactly sure how to go about it. The data is the username and password values for the users that are logging into a website, e.g. something like this:

        UserName = bob,    Password = Passw0rd1
        UserName = harry,  Password = BLANK
        UserName = george, Password = R0jjd6s

    What this means is that bob and george logged into the site and chose to have their passwords remembered, but harry chose for his password not to be remembered. So on the login dialog a dropdown will be present with all the usernames in it ('bob', 'harry', 'george'), and if they select the username bob, the password will automatically be filled in, etc.

    So how does that information need to be stored in the cookie? Like it is above, or does it have to be:

        UserName1 = bob,    Password1 = Passw0rd1
        UserName2 = harry,  Password2 = BLANK
        UserName3 = george, Password3 = R0jjd6s

    Are the username and password values actually stored in the same cookie, or is each piece of data separate? Any information would be good.
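
    On the mechanics, and assuming this is ASP.NET (no stack was named): HttpCookie supports subkeys natively, which maps neatly onto a per-user list held in a single cookie - a sketch:

        // one cookie, one subkey per remembered user
        HttpCookie cookie = new HttpCookie("RememberedUsers");
        cookie.Values["bob"] = "Passw0rd1";
        cookie.Values["harry"] = "";          // chose not to be remembered
        cookie.Values["george"] = "R0jjd6s";
        cookie.Expires = DateTime.Now.AddDays(30);
        Response.Cookies.Add(cookie);

        // reading them back
        HttpCookie saved = Request.Cookies["RememberedUsers"];
        string bobsPassword = (saved != null) ? saved.Values["bob"] : null;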


  • Entity Framework + SQLite deployment

    - by Pompair
    Hi,

    I have an ASP.NET MVC app that is using a SQLite database through Entity Framework. Everything works on VS 2008's local development web server. However, deploying the web app to my service provider causes this error:

        [ArgumentException: Unable to find the requested .Net Framework Data Provider. It may not be installed.]
           System.Data.Common.DbProviderFactories.GetFactory(String providerInvariantName) +1308959
           System.Data.EntityClient.EntityConnection.GetFactory(String providerString) +35

    The service provider has commented that they do not support SQLite. I had thought that SQLite was independent of the service provider's settings, since it's App_Data deployable. Has anyone had experience of a successful Entity Framework + SQLite deployment?

    Cheers,
    -pom-
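
    That exception usually means the SQLite ADO.NET provider isn't registered in the host's machine.config, which a bin-deployed app can work around by registering the factory in its own web.config. A commonly used snippet - the type line may need Version/Culture/PublicKeyToken attributes matching the System.Data.SQLite build in /bin:

        <system.data>
          <DbProviderFactories>
            <remove invariant="System.Data.SQLite" />
            <add name="SQLite Data Provider"
                 invariant="System.Data.SQLite"
                 description=".NET Framework Data Provider for SQLite"
                 type="System.Data.SQLite.SQLiteFactory, System.Data.SQLite" />
          </DbProviderFactories>
        </system.data>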

