Search Results

Search found 10242 results on 410 pages for 'stored proc'.

  • SPFileVersionCollection - why are versions sorted in mixed order?

    - by Janis Veinbergs
    SPFileVersionCollection and SPListItemVersionCollection versioning seems inconsistent to me. The inconsistency itself wouldn't be a problem, but the sort order is.

    SPListItemVersionCollection: I can understand the versioning of list items, as they are stored in descending order:

        SPContext.Current.ListItem.Versions.Count        -> 5
        SPContext.Current.ListItem.Versions[0].VersionId -> 1026 (2.2, latest version)
        SPContext.Current.ListItem.Versions[1].VersionId -> 1025 (2.1)
        SPContext.Current.ListItem.Versions[2].VersionId -> 1024 (2.0)
        ...
        [4].VersionId                                    -> (oldest version)

    SPFileVersionCollection: However, I can't understand how version numbers are saved for a document library item:

        SPContext.Current.ListItem.File.Versions.Count  -> 4
        SPContext.Current.ListItem.File.Versions[0].ID  -> 512  (1.0, oldest one)
        SPContext.Current.ListItem.File.Versions[1].ID  -> 513  (1.1)
        SPContext.Current.ListItem.File.Versions[2].ID  -> 1025 (2.1, latest version)
        SPContext.Current.ListItem.File.Versions[3].ID  -> 1024 (2.0 (EDIT: IsCurrentVersion = True))

    They are neither in ascending nor descending order, but something mixed. Is there a reason the SharePoint team decided to store SPFile versions like that? And do they expect me to write my own method to get the latest version, or is there a built-in one for that?

    A note: let me point out that SPListItem.File is not null for document library items.
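    If the goal is simply the most recent stored version regardless of how the collection orders itself, a minimal sketch (not from the original post) could sort by the ID values shown above:

        using System.Linq;
        using Microsoft.SharePoint;

        // Illustrative helper: order the stored file versions by ID so the highest
        // (most recent) comes first, independent of the collection's own order.
        public static SPFileVersion GetLatestStoredVersion(SPFile file)
        {
            return file.Versions
                       .Cast<SPFileVersion>()
                       .OrderByDescending(v => v.ID)
                       .FirstOrDefault();
        }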

  • PDF rendering crashes app Core Graphics

    - by Felixyz
    EDIT: The memory leaks turned out to be unrelated to the crashes. The leaks are fixed but the crashes remain, still mysterious.

    My (iPhone) app does lots of PDF loading and rendering, some of it threaded. Sometimes, and it seems always after I flush a page cache after getting a memory warning, the app crashes with a bad access when trying to draw a PDF page stored in an NSData object. Here is one example trace:

        #0 0x3016d564 in CGPDFResourcesGetResource ()
        #1 0x3016d58a in CGPDFResourcesGetResource ()
        #2 0x3016d94e in CGPDFResourcesGetExtGState ()
        #3 0x3015fac4 in CGPDFContentStreamGetExtGState ()
        #4 0x301629a8 in op_gs ()
        #5 0x3016df12 in handle_xname ()
        #6 0x3016dd9e in read_objects ()
        #7 0x3016de6c in CGPDFScannerScan ()
        #8 0x30161e34 in CGPDFDrawingContextDraw ()
        #9 0x3016a9dc in CGContextDrawPDFPage ()

    But sometimes I get this instead:

        Program received signal: “EXC_BAD_ACCESS”.
        (gdb) bt
        #0 0x335625fa in objc_msgSend ()
        #1 0x32c04eba in CFDictionaryGetValue ()
        #2 0x3016d500 in get_value ()
        #3 0x3016d5d6 in CGPDFResourcesGetFont ()
        #4 0x3015fbb4 in CGPDFContentStreamGetFont ()
        #5 0x30163480 in op_Tf ()
        #6 0x3016df12 in handle_xname ()
        #7 0x3016dd9e in read_objects ()
        #8 0x3016de6c in CGPDFScannerScan ()
        #9 0x30161e34 in CGPDFDrawingContextDraw ()
        #10 0x3016a9dc in CGContextDrawPDFPage ()

    Is this an indication that I've mistakenly deallocated an object? It's hard for me to decode what's happening here. This is how I create and retain the various objects involved:

        // Some data was just loaded from the network and is pointed to by "data"
        self.pdfData = data;
        _dataProviderRef = CGDataProviderCreateWithData(NULL, [_pdfData bytes], [_pdfData length], NULL);
        _documentRef = CGPDFDocumentCreateWithProvider(_dataProviderRef);
        _pageRef = CGPDFDocumentGetPage(_documentRef, 1);
        CGPDFPageRetain(_pageRef);
        _pdfFrame = CGPDFPageGetBoxRect(_pageRef, kCGPDFArtBox);

    So the NSData object is retained, and I explicitly retain the page reference. The data provider and the document are already retained by the create functions. And here is my dealloc method:

        -(void)dealloc {
            if (_pageRef) CGPDFPageRelease(_pageRef);
            if (_documentRef) CGPDFDocumentRelease(_documentRef);
            if (_dataProviderRef) CGDataProviderRelease(_dataProviderRef);
            self.pdfData = nil;
            [super dealloc];
        }

    Am I doing anything wrong? Even an assurance that I'm not, with an explanation, would be a help.

  • Installing into the GAC with WiX 3.0

    - by Jeff Yates
    I have a DLL that I would like to install into the Global Assembly Cache so that it can be referenced from multiple locations. I have a File declaration with the Assembly attribute set to ".net", but when the installation tries to install the DLL into the GAC, I get the following error (I have tidied it up a bit to make it more readable):

        MSI (s) (58:38) [19:14:31:031]: Product: MyProductName 1.01 -- Error 1935.
        An error occurred during the installation of assembly
        'Compass,
          version="1.0.0.0",
          culture="neutral",
          publicKeyToken="392B26B760D48103",
          processorArchitecture="MSIL"'.
        Please refer to Help and Support for more information. HRESULT: 0x80131043.
        assembly interface: IAssemblyCacheItem, function: Commit,
        component: {53AEE63B-F356-4D4F-8D61-EB0640A6E160}

    I have hunted around to find out what this means, and the error relates to FUSION_E_UNEXPECTED_MODULE_FOUND. This link also includes this information:

        /// When installing multi-file assemblies into the GAC, the hash of each module is
        /// checked against the hash of that file stored in the manifest. If the
        /// hash of one of the files in the multi-file assembly does not match what is recorded
        /// in the manifest, FUSION_E_UNEXPECTED_MODULE_FOUND will be returned.
        /// The name of the error, and the text description of it, are somewhat confusing.
        /// The reason this error code is described this way is that internally,
        /// Fusion/CLR implements installation of assemblies in the GAC by installing
        /// multiple "streams" that are individually committed.
        /// Each stream has its hash computed, and all the hashes found
        /// are compared against the hashes in the manifest, at the end of the installation.
        /// Hence, a file hash mismatch appears as if an "unexpected" module was found.

    Unfortunately, this doesn't make much sense to me and I don't see how it relates to my assembly, which isn't fancy or complex from my perspective (it's just a regular .NET 3.5 class library, and the current installation test is occurring on my development machine, which is a valid target environment for my project - 32-bit Windows XP SP3). Can anyone shed some light on why I might be getting this error and how I might hope to fix it?

  • AWS Amazon EC2 - password-less SSH login for non-root users using PEM keypairs

    - by Mark White
    We've got a couple of clusters running on AWS (HAProxy/Solr, PGPool/PostgreSQL) and we've set up scripts to allow new slave instances to be auto-included into the clusters by updating their IPs in config files held on S3, then SSHing to the master instance to kick them to download the revised config and restart the service. It's all working nicely, but in testing we're using our master .pem for SSH, which means it needs to be stored on an instance. Not good.

    I want a non-root user that can use an AWS keypair and will have sudo access to run the download-config-and-restart scripts, but nothing else. rbash seems to be the way to go, but I understand this can be insecure unless set up correctly. So what security holes are there in this approach:

      - New AWS keypair created for user.pem (not really called 'user')
      - New user on instances: user
      - Public key for user is in ~user/.ssh/authorized_keys (taken by creating a new instance with user.pem, and copying it from /root/.ssh/authorized_keys)
      - Private key for user is in ~user/.ssh/user.pem
      - 'user' has a login shell of /home/user/bin/rbash
      - ~user/bin/ contains symbolic links to /bin/rbash and /usr/bin/sudo
      - /etc/sudoers has entry "user ALL=(root) NOPASSWD:
      - ~user/.bashrc sets PATH to /home/user/bin/ only
      - ~user/.inputrc has 'set disable-completion on' to prevent double tabbing from 'sudo /' to find paths.
      - ~user/ -R is owned by root with read-only access to user, except for ~user/.ssh which has write access for user (for writing known_hosts), and ~user/bin/* which are +x
      - Inter-instance communication uses 'ssh -o StrictHostKeyChecking=no -i ~user/.ssh/user.pem user@ sudo '

    Any thoughts would be welcome. Mark...

  • Windows 7 - Enable Network DTC Access

    - by Russ Clark
    I have a Visual Studio 2010 Windows Forms application in which I start a transaction using the TransactionScope class. I then receive a message from a SQL Server Service Broker message queue, which works fine. I next try to call a stored procedure from the same database with a call to my data access layer, which is a Visual Studio dataset (xsd file). When I make this second call to the database I get the following error message:

        The MSDTC transaction manager was unable to pull the transaction from the source
        transaction manager due to communication problems. Possible causes are: a firewall
        is present and it doesn't have an exception for the MSDTC process, the two machines
        cannot find each other by their NetBIOS names, or the support for network
        transactions is not enabled for one of the two transaction managers.
        (Exception from HRESULT: 0x8004D02B).

    I've seen several posts on the web that talk about enabling DTC access through dcomcnfg.exe, and allowing DTC to communicate through Windows Firewall. I've done those things, and am still having this problem. I know our remote database server is set up to enable DTC access, because we are using similar transactions in other projects built with Visual Studio 2008 on Windows XP and Vista. I think there is something specific about Windows 7 and Visual Studio 2010 causing this problem, but I haven't been able to find out what it is. Can anyone help with this problem?
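    For reference, the scenario described boils down to the pattern below (ReceiveFromQueue and CallStoredProc are hypothetical stand-ins for the Service Broker receive and the dataset call); opening a second connection inside the same TransactionScope is typically what promotes the transaction to a distributed MSDTC transaction and surfaces the error above:

        using System.Transactions;

        // Sketch of the scenario: two connections enlisted in one TransactionScope
        // force promotion to a distributed (MSDTC) transaction.
        using (var scope = new TransactionScope())
        {
            ReceiveFromQueue();   // hypothetical: opens connection #1 and RECEIVEs from the queue
            CallStoredProc();     // hypothetical: the dataset/TableAdapter opens connection #2

            scope.Complete();     // both operations commit together via MSDTC
        }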

  • Convert NSMutableArray to string and back

    - by Friendlydeveloper
    Hello, in my current project I'm facing the following problem: the app needs to exchange data with my server; the data is stored inside an NSMutableArray on the iPhone. The array holds NSString, NSData and CGPoint values. Now, I thought the easiest way to achieve this was to convert the array into a properly formatted string, send it to my server and store it inside some MySQL database. At this point I'd like to request my data from my server, receive the string which represents the contents of my array, and then actually convert it back into an NSMutableArray. So far, I tried something like this:

        NSString *myArrayString = [myArray description];

    Now I send this string to my server and store it inside my MySQL database. That part works really well. However, when I receive the string from my server, I have trouble converting it back into an NSMutableArray. Is there a method which can easily convert an array description back into an array? Unfortunately I couldn't find anything on that so far. Maybe my way of "serializing" the array is wrong right from the start and there is a smarter way to do this. Any help appreciated. Thanks in advance.

  • Create non-persistent cookie with FormsAuthenticationTicket

    - by Marcus
    Hello! I'm having trouble creating a non-persistent cookie using the FormsAuthenticationTicket. I want to store user data in the ticket, so I can't use the FormsAuthentication.SetAuthCookie() or FormsAuthentication.GetAuthCookie() methods. Because of this I need to create the FormsAuthenticationTicket and store it in an HttpCookie. My code looks like this:

        DateTime expiration = DateTime.Now.AddDays(7);

        // Create ticket
        FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(2, user.Email,
            DateTime.Now, expiration, isPersistent, userData,
            FormsAuthentication.FormsCookiePath);

        // Create cookie
        HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName,
            FormsAuthentication.Encrypt(ticket));
        cookie.Path = FormsAuthentication.FormsCookiePath;

        if (isPersistent)
            cookie.Expires = expiration;

        // Add cookie to response
        HttpContext.Current.Response.Cookies.Add(cookie);

    When the variable isPersistent is true, everything works fine and the cookie is persisted. But when isPersistent is false, the cookie seems to be persisted anyway. I sign on in a browser window, close it and open the browser again, and I am still logged in. How do I set the cookie to be non-persistent? Is a non-persistent cookie the same as a session cookie? Is the cookie information stored in the session data on the server, or is the cookie transferred in every request/response to the server? Thanks in advance! /Marcus
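    As a point of reference (not from the original post), the cookie can be decrypted on a later request to see what the ticket itself records; a cookie with no Expires set is a session cookie, so persistence beyond that can also come from the browser restoring its previous session:

        using System.Web;
        using System.Web.Security;

        // Sketch: read the forms cookie back on an incoming request and inspect the ticket.
        HttpCookie authCookie = HttpContext.Current.Request.Cookies[FormsAuthentication.FormsCookieName];
        if (authCookie != null)
        {
            FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(authCookie.Value);
            bool persistent = ticket.IsPersistent; // what was passed into the ticket constructor
            bool expired    = ticket.Expired;      // based on the ticket's own expiration
            string userData = ticket.UserData;     // the custom payload stored above
        }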

  • unable to get "ItemValue" of selected item using f:selectitems tag in ace:autocompleteEntry

    - by user1641976
    I want to get the value of a select item (the ItemValue, which is an Integer; the ItemLabel is a String) in my backing bean using the autoCompleteEntry tag of ICEfaces 3.1.0, but I get an error. Here is the code:

        <tr>
          <td>Current City</td>
          <td>
            <ace:autoCompleteEntry value="#{service.cityId}" styleClass="select-field"
                                   rows="10" width="400" filterMatchMode="">
              <f:selectItems value="#{service.cities}"></f:selectItems>
            </ace:autoCompleteEntry>
          </td>
        </tr>

    The bean is:

        public class Service {
            private Integer cityId;

            public Integer getCityId() {
                return cityId;
            }

            public void setCityId(Integer cityId) {
                this.cityId = cityId;
            }

            private List<SelectItem> cities;

            public List<SelectItem> getCities() {
                return cities = Dao.getCityList();
            }

            public void setCities(List<SelectItem> cities) {
                this.cities = cities;
            }
        }

    The cities list has the item value stored as a number and the item label as a String. Autocomplete works fine and shows a list of matches if I store the value in some String property of the backing bean, but if I store it in the Integer property, I get this error as soon as I type something in the autocomplete:

        INFO: WARNING: FacesMessage(s) have been enqueued, but may not have been displayed.
        sourceId=frmmaster:j_idt205:txtcity[severity=(ERROR 2),
        summary=(frmmaster:j_idt205:txtcity: 'a' must be a number consisting of one or more digits.),
        detail=(frmmaster:j_idt205:txtcity: 'a' must be a number between -2147483648 and 2147483647 Example: 9346)]

    Kindly reply; I need to solve this issue as soon as possible.

  • X509 Certificates, DigitalSignature vs NonRepudiation (C#)

    - by Eyvind
    We have been handed a set of test certificates on smart cards for developing a solution that requires XML messages to be signed using PKI. Each (physical) smart card seems to have two certificates stored on it. I import them into the Windows certificate store using software supplied by the smart card provider, and then use code resembling the following to iterate over the installed certificates:

        foreach (X509Certificate2 x509 in CertStore.Certificates)
        {
            foreach (X509Extension extension in x509.Extensions)
            {
                if (extension.Oid.Value == "one we are interested in")
                {
                    X509KeyUsageExtension ext = (X509KeyUsageExtension)extension;
                    if ((ext.KeyUsages & X509KeyUsageFlags.DigitalSignature) != X509KeyUsageFlags.None)
                    {
                        // process certs here

    We have been told to use the certificates that have the NonRepudiation key usage flag set to sign the XMLs. However, the certificate that has the NonRepudiation flag has this flag only, and not, for instance, the DigitalSignature flag which I check for above. Does this strike anyone but me as slightly odd? I am, in other words, told to sign with a certificate that does not (appear to) have the DigitalSignature usage flag set. Is this normal procedure? Any comments? Thanks.
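    For comparison, a minimal sketch (not from the original post) of selecting on the NonRepudiation flag instead of DigitalSignature:

        using System.Linq;
        using System.Security.Cryptography.X509Certificates;

        // Sketch: pick certificates whose Key Usage extension includes NonRepudiation.
        var signingCerts = CertStore.Certificates
            .Cast<X509Certificate2>()
            .Where(c => c.Extensions
                         .OfType<X509KeyUsageExtension>()
                         .Any(ku => (ku.KeyUsages & X509KeyUsageFlags.NonRepudiation) != X509KeyUsageFlags.None));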

  • RSACryptoServiceProvider CryptographicException System Cannot Find the File Specified under ASP.NET

    - by Will Hughes
    I have an application which is making use of the RSACryptoServiceProvider to decrypt some data using a known private key (stored in a variable). When the IIS application pool is configured to use Network Service, everything runs fine. However, when we configure the IIS application pool to run the code under a different identity, we get the following:

        System.Security.Cryptography.CryptographicException: The system cannot find the file specified.
           at System.Security.Cryptography.Utils.CreateProvHandle(CspParameters parameters, Boolean randomKeyContainer)
           at System.Security.Cryptography.RSACryptoServiceProvider.ImportParameters(RSAParameters parameters)
           at System.Security.Cryptography.RSA.FromXmlString(String xmlString)

    The code is something like this:

        byte[] input;
        byte[] output;
        string private_key_xml;

        var provider = new System.Security.Cryptography.RSACryptoServiceProvider(this.m_key.Key_Size);
        provider.FromXmlString(private_key_xml); // Fails here when application pool identity != Network Service
        output = provider.Decrypt(input, false); // False = use PKCS#1 v1.5 padding

    There are resources which attempt to answer it by stating that you should give the user read access to the machine key store - however there is no definitive answer to solve this issue.

    Environment: IIS 6.0, Windows Server 2003 R2, .NET 3.5 SP1
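    One variation that is often suggested for this symptom, shown here only as a hedged sketch (keySize stands in for this.m_key.Key_Size), is to pass explicit CspParameters with UseMachineKeyStore so the provider does not depend on the pool identity's user profile being loaded:

        using System.Security.Cryptography;

        // Sketch: use the machine key store rather than the per-user key store.
        var csp = new CspParameters { Flags = CspProviderFlags.UseMachineKeyStore };

        using (var provider = new RSACryptoServiceProvider(keySize, csp))
        {
            provider.PersistKeyInCsp = false;        // don't leave a key container behind
            provider.FromXmlString(private_key_xml); // import the known private key
            output = provider.Decrypt(input, false); // PKCS#1 v1.5 padding, as above
        }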

  • Write-only collections in MongoDB

    - by rcoder
    I'm currently using MongoDB to record application logs, and while I'm quite happy with both the performance and with being able to dump arbitrary structured data into log records, I'm troubled by the mutability of log records once stored. In a traditional database, I would structure the grants for my log tables such that the application user had INSERT and SELECT privileges, but not UPDATE or DELETE. Similarly, in CouchDB, I could write a update validator function that rejected all attempts to modify an existing document. However, I've been unable to find a way to restrict operations on a MongoDB database or collection beyond the three access levels (no access, read-only, "god mode") documented in the security topic on the MongoDB wiki. Has anyone else deployed MongoDB as a document store in a setting where immutability (or at least change tracking) for documents was a requirement? What tricks or techniques did you use to ensure that poorly-written or malicious application code could not modify or destroy existing log records? Do I need to wrap my MongoDB logging in a service layer that enforces the write-only policy, or can I use some combination of configuration, query hacking, and replication to ensure a consistent, audit-able record is maintained?

  • Linq to SQL and concurrency with Rob Conery repository pattern

    - by David Hall
    I have implemented a DAL using Rob Conery's spin on the repository pattern (from the MVC Storefront project) where I map database objects to domain objects using Linq and use Linq to SQL to actually get the data. This is all working wonderfully, giving me the full control over the shape of my domain objects that I want, but I have hit a problem with concurrency that I thought I'd ask about here. I have concurrency working, but the solution feels like it might be wrong (just one of those gitchy feelings). The basic pattern is:

        private MyDataContext _dataContext;
        private Table<Task> _tasks;

        public Repository(MyDataContext datacontext)
        {
            _dataContext = datacontext;
        }

        public IQueryable<Domain.Task> GetTasks()
        {
            _tasks = _dataContext.Tasks;
            return from t in _tasks
                   select new Domain.Task
                   {
                       Name = t.Name,
                       Id = t.TaskId,
                       Description = t.Description
                   };
        }

        public void SaveTask(Domain.Task task)
        {
            Task dbTask = null;
            // Logic for new tasks omitted...
            dbTask = (from t in _tasks
                      where t.TaskId == task.Id
                      select t).SingleOrDefault();
            dbTask.Description = task.Description;
            dbTask.Name = task.Name;
            _dataContext.SubmitChanges();
        }

    So with that implementation I've lost concurrency tracking because of the mapping to the domain task. I get it back by storing the private Table<Task>, which is my data context's list of tasks at the time of getting the original task. I then update the tasks from this stored Table<Task> and save what I've updated.

    This is working - I get change conflict exceptions raised when there are concurrency violations, just as I want. However, it just screams to me that I've missed a trick. Is there a better way of doing this? I've looked at the .Attach method on the datacontext, but that appears to require storing the original version in a similar way to what I'm already doing. I also know that I could avoid all this by doing away with the domain objects and letting the Linq to SQL generated objects go all the way up my stack - but I dislike that just as much as I dislike the way I'm handling concurrency.
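    One alternative, sketched under the assumption that the Tasks table maps a rowversion column (IsVersion=true in the dbml) and that Domain.Task carries it as a Version property (neither is in the original code), is a disconnected update via Attach:

        // Sketch: disconnected update; the Version/rowversion drives the concurrency check.
        public void SaveTask(Domain.Task task)
        {
            using (var context = new MyDataContext())
            {
                var dbTask = new Task
                {
                    TaskId = task.Id,
                    Name = task.Name,
                    Description = task.Description,
                    Version = task.Version          // rowversion captured when the task was read
                };

                context.Tasks.Attach(dbTask, true); // attach as modified
                context.SubmitChanges();            // throws ChangeConflictException on a conflict
            }
        }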

  • Installable ISAM not found

    - by lucky
    I have a requirement in which I upload Excel sheets to a SQL Server database. The business logic is executed and displayed as reports in PHP. It was working fine till yesterday. Today I tried to upload Excel files and it is throwing an error message.

    Translated version of it by me:

        The OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)" returned the message "Installable ISAM not found."

    This is the original message in German:

        [Microsoft][ODBC SQL Server Driver][SQL Server] OLE DB-Anbieter "Microsoft.Jet.OLEDB.4.0" für den Verbindungsserver "(null)" hat die Meldung "Installierbares ISAM nicht gefunden." zurückgeben.

    Query that I used in the stored procedure:

        EXEC('SELECT * INTO temp FROM OPENROWSET(''Microsoft.Jet.OLEDB.4.0'', ''Excel 8.0;Database=' + @ba_bm_status + ''',' + '''SELECT * FROM [qry_BA_Controlling (Report)$]'')');

    @ba_bm_status - input parameter of the stored procedure
    qry_BA_Controlling (Report) - worksheet name
    Web server used: IIS; the connection is through ODBC. I have no information about this error. Can you please help me in solving the same?

  • How to add dynamic profile fields in Invision Power Board?

    - by user361908
    I run a game server and want to link a person's in-game character name and stats to Invision Power Board. I've set up IPB so players currently log in with their in-game login. That means their username on the forum is the same as their username for the game. They can have multiple characters on one account, so ideally I'd like to allow them to choose a main character and display an actual image of that character, and allow them to display other characters if they are online.

    Currently I'm doing something like this by hacking profileFields.php, but it's messy and not very efficient on the user or server end. My code currently uses two custom fields which the player can enter their character names in. To display only their main character, they enter the name in the first field. To also display other characters if they are online, they enter the same name into the second field. To resolve the IDs I have to run a lot of queries. I know PHP, but I am not familiar with IPB's code at all. I just need to be pointed in a direction where I can combine the two fields into one field.

    tl;dr: Here is my setup:

      - Invision Power Board 3
      - Data is stored in MySQL on the same server the forum is hosted on.
      - Usernames on the forum are identical to usernames in the game.

    Here is a breakdown of what I'd like to do:

      - In the edit profile section, I need to resolve the forum username to the game's account id, then display a list of characters and allow them to choose which characters they want to display if they are online, as well as a default character that will be displayed if none are online.
      - In the posts user info pane: display the online character, or the default if none are online.

    Here is what I need to know:

      - How to generate a list of characters in the profile edit form and allow selection (checkbox) of each character to display, as well as the selection of a default character (radio or dropdown?)
      - How to fetch the data and place it in the posts user info pane

  • How do I insert and query a DateTime object in SQLite DB from C# ?

    - by Soham
    Hi all, consider this snippet of code:

        string sDate = string.Format("{0:u}", this.Date);
        Conn.Open();
        Command.CommandText = "INSERT INTO TRADES VALUES("
            + "\"" + this.Date + "\"" + ","
            + this.ATR + ","
            + "\"" + this.BIAS + "\"" + ")";
        Command.ExecuteNonQuery();

    Note the "this.Date" part of the command. Date is an object of type DateTime in the C# environment, and the DB doesn't store it (somewhere in a SQLite forum it was written that the ADO.NET wrapper automatically converts the DateTime type to ISO 8601 format). But when I use sDate (shown in the first line) instead of this.Date, it stores properly.

    My problem actually doesn't end here. Even if I use sDate, I have to retrieve it through a query, and that is creating the problem. Any query of this format

        SELECT * FROM <Table_Name> WHERE DATES = "YYYY-MM-DD"

    returns nothing, whereas replacing '=' with '>' or '<' returns the right results. So my point is: how do I query for Date variables from a SQLite database? And if there is a problem with the way I stored it (i.e. not ISO 8601 compliant), then how do I make it compliant?
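    A hedged sketch of the parameterized alternative, assuming the ADO.NET wrapper in use is System.Data.SQLite (the post doesn't name it) and that Conn is the open connection from the snippet above; parameters let the provider pick one consistent DateTime representation for both the insert and the comparison:

        using System.Data.SQLite;   // assumption: the "ADO.NET wrapper" is System.Data.SQLite

        // Sketch: insert and query through parameters instead of concatenated SQL text.
        using (var insert = new SQLiteCommand("INSERT INTO TRADES VALUES (@date, @atr, @bias)", Conn))
        {
            insert.Parameters.AddWithValue("@date", this.Date);
            insert.Parameters.AddWithValue("@atr", this.ATR);
            insert.Parameters.AddWithValue("@bias", this.BIAS);
            insert.ExecuteNonQuery();
        }

        using (var select = new SQLiteCommand("SELECT * FROM TRADES WHERE DATES = @date", Conn))
        {
            // Both sides of the comparison now use the provider's own DateTime formatting.
            select.Parameters.AddWithValue("@date", this.Date);
            using (var reader = select.ExecuteReader())
            {
                while (reader.Read())
                {
                    // process the row
                }
            }
        }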

  • AS 400 Performance from .Net iSeries Provider

    - by Nathan
    Hey all, first off, I am not an AS/400 guy - at all. So please forgive me for asking any noobish questions here.

    Basically, I am working on a .NET application that needs to access the AS/400 for some real-time data. Although I have the system working, I am getting very different performance results between queries. Typically, when I make the first request against a SPROC on the AS/400, I am seeing ~14 seconds to get the full data set. After that initial call, any subsequent calls usually only take ~1 second to return. This performance improvement remains for ~20 mins or so before it takes 14 seconds again. The interesting part is that, if the stored procedure is executed directly in iSeries Navigator, it always returns within milliseconds (no change in response time).

    I wonder if it is a caching / execution plan issue, but I can only apply my SQL Server logic to the AS/400, which is not always a match. Any suggestions on what I can do to receive a more consistent response time, or simply insight as to why the AS/400 is acting in this manner when using the iSeries Data Provider for .NET? Is there a better access method that I should use?

    Just in case, here's the code I am using to connect to the AS/400:

        Dim Conn As New IBM.Data.DB2.iSeries.iDB2Connection(ConnectionString)
        Dim Cmd As New IBM.Data.DB2.iSeries.iDB2Command("SPROC_NAME_HERE", Conn)
        Cmd.CommandType = CommandType.StoredProcedure

        Using Conn
            Conn.Open()
            Dim Reader = Cmd.ExecuteReader()
            Using Reader
                While Reader.Read()
                    'Do Something
                End While
                Reader.Close()
            End Using
            Conn.Close()
        End Using

  • Password Recovery without sending password via email

    - by Brian
    So, I've been playing with asp:PasswordRecovery and discovered I really don't like it, for several reasons:

    1) Alice's password can be reset even without having access to Alice's email. A security question for password resets mitigates this, but does not really satisfy me.

    2) Alice's new password is sent back to her in cleartext. I would rather send her a special link to my page (e.g. a page like example.com/recovery.aspx?P=lfaj0831uefjc), which would let her change her password.

    I imagine I could do this myself by creating some sort of table of expiring password recovery pages and sending those pages to users who asked for a reset. Somehow those pages could also change user passwords behind the scenes (e.g. by resetting them manually and then using the text of the new password to change the password, since a password cannot be changed without knowing the old one). I'm sure others have had this problem before, and that kind of solution strikes me as a little hacky. Is there a better way to do this? An ideal solution does not violate encapsulation by accessing the database directly but instead uses the existing stored procedures within the database... though that may not be possible.
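    The reset-then-change step described above, sketched with the standard ASP.NET Membership API (the expiring-token lookup that identifies the user is assumed to have happened already and is not shown):

        using System.Web.Security;

        // Sketch: give the user a new password without ever knowing the old one,
        // by resetting to a temporary password and immediately changing it.
        public static void SetNewPassword(string userName, string newPassword)
        {
            MembershipUser user = Membership.GetUser(userName);

            // ResetPassword supplies a known "old" password for ChangePassword;
            // this parameterless overload requires requiresQuestionAndAnswer="false".
            string tempPassword = user.ResetPassword();
            user.ChangePassword(tempPassword, newPassword);
        }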

  • Hive Based Registry in Flash

    - by Psychic
    To start with, I'll say I've read the post here and I'm still having trouble. I'm trying to create a CE6 image with a hive-based registry that actually stores results through a reboot.

      - I've ticked the hive settings in the catalog items.
      - In common.reg, I've set the location of the hive ([HKEY_LOCAL_MACHINE\init\BootVars] "SystemHive") to "Hard Drive\Registry" (Note: the flash shows up as a device called "Hard Drive")
      - In common.reg, I've set "Flags"=dword:3 in the same place to get the device manager loaded along with the storage manager
      - I've verified that these settings are wrapped in "; HIVE BOOT SECTION"

    This is where it starts to fall over. It all compiles fine, but on the target system, when it boots, I get:

      - A directory called "Hard Disk" where a registry is put
      - A device named "Hard Disk2" where the permanent flash is
      - Any changes made to the registry are lost on a reboot

    What am I still missing? Why is the registry not being stored on the flash? Strangely, if I create a random file/directory in the registry directory, it is still there after a reboot, so even though this directory isn't on the other partition (where I tried to put it), it does appear to be permanent. If it is permanent, why don't registry settings save (i.e. Ethernet adapter IP addresses)? I'm not using any specific profiles, so I'm at a loss as to what the last step is to make this hive registry a permanent store.

  • Updating a status on a Winform in BackgroundWorker

    - by Mike Wills
    I have a multi-step BackgroundWorker process. I use a marquee progress bar because several of these steps are run on an iSeries server, so there isn't any good way to determine a percentage. What I am envisioning is a label with updates after every step. How would you recommend updating a label on a WinForm to reflect each step?

    Figured I would add a bit more. I call some CL and RPG programs via a stored procedure on an iSeries (or IBM i or AS/400 or a midrange computer running OS/400... er... i5/OS (damn you IBM for not keeping the same name year-to-year)). Anyway, I have to wait until that step is fully complete before I can continue on the WinForm side. I was thinking of sending feedback to the user giving the major steps:

      - Dumping data to iSeries
      - Running month-end
      - Creating reports
      - Uploading final results

    I probably should have given this in the beginning. Sorry about that. I try to keep my questions general enough for others to make use of later rather than my specific task.
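    A minimal WinForms sketch (worker, lblStatus and the step methods are illustrative names, not from the original post) that passes a status string through ReportProgress so the label is updated on the UI thread between the long-running steps:

        // Sketch: send a status string from DoWork, apply it in ProgressChanged.
        worker.WorkerReportsProgress = true;

        worker.DoWork += (s, e) =>
        {
            worker.ReportProgress(0, "Dumping data to iSeries...");
            DumpDataToISeries();

            worker.ReportProgress(0, "Running month-end...");
            RunMonthEnd();

            worker.ReportProgress(0, "Creating reports...");
            CreateReports();

            worker.ReportProgress(0, "Uploading final results...");
            UploadFinalResults();
        };

        worker.ProgressChanged += (s, e) =>
        {
            // Raised on the UI thread, so it is safe to touch the control here.
            lblStatus.Text = (string)e.UserState;
        };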

  • Load richtextbox from memorystream. WPF/VB.NET

    - by Peter
    Hi, I have some trouble with loading a RichTextBox from a MemoryStream. I have some data in a database table stored as a byte array; I convert it to a string, load it into a MemoryStream, and then I want to load that memory stream into the RichTextBox. The application breaks on

        Dim tr As New TextRange(rtbTemplate.Document.ContentStart, rtbTemplate.Document.ContentEnd)

    though.

    Code for getting the data from the database:

        Dim TemplateData As Byte() = TemplateDataTableInstance.Rows(0).Item("TemplateData")
        Dim strTemplateData As String
        Dim enc As New System.Text.UTF8Encoding()
        strTemplateData = enc.GetString(TemplateData)
        ' I put a messagebox here to check if I get the data I want, and I do

    Now, how do I sort out the rest? I have

        Dim strDataFormat As String = DataFormats.Rtf

        Using ms As New MemoryStream(strTemplateData)
            Dim tr As New TextRange(rtbTemplate.Document.ContentStart, rtbTemplate.Document.ContentEnd)
            tr.Load(ms, strDataFormat)
        End Using

    and my RichTextBox in XAML:

        <RichTextBox x:Name="rtbLetter">
            <RichTextBox.Resources>
                <Style TargetType="{x:Type Paragraph}">
                    <Setter Property="Margin" Value="0"/>
                </Style>
            </RichTextBox.Resources>
            <FlowDocument FontSize="12" FontFamily="Times New Roman">
            </FlowDocument>
        </RichTextBox>

    Any help is appreciated.

  • How do you protect against specific CSRF attack

    - by Saif Bechan
    I am going through the OWASP Top 10 lists of 2007 and 2010. I stumbled upon Cross Site Request Forgery (CSRF); this is often called session riding, as you let the user use his session to fulfill your wishes. Now a solution to this is adding a token to every URL, and this token is checked for every link. For example, to vote on product x the URL would be:

        http://mysite.com?token=HVBKJNKL

    This looks like a solid solution, because a hacker cannot guess the token. But I was thinking of the following scenario (I do not know if it is possible): you create a website with a hidden iFrame or div. After that you can load my website in it, either using just the normal iFrame or AJAX. When you have my website loaded hidden inside your website, and the user has a stored session, the following can be done: you can retrieve the token from the URLs, and still do all the actions needed.

    Is it possible to do something like this? Or is it not possible to do this cross-domain?

  • controller path not found for static images? asp.net mvc routing issue?

    - by rksprst
    I have an image folder stored at ~/Content/Images/. I am loading these images via

        <img src="/Content/Images/Image.png" />

    Recently, the images aren't loading and I am getting the following errors in my error log. What's weird is that some images load fine, while others do not load. Anyone have any idea what is wrong with my routes? Am I missing an ignore route for the /Content/ folder? I am also getting the same error for favicon.ico and a bunch of other image files...

        <Fatal> -- 3/25/2010 2:32:38 AM -- System.Web.HttpException: The controller for path '/Content/Images/box_bottom.png' could not be found or it does not implement IController.
           at System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(Type controllerType)
           at System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName)
           at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContextBase httpContext)
           at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContext httpContext)
           at System.Web.Mvc.MvcHandler.System.Web.IHttpHandler.ProcessRequest(HttpContext httpContext)
           at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
           at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    My current routes look like this:

        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapRoute(
            "Default",                                   // Route name
            "{controller}/{action}/{id}",                // URL with parameters
            new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
        );

        routes.MapRoute(
            "ControllerDefault",                         // Route name
            "{controller}/project/{projectid}/{action}/{searchid}", // URL with parameters
            new { controller = "Listen", action = "Index", searchid = "" }  // Parameter defaults
        );

    Thanks!
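    For reference, a hedged sketch of the extra ignore routes that are commonly added for static content and the favicon (whether they resolve this particular setup depends on why those requests are reaching MVC routing at all); they would sit at the top of RegisterRoutes, next to the existing .axd ignore:

        routes.IgnoreRoute("Content/{*pathInfo}"); // leave /Content/... (images, css) out of routing
        routes.IgnoreRoute("favicon.ico");         // same for the root favicon request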

  • Serialization of Queue type- Serialization not working; C#

    - by Soham
    Hi all, consider this piece of code:

        private Queue Date = new Queue();
        // other declarations

        public DateTime _Date
        {
            get { return (DateTime)Date.Peek(); }
            set { Date.Enqueue(value); }
        }

        // other properties and stuff....

        public void UpdatePosition(...)
        {
            // other code
            IFormatter formatter = new BinaryFormatter();
            Stream Datestream = new MemoryStream();
            formatter.Serialize(Datestream, Date);
            byte[] Datebin = new byte[2048];
            Datestream.Read(Datebin, 0, 2048);

            // Debug-Bug
            Console.WriteLine(Convert.ToString(this._Date));
            Console.WriteLine(BitConverter.ToString(Datebin, 0, 3));
            // other code
        }

    The output of the first WriteLine is perfect, i.e. a check of whether the Queue really is initialised. It is. The right variables are stored and so on. [I inserted a value in that queue; that part of the code is not shown.] But the second WriteLine is not giving the expected answer: it serializes the entire Queue to 00-00-00. Want some serious help! Soham
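    One detail worth checking, sketched below (not from the original post): after Serialize, the MemoryStream's position sits at the end of the written data, so the subsequent Read returns nothing and Datebin stays zero-filled; rewinding first, or using ToArray(), changes what the second WriteLine prints:

        using System;
        using System.IO;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Formatters.Binary;

        // Sketch: rewind the stream after serializing before reading it back.
        IFormatter formatter = new BinaryFormatter();
        using (var datestream = new MemoryStream())
        {
            formatter.Serialize(datestream, Date);

            datestream.Position = 0;                  // Serialize left the position at the end
            byte[] datebin = new byte[datestream.Length];
            datestream.Read(datebin, 0, datebin.Length);

            // Equivalent shortcut:
            // byte[] datebin = datestream.ToArray();

            Console.WriteLine(BitConverter.ToString(datebin, 0, 3));
        }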

  • Using Google Maps v3, PHP and Json to plot markers

    - by bateman_ap
    Hi, I am creating a map using the new(ish) v3 of the Google Maps API. I have managed to get a map displaying using code as below:

        var myLatlng = new google.maps.LatLng(50.8194000, -0.1363000);
        var myOptions = {
            zoom: 14,
            center: myLatlng,
            mapTypeControl: false,
            scrollwheel: false,
            mapTypeId: google.maps.MapTypeId.ROADMAP
        };
        var map = new google.maps.Map(document.getElementById("location-map"), myOptions);

    However, I now want to add a number of markers I have stored in a PHP array. The array currently looks like this if I print it out to screen:

        Array
        (
            [0] => Array
                (
                    [poiUid] => 20
                    [poiName] => Brighton Cineworld
                    [poiCode] => brighton-cineworld
                    [poiLon] => -0.100450
                    [poiLat] => 50.810780
                    [poiType] => Cinemas
                )
            [1] => Array
                (
                    [poiUid] => 21
                    [poiName] => Brighton Odeon
                    [poiCode] => brighton-odeon
                    [poiLon] => -0.144420
                    [poiLat] => 50.821860
                    [poiType] => Cinemas
                )
        )

    All the reading I have done so far suggests I turn this into JSON using json_encode. If I run the array through this and echo it to the screen I get:

        [{"poiUid":"20","poiName":"Brighton Cineworld","poiCode":"brighton-cineworld","poiLon":"-0.100450","poiLat":"50.810780","poiType":"Cinemas"},{"poiUid":"21","poiName":"Brighton Odeon","poiCode":"brighton-odeon","poiLon":"-0.144420","poiLat":"50.821860","poiType":"Cinemas"}]

    This is the bit where I am struggling. I am not sure the encoded array is what I need to start populating markers. I think I need something like the code below, but am not sure how to add the markers from my passed-through JSON:

        var locations = $jsonPoiArray;
        for (var i = 0; i < locations.length; i += 1) {
            // Create a new marker
        };

  • UINavigationController inside tabbar loading a child root view

    - by Doug
    Hi guys, firstly I'll preface by saying that I am a complete Cocoa Touch / Objective-C noob (.NET dev having a dabble). I have searched on Google as well as here but cannot seem to find an easy solution.

    I have a UITabBarController view with a UINavigationController inside its first tab. I have the root view for this UINavigationController stored in a separate class and NIB, as I am trying to separate the data viewing from the data loading (I'm going to reuse the table list in multiple places in my database) and simply pass the root view its data using a loading method and have it take it from there.

    What I want to happen:

      - The app loads and loads the first view of the tab bar (a UINavigationController)
      - The UINavigationController inside the first view loads a root view (a UIViewController with a table view) and sets its title
      - The UINavigationController loads the data from a web service and parses it
      - The UINavigationController sends the data to a loading method inside the UIViewController

    Am I thinking about this completely wrongly? What currently happens:

      - The first tab bar loads with an empty UINavigationController (no table view)
      - The data methods fire and get the web service data
      - This child view gets sent its data using the loading method
      - The table view delegate events fail to fire inside the child view, telling it to load the data into the table

    I just can't seem to figure out how to load my second view inside the root view of the navigation controller and then send it my data.
