Search Results

Search found 18347 results on 734 pages for 'generate password'.


  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries, traded for better select performance. I can conceptually understand what happens in the case of an insert, because SQL Server has to write entries into each index matching the new rows, but update and delete are a little murkier to me because I can't quite wrap my head around what the database engine has to do.

    Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL):

        TABLE Foo
            col1 int
            ,col2 int
            ,col3 int
            ,col4 int
            PRIMARY KEY (col1, col2)

        INDEX IX_1
            col3
            INCLUDE col4

    Now, if I issue the statement:

        DELETE FROM Foo WHERE col1 = 12 AND col2 > 34

    I understand what the engine must do to update the table (or clustered index, if you prefer): the index is set up to make it easy to find the range of rows to be removed and remove them. However, at this point it also needs to update IX_1, and the predicate I gave it offers no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the nonclustered index?

    It might help me wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this: I have a database that is spending a significant amount of time in deletes, and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo, which lists in its details section the other indexes that need to be updated, but gives no indication of the relative cost of those indexes. Are they all equal in this case? Is there some way to estimate the impact of removing one or more of these indexes without actually trying it?
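    One way to get per-index numbers without dropping anything: the index operational stats DMV counts the leaf-level maintenance each index has actually absorbed. A sketch (SQL Server 2005+; dbo.Foo is the table from the question):

        -- per-index write maintenance counters for dbo.Foo
        SELECT i.name,
               s.leaf_insert_count,
               s.leaf_update_count,
               s.leaf_delete_count
        FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.Foo'), NULL, NULL) AS s
        JOIN sys.indexes AS i
          ON i.object_id = s.object_id AND i.index_id = s.index_id;

    Indexes with a high leaf_delete_count relative to their read activity (compare against sys.dm_db_index_usage_stats) are the ones whose removal would most likely help the DELETE workload.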

    Read the article

  • .NET 4: Process.Start using credentials returns empty output

    - by alexey
    I run an external program from ASP.NET:

        var process = new Process();
        var startInfo = process.StartInfo;
        startInfo.FileName = filePath;
        startInfo.Arguments = arguments;
        startInfo.UseShellExecute = false;
        startInfo.RedirectStandardOutput = true;
        startInfo.RedirectStandardError = true;
        process.Start();
        process.WaitForExit();
        Console.Write("Output: {0}", process.StandardOutput.ReadToEnd());
        Console.Write("Error Output: {0}", process.StandardError.ReadToEnd());

    Everything works fine with this code: the external program is executed and process.StandardOutput.ReadToEnd() returns the correct output. But after I add these two lines before process.Start() (to run the program in the context of another user account):

        startInfo.UserName = userName;
        startInfo.Password = securePassword;

    the program is not executed and process.StandardOutput.ReadToEnd() returns an empty string. No exceptions are thrown. userName and securePassword are correct (with incorrect credentials an exception is thrown). How can I run the program in the context of another user account?
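    For reference, a sketch of the adjustments usually worth trying when StartInfo credentials are in play; the Domain line, the profile/working-directory settings, and the read-before-wait reordering are assumptions to test, not a confirmed fix:

        startInfo.UserName = userName;
        startInfo.Password = securePassword;
        startInfo.Domain = domain;               // if the account is a domain account
        startInfo.LoadUserProfile = true;        // some programs fail silently without a profile
        startInfo.WorkingDirectory = Path.GetDirectoryName(filePath);

        process.Start();
        // read before WaitForExit() so a full pipe buffer can't deadlock the child
        string output = process.StandardOutput.ReadToEnd();
        string error = process.StandardError.ReadToEnd();
        process.WaitForExit();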

    Read the article

  • DAL Layer: EF 4.0 or normal data access layer with stored procedures

    - by Harryboy
    Hello experts. I am working on a mid-to-large size application which will be sold as a product, and we need to decide on our DAL layer. The application UI is in Silverlight, and the DAL sits behind a service layer. We are also moving ahead with a domain model, so our DB tables and domain classes do not share the same structure; patterns like Data Mapper and Repository will definitely come into the picture. I need to design the DAL considering these factors, in priority order:

    1. Speed of development with above-average performance
    2. Maintenance
    3. Future support and stability of the technology
    4. Performance

    Limitations: (1) we must stay strictly with Microsoft, so we cannot use NHibernate or any ORM other than EF 4.0; (2) we can use a code generation tool (it should be open source or very cheap), but it should only generate .NET code, so there is no per-copy licensing issue.

    Questions: I have read so many articles about EF 4.0; at the outset it looks like it still lacks features compared to NHibernate, but it is considerably better than EF 1.0. Do you feel we should go ahead with EF 4.0, or stick to ADO.NET and use a code generation tool like CodeSmith or whichever you think best? I also need to estimate how long it would take to port the application from EF 4.0 to ADO.NET if we get stuck on some EF 4.0 feature or hit a serious performance issue, and, in the reverse case, how long it would take to switch from ADO.NET to EF 4.0. Lastly, as I was going through the articles, the code-only approach (with POCO classes) seems best suited to our requirement, since switching from one technology to the other is easiest that way. Please share your thoughts and guidance on the questions above.

    Read the article

  • How to handle SalesForce WSDL files for sandbox and production sites in ASP.Net?

    - by Traveling Tech Guy
    I need to authenticate users and get info about them from an ASP.NET application. Since I have two sites (sandbox and production) and two org IDs, I needed to generate two SalesForce WSDL files. I diffed the two files (each about 600 KB) and while they are 95% the same, there are enough differences strewn all over the place that I need to use them both. I added both as web references to my solution, and here's where my problem starts: obviously, I cannot use both references in the same file, as they contain the same classes/functions. I had to write a quick-and-dirty solution over the weekend, so I just created two classes, each using a different web reference but otherwise with the exact same functionality, and I use the appropriate one based on the URL the user is coming from. This works well, but strikes me as a bad (read: quick-and-dirty) solution. My question: is there any way to do one or more of the following?

    1. Change the web reference on the fly?
    2. Use both web references in the same file, but put one in a different namespace?
    3. Find a better solution to the whole situation?

    I also end up with a huge XmlSerializer.dll (3 MB!), probably due to using both huge WSDL files. Thanks for your time.
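    For what it's worth, a sketch of option 2: each WSDL imported under its own web-reference namespace, with a small adapter interface hiding the duplication. All type and member names here are hypothetical stand-ins for the generated proxy classes:

        public interface ISforceClient
        {
            string Login(string user, string password);
        }

        public class SandboxClient : ISforceClient
        {
            private readonly SandboxSF.SforceService svc = new SandboxSF.SforceService();
            public string Login(string user, string password)
            {
                return svc.login(user, password).sessionId;
            }
        }

        public class ProductionClient : ISforceClient
        {
            private readonly ProdSF.SforceService svc = new ProdSF.SforceService();
            public string Login(string user, string password)
            {
                return svc.login(user, password).sessionId;
            }
        }

        // chosen once, based on the incoming URL:
        ISforceClient client = isSandbox
            ? (ISforceClient)new SandboxClient()
            : new ProductionClient();

    The duplication stays, but it is confined to thin adapters instead of two full copies of the business logic.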

    Read the article

  • Convert a Relative URL to an Absolute URL in Actionscript / Flex

    - by Bear
    I am working with Flex, and I need to take a relative URL source property and convert it to an absolute URL before loading it. The specific case involves tweaking SoundEffect's load method: I need to determine, from the source property alone, whether a file will be loaded from the local file system or over the network, and the easiest way I've found to do this is to generate the absolute URL. I'm having trouble generating the absolute URL for SoundEffect in particular. Here were my initial thoughts, none of which have worked:

    1. Look for the DisplayObject that the SoundEffect targets, and use its loaderInfo property. The target is null when the SoundEffect loads, so this doesn't work.
    2. Look at FlexGlobals.topLevelApplication, at the url or loaderInfo properties. Neither of these is set, however.
    3. Look at FlexGlobals.topLevelApplication.systemManager.loaderInfo. This was also not set.

    The SoundEffect.as code basically boils down to:

        var url:String = "mySound.mp3";
        /* >> I'd like to convert the URL to absolute form here
              and tweak it as necessary << */
        var req:URLRequest = new URLRequest(url);
        var loader:Loader = new Loader();
        loader.load(req);

    Does anyone know how to do this? Any help clarifying the rules of how relative URLs are resolved for URLRequests in ActionScript would also be much appreciated.

    Edit: I would also be perfectly satisfied with some way to tell whether the URL will be loaded from the local file system or over the network. With an absolute URL it would be easy to just look at the prefix, like file:// or http://.
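    In the absence of a usable loaderInfo, a string-level fallback is possible. A sketch, assuming the caller can supply a base URL when one is known:

        // matches an explicit scheme such as http:, https:, file:, app:
        private static const SCHEME:RegExp = /^[a-z][a-z0-9+.\-]*:/i;

        public static function toAbsolute(url:String, base:String):String {
            if (SCHEME.test(url))
                return url; // already absolute
            // naive join: resolves against base's directory, ignores "../" segments
            return base.substr(0, base.lastIndexOf("/") + 1) + url;
        }

        public static function isLocal(url:String):Boolean {
            return url.indexOf("file://") == 0 || url.indexOf("app:") == 0;
        }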

    Read the article

  • How to handle "Remember me" in the Asp.Net Membership Provider

    - by RemotecUk
    I've written a custom membership provider for my ASP.NET website. I'm using the default FormsAuthentication redirect, where you simply pass true to the method to tell it to "Remember me" for the current user. I presume this function simply writes a cookie to the local machine containing some login credential of the user. What does ASP.NET put in this cookie? If the format of my usernames were known (e.g. sequential numbering), could someone easily copy this cookie and, by putting it on their own machine, access the site as another user?

    Additionally, I need to be able to intercept the authentication of the user who holds the cookie. Since the last time they logged in, their account may have been cancelled, they may need to change their password, and so on, so I need the option to intercept the authentication and, if everything is still OK, allow them to continue, or else redirect them to the proper login page.

    I would be grateful for guidance on both of these points. I gather that for the second I can possibly put something in global.asax to intercept the authentication? Thanks in advance.
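    On the second point, Application_AuthenticateRequest in Global.asax runs on every request and can inspect the decrypted ticket. A sketch (AccountIsStillActive is a hypothetical helper against your membership store):

        protected void Application_AuthenticateRequest(object sender, EventArgs e)
        {
            HttpCookie cookie = Request.Cookies[FormsAuthentication.FormsCookieName];
            if (cookie == null) return;

            // the "Remember me" cookie carries a FormsAuthenticationTicket:
            // user name, issue/expiry times and optional UserData, encrypted
            // and signed with the server's machineKey
            FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie.Value);
            if (ticket == null || ticket.Expired) return;

            if (!AccountIsStillActive(ticket.Name))
            {
                FormsAuthentication.SignOut();
                Response.Redirect(FormsAuthentication.LoginUrl);
            }
        }

    This also bears on the first point: because the ticket is encrypted and MAC'd with the machineKey, a user cannot forge another user's cookie from a known username format, though a stolen cookie can of course be replayed from another machine until it expires.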

    Read the article

  • Error getting twitter request token using OAuth and PEAR Services_Twitter

    - by Onema
    Hello, I am moving from basic authentication using username and password to OAuth-based authentication. I was using an old version of the PEAR package Services_Twitter, which did not support OAuth. The latest version of this package supports OAuth authentication and has a few dependencies (HTTP_Request2, HTTP_OAuth); it was very simple to install them and upgrade the package. I did all this on my local machine and had no trouble getting the authentication up and running. I then committed this code to the test site, but every time the code requests a request token I get the following error message:

        Unable to connect to ssl://api.twitter.com:443. Error #0

    I have spent six hours making sure that all the PEAR packages were up to date, checking the consumer token and token secret, and making sure port 443 is not closed, in addition to various other tests. I have exhausted my resources and I come to you in hope of finding some answers. Thank you.

    PS: One of the things I do not understand is why the message says that the URL is ssl://api.twitter.com:443 rather than https://api.twitter.com/request_token; the latter is the one I am using to get the request token.
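    On that last point: ssl://api.twitter.com:443 is simply PHP's stream-socket notation for an HTTPS connection; HTTP_Request2's socket adapter translates https://api.twitter.com/request_token into an ssl:// socket before sending the request path. A quick way to test whether the server itself can open that socket (my assumption: the OpenSSL extension or outbound SSL is the problem on the test box):

        <?php
        $errno = 0;
        $errstr = '';
        $fp = @fsockopen('ssl://api.twitter.com', 443, $errno, $errstr, 10);
        if ($fp === false) {
            // "Error #0" with an empty message often means no SSL transport,
            // i.e. the OpenSSL extension is not loaded on this host
            echo "Connection failed: #$errno $errstr\n";
        } else {
            echo "Connected OK\n";
            fclose($fp);
        }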

    Read the article

  • iPhone localization

    - by hardik
    Hello all. When I run the command to generate a Localizable.strings file from my Terminal, it reports bad entries in the Classes files. The file gets generated, but it has no entries in it, even though it should. Here is what I am running in my Terminal; somehow it is not working, so please guide me, I need to solve this:

        Last login: Mon Jun 7 18:02:09 on ttys000
        comp10:~ admin$ cd ..
        comp10:Users admin$ cd ..
        comp10:/ admin$ cd /Users/admin/Desktop/localisationwithcode
        comp10:localisationwithcode admin$ sudo
        usage: sudo -K | -L | -V | -h | -k | -l | -v
        usage: sudo [-HPSb] [-p prompt] [-u username|#uid]
               { -e file [...] | -i | -s | <command> }
        comp10:localisationwithcode admin$ genstrings Classes/*.m
        Bad entry in file Classes/localisationwithcodeViewController.m (line = 35): Argument is not a literal string.
        Bad entry in file Classes/localisationwithcodeViewController.m (line = 36): Argument is not a literal string.
        Bad entry in file Classes/localisationwithcodeViewController.m (line = 37): Argument is not a literal string.
        Bad entry in file Classes/localisationwithcodeViewController.m (line = 38): Argument is not a literal string.
        2010-06-07 18:04:45.047 genstrings[3851:10b] _CFGetHostUUIDString: unable to determine UUID for host. Error: 35
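    The "Argument is not a literal string" entries mean that on those four lines of the view controller, the first argument to NSLocalizedString (or an equivalent macro) is a variable, which genstrings cannot extract. Purely illustrative code, not taken from the project:

        NSString *key = @"greeting";
        label.text = NSLocalizedString(key, nil);                              // genstrings: bad entry
        label.text = NSLocalizedString(@"greeting", @"Home screen greeting");  // extractable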

    Read the article

  • Testing a patch to the Rails mysql adapter

    - by Sleepycat
    I wrote a little monkeypatch to the Rails MysqlAdapter and want to package it up to use in my other projects. I am trying to write some tests for it, but I am still new to testing and I am not sure how to test this. Can someone help get me started? Here is the code I want to test:

        unless RAILS_ENV == 'production'
          module ActiveRecord
            module ConnectionAdapters
              class MysqlAdapter < AbstractAdapter

                def select_with_explain(sql, name = nil)
                  explanation = execute_with_disable_logging('EXPLAIN ' + sql)
                  e = explanation.all_hashes.first
                  exp = e.collect { |k, v| " | #{k}: #{v} " }.join
                  log(exp, 'Explain')
                  select_without_explain(sql, name)
                end

                def execute_with_disable_logging(sql, name = nil) #:nodoc:
                  # Run a query without logging
                  @connection.query(sql)
                rescue ActiveRecord::StatementInvalid => exception
                  if exception.message.split(":").first =~ /Packets out of order/
                    raise ActiveRecord::StatementInvalid, "'Packets out of order' error was received from the database. Please update your mysql bindings (gem install mysql) and read http://dev.mysql.com/doc/mysql/en/password-hashing.html for more information. If you're on Windows, use the Instant Rails installer to get the updated mysql bindings."
                  else
                    raise
                  end
                end

                alias_method_chain :select, :explain
              end
            end
          end
        end

    Thanks.
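    One possible starting point: drive a real query through the adapter and assert on what got logged. A rough Test::Unit sketch, assuming a configured MySQL connection in the test environment; depending on the Rails version you may need to reconnect after swapping the logger, since adapters capture it at connect time:

        require 'test_helper'
        require 'stringio'
        require 'logger'

        class ExplainLoggingTest < ActiveSupport::TestCase
          def test_select_logs_an_explain_entry
            log = StringIO.new
            old_logger = ActiveRecord::Base.logger
            ActiveRecord::Base.logger = Logger.new(log)
            # any SELECT routed through the adapter should trigger the patch
            ActiveRecord::Base.connection.select_all('SELECT 1')
            assert_match(/Explain/, log.string)
          ensure
            ActiveRecord::Base.logger = old_logger
          end
        end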

    Read the article

  • What is the general feeling about reflection extensions in std::type_info?

    - by Evan Teran
    I've noticed that reflection is one feature that developers coming from other languages find very lacking in C++. For certain applications I can really see why: it is so much easier to write things like an IDE's auto-complete if you have reflection, and serialization APIs would certainly be a world easier if we had it. On the other side, one of the main tenets of C++ is "don't pay for what you don't use", which makes complete sense; that's something I love about C++. But it occurred to me there could be a compromise: why don't compilers add extensions to the std::type_info structure? There would be no runtime overhead. The binary could end up being larger, but this could be a simple compiler switch to enable/disable, and to be honest, if you are really concerned about the space savings, you'll likely be disabling exceptions and RTTI anyway. Some people cite issues with templates, but the compiler already happily generates std::type_info structures for template types. I can imagine a g++ switch like -fenable-typeinfo-reflection which could become very popular (and mainstream libs like boost/Qt/etc. could easily check for it and generate code which uses it, in which case the end user would benefit at no more cost than flipping a switch). I don't find this unreasonable, since large portable libraries like these already depend on compiler extensions. So why isn't this more common? I imagine I'm missing something; what are the technical issues with this?

    Read the article

  • What's the best way to read a UDT from a database with Java?

    - by Lukas Eder
    I thought I knew everything about UDTs and JDBC until someone on SO pointed out some details of the Javadoc of java.sql.SQLInput and java.sql.SQLData to me. The essence of that hint was (from SQLInput):

        An input stream that contains a stream of values representing an
        instance of an SQL structured type or an SQL distinct type. This
        interface, used only for custom mapping, is used by the driver
        behind the scenes, and a programmer never directly invokes
        SQLInput methods.

    This is quite the opposite of what I am used to doing (which is also used and stable in production systems with the Oracle JDBC driver): implement SQLData and provide this implementation in a custom mapping to ResultSet.getObject(int index, Map mapping). The JDBC driver then calls back into my custom type using the SQLData.readSQL(SQLInput stream, String typeName) method. I implement this method and read each field from the SQLInput stream. In the end, getObject() returns a correctly initialised instance of my SQLData implementation holding all data from the UDT. To me, this seems like the perfect way to implement such a custom mapping. Good reasons for going this way:

    1. I can use the standard API, instead of vendor-specific classes such as oracle.sql.STRUCT, etc.
    2. I can generate source code from my UDTs, with appropriate getters/setters and other properties.

    My questions:

    1. What do you think about my approach of implementing SQLData? Is it viable, even if the Javadoc states otherwise?
    2. What other ways of reading UDTs in Java do you know of? E.g. what does Spring do? What does Hibernate do? What does JPA do? What do you do?

    Addendum: UDT support and integration with stored procedures is one of the major features of jOOQ. jOOQ aims at hiding the more complex "JDBC facts" from client code, without hiding the underlying database architecture. If you have similar questions like the above, jOOQ might provide an answer to you.
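    To make question 1 concrete, this is roughly what the standard-API round trip looks like for a hypothetical SQL type MY_SCHEMA.ADDRESS_T with attributes (street VARCHAR, zip NUMBER); note that attribute reads must follow the declaration order of the SQL type:

        public class Address implements java.sql.SQLData {
            public String street;
            public int zip;
            private String typeName;

            public String getSQLTypeName() { return typeName; }

            public void readSQL(java.sql.SQLInput in, String typeName)
                    throws java.sql.SQLException {
                this.typeName = typeName;
                street = in.readString();  // declaration order matters
                zip = in.readInt();
            }

            public void writeSQL(java.sql.SQLOutput out)
                    throws java.sql.SQLException {
                out.writeString(street);
                out.writeInt(zip);
            }
        }

        // usage with a per-call type map:
        // Map<String, Class<?>> map = new HashMap<String, Class<?>>();
        // map.put("MY_SCHEMA.ADDRESS_T", Address.class);
        // Address a = (Address) rs.getObject(1, map);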

    Read the article

  • How to catch HttpRequestValidationException in production

    - by bruno
    Hello all, I have this piece of code to handle HttpRequestValidationException in my global.asax.cs file:

        protected void Application_Error(object sender, EventArgs e)
        {
            var context = HttpContext.Current;
            var exception = context.Server.GetLastError();
            if (exception is HttpRequestValidationException)
            {
                Response.Clear();
                Response.StatusCode = 200;
                Response.Write(@"<html><head></head><body>hello</body></html>");
                Response.End();
                return;
            }
        }

    If I debug my web application, it works perfectly. But when I put it on our production server, the server ignores it and generates the "A potentially dangerous Request.Form value was detected from the client" error page. I don't know what happens exactly. I also don't want to set validateRequest to false in the web.config. The server uses IIS 7.5, and I'm using ASP.NET 3.5. If anybody knows what the problem is, or what I am doing wrong? Thanks, Bruno
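    A likely culprit in production is IIS 7.5 itself: in the integrated pipeline, IIS substitutes its own error page for the response unless the application opts out. A sketch of the same handler with that opt-out (TrySkipIisCustomErrors) and the server error cleared so ASP.NET does not re-handle it:

        protected void Application_Error(object sender, EventArgs e)
        {
            var exception = HttpContext.Current.Server.GetLastError();
            if (exception is HttpRequestValidationException)
            {
                Server.ClearError();
                Response.Clear();
                Response.StatusCode = 200;
                Response.TrySkipIisCustomErrors = true; // keep IIS from swapping in its page
                Response.Write(@"<html><head></head><body>hello</body></html>");
                Response.End();
            }
        }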

    Read the article

  • Authenticated user cannot log in, "The user does not exist or is not unique."

    - by Aquinas
    This is a weird one. I have a WSS3 site, no MOSS, with a custom membership and role provider that authenticates against CRM. All the users have also been added to the site user list, so once logged in they have correct display names. On dev and stage everything works fine, but on UAT the users can't get past the login screen. The login screen itself is working, in that if you type an incorrect password for a user it comes back with the right message, meaning the custom provider is working fine. If you fill the login form in correctly, you are immediately redirected straight back to the login screen, with the IIS logs showing that the login screen sent the authenticated user to the site and then was sent back. Setting the site to allow anonymous access shows that the user is not logged in on the site side after authenticating correctly. The ULS logs show:

        The user does not exist or is not unique.
        Found 1 trusted forests nzct.local. Found 0 trusted domains

    Adding logging code to the site, I have verified that the membership provider is correctly set and can find the user when asked. Also, when accessing the site user list, I can find the SP user with the right name. It just refuses to set the current user to be the authenticated user. Weird.

    Read the article

  • How to share data between SSRS Security and Data Processing extension?

    - by user2904681
    I've spent a lot of time trying to solve the issue in the title and have not found a solution yet. I use MS SSRS 2012 with custom Security (based on Forms Authentication and ClaimsPrincipal) and Data Processing extensions. At the Data extension level I need to apply a filter programmatically, based on one of the claims, which I only have access to at the Security extension level. Here is the problem: I do not know how to pass the claim from the Security to the Data Processing extension code. What I've tried:

        IAuthenticationExtension.LogonUser(string userName, string password, string authority)
        {
            ...
            ClaimsPrincipal claimsPrincipal = CreateClaimsPrincipal(...);
            Thread.CurrentPrincipal = claimsPrincipal;
            HttpContext.Current.User = claimsPrincipal;
            ...
        }

    But it doesn't work; it seems SSRS overrides it internally with either a GenericPrincipal or a FormsIdentity. The possible workarounds I'm thinking about (but haven't checked yet):

    1. Create an HttpModule which will build an HttpContext with all required information (minus: fetching the claims will be invoked on every request, a huge operation).
    2. Write the logged-in users' information required by the Data extension to a custom SQL table, then read it back.
    3. Try somehow to append it to the cookies during LogOn, then read it each time in IAuthenticationExtension.GetUserInfo and fill the HttpContext.

    None of these seems like a good solution. I would be grateful for any help/advice/comments.
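    If the shared-table workaround (option 2) turns out to be the least bad choice, it needs nothing SSRS-specific; a rough ADO.NET sketch, where the table dbo.UserClaims(UserName, ClaimValue) is hypothetical, written at logon and read back in the data extension by user name:

        static void SaveClaim(string connStr, string user, string claim)
        {
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(
                "UPDATE dbo.UserClaims SET ClaimValue = @c WHERE UserName = @u; " +
                "IF @@ROWCOUNT = 0 INSERT dbo.UserClaims (UserName, ClaimValue) VALUES (@u, @c);",
                conn))
            {
                cmd.Parameters.AddWithValue("@u", user);
                cmd.Parameters.AddWithValue("@c", claim);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }

    The obvious caveats apply: the rows should be cleaned up or timestamped, and the data extension must receive the same user name the security extension authenticated.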

    Read the article

  • Strange File Upload issue with asp.net site on a web farm

    - by Coov
    I have a basic asp.net file upload page. When I test file uploads from my local machine, it works fine. When I test file uploads from our dev machine, it works fine. When I deploy the site to our production web farm, it behaves strangely: if I access the site from off the network, I can load file after file without issue; if I access the site from within our network, I can load the first file just fine, but any subsequent files result in a "bad sequence of commands" error. I'm not sure if this is a web farm issue, a network issue, or something else. It feels like a connection is not being disposed of properly, but it doesn't make sense why everything works fine remotely.

    Markup:

        <asp:FileUpload ID="FileUpload1" runat="server" Width="350px" />
        <asp:Button ID="btnSubmit" runat="server" Text="Upload" onclick="btnSubmit_Click" />

    Code:

        if (FileUpload1.HasFile)
        {
            FtpWebRequest ftpRequest;
            FtpWebResponse ftpResponse;

            ftpRequest = (FtpWebRequest)FtpWebRequest.Create(
                new Uri("ftp://ftp.myftpsite.com/" + FileUpload1.FileName));
            ftpRequest.Method = WebRequestMethods.Ftp.UploadFile;
            ftpRequest.Proxy = null;
            ftpRequest.UseBinary = true;
            ftpRequest.Credentials = new NetworkCredential("username", "password");
            ftpRequest.KeepAlive = false;

            byte[] fileContents = new byte[FileUpload1.PostedFile.ContentLength];
            using (Stream fr = FileUpload1.PostedFile.InputStream)
            {
                fr.Read(fileContents, 0, FileUpload1.PostedFile.ContentLength);
            }
            using (Stream writer = ftpRequest.GetRequestStream())
            {
                writer.Write(fileContents, 0, fileContents.Length);
            }

            ftpResponse = (FtpWebResponse)ftpRequest.GetResponse();
            Response.Write(ftpResponse.StatusDescription);
        }
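    One detail in the code that fits the "connection not disposed" suspicion: the FtpWebResponse is never closed, so the underlying control connection can stay checked out between requests. A sketch of the tail end with deterministic disposal, worth trying before digging into farm networking:

        using (var ftpResponse = (FtpWebResponse)ftpRequest.GetResponse())
        {
            Response.Write(ftpResponse.StatusDescription);
        }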

    Read the article

  • Best way for launching html/jsp to communicate with GWT module

    - by h2g2java
    I asked this at the GWT forum, but I'm impatient for the answer and I seem to get rather good responses here. An HTML or JSP file is used to launch xxx.nocache.js, which then decides which browser "permutation" to use:

        <head>
            <meta http-equiv="content-type" content="text/html;charset=UTF-8">
            <title>xxx</title>
            <script type="text/javascript" language="javascript" src="xxx.nocache.js"></script>
        </head>

    In my case, I am using a JSP. When the JSP executes, it discovers some conditions, and I wish to pass these conditions as values to the GWT module being launched. The "elegant" GWT way to pass these values would be to persist them as request/memcache attributes and then have the GWT module perform an RPC to retrieve them. But suppose the JSP discovers that the current user is Whoopy: shouldn't I simply have the JSP generate JavaScript that stores user = "Whoopy" as a top-level (or named-frame) JavaScript variable, and use JSNI within the module to retrieve the value of user? I have not tried it yet, but I would like to know whether anyone has done it this way without having to use RPC, as sketched below.
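    For the concrete mechanics, GWT ships com.google.gwt.i18n.client.Dictionary for exactly this pattern: the JSP emits a page-level JavaScript object and the module reads it with no RPC round trip. A sketch (currentUser is a hypothetical JSP variable):

        <%-- in the JSP, before the nocache.js script tag: --%>
        <script type="text/javascript">
            var pageInfo = { user: "<%= currentUser %>" };
        </script>

        // in the GWT entry point:
        Dictionary pageInfo = Dictionary.getDictionary("pageInfo");
        String user = pageInfo.get("user");

        // or with raw JSNI instead of Dictionary:
        public static native String getUser() /*-{
            return $wnd.pageInfo.user;
        }-*/;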

    Read the article

  • How do I process multipart HTTP responses in Ruby Net::HTTP?

    - by seal-7
    There is so much information out there on how to generate multipart responses or do multipart file uploads, but I can't seem to find any information on how to process a multipart HTTP response. Here is some IRB output from the multipart HTTP response I am working with:

        >> response.http.content_type
        => "multipart/related"
        >> response.http.body[0..2048]
        => "\r\n------=_Part_3_806633756.1271797659309\r\nContent-Type: text/xml; charset=UTF-8\r\nContent-Transfer-Encoding: binary\r\nContent-Id: <A0FCC4333C6D0FCA346B97FAB6B61818>\r\n\r\n<?xml version="1.0" encoding="UTF-8"?><soapenv:Envelope xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><soapenv:Body><ns1:runReportResponse soapenv:encodingStyle="http://www.w3.org/2003/05/soap-encoding" xmlns:ns1="http://192.168.1.200:8080/jasperserver/services/repository"><ns2:result xmlns:ns2="http://www.w3.org/2003/05/soap-rpc">runReportReturn</ns2:result><runReportReturn xsi:type="xsd:string">&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;\n&lt;operationResult version=&quot;2.0.1&quot;&gt;\n\t&lt;returnCode&gt;&lt;![CDATA[0]]&gt;&lt;/returnCode&gt;\n&lt;/operationResult&gt;\n</runReportReturn></ns1:runReportResponse></soapenv:Body></soapenv:Envelope>\r\n------=_Part_3_806633756.1271797659309\r\nContent-Type: application/pdf\r\nContent-Transfer-Encoding: binary\r\nContent-Id: <report>\r\n\r\n%PDF-1.4\n%\342\343\317\323\n3 0 obj
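    In the absence of a stock parser, splitting by the MIME boundary by hand is workable. A rough sketch against the response above, assuming the boundary parameter is present in the raw Content-Type header (it can also be read off the first line of the body):

        boundary = response.http['content-type'][/boundary="?([^";]+)"?/i, 1]
        parts = response.http.body.split("--#{boundary}")

        parts.each do |part|
          # each part is "headers \r\n\r\n content"
          headers, _, content = part.partition("\r\n\r\n")
          next if content.empty?
          if headers =~ /Content-Type:\s*application\/pdf/i
            File.open('report.pdf', 'wb') { |f| f.write(content) }
          end
        end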

    Read the article

  • How to create a function using jQuery live? [Solved]

    - by Mahmoud
    Hey all, I am trying to create a function that will keep the user in the lightbox images while he adds to cart. For a demo you can visit secure.sabayafrah.com (username: mahmud, password: mahmud). When you click on any image it is enlarged using Lightbox v2, but when the user clicks "add" on the image, the page refreshes. When I asked about it on the jcart support forum, they told me to use jQuery live, but I don't know how to do it. As far as I have tried, this code does nothing:

        jQuery(function($) {
          $('#button').livequery(eventType, function(event) {
            alert('clicked'); // to check if it works or not
            return false;
          });
        });

    I also used:

        jQuery(function($) {
          $('input[name=addto]').livequery(eventType, function(event) {
            alert('clicked'); // to check if it works or not
            return false;
          });
        });

    Yet nothing worked. The code that creates those images is at http://pasite.org/code/572

    Update 1: I have done this:

        function adding(form) {
          $("form.jcart").livequery('submit', function() {
            var b = $(this).find('input[name=<?php echo $jcart['item_id'] ?>]').val();
            var c = $(this).find('input[name=<?php echo $jcart['item_price'] ?>]').val();
            var d = $(this).find('input[name=<?php echo $jcart['item_name'] ?>]').val();
            var e = $(this).find('input[name=<?php echo $jcart['item_qty'] ?>]').val();
            var f = $(this).find('input[name=<?php echo $jcart['item_add'] ?>]').val();
            $.post('<?php echo $jcart['path']; ?>jcart-relay.php', {
              "<?php echo $jcart['item_id'] ?>": b,
              "<?php echo $jcart['item_price'] ?>": c,
              "<?php echo $jcart['item_name'] ?>": d,
              "<?php echo $jcart['item_qty'] ?>": e,
              "<?php echo $jcart['item_add'] ?>": f
            });
            return false;
          });
        }

    and it seems to add to jcart, but the page still refreshes.
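    For the record, what the jcart people were most likely suggesting, as a sketch (assumes jQuery 1.4+, where .live() supports the submit event; '#jcart-tooltip' is a hypothetical container to refresh):

        $(function () {
          $('form.jcart').live('submit', function () {
            $.post('jcart-relay.php', $(this).serialize(), function (response) {
              $('#jcart-tooltip').html(response); // update the cart display in place
            });
            return false; // stop the normal submit, so the lightbox stays open
          });
        });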

    Read the article

  • Saving "heavy" figure to PDF in MATLAB - rendering problem

    - by yuk
    I generate a figure in MATLAB with a large number of points (100,000+) and want to save it to a PDF file. With the zbuffer or painters renderer I get a very large, slow-to-open file (over 4 MB): all points are in vector format. Using the OpenGL renderer rasterizes the figure in the PDF, which is fine for the plot but not good for text labels; the file size is about 150 KB. Try this simplified code, for example:

        x = linspace(1,10,100000);
        y = sin(x) + randn(size(x));
        plot(x,y,'.')

        set(gcf,'Renderer','zbuffer')
        print -dpdf -r300 testpdf_zb

        set(gcf,'Renderer','painters')
        print -dpdf -r300 testpdf_pa

        set(gcf,'Renderer','opengl')
        print -dpdf -r300 testpdf_op

    The actual figure is much more complex, with several axes and different types of plots. Is there a way to rasterize the figure but keep text labels as vectors? Another problem with OpenGL is that it does not work in terminal mode (-nosplash -nodesktop) under Mac OS X; it looks like OpenGL is not supported there, and I have to use terminal mode for automation. The MATLAB version I run is 2007b, on Mac OS X Server 10.4.
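    One workaround to experiment with: rasterize only the data area with getframe, then rebuild a lightweight figure around the bitmap so that labels added afterwards stay vectors under the painters renderer. A rough sketch for a single-axes figure; axes geometry and tick labels need manual care, so this is an idea to test, not a polished solution:

        plot(x, y, '.')
        F = getframe(gca);              % bitmap of the data area only
        clf
        image(F.cdata)                  % re-insert the plot as a raster layer
        axis off
        title('My title')               % text added now is printed as vectors
        xlabel('x'), ylabel('sin(x)+noise')
        set(gcf, 'Renderer', 'painters')
        print -dpdf -r300 testpdf_hybrid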

    Read the article

  • What's a good Java API for creating Word documents?

    - by Bill James
    I have a new app I'll be working on where I have to generate a Word document that contains tables, graphs, a table of contents and text. What's a good API to use for this? How sure are you that it supports graphs, ToCs, and tables? What are some hidden gotchas in using them? Some clarifications:

    1. I can't output a PDF; they want a Word doc.
    2. They're using MS Word 2003 (or 2007), not OpenOffice.
    3. The application is running on a *nix app server.
    4. It'd be nice if I could start with a template doc and just fill in some spaces with tables, graphs, etc.

    Thanks for the help.

    Edit: Several good answers below, each with their own faults as far as my current situation goes. It's hard to pick a "final answer" from them, so I think I'll leave it open and hope for better solutions to be created.

    Edit: The OpenOffice UNO project does seem to be closest to what I asked for. While POI is certainly more mainstream, it's too immature for what I want.
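    For scale, this is what the POI route looks like for text plus a table (the newer XWPF half of POI, producing .docx; assumes poi-ooxml on the classpath). Graphs and a real ToC are exactly where it gets thin, so treat this as a capability sketch rather than a recommendation:

        import java.io.FileOutputStream;
        import org.apache.poi.xwpf.usermodel.*;

        public class WordDemo {
            public static void main(String[] args) throws Exception {
                XWPFDocument doc = new XWPFDocument();

                XWPFParagraph p = doc.createParagraph();
                p.createRun().setText("Quarterly report");

                XWPFTable table = doc.createTable(2, 2);
                table.getRow(0).getCell(0).setText("Region");
                table.getRow(0).getCell(1).setText("Sales");
                table.getRow(1).getCell(0).setText("West");
                table.getRow(1).getCell(1).setText("1,200");

                FileOutputStream out = new FileOutputStream("report.docx");
                doc.write(out);
                out.close();
            }
        }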

    Read the article

  • PHP Error handling in WordPress plugin

    - by AaronM
    I am a newbie to both PHP and WordPress (but do OK in C#), and am struggling to understand the error handling in a custom plugin I am attempting to write. The basics of the plugin: query an existing MSSQL database (note it is not the standard MySQL db) and return the rows to the screen. This was working well, but the hosting provider has taken my database offline, which led me to the error handling problem (which I thought I had covered). The following code fails to connect to the database (as expected), but puts an error onto the screen and stops the page processing; it does not even output the 'or die' error text.

    Question: How can I just output a simple "Cannot load data" message and continue on normally?

        function generateData() {
            global $post;
            if ("$post->post_title" == "Home") {
                try {
                    $myServer = "<servername>";
                    $myUser   = "<username>";
                    $myPass   = "<password>";
                    $myDB     = "<dbName>";

                    // connection to the database
                    $dbhandle = mssql_connect($myServer, $myUser, $myPass)
                        or die("Couldn't open database $myDB");

                    // ... query processing here ...
                } catch (Exception $e) {
                    echo "Cannot load data<br />";
                }
            }
            return $content;
        }

    Error being generated (line 31 is the $dbhandle = mssql_connect... line):

        Warning: mssql_connect() [function.mssql-connect]: Unable to connect to server: <servername> in <file path> on line 31
        Fatal error: Maximum execution time of 30 seconds exceeded in <file path> on line 31
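    The underlying issue is that mssql_connect() raises a PHP warning rather than throwing, so the try/catch never fires, and the 'or die' is never reached because the connection attempt blocks until the 30-second execution limit kills the page. A sketch of the same call with the warning suppressed and the return value tested instead:

        $dbhandle = @mssql_connect($myServer, $myUser, $myPass);
        if ($dbhandle === false) {
            echo "Cannot load data<br />";
            return $content;
        }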

    Read the article

  • How to get rid of GUI access from a shared library

    - by Inso Reiges
    Hello. In my project I have a shared library with cross-platform code that provides a very convenient abstraction to a number of its clients. To be more specific, this library provides data access to encrypted files generated by the main application on a number of platforms. There is a great deal of complicated code in there implementing cryptographic protocols; as such it is very error-prone and should be shared as much as possible across clients and platforms. However, parsing all this encrypted material requires asking the user for a number of different secrets once in a while. A secret can be a password, a number of shared passwords, or a public key file, and this list is a hot target for extension in the future. I can't really ask the user for any of these secrets beforehand from the main application, because I don't know what I need to ask for until I start working with the encrypted data directly in the library code. So I would have to create dialogs and call them from the library code. However, I really see this as a bad idea, because (among other things) a Windows service might use the library, and services can't have GUI access. The question is: are there any known ways or patterns to get rid of the GUI calls that would suit my case? Thank you.
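    The usual pattern here is dependency inversion: the library defines an abstract secret-provider interface and calls it whenever a secret is needed, while each client injects an implementation appropriate to its environment (GUI dialogs, service configuration, a key store). A C++ sketch under that assumption:

        #include <string>
        #include <vector>

        enum class SecretKind { Password, SharedPasswords, PublicKeyFile };

        class SecretProvider {
        public:
            virtual ~SecretProvider() = default;
            // return false if the secret cannot be supplied (e.g. headless service)
            virtual bool getSecret(SecretKind kind,
                                   const std::string& prompt,
                                   std::vector<std::string>& out) = 0;
        };

        class EncryptedFileReader {
        public:
            explicit EncryptedFileReader(SecretProvider& provider)
                : provider_(provider) {}
            // parsing code calls provider_.getSecret(...) mid-stream,
            // so the library itself never touches any GUI
        private:
            SecretProvider& provider_;
        };

    Extending the secret list then means adding an enum value (or a richer request type) rather than new dialogs inside the library.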

    Read the article

  • PHP Encrypt/Decrypt with TripleDes, PKCS7, and ECB

    - by Brandon Green
    I've got my encryption function working properly, but I cannot figure out how to get the decrypt function to give proper output. Here is my encrypt function:

        function Encrypt($data, $secret) {
            // Generate a key from a hash
            $key = md5(utf8_encode($secret), true);

            // Take first 8 bytes of $key and append them to the end of $key.
            $key .= substr($key, 0, 8);

            // Pad for PKCS7
            $blockSize = mcrypt_get_block_size('tripledes', 'ecb');
            $len = strlen($data);
            $pad = $blockSize - ($len % $blockSize);
            $data .= str_repeat(chr($pad), $pad);

            // Encrypt data
            $encData = mcrypt_encrypt('tripledes', $key, $data, 'ecb');
            return base64_encode($encData);
        }

    Here is my decrypt function:

        function Decrypt($data, $secret) {
            $text = base64_decode($data);
            $data = mcrypt_decrypt('tripledes', $secret, $text, 'ecb');
            $block = mcrypt_get_block_size('tripledes', 'ecb');
            $pad = ord($data[($len = strlen($data)) - 1]);
            return substr($data, 0, strlen($data) - $pad);
        }

    Right now I am using a key of "test" and trying to encrypt "1234567". I get the base64 output I'm looking for from encryption, but when I go to decrypt I get a blank response. I'm not very well versed in encryption/decryption, so any help is much appreciated!
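    The immediate mismatch is in the key handling: Encrypt derives a 24-byte key from md5($secret), but Decrypt hands the raw $secret straight to mcrypt_decrypt(). A sketch of a decrypt that mirrors the derivation and strips the PKCS7 padding:

        function Decrypt($data, $secret) {
            // same key derivation as Encrypt: 16-byte md5 + its first 8 bytes
            $key = md5(utf8_encode($secret), true);
            $key .= substr($key, 0, 8);

            $text  = base64_decode($data);
            $plain = mcrypt_decrypt('tripledes', $key, $text, 'ecb');

            // PKCS7: the last byte gives the pad length
            $pad = ord($plain[strlen($plain) - 1]);
            return substr($plain, 0, strlen($plain) - $pad);
        }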

    Read the article

  • How to do a javascript redirection to a ClickOnce deployment URL?

    - by jerem
    I have a ClickOnce application used to view documents on a website. When connected, the user sees a list of documents as links of the form http://server/myapp.application?document=docname. This worked fine until I had to integrate the website's authentication/security system into my application. The website uses a ticketing system to grant access to its users: a ticket is generated by a web application and must be added to the deployment URL query string, and at application startup I then check, with another request to the web application, that the ticket given in the query string is valid. So the deployment URL becomes something like http://server/myapp.application?document=docname&ticket=ticketnumber. The problem is that the ticket is valid for only 10 seconds, so I have to fetch it only after the user has clicked a link. My first try was to have some JavaScript request the ticket, build the proper deployment URL, and then redirect the user to that URL with "window.location = deploymentUrl;". It works fine in Firefox, but IE does not prompt the user for installation. I guess it is a ClickOnce security constraint, but I am able to do the redirect when testing on localhost, so I hope there is a workaround. I have also added the server to the "trusted sites" list in the IE options. Is it possible to get this working in IE? What are my other options?
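    For reference, the client-side flow described above, sketched; GetTicket.ashx is a hypothetical endpoint that returns a fresh ticket, and this is the version that works in Firefox, so the IE prompt question stands:

        function launchDocument(docName) {
            var xhr = new XMLHttpRequest();
            xhr.open('GET', '/GetTicket.ashx?document=' + encodeURIComponent(docName), true);
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4 && xhr.status === 200) {
                    // ticket is only valid ~10s, so redirect immediately
                    window.location = 'http://server/myapp.application?document=' +
                        encodeURIComponent(docName) + '&ticket=' + xhr.responseText;
                }
            };
            xhr.send();
        }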

    Read the article

  • SubSonic 3 screws up selecteditem?

    - by SteveCav
    If you have a moment, please try this:

    1. Download SubSonic 3.
    2. Start a new project and add SubSonic's ActiveRecord templates.
    3. Point it at any SQL Server DB and generate the classes.
    4. Add a WPF project.
    5. Create a window and add a combobox or listbox.
    6. Set the ItemsSource from the SubSonic DAL, and format it how you wish.
    7. Add a button that shows some value from the SelectedItem (using MessageBox, Console, whatever).
    8. Run the project.
    9. Click on the third item in the list.
    10. Click on the second item in the list.
    11. Click the button.

    When I do that, the button gives me the value of the THIRD item, not the second. In other words, once SelectedItem is set the first time it stays set, no matter which item is subsequently highlighted on the screen. This happens with whatever control I use (combobox, listbox, even datagrid), and it ONLY happens with SubSonic ActiveRecord objects. If I write my own POCOs with identical properties and bind a list of them instead, the controls behave as expected. Does this happen to you? Any ideas?
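    One thing worth ruling out before blaming the binding: WPF matches SelectedItem against the items via Object.Equals, so if the generated ActiveRecord classes override Equals in a way that makes distinct rows compare equal, selection sticks to the first match, which is exactly the symptom here. A quick diagnostic, where Product stands in for any generated class:

        // requires System.Linq
        var items = listBox.ItemsSource.Cast<Product>().ToList();
        Console.WriteLine(items[1].Equals(items[2])); // true would explain the behavior
        Console.WriteLine(items.IndexOf(items[2]));   // 1 instead of 2 likewise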

    Read the article
