Search Results

Search found 24514 results on 981 pages for 'connection manager'.

  • Webdav stops working after a few seconds

    - by user214885
    With Ubuntu 13.10, I connect to two different ownCloud installations and can browse for only a few seconds before the connection freezes and stops working. When I try to re-establish the connection, it fails without even asking for the password (Ubuntu was told to forget the password). I verified the WebDAV connection through Firefox on two computers and through ES File Explorer on Android, so I know this isn't a WebDAV problem; I just don't know what is happening in Ubuntu that stops it from being able to read the connection.

  • IIS7 FTP Setup - An error occured during the authentication process. 530 End Login failed

    - by robmzd
    I'm having a problem very similar to "IIS 7.5 FTP IIS Manager Users Login Fail (530)" on Windows Server 2008 R2 Standard. I have created an FTP site and an IIS Manager user but am having trouble logging in. I could really do with getting this working with the IIS Manager user rather than by creating a new system user, since I'm fairly restricted with those accounts. Here is the output when connecting locally through the command prompt:

        C:\Windows\system32>ftp localhost
        Connected to MYSERVER.
        220 Microsoft FTP Service
        User (MYSERVER:(none)): MyFtpLogin
        331 Password required for MyFtpLogin.
        Password: ***
        530-User cannot log in. Win32 error: Logon failure: unknown user name or bad password.
        Error details: An error occured during the authentication process.
        530 End Login failed.

    I have followed the guides "Configure FTP with IIS Manager Authentication in IIS 7" and "Adding FTP Publishing to a Web Site in IIS 7". Things I have done and checked:

    - The FTP Service is installed (along with FTP Extensibility).
    - Local Service and Network Service have been given access to the site folder.
    - Permission has been given to the config files.
    - Granted read/write permissions to the FTP root folder.
    - The Management Service is installed and running.
    - "Enable remote connections" is ticked with "Windows credentials or IIS Manager credentials" selected.
    - The IIS Manager user has been added to the server (root connection in the IIS connections branch).
    - The new FTP site has been added.
    - IIS Manager Authentication has been added to the FTP authentication providers.
    - The IIS Manager user has been added to the IIS Manager Permissions list for the site.
    - Added read/write permissions for the user in the FTP Authorization Rules.

    Here's the section of the applicationHost.config file associated with the FTP site:

        <site name="MySite" id="8">
          <application path="/" applicationPool="MyAppPool">
            <virtualDirectory path="/" physicalPath="D:\Websites\MySite" />
          </application>
          <bindings>
            <binding protocol="http" bindingInformation="*:80:www.mydomain.co.uk" />
            <binding protocol="ftp" bindingInformation="*:21:www.mydomain.co.uk" />
          </bindings>
          <ftpServer>
            <security>
              <ssl controlChannelPolicy="SslAllow" dataChannelPolicy="SslAllow" />
              <authentication>
                <basicAuthentication enabled="true" />
                <customAuthentication>
                  <providers>
                    <add name="IisManagerAuth" enabled="true" />
                  </providers>
                </customAuthentication>
              </authentication>
            </security>
          </ftpServer>
        </site>
        ...
        <location path="MySite">
          <system.ftpServer>
            <security>
              <authorization>
                <add accessType="Allow" users="MyFtpLogin" permissions="Read, Write" />
              </authorization>
            </security>
          </system.ftpServer>
        </location>

    If I connect to the site (not FTP) from my local IIS Manager using the same IIS Manager account details, it connects fine; I can browse files and change settings as I would locally (though I don't seem to have an option to upload files). Trying to connect via FTP, though, either through the browser or FileZilla etc., gives me:

        Status:   Resolving address of www.mydomain.co.uk
        Status:   Connecting to 123.456.12.123:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220 Microsoft FTP Service
        Command:  USER MyFtpLogin
        Response: 331 Password required for MyFtpLogin.
        Command:  PASS *********
        Response: 530 User cannot log in.
        Error:    Critical error
        Error:    Could not connect to server

    I have tried collecting ETW traces for FTP sessions. In the logs I get a FailBasicLogon followed by a FailCustomLogon, but no other info:

        FailBasicLogon    SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55} | ErrorCode=0x8007052E
        StartCustomLogon  SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55} | LogonProvider=IisManagerAuth
        StartCallProvider SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55} | provider=IisManagerAuth
        EndCallProvider   SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55}
        EndCustomLogon    SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55}
        FailCustomLogon   SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55} | ErrorCode=0x8007052E
        FailFtpCommand    SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55} | ReturnValue=0x8007052E | SubStatus=ERROR_DURING_AUTHENTICATION

    In the normal FTP logs I just get:

        2012-10-23 16:13:11 123.456.12.123 - 123.456.12.123 21 ControlChannelOpened - - 0 0 e2d4e935-fb31-4f2c-af79-78d75d47c18e -
        2012-10-23 16:13:11 123.456.12.123 - 123.456.12.123 21 USER MyFtpLogin 331 0 0 e2d4e935-fb31-4f2c-af79-78d75d47c18e -
        2012-10-23 16:13:11 123.456.12.123 - 123.456.12.123 21 PASS *** 530 1326 41 e2d4e935-fb31-4f2c-af79-78d75d47c18e -
        2012-10-23 16:13:11 123.456.12.123 - 123.456.12.123 21 ControlChannelClosed - - 0 0 e2d4e935-fb31-4f2c-af79-78d75d47c18e -

    If anyone has any ideas I would be very grateful to hear them. Many thanks.

  • "Unable to read data from the transport connection: net_io_connectionclosed." - Windows Vista Busine

    - by John DaCosta
    Unable to test sending email from .NET code in Windows Vista Business. I am writing code which I will migrate to an SSIS package once it is proven. The code is to send an error message via email to a list of recipients. The code is below; however, I am getting an exception when I execute it. I created a simple class to do the mailing... the design could be better; I am testing functionality before implementing more robust functionality, methods, etc.

        using System;
        using System.Collections.Generic;
        using System.Net.Mail;

        namespace LabDemos
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Mailer m = new Mailer();
                    m.test();
                }
            }

            class Mailer
            {
                List<string> _to = new List<string>();
                List<string> _cc = new List<string>();
                List<string> _bcc = new List<string>();
                String _msgFrom = "";
                String _msgSubject = "";
                String _msgBody = "";

                public void test()
                {
                    //create the mail message
                    MailMessage mail = new MailMessage();

                    //set the addresses
                    mail.From = new MailAddress("[email protected]");
                    mail.To.Add("[email protected]"); // recipient address elided in the original post

                    //set the content
                    mail.Subject = "This is an email";
                    mail.Body = "this is a sample body";
                    mail.IsBodyHtml = false;

                    //send the message
                    SmtpClient smtp = new SmtpClient();
                    smtp.Host = "emailservername";
                    smtp.Port = 25;
                    smtp.UseDefaultCredentials = true;
                    smtp.Send(mail);
                }
            }
        }

    The outer exception:

        System.Net.Mail.SmtpException was unhandled
        Message="Failure sending mail."
        Source="System"
        StackTrace:
           at System.Net.Mail.SmtpClient.Send(MailMessage message)
           at LabDemos.Mailer.test() in C:\Users\username\Documents\Visual Studio 2008\Projects\LabDemos\LabDemos\Mailer.cs:line 40
           at LabDemos.Program.Main(String[] args) in C:\Users\username\Documents\Visual Studio 2008\Projects\LabDemos\LabDemos\Program.cs:line 48
           at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
           at System.AppDomain.nExecuteAssembly(Assembly assembly, String[] args)
           at System.Runtime.Hosting.ManifestRunner.Run(Boolean checkAptModel)
           at System.Runtime.Hosting.ManifestRunner.ExecuteAsAssembly()
           at System.Runtime.Hosting.ApplicationActivator.CreateInstance(ActivationContext activationContext, String[] activationCustomData)
           at System.Runtime.Hosting.ApplicationActivator.CreateInstance(ActivationContext activationContext)
           at System.Activator.CreateInstance(ActivationContext activationContext)
           at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssemblyDebugInZone()
           at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
           at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Threading.ThreadHelper.ThreadStart()

    Its inner exception:

        System.IO.IOException
        Message="Unable to read data from the transport connection: net_io_connectionclosed."
        Source="System"
        StackTrace:
           at System.Net.Mail.SmtpReplyReaderFactory.ProcessRead(Byte[] buffer, Int32 offset, Int32 read, Boolean readLine)
           at System.Net.Mail.SmtpReplyReaderFactory.ReadLines(SmtpReplyReader caller, Boolean oneLine)
           at System.Net.Mail.SmtpReplyReaderFactory.ReadLine(SmtpReplyReader caller)
           at System.Net.Mail.SmtpConnection.GetConnection(String host, Int32 port)
           at System.Net.Mail.SmtpTransport.GetConnection(String host, Int32 port)
           at System.Net.Mail.SmtpClient.GetConnection()
           at System.Net.Mail.SmtpClient.Send(MailMessage message)

  • EM12c Release 4: New Compliance features including DB STIG Standard

    - by DaveWolf
    Enterprise Manager's compliance framework is a powerful and robust feature that gives users the ability to continuously validate their target configurations against a specified standard. Enterprise Manager's compliance library is filled with a wide variety of standards based on Oracle's recommendations, best practices and security guidelines. These standards can be easily associated to a target to generate a report showing its degree of conformance to that standard. (To get an overview of database compliance management in Enterprise Manager, see this screenwatch.) Starting with release 12.1.0.4 of Enterprise Manager, the compliance library contains a new standard based on the US Defense Information Systems Agency (DISA) Security Technical Implementation Guide (STIG) for Oracle Database 11g. According to the DISA website, "The STIGs contain technical guidance to 'lock down' information systems/software that might otherwise be vulnerable to a malicious computer attack." In essence, a STIG is a technical checklist an administrator can follow to secure a system or software. Many US government entities are required to follow these standards; many non-US government entities and commercial companies also base their standards directly or partially on these STIGs. You can find more information about the Oracle Database and other STIGs on the DISA website.

    The Oracle Database 11g STIG consists of two categories of checks: installation and instance. Installation checks focus primarily on the security of the Oracle Home, while instance checks focus on the configuration of the running database instance itself. If you view the STIG compliance standard in Enterprise Manager, you will see the rules organized into folders corresponding to these categories. The rule names contain a rule ID (DG0020, for example) which maps directly to the check name in the STIG checklist, along with a helpful brief description. The actual description field contains the text from the STIG documentation to aid in understanding the purpose of the check. All of the rules have also been documented in the Oracle Database Compliance Standards reference documentation.
    In order to use this standard, both the OMS and agent must be at version 12.1.0.4, as it takes advantage of several features new in this release, including:

    - Agent-Side Compliance Rules
    - Manual Compliance Rules
    - Violation Suppression
    - Additional BI Publisher Compliance Reports

    Agent-Side Compliance Rules

    Agent-side compliance rules are essentially the result of a tighter integration between Configuration Extensions and Compliance Rules. If you ever created custom compliance content in past versions of Enterprise Manager, you likely used Configuration Extensions to collect additional information into the EM repository so it could be used in a repository compliance rule. This process, although powerful, made it easy to get the SQL wrong in the rule creation wizard. With agent-side rules, the user only needs to choose the Configuration Extension/Alias combination, and that's it; Enterprise Manager will do the rest for you. This tighter integration also means their lifecycle is managed together. When you associate an agent-side compliance standard to a target, the required Configuration Extensions are deployed automatically for you. The opposite is also true: when you unassociate the compliance standard, the Configuration Extensions are undeployed as well.
    The Oracle Database STIG compliance standard is implemented as an agent-side standard, which is why you can simply associate the standard to your database targets without first deploying the associated Configuration Extensions. You can learn more about using agent-side compliance rules in the screenwatch Using Agent-Side Compliance Rules on Enterprise Manager's Lifecycle Management page on OTN.

    Manual Compliance Rules

    There are many checks in the Oracle Database STIG, as well as in other common standards, which simply cannot be automated. These can be as simple as "Ensure the datacenter entrance is secured." or as complex as Oracle Database STIG rule DG0186, "The database should not be directly accessible from public or unauthorized networks." Such checks require a human to perform them and attest to their successful completion. Enterprise Manager now supports these types of checks in manual rules. When first associated to a target, each manual rule generates a single violation. These violations must be manually cleared by a user, who is in essence attesting to the check's successful completion. The user can permanently clear the violation or give a future date on which the violation will be regenerated. Setting a future date is useful when policy dictates a periodic re-validation of conformance, wherein the user will have to re-perform the check. The optional reason field gives the user an opportunity to provide details of the check results.
    Violation Suppression

    There are situations that require permanently or temporarily suppressing a legitimate violation or finding, such as approved exceptions and grace periods. Enterprise Manager now supports temporarily or permanently suppressing a violation. Unlike clearing a manual rule violation, suppression simply removes the violation from the compliance results UI, and in turn its negative impact on the score. The violation still remains in the EM repository and can be accounted for in compliance reports. Temporarily suppressing a violation can give users a grace period in which to address an issue; if the issue is not addressed within the specified period, the violation reappears in the results automatically. Again, the user may enter a reason for the suppression, which is permanently saved with the event along with the suppressing user ID.

    Additional BI Publisher Compliance Reports

    As I am sure you have learned by now, BI Publisher now ships with, and is integrated into, Enterprise Manager 12.1.0.4. This means users can take full advantage of the powerful reporting engine by using the Oracle-provided reports or building their own. There are many new compliance-related reports available in 12.1.0.4 covering all aspects, including association status and the library, as well as summary and detailed results reports.
    [Figures: "10 New Compliance Reports" and a Compliance Summary Report example showing STIG results]

    Conclusion

    Together with the Oracle Database 11g STIG compliance standard, these features provide a complete solution for easily auditing and reporting the security posture of your Oracle Databases against this well-known benchmark. You can view an overview presentation and demo in the screenwatch Using the STIG Compliance Standard on Enterprise Manager's Lifecycle Management page on OTN.

    Additional EM12c Compliance Management Information:

    - Compliance Management - Overview (Presentation)
    - Compliance Management - Custom Compliance on Default Data (How To)
    - Compliance Management - Custom Compliance using SQL Configuration Extension (How To)
    - Compliance Management - Custom Compliance using Command Configuration Extension (How To)

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services is available in Node.js through the Windows Azure Node.js SDK, a module available through NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features, hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used by a WAWS node application.

    We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here.

    The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool, a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here.

    Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this:
        var express = require("express");
        var sql = require("node-sqlserver");

        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
        var port = 80;

        var app = express();

        app.configure(function () {
            app.use(express.bodyParser());
        });

        app.get("/", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/text/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/sproc/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.post("/new", function (req, res) {
            var key = req.body.key;
            var culture = req.body.culture;
            var val = req.body.val;

            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.send(200, "Inserted Successful");
                        }
                    });
                }
            });
        });

        app.listen(port);

    Now let's create a new function that copies the records from WASD to the table service:

    1. Delete the table named "resource".
    2. Create a new table named "resource". (These two steps ensure that we have an empty table.)
    3. Load all records from the "resource" table in WASD.
    4. For each record loaded from WASD, insert it into the table one by one.
    5. Notify the user when finished.

    In order to use the table service we need the storage account name and key, which can be found in the developer portal: just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS, we need to import the azure module. I also created another variable to store the table name. In order to work with the table service I need to create a storage client for the table service.
    This is very similar to the Windows Azure SDK for .NET. As in the code below, I created a new variable named "client" and used "createTableService", specifying my storage account name and key.

        var azure = require("azure");
        var storageAccountName = "synctile";
        var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
        var tableName = "resource";
        var client = azure.createTableService(storageAccountName, storageAccountKey);

    Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                            }
                        }
                    });
                }
            });
        });

    When we have successfully loaded all records we can start to transform them into the table service. First I need to recreate the table in the table service, which can be done by deleting and creating the table through the table client I had just created previously.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            // transform the records
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    As you can see, the azure SDK provides its methods in the callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which is performed when the table has been deleted or the operation has failed. Underneath, the azure module performs the table deletion asynchronously in the POSIX async thread pool, and once it's done the callback function is invoked. This is the reason we need to nest the table creation code inside the deletion callback; if we placed the table creation code after the deletion code, they would be invoked in parallel.

    Next, for each record in WASD I create an entity and insert it into the table service. Finally I send the response to the browser. Can you find the bug in the code below? I will describe it later in this post.
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
    The correct logic should be: when all entities have been copied to the table service with no error, send the response to the browser; otherwise, send an error message. To do so I need to import another module named "async", which helps us coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code that inserts the table entities. The first argument of "forEach" is the array to be processed. The second argument is the operation to perform for each item in the array. The third argument is invoked when all items have been processed or any error has occurred; here we can send our response to the browser.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            async.forEach(results.rows,
                                                // transform the records
                                                function (row, callback) {
                                                    var entity = {
                                                        "PartitionKey": row[1],
                                                        "RowKey": row[0],
                                                        "Value": row[2]
                                                    };
                                                    client.insertEntity(tableName, entity, function (error) {
                                                        if (error) {
                                                            callback(error);
                                                        }
                                                        else {
                                                            console.log("entity inserted.");
                                                            callback(null);
                                                        }
                                                    });
                                                },
                                                // send response
                                                function (error) {
                                                    if (error) {
                                                        error["target"] = "insertEntity";
                                                        res.send(500, error);
                                                    }
                                                    else {
                                                        console.log("all done");
                                                        res.send(200, "All done!");
                                                    }
                                                }
                                            );
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    Run it locally and now we can see the response is sent after all entities have been inserted.

    Querying entities against the table service is simple as well. Just use the "queryEntity" method of the table service client, providing the partition key and row key. (We can also provide more complex query criteria, as in the code here.) In the code below I query an entity by partition key and row key, and return the proper localization value in the response.

        app.get("/was/:key/:culture", function (req, res) {
            var key = req.params.key;
            var culture = req.params.culture;
            client.queryEntity(tableName, culture, key, function (error, entity) {
                if (error) {
                    res.send(500, error);
                }
                else {
                    res.json(entity);
                }
            });
        });

    I then tested it on the local emulator. Finally, if we want to publish this application to the cloud, we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

    Consume Service Runtime

    As I mentioned above, before we published our application to the cloud we needed to change the connection string and account information in our code.
    But if you have played with WACS, you should know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can have these values defined in the CSCFG and CSDEF files, and the runtime will retrieve the proper values for us. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for the cloud and for local use. We can also use the endpoint defined in the role environment in our Node.js application.

    In the Node.js SDK we can get an object from "azure.RoleEnvironment" which provides the functionality to retrieve configuration settings, endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve the values and initialize the table client.

        var connectionString = "";
        var storageAccountName = "";
        var storageAccountKey = "";
        var tableName = "";
        var client;

        azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
            if (error) {
                console.log("ERROR: getConfigurationSettings");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(settings));
                connectionString = settings["SqlConnectionString"];
                storageAccountName = settings["StorageAccountName"];
                storageAccountKey = settings["StorageAccountKey"];
                tableName = settings["TableName"];

                console.log("connectionString = %s", connectionString);
                console.log("storageAccountName = %s", storageAccountName);
                console.log("storageAccountKey = %s", storageAccountKey);
                console.log("tableName = %s", tableName);

                client = azure.createTableService(storageAccountName, storageAccountKey);
            }
        });

    This way we don't need to amend the code for the configuration differences between the local and cloud environments, since the service runtime takes care of them. At the end of the code we also listen on the port retrieved from the SDK:

        azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
            if (error) {
                console.log("ERROR: getCurrentRoleInstance");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(instance));
                if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
                    var endpoint = instance["endpoints"]["nodejs"];
                    app.listen(endpoint["port"]);
                }
                else {
                    app.listen(8080);
                }
            }
        });

    But if we tested the application right now, we would find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance, so that the instance can connect to the runtime and retrieve values. But in this case, since the entry point is the worker role class and Node.js is launched inside the role, the named pipe is established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file in the Azure project and add a new element named Runtime, then add an element named EntryPoint which specifies the Node.js command line. With that, the Node.js application has the connection to the service runtime and is able to read the configuration. Starting Node.js in the local emulator, we can see it retrieved the connections and storage account for local use.
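    For reference, a minimal sketch of what that Runtime/EntryPoint element might look like in ServiceDefinition.csdef; the role name and the "index.js" argument are assumptions based on this post's project layout:

        <WorkerRole name="WorkerRole1">
          <Runtime>
            <EntryPoint>
              <!-- Launch node.exe directly so the runtime's named pipe targets the Node process -->
              <ProgramEntryPoint commandLine="node.exe index.js" setReadyOnProcessStart="true" />
            </EntryPoint>
          </Runtime>
          ...
        </WorkerRole>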
    And if we publish our application to Azure, it works with WASD and the storage service through the cloud configuration.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve configuration settings and endpoint information, and that in order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point.

    I have used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application, and how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform, but since it's simple, easy to learn and deploy, and utilizes a single-threaded, non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it's all the more useful on a cloud computing platform.

    Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" doesn't yet support SQL parameters, while "azure" does support using a storage connection string to create the storage client. Microsoft is working on making them easier to use and on adding more features and functionality.

    PS: you can download the source code here. You can download the source code of my "Copy all always" tool here.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • How can I force mod_perl to only allow one process per connection?

    - by Charles Ma
    I have a Perl CGI script that's fairly resource-intensive (it takes about 2 seconds to finish). This is fine as long as at most 4 or 5 of them are running at the same time, and that's usually the case. The problem is that when a user clicks a link that calls this script, a new process is spawned to handle that connection request, so if a user clicks many times (if they're impatient), the server gets overloaded with new processes, most of them redundant. How can I ensure that only one instance of this script is running per host? This is an old system that I'm maintaining which uses an old framework for the frontend, and I would like to avoid using JavaScript to disable the button client-side if possible. Converting this to FastCGI Perl is out of the question as well, again because this is an old system and adding FastCGI to Apache might break a lot of other things that it runs.

  • Would a Socket Connection Outperform an Intervaled Database Sweep and Requests?

    - by Jascha
    I'm building a small chat application to add to an existing framework. There will only be 20-50 users MAX at any one time. I was wondering if I could get away with updating a cache file containing (semi) live chat data for whichever users happen to be chatting, just by performing timed queries and regular AJAX refreshes for new data, as opposed to learning how to open and maintain a socket connection. I'm sure there are existing chat plug-ins out there, but I just had a hell of a time installing one, and I could see building the whole damn thing taking just as much time as plugging one in. Am I off to a bad start? Thanks in advance -J (P.S. this is a semi-closed network behind a PHP login, so security isn't a great concern.)

  • How to configure MySQL connection properties with Spring, Hibernate 3.3 and c3p0?

    - by sfussenegger
    I am currently in the process of upgrading an application from Hibernate 3.2 to Hibernate 3.3. I thought I'd stick with the default connection pool (Hibernate changed its default from Commons DBCP to c3p0), as I don't have any good reason to choose a non-default pool - at least none except having used DBCP before. The upgrade went pretty much without any problems so far. The only thing I can't get to work is passing properties to the underlying MySQL JDBC4Connection. Up to now, I used DBCP's BasicDataSource.addConnectionProperty(String, String) to pass properties (useUnicode=true, characterEncoding=UTF-8, characterSetResults=UTF-8, zeroDateTimeBehavior=convertToNull). However, I can't find any way to do the same with c3p0 other than including them in the JDBC URL. (That's something I'd like to avoid, as I want to keep the URL configurable without forcing users to include those parameters.) So far, I've tried to use a ConnectionCustomizer without success. Any other suggestions?
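    If memory serves, Hibernate forwards any property under the hibernate.connection.* prefix that it doesn't consume itself to the JDBC driver when opening connections, and this also applies when the c3p0 provider is in use. A sketch of what that might look like in hibernate.properties - untested here, so verify against your Hibernate 3.3 setup:

        hibernate.connection.useUnicode=true
        hibernate.connection.characterEncoding=UTF-8
        hibernate.connection.characterSetResults=UTF-8
        hibernate.connection.zeroDateTimeBehavior=convertToNull

    This keeps the parameters out of the JDBC URL, so the URL itself stays configurable.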

  • How to detect open database connection with Hibernate / JPA?

    - by John K
    I am learning JPA with Hibernate in a Java SE 6 project. I'd simply like to be able to detect whether the connection between Hibernate and my database (MS SQL Server) is open. For example, I'd like to be able to detect this, log it, and try reconnecting again in 60 seconds. This is what I thought would work, but isOpen() doesn't appear to be what I want (it is always true):

        EntityManagerFactory emf = Persistence.createEntityManagerFactory("rcc", props);
        if (emf != null && emf.isOpen()) {
            EntityManager em = emf.createEntityManager();
            if (em == null || !em.isOpen())
                // error connecting to database
            else
                ...

    This seems to me to be a simple problem, but I cannot find an answer!

  • How can I use my own connection class with a strongly typed dataset?

    - by Maslow
    I have designed a class with SqlClient.SqlCommand wrappers to implement functionality such as automatic retries on timeout, async calls (thread safety), error logging, and some SQL Server functions like WhoAmI. I've used some strongly typed datasets, mainly for display purposes only, but I'd like them to have the same database functionality that I get with my class. Is there an interface I can implement, or a way to hook my command/connection class into the dataset at design time or runtime? Or would I need to write a wrapper for the dataset to implement these types of functions? If that's the only option, could it be made generic, to wrap anything that inherits from DataSet?
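    As far as I know the dataset designer offers no interface for swapping the connection, but the generated TableAdapters are partial classes, so one common workaround is to extend them and inject the connection your wrapper class manages. A rough sketch, where MyDataSet and CustomersTableAdapter are hypothetical names standing in for your generated types:

        using System.Data.SqlClient;

        namespace MyDataSetTableAdapters
        {
            // Second half of the designer-generated partial class; nothing
            // in the generated file itself is touched.
            public partial class CustomersTableAdapter
            {
                // Route the adapter through a connection owned by the wrapper class.
                public void UseManagedConnection(SqlConnection connection)
                {
                    this.Connection = connection;

                    // Point every generated command at the same connection so the
                    // wrapper's retry/logging logic sees all traffic.
                    foreach (SqlCommand command in this.CommandCollection)
                        command.Connection = connection;

                    if (this.Adapter.InsertCommand != null) this.Adapter.InsertCommand.Connection = connection;
                    if (this.Adapter.UpdateCommand != null) this.Adapter.UpdateCommand.Connection = connection;
                    if (this.Adapter.DeleteCommand != null) this.Adapter.DeleteCommand.Connection = connection;
                }
            }
        }

    The retry-on-timeout behavior itself would still live in your wrapper, which opens the SqlConnection, hands it to the adapter, and re-runs the Fill/Update call on timeout.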

  • How to get reference to SqlConnection (or Connection string) in Castle ActiveRecord?

    - by VoimiX
    How can I get a reference to the current SqlConnection, or to the connection string in config? I found http://svn.castleproject.org:8080/svn/castle/trunk/ActiveRecord/Castle.ActiveRecord.Tests/DifferentDatabaseScopeTestCase.cs with this code:

        private string GetSqlConnection()
        {
            IConfigurationSource config = GetConfigSource();
            IConfiguration db2 = config.GetConfiguration(typeof(ActiveRecordBase));
            string conn = string.Empty;
            foreach (IConfiguration child in db2.Children)
            {
                if (child.Name == "connection.connection_string")
                {
                    conn = child.Value;
                }
            }
            return conn;
        }

    But I can't work out where to find the "GetConfigSource" implementation. Is this a standard Castle helper function or not? I use these namespaces:

        using Castle.ActiveRecord;
        using NHibernate.Criterion;
        using NHibernate;
        using Castle.Core.Configuration;
        using Castle.ActiveRecord.Framework;
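    A note on that: GetConfigSource in the linked test case appears to be a helper of Castle's own test fixture, not a standard public API. A hedged sketch of two routes through the public API (member names recalled from memory of Castle ActiveRecord; verify against your version):

        using System;
        using System.Data;
        using Castle.ActiveRecord;
        using Castle.ActiveRecord.Framework;
        using Castle.ActiveRecord.Framework.Config;
        using NHibernate;

        static class ConnectionHelper
        {
            // Route 1: read the connection string from the <activerecord> config
            // section (assumes AR was initialized from app.config/web.config).
            public static string GetConnectionString()
            {
                IConfigurationSource source = ActiveRecordSectionHandler.Instance;
                var db = source.GetConfiguration(typeof(ActiveRecordBase));
                foreach (Castle.Core.Configuration.IConfiguration child in db.Children)
                    if (child.Name == "connection.connection_string")
                        return child.Value;
                return null;
            }

            // Route 2: borrow the live ADO.NET connection from an NHibernate
            // session; use it only within the session's lifetime.
            public static void WithLiveConnection(Action<IDbConnection> work)
            {
                ISessionFactoryHolder holder = ActiveRecordMediator.GetSessionFactoryHolder();
                ISession session = holder.CreateSession(typeof(ActiveRecordBase));
                try
                {
                    work(session.Connection);
                }
                finally
                {
                    holder.ReleaseSession(session);
                }
            }
        }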

  • Problem with crawling many URLs in .NET: server IP not pingable - maybe bandwidth or HTTP connection limit exceeded?

    - by Hamid
    Hi to all. I am developing a web crawling service (a multi-threaded Windows service). It works fine, but sometimes my server's network stops responding: I can't ping the server's IP from the internet, but I can ping the other network card (local IP) that has no internet access. After I open the server with Remote Desktop and stop the crawling service, I can ping it again. What's my problem - a bandwidth limit, a maximum connection limit exceeded, or something else? How can I prevent this issue? Note: when this problem occurs and I open a browser on the server to browse a web site, no website will open! Could you please help me? Thanks in advance.
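    One plausible culprit - an assumption, not a diagnosis - is outbound TCP connection and port exhaustion from many parallel HttpWebRequests, which can make the whole interface unresponsive until the process stops. Throttling connections and disposing responses promptly is the usual first step; a C# sketch:

        using System;
        using System.IO;
        using System.Net;

        static class CrawlerHttp
        {
            static CrawlerHttp()
            {
                // Cap simultaneous connections per host (the classic default is 2).
                ServicePointManager.DefaultConnectionLimit = 10;
            }

            public static string Fetch(string url)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Timeout = 15000; // fail fast rather than piling up sockets

                // Disposing the response returns the socket to the pool promptly.
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd();
                }
            }
        }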

  • Is the first persistence of an Entity Data Model in EF 4.0 slower due to the connection cost?

    - by Scott Davies
    Hi, I've got a console app that persists an object graph via Entity Framework 4.0. I loop through this and dump the execution times for each persistence. The first persistence is always the largest. Is this due to EF making the initial connection to the database and/or JIT'ing? Here's a sample of the output:

        Persisted graph in **3318** milliseconds.
        Persisted graph in 25 milliseconds.
        Persisted graph in 26 milliseconds.
        Persisted graph in 22 milliseconds.

    Thanks, Scott
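    Most of that first-call cost in EF 4.0 is usually one-time work - opening the first pooled connection, JIT compilation, and building the model and view metadata - rather than per-entity cost. One way to check is to pay that cost in an explicit warm-up before timing; a sketch, where MyEntities is a hypothetical name standing in for the generated ObjectContext:

        using System;
        using System.Diagnostics;

        class Program
        {
            static void Main()
            {
                var sw = Stopwatch.StartNew();

                // Forces metadata loading and the first pooled connection.
                using (var ctx = new MyEntities()) // hypothetical generated context
                {
                    ctx.Connection.Open();
                    ctx.Connection.Close();
                }

                Console.WriteLine("Warm-up took {0} ms", sw.ElapsedMilliseconds);
                // Subsequent persists should now clock in near the steady-state numbers.
            }
        }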

  • How can I keep gnu screen from becoming unresponsive after losing my SSH connection?

    - by Mikey
    I use a VPN tunnel to connect to my work network and then SSH to connect to my work PC running Cygwin. Once logged in, I can attach to a screen session and everything works great. Now, after a while, I walk away from my computer and sooner or later the VPN tunnel times out. The SSH connection on each end eventually times out as well, and at some point I come back to my computer to do some work. Theoretically, this should be a simple matter of restarting the VPN, reconnecting via SSH, and then running "screen -r -d". However, when the sshd daemon times out on the Cygwin PC, it apparently leaves the screen session in some kind of hung state. I can reproduce a similar hung state by clicking the close box on a Cygwin bash shell window while it's running a screen session. Is there any way to get the screen session to recover once this has happened, so that I don't lose anything?
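    Not a guaranteed fix, but two standard knobs usually help with this pattern (options recalled from the OpenSSH and screen man pages): client-side keepalives, so a dead tunnel is torn down promptly instead of leaving sshd and screen wedged, and a power-detach when reattaching:

        # ~/.ssh/config on the client: declare the link dead after ~2 minutes of silence
        Host work-pc
            ServerAliveInterval 30
            ServerAliveCountMax 4

        # When the old attachment is wedged, forcibly detach it and reattach:
        screen -D -r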

  • How to check for an active Internet Connection on iPhone SDK?

    - by Brock Woolf
    I would like to check to see if I have an Internet connection on the iPhone using the Cocoa Touch libraries. I came up with a way to do this using an NSURL. The way I did it seems a bit unreliable (because even Google could one day be down, and relying on a third party seems bad), and while I could check for a response from some other websites if Google didn't respond, it does seem wasteful and an unnecessary overhead for my application.

        - (BOOL)connectedToInternet
        {
            NSString *URLString = [NSString stringWithContentsOfURL:[NSURL URLWithString:@"http://www.google.com"]];
            return (URLString != NULL) ? YES : NO;
        }

    Is what I have done bad? (Not to mention that stringWithContentsOfURL is deprecated in 3.0.) And if so, what is a better way to accomplish this?

  • ASP.NET 3.5 Stateless Session Management and connection pooling?

    - by Norm
    I am designing an ASP.NET (3.5) web application that connects to a Rocket Software UniVerse database. I am in the planning stages right now and need some help being pointed in the right direction; I am brand new to ASP.NET and C#. I am shooting for a RESTful design and an MVC pattern. Rocket provides a .NET library called UniObjects.NET which handles everything for connecting to and retrieving information from the database. What would be the best way, in general, to log my users into the database and then use that session via connection pooling? I see that 3.5 has the ASP.NET Routing infrastructure, and that looks promising - am I on the right track with this? Also, does C# support decorators like Python and Java?

  • How to open a connection to a local network path protected by a password in a smart way? (With C#)

    - by lfx
    Hi, I'm developing a program which has to write some data to a file stored on a network computer that is protected by a password. Right now I'm doing it this way: open a connection with cmd, then write the data.

        static bool ConnectToSrv()
        {
            String myUser = "domain.local\\user";
            String myPass = "pwd";
            String cmdString = "net use \\\\otherPC\\folder\\ /user:" + myUser + " " + myPass;
            try
            {
                ManagementClass processClass = new ManagementClass("Win32_Process");
                object[] methodArgs = { cmdString, null, null, 0 };
                object result = processClass.InvokeMethod("Create", methodArgs);
                return true;
            }
            catch (System.Exception error)
            {
                return false;
            }
        }

        public void writeDate(string data)
        {
        }

    I believe there must be a better way - I mean the .NET way. Does anybody know how to do it? :) Thanks
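    For what it's worth, the usual .NET-flavored route is to P/Invoke WNetAddConnection2 instead of shelling out to net use. A sketch from memory of the Win32 signature; the share path and credentials are the ones from the question, while the file name is a placeholder:

        using System;
        using System.IO;
        using System.Runtime.InteropServices;

        static class NetworkShare
        {
            [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
            struct NETRESOURCE
            {
                public int dwScope, dwType, dwDisplayType, dwUsage;
                public string lpLocalName, lpRemoteName, lpComment, lpProvider;
            }

            [DllImport("mpr.dll", CharSet = CharSet.Auto)]
            static extern int WNetAddConnection2(ref NETRESOURCE netResource,
                                                 string password, string username, int flags);

            const int RESOURCETYPE_DISK = 1;

            public static void WriteData(string data)
            {
                var res = new NETRESOURCE
                {
                    dwType = RESOURCETYPE_DISK,
                    lpRemoteName = @"\\otherPC\folder"
                };

                // Flags = 0: don't persist the mapping; a nonzero return is a Win32 error code.
                int rc = WNetAddConnection2(ref res, "pwd", @"domain.local\user", 0);
                if (rc != 0)
                    throw new IOException("WNetAddConnection2 failed with error " + rc);

                File.AppendAllText(@"\\otherPC\folder\data.txt", data); // placeholder file name
            }
        }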

  • Cache an FTP connection via session variables for use via AJAX?

    - by Chad Johnson
    I'm working on a Ruby web application that uses the Net::FTP library. One part of it allows users to interact with an FTP site via AJAX: when the user does something, an AJAX call is made, and then Ruby reconnects to the FTP server, performs an action, and outputs information. Every time the AJAX call is made, Ruby has to reconnect to the FTP server, and that's slow. Is there a way I could cache this FTP connection? I've tried caching it in the session hash, but "We're sorry, but something went wrong" is displayed and a TCP dump is output to my logs whenever I attempt to store it there. I haven't tried memcache yet. Any suggestions?

  • WindowsPhone App data connection FAILS in MarketPlace published App but WORKS in Visual Studio development (same XAP)

    - by Tom
    Tearing my hair out(!) My last app update was accepted and released by Marketplace, but the remote server data connection does NOT work/connect from the downloaded app. However, the same app (the accepted XAP), when I run it from Visual Studio using the same remote server address, works just fine. WHY!... Has anyone else ever run into anything like this? Here's the remote path: http://www.streamcommunication.com/ZenAwaken/DownloadableCollections.xml - I can load that in a browser and retrieve the XML. When I'm in Visual Studio I can connect via that path, retrieve the file and consume the data, BUT the exact same XAP which has been accepted and distributed by the Windows Phone Marketplace FAILS. Is it possible that Marketplace does something (encryption?) to the XAP that would corrupt the path string? Any thoughts or experiences would be very helpful! Tom

  • Has anyone ever bought an "Internet Connection" license for WSS 3.0?

    - by strongopinions
    I would like to run a WSS 3.0 site that is exposed to the internet and provides access to an arbitrary number of users through forms-based authentication. If the WSS licensing is analogous to that of MOSS, then there should be some special licensing required to make this legitimate. I have seen several vague statements on the internet about an "Internet Connection" license for WSS 3.0, and some more general statements from Mike Walsh to the effect that it costs "around $2000." But I have never seen anything official about this, and I'm not sure if I am even using the correct terminology. Has anyone actually purchased something resembling this license?

  • Is it possible to create a jdbc connection without a password (using postgresql 'trust')?

    - by mojones
    I am using JDBC to connect to a PostgreSQL database from a Java application (actually the app is written in Groovy). I have PostgreSQL set up to use the 'trust' authentication method. Is it possible to open a JDBC connection without specifying a password? When I try to use the normal constructor with a blank password, it fails with:

        Exception in thread "Thread-2" org.postgresql.util.PSQLException: FATAL: password authentication failed for user "myuser"

    Even though, from the command line, this works fine:

        psql -U myuser mydatabase
        Welcome to psql 8.3.5, the PostgreSQL interactive terminal.

        Type:  \copyright for distribution terms
               \h for help with SQL commands
               \? for help with psql commands
               \g or terminate with semicolon to execute query
               \q to quit

  • How do I fix my VM's network connection if it seems to be running ok from the host?

    - by AndreiC
    I have a virtual machine (made with VMware) with Ubuntu Linux installed on it (I have a series of them), using a NAT network connection. I am running VMware on Windows XP (my host system), and the virtual machine can't connect to the internet. All the VMware services seem to be working fine from Windows' point of view, but inside the machine I can't reach the internet. What is strange is that the virtual machine was able to use the internet some time ago; all of a sudden I just can't use the internet on the virtual machine. I have made no changes to the settings, neither in Windows nor in the virtual machine, so I don't understand what happened.

  • How can I refresh a page via jQuery and ensure that there's a connection?

    - by Steve
    Hi! I toyed around with .load() and .ajax(), but I didn't get anywhere. Here's what I need: every minute the function shall check whether it can load a certain page. If the connection succeeds, I want the page to be refreshed; if not, nothing shall happen and the script shall retry later. I'm using the jQuery Timers plugin. This is my code so far:

        //reload trigger
        $(document).everyTime('60s', 'pagerefresh', reloadPage, 0, true);

        //refresh function
        function reloadPage() {
            $.ajax({
                url: 'index-1.php',
                type: 'HEAD',
                // wrapped in a function so the reload runs only on success,
                // not immediately when the options object is built
                success: function () {
                    location.reload(true);
                }
            });
        }

    I have no idea how to tell jQuery what I want. Any hint appreciated.

  • Java: Anyone know of a library that detects the quality of an internet connection?

    - by Zombies
    I know a simple URLConnection to Google can detect if I am connected to the internet; after all, I am confident that the internet is all well and fine if I can't connect to Google. But what I am looking for at this juncture is a library that can measure how effective my connection to the internet is, in terms of BOTH responsiveness and available bandwidth. BUT, I do not want to measure how much bandwidth is potentially available, as that is too resource-intensive. I really just need to be able to test whether or not I can receive something like X kB in Y amount of time. Does such a library already exist?

  • Does beginTransaction in Hibernate allocate a new DB connection?

    - by illscience
    Hi folks - just wondering if beginning a new transaction in Hibernate actually allocates a connection to the DB? I'm concerned because our server begins a new transaction for each request received, even if that request doesn't interact with the DB. We're seeing DB connections as a major bottleneck, so I'm wondering if I should take the time to narrow the scope of my transactions. I've searched everywhere and haven't been able to find a good answer. The very simple code is here:

        SessionFactory sessionFactory = (SessionFactory) Context.getContext().getBean("sessionFactory");
        sessionFactory.getCurrentSession().beginTransaction();
        sessionFactory.getCurrentSession().setFlushMode(FlushMode.AUTO);

    Thanks very much! a
