Search Results

Search found 15966 results on 639 pages for 'connection'.


  • TCP Server Memory management: #Connections Vs. #Requests

    - by Andrew
    There is no theoretical limit to the number of concurrent TCP connections a Windows 2008 server can handle; the only cost is that each connection consumes memory on the server. Unfortunately, memory is not unlimited (and I want to use only physical memory). For example, let's say we have 2 GB of server memory. Now there are two extreme cases: Case 1: If we allocate a 64 KB buffer for each connection (just to receive the incoming request), then 32768 connections can consume all 2 GB of memory. This leaves no memory to queue or process the incoming requests from those connections. Case 2: On the other hand, say a single connection (or very few) continuously keeps sending request buffers (for example, video streaming from one connection to another) and the server cannot process them in time; those buffers pile up on the server and eventually occupy most of its memory, leaving no memory for new connections. This is the real dilemma in server design, and it has been bugging me for many days. If I can decide on the maximum size of the request buffer per connection and the maximum number of requests to allow in the queue per connection, then, based on available server memory, that automatically sets a limit on the maximum number of concurrent connections. How do I decide on these limits to achieve the best performance and throughput? I am just looking for ideal utilization of server resources. Are there any standard guidelines or empirical data that someone can share with me?
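
    A minimal back-of-the-envelope sketch of the budgeting described above (all numbers are illustrative assumptions, not recommendations): cap concurrent connections so that the receive buffers plus a bounded per-connection request queue stay inside the memory reserved for I/O.

      // Worst-case memory per connection drives the connection cap.
      public class ConnectionBudget {
          public static void main(String[] args) {
              long serverMemory      = 2L * 1024 * 1024 * 1024; // 2 GB available on the server
              long receiveBuffer     = 64 * 1024;               // per-connection receive buffer (Case 1)
              long maxRequestSize    = 64 * 1024;               // cap on a single queued request
              int  maxQueuedRequests = 4;                       // cap on requests queued per connection (Case 2)

              // Worst-case memory a single connection can pin down.
              long perConnection = receiveBuffer + (long) maxQueuedRequests * maxRequestSize;

              // Leave headroom for the application itself; assumed 25% here.
              long ioBudget = serverMemory * 3 / 4;

              long maxConnections = ioBudget / perConnection;
              System.out.printf("Worst case per connection: %d KB%n", perConnection / 1024);
              System.out.printf("Maximum concurrent connections: %d%n", maxConnections);
          }
      }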

    Read the article

  • What is going on when I can't access an SMB server share (not accessible error) until I run cmdkey to delete the credential?

    - by Warren P
    I have a network share connection issue. The first connection works and seems to stay connected for at least a few hours. However, after each time my Windows 7 PC reboots, it can no longer form a network connection to the shared folder, nor browse to it, until I not only unmap and remap the mapped drive but also use cmdkey to delete the stored credentials, like this: cmdkey /delete:Domain:target=HOSTNAME My work PC is on a domain, and I am not the IT administrator, but I'm curious if there is anything I can do to investigate this issue. Are there any settings in the registry or group policy that I could examine to see why the first connection works, but each subsequent attempt (once a stored credential exists) to browse or use the connection fails with a connection error saying it is "not accessible"? I do not even get any error until at least several minutes go by; the first thing I see is a frozen, empty window, and then the error appears. This has happened when connecting to a share on a DROBO device, and on a share which is not on the domain but which was a Microsoft Home Server. I wonder if there's something broken in Windows 7 Professional with regard to connecting to non-domain shares when an Active Directory domain controller exists and a particular workstation is joined to a domain. The problem only occurs if I click "remember credentials". It is not fixed by any amount of working with net use. Using cmdkey to delete all stored credentials for the host is the only way to get back in, and it affects all non-domain shared folders. Update: I'm hoping there are some registry locations I could check that could be misconfigured in some way that might explain why SMB/CIFS stored credentials for non-domain systems seem to be auto-invalidated in this weird way. Knowing how whacko Microsoft Windows domain and security handling is sometimes, this could be some kind of stupid "feature".

    Read the article

  • Hybrid Exchange Online setup with on premise public folders, certificate issues?

    - by exxoid
    We have a hybrid Exchange setup with Exchange Online (v15 tenant) and Exchange 2010 on-premises. The hybrid configuration is working for the most part; what I am having an issue with is getting public folders to work for cloud users. I followed the official documentation here (http://technet.microsoft.com/en-us/library/dn249373(v=exchg.150).aspx) and it kind of works. When I access Outlook on a public Wi-Fi network, I am able to bring up the cloud mailboxes, and the on-premises public folders show up in Outlook. When I access email via Outlook as a cloud user on the same LAN as the on-premises Exchange, the cloud user makes the outlook.com connection for the live/AD/archive mailbox but fails to create a proxy connection for the on-premises public folders. The error I get is a certificate mismatch; it seems that when a user on the LAN accesses Outlook/Exchange, it uses a different certificate than when Outlook is launched on a Wi-Fi network. When I look at the Outlook connection information, I see the connection to outlook.com for the AD/live/archive mailbox but no entry for the public folder connection. Our on-premises Exchange is 2010 SP3 with the latest CUs. The client is a domain-joined laptop with Windows 7 and Office 2010 SP2, with the latest Windows updates applied. Our infrastructure has a working ADFS 3 and DirSync setup for Office 365. My question, then, is: what do I need to do to make sure that the cloud user launching Outlook on the LAN uses the proper certificate (the wildcard third-party certificate vs. the self-signed certificate, which it looks like it may be using during the connection attempt)?

    Read the article

  • Legacy VB6 application under Win7 SQL error

    - by Shial
    We have a rather unfortunate legacy application at work. Originally written in VB6, it predates anybody in our IT department by at least 5 years. We have a contracted developer for ongoing maintenance, and where he can he rewrites sections into .NET code (not sure about his techniques here; this is a side job alongside his regular work as an IBM engineer). The application works fine (such as it is) under Windows XP. We have only a couple of Windows 7 machines, mainly for testing, and on them this application seems to run into a wall: things like the background not loading, and SQL errors. This happens even when running as administrator. Running an SQL trace from the ODBC control panel shows several interesting things. It initially makes a connection to the database successfully, where it runs a query to determine whether it is running the correct version. This query works fine: 558-1af0 ENTER SQLExecDirectW HSTMT 0x020D7548 WCHAR * 0x04C8F0F0 [ 115] "SELECT count(*) c FROM tblSoftwareVersion WHERE fldSoftwareVersion = '123456' AND fldSoftwareName = 'Application.VB'" SDWORD 115 BMS 558-1af0 EXIT SQLExecDirectW with return code 1 (SQL_SUCCESS_WITH_INFO) HSTMT 0x020D7548 WCHAR * 0x04C8F0F0 [ 115] "SELECT count(*) c FROM tblSoftwareVersion WHERE fldSoftwareVersion = '123456' AND fldSoftwareName = 'Application.VB'" SDWORD 115 It then seems to drop its connection and can't find the ODBC connection, despite the fact that it is connecting to the same DB. From the trace it looks like it configures the connection, then starts firing off SQLFreeStmt to unbind and close out; then, when the application tries to do its thing, there is no connection. 558-1af0 ENTER SQLFreeStmt HSTMT 0x020D7548 UWORD 2 <SQL_UNBIND> BMS 558-1af0 EXIT SQLFreeStmt with return code 0 (SQL_SUCCESS) HSTMT 0x020D7548 UWORD 2 <SQL_UNBIND> Then this happens when I try to do something that pulls data: 558-1af0 ENTER SQLDriverConnectW HDBC 0x020DDA00 HWND 0x00000000 WCHAR * 0x73EF8634 [ -3] "******\ 0" SWORD -3 WCHAR * 0x73EF8634 SWORD -3 SWORD * 0x00000000 UWORD 0 <SQL_DRIVER_NOPROMPT> BMS 558-1af0 EXIT SQLDriverConnectW with return code -1 (SQL_ERROR) HDBC 0x020DDA00 HWND 0x00000000 WCHAR * 0x73EF8634 [ -3] "******\ 0" SWORD -3 WCHAR * 0x73EF8634 SWORD -3 SWORD * 0x00000000 UWORD 0 <SQL_DRIVER_NOPROMPT> DIAG [IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) Nearly all of my searching on this issue turns up programming issues where the connection string has a problem. The only thing that is different in this particular scenario, though, is Windows 7; I know the connection string is fine since it works on the XP machines. The VB components are supposed to still be functional under Win7. My computer is running 32-bit Win7 and my VP is running 64-bit Win7, and both have the same problem, so that can be ruled out. I have already tried reinstalling the SQL Native Client and the VB runtime as well as the application in question. Hopefully I can find a solution and not have to resort to using the XP VM.

    Read the article

  • ReportViewer Error When AsyncRendering set to false

    - by Irman
    I have a problem using ReportViewer with AsyncRendering set to false; the report shows this error message: "The underlying connection was closed: An unexpected error occurred on a receive. Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. An existing connection was forcibly closed by the remote host". This report is running on IIS 7 and Windows 7 on my local machine.

    Read the article

  • Disable Autocommit in H2 with Hibernate/C3P0 ?

    - by HDave
    I have a JPA/Hibernate application and am trying to get it to run against H2 (as well as other databases). Currently I am using Atomikos for transactions and C3P0 for connection pooling. Despite my best efforts I am still seeing this in the log file (and DAO integration tests are failing): [20100613 23:06:34] DEBUG [main] SessionFactoryImpl.(242) | instantiating session factory with properties: .....edited for brevity.... hibernate.connection.autocommit=true, ....more stuff follows The connection URL to H2 has AUTOCOMMIT=OFF, but according to the H2 documentation: this will not work as expected when using a connection pool (the connection pool manager will re-enable autocommit when returning the connection to the pool, so autocommit will only be disabled the first time the connection is used). So I figured (apparently correctly) that Hibernate is where I'll have to indicate I want autocommit off. I found the autocommit property documented here and I put it in my EntityManagerFactory config as follows: <bean id="myappTestLocalEmf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="myapp-core" /> <property name="persistenceUnitPostProcessors"> <bean class="com.myapp.core.persist.util.JtaPersistenceUnitPostProcessor"> <property name="jtaDataSource" ref="myappPersistTestJdbcDataSource" /> </bean> </property> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="showSql" value="true" /> <property name="database" value="$DS{hibernate.database}" /> <property name="databasePlatform" value="$DS{hibernate.dialect}" /> </bean> </property> <property name="jpaProperties"> <props> <prop key="hibernate.transaction.factory_class">com.atomikos.icatch.jta.hibernate3.AtomikosJTATransactionFactory</prop> <prop key="hibernate.transaction.manager_lookup_class">com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup</prop> <prop key="hibernate.connection.autocommit">false</prop> <prop key="hibernate.format_sql">true"</prop> <prop key="hibernate.use_sql_comments">true</prop> </property> </bean>
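
    A minimal sketch (assumed H2 in-memory URL and credentials, not taken from the question) of the pool behaviour the H2 documentation describes: autocommit turned off on a checked-out connection does not survive the round trip through a C3P0 pool, which is why the setting has to go through the Hibernate/pool configuration rather than the raw connection.

      import java.sql.Connection;
      import com.mchange.v2.c3p0.ComboPooledDataSource;

      public class AutocommitResetDemo {
          public static void main(String[] args) throws Exception {
              ComboPooledDataSource pool = new ComboPooledDataSource();
              pool.setDriverClass("org.h2.Driver");
              pool.setJdbcUrl("jdbc:h2:mem:demo");      // assumed in-memory database
              pool.setUser("sa");
              pool.setPassword("");

              Connection c1 = pool.getConnection();
              c1.setAutoCommit(false);                  // disabled only for this checkout
              c1.close();                               // returns the connection to the pool

              Connection c2 = pool.getConnection();     // typically the same physical connection
              // The pool restores the driver default (autocommit on) at check-in.
              System.out.println("autocommit after re-checkout: " + c2.getAutoCommit());
              c2.close();
              pool.close();
          }
      }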

    Read the article

  • Does writing data to server using Java URL class require response from server?

    - by gigadot
    I am trying to upload files using the Java URL class, and I found a previous question on Stack Overflow which explains the details very well, so I tried to follow it. Below is my code, adapted from the snippet given in the answer. My problem is that if I don't make a call to connection.getResponseCode() or connection.getInputStream() or connection.getResponseMessage() or anything else related to the response from the server, the request will never be sent to the server. Why do I need to do this? Or is there any way to write the data without getting the response? P.S. I have developed a server-side uploading servlet which accepts multipart/form-data and saves it to files using FileUpload. It is stable and definitely working without any problem, so this is not where my problem originates. import java.io.Closeable; import java.io.File; import java.io.FileInputStream; import java.io.IOException; import java.io.OutputStream; import java.io.PrintWriter; import java.net.HttpURLConnection; import java.net.URL; import org.apache.commons.io.IOUtils; public class URLUploader { public static void closeQuietly(Closeable... objs) { for (Closeable closeable : objs) { IOUtils.closeQuietly(closeable); } } public static void main(String[] args) throws IOException { File textFile = new File("D:\\file.zip"); String boundary = Long.toHexString(System.currentTimeMillis()); // Just generate some unique random value. HttpURLConnection connection = (HttpURLConnection) new URL("http://localhost:8080/upslet/upload").openConnection(); connection.setDoOutput(true); connection.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary); OutputStream output = connection.getOutputStream(); PrintWriter writer = new PrintWriter(output, true); // Send text file. writer.println("--" + boundary); writer.println("Content-Disposition: form-data; name=\"file1\"; filename=\"" + textFile.getName() + "\""); writer.println("Content-Type: application/octet-stream"); FileInputStream fin = new FileInputStream(textFile); writer.println(); IOUtils.copy(fin, output); writer.println(); // End of multipart/form-data. writer.println("--" + boundary + "--"); output.flush(); closeQuietly(fin, writer, output); // Above request will never be sent if .getInputStream() or .getResponseCode() or .getResponseMessage() does not get called. connection.getResponseCode(); } }
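
    For reference, a minimal sketch (hypothetical localhost URL) of the behaviour described in the question: by default HttpURLConnection stages the written body locally, and the exchange is only guaranteed to go out on the wire once the response is requested, so reading the response code (and draining the stream) is the usual way to confirm the upload actually completed.

      import java.io.InputStream;
      import java.io.OutputStream;
      import java.net.HttpURLConnection;
      import java.net.URL;

      public class PostAndConfirm {
          public static void main(String[] args) throws Exception {
              HttpURLConnection conn =
                      (HttpURLConnection) new URL("http://localhost:8080/upload").openConnection();
              conn.setDoOutput(true);
              conn.setRequestMethod("POST");
              conn.setRequestProperty("Content-Type", "application/octet-stream");

              try (OutputStream out = conn.getOutputStream()) {
                  out.write("payload".getBytes("UTF-8")); // body is staged, not necessarily sent yet
              }

              // Asking for the response forces the request to complete on the wire.
              int status = conn.getResponseCode();
              try (InputStream in = conn.getInputStream()) {
                  while (in.read() != -1) { /* drain so the connection can be reused */ }
              }
              System.out.println("Server answered with HTTP " + status);
              conn.disconnect();
          }
      }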

    Read the article

  • C#.NET Socket Programming: Connecting to remote computers.

    - by Gio Borje
    I have a typical server in my end and a friend using a client to connect to my IP/Port and he consistently receives the exception: "A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond {MY_IP}:{MY_PORT}"—You don't need to know my IP. The client and server, however, work fine on the loopback address (127.0.0.1). I also do not have any firewall nor is windows firewall active. Server: static void Main(string[] args) { Console.Title = "Socket Server"; Console.WriteLine("Listening for messages..."); Socket serverSock = new Socket( AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); IPAddress serverIP = IPAddress.Any; IPEndPoint serverEP = new IPEndPoint(serverIP, 33367); SocketPermission perm = new SocketPermission(NetworkAccess.Accept, TransportType.Tcp, "98.112.235.18", 33367); serverSock.Bind(serverEP); serverSock.Listen(10); while (true) { Socket connection = serverSock.Accept(); Byte[] serverBuffer = new Byte[8]; String message = String.Empty; while (connection.Available > 0) { int bytes = connection.Receive( serverBuffer, serverBuffer.Length, 0); message += Encoding.UTF8.GetString( serverBuffer, 0, bytes); } Console.WriteLine(message); connection.Close(); } } Client: static void Main(string[] args) { // Design the client a bit Console.Title = "Socket Client"; Console.Write("Enter the IP of the server: "); IPAddress clientIP = IPAddress.Parse(Console.ReadLine()); String message = String.Empty; while (true) { Console.Write("Enter the message to send: "); // The messsage to send message = Console.ReadLine(); IPEndPoint clientEP = new IPEndPoint(clientIP, 33367); // Setup the socket Socket clientSock = new Socket( AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); // Attempt to establish a connection to the server Console.Write("Establishing connection to the server... "); try { clientSock.Connect(clientEP); // Send the message clientSock.Send(Encoding.UTF8.GetBytes(message)); clientSock.Shutdown(SocketShutdown.Both); clientSock.Close(); Console.Write("Message sent successfully.\n\n"); } catch (Exception ex) { Console.WriteLine(ex.Message); } } }

    Read the article

  • Disable Autocommit with Spring/Hibernate/C3P0/Atomikos ?

    - by HDave
    I have a JPA/Hibernate application and am trying to get it to run against H2 and MySQL. Currently I am using Atomikos for transactions and C3P0 for connection pooling. Despite my best efforts I am still seeing this in the log file (and DAO integration tests are failing with org.hibernate.NonUniqueObjectException even though Spring Test should be rolling back the database after each test method): [20100613 23:06:34] DEBUG [main] SessionFactoryImpl.(242) | instantiating session factory with properties: .....edited for brevity.... hibernate.connection.autocommit=true, ....more stuff follows The connection URL to H2 has AUTOCOMMIT=OFF, but according to the H2 documentation: this will not work as expected when using a connection pool (the connection pool manager will re-enable autocommit when returning the connection to the pool, so autocommit will only be disabled the first time the connection is used So I figured (apparently correctly) that Hibernate is where I'll have to indicate I want autocommit off. I found the autocommit property documented here and I put it in my EntityManagerFactory config as follows: <bean id="myappTestLocalEmf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="myapp-core" /> <property name="persistenceUnitPostProcessors"> <bean class="com.myapp.core.persist.util.JtaPersistenceUnitPostProcessor"> <property name="jtaDataSource" ref="myappPersistTestJdbcDataSource" /> </bean> </property> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="showSql" value="true" /> <property name="database" value="$DS{hibernate.database}" /> <property name="databasePlatform" value="$DS{hibernate.dialect}" /> </bean> </property> <property name="jpaProperties"> <props> <prop key="hibernate.transaction.factory_class">com.atomikos.icatch.jta.hibernate3.AtomikosJTATransactionFactory</prop> <prop key="hibernate.transaction.manager_lookup_class">com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup</prop> <prop key="hibernate.connection.autocommit">false</prop> <prop key="hibernate.format_sql">true"</prop> <prop key="hibernate.use_sql_comments">true</prop> </property> </bean>

    Read the article

  • How to Connect Crystal Reports to MySQL directly by C# code without DSN or a DataSet

    - by Yanko Hernández Alvarez
    How can I connect a Crystal Report (VS 2008 basic) to a MySQL DB without using a DSN or a preloaded DataSet, using C#? I need to install the program in several places, so I must be able to change the connection parameters. I don't want to create a DSN in every place, nor do I want to preload a DataSet and pass it to the report engine. I use NHibernate to access the database, so creating and filling the additional DS would take twice the work and add maintenance later. I think the best option would be to let the Crystal Reports engine connect to the MySQL server by itself using ODBC. I managed to create the connection in the report designer (VS2008) using the Database Expert, creating an ODBC(RDO) connection and entering the connection string "DRIVER={MySQL ODBC 5.1 Driver};SERVER=myserver.mydomain", and on the "Next" page filling in the "User ID", "Password" and "Database" parameters. I didn't fill in the "Server" parameter. It worked. As a matter of fact, if you use the former connection string, it doesn't matter what you put in the "Server" parameter; it seems the parameter is unused. On the other hand, if you use "DRIVER={MySQL ODBC 5.1 Driver}" as a connection string and later fill the "Server" parameter with the FQDN of the server, the connection doesn't work. How can I do that in code? All the examples I've seen so far use a DSN or the DataSet method. I saw the same question posted for PostgreSQL and tried to adapt it to MySQL, but so far, no success. The first method: Rp.Load(); Rp.DataSourceConnections[0].SetConnection("DRIVER={MySQL ODBC 5.1 Driver};SERVER=myserver.mydomain", "database", "user", "pass"); Rp.ExportToDisk(ExportFormatType.PortableDocFormat, "report.pdf"); raises a CrystalDecisions.CrystalReports.Engine.LogOnException during ExportToDisk with Message="Logon failed.\nDetails: IM002:[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified.\rError in File temporal file path.rpt:\nUnable to connect: incorrect log on parameters. The InnerException is a System.Runtime.InteropServices.COMException with the same message and no InnerException. The "no default driver specified" makes me wonder if the server parameter is unused here too (see above). In that case: how can I specify the connection string? I haven't tried the second method because it doesn't apply. Does anybody know the solution?

    Read the article

  • Am I Leaking ADO.NET Connections?

    - by HardCode
    Here is an example of my code in a DAL. All calls to the database's Stored Procedures are structured this way, and there is no in-line SQL. Friend Shared Function Save(ByVal s As MyClass) As Boolean Dim cn As SqlClient.SqlConnection = Dal.Connections.MyAppConnection Dim cmd As New SqlClient.SqlCommand Try cmd.Connection = cn cmd.CommandType = CommandType.StoredProcedure cmd.CommandText = "proc_save_my_class" cmd.Parameters.AddWithValue("@param1", s.Foo) cmd.Parameters.AddWithValue("@param2", s.Bar) Return True Finally Dal.Utility.CleanupAdoObjects(cmd, cn) End Try End Function Here is the Connection factory (if I am using the correct term): Friend Shared Function MyAppConnection() As SqlClient.SqlConnection Dim cn As New SqlClient.SqlConnection(ConfigurationManager.ConnectionStrings("MyConnectionString").ToString) cn.Open() If cn.State <> ConnectionState.Open Then ' CriticalException is a custom object inheriting from Exception. Throw New CriticalException("Could not connect to the database.") Else Return cn End If End Function Here is the Dal.Utility.CleanupAdoObjects() function: Friend Shared Sub CleanupAdoObjects(ByVal cmd As SqlCommand, ByVal cn As SqlConnection) If cmd IsNot Nothing Then cmd.Dispose() If cn IsNot Nothing AndAlso cn.State <> ConnectionState.Closed Then cn.Close() End Sub I am getting a lot of "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." error messages reported by the users. The application's DAL opens a connection, reads or saves data, and closes it. No connections are ever left open - intentionally! There is nothing obvious on the Windows 2000 Server hosting the SQL Server 2000 that would indicate a problem. Nothing in the Event Logs and nothing in the SQL Server logs. The timeouts happen randomly - I cannot reproduce them. They happen early in the day with only 1 to 5 users in the system, and also with around 50 users in the system. The highest number of connections to SQL Server seen via Performance Monitor, for all databases, has been about 74. The timeouts happen in code that both saves to, and reads from, the database in different parts of the application. The stack trace does not point to one or two offending DAL functions; it has happened in many different places. Does my ADO.NET code appear to be able to leak connections? I've googled around a bit, and I've read that if the connection pool fills up, this can happen. However, I'm not explicitly setting any connection pooling. I've even tried to increase the Connection Timeout in the connection string, but timeouts happen long before the 300 second (5 minute) value: <add name="MyConnectionString" connectionString="Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=SSPI;Connection Timeout=300;"/> I'm at a total loss as to what is causing these Timeout issues. Any ideas are appreciated.

    Read the article

  • Java multiple connections downloading file

    - by weulerjunior
    Hello friends, I was wanting to add multiple connections in the code below to be able to download files faster. Could someone help me? Thanks in advance. public void run() { RandomAccessFile file = null; InputStream stream = null; try { // Open connection to URL. HttpURLConnection connection = (HttpURLConnection) url.openConnection(); // Specify what portion of file to download. connection.setRequestProperty("Range", "bytes=" + downloaded + "-"); // Connect to server. connection.connect(); // Make sure response code is in the 200 range. if (connection.getResponseCode() / 100 != 2) { error(); } // Check for valid content length. int contentLength = connection.getContentLength(); if (contentLength < 1) { error(); } /* Set the size for this download if it hasn't been already set. */ if (size == -1) { size = contentLength; stateChanged(); } // Open file and seek to the end of it. file = new RandomAccessFile("C:\\"+getFileName(url), "rw"); file.seek(downloaded); stream = connection.getInputStream(); while (status == DOWNLOADING) { /* Size buffer according to how much of the file is left to download. */ byte buffer[]; if (size - downloaded > MAX_BUFFER_SIZE) { buffer = new byte[MAX_BUFFER_SIZE]; } else { buffer = new byte[size - downloaded]; } // Read from server into buffer. int read = stream.read(buffer); if (read == -1) { break; } // Write buffer to file. file.write(buffer, 0, read); downloaded += read; stateChanged(); } /* Change status to complete if this point was reached because downloading has finished. */ if (status == DOWNLOADING) { status = COMPLETE; stateChanged(); } } catch (Exception e) { error(); } finally { // Close file. if (file != null) { try { file.close(); } catch (Exception e) { } } // Close connection to server. if (stream != null) { try { stream.close(); } catch (Exception e) { } } } }
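
    A rough sketch (hypothetical URL and file name) of the usual multi-connection approach: determine the total size, split it into byte ranges, and let one thread per range write into its own region of the same file. This assumes the server honours HTTP Range requests; a real implementation would also need retry and resume handling like the code above.

      import java.io.InputStream;
      import java.io.RandomAccessFile;
      import java.net.HttpURLConnection;
      import java.net.URL;

      public class ParallelDownload {
          public static void main(String[] args) throws Exception {
              URL url = new URL("http://example.com/file.zip");
              int parts = 4;

              HttpURLConnection head = (HttpURLConnection) url.openConnection();
              head.setRequestMethod("HEAD");           // some servers omit the length here
              long size = head.getContentLengthLong();
              head.disconnect();

              RandomAccessFile file = new RandomAccessFile("file.zip", "rw");
              file.setLength(size);                    // pre-allocate the target file
              file.close();

              long chunk = size / parts;
              Thread[] workers = new Thread[parts];
              for (int i = 0; i < parts; i++) {
                  final long start = i * chunk;
                  final long end = (i == parts - 1) ? size - 1 : start + chunk - 1;
                  workers[i] = new Thread(() -> {
                      try {
                          HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                          conn.setRequestProperty("Range", "bytes=" + start + "-" + end);
                          try (InputStream in = conn.getInputStream();
                               RandomAccessFile out = new RandomAccessFile("file.zip", "rw")) {
                              out.seek(start);         // each thread writes its own region
                              byte[] buffer = new byte[8192];
                              int read;
                              while ((read = in.read(buffer)) != -1) {
                                  out.write(buffer, 0, read);
                              }
                          }
                      } catch (Exception e) {
                          e.printStackTrace();
                      }
                  });
                  workers[i].start();
              }
              for (Thread t : workers) {
                  t.join();
              }
              System.out.println("Downloaded " + size + " bytes in " + parts + " parts");
          }
      }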

    Read the article

  • WebSockets authentication

    - by Tomi
    What are the possible ways to authenticate a user when a WebSocket connection is used? Example scenario: a web-based multi-user chat application over an encrypted WebSocket connection. How can I ensure (or guarantee) that each connection in this application belongs to a certain authenticated user and cannot be exploited by user impersonation during the connection?
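
    One common pattern, sketched below without any particular WebSocket library: the user authenticates over ordinary HTTPS first, receives a short-lived single-use ticket, and must present that ticket during the WebSocket handshake (or as the first frame) before the connection is bound to the account. The class, method names, and 30-second lifetime are illustrative assumptions, not a specific framework's API.

      import java.util.Map;
      import java.util.UUID;
      import java.util.concurrent.ConcurrentHashMap;
      import java.util.concurrent.TimeUnit;

      public class TicketAuth {
          private static final long TICKET_TTL_MS = TimeUnit.SECONDS.toMillis(30);
          private final Map<String, Ticket> tickets = new ConcurrentHashMap<>();

          private static final class Ticket {
              final String userId;
              final long issuedAt;
              Ticket(String userId, long issuedAt) { this.userId = userId; this.issuedAt = issuedAt; }
          }

          /** Called by the normal HTTPS login endpoint after the password check succeeds. */
          public String issueTicket(String userId) {
              String ticket = UUID.randomUUID().toString();
              tickets.put(ticket, new Ticket(userId, System.currentTimeMillis()));
              return ticket; // client then opens wss://host/chat?ticket=<value>
          }

          /** Called when the WebSocket handshake (or first frame) arrives; single use. */
          public String authenticate(String ticket) {
              Ticket t = tickets.remove(ticket);       // remove: a ticket can be used once
              if (t == null || System.currentTimeMillis() - t.issuedAt > TICKET_TTL_MS) {
                  return null;                         // caller should close the connection
              }
              return t.userId;                         // bind this user to the connection
          }
      }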

    Read the article

  • Alert on gridview edit based on permission

    - by Vicky
    I have a GridView with an edit option at the start of each row. I also maintain a separate table called Permission where I keep user permissions. I have three different types of permissions: Admin, Leads, and Programmers. All three will have access to the GridView. For anyone except Admin, if they try to edit the GridView by clicking the edit option, I need to give an alert like "This row has important validation; make sure you make proper changes." When I edit, the action will happen on a table called Application. The table has a column called Comments. Also, the alert should happen only when they try to edit rows where the Comments column has one of these values: ManLog datas, Funding Approved, Exported Applications. My try so far: public bool IsApplicationUser(string userName) { return CheckUser(userName); } public static bool CheckUser(string userName) { string CS = ConfigurationManager.ConnectionStrings["ConnectionString"].ToString(); DataTable dt = new DataTable(); using (SqlConnection connection = new SqlConnection(CS)) { SqlCommand command = new SqlCommand(); command.Connection = connection; string strquery = "select * from Permissions where AppCode='Nest' and UserID = '" + userName + "'"; SqlCommand cmd = new SqlCommand(strquery, connection); SqlDataAdapter da = new SqlDataAdapter(cmd); da.Fill(dt); } if (dt.Rows.Count >= 1) return true; else return true; } protected void Details_RowCommand(object sender, GridViewCommandEventArgs e) { string currentUser = HttpContext.Current.Request.LogonUserIdentity.Name; string str = ConfigurationManager.ConnectionStrings["ConnectionString"].ToString(); string[] words = currentUser.Split('\\'); currentUser = words[1]; bool appuser = IsApplicationUser(currentUser); if (appuser) { DataSet ds = new DataSet(); using (SqlConnection connection = new SqlConnection(str)) { SqlCommand command = new SqlCommand(); command.Connection = connection; string strquery = "select Role_Cd from User_Role where AppCode='PM' and UserID = '" + currentUser + "'"; SqlCommand cmd = new SqlCommand(strquery, connection); SqlDataAdapter da = new SqlDataAdapter(cmd); da.Fill(ds); } if (e.CommandName.Equals("Edit") && ds.Tables[0].Rows[0]["Role_Cd"].ToString().Trim() != "ADMIN") { int index = Convert.ToInt32(e.CommandArgument); GridView gvCurrentGrid = (GridView)sender; GridViewRow row = gvCurrentGrid.Rows[index]; string strID = ((Label)row.FindControl("lblID")).Text; string strAppName = ((Label)row.FindControl("lblAppName")).Text; Response.Redirect("AddApplication.aspx?ID=" + strID + "&AppName=" + strAppName + "&Edit=True"); } } } Kindly let me know if I need to add something. Thanks for any suggestions.

    Read the article

  • Problem using generics in function

    - by JAVA
    Hi all, I have these functions and need to make them one function. The only difference is the type of the input variable sourceColumnValue. This variable can be String or Integer, but the return value of the function must always be Integer. I know I need to use generics but can't get it to work. public Integer selectReturnInt(String tableName, String sourceColumnName, String sourceColumnValue, String targetColumnName) { Integer returned = null; String query = "SELECT "+targetColumnName+" FROM "+tableName+" WHERE "+sourceColumnName+"='"+sourceColumnValue+"' LIMIT 1"; try { Connection connection = ConnectionManager.getInstance().open(); java.sql.Statement statement = connection.createStatement(); statement.execute(query.toString()); ResultSet rs = statement.getResultSet(); while(rs.next()){ returned = rs.getInt(targetColumnName); } rs.close(); statement.close(); ConnectionManager.getInstance().close(connection); } catch (SQLException e) { System.out.println("???????? ?? ???? ?? ???? ?????????!"); System.out.println(e); } return returned; } // SELECT (RETURN INTEGER) public Integer selectIntReturnInt(String tableName, String sourceColumnName, Integer sourceColumnValue, String targetColumnName) { Integer returned = null; String query = "SELECT "+targetColumnName+" FROM "+tableName+" WHERE "+sourceColumnName+"='"+sourceColumnValue+"' LIMIT 1"; try { Connection connection = ConnectionManager.getInstance().open(); java.sql.Statement statement = connection.createStatement(); statement.execute(query.toString()); ResultSet rs = statement.getResultSet(); while(rs.next()){ returned = rs.getInt(targetColumnName); } rs.close(); statement.close(); ConnectionManager.getInstance().close(connection); } catch (SQLException e) { System.out.println("???????? ?? ???? ?? ???? ?????????!"); System.out.println(e); } return returned; }
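
    One possible way to collapse the two methods above into one, sketched with a type parameter for the source value and a PreparedStatement so the JDBC driver handles String vs. Integer binding (table and column names still have to be concatenated, because identifiers cannot be bound as parameters). ConnectionManager is the same helper the original code already uses.

      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.SQLException;

      public class GenericLookup {

          public <T> Integer selectReturnInt(String tableName, String sourceColumnName,
                                             T sourceColumnValue, String targetColumnName) {
              String query = "SELECT " + targetColumnName + " FROM " + tableName
                           + " WHERE " + sourceColumnName + " = ? LIMIT 1";
              Integer returned = null;
              try {
                  Connection connection = ConnectionManager.getInstance().open();
                  PreparedStatement statement = connection.prepareStatement(query);
                  statement.setObject(1, sourceColumnValue);   // works for String, Integer, ...
                  ResultSet rs = statement.executeQuery();
                  if (rs.next()) {
                      returned = rs.getInt(targetColumnName);
                  }
                  rs.close();
                  statement.close();
                  ConnectionManager.getInstance().close(connection);
              } catch (SQLException e) {
                  e.printStackTrace();
              }
              return returned;
          }
      }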

    Read the article

  • PPTP VPN from Ubuntu cannot connect

    - by Andrea Polci
    I'm trying to configure under Linux (Kubuntu 9.10) a VPN I already use from Windows. I installed the network-manager-pptp package and added the VPN under Network Manager. These are the parameters under "advanced" button: Authentication Methods: PAP, CHAP, MSCHAP, MSCHAP2, EAP (I also tried "MSCHAP, MSCHAP2") Use MPPE Encryption: yes Crypto: Any Use stateful encryption: no Allow BSD compression: yes Allow Deflate compression: yes Allow TCP header compression: yes Send PPP echo packets: no When I try to connnect it doesn't work and this is what I get in the system log: 2010-04-08 13:53:47 pcelena NetworkManager <info> Starting VPN service 'org.freedesktop.NetworkManager.pptp'... 2010-04-08 13:53:47 pcelena NetworkManager <info> VPN service 'org.freedesktop.NetworkManager.pptp' started (org.freedesktop.NetworkManager.pptp), PID 4931 2010-04-08 13:53:47 pcelena NetworkManager <info> VPN service 'org.freedesktop.NetworkManager.pptp' just appeared, activating connections 2010-04-08 13:53:47 pcelena pppd[4932] Plugin /usr/lib/pppd/2.4.5//nm-pptp-pppd-plugin.so loaded. 2010-04-08 13:53:47 pcelena NetworkManager <info> VPN plugin state changed: 3 2010-04-08 13:53:47 pcelena pppd[4932] pppd 2.4.5 started by root, uid 0 2010-04-08 13:53:47 pcelena NetworkManager <info> VPN connection 'MYVPN' (Connect) reply received. 2010-04-08 13:53:47 pcelena NetworkManager SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp0, iface: ppp0) 2010-04-08 13:53:47 pcelena NetworkManager SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0): no ifupdown configuration found. 2010-04-08 13:53:47 pcelena pppd[4932] Using interface ppp0 2010-04-08 13:53:47 pcelena pppd[4932] Connect: ppp0 <--> /dev/pts/2 2010-04-08 13:53:47 pcelena pptp[4934] nm-pptp-service-4931 log[main:pptp.c:314]: The synchronous pptp option is NOT activated 2010-04-08 13:53:47 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request' 2010-04-08 13:53:47 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply. 2010-04-08 13:53:47 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 1, peer's call ID 14800). 2010-04-08 13:53:48 pcelena pppd[4932] CHAP authentication succeeded 2010-04-08 13:53:48 pcelena pppd[4932] CHAP authentication succeeded 2010-04-08 13:53:48 pcelena pppd[4932] LCP terminated by peer 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:929]: Call disconnect notification received (call id 14800) 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:788]: Received Stop Control Connection Request. 
2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 4 'Stop-Control-Connection-Reply' 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[callmgr_main:pptp_callmgr.c:258]: Closing connection (shutdown) 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request' 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[callmgr_main:pptp_callmgr.c:258]: Closing connection (shutdown) 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request' 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[call_callback:pptp_callmgr.c:79]: Closing connection (call state) 2010-04-08 13:53:48 pcelena pppd[4932] Modem hangup 2010-04-08 13:53:48 pcelena pppd[4932] Connection terminated. 2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin failed: 1 2010-04-08 13:53:48 pcelena NetworkManager SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp0, iface: ppp0) 2010-04-08 13:53:48 pcelena pppd[4932] Exit. 2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin failed: 1 2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin state changed: 6 2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin state change reason: 0 2010-04-08 13:53:48 pcelena NetworkManager <WARN> connection_state_changed(): Could not process the request because no VPN connection was active. 2010-04-08 13:53:48 pcelena NetworkManager <info> Policy set 'Auto eth0' (eth0) as default for routing and DNS. 2010-04-08 13:54:01 pcelena NetworkManager <debug> [1270727641.001390] ensure_killed(): waiting for vpn service pid 4931 to exit 2010-04-08 13:54:01 pcelena NetworkManager <debug> [1270727641.001479] ensure_killed(): vpn service pid 4931 cleaned up The error that sticks out here is "pppd[4932] LCP terminated by peer". Does anyone has suggestion on what can be the problem and how to make it work?

    Read the article

  • android upload progressbarr not working

    - by pieter
    I'm a beginner in Android programming and I was tryinh to upload an image to a server. I found some code here on stackoverflow, I adjusted it and it still doesn't work. The problem is my image still won't upload. edit I solved the problem, I had no rights on the folder on the server. Now I have a new problem. the progresbarr doesn't work. it keeps saying 0 % transmitted does anyone sees an error in my code? import android.app.Activity; import android.app.AlertDialog; import android.app.Dialog; import android.app.ProgressDialog; import android.content.Context; import android.content.DialogInterface; import android.graphics.Bitmap; import android.graphics.BitmapFactory; import android.location.Location; import android.location.LocationListener; import android.location.LocationManager; import android.os.AsyncTask; import android.os.Bundle; import android.util.Log; import android.view.View; import android.view.Window; import android.widget.Button; import android.widget.ImageView; import java.io.DataInputStream; import java.io.DataOutputStream; import java.io.File; import java.io.FileInputStream; import java.io.IOException; import java.net.HttpURLConnection; import java.net.URL; public class PreviewActivity extends Activity { /** The captured image file. Get it's path from the starting intent */ private File mImage; public static final String EXTRA_IMAGE_PATH = "extraImagePath" /** Log tag */ private static final String TAG = "DFH"; /** Progress dialog id */ private static final int UPLOAD_PROGRESS_DIALOG = 0; private static final int UPLOAD_ERROR_DIALOG = 1; private static final int UPLOAD_SUCCESS_DIALOG = 2; /** Handler to confirm button */ private Button mConfirm; /** Handler to cancel button */ private Button mCancel; /** Uploading progress dialog */ private ProgressDialog mDialog; /** * Called when the activity is created * * We load the captured image, and register button callbacks */ @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); requestWindowFeature(Window.FEATURE_INDETERMINATE_PROGRESS); setContentView(R.layout.preview); setResult(RESULT_CANCELED); // Import image Bundle extras = getIntent().getExtras(); String imagePath = extras.getString(FotoActivity.EXTRA_IMAGE_PATH); Log.d("DFHprev", imagePath); mImage = new File(imagePath); if (mImage.exists()) { setResult(RESULT_OK); loadImage(mImage); } registerButtonCallbacks(); } @Override protected void onPause() { super.onPause(); } /** * Register callbacks for ui buttons */ protected void registerButtonCallbacks() { // Cancel button callback mCancel = (Button) findViewById(R.id.preview_send_cancel); mCancel.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { PreviewActivity.this.finish(); } }); // Confirm button callback mConfirm = (Button) findViewById(R.id.preview_send_confirm); mConfirm.setEnabled(true); mConfirm.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { new UploadImageTask().execute(mImage); } }); } /** * Initialize the dialogs */ @Override protected Dialog onCreateDialog(int id) { switch(id) { case UPLOAD_PROGRESS_DIALOG: mDialog = new ProgressDialog(this); mDialog.setProgressStyle(ProgressDialog.STYLE_HORIZONTAL); mDialog.setCancelable(false); mDialog.setTitle(getString(R.string.progress_dialog_title_connecting)); return mDialog; case UPLOAD_ERROR_DIALOG: AlertDialog.Builder builder = new AlertDialog.Builder(this); builder.setTitle(R.string.upload_error_title) .setIcon(android.R.drawable.ic_dialog_alert) 
.setMessage(R.string.upload_error_message) .setCancelable(false) .setPositiveButton(getString(R.string.retry), new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int which) { PreviewActivity.this.finish(); } }); return builder.create(); case UPLOAD_SUCCESS_DIALOG: AlertDialog.Builder success = new AlertDialog.Builder(this); success.setTitle(R.string.upload_success_title) .setIcon(android.R.drawable.ic_dialog_info) .setMessage(R.string.upload_success_message) .setCancelable(false) .setPositiveButton(getString(R.string.success), new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int which) { PreviewActivity.this.finish(); } }); return success.create(); default: return null; } } /** * Prepare the progress dialog */ @Override protected void onPrepareDialog(int id, Dialog dialog) { switch(id) { case UPLOAD_PROGRESS_DIALOG: mDialog.setProgress(0); mDialog.setTitle(getString(R.string.progress_dialog_title_connecting)); } } /** * Load the image file into the imageView * * @param image */ protected void loadImage(File image) { Bitmap bm = BitmapFactory.decodeFile(image.getPath()); ImageView view = (ImageView) findViewById(R.id.preview_image); view.setImageBitmap(bm); } /** * Asynchronous task to upload file to server */ class UploadImageTask extends AsyncTask<File, Integer, Boolean> { /** Upload file to this url */ private static final String UPLOAD_URL = "http://www.xxxx.x/xxxx/fotos"; /** Send the file with this form name */ private static final String FIELD_FILE = "file"; /** * Prepare activity before upload */ @Override protected void onPreExecute() { super.onPreExecute(); setProgressBarIndeterminateVisibility(true); mConfirm.setEnabled(false); mCancel.setEnabled(false); showDialog(UPLOAD_PROGRESS_DIALOG); } /** * Clean app state after upload is completed */ @Override protected void onPostExecute(Boolean result) { super.onPostExecute(result); setProgressBarIndeterminateVisibility(false); mConfirm.setEnabled(true); mDialog.dismiss(); if (result) { showDialog(UPLOAD_SUCCESS_DIALOG); } else { showDialog(UPLOAD_ERROR_DIALOG); } } @Override protected Boolean doInBackground(File... image) { return doFileUpload(image[0], "UPLOAD_URL"); } @Override protected void onProgressUpdate(Integer... 
values) { super.onProgressUpdate(values); if (values[0] == 0) { mDialog.setTitle(getString(R.string.progress_dialog_title_uploading)); } mDialog.setProgress(values[0]); } private boolean doFileUpload(File file, String uploadUrl) { HttpURLConnection connection = null; DataOutputStream outputStream = null; DataInputStream inputStream = null; String pathToOurFile = file.getPath(); String urlServer = "http://www.xxxx.x/xxxx/upload.php"; String lineEnd = "\r\n"; String twoHyphens = "--"; String boundary = "*****"; // log pathtoourfile Log.d("DFHinUpl", pathToOurFile); int bytesRead, bytesAvailable, bufferSize; byte[] buffer; int maxBufferSize = 1*1024*1024; int sentBytes = 0; long fileSize = file.length(); // log filesize String files= String.valueOf(fileSize); String buffers= String.valueOf(maxBufferSize); Log.d("fotosize",files); Log.d("buffers",buffers); try { FileInputStream fileInputStream = new FileInputStream(new File(pathToOurFile) ); URL url = new URL(urlServer); connection = (HttpURLConnection) url.openConnection(); // Allow Inputs & Outputs connection.setDoInput(true); connection.setDoOutput(true); connection.setUseCaches(false); // Enable POST method connection.setRequestMethod("POST"); connection.setRequestProperty("Connection", "Keep-Alive"); connection.setRequestProperty("Content-Type", "multipart/form-data;boundary="+boundary); outputStream = new DataOutputStream( connection.getOutputStream() ); outputStream.writeBytes(twoHyphens + boundary + lineEnd); outputStream.writeBytes("Content-Disposition: form-data; name=\"uploadedfile\";filename=\"" + pathToOurFile +"\"" + lineEnd); outputStream.writeBytes(lineEnd); bytesAvailable = fileInputStream.available(); bufferSize = Math.min(bytesAvailable, maxBufferSize); buffer = new byte[bufferSize]; // Read file bytesRead = fileInputStream.read(buffer, 0, bufferSize); while (bytesRead > 0) { outputStream.write(buffer, 0, bufferSize); bytesAvailable = fileInputStream.available(); bufferSize = Math.min(bytesAvailable, maxBufferSize); bytesRead = fileInputStream.read(buffer, 0, bufferSize); sentBytes += bufferSize; publishProgress((int)(sentBytes * 100 / fileSize)); bytesAvailable = fileInputStream.available(); bufferSize = Math.min(bytesAvailable, maxBufferSize); bytesRead = fileInputStream.read(buffer, 0, bufferSize); } outputStream.writeBytes(lineEnd); outputStream.writeBytes(twoHyphens + boundary + twoHyphens + lineEnd); // Responses from the server (code and message) int serverResponseCode = connection.getResponseCode(); String serverResponseMessage = connection.getResponseMessage(); fileInputStream.close(); outputStream.flush(); outputStream.close(); try { int responseCode = connection.getResponseCode(); return responseCode == 200; } catch (IOException ioex) { Log.e("DFHUPLOAD", "Upload file failed: " + ioex.getMessage(), ioex); return false; } catch (Exception e) { Log.e("DFHUPLOAD", "Upload file failed: " + e.getMessage(), e); return false; } } catch (Exception ex) { String msg= ex.getMessage(); Log.d("DFHUPLOAD", msg); } return true; } } } the PHP code that handles this upload is following: <?php $date=getdate(); $urldate=$date['year'].$date['month'].$date['month'].$date['hours'].$date['minutes'].$date[ 'seconds']; $target_path = "./"; $target_path = $target_path . basename( $_FILES['uploadedfile']['name']) . $urldate; if(move_uploaded_file($_FILES['uploadedfile']['tmp_name'], $target_path)) { echo "The file ". basename( $_FILES['uploadedfile']['name']). 
" has been uploaded"; } else{ echo "There was an error uploading the file, please try again!"; } ?> would really appreciate it if someone could help me.

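    One likely reason the published percentage stays at 0: the copy loop above reads from the stream twice per pass and adds the recomputed bufferSize (which for the last, or only, chunk is already 0) to sentBytes, so for an image that fits in one buffer nothing is ever counted. A minimal corrected copy loop, reusing the variable names from the code above, might look like this sketch:

      // Inside doFileUpload(): read once per pass, count the bytes actually
      // written, and publish progress from that count.
      byte[] buffer = new byte[maxBufferSize];
      int bytesRead;
      long sentBytes = 0;
      while ((bytesRead = fileInputStream.read(buffer)) != -1) {
          outputStream.write(buffer, 0, bytesRead);    // write exactly what was read
          sentBytes += bytesRead;                      // count written bytes, not buffer size
          publishProgress((int) (sentBytes * 100 / fileSize));
      }
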
    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview ·        With HPC 2008 R2 SP1 You can add Azure worker roles as compute nodes in a local Windows HPC Server cluster. ·        The subscription for Windows Azure like any other Azure Service - charged for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes. ·        Win-Win ? - Azure charges the computer hour cost (according to vm size) amortized over a month – so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud. ·        Blob storage is used to hold input & output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues. ·        this is not the end of the story for Azure Grid Computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a 3rd party from doing this and encapsulating exposing HPC WCF Broker Service to you for managing compute nodes? If MS doesn’t  provide head node as a service, someone else will! Process ·        requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure + an availability policy for the worker nodes. ·        After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs. ·        A Windows Azure worker role instance runs a HPC compatible Azure guest operating system which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - All role instances defined by your service will run on the guest operating system version that you specify. see Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549). ·        use the hpcpack command to upload file packages and install files to run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Requirements ·        assuming you have an azure subscription account and the HPC head node installed and configured. ·        Install HPC Pack 2008 R2 SP 1 -  see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812). ·        Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported). ·        Configure the firewall - allow outbound TCP traffic on the following ports: 80,       443, 5901, 5902, 7998, 7999 ·        Note: HPC Server  uses Admin Mode (Elevated Privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes. ·        Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate -a valid X.509 certificate with a key size of at least 2048 bits. 
Generate a self-sign certificate & upload a .cer file to the Windows Azure Portal Account page > Manage my API Certificates link. see Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526). ·        import the certificate with an associated private key on the HPC cluster head node - into the trusted root store of the local computer account. Obtain Windows Azure Connection Information for HPC Server ·        required for each worker node template ·        copy from azure portal - Get from: navigation pane > Hosted Services > Storage Accounts & CDN ·        Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In Properties pane. ·        Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane. ·        Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane. ·        Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane. Import the Azure Subscription Certificate on the HPC Head Node ·        enable the services for Windows HPC Server  to authenticate properly with the Windows Azure subscription. ·        use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password. ·        see Certificates (http://go.microsoft.com/fwlink/?LinkId=163918). ·        To open the certificates snapin: Run > mmc. File > Add/Remove Snap-in > certificates > Computer account > Local Computer ·        To import the certificate via wizard - Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import ·        After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status. Configure a Proxy Client on the HPC Head Node ·        the following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker. ·        Create a Windows Azure Worker Node Template ·        Edit HPC node templates in HPC Node Template Editor. ·        Specify: 1) Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster + 2)worker node availability policy – rules for deploying / removing worker role instances in Windows Azure o   HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New à Create Node Template Wizard or Edit à Node Template Editor o   Choose Node Template Type page - Windows Azure worker node template o   Specify Template Name page – template name & description o   Provide Connection Information page – Azure Subscription ID (text) & Subscription certificate (browse) o   Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get list of available from azure – possible LRT). 
o   Configure Azure Availability Policy page - how Windows Azure worker nodes start / stop (online / offline the worker role instance -  add / remove) – manual / automatic o   for automatic - In the Configure Windows Azure Worker Availability Policy dialog -select days and hours for worker nodes to start / stop. ·        To validate the Windows Azure connection information, on the template's Connection Information tab > Validate connection information. ·        You can upload a file package to the storage account that is specified in the template - eg upload application or service files that will run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Add Azure Worker Nodes to the HPC Cluster ·        Use the Add Node Wizard – specify: 1) the worker node template, 2) The number of worker nodes   (within the quota of role instances in the azure subscription), and 3)           The VM size of the worker nodes : ExtraSmall, Small, Medium, Large, or ExtraLarge.  ·        to add worker nodes of different sizes, must run the Add Node Wizard separately for each size. ·        All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes. ·        To add Windows Azure worker nodes o   In HPC Cluster Manager: Node Management > Actions pane > Add Node à Add Node Wizard o   Select Deployment Method page - Add Azure Worker nodes o   Specify New Nodes page - select a worker node template, specify the number and size of the worker nodes ·        After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online. ·        Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN – this is non-configurable. Deploying Windows Azure Worker Nodes ·        To deploy the role instances in Windows Azure - start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure Worker Node Template – Azure Availability Policy -  to be automatic or manual. ·        The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates). ·        ·          Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure. ·        To start worker nodes manually and bring them online o   In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view - select one or more worker nodes. o   Actions pane > Start – in the Start Azure Worker Nodes dialog, select a node template. o   the state of the worker nodes changes from Not Deployed to track the provisioning progress – worker node Details Pane > Provisioning Log tab. 
o   If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes. o   After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online. ·        Troubleshooting o   check node template. o   use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999 o   check node status - Deployment status information appears in the service account information in the Windows Azure Portal - HPC queries this -  see  node status information for any failed nodes in HPC Node Management. ·        When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). ·        to remove a set of role instances in Windows Azure - stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed. ·        Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.

    Read the article

  • Connecting Linux to WatchGuard Firebox SSL (OpenVPN client)

    Recently, I got a new project assignment that requires a permanent connection to the customer's network through VPN. They are using a so-called SSL VPN. As I have been using OpenVPN within my company's network for more than 5 years, I was quite curious about their solution and how it would actually be different from OpenVPN. Well, short version: It is a disguised version of OpenVPN. Unfortunately, the company only offers a client for Windows and Mac OS, which shouldn't bother any Linux user after all. OpenVPN is part of every recent distribution and can be activated in a couple of minutes - both client as well as server (if necessary).
WatchGuard Firebox SSL - About dialog
Borrowing some files from a Windows client installation
Initially, I didn't know about the product, so I went through the installation on Windows 8. No obstacles (and no restart despite installation of TAP device drivers!) here, and the secured VPN channel was up and running in less than 2 minutes or so. Much appreciated by both parties - customer and me. Of course, this whole client package and my long-established, stable installation ignited my interest to have a closer look at the WatchGuard client. Compared to the original OpenVPN client (okay, I have to admit this was years ago) this commercial product is smarter in terms of file locations during installation. You'll be able to access the configuration and key files below your roaming application data folder. To get there, simply enter '%AppData%\WatchGuard\Mobile VPN' in your Windows/File Explorer and confirm with Enter/Return. This will display the following files:
Application folder below user profile with configuration and certificate files
From there we are going to borrow four files, namely:
ca.crt
client.crt
client.ovpn
client.pem
and transfer them to the Linux system. You might also be able to isolate those four files from a Mac OS client. Frankly, I'm just too lazy to run the WatchGuard client installation on a Mac mini only to find the folder location, and I'm going to describe why a little bit further down this article. I know that you can do that! Feedback in the comment section is appreciated.
Configuration of OpenVPN (console)
Depending on your distribution the following steps might be a little different, but in general you should be able to get the important information from them. I'm going to describe the steps on Ubuntu 13.04 (Raring Ringtail). As usual, there are two possibilities to achieve your goal: console and UI. Let's see what needs to be done. First of all, you should ensure that you have OpenVPN installed on your system. Open your favourite terminal application and run the following statement:
$ sudo apt-get install openvpn network-manager-openvpn network-manager-openvpn-gnome
Just to be on the safe side. The four files mentioned above from your Windows machine can be copied anywhere, but either place them below your own user directory or put them (as root) below the default directory:
/etc/openvpn
At this stage you would already be able to do a test run. Just in case, run the following command and check the output (it's similar information to what you would get from the 'View Logs...' context menu entry in Windows):
$ sudo openvpn --config client.ovpn
Pay attention to the correct path to your configuration and certificate files. OpenVPN will ask you to enter your Auth Username and Auth Password in order to establish the VPN connection, same as the Windows client.
Remote server and user authentication to establish the VPN
Please complete the test run and see whether all went well. You can disconnect by pressing Ctrl+C.
Simplifying your life - authentication file
In my case, I actually set up the OpenVPN client on my gateway/router. This establishes a VPN channel between my network and my client's network and allows me to switch machines easily without having to install the WatchGuard client on each and every machine. That's also very handy for my various virtualised Windows machines. Anyway, as the client configuration, key and certificate files are located on a headless system somewhere under the roof, it is mandatory to have an automatic connection to the remote site. For that you should first change the file extension '.ovpn' to '.conf', which is the default extension for OpenVPN on Linux systems, and then open the client configuration file in order to extend an existing line.
$ sudo mv client.ovpn client.conf
$ sudo nano client.conf
You should have content similar to this:
dev tun
client
proto tcp-client
ca ca.crt
cert client.crt
key client.pem
tls-remote "/O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server"
remote-cert-eku "TLS Web Server Authentication"
remote 1.2.3.4 443
persist-key
persist-tun
verb 3
mute 20
keepalive 10 60
cipher AES-256-CBC
auth SHA1
float 1
reneg-sec 3660
nobind
mute-replay-warnings
auth-user-pass auth.txt
Note: I changed the IP address of the remote directive above (which should be obvious, right?). Anyway, the required change is the last line - the existing 'auth-user-pass' directive extended with the file name 'auth.txt' - and we have to create that new authentication file. You can give the 'auth-user-pass' directive any file name you'd like. Due to my existing OpenVPN infrastructure my setup differs completely from the content above, but for the sake of simplicity I just keep it 'as-is'. Okay, let's create this file 'auth.txt':
$ sudo nano auth.txt
and just put two lines of information in it - username on the first, and password on the second line, like so:
myvpnusername
verysecretpassword
Store the file, change permissions, and call openvpn with your configuration file again:
$ sudo chmod 0600 auth.txt
$ sudo openvpn --config client.conf
This should now work without being prompted to enter username and password. In case you placed your files below the system-wide location /etc/openvpn, you can also operate your VPNs via the service command, like so:
$ sudo service openvpn start client
$ sudo service openvpn stop client
Using Network Manager
For newer Linux users or the ones with 'console-phobia', I'm going to describe now how to use Network Manager to set up the OpenVPN client. For this, move your mouse to the systray area and click on Network Connections => VPN Connections => Configure VPNs..., which opens your Network Connections dialog. Alternatively, use the HUD and enter 'Network Connections'.
Network connections overview in Ubuntu
Click on the 'Add' button. On the next dialog select 'Import a saved VPN configuration...' from the dropdown list and click on 'Create...'
Choose connection type to import VPN configuration
Now navigate to the folder where you put the client files from the Windows system and open the 'client.ovpn' file.
Next, on the tab 'VPN' proceed with the following steps (the corresponding directives from the configuration file are given in parentheses):
General
Check the IP address of Gateway ('remote' - we used 1.2.3.4 in this setup)
Authentication
Change Type to 'Password with Certificates (TLS)' ('auth-user-pass')
Enter User name to access your client keys (Auth Name: myvpnusername)
Enter Password (Auth Password: verysecretpassword) and choose your password handling
Browse for your User Certificate ('cert' - should be pre-selected with client.crt)
Browse for your CA Certificate ('ca' - should be filled as ca.crt)
Specify your Private Key ('key' - here: client.pem)
Then click on the 'Advanced...' button and check the following values:
Use custom gateway port: 443 (second value of the 'remote' directive)
Check the selected value of Cipher ('cipher')
Check HMAC Authentication ('auth')
Enter the Subject Match: /O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server ('tls-remote')
Finally, confirm and close all dialogs. You should be able to establish your OpenVPN-WatchGuard connection via Network Manager. For that, click on the 'VPN Connections => client' entry of your Network Manager in the systray. It is advised to keep an eye on the syslog to see whether there are any problematic issues that would require additional attention.
Advanced topic: routing
As stated above, I'm running the 'WatchGuard client for Linux' on my headless server, which in effect establishes a secure communication channel between two networks. In order to enable your network clients to get access to machines on the remote side, there are two possibilities:
Proper routing on both sides of the connection, which enables access in both directions, or
Network masquerading on the 'client side' of the connection
In the following, I'm going to describe the second option in a little more detail. The Linux system that I'm using is already configured as a gateway to the internet. I won't explain the necessary steps to do that, and will only focus on the additional tweaks I had to do. You can find tons of very good instructions and tutorials on 'How to set up a Linux gateway/router' - just use Google.
OK, back to the actual modifications. First, we need some information about the network topology and the IP address range used on the 'other' side. We can get this very easily from /var/log/syslog after we established the OpenVPN channel, like so:
$ sudo tail -n20 /var/log/syslog
Or, if your system is quite busy with logging, like so:
$ sudo less /var/log/syslog | grep ovpn
The output should contain a PUSH_REPLY message similar to the following one:
Jul 23 23:13:28 ios1 ovpn-client[789]: PUSH: Received control message: 'PUSH_REPLY,topology subnet,route 192.168.1.0 255.255.255.0,dhcp-option DOMAIN ,route-gateway 192.168.6.1,topology subnet,ping 10,ping-restart 60,ifconfig 192.168.6.2 255.255.255.0'
The interesting part for us is the route entry in the sample PUSH_REPLY. Depending on your remote server there might be multiple networks defined (172.16.x.x and/or 10.x.x.x). Important: The IP address ranges on both sides of the connection have to be different, otherwise you will have to shuffle IPs or increase the netmask.
After the VPN connection is established, we have to extend the rules for iptables in order to route and masquerade IP packets properly.
I created a shell script to take care of those steps:
#!/bin/sh -e
IPTABLES=/sbin/iptables
DEV_LAN=eth0
DEV_VPNS=tun+
VPN=192.168.1.0/24

$IPTABLES -A FORWARD -i $DEV_LAN -o $DEV_VPNS -d $VPN -j ACCEPT
$IPTABLES -A FORWARD -i $DEV_VPNS -o $DEV_LAN -s $VPN -j ACCEPT
$IPTABLES -t nat -A POSTROUTING -o $DEV_VPNS -d $VPN -j MASQUERADE
I'm using the wildcard interface 'tun+' because I have multiple client configurations for OpenVPN on my server. In your case, it might be sufficient to specify the device 'tun0' only.
Simplifying your life - automatic connect on boot
Now that the client connection works flawlessly and the routing and iptables configuration is okay, we might consider adding another 'laziness' factor to our setup. Due to kernel updates or other circumstances it might be necessary to reboot your system. Wouldn't it be nice if the VPN connections were established during the boot procedure? Yes, of course it would be. To achieve this, we have to configure OpenVPN to automatically start our VPNs via the init script. Let's have a look at the responsible 'default' file and adjust the settings accordingly.
$ sudo nano /etc/default/openvpn
which should have content similar to this:
# This is the configuration file for /etc/init.d/openvpn
#
# Start only these VPNs automatically via init script.
# Allowed values are "all", "none" or space separated list of
# names of the VPNs. If empty, "all" is assumed.
# The VPN name refers to the VPN configutation file name.
# i.e. "home" would be /etc/openvpn/home.conf
#AUTOSTART="all"
#AUTOSTART="none"
#AUTOSTART="home office"
#
# ... more information which remains unmodified ...
With the OpenVPN client configuration as described above, you would either set AUTOSTART to "all" or to "client" to enable automatic start of your VPN(s) during boot. You should also take care that your iptables commands are executed after the link has been established, too. You can easily test this configuration without a reboot, like so:
$ sudo service openvpn restart
Enjoy stable VPN connections between your Linux system(s) and a WatchGuard Firebox SSL remote server.
Cheers, JoKi
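One way to address the remark above about running the iptables commands only after the link has been established: OpenVPN can call a script itself once the tunnel device is up, via its 'up' directive. A minimal sketch, assuming the masquerading script from the article is saved as /etc/openvpn/vpn-firewall.sh (path and file name are placeholders):

# additions to /etc/openvpn/client.conf
script-security 2
up /etc/openvpn/vpn-firewall.sh

The script has to be executable (sudo chmod +x /etc/openvpn/vpn-firewall.sh); OpenVPN will then run it every time the tunnel comes up, including at boot.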

    Read the article

  • httpd keeps crashing without any reference to why in the logs

    - by Fred
    I have the logs set to debug in the hopes of tracking down what's causing the crash, but I can't find anything. Here is the error_log. [Thu Jan 06 10:27:35 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 19999 for (*) [Thu Jan 06 14:47:04 2011] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Thu Jan 06 14:47:04 2011] [info] Init: Seeding PRNG with 256 bytes of entropy [Thu Jan 06 14:47:04 2011] [info] Init: Generating temporary RSA private keys (512/1024 bits) [Thu Jan 06 14:47:04 2011] [info] Init: Generating temporary DH parameters (512/1024 bits) [Thu Jan 06 14:47:04 2011] [info] Init: Initializing (virtual) servers for SSL [Thu Jan 06 14:47:04 2011] [info] Server: Apache/2.2.3, Interface: mod_ssl/2.2.3, Library: OpenSSL/0.9.8e-fips-rhel5 [Thu Jan 06 14:47:04 2011] [notice] Digest: generating secret for digest authentication ... [Thu Jan 06 14:47:04 2011] [notice] Digest: done [Thu Jan 06 14:47:04 2011] [debug] util_ldap.c(2021): LDAP merging Shared Cache conf: shm=0xb9dc2480 rmm=0xb9dc24b0 for VHOST: server.fredfinn.com [Thu Jan 06 14:47:04 2011] [info] APR LDAP: Built with OpenLDAP LDAP SDK [Thu Jan 06 14:47:04 2011] [info] LDAP: SSL support available [Thu Jan 06 14:47:05 2011] [info] Init: Seeding PRNG with 256 bytes of entropy [Thu Jan 06 14:47:05 2011] [info] Init: Generating temporary RSA private keys (512/1024 bits) [Thu Jan 06 14:47:05 2011] [info] Init: Generating temporary DH parameters (512/1024 bits) [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(374): shmcb_init allocated 512000 bytes of shared memory [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(554): entered shmcb_init_memory() [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(576): for 512000 bytes, recommending 4266 indexes [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(619): shmcb_init_memory choices follow [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(621): division_mask = 0x1F [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(623): division_offset = 64 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(625): division_size = 15998 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(627): queue_size = 1604 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(629): index_num = 133 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(631): index_offset = 8 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(633): index_size = 12 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(635): cache_data_offset = 8 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(637): cache_data_size = 14386 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(650): leaving shmcb_init_memory() [Thu Jan 06 14:47:05 2011] [info] Shared memory session cache initialised [Thu Jan 06 14:47:05 2011] [info] Init: Initializing (virtual) servers for SSL [Thu Jan 06 14:47:05 2011] [info] Server: Apache/2.2.3, Interface: mod_ssl/2.2.3, Library: OpenSSL/0.9.8e-fips-rhel5 [Thu Jan 06 14:47:05 2011] [warn] pid file /etc/httpd/run/httpd.pid overwritten -- Unclean shutdown of previous Apache run? 
[Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26527 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26527 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26528 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26528 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26529 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26529 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26530 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26530 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26532 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26532 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26533 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26533 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26534 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26534 for (*) [Thu Jan 06 14:47:05 2011] [notice] Apache/2.2.3 (CentOS) configured -- resuming normal operations [Thu Jan 06 14:47:05 2011] [info] Server built: Aug 30 2010 12:32:08 [Thu Jan 06 14:47:05 2011] [debug] prefork.c(991): AcceptMutex: sysvsem (default: sysvsem) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26531 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26531 for (*) The logs are setup as: ErrorLog logs/error_log LogLevel debug LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent CustomLog logs/access_log common CustomLog logs/access_log combined ServerSignature On
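Since nothing shows up in the error_log, it is worth checking whether the kernel killed the process (out of memory) or a module segfaulted, and letting Apache leave a core dump behind. A rough sketch of the usual checks on a CentOS box like this one (paths and the sysconfig step may differ on other distributions):

# Did the OOM killer or a segfault take httpd down?
dmesg | grep -iE 'killed process|out of memory|segfault'
grep -iE 'segfault|httpd' /var/log/messages | tail -n 50

# Allow core dumps so the next crash leaves something to analyse
mkdir -p /tmp/apache-cores && chown apache:apache /tmp/apache-cores
echo 'ulimit -c unlimited' >> /etc/sysconfig/httpd     # sourced by the init script
# then add to httpd.conf:  CoreDumpDirectory /tmp/apache-cores
service httpd restart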

    Read the article

  • Implement OAuth in Java

    - by phineas
    I made an an attempt to implement OAuth for my programming idea in Java, but I failed miserably. I don't know why, but my code doesn't work. Every time I run my program, an IOException is thrown with the reason "java.io.IOException: Server returned HTTP response code: 401" (401 means Unauthorized). I had a close look at the docs, but I really don't understand why it doesn't work. My OAuth provider I wanted to use is twitter, where I've registered my app, too. Thanks in advance phineas OAuth docs Twitter API wiki Class Base64Coder import java.io.InputStreamReader; import java.io.BufferedReader; import java.io.OutputStream; import java.io.IOException; import java.io.UnsupportedEncodingException; import java.net.URL; import java.net.URLEncoder; import java.net.URLConnection; import java.net.MalformedURLException; import javax.crypto.Mac; import javax.crypto.spec.SecretKeySpec; import java.security.NoSuchAlgorithmException; import java.security.InvalidKeyException; public class Request { public static String read(String url) { StringBuffer buffer = new StringBuffer(); try { /** * get the time - note: value below zero * the millisecond value is used for oauth_nonce later on */ int millis = (int) System.currentTimeMillis() * -1; int time = (int) millis / 1000; /** * Listing of all parameters necessary to retrieve a token * (sorted lexicographically as demanded) */ String[][] data = { {"oauth_callback", "SOME_URL"}, {"oauth_consumer_key", "MY_CONSUMER_KEY"}, {"oauth_nonce", String.valueOf(millis)}, {"oauth_signature", ""}, {"oauth_signature_method", "HMAC-SHA1"}, {"oauth_timestamp", String.valueOf(time)}, {"oauth_version", "1.0"} }; /** * Generation of the signature base string */ String signature_base_string = "POST&"+URLEncoder.encode(url, "UTF-8")+"&"; for(int i = 0; i < data.length; i++) { // ignore the empty oauth_signature field if(i != 3) { signature_base_string += URLEncoder.encode(data[i][0], "UTF-8") + "%3D" + URLEncoder.encode(data[i][1], "UTF-8") + "%26"; } } // cut the last appended %26 signature_base_string = signature_base_string.substring(0, signature_base_string.length()-3); /** * Sign the request */ Mac m = Mac.getInstance("HmacSHA1"); m.init(new SecretKeySpec("CONSUMER_SECRET".getBytes(), "HmacSHA1")); m.update(signature_base_string.getBytes()); byte[] res = m.doFinal(); String sig = String.valueOf(Base64Coder.encode(res)); data[3][1] = sig; /** * Create the header for the request */ String header = "OAuth "; for(String[] item : data) { header += item[0]+"=\""+item[1]+"\", "; } // cut off last appended comma header = header.substring(0, header.length()-2); System.out.println("Signature Base String: "+signature_base_string); System.out.println("Authorization Header: "+header); System.out.println("Signature: "+sig); String charset = "UTF-8"; URLConnection connection = new URL(url).openConnection(); connection.setDoInput(true); connection.setDoOutput(true); connection.setRequestProperty("Accept-Charset", charset); connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded;charset=" + charset); connection.setRequestProperty("Authorization", header); connection.setRequestProperty("User-Agent", "XXXX"); OutputStream output = connection.getOutputStream(); output.write(header.getBytes(charset)); BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream())); String read; while((read = reader.readLine()) != null) { buffer.append(read); } } catch(Exception e) { e.printStackTrace(); } return buffer.toString(); } public static void 
main(String[] args) { System.out.println(Request.read("http://api.twitter.com/oauth/request_token")); } }
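For comparison, here is the same request-token step using the Signpost OAuth library, which builds the nonce, timestamp, signature base string and Authorization header for you - a sketch with placeholder credentials, assuming the signpost-core jar is on the classpath. (In the hand-rolled code above, the usual suspects for a 401 are the signature base string - URLEncoder does not produce RFC 3986 percent-encoding - and the Authorization header being written into the request body.)

import oauth.signpost.OAuthConsumer;
import oauth.signpost.OAuthProvider;
import oauth.signpost.basic.DefaultOAuthConsumer;
import oauth.signpost.basic.DefaultOAuthProvider;

public class RequestTokenExample {
    public static void main(String[] args) throws Exception {
        // consumer key/secret from the Twitter application registration (placeholders)
        OAuthConsumer consumer = new DefaultOAuthConsumer("MY_CONSUMER_KEY", "MY_CONSUMER_SECRET");

        // Twitter's OAuth 1.0a endpoints
        OAuthProvider provider = new DefaultOAuthProvider(
                "http://api.twitter.com/oauth/request_token",
                "http://api.twitter.com/oauth/access_token",
                "http://api.twitter.com/oauth/authorize");

        // fetches the request token and returns the URL the user has to visit
        String authUrl = provider.retrieveRequestToken(consumer, "SOME_CALLBACK_URL");
        System.out.println("Send the user to: " + authUrl);
    }
}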

    Read the article

  • iPhone web service access

    - by malleswar
    Hi, I have .net webservice methods login and summary. After getting the result from login, I need to show second view and need to call summary method. I am following this tutorial. http://icodeblog.com/2008/11/03/iphone-programming-tutorial-intro-to-soap-web-services/ I created two new classes loginaccess.h and loginaccess.m. @implementation LoginAccess @synthesize ResultString,webData, soapResults, xmlParser; -(NSString*)LoginCheck:(NSString*)userName:(NSString*)pwd { NSString *soapMessage = [NSString stringWithFormat: @"\n" "\n" "\n" "\n" "%@" "%@" "" "\n" "\n",userName,pwd ]; NSLog(soapMessage); NSURL *url = [NSURL URLWithString:@"http://www.XXXXXXXXX.com/service.asmx"]; NSMutableURLRequest *theRequest = [NSMutableURLRequest requestWithURL:url]; NSString *msgLength = [NSString stringWithFormat:@"%d", [soapMessage length]]; [theRequest addValue: @"text/xml; charset=utf-8" forHTTPHeaderField:@"Content-Type"]; [theRequest addValue: @"http://XXXXXXXXm/Login" forHTTPHeaderField:@"SOAPAction"]; [theRequest addValue: msgLength forHTTPHeaderField:@"Content-Length"]; [theRequest setHTTPMethod:@"POST"]; [theRequest setHTTPBody: [soapMessage dataUsingEncoding:NSUTF8StringEncoding]]; NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self]; if( theConnection ) { webData = [[NSMutableData data] retain]; } else { NSLog(@"theConnection is NULL"); } //[nameInput resignFirstResponder]; return ResultString; } -(void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response { [webData setLength: 0]; } -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data { [webData appendData:data]; } -(void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error { NSLog(@"ERROR with theConenction"); [connection release]; [webData release]; } -(void)connectionDidFinishLoading:(NSURLConnection *)connection { NSLog(@"DONE. Received Bytes: %d", [webData length]); NSString *theXML = [[NSString alloc] initWithBytes: [webData mutableBytes] length:[webData length] encoding:NSUTF8StringEncoding]; NSLog(theXML); [theXML release]; if( xmlParser ) { [xmlParser release]; } xmlParser = [[NSXMLParser alloc] initWithData: webData]; [xmlParser setDelegate: self]; [xmlParser setShouldResolveExternalEntities: YES]; [xmlParser parse]; [connection release]; [webData release]; } -(void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *) namespaceURI qualifiedName:(NSString *)qName attributes: (NSDictionary *)attributeDict { if( [elementName isEqualToString:@"LoginResult"]) { if(!soapResults) { soapResults = [[NSMutableString alloc] init]; } recordResults = TRUE; } } -(void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string { if( recordResults ) { [soapResults appendString: string]; } } -(void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName { if( [elementName isEqualToString:@"LoginResult"]) { recordResults = FALSE; ResultString = soapResults; NSLog(@"Login"); [VariableStore setStr:ResultString]; NSLog(soapResults); [soapResults release]; soapResults = nil; } } @end I am calling LoginCheck method and based on result I want to show the second view. Here after finishing of the button touch down event, it enter into did end element, so I am always getting nil value. If I use the same code controller it works fine as I push second view controller in didendelement. 
Please give me some samples that show how to place the web service calls in a different class and how to call them from view controllers. Regards, Malleswar
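A common pattern for this: keep the SOAP call and the NSURLConnection/NSXMLParser delegate methods in a helper class, and hand the result back to the view controller through a delegate protocol; the view controller then pushes the summary screen only when the result has actually arrived, not in the button handler. A rough sketch with hypothetical class names (SummaryViewController etc.), not tied to the actual .NET service above:

// LoginAccess.h
@protocol LoginAccessDelegate <NSObject>
- (void)loginDidFinishWithResult:(NSString *)result;
- (void)loginDidFailWithError:(NSError *)error;
@end

@interface LoginAccess : NSObject
@property (nonatomic, assign) id<LoginAccessDelegate> delegate;   // assign: pre-ARC delegate convention
- (void)loginWithUserName:(NSString *)userName password:(NSString *)pwd;
@end

// LoginAccess.m - where the parser finishes the LoginResult element,
// notify the delegate instead of returning a value synchronously:
//   [self.delegate loginDidFinishWithResult:soapResults];

// In the view controller (which adopts LoginAccessDelegate):
- (void)loginDidFinishWithResult:(NSString *)result {
    if ([result isEqualToString:@"success"]) {                    // placeholder check
        SummaryViewController *summary = [[SummaryViewController alloc] init];
        [self.navigationController pushViewController:summary animated:YES];
        [summary release];                                        // manual retain/release, as in the question
    }
}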

    Read the article

  • Why doesn't PHP's oci_connect return false?

    - by absolethe
    I have a situation in which we have two production databases that synchronize with one other. Server One is considered the primary. Sometimes due to maintenance or a disaster Server Two will become primary. In some of our code that means we have to manually go in and edit the server name for database connections. I find this annoying, so the last thing I wrote I put the server information for both and set up a loop. If oci_connect failed on the Server One 3 times it would move on to Server Two. If Server Two failed 3 times it would notify the user a connection couldn't be made. This has worked fine most times we've had the situation of switching the servers. Yesterday, for example, it worked fine. Today it didn't. It just sat and spun endlessly. No error in the PHP error log. No failure to move on from. No error output to the screen. Nothing for 5 minutes. So then I had to manually edit the stupid config file. I asked what could possibly be different and I was told "yesterday the database was down, but not the server. today the server is down." Okay...? But I don't see a distinction. I would expect oci_connect to return false if it can't establish any sort of communication with the server. I'd expect it to timeout and error. Not just pass it on when it receives an error code from the server. What if there's a network problem, for example? Is this a bug in oci_connect or is there a possibility that something in our PHP configuration gives oci_connect a crazily long timeout? If it is a sort of "bug" is there some way I can check to see if the server is up first? Like a ping? (Of course when I did a ping through the command prompt I got a response from Server One and then was told, "it's back now" although I am skeptical about the timing on that.) Anyway, if anyone could shed some light on why oci_connect might run endlessly without failing and how to keep it from doing so I'd be grateful. -- Edit: My code looks like the examples on PHP.net only in some loops. $count = count($servers); for($i = 0; $i < $count; $i++){ if((!isset($connection)) || ($connection == false)){ // Attempt to connect to the oracle database $connection = @oci_connect($servers[$i]["user"], $servers[$i]["pass"], $servers[$i]["conid"]) or ($conn_error = oracle_error()); // Try again if there was a failure if(($connection == false) || (isset($con_error))){ // Three (two more) tries per alternative for($j = $st; $j < $fn; $j++){ // Try again to connect $connection = @oci_connect($servers[$i]["user"], $servers[$i]["pass"], $servers[$i]["conid"]) or ($conn_error = oracle_error()); } // for($j = 2; $j < 4; $j++) } // if($connection == false) } // if(!isset($connection) || ($connection == false)) } // for($i = 0; $i < $count; $i++)
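Regarding the 'ping' idea at the end of the question: one way to keep oci_connect from hanging is to first check, with a short timeout, whether the Oracle listener even accepts a TCP connection, and only then call oci_connect. A sketch, assuming the default listener port 1521 and a hypothetical "host" entry in the $servers array (the code above only stores a connection identifier, so the host name would have to be added or parsed out):

<?php
// Returns true if a TCP connection to the listener succeeds within $timeout seconds.
function server_reachable($host, $port = 1521, $timeout = 5) {
    $sock = @fsockopen($host, $port, $errno, $errstr, $timeout);
    if ($sock === false) {
        return false;
    }
    fclose($sock);
    return true;
}

// Inside the outer loop, before the first oci_connect attempt:
if (!server_reachable($servers[$i]["host"])) {
    continue;   // move on to the next server instead of waiting on oci_connect
}
?>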

    Read the article

  • Ardour won't start - JACK problem

    - by Drew S
    I downloaded Ardour yesterday, it worked, edited an audio file done. Come back today it wont start I get this: Ardour could not start JACK There are several possible reasons: 1) You requested audio parameters that are not supported.. 2) JACK is running as another user. Please consider the possibilities, and perhaps try different parameters. So I try and look at qjackctl to see what happening there. When I try to start JACK I get D-BUS: JACK server could not be started. then Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info. and this is the message box in JACK. 15:22:12.927 Patchbay deactivated. 15:22:12.927 Statistics reset. 15:22:12.944 ALSA connection change. 15:22:12.951 D-BUS: Service is available (org.jackaudio.service aka jackdbus). Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started 15:22:12.959 ALSA connection graph change. 15:22:45.850 ALSA connection graph change. 15:22:46.021 ALSA connection change. 15:22:56.492 ALSA connection graph change. 15:22:56.624 ALSA connection change. 15:23:42.340 D-BUS: JACK server could not be started. Sorry Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started Wed Oct 23 15:23:42 2013: Starting jack server... Wed Oct 23 15:23:42 2013: JACK server starting in realtime mode with priority 10 Wed Oct 23 15:23:42 2013: ERROR: Cannot lock down 82274202 byte memory area (Cannot allocate memory) Wed Oct 23 15:23:42 2013: Acquired audio card Audio0 Wed Oct 23 15:23:42 2013: creating alsa driver ... hw:0|hw:0|1024|2|44100|0|0|nomon|swmeter|-|32bit Wed Oct 23 15:23:42 2013: ERROR: ATTENTION: The playback device "hw:0" is already in use. The following applications are using your soundcard(s) so you should check them and stop them as necessary before trying to start JACK again: pulseaudio (process ID 2553) Wed Oct 23 15:23:42 2013: ERROR: Cannot initialize driver Wed Oct 23 15:23:42 2013: ERROR: JackServer::Open failed with -1 Wed Oct 23 15:23:42 2013: ERROR: Failed to open server Wed Oct 23 15:23:43 2013: Saving settings to "/home/drew/.config/jack/conf.xml" ... 15:26:41.669 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info. Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started 15:26:49.006 D-BUS: JACK server could not be started. Sorry Wed Oct 23 15:26:48 2013: Starting jack server... Wed Oct 23 15:26:48 2013: JACK server starting in non-realtime mode Wed Oct 23 15:26:48 2013: ERROR: Cannot lock down 82274202 byte memory area (Cannot allocate memory) Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started Wed Oct 23 15:26:48 2013: ERROR: cannot register object path "/org/freedesktop/ReserveDevice1/Audio0": A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0 Wed Oct 23 15:26:48 2013: ERROR: Failed to acquire device name : Audio0 error : A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0 Wed Oct 23 15:26:48 2013: ERROR: Audio device hw:0 cannot be acquired... 
Wed Oct 23 15:26:48 2013: ERROR: Cannot initialize driver Wed Oct 23 15:26:48 2013: ERROR: JackServer::Open failed with -1 Wed Oct 23 15:26:48 2013: ERROR: Failed to open server Wed Oct 23 15:26:50 2013: Saving settings to "/home/drew/.config/jack/conf.xml" ... 15:26:52.441 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info. Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started 15:26:55.997 D-BUS: JACK server could not be started. Sorry Wed Oct 23 15:26:55 2013: Starting jack server... Wed Oct 23 15:26:55 2013: JACK server starting in non-realtime mode Wed Oct 23 15:26:55 2013: ERROR: Cannot lock down 82274202 byte memory area (Cannot allocate memory) Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started Wed Oct 23 15:26:55 2013: ERROR: cannot register object path "/org/freedesktop/ReserveDevice1/Audio0": A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0 Wed Oct 23 15:26:55 2013: ERROR: Failed to acquire device name : Audio0 error : A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0 Wed Oct 23 15:26:55 2013: ERROR: Audio device hw:0 cannot be acquired... Wed Oct 23 15:26:55 2013: ERROR: Cannot initialize driver Wed Oct 23 15:26:55 2013: ERROR: JackServer::Open failed with -1 Wed Oct 23 15:26:55 2013: ERROR: Failed to open server Wed Oct 23 15:26:57 2013: Saving settings to "/home/drew/.config/jack/conf.xml" ... 15:26:59.054 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info. Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started 15:29:24.624 ALSA connection graph change. 15:29:24.641 ALSA connection change. 15:33:11.760 D-BUS: JACK server could not be started. Sorry Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started Wed Oct 23 15:33:11 2013: Starting jack server... Wed Oct 23 15:33:11 2013: JACK server starting in non-realtime mode Wed Oct 23 15:33:11 2013: ERROR: Cannot lock down 82274202 byte memory area (Cannot allocate memory) Wed Oct 23 15:33:11 2013: ERROR: cannot register object path "/org/freedesktop/ReserveDevice1/Audio0": A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0 Wed Oct 23 15:33:11 2013: ERROR: Failed to acquire device name : Audio0 error : A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0 Wed Oct 23 15:33:11 2013: ERROR: Audio device hw:0 cannot be acquired... Wed Oct 23 15:33:11 2013: ERROR: Cannot initialize driver Wed Oct 23 15:33:11 2013: ERROR: JackServer::Open failed with -1 Wed Oct 23 15:33:11 2013: ERROR: Failed to open server Wed Oct 23 15:33:12 2013: Saving settings to "/home/drew/.config/jack/conf.xml" ... 15:34:09.439 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info. Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started
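The messages above point at two separate problems: PulseAudio is holding the playback device hw:0 (process ID 2553), and the user is not allowed to lock enough memory for realtime use ("Cannot lock down ... byte memory area"). A rough sketch of the usual fixes on Ubuntu, with package and group names assumed to be the defaults:

# 1. Keep PulseAudio away from the card while JACK runs
pasuspender -- qjackctl                # suspends PulseAudio for as long as qjackctl is open

# 2. Give your user realtime and memlock privileges
sudo usermod -a -G audio "$USER"
sudo dpkg-reconfigure -p high jackd2   # answer "yes" to enabling realtime priority;
                                       # this typically (re)creates /etc/security/limits.d/audio.conf with
                                       #   @audio - rtprio 95
                                       #   @audio - memlock unlimited
# log out and back in (or reboot) so the new group membership and limits take effect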

    Read the article

  • SQL SERVER – Data Sources and Data Sets in Reporting Services SSRS

    - by Pinal Dave
    This example is from the Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. This example is from the Beginning SSRS. Supporting files are available with a free download from the www.Joes2Pros.com web site. Connecting to Your Data? When I was a child, the telephone book was an important part of my life. Maybe I was just a nerd, but I enjoyed getting a new book every year to page through to learn about the businesses in my small town or to discover where some of my school acquaintances lived. It was also the source of maps to my town’s neighborhoods and the towns that surrounded me. To make a phone call, I would need a telephone number. In order to find a telephone number, I had to know how to use the telephone book. That seems pretty simple, but it resembles connecting to any data. You have to know where the data is and how to interact with it. A data source is the connection information that the report uses to connect to the database. You have two choices when creating a data source, whether to embed it in the report or to make it a shared resource usable by many reports. Data Sources and Data Sets A few basic terms will make the upcoming choses make more sense. What database on what server do you want to connect to? It would be better to just ask… “what is your data source?” The connection you need to make to get your reports data is called a data source. If you connected to a data source (like the JProCo database) there may be hundreds of tables. You probably only want data from just a few tables. This means you want to write a specific query against this data source. A query on a data source to get just the records you need for an SSRS report is called a Data Set. Creating a local Data Source You can connect embed a connection from your report directly to your JProCo database which (let’s say) is installed on a server named Reno. If you move JProCo to a new server named Tampa then you need to update the Data Set. If you have 10 reports in one project that were all pointing to the JProCo database on the Reno server then they would all need to be updated at once. It’s possible to make a project level Data Source and have each report use that. This means one change can fix all 10 reports at once. This would be called a Shared Data Source. Creating a Shared Data Source The best advice I can give you is to create shared data sources. The reason I recommend this is that if a database moves to a new server you will have just one place in Report Manager to make the server name change. That one change will update the connection information in all the reports that use that data source. To get started, you will start with a fresh project. Go to Start > All Programs > SQL Server 2012 > Microsoft SQL Server Data Tools to launch SSDT. Once SSDT is running, click New Project to create a new project. Once the New Project dialog box appears, fill in the form, as shown in. Be sure to select Report Server Project this time – not the wizard. Click OK to dismiss the New Project dialog box. You should now have an empty project, as shown in the Solution Explorer. A report is meant to show you data. Where is the data? The first task is to create a Shared Data Source. Right-click on the Shared Data Sources folder and choose Add New Data Source. The Shared Data Source Properties dialog box will launch where you can fill in a name for the data source. By default, it is named DataSource1. 
The best practice is to give the data source a more meaningful name. It is possible that you will have projects with more than one data source and, by naming them, you can tell one from another. Type the name JProCo for the data source name and click the Edit button to configure the database connection properties. If you take a look at the types of data sources you can choose, you will see that SSRS works with many data platforms including Oracle, XML, and Teradata. Make sure SQL Server is selected before continuing. For this post, I am assuming that you are using a local SQL Server and that you can use your Windows account to log in to the SQL Server. If, for some reason you must use SQL Server Authentication, choose that option and fill in your SQL Server account credentials. Otherwise, just accept Windows Authentication. If your database server was installed locally and with the default instance, just type in Localhost for the Server name. Select the JProCo database from the database list. At this point, the connection properties should look like. If you have installed a named instance of SQL Server, you will have to specify the server name like this: Localhost\InstanceName, replacing the InstanceName with whatever your instance name is. If you are not sure about the named instance, launch the SQL Server Configuration Manager found at Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools. If you have a named instance, the name will be shown in parentheses. A default instance of SQL Server will display MSSQLSERVER; a named instance will display the name chosen during installation. Once you get the connection properties filled in, click OK to dismiss the Connection Properties dialog box and OK again to dismiss the Shared Data Source properties. You now have a data source in the Solution Explorer. What’s next I really need to thank Kathi Kellenberger and Rick Morelan for sharing this material for this 5 day series of posts on SSRS. To get really comfortable with SSRS you will get to know the different SSDT windows, Build reports on your own (without the wizards),  Add report headers and footers, Accept user input,  create levels, charts, or even maps for visual appeal. You might be surprise to know a small 230 page book starts from the very beginning and covers the steps to do all these items. Beginning SSRS 2012 is a small easy to follow book so you can learn SSRS for less than $20. See Joes2Pros.com for more on this and other books. If you want to learn SSRS in easy to simple words – I strongly recommend you to get Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS

    Read the article

< Previous Page | 142 143 144 145 146 147 148 149 150 151 152 153  | Next Page >