Search Results

Search found 17749 results on 710 pages for 'connection pool'.


  • How to Connect Crystal Reports to MySQL directly by C# code without DSN or a DataSet

    - by Yanko Hernández Alvarez
    How can I connect a Crystal Report (VS 2008 basic) to a MySQL DB without using a DSN or a preloaded DataSet using C#? I need to install the program in several places, so I must change the connection parameters. I don't want to create a DSN in every place, nor do I want to preload a DataSet and pass it to the report engine. I use NHibernate to access the database, so creating and filling the additional DS would take twice the work and additional maintenance later. I think the best option would be to let the Crystal Reports engine connect to the MySQL server by itself using ODBC. I managed to create the connection in the report designer (VS2008) using the Database Expert, creating an ODBC(RDO) connection and entering this connection string "DRIVER={MySQL ODBC 5.1 Driver};SERVER=myserver.mydomain" and in the "Next" page filling the "User ID", "Password" and "Database" parameters. I didn't fill the "Server" parameter. It worked. As a matter of fact, if you use the former connection string, it doesn't matter what you put in the "Server" parameter; it seems the parameter is unused. On the other hand, if you use "DRIVER={MySQL ODBC 5.1 Driver}" as a connection string and later fill the "Server" parameter with the FQDN of the server, the connection doesn't work. How can I do that by code? All the examples I've seen so far use a DSN or the DataSet method. I saw the same question posted for PostgreSQL and tried to adapt it to MySQL, but so far, no success. The first method: Rp.Load(); Rp.DataSourceConnections[0].SetConnection("DRIVER={MySQL ODBC 5.1 Driver};SERVER=myserver.mydomain", "database", "user", "pass"); Rp.ExportToDisk(ExportFormatType.PortableDocFormat, "report.pdf"); raises a CrystalDecisions.CrystalReports.Engine.LogOnException during ExportToDisk: Message="Logon failed.\nDetails: IM002:[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified.\rError in File temporal file path.rpt:\nUnable to connect: incorrect log on parameters. The InnerException is a System.Runtime.InteropServices.COMException with the same message and no InnerException. The "no default driver specified" makes me wonder if the server parameter is unused here too (see above). In that case: how can I specify the connection string? I haven't tried the second method because it doesn't apply. Does anybody know the solution?
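    A commonly suggested alternative to DataSourceConnections is to push the ODBC connection into every table of the report via ApplyLogOnInfo before exporting. The sketch below is untested against this exact report; whether the DRIVER=... string is accepted in ServerName mirrors the designer trick described above and is an assumption, and the credentials and file names are placeholders taken from the question.

        using CrystalDecisions.CrystalReports.Engine;
        using CrystalDecisions.Shared;

        static class ReportRunner
        {
            // Sketch: apply a DSN-less ODBC(RDO) connection to each table of the report.
            public static void Export(string reportPath)
            {
                var report = new ReportDocument();
                report.Load(reportPath);

                var connInfo = new ConnectionInfo
                {
                    ServerName = "DRIVER={MySQL ODBC 5.1 Driver};SERVER=myserver.mydomain",
                    DatabaseName = "database",
                    UserID = "user",
                    Password = "pass"
                };

                foreach (Table table in report.Database.Tables)
                {
                    TableLogOnInfo logOnInfo = table.LogOnInfo;
                    logOnInfo.ConnectionInfo = connInfo;
                    table.ApplyLogOnInfo(logOnInfo);   // repeat for subreport tables if any
                }

                report.ExportToDisk(ExportFormatType.PortableDocFormat, "report.pdf");
            }
        }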


  • Variables are "lost" somewhere, then reappear... all with no error thrown.

    - by rob - not a robber
    Hi All, I'm probably going to make myself look like a fool with my horrible scripting but here we go. I have a form that I am collecting a bunch of checkbox info from using a binary method. ON/SET=1 !ISSET=0 Anyway, all seems to be going as planned except for the query bit. When I run the script, it runs through and throws no errors, but it's not doing what I think I am telling it to. I've hard coded the values in the query and left them in the script and it DOES update the DB. I've also tried echoing all the needed variables after the script runs and exiting right after so I can audit them... and they're all there. Here's an example. ####FEATURES RECORD UPDATE ### HERE I DECIDE TO RUN THE SCRIPT BASED ON WHETHER AN IMAGE BUTTON WAS USED if (isset($_POST["button_x"])) { ### HERE I AM ASSIGNING 1 OR 0 TO A VAR BASED ON WHTER THE CHECKBOX WAS SET if (isset($_POST["pool"])) $pool=1; if (!isset($_POST["pool"])) $pool=0; if (isset($_POST["darts"])) $darts=1; if (!isset($_POST["darts"])) $darts=0; if (isset($_POST["karaoke"])) $karaoke=1; if (!isset($_POST["karaoke"])) $karaoke=0; if (isset($_POST["trivia"])) $trivia=1; if (!isset($_POST["trivia"])) $trivia=0; if (isset($_POST["wii"])) $wii=1; if (!isset($_POST["wii"])) $wii=0; if (isset($_POST["guitarhero"])) $guitarhero=1; if (!isset($_POST["guitarhero"])) $guitarhero=0; if (isset($_POST["megatouch"])) $megatouch=1; if (!isset($_POST["megatouch"])) $megatouch=0; if (isset($_POST["arcade"])) $arcade=1; if (!isset($_POST["arcade"])) $arcade=0; if (isset($_POST["jukebox"])) $jukebox=1; if (!isset($_POST["jukebox"])) $jukebox=0; if (isset($_POST["dancefloor"])) $dancefloor=1; if (!isset($_POST["dancefloor"])) $dancefloor=0; ### I'VE DONE LOADS OF PERMUTATIONS HERE... HARD SET THE 1/0 VARS AND LEFT THE $estab_id TO BE PICKED UP. SET THE $estab_id AND LEFT THE COLUMN DATA TO BE PICKED UP. ALL NO GOOD. IT _DOES_ WORK IF I HARD SET ALL VARS THOUGH mysql_query("UPDATE thedatabase SET pool_table='$pool', darts='$darts', karoke='$karaoke', trivia='$trivia', wii='$wii', megatouch='$megatouch', guitar_hero='$guitarhero', arcade_games='$arcade', dancefloor='$dancefloor' WHERE establishment_id='22'"); ###WEIRD THING HERE IS IF I ECHO THE VARS AT THIS POINT AND THEN EXIT(); they all show up as intended. header("location:theadminfilething.php"); exit(); THANKS ALL!!!
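    Independent of why the values go missing, each pair of isset()/!isset() checks can be collapsed with the ternary operator, and a prepared statement avoids the quoting problems of interpolated SQL. The sketch below uses mysqli as an assumption (the original code uses the old mysql_* API); the connection credentials are placeholders and the column list mirrors the query in the question.

        <?php
        // Map each checkbox to 1 when posted, 0 when absent.
        $fields = array('pool', 'darts', 'karaoke', 'trivia', 'wii',
                        'megatouch', 'guitarhero', 'arcade', 'dancefloor');
        $v = array();
        foreach ($fields as $f) {
            $v[$f] = isset($_POST[$f]) ? 1 : 0;
        }

        $estab_id = 22;

        // Hypothetical mysqli connection; replace host/user/pass/db with real values.
        $db = new mysqli('localhost', 'user', 'pass', 'thedatabase');
        $stmt = $db->prepare(
            'UPDATE thedatabase SET pool_table=?, darts=?, karoke=?, trivia=?, wii=?, ' .
            'megatouch=?, guitar_hero=?, arcade_games=?, dancefloor=? WHERE establishment_id=?'
        );
        $stmt->bind_param('iiiiiiiiii',
            $v['pool'], $v['darts'], $v['karaoke'], $v['trivia'], $v['wii'],
            $v['megatouch'], $v['guitarhero'], $v['arcade'], $v['dancefloor'], $estab_id);
        $stmt->execute();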


  • Cannot install Apache Web Server on Ubuntu, Amazon WS

    - by Eugene Retunsky
    I enter command apt-get install apache2 --fix-missing (under the root user) and this is what I receive: Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap ssl-cert Suggested packages: apache2-doc apache2-suexec apache2-suexec-custom openssl-blacklist The following NEW packages will be installed: apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap ssl-cert 0 upgraded, 10 newly installed, 0 to remove and 36 not upgraded. Need to get 2,945 kB/3,141 kB of archives. After this operation, 10.4 MB of additional disk space will be used. Do you want to continue [Y/n]? y Err http://us-west-1.ec2.archive.ubuntu.com/ubuntu/ oneiric-updates/main apache2.2-bin i386 2.2.20-1ubuntu1.1 404 Not Found [IP: 10.161.51.124 80] Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2.2-bin i386 2.2.20-1ubuntu1.1 404 Not Found [IP: 91.189.92.167 80] Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2-utils i386 2.2.20-1ubuntu1.1 404 Not Found [IP: 91.189.92.167 80] Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2.2-common i386 2.2.20-1ubuntu1.1 404 Not Found [IP: 91.189.92.167 80] Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2-mpm-worker i386 2.2.20-1ubuntu1.1 404 Not Found [IP: 91.189.92.167 80] Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2 i386 2.2.20-1ubuntu1.1 404 Not Found [IP: 91.189.92.167 80] Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2.2-bin_2.2.20-1ubuntu1.1_i386.deb 404 Not Found [IP: 91.189.92.167 80] Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2-utils_2.2.20-1ubuntu1.1_i386.deb 404 Not Found [IP: 91.189.92.167 80] Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2.2-common_2.2.20-1ubuntu1.1_i386.deb 404 Not Found [IP: 91.189.92.167 80] Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2-mpm-worker_2.2.20-1ubuntu1.1_i386.deb 404 Not Found [IP: 91.189.92.167 80] Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2_2.2.20-1ubuntu1.1_i386.deb 404 Not Found [IP: 91.189.92.167 80] Unable to correct missing packages. E: Aborting install. Any help is appreciated.
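    The 404s usually mean the local package index is stale and points at package versions that have since been superseded on the mirror; refreshing the index before installing normally resolves it:

        sudo apt-get update
        sudo apt-get install apache2

    If the 404s persist after an update, switching to another mirror in /etc/apt/sources.list is the usual fallback.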


  • Am I Leaking ADO.NET Connections?

    - by HardCode
    Here is an example of my code in a DAL. All calls to the database's Stored Procedures are structured this way, and there is no in-line SQL. Friend Shared Function Save(ByVal s As MyClass) As Boolean Dim cn As SqlClient.SqlConnection = Dal.Connections.MyAppConnection Dim cmd As New SqlClient.SqlCommand Try cmd.Connection = cn cmd.CommandType = CommandType.StoredProcedure cmd.CommandText = "proc_save_my_class" cmd.Parameters.AddWithValue("@param1", s.Foo) cmd.Parameters.AddWithValue("@param2", s.Bar) Return True Finally Dal.Utility.CleanupAdoObjects(cmd, cn) End Try End Function Here is the Connection factory (if I am using the correct term): Friend Shared Function MyAppConnection() As SqlClient.SqlConnection Dim cn As New SqlClient.SqlConnection(ConfigurationManager.ConnectionStrings("MyConnectionString").ToString) cn.Open() If cn.State <> ConnectionState.Open Then ' CriticalException is a custom object inheriting from Exception. Throw New CriticalException("Could not connect to the database.") Else Return cn End If End Function Here is the Dal.Utility.CleaupAdoObjects() function: Friend Shared Sub CleanupAdoObjects(ByVal cmd As SqlCommand, ByVal cn As SqlConnection) If cmd IsNot Nothing Then cmd.Dispose() If cn IsNot Nothing AndAlso cn.State <> ConnectionState.Closed Then cn.Close() End Sub I am getting a lot of "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." error messages reported by the users. The application's DAL opens a connection, reads or saves data, and closes it. No connections are ever left open - intentionally! There is nothing obvious on the Windows 2000 Server hosting the SQL Server 2000 that would indicate a problem. Nothing in the Event Logs and nothing in the SQL Server logs. The timeouts happen randomly - I cannot reproduce. It happens early in the day with only 1 to 5 users in the system. It also happens with around 50 users in the system. The most connections to SQL Server via Performance Monitor, for all databases, has been about 74. The timeouts happen in code that both saves to, and reads from, the database in different parts of the application. The stack trace does not point to one or two offending DAL functions. It's happened in many different places. Does my ADO.NET code appear to be able to leak connections? I've goolged around a bit, and I've read that if the connection pool fills up, this can happen. However, I'm not explicitly setting any connection pooling. I've even tried to increase the Connection Timeout in the connection string, but timeouts happen long before the 300 second (5 minute) value: <add name="MyConnectionString" connectionString="Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=SSPI;Connection Timeout=300;"/> I'm at a total loss already as to what is causing these Timeout issues. Any ideas are appreciated.
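    Nothing in the snippet executes the command, and disposal depends on CleanupAdoObjects being reached; wrapping the connection and command in Using blocks guarantees they are disposed (and the connection returned to the pool) even when an exception is thrown. A sketch that mirrors the original names (MyClass and proc_save_my_class are the question's placeholders):

        ' Let Using dispose the command and connection deterministically.
        Friend Shared Function Save(ByVal s As MyClass) As Boolean
            Using cn As SqlClient.SqlConnection = Dal.Connections.MyAppConnection()
                Using cmd As New SqlClient.SqlCommand()
                    cmd.Connection = cn
                    cmd.CommandType = CommandType.StoredProcedure
                    cmd.CommandText = "proc_save_my_class"
                    cmd.Parameters.AddWithValue("@param1", s.Foo)
                    cmd.Parameters.AddWithValue("@param2", s.Bar)
                    cmd.ExecuteNonQuery() ' the original snippet never executes the command
                    Return True
                End Using
            End Using
        End Function

    If the timeouts continue, the usual culprit for "Timeout expired" is pool exhaustion caused by connections opened elsewhere that are never disposed, so the same pattern is worth applying across the whole DAL (the State check in the factory is also effectively redundant, since Open() throws on failure).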


  • Java multiple connections downloading file

    - by weulerjunior
    Hello friends, I was wanting to add multiple connections in the code below to be able to download files faster. Could someone help me? Thanks in advance. public void run() { RandomAccessFile file = null; InputStream stream = null; try { // Open connection to URL. HttpURLConnection connection = (HttpURLConnection) url.openConnection(); // Specify what portion of file to download. connection.setRequestProperty("Range", "bytes=" + downloaded + "-"); // Connect to server. connection.connect(); // Make sure response code is in the 200 range. if (connection.getResponseCode() / 100 != 2) { error(); } // Check for valid content length. int contentLength = connection.getContentLength(); if (contentLength < 1) { error(); } /* Set the size for this download if it hasn't been already set. */ if (size == -1) { size = contentLength; stateChanged(); } // Open file and seek to the end of it. file = new RandomAccessFile("C:\\"+getFileName(url), "rw"); file.seek(downloaded); stream = connection.getInputStream(); while (status == DOWNLOADING) { /* Size buffer according to how much of the file is left to download. */ byte buffer[]; if (size - downloaded > MAX_BUFFER_SIZE) { buffer = new byte[MAX_BUFFER_SIZE]; } else { buffer = new byte[size - downloaded]; } // Read from server into buffer. int read = stream.read(buffer); if (read == -1) { break; } // Write buffer to file. file.write(buffer, 0, read); downloaded += read; stateChanged(); } /* Change status to complete if this point was reached because downloading has finished. */ if (status == DOWNLOADING) { status = COMPLETE; stateChanged(); } } catch (Exception e) { error(); } finally { // Close file. if (file != null) { try { file.close(); } catch (Exception e) { } } // Close connection to server. if (stream != null) { try { stream.close(); } catch (Exception e) { } } } }
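    Splitting the file into ranges and downloading each range on its own thread is the usual approach; the server must answer Range requests with 206 Partial Content. A sketch of one segment worker - segment boundaries, the target path and error handling are placeholders, and the fields of the original class (status, stateChanged(), ...) are left out for brevity:

        import java.io.InputStream;
        import java.io.RandomAccessFile;
        import java.net.HttpURLConnection;
        import java.net.URL;

        // Downloads bytes [start, end] of 'url' into 'target' at the matching offset.
        class SegmentDownloader implements Runnable {
            private final URL url;
            private final String target;
            private final long start, end;

            SegmentDownloader(URL url, String target, long start, long end) {
                this.url = url; this.target = target; this.start = start; this.end = end;
            }

            public void run() {
                try {
                    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                    conn.setRequestProperty("Range", "bytes=" + start + "-" + end);
                    conn.connect();
                    try (InputStream in = conn.getInputStream();
                         RandomAccessFile file = new RandomAccessFile(target, "rw")) {
                        file.seek(start);                    // write at this segment's offset
                        byte[] buffer = new byte[8192];
                        int read;
                        while ((read = in.read(buffer)) != -1) {
                            file.write(buffer, 0, read);
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();                     // real code should retry/report
                }
            }
        }

    Divide the total size into N contiguous ranges and submit one SegmentDownloader per range to an ExecutorService; each worker opens its own RandomAccessFile handle, so they can write concurrently.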


  • WebSockets authentication

    - by Tomi
    What are the possible ways to authenticate a user when a WebSocket connection is used? Example scenario: a web-based multi-user chat application over an encrypted WebSocket connection. How can I ensure (or guarantee) that each connection in this application belongs to a certain authenticated user and cannot be exploited by user impersonation for the lifetime of the connection?
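    One common pattern is ticket-based authentication: the user logs in over ordinary HTTP(S) first, the authenticated session requests a short-lived single-use ticket, the client opens the WebSocket (over wss://) and sends that ticket as its first message, and the server binds the socket to the user when the ticket is redeemed. A framework-agnostic sketch - class and method names are illustrative, not from any particular library:

        import java.util.Map;
        import java.util.UUID;
        import java.util.concurrent.ConcurrentHashMap;

        class TicketAuthenticator {
            private final Map<String, String> ticketToUser = new ConcurrentHashMap<>();

            /** Called from the authenticated HTTP side after login. */
            String issueTicket(String userName) {
                String ticket = UUID.randomUUID().toString();
                ticketToUser.put(ticket, userName);
                return ticket;
            }

            /** Called with the first WebSocket message; returns the user or null. */
            String redeemTicket(String ticket) {
                return ticketToUser.remove(ticket); // single use: removed on redemption
            }
        }

    Tickets should also expire after a few seconds; as long as the socket runs over TLS and tickets are unguessable, a connection cannot be taken over simply by claiming someone else's identity.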


  • Alert on gridview edit based on permission

    - by Vicky
    I have a GridView with an edit option at the start of each row. I also maintain a separate table called Permission where I store user permissions. There are three different permission types: Admin, Leads and Programmers, and all three have access to the GridView. If anyone other than an Admin clicks the edit option, I need to show an alert like "This row has important validation, make sure you make proper changes." When I edit, the action happens on a table called Application. The table has a column called Comments, and the alert should appear only when they try to edit rows where the Comments column has one of these values: ManLog datas, Funding Approved, Exported Applications. My try so far: public bool IsApplicationUser(string userName) { return CheckUser(userName); } public static bool CheckUser(string userName) { string CS = ConfigurationManager.ConnectionStrings["ConnectionString"].ToString(); DataTable dt = new DataTable(); using (SqlConnection connection = new SqlConnection(CS)) { SqlCommand command = new SqlCommand(); command.Connection = connection; string strquery = "select * from Permissions where AppCode='Nest' and UserID = '" + userName + "'"; SqlCommand cmd = new SqlCommand(strquery, connection); SqlDataAdapter da = new SqlDataAdapter(cmd); da.Fill(dt); } if (dt.Rows.Count >= 1) return true; else return true; } protected void Details_RowCommand(object sender, GridViewCommandEventArgs e) { string currentUser = HttpContext.Current.Request.LogonUserIdentity.Name; string str = ConfigurationManager.ConnectionStrings["ConnectionString"].ToString(); string[] words = currentUser.Split('\\'); currentUser = words[1]; bool appuser = IsApplicationUser(currentUser); if (appuser) { DataSet ds = new DataSet(); using (SqlConnection connection = new SqlConnection(str)) { SqlCommand command = new SqlCommand(); command.Connection = connection; string strquery = "select Role_Cd from User_Role where AppCode='PM' and UserID = '" + currentUser + "'"; SqlCommand cmd = new SqlCommand(strquery, connection); SqlDataAdapter da = new SqlDataAdapter(cmd); da.Fill(ds); } if (e.CommandName.Equals("Edit") && ds.Tables[0].Rows[0]["Role_Cd"].ToString().Trim() != "ADMIN") { int index = Convert.ToInt32(e.CommandArgument); GridView gvCurrentGrid = (GridView)sender; GridViewRow row = gvCurrentGrid.Rows[index]; string strID = ((Label)row.FindControl("lblID")).Text; string strAppName = ((Label)row.FindControl("lblAppName")).Text; Response.Redirect("AddApplication.aspx?ID=" + strID + "&AppName=" + strAppName + "&Edit=True"); } } } Kindly let me know if I need to add something. Thanks for any suggestions.
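    For a purely client-side warning, one option is to attach a JavaScript confirm() to the Edit link while each row is bound, only for non-admin users and only for the sensitive Comments values. The sketch below assumes the Edit control is the auto-generated LinkButton in the first cell, that the grid's OnRowDataBound is wired to this handler, and that an isAdmin flag has already been resolved from User_Role - all assumptions about the page. (Replacing the string-concatenated SQL with parameterized queries is also advisable.)

        protected void Details_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType != DataControlRowType.DataRow) return;

            // Hypothetical binding: the data item exposes a "Comments" field.
            string comments = DataBinder.Eval(e.Row.DataItem, "Comments") as string ?? "";
            bool sensitive = comments == "ManLog datas"
                          || comments == "Funding Approved"
                          || comments == "Exported Applications";

            if (sensitive && !isAdmin)
            {
                // Auto-generated Edit LinkButton sits in the first cell of the row.
                LinkButton edit = (LinkButton)e.Row.Cells[0].Controls[0];
                edit.OnClientClick =
                    "return confirm('This row has important validation. " +
                    "Make sure you make proper changes.');";
            }
        }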


  • Problem using generics in function

    - by JAVA
    Hi all, I have this functions and need to make it one function. The only difference is type of input variable sourceColumnValue. This variable can be String or Integer but the return value of function must be always Integer. I know I need to use Generics but can't do it. public Integer selectReturnInt(String tableName, String sourceColumnName, String sourceColumnValue, String targetColumnName) { Integer returned = null; String query = "SELECT "+targetColumnName+" FROM "+tableName+" WHERE "+sourceColumnName+"='"+sourceColumnValue+"' LIMIT 1"; try { Connection connection = ConnectionManager.getInstance().open(); java.sql.Statement statement = connection.createStatement(); statement.execute(query.toString()); ResultSet rs = statement.getResultSet(); while(rs.next()){ returned = rs.getInt(targetColumnName); } rs.close(); statement.close(); ConnectionManager.getInstance().close(connection); } catch (SQLException e) { System.out.println("???????? ?? ???? ?? ???? ?????????!"); System.out.println(e); } return returned; } // SELECT (RETURN INTEGER) public Integer selectIntReturnInt(String tableName, String sourceColumnName, Integer sourceColumnValue, String targetColumnName) { Integer returned = null; String query = "SELECT "+targetColumnName+" FROM "+tableName+" WHERE "+sourceColumnName+"='"+sourceColumnValue+"' LIMIT 1"; try { Connection connection = ConnectionManager.getInstance().open(); java.sql.Statement statement = connection.createStatement(); statement.execute(query.toString()); ResultSet rs = statement.getResultSet(); while(rs.next()){ returned = rs.getInt(targetColumnName); } rs.close(); statement.close(); ConnectionManager.getInstance().close(connection); } catch (SQLException e) { System.out.println("???????? ?? ???? ?? ???? ?????????!"); System.out.println(e); } return returned; }
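    Because the only difference between the two overloads is the type of sourceColumnValue, a single generic method covers both; binding the value through PreparedStatement.setObject() also removes the hand-rolled quoting (and the SQL injection risk that comes with it). A sketch reusing the question's ConnectionManager:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        // One method for String, Integer or any other bindable value type.
        public <T> Integer selectReturnInt(String tableName, String sourceColumnName,
                                           T sourceColumnValue, String targetColumnName) {
            Integer returned = null;
            String query = "SELECT " + targetColumnName + " FROM " + tableName
                         + " WHERE " + sourceColumnName + " = ? LIMIT 1";
            try {
                Connection connection = ConnectionManager.getInstance().open();
                PreparedStatement statement = connection.prepareStatement(query);
                statement.setObject(1, sourceColumnValue); // handles String and Integer alike
                ResultSet rs = statement.executeQuery();
                if (rs.next()) {
                    returned = rs.getInt(targetColumnName);
                }
                rs.close();
                statement.close();
                ConnectionManager.getInstance().close(connection);
            } catch (SQLException e) {
                System.out.println(e);
            }
            return returned;
        }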


  • Can't install git on Ubuntu 12.10

    - by Lucas Windir
    I'm following these instructions to install git on my laptop: http://git-scm.com/download/linux When I do: $ sudo apt-get install git-core This is what my terinal shows: Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: libasprintf0c2:i386 libcroco3:i386 libgettextpo0:i386 libgomp1:i386 libunistring0:i386 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: git git-man liberror-perl Suggested packages: git-daemon-run git-daemon-sysvinit git-doc git-el git-arch git-cvs git-svn git-email git-gui gitk gitweb The following NEW packages will be installed: git git-core git-man liberror-perl 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded. Need to get 6,825 kB of archives. After this operation, 15.3 MB of additional disk space will be used. Do you want to continue [Y/n]? y WARNING: The following packages cannot be authenticated! liberror-perl git-man git git-core Install these packages without verification [y/N]? E: Some packages could not be authenticated lucas@lucas-Inspiron-N5050:~$ sudo apt-get install git-core Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: libasprintf0c2:i386 libcroco3:i386 libgettextpo0:i386 libgomp1:i386 libunistring0:i386 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: git git-man liberror-perl Suggested packages: git-daemon-run git-daemon-sysvinit git-doc git-el git-arch git-cvs git-svn git-email git-gui gitk gitweb The following NEW packages will be installed: git git-core git-man liberror-perl 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded. Need to get 6,825 kB of archives. After this operation, 15.3 MB of additional disk space will be used. Do you want to continue [Y/n]? y WARNING: The following packages cannot be authenticated! liberror-perl git-man git git-core Install these packages without verification [y/N]? 
y Err http://py.archive.ubuntu.com/ubuntu/ quantal/main liberror-perl all 0.17-1 Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Err http://py.archive.ubuntu.com/ubuntu/ quantal/main git-man all 1:1.7.10.4-1ubuntu1 Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Err http://py.archive.ubuntu.com/ubuntu/ quantal/main git amd64 1:1.7.10.4-1ubuntu1 Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Err http://py.archive.ubuntu.com/ubuntu/ quantal/main git-core all 1:1.7.10.4-1ubuntu1 Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Failed to fetch http://py.archive.ubuntu.com/ubuntu/pool/main/libe/liberror-perl/liberror-perl_0.17-1_all.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Failed to fetch http://py.archive.ubuntu.com/ubuntu/pool/main/g/git/git-man_1.7.10.4-1ubuntu1_all.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Failed to fetch http://py.archive.ubuntu.com/ubuntu/pool/main/g/git/git_1.7.10.4-1ubuntu1_amd64.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Failed to fetch http://py.archive.ubuntu.com/ubuntu/pool/main/g/git/git-core_1.7.10.4-1ubuntu1_all.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing? How could I install git on Ubuntu 12.10? I can't even do it from the Ubuntu Software Center. Thanks in advance!
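    "Something wicked happened resolving 'py.archive.ubuntu.com'" is a DNS/mirror problem rather than a git problem, and the "cannot be authenticated" warning normally clears once the index has been refreshed successfully. A typical sequence; the sed line, which swaps the failing regional mirror for the main archive, is optional and assumes the stock sources.list layout:

        sudo sed -i 's|py.archive.ubuntu.com|archive.ubuntu.com|g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install git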


  • PPTP VPN from Ubuntu cannot connect

    - by Andrea Polci
    I'm trying to configure under Linux (Kubuntu 9.10) a VPN I already use from Windows. I installed the network-manager-pptp package and added the VPN under Network Manager. These are the parameters under "advanced" button: Authentication Methods: PAP, CHAP, MSCHAP, MSCHAP2, EAP (I also tried "MSCHAP, MSCHAP2") Use MPPE Encryption: yes Crypto: Any Use stateful encryption: no Allow BSD compression: yes Allow Deflate compression: yes Allow TCP header compression: yes Send PPP echo packets: no When I try to connnect it doesn't work and this is what I get in the system log: 2010-04-08 13:53:47 pcelena NetworkManager <info> Starting VPN service 'org.freedesktop.NetworkManager.pptp'... 2010-04-08 13:53:47 pcelena NetworkManager <info> VPN service 'org.freedesktop.NetworkManager.pptp' started (org.freedesktop.NetworkManager.pptp), PID 4931 2010-04-08 13:53:47 pcelena NetworkManager <info> VPN service 'org.freedesktop.NetworkManager.pptp' just appeared, activating connections 2010-04-08 13:53:47 pcelena pppd[4932] Plugin /usr/lib/pppd/2.4.5//nm-pptp-pppd-plugin.so loaded. 2010-04-08 13:53:47 pcelena NetworkManager <info> VPN plugin state changed: 3 2010-04-08 13:53:47 pcelena pppd[4932] pppd 2.4.5 started by root, uid 0 2010-04-08 13:53:47 pcelena NetworkManager <info> VPN connection 'MYVPN' (Connect) reply received. 2010-04-08 13:53:47 pcelena NetworkManager SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp0, iface: ppp0) 2010-04-08 13:53:47 pcelena NetworkManager SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0): no ifupdown configuration found. 2010-04-08 13:53:47 pcelena pppd[4932] Using interface ppp0 2010-04-08 13:53:47 pcelena pppd[4932] Connect: ppp0 <--> /dev/pts/2 2010-04-08 13:53:47 pcelena pptp[4934] nm-pptp-service-4931 log[main:pptp.c:314]: The synchronous pptp option is NOT activated 2010-04-08 13:53:47 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request' 2010-04-08 13:53:47 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply. 2010-04-08 13:53:47 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 1, peer's call ID 14800). 2010-04-08 13:53:48 pcelena pppd[4932] CHAP authentication succeeded 2010-04-08 13:53:48 pcelena pppd[4932] CHAP authentication succeeded 2010-04-08 13:53:48 pcelena pppd[4932] LCP terminated by peer 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:929]: Call disconnect notification received (call id 14800) 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:788]: Received Stop Control Connection Request. 
2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 4 'Stop-Control-Connection-Reply' 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[callmgr_main:pptp_callmgr.c:258]: Closing connection (shutdown) 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request' 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[callmgr_main:pptp_callmgr.c:258]: Closing connection (shutdown) 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request' 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[call_callback:pptp_callmgr.c:79]: Closing connection (call state) 2010-04-08 13:53:48 pcelena pppd[4932] Modem hangup 2010-04-08 13:53:48 pcelena pppd[4932] Connection terminated. 2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin failed: 1 2010-04-08 13:53:48 pcelena NetworkManager SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp0, iface: ppp0) 2010-04-08 13:53:48 pcelena pppd[4932] Exit. 2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin failed: 1 2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin state changed: 6 2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin state change reason: 0 2010-04-08 13:53:48 pcelena NetworkManager <WARN> connection_state_changed(): Could not process the request because no VPN connection was active. 2010-04-08 13:53:48 pcelena NetworkManager <info> Policy set 'Auto eth0' (eth0) as default for routing and DNS. 2010-04-08 13:54:01 pcelena NetworkManager <debug> [1270727641.001390] ensure_killed(): waiting for vpn service pid 4931 to exit 2010-04-08 13:54:01 pcelena NetworkManager <debug> [1270727641.001479] ensure_killed(): vpn service pid 4931 cleaned up The error that sticks out here is "pppd[4932] LCP terminated by peer". Does anyone has suggestion on what can be the problem and how to make it work?
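    "LCP terminated by peer" right after "CHAP authentication succeeded" often means plain CHAP (MD5) was negotiated; MPPE keys cannot be derived from it, so a server that requires encryption drops the link. A common workaround is to allow only MS-CHAPv2 (untick PAP, CHAP and EAP in the NetworkManager authentication list) and require 128-bit MPPE. Expressed as pppd options, a sketch of typical values - not verified against this particular server - looks like this:

        refuse-eap
        refuse-pap
        refuse-chap
        refuse-mschap
        require-mppe-128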


  • Android upload progress bar not working

    - by pieter
    I'm a beginner in Android programming and I was tryinh to upload an image to a server. I found some code here on stackoverflow, I adjusted it and it still doesn't work. The problem is my image still won't upload. edit I solved the problem, I had no rights on the folder on the server. Now I have a new problem. the progresbarr doesn't work. it keeps saying 0 % transmitted does anyone sees an error in my code? import android.app.Activity; import android.app.AlertDialog; import android.app.Dialog; import android.app.ProgressDialog; import android.content.Context; import android.content.DialogInterface; import android.graphics.Bitmap; import android.graphics.BitmapFactory; import android.location.Location; import android.location.LocationListener; import android.location.LocationManager; import android.os.AsyncTask; import android.os.Bundle; import android.util.Log; import android.view.View; import android.view.Window; import android.widget.Button; import android.widget.ImageView; import java.io.DataInputStream; import java.io.DataOutputStream; import java.io.File; import java.io.FileInputStream; import java.io.IOException; import java.net.HttpURLConnection; import java.net.URL; public class PreviewActivity extends Activity { /** The captured image file. Get it's path from the starting intent */ private File mImage; public static final String EXTRA_IMAGE_PATH = "extraImagePath" /** Log tag */ private static final String TAG = "DFH"; /** Progress dialog id */ private static final int UPLOAD_PROGRESS_DIALOG = 0; private static final int UPLOAD_ERROR_DIALOG = 1; private static final int UPLOAD_SUCCESS_DIALOG = 2; /** Handler to confirm button */ private Button mConfirm; /** Handler to cancel button */ private Button mCancel; /** Uploading progress dialog */ private ProgressDialog mDialog; /** * Called when the activity is created * * We load the captured image, and register button callbacks */ @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); requestWindowFeature(Window.FEATURE_INDETERMINATE_PROGRESS); setContentView(R.layout.preview); setResult(RESULT_CANCELED); // Import image Bundle extras = getIntent().getExtras(); String imagePath = extras.getString(FotoActivity.EXTRA_IMAGE_PATH); Log.d("DFHprev", imagePath); mImage = new File(imagePath); if (mImage.exists()) { setResult(RESULT_OK); loadImage(mImage); } registerButtonCallbacks(); } @Override protected void onPause() { super.onPause(); } /** * Register callbacks for ui buttons */ protected void registerButtonCallbacks() { // Cancel button callback mCancel = (Button) findViewById(R.id.preview_send_cancel); mCancel.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { PreviewActivity.this.finish(); } }); // Confirm button callback mConfirm = (Button) findViewById(R.id.preview_send_confirm); mConfirm.setEnabled(true); mConfirm.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { new UploadImageTask().execute(mImage); } }); } /** * Initialize the dialogs */ @Override protected Dialog onCreateDialog(int id) { switch(id) { case UPLOAD_PROGRESS_DIALOG: mDialog = new ProgressDialog(this); mDialog.setProgressStyle(ProgressDialog.STYLE_HORIZONTAL); mDialog.setCancelable(false); mDialog.setTitle(getString(R.string.progress_dialog_title_connecting)); return mDialog; case UPLOAD_ERROR_DIALOG: AlertDialog.Builder builder = new AlertDialog.Builder(this); builder.setTitle(R.string.upload_error_title) .setIcon(android.R.drawable.ic_dialog_alert) 
.setMessage(R.string.upload_error_message) .setCancelable(false) .setPositiveButton(getString(R.string.retry), new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int which) { PreviewActivity.this.finish(); } }); return builder.create(); case UPLOAD_SUCCESS_DIALOG: AlertDialog.Builder success = new AlertDialog.Builder(this); success.setTitle(R.string.upload_success_title) .setIcon(android.R.drawable.ic_dialog_info) .setMessage(R.string.upload_success_message) .setCancelable(false) .setPositiveButton(getString(R.string.success), new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int which) { PreviewActivity.this.finish(); } }); return success.create(); default: return null; } } /** * Prepare the progress dialog */ @Override protected void onPrepareDialog(int id, Dialog dialog) { switch(id) { case UPLOAD_PROGRESS_DIALOG: mDialog.setProgress(0); mDialog.setTitle(getString(R.string.progress_dialog_title_connecting)); } } /** * Load the image file into the imageView * * @param image */ protected void loadImage(File image) { Bitmap bm = BitmapFactory.decodeFile(image.getPath()); ImageView view = (ImageView) findViewById(R.id.preview_image); view.setImageBitmap(bm); } /** * Asynchronous task to upload file to server */ class UploadImageTask extends AsyncTask<File, Integer, Boolean> { /** Upload file to this url */ private static final String UPLOAD_URL = "http://www.xxxx.x/xxxx/fotos"; /** Send the file with this form name */ private static final String FIELD_FILE = "file"; /** * Prepare activity before upload */ @Override protected void onPreExecute() { super.onPreExecute(); setProgressBarIndeterminateVisibility(true); mConfirm.setEnabled(false); mCancel.setEnabled(false); showDialog(UPLOAD_PROGRESS_DIALOG); } /** * Clean app state after upload is completed */ @Override protected void onPostExecute(Boolean result) { super.onPostExecute(result); setProgressBarIndeterminateVisibility(false); mConfirm.setEnabled(true); mDialog.dismiss(); if (result) { showDialog(UPLOAD_SUCCESS_DIALOG); } else { showDialog(UPLOAD_ERROR_DIALOG); } } @Override protected Boolean doInBackground(File... image) { return doFileUpload(image[0], "UPLOAD_URL"); } @Override protected void onProgressUpdate(Integer... 
values) { super.onProgressUpdate(values); if (values[0] == 0) { mDialog.setTitle(getString(R.string.progress_dialog_title_uploading)); } mDialog.setProgress(values[0]); } private boolean doFileUpload(File file, String uploadUrl) { HttpURLConnection connection = null; DataOutputStream outputStream = null; DataInputStream inputStream = null; String pathToOurFile = file.getPath(); String urlServer = "http://www.xxxx.x/xxxx/upload.php"; String lineEnd = "\r\n"; String twoHyphens = "--"; String boundary = "*****"; // log pathtoourfile Log.d("DFHinUpl", pathToOurFile); int bytesRead, bytesAvailable, bufferSize; byte[] buffer; int maxBufferSize = 1*1024*1024; int sentBytes = 0; long fileSize = file.length(); // log filesize String files= String.valueOf(fileSize); String buffers= String.valueOf(maxBufferSize); Log.d("fotosize",files); Log.d("buffers",buffers); try { FileInputStream fileInputStream = new FileInputStream(new File(pathToOurFile) ); URL url = new URL(urlServer); connection = (HttpURLConnection) url.openConnection(); // Allow Inputs & Outputs connection.setDoInput(true); connection.setDoOutput(true); connection.setUseCaches(false); // Enable POST method connection.setRequestMethod("POST"); connection.setRequestProperty("Connection", "Keep-Alive"); connection.setRequestProperty("Content-Type", "multipart/form-data;boundary="+boundary); outputStream = new DataOutputStream( connection.getOutputStream() ); outputStream.writeBytes(twoHyphens + boundary + lineEnd); outputStream.writeBytes("Content-Disposition: form-data; name=\"uploadedfile\";filename=\"" + pathToOurFile +"\"" + lineEnd); outputStream.writeBytes(lineEnd); bytesAvailable = fileInputStream.available(); bufferSize = Math.min(bytesAvailable, maxBufferSize); buffer = new byte[bufferSize]; // Read file bytesRead = fileInputStream.read(buffer, 0, bufferSize); while (bytesRead > 0) { outputStream.write(buffer, 0, bufferSize); bytesAvailable = fileInputStream.available(); bufferSize = Math.min(bytesAvailable, maxBufferSize); bytesRead = fileInputStream.read(buffer, 0, bufferSize); sentBytes += bufferSize; publishProgress((int)(sentBytes * 100 / fileSize)); bytesAvailable = fileInputStream.available(); bufferSize = Math.min(bytesAvailable, maxBufferSize); bytesRead = fileInputStream.read(buffer, 0, bufferSize); } outputStream.writeBytes(lineEnd); outputStream.writeBytes(twoHyphens + boundary + twoHyphens + lineEnd); // Responses from the server (code and message) int serverResponseCode = connection.getResponseCode(); String serverResponseMessage = connection.getResponseMessage(); fileInputStream.close(); outputStream.flush(); outputStream.close(); try { int responseCode = connection.getResponseCode(); return responseCode == 200; } catch (IOException ioex) { Log.e("DFHUPLOAD", "Upload file failed: " + ioex.getMessage(), ioex); return false; } catch (Exception e) { Log.e("DFHUPLOAD", "Upload file failed: " + e.getMessage(), e); return false; } } catch (Exception ex) { String msg= ex.getMessage(); Log.d("DFHUPLOAD", msg); } return true; } } } the PHP code that handles this upload is following: <?php $date=getdate(); $urldate=$date['year'].$date['month'].$date['month'].$date['hours'].$date['minutes'].$date[ 'seconds']; $target_path = "./"; $target_path = $target_path . basename( $_FILES['uploadedfile']['name']) . $urldate; if(move_uploaded_file($_FILES['uploadedfile']['tmp_name'], $target_path)) { echo "The file ". basename( $_FILES['uploadedfile']['name']). 
" has been uploaded"; } else{ echo "There was an error uploading the file, please try again!"; } ?> would really appreciate it if someone could help me.


  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview ·        With HPC 2008 R2 SP1 You can add Azure worker roles as compute nodes in a local Windows HPC Server cluster. ·        The subscription for Windows Azure like any other Azure Service - charged for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes. ·        Win-Win ? - Azure charges the computer hour cost (according to vm size) amortized over a month – so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud. ·        Blob storage is used to hold input & output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues. ·        this is not the end of the story for Azure Grid Computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a 3rd party from doing this and encapsulating exposing HPC WCF Broker Service to you for managing compute nodes? If MS doesn’t  provide head node as a service, someone else will! Process ·        requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure + an availability policy for the worker nodes. ·        After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs. ·        A Windows Azure worker role instance runs a HPC compatible Azure guest operating system which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - All role instances defined by your service will run on the guest operating system version that you specify. see Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549). ·        use the hpcpack command to upload file packages and install files to run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Requirements ·        assuming you have an azure subscription account and the HPC head node installed and configured. ·        Install HPC Pack 2008 R2 SP 1 -  see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812). ·        Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported). ·        Configure the firewall - allow outbound TCP traffic on the following ports: 80,       443, 5901, 5902, 7998, 7999 ·        Note: HPC Server  uses Admin Mode (Elevated Privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes. ·        Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate -a valid X.509 certificate with a key size of at least 2048 bits. 
Generate a self-sign certificate & upload a .cer file to the Windows Azure Portal Account page > Manage my API Certificates link. see Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526). ·        import the certificate with an associated private key on the HPC cluster head node - into the trusted root store of the local computer account. Obtain Windows Azure Connection Information for HPC Server ·        required for each worker node template ·        copy from azure portal - Get from: navigation pane > Hosted Services > Storage Accounts & CDN ·        Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In Properties pane. ·        Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane. ·        Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane. ·        Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane. Import the Azure Subscription Certificate on the HPC Head Node ·        enable the services for Windows HPC Server  to authenticate properly with the Windows Azure subscription. ·        use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password. ·        see Certificates (http://go.microsoft.com/fwlink/?LinkId=163918). ·        To open the certificates snapin: Run > mmc. File > Add/Remove Snap-in > certificates > Computer account > Local Computer ·        To import the certificate via wizard - Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import ·        After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status. Configure a Proxy Client on the HPC Head Node ·        the following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker. ·        Create a Windows Azure Worker Node Template ·        Edit HPC node templates in HPC Node Template Editor. ·        Specify: 1) Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster + 2)worker node availability policy – rules for deploying / removing worker role instances in Windows Azure o   HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New à Create Node Template Wizard or Edit à Node Template Editor o   Choose Node Template Type page - Windows Azure worker node template o   Specify Template Name page – template name & description o   Provide Connection Information page – Azure Subscription ID (text) & Subscription certificate (browse) o   Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get list of available from azure – possible LRT). 
o   Configure Azure Availability Policy page - how Windows Azure worker nodes start / stop (online / offline the worker role instance -  add / remove) – manual / automatic o   for automatic - In the Configure Windows Azure Worker Availability Policy dialog -select days and hours for worker nodes to start / stop. ·        To validate the Windows Azure connection information, on the template's Connection Information tab > Validate connection information. ·        You can upload a file package to the storage account that is specified in the template - eg upload application or service files that will run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Add Azure Worker Nodes to the HPC Cluster ·        Use the Add Node Wizard – specify: 1) the worker node template, 2) The number of worker nodes   (within the quota of role instances in the azure subscription), and 3)           The VM size of the worker nodes : ExtraSmall, Small, Medium, Large, or ExtraLarge.  ·        to add worker nodes of different sizes, must run the Add Node Wizard separately for each size. ·        All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes. ·        To add Windows Azure worker nodes o   In HPC Cluster Manager: Node Management > Actions pane > Add Node à Add Node Wizard o   Select Deployment Method page - Add Azure Worker nodes o   Specify New Nodes page - select a worker node template, specify the number and size of the worker nodes ·        After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online. ·        Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN – this is non-configurable. Deploying Windows Azure Worker Nodes ·        To deploy the role instances in Windows Azure - start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure Worker Node Template – Azure Availability Policy -  to be automatic or manual. ·        The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates). ·        ·          Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure. ·        To start worker nodes manually and bring them online o   In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view - select one or more worker nodes. o   Actions pane > Start – in the Start Azure Worker Nodes dialog, select a node template. o   the state of the worker nodes changes from Not Deployed to track the provisioning progress – worker node Details Pane > Provisioning Log tab. 
o   If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes. o   After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online. ·        Troubleshooting o   check node template. o   use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999 o   check node status - Deployment status information appears in the service account information in the Windows Azure Portal - HPC queries this -  see  node status information for any failed nodes in HPC Node Management. ·        When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). ·        to remove a set of role instances in Windows Azure - stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed. ·        Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.


  • Connecting Linux to WatchGuard Firebox SSL (OpenVPN client)

    Recently, I got a new project assignment that requires to connect permanently to the customer's network through VPN. They are using a so-called SSL VPN. As I am using OpenVPN since more than 5 years within my company's network I was quite curious about their solution and how it would actually be different from OpenVPN. Well, short version: It is a disguised version of OpenVPN. Unfortunately, the company only offers a client for Windows and Mac OS which shouldn't bother any Linux user after all. OpenVPN is part of every recent distribution and can be activated in a couple of minutes - both client as well as server (if necessary). WatchGuard Firebox SSL - About dialog Borrowing some files from a Windows client installation Initially, I didn't know about the product, so therefore I went through the installation on Windows 8. No obstacles (and no restart despite installation of TAP device drivers!) here and the secured VPN channel was up and running in less than 2 minutes or so. Much appreciated from both parties - customer and me. Of course, this whole client package and my long year approved and stable installation ignited my interest to have a closer look at the WatchGuard client. Compared to the original OpenVPN client (okay, I have to admit this is years ago) this commercial product is smarter in terms of file locations during installation. You'll be able to access the configuration and key files below your roaming application data folder. To get there, simply enter '%AppData%\WatchGuard\Mobile VPN' in your Windows/File Explorer and confirm with Enter/Return. This will display the following files: Application folder below user profile with configuration and certificate files From there we are going to borrow four files, namely: ca.crt client.crt client.ovpn client.pem and transfer them to the Linux system. You might also be able to isolate those four files from a Mac OS client. Frankly, I'm just too lazy to run the WatchGuard client installation on a Mac mini only to find the folder location, and I'm going to describe why a little bit further down this article. I know that you can do that! Feedback in the comment section is appreciated. Configuration of OpenVPN (console) Depending on your distribution the following steps might be a little different but in general you should be able to get the important information from it. I'm going to describe the steps in Ubuntu 13.04 (Raring Ringtail). As usual, there are two possibilities to achieve your goal: console and UI. Let's what it is necessary to be done. First of all, you should ensure that you have OpenVPN installed on your system. Open your favourite terminal application and run the following statement: $ sudo apt-get install openvpn network-manager-openvpn network-manager-openvpn-gnome Just to be on the safe side. The four above mentioned files from your Windows machine could be copied anywhere but either you place them below your own user directory or you put them (as root) below the default directory: /etc/openvpn At this stage you would be able to do a test run already. Just in case, run the following command and check the output (it's the similar information you would get from the 'View Logs...' context menu entry in Windows: $ sudo openvpn --config client.ovpn Pay attention to the correct path to your configuration and certificate files. OpenVPN will ask you to enter your Auth Username and Auth Password in order to establish the VPN connection, same as the Windows client. 
    Remote server and user authentication to establish the VPN

    Please complete the test run and see whether all went well. You can disconnect by pressing Ctrl+C.

    Simplifying your life - authentication file

    In my case, I actually set up the OpenVPN client on my gateway/router. This establishes a VPN channel between my network and my client's network and allows me to switch machines easily without having to install the WatchGuard client on each and every machine. That's also very handy for my various virtualised Windows machines. Anyway, as the client configuration, key and certificate files are located on a headless system somewhere under the roof, it is mandatory to have an automatic connection to the remote site. For that you should first change the file extension '.ovpn' to '.conf', which is the default extension for OpenVPN on Linux systems, and then open the client configuration file in order to extend an existing line.

        $ sudo mv client.ovpn client.conf
        $ sudo nano client.conf

    You should have a similar content to this one here:

        dev tun
        client
        proto tcp-client
        ca ca.crt
        cert client.crt
        key client.pem
        tls-remote "/O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server"
        remote-cert-eku "TLS Web Server Authentication"
        remote 1.2.3.4 443
        persist-key
        persist-tun
        verb 3
        mute 20
        keepalive 10 60
        cipher AES-256-CBC
        auth SHA1
        float 1
        reneg-sec 3660
        nobind
        mute-replay-warnings
        auth-user-pass auth.txt

    Note: I changed the IP address of the remote directive above (which should be obvious, right?). Anyway, the required change is the 'auth-user-pass' line at the end, and we have to create a new authentication file 'auth.txt'. You can give the 'auth-user-pass' directive any file name you'd like. Due to my existing OpenVPN infrastructure my setup differs completely from the above written content, but for the sake of simplicity I just keep it 'as-is'. Okay, let's create this file 'auth.txt':

        $ sudo nano auth.txt

    and just put two lines of information in it - username on the first, and password on the second line, like so:

        myvpnusername
        verysecretpassword

    Store the file, change permissions, and call openvpn with your configuration file again:

        $ sudo chmod 0600 auth.txt
        $ sudo openvpn --config client.conf

    This should now work without being prompted to enter username and password. In case you placed your files below the system-wide location /etc/openvpn, you can also operate your VPNs via the service command, like so:

        $ sudo service openvpn start client
        $ sudo service openvpn stop client

    Using Network Manager

    For newer Linux users or the ones with 'console-phobia', I'm going to describe now how to use Network Manager to set up the OpenVPN client. For this, move your mouse to the systray area and click on Network Connections => VPN Connections => Configure VPNs... which opens your Network Connections dialog. Alternatively, use the HUD and enter 'Network Connections'.

    Network connections overview in Ubuntu

    Click on the 'Add' button. On the next dialog select 'Import a saved VPN configuration...' from the dropdown list and click on 'Create...'

    Choose connection type to import VPN configuration

    Now you navigate to the folder where you put the client files from the Windows system and you open the 'client.ovpn' file.
    Next, on the 'VPN' tab proceed with the following steps (the corresponding configuration file directives are given in parentheses):

    General
    - Check the IP address of Gateway ('remote' - we used 1.2.3.4 in this setup)

    Authentication
    - Change Type to 'Password with Certificates (TLS)' ('auth-user-pass')
    - Enter the User name used to access your client keys (Auth Name: myvpnusername)
    - Enter the Password (Auth Password: verysecretpassword) and choose your password handling
    - Browse for your User Certificate ('cert' - should be pre-selected with client.crt)
    - Browse for your CA Certificate ('ca' - should be filled as ca.crt)
    - Specify your Private Key ('key' - here: client.pem)

    Then click on the 'Advanced...' button and check the following values:
    - Use custom gateway port: 443 (second value of the 'remote' directive)
    - Check the selected value of Cipher ('cipher')
    - Check HMAC Authentication ('auth')
    - Enter the Subject Match: /O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server ('tls-remote')

    Finally, confirm and close all dialogs. You should be able to establish your OpenVPN-WatchGuard connection via Network Manager. For that, click on the 'VPN Connections => client' entry of Network Manager in the systray. It is advised to keep an eye on the syslog to see whether there are any problems that require additional attention.

    Advanced topic: routing

    As stated above, I'm running the 'WatchGuard client for Linux' on my headless server, and since then I'm actually establishing a secure communication channel between two networks. In order to enable your network clients to access machines on the remote side, there are two possibilities:
    - Proper routing on both sides of the connection, which enables access in both directions, or
    - Network masquerading on the 'client side' of the connection

    In the following, I'm going to describe the second option in a little more detail. The Linux system that I'm using is already configured as a gateway to the internet. I won't explain the necessary steps to do that, and will only focus on the additional tweaks I had to do. You can find tons of very good instructions and tutorials on 'How to set up a Linux gateway/router' - just use Google. OK, back to the actual modifications. First, we need some information about the network topology and IP address range used on the 'other' side. We can get this very easily from /var/log/syslog after we established the OpenVPN channel, like so:

        $ sudo tail -n20 /var/log/syslog

    Or, if your system is quite busy with logging, like so:

        $ sudo less /var/log/syslog | grep ovpn

    The output should contain a PUSH_REPLY message similar to the following one:

        Jul 23 23:13:28 ios1 ovpn-client[789]: PUSH: Received control message: 'PUSH_REPLY,topology subnet,route 192.168.1.0 255.255.255.0,dhcp-option DOMAIN ,route-gateway 192.168.6.1,topology subnet,ping 10,ping-restart 60,ifconfig 192.168.6.2 255.255.255.0'

    The interesting part for us is the route entry in the sample PUSH_REPLY. Depending on your remote server there might be multiple networks defined (172.16.x.x and/or 10.x.x.x). Important: The IP address range on both sides of the connection has to be different, otherwise you will have to shuffle IPs or increase the netmask. After the VPN connection is established, we have to extend the rules for iptables in order to route and masquerade IP packets properly.
I created a shell script to take care of those steps:

#!/bin/sh -e
IPTABLES=/sbin/iptables
DEV_LAN=eth0
DEV_VPNS=tun+
VPN=192.168.1.0/24

$IPTABLES -A FORWARD -i $DEV_LAN -o $DEV_VPNS -d $VPN -j ACCEPT
$IPTABLES -A FORWARD -i $DEV_VPNS -o $DEV_LAN -s $VPN -j ACCEPT
$IPTABLES -t nat -A POSTROUTING -o $DEV_VPNS -d $VPN -j MASQUERADE

I'm using the wildcard interface 'tun+' because I have multiple client configurations for OpenVPN on my server. In your case, it might be sufficient to specify device 'tun0' only.

Simplifying your life - automatic connect on boot

Now that the client connection works flawlessly and the configuration of routing and iptables is okay, we might consider adding another 'laziness' factor to our setup. Due to kernel updates or other circumstances it might be necessary to reboot your system. Wouldn't it be nice if the VPN connections were established during the boot procedure? Yes, of course it would be. To achieve this, we have to configure OpenVPN to automatically start our VPNs via init script. Let's have a look at the responsible 'default' file and adjust the settings accordingly.

$ sudo nano /etc/default/openvpn

Which should have a similar content to this:

# This is the configuration file for /etc/init.d/openvpn
#
# Start only these VPNs automatically via init script.
# Allowed values are "all", "none" or space separated list of
# names of the VPNs. If empty, "all" is assumed.
# The VPN name refers to the VPN configuration file name.
# i.e. "home" would be /etc/openvpn/home.conf
#
#AUTOSTART="all"
#AUTOSTART="none"
#AUTOSTART="home office"
#
# ... more information which remains unmodified ...

With the OpenVPN client configuration as described above you would either set AUTOSTART to "all" or to "client" to enable automatic start of your VPN(s) during boot. You should also take care that your iptables commands are executed after the link has been established, too. You can easily test this configuration without a reboot, like so:

$ sudo service openvpn restart

Enjoy stable VPN connections between your Linux system(s) and a WatchGuard Firebox SSL remote server.

Cheers, JoKi

    Read the article

  • httpd keeps crashing without any reference to why in the logs

    - by Fred
    I have the logs set to debug in the hopes of tracking down what's causing the crash, but I can't find anything. Here is the error_log. [Thu Jan 06 10:27:35 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 19999 for (*) [Thu Jan 06 14:47:04 2011] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Thu Jan 06 14:47:04 2011] [info] Init: Seeding PRNG with 256 bytes of entropy [Thu Jan 06 14:47:04 2011] [info] Init: Generating temporary RSA private keys (512/1024 bits) [Thu Jan 06 14:47:04 2011] [info] Init: Generating temporary DH parameters (512/1024 bits) [Thu Jan 06 14:47:04 2011] [info] Init: Initializing (virtual) servers for SSL [Thu Jan 06 14:47:04 2011] [info] Server: Apache/2.2.3, Interface: mod_ssl/2.2.3, Library: OpenSSL/0.9.8e-fips-rhel5 [Thu Jan 06 14:47:04 2011] [notice] Digest: generating secret for digest authentication ... [Thu Jan 06 14:47:04 2011] [notice] Digest: done [Thu Jan 06 14:47:04 2011] [debug] util_ldap.c(2021): LDAP merging Shared Cache conf: shm=0xb9dc2480 rmm=0xb9dc24b0 for VHOST: server.fredfinn.com [Thu Jan 06 14:47:04 2011] [info] APR LDAP: Built with OpenLDAP LDAP SDK [Thu Jan 06 14:47:04 2011] [info] LDAP: SSL support available [Thu Jan 06 14:47:05 2011] [info] Init: Seeding PRNG with 256 bytes of entropy [Thu Jan 06 14:47:05 2011] [info] Init: Generating temporary RSA private keys (512/1024 bits) [Thu Jan 06 14:47:05 2011] [info] Init: Generating temporary DH parameters (512/1024 bits) [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(374): shmcb_init allocated 512000 bytes of shared memory [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(554): entered shmcb_init_memory() [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(576): for 512000 bytes, recommending 4266 indexes [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(619): shmcb_init_memory choices follow [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(621): division_mask = 0x1F [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(623): division_offset = 64 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(625): division_size = 15998 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(627): queue_size = 1604 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(629): index_num = 133 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(631): index_offset = 8 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(633): index_size = 12 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(635): cache_data_offset = 8 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(637): cache_data_size = 14386 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(650): leaving shmcb_init_memory() [Thu Jan 06 14:47:05 2011] [info] Shared memory session cache initialised [Thu Jan 06 14:47:05 2011] [info] Init: Initializing (virtual) servers for SSL [Thu Jan 06 14:47:05 2011] [info] Server: Apache/2.2.3, Interface: mod_ssl/2.2.3, Library: OpenSSL/0.9.8e-fips-rhel5 [Thu Jan 06 14:47:05 2011] [warn] pid file /etc/httpd/run/httpd.pid overwritten -- Unclean shutdown of previous Apache run? 
[Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26527 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26527 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26528 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26528 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26529 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26529 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26530 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26530 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26532 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26532 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26533 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26533 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26534 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26534 for (*) [Thu Jan 06 14:47:05 2011] [notice] Apache/2.2.3 (CentOS) configured -- resuming normal operations [Thu Jan 06 14:47:05 2011] [info] Server built: Aug 30 2010 12:32:08 [Thu Jan 06 14:47:05 2011] [debug] prefork.c(991): AcceptMutex: sysvsem (default: sysvsem) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26531 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26531 for (*) The logs are setup as: ErrorLog logs/error_log LogLevel debug LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent CustomLog logs/access_log common CustomLog logs/access_log combined ServerSignature On

    Read the article

  • Implement OAuth in Java

    - by phineas
    I made an an attempt to implement OAuth for my programming idea in Java, but I failed miserably. I don't know why, but my code doesn't work. Every time I run my program, an IOException is thrown with the reason "java.io.IOException: Server returned HTTP response code: 401" (401 means Unauthorized). I had a close look at the docs, but I really don't understand why it doesn't work. My OAuth provider I wanted to use is twitter, where I've registered my app, too. Thanks in advance phineas OAuth docs Twitter API wiki Class Base64Coder import java.io.InputStreamReader; import java.io.BufferedReader; import java.io.OutputStream; import java.io.IOException; import java.io.UnsupportedEncodingException; import java.net.URL; import java.net.URLEncoder; import java.net.URLConnection; import java.net.MalformedURLException; import javax.crypto.Mac; import javax.crypto.spec.SecretKeySpec; import java.security.NoSuchAlgorithmException; import java.security.InvalidKeyException; public class Request { public static String read(String url) { StringBuffer buffer = new StringBuffer(); try { /** * get the time - note: value below zero * the millisecond value is used for oauth_nonce later on */ int millis = (int) System.currentTimeMillis() * -1; int time = (int) millis / 1000; /** * Listing of all parameters necessary to retrieve a token * (sorted lexicographically as demanded) */ String[][] data = { {"oauth_callback", "SOME_URL"}, {"oauth_consumer_key", "MY_CONSUMER_KEY"}, {"oauth_nonce", String.valueOf(millis)}, {"oauth_signature", ""}, {"oauth_signature_method", "HMAC-SHA1"}, {"oauth_timestamp", String.valueOf(time)}, {"oauth_version", "1.0"} }; /** * Generation of the signature base string */ String signature_base_string = "POST&"+URLEncoder.encode(url, "UTF-8")+"&"; for(int i = 0; i < data.length; i++) { // ignore the empty oauth_signature field if(i != 3) { signature_base_string += URLEncoder.encode(data[i][0], "UTF-8") + "%3D" + URLEncoder.encode(data[i][1], "UTF-8") + "%26"; } } // cut the last appended %26 signature_base_string = signature_base_string.substring(0, signature_base_string.length()-3); /** * Sign the request */ Mac m = Mac.getInstance("HmacSHA1"); m.init(new SecretKeySpec("CONSUMER_SECRET".getBytes(), "HmacSHA1")); m.update(signature_base_string.getBytes()); byte[] res = m.doFinal(); String sig = String.valueOf(Base64Coder.encode(res)); data[3][1] = sig; /** * Create the header for the request */ String header = "OAuth "; for(String[] item : data) { header += item[0]+"=\""+item[1]+"\", "; } // cut off last appended comma header = header.substring(0, header.length()-2); System.out.println("Signature Base String: "+signature_base_string); System.out.println("Authorization Header: "+header); System.out.println("Signature: "+sig); String charset = "UTF-8"; URLConnection connection = new URL(url).openConnection(); connection.setDoInput(true); connection.setDoOutput(true); connection.setRequestProperty("Accept-Charset", charset); connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded;charset=" + charset); connection.setRequestProperty("Authorization", header); connection.setRequestProperty("User-Agent", "XXXX"); OutputStream output = connection.getOutputStream(); output.write(header.getBytes(charset)); BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream())); String read; while((read = reader.readLine()) != null) { buffer.append(read); } } catch(Exception e) { e.printStackTrace(); } return buffer.toString(); } public static void 
main(String[] args) { System.out.println(Request.read("http://api.twitter.com/oauth/request_token")); } }

    Read the article

  • Servlet requests are executed sequentially for no apparent reason in Glassfish v3

    - by Fabien Benoit
    Hi, I'm using Glassfish 3 Web profile and can't get HTTP workers to execute requests concurrently on a servlet. This is how I observed the problem. I've made a very simple servlet that writes the current thread name to the standard output and sleeps for 10 seconds:

        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            System.out.println(Thread.currentThread().getName());
            try {
                Thread.sleep(10000); // 10 sec
            } catch (InterruptedException ex) {}
        }

    And when I'm running several simultaneous requests, I clearly see in the logs that the requests are executed sequentially (one trace every 10 seconds):

        INFO: http-thread-pool-8080-(2)
        (10 seconds later...)
        INFO: http-thread-pool-8080-(1)
        (10 seconds later...)
        INFO: http-thread-pool-8080-(2)
        etc.

    All my GF settings are untouched - it's the out-of-the-box config (the default thread pool is 2 threads min, 5 max if I recall properly). ...I really don't understand why the sleep() blocks all the other worker threads. Any insight would be greatly appreciated! Thanks, Fabien
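    One thing worth ruling out before blaming Glassfish is client-side serialization: browsers often refuse to send several identical requests to the same URL in parallel from one tab. A quick way to fire genuinely simultaneous requests is the command line (the URL below is an assumption - adjust it to the servlet's actual mapping):

        # fire three requests at the same time and wait for all of them
        for i in 1 2 3; do
          curl -s -o /dev/null -w "request $i took %{time_total}s\n" "http://localhost:8080/yourapp/slow" &
        done
        wait

    If the timings come back roughly at 10, 20 and 30 seconds the serialization is real; if they all come back around 10 seconds, the thread pool is fine and the client was the bottleneck.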

    Read the article

  • How do i write tasks? (parallel code)

    - by acidzombie24
    I am impressed with Intel Thread Building Blocks. I like how I should write tasks and not thread code, and I like how it works under the hood with my limited understanding (tasks are in a pool, there won't be 100 threads on 4 cores, a task is not guaranteed to run because it isn't on its own thread and may be far into the pool. But it may be run with another related task, so you can't do bad things like typical thread-unsafe code). I wanted to know more about writing tasks. I like the 'Task-based Multithreading - How to Program for 100 cores' video here http://www.gdcvault.com/sponsor.php?sponsor_id=1 (currently the second-to-last link. WARNING: it isn't 'great'). My favorite part was 'solving the maze is better done in parallel', which is around the 48min mark (you can click the link on the left side. That part is really all you need to watch, if any). However I'd like to see more code examples and some API on how to write tasks. Does anyone have a good resource? I have no idea how a class or pieces of code may look after pushing it onto a pool, or how weird code may look when you need to make a copy of everything, and how much of everything is pushed onto a pool.
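    For readers wondering what "writing tasks instead of threads" looks like in practice, here is a minimal TBB sketch (not a tuned implementation; it assumes TBB is installed, a C++11 compiler, and linking with -ltbb, and a real version would stop spawning tasks below some cutoff):

        #include <iostream>
        #include <tbb/task_group.h>

        // Plain recursive Fibonacci split into tasks rather than threads; TBB's
        // scheduler decides which of its pool threads actually runs each task.
        long fib(long n) {
            if (n < 2) return n;                 // tiny work: not worth a task
            long x = 0, y = 0;
            tbb::task_group g;
            g.run([&] { x = fib(n - 1); });      // hand one half to the scheduler
            y = fib(n - 2);                      // keep the other half on this thread
            g.wait();                            // block until the spawned task is done
            return x + y;
        }

        int main() {
            std::cout << fib(30) << "\n";        // the scheduler picks how many threads really run
            return 0;
        }

    The point of the style is that fib() never creates a thread; it only describes work, and the pool underneath maps that work onto however many cores exist.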

    Read the article

  • How do i write task? (parallel code)

    - by acidzombie24
    I am impressed with Intel Thread Building Blocks. I like how I should write tasks and not thread code, and I like how it works under the hood with my limited understanding (tasks are in a pool, there won't be 100 threads on 4 cores, a task is not guaranteed to run because it isn't on its own thread and may be far into the pool. But it may be run with another related task, so you can't do bad things like typical thread-unsafe code). I wanted to know more about writing tasks. I like the 'Task-based Multithreading - How to Program for 100 cores' video here http://www.gdcvault.com/sponsor.php?sponsor_id=1 (currently the second-to-last link. WARNING: it isn't 'great'). My favorite part was 'solving the maze is better done in parallel', which is around the 48min mark (you can click the link on the left side). However I'd like to see more code examples and some API on how to write tasks. Does anyone have a good resource? I have no idea how a class or pieces of code may look after pushing it onto a pool, or how weird code may look when you need to make a copy of everything, and how much of everything is pushed onto a pool.

    Read the article

  • iphone web service access

    - by malleswar
    Hi, I have .net webservice methods login and summary. After getting the result from login, I need to show second view and need to call summary method. I am following this tutorial. http://icodeblog.com/2008/11/03/iphone-programming-tutorial-intro-to-soap-web-services/ I created two new classes loginaccess.h and loginaccess.m. @implementation LoginAccess @synthesize ResultString,webData, soapResults, xmlParser; -(NSString*)LoginCheck:(NSString*)userName:(NSString*)pwd { NSString *soapMessage = [NSString stringWithFormat: @"\n" "\n" "\n" "\n" "%@" "%@" "" "\n" "\n",userName,pwd ]; NSLog(soapMessage); NSURL *url = [NSURL URLWithString:@"http://www.XXXXXXXXX.com/service.asmx"]; NSMutableURLRequest *theRequest = [NSMutableURLRequest requestWithURL:url]; NSString *msgLength = [NSString stringWithFormat:@"%d", [soapMessage length]]; [theRequest addValue: @"text/xml; charset=utf-8" forHTTPHeaderField:@"Content-Type"]; [theRequest addValue: @"http://XXXXXXXXm/Login" forHTTPHeaderField:@"SOAPAction"]; [theRequest addValue: msgLength forHTTPHeaderField:@"Content-Length"]; [theRequest setHTTPMethod:@"POST"]; [theRequest setHTTPBody: [soapMessage dataUsingEncoding:NSUTF8StringEncoding]]; NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self]; if( theConnection ) { webData = [[NSMutableData data] retain]; } else { NSLog(@"theConnection is NULL"); } //[nameInput resignFirstResponder]; return ResultString; } -(void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response { [webData setLength: 0]; } -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data { [webData appendData:data]; } -(void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error { NSLog(@"ERROR with theConenction"); [connection release]; [webData release]; } -(void)connectionDidFinishLoading:(NSURLConnection *)connection { NSLog(@"DONE. Received Bytes: %d", [webData length]); NSString *theXML = [[NSString alloc] initWithBytes: [webData mutableBytes] length:[webData length] encoding:NSUTF8StringEncoding]; NSLog(theXML); [theXML release]; if( xmlParser ) { [xmlParser release]; } xmlParser = [[NSXMLParser alloc] initWithData: webData]; [xmlParser setDelegate: self]; [xmlParser setShouldResolveExternalEntities: YES]; [xmlParser parse]; [connection release]; [webData release]; } -(void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *) namespaceURI qualifiedName:(NSString *)qName attributes: (NSDictionary *)attributeDict { if( [elementName isEqualToString:@"LoginResult"]) { if(!soapResults) { soapResults = [[NSMutableString alloc] init]; } recordResults = TRUE; } } -(void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string { if( recordResults ) { [soapResults appendString: string]; } } -(void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName { if( [elementName isEqualToString:@"LoginResult"]) { recordResults = FALSE; ResultString = soapResults; NSLog(@"Login"); [VariableStore setStr:ResultString]; NSLog(soapResults); [soapResults release]; soapResults = nil; } } @end I am calling LoginCheck method and based on result I want to show the second view. Here after finishing of the button touch down event, it enter into did end element, so I am always getting nil value. If I use the same code controller it works fine as I push second view controller in didendelement. 
    Please give me some samples of how to place the web service calls in a different class and how to call them from view controllers. Regards, Malleswar
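    One common pattern for this (sketched here with hypothetical names, untested) is to give the service class a delegate protocol, so the view controller is called back when parsing finishes and can push the next view from there instead of from inside the service class:

        // LoginAccess.h
        @protocol LoginAccessDelegate <NSObject>
        - (void)loginAccess:(id)sender didFinishWithResult:(NSString *)result;
        @end

        @interface LoginAccess : NSObject
        @property (nonatomic, assign) id<LoginAccessDelegate> delegate;
        - (void)loginCheckWithUser:(NSString *)userName password:(NSString *)pwd;
        @end

        // LoginAccess.m - at the end of parser:didEndElement:..., instead of stashing
        // the result in a shared variable, hand it back to whoever is listening:
        //     [self.delegate loginAccess:self didFinishWithResult:soapResults];

        // LoginViewController.m
        //     self.loginAccess.delegate = self;
        //     [self.loginAccess loginCheckWithUser:name password:pwd];
        //
        // - (void)loginAccess:(id)sender didFinishWithResult:(NSString *)result {
        //     if ([result isEqualToString:@"true"]) {   // whatever your service actually returns
        //         [self.navigationController pushViewController:summaryController animated:YES];
        //     }
        // }

    The asynchronous NSURLConnection plumbing stays in LoginAccess; the view controller only reacts to the delegate callback.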

    Read the article

  • Dependency injection in constructors

    - by andre
    Hello everyone. I'm starting a new project and setting up the base to work on. A few questions have arisen and I'll probably be asking quite a few in here; hopefully I'll find some answers. The first step is to handle dependencies for objects. I've decided to go with the dependency injection design pattern, to which I'm somewhat new, to handle all of this for the application. When actually coding it I came across a problem: if a class has multiple dependencies and you want to pass multiple dependencies via the constructor (so that they cannot be changed after you instantiate the object), how do you do it without passing an array of dependencies, or using call_user_func_array(), eval() or Reflection? This is what I'm looking for:

        <?php
        class DI {
            public function getClass($classname) {
                if (!$this->pool[$classname]) {
                    # Load dependencies
                    $deps = $this->loadDependencies($classname);

                    # Here is where the magic should happen
                    $instance = new $classname($dep1, $dep2, $dep3);

                    # Add to pool
                    $this->pool[$classname] = $instance;
                    return $instance;
                } else {
                    return $this->pool[$classname];
                }
            }
        }

    Again, I would like to avoid the most costly methods to call the class. Any other suggestions?
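    For what it's worth, on PHP 5.6 and later the argument unpacking operator gives exactly this without Reflection or call_user_func_array(). A small sketch (method and property names are borrowed from the question, so treat them as assumptions):

        <?php
        class DI {
            private $pool = [];

            public function getClass($classname) {
                if (isset($this->pool[$classname])) {
                    return $this->pool[$classname];
                }
                // loadDependencies() is assumed to return the constructor
                // arguments as a plain array, already in the right order.
                $deps = $this->loadDependencies($classname);

                // '...' unpacks the array into individual constructor arguments,
                // so the dependencies are fixed at construction time.
                $instance = new $classname(...$deps);

                $this->pool[$classname] = $instance;
                return $instance;
            }
        }

    On older PHP versions the usual fallbacks really are ReflectionClass::newInstanceArgs() or call_user_func_array(), which the question explicitly wants to avoid, so the unpacking operator is the cheapest way out once the version allows it.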

    Read the article

  • iPhone: Low memory crash...

    - by MacTouch
    Once again I'm hunting memory leaks and other crazy mistakes in my code. :) I have a cache with frequently used files (images, data records etc., with a TTL of about one week and a size-limited cache (100MB)). There are sometimes more than 15000 files in a directory. On application exit the cache writes a control file with the current cache size along with other useful information. If the application crashes for some reason (sh.. happens sometimes) I have in such a case to calculate the size of all files on application start to make sure I know the cache size. My app crashes at this point because of low memory and I have no clue why. The memory leak detector does not show any leaks at all. I do not see any either. What's wrong with the code below? Is there any other fast way to calculate the total size of all files within a directory on iPhone? Maybe without enumerating the whole contents of the directory? The code is executed on the main thread.

        NSUInteger result = 0;
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        NSDirectoryEnumerator *dirEnum = [[[NSFileManager defaultManager] enumeratorAtPath:path] retain];
        int i = 0;
        while ([dirEnum nextObject]) {
            NSDictionary *attributes = [dirEnum fileAttributes];
            NSNumber* fileSize = [attributes objectForKey:NSFileSize];
            result += [fileSize unsignedIntValue];
            if (++i % 500 == 0) { // I tried lower values too
                [pool drain];
            }
        }
        [dirEnum release];
        dirEnum = nil;
        [pool release];
        pool = nil;

    Thanks, MacTouch
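    A hedged observation rather than a verified diagnosis: on iOS, -drain releases the pool just like -release does, so after the first drain the autoreleased attribute dictionaries pile up in an outer pool for the rest of the loop, and the final [pool release] over-releases. The usual pattern is to throw the pool away and create a fresh one inside the loop, roughly like this (untested sketch):

        unsigned long long result = 0;
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        NSDirectoryEnumerator *dirEnum = [[NSFileManager defaultManager] enumeratorAtPath:path];
        int i = 0;
        while ([dirEnum nextObject] != nil) {
            NSDictionary *attributes = [dirEnum fileAttributes];
            result += [[attributes objectForKey:NSFileSize] unsignedLongLongValue];
            if (++i % 500 == 0) {
                [pool release];                           // throw away everything autoreleased so far
                pool = [[NSAutoreleasePool alloc] init];  // ...and start a fresh pool for the next batch
            }
        }
        [pool release];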

    Read the article

  • Why doesn't PHP's oci_connect return false?

    - by absolethe
    I have a situation in which we have two production databases that synchronize with one another. Server One is considered the primary. Sometimes, due to maintenance or a disaster, Server Two will become primary. In some of our code that means we have to manually go in and edit the server name for database connections. I find this annoying, so in the last thing I wrote I put in the server information for both and set up a loop. If oci_connect failed on Server One 3 times it would move on to Server Two. If Server Two failed 3 times it would notify the user a connection couldn't be made. This has worked fine most times we've had the situation of switching the servers. Yesterday, for example, it worked fine. Today it didn't. It just sat and spun endlessly. No error in the PHP error log. No failure to move on from. No error output to the screen. Nothing for 5 minutes. So then I had to manually edit the stupid config file. I asked what could possibly be different and I was told "yesterday the database was down, but not the server. today the server is down." Okay...? But I don't see a distinction. I would expect oci_connect to return false if it can't establish any sort of communication with the server. I'd expect it to time out and error, not just pass an error on when it receives an error code from the server. What if there's a network problem, for example? Is this a bug in oci_connect or is there a possibility that something in our PHP configuration gives oci_connect a crazily long timeout? If it is a sort of "bug", is there some way I can check to see if the server is up first? Like a ping? (Of course when I did a ping through the command prompt I got a response from Server One and then was told, "it's back now", although I am skeptical about the timing on that.) Anyway, if anyone could shed some light on why oci_connect might run endlessly without failing and how to keep it from doing so, I'd be grateful.

    -- Edit: My code looks like the examples on PHP.net, only in some loops.

        $count = count($servers);
        for ($i = 0; $i < $count; $i++) {
            if ((!isset($connection)) || ($connection == false)) {
                // Attempt to connect to the oracle database
                $connection = @oci_connect($servers[$i]["user"], $servers[$i]["pass"], $servers[$i]["conid"])
                    or ($conn_error = oracle_error());
                // Try again if there was a failure
                if (($connection == false) || (isset($con_error))) {
                    // Three (two more) tries per alternative
                    for ($j = $st; $j < $fn; $j++) {
                        // Try again to connect
                        $connection = @oci_connect($servers[$i]["user"], $servers[$i]["pass"], $servers[$i]["conid"])
                            or ($conn_error = oracle_error());
                    } // for($j = 2; $j < 4; $j++)
                } // if($connection == false)
            } // if(!isset($connection) || ($connection == false))
        } // for($i = 0; $i < $count; $i++)
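    On the "is there some way I can check to see if the server is up first" part: one low-tech pre-check is to see whether the Oracle listener port even answers before handing the connection over to oci_connect(). A sketch (host name and port are assumptions - adjust them to your listener configuration):

        <?php
        // Hypothetical host; 1521 is the default Oracle listener port.
        $host = 'server-one.example.com';
        $socket = @fsockopen($host, 1521, $errno, $errstr, 3); // 3-second timeout

        if ($socket) {
            fclose($socket);
            // The listener answered - reasonable to try oci_connect() against this server.
        } else {
            // Nothing is listening (or the host is unreachable): skip straight
            // to Server Two instead of waiting on oci_connect().
        }

    It only proves the port is reachable, not that the database is healthy, but it fails within the timeout you choose rather than whatever the Oracle client decides.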

    Read the article

  • Waiting for thread to finish Python

    - by lunchtime
    Alright, here's my problem. I have a thread that creates another thread in a pool and applies async so I can work with the returned data, which is working GREAT. But I need the current thread to WAIT until the result is returned. Here is the simplified code, as the current script is over 300 lines. I'm sure I've included everything for you to make sense of what I'm attempting:

        from multiprocessing.pool import ThreadPool
        import threading

        pool = ThreadPool(processes=1)

        class MyStreamer(TwythonStreamer):
            #[...]
            def on_success(self, data):
                #### Everytime data comes in, this is called
                #[...]
                #<Pseudocode>
                # if score >= limit
                #     if list exists:
                #         Do stuff
                #     elif list does not exist:
                #</Pseudocode>
                dic = []
                dic.append([k1, v1])
                did = dict(dic)
                async_result = pool.apply_async(self.list_step, args=(did))
                return_val = async_result.get()
                slug = return_val[0]
                idd = return_val[1]
                #[...]

            def list_step(self, *args):
                ## CREATE LIST
                ## RETURN 2 VALUES

        class threadStream(threading.Thread):
            def __init__(self, auth):
                threading.Thread.__init__(self)
                self.auth = auth
            def run(self):
                stream = MyStreamer(auth=auth[0], *auth[0])
                stream.statuses.filter(track=auth[1])

        t = threadStream(auth=AuthMe)
        t.start()

    I receive the results as intended, which is great, but how do I make it so this thread t waits for the async_result to come in? My problem is that every time new data comes in, it seems the ## CREATE LIST function is called multiple times if similar data comes in quickly enough. So I'm ending up with many lists of the same name when I have code in place to ensure that a list will never be created if the name already exists. So to reiterate: how do I make this thread wait on the function to complete before accepting new data / continuing? I don't think time.sleep() works because on_success is called when data enters the stream. I don't think Thread.join() will work either since I have to use ThreadPool.apply_async to receive the data I need. Is there a hack I can make in the MyStreamer class somehow? I'm kind of at a loss here. Am I over-complicating things and can this be simplified to do what I want?
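    Two hedged observations on the code above: async_result.get() already blocks the calling thread until list_step() returns, so the wait itself is there; what it does not do is stop a second on_success() call from racing past the "does the list exist?" check. A lock around the check-and-create step is one simple way to close that gap. The standalone sketch below (hypothetical names, not a drop-in for the Twython class) shows the idea:

        from multiprocessing.pool import ThreadPool
        import threading

        pool = ThreadPool(processes=1)
        created = set()
        lock = threading.Lock()

        def list_step(name):
            # stands in for the question's "## CREATE LIST" work
            return name, len(name)

        def on_success(name):
            # Only one caller at a time gets past the lock, so the existence
            # check and the creation can never interleave.
            with lock:
                if name in created:
                    return
                async_result = pool.apply_async(list_step, args=(name,))
                slug, idd = async_result.get()   # get() blocks until list_step() has finished
                created.add(slug)

        if __name__ == "__main__":
            threads = [threading.Thread(target=on_success, args=("mylist",)) for _ in range(5)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            print(created)   # one entry only, despite five near-simultaneous calls

    Note also that args=(did) in the original passes the dict itself rather than a one-element tuple; args=(did,) is probably what was intended.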

    Read the article

  • Error on 64 Bit Install of IIS &ndash; LoadLibraryEx failed on aspnet_filter.dll

    - by Rick Strahl
    I’ve been having a few problems with my Windows 7 install and trying to get IIS applications to run properly in 64 bit. After installing IIS and creating virtual directories for several of my applications and firing them up I was left with the following error message from IIS:

    Calling LoadLibraryEx on ISAPI filter “c:\windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll” failed

    This is on Windows 7 64 bit and running on an ASP.NET 4.0 Application configured for running 64 bit (32 bit disabled). It’s also on what is essentially a brand new installation of IIS and Windows 7. So it failed right out of the box. The problem here is that IIS is trying to load this ISAPI filter from the 32 bit folder – it should be loading it from the Framework64 folder, not the Framework folder. The aspnet_filter.dll component is a small Win32 ISAPI filter used to back up the cookieless session state for ASP.NET on IIS 7 applications. It’s not terribly important because of this focus, but it’s a default loaded component.

    After a lot of fiddling I ended up with two solutions (with the help and support of some Twitter folks):

    - Switch IIS to run in 32 bit mode
    - Fix the filter listing in ApplicationHost.config

    Switching IIS to allow 32 Bit Code

    This is a quick fix for the problem above which enables 32 bit code in the Application Pool. The problem above is that IIS is trying to load a 32 bit ISAPI filter, and enabling 32 bit code gets you around this problem. To configure your Application Pool, open the Application Pool in IIS Manager, bring up Advanced Options and Enable 32 Bit Applications:

    And voila, the error message above goes away.

    Fix Filters

    Enabling 32 bit code is a quick fix solution to this problem, but not an ideal one. If you’re running a pure .NET application that doesn’t need to do COM or pInvoke Interop with 32 bit apps there’s usually no need for enabling 32 bit code in an Application Pool, as you can run in native 64 bit code. So trying to get 64 bit working natively is a pretty key feature in my opinion :-)

    So what’s the problem – why is IIS trying to load a 32 bit DLL in a 64 bit install, especially if the application pool is configured to not allow 32 bit code at all? The problem lies in the server configuration and the fact that 32 bit and 64 bit configuration settings exist side by side in IIS. If I open my Default Web Site (or any other root Web Site) and go to the ISAPI filter list here’s what I see:

    Notice that there are 3 entries for ASP.NET 4.0 in this list. Only two of them, however, are specifically scoped to 32 bit or 64 bit. As you can see the 64 bit filter correctly points at the Framework64 folder to load the dll, while both the 32 bit and the ‘generic’ entry point at the plain Framework 32 bit folder. Aha! Herein lies our problem.

    You can edit ApplicationHost.config manually, but I ran into the nasty issue of not being able to easily edit that file with a 32 bit editor (who ever thought that was a good idea???? WTF). You have to open ApplicationHost.Config in a 64 bit native text editor – which Visual Studio is not. Or my favorite editor: EditPad Pro. Since I don’t have a native 64 bit editor handy, Notepad was my only choice. Or as an alternative you can use the IIS 7.5 Configuration Editor, which lets you interactively browse and edit most ApplicationHost settings. You can drill into the configuration hierarchy visually to find your keys and edit attributes and sub values in a property editor type interface.
I had no idea this tool existed prior to today and it’s pretty cool as it gives you some visual clues to the options available – especially in the absence of the Intellisense scheme you’d get in Visual Studio (which doesn’t work). To use the Configuration Editor go to the Web Site root and use the Configuration Editor option in the Management group. Drill into system.webServer/isapiFilters and then click on the Collection’s … button on the right. You should now see a display like this, which shows all the same attributes you’d see in ApplicationHost.config (cool!). These entries correspond to these raw ApplicationHost.config entries:

<filter name="ASP.Net_4.0" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0" />
<filter name="ASP.Net_4.0_64bit" path="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness64" />
<filter name="ASP.Net_4.0_32bit" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness32" />

The key attribute we’re concerned with here is the preCondition and the bitness subvalue. Notice that the ‘generic’ version – which comes first in the filter list – has no bitness assigned to it, so it defaults to 32 bit and the 32 bit dll path. And this is where our problem comes from. The simple solution to fix the startup problem is to remove the generic entry from this list here, or in the filters list shown earlier, and leave only the bitness-specific versions active. The preCondition attribute acts as a filter and, as you can see here, it filters the list by runtime version and bitness value. This is something to keep an eye out for in general – if bitness values are missing it’s easy to run into conflicts like this with any settings that are global, especially those that load modules and handlers and other executable code. On 64 bit systems it’s a good idea to explicitly set the bitness of all entries, or remove the non-specific versions and add bit-specific entries.

So how did this get misconfigured? I installed IIS before everything else was installed on this machine and then went ahead and installed Visual Studio. I suspect the Visual Studio install munged this up, as I never saw a similar problem on my live server where everything just worked right out of the box. In searching about this problem a lot of solutions pointed at using aspnet_regiis –r from the Framework64 directory, but that did not fix this extra entry in the filters list – it adds the required 32 bit and 64 bit entries, but it doesn’t remove the errant entry that has no bitness set.

Hopefully this post will help out anybody who runs into a similar situation without having to troubleshoot all the way down into the configuration settings and notice the bitness settings. It’s a good lesson learned for me – this is my first desktop install of a 64 bit OS and things like this are what I was reluctant to find. Now that I ran into this I have a good idea what to look for with 32/64 bit misconfigurations in IIS at least.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in IIS7   ASP.NET

    Read the article

  • Ardour wont start Jack problem

    - by Drew S
    I downloaded Ardour yesterday, it worked, edited an audio file done. Come back today it wont start I get this: Ardour could not start JACK There are several possible reasons: 1) You requested audio parameters that are not supported.. 2) JACK is running as another user. Please consider the possibilities, and perhaps try different parameters. So I try and look at qjackctl to see what happening there. When I try to start JACK I get D-BUS: JACK server could not be started. then Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info. and this is the message box in JACK. 15:22:12.927 Patchbay deactivated. 15:22:12.927 Statistics reset. 15:22:12.944 ALSA connection change. 15:22:12.951 D-BUS: Service is available (org.jackaudio.service aka jackdbus). Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started 15:22:12.959 ALSA connection graph change. 15:22:45.850 ALSA connection graph change. 15:22:46.021 ALSA connection change. 15:22:56.492 ALSA connection graph change. 15:22:56.624 ALSA connection change. 15:23:42.340 D-BUS: JACK server could not be started. Sorry Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started Wed Oct 23 15:23:42 2013: Starting jack server... Wed Oct 23 15:23:42 2013: JACK server starting in realtime mode with priority 10 Wed Oct 23 15:23:42 2013: ERROR: Cannot lock down 82274202 byte memory area (Cannot allocate memory) Wed Oct 23 15:23:42 2013: Acquired audio card Audio0 Wed Oct 23 15:23:42 2013: creating alsa driver ... hw:0|hw:0|1024|2|44100|0|0|nomon|swmeter|-|32bit Wed Oct 23 15:23:42 2013: ERROR: ATTENTION: The playback device "hw:0" is already in use. The following applications are using your soundcard(s) so you should check them and stop them as necessary before trying to start JACK again: pulseaudio (process ID 2553) Wed Oct 23 15:23:42 2013: ERROR: Cannot initialize driver Wed Oct 23 15:23:42 2013: ERROR: JackServer::Open failed with -1 Wed Oct 23 15:23:42 2013: ERROR: Failed to open server Wed Oct 23 15:23:43 2013: Saving settings to "/home/drew/.config/jack/conf.xml" ... 15:26:41.669 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info. Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started 15:26:49.006 D-BUS: JACK server could not be started. Sorry Wed Oct 23 15:26:48 2013: Starting jack server... Wed Oct 23 15:26:48 2013: JACK server starting in non-realtime mode Wed Oct 23 15:26:48 2013: ERROR: Cannot lock down 82274202 byte memory area (Cannot allocate memory) Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started Wed Oct 23 15:26:48 2013: ERROR: cannot register object path "/org/freedesktop/ReserveDevice1/Audio0": A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0 Wed Oct 23 15:26:48 2013: ERROR: Failed to acquire device name : Audio0 error : A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0 Wed Oct 23 15:26:48 2013: ERROR: Audio device hw:0 cannot be acquired... 
Wed Oct 23 15:26:48 2013: ERROR: Cannot initialize driver Wed Oct 23 15:26:48 2013: ERROR: JackServer::Open failed with -1 Wed Oct 23 15:26:48 2013: ERROR: Failed to open server Wed Oct 23 15:26:50 2013: Saving settings to "/home/drew/.config/jack/conf.xml" ... 15:26:52.441 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info. Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started 15:26:55.997 D-BUS: JACK server could not be started. Sorry Wed Oct 23 15:26:55 2013: Starting jack server... Wed Oct 23 15:26:55 2013: JACK server starting in non-realtime mode Wed Oct 23 15:26:55 2013: ERROR: Cannot lock down 82274202 byte memory area (Cannot allocate memory) Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started Wed Oct 23 15:26:55 2013: ERROR: cannot register object path "/org/freedesktop/ReserveDevice1/Audio0": A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0 Wed Oct 23 15:26:55 2013: ERROR: Failed to acquire device name : Audio0 error : A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0 Wed Oct 23 15:26:55 2013: ERROR: Audio device hw:0 cannot be acquired... Wed Oct 23 15:26:55 2013: ERROR: Cannot initialize driver Wed Oct 23 15:26:55 2013: ERROR: JackServer::Open failed with -1 Wed Oct 23 15:26:55 2013: ERROR: Failed to open server Wed Oct 23 15:26:57 2013: Saving settings to "/home/drew/.config/jack/conf.xml" ... 15:26:59.054 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info. Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started 15:29:24.624 ALSA connection graph change. 15:29:24.641 ALSA connection change. 15:33:11.760 D-BUS: JACK server could not be started. Sorry Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started Wed Oct 23 15:33:11 2013: Starting jack server... Wed Oct 23 15:33:11 2013: JACK server starting in non-realtime mode Wed Oct 23 15:33:11 2013: ERROR: Cannot lock down 82274202 byte memory area (Cannot allocate memory) Wed Oct 23 15:33:11 2013: ERROR: cannot register object path "/org/freedesktop/ReserveDevice1/Audio0": A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0 Wed Oct 23 15:33:11 2013: ERROR: Failed to acquire device name : Audio0 error : A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0 Wed Oct 23 15:33:11 2013: ERROR: Audio device hw:0 cannot be acquired... Wed Oct 23 15:33:11 2013: ERROR: Cannot initialize driver Wed Oct 23 15:33:11 2013: ERROR: JackServer::Open failed with -1 Wed Oct 23 15:33:11 2013: ERROR: Failed to open server Wed Oct 23 15:33:12 2013: Saving settings to "/home/drew/.config/jack/conf.xml" ... 15:34:09.439 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info. Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started
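    For readers hitting the same wall: the log above says the playback device hw:0 is already held by pulseaudio (process ID 2553), which is why JACK cannot open it. One hedged way to test that theory is to start JACK with PulseAudio temporarily suspended, for example:

        $ pasuspender -- qjackctl

        # or, if you launch jackd by hand (the ALSA parameters below are only an example):
        $ pasuspender -- jackd -d alsa -d hw:0 -r 44100 -p 1024

    pasuspender asks the running PulseAudio daemon to release the sound card for the duration of the wrapped command; if JACK then starts cleanly, the device conflict was the whole story and a more permanent JACK/PulseAudio bridge setup is the next step.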

    Read the article

< Previous Page | 160 161 162 163 164 165 166 167 168 169 170 171  | Next Page >