Search Results

Search found 8839 results on 354 pages for 'optional parameters'.


  • How to get google app engine logs in C#?

    - by Max
    I am trying to retrieve App Engine logs, but the only result I get is "# next_offset=None". Below is my code: internal string GetLogs() { string result = _connection.Get("/api/request_logs", GetPostParameters(null)); return result; } private Dictionary<string, string> GetPostParameters(Dictionary<string, string> customParameters) { Dictionary<string, string> parameters = new Dictionary<string, string>() { { "app_id", _settings.AppId }, { "version", _settings.Version.ToString() } }; if (customParameters != null) { foreach (string key in customParameters.Keys) { if (parameters.ContainsKey(key)) { parameters[key] = customParameters[key]; } else { parameters.Add(key, customParameters[key]); } } } return parameters; }


  • Executing Stored Procedures in Visual Studio LightSwitch.

    - by dataintegration
    A LightSwitch project is a very easy way to visualize and manipulate information directly from one of our ADO.NET Providers, but when it comes to executing stored procedures, things can be a bit more complicated. In this article, we will demonstrate how to execute a stored procedure in LightSwitch. For the purposes of this article, we will be using the RSSBus Email Data Provider, but the same process will work with any of our ADO.NET Providers. Creating the RIA Service. Step 1: Open Visual Studio and create a new WCF RIA Service Class Project. Step 2: Add a reference to the RSSBus Email Data Provider dll in the (ProjectName).Web project. Step 3: Add a new Domain Service Class to the (ProjectName).Web project. Step 4: In the new Domain Service Class, create a new class with the attributes needed for the stored procedure's parameters. In this demo, the stored procedure we are executing is called SendMessage. The parameters we will need are as follows: public class NewMessage{ [Key] public int ID { get; set; } public string FromEmail { get; set; } public string ToEmail { get; set; } public string Subject { get; set; } public string Text { get; set; } } Note: The created class must have an ID, which will serve as the key value. Step 5: Create a new method that will be executed when the insert event fires. Inside this method you can use standard ADO.NET code to execute the stored procedure. [Insert] public void SendMessage(NewMessage newMessage) { try { EmailConnection conn = new EmailConnection(connectionString); EmailCommand comm = new EmailCommand("SendMessage", conn); comm.CommandType = System.Data.CommandType.StoredProcedure; if (!newMessage.FromEmail.Equals("")) comm.Parameters.Add(new EmailParameter("@From", newMessage.FromEmail)); if (!newMessage.ToEmail.Equals("")) comm.Parameters.Add(new EmailParameter("@To", newMessage.ToEmail)); if (!newMessage.Subject.Equals("")) comm.Parameters.Add(new EmailParameter("@Subject", newMessage.Subject)); if (!newMessage.Text.Equals("")) comm.Parameters.Add(new EmailParameter("@Text", newMessage.Text)); comm.ExecuteNonQuery(); } catch (Exception exc) { Console.WriteLine(exc.Message); } } Step 6: Create a query method. We are not going to be using getNewMessages(), so it does not matter what it returns for the purposes of our example, but you will need to create a method for the query event as well. [Query(IsDefault=true)] public IEnumerable<NewMessage> getNewMessages() { return null; } Step 7: Rebuild the whole solution. Creating the LightSwitch Project. Step 8: Open Visual Studio and create a new LightSwitch Application Project. Step 9: Under Data Sources, add a new data source and choose WCF RIA Service. Step 10: Choose to add a new reference and select the (Project Name).Web.dll generated from the RIA Service. Step 11: Select the entities you would like to import. In this case, we are using the recently created NewMessage entity. Step 13: In the Screens section, create a new screen and select the NewMessage entity as the Screen Data. Step 14: After you run the project, you will be able to add a new record and save it. This will execute the stored procedure and send the new message. If you create a screen to check the sent messages, you can refresh this screen to see the mail you sent. Sample Project: To help you get started using stored procedures in LightSwitch, download the fully functional sample project. You will also need the RSSBus Email Data Provider to make the connection. You can download a free trial here.


  • PAM Winbind Expired Password

    - by kernelpanic
    We've got Winbind/Kerberos set up on RHEL for AD authentication. It works fine; however, I noticed that when a password has expired, we get a warning but shell access is still granted. What's the proper way of handling this? Can we tell PAM to close the session once it sees the password has expired? Example: login as: ad-user [email protected]'s password: Warning: password has expired. [ad-user@server ~]$ Contents of /etc/pam.d/system-auth: auth required pam_env.so auth sufficient pam_unix.so nullok try_first_pass auth requisite pam_succeed_if.so uid >= 500 quiet auth sufficient pam_krb5.so use_first_pass auth sufficient pam_winbind.so use_first_pass auth required pam_deny.so account [default=2 success=ignore] pam_succeed_if.so quiet uid >= 10000000 account sufficient pam_succeed_if.so user ingroup AD_Admins debug account requisite pam_succeed_if.so user ingroup AD_Developers debug account required pam_access.so account required pam_unix.so broken_shadow account sufficient pam_localuser.so account sufficient pam_succeed_if.so uid < 500 quiet account [default=bad success=ok user_unknown=ignore] pam_krb5.so account [default=bad success=ok user_unknown=ignore] pam_winbind.so account required pam_permit.so password requisite pam_cracklib.so try_first_pass retry=3 password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok password sufficient pam_krb5.so use_authtok password sufficient pam_winbind.so use_authtok password required pam_deny.so session [default=2 success=ignore] pam_succeed_if.so quiet uid >= 10000000 session sufficient pam_succeed_if.so user ingroup AD_Admins debug session requisite pam_succeed_if.so user ingroup AD_Developers debug session optional pam_mkhomedir.so umask=0077 skel=/etc/skel session optional pam_keyinit.so revoke session required pam_limits.so session optional pam_mkhomedir.so session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid session required pam_unix.so session optional pam_krb5.so


  • Nginx and automatic updates

    - by Desmond Hume
    I'm on Ubuntu 12.04.1 with unattended-upgrades configured for automatic security updates. I installed Nginx by first adding deb http://nginx.org/packages/ubuntu/ lucid nginx deb-src http://nginx.org/packages/ubuntu/ lucid nginx to the /etc/apt/sources.list file, just as suggested by the official wiki, and then running sudo apt-get update sudo apt-get install nginx which installed Nginx with all the standard modules. But now I think I could make good use of one or two of the Nginx optional modules, like the gzip precompression module or some security-related one. So far, I see two ways of adding an optional module to Nginx: one is compiling and installing from the source code, and the other is described in this article. So, which of these ways should I choose so that automatic updates still run for, and apply to, Nginx and its optional modules? Or should I create a cron job with a command/script specific to Nginx instead of using the unattended-upgrades utility? Can I choose between full updates and security-only updates to be automatically applied to the standard and optional modules? And finally, is it possible to automatically update Nginx's modules on the fly (without dropping any connections), as the documentation suggests is possible with sudo kill -USR2 $( cat /run/nginx.pid ) P.S. Actually, I'm not certain the unattended-upgrades utility would automatically update the standard modules in the first place; not enough time has passed since Nginx was installed to say for sure.


  • Problem with date parameter - Oracle

    - by Nicole
    Hi everyone! I have this stored procedure: CREATE OR REPLACE PROCEDURE "LIQUIDACION_OBTENER" ( p_Cuenta IN NUMBER, p_Fecha IN DATE, p_Detalle OUT LIQUIDACION.FILADETALLE%TYPE ) IS BEGIN SELECT FILADETALLE INTO p_Detalle FROM Liquidacion WHERE (FILACUENTA = p_Cuenta) AND (FILAFECHA = p_Fecha); END; and my C# code: string liquidacion = string.Empty; OracleCommand command = new OracleCommand("Liquidacion_Obtener"); command.BindByName = true; command.Parameters.Add(new OracleParameter("p_Cuenta", OracleDbType.Int64)); command.Parameters["p_Cuenta"].Value = cuenta; command.Parameters.Add(new OracleParameter("p_Fecha", OracleDbType.Date)); command.Parameters["p_Fecha"].Value = fecha; command.Parameters.Add("p_Detalle", OracleDbType.Varchar2, ParameterDirection.Output); OracleConnectionHolder connection = null; connection = this.GetConnection(); command.Connection = connection.Connection; command.CommandTimeout = 30; command.CommandType = CommandType.StoredProcedure; OracleDataReader lector = command.ExecuteReader(); while (lector.Read()) { liquidacion += ((OracleString)command.Parameters["p_Detalle"].Value).Value; } The thing is, when I try to put a value into the "p_Fecha" parameter (which is a date), the code gives me this error when the line command.ExecuteReader(); is executed: Oracle.DataAccess.Client.OracleException : ORA-06502: PL/SQL: numeric or value error ORA-06512: at "SYSTEM.LIQUIDACION_OBTENER", line 9 ORA-06512: at line 1 The date in the database is saved as a date, but its format is "2010-APR-14", and the value I send is a DateTime that has this format: "14/04/2010 00:00:00". Could that be the problem? I hope my post is understandable. Thanks!
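    A common cause of ORA-06502 in this exact situation is the output VARCHAR2 parameter being bound without an explicit size, so the buffer the procedure assigns into at line 9 (the SELECT INTO) is effectively zero-length; the date format itself is usually harmless when the value is bound as a date. Below is a minimal sketch of that fix, assuming ODP.NET (Oracle.DataAccess.Client); since the procedure returns its result through the OUT parameter rather than a cursor, ExecuteNonQuery is the more natural call than ExecuteReader:

    using System;
    using System.Data;
    using Oracle.DataAccess.Client;
    using Oracle.DataAccess.Types;

    class LiquidacionSample
    {
        static string ObtenerDetalle(OracleConnection connection, long cuenta, DateTime fecha)
        {
            using (OracleCommand command = new OracleCommand("Liquidacion_Obtener", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.BindByName = true;

                command.Parameters.Add("p_Cuenta", OracleDbType.Int64).Value = cuenta;
                // Bind the date as a date, not a formatted string; the provider
                // converts it regardless of the session's NLS date format.
                command.Parameters.Add("p_Fecha", OracleDbType.Date).Value = fecha.Date;

                // An explicit size is required for output string parameters;
                // without it the buffer is too small and ORA-06502 is raised.
                OracleParameter detalle = command.Parameters.Add("p_Detalle", OracleDbType.Varchar2, 4000);
                detalle.Direction = ParameterDirection.Output;

                command.ExecuteNonQuery();
                return ((OracleString)detalle.Value).Value;
            }
        }
    }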


  • How can I improve this design?

    - by klausbyskov
    Let's assume that our system can perform actions, and that an action requires some parameters to do its work. I have defined the following base class for all actions (simplified for your reading pleasure): public abstract class BaseBusinessAction<TActionParameters> where TActionParameters : IActionParameters { protected BaseBusinessAction(TActionParameters actionParameters) { if (actionParameters == null) throw new ArgumentNullException("actionParameters"); this.Parameters = actionParameters; if (!ParametersAreValid()) throw new ArgumentException("Valid parameters must be supplied", "actionParameters"); } protected TActionParameters Parameters { get; private set; } protected abstract bool ParametersAreValid(); public void CommonMethod() { ... } } Only a concrete implementation of BaseBusinessAction knows how to validate that the parameters passed to it are valid, and therefore ParametersAreValid is an abstract function. However, I want the base class constructor to enforce that the parameters passed are always valid, so I've added a call to ParametersAreValid to the constructor, and I throw an exception when the function returns false. So far so good, right? Well, no. Code analysis is telling me to "not call overridable methods in constructors", which actually makes a lot of sense, because when the base class's constructor is called, the child class's constructor has not yet been called, and therefore the ParametersAreValid method may not have access to some critical member variable that the child class's constructor would set. So the question is this: How do I improve this design? Do I add a Func<TActionParameters, bool> parameter to the base class constructor? If I did: public class MyAction<MyParameters> { public MyAction(MyParameters actionParameters, bool something) : base(actionParameters, ValidateIt) { this.something = something; } private bool something; public static bool ValidateIt() { return something; } } This would work because ValidateIt is static, but I don't know... Is there a better way? Comments are very welcome.
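    One commonly suggested refactoring, sketched below under the assumption that you control the IActionParameters interface: move validation onto the parameters object itself (or a dedicated validator object), so the base constructor only ever calls a method on an already fully constructed dependency, never a virtual member on a partially constructed subclass:

    using System;

    public interface IActionParameters
    {
        // Each concrete parameters type knows how to validate itself.
        bool IsValid { get; }
    }

    public abstract class BaseBusinessAction<TActionParameters>
        where TActionParameters : IActionParameters
    {
        protected BaseBusinessAction(TActionParameters actionParameters)
        {
            if (actionParameters == null)
                throw new ArgumentNullException("actionParameters");
            if (!actionParameters.IsValid)
                throw new ArgumentException("Valid parameters must be supplied", "actionParameters");

            this.Parameters = actionParameters;
        }

        protected TActionParameters Parameters { get; private set; }

        public void CommonMethod() { /* ... */ }
    }

    // Example concrete parameters type: the validation logic lives next to
    // the data it inspects, instead of on the action subclass.
    public class MyParameters : IActionParameters
    {
        public bool Something { get; set; }
        public bool IsValid { get { return Something; } }
    }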


  • Communication between EJB3 Instances (JEE inter-bean communication) possible?

    - by Hank
    I'm designing a part of a JEE6 application, consisting of EJB3 beans. Part of the requirements are multiple parallel (say a few hundred) long-running (over days) database hunts. Individual hunts have different search parameters (start time, end time, query filter). Parameters may get changed over time. Currently I'm thinking of the following: SearchController (Stateless Session Bean) formulates a set of search parameters and sends it off to a SearchListener via JMS; SearchListener (Message Driven Bean) receives the search parameters and instantiates a SearchWorker with those parameters; SearchWorker (SLSB) hunts repeatedly through the database; when it finds something, the result is sent off via JMS and the search continues; when the given 'end-time' has been reached, it ends. What I'm wondering now: Is there a problem with EJB3 instances running for days? (Other than that I need to be able to deal with container restarts...) How do I know how many, and which, EJB instances of SearchWorker are currently running? Is it possible to communicate with them individually (similar to sending a System V signal to a unix process), e.g. to send new parameters, to end an instance, etc.?


  • OdbcCommand on Stored Procedure - "Parameter not supplied" error on Output parameter

    - by Aaron
    I'm trying to execute a stored procedure (against SQL Server 2005 through the ODBC driver) and I receive the following error: Procedure or Function 'GetNodeID' expects parameter '@ID', which was not supplied. @ID is the OUTPUT parameter of my procedure; there is an input parameter @machine, which defaults to null in the stored procedure: ALTER PROCEDURE [dbo].[GetNodeID] @machine nvarchar(32) = null, @ID int OUTPUT AS BEGIN SET NOCOUNT ON; IF EXISTS(SELECT * FROM Nodes WHERE NodeName=@machine) BEGIN SELECT @ID = (SELECT NodeID FROM Nodes WHERE NodeName=@machine) END ELSE BEGIN INSERT INTO Nodes (NodeName) VALUES (@machine) SELECT @ID = (SELECT NodeID FROM Nodes WHERE NodeName=@machine) END END The following is the code I'm using to set the parameters and call the procedure: OdbcCommand Cmd = new OdbcCommand("GetNodeID", _Connection); Cmd.CommandType = CommandType.StoredProcedure; Cmd.Parameters.Add("@machine", OdbcType.NVarChar); Cmd.Parameters["@machine"].Value = Environment.MachineName.ToLower(); Cmd.Parameters.Add("@ID", OdbcType.Int); Cmd.Parameters["@ID"].Direction = ParameterDirection.Output; Cmd.ExecuteNonQuery(); _NodeID = (int)Cmd.Parameters["@Count"].Value; I've also tried using Cmd.ExecuteScalar with no success. If I break before I execute the command, I can see that @machine has a value. If I execute the procedure directly from Management Studio, it works correctly. Any thoughts? Thanks
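    For what it's worth, the ODBC provider binds stored procedure parameters positionally and expects the ODBC CALL escape syntax rather than a bare procedure name, which matches the "parameter not supplied" symptom here. A minimal sketch of that variant (note it also reads the output back from "@ID", not "@Count" as in the snippet above):

    using System;
    using System.Data;
    using System.Data.Odbc;

    class NodeIdSample
    {
        static int GetNodeId(OdbcConnection connection)
        {
            // ODBC CALL escape syntax; "?" markers are bound by position.
            using (OdbcCommand cmd = new OdbcCommand("{call GetNodeID(?, ?)}", connection))
            {
                cmd.CommandType = CommandType.StoredProcedure;

                // Parameter order must match the procedure's declaration order.
                cmd.Parameters.Add("@machine", OdbcType.NVarChar, 32).Value =
                    Environment.MachineName.ToLower();

                OdbcParameter id = cmd.Parameters.Add("@ID", OdbcType.Int);
                id.Direction = ParameterDirection.Output;

                cmd.ExecuteNonQuery();
                return (int)id.Value;
            }
        }
    }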


  • how to create a DataAccessLayer ?

    - by NIGHIL DAS
    Hi, I am creating a database application in .NET. I am using a DataAccessLayer class for communicating between .NET objects and the database, but I am not sure whether this class is correct. Can anyone cross-check it and rectify any mistakes?

    namespace IDataaccess { #region Collection Class public class SPParamCollection : List<SPParams> { } public class SPParamReturnCollection : List<SPParams> { } #endregion #region struct public struct SPParams { public string Name { get; set; } public object Value { get; set; } public ParameterDirection ParamDirection { get; set; } public SqlDbType Type { get; set; } public int Size { get; set; } public string TypeName { get; set; } // public string datatype; } #endregion /// <summary> /// Interface DataAccess Layer implementation New version /// </summary> public interface IDataAccess { DataTable getDataUsingSP(string spName); DataTable getDataUsingSP(string spName, SPParamCollection spParamCollection); DataSet getDataSetUsingSP(string spName); DataSet getDataSetUsingSP(string spName, SPParamCollection spParamCollection); SqlDataReader getDataReaderUsingSP(string spName); SqlDataReader getDataReaderUsingSP(string spName, SPParamCollection spParamCollection); int executeSP(string spName); int executeSP(string spName, SPParamCollection spParamCollection, bool addExtraParmas); int executeSP(string spName, SPParamCollection spParamCollection); DataTable getDataUsingSqlQuery(string strSqlQuery); int executeSqlQuery(string strSqlQuery); SPParamReturnCollection executeSPReturnParam(string spName, SPParamReturnCollection spParamReturnCollection); SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection); SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection, bool addExtraParmas); int executeSPReturnParam(string spName, SPParamCollection spParamCollection, ref SPParamReturnCollection spParamReturnCollection); object getScalarUsingSP(string spName); object getScalarUsingSP(string spName, SPParamCollection spParamCollection); } }

    using IDataaccess; namespace Dataaccess { /// <summary> /// Class DataAccess Layer implementation New version /// </summary> public class DataAccess : IDataaccess.IDataAccess { #region Public variables static string Strcon; DataSet dts = new DataSet(); public DataAccess() { Strcon = sReadConnectionString(); } private string sReadConnectionString() { try { //dts.ReadXml("C:\\cnn.config"); //Strcon = dts.Tables[0].Rows[0][0].ToString(); //System.Configuration.Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); //Strcon = config.ConnectionStrings.ConnectionStrings["connectionString"].ConnectionString; // Add an Application Setting. //Strcon = "Data Source=192.168.50.103;Initial Catalog=erpDB;User ID=ipixerp1;Password=NogoXVc3"; Strcon = System.Configuration.ConfigurationManager.AppSettings["connection"]; //Strcon = System.Configuration.ConfigurationSettings.AppSettings[0].ToString(); } catch (Exception) { } return Strcon; } public SqlConnection connection; public SqlCommand cmd; public SqlDataAdapter adpt; public DataTable dt; public int intresult; public SqlDataReader sqdr; #endregion

    #region Public Methods public DataTable getDataUsingSP(string spName) { return getDataUsingSP(spName, null); } public DataTable getDataUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); dt = new DataTable(); adpt.Fill(dt); return (dt); } } } finally { connection.Close(); } }

    public DataSet getDataSetUsingSP(string spName) { return getDataSetUsingSP(spName, null); } public DataSet getDataSetUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); DataSet ds = new DataSet(); adpt.Fill(ds); return ds; } } } finally { connection.Close(); } }

    public SqlDataReader getDataReaderUsingSP(string spName) { return getDataReaderUsingSP(spName, null); } public SqlDataReader getDataReaderUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; sqdr = cmd.ExecuteReader(); return (sqdr); } } } finally { connection.Close(); } }

    public int executeSP(string spName) { return executeSP(spName, null); } public int executeSP(string spName, SPParamCollection spParamCollection, bool addExtraParmas) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { SqlParameter par = new SqlParameter(spParamCollection[count].Name, spParamCollection[count].Value); if (addExtraParmas) { par.TypeName = spParamCollection[count].TypeName; par.SqlDbType = spParamCollection[count].Type; } cmd.Parameters.Add(par); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; return (cmd.ExecuteNonQuery()); } } } finally { connection.Close(); } } public int executeSP(string spName, SPParamCollection spParamCollection) { return executeSP(spName, spParamCollection, false); }

    public DataTable getDataUsingSqlQuery(string strSqlQuery) { try { using (connection = new SqlConnection(Strcon)) connection.Open(); { using (cmd = new SqlCommand(strSqlQuery, connection)) { cmd.CommandType = CommandType.Text; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); dt = new DataTable(); adpt.Fill(dt); return (dt); } } } finally { connection.Close(); } } public int executeSqlQuery(string strSqlQuery) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(strSqlQuery, connection)) { cmd.CommandType = CommandType.Text; cmd.CommandTimeout = 60; intresult = cmd.ExecuteNonQuery(); return (intresult); } } } finally { connection.Close(); } }

    public SPParamReturnCollection executeSPReturnParam(string spName, SPParamReturnCollection spParamReturnCollection) { return executeSPReturnParam(spName, null, spParamReturnCollection); } public int executeSPReturnParam() { return 0; } public int executeSPReturnParam(string spName, SPParamCollection spParamCollection, ref SPParamReturnCollection spParamReturnCollection) { try { SPParamReturnCollection spParamReturned = new SPParamReturnCollection(); using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; foreach (SPParams paramReturn in spParamReturnCollection) { SqlParameter _parmReturn = new SqlParameter(paramReturn.Name, paramReturn.Size); _parmReturn.Direction = paramReturn.ParamDirection; if (paramReturn.Size > 0) _parmReturn.Size = paramReturn.Size; else _parmReturn.Size = 32; _parmReturn.SqlDbType = paramReturn.Type; cmd.Parameters.Add(_parmReturn); } cmd.CommandTimeout = 60; intresult = cmd.ExecuteNonQuery(); connection.Close(); //for (int i = 0; i < spParamReturnCollection.Count; i++) //{ // spParamReturned.Add(new SPParams // { // Name = spParamReturnCollection[i].Name, // Value = cmd.Parameters[spParamReturnCollection[i].Name].Value // }); //} } } return intresult; } finally { connection.Close(); } }

    public SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection) { return executeSPReturnParam(spName, spParamCollection, spParamReturnCollection, false); } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection, bool addExtraParmas) { try { SPParamReturnCollection spParamReturned = new SPParamReturnCollection(); using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { //cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); SqlParameter par = new SqlParameter(spParamCollection[count].Name, spParamCollection[count].Value); if (addExtraParmas) { par.TypeName = spParamCollection[count].TypeName; par.SqlDbType = spParamCollection[count].Type; } cmd.Parameters.Add(par); } cmd.CommandType = CommandType.StoredProcedure; foreach (SPParams paramReturn in spParamReturnCollection) { SqlParameter _parmReturn = new SqlParameter(paramReturn.Name, paramReturn.Value); _parmReturn.Direction = paramReturn.ParamDirection; if (paramReturn.Size > 0) _parmReturn.Size = paramReturn.Size; else _parmReturn.Size = 32; _parmReturn.SqlDbType = paramReturn.Type; cmd.Parameters.Add(_parmReturn); } cmd.CommandTimeout = 60; cmd.ExecuteNonQuery(); connection.Close(); for (int i = 0; i < spParamReturnCollection.Count; i++) { spParamReturned.Add(new SPParams { Name = spParamReturnCollection[i].Name, Value = cmd.Parameters[spParamReturnCollection[i].Name].Value }); } } } return spParamReturned; } catch (Exception ex) { return null; } finally { connection.Close(); } }

    public object getScalarUsingSP(string spName) { return getScalarUsingSP(spName, null); } public object getScalarUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); cmd.CommandTimeout = 60; } cmd.CommandType = CommandType.StoredProcedure; return cmd.ExecuteScalar(); } } } finally { connection.Close(); cmd.Dispose(); } } #endregion } }
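    A few recurring issues stand out more than any single method: the public SqlConnection/SqlCommand/SqlDataAdapter fields are shared mutable state (not thread-safe), getDataUsingSqlQuery opens the connection outside its using block, the catch in sReadConnectionString and in the last executeSPReturnParam overload silently swallows exceptions, and the reader-returning methods dispose the connection before the caller can read from it. Below is a condensed sketch of the safer shape, reusing the post's SPParamCollection/SPParams types; it is illustrative, not a drop-in replacement for the whole class:

    using System.Configuration;
    using System.Data;
    using System.Data.SqlClient;

    public class DataAccessSketch
    {
        private readonly string connectionString =
            ConfigurationManager.AppSettings["connection"];

        public DataTable GetDataUsingSP(string spName, SPParamCollection spParams)
        {
            // Local connection/command: no shared fields, "using" closes everything.
            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand(spName, connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.CommandTimeout = 60;
                if (spParams != null)
                    foreach (SPParams p in spParams)
                        command.Parameters.AddWithValue(p.Name, p.Value);

                DataTable table = new DataTable();
                new SqlDataAdapter(command).Fill(table); // Fill opens/closes for us
                return table;
            }
        }

        public SqlDataReader GetDataReaderUsingSP(string spName)
        {
            // Deliberately no "using" on the connection here: the caller reads
            // later, so tie the connection's lifetime to the reader instead.
            SqlConnection connection = new SqlConnection(connectionString);
            connection.Open();
            SqlCommand command = new SqlCommand(spName, connection);
            command.CommandType = CommandType.StoredProcedure;
            return command.ExecuteReader(CommandBehavior.CloseConnection);
        }
    }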


  • asp.net server controls

    - by Richard Friend
    Okay, I have a custom server control that has some autocomplete settings. I have this as follows, and it works fine: /// <summary> /// Auto complete settings /// </summary> [System.ComponentModel.DesignerSerializationVisibility (System.ComponentModel.DesignerSerializationVisibility.Content), PersistenceMode(PersistenceMode.InnerProperty), Category("Data"), Description("Auto complete settings"), NotifyParentProperty(true)] public AutoCompleteLookupSettings AutoComplete { private set; get; } I also have a ParameterCollection that is really related to the autocomplete settings; currently this collection resides on the control itself, like so: /// <summary> /// Parameters for any data lookups /// </summary> [System.ComponentModel.DesignerSerializationVisibility(System.ComponentModel.DesignerSerializationVisibility.Content), PersistenceMode(PersistenceMode.InnerProperty)] public ParameterCollection Parameters { get; set; } What I would like to do is move the parameter collection inside of AutoCompleteSettings, as it really relates to my autocomplete. I have tried this, but to no avail. I would like to move from <cc1:TextField ID="TextField1" runat='server'> <AutoComplete MethodName="GetTest" TextField="Item1" TypeName ="AppFrameWork.Utils" /> <Parameters> <asp:ControlParameter ControlID="txtTest" PropertyName="Text" Name="test" /> </Parameters> </cc1:TextField> to <cc1:TextField ID="TextField1" runat='server'> <AutoComplete MethodName="GetTest" TextField="Item1" TypeName ="AppFrameWork.Utils" > <Parameters> <asp:ControlParameter ControlID="txtTest" PropertyName="Text" Name="test" /> </Parameters> </AutoComplete> </cc1:TextField>
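    One approach that generally works for nested inner properties, assuming AutoCompleteLookupSettings is your own class: declare the collection on the settings class itself, marked as a content-serialized inner property, and expose it read-only with lazy creation so the page parser populates the existing instance rather than trying to assign a new one. A sketch:

    using System.ComponentModel;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public class AutoCompleteLookupSettings
    {
        private ParameterCollection parameters;

        public string MethodName { get; set; }
        public string TextField { get; set; }
        public string TypeName { get; set; }

        [DesignerSerializationVisibility(DesignerSerializationVisibility.Content),
         PersistenceMode(PersistenceMode.InnerProperty),
         NotifyParentProperty(true)]
        public ParameterCollection Parameters
        {
            // Read-only with lazy creation: the parser fills the existing
            // collection instead of assigning a new one.
            get { return parameters ?? (parameters = new ParameterCollection()); }
        }
    }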


  • How to send check boxes ID in gridview in one string for mass update.

    - by SmartDev
    Hi, I have a grid with check boxes; checking the top check box (select all) selects all the check boxes in the gridview for a mass update. For this I'm using a for loop that executes an update on every iteration, and this is taking a lot of time because there are more than 100 records in the grid. try { string StrOutputMessageDisplayDocReqCsu = string.Empty; string strid = string.Empty; string strflag = string.Empty; string strSelected = string.Empty; for (int j = 0; j < GdvDocReqMU.Rows.Count; j++) { CheckBox Chkupdate = (CheckBox)GdvDocReqMU.Rows[j].Cells[1].FindControl("chkDR"); if (Chkupdate != null) { if (Chkupdate.Checked) { strid = ((Label)GdvDocReqMU.Rows[j].FindControl("lblIDDocReqCsu")).Text; strflag = ((Label)GdvDocReqMU.Rows[j].FindControl("lblStatusDocReqCsu")).Text; cmd = new SqlCommand("sp_Update_v1", con); cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.AddWithValue("@id", strSelected); cmd.Parameters.AddWithValue("@flag", DdlStatusDocReqMU.SelectedValue); cmd.Parameters.AddWithValue("@notes", txtnotesDocReqMU.Text); cmd.Parameters.AddWithValue("@user", strUseridDRAhk); cmd.Parameters.Add(new SqlParameter("@message", SqlDbType.VarChar, 100, ParameterDirection.Output, false, 0, 50, "message", DataRowVersion.Default, null)); cmd.UpdatedRowSource = UpdateRowSource.OutputParameters; con.Open(); cmd.ExecuteNonQuery(); con.Close(); } } } //} //StrOutputMessageDisplayDocReqCsu += (string)cmd.Parameters["@message"].Value + "<br/>"; //lbldbmessDocReqAhk.Text += StrOutputMessageDisplayDocReqCsu + "<br>"; GetgridDocReq(); } catch (Exception ex) { lbldbmessDocReqAhk.Text = ex.Message.ToString(); }
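    To make this a single round trip, one common pattern is to gather the checked IDs first, join them into one comma-separated string, and let a single stored procedure split the list server-side. A sketch along those lines for the same code-behind (the procedure name sp_Update_Mass and its @idList parameter are hypothetical; the grid, drop-down, and connection fields are the ones from the question, and the usings belong at the top of the file):

    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;
    using System.Web.UI.WebControls;

    protected void UpdateSelectedRows()
    {
        List<string> ids = new List<string>();
        foreach (GridViewRow row in GdvDocReqMU.Rows)
        {
            CheckBox chk = (CheckBox)row.FindControl("chkDR");
            if (chk != null && chk.Checked)
                ids.Add(((Label)row.FindControl("lblIDDocReqCsu")).Text);
        }

        if (ids.Count == 0) return;

        using (SqlCommand cmd = new SqlCommand("sp_Update_Mass", con))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            // e.g. "12,57,203" - split this inside the procedure
            cmd.Parameters.AddWithValue("@idList", string.Join(",", ids.ToArray()));
            cmd.Parameters.AddWithValue("@flag", DdlStatusDocReqMU.SelectedValue);
            cmd.Parameters.AddWithValue("@notes", txtnotesDocReqMU.Text);
            cmd.Parameters.AddWithValue("@user", strUseridDRAhk);
            con.Open();
            cmd.ExecuteNonQuery();
            con.Close();
        }
    }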


  • SugarmCRM REST API always returns "null"

    - by TuomasR
    I'm trying to test out the SugarCRM REST API, running the latest version of CE (6.0.1). It seems that whenever I do a query, the API returns "null" and nothing else. If I omit the parameters, then the API returns the API description (which the documentation says it should). I'm trying to perform a login, passing as parameters the method (login), input_type and response_type (json), and rest_data (JSON-encoded parameters). The following code performs the query: $api_target = "http://example.com/sugarcrm/service/v2/rest.php"; $parameters = json_encode(array( "user_auth" => array( "user_name" => "admin", "password" => md5("adminpassword"), ), "application_name" => "Test", "name_value_list" => array(), )); $postData = http_build_query(array( "method" => "login", "input_type" => "json", "response_type" => "json", "rest_data" => $parameters )); echo $parameters . "\n"; echo $postData . "\n"; echo file_get_contents($api_target, false, stream_context_create(array( "http" => array( "method" => "POST", "header" => "Content-Type: application/x-www-form-urlencoded\r\n", "content" => $postData ) ))) . "\n"; I've tried different variations of the parameters, including using username instead of user_name, and all of them give the same result: just the response "null" and that's it.



  • Access DB Transaction on insert or updating

    - by Raju Gujarati
    I am going to implement the database access layer of a Windows application using C#. The database (.accdb) is located with the project files. When two notebooks (clients) connect to the one Access database through switches, it throws a DBConcurrencyException. My goal is to check the timestamp of the SQL executed first and then run the SQL. Would you please provide me some guidelines to achieve this? Below is my code: protected void btnTransaction_Click(object sender, EventArgs e) { string custID = txtID.Text; string CompName = txtCompany.Text; string contact = txtContact.Text; string city = txtCity.Text; string connString = ConfigurationManager.ConnectionStrings["CustomersDatabase"].ConnectionString; OleDbConnection connection = new OleDbConnection(connString); connection.Open(); OleDbCommand command = new OleDbCommand(); command.Connection = connection; OleDbTransaction transaction = connection.BeginTransaction(); command.Transaction = transaction; try { command.CommandText = "INSERT INTO Customers(CustomerID, CompanyName, ContactName, City, Country) VALUES(@CustomerID, @CompanyName, @ContactName, @City, @Country)"; command.CommandType = CommandType.Text; command.Parameters.AddWithValue("@CustomerID", custID); command.Parameters.AddWithValue("@CompanyName", CompName); command.Parameters.AddWithValue("@ContactName", contact); command.Parameters.AddWithValue("@City", city); command.ExecuteNonQuery(); command.CommandText = "UPDATE Customers SET ContactName = @ContactName2 WHERE CustomerID = @CustomerID2"; command.CommandType = CommandType.Text; command.Parameters.AddWithValue("@CustomerID2", custIDUpdate); command.Parameters.AddWithValue("@ContactName2", contactUpdate); command.ExecuteNonQuery(); adapter.Fill(table); GridView1.DataSource = table; GridView1.DataBind(); transaction.Commit(); lblMessage.Text = "Transaction successfully completed"; } catch (Exception ex) { transaction.Rollback(); lblMessage.Text = "Transaction is not completed"; } finally { connection.Close(); } }
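    One way to implement the "check before you write" idea is optimistic concurrency: keep a last-modified column on the table (LastModified below is a hypothetical column), read it together with the row, and make the UPDATE conditional on it being unchanged. Zero affected rows then means another client changed the row first, and you can retry or merge instead of failing. A sketch over OLE DB:

    using System;
    using System.Data.OleDb;

    static bool TryUpdateContact(OleDbConnection connection, string customerId,
                                 string newContact, DateTime lastModifiedWhenRead)
    {
        using (OleDbCommand command = new OleDbCommand(
            "UPDATE Customers SET ContactName = ?, LastModified = ? " +
            "WHERE CustomerID = ? AND LastModified = ?", connection))
        {
            // OLE DB parameters are positional; order must match the ? markers.
            command.Parameters.AddWithValue("@p1", newContact);
            command.Parameters.AddWithValue("@p2", DateTime.Now);
            command.Parameters.AddWithValue("@p3", customerId);
            command.Parameters.AddWithValue("@p4", lastModifiedWhenRead);

            // One row updated: we won the race. Zero rows: someone else did.
            return command.ExecuteNonQuery() == 1;
        }
    }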


  • Call any webservice from the same $.ajax() call

    - by Andreas
    Hi! I'm creating a user control that is controlled client-side; it has a JavaScript file attached to it. This control has a button, and upon click a popup appears; this popup shows a list of my domain entities. My entities are fetched using a call to a web service. I'm trying to get this popup user control to work with all my entities, so I need to be able to call whichever web service is needed (one per entity, for example) from the same $.ajax() call. I have hidden fields for the web service URL in my user control, which you specify in the markup via a property. So far so good. The problem arises when I need some additional parameters for the web service (other than pageSize and pageIndex). Say, for example, that one web service takes an additional parameter "Date". At the moment I have my parameters set up like this: var params = JSON.stringify({ pageSize: _this.pageSize, pageIndex: _this.pageIndex }); and then I call the web service like so: $.ajax({ webserviceUrl, params, function(result) { //some logic }); }); What I want to do is to be able to add my extra parameters (Date) to "params" when needed; the specification of these parameters will be done via properties of the user control. So, bottom line, I have a set of default parameters and want to dynamically add optional extra parameters. How is this possible? Thanks in advance.



  • C#: ExecuteNonQuery() returns -1 when execute the stored procedure

    - by user1122359
    I'm trying to execute a stored procedure from Visual Studio. It's given below. CREATE PROCEDURE [dbo].[addStudent] @stuName varchar(50), @address varchar(100), @tel varchar(15), @etel varchar(15), @nic varchar (10), @dob date AS BEGIN SET NOCOUNT ON; DECLARE @currentID INT DECLARE @existPerson INT SET @existPerson = (SELECT p_ID FROM Student WHERE s_NIC = @nic); IF @existPerson = null BEGIN INSERT INTO Person (p_Name, p_RegDate, p_Address, p_Tel, p_EmergeNo, p_Valid, p_Userlevel) VALUES (@stuName, GETDATE(), @address, @tel, @etel, 0, 'Student' ); SET @currentID = (SELECT MAX( p_ID) FROM Person); INSERT INTO Student (p_ID, s_Barcode, s_DOB, s_NIC) VALUES (@currentID , NULL, @dob, @nic); return 0; END ELSE return -1; END I'm calling it using the code below. SqlConnection con = new SqlConnection(); Connect conn = new Connect(); con = conn.getConnected(); con.Open(); cmd = new SqlCommand("addStudent", con); cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.Add("@stuName", SqlDbType.VarChar).Value = nameTxt.Text.ToString(); cmd.Parameters.Add("@address", SqlDbType.VarChar).Value = addressTxt.Text.ToString(); cmd.Parameters.Add("@tel", SqlDbType.VarChar).Value = telTxt.Text.ToString(); cmd.Parameters.Add("@etel", SqlDbType.VarChar).Value = emerTxt.Text.ToString(); cmd.Parameters.Add("@nic", SqlDbType.VarChar).Value = nicTxt.Text.ToString(); cmd.Parameters.Add("@dob", SqlDbType.DateTime).Value = dobTime.Value.ToString("MM-dd-yyyy"); int n = cmd.ExecuteNonQuery(); MessageBox.Show(n.ToString()); But it returns -1. I tried the stored procedure directly, entering the same values I captured while debugging, and it was successful. What could the possible error be? Thanks a lot!
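    Two things are worth checking here, sketched below. First, ExecuteNonQuery reports affected rows (and returns -1 when SET NOCOUNT ON suppresses the counts); it never surfaces the procedure's RETURN value, so reading the "return 0 / return -1" codes needs a ReturnValue-direction parameter. Second, "IF @existPerson = null" in the procedure is never true under the default ANSI_NULLS setting ("IS NULL" is the usual comparison), which would send every call down the ELSE branch:

    using System.Data;
    using System.Data.SqlClient;

    static int AddStudent(SqlCommand cmd)
    {
        // cmd is set up exactly as in the question (procedure name, inputs).
        cmd.CommandType = CommandType.StoredProcedure;

        SqlParameter returnValue = cmd.Parameters.Add("@ReturnVal", SqlDbType.Int);
        returnValue.Direction = ParameterDirection.ReturnValue;

        cmd.ExecuteNonQuery();

        // 0 = inserted, -1 = person already existed (per the procedure's RETURNs)
        return (int)returnValue.Value;
    }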


  • Sending Outlook Invites

    - by Daniel Moth
    Sending an Outlook invite for a meeting (also referred to as S+ in Microsoft) is a simple thing to get right if you just run the quick mental check below, which is driven by visual cues in the Outlook UI. I know that some folks don't do this often or are new to Outlook, so if you know one of those folks share this blog post with them, and if they read nothing else ask them to read step 7.

    1. Add on the To line the folks that you want to be at the meeting.
    2. Indicate optional invitees. Click on the "To" button to bring up the dialog that lets you move folks to be Optional (you can also do this from the Scheduling Assistant).
    3. Set the Reminder according to the attendee that has to travel the most. 5 minutes is the minimum.
    4. Use the Response Options and uncheck the "Request Response" if your event is going ahead regardless of who can make it or not, i.e. if everyone is optional. Don't force every recipient to make an extra click; instead make the extra click yourself - you are the organizer.
    5. Add a good subject. Make the subject such that just by reading it folks know what the meeting is about, e.g. "Review…", "Finalize…", "XYZ sync up". If this is only between two people and what is commonly referred to as a one to one, the subject would be something like "MyName/YourName 1:1". Write the subject in such a way that when the recipient sees this on their calendar among all the other items, they know what this meeting is about without having to see location, recipients, or any other information about the invite.
    6. Add a location, typically a meeting room. If recipients are from different buildings, schedule it where the folks that are doing the other folks a favor live. Otherwise schedule it wherever the least amount of people will have to travel. If you send me an invite to come to your building, and there are more of us than you, you are silently sending me the message that you are doing me a favor, so if you don't want to do that, include a note of why this is in your building, e.g. "Sorry we are slammed with back to back meetings today so hope you can come over to our building". If this is in someone's office, the location would be something like "Moth's office (7/666)", where in parentheses you see the office location. If some folks are remote in another building/country, or if you know you picked a time which wasn't free for everyone, add an Online option (click the Lync Meeting button).
    7. Add a date and time. This MUST be at a time that is showing on the recipients' calendar as FREE or at worst TENTATIVE. You can check that on the Scheduling Assistant. The reality is that this is not always possible, so in that case you MUST say something about it in the invite body, e.g. "Sorry I can see X has a conflict, but I cannot find a better slot", or "With so many of us there are some conflicts and I cannot find a better slot so hope this works", or "Apologies but due to Y we must have this meeting at this time and I know there are some conflicts, hope you can make it anyway". When you do that, I better not be able to find a better slot myself for all of us, and of course when you do that you have implicitly designated the Busy folks as optional.
    8. Finally, the body of the invite. This has the agenda of the meeting and, if applicable, the courtesy apologies due to messing up steps 6 & 7. This should not be the introduction to the meeting; in other words, the recipients should not be surprised when they see the invite and go to the body to read it. Notifying them of the meeting takes place via a separate email where you explain the purpose and give them a heads up that you'll be sending an invite. That separate email is also your chance to attach documents; don't do that as part of the invite.

    TIP: If you have sent mail about the meeting, you can then go to your sent folder, select the message, and click the "Meeting" button (Ctrl+Alt+R). This will populate the body with the necessary background, auto-select the mandatory and optional attendees as per the TO/CC line, and have a subject that may be good enough already (or you can tweak it).

    Long to write, but very quick to remember and enforce, since most of it is common sense and the checklist is driven off the visual cues in the UI you use to send the invite. Comments about this post by Daniel Moth welcome at the original blog.


  • Latency Matters

    - by Frederic P
    A lot of interest in low latencies has been expressed within the financial services segment, especially in stock trading applications, where every millisecond directly influences the profitability of the trader. These days, much of the trading is executed by software applications that are designed to respond to each other almost instantaneously. In fact, you could say that we are in an arms race where traders are using any and all options to cut down on the delay in executing transactions, even by moving physically closer to the trading venue. The Solaris OS network stack has traditionally been engineered for high throughput, at the expense of higher latencies. Knowledge of the tuning parameters to redress the imbalance is critical for applications that are latency sensitive. In this blog we present how to further configure a default Oracle Solaris 10 installation to reduce network latency. There are in fact many parameters that can be altered, but the most effective ones are intr_blank_time and intr_blank_packets. These parameters affect on-board network throughput and latency on Solaris systems. If interrupt blanking is disabled, packets are processed by the driver as soon as they arrive, resulting in higher network throughput and lower latency, but with higher CPU utilization. With interrupt blanking disabled, processor utilization can be as high as 80–90% in some high-load web server environments. If interrupt blanking is enabled, packets are processed when the interrupt is issued. Enabling interrupt blanking can result in reduced processor utilization and network throughput, but higher network latency. Both parameters should be set at the same time. You can set these parameters by using the ndd command as follows: # ndd -set /dev/eri intr_blank_time 0 # ndd -set /dev/eri intr_blank_packets 0 You can add them to the /etc/system file as follows: set eri:intr_blank_time 0 set eri:intr_blank_packets 0 The value of the interrupt blanking parameter is a trade-off between network throughput and processor utilization. If higher processor utilization is acceptable for achieving higher network throughput, then disable interrupt blanking. If lower processor utilization is preferred and higher network latency is the penalty, then enable interrupt blanking. Our experience at ISV Engineering is that under controlled experiments the above settings result in a reduction of network latency of at least 50%; on a two-socket 3GHz Sun Fire X4170 M2 running Solaris 10 Update 9, the above settings improved ping-pong latency from 60µs to 25-30µs with the on-board NIC.


  • C#/.NET Little Wonders: Getting Caller Information

    - by James Michael Hare
    Originally posted on: http://geekswithblogs.net/BlackRabbitCoder/archive/2013/07/25/c.net-little-wonders-getting-caller-information.aspx Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. There are times when it is desirable to know who called the method or property you are currently executing.  Some applications of this could include logging libraries, or possibly even something more advanced that may serve up different objects depending on who called the method. In the past, we mostly relied on the System.Diagnostics namespace and its classes such as StackTrace and StackFrame to see who our caller was, but now in C# 5, we can also get much of this data at compile-time. Determining the caller using the stack One of the ways of doing this is to examine the call stack.  The classes that allow you to examine the call stack have been around for a long time and can give you a very deep view of the calling chain all the way back to the beginning for the thread that has called you. You can get caller information by either instantiating the StackTrace class (which will give you the complete stack trace, much like you see when an exception is generated), or by using StackFrame which gets a single frame of the stack trace.  Both involve examining the call stack, which is a non-trivial task, so care should be taken not to do this in a performance-intensive situation. For our simple example let's say we are going to recreate the wheel and construct our own logging framework.  Perhaps we wish to create a simple method Log which will log the string-ified form of an object and some information about the caller.  We could easily do this as follows: 1: static void Log(object message) 2: { 3: // frame 1, true for source info 4: StackFrame frame = new StackFrame(1, true); 5: var method = frame.GetMethod(); 6: var fileName = frame.GetFileName(); 7: var lineNumber = frame.GetFileLineNumber(); 8: 9: // we'll just use a simple Console write for now 10: Console.WriteLine("{0}({1}):{2} - {3}", 11: fileName, lineNumber, method.Name, message); 12: } So, what we are doing here is grabbing the 2nd stack frame (the 1st is our current method) using a 2nd argument of true to specify we want source information (if available) and then taking the information from the frame.  This works fine, and if we tested it out by calling from a file such as this: 1: // File c:\projects\test\CallerInfo\CallerInfo.cs 2:  3: public class CallerInfo 4: { 5: static void Main() { Log("Hello Logger!"); } 6: } We'd see this: 1: c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger! This works well, and in fact StackTrace and StackFrame are still the best ways to examine deeper into the call stack.  But if you only want to get information on the caller of your method, there is another option… Determining the caller at compile-time In C# 5 (.NET 4.5) they added some attributes that can be supplied to optional parameters on a method to receive caller information.  These attributes can only be applied to methods with optional parameters with explicit defaults.  Then, as the compiler determines who is calling your method with these attributes, it will fill in the values at compile-time.
    These are the currently supported attributes, available in the System.Runtime.CompilerServices namespace: CallerFilePathAttribute – The path and name of the file that is calling your method. CallerLineNumberAttribute – The line number in the file where your method is being called. CallerMemberName – The member that is calling your method. So let's take a look at how our Log method would look using these attributes instead: 1: static int Log(object message, 2: [CallerMemberName] string memberName = "", 3: [CallerFilePath] string fileName = "", 4: [CallerLineNumber] int lineNumber = 0) 5: { 6: // we'll just use a simple Console write for now 7: Console.WriteLine("{0}({1}):{2} - {3}", 8: fileName, lineNumber, memberName, message); 9: } Again, calling this from our sample Main would give us the same result: 1: c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger! However, though this seems the same, there are a few key differences. First of all, there are only 3 supported attributes (at this time) that give you the file path, line number, and calling member.  Thus, it does not give you as rich a level of detail as a StackFrame (which can give you the calling type as well and deeper frames, for example).  Also, these are supported through optional parameters, which means we could call our new Log method like this: 1: // They're defaults, why not fill 'em in 2: Log("My message.", "Some member", "Some file", -13); In addition, since these attributes require optional parameters, they cannot be used in properties, only in methods. These caveats aside, they do let you get similar information inside of methods at a much greater speed!  How much greater?  Well, let's crank through 1,000,000 iterations of each.  Instead of logging to the console, I'll return the formatted string length of each.  Doing this, we get: 1: Time for 1,000,000 iterations with StackTrace: 5096 ms 2: Time for 1,000,000 iterations with Attributes: 196 ms So you see, using the attributes is much, much faster!  Nearly 25x faster, in fact.  Summary There are a few ways to get caller information for a method.  The StackFrame allows you to get a comprehensive set of information spanning the whole call stack, but at a heavier cost.  On the other hand, the attributes allow you to quickly get at caller information baked in at compile-time, but to do so you need to create optional parameters in your methods to support it. Technorati Tags: Little Wonders,CSharp,C#,.NET,StackFrame,CallStack,CallerFilePathAttribute,CallerLineNumberAttribute,CallerMemberName


  • 7u10: JavaFX packaging tools update

    - by igor
    The last few weeks have been very busy here at Oracle. JavaOne 2012 is next week. Come to see us there! Meanwhile, I'd like to quickly update you on recent developments in the area of packaging tools. This is an area of ongoing development for the team, and we are continuing to refine and improve both the tools and the process. Thanks to everyone who shared experiences and suggestions with us. We are listening, and we have fixed many of the reported issues. Please keep them coming as comments on the blog or (even better) file issues directly in the JIRA. In this post I'll focus on several new packaging features added in JDK 7 update 10:

    Self-Contained Applications: Select Java Runtime to bundle
    Self-Contained Applications: Create Package without Java Runtime
    Self-Contained Applications: Package non-JavaFX application
    Option to disable proxy setup in the JavaFX launcher
    Ability to specify codebase for WebStart application
    Option to update existing jar file
    Self-Contained Applications: Specify application icon
    Self-Contained Applications: Pass parameters on the command line

    All these features and a number of other important bug fixes are available in the developer preview builds of JDK 7 update 10 (build 8 or later). Please give them a try and share your feedback!

    Self-Contained Applications: Select Java Runtime to bundle
    The packager tools in 7u6 assume the current JDK (based on the java.home property) is the source for the embedded runtime. This is a useful simplification for many scenarios, but there are cases where the ability to specify what to embed explicitly is handy. For example, an IDE may be using a fixed JDK to build the project, and this is not the version you want to bundle into your application. To make this more flexible, we now allow you to specify the location of the base JDK explicitly. It is optional, and if you do not specify it then the current JDK will be used (i.e. this change is fully backward compatible). A new 'basedir' attribute was added to the <fx:platform> tag. Its value is the location of the JDK to be used. It is ok to point to either the JRE inside the JDK or the JDK top-level folder. However, it must be a JDK and not a JRE, as we need other JDK tools for proper packaging, and it must be a recent version of the JDK that is bundled with JavaFX (i.e. Java 7 update 6 or later). Here are examples (<fx:platform> is part of the <fx:deploy> task): <fx:platform basedir="${java.home}"/> <fx:platform basedir="c:\tools\jdk7"/> Hint: this feature enables you to use the packaging tools from JDK 7 update 10 (and benefit from the bug fixes and other features described below) to create an application package with the bundled FCS version of JRE 7 update 6.

    Self-Contained Applications: Create Package without Java Runtime
    This may sound a bit puzzling at first glance. A package without an embedded Java runtime is not really self-contained, and obviously will not help with: deployment on fresh systems (the JRE needs to be installed separately, and that step will require admin permissions), or possible compatibility issues due to updates of the system runtime. However, these packages are much, much smaller in size. If download size matters and you are confident that users have the recommended system JRE installed, then this may be a good option to consider if you want to improve the user experience for install and launch. Technically, this is implemented as an extension of the previous feature. Pass an empty string as the value of the 'basedir' attribute, and this will be treated as a request not to bundle the Java runtime, e.g.
<fx:platform basedir=""/> Self-Contained Applications: Package non-JavaFX application One of popular questions people ask about self-contained applications - can i package my Java application as self-contained application? Absolutely. This is true even for tools shipped with JDK 7 update 6. Simply follow steps for creating package for Swing application with integrated JavaFX content and they will work even if your application does not use JavaFX. What's wrong with it? Well, there are few caveats: bundle size is larger because JavaFX is bundled whilst it is not really needed main application jar needs to be packaged to comply to JavaFX packaging requirements(and this may be not trivial to achieve in your existing build scripts) javafx application launcher may not work well with startup logic of your application (for example launcher will initialize networking stack and this may void custom networking settings in your application code) In JDK 7 update 6 <fx:deploy> was updated to accept arbitrary executable jar as an input. Self-contained application package will be created preserving input jar as-is, i.e. no JavaFX launcher will be embedded. This does not help with first point above but resolves other two. More formally following assertions must be true for packaging to succeed: application can be launched as "java -jar YourApp.jar" from the command line  mainClass attribute of <fx:application> refers to application main class <fx:resources> lists all resources needed for the application To give you an example lets assume we need to create a bundle for application consisting of 3 jars:     dist/javamain.jar     dist/lib/somelib.jar    dist/morelibs/anotherlib.jar where javamain.jar has manifest with      Main-Class: app.Main     Class-Path: lib/somelib.jar morelibs/anotherlib.jar Here is sample ant code to package it: <target name="package-bundle"> <taskdef resource="com/sun/javafx/tools/ant/antlib.xml" uri="javafx:com.sun.javafx.tools.ant" classpath="${javafx.tools.ant.jar}"/> <fx:deploy nativeBundles="all" width="100" height="100" outdir="native-packages/" outfile="MyJavaApp"> <info title="Sample project" vendor="Me" description="Test built from Java executable jar"/> <fx:application id="myapp" version="1.0" mainClass="app.Main" name="MyJavaApp"/> <fx:resources> <fx:fileset dir="dist"> <include name="javamain.jar"/> <include name="lib/somelib.jar"/> <include name="morelibs/anotherlib.jar"/> </fx:fileset> </fx:resources> </fx:deploy> </target> Option to disable proxy setup in the JavaFX launcher Since JavaFX 2.2 (part of JDK 7u6) properly packaged JavaFX applications  have proxy settings initialized according to Java Runtime configuration settings. This is handy for most of the application accessing network with one exception. If your application explicitly sets networking properties (e.g. socksProxyHost) then they must be set before networking stack is initialized. Proxy detection will initialize networking stack and therefore your custom settings will be ignored. One way to disable proxy setup by the embedded JavaFX launcher is to pass "-Djavafx.autoproxy.disable=true" on the command line. This is good for troubleshooting (proxy detection may cause significant startup time increases if network is misconfigured) but not really user friendly. Now proxy setup will be disabled if manifest of main application jar has "JavaFX-Feature-Proxy" entry with value "None". 
Ability to specify codebase for WebStart application

JavaFX applications do not need to specify a codebase (i.e. the absolute location where the application code will be deployed) for most real-world deployment scenarios. This is convenient, because the application does not need to be modified when it moves from a development environment to a deployment environment. However, some developers want to ensure that copies of their application's JNLP file redirect to the master location, and that is where a codebase is needed. To avoid having to edit the JNLP file manually, the <fx:deploy> task now accepts an optional codebase attribute. If the attribute is not specified, the packager generates the same no-codebase files as before. If a codebase value is explicitly specified, the generated JNLP files (including the JNLP content embedded into the web page) will use it. Here is an example:

  <fx:deploy width="600" height="400"
             outdir="Samples"
             codebase="http://localhost/codebaseTest"
             outfile="TestApp">
      ....
  </fx:deploy>

Option to update existing jar file

The JavaFX packaging procedures are optimized for new applications that can use ant or the command-line javafxpackager utility. This can lead to redundant steps when you add them to an existing build process. One typical situation is that you already have a build procedure that produces an executable jar file with a custom manifest. To package it properly as a JavaFX executable jar, you would need to unpack it and then use javafxpackager or <fx:jar> to create the jar again (while making sure to carry over all the important details from your custom manifest).

We have added an option to pass a jar file as input to javafxpackager and <fx:jar>. This simplifies integrating the JavaFX packaging tools into an existing build process as a postprocessing step. By the way, we are looking for ways to simplify this further. Please share your suggestions!

On the technical side, this works as follows. Both <fx:jar> and javafxpackager will attempt to update an existing jar file if it is the only input file. The update process adds the JavaFX launcher classes and updates the jar manifest with the JavaFX attributes; your custom attributes are preserved. The update can be performed in place, or the result can be saved to a different file. The Main-Class and Class-Path elements (if present) of the input jar's manifest are used for the JavaFX application unless they are explicitly overridden in the packaging command, e.g. the mainClass attribute of <fx:application> (or -appclass in the javafxpackager case) overrides an existing Main-Class in the jar manifest. Note that the class named in the Main-Class attribute can either extend the JavaFX Application class or provide a static main() method.
Here are examples of updating a jar file using javafxpackager.

Create a new JavaFX executable jar as a copy of a given jar file:

  javafxpackager -createjar -srcdir dist -srcfiles fish_proto.jar -outdir dist -outfile fish.jar

Update an existing jar file to be a JavaFX executable jar, using test.Fish as the main application class:

  javafxpackager -createjar -srcdir dist -appclass test.Fish -srcfiles fish.jar -outdir dist -outfile fish.jar

And here is an example of using <fx:jar> to create a new JavaFX executable jar from the existing fish_proto.jar:

  <fx:jar destfile="dist/fish.jar">
    <fileset dir="dist">
      <include name="fish_proto.jar"/>
    </fileset>
  </fx:jar>

Self-Contained Applications: Specify application icon

With the tools in JDK 7 update 6, the only way to specify an application icon for a self-contained application was to use drop-in resources. This bug has now been resolved, and you can also specify the icon using the <fx:icon> tag. Here is an example:

  <fx:deploy ...>
    <fx:info>
      <fx:icon href="default.png"/>
    </fx:info>
    ...
  </fx:deploy>

A few things to keep in mind:

- Only the default kind of icon is applicable to self-contained applications (as of now).
- The icon should follow the platform-specific rules for sizes and image format (e.g. .ico on Windows and .icns on Mac).

Self-Contained Applications: Pass parameters on the command line

JavaFX applications support two types of application parameters: named and unnamed (see the API for Application.Parameters). Static named parameters can be added to the application package using <fx:param>, and unnamed parameters can be added using <fx:argument>. They are applicable to all execution modes, including standalone applications. It is also possible to pass parameters to a JavaFX application from the web page that hosts it, using <fx:htmlParam>. Prior to JavaFX 2.2 this was only supported for embedded applications; starting with JavaFX 2.2, <fx:htmlParam> is applicable to Web Start applications as well. See the JavaFX deployment guide for more details.

However, there was no way to pass dynamic parameters to a self-contained application. This has been improved: the native launchers now delegate command-line parameters to the application code. That is, to pass a parameter to the application you simply run it as "myapp.exe somevalue" and then use getParameters().getUnnamed().get(0) to read "somevalue", as in the sketch below.
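For illustration, here is a minimal sketch (not from the original post; the class name and the "greeting" parameter are hypothetical) showing how an application reads both an unnamed command-line parameter and a static named parameter declared via <fx:param>:

  import java.util.List;

  import javafx.application.Application;
  import javafx.scene.Scene;
  import javafx.scene.control.Label;
  import javafx.stage.Stage;

  public class ParamsDemo extends Application {
      @Override
      public void start(Stage stage) {
          // Unnamed parameters arrive in command-line order, so running
          // "myapp.exe somevalue" makes "somevalue" the first entry.
          List<String> unnamed = getParameters().getUnnamed();
          String fromCommandLine = unnamed.isEmpty() ? "(none)" : unnamed.get(0);

          // Named parameters, e.g. <fx:param name="greeting" value="hello"/>
          // in the package definition, are exposed as a map.
          String greeting = getParameters().getNamed().get("greeting");

          stage.setScene(new Scene(new Label(greeting + " " + fromCommandLine), 300, 100));
          stage.show();
      }

      public static void main(String[] args) {
          launch(args);
      }
  }

Assuming the named parameter was packaged in, running "myapp.exe somevalue" would then display "hello somevalue".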

    Read the article

  • Windows update doesn't list update details, only the number of updates

    - by leodip
    Hi all, my system is Vista Business 32-bit SP2. When I click "Windows update", I can see that there are 2 available optional updates. If I click the link "2 optional updates are available" to see what the updates are, I get: http://leodip.s3.amazonaws.com/windows%5Fupdate2.PNG The details about the updates are EMPTY. Any idea why this is happening? It doesn't happen only with optional updates, but with critical updates as well. I would like to fix this so I can see what the updates are. Thanks!

    Read the article

  • Watchguard firebox: public IP addresses behind firewall with as many usable IP addresses as possible

    - by martinezpt
    Our ISP assigned us 16 public IP addresses that we want to assign to hosts behind a Watchguard Firebox x750e. The IP addresses are x.x.x.176/28, of which x.x.x.177 is the gateway. The hosts will be running software that needs the public IP address assigned directly, so 1:1 NAT is not an option. I found this document, which gives examples of how to assign public IP addresses to hosts behind the firewall using an optional interface: http://www.watchguard.com/help/configuration-examples/public_IP_behind_XTM_configuration_example_(en-US).pdf However, I can't implement scenario 1, as it won't allow me to use the same subnet on both interfaces. As for scenario 2, splitting the address range into 2 subnets decreases the usable hosts on the optional interface to 5 (8 - network - broadcast - optional interface IP). I'm convinced that there must be a better way to address this problem and maximize the number of usable IP addresses, but I'm not very familiar with this specific firewall. Are there any suggestions on how to keep the hosts behind the firewall with public IP addresses while maximizing the number of usable addresses? Thanks

    Read the article

  • Make user home directory at gdm login

    - by Lorenzo
    I'm trying to create the home directory at (RADIUS) user gdm login. The auth is working correctly, but when I try to log in, gdm says that the user doesn't have a home directory. I tried to create it with pam_mkhomedir.so, but it is not working. My /etc/pam.d/gdm file:

    #%PAM-1.0
    auth sufficient pam_radius_auth.so
    auth sufficient pam_nologin.so
    auth sufficient pam_env.so readenv=1
    auth sufficient pam_env.so readenv=1 envfile=/etc/default/locale
    auth sufficient pam_succeed_if.so
    @include common-auth
    auth optional pam_gnome_keyring.so
    account sufficient pam_radius_auth.so
    @include common-account
    session [success=ok ignore=ignore module_unknown=ignore default=bad] pam_selinux.so close
    session optional pam_limits.so
    @include common-session
    session [success=ok ignore=ignore module_unknown=ignore default=bad] pam_selinux.so open
    session optional pam_gnome_keyring.so auto_start
    session required pam_mkhomedir.so skel=/etc/skel umask=0022
    @include common-password

    Thanks

    Read the article

  • How can I change the flow through this PAM (pluggable authentication module) file?

    - by Jamie
    I'd like the PAM configuration to skip the pam_mount.so line when a unix login succeeds. I've tried various things, including:

    auth [success=2 default=ignore] pam_unix.so nullok_secure
    auth [success=2 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass
    auth requisite pam_deny.so
    auth requisite pam_permit.so
    auth required pam_permit.so
    auth optional pam_mount.so

    But I can't get it to work. Conversely, when a session shuts down, how can I modify the following so that an unmount command (via pam_mount.so) is avoided during a unix login?

    session [default=1] pam_permit.so
    session requisite pam_deny.so
    session required pam_permit.so
    session required pam_unix.so
    session optional pam_winbind.so
    session optional pam_mount.so

    Read the article

< Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >