Search Results

Search found 32364 results on 1295 pages for 'null layout manager'.


  • Get phone numbers from an Android phone

    - by Luca
    Hi! First of all, I'm sorry for my English... I have a problem getting phone numbers from contacts. Here is my code:

    ```java
    import android.app.ListActivity;
    import android.database.Cursor;
    import android.os.Bundle;
    import android.provider.ContactsContract;
    import android.widget.SimpleAdapter;
    import android.widget.Toast;
    import java.util.ArrayList;
    import java.util.HashMap;

    public class TestContacts extends ListActivity {

        private ArrayList<HashMap<String, String>> list = new ArrayList<HashMap<String, String>>();
        private SimpleAdapter numbers;

        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.contacts);
            numbers = new SimpleAdapter(this, list, R.layout.main_item_two_line_row,
                    new String[] { "line1", "line2" },
                    new int[] { R.id.text1, R.id.text2 });
            setListAdapter(numbers);

            Cursor cursor = getContentResolver().query(
                    ContactsContract.Contacts.CONTENT_URI, null, null, null, null);
            while (cursor.moveToNext()) {
                String contactId = cursor.getString(cursor.getColumnIndex(
                        ContactsContract.Contacts._ID));
                String hasPhone = cursor.getString(cursor.getColumnIndex(
                        ContactsContract.Contacts.HAS_PHONE_NUMBER));
                // check if the contact has a phone number
                if (Boolean.parseBoolean(hasPhone)) {
                    Cursor phones = getContentResolver().query(
                            ContactsContract.CommonDataKinds.Phone.CONTENT_URI, null,
                            ContactsContract.CommonDataKinds.Phone.CONTACT_ID + " = " + contactId,
                            null, null);
                    while (phones.moveToNext()) {
                        // Get the phone number!?
                        String contactName = phones.getString(phones.getColumnIndex(
                                ContactsContract.CommonDataKinds.Phone.DISPLAY_NAME));
                        String phoneNumber = phones.getString(phones.getColumnIndex(
                                ContactsContract.CommonDataKinds.Phone.NUMBER));
                        Toast.makeText(this, phoneNumber, Toast.LENGTH_LONG).show();
                        drawContact(contactName, phoneNumber);
                    }
                    phones.close();
                }
            }
            cursor.close();
        }

        private void drawContact(String name, String number) {
            HashMap<String, String> item = new HashMap<String, String>();
            item.put("line1", name);
            item.put("line2", number);
            list.add(item);
            numbers.notifyDataSetChanged();
        }
    }
    ```

    It seems that no contact has a phone number (I added two contacts on the emulator and also tried on my HTC Desire). The problem is that if (Boolean.parseBoolean(hasPhone)) always returns false. How can I get the phone numbers correctly? I tried calling drawContact(String name, String number) before the if statement, without querying for the phone number, and it worked (it draws the name twice), but on the LinearLayout the entries are not ordered alphabetically. How can I order them alphabetically, similar to the original contacts app? Thank you in advance, Luca
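
    The usual culprit here, sketched below against the same loop: HAS_PHONE_NUMBER comes back as "1" or "0", not "true"/"false", so Boolean.parseBoolean always yields false. Comparing the value as an integer, and passing a sort order to the contacts query, addresses both problems (the parse failure and the alphabetical ordering).

    ```java
    // Hedged fix sketch: HAS_PHONE_NUMBER is "1"/"0", so read it as an int,
    // and let the provider sort contacts by display name for an alphabetical list.
    Cursor cursor = getContentResolver().query(
            ContactsContract.Contacts.CONTENT_URI,
            null, null, null,
            ContactsContract.Contacts.DISPLAY_NAME + " ASC"); // alphabetical order
    while (cursor.moveToNext()) {
        String contactId = cursor.getString(
                cursor.getColumnIndex(ContactsContract.Contacts._ID));
        int hasPhone = cursor.getInt(
                cursor.getColumnIndex(ContactsContract.Contacts.HAS_PHONE_NUMBER));
        if (hasPhone > 0) { // 1 means the contact has at least one number
            // ...query Phone.CONTENT_URI exactly as in the original code...
        }
    }
    cursor.close();
    ```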

  • Closing an MQ connection

    - by OakvilleWork
    Good afternoon. I wrote a project to get park-queue info from IBM MQ, but it produces an error when attempting to close the connection. It is written in Java. Under Applications in the Event Viewer on the MQ machine it displays two errors. The first:

    "Channel program ended abnormally. Channel program 'system.def.svrconn' ended abnormally. Look at previous error messages for channel program 'system.def.svrconn' in the error files to determine the cause of the failure."

    The other message states:

    "Error on receive from host rnanaj (10.10.12.34). An error occurred receiving data from rnanaj (10.10.12.34) over tcp/ip. This may be due to a communications failure. The return code from the tcp/ip recv() call was 10054 (X'2746'). Record these values."

    This must be something in how I connect or close the connection. Below is my code to connect and close — any ideas?

    Connect:

    ```java
    _logger.info("Start");
    File outputFile = new File(System.getProperty("PROJECT_HOME"),
            "run/" + this.getClass().getSimpleName() + "." + System.getProperty("qmgr") + ".txt");
    FileUtils.mkdirs(outputFile.getParentFile());
    Connection jmsConn = null;
    Session jmsSession = null;
    QueueBrowser queueBrowser = null;
    BufferedWriter commandsBw = null;
    try {
        // get queue connection
        MQConnectionFactory MQConn = new MQConnectionFactory();
        MQConn.setHostName(System.getProperty("host"));
        MQConn.setPort(Integer.valueOf(System.getProperty("port")));
        MQConn.setQueueManager(System.getProperty("qmgr"));
        MQConn.setChannel("SYSTEM.DEF.SVRCONN");
        MQConn.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);
        jmsConn = (Connection) MQConn.createConnection();
        jmsSession = jmsConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue jmsQueue = jmsSession.createQueue("PARK");

        // browse thru messages
        queueBrowser = jmsSession.createBrowser(jmsQueue);
        Enumeration msgEnum = queueBrowser.getEnumeration();
        commandsBw = new BufferedWriter(new FileWriter(outputFile));
        String line = "DateTime\tMsgID\tOrigMsgID\tCorrelationID\tComputerName\tSubsystem"
                + "\tDispatcherName\tProcessor\tJobID\tErrorMsg";
        commandsBw.write(line);
        commandsBw.newLine();
        while (msgEnum.hasMoreElements()) {
            Message message = (Message) msgEnum.nextElement();
            line = dateFormatter.format(new Date(message.getJMSTimestamp())) + "\t"
                    + message.getJMSMessageID() + "\t"
                    + message.getStringProperty("pkd_orig_jms_msg_id") + "\t"
                    + message.getJMSCorrelationID() + "\t"
                    + message.getStringProperty("pkd_computer_name") + "\t"
                    + message.getStringProperty("pkd_subsystem") + "\t"
                    + message.getStringProperty("pkd_dispatcher_name") + "\t"
                    + message.getStringProperty("pkd_processor") + "\t"
                    + message.getStringProperty("pkd_job_id") + "\t"
                    + message.getStringProperty("pkd_sysex_msg");
            _logger.info(line);
            commandsBw.write(line);
            commandsBw.newLine();
        }
    }
    ```

    Close:

    ```java
    finally {
        IO.close(commandsBw);
        if (queueBrowser != null) { try { queueBrowser.close(); } catch (Exception ignore) {} }
        if (jmsSession != null)   { try { jmsSession.close();   } catch (Exception ignore) {} }
        if (jmsConn != null)      { try { jmsConn.stop();       } catch (Exception ignore) {} }
    }
    ```
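
    A likely cause, sketched as a hedged guess from the code above: JMS Connection.stop() only pauses message delivery — it does not release the connection, so the client channel to the queue manager stays open until close() is called. If the JVM then exits, the socket is dropped and the queue manager logs exactly this kind of abnormal-end / recv() 10054 error. A minimal variant of the finally block that closes the connection instead:

    ```java
    // Sketch assuming the same jmsConn/jmsSession/queueBrowser fields as above:
    // close() (not just stop()) releases the client channel cleanly.
    finally {
        IO.close(commandsBw);
        if (queueBrowser != null) { try { queueBrowser.close(); } catch (Exception ignore) {} }
        if (jmsSession != null)   { try { jmsSession.close();   } catch (Exception ignore) {} }
        if (jmsConn != null)      { try { jmsConn.close();      } catch (Exception ignore) {} }
    }
    ```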

  • How to store array data in MySQL database using PHP & MySQL?

    - by Cyn
    I'm new to PHP and MySQL and I'm trying to learn how to store the data from the following three arrays — friend[], hair_type[], and hair_color[] — using MySQL and PHP. An example would be nice. Thanks. Here is the HTML code:

    ```html
    <input type="text" name="friend[]" id="friend[]" />
    <select id="hair_type[]" name="hair_type[]">
        <option value="Hair Type" selected="selected">Hair Type</option>
        <option value="Straight">Straight</option>
        <option value="Curly">Curly</option>
        <option value="Wavey">Wavey</option>
        <option value="Bald">Bald</option>
    </select>
    <select id="hair_color[]" name="hair_color[]">
        <option value="Hair Color" selected="selected">Hair Color</option>
        <option value="Brown">Brown</option>
        <option value="Black">Black</option>
        <option value="Red">Red</option>
        <option value="Blonde">Blonde</option>
    </select>
    <input type="text" name="friend[]" id="friend[]" />
    <select id="hair_type[]" name="hair_type[]">
        <option value="Hair Type" selected="selected">Hair Type</option>
        <option value="Straight">Straight</option>
        <option value="Curly">Curly</option>
        <option value="Wavey">Wavey</option>
        <option value="Bald">Bald</option>
    </select>
    <select id="hair_color[]" name="hair_color[]">
        <option value="Hair Color" selected="selected">Hair Color</option>
        <option value="Brown">Brown</option>
        <option value="Black">Black</option>
        <option value="Red">Red</option>
        <option value="Blonde">Blonde</option>
    </select>
    ```

    Here are the MySQL tables:

    ```sql
    CREATE TABLE friends_hair (
        id INT UNSIGNED NOT NULL AUTO_INCREMENT,
        hair_id INT UNSIGNED NOT NULL,
        user_id INT UNSIGNED NOT NULL,
        PRIMARY KEY (id)
    );

    CREATE TABLE hair_types (
        id INT UNSIGNED NOT NULL AUTO_INCREMENT,
        friend TEXT NOT NULL,
        hair_type TEXT NOT NULL,
        hair_color TEXT NOT NULL,
        PRIMARY KEY (id)
    );
    ```

  • Error with connection in my database servlet

    - by Zerobu
    Hello, I am writing a database servlet. All seems well, except that there seems to be an error with the connection:

    ```java
    import java.io.IOException;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.ArrayList;
    import javax.servlet.RequestDispatcher;
    import javax.servlet.ServletContext;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class DBServlet3 extends HttpServlet {

        private static final long serialVersionUID = 1L;

        @Override
        public void init() throws ServletException {
            super.init();
            try {
                String jdbcDriverClass = getServletContext().getInitParameter("jdbcDriverClass");
                if (jdbcDriverClass == null)
                    throw new ServletException("Could not find jdbcDriverClass initialization parameter");
                Class.forName(jdbcDriverClass);
            } catch (ClassNotFoundException e) {
                throw new ServletException("Could not load JDBC driver class", e);
            }
        }

        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            RequestDispatcher dispatcher = request.getRequestDispatcher("/db.jsp");
            ServletContext application = getServletContext();
            ArrayList<String> names = new ArrayList<String>();
            try {
                Connection connection = null;
                Statement statement = null;
                ResultSet results = null;
                try {
                    String jdbcUrl = application.getInitParameter("jdbcUrl");
                    String jdbcUser = application.getInitParameter("jdbcUser");
                    String jdbcPassword = application.getInitParameter("jdbcPassword");
                    connection = DriverManager.getConnection(jdbcUrl, jdbcUser, jdbcPassword);
                    statement = connection.createStatement();
                    results = statement.executeQuery("SELECT * FROM students");
                    while (results.next()) {
                        String name = results.getString("name");
                        names.add(name);
                    }
                } finally {
                    if (results != null) results.close();
                    if (statement != null) statement.close();
                    if (connection != null) connection.close();
                }
            } catch (SQLException e) {
                throw new ServletException(e);
            }
            request.setAttribute("names", names);
            dispatcher.forward(request, response);
        }

        @Override
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            String sql = "INSERT INTO students VALUES (" + request.getParameter("id")
                    + ", '" + request.getParameter("name") + "')";
            sql = "INSERT INTO students VALUES (?, ?, ?, ?)";
            PreparedStatement statement = connection.prepareStatement(sql); // error on this line
            statement.setString(1, request.getParameter("id"));
            statement.setString(2, request.getParameter("name"));
        }
    }
    ```
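
    The compile error on the marked line comes from `connection` never being declared in doPost — it only exists as a local variable inside doGet. A minimal corrected doPost sketch, assuming the same context parameters as doGet (note the original also binds two parameters to a four-placeholder statement, so the SQL is reduced to two placeholders here):

    ```java
    // Hedged sketch: create (and close) a connection locally, execute the
    // insert, then redisplay the list via doGet.
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        ServletContext application = getServletContext();
        String sql = "INSERT INTO students VALUES (?, ?)"; // two columns: id, name
        try {
            Connection connection = DriverManager.getConnection(
                    application.getInitParameter("jdbcUrl"),
                    application.getInitParameter("jdbcUser"),
                    application.getInitParameter("jdbcPassword"));
            try {
                PreparedStatement statement = connection.prepareStatement(sql);
                statement.setString(1, request.getParameter("id"));
                statement.setString(2, request.getParameter("name"));
                statement.executeUpdate();
                statement.close();
            } finally {
                connection.close();
            }
        } catch (SQLException e) {
            throw new ServletException(e);
        }
        doGet(request, response);
    }
    ```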

  • Run a shell command from Java

    - by Aykut
    Hi, I am working on an application and have an issue with running a shell command from a Java application. Here is the code:

    ```java
    public String execRuntime(String cmd) {
        Process proc = null;
        int inBuffer, errBuffer;
        int result = 0;
        StringBuffer outputReport = new StringBuffer();
        StringBuffer errorBuffer = new StringBuffer();
        try {
            proc = Runtime.getRuntime().exec(cmd);
        } catch (IOException e) {
            return "";
        }
        try {
            response.status = 1;
            result = proc.waitFor();
        } catch (InterruptedException e) {
            return "";
        }
        if (proc != null && null != proc.getInputStream()) {
            InputStream is = proc.getInputStream();
            InputStream es = proc.getErrorStream();
            OutputStream os = proc.getOutputStream();
            try {
                while ((inBuffer = is.read()) != -1) {
                    outputReport.append((char) inBuffer);
                }
                while ((errBuffer = es.read()) != -1) {
                    errorBuffer.append((char) errBuffer);
                }
            } catch (IOException e) {
                return "";
            }
            try {
                is.close();
                is = null;
                es.close();
                es = null;
                os.close();
                os = null;
            } catch (IOException e) {
                return "";
            }
            proc.destroy();
            proc = null;
        }
        if (errorBuffer.length() > 0) {
            logger.error("could not finish execution because of error(s).");
            logger.error("*** Error : " + errorBuffer.toString());
            return "";
        }
        return outputReport.toString();
    }
    ```

    But when I try to exec a command like /export/home/test/myapp -T "some argument", myapp reads "some argument" as two separate arguments, while I want it to be read as one argument. When I run this command directly from the terminal, it executes successfully. I tried '"some argument"', ""some argument"", and "some\ argument", but none of these worked for me. How can I pass this as one argument? Thanks.
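
    The single-string form of Runtime.exec tokenizes the command on whitespace and does not interpret quotes, which is why the quoting attempts have no effect. A sketch of the usual fix, using the String[] overload (assuming the same myapp path as above) — execRuntime would then accept a String[] rather than a single String:

    ```java
    // Each array element is delivered to the child process as exactly one
    // argument, so "some argument" stays intact without shell-style quoting.
    String[] cmd = { "/export/home/test/myapp", "-T", "some argument" };
    Process proc = Runtime.getRuntime().exec(cmd);

    // ProcessBuilder is an equivalent, more flexible alternative:
    // Process proc = new ProcessBuilder("/export/home/test/myapp", "-T", "some argument").start();
    ```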

  • Handle property change event listeners (lots of them) more elegantly (dictionary?)

    - by no9
    Hello! Here I have a simple class example with three fields of type class B and some other stuff. As you can see, I'm listening for changes on every child object. Since I could need a lot of properties of type class B, I wonder if there is a way of shrinking the code: creating a listener plus a method for each seems like I will end up with a LOT of code. How would I fix this — using a dictionary or something similar? I have been told that IoC could fix this, but I'm not sure where to start.

    ```csharp
    public class A : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        public int _id;
        public int Id
        {
            get { return _id; }
            set
            {
                if (_id == value) { return; }
                _id = value;
                OnPropertyChanged("Id");
            }
        }

        public string _name;
        public string Name
        {
            get { return _name; }
            set
            {
                if (_name == value) { return; }
                _name = value;
                OnPropertyChanged("Name");
            }
        }

        public B _firstB;
        public B FirstB
        {
            get { return _firstB; }
            set
            {
                if (_firstB == value) { return; }
                if (_firstB != null) { FirstB.PropertyChanged -= firstObjectB_Listener; }
                _firstB = value;
                if (_firstB != null)
                    FirstB.PropertyChanged += new PropertyChangedEventHandler(firstObjectB_Listener);
                OnPropertyChanged("FirstB");
            }
        }

        public B _secondB;
        public B SecondB
        {
            get { return _secondB; }
            set
            {
                if (_secondB == value) { return; }
                // note: copy-paste slip in the original — this detaches from FirstB
                if (_secondB != null) { FirstB.PropertyChanged -= secondObjectB_Listener; }
                _secondB = value;
                if (_secondB != null)
                    SecondB.PropertyChanged += new PropertyChangedEventHandler(secondObjectB_Listener);
                // note: copy-paste slip in the original — raises "FirstB", not "SecondB"
                OnPropertyChanged("FirstB");
            }
        }

        public B _thirdB;
        public B ThirdB
        {
            get { return _thirdB; }
            set
            {
                if (_thirdB == value) { return; }
                if (_thirdB != null) { ThirdB.PropertyChanged -= thirdObjectB_Listener; }
                _thirdB = value;
                if (_thirdB != null)
                    ThirdB.PropertyChanged += new PropertyChangedEventHandler(thirdObjectB_Listener);
                OnPropertyChanged("ThirdB");
            }
        }

        protected void OnPropertyChanged(string name)
        {
            PropertyChangedEventHandler handler = PropertyChanged;
            if (handler != null)
            {
                handler(this, new PropertyChangedEventArgs(name));
            }
        }

        void firstObjectB_Listener(object sender, PropertyChangedEventArgs e)
        {
            Console.WriteLine("Object A has found a change of " + e.PropertyName + " on first object B");
        }

        void secondObjectB_Listener(object sender, PropertyChangedEventArgs e)
        {
            Console.WriteLine("Object A has found a change of " + e.PropertyName + " on second object B");
        }

        void thirdObjectB_Listener(object sender, PropertyChangedEventArgs e)
        {
            Console.WriteLine("Object A has found a change of " + e.PropertyName + " on third object B");
        }
    }
    ```

  • How can I set an image as the background of a GUI?

    - by enriched
    Hey everyone, I'm having some trouble displaying the background image for a GUI in Java. Here is what I have at the moment; at the current stage the code shows the default (gray) background.

    ```java
    import javax.swing.*;
    import java.awt.event.*;
    import java.util.Scanner;
    import java.awt.*;
    import java.io.File;
    import javax.imageio.ImageIO;
    import java.awt.image.BufferedImage;
    import java.io.IOException;

    //////////////////////////////////
    // 3nriched Games Presents:     //
    //       MIPS The Mouse!!      //
    //////////////////////////////////
    public class mipsMouseGUI extends JFrame implements ActionListener {

        private static String ThePDub = ("mouse"); // the password
        JPasswordField pass;
        JPanel panel;
        JButton btnEnter;
        JLabel lblpdub;

        public mipsMouseGUI() {
            BufferedImage image = null;
            try {
                // attempts to read picture from the folder
                image = ImageIO.read(getClass().getResource("/mousepics/mousepic.png"));
            } catch (IOException e) {
                // catches exceptions
                e.printStackTrace();
            }
            ImagePanel panel = new ImagePanel(new ImageIcon("/mousepics/neonglowOnwill.png").getImage());
            setIconImage(image); // sets icon picture
            setTitle("Mips The Mouse Login");
            setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            pass = new JPasswordField(5); // sets password length to 5
            pass.setEchoChar('@'); // hide characters as @ symbol
            pass.addActionListener(this); // adds action listener
            add(panel); // adds panel to frame
            btnEnter = new JButton("Enter"); // creates a button
            btnEnter.addActionListener(this); // Register the action listener.
            lblpdub = new JLabel(" Your Password: "); // label that says enter password
            panel.add(lblpdub, BorderLayout.CENTER); // adds label and inputbox
            panel.add(pass, BorderLayout.CENTER);    // to panel and sets location
            panel.add(btnEnter, BorderLayout.CENTER); // adds button to panel
            pack(); // packs controls and
            setLocationRelativeTo(null); // Implicit "this" if inside JFrame constructor.
            setVisible(true); // makes them visible (duh)
        }

        public void actionPerformed(ActionEvent a) {
            Object source = a.getSource();
            // char array that holds password
            char[] passy = pass.getPassword();
            // characters array to string
            String p = new String(passy);
            // determines if user entered correct password
            if (p.equals(ThePDub)) {
                JOptionPane.showMessageDialog(null, "Welcome beta user: USERNAME.");
            } else
                JOptionPane.showMessageDialog(null, "You have entered an incorrect password. Please try again.");
        }

        public class ImagePanel extends JPanel {

            private BufferedImage img;

            public ImagePanel(String img) {
                this(new ImageIcon(img).getImage());
            }

            public ImagePanel(Image img) {
                Dimension size = new Dimension(img.getWidth(null), img.getHeight(null));
            }

            public void paintComponent(Graphics g) {
                g.drawImage(img, 0, 0, null);
            }
        }
    }
    ```
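
    A likely cause, sketched below: the ImagePanel(Image) constructor computes a size but never assigns the img field, so paintComponent draws null; the field also needs to be of type Image to hold what is passed in. Loading the background through getClass().getResource(...), as the icon already does, would additionally avoid a bad relative file path.

    ```java
    // A corrected ImagePanel sketch: store the image, give the panel a
    // preferred size, and let Swing paint the base panel first.
    public class ImagePanel extends JPanel {

        private final Image img;

        public ImagePanel(Image img) {
            this.img = img; // the original never assigned the field, so it drew null
            setPreferredSize(new Dimension(img.getWidth(null), img.getHeight(null)));
        }

        @Override
        public void paintComponent(Graphics g) {
            super.paintComponent(g);
            g.drawImage(img, 0, 0, null);
        }
    }
    ```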

  • Copy an image into a BLOB from the client PC (a Java function in Oracle)

    - by mumich
    Hi guys, I've been stuck on this for the past two days. I've got a Java function stored in the Oracle system which is supposed to copy an image from a local drive to a remote database and store it in a BLOB. It's called CopyBLOB and looks like this:

    ```java
    import java.sql.*;
    import oracle.sql.*;
    import java.io.*;

    public class CopyBLOB {

        static int id;
        static String fileName = null;
        static Connection conn = null;

        public CopyBLOB(int idz, String f) {
            id = idz;
            fileName = f;
        }

        public static void copy(int ident, String path) throws SQLException, FileNotFoundException {
            CopyBLOB cpB = new CopyBLOB(ident, path);
            cpB.getConnection();
            cpB.callUpdate(id, fileName);
        }

        public void getConnection() throws SQLException {
            DriverManager.registerDriver(new oracle.jdbc.OracleDriver());
            try {
                conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@oraserv.ms.mff.cuni.cz:1521:db", "xxx", "xxx");
            } catch (SQLException sqlex) {
                System.out.println("SQLException while getting db connection: " + sqlex);
                if (conn != null) conn.close();
            } catch (Exception ex) {
                System.out.println("Exception while getting db connection: " + ex);
                if (conn != null) conn.close();
            }
        }

        public void callUpdate(int id, String file) throws SQLException, FileNotFoundException {
            CallableStatement cs = null;
            try {
                conn.setAutoCommit(false);
                File f = new File(file);
                FileInputStream fin = new FileInputStream(f);
                cs = (CallableStatement) conn.prepareCall("begin add_image(?,?); end;");
                cs.setInt(1, id);
                cs.setBinaryStream(2, fin, (int) f.length());
                cs.execute();
                conn.setAutoCommit(true);
            } catch (SQLException sqlex) {
                System.out.println("SQLException in callUpdateUsingStream method of given status : "
                        + sqlex.getMessage());
            } catch (FileNotFoundException fnex) {
                System.out.println("FileNotFoundException in callUpdateUsingStream method of given status : "
                        + fnex.getMessage());
            } finally {
                try {
                    if (cs != null) cs.close();
                    if (conn != null) conn.close();
                } catch (Exception ex) {
                    System.out.println("Some exception in callUpdateUsingStream method of given status : "
                            + ex.getMessage());
                }
            }
        }
    }
    ```

    The wrapper procedure is defined in the package "MyPackage" as follows:

    ```sql
    procedure image_adder( id varchar2, path varchar2 )
    AS language java name 'CopyBLOB.copy(java.lang.String, java.lang.String)';
    ```

    And the inserting procedure, add_image, is as simple as this:

    ```sql
    procedure add_image( id numeric(10), pic blob )
    AS
    BEGIN
        insert into pictures values (seq_pic.nextval, id, pic);
    END add_image;
    ```

    Now the problem: when I type call MyPackage.image_adder(1, 'd:\samples\img.jpg'); I get the ORA-29531 error: No method copy in class CopyBLOB. Can you help me, please?
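
    ORA-29531 typically means the call specification's signature does not match any method in the loaded class: here the spec declares copy(java.lang.String, java.lang.String), while the Java method is copy(int, String). One hedged fix, staying on the Java side so the existing wrapper matches, is to accept a String id and parse it:

    ```java
    // Sketch: align the Java signature with the PL/SQL call spec
    // 'CopyBLOB.copy(java.lang.String, java.lang.String)'.
    public static void copy(String ident, String path) throws SQLException, FileNotFoundException {
        CopyBLOB cpB = new CopyBLOB(Integer.parseInt(ident), path);
        cpB.getConnection();
        cpB.callUpdate(id, fileName);
    }
    ```

    The alternative is to change the call spec to name 'CopyBLOB.copy(int, java.lang.String)' and declare the wrapper's id parameter as a number.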

  • How to write log files to two different files

    - by Sun
    My application runs on a customized client framework, and the client framework uses log4net to write its own log files. We (our application) have to use the same log4net to write our log files to our own path. Currently our log files are created, but the log statements are not written to them — they end up in the client framework's log file. I searched a lot of sites; the link http://stackoverflow.com/questions/308436/log4net-programmatcially-specify-multiple-loggers-with-multiple-file-appenders helped me configure log4net programmatically, but my log statements are still not written to my log file. The code is as below:

    ```csharp
    public class TraceLog
    {
        private string message = string.Empty;
        private static ILog ILogger = null;
        private static TraceLog instance = new TraceLog();

        private TraceLog()
        {
            SetLevel("Log4net.MainForm", "ALL");
            AddAppender("Log4net.MainForm", CreateFileAppender("FileAppender", "C:\\mylog.log"));
        }

        public static TraceLog Instance
        {
            get { return instance; }
        }

        public void Debug(string logMessage)
        {
            message = PrepareLog(logMessage);
            ILogger.Debug(message);
        }

        protected string PrepareLog(string logMessage)
        {
            string message = GetFileMethodLineNumberInfo();
            message += logMessage;
            return message;
        }

        protected string GetFileMethodLineNumberInfo()
        {
            StackTrace stackTrace = new StackTrace(true);
            // The position 3 is relative to the index of the specified method
            StackFrame stackFrame = stackTrace.GetFrame(3);
            return (stackFrame.GetMethod().DeclaringType.Name + "/"
                + stackFrame.GetMethod().Name + "/"
                + stackFrame.GetFileLineNumber() + ":");
        }

        private static void SetLevel(string loggerName, string levelName)
        {
            ILogger = LogManager.GetLogger(loggerName);
            log4net.Repository.Hierarchy.Logger l = (log4net.Repository.Hierarchy.Logger)ILogger.Logger;
            l.Level = l.Hierarchy.LevelMap[levelName];
        }

        private static void AddAppender(string loggerName, IAppender appender)
        {
            ILogger = LogManager.GetLogger(loggerName);
            log4net.Repository.Hierarchy.Logger l = (log4net.Repository.Hierarchy.Logger)ILogger.Logger;
            l.AddAppender(appender);
        }

        private static IAppender CreateFileAppender(string name, string fileName)
        {
            FileAppender appender = new FileAppender();
            appender.Name = name;
            appender.File = fileName;
            appender.AppendToFile = true;
            //PatternLayout layout = new PatternLayout();
            //layout.ConversionPattern = "%d [%t] %-5p %c [%x] - %m%n";
            //layout.ActivateOptions();
            //appender.Layout = layout;
            appender.ActivateOptions();
            return appender;
        }
    }
    ```

    Can anyone please help me solve this?

  • How to update a TextView on ButtonClick with Spinner(s) values

    - by source.rar
    Hi, I am trying to populate a TextView based on the currently selected options in three Spinners, but I can't figure out how to retrieve the selected values from the Spinners to invoke the update function with. Here is my current code (quite messy, but I'm just learning Java :)):

    ```java
    public class AgeFun extends Activity {

        private String[] dayNames;
        private String[] yearArray;
        private final static int START_YEAR = 1990;
        private static TextView textDisp;
        private Button calcButton;
        private static Spinner spinnerDay, spinnerYear, spinnerMonth;
        private static ArrayAdapter<?> monthAdapter, dayAdapter, yearAdapter;
        private int year, month, day;

        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);

            year = 2000;
            month = 1;
            day = 1;

            textDisp = (TextView) findViewById(R.id.textView1);

            calcButton = (Button) findViewById(R.id.button);
            calcButton.setOnClickListener(new OnClickListener() {
                public void onClick(View v) {
                    // Perform action on clicks
                    AgeFun.updateAge(year, month, day);
                }
            });

            // Month spinner
            spinnerMonth = (Spinner) findViewById(R.id.spinnerFirst);
            monthAdapter = ArrayAdapter.createFromResource(this, R.array.monthList,
                    android.R.layout.simple_spinner_item);
            monthAdapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
            spinnerMonth.setAdapter(monthAdapter);

            // Day spinner
            dayNames = new String[31];
            for (int i = 1; i <= 31; ++i) {
                dayNames[i - 1] = Integer.toString(i);
            }
            spinnerDay = (Spinner) findViewById(R.id.spinnerSecond);
            dayAdapter = new ArrayAdapter<CharSequence>(this,
                    android.R.layout.simple_spinner_item, dayNames);
            spinnerDay.setAdapter(dayAdapter);

            // Year spinner
            yearArray = new String[40];
            for (int i = 0; i < 40; ++i) {
                yearArray[i] = Integer.toString(START_YEAR + i);
            }
            spinnerYear = (Spinner) findViewById(R.id.spinnerThird);
            yearAdapter = new ArrayAdapter<CharSequence>(this,
                    android.R.layout.simple_spinner_item, yearArray);
            spinnerYear.setAdapter(yearAdapter);

            updateAge(2000, 1, 1);
        }

        private static void updateAge(int year, int month, int day) {
            Date dob = new GregorianCalendar(year, month, day).getTime();
            Date currDate = new Date();
            long age = (currDate.getTime() - dob.getTime()) / (1000 * 60 * 60 * 24) / 365;
            textDisp.setText("Your are " + Long.toString(age) + " years old");
        }
    }
    ```

    Any help with this would be great. TIA
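
    A hedged sketch of the onClick handler: a Spinner exposes its current selection through getSelectedItem() (the bound object, here a String) and getSelectedItemPosition() (a 0-based index, which happens to match GregorianCalendar's 0-based months):

    ```java
    calcButton.setOnClickListener(new OnClickListener() {
        public void onClick(View v) {
            // 0-based position matches GregorianCalendar's 0-based months
            int month = spinnerMonth.getSelectedItemPosition();
            int day   = Integer.parseInt(spinnerDay.getSelectedItem().toString());
            int year  = Integer.parseInt(spinnerYear.getSelectedItem().toString());
            updateAge(year, month, day);
        }
    });
    ```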

  • How to create multiple Repository objects inside a Repository class using Unit of Work?

    - by Santosh
    I am a newbie to MVC3 application development. Currently we need the following application technologies as requirements:

    - MVC3 framework
    - IoC framework – Autofac, to manage object creation dynamically
    - Moq – unit testing
    - Entity Framework
    - Repository and Unit of Work patterns for the model classes

    I have gone through many articles to get a basic idea about the above points, but I am still a little bit confused about the "Repository and Unit of Work pattern". Basically, what I understand is that Unit of Work is a pattern followed along with the Repository pattern in order to share a single DB context among all repository objects. So here is my design:

    IUnitOfWork.cs

    ```csharp
    public interface IUnitOfWork : IDisposable
    {
        IPermitRepository Permit_Repository { get; }
        IRebateRepository Rebate_Repository { get; }
        IBuildingTypeRepository BuildingType_Repository { get; }
        IEEProjectRepository EEProject_Repository { get; }
        IRebateLookupRepository RebateLookup_Repository { get; }
        IEEProjectTypeRepository EEProjectType_Repository { get; }
        void Save();
    }
    ```

    UnitOfWork.cs

    ```csharp
    public class UnitOfWork : IUnitOfWork
    {
        #region Private Members
        private readonly CEEPMSEntities context = new CEEPMSEntities();
        private IPermitRepository permit_Repository;
        private IRebateRepository rebate_Repository;
        private IBuildingTypeRepository buildingType_Repository;
        private IEEProjectRepository eeProject_Repository;
        private IRebateLookupRepository rebateLookup_Repository;
        private IEEProjectTypeRepository eeProjectType_Repository;
        #endregion

        #region IUnitOfWork Implementation
        public IPermitRepository Permit_Repository
        {
            get
            {
                if (this.permit_Repository == null)
                {
                    this.permit_Repository = new PermitRepository(context);
                }
                return permit_Repository;
            }
        }

        public IRebateRepository Rebate_Repository
        {
            get
            {
                if (this.rebate_Repository == null)
                {
                    this.rebate_Repository = new RebateRepository(context);
                }
                return rebate_Repository;
            }
        }
        #endregion
    }
    ```

    PermitRepository.cs

    ```csharp
    public class PermitRepository : IPermitRepository
    {
        #region Private Members
        private CEEPMSEntities objectContext = null;
        private IObjectSet<Permit> objectSet = null;
        #endregion

        #region Constructors
        public PermitRepository()
        {
        }

        public PermitRepository(CEEPMSEntities _objectContext)
        {
            this.objectContext = _objectContext;
            this.objectSet = objectContext.CreateObjectSet<Permit>();
        }
        #endregion

        public IEnumerable<RebateViewModel> GetRebatesByPermitId(int _permitId)
        {
            // need to implement
        }
    }
    ```

    PermitController.cs

    ```csharp
    public class PermitController : Controller
    {
        #region Private Members
        IUnitOfWork CEEPMSContext = null;
        #endregion

        #region Constructors
        public PermitController(IUnitOfWork _CEEPMSContext)
        {
            if (_CEEPMSContext == null)
            {
                throw new ArgumentNullException("Object can not be null");
            }
            CEEPMSContext = _CEEPMSContext;
        }
        #endregion
    }
    ```

    So here I am wondering how to generate a new repository, for example "TestRepository.cs", using the same pattern, in a place where I want to create more than one repository object, like:

    ```csharp
    RebateRepository rebateRepo = new RebateRepository();
    AddressRepository addressRepo = new AddressRepository();
    ```

    because whatever repository object I want to create, I first need an object of UnitOfWork, as implemented in the PermitController class. If I followed the same approach in each individual repository class, that would again break the principle of Unit of Work and create multiple instances of the object context. Any idea or suggestion will be highly appreciated. Thank you.

  • Updating a table from an AsyncTask in Android

    - by CantChooseUsernames
    I'm following this tutorial: http://huuah.com/android-progress-bar-and-thread-updating/ to learn how to make progress bars. I'm trying to show the progress bar on top of my activity and have it update the activity's table view in the background. So I created an AsyncTask for the dialog that takes a callback:

    ```java
    package com.lib.bookworm;

    import android.app.ProgressDialog;
    import android.content.Context;
    import android.os.AsyncTask;

    public class UIThreadProgress extends AsyncTask<Void, Void, Void> {

        private UIThreadCallback callback = null;
        private ProgressDialog dialog = null;
        private int maxValue = 100, incAmount = 1;
        private Context context = null;

        public UIThreadProgress(Context context, UIThreadCallback callback) {
            this.context = context;
            this.callback = callback;
        }

        @Override
        protected Void doInBackground(Void... args) {
            while (this.callback.condition()) {
                this.callback.run();
                this.publishProgress();
            }
            return null;
        }

        @Override
        protected void onProgressUpdate(Void... values) {
            super.onProgressUpdate(values);
            dialog.incrementProgressBy(incAmount);
        }

        @Override
        protected void onPreExecute() {
            super.onPreExecute();
            dialog = new ProgressDialog(context);
            dialog.setCancelable(true);
            dialog.setMessage("Loading...");
            dialog.setProgress(0);
            dialog.setProgressStyle(ProgressDialog.STYLE_HORIZONTAL);
            dialog.setMax(maxValue);
            dialog.show();
        }

        @Override
        protected void onPostExecute(Void result) {
            super.onPostExecute(result);
            if (this.dialog.isShowing()) {
                this.dialog.dismiss();
            }
            this.callback.onThreadFinish();
        }
    }
    ```

    And in my activity, I do:

    ```java
    final String page = htmlPage.substring(start, end).trim();

    // Create new instance of the AsyncTask..
    new UIThreadProgress(this, new UIThreadCallback() {
        @Override
        public void run() {
            row_id = makeTableRow(row_id, layout, params, matcher); // ADD a row to the table layout.
        }

        @Override
        public void onThreadFinish() {
            System.out.println("FINISHED!!");
        }

        @Override
        public boolean condition() {
            return matcher.find();
        }
    }).execute();
    ```

    The above creates an AsyncTask that updates a table layout activity while showing a progress bar for how much work has been done. However, I get an error saying that only the thread that created the view hierarchy can touch its views. I tried:

    ```java
    MainActivity.this.runOnUiThread(new Runnable() {
        @Override
        public void run() {
            row_id = makeTableRow(row_id, layout, params, matcher); // ADD a row to the table layout.
        }
    });
    ```

    But this gives me synchronization errors. Any ideas how I can display progress and at the same time update my table in the background?
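
    A hedged sketch of the standard AsyncTask pattern for this: keep the regex matching in doInBackground, but hand each match to onProgressUpdate through publishProgress, since AsyncTask guarantees onProgressUpdate runs on the UI thread. MatchResult is a stand-in for whatever per-row data makeTableRow actually needs, and it assumes the activity's matcher, layout, params, and row_id are reachable from the task (for example, passed into its constructor).

    ```java
    import java.util.regex.MatchResult;

    public class UIThreadProgress extends AsyncTask<Void, MatchResult, Void> {
        // ...constructor, onPreExecute and onPostExecute as above...

        @Override
        protected Void doInBackground(Void... args) {
            while (matcher.find()) {                       // background thread: parse only
                publishProgress(matcher.toMatchResult());  // immutable snapshot of this match
            }
            return null;
        }

        @Override
        protected void onProgressUpdate(MatchResult... match) {
            // UI thread: safe to touch the TableLayout here
            row_id = makeTableRow(row_id, layout, params, match[0]);
            dialog.incrementProgressBy(incAmount);
        }
    }
    ```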

  • More efficient SQL than using "A UNION (B in A)"?

    - by machinatus
    Edit 1 (clarification): Thank you for the answers so far! The response is gratifying. I want to clarify the question a little, because based on the answers I think I did not describe one aspect of the problem correctly (and I'm sure that's my fault, as I was having a difficult time defining it even for myself). Here's the rub: the result set should contain ONLY the records with tstamp BETWEEN '2010-01-03' AND '2010-01-09', AND the one record where the tstamp IS NULL for each order_num in the first set (there will always be one with a null tstamp for each order_num). The answers given so far appear to include all records for a certain order_num if there are any with tstamp BETWEEN '2010-01-03' AND '2010-01-09'. For example, if there were another record with order_num = 2 and tstamp = 2010-01-12 00:00:00, it should not be included in the result.

    Original question: Consider an orders table containing id (unique), order_num, tstamp (a timestamp), and item_id (the single item included in an order). tstamp is null unless the order has been modified, in which case there is another record with identical order_num and tstamp then contains the timestamp of when the change occurred. Example:

    ```
    id  order_num  tstamp               item_id
    --  ---------  -------------------  -------
    0   1                               100
    1   2                               101
    2   2          2010-01-05 12:34:56  102
    3   3                               113
    4   4                               124
    5   5                               135
    6   5          2010-01-07 01:23:45  136
    7   5          2010-01-07 02:46:00  137
    8   6                               100
    9   6          2010-01-13 08:33:55  105
    ```

    What is the most efficient SQL statement to retrieve all of the orders (based on order_num) which have been modified one or more times during a certain date range? In other words, for each order we need all of the records with the same order_num (including the one with NULL tstamp), for each order_num WHERE at least one of the order_nums has tstamp NOT NULL AND tstamp BETWEEN '2010-01-03' AND '2010-01-09'. It's the "WHERE at least one of the order_nums has tstamp NOT NULL" that I'm having difficulty with. The result set should look like this:

    ```
    id  order_num  tstamp               item_id
    --  ---------  -------------------  -------
    1   2                               101
    2   2          2010-01-05 12:34:56  102
    5   5                               135
    6   5          2010-01-07 01:23:45  136
    7   5          2010-01-07 02:46:00  137
    ```

    The SQL that I came up with is this, which is essentially "A UNION (B in A)", but it executes slowly and I hope there is a more efficient solution:

    ```sql
    SELECT history_orders.order_id, history_orders.tstamp, history_orders.item_id
    FROM (SELECT orders.order_id, orders.tstamp, orders.item_id
          FROM orders
          WHERE orders.tstamp BETWEEN '2010-01-03' AND '2010-01-09') AS history_orders
    UNION
    SELECT current_orders.order_id, current_orders.tstamp, current_orders.item_id
    FROM (SELECT orders.order_id, orders.tstamp, orders.item_id
          FROM orders
          WHERE orders.tstamp IS NULL) AS current_orders
    WHERE current_orders.order_id IN
          (SELECT orders.order_id
           FROM orders
           WHERE orders.tstamp BETWEEN '2010-01-03' AND '2010-01-09');
    ```

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter's #sqlhelp hash tag: "Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn't referenced in the query?"

    Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven't asked for. In general, that's quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column…

    Example Table

    Here's a T-SQL script to create that table and populate it with 1,000 rows:

    ```sql
    CREATE TABLE dbo.LOBtest
    (
        pk INTEGER IDENTITY NOT NULL,
        some_value INTEGER NULL,
        lob_data VARCHAR(MAX) NULL,
        another_column CHAR(5) NULL,
        CONSTRAINT [PK dbo.LOBtest pk]
            PRIMARY KEY CLUSTERED (pk ASC)
    );
    GO
    DECLARE @Data VARCHAR(MAX);
    SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

    WITH Numbers (n) AS
    (
        SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
        FROM master.sys.columns C1, master.sys.columns C2
    )
    INSERT LOBtest WITH (TABLOCKX)
    (
        some_value,
        lob_data
    )
    SELECT TOP (1000)
        N.n,
        @Data
    FROM Numbers N
    WHERE N.n <= 1000;
    ```

    Test 1: A Simple Update

    Let's run a query to subtract one from every value in the some_value column:

    ```sql
    UPDATE dbo.LOBtest WITH (TABLOCKX)
    SET some_value = some_value - 1;
    ```

    As you might expect, modifying this integer column in 1,000 rows doesn't take very long, or use many resources. The STATISTICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time. The query plan is also very simple: looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan.

    The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed. The some_value column is used by the Compute Scalar to calculate the new value. (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT.)

    Test 2: Simple Update with an Index

    Now let's create a nonclustered index keyed on the some_value column, with lob_data as an included column:

    ```sql
    CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)]
    ON dbo.LOBtest (some_value)
    INCLUDE
    (
        lob_data
    )
    WITH
    (
        FILLFACTOR = 100,
        MAXDOP = 1,
        SORT_IN_TEMPDB = ON
    );
    ```

    This is not a useful index for our simple update query; imagine that someone else created it for a different purpose. Let's run our update query again:

    ```sql
    UPDATE dbo.LOBtest WITH (TABLOCKX)
    SET some_value = some_value - 1;
    ```

    We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms. The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index.

    The query plan is very similar to before. The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update. SQL Server may be able to skip maintaining the nonclustered index if the value hasn't changed (see my previous post on non-updating updates for details). Our simple query does change the value of some_value in every row, so this optimization doesn't add any value in this specific case.

    The output list of columns from the Clustered Index Scan hasn't changed from the one shown previously: SQL Server still just reads the pk and some_value columns. Cool.
    Overall then, adding the nonclustered index hasn't had any startling effects, and the LOB column data still isn't being read from the table. Let's see what happens if we make the nonclustered index unique.

    Test 3: Simple Update with a Unique Index

    Here's the script to create a new unique index, and drop the old one:

    ```sql
    CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)]
    ON dbo.LOBtest (some_value)
    INCLUDE
    (
        lob_data
    )
    WITH
    (
        FILLFACTOR = 100,
        MAXDOP = 1,
        SORT_IN_TEMPDB = ON
    );
    GO
    DROP INDEX [IX dbo.LOBtest some_value (lob_data)]
    ON dbo.LOBtest;
    ```

    Remember that SQL Server only enforces uniqueness on index keys (the some_value column). The lob_data column is simply stored at the leaf-level of the non-clustered index. With that in mind, we might expect this change to make very little difference. Let's see:

    ```sql
    UPDATE dbo.LOBtest WITH (TABLOCKX)
    SET some_value = some_value - 1;
    ```

    Whoa! Now look at the elapsed time and logical reads:

    ```
    Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0,
    lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.

    CPU time = 172 ms, elapsed time = 16172 ms.
    ```

    Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column?

    The Query Plan

    The query plan for test 3 looks a bit more complex than before. In fact, the bottom level is exactly the same as we saw with the non-unique index. The top level has heaps of new stuff though, which I'll come to in a moment.

    You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason). After all, we need to explain where all the LOB logical reads are coming from. Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what's doing the LOB reads?

    Updates that Sneakily Read Data

    We have to go as far as the Clustered Index Update operator before we see LOB data in the output list. [Expr1020] is a bit flag added by an earlier Compute Scalar. It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier).

    The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD. The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column. At this point, the clustered index has already been updated with the new value, but we haven't touched the nonclustered index yet.

    An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation. SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update. The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then. Sneaky!

    Why the LOB Data Is Needed

    This is all very interesting (I hope), but why is SQL Server reading the LOB data? For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates. Beating the Set-Based Update with a Cursor One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.  
    Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data:

    ```sql
    SET NOCOUNT ON;
    SET STATISTICS XML, IO, TIME OFF;

    DECLARE @PK INTEGER, @StartTime DATETIME;
    SET @StartTime = GETUTCDATE();

    DECLARE curUpdate CURSOR
        LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS
    FOR
        SELECT L.pk
        FROM LOBtest L
        ORDER BY L.pk ASC;

    OPEN curUpdate;

    WHILE (1 = 1)
    BEGIN
        FETCH NEXT FROM curUpdate INTO @PK;

        IF @@FETCH_STATUS = -1 BREAK;
        IF @@FETCH_STATUS = -2 CONTINUE;

        UPDATE dbo.LOBtest
        SET some_value = some_value - 1
        WHERE CURRENT OF curUpdate;
    END;

    CLOSE curUpdate;
    DEALLOCATE curUpdate;

    SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE());
    ```

    That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!). I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it. One could just as well use a WHERE clause that specified the primary key value instead.

    Clustered Indexes

    A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index. Let's re-create the test table and data with an updatable primary key, and without any non-clustered indexes:

    ```sql
    IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL
        DROP TABLE dbo.LOBtest;
    GO
    CREATE TABLE dbo.LOBtest
    (
        pk INTEGER NOT NULL,
        some_value INTEGER NULL,
        lob_data VARCHAR(MAX) NULL,
        another_column CHAR(5) NULL,
        CONSTRAINT [PK dbo.LOBtest pk]
            PRIMARY KEY CLUSTERED (pk ASC)
    );
    GO
    DECLARE @Data VARCHAR(MAX);
    SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

    WITH Numbers (n) AS
    (
        SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
        FROM master.sys.columns C1, master.sys.columns C2
    )
    INSERT LOBtest WITH (TABLOCKX)
    (
        pk,
        some_value,
        lob_data
    )
    SELECT TOP (1000)
        N.n,
        N.n,
        @Data
    FROM Numbers N
    WHERE N.n <= 1000;
    ```

    Now here's a query to modify the cluster keys:

    ```sql
    UPDATE dbo.LOBtest
    SET pk = pk + 1;
    ```

    In the query plan, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection. In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan. The performance is not great, as you might expect (even though there is no non-clustered index to maintain):

    ```
    Table 'LOBtest'.
    Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0,
    lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.

    Table 'Worktable'.
    Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0,
    lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.

    SQL Server Execution Times:
    CPU time = 483 ms, elapsed time = 17884 ms.
    ```

    Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool.

    If you try the same test with a non-unique clustered index (rather than a primary key), you'll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns). A unique non-clustered index (on a heap) works well too. Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads. (In fact the heap is rather more efficient.) There are lots more fun combinations to try that I don't have space for here.

    Final Thoughts

    The behaviour shown in this post is not limited to LOB data by any means.
    If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps.

    Paul White
    Email: [email protected]
    Twitter: @PaulWhiteNZ

  • SQL SERVER – Create Primary Key with Specific Name when Creating Table

    - by pinaldave
    It is interesting how sometimes the documentation of simple concepts is not available online. I received an email from one of my readers asking how to create a primary key with a specific name while creating the table itself. He said he knows the method where he can create the table and then apply the primary key with a specific name. The attached code was as follows:

    ```sql
    CREATE TABLE [dbo].[TestTable](
        [ID] [int] IDENTITY(1,1) NOT NULL,
        [FirstName] [varchar](100) NULL)
    GO
    ALTER TABLE [dbo].[TestTable]
    ADD CONSTRAINT [PK_TestTable] PRIMARY KEY CLUSTERED ([ID] ASC)
    GO
    ```

    He wanted to know if we can create the primary key as part of the CREATE TABLE statement as well, and give it a name at the same time. Though it will look very normal to all experienced developers, it can still be confusing to many. Here is the quick code that functions the same as the above in one single statement:

    ```sql
    CREATE TABLE [dbo].[TestTable](
        [ID] [int] IDENTITY(1,1) NOT NULL,
        [FirstName] [varchar](100) NULL
        CONSTRAINT [PK_TestTable] PRIMARY KEY CLUSTERED ([ID] ASC)
    )
    GO
    ```

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Constraint and Keys, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • How to properly scroll a 2D tilemap?

    - by Sri Harsha Chilakapati
    Hello, I'm trying to make my own game engine in Java. I have completed all the necessary parts, but I can't figure out the TileGame class: it just won't scroll. There are also no exceptions. Here I'm listing the code.

    TileGame.java

    ```java
    @Override
    public void draw(Graphics2D g) {
        if (back != null) {
            back.render(g);
        }
        if (follower != null) {
            follower.render(g);
            follower.draw(g);
        }
        for (int i = 0; i < actors.size(); i++) {
            Actor actor = actors.get(i);
            if (actor != follower && getVisibleRect().intersects(actor.getBounds())) {
                g.drawImage(actor.getAnimation().getFrameImage(),
                        actor.x - OffSetX, actor.y - OffSetY, null);
                actor.draw(g);
            }
        }
    }

    /**
     * This method returns the visible rectangle
     * @return The visible rectangle
     */
    public Rectangle getVisibleRect() {
        return new Rectangle(OffSetX, OffSetY, global.WIDTH, global.HEIGHT);
    }

    @Override
    public void update() {
        if (follower != null) {
            if (scrollHorizontally) {
                OffSetX = global.WIDTH / 2 - Math.round((float) follower.x) - tileSize;
                OffSetX = Math.min(OffSetX, 0);
                OffSetX = Math.max(OffSetX, global.WIDTH - mapWidth);
            }
            if (scrollVertically) {
                OffSetY = global.HEIGHT / 2 - Math.round((float) follower.y) - tileSize;
                OffSetY = Math.min(OffSetY, 0);
                OffSetY = Math.max(OffSetY, global.HEIGHT - mapHeight);
            }
        }
        for (int i = 0; i < actors.size(); i++) {
            Actor actor1 = actors.get(i);
            if (getVisibleRect().contains(actor1.x, actor1.y)) {
                actor1.update();
                for (int j = 0; j < actors.size(); j++) {
                    Actor actor2 = actors.get(j);
                    if (actor1.isCollidingWith(actor2)) {
                        actor1.collision(actor2);
                        actor2.collision(actor1);
                    }
                }
            }
        }
    }
    ```

    The problem is that all the actors work, but it just won't scroll. Help please. Thanks in advance.
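
    Two things worth checking, sketched below as guesses from the posted code. First, OffSetX/OffSetY only ever change inside if (follower != null), so if no follower is set the camera can never move. Second, the sign conventions disagree: update() clamps the offsets to negative values (a screen translation), while draw() and getVisibleRect() treat them as the camera's world-space position. One way to make the convention impossible to get wrong is to translate the whole Graphics2D once and draw everything in world coordinates — a minimal sketch, assuming the same fields as above:

    ```java
    @Override
    public void draw(Graphics2D g) {
        // Apply the camera once; here OffSetX/OffSetY are the camera's
        // top-left corner in world coordinates.
        g.translate(-OffSetX, -OffSetY);
        if (back != null) {
            back.render(g);
        }
        for (int i = 0; i < actors.size(); i++) {
            Actor actor = actors.get(i);
            if (getVisibleRect().intersects(actor.getBounds())) {
                // world coordinates: no per-actor offset arithmetic needed
                g.drawImage(actor.getAnimation().getFrameImage(), actor.x, actor.y, null);
                actor.draw(g);
            }
        }
        g.translate(OffSetX, OffSetY); // restore for screen-space drawing (HUD etc.)
    }
    ```

    With that convention, update() would clamp with something like OffSetX = Math.max(0, Math.min(Math.round((float) follower.x) - global.WIDTH / 2, mapWidth - global.WIDTH)); so the offset stays within the map instead of being forced negative.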

  • SQL SERVER – sp_describe_first_result_set New System Stored Procedure in SQL Server 2012

    - by pinaldave
    I might have said this many times before, but I will say it again: SQL Server never stops amazing me. Here is an example of it: sp_describe_first_result_set. I stumbled upon it when I was looking for something else in BOL, and this new system stored procedure drew me in to experiment with it. This SP does exactly what its name suggests — it describes the first result set. Let us see a very simple example. Please note that this will work only on SQL Server 2012.

    ```sql
    EXEC sp_describe_first_result_set
        N'SELECT * FROM AdventureWorks.Sales.SalesOrderDetail', NULL, 1
    GO
    ```

    Now let us take this simple example to the next level and learn one more interesting detail about this procedure. First I will create a view, and then we will use the same procedure over the view.

    ```sql
    USE AdventureWorks
    GO
    CREATE VIEW dbo.MyView
    AS
    SELECT [SalesOrderID] soi_v
          ,[SalesOrderDetailID] sodi_v
          ,[CarrierTrackingNumber] stn_v
    FROM [Sales].[SalesOrderDetail]
    GO
    ```

    Now let us execute the above stored procedure with various options. You can notice that I am changing the very last parameter I pass to the stored procedure; this option is known as browse_information_mode.

    ```sql
    EXEC sp_describe_first_result_set
        N'SELECT soi_v soi, sodi_v sodi, stn_v stn FROM MyView', NULL, 0;
    GO
    EXEC sp_describe_first_result_set
        N'SELECT soi_v soi, sodi_v sodi, stn_v stn FROM MyView', NULL, 1;
    GO
    EXEC sp_describe_first_result_set
        N'SELECT soi_v soi, sodi_v sodi, stn_v stn FROM MyView', NULL, 2;
    GO
    ```

    Comparing the three result sets shows the difference: when browse_information_mode is set to 1, the resultset describes the details of the original source database, schema, and source table; when it is set to 2, the resultset describes the view as the source database. I found it really interesting that there now exists a system stored procedure which describes the resultset of the output. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, T SQL, Technology

  • --log-slave-updates is OFF but some updates are still logged to the slave binary log?

    - by quanta
    MySQL version 5.5.14 According to the document, by the default, slave does not log to its binary log any updates that are received from a master server. Here are my config. on the slave: # egrep 'bin|slave' /etc/my.cnf relay-log=mysqld-relay-bin log-bin = /var/log/mysql/mysql-bin binlog-format=MIXED sync_binlog = 1 log-bin-trust-function-creators = 1 mysql> show global variables like 'log_slave%'; +-------------------+-------+ | Variable_name | Value | +-------------------+-------+ | log_slave_updates | OFF | +-------------------+-------+ 1 row in set (0.01 sec) mysql> select @@log_slave_updates; +---------------------+ | @@log_slave_updates | +---------------------+ | 0 | +---------------------+ 1 row in set (0.00 sec) but slave still logs the some changes to its binary logs, let's see the file size: -rw-rw---- 1 mysql mysql 37M Apr 1 01:00 /var/log/mysql/mysql-bin.001256 -rw-rw---- 1 mysql mysql 25M Apr 2 01:00 /var/log/mysql/mysql-bin.001257 -rw-rw---- 1 mysql mysql 46M Apr 3 01:00 /var/log/mysql/mysql-bin.001258 -rw-rw---- 1 mysql mysql 115M Apr 4 01:00 /var/log/mysql/mysql-bin.001259 -rw-rw---- 1 mysql mysql 105M Apr 4 18:54 /var/log/mysql/mysql-bin.001260 and the sample query when reading these binary files with mysqlbinlog utility: #120404 19:08:57 server id 3 end_log_pos 110324763 Query thread_id=382435 exec_time=0 error_code=0 SET TIMESTAMP=1333541337/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'118212' COLLATE 'utf8_general_ci')) /*!*/; # at 110324763 Did I miss something? Reply to @RolandoMySQLDBA: If replication brought this over, then the same query has to be in the relay logs. Please go find the relay log that has the INSERT query with the same TIMESTAMP (1333541337). There is no such query with the same TIMESTAMP in the relay logs. If you cannot find it in the relay logs, then look and see if Infobright is posting the INSERT query. In that instance, the INSERT should be recorded in the binary logs of the Slave. Looking more deeply into the binary logs, I see that almost of the queries are CREATE/INSERT/UPDATE/DROP "temporary" tables, something like this: # at 123873315 #120405 0:42:04 server id 3 end_log_pos 123873618 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; SET @@session.pseudo_thread_id=395373/*!*/; CREATE TEMPORARY TABLE `norep_tmpcampaign` ( `campaignid` INTEGER(11) NOT NULL DEFAULT '0' , `status` INTEGER(11) NOT NULL DEFAULT '0' , `updated` DATETIME, KEY `campaignid` (`campaignid`) )ENGINE=MEMORY /*!*/; # at 123873618 #120405 0:42:04 server id 3 end_log_pos 123873755 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; DROP TABLE IF EXISTS `norep_tmpcampaign1` /* generated by server */ "temporary" here means that they are dropped after calculation is done. 
I also tell the slave not to replicate any statement that matches the norep_ wildcard pattern:

replicate-wild-ignore-table=%.norep_%

But there is an exception table in the binary logs:

# at 123828094
#120405 0:37:21 server id 3 end_log_pos 123828495 Query thread_id=395209 exec_time=0 error_code=0
SET TIMESTAMP=1333561041/*!*/;
INSERT INTO sessions (SessionId, ApplicationName, Created, Expires, LockDate, LockId, Timeout, Locked, SessionItems, Flags) Values('pgv2exo4y4vo4ccz44vwznu0', '/', '2012-04-05 00:37:21', '2012-04-05 00:57:21', '2012-04-05 00:37:21', 0, 20, 0, 'AwAAAP////8IdXNlcm5hbWUGdXNlcmlkCHBlcm1pdGlkAgAAAAQAAAAGAAAAAQABAAEA', 0)
/*!*/;

Description:

mysql> desc reportingdb.sessions;
+-----------------+------------------+------+-----+---------------------+-------+
| Field           | Type             | Null | Key | Default             | Extra |
+-----------------+------------------+------+-----+---------------------+-------+
| SessionId       | varchar(64)      | NO   | PRI |                     |       |
| ApplicationName | varchar(255)     | NO   |     |                     |       |
| Created         | timestamp        | NO   |     | 0000-00-00 00:00:00 |       |
| Expires         | timestamp        | NO   |     | 0000-00-00 00:00:00 |       |
| LockDate        | timestamp        | NO   |     | 0000-00-00 00:00:00 |       |
| LockId          | int(11) unsigned | NO   |     | NULL                |       |
| Timeout         | int(11) unsigned | NO   |     | NULL                |       |
| Locked          | bit(1)           | NO   |     | NULL                |       |
| SessionItems    | varchar(255)     | YES  |     | NULL                |       |
| Flags           | int(11)          | NO   |     | NULL                |       |
+-----------------+------------------+------+-----+---------------------+-------+

I'm sure all these queries are posted to MySQL, not Infobright:

$ mysql-ib -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 48971
Server version: 5.1.40 build number (revision)=IB_4.0.5_r15240_15370(ice) (static)
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> select * from information_schema.tables where table_name='sessions';
Empty set (0.02 sec)

I've been trying some INSERT/UPDATE queries against testing tables on the master; they are copied to the relay logs, not to the binary logs, on the slave:

# at 311664029
#120405 0:15:23 server id 1 end_log_pos 311664006 Query thread_id=10458250 exec_time=0 error_code=0
use testuser/*!*/;
SET TIMESTAMP=1333559723/*!*/;
update users set email2='[email protected]' where id=22
/*!*/;

Pay attention to the server id: in the relay logs the server id is the master's (1), and in the binary logs it is the slave's (3 in this case).

Reply to @RolandoMySQLDBA: Thu Apr 5 10:06:00 ICT 2012. "Run CREATE DATABASE quantatest; on the Master now, please. Tell me if CREATE DATABASE quantatest; showed up in the Slave's Binary Logs." As I said above, the test queries from the master are copied to the relay logs, not the binary logs – and as you can guess, the IO thread copied this one to the relay logs, not the binary logs:

#120405 10:07:25 server id 1 end_log_pos 347573819 Query thread_id=10480775 exec_time=0 error_code=0
SET TIMESTAMP=1333595245/*!*/;
/*!\C latin1 *//*!*/;
SET @@session.character_set_client=8,@@session.collation_connection=8,@@session.collation_server=8/*!*/;
create database quantatest
/*!*/;

The question should probably change to: why are some update queries still logged to the slave's binary logs although --log-slave-updates is disabled? Where do they come from?
Here are a few of the latest entries:

/*!*/;
# at 27492197
#120405 10:12:45 server id 3 end_log_pos 27492370 Query thread_id=410353 exec_time=0 error_code=0
SET TIMESTAMP=1333595565/*!*/;
CREATE TEMPORARY TABLE norep_SplitValues (
value VARCHAR(1000) NOT NULL
) ENGINE=MEMORY
/*!*/;
# at 27492370
#120405 10:12:45 server id 3 end_log_pos 27492445 Query thread_id=410353 exec_time=0 error_code=0
SET TIMESTAMP=1333595565/*!*/;
BEGIN
/*!*/;
# at 27492445
#120405 10:12:45 server id 3 end_log_pos 27492619 Query thread_id=410353 exec_time=0 error_code=0
SET TIMESTAMP=1333595565/*!*/;
INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'119577' COLLATE 'utf8_general_ci'))
/*!*/;
# at 27492619
#120405 10:12:45 server id 3 end_log_pos 27492695 Query thread_id=410353 exec_time=0 error_code=0
SET TIMESTAMP=1333595565/*!*/;
COMMIT
/*!*/;
# at 27492918
#120405 10:12:46 server id 3 end_log_pos 27493115 Query thread_id=410353 exec_time=0 error_code=0
SET TIMESTAMP=1333595566/*!*/;
SELECT `reportingdb`.`selfserving_get_locationad`(_utf8'3' COLLATE 'utf8_general_ci',_utf8'' COLLATE 'utf8_general_ci')
/*!*/;
# at 27493199
#120405 10:12:46 server id 3 end_log_pos 27493353 Query thread_id=410353 exec_time=0 error_code=0
SET TIMESTAMP=1333595566/*!*/;
/*!\C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
DROP TEMPORARY TABLE IF EXISTS `norep_SplitValues` /* generated by server */
/*!*/;
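For what it's worth, the server ids above already hint at one plausible reading: events written with the slave's own server id (3) were executed directly on the slave by a local client – here, via the selfserving_get_locationad stored function call – and both --log-slave-updates and replicate-wild-ignore-table only filter events arriving through the replication SQL thread, never direct client writes. A minimal sketch to confirm this on the slave; the table name is illustrative and the binlog file name is a placeholder:

-- Run these directly on the slave, not via replication:
SHOW VARIABLES LIKE 'log_slave_updates';   -- OFF: replicated events are not re-logged

-- A direct client write is still binlogged, because log-bin is ON and the
-- ignore filters only apply to the replication SQL thread:
CREATE TEMPORARY TABLE norep_demo (value VARCHAR(1000) NOT NULL) ENGINE=MEMORY;

SHOW MASTER STATUS;                        -- note the current binlog file name
SHOW BINLOG EVENTS IN 'mysql-bin.001260';  -- placeholder name; the CREATE above
                                           -- should appear with server id 3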

    Read the article

  • SIM to OIM Migration: A How-to Guide to Avoid Costly Mistakes (SDG Corporation)

    - by Darin Pendergraft
In the fall of 2012, Oracle launched a major upgrade to its IDM portfolio: the 11gR2 release. 11gR2 had four major focus areas: a more simplified and customizable user experience; support for cloud, mobile, and social applications; extreme scalability; and a clear upgrade path. For SUN migration customers, it is critical to develop and execute a clearly defined plan prior to beginning this process. The plan should include initiation and discovery, assessment and analysis, future-state architecture, review and collaboration, and gap analysis. To help you better understand your upgrade choices, SDG, an Oracle partner, has developed a series of three whitepapers focused on SUN Identity Manager (SIM) to Oracle Identity Manager (OIM) migration. In the second of this series, Santosh Kumar Singh from SDG discusses the proper steps that should be taken during the planning-to-post-implementation phases to ensure a smooth transition from SIM to OIM. Read the whitepaper for Part 2: Download Part 2 from SDGC.com. In the last of this series of white papers, Santosh will talk about Identity and Access Management best practices and how these need to be considered when going through an OIM migration. If you have not yet taken the opportunity, please read the first in this series, which discusses the migration approach, methodology, and tools to consider when planning a migration from SIM to OIM. Read the white paper for Part 1: Download Part 1 from SDGC.com. About the Author: Santosh Kumar Singh, Identity and Access Management (IAM) Practice Leader. In his capacity as SDG's IAM Practice Leader, Santosh has direct senior management responsibility for the firm's strategy, planning, competency building, and engagement delivery for this practice. He brings over 12 years of extensive IT, business, and project management and delivery experience, primarily within enterprise directory, single sign-on (SSO) application, and federated identity services, provisioning solutions, role and password management, and security audit and enterprise blueprint. Santosh possesses strong architecture and implementation expertise in all areas of these technologies and has repeatedly led teams in successfully deploying complex technical solutions. About SDG: SDG Corporation empowers forward-thinking companies to strategize their future, realize their vision, and minimize their IT risk. SDG distinguishes itself by offering flexible business models to fit its clients' needs; faster time-to-market with its pre-built solutions and frameworks; a broad-based foundation of domain experts; and deep program management expertise. (www.sdgc.com)

    Read the article

  • JD Edwards World Reporting Made Easy with Real Time Reporting Tools from The GL Company

    Fred talks to Paul Yarwood, US Operations General Manager and Richard Crotty, North America Business Development Manager for The GL Company, an Oracle Certified Partner, and Denise Grills, Senior Director of Marketing and Product Strategy for Oracle's JD Edwards World products. They discuss how the finance department of JD Edwards World customers can have complete control over their management reporting with a true inquiry, consolidation, and reporting solution from The GL Company, freeing up the finance team from being dependent upon IT time and resources.

    Read the article

  • Web Self Service installation on Windows

    - by Rajesh Sharma
Web Self Service (WSS) installation on Windows is pretty straightforward, but you might face some issues if it is deployed under Tomcat. Here's a step-by-step guide to installing Oracle Utilities Web Self Service on Windows.

The installation steps below were performed on:
Oracle Utilities Framework version 2.2.0
Oracle Utilities Application - Customer Care & Billing version 2.2.0
Application server - Apache Tomcat 6.0.13 on default port 6500

Other settings include:
SPLBASE = C:\spl\CCBDEMO22
SPLENVIRON = CCBV22
SPLWAS = TCAT

Follow these steps for a Web Self Service installation on Windows:

1. Download the Web Self Service application from eDelivery.

2. Copy the delivery file Release-SelfService-V2.2.0.zip from the Oracle Utilities Customer Care and Billing version 2.2.0 Web Self Service folder on the installation media to a directory on your Windows box where you would like to install the application, in our case the temporary folder C:\wss_temp.

3. Set up the application environment: execute splenviron.cmd -e <ENVIRON_NAME>

4. Create a base folder for the Self Service application named SelfService under %SPLEBASE%\splapp\applications

5. Install Oracle Utilities Web Self Service:

C:\wss_temp\Release-SelfService-V2.2.0>install.cmd -d %SPLEBASE%\splapp\applications\SelfService

6. In the Web Self Service installation menu, populate the environment values for each item:

********************************************************
Pick your installation options:
********************************************************
1. Destination directory name for installation.             | C:\spl\CCBDEMO22\splapp\applications\SelfService
2. Web Server Host.                                         | CCBV22
3. Web Server Port Number.                                  | 6500
4. Mail SMTP Host.                                          | CCBV22
5. Top Product Installation directory.                      | C:\spl\CCBDEMO22
6.     Web Application Server Type.                         | TCAT
7.     When OAS: SPLWeb OC4J instance name is required.     | OC4J1
8.     When WAS: SPLWeb server instance name is required.   | server1

P. Process the installation.

Each item in the above list should be configured for a successful installation. Choose an option to configure, or (P) to process the installation: P

Options 7 and 8 can be ignored for TCAT.

7. The step above installs the SelfService.war file in the destination directory. We need to explode this war file. Change directory to the installation destination folder, and run:

C:\spl\CCBDEMO22\splapp\applications\SelfService>jar -xf SelfService.war

8. Review SelfServiceConfig.properties and CMSelfServiceConfig.properties. Change any property values within the files specific to your installation/site. Generally the default settings apply; this exercise assumes that the WEB user already exists in your application database.

For more information on property file customization, refer to the Oracle Utilities Web Self Service Configuration section in the Customer Care & Billing Installation Guide.

9. Add a context entry in server.xml, located under the tomcat-base folder C:\spl\CCBDEMO22\product\tomcatBase\conf

...
<!-- SPL Context -->
<Context path="" docBase="C:/spl/CCBDEMO22/splapp/applications/root" debug="0" privileged="true"/>
<Context path="/appViewer" docBase="C:/spl/CCBDEMO22/splapp/applications/appViewer" debug="0" privileged="true"/>
<Context path="/help" docBase="C:/spl/CCBDEMO22/splapp/applications/help" debug="0" privileged="true"/>
<Context path="/XAIApp" docBase="C:/spl/CCBDEMO22/splapp/applications/XAIApp" debug="0" privileged="true"/>
<Context path="/SelfService" docBase="C:/spl/CCBDEMO22/splapp/applications/SelfService" debug="0" privileged="true"/>
...

10. Add a user in tomcat-users.xml, located under the tomcat-base folder C:\spl\CCBDEMO22\product\tomcatBase\conf

<user username="WEB" password="selfservice" roles="cisusers"/>

Note the password is "selfservice"; this is the default password set within the SelfServiceConfig.properties file with base64 encoding.

11. Restart the application (spl.cmd stop | start)

12. Although Apache Tomcat version 6.0.13 does not come with the admin pack, you can verify whether the SelfService application is loaded and running by going to the URL http://server:port/manager/list, in our case http://ccbv22:6500/manager/list. The following output will be displayed:

OK - Listed applications for virtual host localhost
/admin:running:0:C:/tomcat/apache-tomcat-6.0.13/webapps/ROOT/admin
/XAIApp:running:0:C:/spl/CCBDEMO22/splapp/applications/XAIApp
/host-manager:running:0:C:/tomcat/apache-tomcat-6.0.13/webapps/host-manager
/SelfService:running:0:C:/spl/CCBDEMO22/splapp/applications/SelfService
/appViewer:running:0:C:/spl/CCBDEMO22/splapp/applications/appViewer
/manager:running:1:C:/tomcat/apache-tomcat-6.0.13/webapps/manager
/help:running:0:C:/spl/CCBDEMO22/splapp/applications/help
/:running:0:C:/spl/CCBDEMO22/splapp/applications/root

Also ensure that the XAIApp is running.

13. Run the Oracle Utilities Web Self Service application at http://server:port/SelfService, in our case http://ccbv22:6500/SelfService

Still doesn't work, and you get a '503' HTTP response at the time of customer registration? This is because the XAI service is still unavailable. There is an initialize.waittime setting with a default value of 90 seconds for the XAI application to come up. Remember, WSS uses XAI to perform actions/validations on the CC&B database.
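One more preflight check that can save a round of head-scratching: step 8 assumes the WEB user already exists in the CC&B database. A hedged sketch of the verification query – SC_USER is the usual OUAF security table, but confirm the exact table and column names against your CC&B 2.2.0 schema:

-- Verify the self-service login exists before testing registration
-- (table/column names assumed from the standard OUAF schema):
SELECT USER_ID, FIRST_NAME, LAST_NAME
FROM   SC_USER
WHERE  USER_ID = 'WEB';

If the query returns no row, create the WEB user through the CC&B user maintenance screens before retrying customer registration.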

    Read the article

  • A Complete Virtualization Landscape on Your Own Laptop – Here's How

    - by Manuel Hossfeld
A complete virtualization landscape on your own laptop – here's how.

If you want to take a closer look at the virtualization product Oracle VM in its current version 3.x, the obvious approach is to install your own environment for learning and testing purposes. But that is easier said than done: a closer look at the architecture quickly shows that several machines are needed just to cover all the components.

First, you have to install the OVM Server (or Servers) itself. That is done quickly and easily, but since Oracle VM is a "type 1 hypervisor" – that is, installed directly on the machine ("bare metal") – your own work PC or laptop is rather unsuitable for it. (A dual-boot environment would be conceivable, but fairly impractical.)

Second, you need a machine on which the OVM Manager is installed. Unlike the OVM Server, it is not installed "bare metal" but on top of an existing Oracle Linux. But what do you do if you have no Linux server at hand and don't want to sacrifice extra hardware for it?

If you want to test all the features of Oracle VM, you should also have shared storage available, attached either via NFS or via a SAN (iSCSI or FibreChannel). You don't strictly need "real" storage hardware for testing, but even "simulating" the corresponding components requires additional hardware with enough free disk space. (Alternatively, ready-made "software storage appliances" such as OpenFiler or FreeNAS can be used.)

Assuming that no "real" server and storage hardware is actually available, you would need three or four machines (PCs, laptops...) for the three points above – depending on whether you want to start one or two OVM Servers. Fortunately, it also works with considerably less effort: as briefly described in the blog post about the latest OVM release 3.1.1, the current version is able to run entirely inside VirtualBox as a guest. If this "double virtualization" reminds you of the principle of Russian matryoshka dolls, you are exactly right. Oracle VM VirtualBox forms the outer shell, so to speak – and since VirtualBox, unlike Oracle VM Server, is a "type 2 hypervisor", this approach also works on a "normal" work PC or laptop without completely overwriting its operating system.

Best of all, you do not have to install the individual VirtualBox VMs yourself. Both the OVM Manager and the OVM Server are already available for download in the Oracle Technology Network as prebuilt "VirtualBox appliances" and basically only need to be imported and configured. The following diagram illustrates the principle: the dark green areas each represent instances of the VirtualBox appliances for OVM Server and OVM Manager just mentioned. (The picture shows two OVM Servers; one would of course be enough as a minimum, but then many features such as OVM HA cannot be tried out.)
As a clever trick to save yet another VM for storage purposes, Wim Coekaerts (Senior Vice President of Linux and Virtualization Engineering at Oracle), the "builder" of the VirtualBox appliances, has prepared the OVM Manager appliance so that it can simultaneously serve as an NFS share (or, if needed, even as an iSCSI target). He also describes this briefly on his blog.

The light green ovals represent the VMs that can then run inside one of the virtualized OVM Servers. Because this "double virtualization" loses the ability to use hardware virtualization, these payload VMs can consequently only be paravirtualized (PVM). The network interfaces drawn in blue are virtual interfaces that can be set up arbitrarily within VirtualBox. If you want to try out the various network roles within Oracle VM in detail, you can of course configure more than two of these interfaces here.

The advantages of this solution for test and demo purposes are obvious: with just a single PC or laptop that has VirtualBox installed, all the components mentioned above can be installed and used – provided there is enough RAM. 8 GB should be regarded as the minimum here. If you also want to keep working on the "host environment" (the PC running VirtualBox) and/or run several payload VMs in this simulated OVM server environment, 16 GB or more is recommended.

Since the steps needed to install and initially configure the environment are described in detail in a corresponding paper, I would like to use the rest of this article for some additional tips and details that can make your life a little easier:

Make extensive use of VirtualBox's built-in VM snapshot functionality, so you can approach the configuration of the environment in a relaxed way and with an extra "safety net". This not only lets you roll back if something goes wrong, but also lets you repeat sub-steps you have already completed as often as you like (for example, to try out another idea or variant of the environment).

Use meaningful names both for the snapshots just mentioned and for the VMs themselves. That way you will not get confused, and even after a few weeks you will still know which environment you are actually looking at. This includes the exact version and build number of the respective OVM release. (See also the following screenshot.) Further information and details about the current state and purpose of each VM can be kept in the often-overlooked description field.

Create a note (or a text file) with the planned IP addresses and names for the VMs BEFORE the installation. (Don't forget: the server pool also needs its own IP.) While doing this, also double-check and write down the actual networks of the VirtualBox interfaces you plan to use.

Caution: during the installation there are some passwords that can be set by the user – and some that are initially fixed. The latter include the password for the ovs-agent as well as for the root user on the OVM Servers, both of which default to "ovsroot".
(All further password information can be found in the "Read me first" document that sits on the desktop of the OVM Manager VM.) You should also be careful during the initial "interview phase" that the VirtualBox VMs go through after they are booted for the first time. At that point the American keyboard layout is definitely still active, so it is better not to use, for example, a "y" or "z" in a password you choose yourself. Finally, since the OVM Manager, as mentioned above, also provides the shared storage, make sure that its VM is started before the OVM Server VMs. (Otherwise the cluster underlying the OVM Server pool will not "find" its so-called server pool file system.)

    Read the article

  • John Hitchcock of Pace Describes the Oracle Agile PLM Customer Experience

    John Hitchcock, Senior Manager of Configuration Management at Pace (formerly 2Wire, Inc.), sat down for an interview during Oracle's Innovation Summit with Kerrie Foy, Manager of PLM Product Marketing at Oracle. Learn why his organization upgraded to the latest version of Agile and expanded the footprint to achieve impressive savings and productivity gains across the global, networked product value-chain.

    Read the article
