Search Results

Search found 4532 results on 182 pages for 'ibm rational quality mana'.

Page 5/182 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Logitech Bluetooth audio receiver quality

    - by lietus
    I bought a Logitech Bluetooth audio receiver and have run into a problem: when playing audio, it contains short hisses/clicks on higher tones or vocal-plus-heavy-instrumental music (the sound quality is not clean). I'm not sure what the reason could be, but this only happens when I play from certain devices. My laptops (a Dell XPS and an Asus Eee Seashell, both Win7) and my phone (Samsung Player 5) all have the problem. However, when I tried a Samsung S2 phone, the audio was crystal clear. It seems this could be something with the transmitting device's Bluetooth. Has anyone encountered a similar problem?

    Read the article

  • HP/IBM alternative to Buffalo iSCSI TeraStation?

    - by Robin Day
    I'm looking at virtualising some of our infrastructure in order to allow for more resilience and future expandability. We have successfully virtualised on single servers with Direct Attached Storage and are now looking for a more future-proof solution using a high-powered host (or two) and a SAN (or two). I'm thinking that the host machine will probably be an HP ProLiant DL360 G7 (all of our existing infrastructure is HP). Unfortunately, I am new to the world of SANs. From what I can see, the Buffalo TeraStation III is all I would need in order to set up an iSCSI SAN for VMware to use. However, I'm a little hesitant to go that way as it's a bit too "entry level" for my liking. In particular I would be very keen on more redundancy, power, networking, etc. I'm also very aware that you "get what you pay for". Can anyone therefore recommend equivalents from the big boys, HP/IBM? I have searched high and low on the HP site and seen many options, but I am struggling to work out whether any of them is all the hardware I will need. Some options appear to need separate controllers, disk enclosures, etc.

    Read the article

  • Texture quality issues with libgdx

    - by user1876708
    I have drawn several vector objects and characters (in Adobe Illustrator) for my game on Android. They are all scalable to any size without any quality loss (of course, it's vector ^^). I tried to simulate my gameboard directly in Illustrator just before setting up my assets in libgdx to implement them in my game. I set all the objects at the right size, so that they fit perfectly on the XHDPI device I am running my tests on. As you can see it works great (for me at least ^^); the PNG quality is as good as I expected! So I exported all my PNGs at this size, set up my assets in libgdx and built my game APK. And here is a screenshot of my gameboard (don't pay attention to the differences in placement, but compare the objects present in both screenshots). As you can see, I have a loss of PNG quality in the game. It can be seen clearly on the hedgehog PNG, but also (less obviously) on the mushroom (look at the outline) and the hole PNG. If you really pay attention, on every object you can see pixels that are not visible in my first screenshot. And I just can't figure out why this is happening. If you have any ideas, you are very welcome! Thanks. PS: You can compare the two gameboards more closely via these two links (look at them at 100%, displayed at high resolution): good quality link, from Illustrator; poor quality link, from the game. Second phase of tests: we display an object (the hedgehog) on our main menu screen to see how it looks. The thing is that it looks the way it is supposed to, which means high quality with no visible pixels. The hedgehog PNG comes from an atlas: layer.addActor(hedgehog); and there is no loss of quality with this method. So we think the problem comes from the method we are using to display it on our gameboard: blocks[9][3] = new Block(TextureUtils.hedgehog, new Vector2(9, 3)); the block gets its size from the vector we associate with it, but we have a loss of quality with this method.
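    One commonly suggested culprit for this symptom (an assumption on our part, not something stated in the post) is libgdx's default texture filter: textures and atlas pages default to Nearest, so any draw at a size other than 1:1 shows hard pixel edges. A minimal sketch of switching an atlas's backing pages to Linear filtering; the "game.atlas" path is a placeholder:

        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.graphics.Texture;
        import com.badlogic.gdx.graphics.Texture.TextureFilter;
        import com.badlogic.gdx.graphics.g2d.TextureAtlas;

        public final class SmoothAtlas {
            // Load an atlas and switch every backing texture page from the
            // default Nearest filter to Linear, so regions drawn at a scaled
            // size are interpolated instead of showing blocky pixels.
            public static TextureAtlas load(String internalPath) {
                TextureAtlas atlas = new TextureAtlas(Gdx.files.internal(internalPath));
                for (Texture page : atlas.getTextures()) {
                    page.setFilter(TextureFilter.Linear, TextureFilter.Linear);
                }
                return atlas;
            }
        }

    If the menu screen draws the region at its native size while Block stretches it to the cell size, that would also explain why only the gameboard shows the artifacts.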

    Read the article

  • In which fields does quality of the software product matter as much as the completion time?

    - by Nav
    Someone told me that if a software product meets the client's expectations, it is good quality. But I've worked with interaction designers (the same kind of people who made Gmail's interface and usability so cool!), and I've loved working with them, because even though they came up with hundreds of changes in requirements and emphasised many, many subtle details, when the software was complete I could look at the product and say WOW! At my current workplace, the only thing that matters is completing the project on time. As long as it works and the client says it's OK, nobody bothers to improve it. I'm not talking about gold-plating, but I believe that for programmers to enjoy their job, they should be able to proudly say "Hey, I made that software", and that comes only when the product is of good quality. Apart from your opinions on this, I'd also like to know in which fields (e.g. aerospace, finance, etc.) I could find companies (or you could mention the company name) where the quality of a product is as important as completing the project on time.

    Read the article

  • Innovative Applications with WebSphere Server Feature Packs and Rational Tooling

    This webcast will cover how the new open standards and programming models delivered through WebSphere Application Server Feature Packs can be used to create innovative web applications that help you stay ahead of your competition. The Feature Packs covered will include: Web 2.0, Service Component Architecture, Communication Enabled Applications, and OSGi. It will also cover the IBM Rational tooling that can help you quickly leverage the new capabilities delivered in these Feature Packs and accelerate the delivery of new applications. Date / Time: Wednesday, December 15, 2010 / 11:00 AM PT / 2:00 PM ET

    Read the article

  • SQL SERVER – Configuring Interactive Cleansing Suggestion Min Score for Suggestions in Data Quality Services (DQS) – Sensitivity of Suggestion

    - by pinaldave
    Earlier I talked about the kind of questions I do not like to be asked. Today we will go over a question that I do like to be asked. One of my readers was working through the steps in my earlier blog post Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS. While reading the post he noticed that Data Quality Services was not providing very helpful suggestions, and he wrote me an email about it. Let us go over his email. "Pinal, I noticed in one of your images that DQS is not providing very helpful suggestions. First of all, DQS should be able to make intelligent guesses and make the necessary corrections by itself. If it cannot do that, it should at least give us intelligent suggestions, but in the image included here I see no suggestions at all. Why is that? Would you please tell me how to increase the number of suggestions? I understand this may not be the preferable solution in every case; it all depends on the business scenario. There are cases where high sensitivity is required and cases where it is not. I would like to seek your help here. –Sriram MD" This is indeed a great question. I see that Sriram understands that every system is different and every application has a different need, so I do not have to explain that most important concept. The question is about how to change the sensitivity of suggestions for corrections in DQS. This option is available under the Configuration tab in the DQS client. Once you click on Configuration, click the General Settings tab and you will see the Interactive Cleansing section. The first option there is "Min score for suggestions". As it is set to 0.7, every suggestion that matches with a probability of 0.7 or higher is displayed under the suggestions tab. You can see in the following image that there are no suggestions, because no record qualifies with that much confidence while the min score for suggestions is set to 0.7. Now let us change the value of Min score for suggestions to 0.5. The lower value lets DQS offer suggestions for values that match with a probability over 0.5. In our case the suggestions it provides are also accurate, but this may not be true for your sample. Every sample is different, so you should always review suggestions manually before approving them. I guess this is a simple blog post to demonstrate how to change the confidence value for the suggestions that Data Quality Services provides. Use this feature with care and always tune it according to your datasets and record diversity. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Migrating test cases & defects from Quality Center to TFS 2008/2010

    - by stackoverflowuser
    I'm looking for a tool that can be used to migrate (or even better, synchronize) test cases and bugs between: TFS 2008 and Quality Center 9.2 (or later), and TFS 2010 and Quality Center 9.2 (or later). I am aware of the following tools: Test Case Migrator (Excel/MHT) Tool, and TFS Bug Item Synchronizer 2.2 for Quality Center. Shai Raiten also mentions on his blog a QC to Team System 2010 migration tool that he has been working on, and says it is done, but I could not find any link for downloading it: http://blogs.microsoft.co.il/blogs/shair/archive/2009/12/31/quality-center-migration-to-team-system-2010-done.aspx. Before jumping into coding my own tool with the TFS SDK and QC components, I need some input from the Stack Overflow community.

    Read the article

  • Ensuring quality of your software and code

    - by Filip Ekberg
    When I write code I usually follow some guidelines to ensure that it meets a certain standard, and like any other developer I try to ensure that my code and software are of good quality. Let's focus on the programming itself rather than on understanding the domain or any other pre-programming steps. These are the steps I live by:

    - Writing unit tests
    - Make it fail (no code)
    - Make it work (working code)
    - Analysing abstraction
    - Extracting methods
    - Extracting interfaces
    - Refactoring

    In addition to the above, which is part of refactoring, I also try to refactor the code with good tools such as ReSharper, CodeRush or others (a minimal red/green sketch follows below). The question: what is the next step? Commenting the code is trivial and shouldn't even have to be mentioned, but keeping comments and XML docs up to date wherever they are needed is something I try to do. All of the above helps other developers understand my code and ensures that the code has a certain quality and follows naming standards. It does not, however, ensure any product quality. I am looking for tools for post-development quality assurance, such as profilers, and for how one would use such tools to increase product quality.
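    As an illustration of the first three steps above (write the test, watch it fail, make it work), a minimal JUnit 4 sketch; the PriceCalculator class is invented for the example:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class PriceCalculatorTest {
            // Steps 1-2: written before the implementation exists, this test
            // fails ("red") until applyDiscount is implemented.
            @Test
            public void tenPercentDiscountIsApplied() {
                PriceCalculator calc = new PriceCalculator();
                assertEquals(90.0, calc.applyDiscount(100.0, 0.10), 0.001);
            }
        }

        class PriceCalculator {
            // Step 3: the simplest implementation that makes the test pass
            // ("green"); extracting methods and interfaces comes afterwards.
            double applyDiscount(double price, double rate) {
                return price * (1.0 - rate);
            }
        }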

    Read the article

  • How can I set the BIOS/EFI security password on IBM System x servers by script/ASU?

    - by christian123
    I want to deploy IBM System x servers (like the IBM System x3550 M2) automatically and need to set a security password in the BIOS (actually it's UEFI). I found this nice tool named ASU: http://www-947.ibm.com/systems/support/supportsite.wss/docdisplay?brandind=5000008&lndocid=MIGR-55021. Unfortunately I cannot see an option to set the password. Forum searches only turn up people who want to reset the password using this tool. Does anybody know how to automatically deploy system passwords on IBM Intel-based servers?

    Read the article

  • IBM MQ corrupted messages

    - by Anand
    Hi, I posted the question below in the forum and now I am asking another question in the hope that I get some pointers (see my previous post). OK, let's begin. The problem is like this:

    OS: Linux
    1. I post messages to IBM MQ.
    2. Some random messages in the queue get randomly corrupted, as described in the previous Stack Overflow question.

    OS: Windows
    1. I post messages to IBM MQ.
    2. Some random messages in the queue get randomly corrupted, as described in the previous Stack Overflow question.

    OS: Windows
    1. I post messages to IBM MQ.
    2. Now I read the messages and write them to a file, just to observe them.
    3. I also allow the messages to pass through as-is after writing them to the file.
    In this case everything goes through fine.

    How can I resolve this problem? (A read-and-dump sketch follows below.)
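    For the read-and-dump step, a minimal sketch using the classic com.ibm.mq Java API; the queue manager, queue and file names are placeholders. MQGMO_CONVERT is worth trying here, because "random corruption" that varies by platform is often a CCSID/encoding mismatch rather than damaged payloads:

        import com.ibm.mq.MQC;
        import com.ibm.mq.MQGetMessageOptions;
        import com.ibm.mq.MQMessage;
        import com.ibm.mq.MQQueue;
        import com.ibm.mq.MQQueueManager;
        import java.io.FileOutputStream;

        public class DumpOneMessage {
            public static void main(String[] args) throws Exception {
                MQQueueManager qmgr = new MQQueueManager("QM1");        // placeholder
                MQQueue queue = qmgr.accessQueue("APP.IN",              // placeholder
                        MQC.MQOO_INPUT_AS_Q_DEF | MQC.MQOO_FAIL_IF_QUIESCING);

                MQGetMessageOptions gmo = new MQGetMessageOptions();
                // Ask the queue manager to convert the payload to this
                // application's CCSID; encoding mismatches can look like
                // random corruption affecting only some messages.
                gmo.options = MQC.MQGMO_WAIT | MQC.MQGMO_CONVERT;
                gmo.waitInterval = 5000;

                MQMessage msg = new MQMessage();
                queue.get(msg, gmo);

                byte[] body = new byte[msg.getDataLength()];
                msg.readFully(body);
                try (FileOutputStream out = new FileOutputStream("msg.dump")) {
                    out.write(body);                // inspect the raw bytes offline
                }

                queue.close();
                qmgr.disconnect();
            }
        }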

    Read the article

  • Enterprise Data Quality - New and Improved on Oracle Technology Network

    - by Mala Narasimharajan
    Looking for Enterprise Data Quality technical and developer resources for your projects? Wondering where the best place is to find the latest documentation, downloads, and even code samples and libraries? Check out the new and improved Oracle Technology Network pages for Oracle Enterprise Data Quality. The section also features developer forums for EDQ and Master Data Management, so that you can connect with other technical professionals who have posted questions or tips and tricks, and learn from them. Here are the links to bookmark: Oracle Technology Network website; * NEW * Installation Guide for Enterprise Data Quality Address Verification; Enterprise Data Quality Forum. For more information on Oracle's software offerings for data quality and master data management visit: http://www.oracle.com/us/products/applications/master-data-management/index.html http://www.oracle.com/us/products/middleware/data-integration/enterprise-data-quality/overview/index.html

    Read the article

  • New Feature in ODI 11.1.1.6: Enterprise Data Quality Integration

    - by Julien Testut
    Oracle Data Integrator 11.1.1.6.0 introduces a new Open Tool called EnterpriseDataQuality which allows ODI users to invoke an Oracle Enterprise Data Quality Job from a Package. This post will give you an overview of this new feature. Oracle Enterprise Data Quality (OEDQ) provides organizations with an integrated suite of data quality tools that offer an end-to-end solution to measure, improve, and manage the quality of data from any domain, including customer and product data. The addition of the EnterpriseDataQuality Open Tool extends the inline Data Quality capabilities of Oracle Data Integrator with Oracle Enterprise Data Quality powerful data profiling, cleansing, matching, and monitoring capabilities. The EnterpriseDataQuality Open Tool can invoke any OEDQ Job stored in a Project. This Open Tool connects to an OEDQ server using a JMX (Java Management Extensions) interface. Once installed, this Open Tool will be found under Plugins in the Package Toolbox area: This EnterpriseDataQuality Open Tool takes a couple of parameters as inputs such as the Enterprise Data Quality Job and Project names, the OEDQ hostname and JMX port etc. With the EnterpriseDataQuality Open Tool, ODI customers can now incorporate their Oracle Enterprise Data Quality processes within their Data Integration workflows. You will find instructions about how to use the Enterprise Data Quality Open Tool in the Oracle Data Integrator documentation at: Using the EnterpriseDataQuality Open Tool. You can find an overview of all the new features introduced in ODI 11.1.1.6 in the following document: ODI 11.1.1.6 New Features Overview.
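    Under the hood this is plain JMX. The generic sketch below (not taken from the ODI documentation) only connects and lists the MBeans a server exposes; the host, port and "/jmxrmi" URL path are placeholders, not documented OEDQ values, and in practice the Open Tool wraps all of this for you:

        import javax.management.MBeanServerConnection;
        import javax.management.ObjectName;
        import javax.management.remote.JMXConnector;
        import javax.management.remote.JMXConnectorFactory;
        import javax.management.remote.JMXServiceURL;

        public class ListMBeans {
            public static void main(String[] args) throws Exception {
                // Standard RMI-based JMX service URL with placeholder values.
                JMXServiceURL url = new JMXServiceURL(
                        "service:jmx:rmi:///jndi/rmi://edqhost:8090/jmxrmi");
                try (JMXConnector connector = JMXConnectorFactory.connect(url, null)) {
                    MBeanServerConnection mbs = connector.getMBeanServerConnection();
                    // Print every registered MBean; a job-control bean would be
                    // invoked via mbs.invoke(...) once its ObjectName is known.
                    for (ObjectName name : mbs.queryNames(null, null)) {
                        System.out.println(name);
                    }
                }
            }
        }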

    Read the article

  • Photoshop: why does a low-quality JPG saved at high quality increase the file size?

    - by Alex Angelico
    I have a low-quality background image of about 85 KB. If I open the file in Photoshop and save it at 100% quality, the file size increases to 640 KB. This doesn't make sense to me: the image is already compressed, and the quality cannot be better than the source, so saving at 100% quality should produce the same file that was opened. The problem is, I have this background and want to add a logo image over it and then save it. But if I do so, I have to save it at 10% quality to keep the size down, and then the logo looks horrible. If I save at 100% quality, the file size is huge. How can I achieve this?

    Read the article

  • selectOneMenu - java.lang.NullPointerException when adding record to the database (JSF2 and JPA2-OpenJPA)

    - by rogie
    Good day to all. I'm developing a program using JSF2 and JPA2 (OpenJPA), with IBM Rational Application Developer v8 and a WebSphere Application Server v8 test server. I have two simple entities, Employee and Department. Each Department has many Employees and each Employee belongs to a Department (via deptno and workdept). My problem occurs when I try to add a new employee and select a department from a combo box (a selectOneMenu populated from the Department table). When I run the program, the following error messages appear:

        An Error Occurred: java.lang.NullPointerException
        Caused by: java.lang.NullPointerException - java.lang.NullPointerException

    I also tried making another program using deptno and workdept as String instead of integer; it still doesn't work. Please help; I'm also a newbie. Thanks and God bless. Below are my code, configuration and setup; just tell me if there is anything I forgot to include. I'm using Derby v10.5 as my database:

        CREATE SCHEMA RTS;

        CREATE TABLE RTS.DEPARTMENT
            (DEPTNO INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
             DEPTNAME VARCHAR(30));
        ALTER TABLE RTS.DEPARTMENT ADD CONSTRAINT PK_DEPARTMNET PRIMARY KEY (DEPTNO);

        CREATE TABLE RTS.EMPLOYEE
            (EMPNO INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
             NAME VARCHAR(50),
             WORKDEPT INTEGER);
        ALTER TABLE RTS.EMPLOYEE ADD CONSTRAINT PK_EMPLOYEE PRIMARY KEY (EMPNO);

    Employee and Department entities:

        package rts.entities;

        import java.io.Serializable;
        import javax.persistence.*;

        @Entity
        public class Employee implements Serializable {
            private static final long serialVersionUID = 1L;

            @Id
            @GeneratedValue(strategy=GenerationType.IDENTITY)
            private int empno;

            private String name;

            //bi-directional many-to-one association to Department
            @ManyToOne
            @JoinColumn(name="WORKDEPT")
            private Department department;

            // ... getter and setter methods
        }

        package rts.entities;

        import java.io.Serializable;
        import javax.persistence.*;
        import java.util.List;

        @Entity
        public class Department implements Serializable {
            private static final long serialVersionUID = 1L;

            @Id
            @GeneratedValue(strategy=GenerationType.IDENTITY)
            private int deptno;

            private String deptname;

            //bi-directional many-to-one association to Employee
            @OneToMany(mappedBy="department")
            private List<Employee> employees;

            // ... getter and setter methods
        }

    JSF 2 snippet for the combo box (populated from the Department table):

        <tr>
            <td align="left">Department</td>
            <td style="width: 5px">&#160;</td>
            <td><h:selectOneMenu styleClass="selectOneMenu" id="department1"
                    value="#{pc_EmployeeAdd.employee.department}">
                <f:selectItems value="#{DepartmentManager.departmentSelectList}" id="selectItems1"></f:selectItems>
            </h:selectOneMenu></td>
        </tr>

    The manager bean:

        package rts.entities.controller;

        import com.ibm.jpa.web.JPAManager;
        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import com.ibm.jpa.web.NamedQueryTarget;
        import com.ibm.jpa.web.Action;
        import javax.persistence.PersistenceUnit;
        import javax.annotation.Resource;
        import javax.transaction.UserTransaction;
        import rts.entities.Department;
        import java.util.List;
        import javax.persistence.Query;
        import java.util.ArrayList;
        import java.text.MessageFormat;
        import javax.faces.model.SelectItem;

        @SuppressWarnings("unchecked")
        @JPAManager(targetEntity = rts.entities.Department.class)
        public class DepartmentManager {
            // ...

            public List<SelectItem> getDepartmentSelectList() {
                List<Department> departmentList = getDepartment();
                List<SelectItem> selectList = new ArrayList<SelectItem>();
                MessageFormat mf = new MessageFormat("{0}");
                for (Department department : departmentList) {
                    selectList.add(new SelectItem(department, mf.format(
                            new Object[] { department.getDeptname() },
                            new StringBuffer(), null).toString()));
                }
                return selectList;
            }
        }

    Converter:

        package rts.entities.converter;

        import javax.faces.component.UIComponent;
        import javax.faces.context.FacesContext;
        import javax.faces.convert.Converter;
        import rts.entities.Department;
        import rts.entities.controller.DepartmentManager;
        import com.ibm.jpa.web.TypeCoercionUtility;

        public class DepartmentConverter implements Converter {

            public Object getAsObject(FacesContext facesContext, UIComponent arg1, String entityId) {
                DepartmentManager departmentManager = (DepartmentManager) facesContext
                        .getApplication().createValueBinding("#{DepartmentManager}")
                        .getValue(facesContext);
                int deptno = (Integer) TypeCoercionUtility.coerceType("int", entityId);
                Department result = departmentManager.findDepartmentByDeptno(deptno);
                return result;
            }

            public String getAsString(FacesContext arg0, UIComponent arg1, Object object) {
                if (object instanceof Department) {
                    return "" + ((Department) object).getDeptno();
                } else {
                    throw new IllegalArgumentException("Invalid object type:"
                            + object.getClass().getName());
                }
            }
        }

    Method for the Add button:

        public String createEmployeeAction() {
            EmployeeManager employeeManager = (EmployeeManager) getManagedBean("EmployeeManager");
            try {
                employeeManager.createEmployee(employee);
            } catch (Exception e) {
                logException(e);
            }
            return "";
        }

    faces-config.xml:

        <converter>
            <converter-for-class>rts.entities.Department</converter-for-class>
            <converter-class>rts.entities.converter.DepartmentConverter</converter-class>
        </converter>

    Stack trace:

        javax.faces.FacesException: java.lang.NullPointerException
            at org.apache.myfaces.shared_impl.context.ExceptionHandlerImpl.wrap(ExceptionHandlerImpl.java:241)
            at org.apache.myfaces.shared_impl.context.ExceptionHandlerImpl.handle(ExceptionHandlerImpl.java:156)
            at org.apache.myfaces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:258)
            at javax.faces.webapp.FacesServlet.service(FacesServlet.java:191)
            at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1147)
            at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:722)
            at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:449)
            at com.ibm.ws.webcontainer.servlet.ServletWrapperImpl.handleRequest(ServletWrapperImpl.java:178)
            at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1020)
            at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:87)
            at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:886)
            at com.ibm.ws.webcontainer.WSWebContainer.handleRequest(WSWebContainer.java:1655)
            at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:195)
            at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:452)
            at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewRequest(HttpInboundLink.java:511)
            at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.processRequest(HttpInboundLink.java:305)
            at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.ready(HttpInboundLink.java:276)
            at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:214)
            at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:113)
            at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:165)
            at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217)
            at com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161)
            at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:138)
            at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:204)
            at com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:775)
            at com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:905)
            at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1650)
        Caused by: java.lang.NullPointerException
            at rts.entities.converter.DepartmentConverter.getAsString(DepartmentConverter.java:29)
            at org.apache.myfaces.shared_impl.renderkit.RendererUtils.getConvertedStringValue(RendererUtils.java:656)
            at org.apache.myfaces.shared_impl.renderkit.html.HtmlRendererUtils.getSubmittedOrSelectedValuesAsSet(HtmlRendererUtils.java:444)
            at org.apache.myfaces.shared_impl.renderkit.html.HtmlRendererUtils.internalRenderSelect(HtmlRendererUtils.java:421)
            at org.apache.myfaces.shared_impl.renderkit.html.HtmlRendererUtils.renderMenu(HtmlRendererUtils.java:359)
            at org.apache.myfaces.shared_impl.renderkit.html.HtmlMenuRendererBase.encodeEnd(HtmlMenuRendererBase.java:76)
            at javax.faces.component.UIComponentBase.encodeEnd(UIComponentBase.java:519)
            at javax.faces.component.UIComponent.encodeAll(UIComponent.java:626)
            at javax.faces.component.UIComponent.encodeAll(UIComponent.java:622)
            at javax.faces.component.UIComponent.encodeAll(UIComponent.java:622)
            at javax.faces.component.UIComponent.encodeAll(UIComponent.java:622)
            at org.apache.myfaces.view.facelets.FaceletViewDeclarationLanguage.renderView(FaceletViewDeclarationLanguage.java:1320)
            at org.apache.myfaces.application.ViewHandlerImpl.renderView(ViewHandlerImpl.java:263)
            at org.apache.myfaces.lifecycle.RenderResponseExecutor.execute(RenderResponseExecutor.java:85)
            at org.apache.myfaces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:239)
            ... 24 more
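    Worth noting (our reading of the posted code, not part of the question): when nothing is selected yet, MyFaces passes null into getAsString; the instanceof test then fails, and the else branch builds its exception message from object.getClass(), which itself throws the NullPointerException seen at DepartmentConverter.java:29. A null-tolerant variant of the method, for illustration:

        public String getAsString(FacesContext context, UIComponent component, Object object) {
            // A null value is legal here (no department selected on a new
            // Employee); returning "" avoids dereferencing null below.
            if (object == null || "".equals(object)) {
                return "";
            }
            if (object instanceof Department) {
                return String.valueOf(((Department) object).getDeptno());
            }
            // Safe even for unexpected types: object is known non-null here.
            throw new IllegalArgumentException("Invalid object type: "
                    + object.getClass().getName());
        }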

    Read the article

  • IBM MQ Message Listener

    - by x1a0
    Hi, does anyone know how to create a message listener using IBM MQ? I know how to do it using the JMS spec, but I am not sure how to do it for IBM MQ. Any links or pointers are greatly appreciated.
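    IBM MQ ships JMS implementation classes that can be constructed directly, so a listener ends up being plain JMS once the MQ-specific factory is configured. A sketch assuming the classic com.ibm.mq.jms classes, with placeholder connection details:

        import javax.jms.*;
        import com.ibm.mq.jms.JMSC;
        import com.ibm.mq.jms.MQQueueConnectionFactory;

        public class MqListener {
            public static void main(String[] args) throws Exception {
                MQQueueConnectionFactory factory = new MQQueueConnectionFactory();
                factory.setHostName("mqhost");                  // placeholders
                factory.setPort(1414);
                factory.setQueueManager("QM1");
                factory.setChannel("SYSTEM.DEF.SVRCONN");
                factory.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);

                QueueConnection conn = factory.createQueueConnection();
                QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                QueueReceiver receiver = session.createReceiver(session.createQueue("APP.IN"));

                // Asynchronous delivery: MQ pushes each message to onMessage.
                receiver.setMessageListener(new MessageListener() {
                    public void onMessage(Message m) {
                        try {
                            if (m instanceof TextMessage) {
                                System.out.println(((TextMessage) m).getText());
                            }
                        } catch (JMSException e) {
                            e.printStackTrace();
                        }
                    }
                });

                conn.start();       // no messages are delivered until start()
                System.in.read();   // keep the process alive while listening
            }
        }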

    Read the article

  • Restart Fibre Channel controller after blade bootup on IBM HS BladeCenter

    - by Spence
    I have a remote system that needs to resume operation on startup. If the system is simply powered on, the blades boot before the SAN is online, and then the only thing you can do is restart them. Is it possible to restart the Fibre Channel controller? That way I could have the system restart the controller after boot, connect to the SAN, and then restart all servers requiring SAN information. Please note that I'm not a sysadmin, just shooting for ideas to get a clean startup to work; apologies if my terminology is wrong.

    Read the article

  • SPARC T4-4 Beats 8-CPU IBM POWER7 on TPC-H @3000GB Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered a world-record TPC-H @3000GB benchmark result for systems with four processors. This result beats eight-processor results from IBM (POWER7) and HP (x86). The SPARC T4-4 server also delivered better performance per core than these eight-processor systems from IBM and HP. Comparisons below are system-to-system comparisons, highlighting Oracle's complete software and hardware solution. This database world-record result used Oracle's Sun Storage 2540-M2 arrays (rotating disk) connected to a SPARC T4-4 server running Oracle Solaris 11 and Oracle Database 11g Release 2, demonstrating the power of Oracle's integrated hardware and software solution.

    - The SPARC T4-4 server based configuration achieved a TPC-H scale factor 3000 world record for four-processor systems of 205,792 QphH@3000GB with price/performance of $4.10/QphH@3000GB.
    - The SPARC T4-4 server with four SPARC T4 processors (total of 32 cores) is 7% faster than the IBM Power 780 server with eight POWER7 processors (total of 32 cores) on the TPC-H @3000GB benchmark.
    - The SPARC T4-4 server is 36% better in price/performance compared to the IBM Power 780 server on the TPC-H @3000GB benchmark.
    - The SPARC T4-4 server is 29% faster than the IBM Power 780 for data loading.
    - The SPARC T4-4 server is up to 3.4 times faster than the IBM Power 780 server for the Refresh Function.
    - The SPARC T4-4 server with four SPARC T4 processors is 27% faster than the HP ProLiant DL980 G7 server with eight x86 processors on the TPC-H @3000GB benchmark.
    - The SPARC T4-4 server is 52% faster than the HP ProLiant DL980 G7 server for data loading.
    - The SPARC T4-4 server is up to 3.2 times faster than the HP ProLiant DL980 G7 for the Refresh Function.
    - The SPARC T4-4 server achieved a peak IO rate from the Oracle database of 17 GB/sec. This rate was independent of the storage used, as demonstrated by the TPC-H @3000GB benchmark, which used twelve Sun Storage 2540-M2 arrays (rotating disk), and the TPC-H @1000GB benchmark, which used four Sun Storage F5100 Flash Array devices (flash storage). [*]
    - The SPARC T4-4 server showed linear scaling from TPC-H @1000GB to TPC-H @3000GB. This demonstrates that the SPARC T4-4 server can handle the increasingly larger databases required of DSS systems. [*]
    - The SPARC T4-4 server benchmark results demonstrate a complete solution for building Decision Support Systems, including data loading, business questions and refreshing data. Each phase usually has a time constraint, and the SPARC T4-4 server shows superior performance during each phase. [*]

    [*] The TPC believes that comparisons of results published with different scale factors are misleading and discourages such comparisons.

    Performance Landscape

    The table lists the leading TPC-H @3000GB results for non-clustered systems.

        TPC-H @3000GB, Non-Clustered Systems

        System                   Processor                  P/C/T - Memory        Composite (QphH)  $/perf ($/QphH)  Power (QppH)  Throughput (QthH)  Database         Available
        SPARC Enterprise M9000   3.0 GHz SPARC64 VII+       64/256/256 - 1024 GB  386,478.3         $18.19           316,835.8     471,428.6          Oracle 11g R2    09/22/11
        SPARC T4-4               3.0 GHz SPARC T4           4/32/256 - 1024 GB    205,792.0         $4.10            190,325.1     222,515.9          Oracle 11g R2    05/31/12
        SPARC Enterprise M9000   2.88 GHz SPARC64 VII       32/128/256 - 512 GB   198,907.5         $15.27           182,350.7     216,967.7          Oracle 11g R2    12/09/10
        IBM Power 780            4.1 GHz POWER7             8/32/128 - 1024 GB    192,001.1         $6.37            210,368.4     175,237.4          Sybase 15.4      11/30/11
        HP ProLiant DL980 G7     2.27 GHz Intel Xeon X7560  8/64/128 - 512 GB     162,601.7         $2.68            185,297.7     142,685.6          SQL Server 2008  10/13/10

        P/C/T = Processors, Cores, Threads
        QphH = the Composite Metric (bigger is better)
        $/QphH = the Price/Performance metric in USD (smaller is better)
        QppH = the Power Numerical Quantity
        QthH = the Throughput Numerical Quantity

    The following table lists data load times and refresh function times during the power run.

        TPC-H @3000GB, Non-Clustered Systems: Database Load & Database Refresh

        System                 Processor                  Data Loading (h:m:s)  T4 Advan  RF1 (sec)  T4 Advan  RF2 (sec)  T4 Advan
        SPARC T4-4             3.0 GHz SPARC T4           04:08:29              1.0x      67.1       1.0x      39.5       1.0x
        IBM Power 780          4.1 GHz POWER7             05:51:50              1.5x      147.3      2.2x      133.2      3.4x
        HP ProLiant DL980 G7   2.27 GHz Intel Xeon X7560  08:35:17              2.1x      173.0      2.6x      126.3      3.2x

        Data Loading = database load time
        RF1 = power test first refresh transaction
        RF2 = power test second refresh transaction
        T4 Advan = the ratio of the system's time to the T4's time

    Complete benchmark results can be found at the TPC benchmark website, http://www.tpc.org.

    Configuration Summary and Results

    Hardware configuration: SPARC T4-4 server with 4 x SPARC T4 3.0 GHz processors (total of 32 cores, 128 threads), 1024 GB memory, and 8 x internal SAS (8 x 300 GB) disk drives. External storage: 12 x Sun Storage 2540-M2 arrays, each with 12 x 15K RPM 300 GB drives, 2 controllers and 2 GB cache.

    Software configuration: Oracle Solaris 11 11/11, Oracle Database 11g Release 2 Enterprise Edition.

    Audited results: Database Size: 3000 GB (Scale Factor 3000); TPC-H Composite: 205,792.0 QphH@3000GB; Price/performance: $4.10/QphH@3000GB; Available: 05/31/2012; Total 3-year Cost: $843,656; TPC-H Power: 190,325.1; TPC-H Throughput: 222,515.9; Database Load Time: 4:08:29.

    Benchmark Description

    The TPC-H benchmark is a performance benchmark established by the Transaction Processing Council (TPC) to demonstrate Data Warehousing/Decision Support Systems (DSS). TPC-H measurements are produced for customers to evaluate the performance of various DSS systems. These queries and updates are executed against a standard database under controlled conditions. Performance projections and comparisons between different TPC-H database sizes (100GB, 300GB, 1000GB, 3000GB, 10000GB, 30000GB and 100000GB) are not allowed by the TPC. TPC-H is a data warehousing-oriented, non-industry-specific benchmark that consists of a large number of complex queries typical of decision support applications. It also includes some insert and delete activity that is intended to simulate loading and purging data from a warehouse. TPC-H measures the combined performance of a particular database manager on a specific computer system. The main performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@SF, where SF is the number of GB of raw data, referred to as the scale factor). QphH@SF is intended to summarize the ability of the system to process queries in both single and multiple user modes. The benchmark requires reporting of price/performance, which is the ratio of the total HW/SW cost plus 3 years maintenance to the QphH. A secondary metric is the storage efficiency, which is the ratio of total configured disk space in GB to the scale factor.

    Key Points and Best Practices

    - Twelve Sun Storage 2540-M2 arrays were used for the benchmark. Each Sun Storage 2540-M2 array contains 12 15K RPM drives and is connected to a single dual-port 8Gb FC HBA using 2 ports. Each array showed 1.5 GB/sec for sequential read operations and showed linear scaling, achieving 18 GB/sec with twelve arrays. These were stand-alone IO tests.
    - The peak IO rate measured from the Oracle database was 17 GB/sec.
    - Oracle Solaris 11 11/11 required very little system tuning.
    - Some vendors try to make the point that storage ratios are of customer concern. However, storage ratio size has more to do with disk layout and the increasing capacities of disks, so this is not an important metric on which to compare systems.
    - The SPARC T4-4 server and Oracle Solaris efficiently managed the system load of over one thousand Oracle Database parallel processes.
    - Six Sun Storage 2540-M2 arrays were mirrored to another six Sun Storage 2540-M2 arrays, on which all of the Oracle database files were placed. IO performance was high and balanced across all the arrays.
    - The TPC-H Refresh Function (RF) simulates the periodic refresh portion of a data warehouse by adding new sales data and deleting old sales data. Parallel DML (parallel insert and delete in this case) and database log performance are key for this function, and the SPARC T4-4 server outperformed both the IBM POWER7 server and the HP ProLiant DL980 G7 server (see the RF columns above).

    See Also

    - Transaction Processing Performance Council (TPC) Home Page
    - Ideas International Benchmark Page
    - SPARC T4-4 Server (oracle.com, OTN)
    - Oracle Solaris (oracle.com, OTN)
    - Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)
    - Sun Storage 2540-M2 Array (oracle.com, OTN)

    Disclosure Statement

    TPC-H, QphH and $/QphH are trademarks of the Transaction Processing Performance Council (TPC). For more information, see www.tpc.org. SPARC T4-4: 205,792.0 QphH@3000GB, $4.10/QphH@3000GB, available 5/31/12, 4 processors, 32 cores, 256 threads. IBM Power 780: 192,001.1 QphH@3000GB, $6.37/QphH@3000GB, available 11/30/11, 8 processors, 32 cores, 128 threads. HP ProLiant DL980 G7: 162,601.7 QphH@3000GB, $2.68/QphH@3000GB, available 10/13/10, 8 processors, 64 cores, 128 threads.
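    One detail worth making explicit (the arithmetic is ours; the definition comes from the TPC-H specification): the composite metric is the geometric mean of the power and throughput numbers, which the table's figures confirm for the SPARC T4-4 result:

        QphH@3000GB = sqrt(QppH x QthH) = sqrt(190,325.1 x 222,515.9) ~ 205,792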

    Read the article

  • Communicating via Command Mode with IBM HS22 IMM via AMM

    - by MikeyB
    On previous model blades that contained a BMC, I was able to communicate from our external management station via pass-through commands to the BMC to do things such as power blades on/off, set VPD parameters, reboot the BMC, etc. Now on the HS22, a number of things happen differently. For example, we can no longer use the same pass-through commands to write VPD information pages and have them persist across reboots of the IMM; it looks as though those VPD pages are populated from information contained in the IMM. How do we use the Advanced Settings Utility from an external host to communicate with HS22 IMMs? Alternatively, what TCP Command Mode commands do we need to send to the AMM to communicate with the IMM? For our purposes, we specifically cannot communicate with the IMM from the blade itself. Specific example: when I send a pass-through IPMI command via the AMM to the blade BMC to write information (such as MTM and serial number) into VPD page 0x10, it persists on blades with a BMC (HS21, for example). I can send the same IPMI command to write data to the VPD page on the HS22, but it does not persist across reboots of the IMM. What IPMI commands do I need to send to the IMM? What IPMI commands is asu sending when it sets the MTM and serial?

    Read the article

  • IBM Tivoli Network Manager IP Edition - Job does not run

    - by Thorsten Niehues
    Since our network discovery takes too long, I tried to split the biggest job into two parts. The two parts use the same Perl script but have different scopes. I copied a job (agent) by doing the following: copied the .agnt file and copied the associated Perl script. The problem is that one or the other job (it changes randomly) does not run, and the Disco process eventually fails. In the log of the job that does not run I see the following error message: Wed Jul 18 08:48:54 2012 Warning: Failed to send on transport layer found in file CRivObjSockClient.cc at line 1293 - Client My_MacTable_Cis is not connected to service Helper. How do I fix this problem?

    Read the article

  • Upgrading memory on an IBM Power 710 Express (8231-E2B)

    - by cairnz
    We have a Power 710 Express server that was loaded with 4x4 GB memory on a single riser card. I replaced those four DIMMs with 4x8 GB, put in another riser card, and loaded it with 4x8 GB more, for a total of 64 GB of memory. The firmware is AL730_078. When I power it on, the service processor boots up and I can access the ASMI. From there I can look at "Memory Serial Presence Data" and see that the system in some way detects 8x8 GB. However, when I look at Hardware Deconfiguration, specifically Memory Deconfiguration, it is still listed with the old values, 16384 MB, and claims there are 4x4 chips in the C17 riser. How do I get the server to properly recognize the amount of memory installed? I get an FSPSP04 and a B181B50F progress code on booting because (I think) it hasn't been told the memory has changed. It then does not proceed to booting the operating system (VIOS) when turned on. Are there any steps I have overlooked here? Can I run some commands, either on the service processor or otherwise, to tell the system to configure the proper amount of memory? PS: This is a stand-alone server, not configured with an HMC or SDMC.

    Read the article

  • IBM x260m won't boot from PCI RAID card

    - by syrenity
    Hi. We ghosted our old Windows 2003 drive to a new 1 TB one and connected it through a new PCI SATA card. The server fails to boot with error 1962: "Boot sector not found." When we connect the drive directly to the motherboard, it boots fine, so the problem is probably in getting the BIOS to boot from the SATA card. Does anyone know how to do this? Thanks.

    Read the article

  • IBM BladeCenter H Bootup Sequence Control

    - by Spence
    I have a BladeCenter with blades, network and a SAN in a single rack. What I'd like to do is control the startup of the BladeCenter so that when the rack is powered on, the blades delay their bootup until the SAN has come up correctly. Is this even possible? We have an AMM in the chassis, if that helps. Basically we are looking into the reboot that occurs after power is restored to a UPS. I sincerely apologise for the noobness of this question, but I am a software guy trying to help :)

    Read the article

  • IBM x260m won't boot from PCI SATA card

    - by syrenity
    Hi. We ghosted our old Windows 2003 drive to a new 1 TB one and connected it through a new PCI SATA card. The x260m server fails to boot with error 1962: "Boot sector not found." When we connect the drive directly to the motherboard, it boots fine, so the problem is probably in getting the BIOS to boot from the SATA card. Has anyone encountered and solved this issue? EDIT: We tried disabling the on-board SATA controller (HostRAID/SAS?), but there is no such option; only enable, enhanced and compatible. We also tried changing the boot device priority to all possible choices. No luck. Thanks in advance!

    Read the article

  • IBM BladeCenter S: Disk Configuration

    - by gravyface
    We have just the one storage bay right now (SAS 15K 600 GB x 6) and have configured one storage pool in RAID 10 with four disks (and two global spares). For each blade, I've created a volume and mapped it accordingly: Blade #1: 400 GB; Blade #2: 200 GB; Blade #3: 100 GB; Blade #4: 100 GB. When I boot up Blade #1, enter the UEFI setup (F1), and go to Adapters and UEFI Drivers, then the LSI Logic Fusion MPT SAS Driver Utility, I see four disks: two are the on-board 73 GB drives, the other two are 200 GB each, and I assume I'm being presented with two logical disks from the volume I created and mapped to this blade. I was a bit surprised by this: I figured I would be presented with one logical drive per volume, not two. I'm assuming I can just configure whatever RAID level I wish that supports two disks, but I'm not really sure what the benefits/trade-offs are here. Should I go with RAID 10 on top of RAID 10? RAID 0? Software RAID 0/1/10? Does it even matter? If it is "normal" to see two disks, then I'll likely just do some benchmarking and see if changing the RAID levels makes a difference (my guess is no); if this is not normal, well, please let me know. :)

    Read the article
