Search Results

Search found 3994 results on 160 pages for 'dual wan'.


  • Was Visual Studio 2008 or 2010 written to use multiple cores?

    - by Erx_VB.NExT.Coder
    Basically, I want to know whether the Visual Studio IDE and/or compiler in 2010 was written to make use of a multi-core environment (I understand we can target multi-core environments in '08 and '10, but that is not my question). I am trying to decide between a higher-clocked dual core and a lower-clocked quad core, because I want to figure out which processor will give me the absolute best possible experience with Visual Studio 2010 (IDE and background compiler). If the most important work (the background compiler and other IDE tasks) runs on one core, then that core will max out sooner on a quad core, especially if the background compiler is the heaviest task. I would imagine this work is difficult to separate into more than one process, so even if VS uses multiple cores, you might still be better off going for a higher-clocked CPU if the majority of the processing is still bound to one core (i.e., the most significant part of the VS environment). I am a VB programmer; they've made great performance improvements in Beta 2, congrats, but I would love to be able to use VS seamlessly... Anyone have any ideas? Thanks, erx

    Read the article

  • Which Java Service or Daemon framework would you recommend?

    - by blwy10
    I have encountered many different ways to turn a Java program into a Windows Service or a *nix daemon, such as Java Service Wrapper, Apache Commons Daemon, and so on. Barring licensing concerns (such as JSW's GPL or paid dual-license) and more advanced features, which one would you recommend? All I intend to do is convert a simple Java program into a service; I don't need anything fancy, just something that runs as a service or a daemon so I can start and stop it in the service manager, or have it run for the lifetime of my *nix uptime. EDIT: I've decided to make this community wiki. I didn't start this question intending to solve a problem I actually had; I was just doing some reading and research and chanced upon the topic, so I was looking for recommendations and the like. Sorry for not doing this sooner. I didn't know what community wiki was for when I first started, and I completely forgot about this question until now. Many thanks for the answers!
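    For a sense of how little code any of these require, here is a hedged sketch of the Apache Commons Daemon route: you implement its Daemon lifecycle interface and let jsvc (on *nix) or procrun (on Windows) drive it. The class name and the work inside the thread are made up for illustration, not from the question:

        import org.apache.commons.daemon.Daemon;
        import org.apache.commons.daemon.DaemonContext;

        public class MyService implements Daemon {
            private Thread worker;
            private volatile boolean running;

            @Override
            public void init(DaemonContext context) {
                // Parse context.getArguments(), load config, etc.
                worker = new Thread(() -> {
                    while (running) {
                        // ... the simple program's actual work goes here ...
                        try { Thread.sleep(1000); } catch (InterruptedException e) { return; }
                    }
                });
            }

            @Override
            public void start() { running = true; worker.start(); }

            @Override
            public void stop() throws InterruptedException {
                running = false;
                worker.interrupt();
                worker.join(5000); // give the worker a moment to finish
            }

            @Override
            public void destroy() { /* release any remaining resources */ }
        }

    The other wrappers (JSW, YAJSW, etc.) are conceptually similar: a thin lifecycle shim around your existing main logic.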

    Read the article

  • CPU overheating because of Delphi IDE

    - by Altar
    I am using Delphi 7, but I have trialed the Delphi 2005 - 2010 versions. In all these newer versions, my CPU utilization is 50% (one core at 100%, the other "relaxed") whenever the Delphi IDE is visible on screen. It doesn't happen when the IDE is minimized. My computer is overheating because of this. Any hints as to why this happens? It looks like if I want to upgrade to Delphi 2010, I need to upgrade my cooling system first. And I am a bit lazy about that, especially since I want to replace this computer and buy a new one (in the next 6 months) - probably I will have to buy a Win 7 license too. Edit: My CPU is an AMD Dual Core 4600+, with 4GB RAM. By overheating I mean that the temperature of the HDDs is rising because of the CPU. Edit: Coming to a possible solution - I just deleted the whole HKCU/CodeGear key and let Delphi start as "new". Guess what? The CPU utilization is 0%. I will investigate more.

    Read the article

  • Function to extract data into INSERT INTO statements for a table

    - by user269484
    Hi... I'm using this function to extract the data, but I'm unable to extract the LONG datatype. Can anyone help me?

        create or replace Function ExtractData(v_table_name varchar2) return varchar2 As
          b_found boolean:=false;
          v_tempa varchar2(8000);
          v_tempb varchar2(8000);
          v_tempc varchar2(255);
        begin
          for tab_rec in (select table_name from user_tables where table_name=upper(v_table_name)) loop
            b_found:=true;
            v_tempa:='select ''insert into '||tab_rec.table_name||' (';
            for col_rec in (select * from user_tab_columns
                            where table_name=tab_rec.table_name
                            order by column_id) loop
              if col_rec.column_id=1 then
                v_tempa:=v_tempa||'''||chr(10)||''';
              else
                v_tempa:=v_tempa||',''||chr(10)||''';
                v_tempb:=v_tempb||',''||chr(10)||''';
              end if;
              v_tempa:=v_tempa||col_rec.column_name;
              if instr(col_rec.data_type,'CHAR') > 0 then
                v_tempc:='''''''''||'||col_rec.column_name||'||''''''''';
              elsif instr(col_rec.data_type,'DATE') > 0 then
                v_tempc:='''to_date(''''''||to_char('||col_rec.column_name||',''mm/dd/yyyy hh24:mi'')||'''''',''''mm/dd/yyyy hh24:mi'''')''';
              else
                v_tempc:=col_rec.column_name;
              end if;
              v_tempb:=v_tempb||'''||decode('||col_rec.column_name||',Null,''Null'','||v_tempc||')||''';
            end loop;
            v_tempa:=v_tempa||') values ('||v_tempb||');'' from '||tab_rec.table_name||';';
          end loop;
          if Not b_found then
            v_tempa:='-- Table '||v_table_name||' not found';
          else
            v_tempa:=v_tempa||chr(10)||'select ''-- commit;'' from dual;';
          end if;
          return v_tempa;
        end;
        /

    Read the article

  • Optimize Duplicate Detection

    - by Dave Jarvis
    Background: This is an optimization problem. Oracle Forms XML files have elements such as:

        <Trigger TriggerName="name" TriggerText="SELECT * FROM DUAL" ... />

    where the TriggerText is arbitrary SQL code. Each SQL statement has been extracted into uniquely named files such as:

        sql/module=DIAL_ACCESS+trigger=KEY-LISTVAL+filename=d_access.fmb.sql
        sql/module=REP_PAT_SEEN+trigger=KEY-LISTVAL+filename=rep_pat_seen.fmb.sql

    I wrote a script to generate a list of exact duplicates using a brute force approach.

    Problem: There are 37,497 files to compare against each other; it takes 8 minutes to compare one file against all the others. Logically, if A = B and A = C, then there is no need to check if B = C. So the problem is: how do you eliminate the redundant comparisons? The script will complete in approximately 208 days.

    Script Source Code: The comparison script is as follows:

        #!/bin/bash
        echo Loading directory ...
        for i in $(find sql/ -type f -name \*.sql); do
          echo Comparing $i ...
          for j in $(find sql/ -type f -name \*.sql); do
            if [ "$i" = "$j" ]; then continue; fi
            # Case insensitive compare, ignore spaces
            diff -IEbwBaq $i $j > /dev/null
            # 0 = no difference (i.e., duplicate code)
            if [ $? = 0 ]; then
              echo $i :: $j >> clones.txt
            fi
          done
        done

    Question: How would you optimize the script so that checking for cloned code is a few orders of magnitude faster?

    System Constraints: Using a quad-core CPU with an SSD; trying to avoid using cloud services if possible. The system is a Windows-based machine with Cygwin installed -- algorithms or solutions in other languages are welcome. Thank you!
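    One answer-shaped note: pairwise diffing is O(n²), but exact-duplicate grouping only needs a single pass. Normalize each file roughly the way the diff flags do (case, runs of whitespace), fingerprint it, and identical files land in the same hash bucket. A minimal sketch of that idea in Java - the sql/ path comes from the question; everything else is illustrative, not the poster's code:

        import java.nio.file.*;
        import java.security.MessageDigest;
        import java.util.*;
        import java.util.stream.*;

        public class CloneFinder {
            public static void main(String[] args) throws Exception {
                List<Path> files;
                try (Stream<Path> s = Files.walk(Paths.get("sql"))) {
                    files = s.filter(Files::isRegularFile).collect(Collectors.toList());
                }
                MessageDigest sha = MessageDigest.getInstance("SHA-256");
                Map<String, List<Path>> buckets = new HashMap<>();
                for (Path p : files) {
                    // Roughly what diff -iEbw ignores: letter case and whitespace runs
                    String canon = new String(Files.readAllBytes(p))
                            .toLowerCase().replaceAll("\\s+", " ").trim();
                    String key = Base64.getEncoder().encodeToString(sha.digest(canon.getBytes()));
                    buckets.computeIfAbsent(key, k -> new ArrayList<>()).add(p);
                }
                // Every bucket holding two or more files is one clone group
                buckets.values().stream()
                       .filter(group -> group.size() > 1)
                       .forEach(group -> System.out.println(group));
            }
        }

    That turns roughly 37,000 x 37,000 diff invocations into 37,000 reads and hashes, which should finish in minutes rather than days.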

    Read the article

  • Setting up a NAS with Citrix XenServer

    - by JasonBrown
    Just a quick query for anyone who's worked with XenServer. I want to set up a NAS at home, but with virtualization (I've looked into VMware Server and KVM; I quite like KVM!), and I was told about XenServer 5.5. I have commodity hardware (ASUS board, dual-core 2.66GHz CPU with 8GB RAM), and I need to set up a file server to house about 2-3TB worth of data (big chunky video - not porn!). It needs to run Linux (preferably CentOS) but also run Windows virtualised for testing. I was thinking of going the XenServer route; however, I want to be able to offer a VM direct access to the 2-3TB of HDDs (5 HDD drives) so it can do its thing (maybe using FreeNAS). Would this be possible with XenServer? Or will I have to do more work - and another box - to offer this? My goals are to use FreeNAS (ZFS!) for the file server, CentOS for SVN and the other bits we need to use (LAMP stack), and Windows for our Win32 testing, all on one box. I see the iSCSI target bits and get scared.

    Read the article

  • Configuring WPA WiFi in Ubuntu 10.10

    - by sma
    Hello, I am trying to configure my wireless network on my laptop running Ubuntu 10.10 and am having a bit of difficulty. I am a complete Linux newb, but want to learn it, hence the reason I'm trying to set this up. Here's the vitals: It is a Gateway 600 YG2 laptop. It was previously running Windows XP, but I installed Ubuntu 10.10 in place of it (not a dual boot, I removed XP altogether). I have an old wireless card that I'm trying to resurrect. I haven't really used the card in a couple years, but it seems to still work, I just can't connect to my home's wireless network. The card is a Linksys WPC11 v2.5. When I plug it in, Ubuntu recognizes the network, but won't connect to it. My home network uses WPA encryption and the only connection type that Ubuntu's network manager is giving me is WEP and then it asks for a key -- I have no idea what that key should be. So, basically, I'm asking, is there a way I can instead connect through WPA? I've tried creating a new connection in network manager, but that won't work, it keeps falling back to the WEP connection and asking me for a key. I have tried to install the XP driver using ndiswrapper but I don't know if that's working or not. Is there a way to tell if: A) the card is working as it should B) the correct drivers are installed (again, I installed the XP one using ndiswrapper NET8180.INF, but I'm not sure what to do next) Any help would be appreciated. Thank you.

    Read the article

  • Pass command line arguments to JUnit test case being run programmatically

    - by __nv__
    I am attempting to run a JUnit test from a Java class with:

        JUnitCore core = new JUnitCore();
        core.addListener(new RunListener());
        core.run(classToRun);

    The problem is that my JUnit test requires a database connection that is currently hardcoded in the JUnit test itself. What I am looking for is a way to run the JUnit test programmatically (above), but pass in a database connection that I create in the Java class that runs the test, rather than hardcoding it within the JUnit class. Basically something like:

        JUnitCore core = new JUnitCore();
        core.addListener(new RunListener());
        core.addParameters(java.sql.Connection); // wishful thinking - no such API
        core.run(classToRun);

    Then, within the classToRun:

        @Test
        public void Test1(Connection dbConnection) {
            Statement st = dbConnection.createStatement();
            ResultSet rs = st.executeQuery("select total from dual");
            rs.next();
            String myTotal = rs.getString("TOTAL");
            // btw my tests are selenium testcases :)
            selenium.isTextPresent(myTotal);
        }

    I know about @Parameters, but it doesn't seem applicable here, as it is more for running the same test case multiple times with differing values. I want all of my test cases to share a database connection that I pass in through a configuration file to my Java client, which then runs those test cases (also passed in through the configuration file). Is this possible? P.S. I understand this seems like an odd way of doing things.
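    Since JUnit will not hand arguments to @Test methods, the usual workaround when you control the runner (as here) is to park the connection somewhere the test class can reach - a public static field or setter - before calling core.run(...). A hedged sketch with hypothetical class names and JDBC URL, not an official JUnit facility:

        // Runner.java
        import java.sql.Connection;
        import java.sql.DriverManager;
        import org.junit.runner.JUnitCore;
        import org.junit.runner.notification.RunListener;

        public class Runner {
            public static void main(String[] args) throws Exception {
                // Built from values read out of the configuration file
                SeleniumDbTest.dbConnection = DriverManager.getConnection(
                        "jdbc:oracle:thin:@dbhost:1521:orcl", "user", "secret");
                JUnitCore core = new JUnitCore();
                core.addListener(new RunListener());
                core.run(SeleniumDbTest.class);
            }
        }

        // SeleniumDbTest.java (separate file)
        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import org.junit.Test;

        public class SeleniumDbTest {
            public static Connection dbConnection; // injected by the runner before run()

            @Test
            public void test1() throws Exception {
                Statement st = dbConnection.createStatement();
                ResultSet rs = st.executeQuery("select total from dual");
                rs.next();
                String myTotal = rs.getString("TOTAL");
                // selenium assertions would go here, as in the original test
            }
        }

    Because the same JVM runs both classes, every test sees the one shared connection, which matches the "all test cases share a database connection" requirement.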

    Read the article

  • Advanced Query with Join

    - by user1462589
    I'm trying to convert a product table that contains all the details of each product into separate tables in SQL. I've got everything done except for the duplicated descriptor details. The problem I'm having is that all the products have size/color/style/other values that many other products also contain. I want to keep only one row per size or color descriptor for all the items and reuse its ID for every product that has it, which I believe makes it a parent key referenced by the product ID as a foreign key. The only problem is that every descriptor would have multiple foreign keys assigned to it. So I was thinking, on the fly, to just skip figuring out a parent key for each descriptor and instead check whether that descriptor already exists; if it does, reuse its key for the descriptor.

    Data table:

        PI | Colo | Sz | OTHER
        1  | Blue | 5  | Vintage
        2  | Blue | 6  | Vintage
        3  | Blac | 5  | Simple
        4  | Blac | 6  | Simple

    Its destination table is this:

        DI | Description
        1  | Blue
        2  | Blac
        3  | 5
        4  | 6
        6  | Vintage
        7  | Simple

    (Something like: Select Data.Table, Unique.Data.Table.Colo, Unique.Data.Table.Sz, Unique.Data.Table.Other.)

    Then, the dual part of the question: after we create all the descriptors, how do we run a new query and assign the product IDs to the descriptors?

        PI | DI
        1  | 1
        1  | 3
        1  | 4
        2  | 1
        2  | 3
        2  | 4

    By figuring out how to do this, I should be able to duplicate the pattern for all 300+ columns in the product. Some of these fields are 60+ characters large, so it's going to save a ton of space. Do I use an array?
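    One way to read this: the descriptor table can be filled with a distinct union of the descriptor columns, and the product-descriptor links can then be rebuilt by joining each product row back on the description text, so no per-descriptor key bookkeeping is needed in the application. A minimal JDBC sketch under assumed names - Data, Descriptor, and ProductDescriptor are hypothetical stand-ins for the real tables:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class DescriptorSplit {
            public static void main(String[] args) throws SQLException {
                try (Connection c = DriverManager.getConnection(
                         "jdbc:mysql://localhost/shop", "user", "password");
                     Statement st = c.createStatement()) {
                    // Step 1: one row per distinct descriptor value across all
                    // descriptor columns (UNION de-duplicates automatically)
                    st.executeUpdate(
                        "INSERT INTO Descriptor (Description) " +
                        "SELECT Colo FROM Data " +
                        "UNION SELECT Sz FROM Data " +
                        "UNION SELECT Other FROM Data");
                    // Step 2: junction table - join each product back to the
                    // descriptor IDs by matching the description text
                    st.executeUpdate(
                        "INSERT INTO ProductDescriptor (PI, DI) " +
                        "SELECT d.PI, x.DI FROM Data d JOIN Descriptor x " +
                        "ON x.Description IN (d.Colo, d.Sz, d.Other)");
                }
            }
        }

    The same two statements repeat per descriptor column group, so scripting them over the 300+ columns is mechanical.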

    Read the article

  • LINQ Many to Many With In or Contains Clause (and a twist)

    - by Chris
    I have a many-to-many table structure called PropertyPets. It has a composite primary key consisting of a PropertyID (from a Property table) and one or more PetIDs (from a Pet table). Next, I have a search screen where people can select multiple pets from a jQuery multiple-select dropdown. Let's say somebody selects Dogs and Cats. Now, I want to be able to return all properties that contain BOTH dogs and cats in the many-to-many table, PropertyPets. I'm trying to do this with LINQ to SQL. I've looked at the Contains clause, but it doesn't seem to work for my requirement:

        var result = properties.Where(p => search.PetType.Contains(p.PropertyPets));

    Here, search.PetType is an int[] array of the IDs for Dog and Cat (which were selected in the multiple-select dropdown). The problem is, first, that Contains requires a string, not an IEnumerable of type PropertyPet. And second, I need to find the properties that have BOTH dogs and cats, not just the ones containing one or the other. Thank you for any pointers.
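    The requirement is universal quantification - keep a property only if every selected ID appears among its PropertyPets rows - which in LINQ is typically spelled with .All(...) over the selected IDs and .Any(...) over the join rows (something like search.PetType.All(id => p.PropertyPets.Any(pp => pp.PetID == id))), rather than Contains. Purely to illustrate the set-containment shape of that filter, here is a sketch in Java with made-up data; treat it as pseudocode for the C# query above:

        import java.util.*;
        import java.util.stream.*;

        public class PetFilter {
            // Stand-in for Property plus its PropertyPets join rows
            record Property(int id, Set<Integer> petIds) {}

            public static void main(String[] args) {
                List<Property> properties = List.of(
                        new Property(1, Set.of(1, 2)),   // has dogs and cats
                        new Property(2, Set.of(1)));     // dogs only
                Set<Integer> selected = Set.of(1, 2);    // IDs from the dropdown

                // Keep a property only if it has ALL of the selected pet types
                List<Property> result = properties.stream()
                        .filter(p -> p.petIds().containsAll(selected))
                        .collect(Collectors.toList());
                System.out.println(result);              // only property 1 survives
            }
        }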

    Read the article

  • What is SharePoint Out of the Box?

    - by Bil Simser
    It’s always fun in the blog-o-sphere, and SharePoint bloggers always keep the pot boiling. Bjorn Furuknap recently posted a blog entry titled Why Out-of-the-Box Makes No Sense in SharePoint, quickly followed up by a rebuttal by Marc Anderson on his blog. Okay, now that we have all the players and the stage, what's the big deal? Bjorn started his post saying that you don't use "out-of-the-box" (OOTB) SharePoint because it makes no sense. I have to disagree with his premise, because what he calls OOTB is basically installing SharePoint and admiring it, but not using it. In his post he claims that modifying, say, the OOTB contacts list by removing (or I suppose adding) a column now puts you in a situation where you're no longer using the OOTB functionality. Really?

    Side note. Dear Internet, please stop comparing building software to building houses. Or comparing software architecture to building architecture. Or comparing web sites to making dinner. Are you trying to dumb down something so the general masses understand it? Comparing a technical skill to a construction operation isn't the way to do this. Last time I checked, most people don't know how to build houses, and last time I checked, people reading technical SharePoint blogs are generally technical people who understand the terms you use. Putting metaphors around software development to make it easy to understand is detrimental to the goal. </rant>

    Okay, where were we? Right, adding columns to lists means you are no longer using the OOTB functionality. Yeah, I still don't get it. Another statement Bjorn makes is that using the OOTB functionality kills the flexibility SharePoint has in creating exactly what you want. IMHO this really flies in the absolute face of *where* SharePoint *really* shines. For the past year or so I've been leaning more and more towards OOTB solutions over custom development, for the simple reason that it's expensive to maintain systems and code and assets. SharePoint has enabled me to do this simply by providing the tools where I can give users what they need without cracking open Visual Studio. This might be because my day job is with a regulated company and there's more scrutiny of spending money on anything new, but frankly that should be the position of any responsible developer, architect, manager, or PM. Do you really want to throw money away because some developer tells you that you need a custom web part, when perhaps with some creative thinking or expectation-setting with customers you can meet the need with what you already have?

    The way I read Bjorn's terminology of "out-of-the-box" is: install the software and tell people to go to a website and admire the OOTB system, but don't change it! For those who know things like WordPress, DotNetNuke, SubText, Drupal, or any of those content management/blogging systems, it's akin to installing the software, setting up the "Hello World" blog post or page, then staring at it like it's useful. "Yes, we are using WordPress!" Then never adding a new post, creating a new category, or adding an About page. Perhaps I'm wrong in my interpretation.

    This leads us to: what is OOTB SharePoint? To many people I've talked to in the last few hours on twitter, email, etc., it is *not* just installing software, but actually using it as it was fit for purpose. What's the purpose of SharePoint then? It has many purposes, but with the OOTB templates Microsoft has given you the ability to collaborate on projects, author/share/publish documents, create pages, track items/contacts/tasks/etc. in a multi-user web-based interface, and so on. Microsoft has pretty clear definitions of the different levels of SharePoint we're talking about, and I think it's important for everyone to know what they are and what they mean.

    Personalization and Administration. To me, this is the OOTB experience. You install the product and then are able to do things like create new lists and sites, edit and personalize pages, create new views, etc. Basically, you use the platform services available to you with Windows SharePoint Services (or SharePoint Foundation in 2010) to your full advantage. No code, no special tools needed, and very little user training required. Could you take someone who has never done anything in a website or piece of software and unleash them onto a site? Probably not. However, I would argue that anyone who's configured the Outlook reading layout or applied styles to a Word document probably won't have too much difficulty in using SharePoint OUT OF THE BOX.

    Customization. Here's where things might get a bit murky, but to me this is where you start looking at HTML/ASPX page code through SharePoint Designer, using jQuery scripts and plugging them into web part pages via a Content Editor Web Part, and generally enhancing the site. The JavaScript debate might kick in here, claiming it's no different than C#, and frankly you can totally screw a site up with jQuery in a CEWP just as easily as you can with a C# delegate control deployed to the server file system. However (again, my blog, my opinion), the customization label comes in when I need to access the server (for example, creating a custom theme) or have some kind of net-new element I add to the system that wasn't there OOTB. It's not content (like a new list or site); it's code, and it does something functional.

    Development. Here's where the propeller hats come on and we're talking algorithms and unit tests and compilers, oh my. Software is deployed to the server, people are writing solutions after some kind of training (perhaps), there might be some specialized tools they use to craft and deploy the solutions, there's the possibility of exceptions being thrown, etc. There are a lot of definitions here, and just like customization it might get murky (do you let non-developers build solutions using development, i.e. jQuery/C#?). In my experience, it's much more cost-effective keeping solutions under the first two umbrellas than leaping into the third one. Arguably you could say that you can't build useful solutions without *some* kind of code (even just some simple jQuery), but I think you can get a *lot* of value just from the OOTB experience, and I don't think you're constraining your users that much.

    I'm not saying Marc or Bjorn are wrong. Like Obi-Wan stated, they're both correct "from a certain point of view". To me, SharePoint out of the box makes total sense and should not be dismissed. I just don't agree with the premise Bjorn is basing his statements on, but that's just my opinion, and his is different, and never the twain shall meet.

    Read the article

  • Integrating Coherence & Java EE 6 Applications using ActiveCache

    - by Ricardo Ferreira
    OK, so you are a developer starting a new Java EE 6 application using the most wonderful features of the Java EE platform, like Enterprise JavaBeans, JavaServer Faces, CDI, JPA, and other cool technologies. And your architecture needs to hold pieces of data in distributed caches to improve the application's performance, scalability, and reliability? If this is the scenario you are facing, maybe you should look closely at the solutions provided by Oracle WebLogic Server. Oracle has integrated WebLogic Server and its champion data caching technology, Oracle Coherence. This seamless integration between the two products provides a comprehensive environment in which to develop applications without the complexity of extra Java code to manage the cache as a dependency, since Oracle provides a DI ("Dependency Injection") mechanism for Coherence - the same DI mechanism available in standard Java EE applications. This feature is called ActiveCache. In this article, I will show you how to configure ActiveCache in WebLogic and in your Java EE application.

    Configuring WebLogic to manage Coherence

    Before you start changing your application to use Coherence, you need to configure your Coherence distributed cache. The good news is that you can manage all this stuff without writing a single line of XML or Java. The configuration can be done entirely in the WebLogic administration console. The first thing to do is set up a Coherence cluster. A Coherence cluster is a set of Coherence JVMs configured to form one single view of the cache. This means that you can add or remove members of the cluster without the client application (the application that produces or consumes data from the cache) knowing about the changes. This concept allows your solution to scale out without changing the application server JVMs: you can grow your application in the data grid layer alone. To start the configuration, you need to configure a machine that points to the server on which you want to execute the Coherence JVMs. WebLogic Server lets you do this very easily using the administration console. In this example, I will call the machine "coherence-server". Remember that in order for the machine concept to work, you need to ensure that the NodeManager is running on the target server that the machine points to. The NodeManager executable can be found in <WLS_HOME>/server/bin/startNodeManager.sh. The next thing to do is to configure the Coherence cluster. In the WebLogic administration console, go to Environment > Coherence Clusters and click "New". Call this Coherence cluster "my-coherence-cluster" and click Next. Specify a valid cluster address and port; the Coherence members will communicate with each other through this address and port. Our Coherence cluster is now configured. Now it is time to configure the Coherence members and add them to this cluster. In the WebLogic administration console, go to Environment > Coherence Servers and click "New". In the "Name" field, enter "coh-server-1". In the "Machine" field, associate this Coherence server with the machine "coherence-server". In the "Cluster" field, associate this Coherence server with the cluster named "my-coherence-cluster". Click "Finish". Start the Coherence server using the "Control" tab of the WebLogic administration console. This will instruct WebLogic to start a new Coherence JVM on the target machine, which should join the pre-defined Coherence cluster.
    Configuring your Java EE Application to Access Coherence

    Now let's move on to the fun part of the configuration: informing your Java EE application which Coherence cluster to join. Oracle has updated the WebLogic Server deployment descriptors, so you will not have to change your code or the container's deployment descriptors like application.xml, ejb-jar.xml, or web.xml. In this example, I will show you how to enable DI ("Dependency Injection") of a Coherence cache in a Servlet 3.0 component. In the WEB-INF/weblogic.xml deployment descriptor, put the following metadata:

        <?xml version="1.0" encoding="UTF-8"?>
        <wls:weblogic-web-app
            xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-web-app"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd
                http://xmlns.oracle.com/weblogic/weblogic-web-app
                http://xmlns.oracle.com/weblogic/weblogic-web-app/1.4/weblogic-web-app.xsd">
            <wls:context-root>myWebApp</wls:context-root>
            <wls:coherence-cluster-ref>
                <wls:coherence-cluster-name>my-coherence-cluster</wls:coherence-cluster-name>
            </wls:coherence-cluster-ref>
        </wls:weblogic-web-app>

    As you can see, using the "coherence-cluster-name" tag we are informing our Java EE application that it should join "my-coherence-cluster" when it loads in the web container. Without this information, the application will not be able to access the predefined Coherence cluster; it will form its own Coherence cluster without any members. So never forget to include it. Now put the coherence.jar and active-cache-1.0.jar dependencies in your WEB-INF/lib application classpath. You need to deploy these dependencies so ActiveCache can automatically take care of the Coherence cluster join phase. They can be found in the following locations:

        <WLS_HOME>/common/deployable-libraries/active-cache-1.0.jar
        <COHERENCE_HOME>/lib/coherence.jar

    Finally, you need to write the access code for the Coherence cache in your Servlet. In the following example, we have a Servlet 3.0 component that accesses a Coherence cache named "transactions" and prints to the browser output the content (the ammount property) of one specific transaction:

        package com.oracle.coherence.demo.activecache;

        import java.io.IOException;
        import javax.annotation.Resource;
        import javax.servlet.ServletException;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import com.tangosol.net.NamedCache;

        @WebServlet("/demo/specificTransaction")
        public class TransactionServletExample extends HttpServlet {

            @Resource(mappedName = "transactions")
            NamedCache transactions;

            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                int transId = Integer.parseInt(request.getParameter("transId"));
                Transaction transaction = (Transaction) transactions.get(transId);
                response.getWriter().println("<center>" + transaction.getAmmount() + "</center>");
            }
        }

    That's it! No more configuration is necessary, and you are all set to start putting data into, and getting data out of, Coherence. As you can see in the example code, the Coherence cache is treated as a normal dependency in the Java EE container. The magic happens behind the scenes, when ActiveCache lets your application join the defined Coherence cluster.
    The most interesting thing about this approach is that no matter which type of Coherence cache you are using (Distributed, Partitioned, Replicated, WAN-Remote), to the client application it is just a simple attribute member of type com.tangosol.net.NamedCache, and it's all managed by the Java EE container as a dependency. This means that if you inject the same dependency (the Coherence cache named "transactions") into another Java EE component (a JSF managed bean, a stateless EJB), the cache will be the same. Cool, isn't it? Thanks to CDI, we can extend the same support to components that aren't part of the Java EE standards, like simple POJOs. This means that you are not forced to use only Servlets, EJBs, or JSF in order to inject Coherence caches; you can take the same approach with regular POJOs created by you and managed by lightweight containers like Spring or Seam.
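    To make the sharing point concrete, here is a hedged sketch of the same cache injected into a stateless EJB. The Transaction type and the "transactions" cache name come from the article above; the bean itself is hypothetical:

        import javax.annotation.Resource;
        import javax.ejb.Stateless;
        import com.tangosol.net.NamedCache;

        @Stateless
        public class TransactionService {

            // Same mappedName as in the servlet, so the container hands
            // both components the very same Coherence cache instance
            @Resource(mappedName = "transactions")
            private NamedCache transactions;

            public void record(int transId, Transaction t) {
                transactions.put(transId, t); // visible to the servlet immediately
            }
        }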

    Read the article

  • krb5-multidev, libk5crypto3, libk5crypto3:i386 package dependency

    - by TDalton
    Using Ubuntu 12.04 on an Asus U43F. I can no longer update, install, or remove packages via the Software Center because of package dependency errors. sudo apt-get install -f reads the following:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following packages were automatically installed and are no longer required:
          libunity6 libqapt-runtime libboost-program-options1.46.1 akonadi-backend-mysql
          libqapt1 shared-desktop-ontologies libntrack0 ntrack-module-libnl-0 libntrack-qt4-1
        Use 'apt-get autoremove' to remove them.
        The following extra packages will be installed:
          krb5-multidev libk5crypto3:i386 libkrb5-dev
        Suggested packages:
          krb5-doc krb5-doc:i386 krb5-user:i386
        The following packages will be upgraded:
          krb5-multidev libk5crypto3:i386 libkrb5-dev
        3 upgraded, 0 newly installed, 0 to remove and 325 not upgraded.
        11 not fully installed or removed.
        Need to get 0 B/213 kB of archives.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        dpkg: error processing libk5crypto3 (--configure):
         libk5crypto3:amd64 1.10+dfsg~beta1-2ubuntu0.3 cannot be configured because libk5crypto3:i386 is in a different version (1.10+dfsg~beta1-2ubuntu0.1)
        dpkg: error processing libk5crypto3:i386 (--configure):
         libk5crypto3:i386 1.10+dfsg~beta1-2ubuntu0.1 cannot be configured because libk5crypto3:amd64 is in a different version (1.10+dfsg~beta1-2ubuntu0.3)
        dpkg: dependency problems prevent configuration of libkrb5-3:
         libkrb5-3 depends on libk5crypto3 (>= 1.9+dfsg~beta1); however:
          Package libk5crypto3 is not configured yet.
        No apport report written because MaxReports is reached already
        dpkg: error processing libkrb5-3 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libgssapi-krb5-2:
         libgssapi-krb5-2 depends on libk5crypto3 (>= 1.8+dfsg); however:
          Package libk5crypto3 is not configured yet.
         libgssapi-krb5-2 depends on libkrb5-3 (= 1.10+dfsg~beta1-2ubuntu0.3); however:
          Package libkrb5-3 is not configured yet.
        dpkg: error processing libgssapi-krb5-2 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libgssrpc4:
         libgssrpc4 depends on libgssapi-krb5-2 (>= 1.10+dfsg~); however:
          Package libgssapi-krb5-2 is not configured yet.
        dpkg: error processing libgssrpc4 (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of libkadm5srv-mit8:
         libkadm5srv-mit8 depends on libgssapi-krb5-2 (>= 1.6.dfsg.2); however:
          Package libgssapi-krb5-2 is not configured yet.
         libkadm5srv-mit8 depends on libgssrpc4 (>= 1.6.dfsg.2); however:
          Package libgssrpc4 is not configured yet.
         libkadm5srv-mit8 depends on libk5crypto3 (>= 1.6.dfsg.2); however:
          Package libk5crypto3 is not configured yet.
         libkadm5srv-mit8 depends on libkrb5-3 (>= 1.9+dfsg~beta1); however:
          Package libkrb5-3 is not configured yet.
        dpkg: error processing libkadm5srv-mit8 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libkadm5clnt-mit8:
         libkadm5clnt-mit8 depends on libgssapi-krb5-2 (>= 1.10+dfsg~); however:
          Package libgssapi-krb5-2 is not configured yet.
         libkadm5clnt-mit8 depends on libgssrpc4 (>= 1.6.dfsg.2); however:
          Package libgssrpc4 is not configured yet.
        No apport report written because MaxReports is reached already
         libkadm5clnt-mit8 depends on libk5crypto3 (>= 1.6.dfsg.2); however:
          Package libk5crypto3 is not configured yet.
         libkadm5clnt-mit8 depends on libkrb5-3 (>= 1.8+dfsg); however:
          Package libkrb5-3 is not configured yet.
        dpkg: error processing libkadm5clnt-mit8 (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of krb5-multidev:
         krb5-multidev depends on libkrb5-3 (= 1.10+dfsg~beta1-2ubuntu0.2); however:
          Version of libkrb5-3 on system is 1.10+dfsg~beta1-2ubuntu0.3.
         krb5-multidev depends on libk5crypto3 (= 1.10+dfsg~beta1-2ubuntu0.2); however:
          Version of libk5crypto3 on system is 1.10+dfsg~beta1-2ubuntu0.3.
         krb5-multidev depends on libgssapi-krb5-2 (= 1.10+dfsg~beta1-2ubuntu0.2); however:
          Version of libgssapi-krb5-2 on system is 1.10+dfsg~beta1-2ubuntu0.3.
         krb5-multidev depends on libgssrpc4 (= 1.10+dfsg~beta1-2ubuntu0.2); however:
          Version of libgssrpc4 on system is 1.10+dfsg~beta1-2ubuntu0.3.
         krb5-multidev depends on libkadm5srv-mit8 (= 1.10+dfsg~beta1-2ubuntu0.2); however:
          Version of libkadm5srv-mit8 on system is 1.10+dfsg~beta1-2ubuntu0.3.
         krb5-multidev depends on libkadm5clnt-mit8 (= 1.10+dfsg~beta1-2ubuntu0.2); however:
          Version of libkadm5clnt-mit8 on system is 1.10+dfsg~beta1-2ubuntu0.3.
        dpkg: error processing krb5-multidev (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of libkrb5-dev:
         libkrb5-dev depends on krb5-multidev (= 1.10+dfsg~beta1-2ubuntu0.2); however:
          Package krb5-multidev is not configured yet.
        dpkg: error processing libkrb5-dev (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of libkrb5-3:i386:
         libkrb5-3:i386 depends on libk5crypto3 (>= 1.9+dfsg~beta1); however:
          Package libk5crypto3:i386 is not configured yet.
        dpkg: error processing libkrb5-3:i386 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libgssapi-krb5-2:i386:
         libgssapi-krb5-2:i386 depends on libk5crypto3 (>= 1.8+dfsg); however:
          Package libk5crypto3:i386 is not configured yet.
         libgssapi-krb5-2:i386 depends on libkrb5-3 (= 1.10+dfsg~beta1-2ubuntu0.3); however:
          Package libkrb5-3:i386 is not configured yet.
        dpkg: error processing libgssapi-krb5-2:i386 (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         libk5crypto3
         libk5crypto3:i386
         libkrb5-3
         libgssapi-krb5-2
         libgssrpc4
         libkadm5srv-mit8
         libkadm5clnt-mit8
         krb5-multidev
         libkrb5-dev
         libkrb5-3:i386
         libgssapi-krb5-2:i386
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have tried to fix the broken dependencies via the Synaptic package manager, but it returns with an error:

        E: libk5crypto3: libk5crypto3:amd64 1.10+dfsg~beta1-2ubuntu0.3 cannot be configured because libk5crypto3
        E: libkrb5-3: dependency problems - leaving unconfigured
        E: libgssapi-krb5-2: dependency problems - leaving unconfigured
        E: libgssrpc4: dependency problems - leaving unconfigured
        E: libkadm5srv-mit8: dependency problems - leaving unconfigured
        E: libkadm5clnt-mit8: dependency problems - leaving unconfigured
        E: krb5-multidev: dependency problems - leaving unconfigured
        E: libkrb5-dev: dependency problems - leaving unconfigured

    I haven't gotten help from ubuntuforums.org on this issue. Please help, obi-wan

    Read the article

  • To SYNC or not to SYNC – Part 3

    - by AshishRay
    I can't believe it has been almost a year since my last blog post. I know, that's an absolute no-no in the blogosphere. And I know that "I have been busy" is not a good excuse. So - without trying to come up with an excuse - let me state this: my apologies for taking such a long time to write the next part. Without further ado, here goes. This is Part 3 of a multi-part blog article in which we are discussing various aspects of setting up Data Guard synchronous redo transport (SYNC). In Part 1 of this article, I debunked the myth that Data Guard SYNC is similar to a two-phase commit operation. In Part 2, I discussed the various ways that network latency may or may not impact a Data Guard SYNC configuration. In this article, I will talk in detail about why Data Guard SYNC is a good thing. I will also talk about the distance implications of setting up such a configuration.

    So, Why Good?

    Why is Data Guard SYNC a good thing? Because, at the end of the day, it gives you the assurance of zero data loss - it doesn't matter what outage may befall your primary system. Befall! Boy, that sounds theatrical. But seriously - think about this - it minimizes your data risks. That's a big deal. Whether you have an outage due to bad disks, faulty hardware components, hardware/software bugs, physical data corruption, power failures, lightning that takes out a significant part of your data center, fire that melts your assets, water leakage from the cooling system, or human errors such as accidental deletion of online redo log files - it doesn't matter - you can have that "Om - peace" look on your face, fail over to the standby system, and not lose a single bit of data in your Oracle database. You will be a hero, as shown in this not-so-imaginary conversation:

        IT Manager: Well, what's the status?
        You: John is doing the trace analysis on the storage array.
        IT Manager: So? How long is that gonna take?
        You: Well, he is stuck, waiting for a response from <insert your not-so-favorite storage vendor here>.
        IT Manager: So, no root cause yet?
        You: I told you, he is stuck. We have escalated with their Support, but you know how long these things take.
        IT Manager: Darn it - the site is down!
        You: Not really …
        IT Manager: What do you mean?
        You: John is stuck, but Sreeni has already done a failover to the Data Guard standby.
        IT Manager: Whoa, whoa - wait! Failover means we lost some data. Why did you do this without letting the Business group know?
        You: We didn't lose any data. Remember, we had set up Data Guard with SYNC? So now, any problems on production - we just fail over. No data loss, and we are up and running in minutes. The Business guys don't need to know.
        IT Manager: Wow! Are we great or what!!
        You: I guess …

    Ok, so you get it - SYNC is good. But as my dear friend Larry Carpenter says, "TANSTAAFL", or "There ain't no such thing as a free lunch". Yes, of course - investing in Data Guard SYNC means that you have to invest in a low-latency network, you have to monitor your applications and database especially in peak load conditions, and you cannot under-provision your standby systems. But all these are good and necessary things if you are supporting mission-critical apps that are supposed to be running 24x7. The peace of mind that this investment will give you is priceless, especially if you are serious about HA.

    How Far Can We Go?

    Someone may say at this point - well, I can't use Data Guard SYNC over my coast-to-coast deployment. Most likely - true. So how far can you go? Well, we have customers who have deployed Data Guard SYNC over 300+ miles! Does this mean that you can also deploy over similar distances? Duh - no! I am going to say something here that most IT managers don't like to hear - "It depends!" It depends on your application design, application response time / throughput requirements, network topology, etc. However, because of the optimal way we do SYNC, customers have been able to stretch Data Guard SYNC deployments over longer distances compared to traditional, storage-centric ways of doing this. The MAA Database 10.2 best practices paper Data Guard Redo Transport & Network Configuration, and the Oracle Database 11.2 High Availability Best Practices manual, discuss some of these SYNC-related metrics. For example, a test deployment of Data Guard SYNC over 330 miles with 10ms latency showed an impact of less than 5% for a busy OLTP application. Even if you can't deploy Data Guard SYNC over your WAN distance, or if you already have an ASYNC standby located thousands of miles away, here's another nifty way to boost your HA: have a local standby, configured SYNC. How local is "local"? Again - it depends. One customer runs a local SYNC standby across the campus. Another customer runs it across 15 miles in another data center. Both of these customers are running Data Guard SYNC as their HA standard. If a localized outage affects their primary system, no problem! They have all the data available on the standby, to which they can fail over. Very fast. In seconds. Wait - did I say "seconds"? Yes, Virginia, there is a Santa Claus. But you have to wait till the next blog article to find out more. I assure you tho' that this time you won't have to wait another year.

    Read the article

  • SQL Server Licensing in a VMware vSphere Cluster

    - by Helvick
    If I have SQL Server 2008 instances running in virtual machines on a VMware vSphere cluster with vMotion/DRS enabled, so that the VMs can (potentially) run on any one of the physical servers in the cluster, what precisely are the license requirements? For example, assume that I have 4 physical ESX hosts, each with dual physical CPUs, and 3 separate single-vCPU virtual machines running SQL Server 2008 in that cluster. How many SQL Standard Processor licenses would I need? Is it 3 (one per VM), 12 (one per VM on each physical host), or something else? How many SQL Enterprise Processor licenses would I need? Is it 3 (one per VM), 8 (one for each physical CPU in the cluster), or, again, something else? The range in the list prices for these options goes from $17k to $200k, so getting it right is quite important. Bonus question: if I choose the Server+CAL licensing model, do I need to buy multiple Server instance licenses for each of the ESX hosts (so 12 copies of the SQL Server Standard server license, so that there are enough licenses on each host to run all VMs), or again can I just license the VM - and what difference would using Enterprise per-server licensing make?

    Edited to add: Having spent some time reading the SQL 2008 Licensing Guide (63 pages! Includes maps!*), I've come across this:

        • Under the Server/CAL model, you may run unlimited instances of SQL Server 2008 Enterprise within the server farm, and move those instances freely, as long as those instances are not running on more servers than the number of licenses assigned to the server farm.
        • Under the Per Processor model, you effectively count the greatest number of physical processors that may support running instances of SQL Server 2008 Enterprise at any one time across the server farm and assign that number of Processor licenses

    And earlier:

        ..For SQL Server, these rule changes apply to SQL Server 2008 Enterprise only.

    By my reading, this means that for my 3 VMs I only need 3 SQL 2008 Enterprise Processor licenses, or one copy of Server Enterprise + CALs for the cluster. By implication, it means that I have to license all processors if I choose SQL 2008 Standard Processor licensing, or that I have to buy a copy of SQL Server 2008 Standard for each ESX host if I choose to use CALs.

    *There is a map to demonstrate that a Server Farm cannot extend across an area broader than 3 timezones unless it's in the European Free Trade Area. I wasn't expecting that when I started reading it.

    Read the article

  • Where to find new Micro-BTX (uBTX) motherboards? Or should I just replace the box?

    - by John Rudy
    OK, so I'm guessing that it's dead. It's not my machine, and the owner is on a very fixed (i.e., none) income. I'm generous, but I'm not that generous, since I already gave him what (at the time) was a fully functional and fairly well-equipped machine. (Aside from the mobo and proc, almost nothing else in it was stock: I'd taken it up to 3GB of RAM, upgraded the hard drive, added a decent video card, installed a wireless adapter, was running Vista, etc.) According to further research, the machine uses a Micro-BTX (uBTX) motherboard and, since it's an AMD Athlon64, the AM2 socket. So I'm looking at a few options, and am wondering what's the best route to take?

    1. Find an AM2-socket uBTX mobo. I can't find them new online anywhere, leading me to believe that this is an obsolete form factor/chip combination. I don't want a refurb or a system pull because, quite honestly, once I deal with this mess, I don't want to go through it again in another year or two.
    2. Find an Intel uBTX mobo and a (relatively -- hah, I still want at least a dual-core) inexpensive Intel CPU. At this point, the only things stock in the machine would be the case and the PSU. :)
    3. Buy a bare-bones kit (mobo/proc/PSU/case, sometimes even RAM) from somewhere like CompUSA/TigerDirect or Fry's and move all of the other hardware over. This makes life difficult because the copy of Vista is an upgrade, tied to the copy of XP which shipped on the Gateway, which is OEM and won't install on the new box. :)

    If I change the CPU brand (AMD to Intel), will I need to reinstall Windows, or can it just be reactivated? Where can I actually find a new, in-box, not-system-pull, not-refurb AM2 uBTX mobo? Do they even exist anymore? What kind of money are we talking (US dollars)? The end goal is to get the machine functional again as cheaply as humanly possible. If it were my own machine, I wouldn't even be asking this; I'd be custom-building a new one. However, it's not mine, I'm shelling out of pocket for the fix (plus the work), and thus want to keep that end price low-low-low.

    Read the article

  • Informix "Database locale information mismatch"

    - by lmmortal
    I have Informix 11.5 running on my Win-2003 box with a few databases in it. The system databases have locale en_us.819; my custom databases have locale en_us.57372 (UTF-8). There is also an application deployed to JBoss 4.0.2 that has a few datasources configured for those custom databases:

        <local-tx-datasource>
          <jndi-name>InformixDS</jndi-name>
          <connection-url>jdbc:informix-sqli://@database.server@:@database.port@/tcs_catalog:[email protected]@</connection-url>
          <driver-class>com.informix.jdbc.IfxDriver</driver-class>
          <user-name>@database.username@</user-name>
          <password>@database.password@</password>
          <new-connection-sql>set lock mode to wait 5</new-connection-sql>
          <check-valid-connection-sql>select '1' from dual</check-valid-connection-sql>
          <metadata>
            <type-mapping>InformixDB</type-mapping>
          </metadata>
        </local-tx-datasource>

    I'm logged in as Administrator, and when I start JBoss the following error is shown:

        Caused by: java.sql.SQLException: Database locale information mismatch.
            at com.informix.util.IfxErrMsg.getSQLException(IfxErrMsg.java:373)
            at com.informix.jdbc.IfxSqli.a(IfxSqli.java:3208)
            at com.informix.jdbc.IfxSqli.E(IfxSqli.java:3518)
            at com.informix.jdbc.IfxSqli.dispatchMsg(IfxSqli.java:2353)
            at com.informix.jdbc.IfxSqli.receiveMessage(IfxSqli.java:2269)
            at com.informix.jdbc.IfxSqli.executeOpenDatabase(IfxSqli.java:1786)
            at com.informix.jdbc.IfxSqliConnect.<init>(IfxSqliConnect.java:1327)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:501)
            at com.informix.jdbc.IfxDriver.connect(IfxDriver.java:254)
            at org.jboss.resource.adapter.jdbc.local.LocalManagedConnectionFactory.createManagedConnection(LocalManagedConnectionFactory.java:151)
            ... 160 more
        Caused by: java.sql.SQLException
            at com.informix.util.IfxErrMsg.getSQLException(IfxErrMsg.java:373)
            at com.informix.jdbc.IfxSqli.E(IfxSqli.java:3523)
            ... 170 more

    DB_LOCALE and CLIENT_LOCALE are set to en_us.utf8 for Administrator. When I set DB_LOCALE and CLIENT_LOCALE to en_us.utf8 in Server Studio, I can connect to my databases. Where should I set DB_LOCALE and CLIENT_LOCALE to avoid this "Database locale information mismatch" error? Thanks.
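    One avenue worth noting: the Informix JDBC driver accepts DB_LOCALE and CLIENT_LOCALE as properties appended to the connection URL itself, so the locale can travel with the datasource rather than depending on the Administrator's environment. A hedged sketch of testing that outside JBoss (host, port, and INFORMIXSERVER name here are hypothetical placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class LocaleCheck {
            public static void main(String[] args) throws Exception {
                Class.forName("com.informix.jdbc.IfxDriver");
                // DB_LOCALE must match the database's locale (en_us.57372 is UTF-8);
                // CLIENT_LOCALE tells the driver how to convert on the client side
                Connection conn = DriverManager.getConnection(
                        "jdbc:informix-sqli://dbhost:9088/tcs_catalog:"
                      + "INFORMIXSERVER=ol_srv;DB_LOCALE=en_us.utf8;CLIENT_LOCALE=en_us.utf8",
                        "informix", "password");
                System.out.println("Connected: " + !conn.isClosed());
                conn.close();
            }
        }

    If that connects, the same DB_LOCALE=en_us.utf8;CLIENT_LOCALE=en_us.utf8 pair can presumably be appended to the <connection-url> in the JBoss *-ds.xml shown above.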

    Read the article

  • Apache2 benchmarks - very poor performance

    - by andrzejp
    I have two servers on which I am testing the configuration of Apache2.

    The first server: 4GB of RAM, AMD Athlon(tm) 64 X2 Dual Core Processor 5600+, Apache 2.2.3, mod_php, mpm prefork. Settings:

        Timeout 100
        KeepAlive On
        MaxKeepAliveRequests 150
        KeepAliveTimeout 4
        <IfModule mpm_prefork_module>
            StartServers 7
            MinSpareServers 15
            MaxSpareServers 30
            MaxClients 250
            MaxRequestsPerChild 2000
        </IfModule>

    Compiled-in modules: core.c mod_log_config.c mod_logio.c prefork.c http_core.c mod_so.c

    The second server: 8GB of RAM, Intel(R) Core(TM) i7 CPU [email protected], Apache 2.2.9, fcgid, mpm worker, suexec. PHP scripts run via an fcgi wrapper. Settings:

        Timeout 100
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 4
        <IfModule mpm_worker_module>
            StartServers 10
            MaxClients 200
            MinSpareThreads 25
            MaxSpareThreads 75
            ThreadsPerChild 25
            MaxRequestsPerChild 1000
        </IfModule>

    Compiled-in modules: core.c mod_log_config.c mod_logio.c worker.c http_core.c mod_so.c

    The following test results are very strange!

    New server (dynamic content - PHP via fcgid+suexec):

        Server Software: Apache/2.2.9
        Server Hostname: XXXXXXXX
        Server Port: 80
        Document Path: XXXXXXX
        Document Length: 179512 bytes
        Concurrency Level: 10
        Time taken for tests: 0.26276 seconds
        Complete requests: 1000
        Failed requests: 0
        Total transferred: 179935000 bytes
        HTML transferred: 179512000 bytes
        Requests per second: 38.06
        Transfer rate: 6847.88 kb/s received
        Connection Times (ms)
                     min  avg  max
        Connect:       2    4   54
        Processing:  161  257  449
        Total:       163  261  503

    Old server (dynamic content - mod_php):

        Server Software: Apache/2.2.3
        Server Hostname: XXXXXX
        Server Port: 80
        Document Path: XXXXXX
        Document Length: 187537 bytes
        Concurrency Level: 10
        Time taken for tests: 173.073 seconds
        Complete requests: 1000
        Failed requests: 22 (Connect: 0, Length: 22, Exceptions: 0)
        Total transferred: 188003372 bytes
        HTML transferred: 187546372 bytes
        Requests per second: 5777.91
        Transfer rate: 1086267.40 kb/s received
        Connection Times (ms)
                     min   avg    max
        Connect:       3     3     28
        Processing:  298  1724  26615
        Total:       301  1727  26643

    Old server (static content - jpg file):

        Server Software: Apache/2.2.3
        Server Hostname: xxxxxxxxx
        Server Port: 80
        Document Path: /images/top2.gif
        Document Length: 40486 bytes
        Concurrency Level: 100
        Time taken for tests: 3.558 seconds
        Complete requests: 1000
        Failed requests: 0
        Write errors: 0
        Total transferred: 40864400 bytes
        HTML transferred: 40557482 bytes
        Requests per second: 281.09 [#/sec] (mean)
        Time per request: 355.753 [ms] (mean)
        Time per request: 3.558 [ms] (mean, across all concurrent requests)
        Transfer rate: 11217.51 [Kbytes/sec] received
        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        3   11    4.5      12    23
        Processing:    40  329   61.4     339  1009
        Waiting:        6  282   55.2     293   737
        Total:         43  340   63.0     351  1020

    New server (static content - jpg file):

        Server Software: Apache/2.2.9
        Server Hostname: XXXXX
        Server Port: 80
        Document Path: /images/top2.gif
        Document Length: 40486 bytes
        Concurrency Level: 100
        Time taken for tests: 3.571531 seconds
        Complete requests: 1000
        Failed requests: 0
        Write errors: 0
        Total transferred: 41282792 bytes
        HTML transferred: 41030080 bytes
        Requests per second: 279.99 [#/sec] (mean)
        Time per request: 357.153 [ms] (mean)
        Time per request: 3.572 [ms] (mean, across all concurrent requests)
        Transfer rate: 11287.88 [Kbytes/sec] received
        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        2   63   24.8      66   119
        Processing:   124  278   31.8     282   391
        Waiting:        3   70   28.5      66   164
        Total:        126  341   35.9     350   443

    I noticed that the apache error.log contains a lot of entries like:

        [notice] mod_fcgid: call /www/XXXXX/public_html/forum/index.php with wrapper /www/php-fcgi-scripts/XXXXXX/php-fcgi-starter

    What have I omitted, or what do I not understand? Such a difference in requests per second - is it possible? What could be the cause?

    Read the article

  • Bad motherboard / controller / HDs?

    - by quidpro
    On a leased server, I am running into some timing issues with an application that requires precise timing. Server is a Dual Xeon E5410 running on a Supermicro X7DVL-3 motherboard under CentOs 5.5 x64. The application I am running is timer sensitive and keeps sensing drift whether under load or at idle, but especially under load. I did some investigating with atop and dd and found some mind-blowing numbers. Mind you, I am no Linux guru but something sure seems out of whack. I ran: dd bs=4096 if=/dev/zero of=/bigtestfile to generate disk activity. Regardless whether I wrote it to sda or sdb my DSK value in atop would go over 100%, at one time peaking at 1700%. Again it does not matter if I am writing to sda or sdb. DSK | sdb | busy 675% | read 0 | write 110 | avio 78 ms | Here are the smartctl outputs: # smartctl -A /dev/sda smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF READ SMART DATA SECTION === SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000b 200 200 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0007 165 165 021 Pre-fail Always - 2750 4 Start_Stop_Count 0x0032 100 100 040 Old_age Always - 21 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x000a 200 200 051 Old_age Always - 0 9 Power_On_Hours 0x0032 065 065 000 Old_age Always - 25831 10 Spin_Retry_Count 0x0012 100 253 051 Old_age Always - 0 11 Calibration_Retry_Count 0x0012 100 253 051 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 21 194 Temperature_Celsius 0x0022 116 093 000 Old_age Always - 27 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0012 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0012 200 200 000 Old_age Always - 0 199 UDMA_CRC_Error_Count 0x000a 200 253 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0008 200 200 051 Old_age Offline - 0 # smartctl -A /dev/sdb smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF READ SMART DATA SECTION === SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 200 200 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0003 180 180 021 Pre-fail Always - 3958 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 22 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x000f 200 200 051 Pre-fail Always - 0 9 Power_On_Hours 0x0032 068 068 000 Old_age Always - 24087 10 Spin_Retry_Count 0x0013 100 253 051 Pre-fail Always - 0 11 Calibration_Retry_Count 0x0013 100 253 051 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 21 194 Temperature_Celsius 0x0022 122 096 000 Old_age Always - 25 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0012 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0009 200 200 051 Pre-fail Offline - 0 Any idea what's wrong here? Bad motherboard? 
    It would seem rare that both drives are going bad (smartctl says they PASS), so that leaves the mobo as the culprit in my eyes.
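    One sanity check worth running before condemning hardware: dd with no sync or direct-I/O flags mostly measures the Linux page cache rather than the disks. Here is a sketch using standard GNU coreutils and sysstat options (the test file path is arbitrary):

        # Bypass the page cache so the throughput reflects the disk itself
        dd if=/dev/zero of=/bigtestfile bs=1M count=4096 oflag=direct

        # Alternatively, include the final flush in the measurement
        dd if=/dev/zero of=/bigtestfile bs=1M count=4096 conv=fdatasync

        # Watch per-device utilization and latency while the test runs
        iostat -x 1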

    Read the article

  • New Computer Build Questions

    - by MJ
    I'm in the process of gathering parts and specs for a new machine. I wear many hats, so the machine needs to do a lot. I need support for at least two monitors, if not three. I also play many online MMOs (WoW, Aion, Warhammer, etc.), along with some freelance programming projects. I already have a case which is very large, so it will fit anything. I have two other SATA HDs; they are more for storage and basic programs. I feel that the best improvement could be made with a solid-state HD (true or not?). I'm more of a software/programming guy, so ANY input at all on improving this system build would be appreciated. I have a few questions about this list. AMD or Intel? I don't know enough about either to choose what would best fit me. Thanks!

    EDIT: Thanks for the input, everyone! Here are some answers. I do a lot of programming and gaming, so I need things for both. The newer video card covers the gaming aspect, as well as allowing me to have many monitors (hopefully upgrading to dual 30" or more). I don't need any additional HDs at this time: I have a SATA 160GB and a 120GB from my previous computer, plus a NAS system with over 2TB of storage on the home network. I just want a fast HD for OS/programs/games. As for the memory, I have used G.SKILL before in two system builds; it's been very stable and has performed excellently for me.

    EDIT2: Made some additional changes. Lowered the power supply to 750W, which saves me more $$. Also changed the SSD to two WD 640GB HDs. Thinking of a CPU upgrade to the 3.4GHz AMD Phenom II X4 965 Black Edition Deneb.

    System specs (budget: $1500):
    - CPU: AMD Phenom II X4 955 Black Edition Deneb 3.2GHz
    - MB: GIGABYTE GA-MA790GPT-UD3H AM3 AMD 790GX HDMI ATX
    - Memory: G.SKILL 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666)
    - Video: DIAMOND 5870PE51G Radeon HD 5870 (Cypress XT) 1GB 256-bit GDDR5
    - Power supply: XCLIO GREATPOWER 1000W ATX12V SLI Ready CrossFire Ready
    - HD: Intel X25-M Mainstream SSDSA2MH080G2C1 2.5" 80GB SATA II MLC

    Changes:
    - Power supply: CORSAIR CMPSU-750TX 750W ATX12V / EPS12V
    - HD: 2x Western Digital Caviar Blue WD6400AAKS 640GB
    - CPU: AMD Phenom II X4 965 Black Edition Deneb 3.4GHz

    Read the article

  • Installing drivers for switchable graphics

    - by Anonymous
    I recently bought a laptop that came with Windows 7 64-bit installed. I have some older (16-bit and 32-bit) software that doesn't work with 64-bit Windows but works just fine with 32-bit. Since I also wanted to get rid of all of the pre-installed spam, I decided to wipe the hard drive and install a fresh copy of Windows 7 32-bit. Now I can't get the graphics cards working. This laptop uses switchable graphics: an Intel card and a Radeon card.

    I first tried installing this driver from Intel, which works for the Intel card. Of course, the Radeon card doesn't work with this driver, and I need it for some of the newer games I have. I also tried this driver. Windows's Device Manager will recognize the Radeon card, but it will still use the Intel card. Also, even though that package says it contains the Intel driver, the Intel card still isn't properly recognized by Windows (leaving me with a nasty 800x600 resolution). On top of that, the Catalyst Control Center won't open, saying "The Catalyst Control Center is not supported by the driver version of your enabled graphics adapter".

    I then tried installing HP's driver and installing Intel's driver on top of it. Device Manager then recognizes both graphics cards properly; however, the laptop still uses the Intel card. The CCC still won't start (saying the same thing as before), and I can't find any way of switching graphics cards. Before formatting, I could right-click the desktop and click "Configure Switchable Graphics"; this option hasn't been in the context menu regardless of what driver(s) I've installed. After some research, I found out that this menu entry runs the command "cli.exe Start PowerXpressHybrid". I've tried manually running this command, but I get the same unsupported message from CCC.

    So, does anyone know how I can get this working? I would like to be able to switch between the Intel and the Radeon, but if there's some way to disable the Intel and use only the Radeon, that would be fine too. I dual-boot with Linux (the framebuffer uses the Intel card; I haven't even tried getting X set up yet). Here's the output of lspci:

        # lspci -v | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) (prog-if 00 [VGA controller])
        01:00.0 VGA compatible controller: ATI Technologies Inc NI Seymour [AMD Radeon HD 6470M] (prog-if 00 [VGA controller])

    The laptop is an HP Pavilion g6t-1d00. HP doesn't support installing anything but Windows 7 64-bit, so calling tech support isn't an option. Thanks for any help.

    UPDATE: I finally got it working. After a fresh install of Windows 7, I installed the HP driver (the one linked above). Then there's an optional Windows update I installed (I don't remember the exact name, but it'll stick out). After that, graphics switching works just like it's supposed to. Moab, thanks anyway for your help.
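    For the Linux side of the dual-boot: once X does get set up, the card X uses can be pinned explicitly by BusID. Below is a minimal sketch based on the lspci output above; the file name is arbitrary, and the driver names assume the stock open-source xf86-video-intel / xf86-video-ati packages are installed:

        # /etc/X11/xorg.conf.d/20-gpu.conf (hypothetical file name)
        Section "Device"
            Identifier "IntelHD"
            Driver     "intel"
            BusID      "PCI:0:2:0"    # 00:02.0 from lspci above
        EndSection

        # To force the discrete card instead, point X at the Radeon:
        # Section "Device"
        #     Identifier "RadeonHD6470M"
        #     Driver     "radeon"
        #     BusID      "PCI:1:0:0"  # 01:00.0 from lspci above
        # EndSection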

    Read the article

  • Apache2 CGIs crash on ODBC DB access (but run fine from shell)

    - by Martin
    Problem overview (details below): I'm having an apache2 + ruby integration problem when trying to connect to an ODBC data source. The main problem boils down to the fact that scripts that run fine from an interactive shell crash ruby on the database connect line when run as a CGI from apache2. Ruby CGIs that don't try to access the ODBC data source work fine. And (again) ruby scripts that connect to a database with ODBC do fine when executed from the command line (or cron). This behavior is identical when I use perl instead of ruby. So, the issue seems to be with the environment provided for ruby (perl) by apache2, but I can't figure out what is wrong or what to do about it. Does anyone have any suggestions on how to get these CGI scripts to work properly? I've tried many different things to get this to work, and I'm happy to provide more detail on any aspect if that will help.

    Details:
    - Mac OS X Server 10.5.8, Xserve 2 x 2.66 Dual-Core Intel Xeon (12 GB)
    - Apache 2.2.13
    - ruby 1.8.6 (2008-08-11 patchlevel 287) [universal-darwin9.0]
    - ruby-odbc 0.9997, dbd-odbc (0.2.5), dbi (0.4.3), mod_ruby 1.3.0
    - Perl 5.8.8, DBI 1.609, DBD::ODBC 1.23
    - ODBC driver: DataDirect SequeLink v5.5 (/Library/ODBC/SequeLink.bundle/Contents/MacOS/ivslk20.dylib)
    - ODBC datasource: FileMaker Server 10 (v10.0.2.206)

    A minimal version of a script (anonymized) that will crash in apache but run successfully from a shell:

        #!/usr/bin/ruby
        require 'cgi'
        require 'odbc'

        cgi = CGI.new("html3")

        aConnection = ODBC::connect('DBFile', "username", 'password')
        aQuery = aConnection.prepare("SELECT zzz_kP_ID FROM DBTable WHERE zzz_kP_ID = 81044")
        aQuery.execute
        aRecord = aQuery.fetch_hash.inspect
        aQuery.drop
        aConnection.disconnect
        # aRecord = '{"zzz_kP_ID"=>81044.0}'

        cgi.out{ cgi.html{ cgi.body{ "<pre>Primary Key: #{aRecord}</pre>" } } }

    Example of running this from a shell:

        gamma% ./minimal.rb
        (offline mode: enter name=value pairs on standard input)
        Content-Type: text/html
        Content-Length: 134

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><HTML><BODY><pre>Primary Key: {"zzz_kP_ID"=>81044.0}</pre></font></BODY></HTML>
        gamma%

    Typical crash log lines:

        Dec 22 14:02:38 gamma ReportCrash[79237]: Formulating crash report for process perl[79236]
        Dec 22 14:02:38 gamma ReportCrash[79237]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140237_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
        Dec 22 14:03:13 gamma ReportCrash[79256]: Formulating crash report for process perl[79253]
        Dec 22 14:03:13 gamma ReportCrash[79256]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140311_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
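    When a CGI works from a shell but dies in Apache on the ODBC connect, the usual first suspect is the environment: the driver manager reads variables like ODBCINI at connect time, and Apache's environment is far more spartan than a login shell's. Here is a sketch of two standard checks; the variable values for this particular SequeLink install are assumptions, not known-good settings. First, a throwaway CGI to dump the environment (diff its output against env | sort from the shell):

        #!/bin/sh
        echo "Content-Type: text/plain"
        echo
        env | sort

    Then, an httpd.conf fragment passing the likely-missing variables to CGIs:

        <Directory "/Library/WebServer/CGI-Executables">
            # Tell the driver manager where the DSN definitions live (assumed paths)
            SetEnv ODBCINI           /Library/ODBC/odbc.ini
            SetEnv ODBCINSTINI       /Library/ODBC/odbcinst.ini
            # Let the SequeLink client resolve its dylibs from Apache's context
            SetEnv DYLD_LIBRARY_PATH /Library/ODBC/SequeLink.bundle/Contents/MacOS
        </Directory>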

    Read the article

  • How can I troubleshoot a "Hardware Malfunction" blue screen?

    - by AaronSieb
    My computer has suddenly started crashing to a blue screen with the following text:

        hardware malfunction
        call your hardware vendor for support
        *the system has halted*

    The crash occurs randomly during normal use. I have thus far always been able to reproduce it by transferring the contents of a large folder, but I'm not sure if this is caused by the file transfer or simply because the transfer takes long enough for something else to trigger it.

    A bit about my hardware: I have a dual-core Intel CPU and an Asus motherboard. The video card is by nVidia and connects via PCIe. My hard drives are in pairs and connect via SATA to a RAID controller on the motherboard; they are configured as RAID0.

    What I've tried so far:
    - There is nothing in the Windows Event Log.
    - WhoCrashed was unable to find any crash records.
    - ScanDisk runs to completion (it launches prior to Windows load) and reports no errors.
    - MemTest reports no errors (to 200% coverage).
    - System temperatures are in the range of 40 to 50 degrees Celsius, with video card temperatures in the range of 60 to 80 degrees Celsius.
    - I have stripped the system down to a minimal configuration (hard drive, video card, one memory module, motherboard, CPU, power supply). The problem still occurs.

    However, this has allowed me to rule out a few components:
    - It is not the video card, because the problem still occurred after replacing the video card with another one I had on hand.
    - It is not the hard drive or anything software-related, because the problem occurred after a fresh installation of Windows on a replacement hard drive.
    - It is not the hard drive cables, because I replaced those with new ones and still had the problem.
    - It is not the power supply, because the problem still occurred after replacing the power supply with another one I had on hand.
    - It is probably not the memory, because I've tried three different memory modules in three different memory slots and was still able to replicate the issue.

    Is there anything I can do to confirm what's causing the issue? At the moment it seems as though it must be either the motherboard or the CPU, but those are both difficult components to replace... In addition, both components are relatively new (two to three years old). I will gladly edit in any additional information I can get my hands on, and/or focus the question as I can find more details...

    Read the article

  • MD3200 - 3 to 4x less throughput than MD1220. Am I missing something here?

    - by Igor Polishchuk
    I have two R710 servers with similar configurations. The one in my office has an MD1220 attached; the other, in the datacenter of my hosting vendor, has an MD3200. I'm getting significantly worse throughput from the MD3200 at my vendor's setup. I'm mostly interested in sequential writes, and I'm getting these results in bonnie++ and dd tests:

    - Seq. writes on the MD1220 in my office: 1.1 GB/s (bonnie++), 1.3 GB/s (dd)
    - Seq. writes on the MD3200 at my vendor's: 240 MB/s (bonnie++), 310 MB/s (dd)

    Unfortunately, I could not test exactly the same configurations, but the two I have should be comparable. If anything, my well-performing environment is cheaper than the badly performing one. I expect at least similar throughput from these two setups. My vendor cannot really help me. Hopefully somebody more familiar with DAS performance can look at this and tell me if I'm missing something here or if my expectations are too high.

    To summarize, the question is: is it reasonable to expect about 100 MB/s of sequential write throughput per couple of drives in RAID10 on an MD3200? Is there any trick to enable such performance on an MD3200 with dual controllers, as opposed to a simple MD1220 with a single H800 adapter?

    More details about the configurations.

    The good one in my office:
    - Dell R710, 2x X5650 CPUs @ 2.67GHz (12 cores), 96GB DDR3
    - OS: RHEL 5.5, kernel 2.6.18-194.26.1.el5 x86_64
    - 20x 300GB 2.5" SAS 10K in a single RAID10, 1MB chunk size, on MD1220 + Dell H800 I/O controller with 1GB cache in the host

    The not-so-good one at my vendor's:
    - Dell R710, 2x L5520 CPUs @ 2.27GHz (8 cores), 144GB DDR3
    - OS: RHEL 5.5, kernel 2.6.18-194.11.4.el5 x86_64
    - 20x 146GB 2.5" SAS 15K in a single RAID10, 512KB chunk size, Dell MD3200, 2 I/O controllers in the array with 1GB cache each

    Additional information. I've also run the same tests on the same vendor's host, but with different storage: two RAIDs of 14x 146GB 15K RPM drives in RAID10, striped together at the OS level, on MD3000+MD1000. The performance was about 25% worse than on the MD3200, despite having more drives. When I ran similar tests on the internal storage of my vendor's host (2x 146GB 15K RPM drives in RAID1, PERC 6i), I got about 128 MB/s seq. writes; just two internal drives gave me about half of the 20 drives' throughput on the MD3200. The random I/O performance of the MD3200 setup is OK; it gives me at least 1300 IOPS. I mostly have problems with sequential I/O throughput. Thank you for looking into it. Regards, Igor
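    For what it's worth, here is a sketch of how to make the two arrays' numbers directly comparable, using standard bonnie++ and GNU dd options (the mount points are placeholders). bonnie++ needs a working set of at least twice RAM, otherwise the page cache inflates the result:

        # Sequential-throughput run; -s >= 2x RAM (192g on the 96GB host),
        # -n 0 skips the file-creation tests, -f skips the slow per-char phase
        bonnie++ -d /mnt/array -s 192g -n 0 -f -u root

        # Matching dd check that bypasses the page cache entirely
        dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=65536 oflag=direct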

    Read the article

< Previous Page | 147 148 149 150 151 152 153 154 155 156 157 158  | Next Page >