Search Results

Search found 11953 results on 479 pages for 'functional testing'.

  • Is there an imperative language with a Haskell-like type system?

    - by Graham Kaemmer
    I've tried to learn Haskell a few times over the last few years, and, maybe because I know mainly scripting languages, the functional-ness of it has always bothered me (monads seem like a huge mess for doing lots of I/O). However, I think its type system is perfect. Reading through a guide to Haskell's types and typeclasses (like this), I don't really see a reason why they would require a functional language, and furthermore, they seem like they would be perfect for an industry-grade object-oriented language (like Java). This all raises the question: has anyone ever taken Haskell's type system and made an imperative, OOP language with it? If so, I want to use it.
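
    For comparison, here is roughly what such typeclass-style typing looks like when emulated in Java generics - a hypothetical sketch (the Monoid interface stands in for a Haskell typeclass; instances are passed explicitly instead of being resolved by the compiler):

        // A Haskell-style typeclass encoded as a Java interface.
        interface Monoid<T> {
            T empty();              // mempty
            T combine(T a, T b);    // mappend
        }

        class Monoids {
            // An explicit "instance" Monoid<Integer> (for addition).
            static final Monoid<Integer> SUM = new Monoid<Integer>() {
                public Integer empty() { return 0; }
                public Integer combine(Integer a, Integer b) { return a + b; }
            };

            // A generic function constrained by the typeclass, like
            // mconcat :: Monoid a => [a] -> a
            static <T> T mconcat(Monoid<T> m, Iterable<T> xs) {
                T acc = m.empty();
                for (T x : xs) acc = m.combine(acc, x);
                return acc;
            }

            public static void main(String[] args) {
                System.out.println(mconcat(SUM, java.util.List.of(1, 2, 3))); // 6
            }
        }

    The verbosity, and the fact that the instance must be threaded through by hand, is exactly what the poster's hoped-for language would remove.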

  • What programming language matches this description? [on hold]

    - by Benubird
    I am looking for a functional language that is basically dynamic - i.e. one where functions are first-class objects - but where all function calls are asynchronous by default; i.e. you define a function X(a,b) = (Y(a)+Z(b)), and when X is called, it sees that it is waiting for the return values of two functions, runs one in the current thread, and spawns a new thread to run the other. The future is very much parallel processing: multiple cores, multiple machines, the internet of things, etc., and I was wondering if there is a language specifically designed to make this kind of parallelization easy. I have currently used only imperative languages (C, PHP, Java, Ruby, etc.), so I don't know anything about what kinds of functional languages are available.
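
    For what it's worth, the call semantics described here can be approximated with ordinary futures - a hedged Java sketch (X, Y and Z are the poster's names; the bodies are placeholders):

        import java.util.concurrent.CompletableFuture;

        // X(a,b) = Y(a) + Z(b): one operand is computed on the current
        // thread while the other is forked to run in parallel.
        class AsyncByDefault {
            static int y(int a) { return a * 2; }   // stand-in for Y
            static int z(int b) { return b + 1; }   // stand-in for Z

            static int x(int a, int b) {
                CompletableFuture<Integer> zf =
                    CompletableFuture.supplyAsync(() -> z(b)); // spawn Z
                int yv = y(a);                                 // run Y here
                return yv + zf.join();                         // wait for Z
            }

            public static void main(String[] args) {
                System.out.println(x(3, 4)); // 11
            }
        }

    The difference in the language the poster imagines is that this future-plumbing would be implicit in every call.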

  • Load Test Manifesto

    - by jchang
    Load testing used to be a standard part of software development, but not anymore. Now people express a preference for assessing performance on the production system. There is a lack of confidence that a load test reflects what will actually happen in production. In essence, it has become accepted that the value of load testing is not worth the cost and time, and perhaps that there is no value at all. The main problem is the load test plan criteria – excessive focus on perceived importance...(read more)

  • Is running programs by address common?

    - by dgood1
    I have read some of the things posted here, and I keep reading about people running stuff like /foldername/executable -cmd NAME (I was reading about a programmer using Eclipse, so he was testing something he made). I don't see things like that when I run things here (Ubuntu 12.04), because of the launcher and the Ubuntu button at the very top. That, and Eclipse Indigo has a button for running and testing the things it makes. I'm just asking how and why it's common. (I'm assuming it's done in the Terminal [Ctrl+Alt+T], but I'm not sure.)

  • F# - When do you use a class instead of a record when you do not want to use mutable fields?

    - by fairflow
    I'm imagining a situation where you are creating an F# module in a purely functional style. This means objects do not have mutable fields and are not modified in place. I'm assuming for simplicity that there is no need to use .NET objects or other kinds of objects. There are two possible ways of implementing an object-oriented kind of solution: the first is to use classes, and the second is to use records whose fields are of function type to implement methods. I imagine you'd use classes when you want to use inheritance, but that otherwise records would be adequate, if perhaps clumsier to express. Or do you find classes more convenient than records in any case?


  • Puppet Decentralized Setup

    - by paul.tw
    I want to migrate my existing Puppet setup from master/client to a decentralized solution. I know other solutions, such as Ansible, are easier to set up for that purpose, but I really want to stay with Puppet. I found "supply_drop" (https://github.com/pitluga/supply_drop) on GitHub, so I followed the instructions and did the following:

        rvm gemset create testing
        rvm use 1.9.3@testing
        gem install supply_drop

    The output is the following:

        [m@ms-MacBook-Pro:~ $ irb
        1.9.3-p547 :001 > require 'supply_drop'
        NameError: uninitialized constant Capistrano
        from /Users/m/.rvm/gems/ruby-1.9.3-p547@testing/gems/supply_drop-0.17.0/lib/supply_drop/tasks.rb:1:in `'
        from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:55:in `require'
        from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:55:in `require'
        from /Users/m/.rvm/gems/ruby-1.9.3-p547@testing/gems/supply_drop-0.17.0/lib/supply_drop.rb:10:in `'
        from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:135:in `require'
        from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:135:in `rescue in require'
        from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:144:in `require'
        from (irb):1
        from /Users/m/.rvm/rubies/ruby-1.9.3-p547/bin/irb:12:in `'

    Since that doesn't work without problems, I was wondering which alternatives are available to do the same. Do you have any suggestions?

  • Windows 7 corrupted profile - is prevention possible?

    - by Radek
    I have a dedicated Windows 7 (not on a domain) virtual machine for overnight automation testing. Some commands (mySQLdump, tscon.exe) must be run under the administrator account. Last week the administrator account's profile was corrupted. I fixed it by renaming it in the registry and logging in as administrator. And today it is corrupted again. I use the administrator account only to run the above commands via runas. Also, the computer is restarted via cmd (the shutdown command) quite often - especially every night before automation testing starts. I checked the computer for viruses and did a full scan using Avast, although I believed that the computer was clean. Any idea how to prevent the profile from getting corrupted again?

    Update: the first log entry in the event log today is from 1.15am, and one of my scripts ran a runas command as administrator at exactly 1.15am. It was only the second time that runas was executed after the testing started, though. The same thing happened two days in a row. Before the testing starts I need to copy one file that is locked, so I run handle.exe from runas to unlock it. That is what I think is causing the profile to get corrupted. I am not able to reproduce it myself. The message from the event viewer is:

        Windows cannot load the locally stored profile. Possible causes of this error include insufficient security rights or a corrupt local profile. DETAIL – The process cannot access the file because it is being used by another process.

  • How to resolve 'No internet connectivity' issues with a virtualised 2008 R2 server using Forefront UAG

    - by user684589
    I have spent considerable time reading as many blogs and articles as I can to help me work out why my VM (running on Hyper-V) for DirectAccess has suddenly stopped being able to access the internet. The VM setup shares the same internet connection on which I have written and submitted this question, so I know that the underlying internet connection is fully functional. Up until last week DirectAccess was fully functional and had no issues. This is a recent problem, which was preceded by a number of consistent crashes on the DA machine when access was attempted. Upon reboot all seemed well, until recently. I am not certain whether it is relevant, but prior to this I had a number of power issues where the entire VM host shut down unexpectedly, leaving around 8 VMs in a bad way. Upon restart, the UAG DirectAccess machine was unable to access its configuration service (although the service was started), but this seemed to relate to Active Directory Lightweight Directory Services (AD LDS), which had a corrupted database. Having repaired this database, I restarted the service and could subsequently reconnect to the configuration service again. For good measure I re-bound the network adapters (virtualised through Hyper-V) and DirectAccess claimed to be all happy again. However, as it stands my machine is still unable to access the internet, showing the "No internet connectivity" exclamation mark for the external-facing NIC. I have also tried removing the adapters and disabling and re-enabling them, but the problem persists. The intranet part of the VM (CorpNet) seems to be fully functional as before, and I'm running out of ideas. Any input would be greatly appreciated. I am not an advanced domain administrator, so please be gentle.

  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing stubbed integration tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggests the use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests.

    Stubs

    Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems:

    - In contract-first scenarios, the external system interface will have been defined, but the interface may not have been set up, or even developed yet, for the BizTalk developers to work with.
    - By the time you open the target location to see the data BizTalk has sent, it may have been swept away.
    - If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed, or it may be scheduled to be processed later.
    - Learning how to use the source\target systems, and investigating where things go wrong in those systems, will slow down the BizTalk development effort.
    - By the time the data is visible in a UI it may have undergone further transformations.
    - In larger development teams working together, do you all use the same source and target instances? How do you know which data was created by whose tests? How do you know which event log error messages are whose? Another developer may have "cleaned up" your data.
    - It is harder to write BizUnit tests that clean up the data\logs after each test run.
    - What if your B2B partners' source or target system cannot support the sort of testing you want to do? They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT\UAT teams.
    - There may be licensing costs to setting up instances of the external system.

    The stubs I like to use are generic stubs that can accept\return any message type. Usually I need to create one per protocol. They should be driven by BizUnit steps to: validate the data received; and select a response message (or error response). Once built, they can be re-used for many integration tests and from project to project.

    I'm not saying that developers should never test against a real instance. Every so often, you still need to connect to real developer or test instances of the source and target endpoints\services. The interface developers may ask you to send them some data to see if everything still works. Or you might want some messages sent to BizTalk to get confidence that everything still works beyond BizTalk.

    Tests

    Automated "stubbed integration tests" are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub, and ensure that all of the BizTalk components are configured together correctly to meet all the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged, but BizUnit provides by far the easiest way to test some component types (e.g. orchestrations). Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson delivers the following benefits (source: http://biztalkbddsample.codeplex.com – Video 1):

    - Requirements can be easily defined using Given/When/Then
    - Requirements are close to the code, so they are easier to manage as features and scenarios
    - Requirements are defined in domain language
    - The feature files can be used as part of the documentation
    - The documentation is accurate to the build of code and can be published with a release
    - The scenarios are effective at documenting the behaviour and are not excessive
    - The scenarios are maintained with the code
    - There's an abstraction between the intention and implementation of tests, making them easier to understand
    - The requirements drive the testing

    These same tests can also be used to drive load testing as described here.

    If you don't do this...

    If you don't follow the above "stubbed integration tests" approach, the developer will need to manually trigger the tests. This has the following risks:

    - Developers are unlikely to check all the scenarios, and all the expected conditions, each time.
    - After the developer leaves, these manual test steps may be lost. What test scenarios are there? What test messages did they use for each scenario?
    - There is no mechanism to prove adequate test coverage.

    A test team may attempt to automate integration test scenarios in a test environment by triggering tests from a source system UI. If this is a replacement for BizUnit tests, then this carries the following risks:

    - It moves the tests downstream, so problems will be found later in the process.
    - Testers may not check all the expected conditions within the BizTalk infrastructure, such as event logs, suspended messages, etc.
    - These automated tests may also get in the way of manual tests run on those environments.
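
    To make the generic-stub idea concrete, here is a minimal sketch of one for a raw TCP protocol - in Java rather than the C#/BizUnit stack the article assumes, with hypothetical names:

        import java.io.IOException;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.nio.charset.StandardCharsets;
        import java.util.Arrays;

        // A generic stub: accepts any inbound message, records it so a test
        // step can validate it, and replies with whatever canned response
        // the current test case has configured.
        class TcpStub {
            private volatile byte[] cannedResponse = "OK".getBytes(StandardCharsets.UTF_8);
            private volatile byte[] lastRequest = new byte[0];

            void setResponse(byte[] response) { cannedResponse = response; }
            byte[] lastRequest() { return lastRequest; }

            // Handle a single request/response exchange on the given port.
            void serveOnce(int port) throws IOException {
                try (ServerSocket server = new ServerSocket(port);
                     Socket client = server.accept()) {
                    byte[] buf = new byte[4096];
                    int n = client.getInputStream().read(buf);
                    lastRequest = Arrays.copyOf(buf, Math.max(n, 0)); // capture the message
                    client.getOutputStream().write(cannedResponse);   // send the canned reply
                }
            }
        }

    A BizUnit-style test step would call setResponse before the run and assert on lastRequest afterwards, which is all the "validate the data received; select a response message" choreography requires.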

  • The future of programming, or what lies ahead in programming?

    - by prosseek
    I remember an article saying that Microsoft uses formal verification to debug device drivers, and I also remember that functional programming removes many bugs, as it encourages stateless programming. And we all know about multi-core. I believe all of these are future directions for programming, or for programming languages:

    - Multi-core programming or parallel programming
    - Software formal verification
    - Functional programming (as a mainstream?)

    What do you think? What will be the future of programming?

  • scrum and refactoring

    - by zachary
    If everything in Scrum is all about functional things that a user can see, is there really any place for refactoring code that is unrelated to any new functional requirements?

  • How is management of requirements for embedded software different from business applications?

    - by Chakra
    For business software we usually document the business flow and the functional and non-functional specs as an SRS, use cases, or user stories. One of the critical requirements is UI design, which may get prototyped. How do people in the real world document and manage requirements for embedded software for automobile systems? How are they different from business applications in terms of requirements management? Thanks, Chak.

  • Middleware Test Automation Tools

    Hi, I would like to know about any test automation solutions adopted in middleware testing. Are they similar to the automation solutions provided for functional tests? I would like to know in what ways middleware test automation is similar to, or different from, functional test automation. What tools can be used to accomplish test automation in middleware testing?

  • Abstracting functionality

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/22/abstracting-functionality.aspx

    What is more important than data? Functionality. Yes, I strongly believe we should switch to a functionality over data mindset in programming. Or actually switch back to it.

    Focus on functionality

    Functionality once was at the core of software development. Back when algorithms were the first thing you heard about in CS classes. Sure, data structures, too, were important - but always from the point of view of algorithms. (Niklaus Wirth gave one of his books the title “Algorithms + Data Structures” instead of “Data Structures + Algorithms” for a reason.) The reason for the focus on functionality? Firstly, because software was and is about doing stuff. Secondly because sufficient performance was hard to achieve, and only thirdly memory efficiency.

    But then hardware became more powerful. That gave rise to a new mindset: object orientation. And with it functionality was devalued. Data took over its place as the most important aspect. Now discussions revolved around structures motivated by data relationships. (John Beidler gave his book the title “Data Structures and Algorithms: An Object Oriented Approach” instead of the other way around for a reason.) Sure, this data could be embellished with functionality. But nevertheless functionality was second.

    When you look at (domain) object models what you mostly find is (domain) data object models. The common object oriented approach is: data aka structure over functionality. This is true even for the most modern modeling approaches like Domain Driven Design. Look at the literature and what you find is recommendations on how to get data structures right: aggregates, entities, value objects. I’m not saying this is what object orientation was invented for. But I’m saying that’s what I happen to see across many teams now some 25 years after object orientation became mainstream through C++, Delphi, and Java.

    But why should we switch back? Because software development cannot become truly agile with a data focus. The reason for that lies in what customers need first: functionality, behavior, operations. To be clear, that’s not why software is built. The purpose of software is to be more efficient than the alternative. Money mainly is spent to get a certain level of quality (e.g. performance, scalability, security etc.). But without functionality being present, there is nothing to work on the quality of. What customers want is functionality of a certain quality. ASAP. And tomorrow new functionality needs to be added, existing functionality needs to be changed, and quality needs to be increased.

    No customer ever wanted data or structures. Of course data should be processed. Data is there, data gets generated, transformed, stored. But how the data is structured for this to happen efficiently is of no concern to the customer. Ask a customer (or user) whether she likes the data structured this way or that way. She’ll say, “I don’t care.” But ask a customer (or user) whether he likes the functionality and its quality this way or that way. He’ll say, “I like it” (or “I don’t like it”).

    Build software incrementally

    From this very natural focus of customers and users on functionality and its quality follows we should develop software incrementally. That’s what Agility is about. Deliver small increments quickly and often to get frequent feedback.
    That way less waste is produced, and learning can take place much easier (on the side of the customer as well as on the side of developers). An increment is some added functionality or quality of functionality.[1]

    So as it turns out, Agility is about functionality over whatever. But software developers’ thinking is still stuck in the object oriented mindset of whatever over functionality. Bummer. I guess that (at least partly) explains why Agility always hits a glass ceiling in projects. It’s a clash of mindsets, of cultures. Driving software development by demanding small increases in functionality runs against thinking about software as growing (data) structures sprinkled with functionality. (Excuse me, if this sounds a bit broad-brush. But you get my point.)

    The need for abstraction

    In the end there need to be data structures. Of course. Small and large ones. The phrase functionality over data does not deny that. It’s not functionality instead of data or something. It’s just over, i.e. functionality should be thought of first. It’s a tad more important. It’s what the customer wants.

    That’s why we need a way to design functionality. Small and large. We need to be able to think about functionality before implementing it. We need to be able to reason about it among team members. We need to be able to communicate our mental models of functionality not just by speaking about them, but also on paper. Otherwise reasoning about it does not scale.

    We learned thinking about functionality in the small using flow charts, Nassi-Shneiderman diagrams, pseudo code, or UML sequence diagrams. That’s nice and well. But it does not scale. You can use these tools to describe manageable algorithms. But it does not work for the functionality triggered by pressing the “1-Click Order” button on an amazon product page, for example.

    There are several reasons for that, I’d say. Firstly, the level of abstraction over code is negligible. It’s essentially non-existent. Drawing a flow chart or writing pseudo code or writing actual code is very, very much alike. All these tools are about control flow like code is.[2] In addition all tools are computationally complete. They are about logic, which is expressions and especially control statements. Whatever you code in Java you can fully (!) describe using a flow chart. And then there is no data. They are about control flow and leave out the data altogether. Thus data mostly is assumed to be global. That’s shooting yourself in the foot, as I hope you agree. Even if it’s functionality over data, that does not mean “don’t think about data”. Right to the contrary! Functionality only makes sense with regard to data. So data needs to be in the picture right from the start - but it must not dominate the thinking. The above tools fail on this.

    Bottom line: So far we’re unable to reason in a scalable and abstract manner about functionality. That’s why programmers are so driven to start coding once they are presented with a problem. Programming languages are the only tool they’ve learned to use to reason about functional solutions.

    Or, well, there might be exceptions. Mathematical notation and SQL may have come to your mind already. Indeed they are tools on a higher level of abstraction than flow charts etc. That’s because they are declarative and not computationally complete. They leave out details - in order to deliver higher efficiency in devising overall solutions. We can easily reason about functionality using mathematics and SQL. That’s great.

    Except that they are domain-specific languages. They are not general purpose. (And they don’t scale either, I’d say.) Bummer. So to be more precise, we need a scalable general purpose tool on a higher than code level of abstraction, not neglecting data. Enter: Flow Design.

    Abstracting functionality using data flows

    I believe the solution to the problem of abstracting functionality lies in switching from control flow to data flow. Data flow very naturally is not about logic details anymore. There are no expressions and no control statements anymore. There are not even statements anymore. Data flow is declarative by nature. With data flow we get rid of all the limiting traits of former approaches to modeling functionality. In addition, nomen est omen, data flows include data in the functionality picture. With data flows, data is visibly flowing from processing step to processing step. Control is not flowing. Control is wherever it’s needed to process data coming in. That’s a crucial difference and needs some rewiring in your head to be fully appreciated.[2]

    Since data flows are declarative they are not the right tool to describe algorithms, though, I’d say. With them you don’t design functionality on a low level. During design, data flow processing steps are black boxes. They get fleshed out during coding. Data flow design thus is more coarse grained than flow chart design. It starts on a higher level of abstraction - but then is not limited. By nesting data flows indefinitely you can design functionality of any size, without losing sight of your data. Data flows scale very well during design. They can be used on any level of granularity. And they can easily be depicted. Communicating designs using data flows is easy and scales well, too.

    The result of functional design using data flows is not algorithms (too low level), but processes. Think of data flows as descriptions of industrial production lines. Data as material runs through a number of processing steps to be analyzed, enhanced, transformed. At the top level of a data flow design there might be just one processing step, e.g. “execute 1-click order”. But below that are arbitrary levels of flows with smaller and smaller steps. That’s not layering as in “layered architecture”, though. Rather it’s a stratified design à la Abelson/Sussman. Refining data flows is not your grandpa’s functional decomposition. That was rooted in control flows. Refining data flows does not suffer from the limits of functional decomposition against which object orientation was supposed to be an antidote.

    Summary

    I’ve been working exclusively with data flows for functional design for the past 4 years. It has changed my life as a programmer. What once was difficult is now easy. And, no, I’m not using Clojure or F#. And I’m not an async/parallel execution buff. Designing the functionality of increments using data flows works great with teams. It produces design documentation which can easily be translated into code - in which then the smallest data flow processing steps have to be fleshed out - which is comparatively easy. Using a systematic translation approach, code can mirror the data flow design. That way, later on, the design can easily be reproduced from the code if need be. And finally, data flow designs play well with object orientation. They are a great starting point for class design. But that’s a story for another day. To me data flow design simply is one of the missing links of systematic lightweight software design.

    [1] There are also other artifacts software development can produce to get feedback, e.g. process descriptions, test cases. But customers can be delighted more easily with code based increments in functionality.

    [2] No, I’m not talking about the endless possibilities this opens for parallel processing. Data flows are useful independently of multi-core processors and Actor-based designs. That’s my whole point here. Data flows are good for reasoning and evolvability. So forget about any special frameworks you might need to reap benefits from data flows. None are necessary. Translating data flow designs even into plain old Java is possible.
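
    To give the nesting idea a concrete shape - Java is an assumption here, the article prescribes no language - a data flow can be read as a composition of black-box processing steps:

        import java.util.function.Function;

        class FlowSketch {
            record Order(String productId, String customerId) {}
            record Invoice(String orderId, double amount) {}

            // Each processing step is a black box during design and gets
            // fleshed out during coding. Names and bodies are hypothetical.
            static final Function<String, Order> buildOrder =
                productId -> new Order(productId, "cust-42");
            static final Function<Order, Invoice> bill =
                order -> new Invoice("ord-1", 9.99);
            static final Function<Invoice, String> confirm =
                invoice -> "confirmation for " + invoice.amount();

            // The top-level flow "execute 1-click order" is just the
            // composition of its steps: data flows from step to step,
            // control lives inside each step.
            static final Function<String, String> oneClickOrder =
                buildOrder.andThen(bill).andThen(confirm);

            public static void main(String[] args) {
                System.out.println(oneClickOrder.apply("book-123"));
            }
        }

    Refining a step then means replacing one function with a composition of smaller ones, which is the nesting the article describes.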

  • Cannot get git working

    - by Devin Dixon
    I'm trying to install my own git server with these instructions: http://cisight.com/how-to-setup-git-server-using-gitolite-in-ubuntu-11-10-oneiric/ But I am getting stuck at this point:

        git clone --verbose [email protected]:testing.git
        Cloning into 'testing'...
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly

    And I think it has something to do with this:

        gitolite@ip-xxxx:~$ gl-setup tmp/john.pub
        key_read: uudecode Aklkdfgkldkgldkgldkgfdlkgldkgdlfkgldkgldkgdlkgkfdnknbkdnbkdnbkdnbkfnbkdfnbkdnfbkdfnbdknbkdnbkfnbkdbnkdbnkdfnbkd [email protected] failed
        fprint failed

    I always get the failure, and I think it's preventing me from cloning the repo. The repo is there, along with the gitolite-admin.git repo. The permissions are these:

        drwxr-x--- 8 gitolite gitolite 4096 Jun 6 16:29 gitolite-admin.git
        drwxr-x--- 7 gitolite gitolite 4096 Jun 6 16:29 testing.git

    So my question is, what am I missing here?

  • Hardware asset management systems

    - by Dave
    I need to track a bunch of specialized testing tools. They are hardware devices used for testing other equipment. Each device has a serial number and is sent out for use in testing. Occasionally they break and have to be sent to the manufacturer for repair. I'm looking for an open source application (preferably a webapp) to help manage them. Right now we're using Excel, and it's not scaling as we get more tools. They aren't computers, so all the standard IT asset management systems don't really fit the bill. I found h-harmony, but that project seems dead?

  • Mobile Phone Browser Emulators/Simulators

    - by Jessie
    I work in QA in a .NET shop, and recently part of my testing process has started to involve testing our company website on mobile devices. At least one of our techs uses an HTC Desire. After tons of googling I still can't find a good online emulator for testing websites on different types of mobile devices. Is anyone aware of a website that lets me test across multiple mobile platforms? Or even an online HTC or BlackBerry browser emulator? I've found an iPhone/Opera Mini simulator, but that's about it. Also, I realize there are a lot of SDKs that include emulators, but I'd rather not have to set up an entire SDK just to use an emulator.

  • Asterisk does not recognise DTMF tones from mobile phones

    - by Eugene van der Merwe
    We have an Asterisk 1.8.7.0 (the Elastix derivative) switchboard. Since about a month ago, seemingly out of the blue, the switchboard does not recognise DTMF tones from mobile phones any more. Testing the switchboard using 7777 works. Testing the switchboard from normal phones works. Testing the switchboard from a mobile phone fails. Looking at the log file I can't see anything. I used 'asterisk -rvvvv' and 'tail -f /var/log/asterisk/full' to see the live output and scan the logs. I guess I don't see anything because it's simply not recognising the DTMF tones. I did some brief research and found an old setting for SIP phones, 'rfc2833compensate=yes', and tried adding this to 'sip_general_custom.conf'. After that I did 'core restart when convenient', but that didn't make any difference. Could anyone give me some additional troubleshooting steps?

  • Exchange Server 2010 ActiveSync SSL Certificate Problem

    - by Cell-o
    Hi all, we have a problem related to Exchange Server 2010 ActiveSync. When I connect to ActiveSync from outside, I receive the following error:

        ExRCA is testing Exchange ActiveSync.
        The Exchange ActiveSync test failed.

        Test Steps
        Attempting to resolve the host name mail.xxxxx.com in DNS.
        The host name resolved successfully.
        Additional Details
        IP addresses returned: xx.0.x3.4

        Testing TCP port 443 on host mail.x.com to ensure it's listening and open.
        The port was opened successfully.

        Testing the SSL certificate to make sure it's valid.
        The SSL certificate failed one or more certificate validation checks.
        Test Steps
        Validating the certificate name.
        Certificate name validation failed.
        Tell me more about this issue and how to resolve it
        Additional Details
        Host name mail.x.com doesn't match any name found on the server certificate CN=xxxxxx.

    Thanks in advance for all your help.

  • Need an updated glibc package (version 3.4.15 or later) for RHEL6

    - by Tejas
    I want to upgrade my currently running applications to their latest versions, but due to a package issue I am unable to install them. I get a common error in all of them:

        /usr/lib64/libstdc++.so.6: version 'GLIBCXX_3.4.15' not found

    When I tried to update the glibc package I got the following output:

        [root@agastya ~]# yum install glibc
        Loaded plugins: refresh-packagekit, rhnplugin
        epel/metalink | 3.8 kB 00:00
        epel | 4.3 kB 00:00
        epel/primary_db | 5.0 MB 01:33
        epel-testing/metalink | 3.8 kB 00:00
        epel-testing | 4.3 kB 00:00
        epel-testing/primary_db | 295 kB 00:03
        rhel-x86_64-server-6 | 1.8 kB 00:00
        rhel-x86_64-server-6/primary | 11 MB 02:02
        rhel-x86_64-server-6 8816/8816
        Setting up Install Process
        Package glibc-2.12-1.80.el6_3.6.x86_64 already installed and latest version
        Nothing to do
        [root@agastya ~]#

    Do I need to add some more repositories? If yes, which ones, and how?

  • How to manage a home-grown YUM package repo?

    - by TomOnTime
    There are plenty of websites that explain how to manage a mirror of YUM repos. I want to run a repo for my home-grown packages. Is there a good way to manage such repos? What I need to do:

    - Manage 3 repos: unstable, testing, stable
    - Self-service functions that let users add/remove/promote packages (promote means moving a package from unstable to testing, or from testing to stable - see the sketch below)
    - ACLs that control which users/groups may add/remove/promote packages
    - Automatically re-sign packages as they move from repo to repo (since the GPG key for "stable" should be different from the one for "unstable")
    - Automatically run "createrepo" to update the repodata when needed

    Suggestions?
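
    To illustrate, here is a hypothetical sketch of what the promote step might automate, assuming per-repo signing keys configured via rpm's %_gpg_name macro and the stock rpm and createrepo command-line tools (Java is used only for illustration; any scripting language would do, and all paths and names are assumptions):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardCopyOption;

        // Promote an RPM from one repo to the next: move it, re-sign it,
        // and refresh the repodata of both repos. ACL checks would go in
        // front of this.
        class Promote {
            static void run(String... cmd) throws IOException, InterruptedException {
                Process p = new ProcessBuilder(cmd).inheritIO().start();
                if (p.waitFor() != 0)
                    throw new IOException("failed: " + String.join(" ", cmd));
            }

            static void promote(String rpm, String fromRepo, String toRepo)
                    throws IOException, InterruptedException {
                Path src = Paths.get(fromRepo, rpm);
                Path dst = Paths.get(toRepo, rpm);
                Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
                run("rpm", "--resign", dst.toString()); // re-sign with the target repo's key
                run("createrepo", fromRepo);            // refresh repodata on both sides
                run("createrepo", toRepo);
            }

            public static void main(String[] args) throws Exception {
                promote("mypkg-1.0-1.noarch.rpm", "/srv/repos/unstable", "/srv/repos/testing");
            }
        }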

  • Netcat UDP File Transfer Between Two Servers Times Out?

    - by Mark Bowytz
    I'm testing file transfer speeds between two Red Hat servers that are connected to the same switch within the data center, and I decided to use netcat to eliminate protocol overhead as much as possible. Testing in TCP mode went well, and I was wondering how UDP might fare. On my receiving (client) end, I ran this:

        nc -u -l 11225 -v > myfile.out

    And then on the sending (server) end I ran the following:

        cat myfile.out | nc -u myserver.foo.zzz.com 11225 -v

    The file I'm testing with is 38 GB, but the transfer seems to stop at around 15 GB (one time at 14.9, another at 15.6). I've tested by adding a "-w 5000" just in case it's timing out, but no joy. Adding the -v doesn't show anything except acknowledging that the connection occurred. No errors. So, any suggestions as to why the transfer would cease?
