Search Results

Search found 11675 results on 467 pages for 'parallel testing'.


  • Parallel MySQL queries for HTML table - WHILE(x or y)?

    - by Beti Chode
    I'm trying to create a table using PHP. What I need is a table with two columns. So I have an SQL table with 4 fields - primary key id, language, word and definition. The language for each is either Arabic or Russian. I want a table that does the following:

        | definition          |
        |---------------------|
        | rus1    | arab1     |
        | rus2    | arab2     |
        | rus3    | arab3     |
        | rus4    |           |

    So it divides the list by English word, creates a header row for each English word, then lists Russian equivalents in the left column and Arabic in the right. However there are often not the same number for both. What I am doing right now is running a WHILE loop in a WHILE loop. The outer loop is running fine but I think I am doing the inner loop wrong. Here is the bulk of the code:

        $definitions=mysql_query("SELECT DISTINCT definition FROM words")
        WHILE($row=mysql_fetch_array($definitions) {
            ECHO '<tr><th colspan="2">' . $row['definition'] . '</th></tr>';
            $russian="SELECT * FROM words WHERE language='Russian' AND definition='".$row['definition']."'";
            $arabic="SELECT * FROM words WHERE language='Arabic' AND definition='".$row['definition']."'";
            WHILE($rus=mysql_fetch_array($russian) or $arb=mysql_fetch_array($arabic)) {
                ECHO '<tr><td>'.$rus['word'].'</td><td>'.$arb['word'].'</td></tr>';
            }
        }

    Sadly I am getting something like this:

        | definition          |
        |---------------------|
        | rus1    |           |
        | rus2    |           |
        | rus3    |           |
        | rus4    |           |
        |         | arab1     |
        |         | arab2     |
        |         | arab3     |

    I'm not sure what other way I can do this. I tried changing the or to || thinking the different precedence would cause another outcome, but then I get ONLY the Russian column. I'm out of ideas, you guys are my only hope!
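
    One hedged way to restructure the inner part (note that the snippet above never passes the two SQL strings through mysql_query before calling mysql_fetch_array on them): run each query, collect the words into two arrays, then print rows up to the longer of the two counts. This sketch keeps the question's mysql_* API; the escaping and htmlspecialchars calls are additions.

        $definitions = mysql_query("SELECT DISTINCT definition FROM words");
        while ($row = mysql_fetch_array($definitions)) {
            $def = mysql_real_escape_string($row['definition']);
            echo '<tr><th colspan="2">' . htmlspecialchars($row['definition']) . '</th></tr>';

            // Run both queries and buffer the words so the rows can be zipped together.
            $rus = array();
            $res = mysql_query("SELECT word FROM words WHERE language='Russian' AND definition='$def'");
            while ($r = mysql_fetch_array($res)) { $rus[] = $r['word']; }

            $arb = array();
            $res = mysql_query("SELECT word FROM words WHERE language='Arabic' AND definition='$def'");
            while ($a = mysql_fetch_array($res)) { $arb[] = $a['word']; }

            // Pad the shorter list with empty cells.
            for ($i = 0, $n = max(count($rus), count($arb)); $i < $n; $i++) {
                $left  = isset($rus[$i]) ? htmlspecialchars($rus[$i]) : '';
                $right = isset($arb[$i]) ? htmlspecialchars($arb[$i]) : '';
                echo '<tr><td>' . $left . '</td><td>' . $right . '</td></tr>';
            }
        }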

  • Modern.IE VM license

    - by Thomas W.
    Microsoft provides some VMs for testing purposes (advertised on StackOverflow) and I'm trying to understand the license terms. The one I don't really understand is 1.b: "You may use the software for testing purposes only. You may not use the software for commercial purposes." My thoughts: a) Testing a website in several browsers on several different virtual machines seems like quite a professional approach. I can hardly believe many private developers would do that. Of course they should, but which private developer has the time to do so? b) If that's really only available to private developers, what is the offer to companies doing the same thing? I don't see any paid service being advertised. My question as a non-native English speaker: is testing by a company considered a commercial purpose? Can I use the VMs within a company for testing or not?

  • DotNetNuke 5.4.1 Released

    I am happy to announce the release of DotNetNuke 5.4.1 which corrects the major issues which slipped through the QA process for 5.4. While we try to do a good job in testing our releases, our recent efforts for 5.3 and 5.4 have fallen short of the mark. We are currently working with a small team of commercial module developers and the core team to put a better public beta testing process in place that will help augment our own internal testing. Ultimately, community testing is the only testing that...

  • apache virtual host to work with django

    - by khelll
    My project is under /home/projects/testing and I'm adding this to the bottom of my /etc/httpd/conf/httpd.conf file on a CentOS machine, but it is not working:

        <Location "/testing/">
            SetHandler python-program
            PythonHandler django.core.handlers.modpython
            SetEnv DJANGO_SETTINGS_MODULE testing.settings
            PythonOption django.root /testing
            PythonDebug On
            PythonPath "['/home/projects/'] + sys.path"
        </Location>

    When requesting http://localhost/testing/jobs, for example, I get:

        Mod_python error: "PythonHandler django.core.handlers.modpython"
        Traceback (most recent call last):
          File "/usr/lib/python2.4/site-packages/mod_python/apache.py", line 299, in HandlerDispatch
            result = object(req)
          .............
          File "/usr/lib/python2.4/site-packages/Django-1.1.1-py2.4.egg/django/conf/__init__.py", line 75, in __init__
            raise ImportError, "Could not import settings '%s' (Is it on sys.path? Does it have syntax errors?): %s" % (self.SETTINGS_MODULE, e)
        ImportError: Could not import settings 'testing.settings' (Is it on sys.path? Does it have syntax errors?): No module named testing.settings
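
    Two hedged things to check for that ImportError (both are guesses from the paths shown, not confirmed fixes): the project directory needs an __init__.py for testing.settings to be importable as a package, and adding the project directory itself to PythonPath covers settings files referenced without the package prefix.

        # make sure the project is a package
        touch /home/projects/testing/__init__.py

        # in the <Location "/testing/"> block, widen the search path:
        PythonPath "['/home/projects', '/home/projects/testing'] + sys.path"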

  • Where to find great proxy servers for testing GeoIP services?

    - by Andreas
    We would like to test a GeoIP service. To do that we need to visit the site with an IP address from another country. There are a lot of free proxy lists, like http://nntime.com/proxy-country/ - the problem with them is that only the CoDeen proxies are working, and with CoDeen you can't select your country of origin (the same as with TOR); you get redirected to a random proxy in the network. Where can we find good proxy servers for testing GeoIP services? Free proxy servers would be great, but if they cost a small amount that doesn't matter.

  • Why does javascript's "in" operator return true when testing if 0 exists in an array that doesn't contain 0?

    - by Mariano Peterson
    For example, this returns true, and makes sense: var x = [1,2]; 1 in x; // true This returns false, and makes sense: var x = [1,2]; 3 in x; // false However this returns true, and I don't understand why: var x = [1,2]; 0 in x; You can quickly test it by putting this in your browser's address bar: javascript:var x=[1,2]; alert(0 in x); Why does the "in" operator in Javascript return true when testing if "0" exists in array, even when the array doesn't appear to contain "0"?
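
    A short illustration of what is going on: for arrays, in tests whether a property name (an index) exists, not whether a value is stored anywhere in the array.

        var x = [1, 2];      // the array has indices 0 and 1
        0 in x;              // true  -- index 0 exists
        1 in x;              // true  -- index 1 exists (not because the value 1 is present)
        2 in x;              // false -- there is no index 2
        x.indexOf(2) !== -1; // true  -- this is the value test (ES5; needs a shim in older IE)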

  • How to automate testing of a browser-based app?

    - by mawg
    If it were a Windows program, I would use AutoIt to automate testing. Is there something similar for browser-based apps? Nothing too complex; it should just allow scripting (preferable, for me, to macro-recording) to simulate human interaction with the browser, which means being able to identify fields of a form by name, inject text into some, simulate mouse-clicks on others, etc., and then, after submitting a form, it should be able to read text from certain named controls and check the status of others (checked, radio group index, read-only, etc.). While I do appreciate a full-featured product, I don't appreciate a steep learning curve, so something as simple as the scripting of AutoIt would be fine. I don't know if it makes a difference which browser is used, but I could live with MSIE 6 or higher (maybe 7 or higher at a push).

  • angularjs model view update through angular directive

    - by Relicset
    I am trying to update my view from my model using an Angular directive. Here is the directive I have created:

        app.directive('aboutOptions', [function() {
            return {
                restrict: 'C',
                scope: { testing: '=' },
                transclude: true,
                link: function(scope, elem, attr) {
                    scope.$watch(attr.ngModel, function() {
                        console.log(scope.$eval(attr.ngModel));
                        scope.testing = scope.$eval(attr.ngModel);
                    });
                }
            }
        }]);

    Here is the HTML model (file name ab.html):

        <input type="text" class="about-options" ng-model="testing" />

    Here is the view to be updated (file name show.html):

        <h2 class="about-options">{{testing}}</h2>

    The ab.html file is loaded as a template inside a jQuery UI dialog, and my show.html file is in the main page. If I remove scope: { testing: '=' }, the console shows what I am typing.

    Update 1: I tried the following change:

        testing: '=test'

    And in the HTML:

        <input type="text" class="about-options" ng-model="testing" />
        <h2 class="about-options">{{test}}</h2>

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

The Long Road To Stubs

This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

    4631488 lib/Makefile is too patient: .WAITs should be reduced

This CR encapsulates a number of chronic issues with Solaris builds:

We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.

Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.

To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other.

From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach.

Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out.

Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any object or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

    # DATA(i386) __iob 0x3c0
    # DATA(amd64,sparcv9) __iob 0xa00
    # DATA(sparc) __iob 0x140
    iob;

A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script is used after both objects have been built, to compare the real and stub objects, using data from elfdump, and validate that they present the same linking interface.

By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
Ultimately though, the result was unsatisfactory as a basis for real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, was integrated into Nevada build snv_135 in February 2010:

    6916788 ld version 2 mapfile syntax
    PSARC/2009/688 Human readable and extensible ld mapfile syntax

In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

    6916796 OSnet mapfiles should use version 2 link-editor syntax

That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

    % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
        -ztext -zdefs -Bdirect ...

    real        0.019708910
    user        0.010101680
    sys         0.008528431

In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging.

Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away. A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.
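
To make that two-pass pattern concrete, here is a hedged sketch (the library name, paths and macro spellings are illustrative, not the actual ON makefile definitions; only the -z stub option and the stub/real proto split come from the text above):

    # Pass 1: build every stub from its mapfile alone -- no input objects, no
    # dependencies -- so all stubs can be built in parallel and installed into
    # the stub proto ($STUBROOT).
    ld -64 -z stub -o $STUBROOT/lib/libfoo.so.1 -G -hlibfoo.so.1 \
        -M mapfile-vers -ztext -zdefs

    # Pass 2: build the real object with the same command line minus -z stub,
    # resolving its dependencies against the stub proto instead of the real
    # proto, so the real objects can also be built in parallel.
    ld -64 -o $ROOT/lib/libfoo.so.1 -G -hlibfoo.so.1 \
        -M mapfile-vers -ztext -zdefs pics/*.o -L$STUBROOT/lib -lc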
There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult.

At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases was just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

    6993877 ld should produce stub objects
    PSARC/2010/397 ELF Stub Objects

followed by the work to convert the ON consolidation in snv_161 (February 2011) with

    7009826 OSnet should use stub objects
    4631488 lib/Makefile is too patient: .WAITs should be reduced

This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big, a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

Conclusions and Looking Forward

Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

  • Exchange 2010 DAG Automatic Failover Testing/Issue. Not always automatically failing over to healthy copy

    - by Richard
    OK, I've got 2 Exchange 2010 servers that run the client access/hub transport/mailbox roles and one Exchange 2010 server running just the client access/hub transport roles, which acts as my bridgehead. The two mailbox servers are running one database set up in a DAG. Server A shows the DB Mounted and Server B shows Healthy. If I reboot Server A via the Windows GUI, Server B switches from Healthy to Mounted and I see hardly any interruption in service using Outlook 2007. Server A shows "Service down", then "Failed", then "Healthy", and leaves the DB mounted on Server B. This is how it should work, so far so good. Now if I test Server A being shut down cold, or unplug both NICs from the network to simulate failure, Server B switches from Healthy to Mounted and Server A switches to "Service Down", but my Outlook client never connects to the DB mounted on Server B! I can connect to Server C (client access/hub transport) and get to my email and even send new email out, but incoming email doesn't deliver until Server A is brought back online and its DB goes back to Healthy status. So I don't understand why it fails over automatically when I reboot the server with the mounted DB copy, causing very little Outlook 2007 hiccup if any, but when I shut down or disconnect the mounted DB server it DOES mount the healthy copy, yet Outlook 2007 clients can't connect. I hope the picture I'm trying to paint makes some sense; it's driving me a little batty. Any help would be appreciated!
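
    For watching a test like this from the shell rather than the GUI, a hedged sketch using the Exchange Management Shell (the database and server names are placeholders):

        # Per-copy state (Mounted / Healthy / Failed), copy and replay queue lengths
        Get-MailboxDatabaseCopyStatus -Identity "DB01\SERVERA"
        Get-MailboxDatabaseCopyStatus -Identity "DB01\SERVERB"

        # Replication and cluster health on the surviving node
        Test-ReplicationHealth -Identity SERVERB

        # Worth checking: the CAS endpoint Outlook actually connects to; if it lives
        # only on the server being powered off, clients lose connectivity even though
        # the database itself failed over.
        Get-MailboxDatabase "DB01" | Format-List RpcClientAccessServer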

  • CentOS package not available

    - by Bastien974
    I installed CentOS 5.5 and the default php package is 5.1.6. I want to install 5.2.9, so I added /etc/yum.repo.d/CentOS-Testing.repo:

        [c5-testing]
        name=CentOS-5 Testing
        baseurl=http://dev.centos.org/centos/$releasever/testing/$basearch/
        enabled=0
        gpgcheck=1
        gpgkey=http://dev.centos.org/centos/RPM-GPG-KEY-CentOS-testing

    php-5.2.9-2.el5.centos.x86_64.rpm is available at http://dev.centos.org/centos/5/testing/x86_64/RPMS/ So I tried yum clean all and then yum search php-5.2.9-2.el5.centos.x86_64.rpm, which gives: Warning: No matches found for: php-5.2.9-2.el5.centos.x86_64.rpm. What am I doing wrong? Thanks for your help.
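
    A couple of hedged notes on the yum side: the repo file above sets enabled=0, so it has to be enabled per command, and yum search matches package names and summaries rather than full .rpm file names. Something along these lines (the package name is assumed to simply be php):

        yum --enablerepo=c5-testing clean all
        yum --enablerepo=c5-testing list php        # or: yum --enablerepo=c5-testing search php
        yum --enablerepo=c5-testing update php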

  • How to configure IIS for SVG and web testing with Visual Studio?

    - by macias
    Let's say I have a simple web page with svg image in it: <img src="foobar.svg" alt="not working" /> If I make this page as static html page and view it directly svg is displayed. If I type the address of this svg -- it is displayed. But when I make this as .aspx page and launch it dynamically from Visual Studio I get alt text. If I type the address of this svg (from localhost, not as a local file) -- browser tries to download it instead of displaying. I already defined mime type in IIS (for entire server -- "image/svg+xml") and restarted IIS. Same effect as before. Question: what should I do more? Update WireShark won't work (it is in documentation), I tried also RawCap, but it cannot trace my connection (odd), luckily Fiddler worked: From client: GET http://127.0.0.1:1731/svg/document_edit.svg HTTP/1.1 Host: 127.0.0.1:1731 User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip, deflate Connection: keep-alive Answer from server: HTTP/1.1 200 OK Server: ASP.NET Development Server/10.0.0.0 Date: Thu, 16 Feb 2012 11:14:38 GMT X-AspNet-Version: 4.0.30319 Cache-Control: private Content-Type: application/octet-stream Content-Length: 87924 Connection: Close <?xml version="1.0" encoding="UTF-8" standalone="no"?> <!-- Created with Inkscape (http://www.inkscape.org/) --> <svg xmlns: *** FIDDLER: RawDisplay truncated at 128 characters. Right-click to disable truncation. *** For the record, here is useful Q&A for Fiddler: http://stackoverflow.com/questions/826134/how-to-display-localhost-traffic-in-fiddler-while-debugging-an-asp-net-applicati
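
    One detail visible in the trace: the response comes from the ASP.NET Development Server, not IIS, so the server-wide MIME type added in IIS never applies to it. If the project is switched to run under IIS or IIS Express (7.0 or later), the mapping can also be declared per site in web.config — a hedged sketch:

        <configuration>
          <system.webServer>
            <staticContent>
              <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
            </staticContent>
          </system.webServer>
        </configuration>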

  • don't know how this virtual directory structure was setup on iis6

    - by deostroll
    Our development server has a setup as follows: \\DEVSRVR\WEBSITES\COMMON +---include Here is where all css and script files resides. They are required by various web applications \\DEVSRVR\WEBSITES\TESTING\SAM +---Backup ¦ +---bin +---bin +---help Here is where an application resides. Suppose there is an aspx page under the folder called SAM, we'd normally issue an http request as follows: http://testing.apps/sam/default.aspx We believe that testing.apps virtual name points to \\devsrvr\websites\testing folder. Suppose there is a css file called menu.css inside common/include. We'd simply have to make the following http call to get it: http://testing.apps/common/include/menu3.css This works!!! I don't understand how? There is no such folder called common inside of testing...
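
    A hedged way to see how those names are actually mapped on IIS 6 itself (this assumes the default AdminScripts location and web site ID 1; adjust both if needed):

        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs ENUM /P W3SVC/1/ROOT
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs GET W3SVC/1/ROOT/common/Path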

  • Why does my 5.1 surround work in testing only?

    - by Mike Pateras
    I've got a 5.1 speaker setup. In both this SoundMax utility (I think it came with my motherboard), and in the Windows 7 sound test, all 5.1 speakers work properly, but that's the only time that I can get audio to come out of the rear speakers and the center channel. When playing games, video, music, etc., I only seem to get 2.1 speakers worth of sound, even though I've configured everything for 5.1 surround sound. How can I get my 5.1 surround sound working during actual use?

  • Check if folders exist in Git repository... testing if a sub-string exists in bash with NULL as a separator

    - by Craig Francis
    I have a common git "post-receive" script for several projects, and it needs to perform different actions if an /app/ or /public/ folder exists in the root. Using: FOLDERS=`git ls-tree -d --name-only -z master`; I can see the directory listing, and I would like to use the RegExp support in bash to run something like: if [[ "$FOLDERS" =~ app ]]; then ... fi But that won't work if there was something like an "app lication" folder... I specified the "-z" option in the git "ls-tree" command so I could use the \0 (null) character as a separator, but not sure how to test for that in the bash RegExp. Likewise I know there is support for specifying a particular path in the ls-tree command, and could then pipe that to "wc -l", but I'd have thought it was quicker to get a full directory listing of the root (not recursive) then test for the 2 (or more) folders with the returned output. Possibly related to: http://stackoverflow.com/questions/7938094/git-how-to-check-which-files-exist-and-their-content-in-a-shared-bare-repos
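
    Two hedged ways to avoid regex-matching against NUL-separated output altogether (the first assumes GNU grep for its -z/--null-data option):

        # Exact-match a top-level directory name in NUL-separated output
        if git ls-tree -d --name-only -z master | grep -qzx 'app'; then
            echo "app/ exists"
        fi

        # Or read the NUL-separated names one by one
        while IFS= read -r -d '' dir; do
            [ "$dir" = "public" ] && echo "public/ exists"
        done < <(git ls-tree -d --name-only -z master)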

  • Testing DNS configuration of domain by using hosts file?

    - by Alex Blundell
    I'm currently migrating a website to another server, and want to test the DNS configuration (more specifically, email mx records) before moving the domain over. I've configured the DNS on the new server to have mx entries for Google Apps in the same way that it's configured on the old server. The domain is controlled by nameservers on the old server at the moment, so the change would simply be updating the nameservers to the new servers. (What I'm getting at is DNS is controlled at the server level, not registrar level). Since the website has quite a number of users, I want to make sure the configuration is right before flicking the switch. For this, can I add an entry to the hosts file of my local computer to point the domain to the new server? I've done this, and the web server works, but would this also test the email mx records on the new server?
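
    One thing worth noting: the hosts file only overrides name-to-address lookups on your own machine, so it exercises the web server but says nothing about how the MX records will resolve once the nameservers change. Those can be checked by querying the new server's DNS directly — a hedged sketch with placeholder values (substitute the real domain and the new server's IP):

        dig @203.0.113.10 example.com MX +short
        dig @203.0.113.10 example.com NS +short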

  • Testing performance from around the world - how do I get a linux shell easily in multiple countries?

    - by Matthew O'Riordan
    We are building a socket based service where latency is paramount, and as such we have servers distributed into 7 data centres around the world. However, whilst we know we're bringing the servers closer to the clients, it's very difficult to know how effective this is, and importantly, what difference this makes compared to our competitors. As such, we want to run simple scripts that test latency and throughput for both our service and our competitors, which is easy enough using Amazon, however Amazon only have 7 data centres. We would like to know for example how we perform in locations all over the world such as South Africa, Australia, China, Peru etc. Does anyone know of any service where we could piggy back off their global infrastructure and run some scripts to test this performance? The obvious contenders are people like Monitis, but I don't think they would allow us to run custom scripts, only standard protocol monitors. Thanks for your help. Matt

  • decrypting a dvd and testing that it is decrypted and playing it locally with its menu?

    - by barlop
    I have a DVD; it may be decrypted. How do I get it to the hard drive, decrypt it, and see if it is decrypted? I have got it to the hard drive... I can get the VOB files to play (does that mean it is not encrypted?) but I can't get it to play with the menu. I can get the DVD itself to play with a menu; I'd like to get the local copy to do so too. Also, I can play the first VOB file in VLC and MPC but not in mplayer, but that's a secondary thing. I will use whatever player you suggest. I have tried VLC, mplayer, and Media Player Classic.

  • Is there a way to automate testing for a GNS3 network topology?

    - by Chedy2149
    I'm working on an MPLS backbone topology using GNS3; this topology includes configuration of several networking technologies such as IPv6, BGP, OSPF, IPv6 to IPv4 dynamic tunneling, MPLS based VPNs, MPLS based traffic engineering... I'm a beginner with basic knowledge of networking (IPv4 addressing, basic routing and switching) and I can't just easily test whether my topology works or not; moreover, while fiddling with the configuration I've noticed that sometimes getting the MPLS config right but missing the BGP config messes up my whole topology. Given that I use Cisco 7200 series routers, is there any way to build a test harness for my topology in order to guarantee correctness with a single push-button automated process?

  • Practical way to set up an email inbox for testing?

    - by Ben Collins
    I need to test a high-volume email application. Up to now, I've just been using gmail ad-hoc aliases ([email protected]) to receive emails and see that the content is right. However, I now need to verify a recipient list, which means I need to get every single email that goes out on a particular test run. The problem with Google isn't clear, but there's some throttling somewhere (perhaps from SendGrid, who is my delivery provider), and only a very small number of those emails ever make it to my acount. So: what's a practical way to get where I want to be? Is there a way to get gmail to just accept everything? Is there a web app / service somewhere that will let me throw up a fake email address that can receive for a large number of recipients? Is there something else?

  • Backup of whole harddrive during full operation with Acronis True Image Home 2010

    - by testing
    Currently I'm creating a backup of one of my hard drives. It's my main hard drive, the one the operating system is running on. Because the backup is done during full operation, I'm wondering whether the backup really includes all files (registry, ...). Can I restore the backup onto another hard drive and then run the operating system again without problems? Normally I would say that you have to boot from a CD (without the OS running) to make a backup. I did some research on Google but haven't found my case covered so far.

  • In VirtualBox, how can I access host localhost from guest (Visual Studio Dev Server from IE7 testing VM)?

    - by Seth
    The host OS is Win7, running MyApp in the Visual Studio Development Server bound to localhost:51227; the VM is VirtualBox configured with NAT. The guest OS is Win XP with IE7 installed. My goal is to debug MyApp (running on the host) from within IE7 (running on the guest). The Visual Studio Development Server only binds to the loopback network device (i.e. localhost); it does not bind to the external IP address of my host. I've tried accessing 10.0.2.2:51227 from IE7 on the guest (and confirmed that 10.0.2.2 is the gateway address using ipconfig), but it appears that 10.0.2.2 maps to the external IP of the host, NOT the loopback IP (localhost), so this does not work. Any suggestions?
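
    A workaround often used for this situation (a hedged sketch, not specific to this project): leave the development server bound to 127.0.0.1 and forward the same port on the host's external interfaces to it, then browse to 10.0.2.2:51227 from the guest. Run from an elevated prompt on the Win7 host; the firewall rule name is arbitrary:

        netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=51227 connectaddress=127.0.0.1 connectport=51227
        netsh advfirewall firewall add rule name="VS Dev Server 51227" dir=in action=allow protocol=TCP localport=51227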

  • Reduce size of MP4

    - by testing
    I have an MP4 file with a length of 22:44. Here are the details:

        Video:
            width: 720 px
            height: 404 px
            data bitrate: 1022 kBit/s
            overall bitrate: 1182 kBit/s
            fps: 24
            codec: H264 - MPEG4 AVC (part 10) (avc1)
        Audio:
            bitrate: 159 kBit/s stereo
            sample rate: 48 kHz
            codec: MPEG AAC Audio (mp4a)

    I thought I could reduce the current file size (about 200 MB) by reducing the width and the height (420 x 236). I tried different programs: Handbrake, Format Factory, Next Video Converter and SUPER. The first three didn't work as expected: Handbrake has a bug when setting the width and the height, and the next two don't allow fine-grained control of the video size (only presets of width and height). SUPER seems to be the best, but I didn't find a setting which reduces the file size. I reduced the width and the height but only got 20 MB less. Now I've tried the x-th setting and I still get a file size that is too high. I want to reduce the file size to 100 MB or less. The output format should be FLV or MP4, because I need this for Flowplayer. Which settings of SUPER, or which other program, should I use to reduce the file size? Of course the video should still be viewable.
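
    For what it's worth, a hedged command-line alternative using ffmpeg (not one of the tools mentioned above, so treat it as a sketch): at 22:44 of running time, roughly 450 kbit/s of video plus ~96 kbit/s of audio lands near the 100 MB target, and scaling to 420x236 helps that bitrate look acceptable. A recent ffmpeg build with libx264 is assumed:

        ffmpeg -i input.mp4 -vf scale=420:236 -c:v libx264 -b:v 450k -c:a aac -b:a 96k output.mp4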

  • Can I use Cygwin as a replacement for Ubuntu, for bash script testing?

    - by Jeroen De Meerleer
    Next Wednesday I'm having an exam on Operating Systems. In this exam there will also be a part on bash scripting. The teacher himself will test the scripts in a virtual machine running Ubuntu. Myself, however, I'm having serious trouble running the latest Ubuntu (14.04 LTS) in a virtual machine (GNOME runs very slowly). So I'm thinking about using Cygwin, which is doing the job great for another course. The teacher already confirmed I can use that, but I'm thinking he doesn't know it at all. I've already tested the scripts we made in class and they're all running without errors, but I'm quite sure there are some things I have to keep in mind. My question: would you use Cygwin as a replacement for the Ubuntu VM? Or should I stick with the VM (maybe by using a different config/platform)?
