Search Results

Search found 399 results on 16 pages for 'stability'.

Page 11/16 | < Previous Page | 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • Can I convert an existing Firefox installation to ESR without a re-install?

    - by Iszi
    It took some jumping through hoops (including a mailing list subscription that I apparently didn't need) but I finally found where to download the Firefox ESR. This is great for fresh installs, but I was wondering if there's a way to simply convert existing installations to the ESR configuration without having to do a full install. As I understand it, the only difference between ESR and regular Firefox will be how they receive updates. After the new standard version of Firefox comes out, ESR releases will only receive critical security updates and bug fixes for the remainder of their support life. Newer versions of Firefox's standard build will have all the latest and greatest features, while ESR releases are meant to provide stability for environments that can't be expected to keep up with a new full version number change as often as Mozilla does them. In regular Firefox, the About screen shows that I am using the "release" update channel. Is switching to ESR really just a matter of switching the update channel? I presume this can be done in about:config by changing app.update.channel and probably also app.update.url. However, I don't know what these values should be for ESR or if anything else should be tweaked. So, is it possible to switch to ESR without a reinstall and, if so, how? (Note: While this question was written originally for Firefox 10, I expect any answers will apply to future ESR versions as well.)
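
    A minimal sketch of what the channel switch might look like, assuming (unverified) that flipping the channel name to "esr" is all that is needed; note that in Firefox builds of this era the channel default actually lives in defaults/pref/channel-prefs.js inside the install directory rather than in about:config, and whether app.update.url also has to point at an ESR-specific URL is exactly the open question:

      // defaults/pref/channel-prefs.js -- assumption: "esr" is the ESR update channel name
      pref("app.update.channel", "esr");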

    Read the article

  • Is Ksplice production ready?

    - by faultyserver
    I would be interested to hear the Server Fault community's experiences with Ksplice in production. A quick blurb from Wikipedia: Ksplice is a free and open source extension of the Linux kernel which allows system administrators to apply security patches to a running kernel without having to reboot the operating system. And: Ksplice can, without restarting the kernel, apply any source code patch that only needs to modify the kernel code. Unlike other hot update systems, Ksplice takes as input only a unified diff and the original kernel source code, and it updates the running kernel correctly, with no further human assistance required. Additionally, taking advantage of Ksplice does not require any preparation before the system is originally booted (the running kernel does not need to have been specially compiled, for example). In order to generate an update, Ksplice must determine what code within the kernel has been changed by the source code patch. So a few questions: How has the stability been? Any odd issues that you have encountered with its 'rebootless live patching' of the kernel? Kernel panics or horror stories? I have been running it on a few test systems and so far it's been working as advertised, but I am interested in what other sysadmins' experiences have been with Ksplice before going 'all in' and deploying this on our production servers. So, anybody using Ksplice in production? Update: hmm, not seeing any real activity on this question after a couple of hours (besides some kind upvotes and favs). Maybe to spark some activity I'll also ask a few more questions and see if we can get this discussion going... "If you are aware of Ksplice, is there a reason you are not using it?" "Do you feel it's still too bleeding edge, unproven or untested?" "Does Ksplice not fit well within your current patch-management system?" "Do you hate having systems that have long (and secure) uptimes?" ;-)

    Read the article

  • DD-WRT Access Point as a Router

    - by Dzh
    Following a suggestion on this question asked on Network Engineering, I am asking the question here. This is an extension of my previous question (I think it was deleted), where I was claiming that DD-WRT was disabling its DHCP server once connected to the network. I was wrong, as it now seems that it is bridging itself with another parallel-connected wireless router. I have two Draytek 2820s and one Netgear WG602v3 with the latest DD-WRT. Let's call one wired-Draytek; it has wireless disabled. The other one, let's call it wireless-Draytek, is connected to wired-Draytek and has wireless with MAC filtering enabled. Once I connect the Netgear to the wired-Draytek, a client that connects to the Netgear will be assigned an IP address from the wireless-Draytek. If the MAC address is not on the wireless-Draytek, the client is unable to obtain an IP address and has no connectivity at all, even with a manually assigned static IP configuration. To illustrate further, this is how the network is set up:
      wired-Draytek ---------- wireless-Draytek
                   \_________ Netgear
    What I wish to have is that the Netgear issues IP addresses from its own IP pool and ignores the MAC filtering rules from the wireless-Draytek. It is kind of puzzling how they are bridging themselves (if they are) automatically. Thanks. UPDATE: It's not a home network; I gave a slightly simplified set-up. If there is a better site on Stack Exchange to ask this, please let me know. The Drayteks are running stock firmware; it's only the Netgear that I've flashed to get more stability. In addition to these routers, I also have three 3Com Baseline 2824 switches, and another Draytek router with a ProSafe FS752TP PoE switch dedicated to VoIP phones. Wired-Draytek has IP 10.0.0.1, with DHCP disabled as there is an AD DC issuing IP addresses. Wireless-Draytek has IP 1.1.1.1 and DHCP enabled. The Netgear has the default, 192.168.1.1. As per the suggestion, the specific question is: how do I isolate these two wireless routers?

    Read the article

  • Can I boot up a virtual machine natively?

    - by Anshul
    My question is: Is it possible to run a virtual machine natively on your hardware if you have installed the proper drivers etc.? In other words, can I use a VHD as a regular hard drive to boot from? The reason I want to do this is that I do both graphics-intensive and audio-intensive work, but my computer is not powerful enough to handle both at the same time, and many times I install a bunch of audio programs that I don't want affecting the stability of my graphics programs. Basically I wanted to have sandboxing between the two sets of applications. So I tried running the graphics-intensive programs in a VirtualBox VM and the audio-intensive work natively (simply because it's a pain to route ASIO audio devices in/out of VirtualBox). This kind of works - the graphics-intensive stuff is tolerable, but still relatively slow, because it's running inside a VM. So my next idea was to just dual-boot and install the graphics and audio programs in separate partitions, but I frequently use them in tandem, so it wouldn't be practical to reboot my machine every time I need to use the other set of programs. But I could live with this scenario: If I need to do more audio-intensive stuff, I'll just boot up to the audio partition and run the graphics programs in a VM, and then when I'm working heavily on the graphics part, I'll just boot the graphics partition as a regular OS directly on the hardware. Is this possible? For example by booting up a VHD as a regular hard drive? Or by setting up dual-boot, and every time the audio partition is shut down, synchronizing the graphics VM VHD with the native graphics partition? Is it practical, given the above scenario? And if it's not possible, barring buying another computer, can anyone suggest a best-of-all-worlds setup (the worlds being performance, sandboxing, and running in parallel) for the above scenario? Thanks in advance.
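
    For reference, Windows 7 (Ultimate/Enterprise) and Server 2008 R2 support booting natively from a VHD, so a rough sketch of that route looks like the following; the paths and the {guid} placeholder are illustrative, and a VirtualBox .vdi would first have to be converted to .vhd:

      rem run from an elevated command prompt on the existing install
      bcdedit /copy {current} /d "Graphics (native VHD boot)"
      rem substitute the GUID printed by the copy command
      bcdedit /set {guid} device   vhd=[C:]\VHDs\graphics.vhd
      bcdedit /set {guid} osdevice vhd=[C:]\VHDs\graphics.vhd
      bcdedit /set {guid} detecthal on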

    Read the article

  • Painless deployment of a Django app (port from Drupal). Do I have to switch to a VPS?

    - by Monden
    I'm about to complete porting my Drupal-based community site to Django. My Drupal site has been hosted on shared hosting (Dreamhost) for the last 4 years, and stability and performance have been satisfactory. The site gets around 5k unique visitors with 70-80k page views a day. This will be my first deployment of a Django application and I'm not comfortable with managing my own VPS. I use Ubuntu as a dev server, but I don't have experience with it in a production environment. I have an unrelated internal CRM app (Django) that I host with Webfaction. However, security and performance aren't an issue there, as it's only accessed by 5 people. Unfortunately, I don't have much time to learn and maintain a VPS at this moment. I would like to know if I can host a site with this much traffic in Webfaction's shared environment. How would performance differ in comparison to Linode or Slicehost? Google App Engine isn't an option at the moment as I'll be using my current PostgreSQL database.

    Read the article

  • MySQLTuner and query_cache_size dilemma

    - by wbad
    On a busy MySQL server, MySQLTuner 1.2.0 always recommends increasing query_cache_size no matter how high I set the value (I tried up to 512MB). On the other hand it warns that: "Increasing the query_cache size over 128M may reduce performance". Here are the last results:
      >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
      >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
      >> Run with '--help' for additional options and output filtering
      -------- General Statistics --------------------------------------------------
      [--] Skipped version check for MySQLTuner script
      [OK] Currently running supported MySQL version 5.5.25-1~dotdeb.0-log
      [OK] Operating on 64-bit architecture
      -------- Storage Engine Statistics -------------------------------------------
      [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
      [--] Data in InnoDB tables: 6G (Tables: 195)
      [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
      [!!] Total fragmented tables: 51
      -------- Security Recommendations -------------------------------------------
      [OK] All database users have passwords assigned
      -------- Performance Metrics -------------------------------------------------
      [--] Up for: 1d 19h 17m 8s (254M q [1K qps], 5M conn, TX: 139B, RX: 32B)
      [--] Reads / Writes: 89% / 11%
      [--] Total buffers: 24.2G global + 92.2M per thread (1200 max threads)
      [!!] Maximum possible memory usage: 132.2G (139% of installed RAM)
      [OK] Slow queries: 0% (2K/254M)
      [OK] Highest usage of available connections: 32% (391/1200)
      [OK] Key buffer size / total MyISAM indexes: 128.0M/92.0K
      [OK] Key buffer hit rate: 100.0% (8B cached / 0 reads)
      [OK] Query cache efficiency: 79.9% (181M cached / 226M selects)
      [!!] Query cache prunes per day: 1033203
      [OK] Sorts requiring temporary tables: 0% (341 temp sorts / 4M sorts)
      [OK] Temporary tables created on disk: 14% (760K on disk / 5M total)
      [OK] Thread cache hit rate: 99% (676 created / 5M connections)
      [OK] Table cache hit rate: 22% (1K open / 8K opened)
      [OK] Open file limit used: 0% (49/13K)
      [OK] Table locks acquired immediately: 99% (64M immediate / 64M locks)
      [OK] InnoDB data size / buffer pool: 6.1G/19.5G
      -------- Recommendations -----------------------------------------------------
      General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        Reduce your overall MySQL memory footprint for system stability
        Increasing the query_cache size over 128M may reduce performance
      Variables to adjust:
        *** MySQL's maximum memory usage is dangerously high ***
        *** Add RAM before increasing MySQL buffer variables ***
        query_cache_size (> 192M) [see warning above]
    The server has 76 GB of RAM and dual E5-2650s. The load is usually below 2. I would appreciate your hints on how to interpret the recommendations and optimize the database config.
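
    For anyone interpreting the same numbers, a minimal sketch of the relevant knobs (the variable and status names are stock MySQL 5.5; the values are illustrative, not recommendations):

      -- inspect current query cache sizing and churn
      SHOW GLOBAL VARIABLES LIKE 'query_cache%';
      SHOW GLOBAL STATUS LIKE 'Qcache%';

      # my.cnf fragment
      [mysqld]
      query_cache_type  = 1
      query_cache_size  = 128M
      query_cache_limit = 2M

    The high prune count usually means the cache is churning, and the "dangerously high" warning is driven by the maximum-memory arithmetic (roughly global buffers plus per-thread buffers times the 1200-connection ceiling), so lowering max_connections or the per-thread buffers matters more than the cache size itself.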

    Read the article

  • Revamping an old and unstable office IT-solution using Windows Server and OpenVPN

    - by cmbrnt
    I've been given the cumbersome task of totally redoing the IT-infrastructure for a customer's office. They are currently running Windows XP all over, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices some sluggish behavior or experiences problems with Flash games. To say the least, this isn't working for them. Now - I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 onto the new server, and then run Windows Server 2008 as well as a separate Debian install for OpenVPN on it? I don't like the idea of the Domain Controller for the future AD also running a VPN server, because of stability issues when something goes to hell with either of them. There will be no redundancy though. However, I'm not sure if there is something to gain by installing a VPN solution on the Windows Server itself, when it comes to accessing file shares on the network via VPN. I don't know how to enable users logging in via the VPN to access the remote files, since they will be accessing the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged in to the Windows domain, but rather to their home workgroups. I need to be able to grant access to files in certain directories based on the logged-in AD user, but every computer won't necessarily be configured to log into the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if something's not clear. Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules when it comes to their IT-solution. I'd rather leave that untouched and go on my merry way to the next assignment.

    Read the article

  • What Logs / Process Stats to monitor on a Ubuntu FTP server?

    - by Adam Salkin
    I am administering a server with Ubuntu Server which is running Pure-FTPd. So far all is well, but I would like to know what I should be monitoring so that I can spot any potential stability and security issues. I'm not looking for sophisticated software, more an idea of what logs and process statistics are most useful for checking on the health of the system. I'm thinking that I can look at various parameters output by the "ps" command and compare them to see if I have things like memory leaks. But I would like to know what experienced admins do. Also, how do I do a disk check so that when I reboot, I don't get a message saying something like "disk not checked for x days, forcing check", which delays the reboot? I assume there is a command that I can run as a cron job late at night. How often should it be run? What things should I be looking at to spot intrusion attempts? The only shell access is SSH on a non-standard port through the UFW firewall, and I regularly grep auth.log for "Fail" or "Invalid". Is there anything else I should look at? I was logging the firewall (UFW) but I have very few open ports (FTP and SSH on a non-standard port), so looking at lists of IPs that have been blocked did not seem useful. Many thanks
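
    On the forced-check question, a rough sketch of the usual knobs, assuming an ext3/ext4 root on /dev/sda1 (the device, values and address are illustrative):

      # view the current fsck counters and intervals
      sudo tune2fs -l /dev/sda1 | grep -iE 'mount count|check'
      # raise the limits so the periodic check happens on your schedule, not at a random reboot
      sudo tune2fs -c 50 -i 90d /dev/sda1
      # example nightly cron job: mail a summary of failed/invalid SSH attempts
      grep -E 'Fail|Invalid' /var/log/auth.log | mail -s "auth.log summary" admin@example.com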

    Read the article

  • Server down at 23:26 every night

    - by miccet
    We are having a big problem with our site's stability for the last couple of weeks, and after endless hours of troubleshooting I'm not getting anywhere. So I turn to you, dear community. Setup: 2 x VPS servers - Front end, 8 cores, 8G RAM. - Database, 5 cores, 3G RAM. Both running Ubuntu. Ruby on Rails EE with Passenger 3 and Rails 2.3.11. MySQL 5.1.67. The problem is that each night, at the exact same time (23:26), the SQL server suddenly shows a processlist full of COMMIT with an increasing Time. After 30-40 seconds (it can go longer) a wave seems to be processed and the site responds for a few seconds before it repeats. During this hiccup the database server load spikes while the front end is relaxing. I have looked at slow queries, but am not finding any locks or other unusual queries run at this time. I have looked at iotop at the time of the halt and there is no activity from mysql. I also tried turning off query_cache and messed around with the MySQL configuration file without much change. Any ideas?
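
    A rough sketch of how one might hunt for whatever fires at that exact minute (the paths are Ubuntu defaults and the slow-log location is whatever the config points at, so both are illustrative):

      # 23:26 in crontab syntax is "26 23 ..."
      sudo grep -rnE '^26[[:space:]]+23' /etc/crontab /etc/cron.d /var/spool/cron/crontabs 2>/dev/null
      # check what MySQL logged around that time
      grep -n '23:2[5-9]' /var/log/mysql/mysql-slow.log | tail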

    Read the article

  • Hibernate validationQuery: proper way to use it

    - by cometta
    Sometimes the connection from the application to the database will drop and I get SQLState: 08006, error code: 17002. Below is my configuration for database pooling:
      <prop key="hibernate.dialect">org.hibernate.dialect.Oracle10gDialect</prop>
      <prop key="hibernate.show_sql">true</prop>
      <prop key="hibernate.format_sql">true</prop>
      <prop key="hibernate.use_sql_comments">true</prop>
      <prop key="hibernate.cglib.use_reflection_optimizer">true</prop> <!-- true == slower start-up, but better runtime performance -->
      <prop key="hibernate.hbm2ddl.auto">update</prop>
      <prop key="hibernate.c3p0.min_size">5</prop>
      <prop key="hibernate.c3p0.max_size">20</prop>
      <prop key="hibernate.c3p0.timeout">1800</prop>
      <prop key="hibernate.c3p0.max_statements">100</prop> <!-- c3p0's PreparedStatement cache. Zero means statement caching is turned off. -->
    Can anyone advise whether I should use validationQuery, onBorrow, etc., and why they are needed? What other properties should I use to enhance the stability of the connection from my application to Oracle 10g?
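
    As a sketch of the connection-testing knobs involved (hibernate.c3p0.idle_test_period is the documented Hibernate pass-through; the c3p0.properties names are c3p0's own and worth checking against the c3p0 docs for the version in use):

      <!-- in the Hibernate properties: test idle pooled connections every 300 seconds -->
      <prop key="hibernate.c3p0.idle_test_period">300</prop>

      # c3p0.properties on the classpath -- c3p0's own setting names
      c3p0.preferredTestQuery=SELECT 1 FROM DUAL
      c3p0.testConnectionOnCheckout=true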

    Read the article

  • Gmagick extension for PHP install -- how and where?

    - by Vivek Chandra
    I downloaded php-pear and tried installing the gmagick extension by following the steps given at "http://www.gerd-riesselmann.net/development/how-install-imagick-and-gmagick-ubuntu". pecl gave an error:
      gmagick-1.0.9b1$ pecl install gmagick
      Failed to download pecl/gmagick within preferred state "stable", latest release is version 1.0.9b1, stability "beta", use "channel://pecl.php.net/gmagick-1.0.9b1" to install
      install failed
    Tried adding the channel (no result):
      gmagick-1.0.9b1$ pecl channel-add http://pecl.php.net/package/gmagick/1.0.9b1
      Error: No version number found in tag channel-add: invalid channel.xml file
    Found the link "http://pecl.php.net/package/gmagick" to download the PHP extension and untarred it to find the following files:
      gmagick-1.0.9b1$ ls
      config.m4 gmagickdraw_methods.c gmagick_methods.c LICENSE php_gmagick_helpers.h README gmagick.c gmagick_helpers.c gmagickpixel_methods.c php_gmagick.h php_gmagick_macros.h
    Tried . / config.m4 only to find more errors:
      gmagick-1.0.9b1$ . / config.m4
      ./config.m4: line 1: syntax error near unexpected token `gmagick,'
      ./config.m4: line 1: `PHP_ARG_WITH(gmagick, whether to enable the gmagick extension,'
    I've been at this for a day with no result. I've read that gmagick is a Swiss Army knife of image processing; sad that there isn't much documentation on it, or at least a proper how-to-install link, anywhere. Badly need help. Thanks in advance.
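
    For anyone else hitting this, the error message itself points at the workaround; a sketch, assuming an Ubuntu PHP5 setup (package names and paths may need adjusting):

      # install the GraphicsMagick headers pecl needs to build against
      sudo apt-get install libgraphicsmagick1-dev
      # install the beta release directly, as the pecl error suggests
      sudo pecl install channel://pecl.php.net/gmagick-1.0.9b1
      # (alternatively: sudo pecl config-set preferred_state beta && sudo pecl install gmagick)
      # then enable the extension
      echo "extension=gmagick.so" | sudo tee /etc/php5/conf.d/gmagick.ini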

    Read the article

  • ASP.NET MVC 1 and 2 on Mono 2.4 with Fluent NHibernate

    - by SztupY
    Hi! I'd like to create an application using ASP.NET MVC that should run under Mono 2.4 (compiling will be done on a Windows box). Has anyone had any luck with this? Here is what I've already tried: ASP.NET MVC on Mono without any persistence model support, and using NHaml as the view engine; S#aml architecture, which is quite a good framework IMHO, but it depends too much on stuff that does not work well under Mono (like Windsor). The first part worked fine, I didn't encounter any major problems. But I couldn't get the second part working. It seems its dependency on Castle.Windsor breaks the whole Mono support (but there might be other parts too). Therefore I decided to create an alternative framework that borrows some of the ideas of s#arp-architecture, but designed to work under Mono (and if I'm able to do this I'll release it for the community, of course). The controller and view part is working fine (not much magic here though, they have always been working), but I have some questions before I start work on the persistence part: What NHibernate versions work under Mono? I've heard 1.2 works fine. Does 2.0.1/2.1 beta work under Mono? Do Fluent NHibernate and NHibernate.Linq work under Mono? (For the latter it seems it needs some dependencies that aren't available in Mono.) Are there any good alternatives to NHibernate for persistence support under Mono? Alternative questions: Are there any frameworks that already have Mono + persistence + ASP.NET MVC support, or am I the first one to think about this? If you have already done this: what are your opinions on stability/usability? Thanks for the answers EDIT: Updated the framework to support ASP.NET MVC 2: http://shaml.sztupy.hu/

    Read the article

  • Scalable / Parallel Large Graph Analysis Library?

    - by Joel Hoff
    I am looking for good recommendations for scalable and/or parallel large graph analysis libraries in various languages. The problems I am working on involve significant computational analysis of graphs/networks with 1-100 million nodes and 10 million to 1+ billion edges. The largest SMP computer I am using has 256 GB memory, but I also have access to an HPC cluster with 1000 cores, 2 TB aggregate memory, and MPI for communication. I am primarily looking for scalable, high-performance graph libraries that could be used in either single or multi-threaded scenarios, but parallel analysis libraries based on MPI or a similar protocol for communication and/or distributed memory are also of interest for high-end problems. Target programming languages include C++, C, Java, and Python. My research to-date has come up with the following possible solutions for these languages: C++ -- The most viable solutions appear to be the Boost Graph Library and Parallel Boost Graph Library. I have looked briefly at MTGL, but it is currently slanted more toward massively multithreaded hardware architectures like the Cray XMT. C - igraph and SNAP (Small-world Network Analysis and Partitioning); latter uses OpenMP for parallelism on SMP systems. Java - I have found no parallel libraries here yet, but JGraphT and perhaps JUNG are leading contenders in the non-parallel space. Python - igraph and NetworkX look like the most solid options, though neither is parallel. There used to be Python bindings for BGL, but these are now unsupported; last release in 2005 looks stale now. Other topics here on SO that I've looked at have discussed graph libraries in C++, Java, Python, and other languages. However, none of these topics focused significantly on scalability. Does anyone have recommendations they can offer based on experience with any of the above or other library packages when applied to large graph analysis problems? Performance, scalability, and code stability/maturity are my primary concerns. Most of the specialized algorithms will be developed by my team with the exception of any graph-oriented parallel communication or distributed memory frameworks (where the graph state is distributed across a cluster).

    Read the article

  • Upgrade to Delphi 2010, or stick with Delphi 7 "forever"?

    - by tim11g
    I am an individual user of Delphi, starting back in the early Turbo Pascal days. I have quite a bit of code developed over the years, but I have never sold software commercially or used it for business. Historically, Borland supported the non-professional users with lower-cost versions, but Embarcadero does not. As I consider upgrading to Delphi 2010, I am put off by the high price. Embarcadero is also trying to "encourage" upgrading by threatening to charge "new user" prices for upgrades after Dec 31st. I have several questions for the community to help me decide whether to upgrade. 1) I have read about difficulties updating existing code to support the Unicode string types. I have no need for Unicode strings, and I am happy with the string support in D7. Will I have to modify existing code and components just to re-compile under D2010? Or are there compiler options to allow backward compatibility if new string types are not required? 2) The main reason I'm considering upgrading is for IDE improvements, and to get access to new APIs added to Windows since 2002. Are there any Windows 7 APIs or capabilities that would be impossible to support from my programs compiled using Delphi 7 (assuming appropriate JEDI API libraries, for example)? 3) Is there anything else about Delphi 2010 that is really compelling for someone who is primarily interested in Win32 apps, and not working with databases? I have read that D2010 is slow to load, and other versions between D7 and D2010 have had stability issues, and the help system was "broken". What is the biggest benefit to D2010?

    Read the article

  • Force full garbage collection when memory occupation goes beyond a certain threshold

    - by Silvio Donnini
    I have a server application that, on rare occasions, can allocate large chunks of memory. It's not a memory leak, as these chunks can be claimed back by the garbage collector by executing a full garbage collection. Normal garbage collection frees amounts of memory that are too small: it is not adequate in this context. The garbage collector executes these full GCs when it deems appropriate, namely when the memory footprint of the application nears the allotted maximum specified with -Xmx. That would be OK, if it weren't for the fact that these problematic memory allocations come in bursts, and can cause OutOfMemoryErrors due to the fact that the JVM is not able to perform a GC quickly enough to free the required memory. If I manually call System.gc() beforehand, I can prevent this situation. Anyway, I'd prefer not having to monitor my JVM's memory allocation myself (or insert memory management into my application's logic); it would be nice if there was a way to run the virtual machine with a memory threshold, over which full GCs would be executed automatically, in order to release very early the memory I'm going to need. Long story short: I need a way (a command-line option?) to configure the JVM in order to release early a good amount of memory (i.e. perform a full GC) when memory occupation reaches a certain threshold; I don't care if this slows my application down every once in a while. All I've found till now are ways to modify the size of the generations, but that's not what I need (at least not directly). I'd appreciate your suggestions, Silvio P.S. I'm working on a way to avoid large allocations, but it could require a long time and meanwhile my app needs a little stability.
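
    For what it's worth, HotSpot's CMS collector has a knob that comes close to the threshold idea: it starts (concurrent) old-generation collections once occupancy crosses a fraction you pick, rather than waiting until the heap is nearly full. A sketch, where MyServer is a placeholder and the flags are CMS/HotSpot-specific:

      java -Xmx4g \
           -XX:+UseConcMarkSweepGC \
           -XX:CMSInitiatingOccupancyFraction=60 \
           -XX:+UseCMSInitiatingOccupancyOnly \
           MyServer

    This triggers concurrent collections early rather than literal stop-the-world full GCs, so it may or may not free memory fast enough for the bursty allocations described above.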

    Read the article

  • Subversion - Do I need to reintegrate if I don't merge from trunk

    - by user314584
    Hi, I have read quite a bit about the need to re-integrate when you merge from a branch back to the trunk in SVN (this article was really helpful: http://blogs.open.collab.net/svn/2008/07/subversion-merg.html). The problem seems to come from the fact that people are regularly updating the branch from the trunk, which means that the final merge back is reflective. In my use case, we want to create a release branch which will live for as long as it takes to stabilise the branch and fix any bugs. To maintain stability, we don't want to merge up from the trunk but we do want to regularly merge fixes down from the release branch so that trunk gets all the bug fixes for free. We also don't want to wait until the end of QA to merge back to trunk. We therefore want to: 1.) Create the branch 2.) Make regular changes to the branch (and trunk) 3.) Merge back to trunk regularly (daily perhaps) Since we will never merge up from trunk I don't think that we need to worry about the problems that re-integrating is designed to fix. Can anyone see a problem with this approach? Cheers, Matt
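
    For concreteness, a sketch of that workflow using cherry-pick merges from the branch to trunk (the branch name and revision number are made up; the ^/ URL shorthand needs an SVN 1.6+ client):

      # create the release branch
      svn copy ^/trunk ^/branches/release-1.0 -m "Create 1.0 release branch"

      # from a trunk working copy, pull an individual fix across
      svn merge -c 1234 ^/branches/release-1.0 .
      svn commit -m "Merge r1234 (bug fix) from release-1.0 into trunk"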

    Read the article

  • DAL Layer: EF 4.0 or a normal data access layer with stored procedures

    - by Harryboy
    Hello experts, Application: I am working on a mid-to-large size application which will be used as a product, and we need to decide on our DAL layer. The application UI is in Silverlight and the DAL layer is going to sit behind a service layer. We are also moving ahead with a domain model, so our DB tables and domain classes do not have the same structure. So patterns like Data Mapper and Repository will definitely come into the picture. I need to design the DAL layer considering the factors below, in priority order: speed of development with above-average performance; maintenance; future support and stability of the technology; performance. Limitations: 1) As we need to strictly go ahead with Microsoft, we cannot use NHibernate or any other ORM except EF 4.0. 2) We can use any code generation tool (it should be open source or very cheap), but it should only generate code in .NET, so there would not be any licensing issue on a per-copy basis. Questions: I read so many articles about EF 4.0; at the outset it looks like it is still lacking features compared to NHibernate, but it is considerably better than EF 1.0. So, do you feel that we should go ahead with EF 4.0, or should we stick to ADO.NET and use a code generation tool like CodeSmith or any other you feel is best? Also I need to answer questions like how long it will take to port the application from EF 4.0 to ADO.NET if in the future we get stuck with EF 4.0 over some feature or have a serious performance issue. In the reverse case, if we go ahead and choose ADO.NET, how long will it take to switch to EF 4.0? Lastly, as I was going through the articles I found the code-only approach (with POCO classes) seems to be best suited to our requirement, as switching from one technology to the other is really easy. Please share your thoughts on the same and please guide me on the above questions.

    Read the article

  • Java SSH2 libraries in depth: Trilead/Ganymed/Orion [/other?]

    - by Bernd Haug
    I have been searching for a pure Java SSH library to use for a project. The single most important required feature is that it has to be able to work with command-line git, but remote-controlling command-line tools is also important. A pretty common choice, e.g. used in the IntelliJ IDEA git integration (which works very well), seems to be Trilead SSH2. Looking at their website, it's not being maintained any more. Trilead seems to have been a fork of Ganymed SSH2, which was an ETH Zurich project that didn't see releases for a while, but had a recent release by its new owner, Christian Plattner. There is another actively maintained fork from that code base, Orion SSH, that saw an even more recent release, but which seems to get mentioned online much less than the other 2 forks. Has anybody here worked with either (or, if possible, both) of Ganymed and Orion and could kindly describe the development experience with either/both? Accuracy of documentation [existence of documentation?], stability, bugginess... - all of these would be highly interesting to me. Performance is not so important for my current project. If there is another pure-Java SSH implementation that should be used instead, please feel free to mention it, but please don't just mention a name... describe your judgment from actual experience. Sorry if this question may seem a bit "do my homework"-y, but I've really searched for reviews. Everything out there seems to be either a listing of implementations or short "use this! it's great!" snippets.
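
    In case it helps frame the comparison, all three forks expose essentially the same API surface; a minimal remote-command sketch against the Ganymed-style API (the host and credentials are made up, and Trilead uses the com.trilead.ssh2 package instead of ch.ethz.ssh2):

      import ch.ethz.ssh2.Connection;
      import ch.ethz.ssh2.Session;
      import ch.ethz.ssh2.StreamGobbler;

      import java.io.BufferedReader;
      import java.io.InputStreamReader;

      public class SshSmokeTest {
          public static void main(String[] args) throws Exception {
              Connection conn = new Connection("git.example.com");   // hypothetical host
              conn.connect();
              if (!conn.authenticateWithPassword("builder", "secret")) {
                  throw new IllegalStateException("authentication failed");
              }
              Session sess = conn.openSession();
              sess.execCommand("git --version");
              // StreamGobbler drains the channel so the command can't block on a full buffer
              BufferedReader out = new BufferedReader(
                      new InputStreamReader(new StreamGobbler(sess.getStdout())));
              for (String line; (line = out.readLine()) != null; ) {
                  System.out.println(line);
              }
              sess.close();
              conn.close();
          }
      }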

    Read the article

  • Is it safe to use random Unicode for complex delimiter sequences in strings?

    - by ccomet
    Question: In terms of program stability and ensuring that the system will actually operate, how safe is it to use chars like ¦, § or ‡ for complex delimiter sequences in strings? Can I reliably believe that I won't run into any issues with a program reading these incorrectly? I am working in a system, using C# code, in which I have to store a fairly complex set of information within a single string. The readability of this string is only necessary on the computer side; end-users should only ever see the information after it has been parsed by the appropriate methods. Because some of the data in these strings will be collections of variable size, I use different delimiters to identify what parts of the string correspond to a certain tier of organization. There are enough cases that the standard sets of ;, |, and similar ilk have been exhausted. I considered two-char delimiters, like ;# or ;|, but I felt that it would be very inefficient. There probably isn't that large of a performance difference in storing with one char versus two chars, but when I have the option of picking the smaller option, it just feels wrong to pick the larger one. So finally, I considered using the set of characters like the double dagger and section. They only take up one char, and they are definitely not going to show up in the actual text that I'll be storing, so they won't be confused for anything. But character encoding is finicky. While the visibility to the end user is meaningless (since they, in fact, won't see it), I became recently concerned about how the programs in the system will read it. The string is stored in one database, while a separate program is responsible for both encoding and decoding the string into different object types for the rest of the application to work with. And if something that is expected to be written one way is possibly written another, then maybe the whole system will fail, and I can't really let that happen. So is it safe to use these kinds of chars for background delimiters?

    Read the article

  • Image processing on bifurcation diagram to get small eps size

    - by yCalleecharan
    Hello, I'm producing bifurcation diagrams (which are normally used in nonlinear dynamics). These diagrams identify abrupt changes in topology due to stability changes. These abrupt changes occur as one or more parameters pass through some critical value(s). An example is here: http://en.wikipedia.org/wiki/File:LogisticMap_BifurcationDiagram.png In the above figure, some image processing has been done so as to make the plot more visually pleasant. A bifurcation diagram usually contains hundreds of thousands of points and the resulting eps file can become very big. Journal submissions in LaTeX format require that figures be submitted in EPS format. In my case one such figure can result in about 6 MB in Matlab and even much more in Gnuplot. For the example in the above figure, 100,000 x values are calculated for each r, and one can imagine that the resulting eps file would be huge. The site, however, explains some image processing that makes the plot more visually pleasing. Can anyone explain to me, step by step, how to go about it? I can't understand the explanation provided in the "summary" section. Will the resulting image processing also reduce the figure size? Furthermore, any tips on reducing the file size of such a huge eps figure? Thanks a lot...

    Read the article

  • What can I read from the iPad Camera Connection Kit?

    - by HELVETICADE
    I'm building a small controller device that I'd like to partner with a computer. I've settled on using OSC out from my custom-built hardware and am pretty satisfied with what I can get from WOscLib. Two goals I'd like to achieve are portability and a very nice ratio between battery and computing power, and this has lured me towards using iPhoneOS to accomplish my goals. I think the iPad would suit my needs perfectly, except that using wifi to broadcast OSC out from my device requires a third device and would destroy the goal of portability, whilst also introducing potential latency and stability headaches. My question is pretty simple: Can I push my OSC out FROM my controller TO an iPad via USB and the Camera Connection Kit? If I could accomplish this, the two major goals of my project would be fulfilled very nicely. This seems like it should be a simple little question, but researching this obsessively over the past few weeks has left me almost more uncertain than if I had done no research at all. I'd really like some more confidence before I go down this route, and it seems like it should be possible. Any insight would be very, very appreciated.

    Read the article

  • Is it possible to send OSC commands to an iPad via the Camera Connection Kit?

    - by HELVETICADE
    I'm building a small controller device that I'd like to partner with a computer. I've settled on using OSC out from my custom-built hardware and am pretty satisfied with what I can get from WOscLib. Two goals I'd like to achieve are portability and a very nice ratio between battery and computing power, and this has lured me towards using iPhoneOS to accomplish my goals. I think the iPad would suit my needs perfectly, except that using wifi to broadcast OSC out from my device requires that device to be connected to a third device with a wifi chip, and this would destroy the goal of portability, whilst also introducing potential latency and stability headaches. My question is pretty simple: Can I push OSC commands FROM my controller TO an iPad via USB and the Camera Connection Kit? If I could accomplish this, the two major goals of my project would be fulfilled very nicely. This seems like it should be a simple little question, but researching this obsessively over the past few weeks has left me almost more uncertain than if I had done no research at all. I'd really like some more confidence before I go down this route, and it seems like it should be possible. Any insight would be very, very appreciated.

    Read the article

  • WCF with MANY database connections

    - by Jorge Dominguez
    I'm working on the development of an ERP-type .NET WinForms application consuming a WCF service. It's to be used by many small companies (in the range of 100-200). The database is SQL Server 2008 and the service will be hosted as a Windows service. Even though there will be a single DB server, our customer insists on having separate databases for each company. That is because of stability/support concerns (like a DB being damaged or taken offline for some reason, thus affecting all clients) - concerns coming from previous experiences (not necessarily with the same platform). With a single database, connections to the DB would be opened at service start-up and pooling used, but I'm not sure how connections could be managed in a multiple-DB scenario: Could a connection to the corresponding DB be opened and closed for each service request? Would performance be acceptable? If a connection is opened and maintained for each company accessing the system, what's the practical limit of open connections (to different databases)? It would be very interesting to hear your opinions and suggestions for this situation. Thanks

    Read the article
