Search Results

Search found 4940 results on 198 pages for 'understanding'.

  • Strangling the life out of Software Testing

    - by MarkPearl
    I recently did a course at the local university on Software Engineering. At the beginning of the course I looked over the outline of the subject and there seemed to be some really good content. It covered traditional & agile project methodologies, some general communication and modelling chapters, and finished off with testing. I was particularly excited to see the section on testing as this was something I learnt on my own and see great value in.

    The course has now just ended and I am very disappointed. I now know one of the reasons why so few people (in my region, at least) do Test Driven Development, or perform even basic testing methodologies. The topic was too academic! Yes, you might be able to list 4 different types of black box test approaches vs. white box test approaches and describe the characteristics of smoke tests, but never during the course did we see an example of an actual test or how it might be implemented! In fact, if I did not have personal experience of applying testing in actual projects, I wouldn’t even know what a unit test looked like.

    Now, what worries me is the following… It took us 6 months to cover the course material, and other students more than likely came out of that course with little appreciation of the subject – in fact they now have a very complex view of what a test is – so complex that I think most of them will never attempt it again on their own. Secondly, imagine studying to be a dentist without ever actually seeing a tooth. Yes, you might be able to describe a tooth and know what it is made of – but nobody would want a dentist who has never seen a tooth to operate on them. Yet somehow we expect people studying software engineering to do the same? This is not right.

    Now, before I finish my rant, let me say that I know this is not the same everywhere in the world, and that there needs to be a balance between practical implementation and academic understanding – I am just disappointed that this does not seem to be happening at the institution I am currently studying at ;-(

    Please, if you happen to be a lecturer or teacher reading this post – a combination of theory and practicals goes a long way. We need to up the quality of software being produced, and that starts at learner level!
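
    To make the point concrete: even one worked example in six months would have gone a long way. A minimal sketch of a unit test, in plain C++ with assert and no framework (the Calculator class here is hypothetical):

        #include <cassert>

        // Hypothetical class under test.
        struct Calculator {
            int add(int a, int b) { return a + b; }
        };

        // A unit test is just code that exercises one behaviour
        // and fails loudly when the result is wrong.
        void test_add_returns_sum() {
            Calculator calc;
            assert(calc.add(2, 3) == 5);
            assert(calc.add(-1, 1) == 0);
        }

        int main() {
            test_add_returns_sum();
            return 0;   // silence means every assertion passed
        }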

  • Should we persist with an employee still writing bad code after many years?

    - by user94986
    I've been assigned the task of managing developers for a well-established company. They have a single developer who specialises in all their C++ coding (since forever), but the quality of the work is abysmal. Code reviews and testing have revealed many problems, one of the worst being memory leaks. The developer has never tested his code for leaks, and I discovered that the applications could leak many MBs with only a minute of use. Users were reporting huge slowdowns, and his take was, "it's nothing to do with me - if they quit and restart, it's all good again."

    I've given him tools to detect and trace the leaks, and sat down with him for many hours to demonstrate how the tools are used, where the problems occur, and what to do to fix them. We're 6 months down the track, and I assigned him to write a new module. I reviewed it before it was integrated into our larger code base, and was dismayed to discover the same bad coding as before.

    The part that I find incomprehensible is that some of the coding is worse than amateurish. For example, he wanted a class (Foo) that could populate an object of another class (Bar). He decided that Foo would hold a reference to Bar, e.g.:

        class Foo {
        public:
            Foo(Bar& bar) : m_bar(bar) {}
        private:
            Bar& m_bar;
        };

    But (for other reasons) he also needed a default constructor for Foo and, rather than question his initial design, he wrote this gem:

        Foo::Foo() : m_bar(*(new Bar)) {}

    So every time the default constructor is called, a Bar is leaked. To make matters worse, Foo allocates memory from the heap for 2 other objects, but he didn't write a destructor or copy constructor. So every allocation of Foo actually leaks 3 different objects, and you can imagine what happened when a Foo was copied. And - it only gets better - he repeated the same pattern on three other classes, so it isn't a one-off slip. The whole concept is wrong on so many levels.

    I would feel more understanding if this came from a total novice. But this guy has been doing this for many years and has had very focussed training and advice over the past few months. I realise he has been working without mentoring or peer reviews most of that time, but I'm beginning to feel he can't change. So my question is, would you persist with someone who is writing such obviously bad code?
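
    For readers wondering what a leak-free version of that default constructor could look like: one sketch, assuming C++11 is available (the original code may predate it), is to let Foo own its fallback Bar through a smart pointer instead of a naked new:

        #include <memory>

        struct Bar {};

        class Foo {
        public:
            Foo() : m_owned(new Bar), m_bar(*m_owned) {}   // Foo owns this Bar
            explicit Foo(Bar& bar) : m_bar(bar) {}         // caller owns that Bar
        private:
            std::unique_ptr<Bar> m_owned;  // set only on the default-constructed path
            Bar& m_bar;
            // unique_ptr also makes Foo non-copyable by default, closing the
            // door on the double-free problems that copying would otherwise cause
        };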

  • What does Active, Targetset, and Active targetset mean in the output of dfsutil /pktinfo?

    - by Kyle Brandt
    I could use some guidance in interpreting the output of dfsutil /pktinfo. Using the following example:

        PS C:\Users\kbrandt> dfsutil.exe /pktinfo
        ...
        Entry: \long.biz.foo\Images
        ShortEntry: \long.biz.foo\Images
        Expires in 4 seconds
        UseCount: 1 Type:0x81 ( REFERRAL_SVC DFS )
           0:[\OR-UTIL01\Images] ( TARGETSET )
           1:[\NY-FS01\Images] AccessStatus: 0xc00000be ( TARGETSET )
           2:[\NY-UTIL01\Images] AccessStatus: 0 ( ACTIVE )

        Entry: \NY-UTIL01\Images
        ShortEntry: \NY-UTIL01\Images
        Expires in 65 seconds
        UseCount: 0 Type:0x81 ( REFERRAL_SVC DFS )
           0:[\or-util01\Images] ( TARGETSET )
           1:[\NY-FS01\Images] AccessStatus: 0xc00000be ( TARGETSET )
           2:[\NY-UTIL01\Images] AccessStatus: 0 ( ACTIVE )

        Entry: \or-util01\Images
        ShortEntry: \or-util01\Images
        Expires in 0 seconds
        UseCount: 0 Type:0x81 ( REFERRAL_SVC DFS )
           0:[\OR-UTIL01\Images] AccessStatus: 0 ( ACTIVE TARGETSET )
           1:[\NY-UTIL01\Images] ( TARGETSET )
           2:[\NY-FS01\Images]

        Entry: \FOO\Images
        ShortEntry: \FOO\Images
        Expires in 108 seconds
        UseCount: 0 Type:0x81 ( REFERRAL_SVC DFS )
           0:[\OR-UTIL01\Images] AccessStatus: 0 ( ACTIVE TARGETSET )
           1:[\NY-UTIL01\Images] ( TARGETSET )
           2:[\NY-FS01\Images]

    What do the three states TARGETSET, ACTIVE TARGETSET, and ACTIVE mean, exactly? In particular, why might OR-UTIL01 be ACTIVE for \long.biz.foo\Images but the shortname version \FOO\Images have NY-UTIL01 as ACTIVE TARGETSET? I'd like to have a better understanding of this to know whether it is normal or not. Once I understand it, I might be looking at an issue with IPv6 being disabled (http://support.microsoft.com/kb/2003961) if this isn't normal.
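
    A hedged aside while comparing those states: the referral cache itself can go stale, so it may be worth flushing it and re-querying before drawing conclusions (assuming a dfsutil build that supports the classic flags):

        dfsutil /pktflush
        dfsutil /pktinfo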

  • Setting up Tomcat6 properly in Ubuntu 10.04

    - by aasukisuki
    We have a Tomcat6 instance running on Ubuntu 10.04 LTS. Our test box was just a Windows machine running Tomcat6. Both machines (Linux and Windows) have 1GB of RAM. Via the Tomcat configuration tool on Windows, I was able to set the min/max/permgen sizes of the JVM; those were set to 256/512/128 respectively. Now on the Ubuntu box, I've tried setting the JVM options in several different places, including:

    - Adding JAVA_OPTS & CATALINA_OPTS in /etc/environment
    - Adding JAVA_OPTS in $CATALINA_HOME/bin/catalina.sh
    - Creating setenv.sh and adding JAVA_OPTS in $CATALINA_HOME/bin
    - Adding JAVA_OPTS directly to /etc/init.d/tomcat6
    - Un-commenting JAVA_OPTS and modifying it in /etc/default/tomcat6

    None of those methods worked, except for modifying /etc/init.d/tomcat6 directly (and possibly the /etc/default/tomcat6 change, but I only just did that). However, my understanding is that when you change these settings, only one JVM should be used for the entire Tomcat6 instance, and that memory is shared among the applications. On our Windows box, Tomcat6 runs as a service and appears to behave this way. However, when I look at htop on the Linux box, there are 20+ tomcat6 instances (I have an app that triggers internal jobs every X seconds using cron, so maybe these are threads? Or are they actual instances?), all with those memory settings. The app runs fine for a bit, but eventually ends up locking up. I'm guessing each of these apps thinks it has 512m to work with, never GCs, and then locks Tomcat up completely. What is the proper way to set all of this up?
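
    For reference, the file the Ubuntu tomcat6 package itself reads is /etc/default/tomcat6, which /etc/init.d/tomcat6 sources at start-up. A sketch matching the Windows 256/512/128 settings (the exact line is an assumption; merge it with whatever JAVA_OPTS the file already defines):

        # /etc/default/tomcat6
        JAVA_OPTS="-Djava.awt.headless=true -Xms256m -Xmx512m -XX:MaxPermSize=128m"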

  • Account Lockout with pam_tally2 in RHEL6

    - by Aaron Copley
    I am using pam_tally2 to lock out accounts after 3 failed logins, per policy; however, the connecting user does not receive the error indicating pam_tally2's action (via SSH). I expect to see, on the 4th attempt:

        Account locked due to 3 failed logins

    No combination of required or requisite, or the order in the file, seems to help. This is under Red Hat 6, and I am using /etc/pam.d/password-auth. The lockout does work as expected, but the user does not receive the error described above. This causes a lot of confusion and frustration, as they have no way of knowing why authentication fails when they are sure they are using the correct password.

    The implementation follows NSA's Guide to the Secure Configuration of Red Hat Enterprise Linux 5 (pg. 45). It's my understanding that the only thing changed in PAM is that /etc/pam.d/sshd now includes /etc/pam.d/password-auth instead of system-auth.

        If locking out accounts after a number of incorrect login attempts is required by your
        security policy, implement use of pam_tally2.so. To enforce password lockout, add the
        following to /etc/pam.d/system-auth. First, add to the top of the auth lines:

            auth required pam_tally2.so deny=5 onerr=fail unlock_time=900

        Second, add to the top of the account lines:

            account required pam_tally2.so

    EDIT: I get the error message by resetting pam_tally2 during one of the login attempts:

        user@localhost's password: (bad password)
        Permission denied, please try again.
        user@localhost's password: (bad password)
        Permission denied, please try again.
        (reset pam_tally2 from another shell)
        user@localhost's password: (good password)
        Account locked due to ...
        Account locked due to ...
        Last login: ...
        [user@localhost ~]$
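
    While debugging this, it can help to watch the failure counter directly; the pam_tally2 utility itself reports and resets it (standard flags from the shipped binary; "someuser" is a placeholder):

        pam_tally2 --user=someuser           # show the current failure count
        pam_tally2 --user=someuser --reset   # clear it, unlocking the account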

  • VMRC equivalent for Hyper-V?

    - by Ian Boyd
    VMRC was the client tool used to connect to virtual machines running on Virtual Server. Upgrading to Windows Server 2008 R2 with the Hyper-V role, I need a way for people to be able to use the virtual machines. Note:

    - not all virtual machines will have network connectivity
    - not all virtual machines will be running Windows
    - some people needing to connect to a virtual machine will be running Windows XP

    Hyper-V Manager, which allows management of the Hyper-V server, is less desirable, both because it allows management of the Hyper-V server and because it doesn't work on all operating systems. What is the Windows Server 2008 R2 equivalent of VMRC, to "VNC" to a virtual server?

    Update: I think Tatas was suggesting Microsoft System Center Virtual Machine Manager Self-Service Portal 2.0 (?), which requires SQL Server and IIS. Installing those would unfortunately violate our Windows Server 2008 R2 license. I might be looking at the wrong product link, since a commenter said there is a version that doesn't require System Center.

    Update 2: The Windows Server 2008 R2 running Hyper-V is licensed with the understanding that it only be used to host Hyper-V. From the Windows Server 2008 R2 Licensing FAQ:

        Q. If I have one license for Windows Server 2008 R2 Standard and want to run it in a
        virtual operating system environment, can I continue running it in the physical
        operating system environment?

        A. Yes, with Windows Server 2008 R2 Standard, you may run one instance in the physical
        operating system environment and one instance in the virtual operating system
        environment; however, the instance running in the physical operating system environment
        may be used only to run hardware virtualization software, provide hardware
        virtualization services, or to run software to manage and service operating system
        environments on the licensed server.

    This is why I'm wary about installing IIS or SQL Server.

  • Zscaler: certs, cookies, and port 80 traffic

    - by 54's_lol
    So I work at HQ for a large company that shall remain nameless. We use Zscaler, and I had to roll out a 2048-bit cert per Zscaler's request. People around me at work don't understand the technology and think that the certs are what is allowing internet connectivity. From my understanding (and please chime in), the cookie located at

        C:\Users\$$$$$$4$$\AppData\Roaming\Macromedia\Flash Player#SharedObjects\Q3JQJQJV\gateway.zscaler.net\zscaler.swf

    gets created when you provide your creds the first time you use the browser. The certs are simply a way of inspecting the SSL traffic, as Zscaler had no way of doing this before without them. They are essentially using the classic MITM attack to parse your SSL traffic. Gmail is smart enough to recognize this, as you get a warning.

    My question is this: is there a product or service I can use to verify that my web browser, when at home (i.e. off the company network), isn't still getting routed to Zscaler's cloud? If I do a tracert, that will work fine; it's the port 80 and 443 web traffic Zscaler and my company are after. I would like to verify that when I'm off their premises, my web traffic is using only my ISP and the path to whatever content I'm searching for. Do the certs I'm pushing and the browser authentication do something behind the curtain that forces web traffic to get routed to Zscaler? I searched quite a bit and would very much like to know if I'm ever off company scrutiny. I do know Zscaler offers the service to force the scenario I'm asking about. Can I prove how my web traffic is getting routed? Thanks for any insight. I've been a fan for a long time and you guys' kung fu is very strong :-)
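
    One hedged way to test the port 80/443 path specifically, since a plain tracert traces ICMP rather than web traffic: run a TCP traceroute to those ports from home and look for any zscaler.net hop along the way (assuming the common tcptraceroute tool on Linux/Mac; tracetcp is a similar Windows option):

        tcptraceroute www.example.com 80
        tcptraceroute www.example.com 443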

  • Apache: https to https redirect

    - by Klaas van Schelven
    I'm trying to get Apache to redirect all http and https traffic to a single endpoint, www.example.org. The http part is easy:

        <VirtualHost *:80>
            ServerName example.org
            Redirect permanent / https://www.example.org/
        </VirtualHost>

        # long list of other domains, all redirecting to https://www.example.org/

        <VirtualHost *:80>
            ServerName www.example.org
            Redirect permanent / https://www.example.org/
        </VirtualHost>

    I'm trying to do something similar for https. It is my understanding that I need to specify one specific IP address, because the Host header is also sent encrypted. So the below works:

        <VirtualHost xx.xx.xx.xx:443>
            ServerName www.example.org
            # actual stuff happening here
        </VirtualHost>

    However, when I start adding the redirects to the config, like so:

        <VirtualHost xx.xx.xx.xx:443>
            ServerName example.org
            Redirect permanent / https://www.example.org/
        </VirtualHost>

        # long list of other domains

    stuff breaks:

        $ apache2ctl configtest
        [warn] VirtualHost xx.xx.xx.xx:443 overlaps with VirtualHost xx.xx.xx.xx:443, the first
        has precedence, perhaps you need a NameVirtualHost directive

    If I add a directive like so:

        NameVirtualHost xx.xx.xx.xx:443

    connecting to the (SSL part of the) server starts to fail. How do I solve this?
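
    A sketch of the direction that usually resolves that failure (hedged, since certificate details vary): once NameVirtualHost xx.xx.xx.xx:443 is declared, every *:443 vhost must be a complete SSL vhost in its own right, so the redirect-only ones need SSLEngine and certificate directives too:

        NameVirtualHost xx.xx.xx.xx:443

        <VirtualHost xx.xx.xx.xx:443>
            ServerName example.org
            SSLEngine on
            SSLCertificateFile    /path/to/cert.pem   # placeholder paths
            SSLCertificateKeyFile /path/to/key.pem
            Redirect permanent / https://www.example.org/
        </VirtualHost>

    Note that clients without SNI support are always offered the first 443 vhost's certificate, so a certificate covering all the redirected names (or a wildcard) is needed to avoid browser warnings.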

  • Synchronizing audio and video using MP4Box / ffmpeg to concatenate files

    - by jdl2003
    I have two H.264-encoded MPEG-4 files that I need to concatenate. I have been using MP4Box for this task, by first ensuring the files are encoded identically (I even went so far as to compare output from h264_parse on their video tracks) and then concatenating with this command:

        MP4Box -cat file1.mp4 -cat file2.mp4 output_file.mp4

    This works and the output file is playable, but on playback in QuickTime or VLC the second video's audio starts too soon, making the entire second part of the concatenated file out of sync. I have tried passing the output through ffmpeg with -vcodec copy and -acodec copy, but the sync issue persists. I have also tried converting to MPEG-2 format first, concatenating with a simple cat file1.mpg file2.mpg > output.mpg, and re-encoding the result with ffmpeg. This was even worse.

    I know that I can pass commands to MP4Box to adjust the start time of the audio track, but I am trying to do this programmatically for any input video (in the same encoding, of course). I am looking for possible solutions that would not require human intervention / manual adjustments. Or, at least, an understanding of what is happening to make the second part of the concatenated video go out of sync.
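
    As a sketch of the programmatic route (hedged: the 120 ms delay below is a made-up placeholder; in practice it would be computed per file), ffprobe can report each stream's start_time, and MP4Box's import syntax accepts a per-track delay in milliseconds, so the second file can be rebuilt with its audio shifted before concatenation:

        ffprobe -show_streams file2.mp4 | grep start_time
        MP4Box -add file2.mp4#video -add file2.mp4#audio:delay=120 file2_fixed.mp4
        MP4Box -cat file1.mp4 -cat file2_fixed.mp4 output_file.mp4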

  • How to construct SELinux rules for a Glassfish server

    - by tronda
    I'm running Glassfish 3.1 on a CentOS 6 server, and by default SELinux is enabled. I have installed Sun's JDK version 1.6.0_29 on the server and extracted Glassfish 3.1.1 to /opt/glassfish-3.1.1, with a link /opt/glassfish pointing to the latest Glassfish version. I've also created a system user named glassfish with a home directory /home/glassfish.

    When running with SELinux enabled I get all sorts of errors. For instance, I'm not able to create the domain. I kind of like the concept of SELinux, and would like to keep it enabled. I have the following requirements for the Glassfish server:

    - Listening on ports 8080 and 8081
    - Other ports: 7676 (JMS), 8686 (JMX monitoring), 4848 (admin console)
    - Forwarding from Apache to Glassfish through mod_jk and port 8009
    - Starting OpenMQ as a separate process, which listens on 7676 and its own JMX monitoring port 7776
    - Able to read and write files in a specified area (different from the home directory)
    - Able to use /tmp for temporary files

    I am aware of the audit2allow tool when running in permissive mode, but I struggle with understanding the rules it generates, and I thought that setting up these rules manually the first time would help me understand SELinux rules better than the simplistic examples I've seen so far. Can someone with SELinux experience help me form these SELinux rules, with comments describing each part?
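
    As a starting sketch, hedged because the right types depend on the policy you end up targeting (these are the stock CentOS 6 tools): label the install tree, open the non-default ports, then let audit2allow surface whatever remains while the machine runs permissive:

        # Label the Glassfish tree (usr_t is a common choice for /opt software)
        semanage fcontext -a -t usr_t "/opt/glassfish-3.1.1(/.*)?"
        restorecon -Rv /opt/glassfish-3.1.1

        # Open extra ports; check what is already labeled first:
        #   semanage port -l | grep http_port_t
        semanage port -a -t http_port_t -p tcp 4848
        semanage port -a -t http_port_t -p tcp 8686

        # After exercising the server in permissive mode, turn the denials into a module
        grep java /var/log/audit/audit.log | audit2allow -M glassfish
        semodule -i glassfish.pp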

  • Simple P2V help from Linux to Windows

    - by Ke.
    I have two OSes installed on different drives in my PC: one Linux (CentOS 5.4) and one Windows 7. It's getting tiresome to constantly have to stop and restart the PC when I want to use either OS. I would very much like to use Windows 7 as my host OS and access my Linux OS from within Windows. However, I'm having trouble deciphering exactly how to do this (many of the articles seem confusing and a bit overkill).

    From what I have seen, it's possible to use VMware Converter to convert the physical Linux image to a virtual image so that I can use it in Windows. As I'm having problems understanding how this is done, I would really appreciate a step-by-step guide (for a newbie), or any simple tutorials you can point me at. Some questions beforehand:

    1) My Linux image is around 80GB; do I need to take this into consideration? The Linux drive is around 180GB in total. All my other drives are NTFS, non-writeable in Linux (as I use them in Windows, and NTFS is dodgy in Linux), so it's probably not possible to move the image over to my NTFS drives.

    2) Can I just zip the Linux files up somehow and transfer them to Windows to create the P2V?

    3) Is it possible to do the P2V conversion while I am logged into Windows? I can see the actual Linux drive loaded in disk manager, but Windows doesn't read Linux file systems, so I'm confused as to how to access the Linux drive if this is possible.

    4) Or will I need to do the whole P2V conversion inside Linux?

    Cheers, any help is much appreciated. Ke (a confused P2V newbie)

  • most reliable linux terminal app / general procedures for process stability

    - by intuited
    I've been using Konsole (KDE 4.2) for a while now, but it crashed recently. Konsole is efficiently designed to use one instance for all of the windows for your entire X session. Extra-unfortunately, because of this ingenuity the crash brought down all the humpty-dumptys and their bashes and their bashes' applications and all the begattens' begattens all the way down to Jebodiah Springfield into one big flat nonexistent omelette.

    The fact that this app is capable of crashing under any circumstances is pretty disappointing. Although KDE 4.2 is not expected to be entirely stable -- and yes, I know, I should update my distro -- it's still a no-sell for me, since if at all possible, this sort of thing Shouldn't Happen to something that's likely to be a foundation for an entire working environment. Maybe this is arrogant and unrealistic, but if it's possible to have something more stable, I want it.

    So other than running under screen -- which is fun, nifty, and thus far flawless in its reliability, but which has some issues with not understanding certain keycodes -- I'm looking for ways to improve my environment's reliability.

    The most obvious strategy is to cast about for a more reliable console app. A standard featureset -- which to me includes tabbed windows, Unicode support, and a decent level of keyboard shortcut configuration -- is pretty much essential. I'm currently running gnome-terminal and roxterm, both of which have acceptable featuresets (pretty much identical, actually; I think rox is actually the superset), and neither of which have provided me with extensive, objective reliability data. Not that they were expected to.

    Other strategies are also welcome. Were I responding to this question I would perhaps suggest backgrounding critical tasks with & and/or disowning them so they don't come down with the global pandemic, as sketched below. And stuff like that.
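
    For completeness, the backgrounding suggestion at the end looks like this in practice (plain bash; long_running_app is a placeholder):

        long_running_app &    # start in the background
        disown %1             # drop it from the job table so a dying shell won't HUP it
        # or, in one step, survive the terminal vanishing entirely:
        nohup long_running_app > app.log 2>&1 &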

  • Experiences with BIRD for BGP?

    - by Shtééf
    We're currently using Quagga with Debian Linux to run a full-table BGP router. The set-up has been dead simple up to now, but we've come to a point where I have to reconfigure the router quite a bit, and I want to tighten things up. I've never really understood Quagga, and have always found its documentation to be lacking. It appears to be mimicking Cisco, of which I only have a basic understanding.

    BIRD has caught my eye recently. The couple of articles / presentations I found promote it as lightweight and more responsive under stress compared to Quagga. And it actually seems to have very decent documentation. So I'd like to know:

    - Who's running BIRD right now, and in what kind of set-up?
    - How is it stability-wise? I've read about it running in a couple of sites in production.
    - Let's say I don't care at all for a Cisco feel to the configuration. How is configuration, maintenance, monitoring, etc. of BIRD in general?
    - And any other notable experiences you may have with it.
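
    For a feel of the configuration style, here is a minimal BGP session in BIRD 1.x syntax (a sketch using documentation-range addresses and private ASNs, not a drop-in config):

        # /etc/bird/bird.conf (fragment)
        router id 192.0.2.1;

        protocol device {
        }

        protocol kernel {
            export all;     # install learned routes into the kernel table
        }

        protocol bgp upstream1 {
            local as 64512;
            neighbor 192.0.2.254 as 64511;
            import all;
            export none;    # announce nothing back in this sketch
        }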

  • Can compressing Program Files save space *and* give a significant boost to SSD performance?

    - by Christopher Galpin
    Considering solid-state disk space is still an expensive resource, compressing large folders has appeal. Thanks to VirtualStore, could Program Files be a case where it might even improve performance?

    Discovery

    In particular I have been reading:

    - SSD and NTFS Compression Speed Increase?
    - Does NTFS compression slow SSD/flash performance?
    - Will somebody benchmark whole disk compression (HD, SSD) please? (may have to scroll up)

    The first link is particularly dreamy, but maybe its head is a little too far in the clouds. The third link has this sexy semi-log graph (logarithmic scale!). Quote (with notes):

        Using highly compressable data (IOmeter), you get at most a 30x performance increase
        [for reads], and at least a 49x performance DECREASE [for writes].

    Assuming I interpreted and clarified that sentence correctly, this single user's benchmark has me incredibly interested. Although write performance tanks wretchedly, read performance still soars. It gave me an idea.

    Idea: VirtualStore

    It so happens that, thanks to sanity-saving security features introduced in Windows Vista, write access to certain folders such as Program Files is virtualized for non-administrator processes. This means that, in normal (non-elevated) usage, a program or game's attempt to write data to its install location in Program Files (which is perhaps a poor location) is redirected to %UserProfile%\AppData\Local\VirtualStore, somewhere entirely different. Thus, to my understanding, writes to Program Files should primarily only occur when installing an application. This makes compressing it not only a huge source of space gain, but also a potential candidate for performance gain.

    Testing

    The beginning of this post has me a bit timid: it suggests benchmarking NTFS compression on a whole drive is difficult because turning it off "doesn't decompress the objects". However, it seems to me the compact command is perfectly capable of doing so for both drives and individual folders. Could it be only marking them for decompression the next time the OS reads from them? I need to find the answer before I begin my own testing.
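
    For the record, compact can compress, visibly decompress, and report on a tree, which makes the folder-level experiment straightforward (standard compact.exe flags; the path is an example):

        :: compress recursively, ignoring errors, quietly
        compact /c /s:"C:\Program Files\SomeApp" /i /q
        :: visibly uncompress the same tree
        compact /u /s:"C:\Program Files\SomeApp" /i /q
        :: report per-file compression ratios
        compact /s:"C:\Program Files\SomeApp"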

  • What is Apache Synapse?

    - by Aren B
    My website keeps getting hit by odd requests with the following user-agent string:

        Mozilla/4.0 (compatible; Synapse)

    Using our friendly tool Google, I was able to determine this is the hallmark calling-card of our friendly neighborhood Apache Synapse, a 'lightweight ESB (Enterprise Service Bus)'. Now, based on the information I was able to gather, I still have no clue what this tool is used for. All I can tell is that it has something to do with web services and supports a variety of protocols. The info page only leads me to conclude it has something to do with proxies and web services.

    The problem I've run into is that while normally I wouldn't care, we're getting hit quite a bit by Russian IPs (not that Russians are bad, but our site is pretty regionally specific), and when they do, they're shoving weird (not XSS/malicious, at least not yet) values into our query string parameters. Things like &PageNum=-1 or &Brand=25/5/2010 9:04:52 PM. Before I go ahead and block these IPs/this user agent from our site, I'd like some help understanding just what is going on. Any help would be greatly appreciated :)
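
    If blocking does turn out to be the right call, one hedged Apache 2.2-style sketch using stock mod_setenvif (the match pattern and path are assumptions):

        # Tag requests whose User-Agent advertises Synapse, then refuse them
        SetEnvIfNoCase User-Agent "Synapse" block_synapse
        <Directory "/var/www">
            Order Allow,Deny
            Allow from all
            Deny from env=block_synapse
        </Directory>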

  • How to make sysctl network bridge settings persist after a reboot?

    - by user183394
    I am setting up a notebook for software demo purposes. The machine has 8GB RAM, a Core i7 Intel CPU, a 128GB SSD, and runs Ubuntu 12.04 LTS 64-bit. The notebook is used as a KVM host and runs a few KVM guests, all of which use the default virbr0 bridge. To enable them to communicate with each other using multicast, I added the following to the host's /etc/sysctl.conf:

        net.bridge.bridge-nf-call-ip6tables = 0
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-call-arptables = 0

    Afterwards, following man sysctl(8), I issued:

        sudo /sbin/sysctl -p /etc/sysctl.conf

    My understanding is that this should make these settings persist over reboots. I tested it, and was surprised to find the following:

        root@sdn1:/proc/sys/net/bridge# more *tables
        ::::::::::::::
        bridge-nf-call-arptables
        ::::::::::::::
        1
        ::::::::::::::
        bridge-nf-call-ip6tables
        ::::::::::::::
        1
        ::::::::::::::
        bridge-nf-call-iptables
        ::::::::::::::
        1

    All the defaults are coming back! Yes, I can use some kludgy workarounds, such as putting /sbin/sysctl -p /etc/sysctl.conf into the host's /etc/rc.local, but I would rather do it right. Did I misunderstand the man page, or is there something that I missed? Thanks for any hints. -- Zack
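
    One commonly suggested explanation, hedged because it depends on boot ordering: the net.bridge.* keys only exist once the bridge module is loaded, so values applied early in boot are silently skipped and the module's defaults (1) appear later. A less kludgy variant of the rc.local approach is to force the module early and keep the keys in a sysctl.d fragment:

        # /etc/modules -- load bridge before the sysctl settings are applied
        bridge

        # /etc/sysctl.d/60-bridge-nf.conf
        net.bridge.bridge-nf-call-ip6tables = 0
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-call-arptables = 0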

  • Windows 2003 R2 x86 Mini dump fails to write to disk

    - by Randy K
    I have 3 blade servers that are blue-screening with a 0xC2 error, as far as we can tell randomly. When it started happening, I found that the servers weren't set to provide a dump, because they each have 16GB RAM and a 16GB swap file divided over 4 partitions in 4GB files. I set them to provide a small dump file (64K minidump), but the dump files aren't being written. On start-up, the server event log reports both Event ID 45, "The system could not successfully load the crash dump driver," and Event ID 49, "Configuring the page file for crash dump failed. Make sure there is a page file on the boot partition and that it is large enough to contain all physical memory." My understanding is that the small dump shouldn't need a swap file large enough for all physical memory, but the error seems to point to this not being the case. The issue, of course, is that the max swap file size is 4GB, so this seems to be impossible. Can anyone point me where to go from here?
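
    For reference, the dump configuration lives under the CrashControl registry key, so the current state can be confirmed from a shell (read-only queries; CrashDumpEnabled = 3 means small/mini dump):

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v CrashDumpEnabled
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v MinidumpDir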

  • Zimbra MTA settings

    - by user192702
    Hi have some questions for Zimbra v8.0.6GA. Under Configure - MTA - Network, I'm seeing a few settings and am not very clear what to do with them. Web mail MTA Host name Is this for delivering local mail only (ie not for external mails)? According to this link, it says the following. That's a mouthful but what is "composed messages"? Is this for a multi server deployment where the Postfix server for Zimbra isn't installed on the same box that as the rest of the servers? Webmail MTA is used by the Zimbra server for composed messages and must be the location of the Postfix server in the Zimbra MTA. Relay MTA for external delivery My understanding after reading the doc is that if my ISP doesn't force me to relay outgoing mails through them, and I have enabled DNS lookup, I can leave this blank? Inbound SMTP host name Sorry I know this is explained as "If your MX records point to a spam-relay or any other external non-Zimbra server, enter the name of that server in the Inbound SMTP host name field." but I'm not following. Can someone provide an example? MTA Trusted Networks The admin doc says "To set up MTA trusted networks on a per server basis, make sure that MTA trusted networks have been set up as global settings and then go the Configure Servers MTA page and in the MTA Trusted Networks field enter the trusted network addresses for the server." However I see out of the box it has default networks setup for the server whereas on a global level it's blank. Does this mean there is a bug with the install software and I have to copy the setting from the server to the global setting?

  • What can cause an increase in inactive memory and how to reclaim it?

    - by Boaz
    I have a heavy application running on a CentOS server, and I'm seeing some strange memory behavior. Here is a snapshot of a munin graph: As you can see, the amount of committed memory increases gradually, causing the swap file to be used. What strikes me as odd is that the amount of inactive memory keeps growing as well. It is my understanding that inactive memory is memory that has been freed but not yet cleaned by the OS and put back in the free memory pool. It seems that running out of memory is actually caused by this lack of clean-up, but I may be wrong. Can you give some tips to find the cause of the problem and/or cause CentOS to reclaim the inactive memory? Thanks.

    Some extra info:

    1) I have a tmpfs mounted on /tmp, and the number of files stored there grows (but it is double the amount of the inactive memory).

    2) cat /proc/meminfo (at a later stage than the image) gives:

        MemTotal: 14371428 kB
        MemFree: 1207108 kB
        Buffers: 35440 kB
        Cached: 4276628 kB
        SwapCached: 785316 kB
        Active: 9038924 kB
        Inactive: 3902876 kB
        HighTotal: 0 kB
        HighFree: 0 kB
        LowTotal: 14371428 kB
        LowFree: 1207108 kB
        SwapTotal: 10223608 kB
        SwapFree: 6438320 kB
        Dirty: 627792 kB
        Writeback: 0 kB
        AnonPages: 7844560 kB
        Mapped: 49304 kB
        Slab: 146676 kB
        PageTables: 27480 kB
        NFS_Unstable: 0 kB
        Bounce: 0 kB
        CommitLimit: 17409320 kB
        Committed_AS: 16471488 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed: 275852 kB
        VmallocChunk: 34359462007 kB
        HugePages_Total: 0
        HugePages_Free: 0
        HugePages_Rsvd: 0
        Hugepagesize: 2048 kB

    3) The application is a combination of MySQL, Heritrix (http://crawler.archive.org/) and a Tomcat-based Java servlet to manage things.
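
    One experiment sometimes suggested for telling reclaimable cache apart from genuinely pinned pages (hedged: tmpfs pages have no backing store and will not be dropped, which matters given the /tmp note above):

        sync                                  # flush dirty pages first
        echo 3 > /proc/sys/vm/drop_caches     # drop page cache plus dentries/inodes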


  • COM+/Desktop Heap errors in IIS affecting sites at random?

    - by tresstylez
    We have a Win2K3 server that is hosting 30+ sites. Each site is configured to have its own unique application pool, so that we can manually recycle specific sites if needed and not kill sessions for the others. From what I've read, the consequence of this type of setup is that each application pool worker process gets allocated a desktop heap (normally 512 KB), which limits the number of app pools we can serve. http://blogs.msdn.com/b/david.wang/archive/2006/01/25/security-considerations-of-usesharedwpdesktop-on-iis6.aspx

    PROBLEM: What we're seeing is that occasionally COM+ errors get triggered, presumably by hitting our 512 KB limit of the desktop heap, and certain sites become unresponsive (or throw errors) until we manually recycle that specific app pool. I know that I can increase the desktop heap limit to 1024 and make other tweaks, but I've been tasked with finding out what exactly causes one site's heap to max out as opposed to another. It seems that when we start seeing COM+ errors, the sites affected are random: small sites or big (more heavily used) sites. Is it based on process ID? Traffic? Any pointers on understanding this a little more would be excellent. Thanks! jg
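
    For context on where that 512 KB figure lives: it is the third SharedSection value in the Windows subsystem registry key, which is also where the commonly advised 512-to-1024 bump is made (query shown read-only; edit with the usual care):

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems" /v Windows
        :: look for ...SharedSection=1024,3072,512... -- the third number is the desktop
        :: heap, in KB, for non-interactive window stations such as IIS worker processes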

  • PPTP Client setup, Fedora 17

    - by Suarez Romina
    I am trying to connect to hidemyass.com VPN services via PPTP, but I am having trouble understanding why it isn't working, since I get no warning or fatal error, yet my IP remains the same. This is how I create the connection:

        [root@lasvegas-nv-datacenter ~]# pptpsetup --create TUNNELNAME --server 199.58.165.20 \
              --username MYUSERNAME --password MYPASSWORD --encrypt --start

    And this is the output:

        Using interface ppp0
        Connect: ppp0 <--> /dev/pts/1
        CHAP authentication succeeded
        MPPE 128-bit stateless compression enabled
        local IP address 10.200.21.14
        remote IP address 10.200.20.1

    After that, I check the log, and this is what I get:

        [root@lasvegas-nv-datacenter ~]# tail -f /var/log/messages
        Aug 24 11:25:33 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_rep:pptp_ctrl.c:254]: Sent control packet type is 1 'Start-Control-Connection-Request'
        Aug 24 11:25:33 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:754]: Received Start Control Connection Reply
        Aug 24 11:25:33 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:788]: Client connection established.
        Aug 24 11:25:34 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_rep:pptp_ctrl.c:254]: Sent control packet type is 7 'Outgoing-Call-Request'
        Aug 24 11:25:34 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:873]: Received Outgoing Call Reply.
        Aug 24 11:25:34 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:912]: Outgoing call established (call ID 0, peer's call ID 20096).
        Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: CHAP authentication succeeded
        Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: MPPE 128-bit stateless compression enabled
        Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: local IP address 10.200.21.14
        Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: remote IP address 10.200.20.1

    Can someone help me? Basically, I need to connect to the VPN and have my IP changed after the connection. I read a lot of guides but still cannot understand why I don't get a connection.
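
    One hedged observation: pptpsetup brings the tunnel up but does not touch the routing table, so unless a route is added, traffic keeps leaving via the original default gateway and the public IP never changes. A sketch of sending everything through the tunnel (gateway and interface names are placeholders; keep a host route to the VPN server so the tunnel itself stays reachable):

        ip route add 199.58.165.20 via <current-gateway> dev eth0
        ip route replace default dev ppp0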

  • Cisco SG200 vlan issue in ESXi VSA cluster

    - by George
    I have three Cisco SG200-26 switches, and I also have two ESXi hosts that I have connected like shown in the below "best practice" map by VMware: http://communities.vmware.com/servlet/JiveServlet/previewBody/17393-102-1-22458/VSA_networking_map.pdf Even though I created the VLANs in the SG200 and I set the two VLANs (508 and 608) as allowed for these untagged ports (where my ESX NIC's are connected), I can not ping from host 1 to host 2 when configuring the NIC's to use 608 VLAN. Am I missing something? my IP's are all in the 192.168. range, and the only reason I need the VLANs is to isolate the traffic of VSA back-end internally, only the two hosts will be using the VLANs. So I think I do not have to create virtual interfaces on my router since that's the case, is my understanding correct? Also sending my switch config screenshot below.. all 3 switches have the latest firmware (it seems these were originally linksys and got rebranded as cisco after the acquisition) http://img31.imageshack.us/img31/2503/switch.gif Any ideas what to change on the Cisco SG200 to make this work , would be appreciated! The second VLAN (608) only needs two IP's: 192.168.0.1 and 192.168.0.2 The first VLAN (508) will have about 15 IP's for ESXi Management and VSA cluster service, I could use either 192.168.1.xx or 10.0.1.xx The rest of my network (about 50 clients) is in 192.168.1.xx range VMware also states that the VLAN protocol on the physical switch must be 802.1Q, not ISL, anyone knows which of the two my SG200-26 uses? In addition to that, the only requirement from VSA is that my two hosts: -Are in the same subnet. -Have static IP addresses set. -Have the same Default Gateway configured. If I need inter-vlan routing for this, I suppose I have to create virtual interfaces on my sonicwall, and assign an IP for each VLAN, and then set routes between them? Thank you for your time!

  • What DBus signal is sent on system suspend?

    - by Paul Robinson
    Hello, I need to detect when a machine is going to sleep on Ubuntu 9.10 and Fedora 13. Both use UPower, so I've been looking on the "org.freedesktop.UPower" DBus bus for such signals. I've been listening for the "Sleeping" signal on the UPower bus with the following command:

        dbus-monitor --system "type='signal',interface='org.freedesktop.UPower',member='Sleeping'"

    When I sleep the machine (either by closing the lid, selecting "shutdown - suspend", or sending a DBus message), I don't see a "Sleeping" event. I notice that the "Sleeping" signal is sent when the "org.freedesktop.UPower.AboutToSleep" method is invoked. I can do this manually by calling:

        dbus-send --print-reply --system --dest=org.freedesktop.UPower \
            /org/freedesktop/UPower org.freedesktop.UPower.AboutToSleep

    And I notice the "Sleeping" signal is fired. My understanding is that anything that sleeps the PC must send the "AboutToSleep" signal beforehand. It doesn't seem like this is happening. I've tried these steps on both Fedora 13 and Ubuntu 9.10 and I see the same results. Can anyone explain what's happening, or provide me with an alternative DBus signal to listen for?

    Many thanks, Paul
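
    If the DBus route stays unreliable on those releases, a hedged fallback that sidesteps it: both Ubuntu 9.10 and Fedora 13 suspend through pm-utils, and any executable dropped into /etc/pm/sleep.d is invoked around each sleep and resume:

        #!/bin/sh
        # /etc/pm/sleep.d/10_notify-myapp (must be executable; the name is an example)
        case "$1" in
            suspend|hibernate)
                logger "machine is about to sleep"   # react to the impending sleep here
                ;;
            resume|thaw)
                logger "machine woke up"
                ;;
        esac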
