Search Results


  • Building Enterprise Smartphone App – Part 2: Platforms and Features

    - by Tim Murphy
    This is part 2 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback. In the previous post I discussed the reasons a company might have for creating a smartphone application. In this installment I will cover some of the history and state of the different platforms, as well as features that can be leveraged for building enterprise smartphone applications.

    Platforms

    Before you start choosing a platform to develop your solutions on, it is good to understand how we got here and what features you can leverage.

    History

    To my memory we owe all of this to a product called the Apple Newton, which came out in 1993. It was the first PDA, and back then I was much more of an Apple fan. I was very impressed with this device even though it never really went anywhere. The Palm Pilot by US Robotics was the next major advancement in PDAs. It had a simple shorthand window that allowed for quick stylus entry. Later, Windows CE came out and started the broadening of the PDA market. After that it was the Palm and CE operating systems that started showing up on cell phones, and for some time these were the two dominant operating systems distributed with devices from multiple hardware vendors.

    Current

    The iPhone was the first smartphone to take away the stylus and give us a multi-touch interface. It was a revolution in usability and really changed the attractiveness of smartphones for the general public. This brought us to the beginning of the current state of the market, with the concept of an online store that makes it easy for customers to get new features and functionality on demand.

    With Android, Google made this more than a one-horse race. Not only did they come to compete; their low cost actually made them the leading OS. Of course, what makes Android so attractive is also its major fault. It is so open that it has been a target for malware, which leaves consumers exposed. Fortunately for Google, though, most consumers aren't aware of the threat that they are under.

    Although Microsoft had put out one of the first smartphone operating systems with CE, it had to play catch-up and finally came out with Windows Phone. They have gone for a market approach between those of iOS and Android. They support multiple hardware vendors like Google, but they kept a certification process for applications that is similar to Apple's. They also created a user interface that was different enough to give it a clear separation from the other two platforms.

    The result of all this is hundreds of millions of smartphones being sold monthly across all three platforms, giving us a wide range of choices and challenges when it comes to developing solutions.

    Features

    So what are the features that make these devices flexible enough to be considered for use in the enterprise?

    The biggest advantage of today's devices is network connectivity. The ability to access information from multiple sources at a moment's notice is critical for businesses. Add to that the ability to communicate over a variety of text, voice and video modes and we have a powerful starting point.

    Every smartphone has a camera, and they are not just useful for posting to Instagram. We are seeing more applications, such as Bing Vision, that allow us to scan just about any printed code or text to find information.
    These capabilities have been made available to developers in the form of standard libraries for reading barcodes of just about any flavor and for optical character recognition (OCR). Bluetooth gives us the ability to communicate with multiple devices. Whether these are headsets, keyboards or printers, the wireless communication capabilities are just starting to evolve. The more these wireless communication protocols grow, the more opportunities we will see to transfer data between users and a variety of devices.

    Local storage of information that can be called up even when the device cannot reach the network is the other big capability. This gives users the ability to work offline as well as to transmit information when connections are restored.

    These are the tools that we have to work with to build applications that can be leveraged to gain a competitive advantage for the companies that implement them.

    Coming Up

    In the third installment I will cover key concerns that you face when building enterprise smartphone apps.

    del.icio.us Tags: smartphones, enterprise smartphone apps, architecture, iOS, Android, Windows Phone


  • How can I estimate the entropy of a password?

    - by Wug
    Having read various resources about password strength, I'm trying to create an algorithm that will provide a rough estimation of how much entropy a password has. I'm trying to create an algorithm that's as comprehensive as possible. At this point I only have pseudocode, but the algorithm covers the following:

    - password length
    - repeated characters
    - patterns (logical)
    - different character spaces (LC, UC, numeric, special, extended)
    - dictionary attacks

    It does NOT cover the following, and SHOULD cover it WELL (though not perfectly):

    - ordering (passwords can be strictly ordered by output of this algorithm)
    - patterns (spatial)

    Can anyone provide some insight on what this algorithm might be weak to? Specifically, can anyone think of situations where feeding a password to the algorithm would OVERESTIMATE its strength? Underestimations are less of an issue.

    The algorithm:

        // the password to test
        password = ?
        length = length(password)

        // unique character counts from password (duplicates discarded)
        uqlca = number of unique lowercase alphabetic characters in password
        uquca = number of unique uppercase alphabetic characters
        uqd   = number of unique digits
        uqsp  = number of unique special characters (anything with a key on the keyboard)
        uqxc  = number of unique special special characters (alt codes, extended-ascii stuff)

        // algorithm parameters, total sizes of alphabet spaces
        Nlca = total possible number of lowercase letters (26)
        Nuca = total uppercase letters (26)
        Nd   = total digits (10)
        Nsp  = total special characters (32 or something)
        Nxc  = total extended ascii characters that don't fit into other categories (idk, 50?)

        // algorithm parameters, pw strength growth rates as percentages (per character)
        flca = entropy growth factor for lowercase letters (.25 is probably a good value)
        fuca = EGF for uppercase letters (.4 is probably good)
        fd   = EGF for digits (.4 is probably good)
        fsp  = EGF for special chars (.5 is probably good)
        fxc  = EGF for extended ascii chars (.75 is probably good)

        // repetition factors. few unique letters == low factor, many unique == high
        rflca = (1 - (1 - flca) ^ uqlca)
        rfuca = (1 - (1 - fuca) ^ uquca)
        rfd   = (1 - (1 - fd  ) ^ uqd  )
        rfsp  = (1 - (1 - fsp ) ^ uqsp )
        rfxc  = (1 - (1 - fxc ) ^ uqxc )

        // digit strengths
        strength = (rflca * Nlca + rfuca * Nuca + rfd * Nd + rfsp * Nsp + rfxc * Nxc) ^ length
        entropybits = log_base_2(strength)

    A few inputs and their desired and actual entropy_bits outputs:

        INPUT            DESIRED        ACTUAL
        aaa              very pathetic  8.1
        aaaaaaaaa        pathetic       24.7
        abcdefghi        weak           31.2
        H0ley$Mol3y_     strong         72.2
        s^fU¬5ü;y34G<    wtf            88.9
        [a^36]*          pathetic       97.2
        [a^20]A[a^15]*   strong         146.8
        xkcd1**          medium         79.3
        xkcd2**          wtf            160.5

        *  these 2 passwords use shortened notation, where [a^N] expands to N a's.
        ** xkcd1 = "Tr0ub4dor&3", xkcd2 = "correct horse battery staple"

    The algorithm does realize (correctly) that increasing the alphabet size (even by one digit) vastly strengthens long passwords, as shown by the difference in entropy_bits for the 6th and 7th passwords, which both consist of 36 a's, but the second's 21st a is capitalized. However, it does not account for the fact that a password of 36 a's is not a good idea: it's easily broken with a weak password cracker (and anyone who watches you type it will see it), and the algorithm doesn't reflect that. It does, however, reflect the fact that xkcd1 is a weak password compared to xkcd2, despite having greater complexity density (is this even a thing?).

    How can I improve this algorithm?
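    For concreteness, here is a minimal, runnable Python sketch of the pseudocode above. It is not the asker's actual implementation, just a direct transcription using the suggested parameter values; the Nsp/Nxc sizes and the extended-character handling are the same guesses the question makes.

        import math
        import string

        # (character set, alphabet size N, entropy growth factor f) per space;
        # N and f values are the question's suggested defaults, not measured constants
        SPACES = [
            (set(string.ascii_lowercase), 26, 0.25),
            (set(string.ascii_uppercase), 26, 0.40),
            (set(string.digits), 10, 0.40),
            (set(string.punctuation) | {' '}, 32, 0.50),
        ]
        N_XC, F_XC = 50, 0.75  # guessed size/EGF for extended characters

        def entropy_bits(password: str) -> float:
            seen = set(password)
            classified = set()
            base = 0.0
            for chars, n, f in SPACES:
                uq = len(seen & chars)           # unique chars from this space
                base += (1 - (1 - f) ** uq) * n  # repetition factor times N
                classified |= chars
            uq_xc = len(seen - classified)       # everything else is "extended"
            base += (1 - (1 - F_XC) ** uq_xc) * N_XC
            # strength = base ** length, so log2(strength) = length * log2(base)
            return len(password) * math.log2(base) if base > 0 else 0.0

        print(round(entropy_bits("aaa"), 1))  # 8.1, matching the question's table

    (3 * log2(0.25 * 26) ≈ 8.1; other rows in the table depend on exactly how characters are classified, so small deviations from the ACTUAL column are expected.)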
    Addendum 1

    Dictionary attacks and pattern-based attacks seem to be the big thing, so I'll take a stab at addressing those.

    I could perform a comprehensive search through the password for words from a word list and replace words with tokens unique to the words they represent. Word-tokens would then be treated as characters and have their own weight system, and would add their own weights to the password. I'd need a few new algorithm parameters (I'll call them lw, Nw ~= 2^11, fw ~= .5, and rfw) and I'd factor the weight into the password as I would any of the other weights. This word search could be specially modified to match both lowercase and uppercase letters, as well as common character substitutions, like that of E with 3. If I didn't add extra weight to such matched words, the algorithm would underestimate their strength by a bit or two per word, which is OK. Otherwise, a general rule would be: for each non-perfect character match, give the word a bonus bit.

    I could then perform simple pattern checks, such as searches for runs of repeated characters and derivative tests (take the difference between each character), which would identify patterns such as 'aaaaa' and '12345', and replace each detected pattern with a pattern token, unique to the pattern and length. The algorithmic parameters (specifically, entropy per pattern) could be generated on the fly based on the pattern. (A sketch of this tokenization step follows below.)

    At this point, I'd take the length of the password. Each word token and pattern token would count as one character; each token would replace the characters it symbolically represents. I made up some sort of pattern notation, but it includes the pattern length l, the pattern order o, and the base element b. This information could be used to compute some arbitrary weight for each pattern. I'd do something better in actual code.

    Modified example:

        Password:          1234kitty$$$$$herpderp
        Tokenized:         1 2 3 4 k i t t y $ $ $ $ $ h e r p d e r p
        Words Filtered:    1 2 3 4 @W5783 $ $ $ $ $ @W9001 @W9002
        Patterns Filtered: @P[l=4,o=1,b='1'] @W5783 @P[l=5,o=0,b='$'] @W9001 @W9002

        Breakdown: 3 small, unique words and 2 patterns
        Entropy:   about 45 bits, as per modified algorithm

        Password:       correcthorsebatterystaple
        Tokenized:      c o r r e c t h o r s e b a t t e r y s t a p l e
        Words Filtered: @W6783 @W7923 @W1535 @W2285

        Breakdown: 4 small, unique words and no patterns
        Entropy:   43 bits, as per modified algorithm

    The exact semantics of how entropy is calculated from patterns is up for discussion. I was thinking something like:

        entropy(b) * l * (o + 1)   // o will be either zero or one

    The modified algorithm would find flaws with, and reduce the strength of, each password in the original table, with the exception of s^fU¬5ü;y34G<, which contains no words or patterns.
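    As an illustration of the two pattern checks described above (repeated-character runs and the derivative test), here is a small Python sketch. The token strings mirror the @P[l=...,o=...,b=...] notation; the three-character minimum is an arbitrary assumption, not part of the addendum.

        import re

        def tokenize_patterns(password: str):
            """Replace runs ('$$$$$') and constant-step sequences ('1234')
            with pattern tokens; leave other characters as-is."""
            tokens = []
            i = 0
            while i < len(password):
                # run of one repeated character, length >= 3 (assumed threshold)
                m = re.match(r'(.)\1{2,}', password[i:])
                if m:
                    tokens.append("@P[l=%d,o=0,b='%s']" % (len(m.group()), m.group(1)))
                    i += len(m.group())
                    continue
                # derivative test: consecutive characters differing by exactly 1
                j = i + 1
                while j < len(password) and ord(password[j]) - ord(password[j - 1]) == 1:
                    j += 1
                if j - i >= 3:
                    tokens.append("@P[l=%d,o=1,b='%s']" % (j - i, password[i]))
                    i = j
                    continue
                tokens.append(password[i])
                i += 1
            return tokens

        print(tokenize_patterns("1234kitty$$$$$"))
        # ["@P[l=4,o=1,b='1']", 'k', 'i', 't', 't', 'y', "@P[l=5,o=0,b='$']"]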


  • R12.0 Cash Management Consolidated Patch Collection (CPC) And R12.1 Cash Management Recommended Patch Collection (RPC)

    - by user793553
    If you have Oracle E-Business Suite's Cash Management (CE) application installed, you'll want to be sure to install the latest CPC (Consolidated Patch Collection) if you are using an R12.0 version of the apps, or the latest RPC (Recommended Patch Collection) for the R12.1 version of the apps. These collections give you all the fixes currently available for known issues in the specified versions of the application, including all of the latest Root Cause Analysis fixes (RCAs)!

    What is an "RPC" (for R12.1 users)?

    Since the release of 12.1, a number of recommended patches for Oracle Cash Management have been made available as standalone patches to help address important business process issues. Adoption of these patches was highly recommended at the time, but not always implemented, so to further facilitate adoption of these patches, Oracle consolidated them into product-specific Recommended Patch Collections (RPCs) - a collection of recommended patches. They were created by Oracle Development with the following goals in mind:

    - Stability: to address data integrity issues that have been identified by Oracle Development and Oracle Software Support as having the potential to interfere with the normal completion of important business processes (such as period close).
    - Root Cause Fixes (RCAs): to make available root cause fixes for known data integrity issues.
    - Compact: to keep the file footprint as small as possible, to help facilitate the install process and minimize testing.
    - Granular: to compile the collection of patches based on functional areas, allowing a customer to apply multiple RPCs at once, or in phases (based on individual needs and goals).

    Where to start

    ALL R12 Cash Management users (R12.0 and R12.1 users) should start with the following note on My Oracle Support (MOS):

        Doc ID 1367845.1: R12: Cash Management Recommended Patch Collections

    It's a great place for important implementation information about both sets of critical patch collections!

    For R12.1.x users

    R12.1 users should also take a look at the documents below for even more information about the RPC for the R12.1.x versions of the Cash Management application, and other related available RPCs:

        Note Number   Title
        1489997.1     Master Troubleshooting Guide for CE: Reconciliation & Clearing [VIDEO]
        954704.1      EBS: R12.1 Oracle Financials Recommended Patch Collections (RPCs)
        1316506.1     R12: Oracle CE: Upgrading from R11i to R12.1: Latest Recommended Patches

    Patch Wizard Utility

    While a patch may contain several hundred files, the impact on your system may actually be minimal. Patches contain hard prerequisites that are intended to make a patch work on a very low code baseline. The Patch Wizard utility will give you a detailed analysis of the patch's impact on your instance BEFORE it's applied, so you'll know exactly what to expect from the application. Please refer to Doc ID 976188.1 for more information on this important utility.


  • Exploring TCP throughput with DTrace (2)

    - by user12820842
    Last time, I described how we can use the overlap in distributions of unacknowledged byte counts and send window to determine whether the peer's receive window may be too small, limiting throughput. Let's combine that comparison with a comparison of congestion window and slow start threshold, all on a per-port/per-client basis. This will help us:

    - Identify whether the congestion window or the receive window are limiting factors on throughput, by comparing the distributions of congestion window and send window values to the distribution of outstanding (unacked) bytes. This will allow us to get a visual sense for how often we are thwarted in our attempts to fill the pipe due to congestion control versus the peer not being able to receive any more data.
    - Identify whether slow start or congestion avoidance predominates, by comparing the overlap in the congestion window and slow start threshold distributions. If the slow start threshold distribution overlaps with the congestion window, we know that we have switched between slow start and congestion avoidance, possibly multiple times.
    - Identify whether the peer's receive window is too small, by comparing the distribution of outstanding unacked bytes with the send window distribution (i.e. the peer's receive window). I discussed this here.

        # dtrace -s tcp_window.d
        dtrace: script 'tcp_window.d' matched 10 probes
        ^C

          cwnd                         80       10.175.96.92
                 value  ------------- Distribution ------------- count
                  1024 |                                         0
                  2048 |                                         4
                  4096 |                                         6
                  8192 |                                         18
                 16384 |                                         36
                 32768 |@                                        79
                 65536 |@                                        155
                131072 |@                                        199
                262144 |@@@                                      400
                524288 |@@@@@@                                   798
               1048576 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@             3848
               2097152 |                                         0

          ssthresh                     80       10.175.96.92
                 value  ------------- Distribution ------------- count
             268435456 |                                         0
             536870912 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543
            1073741824 |                                         0

          unacked                      80       10.175.96.92
                 value  ------------- Distribution ------------- count
                    -1 |                                         0
                     0 |                                         1
                     1 |                                         0
                     2 |                                         0
                     4 |                                         0
                     8 |                                         0
                    16 |                                         0
                    32 |                                         0
                    64 |                                         0
                   128 |                                         0
                   256 |                                         3
                   512 |                                         0
                  1024 |                                         0
                  2048 |                                         4
                  4096 |                                         9
                  8192 |                                         21
                 16384 |                                         36
                 32768 |@                                        78
                 65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5391
                131072 |                                         0

          swnd                         80       10.175.96.92
                 value  ------------- Distribution ------------- count
                 32768 |                                         0
                 65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543
                131072 |                                         0

    Here we are observing a large file transfer via http on the webserver. Comparing these distributions, we can observe:

    - Slow start congestion control is in operation. The distribution of congestion window values lies below the range of slow start threshold values (which are in the 536870912+ range), so the connection is in slow start mode.
    - Both the unacked byte count and the send window values peak in the 65536-131071 range, but the send window value distribution is narrower. This tells us that the peer TCP's receive window is not closing.
    - The congestion window distribution peaks in the 1048576-2097152 range while the receive window distribution is confined to the 65536-131071 range. Since the cwnd distribution ranges as low as 2048-4095, we can see that for some of the time we have been observing the connection, congestion control has been a limiting factor on transfer, but for the majority of the time the receive window of the peer would more likely have been the limiting factor. However, we know the window has never closed, as the distribution of swnd values stays within the 65536-131071 range.

    So all in all we have a connection that has been mildly constrained by congestion control, but for the bulk of the time we have been observing it, neither congestion control nor the peer's receive window has limited throughput.
    Here's the script:

        #!/usr/sbin/dtrace -s

        tcp:::send
        / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
        {
                @cwnd["cwnd", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize(args[3]->tcps_cwnd);
                @ssthresh["ssthresh", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize(args[3]->tcps_cwnd_ssthresh);
                @unacked["unacked", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize(args[3]->tcps_snxt - args[3]->tcps_suna);
                @swnd["swnd", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize((args[4]->tcp_window) * (1 << args[3]->tcps_snd_ws));
        }

    One surprise here is that slow start is still in operation - one would assume that for a large file transfer, acknowledgements would push the congestion window up past the slow start threshold over time. The slow start threshold is in fact still close to its initial (very high) value, so that would suggest we have not experienced any congestion (the slow start threshold is adjusted when congestion occurs). Also, the above measurements were taken early in the connection lifetime, so the congestion window did not get a chance to be bumped up to the level of the slow start threshold.

    A good strategy when examining these sorts of measurements for a given service (such as a webserver) would be to start by examining the distributions above aggregated by port number only, to get an overall feel for service performance, i.e. is congestion control or peer receive window size an issue, or are we unconstrained to fill the pipe? From there, the overlap of distributions will tell us whether to drill down into specific clients. For example, if the send window distribution has multiple peaks, we may want to examine if particular clients show issues with their receive window.


  • Different fan behaviour in my laptop after upgrade, what to do now?

    - by student
    After upgrading from Lubuntu 13.10 to 14.04, the fan of my laptop seems to run much more often than in 13.10. When it runs, it doesn't run continuously but starts and stops every second. fwts fan results in:

        Results generated by fwts: Version V14.03.01 (2014-03-27 02:14:17).
        Some of this work - Copyright (c) 1999 - 2014, Intel Corp. All rights reserved.
        Some of this work - Copyright (c) 2010 - 2014, Canonical.
        This test run on 12/05/14 at 21:40:13 on host Linux einstein 3.13.0-24-generic
        #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64.
        Command: "fwts fan".
        Running tests: fan.

        fan: Simple fan tests.
        --------------------------------------------------------------------------------
        Test 1 of 2: Test fan status.
        Test how many fans there are in the system. Check for the current status of the
        fan(s).
        PASSED: Test 1, Fan cooling_device0 of type Processor has max cooling state 10
        and current cooling state 0.
        PASSED: Test 1, Fan cooling_device1 of type Processor has max cooling state 10
        and current cooling state 0.
        PASSED: Test 1, Fan cooling_device2 of type LCD has max cooling state 15 and
        current cooling state 10.

        Test 2 of 2: Load system, check CPU fan status.
        Test how many fans there are in the system. Check for the current status of the
        fan(s). Loading CPUs for 20 seconds to try and get fan speeds to change.
        Fan cooling_device0 current state did not change from value 0 while CPUs were busy.
        Fan cooling_device1 current state did not change from value 0 while CPUs were busy.
        ADVICE: Did not detect any change in the CPU related thermal cooling device
        states. It could be that the devices are returning static information back to
        the driver and/or the fan speed is automatically being controlled by firmware
        using System Management Mode in which case the kernel interfaces being examined
        may not work anyway.

        ================================================================================
        3 passed, 0 failed, 0 warning, 0 aborted, 0 skipped, 0 info only.
        ================================================================================
        3 passed, 0 failed, 0 warning, 0 aborted, 0 skipped, 0 info only.
        Test Failure Summary
        ================================================================================
        Critical failures: NONE
        High failures: NONE
        Medium failures: NONE
        Low failures: NONE
        Other failures: NONE

        Test           |Pass |Fail |Abort|Warn |Skip |Info |
        ---------------+-----+-----+-----+-----+-----+-----+
        fan            |    3|     |     |     |     |     |
        ---------------+-----+-----+-----+-----+-----+-----+
        Total:         |    3|    0|    0|    0|    0|    0|
        ---------------+-----+-----+-----+-----+-----+-----+

    Here is the output of lsmod:

        Module                  Size  Used by
        i8k                    14421  0
        zram                   18478  2
        dm_crypt               23177  0
        gpio_ich               13476  0
        dell_wmi               12761  0
        sparse_keymap          13948  1 dell_wmi
        snd_hda_codec_hdmi     46207  1
        snd_hda_codec_idt      54645  1
        rfcomm                 69160  0
        arc4                   12608  2
        dell_laptop            18168  0
        bnep                   19624  2
        dcdbas                 14928  1 dell_laptop
        bluetooth             395423  10 bnep,rfcomm
        iwldvm                232285  0
        mac80211              626511  1 iwldvm
        snd_hda_intel          52355  3
        snd_hda_codec         192906  3 snd_hda_codec_hdmi,snd_hda_codec_idt,snd_hda_intel
        snd_hwdep              13602  1 snd_hda_codec
        snd_pcm               102099  3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel
        snd_page_alloc         18710  2 snd_pcm,snd_hda_intel
        snd_seq_midi           13324  0
        snd_seq_midi_event     14899  1 snd_seq_midi
        snd_rawmidi            30144  1 snd_seq_midi
        coretemp               13435  0
        kvm_intel             143060  0
        kvm                   451511  1 kvm_intel
        snd_seq                61560  2 snd_seq_midi_event,snd_seq_midi
        joydev                 17381  0
        serio_raw              13462  0
        iwlwifi               169932  1 iwldvm
        pcmcia                 62299  0
        snd_seq_device         14497  3 snd_seq,snd_rawmidi,snd_seq_midi
        snd_timer              29482  2 snd_pcm,snd_seq
        lpc_ich                21080  0
        cfg80211              484040  3 iwlwifi,mac80211,iwldvm
        yenta_socket           41027  0
        pcmcia_rsrc            18407  1 yenta_socket
        pcmcia_core            23592  3 pcmcia,pcmcia_rsrc,yenta_socket
        binfmt_misc            17468  1
        snd                    69238  17 snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_hda_codec_idt,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device,snd_seq_midi
        soundcore              12680  1 snd
        parport_pc             32701  0
        mac_hid                13205  0
        ppdev                  17671  0
        lp                     17759  0
        parport                42348  3 lp,ppdev,parport_pc
        firewire_ohci          40409  0
        psmouse               102222  0
        sdhci_pci              23172  0
        sdhci                  43015  1 sdhci_pci
        firewire_core          68769  1 firewire_ohci
        crc_itu_t              12707  1 firewire_core
        ahci                   25819  2
        libahci                32168  1 ahci
        i915                  783485  2
        wmi                    19177  1 dell_wmi
        i2c_algo_bit           13413  1 i915
        drm_kms_helper         52758  1 i915
        e1000e                254433  0
        drm                   302817  3 i915,drm_kms_helper
        ptp                    18933  1 e1000e
        pps_core               19382  1 ptp
        video                  19476  1 i915

    I tried one answer to the similar question "loud fan on Ubuntu 14.04" and created an /etc/i8kmon.conf like the following:

        # Run as daemon, override with --daemon option
        set config(daemon) 1
        # Automatic fan control, override with --auto option
        set config(auto) 1
        # Status check timeout (seconds), override with --timeout option
        set config(timeout) 2
        # Report status on stdout, override with --verbose option
        set config(verbose) 1
        # Temperature thresholds: {fan_speeds low_ac high_ac low_batt high_batt}
        set config(0) {{0 0} -1 55 -1 55}
        set config(1) {{0 1} 50 60 55 65}
        set config(2) {{1 1} 55 80 60 85}
        set config(3) {{2 2} 70 128 75 128}

    With this setup the fan goes on even if the temperature is below 50 degrees Celsius (I don't see a pattern). However, I get the impression that the CPU gets hotter on average than without this file. What changes from 13.10 to 14.04 may be responsible for this? If this is a bug, for which package should I report it?


  • IIS7 web farm - local or shared content?

    - by rbeier
    We're setting up an IIS7 web farm with two servers. Should each server have its own local copy of the content, or should they pull content directly from a UNC share? What are the pros and cons of each approach?

    We currently have a single live server WEB1, with content stored locally on a separate partition. A job periodically syncs WEB1 to a standby server WEB2, using robocopy for content and msdeploy for config. If WEB1 goes down, Nagios notifies us, and we manually run a script to move the IP addresses to WEB2's network interface. Both servers are actually VMs running on separate VMware ESX 4 hosts. The servers are domain-joined.

    We have around 50-60 live sites on WEB1 - mostly ASP.NET, with a few that are just static HTML. Most are low-traffic "microsites". A few have moderate traffic, but none are massive.

    We'd like to change this so both WEB1 and WEB2 are actively serving content. This is mainly for reliability - if WEB1 goes down, we don't want to have to manually intervene to fail things over. Spreading the load is also nice, but the load is not high enough right now for us to need this. We're planning to configure our firewall to balance traffic across the two servers. It will detect when a server goes down and will send all the traffic to the remaining live server. We're planning to use sticky sessions for now... eventually we may move to SQL Server session state and stateless load balancing.

    But we need a way for the servers to share content. We were originally planning to move all the content to a UNC share. Our storage provider says they can set up a highly available SMB share for us. So if we go the UNC route, the storage shouldn't be a single point of failure. But we're wondering about the downsides to this approach:

    - We'll need to change the physical paths for each site and virtual directory. There are also some projects that have absolute paths in their web.config files - we'll have to update those as well.
    - We'll need to create a domain user for the web servers to access the share, and grant that user appropriate permissions. I haven't looked into this yet - I'm not sure if the application pool identity needs to be changed to this user, or if there's another way to tell IIS to use this account when connecting to the share.
    - Sites will no longer be able to access their content if there's ever an Active Directory problem.
    - In general, it just seems a lot more complicated, with more moving parts that could break.

    Our storage provider would create a volume for us on their redundant SAN. If I understand correctly, this SAN volume would be mounted on a VM running in their redundant VMware environment; this VM would then expose the SMB share to our web servers. On the other hand, a benefit of the shared content approach is that we'd only need to deploy code to one place, and there would never be a temporary inconsistency between multiple copies of the content.

    This thread is pretty interesting, though some of these people are working at a much larger scale. I've just been discussing content so far, but we also need to think about configuration. I don't know if we can just use DFS replication for the applicationHost.config and other files, or if it's best to use the shared configuration feature with the config on a UNC share.

    What do you think? Thanks for your help, Richard


  • multiple puppet masters

    - by Oli
    I would like to set up an additional puppet master, but have the CA server handled by only one puppet master. I have set this up as per the documentation here: http://docs.puppetlabs.com/guides/scaling_multiple_masters.html

    I have configured my second puppet master as follows:

        [main]
        ...
        ca = false
        ca_server = puppet-master1.test.net

    I am using Passenger, so I am a bit confused how the virtual-host.conf file should look for my second puppet-master2.test.net. Here is mine (updated as per Shane Madden's answer):

        LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/apache2/mod_passenger.so
        PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18
        PassengerRuby /usr/bin/ruby

        Listen 8140

        <VirtualHost *:8140>
            ProxyPassMatch ^/([^/]+/certificate.*)$ https://puppet-master1.test.net:8140/$1

            SSLEngine on
            SSLProtocol -ALL +SSLv3 +TLSv1
            SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP

            SSLCertificateFile /var/lib/puppet/ssl/certs/puppet-master2.test.net.pem
            SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/puppet-master2.test.net.pem
            #SSLCertificateChainFile /var/lib/puppet/ssl/ca/ca_crt.pem
            #SSLCACertificateFile /var/lib/puppet/ssl/ca/ca_crt.pem
            # If Apache complains about invalid signatures on the CRL, you can try disabling
            # CRL checking by commenting the next line, but this is not recommended.
            #SSLCARevocationFile /var/lib/puppet/ssl/ca/ca_crl.pem
            SSLVerifyClient optional
            SSLVerifyDepth 1
            # The `ExportCertData` option is needed for agent certificate expiration warnings
            SSLOptions +StdEnvVars +ExportCertData

            # This header needs to be set if using a loadbalancer or proxy
            RequestHeader unset X-Forwarded-For
            RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e
            RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
            RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e

            DocumentRoot /etc/puppet/rack/public/
            RackBaseURI /
            <Directory /etc/puppet/rack/>
                Options None
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    I have commented out the SSLCertificateChainFile, SSLCACertificateFile & SSLCARevocationFile - this is not a CA server, so I am not sure I need these. How would I get Passenger to work with these? I would like to use ProxyPassMatch, which I have configured as per the documentation. I don't want to specify a ca_server in every puppet.conf file.

    I am getting this error when trying to create a cert from a puppet client pointing to the second puppet master server (puppet-master2.test.net):

        [root@puppet-client2 ~]# puppet agent --test
        Error: Could not request certificate: Could not intern from s: nested asn1 error
        Exiting; failed to retrieve certificate and waitforcert is disabled

    On the puppet client I have this:

        [main]
        server = puppet-master2.test.net

    What have I missed?

    -- update

    Here is a new virtual host file on my secondary puppet master. Is this correct? I have SSL turned off?
        LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/apache2/mod_passenger.so
        PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18
        PassengerRuby /usr/bin/ruby

        # you probably want to tune these settings
        PassengerHighPerformance on
        PassengerMaxPoolSize 12
        PassengerPoolIdleTime 1500
        # PassengerMaxRequests 1000
        PassengerStatThrottleRate 120
        RackAutoDetect Off
        RailsAutoDetect Off

        Listen 8140

        <VirtualHost *:8140>
            SSLEngine off

            ProxyPassMatch ^/([^/]+/certificate.*)$ https://puppet-master1.test.net:8140/$1

            # Obtain Authentication Information from Client Request Headers
            SetEnvIf X-Client-Verify "(.*)" SSL_CLIENT_VERIFY=$1
            SetEnvIf X-SSL-Client-DN "(.*)" SSL_CLIENT_S_DN=$1

            DocumentRoot /etc/puppet/rack/public/
            RackBaseURI /
            <Directory /etc/puppet/rack/>
                Options None
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    Cheers, Oli


  • Magento installation problem on Nginx in Windows

    - by Nithin
    I am trying to install Magento locally, using nginx as the web server instead of Apache. I copied the magento folder to the html directory. When I try to call the magento folder, I get a 404 Not Found error. I am able to access other PHP files set up in the html folder and have PHP installed. Here is my config file:

        #user nobody;
        worker_processes 1;

        #error_log logs/error.log;
        #error_log logs/error.log notice;
        #error_log logs/error.log info;

        #pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;

            #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
            #                '$status $body_bytes_sent "$http_referer" '
            #                '"$http_user_agent" "$http_x_forwarded_for"';

            #access_log logs/access.log main;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;

            #gzip on;

            server {
                listen 8080;
                server_name localhost;

                #charset koi8-r;
                #access_log logs/host.access.log main;

                location / {
                    root html;
                    index index.html index.htm index.php;
                }

                #error_page 404 /404.html;

                # redirect server error pages to the static page /50x.html
                #
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root html;
                    allow all;
                }

                # proxy the PHP scripts to Apache listening on 127.0.0.1:80
                #
                #location ~ \.php$ {
                #    proxy_pass http://127.0.0.1;
                #}

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
                #
                location ~ \.php$ {
                    root html;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME c:/nginx/html/$fastcgi_script_name;
                    include fastcgi_params;
                }

                # deny access to .htaccess files, if Apache's document root
                # concurs with nginx's one
                #
                #location ~ /\.ht {
                #    deny all;
                #}
            }

            # another virtual host using mix of IP-, name-, and port-based configuration
            #
            #server {
            #    listen 8000;
            #    listen somename:8080;
            #    server_name somename alias another.alias;
            #    location / {
            #        root html;
            #        index index.html index.htm;
            #    }
            #}

            # HTTPS server
            #
            #server {
            #    listen 443;
            #    server_name localhost;
            #    ssl on;
            #    ssl_certificate cert.pem;
            #    ssl_certificate_key cert.key;
            #    ssl_session_timeout 5m;
            #    ssl_protocols SSLv2 SSLv3 TLSv1;
            #    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
            #    ssl_prefer_server_ciphers on;
            #    location / {
            #        root html;
            #        index index.html index.htm;
            #    }
            #}
        }

    How do I fix this? This is what I found in the error.log file:

        2011/09/06 12:22:35 [error] 5632#0: *1 "/cygdrive/c/nginx/html/magento/index.php/install/index.html"
        is not found (20: Not a directory), client: 127.0.0.1, server: localhost,
        request: "GET /magento/index.php/install/ HTTP/1.1", host: "localhost:8080"
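    As the error log shows, nginx is treating Magento's PATH_INFO suffix (/install/) as part of the script filename, because the location ~ \.php$ block only matches URIs that end in .php. A hedged sketch of one common way to handle this - matching .php anywhere in the URI and splitting off the trailing path with fastcgi_split_path_info - is below; the root path is the one from the config above, and this is untested against this exact setup:

        location ~ \.php(/|$) {
            root html;
            fastcgi_split_path_info ^(.+?\.php)(/.*)$;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME c:/nginx/html$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            include fastcgi_params;
        }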


  • Wireless AAA for a small, bandwidth-limited hotel.

    - by Anthony Hiscox
    We (the tech I work with and myself) live in a remote northern town where Internet access is somewhat of a luxury, and bandwidth is quite limited. Here, overage charges ranging from a few hundred to a few thousand dollars a month are not uncommon. I myself incur regular monthly charges just through my regular Internet usage at home (I am allowed 10G for $60CAD!). As part of my work, I have found myself involved with several hotels that are feeling this. I know that I can come up with something to solve this problem, but I am relatively new to system administration and I don't want my dreams to overcome reality. So, I pass these ideas on to you, those with much more experience than I, in hopes you will share some of your thoughts and concerns.

    Requirements:

    - The system must be cost effective; yes, the charges are high here, but the trust in technology is the lowest I've ever seen.
    - Must be capable of helping clients reduce their usage (squid).
    - Allow a limited (throughput and total usage) amount of free Internet, as this is often franchise policy.
    - Allow a user to track their bandwidth usage.
    - Allow (optional) higher speed and/or usage for an additional charge. This fee can be collected at the front desk on checkout and should not require the use of PayPal or a credit card.
    - Unfortunately some franchises have ridiculous policies that require the use of a third-party remote service to authenticate guests to your network. This means WPA is out, and it also means that I do not auth before Internet usage; that will be their job. However, I do require the ABILITY to perform authentication for Internet access if a hotel does not have this policy. I will still have to track bandwidth (under a guest account by default) and provide the same limiting; however, the guest often will require completely 'unlimited' access, in terms of existence, not throughput.
    - Provide firewalling capabilities for hotels that have nothing: Office and Guest network segregation (some of these guys are running their office on the guest network, with no encryption, and a simple TOS to get on!).
    - Prevent guests from connecting to other guests, but provide a means to allow this to happen. I.e. each guest connects to a page and allows the other guest; this writes an iptables rule (with python-netfilter) and allows two rooms to play a game, for instance. (A sketch of this pairing idea follows below.)

    My thoughts on how to implement this: one decent box (we'll call it a router now) with a lot of RAM and 3 NICs:

    - Internet
    - Office
    - Guests (APs + in-room Ethernet)

    Router firewall rules: guests can talk to the router only, through which they are routed to where they need to go, including Internet services. Office can be used to bridge Office to Internet if an existing solution is not in place; otherwise, it simply works for a network-accessible web (webmin + python-webmin?) interface.

    Router software: OpenVZ provides virtualization for a few services I don't really trust: Squid, FreeRADIUS and Apache. The only service directly accessible to guests is Apache. Apache has mod_wsgi and Django, because I can write quickly using Django and my needs are low. It also potentially has the FreeRADIUS mod, but there seem to be some caveats with this. Firewall rules are handled on the router with iptables. Webmin (or maybe a custom Django app) provides abstracted control over any features that the staff may need to access. Python, if you haven't guessed, is the language I feel most comfortable in, and I use it for almost everything.
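    To make the room-pairing idea concrete, here is a minimal Python sketch. It shells out to iptables via subprocess rather than using python-netfilter (whose API is not assumed here), and the interface name and guest IPs are hypothetical; it assumes the FORWARD chain otherwise drops guest-to-guest traffic.

        import ipaddress
        import subprocess

        GUEST_IF = "eth2"  # hypothetical guest-network interface on the router

        def allow_pair(ip_a: str, ip_b: str) -> None:
            """Insert symmetric ACCEPT rules so two guest rooms can reach each other."""
            for src, dst in ((ip_a, ip_b), (ip_b, ip_a)):
                ipaddress.ip_address(src)  # reject malformed input early
                ipaddress.ip_address(dst)
                subprocess.run(
                    ["iptables", "-I", "FORWARD",
                     "-i", GUEST_IF, "-o", GUEST_IF,
                     "-s", src, "-d", dst, "-j", "ACCEPT"],
                    check=True)

        # e.g. called from a Django view once both rooms have opted in
        allow_pair("10.20.0.101", "10.20.0.102")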
    And finally: has this been done, is it an overly massive project not worth taking on for one guy, and/or are there some tools I'm missing that could be making my life easier? For the record, I am fairly good with Python, but not very familiar with many other languages (I can struggle through PHP; it's a cosmetic issue there). I am also an avid Linux user, and comfortable with config files and the command line. Thank you for your time; I look forward to reading your responses.

    Edit: My apologies if this is not a Q&A in the sense that some were expecting; I'm just looking for ideas and to make sure I'm not trying to do something that's been done. I'm looking at pfSense now as a possible start for what I need.


  • Why do we get a sudden spike in response times?

    - by Christian Hagelid
    We have an API that is implemented using ServiceStack and hosted in IIS. While performing load testing of the API, we discovered that the response times are good, but that they deteriorate rapidly as soon as we hit about 3,500 concurrent users per server. We have two servers, and when hitting them with 7,000 users the average response times sit below 500ms for all endpoints. The boxes are behind a load balancer, so we get 3,500 concurrents per server. However, as soon as we increase the number of total concurrent users we see a significant increase in response times. Increasing the concurrent users to 5,000 per server gives us an average response time per endpoint of around 7 seconds.

    The memory and CPU on the servers are quite low, both while the response times are good and after they deteriorate. At peak, with 10,000 concurrent users, the CPU averages just below 50% and the RAM sits around 3-4 GB out of 16. This leaves us thinking that we are hitting some kind of limit somewhere. The below screenshot shows some key counters in perfmon during a load test with a total of 10,000 concurrent users. The highlighted counter is requests/second. To the right of the screenshot you can see the requests-per-second graph becoming really erratic. This is the main indicator for slow response times. As soon as we see this pattern, we notice slow response times in the load test.

    How do we go about troubleshooting this performance issue? We are trying to identify whether this is a coding issue or a configuration issue. Are there any settings in web.config or IIS that could explain this behaviour? The application pool is running .NET v4.0 and the IIS version is 7.5. The only change we have made from the default settings is to update the application pool Queue Length value from 1,000 to 5,000. We have also added the following config settings to the Aspnet.config file:

        <system.web>
            <applicationPool
                maxConcurrentRequestsPerCPU="5000"
                maxConcurrentThreadsPerCPU="0"
                requestQueueLimit="5000" />
        </system.web>

    More details: The purpose of the API is to combine data from various external sources and return it as JSON. It is currently using an in-memory cache implementation to cache individual external calls at the data layer. The first request to a resource will fetch all the data required, and any subsequent requests for the same resource will get results from the cache. We have a 'cache runner' that is implemented as a background process that updates the information in the cache at certain set intervals. We have added locking around the code that fetches data from the external resources. We have also implemented the services to fetch the data from the external sources in an asynchronous fashion, so that the endpoint should only be as slow as the slowest external call (unless we have data in the cache, of course). This is done using the System.Threading.Tasks.Task class. Could we be hitting a limitation in terms of the number of threads available to the process?


  • Setting up RADIUS + LDAP for WPA2 on Ubuntu

    - by Morten Siebuhr
    I'm setting up a wireless network for ~150 users. In short, I'm looking for a guide to set up a RADIUS server to authenticate WPA2 against an LDAP. On Ubuntu.

    I got a working LDAP, but as it is not in production use, it can very easily be adapted to whatever changes this project may require. I've been looking at FreeRADIUS, but any RADIUS server will do. We got a separate physical network just for WiFi, so not too many worries about security on that front. Our APs are HP's low-end enterprise stuff - they seem to support whatever you can think of. All Ubuntu Server, baby!

    And the bad news: I know somebody less knowledgeable than me will eventually take over administration, so the setup has to be as "trivial" as possible. So far, our setup is based only on software from the Ubuntu repositories, with the exception of our LDAP administration web application and a few small special scripts. So no "fetch package X, untar, ./configure" things if avoidable.

    UPDATE 2009-08-18: While I found several useful resources, there is one serious obstacle:

        Ignoring EAP-Type/tls because we do not have OpenSSL support.
        Ignoring EAP-Type/ttls because we do not have OpenSSL support.
        Ignoring EAP-Type/peap because we do not have OpenSSL support.

    Basically, the Ubuntu version of FreeRADIUS does not support SSL (bug 183840), which makes all the secure EAP types useless. Bummer. But some useful documentation for anybody interested:

        http://vuksan.com/linux/dot1x/802-1x-LDAP.html
        http://tldp.org/HOWTO/html_single/8021X-HOWTO/#confradius

    UPDATE 2009-08-19: I ended up compiling my own FreeRADIUS package yesterday evening - there's a really good recipe at http://www.linuxinsight.com/building-debian-freeradius-package-with-eap-tls-ttls-peap-support.html (see the comments to the post for updated instructions). I got a certificate from http://CACert.org (you should probably get a "real" cert if possible). Then I followed the instructions at http://vuksan.com/linux/dot1x/802-1x-LDAP.html. This links to http://tldp.org/HOWTO/html_single/8021X-HOWTO/, which is a very worthwhile read if you want to know how WiFi security works.

    UPDATE 2009-08-27: After following the above guide, I've managed to get FreeRADIUS to talk to LDAP. I've created a test user in LDAP, with the password mr2Yx36M - this gives an LDAP entry roughly of:

        uid: testuser
        sambaLMPassword: CF3D6F8A92967E0FE72C57EF50F76A05
        sambaNTPassword: DA44187ECA97B7C14A22F29F52BEBD90
        userPassword: {SSHA}Z0SwaKO5tuGxgxtceRDjiDGFy6bRL6ja

    When using radtest, I can connect fine:

        > radtest testuser "mr2Yx36N" sbhr.dk 0 radius-private-password
        Sending Access-Request of id 215 to 130.225.235.6 port 1812
                User-Name = "msiebuhr"
                User-Password = "mr2Yx36N"
                NAS-IP-Address = 127.0.1.1
                NAS-Port = 0
        rad_recv: Access-Accept packet from host 130.225.235.6 port 1812, id=215, length=20
        >

    But when I try through the AP, it doesn't fly - while it does confirm that it figures out the NT and LM passwords:

        ...
        rlm_ldap: sambaNTPassword -> NT-Password == 0x4441343431383745434139374237433134413232463239463532424542443930
        rlm_ldap: sambaLMPassword -> LM-Password == 0x4346334436463841393239363745304645373243353745463530463736413035
        [ldap] looking for reply items in directory...
        WARNING: No "known good" password was found in LDAP. Are you sure that the user is configured correctly?
        [ldap] user testuser authorized to use remote access
        rlm_ldap: ldap_release_conn: Release Id: 0
        ++[ldap] returns ok
        ++[expiration] returns noop
        ++[logintime] returns noop
        [pap] Normalizing NT-Password from hex encoding
        [pap] Normalizing LM-Password from hex encoding
        ...

    It is clear that the NT and LM passwords differ from the above, yet the message says [ldap] user testuser authorized to use remote access - and the user is later rejected...


  • broadcom 5722 NIC not installed on Ubuntu Server, although driver present

    - by Bastien
    Hello, I just installed Ubuntu Server 10.04 LTS, running kernel 2.6.32-24-server, on a brand new Dell T110 server, supposedly fully compatible with Ubuntu Server. I have two NICs: one onboard, the other additional on PCI. Both of them are Broadcom NetXtreme 5722.

    On the first boot of the system, I could see both cards as eth0 and eth1 (with ifconfig). I configured eth0 as static IP (as planned), and did not configure eth1. After rebooting, one of the two NICs "disappeared": it does not appear in ifconfig at all. The one that disappeared is the onboard one. I investigated a bit and found the following things.

    The card is SEEN, but not "installed"; it appears as "UNCLAIMED" in lshw:

        *-network UNCLAIMED
             description: Ethernet controller
             product: NetXtreme BCM5722 Gigabit Ethernet PCI Express
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:04:00.0
             version: 00
             width: 64 bits
             clock: 33MHz
             capabilities: pm vpd msi pciexpress cap_list
             configuration: latency=0
             resources: memory:df9f0000-df9fffff
        *-network
             description: Ethernet interface
             product: NetXtreme BCM5722 Gigabit Ethernet PCI Express
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:05:00.0
             logical name: eth0
             version: 00
             serial: 00:10:18:60:23:64
             size: 100MB/s
             capacity: 1GB/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.102 duplex=full firmware=5722-v3.09 ip=10.129.167.25 latency=0 link=yes multicast=yes port=twisted pair speed=100MB/s
             resources: irq:35 memory:dfaf0000-dfafffff

    So I checked my dmesg and found a few strange lines showing there actually is a problem bringing up this card:

        [    3.737506] tg3: Could not obtain valid ethernet address, aborting.
        [    3.737527] tg3 0000:04:00.0: PCI INT A disabled
        [    3.737535] tg3: probe of 0000:04:00.0 failed with error -22
        [    3.737553] alloc irq_desc for 17 on node -1
        [    3.737555] alloc kstat_irqs on node -1
        [    3.737560] tg3 0000:05:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
        [    3.737566] tg3 0000:05:00.0: setting latency timer to 64
        [    3.793529] eth0: Tigon3 [partno(BCM95722A2202G) rev a200] (PCI Express) MAC address 00:10:18:60:23:64
        [    3.793532] eth0: attached PHY is 5722/5756 (10/100/1000Base-T Ethernet) (WireSpeed[1])
        [    3.793534] eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] TSOcap[1]
        [    3.793536] eth0: dma_rwctrl[76180000] dma_mask[64-bit]

    That actually shows that one NIC is recognized, the other is not.
    I researched a bit more, with lspci -v:

        04:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express
                Subsystem: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express
                Flags: fast devsel, IRQ 16
                Memory at df9f0000 (64-bit, non-prefetchable) [size=64K]
                Capabilities: [48] Power Management version 3
                Capabilities: [50] Vital Product Data <?>
                Capabilities: [58] Vendor Specific Information <?>
                Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
                Capabilities: [d0] Express Endpoint, MSI 00
                Capabilities: [100] Advanced Error Reporting <?>
                Capabilities: [13c] Virtual Channel <?>
                Capabilities: [160] Device Serial Number 00-00-00-fe-ff-00-00-00
                Kernel modules: tg3

        05:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express
                Subsystem: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express
                Flags: bus master, fast devsel, latency 0, IRQ 35
                Memory at dfaf0000 (64-bit, non-prefetchable) [size=64K]
                Expansion ROM at <ignored> [disabled]
                Capabilities: [48] Power Management version 3
                Capabilities: [50] Vital Product Data <?>
                Capabilities: [58] Vendor Specific Information <?>
                Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable+
                Capabilities: [d0] Express Endpoint, MSI 00
                Capabilities: [100] Advanced Error Reporting <?>
                Capabilities: [13c] Virtual Channel <?>
                Capabilities: [160] Device Serial Number 64-23-60-fe-ff-18-10-00
                Capabilities: [16c] Power Budgeting <?>
                Kernel driver in use: tg3
                Kernel modules: tg3

    Here I could see that the device serial number of the failing card is 00-00-00-FE-FF-00-00-00, which corresponds to an all-zero MAC address and which, according to some forum posts on several websites, could be the issue. I've researched everything I could on the net and found several people having slightly comparable issues, but they usually involve different hardware and do not provide a proper explanation or solution... I would appreciate it if anyone around here has some info to share! Thanks


  • Ubuntu server random hangups.

    - by Ebbe
    Hello all. This is my first post to this forum, which I found through the superb podcast "IT Conversations" from StackOverflow.

    I am quite new in my role as server administrator for an exhibition center in London. Basically we have a central file and SQL server to which roughly 40 stations connect to upload/download data used/captured by a set of applications. Over the last weeks we have experienced a few random hangups in our applications, and as it always happens to multiple applications simultaneously, I do not believe that the applications are the source of the problem. We also monitor the network using Dartware InterMapper, which indicates that all switches and stations on the network have been reachable during the downtime. Thus, it's all pointing to the server.

    I have been looking through all the log files I can think of, and the only thing so far that I have found suspicious is the following lines in the syslog, which are from the time of one of the hangups:

        Feb  6 17:14:27 es named[5582]: client 127.0.0.1#33721: RFC 1918 response from Internet for 150.0.168.192.in-addr.arpa
        Feb  6 17:14:40 es named[5582]: client 127.0.0.1#32899: RFC 1918 response from Internet for 152.0.168.192.in-addr.arpa
        Feb  6 17:15:01 es /USR/SBIN/CRON[1956]: (es) CMD (/home/es/apps/es/bin/es_checksum.sh)
        Feb  6 17:16:06 es /USR/SBIN/CRON[2031]: (es) CMD (/home/es/apps/es/bin/es_checksum.sh)
        Feb  6 17:21:00 es named[5582]: *** POKED TIMER ***
        Feb  6 17:21:00 es last message repeated 2 times
        Feb  6 17:21:07 es named[5582]: client 127.0.0.1#44194: RFC 1918 response from Internet for 143.0.168.192.in-addr.arpa
        Feb  6 17:21:12 es named[5582]: client 127.0.0.1#59004: RFC 1918 response from Internet for 164.0.168.192.in-addr.arpa

    I find a few interesting lines here:

    1) "RFC 1918 response from Internet for 150.0.168.192.in-addr.arpa". I see this a lot in the syslog, and basically every time I do an nslookup for any of the computers in the cluster I get a new similar line in the syslog. I understand from Google that this has to do with reverse lookup problems. But I do not know how that could affect the systems. Let's say that one of these lines appears every time one of the user stations connects to the server, which may happen several times a second. Could this possibly cause a hangup of the entire server?

    2) POKED TIMER. I have googled this quite a lot, but not found an explanation that I can relate to. What does this mean?

    3) The timestamps. It seems like the entire server stopped responding for several minutes. Normally there are many printouts to the syslog per minute on this server. Furthermore, the cron job is set to run once every minute. Which, according to the log, hasn't happened here.

    OS: Ubuntu 8.04. Kernel: Linux 2.6.24-24-server x86_64 GNU/Linux. Hardware: Dell R710, RAID1, CPU: 2x Xeon E5530, 16GB memory. Average load is very low, and memory should not be a problem.

    Please let me know if you need any additional information. Best wishes, Ebbe


  • Postfix - Gmail - Mountain Lion // can't send mail

    - by miako
    I have read most of the tutorials found on Google but still can't make it work. I run the command: date | mail -s "Test" [email protected]. The log is this:

        Oct 22 11:38:00 XXX.local postfix/master[288]: daemon started -- version 2.9.2, configuration /etc/postfix
        Oct 22 11:38:00 XXX.local postfix/pickup[289]: 9D85418A031: uid=501 from=<me>
        Oct 22 11:38:00 XXX.local postfix/cleanup[291]: 9D85418A031: message-id=<[email protected]>
        Oct 22 11:38:00 XXX.local postfix/qmgr[290]: 9D85418A031: from=<[email protected]>, size=327, nrcpt=1 (queue active)
        Oct 22 11:38:00 XXX.local postfix/smtp[293]: initializing the client-side TLS engine
        Oct 22 11:38:02 XXX.local postfix/smtp[293]: setting up TLS connection to smtp.gmail.com[173.194.70.109]:587
        Oct 22 11:38:02 XXX.local postfix/smtp[293]: smtp.gmail.com[173.194.70.109]:587: TLS cipher list "ALL:!EXPORT:!LOW:+RC4:@STRENGTH:!eNULL"
        Oct 22 11:38:02 XXX.local postfix/smtp[293]: SSL_connect:before/connect initialization
        Oct 22 11:38:02 XXX.local postfix/smtp[293]: SSL_connect:SSLv2/v3 write client hello A
        Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 read server hello A
        Oct 22 11:38:03 XXX.local postfix/smtp[293]: smtp.gmail.com[173.194.70.109]:587: certificate verification depth=2 verify=0 subject=/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
        Oct 22 11:38:03 --- last message repeated 1 time ---
        Oct 22 11:38:03 XXX.local postfix/smtp[293]: smtp.gmail.com[173.194.70.109]:587: certificate verification depth=1 verify=1 subject=/C=US/O=Google Inc/CN=Google Internet Authority G2
        Oct 22 11:38:03 XXX.local postfix/smtp[293]: smtp.gmail.com[173.194.70.109]:587: certificate verification depth=0 verify=1 subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
        Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 read server certificate A
        Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 read server done A
        Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 write client key exchange A
        Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 write change cipher spec A
        Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 write finished A
        Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 flush data
        Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 read server session ticket A
        Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 read finished A
        Oct 22 11:38:03 XXX.local postfix/smtp[293]: smtp.gmail.com[173.194.70.109]:587: subject_CN=smtp.gmail.com, issuer_CN=Google Internet Authority G2, fingerprint E4:CA:10:85:C3:53:00:E6:A1:D2:AC:C4:35:E4:A2:10, pkey_fingerprint=D6:06:2E:15:AF:DF:E9:50:A5:B4:E2:E4:C5:2E:F9:BA
        Oct 22 11:38:03 XXX.local postfix/smtp[293]: Untrusted TLS connection established to smtp.gmail.com[173.194.70.109]:587: TLSv1 with cipher RC4-SHA (128/128 bits)
        Oct 22 11:38:03 XXX.local postfix/smtp[293]: 9D85418A031: to=<[email protected]>, relay=smtp.gmail.com[173.194.70.109]:587, delay=3.4, delays=0.26/0.13/2.8/0.26, dsn=5.5.1, status=bounced (host smtp.gmail.com[173.194.70.109] said: 530-5.5.1 Authentication Required. Learn more at 530 5.5.1 http://support.google.com/mail/bin/answer.py?answer=14257 s3sm54097220eeo.3 - gsmtp (in reply to MAIL FROM command))
        Oct 22 11:38:04 XXX.local postfix/cleanup[291]: D4D2F18A03C: message-id=<[email protected]>
        Oct 22 11:38:04 XXX.local postfix/qmgr[290]: D4D2F18A03C: from=<>, size=2382, nrcpt=1 (queue active)
        Oct 22 11:38:04 XXX.local postfix/bounce[297]: 9D85418A031: sender non-delivery notification: D4D2F18A03C
        Oct 22 11:38:04 XXX.local postfix/qmgr[290]: 9D85418A031: removed
        Oct 22 11:38:04 XXX.local postfix/local[298]: D4D2F18A03C: to=<[email protected]>, relay=local, delay=0.11, delays=0/0.08/0/0.02, dsn=2.0.0, status=sent (delivered to mailbox)
        Oct 22 11:38:04 XXX.local postfix/qmgr[290]: D4D2F18A03C: removed
        Oct 22 11:39:00 XXX.local postfix/master[288]: master exit time has arrived

    I am really confused, as I have never set up an MTA before, and I need it for local web development. I don't use XAMPP; I use the built-in servers. Can anyone guide me?
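    The bounce itself is Gmail refusing the message because the session never authenticated ("530 5.5.1 Authentication Required"). A minimal sketch of the usual SASL relay settings, assuming the credentials live in /etc/postfix/sasl_passwd (the address and password below are placeholders):

        sudo postconf -e 'relayhost = [smtp.gmail.com]:587'
        sudo postconf -e 'smtp_sasl_auth_enable = yes'
        sudo postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
        sudo postconf -e 'smtp_sasl_security_options = noanonymous'
        sudo postconf -e 'smtp_use_tls = yes'
        echo '[smtp.gmail.com]:587 [email protected]:your-app-password' | sudo tee /etc/postfix/sasl_passwd
        sudo postmap /etc/postfix/sasl_passwd
        sudo chmod 600 /etc/postfix/sasl_passwd*
        sudo postfix reload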

    Read the article

  • creating a hierarchy of terminals or workspaces

    - by intuited
    <rant> This question occurred to me ('occurred' meaning 'whispered seductively in my ear for the 100th time') while using GNU Screen, so I'll make that my example. However, this is a much more general question about user interfaces and what I perceive as a flaw, a missing feature, in every implementation I've yet seen. I'm wondering if there is some way to create a hierarchy/tree of terminals in a screen session. E.g. I'd like to have something like:

        1 bash
        1.1 bash
        1.2 bash
        2 bash
        3 bash
        3.1 bash
        3.1.1 bash
        3.1.2 bash

    It would be good if the terminals could be labelled instead of having to be navigated to via some arrangement that I suspect doesn't exist. Then you could jump to one using e.g. ^A:goto happydays or ^A:goto dykstra.angry.

    To generalize that: Firefox, Chrome, Internet Explorer, gnome-terminal, roxterm, konsole, yakuake, OpenOffice, Microsoft Office, Mr. Snuffaluppagus's Funtime Carousel™, and Your Mom's Jam Browser™ all offer the ability to create a flat set of tabs containing documents of an identical nature: web pages, terminals, documents, fun rideable animals, and jams. GNU Screen implements the same functionality without using tabs. Linux and OS X window managers provide the ability to organize windows into an array of workspaces, which amounts to, again, the same deal. Over the past few years this has become a more or less ubiquitous concept, righteously welcomed into the far reaches of the computer interface funfest. Heavy users of these systems quickly encounter a problem with it: the set of entities is flat. In the case of workspaces, an option may be available to create a 2D array. However, none of these applications furnish their users with the ability to create hierarchies, similar to filesystem directory structures, containing instances of their particular contained type. I for one am consistently bothered by this, and am wondering if the community can offer some wisdom as to why this has not happened in any of the foremost collections of computational functionality our culture has yet produced. Or perhaps it has and I'm just an ignorant savage.

    I'd like to be able not only to group things into a tree structure, but also to create references (aka symbolic links, aka pointers) from one part of the structure to another, as well as apply properties (e.g. default directory, colorscheme, ...) recursively downward from a given node. I see no reason why we shouldn't be able to save these structures as known sessions, and apply tags to particular instances. Then you could sort through them by tag, find them by name, or just use the arrow keys (with an appropriate modifier) to move left or right and in or out of a given level. Another key combo would serve to create a branch in the place of the current terminal/webpage/lifelike statue/spreadsheet/spreadsheet sheet/presentation/jam and move that entity into the new branch, then create a fresh one as a sibling to it: a second leaf node within the same branch node. They would get along well.

    I find it a bit astonishing that this hasn't happened yet, and the only guess I can venture is that the creators of these fine systems do not consider such functionality useful to a significant portion of their userbase. I posit that the probability that such an assumption is correct is pretty low. On the other hand, given the relative ease with which such structures can be implemented using modern libraries/languages, it doesn't seem likely that difficulty of implementation would be a major roadblock. If it could be done in 1972 or whenever within the constraints of a filesystem driver, it should be relatively painless to implement in 2010 in a full-blown application. Given that all of these systems are capable of maintaining a set of equivalent entities, it seems unlikely that a major infrastructure overhaul would be necessary to enable a navigable hierarchy of them. </rant>

    Mostly I'm just looking to start up a discussion and/or brainstorming on this topic. Any ideas, examples, criticism, or analysis are quite welcome. (A partial approximation using tmux is sketched below.)
    * Mr. Snuffaluppagus's Funtime Carousel is a registered trademark of Children's Television Workshop Inc.
    * Your Mom's Jam Browser is a registered trademark of Your Mom Inc.
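    For what it's worth, tmux's session:window addressing gets part of the way to the dotted labels described above: two levels only, but jumpable by name. A small sketch (session and window names are illustrative):

        tmux new-session -d -s happydays          # a top-level node, "1"
        tmux new-window  -t happydays -n mail     # a child, "1.1"
        tmux new-window  -t happydays -n build    # another child, "1.2"
        tmux select-window -t happydays:build     # jump by label, like ":goto"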

    Read the article

  • RAID and Partitions: Guidance Needed

    - by beauregarde
    Alright, I have a Biostar TA790GX3A2+ mobo, 2x Seagate 750GB hard drives (with 2 different speeds), an X4 9750, a GeForce 9800GT, and 2GB RAM. I want to configure my computer with partitions in various RAID arrays.
    The partitions I know I want (disk letters are mostly for reference here):
    C: XP boot
    D: XP swap
    E: XP run
    F: Games
    G: Data
    The partitions I think I want (repeat caveat):
    H: small FAT for Windows legacy and DOS
    I: Linux
    J: Linux swap
    K-?M?: other Linux /whatever partitions
    N & O: attic for D1 and D2
    What I'd like to do is have C: written on disk 1 (D1); D: on D2; E: and F: striped on D1 & D2; G: mirrored on D1 & D2; I: on D2 (so I can just switch disk boot priority to open in Ubuntu); J: on D1; and H: somewhere low on D1. I am inexperienced with VMs, so I am unsure whether those run out of XP or whether I need to reserve a primary partition for them. However, I think they would be preferable to scheduling a partition for testing new OSes. I'm also not married to XP, but 64-bit IS pretty important to me.
    Question time:
    1) Ignoring the irrationality of it all, is such a configuration possible? If not, can some pseudo-approximation be achieved?
    2) My RAID is software, isn't it?
    3) How much should I short a 750GB HD? And should I use that space for my attics, for my attics and something else, or for something else entirely (.isos perhaps)?
    4) If XP is striped on D1 & D2, will that interfere egregiously with my swap writes on D2? If so, would striping both XP and swap relieve (or at least mitigate) that issue? Should XP and swap just be written normally on 2 different HDs?
    5) Should I keep downloads and drivers on E: (XP run), F: (Games), or elsewhere?
    6) Is 4GB enough for C:?
    7) Is 30GB enough (or too much) for E:?
    8) How much should I reserve for the Linux and sub-Linux partitions? Also, where on the platter do you think I should put them?
    9) Am I a fool to use FAT16 instead of FAT32 for H: because I'd rather run 95 than 98SE? If not, do you think 2GB or 4GB?
    10) I can't predict what my max commit charge will be, so any recommendations for pagefile size? 5GB? 12GB?
    11) VMs: where do I run them? Do they exacerbate anything? Would it be better to just emulate Linux, 95, and DOS?
    EC) What haven't I considered that I really should?
    Notes: the computer is mostly for playing games and watching media, though I wouldn't rule out the use of particularly blah-intensive anything.
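    On question 2: if the arrays are built in Linux with mdadm, they are software RAID (consumer motherboard "RAID" is typically fakeRAID, i.e. also software, just driver-assisted). A minimal sketch of the striped and mirrored pieces, assuming the two disks are /dev/sda and /dev/sdb and the partition numbers are placeholders:

        # RAID0 (stripe) for the run/games space, RAID1 (mirror) for data
        sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda5 /dev/sdb5
        sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda6 /dev/sdb6
        # Record the arrays so they assemble at boot
        sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf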

    Read the article

  • Have I pushed the limits of my current VPS or is there room for optimization?

    - by JRameau
    I am currently on a Media Temple DV server (basic) with 512MB dedicated RAM; this is a CentOS-based VPS with Plesk and Virtuozzo. My experience with it from day 1 has been bad, and I could only soothe my server issues with several caching "band-aids," but my sites are not as small as they were a year ago either, so the issues have worsened. I have 3 Drupal installs running on separate (Plesk) domains. One of those Drupal installs is a multisite that consists of 5-6 sites; 2 of those sites bring in actual traffic. The caching "band-aids" I mentioned are APC, which seemed to help a lot initially, and Drupal's Boost, which is considered a poor man's Varnish; it makes all my pages static for anonymous users. Last-30-day combined estimate on Google Analytics: 90k visitors, 260k pageviews.

    Issue: a lot of downtime. I am continually checking whether my sites are up, and lately I have been finding them down more than 3 times daily. Restarting Apache will bring them back up, for some time. I have Google-searched every error message and looked up ways to optimize my DV server, and I am beyond stumped about my next move. Is this server bad? Have I hit an impossibly low restriction such as the 12MB kernel memory barrier (kmemsize)? Is it on my end? Do I need to optimize some more? I have provided as much information as I can below; any help or suggestions will be appreciated.

    Common error messages I see in the log:

        [error] (12)Cannot allocate memory: fork: Unable to fork new process
        [error] make_obcallback: could not import mod_python.apache.\n Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/mod_python/apache.py", line 21, in ? import traceback File "/usr/lib/python2.4/traceback.py", line 3, in ? import linecache ImportError: No module named linecache
        [error] python_handler: no interpreter callback found.
        [warn-phpd] mmap cache can't open /var/www/vhosts/***/httpdocs/*** - Too many open files in system (pid ***)
        [alert] Child 8125 returned a Fatal error... Apache is exiting!
        [emerg] (43)Identifier removed: couldn't grab the accept mutex
        [emerg] (22)Invalid argument: couldn't release the accept mutex

    cat /proc/user_beancounters (Version: 2.5, uid 41548):

        resource       held      maxheld   barrier    limit       failcnt
        kmemsize       4582652   5306699   12288832   13517715    21105036
        lockedpages    0         0         600        600         0
        privvmpages    38151     42676     229036     249036      0
        shmpages       16274     16274     17237      17237       2
        dummy          0         0         0          0           0
        numproc        43        46        300        300         0
        physpages      27260     29528     0          2147483647  0
        vmguarpages    0         0         131072     2147483647  0
        oomguarpages   27270     29538     131072     2147483647  0
        numtcpsock     21        29        300        300         0
        numflock       8         8         480        528         0
        numpty         1         1         30         30          0
        numsiginfo     0         1         1024       1024        0
        tcpsndbuf      648440    675272    2867477    4096277     1711499
        tcprcvbuf      301620    359716    2867477    4096277     0
        othersockbuf   4472      4472      1433738    2662538     0
        dgramrcvbuf    0         0         1433738    1433738     0
        numothersock   12        12        300        300         0
        dcachesize     0         0         2684271    2764800     0
        numfile        3447      3496      6300       6300        3872
        dummy          0         0         0          0           0
        dummy          0         0         0          0           0
        dummy          0         0         0          0           0
        numiptent      14        14        200        200         0

    top (in January the load average was really high, 3-10; I was able to bring it down to where it currently is by giving APC more memory to play around with):

        top - 16:46:07 up 2:13, 1 user, load average: 0.34, 0.20, 0.20
        Tasks: 40 total, 2 running, 37 sleeping, 0 stopped, 1 zombie
        Cpu(s): 0.3% us, 0.1% sy, 0.0% ni, 99.7% id, 0.0% wa, 0.0% hi, 0.0% si
        Mem: 916144k total, 156668k used, 759476k free, 0k buffers
        Swap: 0k total, 0k used, 0k free, 0k cached

    MySQLTuner (after optimizing every table and repairing any table with overage, I got the fragmented count down to 86):

        [--] Data in MyISAM tables: 285M (Tables: 1105)
        [!!] Total fragmented tables: 86
        [--] Up for: 2h 44m 38s (409K q [41.421 qps], 6K conn, TX: 1B, RX: 174M)
        [--] Reads / Writes: 79% / 21%
        [--] Total buffers: 58.0M global + 2.7M per thread (100 max threads)
        [!!] Query cache prunes per day: 675307
        [!!] Temporary tables created on disk: 35% (7K on disk / 20K total)
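    The two [!!] MySQLTuner lines suggest the query cache is churning and in-memory temporary tables are spilling to disk. A minimal sketch of my.cnf additions to trial, with purely illustrative values (not tuned numbers, and they do eat into the VPS's scarce memory):

        sudo tee -a /etc/my.cnf <<'EOF'
        [mysqld]
        query_cache_size    = 32M
        tmp_table_size      = 32M
        max_heap_table_size = 32M
        EOF
        sudo /etc/init.d/mysqld restart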

    Read the article

  • mod_evasive not working properly on Ubuntu 10.04

    - by Joe Hopfgartner
    I have an Ubuntu 10.04 server where I installed mod_evasive using apt-get install libapache2-mod-evasive. I have already tried several configurations; the result stays the same. The blocking does work, but randomly. I tried with low limits and long blocking periods as well as short limits. The behaviour I expect is that I can request websites until either the page or the site limit is reached per given interval. After that I expect to be blocked until I have not made another request for as long as the blocking period. However, the actual behaviour is that I can request sites and after a while I get random 403 blocks, which increase and decrease in percentage, but they are very scattered. This is an output of siege, so you get an idea:

        HTTP/1.1 200 0.09 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 403 0.08 secs: 242 bytes ==> /robots.txt
        HTTP/1.1 200 0.08 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 403 0.08 secs: 242 bytes ==> /robots.txt
        HTTP/1.1 200 0.11 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 403 0.08 secs: 242 bytes ==> /robots.txt
        HTTP/1.1 200 0.08 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 403 0.09 secs: 242 bytes ==> /robots.txt
        HTTP/1.1 200 0.08 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 200 0.09 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 200 0.08 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 200 0.09 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 403 0.08 secs: 242 bytes ==> /robots.txt
        HTTP/1.1 200 0.08 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 403 0.08 secs: 242 bytes ==> /robots.txt
        HTTP/1.1 200 0.10 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 403 0.08 secs: 242 bytes ==> /robots.txt
        HTTP/1.1 200 0.08 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 403 0.09 secs: 242 bytes ==> /robots.txt
        HTTP/1.1 200 0.10 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 403 0.09 secs: 242 bytes ==> /robots.txt
        HTTP/1.1 200 0.09 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 200 0.08 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 200 0.09 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 200 0.08 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 200 0.10 secs: 75 bytes ==> /robots.txt
        HTTP/1.1 200 0.08 secs: 75 bytes ==> /robots.txt

    The exact limits in place during this test run were:

        DOSHashTableSize 3097
        DOSPageCount 10
        DOSSiteCount 100
        DOSPageInterval 10
        DOSSiteInterval 10
        DOSBlockingPeriod 120
        DOSLogDir /var/log/mod_evasive
        DOSEmailNotify ***@gmail.com
        DOSWhitelist 127.0.0.1

    So I would expect to be blocked for at least 120 seconds after being blocked once. Any ideas about this? I also tried adding my configuration in different places (vhost, server config, directory context) and with or without the IfModule directive... This doesn't change anything.
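    Interleaved siege workers can blur the picture; a single sequential loop makes the expected 200-then-403 transition easier to see. A quick sketch (URL and request count are placeholders). It is worth noting that scattered blocks like this are often attributed to mod_evasive keeping a separate hash table in each prefork Apache child, so each child counts requests independently:

        # Fire 50 sequential requests and tally the status codes
        for i in $(seq 1 50); do
            curl -s -o /dev/null -w '%{http_code}\n' http://localhost/robots.txt
        done | sort | uniq -c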

    Read the article

  • Network Restructure Method for Double-NAT network

    - by Adrian
    Due to a series of poor network design decisions (mostly) made many years ago in order to save a few bucks here and there, I have a network that is decidedly sub-optimally architected. I'm looking for suggestions to improve this less-than-pleasant situation. We're a non-profit with a Linux-based IT department and a limited budget. (Note: none of the Windows equipment we have does anything that talks to the Internet, nor do we have any Windows admins on staff.)
    Key points:
    - We have a main office and about 12 remote sites that essentially double-NAT their subnets with physically segregated switches. (No VLANing, and limited ability to do so with the current switches.)
    - These locations have a "DMZ" subnet that is NAT'd on an identically assigned 10.0.0/24 subnet at each site. These subnets cannot talk to the DMZs at any other location because we don't route them anywhere except between the server and the adjacent "firewall".
    - Some of these locations have multiple ISP connections (T1, cable, and/or DSL) that we manually route using IP tools in Linux.
    - These firewalls all run on the (10.0.0/24) network and are mostly "pro-sumer" grade firewalls (Linksys, Netgear, etc.) or ISP-provided DSL modems.
    - Connecting these firewalls (via simple unmanaged switches) are one or more servers that must be publicly accessible.
    - Connected to the main office's 10.0.0/24 subnet are servers for email, telecommuter VPN, the remote-office VPN, and the primary router to the internal 192.168/24 subnets. These have to be accessed from specific ISP connections based on traffic type and connection source.
    - All our routing is done manually or with OpenVPN route statements. Inter-office traffic goes through the OpenVPN service in the main 'Router' server, which has its own NATing involved.
    - Remote sites only have one server installed at each site and cannot afford multiple servers due to budget constraints. These servers are all LTSP servers serving 5-20 terminals each.
    - The 192.168.2/24 and 192.168.3/24 subnets are mostly, but NOT entirely, on Cisco 2960 switches that can do VLANs. The remainder are D-Link DGS-1248 switches that I am not sure I trust well enough to use with VLANs. There is also some remaining internal concern about VLANs, since only the senior networking staff person understands how they work.
    - All regular Internet traffic goes through the CentOS 5 router server, which in turn NATs the 192.168/24 subnets to the 10.0.0.0/24 subnets according to the manually configured routing rules we use to point outbound traffic to the proper Internet connection, based on '-host' routing statements.
    I want to simplify this and ready All Of The Things for ESXi virtualization, including the public-facing services. Is there a no- or low-cost solution that would get rid of the double NAT and restore a little sanity to this mess so that my future replacement doesn't hunt me down?
    Basic diagram for the main office: (diagram not reproduced here)
    These are my goals:
    - Public-facing servers with interfaces on that middle 10.0.0/24 network to be moved into the 192.168.2/24 subnet on ESXi servers.
    - Get rid of the double NAT and get our entire network onto one single subnet. My understanding is that this is something we'll need to do under IPv6 anyway, but I think this mess is standing in the way.
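    For the single-subnet goal, the end state on the CentOS edge router could be one NAT layer only. A minimal sketch (interface names and the consolidated subnet are assumptions, not the current layout):

        # eth0 = ISP uplink, eth1 = internal LAN
        iptables -t nat -A POSTROUTING -o eth0 -s 192.168.0.0/16 -j MASQUERADE
        iptables -A FORWARD -i eth1 -o eth0 -s 192.168.0.0/16 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT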

    Read the article

  • Weird Network Behavior of Home Router

    - by Stilgar
    First of all, I would like to apologize because what you are going to read will be long and confusing, but I have been fighting this issue for 3 days now and am out of ideas. At home I have the following setup:
    - A 50Mbps Internet connection comes into home router A.
    - 2 desktop computers connect to router A via standard FTP LAN cables, including one where the cable is ~20m long.
    - A second router B connects to router A via standard FTP LAN cable X (~20m long).
    - Several devices connect to the wireless network of router B, and a couple of desktop computers are connected to it through FTP LAN cables.
    For some reason, computers connected to router B when it is connected via cable X have a very slow Internet connection, about 5 times slower than expected. This is the actual problem I am trying to solve.
    Interesting facts:
    - If a computer is connected to cable X directly instead of through router B, the Internet speed is just fine (up to the 50Mbps I get from the ISP). Tested with two computers.
    - I have tried replacing router B with another router C, and the problem persists.
    - If I connect router B via another cable to the same ports with the same settings, everything seems to work fine and computers connected to router B have quite fast Internet.
    - I have tested mainly via Speedtest.net, but I have also achieved similar speeds when downloading a file.
    - The upload speed is quite a bit higher than the download speed in all cases. Note that my ISP usually has higher upload speed (unless it manages to hit the 50Mbps cap).
    - It seems like the speed when connecting through router B with cable X is reduced 4-5 times no matter what the original speed is. For example, via router B I get 10Mbps to local servers where I get 50Mbps when connected to router A. If I use a distant server where the ISP is only able to provide 25Mbps, I get 4-5Mbps on router B.
    - WiFi is slower than LAN on both routers (which is normal), but the speed is reduced proportionally for WiFi. In addition, the upload speed is normally higher from the ISP, and it is also reduced proportionally.
    - I have tried two different network configurations: one with NAT behind NAT, where router B connects to router A via the WAN port and has its own DHCP; and a second where router B connects to router A via a standard LAN port and has DHCP disabled. In the second configuration router B serves as a switch, and the network gateway for computers connected to router B is the internal IP address of router A. Both configurations work just fine, but both manifest the reduced-speed issue.
    - Pings seem to work just fine.
    - As far as I can tell, none of the cables is crossed.
    The RJ45 pinout for cable X: orange, orange-white, brown, brown-white, blue, blue-white, green, green-white.
    This is a big problem for me, since cable X passes through walls and floors and is very hard to replace. I may also have gotten some of the facts wrong, because I am almost going crazy with this issue, and testing involves going several floors up and down the staircase. One hypothesis I came up with is that the cable is defective in such a way that the voltage from the router affects its performance: when it is connected to a computer it performs just fine, but the router has less power. A related hypothesis is that the cable is affected by electricity cables in the walls when the voltage is low. (I know nothing about electricity.) So, any ideas what to do, what to test, or what the issue may be?
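    One way to take the ISP out of the equation is to measure raw throughput across cable X with iperf, first with router B in the path and then with a machine plugged into cable X directly (the server address is illustrative):

        # On a machine on router B's LAN side:
        iperf -s
        # On a desktop on router A's side:
        iperf -c 192.168.1.10 -t 30 -i 5

    If the direct run hits wire speed and the through-router-B run does not, the cable/router combination is implicated; if both are slow, the cable alone is. Checking the negotiated link speed on both ends (100Mbps vs 1000Mbps) would also be telling, since a marginal pair often forces a fallback to a lower rate.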

    Read the article

  • Is the Cloud ready for an Enterprise Java web application? Seeking JEE hosting advice.

    - by Jakub Holý
    Greetings to all the smart people around here! I'd like to ask whether it is feasible, or a good idea at all, to deploy an enterprise Java web application to a cloud such as Amazon EC2. More exactly, I'm looking for infrastructure options for an application that shall handle a few hundred users with long but neither CPU- nor memory-intensive sessions. I'm considering dedicated servers, virtual private servers (VPSs), and EC2. I've noticed that there is a project called JBoss Cloud, so people are working on enabling such deployments; on the other hand, it doesn't seem to be mature yet, and I'm not sure that the cloud is ready for this kind of application, which differs from typical cloud-based applications like Twitter. Would you recommend deploying it to the cloud? What are the pros and cons?

    The application is a Java EE 5 web application whose main function is to enable users to compose their own customized Product by combining the available Parts. It uses stateless and stateful session beans and JPA for persistence of entities to an RDBMS, and fetches information about Parts from the company's inventory system via a web service. Aside from external users, it's also used by a few internal ones, who are authenticated against the company's LDAP. The application should handle around 300-400 concurrent users building their products, and should be reasonably scalable and available, though these qualities are only of medium importance at this stage.

    I've proposed an architecture consisting of a firewall (FW) and a load balancer supporting sticky sessions and HTTPS (in the cloud this would be replaced with EC2's Elastic Load Balancing service plus a FW on the app servers; in a physical architecture the load balancer would be hardware), then two physical clustered application servers combined with web servers (so that if one fails, a user doesn't lose his/her long-built product), and finally a database server. The DB server would need a slave backup instance that can replace the master instance if it fails. This should provide reasonable availability and fault tolerance, and good scalability as long as a single RDBMS can keep up with the load, which should be OK for quite a while, because most of the operations are done in memory using a stateful bean and only occasionally stored in or retrieved from the DB, and the amount of data is low too. A problematic part could be the dependency on the remote inventory-system web service, but with good caching of its outputs in the application it should be OK too.

    Unfortunately, I have only a vague idea of the system resources (memory size, number and speed of CPUs/cores) that such an "average Java EE application" for a few hundred users needs. My rough and mostly unfounded estimate, based on actual Amazon offerings, is that 1.7GB and a single 2-core "modern CPU" at around 2.5GHz (the High-CPU Medium Instance) should be sufficient for either of the two application servers (since we can handle higher load by provisioning more of them). Alternatively, I would consider using the Large instance (64-bit, 7.5GB RAM, 2 cores at 1GHz).

    So my question is whether such a deployment to the cloud is technically and financially feasible, or whether dedicated/VPS servers would be a better option, and whether there are real-world experiences with something similar. Thank you very much!
    /Jakub Holy
    PS: I've found the "JBoss EAP in a Cloud" case study, which shows that it is possible to deploy a real-world Java EE application to the EC2 cloud, but unfortunately there are no details regarding topology, instance types, or anything :-(
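    A back-of-envelope check of that sizing estimate (every number here is an assumption): if each of 400 concurrent users holds roughly 0.5MB of stateful-bean state, that is about 200MB, plus a JVM/app-server baseline of around 512MB:

        # MB: session state (400 users x 512KB) + JVM baseline
        echo $(( 400 * 512 / 1024 + 512 ))    # prints 712

    That would leave comfortable headroom inside a 1.7GB High-CPU Medium instance, which supports the proposed sizing.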

    Read the article

  • Active Directory Time Synchronisation - Time-Service Event ID 50

    - by George
    I have an Active Directory domain with two DCs. The first DC in the forest/domain is Server 2012; the second is 2008 R2. The first DC holds the PDC Emulator role. I sporadically receive a warning from the Time-Service source, event ID 50:

        The time service detected a time difference of greater than %1 milliseconds for %2 seconds. The time difference might be caused by synchronization with low-accuracy time sources or by suboptimal network conditions. The time service is no longer synchronized and cannot provide the time to other clients or update the system clock. When a valid time stamp is received from a time service provider, the time service will correct itself.

    Time sync in the domain is configured with the second DC synchronising using the /syncfromflags:DOMHIER flag. The first DC is configured to sync time using /syncfromflags:MANUAL /reliable:YES, from a peer list consisting of a number of UK-based stratum 2 servers, such as ntp2d.mcc.ac.uk. I'm confused about why I receive this event warning. It implies that my PDC Emulator cannot synchronise time with a supposedly reliable external time source, and it quotes a time difference of 5 seconds over 900 seconds. It's worth also mentioning that I used to use a UK pool from ntp.org, but I would receive the warning much more often. Since updating to a number of UK-based academic time servers, it seems to be more reliable. Can someone with more experience shed some light on this? Perhaps it is purely transient. Should I disregard the warning? Is my configuration sound?

    EDIT: I should add that the DCs are virtual, and installed on two separate VMware ESXi/vSphere physical hosts. I can also confirm that, as per MDMarra's comment and best practice, VMware time sync is disabled, since "c:\Program Files\VMware\VMware Tools\VMwareToolboxCmd.exe timesync status" returns Disabled.

    EDIT 2: Some strange new issue has cropped up, and I've noticed a pattern. Originally, the event ID 50 warnings would occur at about 12:30pm each day. This is interesting, since our Veeam backup happens at 12 midday. Since I made the changes discussed here, I now receive an event ID 51 instead of 50. The new warning says: "The time sample received from peer server.ac.uk differs from the local time by -40 seconds" (or approximately 40 seconds). This has happened two days in a row. Now I'm even more confused. Obviously the time never updates until I manually intervene. The issue seems to be related to virtualisation and Veeam; something may be occurring when Veeam is backing up the PDCe. Any suggestions?

    UPDATE & SUMMARY: msemack's excellent list of resources below (the accepted answer) provided enough information to correctly configure the time service in the domain. This should be the first port of call for anyone looking to verify their configuration. The final "40-second jump" issue I resolved (there are no more warnings) by adjusting the VMware time sync settings as noted in the Veeam knowledge base article here: http://www.veeam.com/kb1202. In any case, whether a future reader uses ESXi and Veeam or not, the resources here are an excellent source of information on the time sync topic, and msemack's answer is particularly invaluable.
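    For reference, a sketch of the w32tm commands that typically implement this split (the peer list is an example, not a recommendation):

        :: On the PDC Emulator
        w32tm /config /manualpeerlist:"ntp2d.mcc.ac.uk,0x8" /syncfromflags:MANUAL /reliable:YES /update
        w32tm /resync /rediscover
        :: On the second DC
        w32tm /config /syncfromflags:DOMHIER /update
        w32tm /resync
        :: Verify on either
        w32tm /query /status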

    Read the article

  • UFW as an active service on Ubuntu

    - by lamcro
    Every time I restart my computer and check the status of the UFW firewall (sudo ufw status), it is disabled, even if I then enable and restart it. I tried putting sudo ufw enable as one of the startup applications, but it asks for the sudo password every time I log on, and I'm guessing it does not protect anyone else who logs on to my computer. How can I set up UFW so it is activated when I turn on my computer and protects all accounts?

    Update: I just tried /etc/init.d/ufw start, and it activated the firewall. Then I restarted the computer, and again it was disabled.

    Content of /etc/ufw/ufw.conf:

        # /etc/ufw/ufw.conf
        #
        # set to yes to start on boot
        ENABLED=yes
        # set to one of 'off', 'low', 'medium', 'high'
        LOGLEVEL=full

    Content of /etc/default/ufw:

        # /etc/default/ufw
        #
        # Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
        # accepted). You will need to 'disable' and then 'enable' the firewall for
        # the changes to take affect.
        IPV6=no
        # Set the default input policy to ACCEPT, ACCEPT_NO_TRACK, DROP, or REJECT.
        # ACCEPT enables connection tracking for NEW inbound packets on the INPUT
        # chain, whereas ACCEPT_NO_TRACK does not use connection tracking. Please note
        # that if you change this you will most likely want to adjust your rules.
        DEFAULT_INPUT_POLICY="DROP"
        # Set the default output policy to ACCEPT, ACCEPT_NO_TRACK, DROP, or REJECT.
        # ACCEPT enables connection tracking for NEW outbound packets on the OUTPUT
        # chain, whereas ACCEPT_NO_TRACK does not use connection tracking. Please note
        # that if you change this you will most likely want to adjust your rules.
        DEFAULT_OUTPUT_POLICY="ACCEPT"
        # Set the default forward policy to ACCEPT, DROP or REJECT. Please note that
        # if you change this you will most likely want to adjust your rules
        DEFAULT_FORWARD_POLICY="DROP"
        # Set the default application policy to ACCEPT, DROP, REJECT or SKIP. Please
        # note that setting this to ACCEPT may be a security risk. See 'man ufw' for
        # details
        DEFAULT_APPLICATION_POLICY="SKIP"
        # By default, ufw only touches its own chains. Set this to 'yes' to have ufw
        # manage the built-in chains too. Warning: setting this to 'yes' will break
        # non-ufw managed firewall rules
        MANAGE_BUILTINS=no
        #
        # IPT backend
        #
        # only enable if using iptables backend
        IPT_SYSCTL=/etc/ufw/sysctl.conf
        # extra connection tracking modules to load
        IPT_MODULES="nf_conntrack_ftp nf_nat_ftp nf_conntrack_irc nf_nat_irc"

    Update: Followed your advice and ran update-rc.d, with no luck.

        lester@mcgrath-pc:~$ sudo update-rc.d ufw defaults
        update-rc.d: warning: /etc/init.d/ufw missing LSB information
        update-rc.d: see <http://wiki.debian.org/LSBInitScripts>
        Adding system startup for /etc/init.d/ufw ...
        /etc/rc0.d/K20ufw -> ../init.d/ufw
        /etc/rc1.d/K20ufw -> ../init.d/ufw
        /etc/rc6.d/K20ufw -> ../init.d/ufw
        /etc/rc2.d/S20ufw -> ../init.d/ufw
        /etc/rc3.d/S20ufw -> ../init.d/ufw
        /etc/rc4.d/S20ufw -> ../init.d/ufw
        /etc/rc5.d/S20ufw -> ../init.d/ufw
        lester@mcgrath-pc:~$ ls -l /etc/rc?.d/*ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc0.d/K20ufw -> ../init.d/ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc1.d/K20ufw -> ../init.d/ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc2.d/S20ufw -> ../init.d/ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc3.d/S20ufw -> ../init.d/ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc4.d/S20ufw -> ../init.d/ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc5.d/S20ufw -> ../init.d/ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc6.d/K20ufw -> ../init.d/ufw
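    As a stopgap until the init-script ordering is sorted out, the enable call can go in /etc/rc.local, which runs as root at the end of boot (this assumes nothing later in the boot sequence flushes the rules):

        # In /etc/rc.local, above the final "exit 0" line:
        /usr/sbin/ufw --force enable

    The --force flag skips ufw's interactive confirmation prompt, so it should be safe to run unattended.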

    Read the article

  • What would make a noise in a PC on graphics operations on a passively-cooled system?

    - by T.J. Crowder
    I have this system based on the Intel D510MO motherboard, which is basically an Atom D510 (dual-core HT Atom with built-in GPU), an Intel NM10 chipset, and a Realtek gigabit LAN controller. It's entirely passively cooled. I noticed almost immediately that there was a kind of very, very soft noise that corresponded with graphics operations, sort of the noise you'd get if you slid something really light across a sheet of flat paper, but more electronic than that. I wrote it off as observation error and/or disk activity triggered by the graphics operation (although the latter seemed like a lot of unnecessary disk activity). It isn't. I got curious enough that I finally did a few controlled experiments, and here's what I've determined:
    - It isn't the HDD. For one thing, the sounds the HDD makes (when seeking, when reading or writing, when just sitting there spinning) are different. For another, I used sudo hdparm -y /dev/sda (I'm using Ubuntu 10.04 LTS) to temporarily put the disk on standby while making sure that a non-disk graphics op was happening in a loop. The disk spun down, but the other sound continued, corresponding perfectly with the timing of the graphics op. (Then the disk spun up again, but that takes long enough that I could rule out the HDD.)
    - It isn't the monitor; I ensured the two were well physically separated, and the sound was definitely coming from the main box.
    - It isn't something else in the room; the sound is coming from the box.
    - It isn't cross-talk to an audio circuit coming out of the speakers. (It doesn't have any speakers.)
    - It isn't my mouse (e.g., when I'm trying to make graphics ops happen); the sound happens if I set up a recurring operation and don't use the mouse at all, or if I lift the mouse off the table slightly (but enough that the laser still registers movement).
    - It isn't the voices in my head; they never whisper like that.
    Other observations:
    - It doesn't seem to matter what the graphics operation is; anything that changes what's on the screen seems to do it. I get the sound when moving the mouse over the Chromium tab bar (which makes the tab backgrounds change); I get it when a web page has a counter on it that changes the text on the page; I get it when dragging window contents around.
    - The sound is very, very slightly louder if the graphics op is larger, like scrolling a text area when writing a question on superuser.com, than for smaller operations like the tick counter on the web page. But it's very slight.
    - It's fairly loud (and of good duration) when the op involves color changes to substantial surface areas. For instance, when asking a question here on superuser, if you move the cursor between the question box and the tag box, the help to the right fades out, changes, and fades back in. (Yet another example related to the web browser, so let me say: I hear it during operations completely unrelated to the web browser as well.)
    - It doesn't sound like arcing or anything like that (I'd've shut off the machine Right Quick Like if it did).
    - Moving windows does it. Scrolling windows (by and large) doesn't.
    I have the feeling I've heard this sort of thing before, when all system fans were on low and such, with other systems, but had (again) written it off as observational error. For all the world it's like I'm hearing the CPU working (as opposed to the GPU; note the window-scroll point above) or data being transferred somewhere, but that just seems... unlikely. So what am I hearing?
    This may seem like a very localized question, but perhaps other silent-PC enthusiasts may be interested as well...

    Read the article

  • Why do we need different CPU architectures for server & mini/mainframe & mixed-core?

    - by claws
    Hello, I was just wondering what CPU architectures are available other than Intel's and AMD's, and found the List of CPU architectures on Wikipedia. It categorizes notable CPU architectures into the following groups:
    - Embedded CPU architectures
    - Microcomputer CPU architectures
    - Workstation/server CPU architectures
    - Mini/mainframe CPU architectures
    - Mixed-core CPU architectures
    I was analyzing their purposes and have a few doubts. I am taking the microcomputer (PC) CPU architecture as a reference and comparing the others.

    Embedded CPU architectures: These are a completely new world. Embedded systems are small and do very specific tasks, mostly real-time and low-power, so we do not need as many or as wide registers as are available in a microcomputer CPU (a typical PC). In other words, we do need a new, small, tiny architecture: hence a new architecture and a new instruction set (RISC). This point also clarifies why we need a separate operating system (an RTOS).

    Workstation/server CPU architectures: I don't know what a workstation is; could someone clarify? As for the server, it is dedicated to running specific software (server software like httpd, MySQL, etc.). Even if other processes run, we need to give the server process priority; therefore there is a need for a new scheduling scheme, and thus we need an operating system different from the general-purpose one. If you have any more points on the need for a server OS, please mention them. But I don't get why we need a new CPU architecture. Why can't the microcomputer CPU architecture do the job? Can someone please clarify?

    Mini/mainframe CPU architectures: Again, I don't know what these are, or what minis or mainframes are used for. I just know they are very big and occupy an entire floor, but I have never read about the real-world problems they are trying to solve. If anyone works on one of these, please share your knowledge. Can someone clarify their purpose, and why the microcomputer CPU architecture is not suitable for them? Is there a new kind of operating system for these too? Why?

    Mixed-core CPU architectures: I have never heard of these.

    If possible, please keep your answer in this format:
    XYZ CPU architectures
    - Purpose of XYZ
    - Need for a new architecture. Why can't current microcomputer CPU architectures do the job? They go up to 3GHz and have up to 8 cores.
    - Need for a new operating system. Why do we need a new kind of operating system for this kind of architecture?

    Read the article

< Previous Page | 186 187 188 189 190 191 192 193 194 195 196 197  | Next Page >