Search Results

Search found 6172 results on 247 pages for 'limit choices to'.


  • UPS vs Solar Power in case of power failure for a server [on hold]

    - by Zen 8000k
    I am looking for a low-power, low-end PC able to run 24/7 without overheating, plus a way to keep it running through power failures that can last up to 72 hours. The PC doesn't need a monitor or keyboard. A modem must also stay powered during an outage. When I say low end, I don't mean crap: the CPU needs to be x86 and score at least 1,000 in this chart: http://www.cpubenchmark.net/index.php What's the best way to do this?
    EDIT: more info. I need to run a home server that will mainly perform light tasks; an x86 CPU sadly is the only route for my use. I want to keep both the server and the router/modem running during a power failure. Regarding how long the power may be out: 1) 1 hour covers most situations (say 90%). 2) 3 hours is OK (say 98%). 3) 6 hours is more than OK (say 99.5%). 4) In extreme cases the power might fail for days, but I believe that is very unlikely. More is great but, really, how often will power fail for more than 3 hours? Once a year at most, and that's too rare to care about. Given the above, I am looking for a cost-effective way to achieve 1-3 hours of backup power, or 6 hours if possible.
    Solutions suggested so far: 1) Power generator: no good, as power still drops for about 10 seconds before it returns. I also read online that "clean" generators cost $1.5k+, so that's out of budget - and a non-clean generator might damage electronics, right? 2) Solar power: I don't know for sure about this. It sounds like a great idea - honestly, too good to be true. For only $200 I get 100+ W? What are the drawbacks here? 3) UPS: this seems to be the best option; the only problem is the cost. Under $200 would be great; $400 is the budget limit.
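    Whichever option wins, a rough runtime estimate makes the comparison concrete. Below is a minimal sketch with illustrative numbers only (none of them come from the question):

      #!/bin/sh
      # Rough runtime estimate: hours ~= battery_Wh * efficiency / load_W.
      # All figures below are assumptions for illustration, not measurements.
      BATTERY_WH=1200      # e.g. two 12 V 50 Ah deep-cycle batteries
      EFFICIENCY_PCT=80    # typical inverter/charging losses
      LOAD_W=40            # low-power x86 box plus modem/router
      echo "approx. runtime: $(( BATTERY_WH * EFFICIENCY_PCT / 100 / LOAD_W )) hours"

    The same arithmetic applies to a solar setup if you substitute the battery bank feeding the inverter; the panel wattage only determines how quickly that bank recharges.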

    Read the article

  • Need advice on which PCI SATA Controller Card to Purchase

    - by Matt1776
    I have a major issue with the build of a machine I am trying to get up and running. My goal is to create a file server that will service the needs of my software development, personal media storage and streaming/media server needs, as well as provide a strong platform for backing up all this data in a routine, cron-job oriented German efficiency sort of way. The issue is a simple one - all my drives are SATA drives and my motherboard controller only contains 4 ports. Solving the issue has proven to be an unmitigated nightmare. I would like advice on the purchase of the following: 4 Port internal SATA / 2 Port external eSATA PCI SATA Controller Card that has the following features and/or advantages: It must function. If I plug it in and attach drives, I expect my system to still make it to the Operating System login screen. It must function on CentOS, and I mean it must function WELL and with MINIMAL hassle. If hassle is unavoidable, there shall be CLEAR CUT and EASY TO FOLLOW instructions on how to install drivers and other supporting software. I do not need nor want fakeRAID - I will be setting up any RAID configurations from within the operating system. Now, if I am able to find such a mythical device, I would be eternally grateful to whomever would be able to point me in the right direction, a direction which I assume will be paved with yellow bricks. I am prepared to pay a considerable sum of money (as SATA controller cards go) and so paying anywhere between 60 to 120 dollars will not be an issue whatsoever. Does such a magical device exist? The following link shows an "example" of the type of thing I am looking for, however, I have no way of verifying that once I plug this baby in that my system will still continue to function once I've attached the drives, or that once I've made it to the OS, I will be able to install whatever drivers or software programs I need to make it work with relative ease. It doesn't have to be dog-shit simple, but it cannot involve kernels or brain surgery. http://www.amazon.com/gp/product/B00552PLN4/ref=pd_lpo_k2_dp_sr_1?pf_rd_p=486539851&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=B003GSGMPU&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=1HJG60XTZFJ48Z173HKY So does anyone have a suggestion regarding the subject I am asking about? PCI SATA Controller Cards? It would help if you've had experience with the component before - that is after all why I am asking here - for those who have had experience that I do not have. Bear in mind that this is for a home setup and that I do not have a company credit card. I have a budget with a 'relative' upper limit of about $150.00.
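    One hedged pre-purchase check for the "it must function on CentOS" requirement: find out which controller chip the card uses (reviews or product photos usually name it), then confirm the target CentOS kernel already carries a driver for it. The commands below are a sketch; the module names shown are the stock in-kernel ones, not claims about any particular card:

      lspci -nn | grep -i sata                            # after installing the card: shows its exact [vendor:device] ID
      modinfo ahci | grep -i '^alias'                     # plain AHCI cards need no extra driver at all
      find /lib/modules/$(uname -r) -name 'sata_*.ko*'    # other SATA drivers already shipped with the kernel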

    Read the article

  • networked storage for a research group, 10-100 TB

    - by Marc
    this is related to this post: http://serverfault.com/questions/80854/scalable-24-tb-nas-for-research-department but perhaps a little more general.
    Background: we're a research lab of around 10 people who do a lot of experiments that involve taking pictures at one of several lab setups and then analyzing them on one of several lab computers. Each experiment may produce 2 or 3 GB of data, and we are generating data at a rate of about 10 TB/year. Right now we are storing the data on a 6-bay Netgear ReadyNAS Pro, but even with 2 TB drives this only gives us 10 TB of storage. Also, right now we are not backing up at all. Our short-term backup plan is to get a second ReadyNAS, put it in a different building and mirror one onto the other. Obviously, this is somewhat non-ideal.
    Our options: 1) We can pay our university $400/TB/year for "backed up" online storage. We trust them more than we trust ourselves, but not a whole lot. 2) We can continue to buy small NASes and mirror them between offices. One limit, although a stupid one, is that we don't have an unlimited number of ethernet jacks. 3) We can try to implement our own data storage solution, which is why I'm asking you guys.
    One thing to consider is that we're a very transient population and none of us are network administration experts. I will probably be here only another year or so, and graduate students, who are here the longest, have a 5-6 year time scale, so nothing can require expert oversight. Our data transfer rates are low - most of the data will just sit on the server waiting for someone to look at it once or twice - so we don't need a really high-speed system.
    Given these constraints, can someone recommend a fairly low-cost, scalable, more or less turnkey shared data storage system with backup in a separate physical location? Does such a thing exist, or should we just pay the university to take care of it for us?
    As a second question, our professor just got tenure and is putting together a budget. Here the goal is to ask for as much as you can and hope you get a fraction of it. So, the same question minus the low cost: without budget constraints, can you recommend a scalable, turnkey, backed-up storage system? Thanks
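    For the interim "mirror one NAS onto the other" plan in option 2, a nightly rsync from cron is about as turnkey as it gets. A minimal sketch; the host name and paths are placeholders, not the lab's real ones:

      #!/bin/sh
      # Nightly one-way mirror of the primary ReadyNAS share to the off-site unit.
      rsync -aH --delete --partial \
            /volume1/labdata/ backup-nas:/volume1/labdata-mirror/ \
            >> /var/log/labdata-mirror.log 2>&1

    Note that --delete makes the mirror track deletions too, so it protects against hardware loss but not against accidental deletion; snapshots or a second, non-deleting copy would be needed for that.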

    Read the article

  • serving mp3s to mobile devices is flooding nginx with partial requests

    - by drumfire
    I am serving mp3s with a minimalistic nginx server. What I see in my log files is that there are a lot of requests, in particular from AppleCoreMedia and sometimes Android user agents, that flood the server with short requests. Sometimes they keep requesting the same partial content for a very long time, sometimes more than an hour. For example:
      "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
      "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
      "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
      [...]
    I also get a lot, though not as many, of these:
      "-" 400 0 "-"
      "-" 400 0 "-"
    The IP addresses are always from clients that start downloading shortly after that request, and they usually have roughly the same user agent as in the first example. I have enabled server throttling and connection limits in nginx to at least somewhat limit the huge number of log entries from the same IPs.
    There was a performance issue when I saw the same behaviour on the previous server, which ran Apache. I installed nginx on a better server and then moved the site. When Apache could no longer handle the growing number of client connections, that server was effectively DDoSed. There was no bandwidth issue with already-connected clients, and I don't know whether the already-connected clients were using more than one connection at a time.
    Please tell me: Are clients that appear to get stuck on a download a Bad Thing™? I have heard people say their mobile bandwidth use was much higher than they could account for; I'm thinking this type of client behaviour could explain that - and it costs us more bandwidth too. Which up-to-date alternatives exist that can serve this type of data better than plain HTTP? General insights for someone who just came into this field straight out of the late '90s are also welcome. :-)
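    Since the question already mentions nginx's own throttling knobs, here is a minimal sketch of per-IP connection and rate limits scoped to the mp3 location. The zone name, numbers and paths are assumptions to tune, not recommendations:

      # contents of a drop-in file, e.g. /etc/nginx/conf.d/mp3-throttle.conf
      limit_conn_zone $binary_remote_addr zone=per_ip_mp3:10m;
      server {
          listen 80;
          location ~ \.mp3$ {
              root /srv/media;           # placeholder document root
              limit_conn per_ip_mp3 2;   # at most 2 parallel connections per client IP
              limit_rate_after 1m;       # let the player buffer the first 1 MB at full speed
              limit_rate 192k;           # then cap close to the streaming bitrate
          }
      }

    After dropping it in, `nginx -t && nginx -s reload` validates and applies it without interrupting existing downloads.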

    Read the article

  • Linksys WiFi usb dongle and linux woes

    - by MrStatic
    I have a Linksys WUSB54GC usb dongle and I have exhausted every thing I know about making this thing work in linux. I am using Fedora 13. Since it is not ready I can not view any networks. Any ideas would be great. tail of the system log Jun 2 20:14:35 localhost kernel: usb 1-7: new high speed USB device using ehci_hcd and address 8 Jun 2 20:14:35 localhost kernel: usb 1-7: New USB device found, idVendor=1737, idProduct=0077 Jun 2 20:14:35 localhost kernel: usb 1-7: New USB device strings: Mfr=1, Product=2, SerialNumber=3 Jun 2 20:14:35 localhost kernel: usb 1-7: Product: 802.11 g WLAN Jun 2 20:14:35 localhost kernel: usb 1-7: Manufacturer: Ralink Jun 2 20:14:35 localhost kernel: usb 1-7: SerialNumber: 1.0 Jun 2 20:14:35 localhost kernel: Registered led device: rt2800usb-phy3::radio Jun 2 20:14:35 localhost kernel: Registered led device: rt2800usb-phy3::assoc Jun 2 20:14:35 localhost kernel: Registered led device: rt2800usb-phy3::quality Jun 2 20:14:35 localhost NetworkManager[1367]: <info> found WiFi radio killswitch rfkill3 (at /sys/devices/pci0000:00/0000:00:1d.7/usb1/1-7/1-7:1.0/ieee80211/phy3/rfkill3) (driver <unknown>) Jun 2 20:14:35 localhost kernel: rt2800usb 1-7:1.0: firmware: requesting rt2870.bin Jun 2 20:14:35 localhost NetworkManager[1367]: <info> (wlan0): driver supports SSID scans (scan_capa 0x01). Jun 2 20:14:35 localhost NetworkManager[1367]: <info> (wlan0): new 802.11 WiFi device (driver: 'rt2800usb' ifindex: 6) Jun 2 20:14:35 localhost NetworkManager[1367]: <info> (wlan0): exported as /org/freedesktop/NetworkManager/Devices/4 Jun 2 20:14:35 localhost NetworkManager[1367]: <info> (wlan0): now managed Jun 2 20:14:35 localhost NetworkManager[1367]: <info> (wlan0): device state change: 1 -> 2 (reason 2) Jun 2 20:14:35 localhost NetworkManager[1367]: <info> (wlan0): bringing up device. Jun 2 20:14:35 localhost kernel: ADDRCONF(NETDEV_UP): wlan0: link is not ready Jun 2 20:14:35 localhost NetworkManager[1367]: <info> (wlan0): preparing device. Jun 2 20:14:35 localhost NetworkManager[1367]: <info> (wlan0): deactivating device (reason: 2). Jun 2 20:14:35 localhost NetworkManager[1367]: <info> (wlan0): supplicant interface state: starting -> ready Jun 2 20:14:35 localhost NetworkManager[1367]: <info> (wlan0): device state change: 2 -> 3 (reason 42) [root@localhost log]# iwconfig lo no wireless extensions. eth0 no wireless extensions. wlan0 IEEE 802.11bg Mode:Managed Access Point: Not-Associated Tx-Power=8 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:on
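    One hedged reading of that log: "firmware: requesting rt2870.bin" together with the link never becoming ready often means the rt2800usb driver cannot find its firmware blob. A sketch of how to supply it on Fedora follows; the package name varies by release and is an assumption, and copying the blob by hand works just as well:

      yum install rt2870-firmware                    # package name is a guess; try `yum search rt2870` first
      # -- or, with a manually downloaded blob --
      cp rt2870.bin /lib/firmware/
      modprobe -r rt2800usb && modprobe rt2800usb    # reload the driver
      dmesg | grep -i rt2870                         # confirm the firmware now loads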

    Read the article

  • mongodb: Can't create new thread on FreeBSD?

    - by user197739
    We experienced some strange thing in our mongodb gridfs platform. The platform actually is a bi Xeon E5 (bi quad core) with 128GB of memory, running on freebsd 9 with a zfs pool dedicated for mongodb. [root@mongofile1 ~]# uname -sr FreeBSD 9.1-RELEASE our /boot/loader.conf vfs.zfs.arc_min="2048M" vfs.zfs.arc_max="7680M" vm.kmem_size_max="16G" vm.kmem_size="12G" vfs.zfs.prefetch_disable="1" kern.ipc.nmbclusters="32768" /etc/sysctl.conf net.inet.tcp.msl=15000 net.inet.tcp.keepidle=300000 kern.ipc.nmbclusters=32768 kern.ipc.maxsockbuf=2097152 kern.ipc.somaxconn=8192 kern.maxfiles=65536 kern.maxfilesperproc=32768 net.inet.tcp.delayed_ack=0 net.inet.tcp.sendspace=65535 net.inet.udp.recvspace=65535 net.inet.udp.maxdgram=57344 net.local.stream.recvspace=65535 net.local.stream.sendspace=65535 we follow the recommendation for the ulimit : [root@mongofile1 ~]# su - mongodb $ ulimit -a cpu time (seconds, -t) unlimited file size (512-blocks, -f) unlimited data seg size (kbytes, -d) 33554432 stack size (kbytes, -s) 524288 core file size (512-blocks, -c) unlimited max memory size (kbytes, -m) unlimited locked memory (kbytes, -l) unlimited max user processes (-u) 5547 open files (-n) 32768 virtual mem size (kbytes, -v) unlimited swap limit (kbytes, -w) unlimited sbsize (bytes, -b) unlimited pseudo-terminals (-p) unlimited This server have a twin (same config exactly) for ReplSet in other data center and we have a virtualized arbiter. Some time, almost 3 days, the process of mongodb exit. The problem begin with: Fri Nov 8 11:27:31.741 [conn774697] end connection 192.168.10.162:47963 (23 connections now open) Fri Nov 8 11:27:31.770 [initandlisten] can't create new thread, closing connection Fri Nov 8 11:27:31.771 [rsHealthPoll] replSet member mongofile2:27017 is now in state DOWN Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.162:47968 #774702 (20 connections now open) Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.161:28522 #774703 (21 connections now open) Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.164:15406 #774704 (22 connections now open) Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.163:25750 #774705 (23 connections now open) Fri Nov 8 11:27:31.810 [initandlisten] connection accepted from 192.168.10.182:20779 #774706 (24 connections now open) Fri Nov 8 11:27:31.855 [initandlisten] connection accepted from 192.168.10.161:28524 #774707 (25 connections now open) Fri Nov 8 11:27:31.869 [initandlisten] connection accepted from 192.168.10.182:20786 #774708 (26 connections now open) and after many "can create new thread" [root@mongofile1 /usr/mongodb]# tail -n 15000 mongod.log.old |grep "create new thread"|wc 5020 55220 421680 and finish by a magnificent Fri Nov 8 11:30:22.333 [rsMgr] replSet warning caught unexpected exception in electSelf() pure virtual method called Fri Nov 8 11:30:22.333 Got signal: 6 (Abort trap: 6). Fri Nov 8 11:30:22.337 Backtrace: 0x599efc 0x8035cb516 0x599efc <_ZN5mongo10abruptQuitEi+988> at /usr/local/bin/mongod 0x8035cb516 <_pthread_sigmask+918> at /lib/libthr.so.3 Extract of mongodb from top 78126 mongodb 77 20 0 1253G 1449M sbwait 0 0:20 0.00% mongod If I restart the process when it crash, the problem is fixed for almost 3 days. Has anyone seen this before, or know of a fix?
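    A hedged first check, given that the log shows mongod failing with "can't create new thread" and the ulimit output caps the mongodb user at 5547 processes: compare the kernel's process and per-process thread ceilings with what mongod actually needs under load. This is a sketch of things to inspect, not a confirmed diagnosis (a per-thread stack or memory limit could equally be the culprit):

      sysctl kern.maxproc kern.maxprocperuid kern.threads.max_threads_per_proc
      # raising the per-process thread cap (value is illustrative only):
      sysctl kern.threads.max_threads_per_proc=8192
      # and/or raise the limits of the mongodb login class in /etc/login.conf, then rebuild:
      cap_mkdb /etc/login.conf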

    Read the article

  • How to get the best LINPACK result and conquer the Top500?

    - by knweiss
    Given a large Linux HPC cluster with hundreds or thousands of nodes, what are your best practices for getting the best possible LINPACK benchmark (HPL) result to submit to the Top500 supercomputer list? To give you an idea of the kind of answers I would appreciate, here are some sub-questions:
    How do you tune the parameters (N, NB, P, Q, memory alignment, etc.) in the HPL.dat file, without spending too much time trying every possible permutation, especially with large problem sizes N?
    Are there any Top500 submission rules to be aware of? What is allowed, what isn't?
    Which MPI product, and which version? Does it make a difference? Any special host order in your MPI machine file? Do you use CPU pinning? How do you configure your interconnect, and which interconnect?
    Which BLAS package do you use for which CPU model? (Intel MKL, AMD ACML, GotoBLAS2, etc.)
    How do you prepare for the big run on all nodes? Do you start with small runs on a subset of nodes and then scale up? Is it really necessary to run LINPACK as one big run on all of the nodes, or is extrapolation allowed?
    How do you optimize for the latest Intel/AMD CPUs? Hyperthreading? NUMA? Is it worth recompiling the software stack, or do you use precompiled binaries? Which settings, which compiler, which compiler optimizations? (What about profile-based compilation?)
    How do you get the best result given only a limited amount of time for the benchmark run? (You can't block a huge cluster forever.)
    How do you prepare the individual nodes (stopping system daemons, freeing memory, etc.)?
    How do you deal with hardware faults ruining a huge run?
    Are there any must-read documents or websites about this topic? For example, I would love to hear background stories of some of the current Top500 systems and how they did their LINPACK benchmark.
    I deliberately don't want to mention concrete hardware details or discuss hardware recommendations because I don't want to limit the answers. However, feel free to mention hints for specific CPU models.
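    For the problem-size part specifically, a widely used rule of thumb (a sketch, not a Top500 rule) is to pick N so the HPL matrix fills roughly 80% of aggregate memory with 8-byte doubles, i.e. N ≈ sqrt(0.8 * total_bytes / 8). The node count and memory below are examples:

      NODES=1000
      MEM_PER_NODE_GB=64
      awk -v n="$NODES" -v g="$MEM_PER_NODE_GB" 'BEGIN {
          total = n * g * 1024^3
          printf "suggested starting N: %d\n", sqrt(0.8 * total / 8)
      }'
      # NB is then taken from the BLAS vendor's suggested block sizes, and P x Q is
      # chosen as a near-square grid with P <= Q and P*Q equal to the MPI rank count.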

    Read the article

  • 2nd Year College - Learning - Microsoft Server Products

    - by Ryan
    As the title says, I just finished my first year of college (majoring in Software Engineering). Fortunately my school likes Microsoft enough, and I can get pretty much anything I want that Microsoft sells. I also can get IBM Websphere and the like for free as well. Earlier this year, I set up an oldish computer (2.6 Pentium D, x64) to run ubuntu server headless. I'm predominately a Java developer, so Apache, Maven, Nexus, Sonar, SVN, etc made it onto the machine. It worked really well for personal and school projects, especially team projects (quick ramp up). Anyways, I started to pick up C# to complement my Java knowledge (don't judge me :P), and am interested in working with some of the associated Microsoft equivalents. The machine currently has the Ubuntu install, as well as Windows 7 Ultimate. I do all of my actual development work off my laptop, also running Windows 7 Ultimate. I was wondering what software you would recommend putting on the machine. I’m not actually serving anything off the machine itself, but in Ubuntu I had it doing integration tests with Hudson on every commit, and profiling my applications, etc, etc. The machine would be running headless, and I would remote into it. Here is what I am currently leaning towards / wondering about: Windows 7 Ultimate vs Windows Server 2008 (R2) (no one is really clear why I should go with one over the other) Windows Team Foundation Sharepoint (Never used it before, kind of meh about it) IBM Websphere or Glassfish (Some Java EE web server) SQL Server 2008 A DVCS In order to better control product conflicts / limit resource use, I’m wondering if I should install things into virtual machines (I can get VmWare or Microsoft Virtualization Products) I also plan on installing everything I had running under Linux (it’s almost entirely Java based development software, so it’ll run on both, only reason I went with ubuntu during the year was because the apache build seemed better). I’m primarily looking to become familiar with enterprise software development tools, as well as get something functional that will help my development process. (IE, I’ll still use project and assign tasks even though I might be the only one to assign tasks to, just to practice doing so). Is there any other software / configuration details I should explore? Opinions on my current list? I primarily use C#, Java, and PHP. I'm familiar with ruby, and python as well. Thanks!

    Read the article

  • routing through multiple subinterfaces in debian

    - by Kstro21
    My question is as simple as the title. I have a Debian 6 box with 2 NICs and 3 different subnets on a single interface, just like this:
      auto eth0
      iface eth0 inet static
        address 192.168.106.254
        netmask 255.255.255.0
      auto eth0:0
      iface eth0:0 inet static
        address 172.19.221.81
        netmask 255.255.255.248
      auto eth0:1
      iface eth0:1 inet static
        address 192.168.254.1
        netmask 255.255.255.248
      auto eth1
      iface eth1 inet static
        address 172.19.216.3
        netmask 255.255.255.0
        gateway 172.19.216.13
    eth0 is connected to a switch with 3 different VLANs; eth1 is connected to a router. There are no iptables DROP rules, so all traffic is allowed. Passing traffic through eth0 is OK, and passing traffic through eth0:0 is OK, but passing traffic through eth0:1 is not working: I can ping the IP address of that subinterface from a PC that uses it as its default gateway, but I can't reach servers in the subnet of the eth1 interface. The traffic is not passing, even though when I set iptables to log all traffic in the FORWARD chain I can see the traffic there. The funny thing is that I can do everything the other way around, i.e. from eth1 to eth0:1 - RDP, telnet, ping, etc. With some iptables work I managed to pass some traffic from eth0:1 to eth1; the rules look like this:
      iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p tcp -m multiport --dports 25,110,5269 -j DNAT --to-destination 172.19.216.1
      iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p udp -m udp --dport 53 -j DNAT --to-destination 172.19.216.9
      iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p tcp -m tcp --dport 21 -j DNAT --to-destination 172.19.216.11
      iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -d 172.19.221.80/29 -j SNAT --to-source 172.19.221.81
      iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -d 192.168.254.0/29 -j SNAT --to-source 192.168.254.1
      iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -o eth0 -j SNAT --to-source 192.168.106.254
    Doing this works, but it is really a headache to have to map each port to a server - imagine if I move a service to another server. So now I have doubts: can Debian route through multiple subinterfaces? Is there a limit on this? If not, what am I doing wrong, when the same setup with other subnets works fine? Without the iptables rules in the nat table it doesn't work. Thanks, and I hope for good comments/answers.
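    One hedged thing to rule out before adding more NAT rules: Linux's reverse-path filtering can silently drop forwarded or returning packets on a box with several subnets hanging off one physical interface. Whether that is what is happening here is an assumption; the sketch below only shows how to check it and relax it temporarily for a test:

      sysctl net.ipv4.ip_forward                        # must be 1 for any routing at all
      sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth0.rp_filter net.ipv4.conf.eth1.rp_filter
      sysctl -w net.ipv4.conf.all.rp_filter=0           # or 2 for "loose" mode, for testing only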

    Read the article

  • how to set up kismet.conf on Ubuntu

    - by Registered User
    I installed Kismet on my Ubuntu 10.04 machine as apt-get install kismet every thing seems to work fine. but when I launch it I see following error kismet Launching kismet_server: //usr/bin/kismet_server Suid priv-dropping disabled. This may not be secure. No specific sources given to be enabled, all will be enabled. Non-RFMon VAPs will be destroyed on multi-vap interfaces (ie, madwifi-ng) Enabling channel hopping. Enabling channel splitting. NOTICE: Disabling channel hopping, no enabled sources are able to change channel. Source 0 (addme): Opening none source interface none... FATAL: Please configure at least one packet source. Kismet will not function if no packet sources are defined in kismet.conf or on the command line. Please read the README for more information about configuring Kismet. Kismet exiting. Done. I followed this guide http://www.ubuntugeek.com/kismet-an-802-11-wireless-network-detector-sniffer-and-intrusion-detection-system.html#more-1776 how ever in kismet.conf I am not clear with following line source=none,none,addme as to what should I change this to. lspci -vnn shows 0c:00.0 Network controller [0280]: Broadcom Corporation BCM4312 802.11b/g [14e4:4315] (rev 01) Subsystem: Dell Device [1028:000c] Flags: bus master, fast devsel, latency 0, IRQ 17 Memory at f69fc000 (64-bit, non-prefetchable) [size=16K] Capabilities: [40] Power Management version 3 Capabilities: [58] Vendor Specific Information <?> Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable- Capabilities: [d0] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting <?> Capabilities: [13c] Virtual Channel <?> Capabilities: [160] Device Serial Number Capabilities: [16c] Power Budgeting <?> Kernel driver in use: wl Kernel modules: wl, ssb and iwconfig shows lo no wireless extensions. eth0 no wireless extensions. eth1 IEEE 802.11bg ESSID:"WIKUCD" Mode:Managed Frequency:2.462 GHz Access Point: <00:43:92:21:H5:09> Bit Rate=11 Mb/s Tx-Power:24 dBm Retry min limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Managementmode:All packets received Link Quality=1/5 Signal level=-81 dBm Noise level=-90 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:169 Invalid misc:0 Missed beacon:0 So what should I be putting in place of source=none,none,addme with output I mentioned above ?

    Read the article

  • How to circumvent ISP Limiting "Unknown" traffic - (SSH)Proxy, VPN

    - by connery
    I am having issues with using a proxy/VPN, with my current ISP (Comenersol, Spain). From my point of view they limit traffic by protocol or by traffic they "know" and "dont know". I'll explain my findings so far below. Internet connection in Spain: ~400-420KByte/sec (speedtest.net) OpenVPN Server in Sweden(pfsense): 100/100Mbit. LZO Compression. TCP. Tun. Aes128 Squid Proxy server in Sweden (pfsense): 100/100 (same box as the vpn server). Plain, no encryption. Runs in stealth mode to hide the use of proxy. NOT running OpenVPN or Squid Proxy, this is my findings: When I download a file from my pfsense box in Sweden, I get maximum speed When I run speedtest.net and choose any european server (including Swedish), I get max speed When I download a torrent (with non default port above 10K), I get limited to ~100KByte/sec. Encryption is turned off If I download something through https, I get max speed Running either Squid Proxy or VPN, this is my findings When I download a file from my pfsense box in Sweden, I get ~100KByte/sec When I run speedtest.net and choose any european server (including Swedish and Spanish), I get ~100Kbyte/sec When I download a torrent, I get same limitation ~100KByte/sec When I download something through https, I get ~100KByte/sec I verify the speeds above with speedtest.net measure, firefox measure in addition to having bmon running in terminal in the background. This way I am certain that the speeds I get presented, are in fact correct. If I connect through a different ISP with VPN or Squid Proxy, I get better speeds (400KByte/sec ++) In short: Whenever I tunnel my traffic through Sweden, my SPanish ISP throttles the traffic. I thought tunneling it through Squid would solve the issue, since I then would no longer hide my traffic through encryption. This does not seem to be the case. Wget and fetch gives same result. I did not try 'nc', but I assume this would give the same result. Does anyone know how to circumvent this issue? I would very much like to be able to get full speed with Swedish ip, as this would make me able to stream TV at higher quality than today. 100KByte/sec just does not cut it quality wise. Thanks for reading. Looking forward for your help.

    Read the article

  • Creating a really public Windows network share

    - by Timur Aydin
    I want to create a shared folder under Windows (actually, Windows XP, Vista, and Win 7) which can be mounted from a linux system without prompting for a username/password. But before attempting this, I first wanted to establish that this works between two Windows 7 machines. So, on machine A (The server that will hold the public share), I created a folder and set its permissions such that Everyone has read/write access. Then I visited Control Panel - Network and Sharing Center - Advanced Sharing Settings and then selected "Turn off password protected sharing". Then, on machine B (The client that wants to access the public share with no username/password prompt), I tried to "map network driver" and I was immediately prompted by a password prompt. Some search on google suggested changing "Acconts: Limit local account use of blank passwords to console logon only" to "Disabled". Tried that, no luck, still getting username/password prompt. If I enter the username/password, I am not prompted for it again and can use the share as long as the session is active. But still, I really need to access the share without any username/password transaction whatsoever and this is not just a convenience related thing. Here is the actual reason: The device that will access this windows network share is an embedded system running uclinux. It will mount this share locally and then play media files. Its only user interface is a javascript based web page. So, if there is going to be any username/password transaction, I would have to ask the user to enter them over the web page, which will be ridiculously insecure and completely exposed to packet sniffing. After hours of doing experiments, I have found one way to make this happen, but I am not really very fond of it... I first create a new user (shareuser) and give it a password (sharepass). Then I open Group Policy Editor and set "Deny log on locally" to "A\shareuser". Then, I create a folder on A and share it so that shareuser has Read access to it. This way, shareuser cannot login to A, but can access the shared folder. And, if someone discovers the shareuser/sharepass through network sniffing, they can just access the shared folder, but can't logon to A. The same thing can be achieved by enabling the Guest user and then going to Group Policy Editor and deleting the "Guest" from the "Deny access to this computer from the network" setting. Again, Guest can mount the public share, but logging in to A as Guest won't be possible, because Guest is already not allowed to log in by default. So my question would be, how can I create a network share that is truly public, so that it can be mounted from a linux machine without requiring a password? Sorry for the long question, but I wanted to explain the reason for really needing this...
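    For reference, here is roughly what the Linux/uClinux side of a truly password-less mount would look like once the share behaves as intended; the server address, share name and mount point are placeholders:

      mount -t cifs //192.168.1.10/Public /mnt/public -o guest,ro,iocharset=utf8
      # If the embedded CIFS client refuses plain guest access, the fallback is
      # -o username=shareuser,password=sharepass -- which is exactly the
      # credentials-over-the-wire situation the question is trying to avoid.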

    Read the article

  • tc rules block traffic from some hosts on the network

    - by user139430
    I have a problem I can not solve. The script, which sets the rules for traffic shaping is blocking the traffic from some hosts.If I remove all the rules, then it works. I can not understand why? Here is my script... #!/bin/sh cmdTC=/sbin/tc rateLANDl="60mbit" ceilLANDl="60mbit" rateLANUl="40mbit" ceilLANUl="40mbit" quantLAN="1514" # Nowaday bandwidth limit set to 100mbit. # We devide it with 60mbit download and 40mbit upload bandthes. rateHiDl="30mbit" ceilHiDl="60mbit" rateHiUl="20mbit" ceilHiUl="40mbit" quantHi="1514" rateLoDl="30mbit" ceilLoDl="60mbit" rateLoUl="20mbit" ceilLoUl="40mbit" quantLo="1514" devNIF=eth0 devFIF=ifb0 modprobe ifb ip link set $devFIF up 2>/dev/null #exit 0 ################################################################################################ # Remove discuiplines from network and fake interfaces ################################################################################################ $cmdTC qdisc del dev $devNIF root 2>/dev/null $cmdTC qdisc del dev $devFIF root 2>/dev/null $cmdTC qdisc del dev $devNIF ingress 2>/dev/null if [ "$1" = "down" ]; then exit 0 fi ################################################################################################ # Create discuiplines for network interface ################################################################################################ $cmdTC qdisc add dev $devNIF root handle 1:0 htb default 12 # Create classes for network interface $cmdTC class add dev $devNIF parent 1:0 classid 1:1 htb rate ${rateLANDl} ceil ${ceilLANDl} quantum ${quantLAN} $cmdTC class add dev $devNIF parent 1:1 classid 1:11 htb rate ${rateHiDl} ceil ${ceilHiDl} quantum ${quantHi} $cmdTC class add dev $devNIF parent 1:1 classid 1:12 htb rate ${rateLoDl} ceil ${ceilLoDl} quantum ${quantLo} $cmdTC qdisc add dev $devNIF parent 1:11 handle 111: sfq perturb 10 $cmdTC qdisc add dev $devNIF parent 1:12 handle 112: sfq perturb 10 # Create filters for network interface $cmdTC filter add dev $devNIF protocol all parent 1:0 u32 match ip dst 10.252.2.0/24 flowid 1:11 $cmdTC filter add dev $devNIF protocol all parent 111: handle 111 flow hash keys dst divisor 1024 baseclass 1:11 $cmdTC filter add dev $devNIF protocol all parent 112: handle 112 flow hash keys dst divisor 1024 baseclass 1:12 ################################################################################################ # Create discuiplines for fake interface ################################################################################################ $cmdTC qdisc add dev $devFIF root handle 1:0 htb default 12 # Create classes for network interface $cmdTC class add dev $devFIF parent 1:0 classid 1:1 htb rate ${rateLANUl} ceil ${ceilLANUl} quantum ${quantLAN} $cmdTC class add dev $devFIF parent 1:1 classid 1:11 htb rate ${rateHiUl} ceil ${ceilHiUl} quantum ${quantHi} $cmdTC class add dev $devFIF parent 1:1 classid 1:12 htb rate ${rateLoUl} ceil ${ceilLoUl} quantum ${quantLo} $cmdTC qdisc add dev $devFIF parent 1:11 handle 111: sfq perturb 10 $cmdTC qdisc add dev $devFIF parent 1:12 handle 112: sfq perturb 10 # Create filters for network interface $cmdTC filter add dev $devFIF protocol all parent 1:0 u32 match ip src 10.252.2.0/24 flowid 1:11 $cmdTC filter add dev $devFIF protocol all parent 111: handle 111 flow hash keys src divisor 1024 baseclass 1:11 $cmdTC filter add dev $devFIF protocol all parent 112: handle 112 flow hash keys src divisor 1024 baseclass 1:12 ################################################################################################ # 
Create redirect discuiplines from network to fake interface ################################################################################################ $cmdTC qdisc add dev $devNIF handle ffff:0 ingress $cmdTC filter add dev $devNIF parent ffff:0 protocol all u32 match u32 0 0 action mirred egress redirect dev $devFIF Here is my /etc/modules: loop ifb ppp_mppe nf_conntrack_pptp nt_conntrack_proto_gre nf_nat_pptp nf_nat_proto_gre The system is Linux wall 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux
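    Before changing the rules, a hedged way to see where the affected hosts' packets actually end up (dropped, policed, or queued in an unexpected class) is to read the live statistics on both the real and the ifb interface:

      tc -s qdisc show dev eth0
      tc -s class show dev eth0
      tc -s filter show dev eth0
      tc -s filter show dev eth0 parent ffff:    # the ingress side feeding the redirect
      tc -s class show dev ifb0

    Comparing the per-class and per-filter counters while one of the blocked hosts generates traffic shows which branch of the hierarchy its flows land in.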

    Read the article

  • Excel data range - to sum series within date range

    - by Mark
    I have a set of data that I would like to manipulate but my problem is not straight forward. In this data I have date ranges that include multiple entries of the same date on some days and not on others. What I need to accomplish is to manage a trading account so that no more than 1% of the account is put at risk on any given day (retrospectively). To do this, when a series of trades falls on the same day, I need to total the risk associated with each of those trades so that I can limit the total risk of the combined trades by limiting the position size I take in each. Here is a sample set of the data I am working with. As you can see, there are 5 trades on Jan 3. Each of these trades comes with a risk value. I need to add the risk values of these 5 trades so that I can compare it to an account value and then determine if I should take more than 1 position in each trade. As you can see there are different numbers of trades that occur on the 4th, 5th 6th and 9th. I need the values returned in each row so that I can further manipulate them in the spreadsheet. I am not new to Excel, but cannot come up with a solution here - your input is much appreciated. Forgive the presentation below - I cannot upload a pic (new user) and the format does not carry across from excel. I have aligned the first several lines manually. Thx. Date ............. Pair ....... L/S ...... Initial Risk .......Win ......Loss ....BE. ....Avg Gain Avg Loss pips/swing 1/3/2012 ....EUR/USD ....S .............15 ................1 ..................................10 ..........................15. .. 1/3/2012 ....USD/CHF .....L ............15 ..........................................1 ..........0 1/3/2012 ....AUD/USD ....S .............15 ................1 .................................16 ...........................18 1/3/2012 ....NZD/USD ....S .............15 ................1 ...................................7 .............................8 1/3/2012 ....AUD/JPY .... S .............10 ................1 .................................25 ............................20 1/4/2012 ....EUR/USD ....L .............20 ................1 .................................19 ...........................19 1/4/2012 ....USD/CHF ....S ............ 15 ................1 .................................17 ...........................20 1/4/2012 EUR/JPY L 20 1 0 1/5/2012 EUR/USD L 15 1 10 20 1/5/2012 GBP/USD L 20 1 15 20 1/5/2012 USD/CHF S 15 1 0 1/5/2012 USD/JPY S 10 1 7 10 1/5/2012 USD/CAD S 15 1 28 36 1/5/2012 AUD/USD L 15 1 20 20 1/6/2012 USD/CAD S 15 1 5 -10 1/6/2012 EUR/JPY L 15 1 7 7 1/9/2012 AUD/USD S 15 1 22 30 1/9/2012 NZD/USD S 15 1 10 15

    Read the article

  • Email client won't connect to SMTP Authentication server

    - by Jason
    Im having trouble installing SMTH Auth for my ubuntu email server. I have followed ubuntu own guide for SMTH AUT (https://help.ubuntu.com/14.04/serverguide/postfix.html). But my email client thunderbird is giving this error " lost connection to SMTP-client 127.0.0.1." I cant add new users to thundbird either because of this connection problem. Do i have to alter any setting on my Thunderbird perhaps since ? I did try to make thunderbird use SSL for imap as well but that neither works. I restarted postfix and dovecot to find errors but both run just fine. Prior to SMTP auth changes thunderbird could connect just fine to my server and send mails. This is my main.cf file in postfix. It looks just like the one on ubuntu guide above. readme_directory = no # TLS parameters #smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache myhostname = mail.mysite.com mydomain = mysite.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = $mydomain mydestination = mysite.com #relayhost = smtp.192.168.10.1.com mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.10.0/24 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all home_mailbox = Maildir/ mailbox_command = #SMTP AUTH smtpd_sasl_type = dovecot smtpd_recipient_restrictions=permit_mynetworks, permit_sasl_authenticated,reject_unauth_destination smtpd_sasl_local_domain = smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous broken_sasl_auth_clients = yes smtpd_tls_auth_only = no smtp_tls_security_level = may smtpd_tls_security_level = may smtp_tls_note_starttls_offer = yes smtpd_tls_key_file = /etc/ssl/private/smtpd.key smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem smtpd_tls_loglevel = 1 smtpd_tls_received_header = yes This my dovecot configuration at 10-master.conf service imap-login { inet_listener imap { #port = 143 } inet_listener imaps { #port = 993 #ssl = yes } # Number of connections to handle before starting a new process. Typically # the only useful values are 0 (unlimited) or 1. 1 is more secure, but 0 # is faster. <doc/wiki/LoginProcess.txt> #service_count = 1 # Number of processes to always keep waiting for more connections. #process_min_avail = 0 # If you set service_count=0, you probably need to grow this. #vsz_limit = $default_vsz_limit } service pop3-login { inet_listener pop3 { #port = 110 } inet_listener pop3s { #port = 995 #ssl = yes } } service lmtp { unix_listener lmtp { #mode = 0666 } # Create inet listener only if you can't use the above UNIX socket #inet_listener lmtp { # Avoid making LMTP visible for the entire internet #address = #port = #} } service imap { # Most of the memory goes to mmap()ing files. You may need to increase this # limit if you have huge mailboxes. #vsz_limit = $default_vsz_limit # Max. number of IMAP processes (connections) #process_limit = 1024 } service pop3 { # Max. number of POP3 processes (connections) #process_limit = 1024 } service auth { unix_listener auth-userdb { #mode = 0600 #user = #group = } # Postfix smtp-auth unix_listener /var/spool/postfix/private/auth { mode = 0660 user = postfix } } service dict { # If dict proxy is used, mail processes should have access to its socket. # For example: mode=0660, group=vmail and global mail_access_groups=vmail unix_listener dict { #mode = 0600 #user = #group = } } I did add auth_mechanisms = plain login to 10-auth.conf as well.
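    A hedged way to take Thunderbird out of the loop and test whether Postfix is actually offering authentication after STARTTLS (the hostname is the one from main.cf):

      openssl s_client -connect mail.mysite.com:25 -starttls smtp -crlf -quiet
      # then type:
      #   EHLO test.example
      # and look for "250-AUTH PLAIN LOGIN" in the response. If AUTH is missing, the
      # Postfix/Dovecot side is at fault; if it is present, the problem is more likely
      # the client's security settings (STARTTLS on port 25 vs. SSL/TLS on 465).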

    Read the article

  • Trying to grok Linux quotas, where is the data stored?

    - by CarpeNoctem
    So all the tutorials and documentation for the Linux quota system has left me confused. For each filesystem with quotas enabled/on where is the actual quota information stored? Is it filesystem metadata or is it in a file? Say user foo creates a new file on /home. How does the kernel determine whether user foo is below their hard limit? Does the kernel have to tally up quota information on that filesystem each time or is it in the superblock or somewhere else? As far as I understand, the kernel consults the aquota.user file for the actual rules, but where is the current quota usage data stored? Can this be viewed with any tools outside repquota and the like? TIA!! Update: Thanks for the help. I had already read that mini-HOWTO. I am pretty clear on the usage of the user space tools. What I was unclear on is whether the usage data was ALSO in the file that stored per-user limits and you answered this with a yes. From what I can tell, rc.sysinit runs quotacheck and quotaon on startup. The quotacheck program analyzes the filesystem, updates the aquota.* files. It then makes use of quota.h and the quotactl() syscall to inform the kernel of quota info. From this point forward the kernel hashes that information and increments/decrements quota stats as changes occur. Upon shutdown, the init.d/halt script runs the quotaoff command RIGHT before the filesystems are unmounted. The quotaoff command does not appear to update the aquota.* files with the information the kernel has in memory. I say this because the {a,c,m}times for the aquota.user file are only updated upon a reboot of the system or by manual running the quotacheck command. It appears - as far as I can tell - that the kernel just drops it's up-to-date usage data on the floor at shutdown. This information is never used to update the aquota.* files. They are updated during startup by quotacheck(rc.sysinit). Seems silly to me since that updated info had already been collected by the kernel. So...in conclusion I am still not entirely clear on the methods. ;)
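    A hedged way to watch the two copies of the usage data side by side, and to see when the on-disk file actually gets rewritten (the filesystem path is an example):

      repquota -u /home              # the kernel's live per-user usage counters
      ls -l /home/aquota.user        # the on-disk quota file; note its mtime
      quotaoff -v /home && quotaon -v /home
      ls -l /home/aquota.user        # did cycling quotas flush the counters to disk?
      quotacheck -vugm /home         # full rescan that rebuilds aquota.* from scratch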

    Read the article

  • New-ManagedContentSettings - not working properly under Exchange 2010

    - by mfinni
    I have a client that is divesting a business unit into a new AD forest, Exchange org, etc. We're using Quest tools to migrate users and mailboxes. However, I have to build the new infrastructure to match the old one. In the old one, we're using Managed Folder Mailbox Policies to limit (or allow) retention. They started with Exchange 2007 and never upgraded to Retention Policies; oh well. So, in the old environment, when you use a 2007 server to define a new Managed Content Setting, you can pick "Email" from the dropdown for MessageClass. This is a display name; the actual MessageClass values are thus: MessageClass : IPM.Note;IPM.Note.AS/400 Move Notification Form v1.0;IPM.Note.Delayed;IPM.Note.Exchange.ActiveSync.Report;IPM.Note.JournalReport.Msg;IPM.Note.JournalReport.Tnef;IPM.Note.Microsoft.Missed.Voice;IPM.Note.Rules.OofTemplate.Microsoft;IPM.Note.Rules.ReplyTemplate.Microsoft;IPM.Note.Secure.Sign;IPM.Note.SMIME;IPM.Note.SMIME.MultipartSigned;IPM.Note.StorageQuotaWarning;IPM.Note.StorageQuotaWarning.Warning;IPM.Notification.Meeting.Forward;IPM.Outlook.Recall;IPM.Recall.Report.Success;IPM.Schedule.Meeting.*;REPORT.IPM.Note.NDR If I take that and try to mangle it into a new cmdlet for Ex2010 in my new environment here's what I get New-ManagedContentSettings -Name "Delete Messages older then 90 days" -FolderName "Entire Mailbox" -RetentionEnabled $True -AgeLimitForRetention 90 -TriggerForRetention WhenDelivered -RetentionAction DeleteAndAllowRecovery -MessageClass "IPM.Note","IPM.Note.AS/400MoveNotificationFormv1.0","IPM.Note.Delayed","IPM.Note.Exchange.ActiveSync.Report","IPM.Note.JournalReport.Msg","IPM.Note.JournalReport.Tnef","IPM.Note.Microsoft.Missed.Voice","IPM.Note.Rules.OofTemplate.Microsoft","IPM.Note.Rules.ReplyTemplate.Microsoft","IPM.Note.Secure.Sign","IPM.Note.SMIME","IPM.Note.SMIME.MultipartSigned","IPM.Note.StorageQuotaWarning","IPM.Note.StorageQuotaWarning.Warning","IPM.Notification.Meeting.Forward","IPM.Outlook.Recall","IPM.Recall.Report.Success","IPM.Schedule.Meeting.*","REPORT.IPM.Note.NDR" -whatif Invoke-Command : Cannot bind parameter 'MessageClass' to the target. Exception setting "MessageClass": "The length of t he property is too long. The maximum length is 255 and the length of the value provided is 518." At C:\Users\MFinnigan.sa\AppData\Roaming\Microsoft\Exchange\RemotePowerShell\pfexcas02.fve.ad.5ssl.com\pfexcas02.fve.ad .5ssl.com.psm1:28204 char:29 + $scriptCmd = { & <<<< $script:InvokeCommand ` + CategoryInfo : WriteError: (:) [New-ManagedContentSettings], ParameterBindingException + FullyQualifiedErrorId : ParameterBindingFailed,Microsoft.Exchange.Management.SystemConfigurationTasks.NewManaged ContentSettings So, the config object can store all that mess, but I can't fit it in through the cmdlet to create the object. Lovely. Any ideas?

    Read the article

  • Why am I unable to send an attachment with Outlook via SMTP that I am able to send via Gmail / Google Apps?

    - by cwd
    I have Google Apps installed and I have tried to set up Outlook 2007 to send messages via SMTP. I followed the guide, selecting what I believe are all the correct settings. Yes, I am using POP for incoming, that is intentional but I don't believe it should affect outgoing messages. When I log into gmail (google apps) for my company, I can send a message that has an 8MB attachment (pdf file, not zipped or anything) and it sends fine. However, when I send the same message in Outlook with that same 8mb attachment it fails. Why am I unable to send an attachment with Outlook via SMTP that I am able to send via Gmail / Google Apps? The message headers are (some info omitted for privacy): Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 552 552 #5.3.4 message size exceeds limit (state 17). ----- Original message ----- DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=company.com; s=google; h=from:to:cc:references:in-reply-to:subject:date:message-id :mime-version:content-type:x-mailer:thread-index:content-language; bh=7d4i/Cbt0v0sY3zt5lN6y5CdvxjbRmTBG4AuBuMxtF4=; b=IJwwxuIEdg1E4zXuGjeDod+1w3RYBBCNzSsqpuX77ih36HSiq++s3ZCQXPeU9CIZVg K8JPJQu9xjivYYjrRaYwyeowLIu0GIdR2h4kKEkFM/GNC2RFF3VwVgj+gvi5eqVZIuWn osT5/VEm10IED6B54NPOtGMgFTci6a57zzVKE= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=from:to:cc:references:in-reply-to:subject:date:message-id :mime-version:content-type:x-mailer:thread-index:content-language :x-gm-message-state; bh=7d4i/Cbt0v0sY3zt5lN6y5CdvxjbRmTBG4AuBuMxtF4=; b=LjTecjok5K71Bymp6tZqAL2XCz03hWROV1mTK8Vf2AeEJwtel9ACu9kE5jW5iJqckb upYKPzoqYLBwAPOzMb9asWoTAZPzC7LMG65eDUc2/ZEvGqXrZs3ziUxwhF4t169yRVuy /6nm/aAt5uPMLPdobxGTJ8ahOIku1Z3gW+OcvZ6ERk1Av/bvuln09vcnyJIrHGh7eK8n cbGVxmK0aecgSPgIj2NALbHkyuxwj+LEBRV6uiz3THDjxAiNfsO5UFjV59sD+lVSBT3z ThOGE8WEXRnKHuP3FuKXyeUxKBZ2CxpWJpvDuS9EsFkln7zkISYEsRA0nUA6GSGi2Z/n 8YUg== Received: by 10.60.169.197 with SMTP id ag5mr12254920oec.137.1351036287413; Tue, 23 Oct 2012 16:51:27 -0700 (PDT) References: Date: Tue, 23 Oct 2012 19:51:16 -0400 Message-ID: <003a01cdb179$4bb2ca60$e3185f20$@com> MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_NextPart_000_003B_01CDB157.C4A12A60" X-Mailer: Microsoft Office Outlook 12.0 Thread-Index: Ac2xVCHGxoC7DDOkQBK3JSXowHb0EQAEB7agAAA/YKAAAIGcQAAAngfQAABAAPAAAFe7gAAAadvw AALgvLA= Content-Language: en-us X-Gm-Message-State: ALoCoQniMq7Fnh+NlfoWjTJPvKWbkhEaftSaFo9ZVvtRpWufTmhlRDx1a9Jf+wmYcbRh896gygNr The company I am sending email to is a company that uses Google Apps for Teams. This is their apps admin login. Should I be worried about that message? My Settings On the Google apps side I have set my SPF record and set / verified my DKIM key. Here are my outlook settings: Why am I unable to send an attachment with Outlook via SMTP that I am able to send via Gmail / Google Apps?

    Read the article

  • How to open a server port outside of an OpenVPN tunnel with a pf firewall on OSX (BSD)

    - by Timbo
    I have a Mac mini that I use as a media server running XBMC and serves media from my NAS to my stereo and TV (which has been color calibrated with a Spyder3Express, happy). The Mac runs OSX 10.8.2 and the internet connection is tunneled for general privacy over OpenVPN through Tunnelblick. I believe my anonymous VPN provider pushes "redirect_gateway" to OpenVPN/Tunnelblick because when on it effectively tunnels all non-LAN traffic in- and outbound. As an unwanted side effect that also opens the boxes server ports unprotected to the outside world and bypasses my firewall-router (Netgear SRX5308). I have run nmap from outside the LAN on the VPN IP and the server ports on the mini are clearly visible and connectable. The mini has the following ports open: ssh/22, ARD/5900 and 8080+9090 for the XBMC iOS client Constellation. I also have Synology NAS which apart from LAN file serving over AFP and WebDAV only serves up an OpenVPN/1194 and a PPTP/1732 server. When outside of the LAN I connect to this from my laptop over OpenVPN and over PPTP from my iPhone. I only want to connect through AFP/548 from the mini to the NAS. The border firewall (SRX5308) just works excellently, stable and with a very high throughput when streaming from various VOD services. My connection is a 100/10 with a close to theoretical max throughput. The ruleset is as follows Inbound: PPTP/1723 Allow always to 10.0.0.40 (NAS/VPN server) from a restricted IP range >corresponding to possible cell provider range OpenVPN/1194 Allow always to 10.0.0.40 (NAS/VPN server) from any Outbound: Default outbound policy: Allow Always OpenVPN/1194 TCP Allow always from 10.0.0.40 (NAS) to a.b.8.1-a.b.8.254 (VPN provider) OpenVPN/1194 UDP Allow always to 10.0.0.40 (NAS) to a.b.8.1-a.b.8.254 (VPN provider) Block always from NAS to any On the Mini I have disabled the OSX Application Level Firewall because it throws popups which don't remember my choices from one time to another and that's annoying on a media server. Instead I run Little Snitch which controls outgoing connections nicely on an application level. I have configured the excellent OSX builtin firewall pf (from BSD) as follows pf.conf (Apple App firewall tie-ins removed) (# replaced with % to avoid formatting errors) ### macro name for external interface. eth_if = "en0" vpn_if = "tap0" ### wifi_if = "en1" ### %usb_if = "en3" ext_if = $eth_if LAN="{10.0.0.0/24}" ### General housekeeping rules ### ### Drop all blocked packets silently set block-policy drop ### all incoming traffic on external interface is normalized and fragmented ### packets are reassembled. scrub in on $ext_if all fragment reassemble scrub in on $vpn_if all fragment reassemble scrub out all ### exercise antispoofing on the external interface, but add the local ### loopback interface as an exception, to prevent services utilizing the ### local loop from being blocked accidentally. ### set skip on lo0 antispoof for $ext_if inet antispoof for $vpn_if inet ### spoofing protection for all interfaces block in quick from urpf-failed ############################# block all ### Access to the mini server over ssh/22 and remote desktop/5900 from LAN/en0 only pass in on $eth_if proto tcp from $LAN to any port {22, 5900, 8080, 9090} ### Allow all udp and icmp also, necessary for Constellation. Could be tightened. 
pass on $eth_if proto {udp, icmp} from $LAN to any ### Allow AFP to 10.0.0.40 (NAS) pass out on $eth_if proto tcp from any to 10.0.0.40 port 548 ### Allow OpenVPN tunnel setup over unprotected link (en0) only to VPN provider IPs ### and port ranges pass on $eth_if proto tcp from any to a.b.8.0/24 port 1194:1201 ### OpenVPN Tunnel rules. All traffic allowed out, only in to ports 4100-4110 ### Outgoing pings ok pass in on $vpn_if proto {tcp, udp} from any to any port 4100:4110 pass out on $vpn_if proto {tcp, udp, icmp} from any to any So what are my goals and what does the above setup achieve? (until you tell me otherwise :) 1) Full LAN access to the above ports on the mini/media server (including through my own VPN server) 2) All internet traffic from the mini/media server is anonymized and tunneled over VPN 3) If OpenVPN/Tunnelblick on the mini drops the connection, nothing is leaked both because of pf and the router outgoing ruleset. It can't even do a DNS lookup through the router. So what do I have to hide with all this? Nothing much really, I just got carried away trying to stop port scans through the VPN tunnel :) In any case this setup works perfectly and it is very stable. The Problem at last! I want to run a minecraft server and I installed that on a separate user account on the mini server (user=mc) to keep things partitioned. I don't want this server accessible through the anonymized VPN tunnel because there are lots more port scans and hacking attempts through that than over my regular IP and I don't trust java in general. So I added the following pf rule on the mini: ### Allow Minecraft public through user mc pass in on $eth_if proto {tcp,udp} from any to any port 24983 user mc pass out on $eth_if proto {tcp, udp} from any to any user mc And these additions on the border firewall: Inbound: Allow always TCP/UDP from any to 10.0.0.40 (NAS) Outbound: Allow always TCP port 80 from 10.0.0.40 to any (needed for online account checkups) This works fine but only when the OpenVPN/Tunnelblick tunnel is down. When up no connection is possbile to the minecraft server from outside of LAN. inside LAN is always OK. Everything else functions as intended. I believe the redirect_gateway push is close to the root of the problem, but I want to keep that specific VPN provider because of the fantastic throughput, price and service. The Solution? How can I open up the minecraft server port outside of the tunnel so it's only available over en0 not the VPN tunnel? Should I a static route? But I don't know which IPs will be connecting...stumbles How secure would to estimate this setup to be and do you have other improvements to share? I've searched extensively in the last few days to no avail...If you've read this far I bet you know the answer :)
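    One hedged direction for the final question, instead of a static route: pf can pin the reply path for a specific port with reply-to, so answers to inbound Minecraft connections leave via the physical gateway rather than following the tunnel's default route. The gateway address below is an assumption, and whether the pf build shipped with OS X 10.8 honours reply-to for this case is something to verify before relying on it:

      # candidate replacement for the existing "pass in ... port 24983" rule in pf.conf
      pass in on $eth_if reply-to ($eth_if 10.0.0.1) proto {tcp, udp} from any to any port 24983 user mc
      # reload and confirm the rule is loaded:
      # pfctl -f /etc/pf.conf && pfctl -sr | grep 24983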

    Read the article

  • Server to server replication, CPU and 32K/corrupt docs

    - by nick wall
    Summary: if database contains a doc with 32K issue or corrupt, on server to server replication it causes marked increase in CPU in nserver.exe task, which effectively causes our server(s) to slow right down. We have a 5 server cluster (1 "hub" and 4 HTTP servers accessed via reverse proxy and SSO for load balancing and redundancy). All are physically located next to each other on network, they don't have dedicated network\ ports for cluster or replication. I realise IBM recommendation is dedicated port for cluster. Cluster queues are in tolerance and under heavy application user load, i.e. the maximum number of documents are being created, edited, deleted, the replication times between servers are negligible. Normally, all is well. Of the servers in the cluster, 1 is considered the "hub", and imitates a PUSH-PULL replication with it's cluster mates every 60mins, so that the replication load is taken by the hub and not cluster mates. The problem we have: every now and then we get a slow replication time from the hub to a cluster mate, sometimes up to 30mins. This maxes out the nserver.exe task on the "cluster mate" which causes it to respond to http requests very slowly. In the past, we have found that if a corrupt document is in the DB, it can have this affect, but on those occasions, the server log will show the corrupt doc noteId, we run fixup, all well. But we are not now seeing any record of corrupt docs. What we have noticed is if a doc with the 32K issue is present, the same thing can happen. Our only solution in that case is to run a : fixup mydb.nsf -V, which shows it is purging a 32K doc. Luckily we run a reverse proxy, so we can shut HTTP servers down without users noticing, but users do notice when a server has the problem! Has anyone else seen this occur? I have set up DDM event handlers for many of the replication events. I have set the replication time out limit to 5 mins (the max we usually see under full user load is 0.1min), to prevent it rep'ing for 30mins as before. This ia a temporary work around. Does anyone know of a DDM event to trap the 32K issue? we could at least then send alert. Regarding 32K issue: this prob needs another thread, but we are finding this relatively hard to find the source of the issue as the 32K event is fairly rare. Our app is fairly complex, interacting with various other external web services, with 2 way data transfer. But if we do encounter a 32K doc, we can't look at field properties, so we can't work out which field has issue which would give us a clue as to which process is culprit. As above, we run a fixup -V. Any help\ comments on this would be gratefully received.

    Read the article

  • Concerning persistence size in the Linux Live Creator

    - by user63085
    Message: Hello everyone! For the last several months I have used the Linux Live USB Creator, which is a very useful app for making portable OSes on flash drives. I mostly use it to test and try out new OSes as they are released, before I decide to do a hard disk installation on the computer. In many cases the application developers allow the “persistence” feature in the flash-drive-installed OS, which is just another way of saying that after multiple boot-ups and shutdowns, all the changes made to the OS are saved on the flash drive. But I have a question about the limit on the persistence size in Linux Live USB Creator (currently version 2.6). I installed Super OS 10 onto a 30 GB partition on my external drive. I wanted to reserve 10 GB for persistence, so that I can install more applications and space will not run out as I update the installed applications or do system updates. So why is it that only 3950 MB can be set aside for persistence? It would be great if, when desired, much more persistence space could be set aside so that it does not run out soon. Also, since I installed the OS on a 30 GB partition, I tried to see how much space is left, but only the remaining persistence space is displayed when I click on the File System folder. For example, right after installing there is 3.5 GB of free space. Where is the remaining 26 GB or so of space on the same drive, and how do I access it? It would be helpful if anyone could explain this. Most importantly, it would be a big relief if the persistence could somehow be expanded by a workaround, so that I can continue using my Super OS 10.04 (now heavily customized) install, which unfortunately has just over 576 MB of space left after I removed OpenOffice.org and installed LibreOffice earlier today. This is what remains of the maximum allowable 3950 MB of persistence space from set-up. Thanks in advance!

    Read the article

  • mod_rewrite all but two files causing loop

    - by mpounsett
    I'm trying to set up a web site to allow the creation of a semaphore file to close the site. The logic I want to follow is:
1. when the semaphore file exists
2. and the request is not for /style.css or /favicon.ico
3. show the content of /closed.html
I have 1 and 3 working, but my exceptions for 2 result in a processing loop when style.css or favicon.ico are requested. This is my most recent attempt:
RewriteEngine on
RewriteCond %{REQUEST_URI} !^/style.css
RewriteCond %{REQUEST_URI} !^/favicon.ico
RewriteCond /usr/local/etc/site/closed -f
RewriteRule ^.*$ /closed.html [L]
This is in a VirtualHost block, not in a Directory. There is no .htaccess file in play. I have also recently tried this, based on an answer I found elsewhere, but with the same (looping) result:
RewriteCond %{REQUEST_URI} ^/style.css [OR]
RewriteCond %{REQUEST_URI} ^/favicon.ico
RewriteRule ^.*$ - [L]
RewriteCond /usr/local/etc/site/closed -f
RewriteRule ^.*$ /closed.html [L]
I expect a request for /style.css or /favicon.ico to fail to match one of the first two rewrite conditions, which should prevent the URI from being rewritten, which should stop the mod_rewrite iteration. However, mod_rewrite seems to think the URI has been rewritten in those cases and iterates over the rules again (and again, and again). The above works properly in all cases except for style.css and favicon.ico; in those cases I exceed the loop limits. What am I missing here that would make the rewrite iteration stop when someone requests style.css or favicon.ico?
EDIT: Here's a loglevel 9 example of what happens using the first ruleset when a request arrives for /style.css. This is just the first two iterations; it continues to loop identically until the limit is reached.
2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (2) init rewrite engine with requested uri /style.css
2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (3) applying pattern '^.*$' to uri '/style.css'
2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (4) RewriteCond: input='/style.css' pattern='!^/style.css' => not-matched
2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (1) pass through /style.css
2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (2) init rewrite engine with requested uri /style.css
2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (3) applying pattern '^.*$' to uri '/style.css'
2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (4) RewriteCond: input='/style.css' pattern='!^/style.css' => not-matched
2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (1) pass through /style.css
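For what it's worth, one pattern that is sometimes suggested for this kind of "whitelist a couple of static files, rewrite everything else" setup is to short-circuit the static assets with their own rule placed first, and to guard the closed-page rule against rewriting /closed.html to itself. A rough sketch only, untested here; the END flag assumes Apache 2.4 or later (on 2.2 one would fall back to [L]):
RewriteEngine on
# Serve the two static assets as-is and stop all further rewrite processing for them
RewriteRule ^/(style\.css|favicon\.ico)$ - [END]
# While the semaphore file exists, send everything that is not already /closed.html to the closed page
RewriteCond %{REQUEST_URI} !^/closed\.html$
RewriteCond /usr/local/etc/site/closed -f
RewriteRule ^ /closed.html [L]
Note the escaped dots and the $ anchor, so the first rule matches those exact paths rather than any URI that merely starts with those strings.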

    Read the article

  • Upgrading from TFS 2010 RC to TFS 2010 RTM done

    - by Martin Hinshelwood
    Today is the big day: with the launch of Visual Studio 2010 already done in Asia, and rolling around the world towards us, we are getting ready for the RTM (release). We have had TFS 2010 in production for nearly 6 months and have had only minimal problems. Update 12th April 2010 – Added Scott Hanselman’s tweet about the MSDN download release time. SSW was the first company in the world outside of Microsoft to deploy Visual Studio 2010 Team Foundation Server to production, not once, but twice. I am hoping to make it 3 in a row, but with all the hype around the new version, and with it being a production release and not just a go-live, I think there will be a lot of competition.
"Developers: MSDN will be updated with #vs2010 downloads and details at 10am PST *today*!" @shanselman - Scott Hanselman
Same as before, we need to uninstall 2010 RC and install 2010 RTM. The installer will take care of all the complexity of actually upgrading any schema changes. If you are upgrading from TFS 2008 to TFS 2010 you can follow our Rules To Better TFS 2010 Migration and read my post on our successes. We run TFS 2010 in a Hyper-V virtual environment, so we have the advantage of running a snapshot as well as taking a DB backup. The plan:
Done - Snapshot the Hyper-V server. Microsoft does not support taking a snapshot of a running server, for very good reason, and Brian Harry wrote a post after my last upgrade explaining why you should never snapshot a running server.
Done - Uninstall Visual Studio Team Explorer 2010 RC. You will need to uninstall all of the Visual Studio 2010 RC client bits that you have on the server.
Done - Uninstall TFS 2010 RC
Done - Install TFS 2010 RTM
Done - Configure TFS 2010 RTM. Pick the Upgrade option and point it at your existing “tfs_Configuration” database to load all of the existing settings.
Done - Upgrade the SharePoint Extensions
Upgrade Build Servers (Pending)
Test the server
The back-out plan, and you should always have one, is to restore the snapshot.
Upgrading to Team Foundation Server 2010 – Done
The first thing you need to do is turn off the TFS server, then log into the Hyper-V server and create a snapshot.
Figure: Make sure you turn the server off and delete all old snapshots before you take a new one.
I noticed that the snapshot that was taken before the Beta 2 to RC upgrade was still there. You should really delete old snapshots before you create a new one, but in this case the SysAdmin (who is currently tucked up in bed) asked me not to. I guess he is worried about a developer messing up his server. Turn your server on and wait for it to boot in anticipation of all the nice shiny RTM’ness that is coming next. The upgrade procedure for TFS 2010 is to uninstall the old version and install the new one.
Figure: Remove Visual Studio 2010 Team Foundation Server RC from the system.
Figure: Most of the heavy lifting is done by the uninstaller, but make sure you have removed any of the client bits first, specifically Visual Studio 2010 or Team Explorer 2010.
Once the uninstall is complete (it took around 5 minutes for me) you can begin the install of the RTM. Running the 64-bit OS will allow the application to use more than 2 GB of RAM, which, while not common, may be of use in heavy load situations.
Figure: It is always recommended to install the 64-bit version of a server application where possible.
With SharePoint 2010, Exchange 2010 and even Windows Server 2008 R2 being 64 bit only, I do not think there will be another release of a server app that is 32 bit. You then need to choose what it is you want to install. This depends on how you are running TFS and on how many servers. In our case we run TFS and the Team Foundation Build Service (controller only) on our TFS server, along with Analysis Services and Reporting Services, but our SharePoint server lives elsewhere.
Figure: This always confuses people, but in reality it makes sense. Don’t install what you do not need; every extra you install has an impact on performance. If you are integrating with SharePoint you will need to run this install on every front end server in your farm, and don’t forget to upgrade your build servers and proxy servers later.
Figure: Selecting only Team Foundation Server (TFS) and Team Foundation Build Services (TFBS).
It is worth noting that if you have a lot of builds kicking off, and hence a lot of get operations against your TFS server, you can use a proxy server to cache the source control on another server in between your TFS server and your build servers.
Figure: Installing Microsoft .NET Framework 4 takes the most time.
Figure: Now run Windows Update and SSW Diagnostic to make sure all your bits and bobs are up to date. Note: SSW Diagnostic will check your Power Tools, add-ons, check-in policies and other bits as well.
Configure Team Foundation Server 2010 – Done
Now you can configure the server. If you have no key you will need to pick “Install a Trial Licence”, but it is only £500, or free with an MSDN subscription. Anyway, if you pick Trial you get 90 days to get your key.
Figure: You can pick trial and add your key later using the TFS Server Admin.
Here is where the real choices happen. We are doing an upgrade from a previous version, so I will pick Upgrade, the same as all you folks that are using the RC or TFS 2008.
Figure: The upgrade wizard takes your existing 2010 or 2008 databases and upgrades them to the release.
Once you have entered your database server name you can click “List available databases” and it will show what it can upgrade.
Figure: Select your database from the list and, at this point, make sure you have a valid backup.
At this point you have not made ANY changes to the databases. The configuration wizard will load configuration from your existing database if you have one. If you are upgrading TFS 2008, refer to Rules To Better TFS 2010 Migration. Mostly the default values will suffice during the wizard, but depending on the configuration you want you can pick different options.
Figure: Set the application tier account and authentication method to use. We use NTLM to keep things simple, as we host our TFS server externally for our remote developers.
Figure: Setting your TFS server URLs to be the remote URLs allows the reports to be accessed without using VPN. Very handy for those remote developers.
Figure: Detected the existing warehouse, no problem.
Figure: Again we love green ticks. It gives us a warm fuzzy feeling.
Figure: The username for connecting to Reporting Services should be a domain account (if you are on a domain, that is).
Figure: Set up the SharePoint integration to connect to your external SharePoint server. You can take the option to connect later.
You then need to run all of your readiness checks. These checks can save your life!
They check all of the settings that you have entered, as well as checking that all the external services are configured and running properly. There are two reasons that TFS 2010 is so easy and painless to install where previous versions were not. First, Microsoft changed the install into two steps, install and configuration. The second reason is that they have pulled out all of the stops in making the install run all the checks necessary to make sure that once you start the install it will complete. If you find any errors I recommend that you report them on http://connect.microsoft.com so everyone can benefit from your misery.
Figure: Now we have everything set up, the configuration wizard can do its work.
Figure: Took a while on the “Web site” stage at one point, but zipped through after that.
Figure: Last wee bit. TFS needs to do a little tinkering with the data to complete the upgrade.
Figure: All upgraded. I am not worried about the yellow triangle, as SharePoint was being a little silly:
Exception Message: TF254021: The account name or password that you specified is not valid. (type TfsAdminException)
Exception Stack Trace:    at Microsoft.TeamFoundation.Management.Controls.WizardCommon.AccountSelectionControl.TestLogon(String connectionString)    at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
[Info   @16:10:16.307] Benign exception caught as part of verify:
Exception Message: TF255329: The following site could not be accessed: http://projects.ssw.com.au/. The server that you specified did not return the expected response. Either you have not installed the Team Foundation Server Extensions for SharePoint Products on this server, or a firewall is blocking access to the specified site or the SharePoint Central Administration site. For more information, see the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=161206). (type TeamFoundationServerException)
Exception Stack Trace:    at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.VerifyTeamFoundationSharePointExtensions(ICredentials credentials, Uri url)    at Microsoft.TeamFoundation.Admin.VerifySharePointSitesUrl.Verify()
Inner Exception Details:
Exception Message: TF249064: The following Web service returned an response that is not valid: http://projects.ssw.com.au/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. Either the extensions are not installed, the request resulted in HTML being returned, or there is a problem with the URL. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://projects.ssw.com.au. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. (type TeamFoundationServerInvalidResponseException)
Exception Data Dictionary: ResponseStatusCode = InternalServerError
I’ll look at SharePoint afterwards; probably the SharePoint box just needs a restart or a kick. If there is a problem with SharePoint it will come out in testing, but I will definitely be passing this on to Microsoft.
Upgrading the SharePoint connector to TFS 2010
You will need to upgrade the Extensions for SharePoint Products and Technologies on all of your SharePoint farm front end servers. To do this, uninstall the TFS 2010 RC from them in the same way as the server, and then install just the RTM Extensions.
Figure: Only install the SharePoint Extensions on your SharePoint front end servers. TFS 2010 supports both SharePoint 2007 and SharePoint 2010.
Figure: When you configure SharePoint it uploads all of the solutions and templates.
Figure: Everything is uploaded successfully.
Figure: TFS even remembered the settings from the previous installation, fantastic.
Upgrading the Team Foundation Build Servers to TFS 2010
Just like on the SharePoint servers, you will need to upgrade the build server to the RTM. Just uninstall TFS 2010 RC and then install only the Team Foundation Build Services component. Unlike on the SharePoint server, you will probably have some version of Visual Studio installed; you will need to remove this as well. (Coming Soon)
Connecting Visual Studio 2010 / 2008 / 2005 and Eclipse to TFS 2010
If you have developers still on Visual Studio 2005 or 2008 you will need to download the respective compatibility pack:
Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010
Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010
If you are using Eclipse you can download the new Team Explorer Everywhere install for connecting to TFS. Get your developers to check that they have the latest version of their applications with SSW Diagnostic, which will check for Service Packs and hotfixes to Visual Studio as well.
Technorati Tags: TFS, TFS2010, TFS 2010, Upgrade

    Read the article

  • Oracle Fusion Applications: Changing the Game

    - by kellsey.ruppel(at)oracle.com
    Originally posted in the Oracle Profit Magazine, November 2010 Edition. When the order processing system red-flags a customer's credit status, the IT department doesn't get the customer's call. When a supplier misses a delivery date for a key automotive assembly, it's not the CIO who has to answer for the error. Knowledge workers (known in IT circles as "users") are on the front lines when an exception occurs in an established business process. They're also the ones who study sales trends to decide when to open a new store in an up-and-coming neighborhood, which products are most profitable, how employee skill sets are evolving, and which suppliers are most efficient. In short, knowledge workers are masters of business as unusual. Traditional enterprise resource planning (ERP) systems and other familiar enterprise applications excel at automating, managing, and executing standard business processes. These programs shine when everything goes as planned. Life gets even trickier when a traditional application needs to be extended with a new service or an extra step is added to a business process when new products are brought to market, divisions are merged, or companies are acquired. Monolithic applications often need the IT department to step in and make the necessary adjustments--incurring additional costs and delays. Until now. When Oracle unveiled the much-anticipated family of Oracle Fusion Applications at Oracle OpenWorld in September 2010, knowledge workers in particular had a lot to cheer about. Business users will soon have ready access to analytical information and collaboration tools in the context of what they are working on, so they can make better decisions when problems or opportunities arise. Additionally, the Oracle Fusion Applications platform will make it easy for business users to tweak processes, create new capabilities, and find information, often without the need for IT department assistance and while still following company guidelines. And IT leaders will be happy to hear about new deployment options, guided implementation and setup tools, and cost-saving management capabilities. Just as important, the underlying technologies in Oracle Fusion Applications will allow organizations to choose among their existing investments and next-generation enterprise applications so they can introduce innovations at a pace that makes the most business and financial sense. "Oracle Fusion Applications are architected so you don't have to do rip and replace," says Jim Hayes, managing director of the consulting firm Accenture. "That's very important for creating a business case that will get through the steering committee and be approved by the board. It shows you can drive value and make a difference in the near term." For these and other reasons, analysts and early adopters are calling Oracle Fusion Applications a game changer for enterprise customers. The differences become apparent in three key areas: the way we innovate, work, and adopt technology.
Game Changer #1: New Standard for Innovation
Change is a constant challenge for most businesses, whether the catalysts are market dynamics, new competition, or the ever-expanding regulatory environment. And, in an ongoing effort to differentiate, business leaders are constantly looking for new ways to do business, serve constituents, and bring new products and services to market. In addition, companies face significant costs to keep their applications up-to-date.
For example, when a company adds new suppliers to a procurement system, the IT shop typically has to invest time, effort, and even consulting fees for custom integrations that allow various ERP systems to communicate with each other. Oracle Fusion Applications were built on Web services and a modular SOA foundation to ease customizations and integration activities among all applications--whether from Oracle or another vendor. Interfaces and updates written in ubiquitous Java, rather than a proprietary coding language, allow organizations to tap into existing in-house technical skills rather than seek expensive outside specialists. And with SOA, organizations can extend a feature set or integrate with other SOA environments by combining Web services such as "look up customer" into a new business process managed by the BPEL orchestration engine. Flexibility like this has long-term implications. "Because users capture these changes at a higher metadata layer, not in the application's code, changes and additions are protected even as new versions of Oracle Fusion Applications are released," says Steve Miranda, senior vice president of applications development at Oracle. "This is a much more sustainable approach because you don't incur costly customizations that prevent upgrades and other innovations." And changes are easier to make: if one change is made in the metadata, that change is automatically reflected throughout the application interface, business intelligence, business process, and business logic.
Game Changer #2: New Standard for Work
Boosting productivity comes down to doing the basics right: running business processes more efficiently and managing exceptions more effectively, so users can accomplish more in the course of a day or spend more quality time with the most profitable customers. The fastest way to improve process efficiency is to reduce the number of steps it takes to execute common tasks, such as ordering office equipment from an internal procurement system. Oracle Fusion Applications will deliver a complete role-based user experience with business intelligence and collaboration capabilities provided in the context of the work at hand. "We created every Oracle Fusion Applications screen by asking 'What does the user need to know?' 'What does he or she need to do?' and 'Who do they need to work with to get the job done?'" Miranda explains. So when the sales department heads need new laptops, the self-service procurement screen will not only display a list of approved vendors and configurations, but also a running list of reviews by coworkers who recently purchased the various models. Embedded intelligence may also display prevailing delivery lead times based on actual order histories, not the generic shipping dates vendors may quote. The pervasive business intelligence serves many other business activities across all areas of the enterprise. For example, a manager considering whether to promote a direct report can see the person's employee profile, with a salary history, appraisal summaries, and a rundown of skills and training. This approach to business intelligence also has implications for supply chain management. "One of the challenges at Ingersoll Rand is lack of visibility in our supply chain," says Mike Macrie, global director of enterprise applications for global industrial firm Ingersoll Rand.
"Oracle Fusion Applications are going to provide the embedded intelligence to give us that visibility and give us the ability to analyze those orders at any point in our supply chain." Oracle Fusion Applications will also create a "role-based user experience" that displays a work list of events that need attention, based on user job function. Role awareness guides users with daily lists of action items and exceptions. So a credit manager may see seven invoices with discounts that are about to expire or 12 suppliers that have been put on hold because credit memos are awaiting approval. Individualization extends to the search capabilities of Oracle Fusion Applications. The platform uses Web-style search screens powered by an Oracle enterprise search engine, with a security framework that filters search results so individuals will only see the internal information they're authorized to access. A further aid to productivity is Oracle Fusion Applications' integration with Web 2.0 collaboration and social networking resources for business environments. Hover-over text will reveal relevant contact information whenever the name of a person appears in an Oracle Fusion Application. Users can connect via an online chat, phone call, or instant message without leaving the main application, reducing the time required for an accounts payable staffer to resolve a mismatch between an invoiced charge and the service record, for example. Addresses of suppliers, customers, or partners will also initiate hover-over text to show contact details and Web-based maps. Finally, Oracle Fusion Applications will promote a new way of working with purpose-driven communities that can bring new efficiencies to everything from cultivating sales leads to managing new projects. As soon as a lead or project materializes, the applications will automatically gather relevant participants into an online community that shares member contact information, schedules, discussion forums, and Wiki pages. "Oracle Fusion Applications will allow us to take it to the next level with embedded Web 2.0 tools and the embedded analytics," says Steve Printz, CIO and vice president, supply chain management, at window-and-door manufacturer Pella. "[This] allows those employees today who are processing transactions to really contribute to the success of the company and become decision-makers." Game Changer #3: New Standard for Technology AdoptionAs IT becomes a dominant component of how businesses run and compete, organizations need to lower the cost of implementing applications and introducing new application features. In the past, rolling out new code often required creating a test bed system, moving beta code to a separate system for user feedback, and--once all the revisions were made--moving version one of the software onto production systems, where business users could finally get the needed new features. Oracle Fusion Applications will use a dedicated setup manager application to streamline this process. First, the setup manager will help scope out the project, querying users about their requirements. "From those questions and answers we determine the steps and the order of those steps that will enable that task," Miranda says. Next, system utilities will assign tasks to owners, track completion status, and monitor the overall status of a programming effort. Oracle Fusion Applications can then recommend Web services that allow users to migrate setup choices and steps across all the various deployments of the application. 
Those setup capabilities automate the migration from test systems to production systems, as well as between different business units that may be using the same application. "The self-service ability of the setup manager helps business users change setups with very little intervention from the IT team," says Ravi Kumar, vice president at IT services company Infosys. "That to me is a big difference from how we've viewed enterprise applications before." For additional flexibility, organizations will be able to adopt Oracle Fusion Applications modules in either of two modes: a single-instance alternative uses one database for all Oracle Fusion Applications, while a "pillar mode" creates separate databases to underpin each application. This means IT departments running any one of Oracle's applications or even third-party applications can plug Oracle Fusion Applications modules into their environment and see additional business value created on top of their existing systems. And Oracle Fusion Applications offer a hybrid approach to deployment. The applications are all software-as-a-service-ready, so customers can choose on-premises, public or private cloud, or a combination of these to suit their business needs. It's that combination of flexibility and a roadmap for the future that may be the biggest game changer of all. "The Oracle Fusion Applications architecture allows us to migrate our company at a pace that's consistent with our business strategy, whereas before we might have had to do it with a massive upgrade," says Macrie of Ingersoll Rand. "We're looking forward to that architecture to really give us more flexibility in how we migrate over time."
For More Information
User Input Key to the Success of Oracle Fusion Applications
Transforming Coexistence into Strategic Value
Under the Hood
Oracle Fusion Applications
Oracle Service-Oriented Architecture

    Read the article

  • ASP.NET MVC : AJAX ActionLink - Persist Data

    - by Mio
    Hi guys, I'm really new at this and I was searching the web for an answer to my question and couldn't find it, so here I am posting it :) I'm trying to create a new record in my table Facility. For my foreign keys I'm displaying the choices in tables instead of dropdowns. When the user clicks on the select link, which is an Ajax.ActionLink(), I want to retrieve the right record from the DB, set the foreign key of my Facility object to the one selected, and replace the div with a new partial view. The problem is that when I try to submit the form, the Facility object doesn't seem to have the foreign key that I just set in my AJAX action in the controller. And if the user has entered some data in the other fields of the create form, I don't want them to lose what they already entered. Here's my code. Model only contains a Facility.
public ActionResult Create()
{
    Model.Facility = new Facility();
    return View(Model);
}
This is part of my Create view:
<div id="FacilityTypePartialView">
    <% Html.RenderPartial("FacilityType"); %>
</div>
This is my partial view FacilityType:
<% if (Model.IsNewFacility()) { %>
    <p> Id: <%= Html.Encode(Model.Facility.FacilityType.FId)%> </p>
    <p> Type: <%= Html.Encode(Model.Facility.FacilityType.FType)%> </p>
    <p> Description: <%= Html.Encode(Model.Facility.FacilityType.FDescription) %> </p>
<% } %>
<p> <%= Html.ActionLink("Manage Facility Type", "Index","FacilityType") %> </p>
<table id="FacilityTypesList">
    <tr>
        <th> Select </th>
        <th> FId </th>
        <th> FType </th>
        <th> FDescription </th>
    </tr>
    <% foreach (var item in Model.GetFacilityTypes()) { %>
    <tr>
        <td> <%=Ajax.ActionLink("Select", "FacilityTypeSelect", new { id = item.FId}, new AjaxOptions { UpdateTargetId = "FacilityTypePartialView" })%> </td>
        <td> <%= Html.Encode(item.FId) %> </td>
        <td> <%= Html.Encode(item.FType) %> </td>
        <td> <%= Html.Encode(item.FDescription) %> </td>
    </tr>
    <% } %>
</table>
Here's my AJAX action:
public PartialViewResult FacilityTypeSelect(int id)
{
    Facility facility = new Facility();
    facility.FacilityType = _repository.GetFacilityType(id);
    Model.Facility = facility;
    if (this.Request.IsAjaxRequest() == false)
    {
        return PartialView("FacilityType/FacilityType", Model);
    }
    else
    {
        return PartialView("FacilityType/FacilityTypeSelected", Model);
    }
}
Finally, my POST method:
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Create(Facility facility)
{
    Model.Facility = facility;
    if (ModelState.IsValid)
    {
        try
        {
            _repository.AddEntity(facility);
            _repository.Save();
            return RedirectToAction("Details", new { id = facility.Id });
        }
        catch
        {
        }
    }
    return View("Create", Model);
}
The Facility object coming from the view has Facility.FacilityType set to nothing.
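One thing that may explain this (a guess based on the code above, not a definitive fix): anything stored in the controller's Model property during the AJAX call will not survive until the form post, because each request gets a new controller instance, so the selected FacilityType has to travel with the form itself. A minimal sketch of round-tripping the selected id in a hidden field; the field name SelectedFacilityTypeId and the nullable action parameter are assumptions, not part of the original code.
<%-- In the FacilityType partial view, inside the block where a type has been selected,
     emit the chosen id so it posts back with the rest of the form --%>
<%= Html.Hidden("SelectedFacilityTypeId", Model.Facility.FacilityType.FId) %>

// In the POST action, rebuild the association from the posted id before saving
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Create(Facility facility, int? selectedFacilityTypeId)
{
    if (selectedFacilityTypeId.HasValue)
    {
        // Re-fetch the FacilityType so the new Facility carries the foreign key
        facility.FacilityType = _repository.GetFacilityType(selectedFacilityTypeId.Value);
    }
    // ...then validate and save exactly as in the original Create action...
    return View("Create", Model);
}
Because the id rides along as a plain form field, whatever the user typed into the other inputs is posted back too, so nothing they entered is lost when the selection changes.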

    Read the article

< Previous Page | 217 218 219 220 221 222 223 224 225 226 227 228  | Next Page >