Search Results

Search found 1246 results on 50 pages for 'backupup compression'.


  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.

CLSF - 17th October

XFS update (led by Jeff Liu)

XFS keeps up a rapid pace of change, focused especially on infrastructure/performance improvements as well as new feature development. This is reflected in sample statistics for XFS, Ext4+JBD2 and Btrfs:

    # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs
    # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/ext4 fs/jbd2
    # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/btrfs

    XFS:       141 files changed, 27598 insertions(+), 19113 deletions(-)
    Ext4+JBD2:  39 files changed, 10487 insertions(+),  5454 deletions(-)
    Btrfs:      70 files changed, 19875 insertions(+),  8130 deletions(-)

What made up those changes in XFS?

- Self-describing metadata (CRC32c). This new feature contributed about 70% of the code changes. It can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
- Transaction log space reservation improvements. With this change, the log space reservation is calculated at mount time rather than at runtime, reducing CPU overhead.
- User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10. Thanks to Dwight Engen for his efforts on this.
- Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now both can be enabled at once, but only for the v5 superblock, i.e. with CRC enabled.
- CONFIG_XFS_WARN, a new lightweight runtime debugging option that can be deployed in production environments.
- Readahead during log object recovery; this change speeds up log replay significantly.
- Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes carrying post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space at or near EDQUOT. It supports background scanning, which occurs at a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2).

Bitter arguments ensued in this session, especially over comparisons with Ext4 and Btrfs in different areas; I had to spend the whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays, because:

- Stability. XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors in XFS than in other filesystems over the past year or more. I owe that to the XFS upstream code reviewers, who always perform serious code review as well as testing.
- Good performance for both large and small files. "XFS does not work well for small files" has been an old story for years.
- Best choice (maybe) for distributed PB-scale filesystems; e.g. Ceph recommends deploying OSD daemons on XFS because Ext4 has a limited xattr size.
- Best choice for large storage (>16TB). Ext4 does not support a single file larger than roughly 15.95TB.
- Scalability. Any objection to XFS being the best on this point? :)
- XFS handles transaction concurrency better than Ext4. Why? The maximum log size in XFS is 2038MB, compared to 128MB in Ext4.
- Misc. Ext4 is widely used and has proved fast and stable under various loads and scenarios; XFS just needs more users, and Btrfs is still on the road to maturity.
Ceph Introduction (led by Li Wang)

This was a hot topic. Li gave us a nice introduction to the design as well as their current work. The Ceph client has actually been included in the Linux kernel since 2.6.34, and OpenStack has supported Ceph since Folsom, but it does not seem to be widely deployed in production yet. Their major work focuses on inline data support. Separating metadata from data storage increases file access time: one access needs two round trips, fetching the metadata from the MDS and then the data from the OSD, so small-file access is limited by network latency. The solution is to store the data of small files together with the metadata, so that when a small file is accessed the metadata server can push both metadata and data to the client at the same time. This avoids the overhead of calculating the data offset and saves the round trip to the OSD. They have only run small-scale tests of this feature, but saw noticeable improvements. Test environment: Intel, 2 CPUs / 12 cores, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disks, 15 OSDs, 1 MDS, 1 MON. Sequential read performance for 1K files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable enough to deploy in production at large, PB-level scale, but they could not give a positive answer; it seems Ceph is not even rolled out across DreamHost (subject to confirmation). According to Li, they have deployed Ceph only on small-scale storage (32 nodes), although they would like to try 6000 nodes in the future.

Improving Linux swap for flash storage (led by Shaohua Li)

Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM, and a quick way to do that is to use the SSD as swap. But Linux swap was designed for slow hard disks, so there are a lot of challenges in using SSDs efficiently for swap.

SWAPOUT:

- swap_map scan. swap_map is the in-memory data structure that tracks swap disk usage, but it requires a slow linear scan, which becomes a bottleneck when looking for many adjacent pages, as happens with SSDs. Shaohua changed it to a list of clusters (128K each), resulting in an O(1) algorithm (a toy model of the idea appears below). However, this approach requires strict cluster alignment and is enabled only for SSDs.
- IO pattern. In most cases swap IO arrives in an interleaved pattern, because there are multiple reclaimers or because a free cluster is shared by all reclaimers. The block layer can merge interleaved IO to some extent, but we cannot count on it completely. Hence a per-cpu cluster was added on top of the previous change; it helps each reclaimer do sequential IO, which the block layer can merge more easily.
- TLB flush. If we are reclaiming an active page, we first move it from the active LRU list to the inactive LRU list, and then reclaim it from the inactive LRU to swap it out. During this process the PTE must be cleared twice: first the 'A' (accessed) bit, then the 'P' (present) bit. Processors have to send a lot of IPIs, which makes the TLB flushes really expensive. Some work has been done to improve this, including reworking smp_call_function_many() and removing the first TLB flush on x86, but there are still arguments here and only part of the work has been pushed to mainline.
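Before moving on to swap-in, here is a toy model of the swap_map change above. It is purely illustrative (the constants are made up and this is not the kernel code), but it shows why allocation becomes O(1) and why slots handed out from the current cluster (a per-cpu cluster in the kernel) are adjacent, which is what enables sequential IO:

    from collections import deque

    SLOTS_PER_CLUSTER = 32            # toy value; the kernel clusters 128K worth of slots

    class SwapMap:
        """Toy model: keep whole free clusters on a list instead of
        linearly scanning per-slot state for a free run."""
        def __init__(self, nr_clusters):
            self.free_clusters = deque(range(nr_clusters))
            self.current = None       # cluster being filled (per-cpu in the kernel)
            self.fill = 0             # slots handed out from the current cluster
            self.live = {}            # cluster -> number of in-use slots

        def alloc_slot(self):
            if self.current is None or self.fill == SLOTS_PER_CLUSTER:
                if not self.free_clusters:
                    raise MemoryError("swap full")
                self.current = self.free_clusters.popleft()   # O(1), no scan
                self.fill = 0
            slot = self.current * SLOTS_PER_CLUSTER + self.fill   # adjacent slots
            self.fill += 1
            self.live[self.current] = self.live.get(self.current, 0) + 1
            return slot

        def free_slot(self, slot):
            c = slot // SLOTS_PER_CLUSTER
            self.live[c] -= 1
            if self.live[c] == 0 and c != self.current:
                del self.live[c]
                self.free_clusters.append(c)   # whole cluster is reusable again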
SWAPIN:

- Page faults do iodepth=1 synchronous IO, which is wasteful if we only ever issue page-sized IOs. The obvious solution is swap readahead, but the current in-kernel swap readahead is arbitrary (always 8 pages) and does not perform well for both random and sequential access workloads. Shaohua introduced a new madvise(MADV_WILLNEED) flag to do swap prefetch, so the change lives in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement could be made there too).
- SWAP discard. As we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to asynchronous discard, which allows discard and write to run at the same time. The unit of discard was also optimized to a cluster.
- Misc: lock contention. With many concurrent swapouts and swapins, contention on locks such as anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. Some contention remains on very fast SSDs, though, because of the swap cache address_space lock.

Zproject (led by Bob Liu)

Bob gave us a very nice introduction to the current state of memory compression. There are now three projects (zswap/zram/zcache) that all aim to smooth out swap IO storms and improve performance, but each has its own pros and cons.

- ZSWAP is implemented on top of the frontswap API and uses a dynamic allocator named zbud to allocate free pages. "Zbud" means pairs of zpages are "buddied": at most two compressed pages are stored per page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages, which are then written to the real swap device; this can fail, however, when allocating the two pages.
- ZRAM acts as a compressed ramdisk used as a swap device. It uses zsmalloc as its allocator, which has high density but may suffer from fragmentation. Besides, page reclaim is hard, since freeing just one page may require more pages to decompress into. ZRAM is preferred on embedded systems, which may not have any real swap device. Both ZRAM and ZSWAP currently live in drivers/staging, and there have been discussions in the mm community about merging ZRAM into ZSWAP, or vice versa, but no agreement yet.
- ZCACHE handles file-page compression, but it was recently removed from staging.

From industry (led by Tang Jie, LSI)

An LSI engineer introduced several new products to us. The first was RAID 5/6 cards that use full-stripe writes to improve performance. The second was the SandForce flash controller, which can recognize data file types (data entropy) to reduce write amplification (WA) for nearly all writes; this is called DuraWrite, and a typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression transparently to the upper layers. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can hold up to 2x TB of data, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application for this is databases. Another thing worth mentioning is the NV-DRAM in NMR/Raptor, which is directly exposed to the host system. Applications can access the NV-DRAM directly through a memory address, using the standard mmap() system call. He said this is very useful for database logging today.
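To sketch that access model (illustrative only: an ordinary file stands in for an NV-DRAM device, the path is a made-up placeholder, and mmap.madvise() needs Linux and Python 3.8+): the application maps the device once, then uses plain loads and stores, optionally hinting the kernel the same way the MADV_WILLNEED swap prefetch above does.

    import mmap, os

    PATH = '/tmp/nvlog.bin'   # placeholder; a real setup would map the NVM device
    SIZE = 1 << 20            # 1 MiB

    fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, SIZE)
    mm = mmap.mmap(fd, SIZE)  # from here on, plain loads/stores hit the mapping

    mm.madvise(mmap.MADV_WILLNEED)   # hint: fault the pages in ahead of use
    mm[0:16] = b'log record 00001'   # store via memory, no write(2) syscall
    mm.flush()                       # msync(); real NVM needs its own persistence story

    mm.close()
    os.close(fd)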
NVM products of this kind have begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will have an effect on the current OS layers, especially on filesystems; journaling, for example, may need to be redesigned to fully utilize nonvolatile memory.

OCFS2 (led by Canquan Shen)

Without a doubt, Huawei has been the biggest contributor to OCFS2 in the past two years. They have posted 46 patches upstream, and 39 of them have been merged. Their current deployments are based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major work in progress is support for ATS (atomic test and set), which can work together with DLM. The idea looks inspired by VMware's VMFS locking, i.e. http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

CLK - 18th October 2013

Improving Linux Development with Better Tools (Andi Kleen)

This talk focused on how to find and solve bugs as Linux's complexity keeps growing. Generally, we can do this with the following kinds of tools:

- Static code checkers, e.g. sparse, smatch, coccinelle, the clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things, from simple mistakes to complex problems, but the challenges are that some are very slow, they produce false positives, and it may take a concentrated effort to get the false positives down. In particular, no static checker he has found can follow indirect calls ("OO in C", common in the kernel):

        struct foo_ops {
            int (*do_foo)(struct foo *obj);
        };
        foo->do_foo(foo);

- Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers over it.
- Fuzzers/test suites. E.g. Trinity is a great tool that finds many bugs, but it needs a manual model for each syscall. Modern fuzzers use automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
- Debuggers/tracers for understanding code, e.g. ftrace, which can dump on events/oopses/custom triggers, but in many cases the overhead is still too high to leave running while debugging.
- Tools for reading/understanding source, e.g. grep/cscope. They work great for many cases but do not understand indirect pointers (the OO-in-C model used in the kernel). "Give me all 'do_foo' instances":

        struct foo_ops {
            int (*do_foo)(struct foo *obj);
        } ops = { .do_foo = my_foo };
        foo->do_foo(foo);

  It would be great to have a cscope-like tool that understands this based on types/initializers (a toy sketch appears in the postscript below).

XFS: The High Performance Enterprise File System (Jeff Liu) [slides]

I gave a talk introducing the disk layout and unique features of XFS, as well as the recent changes. The slides include some charts comparing the small-file performance of XFS, Btrfs and Ext4. About a dozen attendees raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and one other attendee. The audience questions mainly focused on stability and on comparisons with other filesystems.

Linux Containers (Feng Gao)

The speaker introduced the purpose of each kind of namespace, including mount/UTS/IPC/network/PID/user, as well as the system API/ABI. For the userspace tools, he mainly focused on Libvirt LXC rather than LXC itself.
Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver. It can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk Feng also mentioned two possible new namespaces for the future: the first is an audit namespace, though it is not yet clear whether it should be tied to the user namespace; the other is a syslog namespace, where the question is whether we really need it.

In-memory Compression (Bob Liu)

Same as at CLSF: a nice introduction, which I have already covered above.

Misc

There were some other talks, on ACPI-based memory hotplug, smart wake-affinity in the scheduler, and so on, but my head is not big enough to have recorded all of them.

-- Jeff Liu
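P.S. On Andi's wish for a cscope-like tool that understands initializers: even a crude, purely textual scanner helps in practice. The toy sketch below is illustrative only; it is not a real tool, it has no C parser or type information, and the "./fs do_foo" usage is just an example. It finds designated-initializer bindings of the form ".do_foo = my_foo":

    #!/usr/bin/env python3
    # Toy "who implements do_foo?" scanner: resolve C struct-initializer
    # callbacks of the form ".do_foo = my_foo", which grep/cscope do not
    # follow. A real tool would need a proper C parser and type information.
    import re
    import sys
    import pathlib

    def find_implementations(root, member):
        # Matches designated initializers: .member = function_name
        pat = re.compile(r'\.\s*%s\s*=\s*([A-Za-z_]\w*)' % re.escape(member))
        for path in pathlib.Path(root).rglob('*.[ch]'):
            try:
                text = path.read_text(errors='ignore')
            except OSError:
                continue
            for m in pat.finditer(text):
                yield path, m.group(1)

    if __name__ == '__main__':
        root, member = sys.argv[1], sys.argv[2]   # e.g. ./fs do_foo
        for path, fn in find_implementations(root, member):
            print(f'{path}: .{member} = {fn}')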


  • Cannot determine ethernet address for proxy ARP on PPTP

    - by Linux Intel
I installed a PPTP server on a CentOS 6 64-bit server.

    PPTP server IP : 55.66.77.10
    PPTP local IP  : 10.0.0.1
    Client1 IP     : 10.0.0.60 (CentOS 5 64-bit)
    Client2 IP     : 10.0.0.61 (CentOS 5 64-bit)

The PPTP server can ping Client1, and Client1 can ping the PPTP server. The PPTP server can ping Client2, and Client2 can ping the PPTP server. The problem is that Client1 cannot ping Client2, and I also get this error in the PPTP server's error log:

    Cannot determine ethernet address for proxy ARP

Ping from Client2 to Client1:

    PING 10.0.0.60 (10.0.0.60) 56(84) bytes of data.
    --- 10.0.0.60 ping statistics ---
    6 packets transmitted, 0 received, 100% packet loss, time 5000ms

route -n on the PPTP server:

    Destination  Gateway      Genmask          Flags Metric Ref Use Iface
    10.0.0.60    0.0.0.0      255.255.255.255  UH    0      0   0   ppp0
    10.0.0.61    0.0.0.0      255.255.255.255  UH    0      0   0   ppp1
    55.66.77.10  0.0.0.0      255.255.255.248  U     0      0   0   eth0
    10.0.0.0     0.0.0.0      255.0.0.0        U     0      0   0   eth0
    0.0.0.0      55.66.77.19  0.0.0.0          UG    0      0   0   eth0

route -n on Client1:

    Destination  Gateway      Genmask          Flags Metric Ref Use Iface
    10.0.0.1     0.0.0.0      255.255.255.255  UH    0      0   0   ppp0
    55.66.77.10  70.14.13.19  255.255.255.255  UGH   0      0   0   eth0
    10.0.0.0     0.0.0.0      255.0.0.0        U     0      0   0   eth1
    0.0.0.0      70.14.13.19  0.0.0.0          UG    0      0   0   eth0

route -n on Client2:

    Destination  Gateway       Genmask          Flags Metric Ref Use Iface
    10.0.0.1     0.0.0.0       255.255.255.255  UH    0      0   0   ppp0
    55.66.77.10  84.56.120.60  255.255.255.255  UGH   0      0   0   eth1
    10.0.0.0     0.0.0.0       255.0.0.0        U     0      0   0   eth0
    0.0.0.0      84.56.120.60  0.0.0.0          UG    0      0   0   eth1

cat /etc/ppp/options.pptpd on the PPTP server:

    ###############################################################################
    # $Id: options.pptpd,v 1.11 2005/12/29 01:21:09 quozl Exp $
    #
    # Sample Poptop PPP options file /etc/ppp/options.pptpd
    # Options used by PPP when a connection arrives from a client.
    # This file is pointed to by /etc/pptpd.conf option keyword.
    # Changes are effective on the next connection. See "man pppd".
    #
    # You are expected to change this file to suit your system. As
    # packaged, it requires PPP 2.4.2 and the kernel MPPE module.
    ###############################################################################

    # Authentication

    # Name of the local system for authentication purposes
    # (must match the second field in /etc/ppp/chap-secrets entries)
    name pptpd

    # Strip the domain prefix from the username before authentication.
    # (applies if you use pppd with chapms-strip-domain patch)
    #chapms-strip-domain

    # Encryption
    # (There have been multiple versions of PPP with encryption support,
    # choose which of the following sections you will use.)

    # BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o
    # {{{
    refuse-pap
    refuse-chap
    refuse-mschap
    # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
    # Challenge Handshake Authentication Protocol, Version 2] authentication.
    require-mschap-v2
    # Require MPPE 128-bit encryption
    # (note that MPPE requires the use of MSCHAP-V2 during authentication)
    require-mppe-128
    # }}}

    # OpenSSL licensed ppp-2.4.1 fork with MPPE only, kernel module mppe.o
    # {{{
    #-chap
    #-chapms
    # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
    # Challenge Handshake Authentication Protocol, Version 2] authentication.
    #+chapms-v2
    # Require MPPE encryption
    # (note that MPPE requires the use of MSCHAP-V2 during authentication)
    #mppe-40    # enable either 40-bit or 128-bit, not both
    #mppe-128
    #mppe-stateless
    # }}}

    # Network and Routing

    # If pppd is acting as a server for Microsoft Windows clients, this
    # option allows pppd to supply one or two DNS (Domain Name Server)
    # addresses to the clients. The first instance of this option
    # specifies the primary DNS address; the second instance (if given)
    # specifies the secondary DNS address.
    #ms-dns 10.0.0.1
    #ms-dns 10.0.0.2

    # If pppd is acting as a server for Microsoft Windows or "Samba"
    # clients, this option allows pppd to supply one or two WINS (Windows
    # Internet Name Services) server addresses to the clients. The first
    # instance of this option specifies the primary WINS address; the
    # second instance (if given) specifies the secondary WINS address.
    #ms-wins 10.0.0.3
    #ms-wins 10.0.0.4

    # Add an entry to this system's ARP [Address Resolution Protocol]
    # table with the IP address of the peer and the Ethernet address of this
    # system. This will have the effect of making the peer appear to other
    # systems to be on the local ethernet.
    # (you do not need this if your PPTP server is responsible for routing
    # packets to the clients -- James Cameron)
    proxyarp

    # Normally pptpd passes the IP address to pppd, but if pptpd has been
    # given the delegate option in pptpd.conf or the --delegate command line
    # option, then pppd will use chap-secrets or radius to allocate the
    # client IP address. The default local IP address used at the server
    # end is often the same as the address of the server. To override this,
    # specify the local IP address here.
    # (you must not use this unless you have used the delegate option)
    #10.8.0.100

    # Logging

    # Enable connection debugging facilities.
    # (see your syslog configuration for where pppd sends to)
    debug

    # Print out all the option values which have been set.
    # (often requested by mailing list to verify options)
    #dump

    # Miscellaneous

    # Create a UUCP-style lock file for the pseudo-tty to ensure exclusive
    # access.
    lock

    # Disable BSD-Compress compression
    nobsdcomp

    # Disable Van Jacobson compression
    # (needed on some networks with Windows 9x/ME/XP clients, see posting to
    # poptop-server on 14th April 2005 by Pawel Pokrywka and followups,
    # http://marc.theaimsgroup.com/?t=111343175400006&r=1&w=2 )
    novj
    novjccomp

    # turn off logging to stderr, since this may be redirected to pptpd,
    # which may trigger a loopback
    nologfd

    # put plugins here
    # (putting them higher up may cause them to send messages to the pty)

cat /etc/ppp/options.pptp on Client1 and Client2:

    ###############################################################################
    # $Id: options.pptp,v 1.3 2006/03/26 23:11:05 quozl Exp $
    #
    # Sample PPTP PPP options file /etc/ppp/options.pptp
    # Options used by PPP when a connection is made by a PPTP client.
    # This file can be referred to by an /etc/ppp/peers file for the tunnel.
    # Changes are effective on the next connection. See "man pppd".
    #
    # You are expected to change this file to suit your system. As
    # packaged, it requires PPP 2.4.2 or later from http://ppp.samba.org/
    # and the kernel MPPE module available from the CVS repository also on
    # http://ppp.samba.org/, which is packaged for DKMS as kernel_ppp_mppe.
    ###############################################################################

    # Lock the port
    lock

    # Authentication
    # We don't need the tunnel server to authenticate itself
    noauth
    # We won't do PAP, EAP, CHAP, or MSCHAP, but we will accept MSCHAP-V2
    # (you may need to remove these refusals if the server is not using MPPE)
    refuse-pap
    refuse-eap
    refuse-chap
    refuse-mschap

    # Compression
    # Turn off compression protocols we know won't be used
    nobsdcomp
    nodeflate

    # Encryption
    # (There have been multiple versions of PPP with encryption support,
    # choose which of the following sections you will use. Note that MPPE
    # requires the use of MSCHAP-V2 during authentication)
    #
    # Note that using PPTP with MPPE and MSCHAP-V2 should be considered
    # insecure:
    # http://marc.info/?l=pptpclient-devel&m=134372640219039&w=2
    # https://github.com/moxie0/chapcrack/blob/master/README.md
    # http://technet.microsoft.com/en-us/security/advisory/2743314

    # http://ppp.samba.org/ the PPP project version of PPP by Paul Mackerras
    # ppp-2.4.2 or later with MPPE only, kernel module ppp_mppe.o
    # If the kernel is booted in FIPS mode (fips=1), the ppp_mppe.ko module
    # is not allowed and PPTP-MPPE is not available.
    # {{{
    # Require MPPE 128-bit encryption
    #require-mppe-128
    # }}}

    # http://mppe-mppc.alphacron.de/ fork from PPP project by Jan Dubiec
    # ppp-2.4.2 or later with MPPE and MPPC, kernel module ppp_mppe_mppc.o
    # {{{
    # Require MPPE 128-bit encryption
    #mppe required,stateless
    # }}}

iptables is stopped on the clients and the server, and net.ipv4.ip_forward = 1 is enabled on the PPTP server. How can I solve this problem?


  • Varnish default.vcl grace period

    - by Vladimir
These are my settings for a grace period (/etc/varnish/default.vcl):

    sub vcl_recv {
        ...
        set req.grace = 360000s;
        ...
    }
    sub vcl_fetch {
        ...
        set beresp.grace = 360000s;
        ...
    }

I tested Varnish on localhost with nodejs as the backend server. I started the backend and the site was up; then I disconnected the backend, and the site became unavailable in less than 2 minutes. It says:

    Error 503 Service Unavailable
    Service Unavailable
    Guru Meditation:
    XID: 1890127100
    Varnish cache server

Could you tell me what the problem could be? Here is the full configuration:

    sub vcl_fetch {
        if (beresp.ttl < 120s) {
            ##std.log("Adjusting TTL");
            set beresp.ttl = 36000s; ##120s;
        }
        # Do not cache the object if the backend application does not want us to.
        if (beresp.http.Cache-Control ~ "(no-cache|no-store|private|must-revalidate)") {
            return(hit_for_pass);
        }
        # Do not cache the object if the status is not in the 200s
        if (beresp.status >= 300) {
            # Remove the Set-Cookie header
            #remove beresp.http.Set-Cookie;
            return(hit_for_pass);
        }
        #
        # Everything below here should be cached
        #
        # Remove the Set-Cookie header
        ####remove beresp.http.Set-Cookie;
        # Set the grace time
        ## set beresp.grace = 1s; //change this to minutes in case of app shutdown
        set beresp.grace = 360000s; ## 10 hours - reduce if it has negative impact
        # Static assets - browsers cache them for a long time.
        if (req.url ~ "\.(css|js|.js|jpg|jpeg|gif|ico|png)\??\d*$") {
            /* Remove Expires from backend, it's not long enough */
            unset beresp.http.expires;
            /* Set the clients TTL on this object */
            set beresp.http.cache-control = "public, max-age=31536000";
            /* marker for vcl_deliver to reset Age: */
            set beresp.http.magicmarker = "1";
        } else {
            set beresp.http.Cache-Control = "private, max-age=0, must-revalidate";
            set beresp.http.Pragma = "no-cache";
        }
        if (req.url ~ "\.(css|js|min|)\??\d*$") {
            set beresp.do_gzip = true;
            unset beresp.http.expires;
            set beresp.http.cache-control = "public, max-age=31536000";
            set beresp.http.expires = beresp.ttl;
            set beresp.http.age = "0";
        }
        ##do not duplicate these settings
        if (req.url ~ ".css") {
            set beresp.do_gzip = true;
            unset beresp.http.expires;
            set beresp.http.cache-control = "public, max-age=31536000";
            set beresp.http.expires = beresp.ttl;
            set beresp.http.age = "0";
        }
        if (req.url ~ ".js") {
            set beresp.do_gzip = true;
            unset beresp.http.expires;
            set beresp.http.cache-control = "public, max-age=31536000";
            set beresp.http.expires = beresp.ttl;
            set beresp.http.age = "0";
        }
        if (req.url ~ ".min") {
            set beresp.do_gzip = true;
            unset beresp.http.expires;
            set beresp.http.cache-control = "public, max-age=31536000";
            set beresp.http.expires = beresp.ttl;
            set beresp.http.age = "0";
        }
        ## If the request to the backend returns a code other than 200, restart the loop.
        ## If the number of restarts reaches the value of the parameter max_restarts,
        ## the request will be error'ed. max_restarts defaults to 4. This prevents
        ## an eternal loop in the event that, e.g., the object does not exist at all.
        if (beresp.status != 200 && beresp.status != 403 && beresp.status != 404) {
            return(restart);
        }
        if (beresp.status == 302) {
            return(deliver);
        }
        # Never cache posts
        if (req.url ~ "\/post\/" || req.url ~ "\/submit\/" || req.url ~ "\/ask\/" || req.url ~ "\/add\/") {
            return(hit_for_pass);
        }
        ##check this setting to ensure that it does not cause issues for browsers with no gzip
        if (beresp.http.content-type ~ "text") {
            set beresp.do_gzip = true;
        }
        if (beresp.http.Set-Cookie) {
            return(deliver);
        }
        ##if (req.url == "/index.html") {
        set beresp.do_esi = true;
        ##}
        ## check if this is needed or should be used
        # return(deliver); the object
        return(deliver);
    }

    sub vcl_recv {
        ##avoid leeching of images
        call hot_link;
        set req.grace = 360000s; ##2m
        ## if one backend is down - use another
        if (req.restarts == 0) {
            set req.backend = cache_director; ##we can specify individual VMs
        } else if (req.restarts == 1) {
            set req.backend = cache_director;
        }
        ## post calls should not be cached - add cookie for these requests if using micro-caching
        # Pass requests that are not GET or HEAD
        if (req.request != "GET" && req.request != "HEAD") {
            return(pass); ## return(pass) goes to backend - not cache
        }
        # Don't cache the result of a redirect
        if (req.http.Referer ~ "redir" || req.http.Origin ~ "jumpto") {
            return(pass);
        }
        # Don't cache the result of a redirect (asking for logon)
        if (req.http.Referer ~ "post" || req.http.Referer ~ "submit" || req.http.Referer ~ "add" || req.http.Referer ~ "ask") {
            return(pass);
        }
        # Never cache posts - ensure that we do not use these strings in URLs that need to be cached
        if (req.url ~ "\/post\/" || req.url ~ "\/submit\/" || req.url ~ "\/ask\/" || req.url ~ "\/add\/") {
            return(pass);
        }
        ## if (req.http.Authorization || req.http.Cookie) {
        if (req.http.Authorization) {
            /* Not cacheable by default */
            return (pass);
        }
        # Handle compression correctly. Different browsers send different
        # "Accept-Encoding" headers, even though they mostly all support the same
        # compression mechanisms. By consolidating these compression headers into
        # a consistent format, we can reduce the size of the cache and get more hits.
        # @see: http://varnish.projects.linpro.no/wiki/FAQ/Compression
        if (req.http.Accept-Encoding) {
            if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|ico)$") {
                # No point in compressing these
                remove req.http.Accept-Encoding;
            } else if (req.http.Accept-Encoding ~ "gzip") {
                # If the browser supports it, we'll use gzip.
                set req.http.Accept-Encoding = "gzip";
            } else if (req.http.Accept-Encoding ~ "deflate") {
                # Next, try deflate if it is supported.
                set req.http.Accept-Encoding = "deflate";
            } else {
                # Unknown algorithm. Remove it and send unencoded.
                unset req.http.Accept-Encoding;
            }
        }
        # lookup graphics, css, js & ico files in the cache
        if (req.url ~ "\.(png|gif|jpg|jpeg|css|.js|ico)$") {
            return(lookup);
        }
        ##added on 0918 - check if it causes issues with user-specific content
        if (req.request == "GET" && req.http.cookie) {
            return(lookup);
        }
        # Pipe requests that are non-RFC2616 or CONNECT, which is weird.
        if (req.request != "GET" && req.request != "HEAD" &&
            req.request != "PUT" && req.request != "POST" &&
            req.request != "TRACE" && req.request != "OPTIONS" &&
            req.request != "DELETE") {
            ##closing connection and calling pipe
            return(pipe);
        }
        ##purge content via localhost only
        if (req.request == "PURGE") {
            if (!client.ip ~ purge) {
                error 405 "Not allowed.";
            }
            return(lookup);
        }
        ## do we need this?
        ## return(lookup);
    }
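The Accept-Encoding consolidation block near the end of vcl_recv matters for exactly the reason its comment gives: each distinct Accept-Encoding value can create a separate cache variant. As a language-neutral illustration of that normalization logic (Python used here purely as pseudocode; the VCL above is what Varnish actually runs):

    import re

    # Suffixes that are already compressed; compressing them again is pointless.
    ALREADY_COMPRESSED = re.compile(r'\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|ico)$')

    def normalize_accept_encoding(url, accept_encoding):
        """Collapse browser-specific Accept-Encoding values to one canonical
        form so the cache stores a single variant per object."""
        if accept_encoding is None:
            return None
        if ALREADY_COMPRESSED.search(url):
            return None            # drop the header entirely
        if 'gzip' in accept_encoding:
            return 'gzip'          # preferred encoding
        if 'deflate' in accept_encoding:
            return 'deflate'
        return None                # unknown algorithm: send unencoded

    assert normalize_accept_encoding('/a.css', 'gzip, deflate, br') == 'gzip'
    assert normalize_accept_encoding('/a.png', 'gzip') is None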


  • How to use sudo with WinSCP and ProFTPd?

    - by Gaia
I need to run the SFTP file server binary as root, but direct root login is not allowed. In WinSCP, if I use "default" for the SFTP server protocol option, everything works as expected. Following the instructions for using sudo in WinSCP, I tried "sudo /usr/sbin/proftpd" (which works on the command line without any prompts), but it brings up "Cannot initialize SFTP protocol. Is the host running a SFTP server?"

How can I use sudo with WinSCP and ProFTPd?

WinSCP 4.3.7 GUI, protocol SFTP-3; CentOS 6.2; Webmin/Virtualmin (current version). PS: only certificate-based login is allowed.

    . 2012-06-17 11:05:56.998 --------------------------------------------------------------------------
    . 2012-06-17 11:05:56.998 WinSCP Version 4.3.7 (Build 1679) (OS 6.1.7601 Service Pack 1)
    . 2012-06-17 11:05:56.998 Configuration: HKEY_CURRENT_USER\Software\Martin Prikryl\WinSCP 2\
    . 2012-06-17 11:05:56.999 Login time: Sunday, June 17, 2012 11:05:56 AM
    . 2012-06-17 11:05:56.999 --------------------------------------------------------------------------
    . 2012-06-17 11:05:56.999 Session name: KVM1 (Modified stored session)
    . 2012-06-17 11:05:57.047 Host name: mykvm.com (Port: 22)
    . 2012-06-17 11:05:57.048 User name: adminuser (Password: No, Key file: Yes)
    . 2012-06-17 11:05:57.048 Tunnel: No
    . 2012-06-17 11:05:57.048 Transfer Protocol: SFTP (SCP)
    . 2012-06-17 11:05:57.048 Ping type: -, Ping interval: 30 sec; Timeout: 15 sec
    . 2012-06-17 11:05:57.048 Proxy: none
    . 2012-06-17 11:05:57.048 SSH protocol version: 2; Compression: Yes
    . 2012-06-17 11:05:57.048 Bypass authentication: No
    . 2012-06-17 11:05:57.048 Try agent: Yes; Agent forwarding: No; TIS/CryptoCard: No; KI: Yes; GSSAPI: No
    . 2012-06-17 11:05:57.048 Ciphers: aes,blowfish,3des,WARN,arcfour,des; Ssh2DES: No
    . 2012-06-17 11:05:57.048 SSH Bugs: -,-,-,-,-,-,-,-,-
    . 2012-06-17 11:05:57.048 SFTP Bugs: -,-
    . 2012-06-17 11:05:57.048 Return code variable: Autodetect; Lookup user groups: Yes
    . 2012-06-17 11:05:57.048 Shell: default
    . 2012-06-17 11:05:57.048 EOL: 0, UTF: 2
    . 2012-06-17 11:05:57.048 Clear aliases: Yes, Unset nat.vars: Yes, Resolve symlinks: Yes
    . 2012-06-17 11:05:57.048 LS: ls -la, Ign LS warn: Yes, Scp1 Comp: No
    . 2012-06-17 11:05:57.048 Local directory: default, Remote directory: home, Update: No, Cache: Yes
    . 2012-06-17 11:05:57.048 Cache directory changes: Yes, Permanent: Yes
    . 2012-06-17 11:05:57.048 DST mode: 1
    . 2012-06-17 11:05:57.048 --------------------------------------------------------------------------
    . 2012-06-17 11:05:57.113 Looking up host "mykvm.com"
    . 2012-06-17 11:05:57.132 Connecting to xxx.xxx.128.59 port 22
    . 2012-06-17 11:05:57.499 Server version: SSH-2.0-OpenSSH_5.3
    . 2012-06-17 11:05:57.499 Using SSH protocol version 2
    . 2012-06-17 11:05:57.499 We claim version: SSH-2.0-WinSCP_release_4.3.7
    . 2012-06-17 11:05:57.679 Server supports delayed compression; will try this later
    . 2012-06-17 11:05:57.679 Doing Diffie-Hellman group exchange
    . 2012-06-17 11:05:58.077 Doing Diffie-Hellman key exchange with hash SHA-1
    . 2012-06-17 11:05:58.498 Host key fingerprint is:
    . 2012-06-17 11:05:58.498 ssh-rsa 2048 bd:e4:34:b1:d4:69:d6:4e:e4:26:04:8b:b7:b3:de:c3
    . 2012-06-17 11:05:58.498 Initialised AES-256 SDCTR client->server encryption
    . 2012-06-17 11:05:58.498 Initialised HMAC-SHA1 client->server MAC algorithm
    . 2012-06-17 11:05:58.498 Initialised AES-256 SDCTR server->client encryption
    . 2012-06-17 11:05:58.498 Initialised HMAC-SHA1 server->client MAC algorithm
    . 2012-06-17 11:05:58.922 Reading private key file "D:\id_rsa.ppk"
    ! 2012-06-17 11:05:58.924 Using username "adminuser".
    . 2012-06-17 11:05:59.550 Offered public key
    . 2012-06-17 11:05:59.743 Offer of public key accepted
    ! 2012-06-17 11:05:59.743 Authenticating with public key "masterkey for admin"
    . 2012-06-17 11:05:59.764 Prompt (3, SSH key passphrase, , Passphrase for key "masterkey for admin": )
    . 2012-06-17 11:06:02.938 Sent public key signature
    . 2012-06-17 11:06:03.352 Access granted
    . 2012-06-17 11:06:03.352 Initiating key re-exchange (enabling delayed compression)
    . 2012-06-17 11:06:03.765 Doing Diffie-Hellman group exchange
    . 2012-06-17 11:06:03.955 Doing Diffie-Hellman key exchange with hash SHA-1
    . 2012-06-17 11:06:04.410 Initialised AES-256 SDCTR client->server encryption
    . 2012-06-17 11:06:04.410 Initialised HMAC-SHA1 client->server MAC algorithm
    . 2012-06-17 11:06:04.410 Initialised zlib (RFC1950) compression
    . 2012-06-17 11:06:04.410 Initialised AES-256 SDCTR server->client encryption
    . 2012-06-17 11:06:04.410 Initialised HMAC-SHA1 server->client MAC algorithm
    . 2012-06-17 11:06:04.410 Initialised zlib (RFC1950) decompression
    . 2012-06-17 11:06:04.839 Opened channel for session
    . 2012-06-17 11:06:05.247 Started a shell/command
    . 2012-06-17 11:06:05.253 --------------------------------------------------------------------------
    . 2012-06-17 11:06:05.253 Using SFTP protocol.
    . 2012-06-17 11:06:05.253 Doing startup conversation with host.
    > 2012-06-17 11:06:05.259 Type: SSH_FXP_INIT, Size: 5, Number: -1
    . 2012-06-17 11:06:05.354 Server sent command exit status 0
    . 2012-06-17 11:06:05.354 Disconnected: All channels closed
    * 2012-06-17 11:06:05.380 (ESshFatal) Connection has been unexpectedly closed. Server sent command exit status 0.
    * 2012-06-17 11:06:05.380 Cannot initialize SFTP protocol. Is the host running a SFTP server?


  • Mono mkbundle issue

    - by Sean
Hello, I am trying to bundle the Mono runtime with a simple C# (console) program that does nothing but "hello world". I got Cygwin and configured it all, but it fails:

    $ mkbundle -o x2 x.exe --deps -z
    OS is: Windows
    Sources: 1 Auto-dependencies: True
    embedding: C:\cygwin\home\Sean\x.exe
    compression ratio: 31.71%
    embedding: C:\PROGRA~2\MONO-2~1.1\lib\mono\4.0\mscorlib.dll
    compression ratio: 34.68%
    Compiling:
    as -o temp.o temp.s
    gcc -mno-cygwin -g -o x2 -Wall temp.c `pkg-config --cflags --libs mono-2|dos2unix` -lz temp.o
    temp.c: In function `main':
    temp.c:173: warning: implicit declaration of function `g_utf16_to_utf8'
    temp.c:173: warning: assignment makes pointer from integer without a cast
    temp.c:188: warning: assignment makes pointer from integer without a cast
    /tmp/ccu8fTcQ.o: In function `main':
    /home/Sean/temp.c:173: undefined reference to `_g_utf16_to_utf8'
    /home/Sean/temp.c:188: undefined reference to `_g_utf16_to_utf8'
    collect2: ld returned 1 exit status
    [Fail]

I tried a post with a problem similar to mine here, but it didn't work either. My configuration: Windows 7 64-bit, Mono 2.8.1, latest Cygwin. Not sure what's the story here. Help appreciated.


  • buffer overflow with boost::program_options

    - by f4
Hello, I have a problem using boost::program_options. This simple program, copy-pasted from Boost's documentation:

    #include <boost/program_options.hpp>

    int main( int argc, char** argv )
    {
        namespace po = boost::program_options;
        po::options_description desc("Allowed options");
        desc.add_options()
            ("help", "produce help message")
            ("compression", po::value<int>(), "set compression level")
        ;
        return 0;
    }

fails with a buffer overflow. I have activated the "buffer security check" switch, and when I run it I get an "unknown exception (0xc0000409)" when I step over the desc.add_options() line. I use Visual Studio 2005 and Boost 1.43.0. By the way, it does run if I deactivate the switch, but I don't feel comfortable doing so... unless it's possible to deactivate it locally. So do you have a solution to this problem?

EDIT: I found the problem. I was linking against libboost_program_options-vc80-mt.lib, which wasn't the right library.


  • VPN in Ubuntu fails every time

    - by fazpas
I am trying to set up a VPN connection in Ubuntu 10.04 to use the service from relakks.com. I used the network manager to add the VPN connection, and the settings are:

    Gateway: pptp.relakks.com
    Username: user
    Password: pwd
    IPv4 Settings: Automatic (VPN)
    Advanced: MSCHAP & MSCHAPv2 checked
              Use Point-to-Point encryption (security: default)
              Allow BSD data compression checked
              Allow Deflate data compression checked
              Use TCP header compression checked

The connection always fails; here is the syslog:

    Jun 27 20:11:56 desktop NetworkManager: <info> Starting VPN service 'org.freedesktop.NetworkManager.pptp'...
    Jun 27 20:11:56 desktop NetworkManager: <info> VPN service 'org.freedesktop.NetworkManager.pptp' started (org.freedesktop.NetworkManager.pptp), PID 2064
    Jun 27 20:11:56 desktop NetworkManager: <info> VPN service 'org.freedesktop.NetworkManager.pptp' just appeared, activating connections
    Jun 27 20:11:56 desktop NetworkManager: <info> VPN plugin state changed: 3
    Jun 27 20:11:56 desktop NetworkManager: <info> VPN connection 'Relakks' (Connect) reply received.
    Jun 27 20:11:56 desktop pppd[2067]: Plugin /usr/lib/pppd/2.4.5//nm-pptp-pppd-plugin.so loaded.
    Jun 27 20:11:56 desktop pppd[2067]: pppd 2.4.5 started by root, uid 0
    Jun 27 20:11:56 desktop NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp1, iface: ppp1)
    Jun 27 20:11:56 desktop NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp1, iface: ppp1): no ifupdown configuration found.
    Jun 27 20:11:56 desktop pppd[2067]: Using interface ppp1
    Jun 27 20:11:56 desktop pppd[2067]: Connect: ppp1 <--> /dev/pts/0
    Jun 27 20:11:56 desktop pptp[2071]: nm-pptp-service-2064 log[main:pptp.c:314]: The synchronous pptp option is NOT activated
    Jun 27 20:11:57 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 1 'Start-Control-Connection-Request'
    Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:739]: Received Start Control Connection Reply
    Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:773]: Client connection established.
    Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request'
    Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply.
    Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 0, peer's call ID 1024).
    Jun 27 20:11:59 desktop kernel: [ 56.564074] Inbound IN=ppp0 OUT= MAC= SRC=93.182.139.2 DST=186.110.76.26 LEN=61 TOS=0x00 PREC=0x00 TTL=52 ID=40460 DF PROTO=47
    Jun 27 20:11:59 desktop kernel: [ 56.944054] Inbound IN=ppp0 OUT= MAC= SRC=93.182.139.2 DST=186.110.76.26 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=40461 DF PROTO=47
    Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[pptp_read_some:pptp_ctrl.c:544]: read returned zero, peer has closed
    Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[callmgr_main:pptp_callmgr.c:258]: Closing connection (shutdown)
    Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request'
    Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[pptp_read_some:pptp_ctrl.c:544]: read returned zero, peer has closed
    Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[call_callback:pptp_callmgr.c:79]: Closing connection (call state)
    Jun 27 20:11:59 desktop pppd[2067]: Modem hangup
    Jun 27 20:11:59 desktop pppd[2067]: Connection terminated.
    Jun 27 20:11:59 desktop NetworkManager: <info> VPN plugin failed: 1
    Jun 27 20:11:59 desktop NetworkManager: SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp1, iface: ppp1)
    Jun 27 20:11:59 desktop pppd[2067]: Exit.

Can someone spot the problem in this syslog? I've been googling and reading about PPTP but couldn't find anything about the error "read returned zero, peer has closed".


  • How to transfer data between two networks efficiently

    - by Tono Nam
I would like to transfer files between two places over the internet. Right now I have a VPN and I am able to browse, download and transfer files. So my question is not really how to transfer the files; instead, I would like to use the most efficient approach, because the two places constantly share a lot of data. The reason I want to get rid of the VPN is that it is too slow, and having high upload speed is very expensive or impossible in residential places, so I would like to use a different approach.

I was thinking about using programs such as http://www.dropbox.com. The problem with Dropbox is that the free version comes with only 2 GB of storage. I think the deals they offer are OK, and I might be willing to pay to get that increase in speed, but I am concerned about the speed of transferring data: Dropbox uploads the file to their server, then sends it from the server to the other location. I would like it to be even faster.

Anyway, I was thinking: why not create a program myself? This is the algorithm I had in mind; let me know if it sounds too crazy. (Remember, my goal is to transfer files as fast as possible.)

Things I will use in this algorithm:

- A server on the internet called S. (It has fast download and upload speeds; I pay to host a website and some services there, and I want to take advantage of that.)
- Client A at location 1.
- Client B at location 2.

So let's say at location 1, 20 large files are created and need to be transferred to location 2:

1. Client A compresses the files with the highest compression ratio possible.
2. Client A starts sending data via UDP to client B. Because I am using UDP, I include a sequence number in each packet.
3. Server S helps speed things up: for example, every time a packet is lost, server S informs client A that it needs to resend that packet.

Anyway, I think this approach will increase the transfer rate. I do not know whether it is possible to start sending data while it is still being compressed, or whether it is possible to start decompressing data before the whole file has been received. Maybe it would be faster to start sending the files right away, without compressing. If I knew I would always be sending large text files, I would obviously use compression, but I need this as a general algorithm.

So I guess my question is: could I increase performance by using UDP instead of TCP and by using an extra server to keep track of lost packets? And how should I compress files before sending? Compressing a 1 GB file with the highest compression ratio takes about an hour; I would like to take advantage of that time by sending the data as it is being compressed.
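For what it's worth, the "send while compressing" half of the idea is straightforward to sketch. Below is a minimal, illustrative sender along those lines. Assumptions to note: the destination address is a made-up placeholder for client B, the chunk size is arbitrary, and all of the hard parts the question raises are omitted (no retransmission path via server S, no reordering, no integrity checks), so this is a sketch of the streaming idea, not a working transfer tool:

    import socket
    import struct
    import zlib

    CHUNK = 32 * 1024              # plaintext bytes fed to the compressor per step
    DEST = ('203.0.113.7', 9000)   # placeholder address for client B

    def send_file(path, dest=DEST):
        """Compress `path` incrementally and ship each compressed chunk in a
        sequence-numbered UDP packet, so sending overlaps with compression."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        comp = zlib.compressobj(level=9)    # highest ratio, as in the question
        seq = 0
        with open(path, 'rb') as f:
            while True:
                plain = f.read(CHUNK)
                if not plain:
                    break
                data = comp.compress(plain)  # may buffer internally, returning b''
                if data:
                    # 8-byte big-endian sequence number, then the payload.
                    # Real code would also cap payloads to the path MTU.
                    sock.sendto(struct.pack('>Q', seq) + data, dest)
                    seq += 1
        tail = comp.flush()                  # emit whatever the compressor held back
        if tail:
            sock.sendto(struct.pack('>Q', seq) + tail, dest)
            seq += 1
        return seq   # server S would be told to expect seq 0..N-1 and NACK the gaps

The receiver can likewise feed packets to zlib.decompressobj() as they arrive, so decompression does not have to wait for the whole file either.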


  • PPTP VPN from Ubuntu cannot connect

    - by Andrea Polci
I'm trying to configure, under Linux (Kubuntu 9.10), a VPN I already use from Windows. I installed the network-manager-pptp package and added the VPN under Network Manager. These are the parameters under the "Advanced" button:

    Authentication Methods: PAP, CHAP, MSCHAP, MSCHAP2, EAP (I also tried "MSCHAP, MSCHAP2")
    Use MPPE Encryption: yes
    Crypto: Any
    Use stateful encryption: no
    Allow BSD compression: yes
    Allow Deflate compression: yes
    Allow TCP header compression: yes
    Send PPP echo packets: no

When I try to connect it doesn't work, and this is what I get in the system log:

    2010-04-08 13:53:47 pcelena NetworkManager <info> Starting VPN service 'org.freedesktop.NetworkManager.pptp'...
    2010-04-08 13:53:47 pcelena NetworkManager <info> VPN service 'org.freedesktop.NetworkManager.pptp' started (org.freedesktop.NetworkManager.pptp), PID 4931
    2010-04-08 13:53:47 pcelena NetworkManager <info> VPN service 'org.freedesktop.NetworkManager.pptp' just appeared, activating connections
    2010-04-08 13:53:47 pcelena pppd[4932] Plugin /usr/lib/pppd/2.4.5//nm-pptp-pppd-plugin.so loaded.
    2010-04-08 13:53:47 pcelena NetworkManager <info> VPN plugin state changed: 3
    2010-04-08 13:53:47 pcelena pppd[4932] pppd 2.4.5 started by root, uid 0
    2010-04-08 13:53:47 pcelena NetworkManager <info> VPN connection 'MYVPN' (Connect) reply received.
    2010-04-08 13:53:47 pcelena NetworkManager SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp0, iface: ppp0)
    2010-04-08 13:53:47 pcelena NetworkManager SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0): no ifupdown configuration found.
    2010-04-08 13:53:47 pcelena pppd[4932] Using interface ppp0
    2010-04-08 13:53:47 pcelena pppd[4932] Connect: ppp0 <--> /dev/pts/2
    2010-04-08 13:53:47 pcelena pptp[4934] nm-pptp-service-4931 log[main:pptp.c:314]: The synchronous pptp option is NOT activated
    2010-04-08 13:53:47 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request'
    2010-04-08 13:53:47 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply.
    2010-04-08 13:53:47 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 1, peer's call ID 14800).
    2010-04-08 13:53:48 pcelena pppd[4932] CHAP authentication succeeded
    2010-04-08 13:53:48 pcelena pppd[4932] CHAP authentication succeeded
    2010-04-08 13:53:48 pcelena pppd[4932] LCP terminated by peer
    2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:929]: Call disconnect notification received (call id 14800)
    2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:788]: Received Stop Control Connection Request.
2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 4 'Stop-Control-Connection-Reply' 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[callmgr_main:pptp_callmgr.c:258]: Closing connection (shutdown) 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request' 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[callmgr_main:pptp_callmgr.c:258]: Closing connection (shutdown) 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request' 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[call_callback:pptp_callmgr.c:79]: Closing connection (call state) 2010-04-08 13:53:48 pcelena pppd[4932] Modem hangup 2010-04-08 13:53:48 pcelena pppd[4932] Connection terminated. 2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin failed: 1 2010-04-08 13:53:48 pcelena NetworkManager SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp0, iface: ppp0) 2010-04-08 13:53:48 pcelena pppd[4932] Exit. 2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin failed: 1 2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin state changed: 6 2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin state change reason: 0 2010-04-08 13:53:48 pcelena NetworkManager <WARN> connection_state_changed(): Could not process the request because no VPN connection was active. 2010-04-08 13:53:48 pcelena NetworkManager <info> Policy set 'Auto eth0' (eth0) as default for routing and DNS. 2010-04-08 13:54:01 pcelena NetworkManager <debug> [1270727641.001390] ensure_killed(): waiting for vpn service pid 4931 to exit 2010-04-08 13:54:01 pcelena NetworkManager <debug> [1270727641.001479] ensure_killed(): vpn service pid 4931 cleaned up The error that sticks out here is "pppd[4932] LCP terminated by peer". Does anyone has suggestion on what can be the problem and how to make it work?
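    A hedged note for readers hitting the same wall: "LCP terminated by peer" right after a successful CHAP exchange is commonly an authentication/MPPE mismatch - the server accepts only MSCHAPv2 with 128-bit MPPE and drops anything else proposed. As an experiment (an assumption about this particular server, not a verified fix), restricting the client to MSCHAPv2 plus required MPPE is worth trying. With plain pppd that looks roughly like the peers file below; in NetworkManager the equivalent is leaving only MSCHAPv2 checked and requiring 128-bit encryption:

        # /etc/ppp/peers/myvpn -- illustrative only; host and user names are placeholders
        pty "pptp vpn.example.com --nolaunchpppd"
        name mywindowsuser
        remotename myvpn
        refuse-eap
        refuse-pap
        refuse-chap
        refuse-mschap
        require-mschap-v2
        require-mppe-128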

    Read the article

  • VPN in Ubuntu fails every time

    - by fazpas
    I am trying to set up a VPN connection in Ubuntu 10.04 to use the service from relakks.com. I used the network manager to add the VPN connection, and the settings are:

    Gateway: pptp.relakks.com
    Username: user
    Password: pwd
    IPv4 Settings: Automatic (VPN)
    Advanced: MSCHAP & MSCHAPv2 checked
    Use point-to-point encryption (security:default)
    Allow BSD data compression checked
    Allow deflate data compression checked
    Use TCP header compression checked

    The connection always fails; here is the syslog:

    Jun 27 20:11:56 desktop NetworkManager: <info> Starting VPN service 'org.freedesktop.NetworkManager.pptp'...
    Jun 27 20:11:56 desktop NetworkManager: <info> VPN service 'org.freedesktop.NetworkManager.pptp' started (org.freedesktop.NetworkManager.pptp), PID 2064
    Jun 27 20:11:56 desktop NetworkManager: <info> VPN service 'org.freedesktop.NetworkManager.pptp' just appeared, activating connections
    Jun 27 20:11:56 desktop NetworkManager: <info> VPN plugin state changed: 3
    Jun 27 20:11:56 desktop NetworkManager: <info> VPN connection 'Relakks' (Connect) reply received.
    Jun 27 20:11:56 desktop pppd[2067]: Plugin /usr/lib/pppd/2.4.5//nm-pptp-pppd-plugin.so loaded.
    Jun 27 20:11:56 desktop pppd[2067]: pppd 2.4.5 started by root, uid 0
    Jun 27 20:11:56 desktop NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp1, iface: ppp1)
    Jun 27 20:11:56 desktop NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp1, iface: ppp1): no ifupdown configuration found.
    Jun 27 20:11:56 desktop pppd[2067]: Using interface ppp1
    Jun 27 20:11:56 desktop pppd[2067]: Connect: ppp1 <--> /dev/pts/0
    Jun 27 20:11:56 desktop pptp[2071]: nm-pptp-service-2064 log[main:pptp.c:314]: The synchronous pptp option is NOT activated
    Jun 27 20:11:57 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 1 'Start-Control-Connection-Request'
    Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:739]: Received Start Control Connection Reply
    Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:773]: Client connection established.
    Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request'
    Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply.
    Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 0, peer's call ID 1024).
    Jun 27 20:11:59 desktop kernel: [   56.564074] Inbound IN=ppp0 OUT= MAC= SRC=93.182.139.2 DST=186.110.76.26 LEN=61 TOS=0x00 PREC=0x00 TTL=52 ID=40460 DF PROTO=47
    Jun 27 20:11:59 desktop kernel: [   56.944054] Inbound IN=ppp0 OUT= MAC= SRC=93.182.139.2 DST=186.110.76.26 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=40461 DF PROTO=47
    Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[pptp_read_some:pptp_ctrl.c:544]: read returned zero, peer has closed
    Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[callmgr_main:pptp_callmgr.c:258]: Closing connection (shutdown)
    Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request'
    Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[pptp_read_some:pptp_ctrl.c:544]: read returned zero, peer has closed
    Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[call_callback:pptp_callmgr.c:79]: Closing connection (call state)
    Jun 27 20:11:59 desktop pppd[2067]: Modem hangup
    Jun 27 20:11:59 desktop pppd[2067]: Connection terminated.
    Jun 27 20:11:59 desktop NetworkManager: <info> VPN plugin failed: 1
    Jun 27 20:11:59 desktop NetworkManager: SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp1, iface: ppp1)
    Jun 27 20:11:59 desktop pppd[2067]: Exit.

    Can someone identify something in the syslog? I've been googling and reading about pptp but couldn't find anything about the error "read returned zero, peer has closed".
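    One thing worth ruling out here (an educated guess from the log, not a confirmed diagnosis): the two kernel lines show inbound GRE (PROTO=47) being matched by a local iptables logging rule immediately before the peer "closes". PPTP needs GRE to pass in addition to TCP port 1723, so a local firewall script that logs and drops GRE would kill the tunnel in exactly this way. A quick test, to be adapted to whatever firewall tool is actually in use:

        # temporarily allow GRE (IP protocol 47) through the local firewall
        sudo iptables -I INPUT -p gre -j ACCEPT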

    Read the article

  • How to Eliminate Tape Backup and Off-site Storage Service?

    - by Daniel Lucas
    PLEASE READ UPDATE AT THE BOTTOM. THANKS! ;)

    Environment Info (all Windows):
    2 sites
    30 servers site #1 (3TB of backup data)
    5 servers site #2 (1TB of backup data)
    MPLS backbone tunnel connecting site #1 and site #2

    Current Backup Process:
    Online Backup (disk-to-disk): Site #1 has a server running Symantec Backup Exec 12.5 with four 1TB USB 2.0 disks. BE jobs for full backups run nightly on all servers in site #1 to these disks. Site #2 backs up to a central file server there using software they already had when we purchased them. A BE job pulls that data nightly to site #1 and stores it on said disks.
    Off-site Backup (tape): Connected to our backup server is a tape drive. BE backs up the external disks to tape once a week, which gets picked up by our off-site storage company. Obviously we rotate two tape libraries: one is always here and one is always there.

    Requirements:
    Eliminate the need for tape and the off-site storage service by doing disk-to-disk at each site and replicating site #1 to site #2 and vice versa.
    Software-based solution, as hardware options have been too pricey (e.g. SonicWall, Arkeia).
    Agents for Exchange, SharePoint, and SQL.

    Some Ideas So Far:
    Storage: DroboPro at each site with an initial 8TB of storage (these are expandable up to 16TB at present). I like these because they are rackmountable, allow disparate drives, and have iSCSI interfaces. They are relatively cheap too.
    Software: Symantec Backup Exec 12.5 already has all the agents and licenses we need. I'd like to keep using it unless there is a better solution, similarly priced, that does everything BE does plus deduplication and replication.
    Server: Because there is no more need for a SCSI adapter (for the tape drive) we are going to virtualize our backup server, as it is currently the only physical machine save for the SQL boxes.

    Problems: When replicating between sites we want as little data as possible to go across the pipe. There is no deduplication or compression in what I have laid out here so far. The files being replicated are BE's virtual tape libraries from our disk-to-disk backup. Because of this, each of those huge files will go across the wire every week, because they change every day.

    And Finally, the Question: Is there any software out there that does deduplication, or at least compression, to handle just our site-to-site replication? Or, looking at our setup, is there any other solution that I am missing that might be cheaper, faster, or better? Thanks. Sorry so long.

    UPDATE 1: I've set a bounty on this question to get it more attention. I'm looking for software that will handle replication of data between two sites using the least amount of data possible (either compression, deduplication, or some other method). Something similar to rsync would work, but it needs to be native to Windows and not a port involving shenanigans to get up and running. I'd prefer a GUI-based product, and I don't mind shelling out a few bones if it works. Please, answers that meet the above criteria only. If you don't think one exists or if you think I'm being too restrictive, keep it to yourself. If after seven days there is no answer at all, so be it. Thanks again everyone.

    UPDATE 2: I really appreciate everyone coming forward with suggestions. There is no way for me to try all of these before the bounty expires. For now I'm going to let this bounty run out and whoever has the most votes will get the 100 rep points. Thanks again!
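    For readers wondering what "something similar to rsync" buys in this scenario, the underlying trick is delta transfer: only blocks the remote site does not already hold cross the wire. The Python sketch below shows the idea in its simplest fixed-block form - real rsync adds a cheap rolling weak checksum so matches survive byte insertions - and every name in it is invented for illustration:

        # Conceptual sketch of block-level delta detection, fixed-block variant.
        # The receiving site publishes hashes of the blocks it already has;
        # the sender ships only blocks whose hash is unknown on the far side.
        import hashlib

        BLOCK = 4096

        def block_signatures(old_data):
            """Hashes of the old file's blocks, computed at the receiving site."""
            return {
                hashlib.sha1(old_data[i:i + BLOCK]).hexdigest()
                for i in range(0, len(old_data), BLOCK)
            }

        def delta(new_data, old_signatures):
            """Byte ranges of new_data that actually need to be transferred."""
            missing = []
            for i in range(0, len(new_data), BLOCK):
                block = new_data[i:i + BLOCK]
                if hashlib.sha1(block).hexdigest() not in old_signatures:
                    missing.append((i, block))
            return missing

    Against virtual tape library files that are rewritten every night but mostly identical from day to day, this kind of block-level matching is exactly why dedup-aware replication ships a small fraction of the raw file size.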

    Read the article

  • Ajax Control Toolkit Now Supports jQuery

    - by Stephen.Walther
    I’m excited to announce the September 2013 release of the Ajax Control Toolkit, which now supports building new Ajax Control Toolkit controls with jQuery. You can download the latest release of the Ajax Control Toolkit from http://AjaxControlToolkit.CodePlex.com or you can install the Ajax Control Toolkit directly within Visual Studio by executing the NuGet command Install-Package AjaxControlToolkit.

    The New jQuery Extender Base Class
    This release of the Ajax Control Toolkit introduces a new jQueryExtender base class. This new base class enables you to create Ajax Control Toolkit controls with jQuery instead of the Microsoft Ajax Library. Currently, only one control in the Ajax Control Toolkit has been rewritten to use the new jQueryExtender base class (only one control has been jQueryized). The ToggleButton control is the first of the Ajax Control Toolkit controls to undergo this dramatic transformation. All of the other controls in the Ajax Control Toolkit are written using the Microsoft Ajax Library. We hope to gradually rewrite these controls as jQuery controls over time. You can view the new jQuery ToggleButton live at the Ajax Control Toolkit sample site: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToggleButton/ToggleButton.aspx

    Why are we rewriting Ajax Control Toolkit controls with jQuery? There are very few developers actively working with the Microsoft Ajax Library while there are thousands of developers actively working with jQuery. Because we want talented developers in the community to continue to contribute to the Ajax Control Toolkit, and because almost all JavaScript developers are familiar with jQuery, it makes sense to support jQuery with the Ajax Control Toolkit. Also, we believe that the Ajax Control Toolkit is a great framework for Web Forms developers who want to build new ASP.NET controls that use JavaScript. The Ajax Control Toolkit has great features such as automatic bundling, minification, caching, and compression. We want to make it easy for ASP.NET developers to build new controls that take advantage of these features.

    Instantiating Controls with data-* Attributes
    We took advantage of the new jQueryExtender base class to change the way that Ajax Control Toolkit controls are instantiated. In the past, adding an Ajax Control Toolkit control to a page resulted in inline JavaScript being injected into the page. For example, adding the ToggleButton control to a page injected the following HTML and script:

    <input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1" type="checkbox" checked="checked" />
    <script type="text/javascript">
    //<![CDATA[
    Sys.Application.add_init(function() {
        $create(Sys.Extended.UI.ToggleButtonBehavior, {"CheckedImageAlternateText":"Check", "CheckedImageUrl":"ToggleButton_Checked.gif", "ImageHeight":19, "ImageWidth":19, "UncheckedImageAlternateText":"UnCheck", "UncheckedImageUrl":"ToggleButton_Unchecked.gif", "id":"ctl00_SampleContent_ToggleButtonExtender1"}, null, null, $get("ctl00_SampleContent_CheckBox1"));
    });
    //]]>
    </script>

    Notice the call to the JavaScript $create() method at the bottom of the page. When using the Microsoft Ajax Library, this call to the $create() method is necessary to create the Ajax Control Toolkit control. This inline script looks pretty ugly to a modern JavaScript developer. Inline script! Horrible!
    The jQuery version of the ToggleButton injects the following HTML and script into the page:

    <input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1" type="checkbox" checked="checked" data-act-togglebuttonextender="imageWidth:19, imageHeight:19, uncheckedImageUrl:'ToggleButton_Unchecked.gif', checkedImageUrl:'ToggleButton_Checked.gif', uncheckedImageAlternateText:'I don&#39;t understand why you don&#39;t like ASP.NET', checkedImageAlternateText:'It&#39;s really nice to hear from you that you like ASP.NET'" />

    Notice that there is no script! There is no call to the $create() method. In fact, there is no inline JavaScript at all. The jQuery version of the ToggleButton uses an HTML5 data-* attribute instead of an inline script. The ToggleButton control is instantiated with a data-act-togglebuttonextender attribute. Using data-* attributes results in much cleaner markup (you don’t need to feel embarrassed when selecting View Source in your browser).

    Ajax Control Toolkit versus jQuery
    So in a jQuery world why is the Ajax Control Toolkit needed at all? Why not just use jQuery plugins instead of the Ajax Control Toolkit? For example, there are lots of jQuery ToggleButton plugins floating around the Internet. Why not just use one of these jQuery plugins instead of using the Ajax Control Toolkit ToggleButton control? There are three main reasons why the Ajax Control Toolkit continues to be valuable in a jQuery world:

    Ajax Control Toolkit controls run on both the server and client. jQuery plugins are client only. A jQuery plugin does not include any server-side code. If you need to perform any work on the server - think of the AjaxFileUpload control - then you can't use a pure jQuery solution.
    Ajax Control Toolkit controls provide a better Visual Studio experience. You don't get any design-time experience when you use jQuery plugins within Visual Studio. Ajax Control Toolkit controls, on the other hand, are designed to work with Visual Studio. For example, you can use the Visual Studio Properties window to set Ajax Control Toolkit control properties.
    Ajax Control Toolkit controls shield you from working with JavaScript. I like writing code in JavaScript. However, not all developers like JavaScript and some developers want to completely avoid writing any JavaScript code at all. The Ajax Control Toolkit enables you to take advantage of JavaScript (and the latest features of HTML5) in your ASP.NET Web Forms websites without writing a single line of JavaScript.

    Better ToolkitScriptManager Documentation
    With this release, we have added more detailed documentation for using the ToolkitScriptManager. In particular, we added documentation that describes how to take advantage of the new bundling, minification, compression, and caching features of the Ajax Control Toolkit. The ToolkitScriptManager documentation is part of the Ajax Control Toolkit sample site and it can be read here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToolkitScriptManager/ToolkitScriptManager.aspx

    Other Fixes
    This release of the Ajax Control Toolkit includes several important bug fixes. For example, the Ajax Control Toolkit Twitter control was completely rewritten with this release. Twitter is in the process of retiring the first version of their API. You can read about their plans here: https://dev.twitter.com/blog/planning-for-api-v1-retirement We completely rewrote the Ajax Control Toolkit Twitter control to use the new Twitter API.
    To take advantage of the new Twitter API, you must get a key and access token from Twitter and add the key and token to your web.config file. Detailed instructions for using the new version of the Ajax Control Toolkit Twitter control can be found here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/Twitter/Twitter.aspx

    Summary
    We’ve made some really great changes to the Ajax Control Toolkit over the last two releases to modernize the toolkit. In the previous release, we updated the Ajax Control Toolkit to use a better bundling, minification, compression, and caching system. With this release, we updated the Ajax Control Toolkit to support jQuery. We also continue to update the Ajax Control Toolkit with important bug fixes. I hope you like these changes and I look forward to hearing your feedback.

    Read the article

  • Rendering ASP.NET Script References into the Html Header

    - by Rick Strahl
    One thing that I’ve come to appreciate in developing ASP.NET controls that use JavaScript is the ability to have more control over script and script include placement than ASP.NET provides natively. Specifically, in ASP.NET you can use either the ClientScriptManager or ScriptManager to embed scripts and script references into pages via code. This works reasonably well, but the script references that get generated are rendered into the HTML body, and there's very little operational control over the placement of scripts. If you have multiple controls, or several instances of the same control, that need to place the same scripts onto the page, it's not difficult to end up with scripts that render in the wrong order and stop working correctly. This is especially critical if you load script libraries with dependencies, either via resources or even if you are rendering references to CDN resources. Natively, ASP.NET provides a host of methods that help with embedding scripts into the page via either Page.ClientScript or the ASP.NET ScriptManager control (both with slightly different syntax):

    RegisterClientScriptBlock - renders a script block at the top of the HTML body and should be used for embedding callable functions/classes.
    RegisterStartupScript - renders a script block just prior to the </form> tag and should be used for embedding code that should execute when the page is first loaded. Not recommended - use jQuery.ready() or an equivalent load-time routine.
    RegisterClientScriptInclude - embeds a reference to a script from a url into the page.
    RegisterClientScriptResource - embeds a reference to a script from a resource file, generating a long resource file string.

    All 4 of these methods render their <script> tags into the HTML body. The script blocks give you a little bit of control by having a 'top' and 'bottom' of the document location, which gives you some flexibility over script placement and precedence. Script includes and resource urls unfortunately do not even get that much control - references are simply rendered into the page in the order of declaration. The ASP.NET ScriptManager control facilitates this task a little bit with the ability to specify scripts in code and the ability to programmatically check what scripts have already been registered, but it doesn't provide any more control over the script rendering process itself. Further, the ScriptManager is a bear to deal with generically, because generic code always has to check and see if it is actually present. Some time ago I posted a ClientScriptProxy class that helps with managing the latter process of sending script references either to ClientScript or to ScriptManager if it's available. Since I last posted about this there have been a number of improvements in this API, one of which is the ability to control the placement of scripts and script includes in the page, which I think is rather important and a missing feature in the native ASP.NET functionality.
    Handling ScriptRenderModes
    One of the big enhancements that I’ve come to rely on is the ability of the various script rendering functions described above to support rendering in multiple locations:

    /// <summary>
    /// Determines how scripts are included into the page
    /// </summary>
    public enum ScriptRenderModes
    {
        /// <summary>
        /// Inherits the setting from the control or from the ClientScript.DefaultScriptRenderMode
        /// </summary>
        Inherit,
        /// <summary>
        /// Renders the script include at the location of the control
        /// </summary>
        Inline,
        /// <summary>
        /// Renders the script include into the bottom of the header of the page
        /// </summary>
        Header,
        /// <summary>
        /// Renders the script include into the top of the header of the page
        /// </summary>
        HeaderTop,
        /// <summary>
        /// Uses ClientScript or ScriptManager to embed the script include to
        /// provide standard ASP.NET style rendering in the HTML body.
        /// </summary>
        Script,
        /// <summary>
        /// Renders script at the bottom of the page before the last Page.Controls
        /// literal control. Note this may result in unexpected behavior
        /// if /body and /html are not the last thing in the markup page.
        /// </summary>
        BottomOfPage
    }

    This enum is then applied to the various Register functions to allow more control over where scripts actually show up. Why is this useful? For me, I often render scripts out of control resources, and these scripts often include things like a JavaScript library (jQuery) and a few plug-ins. The order in which these load is critical, so that jQuery.js always loads before any plug-in, for example. Typically I end up with a general script layout like this: Core Libraries: HeaderTop; Plug-ins: Header; Script Blocks: Header or Script, depending on other dependencies. There's also an option to render scripts and CSS at the very bottom of the page before the last Page control on the page, which can be useful for speeding up page load when lots of scripts are loaded. The API syntax of the ClientScriptProxy methods is closely compatible with ScriptManager's, using static methods and control references to gain access to the page and embed scripts.
For example, to render some script into the current page in the header: // Create script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "function helloWorld() { alert('hello'); }", true, ScriptRenderModes.Header); // Same again - shouldn't be rendered because it's the same id ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "function helloWorld() { alert('hello'); }", true, ScriptRenderModes.Header); // Create a second script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function2", "function helloWorld2() { alert('hello2'); }", true, ScriptRenderModes.Header); // This just calls ClientScript and renders into bottom of document ClientScriptProxy.Current.RegisterStartupScript(this,typeof(ControlResources), "call_hello", "helloWorld();helloWorld2();", true); which generates: <html xmlns="http://www.w3.org/1999/xhtml" > <head><title> </title> <script type="text/javascript"> function helloWorld() { alert('hello'); } </script> <script type="text/javascript"> function helloWorld2() { alert('hello2'); } </script> </head> <body> … <script type="text/javascript"> //<![CDATA[ helloWorld();helloWorld2();//]]> </script> </form> </body> </html> Note that the scripts are generated into the header rather than the body except for the last script block which is the call to RegisterStartupScript. In general I wouldn’t recommend using RegisterStartupScript – ever. It’s a much better practice to use a script base load event to handle ‘startup’ code that should fire when the page first loads. So instead of the code above I’d actually recommend doing: ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "call_hello", "$().ready( function() { alert('hello2'); });", true, ScriptRenderModes.Header); assuming you’re using jQuery on the page. For script includes from a Url the following demonstrates how to embed scripts into the header. 
This example injects a jQuery and jQuery.UI script reference from the Google CDN then checks each with a script block to ensure that it has loaded and if not loads it from a server local location: // load jquery from CDN ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources), "http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js", ScriptRenderModes.HeaderTop); // check if jquery loaded - if it didn't we're not online string scriptCheck = @"if (typeof jQuery != 'object') document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));"; string jQueryUrl = ClientScriptProxy.Current.GetWebResourceUrl(this, typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE); ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "jquery_register", string.Format(scriptCheck,jQueryUrl),true, ScriptRenderModes.HeaderTop); // Load jquery-ui from cdn ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources), "http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js", ScriptRenderModes.Header); // check if we need to load from local string jQueryUiUrl = ResolveUrl("~/scripts/jquery-ui-custom.min.js"); ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "jqueryui_register", string.Format(scriptCheck, jQueryUiUrl), true, ScriptRenderModes.Header); // Create script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "$().ready( function() { alert('hello'); });", true, ScriptRenderModes.Header); which in turn generates this HTML: <html xmlns="http://www.w3.org/1999/xhtml" > <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof jQuery != 'object') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=DIykvYhJ_oXCr-TA_dr35i4AayJoV1mgnQAQGPaZsoPM2LCdvoD3cIsRRitHKlKJfV5K_jQvylK7tsqO3lQIFw2&t=633979863959332352' type='text/javascript'%3E%3C/script%3E")); </script> <title> </title> <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof jQuery != 'object') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/scripts/jquery-ui-custom.min.js' type='text/javascript'%3E%3C/script%3E")); </script> <script type="text/javascript"> $().ready(function() { alert('hello'); }); </script> </head> <body> …</body> </html> As you can see there’s a bit more control in this process as you can inject both script includes and script blocks into the document at the top or bottom of the header, plus if necessary at the usual body locations. This is quite useful especially if you create custom server controls that interoperate with script and have certain dependencies. The above is a good example of a useful switchable routine where you can switch where scripts load from by default – the above pulls from Google CDN but a configuration switch may automatically switch to pull from the local development copies if your doing development for example. How does it work? As mentioned the ClientScriptProxy object mimicks many of the ScriptManager script related methods and so provides close API compatibility with it although it contains many additional overloads that enhance functionality. 
It does however work against ScriptManager if it’s available on the page, or Page.ClientScript if it’s not so it provides a single unified frontend to script access. There are however many overloads of the original SM methods like the above to provide additional functionality. The implementation of script header rendering is pretty straight forward – as long as a server header (ie. it has to have runat=”server” set) is available. Otherwise these routines fall back to using the default document level insertions of ScriptManager/ClientScript. Given that there is a server header it’s relatively easy to generate the script tags and code and append them to the header either at the top or bottom. I suspect Microsoft didn’t provide header rendering functionality precisely because a runat=”server” header is not required by ASP.NET so behavior would be slightly unpredictable. That’s not really a problem for a custom implementation however. Here’s the RegisterClientScriptBlock implementation that takes a ScriptRenderModes parameter to allow header rendering: /// <summary> /// Renders client script block with the option of rendering the script block in /// the Html header /// /// For this to work Header must be defined as runat="server" /// </summary> /// <param name="control">any control that instance typically page</param> /// <param name="type">Type that identifies this rendering</param> /// <param name="key">unique script block id</param> /// <param name="script">The script code to render</param> /// <param name="addScriptTags">Ignored for header rendering used for all other insertions</param> /// <param name="renderMode">Where the block is rendered</param> public void RegisterClientScriptBlock(Control control, Type type, string key, string script, bool addScriptTags, ScriptRenderModes renderMode) { if (renderMode == ScriptRenderModes.Inherit) renderMode = DefaultScriptRenderMode; if (control.Page.Header == null || renderMode != ScriptRenderModes.HeaderTop && renderMode != ScriptRenderModes.Header && renderMode != ScriptRenderModes.BottomOfPage) { RegisterClientScriptBlock(control, type, key, script, addScriptTags); return; } // No dupes - ref script include only once const string identifier = "scriptblock_"; if (HttpContext.Current.Items.Contains(identifier + key)) return; HttpContext.Current.Items.Add(identifier + key, string.Empty); StringBuilder sb = new StringBuilder(); // Embed in header sb.AppendLine("\r\n<script type=\"text/javascript\">"); sb.AppendLine(script); sb.AppendLine("</script>"); int? index = HttpContext.Current.Items["__ScriptResourceIndex"] as int?; if (index == null) index = 0; if (renderMode == ScriptRenderModes.HeaderTop) { control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString())); index++; } else if(renderMode == ScriptRenderModes.Header) control.Page.Header.Controls.Add(new LiteralControl(sb.ToString())); else if (renderMode == ScriptRenderModes.BottomOfPage) control.Page.Controls.AddAt(control.Page.Controls.Count-1,new LiteralControl(sb.ToString())); HttpContext.Current.Items["__ScriptResourceIndex"] = index; } Note that the routine has to keep track of items inserted by id so that if the same item is added again with the same key it won’t generate two script entries. Additionally the code has to keep track of how many insertions have been made at the top of the document so that entries are added in the proper order. 
The RegisterScriptInclude method is similar but there’s some additional logic in here to deal with script file references and ClientScriptProxy’s (optional) custom resource handler that provides script compression /// <summary> /// Registers a client script reference into the page with the option to specify /// the script location in the page /// </summary> /// <param name="control">Any control instance - typically page</param> /// <param name="type">Type that acts as qualifier (uniqueness)</param> /// <param name="url">the Url to the script resource</param> /// <param name="ScriptRenderModes">Determines where the script is rendered</param> public void RegisterClientScriptInclude(Control control, Type type, string url, ScriptRenderModes renderMode) { const string STR_ScriptResourceIndex = "__ScriptResourceIndex"; if (string.IsNullOrEmpty(url)) return; if (renderMode == ScriptRenderModes.Inherit) renderMode = DefaultScriptRenderMode; // Extract just the script filename string fileId = null; // Check resource IDs and try to match to mapped file resources // Used to allow scripts not to be loaded more than once whether // embedded manually (script tag) or via resources with ClientScriptProxy if (url.Contains(".axd?r=")) { string res = HttpUtility.UrlDecode( StringUtils.ExtractString(url, "?r=", "&", false, true) ); foreach (ScriptResourceAlias item in ScriptResourceAliases) { if (item.Resource == res) { fileId = item.Alias + ".js"; break; } } if (fileId == null) fileId = url.ToLower(); } else fileId = Path.GetFileName(url).ToLower(); // No dupes - ref script include only once const string identifier = "script_"; if (HttpContext.Current.Items.Contains( identifier + fileId ) ) return; HttpContext.Current.Items.Add(identifier + fileId, string.Empty); // just use script manager or ClientScriptManager if (control.Page.Header == null || renderMode == ScriptRenderModes.Script || renderMode == ScriptRenderModes.Inline) { RegisterClientScriptInclude(control, type,url, url); return; } // Retrieve script index in header int? index = HttpContext.Current.Items[STR_ScriptResourceIndex] as int?; if (index == null) index = 0; StringBuilder sb = new StringBuilder(256); url = WebUtils.ResolveUrl(url); // Embed in header sb.AppendLine("\r\n<script src=\"" + url + "\" type=\"text/javascript\"></script>"); if (renderMode == ScriptRenderModes.HeaderTop) { control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString())); index++; } else if (renderMode == ScriptRenderModes.Header) control.Page.Header.Controls.Add(new LiteralControl(sb.ToString())); else if (renderMode == ScriptRenderModes.BottomOfPage) control.Page.Controls.AddAt(control.Page.Controls.Count-1, new LiteralControl(sb.ToString())); HttpContext.Current.Items[STR_ScriptResourceIndex] = index; } There’s a little more code here that deals with cleaning up the passed in Url and also some custom handling of script resources that run through the ScriptCompressionModule – any script resources loaded in this fashion are automatically cached based on the resource id. Raw urls extract just the filename from the URL and cache based on that. All of this to avoid doubling up of scripts if called multiple times by multiple instances of the same control for example or several controls that all load the same resources/includes. 
    Finally, RegisterClientScriptResource utilizes the previous method to wrap the WebResourceUrl as well as some custom functionality for the resource compression module:

    /// <summary>
    /// Returns a WebResource or ScriptResource URL for script resources that are to be
    /// embedded as script includes.
    /// </summary>
    /// <param name="control">Any control</param>
    /// <param name="type">A type in assembly where resources are located</param>
    /// <param name="resourceName">Name of the resource to load</param>
    /// <param name="renderMode">Determines where in the document the link is rendered</param>
    public void RegisterClientScriptResource(Control control, Type type,
                                             string resourceName,
                                             ScriptRenderModes renderMode)
    {
        string resourceUrl = GetClientScriptResourceUrl(control, type, resourceName);
        RegisterClientScriptInclude(control, type, resourceUrl, renderMode);
    }

    /// <summary>
    /// Works like GetWebResourceUrl but can be used with javascript resources
    /// to allow using of resource compression (if the module is loaded).
    /// </summary>
    /// <param name="control"></param>
    /// <param name="type"></param>
    /// <param name="resourceName"></param>
    /// <returns></returns>
    public string GetClientScriptResourceUrl(Control control, Type type, string resourceName)
    {
    #if IncludeScriptCompressionModuleSupport
        // If wwScriptCompression Module through Web.config is loaded use it to compress
        // script resources by using wcSC.axd Url the module intercepts
        if (ScriptCompressionModule.ScriptCompressionModuleActive)
        {
            string url = "~/wwSC.axd?r=" + HttpUtility.UrlEncode(resourceName);
            if (type.Assembly != GetType().Assembly)
                url += "&t=" + HttpUtility.UrlEncode(type.FullName);
            return WebUtils.ResolveUrl(url);
        }
    #endif
        return control.Page.ClientScript.GetWebResourceUrl(type, resourceName);
    }

    This code merely retrieves the resource URL and then simply calls back to RegisterClientScriptInclude with the URL to be embedded, which means there's nothing specific to deal with other than the custom compression module logic, which is nice and easy.

    What else is there in ClientScriptProxy?
    ClientScriptProxy also provides a few other useful services beyond what I've already covered here:
    Transparent ScriptManager and ClientScript calls: ClientScriptProxy includes a host of routines that help figure out whether a script manager is available or not, and all functions in this class call the appropriate object - ScriptManager or ClientScript - that is available in the current page to ensure that scripts get embedded into pages properly. This is especially useful for control development, where controls have no control over the scripting environment in place on the page.
    RegisterCssLink and RegisterCssResource: much like the script embedding functions, these two methods allow embedding of CSS links. CSS links are appended to the header or to a form declared with runat="server".
    LoadControlScript: a high-level resource loading routine that can be used to easily switch between different script linking modes. It supports loading from a WebResource, from a url, or not loading anything at all. This is very useful if you build controls that deal with specification of resource urls/ids in a standard way.

    Check out the full code
    You can check out the full code to the ClientScriptProxy class here: ClientScriptProxy.cs ClientScriptProxy Documentation (class reference). Note that the ClientScriptProxy has a few dependencies in the West Wind Web Toolkit, of which it is a part.
ControlResources holds a few standard constants and script resource links, and the ScriptCompressionModule is referenced in a few of the script inclusion methods. There's also a useful ScriptContainer companion control to the ClientScriptProxy that allows scripts to be placed into the page's markup, including the ability to specify the script location and script minification options. You can find all the dependencies in the West Wind Web Toolkit repository: West Wind Web Toolkit Repository, West Wind Web Toolkit Home Page. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, JavaScript.

    Read the article

  • How can I determine what codec is being used?

    - by pwnguin
    This forum comment and this superuser answer suggest that the audio compression contributes to loss of quality. I've noticed that music played over my BT setup sometimes pitch-bends in ways I don't remember the original doing, and I'm wondering if SBC has something to do with it. I'm using Ubuntu 10.10 on a Mac Pro, connecting to a pair of Sony DR-BT50's. Is there a way to inspect which Bluetooth codec pulseaudio is using, and what codecs both ends of the Bluetooth link support?
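    A hedged starting point (property names vary between pulseaudio and bluez versions, so treat these as things to try rather than a recipe): the negotiated A2DP profile often shows up among the sink properties, and the low-level codec capability exchange can be captured during connection setup with hcidump from the bluez-hcidump package:

        pactl list sinks | grep -iA3 bluetooth   # look for bluetooth.protocol / profile properties
        sudo hcidump -X                          # watch the AVDTP SetConfiguration exchange while (re)connecting

    SBC is the mandatory A2DP codec, so unless both ends advertise something better, it is almost certainly what is in use.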

    Read the article

  • IBM System x3850 X5 TPC-H Benchmark

    - by jchang
    IBM just published a TPC-H SF 1000 result for their x3850 X5, 4-way Xeon 7560 system featuring a special MAX5 memory expansion board to support 1.5TB memory. In Dec 2010, IBM also published a TPC-H SF1000 for their Power 780 system, 8-way, quad-core (4 logical processors per physical core). In Feb 2011, Ingres published a TPC-H SF 100 on a 2-way Xeon 5680 for their VectorWise column-store engine (plus enhancements for memory architecture, SIMD and compression). The figure table below shows TPC-H...(read more)

    Read the article

  • The SPARC SuperCluster

    - by Karoly Vegh
    Oracle has been providing a lead in the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed it is hard to find a better definition of these systems. Allow me to summarize the idea. It is:

    Build a compute platform optimized to run your technologies
    Develop application-aware, intelligently caching storage components
    Take an impressively fast network technology interconnecting it with the compute nodes
    Tune the application to scale with the nodes to yet-unseen performance
    Reduce the amount of data moved via compression
    Provide all this in a pre-integrated single product with a single-pane management interface

    All these ideas have been around in IT for quite some time now; the real Oracle advantage is adding the last one to put them all together. Oracle has built quite a portfolio of Engineered Systems to run its technologies - and run them like they never ran before. In this post I'll focus on one of them that serves as a consolidation demigod, a multi-purpose engineered system. As you probably have guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate on some of the most interesting ones from a technological point of view.

    I. It is the SPARC SuperCluster T4-4. That is, as compute nodes, it includes SPARC T4-4 servers that we have learned to appreciate and respect for their features:
    The SPARC T4 CPUs: each CPU has 8 cores, and each core runs 8 threads. The SPARC T4-4 servers have 4 sockets; that is, a single compute node can simultaneously execute 256 threads in parallel. Now, a full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword demigod.
    While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar for single-threaded performance too - a humble 5x better than their ancestors.
    Actually, the SPARC T4 CPU cores run in both single-threaded and multi-threaded mode, and switch between the two on the fly, fulfilling not only single-threaded OR multi-threaded applications' needs, but even mixed requirements (like in database workloads!).
    Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon.
    A PCI controller right on the chip, for customers who need I/O performance.
    Built-in, no-cost virtualization: Oracle VM for SPARC (the former LDoms or Logical Domains) is not a server-emulation virtualization technology but rather a server-partitioning one; the hypervisor runs in the server firmware, and all the VMs' HW resources (I/O, CPU, memory) are accessed natively, without performance overhead. This enables customers to run a number of Solaris 10 and Solaris 11 VMs, separated and independent of each other, within a physical server.

    II. For database performance, it includes Exadata Storage Cells - one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important? They provide DB backend storage for the Oracle Databases running on the SPARC SuperCluster, and that is what they are built and tuned for: DB performance. These storage cells are SQL-aware.
That is, if a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw data blocks from the storage, filter the received data, and throw away most of it where the statement doesn't apply; it provides the SQL query to the storage node too. The storage cell software speaks SQL, that is, it is able to pre-filter and thereby transfer only the relevant data. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing - as they say, all the CPUs of the world do one thing just as fast as any other, and that is waiting for I/O.
    They don't only pre-filter, but also provide data preprocessing features - e.g. if a DB node requests an aggregate of data, they can calculate it and hand over only the result, not the whole set. Again, less data to transfer.
    They support the magical HCC (Hybrid Columnar Compression). That is, data can be stored in a precompressed form on the storage. Less data to transfer.
    Of course one can't simply rely on disks for performance; there is Flash Storage included for caching.

    III. The low-latency, high-speed backbone network: InfiniBand, which interconnects all the members with:
    Real high speed: 40 Gbit/s, full duplex, of course. Oh, and really low latency.
    RDMA: Remote Direct Memory Access. This technology allows the DB nodes to do exactly that - remotely, directly placing SQL commands into the memory of the storage cells, dodging all the network-stack bottlenecks, avoiding overhead, placing requests directly into the process queue.
    You can also run IP over InfiniBand if you please - that's the way the compute nodes communicate with each other.

    IV. It includes general-purpose storage too: the ZFSSA, a unified storage appliance providing both NAS and SAN access, with the following features:
    NFS over RDMA over InfiniBand. Nothing is faster network-filesystem-wise.
    All the ZFS features on board: hybrid storage pools, compression, deduplication, snapshots, replication, NFS and CIFS shares.
    Storage heads in an HA-cluster configuration, providing availability of the data.
    DTrace Live Analytics in a web-based administration UI.
    It acts as general-purpose application data storage for your non-database applications running on the SPARC SuperCluster, over whichever protocol they prefer, easily replicating, snapshotting and cloning data for them.

    There's a lot of great technology included in Oracle's SPARC SuperCluster; we have talked its interior through. As for external scalability: you can start with a half- or full-rack SPARC SuperCluster and scale out to several racks - that is, not stacking separate full-rack SPARC SuperClusters, but always extending one large instance to the size of several full racks. Yes, over the InfiniBand network. Add racks as you grow. What technologies shall run on it? The SPARC SuperCluster is a general-purpose scale-out consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle WebLogic (and enjoy the SPARC T4's advantages for running Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems - this is the Oracle-on-Oracle advantage. But you can run other software environments such as SAP if you please, too. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them in virtual machines, or even Oracle Solaris Zones, and monitor and manage those from a central UI.
Here are the key takeaways once again. The SPARC SuperCluster:
    Is a pre-integrated Engineered System
    Contains SPARC T4-4 servers with built-in virtualization, cryptography and dynamic threading
    Contains the Exadata storage cells that intelligently offload the burden of the DB nodes
    Contains a highly available ZFS Storage Appliance that provides SAN/NAS storage in a unified way
    Combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand
    Can grow from a single half-rack to several full racks in size
    Supports the consolidation of hundreds of applications

    To summarize: all these technologies are great by themselves, but the real value is, as in every other Oracle Engineered System, integration. All these technologies are tuned to perform together; together they are far more than the sum of their parts, and a careful and actually very time-consuming integration process is necessary to orchestrate them all for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and, most importantly, time- and resource-consuming part of the work - testing and evaluating - has been done. Now go, provide services.

    -- charlie

    Read the article

  • Showplan Operator of the Week - Merge Interval

    When Fabiano agreed to undertake the epic task of describing each showplan operator, none of us quite predicted the interesting ways that the series helps to understand how the query optimizer works. With the Merge Interval, Fabiano comes up with some insights about the way that the query optimizer handles overlapping ranges efficiently.

    Read the article

  • New BIDSHelper Release - 1.4.3.0

    - by Darren Gosbell
    Today we released an update for BIDSHelper which you can download from here. This release addresses the following issues:
    For some people the BIDS Helper extensions to the Project Properties page for the SSIS Deploy plugin were not available
    Copy and paste errors in SSIS packages
    Updates to the Parent-Child Dim Naturalizer
    The following features are new in this release:
    Analysis Services Many-to-Many Matrix Compression
    Roles Report
    General Preferences
    If you are interested to find out what else BIDSHelper can do, full documentation of all the features is available on the project website here: http://bidshelper.codeplex.com

    Read the article

  • Columnar Databases

    - by jchang
    Ingres just published a TPC-H benchmark for VectorWise, an analytic database technology employing 1) SIMD processing (Intel SSE 4.2), 2) better memory optimizations to leverage on-chip cache, 3) compression, 4) column-based storage. Ingres originated as a research project at UC Berkeley (see Wikipedia) in the 1970s, and has since become a commercially supported, open source database system. Apparently, Ingres project people later founded Sybase. So Ingres, in a sense, is the grandfather (or perhap...(read more)

    Read the article

  • TechEd 2010 Followup

    - by AllenMWhite
    Last week I presented a couple of sessions at Tech Ed NA in New Orleans. It was a great experience, even though my demos didn't always work out as planned. Here are the sessions I presented: DAT01-INT Administrative Demo-Fest for SQL Server 2008 SQL Server 2008 provides a wealth of features aimed at the DBA. In this demofest of features we'll see ways to make administering SQL Server easier and faster such as Centralized Data Management, Performance Data Warehouse, Resource Governor, Backup Compression...(read more)

    Read the article

  • how to convert avi (xvid) to mkv or mp4 (h264)

    - by bcsteeve
    Very noob when it comes to video. I'm trying to make sense of what I'm finding via Google... but it's mostly Greek to me. I have a bunch of AVI files that won't play in my WD TV Play box. Mediainfo tells me they are Xvid. Specs for the box show that should be fine... but digging through forums says it's hit-and-miss. So I'd like to try converting them to H.264-encoded MKV or MP4 files. I gather avconv is the tool, but reading the manual just has me really, really confused. I tried the very basic example of:

    avconv -i file.avi -c copy file.mp4

    It took less than 4 seconds. And it worked... sort of. It "played" in that something came up on the screen... but there was horrible artifacting and scenes would just sort of melt into each other. I want to preserve quality if possible. I'm not concerned about file size. I'm not terribly concerned with the time it takes either, provided I can do them in a batch. Can someone familiar with the process please give me a command with the options? Thank you for your help. I'm posting the mediainfo in case it helps:

    General
    Complete name : \\SERVER\Video\Public\test.avi
    Format : AVI
    Format/Info : Audio Video Interleave
    File size : 189 MiB
    Duration : 11mn 18s
    Overall bit rate : 2 335 Kbps
    Writing application : Lavf52.32.0

    Video
    ID : 0
    Format : MPEG-4 Visual
    Format profile : Advanced Simple@L5
    Format settings, BVOP : 2
    Format settings, QPel : No
    Format settings, GMC : No warppoints
    Format settings, Matrix : Default (H.263)
    Muxing mode : Packed bitstream
    Codec ID : XVID
    Codec ID/Hint : XviD
    Duration : 11mn 18s
    Bit rate : 2 129 Kbps
    Width : 720 pixels
    Height : 480 pixels
    Display aspect ratio : 16:9
    Frame rate : 29.970 fps
    Standard : NTSC
    Color space : YUV
    Chroma subsampling : 4:2:0
    Bit depth : 8 bits
    Scan type : Progressive
    Compression mode : Lossy
    Bits/(Pixel*Frame) : 0.206
    Stream size : 172 MiB (91%)
    Writing library : XviD 1.2.1 (UTC 2008-12-04)

    Audio
    ID : 1
    Format : MPEG Audio
    Format version : Version 1
    Format profile : Layer 3
    Mode : Joint stereo
    Mode extension : MS Stereo
    Codec ID : 55
    Codec ID/Hint : MP3
    Duration : 11mn 18s
    Bit rate mode : Constant
    Bit rate : 192 Kbps
    Channel(s) : 2 channels
    Sampling rate : 48.0 KHz
    Compression mode : Lossy
    Stream size : 15.5 MiB (8%)
    Alignment : Aligned on interleaves
    Interleave, duration : 24 ms (0.72 video frame)
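    A plausible reading of the symptoms (inferred from the mediainfo, not a verified diagnosis): -c copy does not convert anything - it only copies the existing Xvid stream into an MP4 container, and this file's "Muxing mode : Packed bitstream" is notorious for producing exactly this kind of artifacting after a straight remux. An actual re-encode to H.264 needs an explicit video codec. Assuming this avconv build includes libx264, something along these lines is a reasonable starting point (file names are placeholders; lower CRF means higher quality, with 18-23 the usual range):

        avconv -i input.avi -c:v libx264 -preset slow -crf 20 -c:a copy output.mkv

    Keeping -c:a copy leaves the MP3 track untouched, which the MKV container accepts fine. For a whole directory, a shell loop such as the following would batch it:

        for f in *.avi; do avconv -i "$f" -c:v libx264 -preset slow -crf 20 -c:a copy "${f%.avi}.mkv"; done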

    Read the article

  • How to Communicate Between SSBS Applications Across Instances

    Arshad Ali demonstrates how to verify the SQL Server Service Broker (SSBS) configuration when both the Initiator and Target are in different SQL Server instances, how to communicate between them, and how to monitor the conversation.

    Read the article

  • New database security product: Audit Vault and Database Firewall, at a significantly lower price

    - by user645740
    Oracle has merged the Audit Vault and Database Firewall products, making a higher level of database security available to an even wider range of users. The new product, Oracle Audit Vault and Database Firewall (AVDF), is now available at a more favorable price. For viewing reports it includes a restricted-use license for Business Intelligence Publisher. The data-collector and management-server components are specially hardened, and the following products can be used under restricted-use licensing on the Audit Vault Server and Database Firewall servers: Oracle Database Enterprise Edition, Database Vault, Partitioning, Advanced Compression and Advanced Security.

    Read the article

< Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >