Search Results

Search found 10055 results on 403 pages for 'what do i like most in 11'.


  • Redirect particular hostname from https to http in httpd/apache2

    - by webnothing
    I have a webserver that has an ssl certificate applied to a subdomain https://shop.mydomain.com. I also have the hostname http://mydomain.com that has no ssl certificate. When invoking https://mydomain.com, browsers issue a warning that a certificate could not be verified because the webserver is identifying itself as https://shop.mydomain.com. I would like all traffic that hits https://mydomain.com to be redirected to http://mydomain.com, and leave https://shop.mydomain.com as is. My httpd.conf file generally looks like this: < VirtualHost 122.11.11.21:80 > ServerName shop.mydomain.com .. regular old port 80 .. < /VirtualHost > < VirtualHost 122.11.11.21:443 > ServerName shop.mydomain.com .. SSL applies here .. < /VirtualHost > < VirtualHost 122.11.11.21:80 > ServerName mydomain.com .. regular old port 80 .. < /VirtualHost > It does not look as if I have SSL set up for https://mydomain.com yet one can invoke SSL mode and the browser identifies the connection as https://shop.mydomain.com. I need to redirect from https://mydomain.com because for some reason, Google has indexed my website with this url even though it shows a warning. I have tried various methods to get this to redirect and nothing has worked. Any help would be greatly appreciated.
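
    One possible approach (a sketch only, assuming mod_rewrite is enabled) is to add a name-based match for the bare domain inside the existing *:443 virtual host and redirect it back to plain HTTP. Note the certificate warning itself cannot be avoided this way, because the TLS handshake completes before Apache can issue the redirect:

        <VirtualHost 122.11.11.21:443>
            ServerName shop.mydomain.com
            # .. existing SSL directives ..
            RewriteEngine On
            # if the request arrived for the bare domain, send it back to plain HTTP
            RewriteCond %{HTTP_HOST} ^mydomain\.com$ [NC]
            RewriteRule ^(.*)$ http://mydomain.com$1 [R=301,L]
        </VirtualHost>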

    Read the article

  • Multiple cable adapter setup not working - VGA to smartphone. All cables tested and work

    - by Christopher Rucinski
    Issue Pictured overhead projector setup does not work. #1 - #2 - #3 - Phone. All cables are tested and work! The issue is the HDMI connection between cable #2 and #3. With all other cables, the screen will automatically be displayed onto the projector screen. No extra work needed. With the pictured setup, the smartphone screen is not displayed onto the projector screen. What is the issue with the HDMI connection?? Background We recently had to do presentations at work (school), but the administration only provided VGA means of hooking up to it. Mostly likely reason probably dealt with cost. Anyways, there are several teachers that have brand new Samsung Series 9 ultrabooks (or similar). You know, the ones without VGA support. So I bought an adapter for those ultrabooks. Cable #5 in the picture below. However, both my coworker and I have been wanting to just display our phone screens on the projector. This I knew would require some extra work. What I have VGA cable to projector (cables go through the wall) For laptops HDMI to VGA cable For laptops MHL adapter For 11-pin microUSB phones microHDMI to VGA cable For ultrabooks 11-pin to 5-pin microUSB adapter For older 5-pin microUSB phones) Equipment Projectors 1 projector with VGA and HDMI input (issue is coworkers forget to switch sources) 1 projector with VGA only input Laptops 2 new Samsung ultrabooks w/o VGA or HDMI support 1 ultrabook with VGA and HDMI support several other laptops with at least VGA support 1 tablet with 11-pin microUSB at least 1 new phone with 11-pin microUSB at least 1 old phone with 5-pin microUSB Tested VGA cable (#1) to laptop Good VGA cable (#1) to HDMI adapter (#2) to laptop Good VGA cable (#1) to microHDMI adapter (#5) to laptop Good Projector to HDMI cable (not shown) to MHL adapter (#3) to Galaxy Note 3 smartphone Good VGA cable (#1) to HDMI adapter (#2) to MHL adapter (#3) to Galaxy Note 3 smartphone Does not work!! Extra Notes The 11-pin MHL adapter will not fit inside the 11-pin to 5-pin microUSB adapter so older phones can be displayed on the screen.

    Read the article

  • lm-sensors - always returns 32 degrees (celsius) for temperature

    - by mopoke
    On my VIA EPIA motherboard (using VIA VT8231 ISA bridge), I get strange output for the lm-sensors temperature reading. It always returns 32 degrees (celsius). I have previously had correct output for temperature (my munin graphs show temperatures typically in the range of 50 to 60 degrees. I've tried uninstalling (and purging) the lm-sensors package, have re-run sensors-detect a number of times and rebooted but nothing seems to change the output. I am running Ubuntu Karmic Koala (9.10). Anyone got any bright ideas on what I might have missed? uname -a: Linux george 2.6.31-16-386 #53-Ubuntu SMP Tue Dec 8 06:39:34 UTC 2009 i686 GNU/Linux cpuinfo: processor : 0 vendor_id : CentaurHauls cpu family : 6 model : 7 model name : VIA Samuel 2 stepping : 3 cpu MHz : 399.000 cache size : 64 KB fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 1 wp : yes flags : fpu de tsc msr cx8 mtrr pge mmx 3dnow up bogomips : 800.04 clflush size : 32 power management: lspci: 00:00.0 Host bridge: VIA Technologies, Inc. VT8601 [Apollo ProMedia] (rev 05) 00:01.0 PCI bridge: VIA Technologies, Inc. VT8601 [Apollo ProMedia AGP] 00:11.0 ISA bridge: VIA Technologies, Inc. VT8231 [PCI-to-ISA Bridge] (rev 10) 00:11.1 IDE interface: VIA Technologies, Inc. VT82C586A/B/VT82C686/A/B/VT823x/A/C PIPC Bus Master IDE (rev 06) 00:11.2 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 1e) 00:11.3 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 1e) 00:11.4 Bridge: VIA Technologies, Inc. VT8235 ACPI (rev 10) 00:11.5 Multimedia audio controller: VIA Technologies, Inc. VT82C686 AC97 Audio Controller (rev 40) 00:12.0 Ethernet controller: VIA Technologies, Inc. VT6102 [Rhine-II] (rev 51) 01:00.0 VGA compatible controller: Trident Microsystems CyberBlade/i1 (rev 6a) sensors: acpitz-virtual-0 Adapter: Virtual device temp1: +32.0°C (crit = +60.0°C)

    Read the article

  • Ruckus wireless AP and Dell PowerConnect configuration problems

    - by DanielJay
    We are working on trying to get some Ruckus Access Points to work correctly on our network. Currently our network is as follows: VLAN 10 - Servers VLAN 11 – Computers/DHCP VLAN 12 – Voice VLAN 13 – Guest We use Dell PowerConnect 6248P switches for our switches. Port settings are as follows: ZoneDirector 1100 is plugged into this port. Should be accessing the server VLAN and then allowing all other traffic. interface ethernet 1/g2 classofservice trust ip-dscp description 'Ruckus ZoneDirector 1100' switchport mode general switchport general pvid 10 switchport general allowed vlan add 10 switchport general allowed vlan add 11-13 tagged exit Access point is plugged into this port. The port has to be on VLAN 11 in order to get DHCP. interface ethernet 1/g16 classofservice trust ip-dscp description 'Ruckus - IT' switchport mode general switchport general pvid 11 switchport general allowed vlan add 10-12 switchport general allowed vlan add 13 tagged exit If we tag the traffic from the SSID as VLAN 11 data fails. If we leave the SSID tagged as 1 the data flows correctly. Are there problems with passing tagged traffic to untagged ports? We are looking to see what we can do to get the SSID tagged as 11 instead of 1. Any suggestions?
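
    A sketch of one way to make the tagged SSID work, using the same PowerConnect syntax as above: traffic the AP tags as VLAN 11 must arrive on the switch port as a tagged member of VLAN 11, so the port cannot carry VLAN 11 untagged at the same time. Here the AP's own untagged management traffic is assumed to live on VLAN 10; adjust the PVID to whatever VLAN the AP itself should use:

        interface ethernet 1/g16
        classofservice trust ip-dscp
        description 'Ruckus - IT'
        switchport mode general
        switchport general pvid 10
        switchport general allowed vlan add 10
        switchport general allowed vlan add 11-13 tagged
        exit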

    Read the article

  • Android openvpn + zeroconf browser sending mdns query packets over eth0 instead of tap0 interface on wifi

    - by Mrunal
    On an android device, I am connecting to a remote network using openvpn for performing service discovery. WORKING CASE: After the device is camped on 3g/4g and after connecting to remote network by openvpn, when the zeroconf browser is launched, I can see the mdns query packets being send through the tap0 interface resulting into rendering of services on the browser. From the tcpdump captured on the device, I can see that the mdns query packets are send to tap0 interface. tap0 ip: 192.168.11.200 Route table information: Destination Gateway Genmask Flags Metric Ref Use Iface 76.26.112.234 10.179.240.1 255.255.255.255 UGH 0 0 0 pdpbr1 10.179.240.1 * 255.255.255.255 UH 0 0 0 pdpbr1 32.1.72.136 * 255.255.255.255 UH 0 0 0 pdpbr0 10.179.240.0 * 255.255.255.0 U 0 0 0 pdpbr1 192.168.11.0 * 255.255.255.0 U 0 0 0 tap0 default 192.168.11.1 0.0.0.0 UG 0 0 0 tap0 NOT WORKING CASE: However, after switching on the wifi and connecting it to remote network, when the zeroconf browser is launched, instead of sending the mdns query packets to tap0 interface; these packets are being send to eth0 interface due to which we cannot see the services. From the tcpdump captured on the device, I can see that mdns query packets are send to eth0 interface. tap0 ip: 192.168.11.200 eth0 ip: 192.168.43.230 route table information: Destination Gateway Genmask Flags Metric Ref Use Iface 76.26.112.234 192.168.43.1 255.255.255.255 UGH 0 0 0 eth0 32.1.72.136 * 255.255.255.255 UH 0 0 0 pdpbr0 192.168.11.0 * 255.255.255.0 U 0 0 0 tap0 192.168.43.0 * 255.255.255.0 U 0 0 0 eth0 default 192.168.11.1 0.0.0.0 UG 0 0 0 tap0 In the above case, even though there is a default route for tap0, all the multicast packets are being routed through eth0. How is this possible? Has anyone observed a similar problem and it would be really helpful if you can help us to discover services through zeroconf browser after the device is connected to remote network via openvpn through wifi. Thank You Very much, Mrunal
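
    mDNS queries go to the multicast group 224.0.0.251, and on Android multicast sockets are often bound to a specific interface (typically the Wi-Fi one) rather than following the main routing table. One thing to try, as a sketch only (requires root on the device, and command availability varies by build), is an explicit multicast route pinned to the VPN interface:

        ip route add 224.0.0.0/4 dev tap0
        # or, with the older route utility:
        route add -net 224.0.0.0 netmask 240.0.0.0 dev tap0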

    Read the article

  • Debian Stable: Can't update kernel, libc won't update.

    - by pascal
    I use Debian Stable (squeeze) on a virtual host where I can't touch the kernel; it's stuck (and will be for some time, as support told me) at Linux 2.6.18-028stab070.3 #1 SMP Wed Jul 21 18:33:27 MSD 2010 x86_64 So when I try to update, several packages fail with FATAL: kernel too old for example Preparing to replace libgcc1 1:4.6.0-11 (using .../libgcc1_1%3a4.6.1-1_amd64.deb) ... Unpacking replacement libgcc1 ... Setting up libgcc1 (1:4.6.1-1) ... FATAL: kernel too old Segmentation fault dpkg: error processing libgcc1 (--configure): subprocess installed post-installation script returned error exit status 139 and some version chaos ensued: The following packages have unmet dependencies: libc-dev-bin : Depends: libc6 (> 2.13) but 2.11.2-13 is installed libc6 : Depends: libc-bin (= 2.11.2-13) but 2.13-5 is installed libc6-dev : Depends: libc6 (= 2.13-5) but 2.11.2-13 is installed libquadmath0 : Depends: gcc-4.6-base (= 4.6.0-2) but 4.6.0-11 is installed libstdc++6 : Depends: gcc-4.6-base (= 4.6.0-2) but 4.6.0-11 is installed locales : Depends: glibc-2.13-1 What should I do? I want to keep the system up-to-date, so I want to pin as few packages as possible, but I also don't want to have to compile anything manually. I tried to pin the status quo and figured out where the error comes from: ldconfig segfaults. Running it with -v doesn't print anything more, so I can't tell what the actual problem is. # ldconfig FATAL: kernel too old
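
    A sketch of one way to hold things together, assuming the goal is simply to stop apt from pulling in the newer glibc (which is what prints "FATAL: kernel too old" on the 2.6.18 kernel): pin the squeeze versions of the C library packages with a priority high enough to force apt back to the installed branch. The file name is illustrative; one stanza per package.

        # /etc/apt/preferences.d/hold-glibc
        Package: libc6
        Pin: version 2.11.2*
        Pin-Priority: 1001

        # repeat the same stanza for libc-bin, libc6-dev, libc-dev-bin and locales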

    Read the article

  • Unable to connect via NetBIOS Name

    - by grom
    I can't connect to machines/shares by NetBIOS names. Below is console output showing the problem. C:\>nbtstat -n Local Area Connection: Node IpAddress: [192.168.1.100] Scope Id: [] NetBIOS Local Name Table Name Type Status --------------------------------------------- BEAST <00> UNIQUE Registered WORKGROUP <00> GROUP Registered BEAST <20> UNIQUE Registered WORKGROUP <1E> GROUP Registered WORKGROUP <1D> UNIQUE Registered ..__MSBROWSE__.<01> GROUP Registered C:\>nbtstat -A 192.168.1.3 Local Area Connection: Node IpAddress: [192.168.1.100] Scope Id: [] NetBIOS Remote Machine Name Table Name Type Status --------------------------------------------- BRCLAPTOP <00> UNIQUE Registered WORKGROUP <00> GROUP Registered BRCLAPTOP <20> UNIQUE Registered WORKGROUP <1E> GROUP Registered MAC Address = 00-1C-BF-14-B8-6E C:\>ping beast Pinging beast [fe80::59b8:179f:b90b:a63f%11] with 32 bytes of data: Reply from fe80::59b8:179f:b90b:a63f%11: time<1ms Reply from fe80::59b8:179f:b90b:a63f%11: time<1ms Reply from fe80::59b8:179f:b90b:a63f%11: time<1ms Reply from fe80::59b8:179f:b90b:a63f%11: time<1ms Ping statistics for fe80::59b8:179f:b90b:a63f%11: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms C:\>ping brclaptop Ping request could not find host brclaptop. Please check the name and try again. C:\>nbtstat -a brclaptop Local Area Connection: Node IpAddress: [192.168.1.100] Scope Id: [] Host not found.
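
    As a workaround sketch (not a root-cause fix): a static entry in the LMHOSTS file bypasses NetBIOS broadcast resolution entirely. The IP address below is taken from the nbtstat output above; the file lives at %SystemRoot%\System32\drivers\etc\lmhosts (no extension) and is reloaded with nbtstat -R.

        192.168.1.3     BRCLAPTOP       #PRE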

    Read the article

  • kill a hung mount process

    - by John P
    I have a virtual machine drive that ran out of space, so I shutdown the VM, extended the volume using lvextend. After resizing the partition (ext3), I ran e2fsck on it, and it found and corrected errors. Unfortunately, when I ran efsck one more time, there were more errors that had to be fixed. I went through 3 rounds of e2fsck before I decided to try mounting it to clean up some space manually. I tried mounting it, but the mount process hung. I tried to "kill -9" the mount process, but that did not kill it. I killed the parent process, but that did not kill it either. Any ideas on how to kill a rogue mount process? Some evidence: ps -l 13292 F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD 4 R 0 13292 1 99 85 0 - 17964 - ? 11:27 mount /dev/mapper/xen7-123p3 /tmp/p3/ lsof -p 13292 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME mount 13292 root cwd DIR 9,2 4096 25264129 /root mount 13292 root rtd DIR 9,2 4096 2 / mount 13292 root txt REG 9,2 61656 2916434 /bin/mount mount 13292 root mem REG 9,2 144776 31457282 /lib64/ld-2.5.so mount 13292 root mem REG 9,2 1718232 31457284 /lib64/libc-2.5.so mount 13292 root mem REG 9,2 23360 31457291 /lib64/libdl-2.5.so mount 13292 root mem REG 9,2 43808 31457783 /lib64/libblkid.so.1.0 mount 13292 root mem REG 9,2 247496 31457331 /lib64/libsepol.so.1 mount 13292 root mem REG 9,2 95464 31457337 /lib64/libselinux.so.1 mount 13292 root mem REG 9,2 154640 31457491 /lib64/libdevmapper.so.1.02 mount 13292 root mem REG 9,2 17936 31457472 /lib64/libuuid.so.1.2 mount 13292 root mem REG 9,2 56438208 12684878 /usr/lib/locale/locale-archive mount 13292 root 0u CHR 136,11 0t0 13 /dev/pts/11 (deleted) mount 13292 root 1u CHR 136,11 0t0 13 /dev/pts/11 (deleted) mount 13292 root 2u CHR 136,11 0t0 13 /dev/pts/11 (deleted) umount -f /tmp/p3/ umount2: Invalid argument umount: /tmp/p3/: not mounted
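
    A process blocked inside the kernel (state D, uninterruptible sleep) cannot be killed from userspace, which is why kill -9 has no effect; the realistic options are to find out where it is stuck and, failing that, reboot. A small diagnostic sketch (the /proc/PID/stack file only exists on newer kernels, so it may be absent on CentOS 5):

        ps -o pid,stat,wchan:32,cmd -p 13292   # shows the D state and the kernel function it waits in
        cat /proc/13292/stack                  # full kernel stack, if the kernel provides this file
        umount -l /tmp/p3                      # lazy detach, in case the mount half-completed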

    Read the article

  • How can I fill (not replace) TAB with Spaces in MSWord?

    - by Morteza
    How can I fill (not replace) TAB with Spaces in MS Office Word? In other words, have a look at the following picture: 1 -> 222 -> 3 111 -> 2 -> 333 11 -> 22 -> 33 11 -> 2222 -> 3333 Suppose that -> indicates one TAB. As you can see, each column is justified from the left. I need to fill each TAB with Spaces so that the alignment is not disturbed. If I use the 'Find & Replace' option to change each TAB to a specific number of Spaces, the alignment will be broken because each column has its own character count. In other words, if I change each TAB to 6 Spaces, the above will change to the following: 1 222 3 111 2 333 11 22 33 11 2222 3333 What I need is as follows (each dot indicates a Space): 1......222......3 111....2........333 11.....22.......33 11.....2222.....3333
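
    Word's Find & Replace cannot do variable-width padding, so this is really a job for a small macro. Below is a hedged VBA sketch: it assumes the text uses a monospaced font (e.g. Courier New) and emulates tab stops every FIELD_WIDTH characters by padding each tab-separated field with spaces; FIELD_WIDTH is an illustrative value.

        Sub TabsToSpaces()
            Const FIELD_WIDTH As Integer = 8
            Dim p As Paragraph, parts() As String, i As Integer, rebuilt As String
            For Each p In ActiveDocument.Paragraphs
                If InStr(p.Range.Text, vbTab) > 0 Then
                    ' drop the trailing paragraph mark, then split on tabs
                    parts = Split(Left(p.Range.Text, Len(p.Range.Text) - 1), vbTab)
                    rebuilt = ""
                    For i = LBound(parts) To UBound(parts)
                        rebuilt = rebuilt & parts(i)
                        If i < UBound(parts) Then
                            ' pad to the next multiple of FIELD_WIDTH, as a tab stop would
                            rebuilt = rebuilt & Space(FIELD_WIDTH - (Len(parts(i)) Mod FIELD_WIDTH))
                        End If
                    Next i
                    p.Range.Text = rebuilt & vbCr
                End If
            Next p
        End Sub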

    Read the article

  • Excel table auto update

    - by Mike
    So I have a table in Excel with formulas. When I add a new row, the new row automatically fills in the formulas as well, which is great. My problem is that it also changes the formula in the row above the added row as well. Here's what happens specifically: My table's last row is row 24. A formula I have in that row is the following: =COUNTIF(C$11:C24,"y")/(COUNTIF(C$11:C24,"Y")+COUNTIF(C$11:C24,"N")) When I add in data in row 25 the formula is updated in row 25 as well, which is what I want, to the following: =COUNTIF(C$11:C25,"y")/(COUNTIF(C$11:C25,"Y")+COUNTIF(C$11:C25,"N")) My problem is that the row above also updates - my row 24 changes to the same as row 25 (the C24 goes to C25). Why is my row 24 formula changing when I add a row 25? Note, my formulas above row 24 stay the same when I add in row 25 - only row 24 changes when I add in 25. Is there a way to not update the row above the row being added? This problem continues when additional rows are added - If I add in a row 26, then the formula in rows 24-26 then all reference C26. Why are they all updating?
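
    One workaround, as a sketch: Excel rewrites a plain reference like C24 when rows are added around it, but a range whose end point is computed at calculation time is left alone. INDEX(C:C,ROW()) returns the cell in column C on the formula's own row, so each row's range always ends at itself no matter what is inserted below it:

        =COUNTIF(C$11:INDEX(C:C,ROW()),"y")/(COUNTIF(C$11:INDEX(C:C,ROW()),"Y")+COUNTIF(C$11:INDEX(C:C,ROW()),"N"))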

    Read the article

  • CBO????????

    - by Liu Maclean(???)
    ???Itpub????????CBO??????????, ????????: SQL> create table maclean1 as select * from dba_objects; Table created. SQL> update maclean1 set status='INVALID' where owner='MACLEAN'; 2 rows updated. SQL> commit; Commit complete. SQL> create index ind_maclean1 on maclean1(status); Index created. SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN1',cascade=>true); PL/SQL procedure successfully completed. SQL> explain plan for select * from maclean1 where status='INVALID'; Explained. SQL> set linesize 140 pagesize 1400 SQL> select * from table(dbms_xplan.display()); PLAN_TABLE_OUTPUT --------------------------------------------------------------------------- Plan hash value: 987568083 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 11320 | 1028K| 85 (0)| 00:00:02 | |* 1 | TABLE ACCESS FULL| MACLEAN1 | 11320 | 1028K| 85 (0)| 00:00:02 | ------------------------------------------------------------------------------ Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter("STATUS"='INVALID') 13 rows selected. 10053 trace Access path analysis for MACLEAN1 *************************************** SINGLE TABLE ACCESS PATH   Single Table Cardinality Estimation for MACLEAN1[MACLEAN1]   Column (#10): STATUS(     AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.500000   Table: MACLEAN1  Alias: MACLEAN1     Card: Original: 22639.000000  Rounded: 11320  Computed: 11319.50  Non Adjusted: 11319.50   Access Path: TableScan     Cost:  85.33  Resp: 85.33  Degree: 0       Cost_io: 85.00  Cost_cpu: 11935345       Resp_io: 85.00  Resp_cpu: 11935345   Access Path: index (AllEqRange)     Index: IND_MACLEAN1     resc_io: 185.00  resc_cpu: 8449916     ix_sel: 0.500000  ix_sel_with_filters: 0.500000     Cost: 185.24  Resp: 185.24  Degree: 1   Best:: AccessPath: TableScan          Cost: 85.33  Degree: 1  Resp: 85.33  Card: 11319.50  Bytes: 0 ?????10053????????????,?????Density = 0.5 ?? 1/ NDV ??? ??????????????STATUS='INVALID"???????????, ????????????????? ????”STATUS”=’INVALID’ condition???2?,?status??????,??????dbms_stats?????????????,???CBO????INDEX Range ind_maclean1,???????,??????opitimizer?????? ?????????????????????????,????????,??????????status=’INVALID’???????card??,????????: [oracle@vrh4 ~]$ sqlplus / as sysdba SQL*Plus: Release 11.2.0.2.0 Production on Mon Oct 17 19:15:45 2011 Copyright (c) 1982, 2010, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL> select * from v$version; BANNER -------------------------------------------------------------------------------- Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production PL/SQL Release 11.2.0.2.0 - Production CORE 11.2.0.2.0 Production TNS for Linux: Version 11.2.0.2.0 - Production NLSRTL Version 11.2.0.2.0 - Production SQL> show parameter optimizer_fea NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ optimizer_features_enable string 11.2.0.2 SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- www.oracledatabase12g.com & www.askmaclean.com SQL> drop table maclean; Table dropped. 
SQL> create table maclean as select * from dba_objects; Table created. SQL> update maclean set status='INVALID' where owner='MACLEAN'; 2 rows updated. SQL> commit; Commit complete. SQL> create index ind_maclean on maclean(status); Index created. SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN',cascade=>true, method_opt=>'FOR ALL COLUMNS SIZE 2'); PL/SQL procedure successfully completed. ???????2?bucket????, ??????????????? ???Quest???Guy Harrison???????FREQUENCY????????,??????: rem rem Generate a histogram of data distribution in a column as recorded rem in dba_tab_histograms rem rem Guy Harrison Jan 2010 : www.guyharrison.net rem rem hexstr function is from From http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:707586567563 set pagesize 10000 set lines 120 set verify off col char_value format a10 heading "Endpoint|value" col bucket_count format 99,999,999 heading "bucket|count" col pct format 999.99 heading "Pct" col pct_of_max format a62 heading "Pct of|Max value" rem col endpoint_value format 9999999999999 heading "endpoint|value" CREATE OR REPLACE FUNCTION hexstr (p_number IN NUMBER) RETURN VARCHAR2 AS l_str LONG := TO_CHAR (p_number, 'fm' || RPAD ('x', 50, 'x')); l_return VARCHAR2 (4000); BEGIN WHILE (l_str IS NOT NULL) LOOP l_return := l_return || CHR (TO_NUMBER (SUBSTR (l_str, 1, 2), 'xx')); l_str := SUBSTR (l_str, 3); END LOOP; RETURN (SUBSTR (l_return, 1, 6)); END; / WITH hist_data AS ( SELECT endpoint_value,endpoint_actual_value, NVL(LAG (endpoint_value) OVER (ORDER BY endpoint_value),' ') prev_value, endpoint_number, endpoint_number, endpoint_number - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count FROM dba_tab_histograms JOIN dba_tab_col_statistics USING (owner, table_name,column_name) WHERE owner = '&owner' AND table_name = '&table' AND column_name = '&column' AND histogram='FREQUENCY') SELECT nvl(endpoint_actual_value,endpoint_value) endpoint_value , bucket_count, ROUND(bucket_count*100/SUM(bucket_count) OVER(),2) PCT, RPAD(' ',ROUND(bucket_count*50/MAX(bucket_count) OVER()),'*') pct_of_max FROM hist_data; WITH hist_data AS ( SELECT endpoint_value,endpoint_actual_value, NVL(LAG (endpoint_value) OVER (ORDER BY endpoint_value),' ') prev_value, endpoint_number, endpoint_number, endpoint_number - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count FROM dba_tab_histograms JOIN dba_tab_col_statistics USING (owner, table_name,column_name) WHERE owner = '&owner' AND table_name = '&table' AND column_name = '&column' AND histogram='FREQUENCY') SELECT hexstr(endpoint_value) char_value, bucket_count, ROUND(bucket_count*100/SUM(bucket_count) OVER(),2) PCT, RPAD(' ',ROUND(bucket_count*50/MAX(bucket_count) OVER()),'*') pct_of_max FROM hist_data ORDER BY endpoint_value; ?????,??????????FREQUENCY?????: ??dbms_stats ?????STATUS=’INVALID’ bucket count=9 percent = 0.04 ,??????10053 trace????????: SQL> explain plan for select * from maclean where status='INVALID'; Explained. 
SQL>  select * from table(dbms_xplan.display()); PLAN_TABLE_OUTPUT ------------------------------------- Plan hash value: 3087014066 ------------------------------------------------------------------------------------------- | Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     | ------------------------------------------------------------------------------------------- |   0 | SELECT STATEMENT            |             |     9 |   837 |     2   (0)| 00:00:01 | |   1 |  TABLE ACCESS BY INDEX ROWID| MACLEAN     |     9 |   837 |     2   (0)| 00:00:01 | |*  2 |   INDEX RANGE SCAN          | IND_MACLEAN |     9 |       |     1   (0)| 00:00:01 | ------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): ---------------------------------------------------    2 - access("STATUS"='INVALID') ??????????????CBO???????STATUS=’INVALID’?cardnality?? , ??????????? ,??index range scan??Full table scan? ????????????????10053 trace: SQL> alter system flush shared_pool; System altered. SQL> oradebug setmypid; Statement processed. SQL> oradebug event 10053 trace name context forever ,level 1; Statement processed. SQL> explain plan for select * from maclean where status='INVALID'; Explained. SINGLE TABLE ACCESS PATH Single Table Cardinality Estimation for MACLEAN[MACLEAN] Column (#10): NewDensity:0.000199, OldDensity:0.000022 BktCnt:22640, PopBktCnt:22640, PopValCnt:2, NDV:2 ???NewDensity= bucket_count / SUM(bucket_count) /2 Column (#10): STATUS( AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.000199 Histogram: Freq #Bkts: 2 UncompBkts: 22640 EndPtVals: 2 Table: MACLEAN Alias: MACLEAN Card: Original: 22640.000000 Rounded: 9 Computed: 9.00 Non Adjusted: 9.00 Access Path: TableScan Cost: 85.30 Resp: 85.30 Degree: 0 Cost_io: 85.00 Cost_cpu: 10804625 Resp_io: 85.00 Resp_cpu: 10804625 Access Path: index (AllEqRange) Index: IND_MACLEAN resc_io: 2.00 resc_cpu: 20763 ix_sel: 0.000398 ix_sel_with_filters: 0.000398 Cost: 2.00 Resp: 2.00 Degree: 1 Best:: AccessPath: IndexRange Index: IND_MACLEAN Cost: 2.00 Degree: 1 Resp: 2.00 Card: 9.00 Bytes: 0 ???????????2 bucket?????CBO????????????,???????????????????,???dbms_stats.DEFAULT_METHOD_OPT????????????????????? ???dbms_stats?????????????????????col_usage$??????predicate???????,??col_usage$??<????????SMON??(?):??col_usage$????>? ??????????dbms_stats????????,col_usage$????????????predicate???,??dbms_stats??????????????????, ?: SQL> drop table maclean; Table dropped. SQL> create table maclean as select * from dba_objects; Table created. SQL> update maclean set status='INVALID' where owner='MACLEAN'; 2 rows updated. SQL> commit; Commit complete. SQL> create index ind_maclean on maclean(status); Index created. ??dbms_stats??method_opt??maclean? SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN'); PL/SQL procedure successfully completed. @histogram.sql Enter value for owner: SYS old  12:    WHERE owner = '&owner' new  12:    WHERE owner = 'SYS' Enter value for table: MACLEAN old  13:      AND table_name = '&table' new  13:      AND table_name = 'MACLEAN' Enter value for column: STATUS old  14:      AND column_name = '&column' new  14:      AND column_name = 'STATUS' no rows selected ????col_usage$?????,????????status????? 
declare begin for i in 1..500 loop execute immediate ' alter system flush shared_pool'; DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO; execute immediate 'select count(*) from maclean where status=''INVALID'' ' ; end loop; end; / PL/SQL procedure successfully completed. SQL> select obj# from obj$ where name='MACLEAN';       OBJ# ----------      97215 SQL> select * from  col_usage$ where  OBJ#=97215;       OBJ#    INTCOL# EQUALITY_PREDS EQUIJOIN_PREDS NONEQUIJOIN_PREDS RANGE_PREDS LIKE_PREDS NULL_PREDS TIMESTAMP ---------- ---------- -------------- -------------- ----------------- ----------- ---------- ---------- ---------      97215          1              1              0                 0           0          0          0 17-OCT-11      97215         10            499              0                 0           0          0          0 17-OCT-11 SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN'); PL/SQL procedure successfully completed. @histogram.sql Enter value for owner: SYS Enter value for table: MACLEAN Enter value for column: STATUS Endpoint        bucket         Pct of value            count     Pct Max value ---------- ----------- ------- -------------------------------------------------------------- INVALI               2     .04 VALIC3           5,453   99.96  *************************************************

    Read the article

  • Cannot install virtualbox-ose-guest-dkms on Ubuntu 10.04

    - by mrduongnv
    I'm using Ubuntu 10.04 and kernel: 2.6.38-11-generic-pae. While using VirtualBox, i got this error: Kernel driver not installed (rc=-1908) Please install the virtualbox-ose-dkms package and execute 'modprobe vboxdrv' as root. I tried to execute the command: sudo modprobe vboxdrv and got this error: FATAL: Module vboxdrv not found. After that, i execute this: sudo aptitude install virtualbox-ose-guest-dkms duongnv@duongnv-laptop:~$ sudo aptitude install virtualbox-ose-guest-dkms Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done The following partially installed packages will be configured: virtualbox-ose-guest-dkms 0 packages upgraded, 0 newly installed, 0 to remove and 105 not upgraded. Need to get 0B of archives. After unpacking 0B will be used. Writing extended state information... Done Setting up virtualbox-ose-guest-dkms (3.1.6-dfsg-2ubuntu2) ... Removing old virtualbox-ose-guest-3.1.6 DKMS files... ------------------------------ Deleting module version: 3.1.6 completely from the DKMS tree. ------------------------------ Done. Loading new virtualbox-ose-guest-3.1.6 DKMS files... First Installation: checking all kernels... Building only for 2.6.38-11-generic-pae Building for architecture i686 Building initial module for 2.6.38-11-generic-pae Error! Bad return status for module build on kernel: 2.6.38-11-generic-pae (i686) Consult the make.log in the build directory /var/lib/dkms/virtualbox-ose-guest/3.1.6/build/ for more information. dpkg: error processing virtualbox-ose-guest-dkms (--configure): subprocess installed post-installation script returned error exit status 10 Errors were encountered while processing: virtualbox-ose-guest-dkms E: Sub-process /usr/bin/dpkg returned an error code (1) A package failed to install. Trying to recover: Setting up virtualbox-ose-guest-dkms (3.1.6-dfsg-2ubuntu2) ... Removing old virtualbox-ose-guest-3.1.6 DKMS files... ------------------------------ Deleting module version: 3.1.6 completely from the DKMS tree. ------------------------------ Done. Loading new virtualbox-ose-guest-3.1.6 DKMS files... First Installation: checking all kernels... Building only for 2.6.38-11-generic-pae Building for architecture i686 Building initial module for 2.6.38-11-generic-pae Error! Bad return status for module build on kernel: 2.6.38-11-generic-pae (i686) Consult the make.log in the build directory /var/lib/dkms/virtualbox-ose-guest/3.1.6/build/ for more information. dpkg: error processing virtualbox-ose-guest-dkms (--configure): subprocess installed post-installation script returned error exit status 10 Errors were encountered while processing: virtualbox-ose-guest-dkms Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done
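
    Two things stand out in the transcript above: the runtime error asks for the host-side module package (virtualbox-ose-dkms) while the package being rebuilt is the guest-additions one (virtualbox-ose-guest-dkms), and the DKMS build fails because VirtualBox 3.1.6 predates the 2.6.38 kernel. A short diagnostic sketch, using the package and path names shown above:

        # make sure headers for the running kernel are present, then read the real build error
        sudo apt-get install dkms linux-headers-$(uname -r)
        less /var/lib/dkms/virtualbox-ose-guest/3.1.6/build/make.log

        # if this machine is the VirtualBox *host* (not a guest), the relevant package is:
        sudo apt-get install virtualbox-ose-dkms
        sudo modprobe vboxdrv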

    Read the article

  • SSRS 2005 giving me "Invalid URI: The format of the URI could not be determined" when trying to customize it

    - by Brian
    Hello, I'm getting the error "Invalid URI: The format of the URI could not be determined" when customizing it. I've made several changes to the configuration files and UI, but I keep getting this error. It isn't logging it too in the event log nor the log files, which makes it very annoying to debug. So how do I figure out where the error is coming from? Is it with the URL that's pointing to the ReportServer2005.asmx file, or something else? Updated: The specific error being logged is: aspnet_wp!library!9!3/11/2010-15:52:49:: i INFO: Initializing WatsonDumpOnExceptions to default value of 'Microsoft.ReportingServices.Diagnostics.Utilities.InternalCatalogException,Microsoft.ReportingServices.Modeling.InternalModelingException' because it was not specified in Configuration file. aspnet_wp!library!9!3/11/2010-15:52:49:: i INFO: Initializing WatsonDumpExcludeIfContainsExceptions to default value of 'System.Data.SqlClient.SqlException,System.Threading.ThreadAbortException' because it was not specified in Configuration file. aspnet_wp!library!9!3/11/2010-15:52:49:: i INFO: Initializing SecureConnectionLevel to default value of '1' because it was not specified in Configuration file. aspnet_wp!library!9!3/11/2010-15:52:49:: i INFO: Initializing DisplayErrorLink to 'True' as specified in Configuration file. aspnet_wp!library!9!3/11/2010-15:52:49:: i INFO: Initializing WebServiceUseFileShareStorage to default value of 'False' because it was not specified in Configuration file. aspnet_wp!ui!9!3/11/2010-15:52:52:: e ERROR: Invalid URI: The format of the URI could not be determined. aspnet_wp!ui!9!3/11/2010-15:52:53:: e ERROR: HTTP status code -- 500 -------Details-------- System.UriFormatException: Invalid URI: The format of the URI could not be determined. at Microsoft.SqlServer.ReportingServices2005.RSConnection.GetSecureMethods() at Microsoft.ReportingServices.UI.Global.RSWebServiceWrapper.GetSecureMethods() at Microsoft.SqlServer.ReportingServices2005.RSConnection.IsSecureMethod(String methodname) at Microsoft.SqlServer.ReportingServices2005.RSConnection.ValidateConnection() at Microsoft.ReportingServices.UI.Global.SecureAllAPI() at Microsoft.ReportingServices.UI.ReportingPage.EnsureHttpsLevel(HttpsLevel level) at Microsoft.ReportingServices.UI.ReportingPage.ReportingPage_Init(Object sender, EventArgs args) at System.EventHandler.Invoke(Object sender, EventArgs e) at System.Web.UI.Control.OnInit(EventArgs e) at System.Web.UI.Page.OnInit(EventArgs e) at System.Web.UI.Control.InitRecursive(Control namingContainer) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) aspnet_wp!ui!9!3/11/2010-15:52:53:: e ERROR: Exception in ShowErrorPage: System.Threading.ThreadAbortException: Thread was being aborted. at System.Threading.Thread.AbortInternal() at System.Threading.Thread.Abort(Object stateInfo) at System.Web.HttpResponse.End() at System.Web.HttpServerUtility.Transfer(String path, Boolean preserveForm) at Microsoft.ReportingServices.UI.ReportingPage.ShowErrorPage(String errMsg) at at System.Threading.Thread.AbortInternal() at System.Threading.Thread.Abort(Object stateInfo) at System.Web.HttpResponse.End() at System.Web.HttpServerUtility.Transfer(String path, Boolean preserveForm) at Microsoft.ReportingServices.UI.ReportingPage.ShowErrorPage(String errMsg) Thanks.

    Read the article

  • Translate binary joke to ASCII

    - by Nix
    I found this cartoon, which is a binary joke, and have been trying to translate it! I got the binary from the GIF, but I can't figure out how to translate it to ASCII... 10001 11011 11 1011 01 01001 11011 0111011 01 100101 10 01111 01 100101 10 01111 01 01001 11011 0111011 01 10001 11011 11 1011 01 10001 11011 11 1011 01 01001 11011 0111011 01 100101 10 01111 01 100101 10 01111 01 01001 11011 0111011 01 1001 11011 11 1011 01 01101 Any help translating would be great.
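
    For the mechanical part of the question, a small Python sketch that turns space-separated binary groups into characters; note the groups in the cartoon have uneven lengths, so they are not plain 8-bit ASCII, and the helper is only a starting point for probing the encoding:

        def binary_groups_to_text(s):
            """Interpret each whitespace-separated group as a binary character code."""
            return "".join(chr(int(group, 2)) for group in s.split())

        print(binary_groups_to_text("1001000 1101001"))  # standard ASCII: prints "Hi"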

    Read the article

  • RegEx to exclude a string

    - by sameera207
    Hi all, I have the following strings in my application: /admin/stylesheets/11 /admin/javascripts/11 /contactus What I want to do is write a regular expression that captures anything other than strings starting with 'admin'; basically my regex should capture only /contactus, excluding both /admin/stylesheets/11 and /admin/javascripts/11. To capture everything I wrote /.+/, and I wrote /(admin).+/, which captures everything that starts with 'admin'. How can I do the reverse, i.e. get everything not starting with 'admin'? Thanks in advance. Cheers, sameera
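
    A sketch of the usual answer: a negative lookahead matches only when the string does not start with the excluded prefix, assuming the regex flavour in use supports lookaheads (Ruby, Perl, PCRE and most others do):

        ^(?!/admin).+$
        # e.g. in Ruby:
        #   path =~ %r{\A(?!/admin)}    # matches "/contactus" but not "/admin/stylesheets/11"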

    Read the article

  • Kernel upgrade CentOS 5.3 mount: could not find filesystem '/dev/root'

    - by matt
    We have a CentOS 5.3 x64 server that by default runs kernel version 2.6.18-164.11.1 and we are attempting to upgrade the box to 2.6.31.12 The drive is LVM +ext3, and the problem I'm having is when I upgrade the kernel and attempt to boot from it, no matter what version of the kernel I use, I get /dev/root not found towards the end of the boot process, and the kernel panics, and than reboots. I'm installing the kernel exactly as it says in this doc. I've tried it "The centOS way " using make rpm and than installing that. I've updated my mkinitrd. The most interesting part of this problem is that it has been so frustrating that I decided to try and clean install centos on an identical machine without LVM, and the result is EXACTLY the same. After upgrading the kernel, I get /dev/root not found. Does anyone know how to fix this, or what information would be relevant to remedy it? I'm open to try anything at this point. One more interesting thing about this problem is that in the new version of the kernel, during boot it complains that dm-mapper is started twice, than panics right after that. I've tried this with other kernel versions, and the result is the same. What am I missing here? If you need any more files, please just ask. Linux cg 2.6.18-164.11.1.el5 #1 SMP Wed Jan 20 07:32:21 EST 2010 x86_64 x86_64 x86_64 GNU/Linux /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 tmpfs /dev/shm tmpfs defaults 0 0 devpts /dev/pts devpts gid=5,mode=620 0 0 sysfs /sys sysfs defaults 0 0 proc /proc proc defaults 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 default=1 timeout=5 splashimage=(hd0,0)/grub/splash.xpm.gz hiddenmenu title CentOS (2.6.31.12-rt20) //NOT WORKING!!!! root (hd0,0) kernel /vmlinuz-2.6.31.12-rt20 ro root=/dev/VolGroup00/LogVol00 isolcpus=8,9,10,11,12,13,14,15 panic=10 initrd /initrd-2.6.31.12-rt20.img title CentOS (2.6.18-164.11.1.el5) //WORKING!! root (hd0,0) kernel /vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/VolGroup00/LogVol00 isolcpus=8,9,10,11,12,13,14,15 panic=10 initrd /initrd-2.6.18-164.11.1.el5.img
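
    A panic on "could not find filesystem '/dev/root'" usually means the new initrd lacks the modules needed to reach the root device (device-mapper for the LVM root, or the disk controller driver). A hedged sketch of rebuilding it on CentOS 5 (module names are illustrative, and the modules for the new kernel must already be installed under /lib/modules/2.6.31.12-rt20 via make modules_install):

        mkinitrd -f -v --with=dm-mod --with=dm-mirror /boot/initrd-2.6.31.12-rt20.img 2.6.31.12-rt20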

    Read the article

  • xen-create-image does not create initrd or initramfs image and domU does not start with system image

    - by user219372
    I have Fedora 19 as Dom0. To create image I run # xen-create-image --hostname=debian-wheezy --memory=512Mb --dhcp --size=20Gb --swap=512Mb --dir=/xen --arch=amd64 --dist=wheezy After generation finished I start vm and see: # xl create /etc/xen/debian-wheezy.cfg Parsing config from /etc/xen/debian-wheezy.cfg libxl: error: libxl_dom.c:409:libxl__build_pv: xc_dom_ramdisk_file failed: No such file or directory libxl: error: libxl_create.c:919:domcreate_rebuild_done: cannot (re-)build domain: -3 In the /etc/xen/debian-wheezy.cfg i have # # Kernel + memory size # kernel = '/boot/vmlinuz-3.11.2-201.fc19.x86_64' ramdisk = '/boot/initrd.img-3.11.2-201.fc19.x86_64' and ls -1 /boot/*201* shows /boot/config-3.11.2-201.fc19.x86_64 /boot/initramfs-3.11.2-201.fc19.x86_64.img /boot/System.map-3.11.2-201.fc19.x86_64 /boot/vmlinuz-3.11.2-201.fc19.x86_64 Then if I fix ramdisk directive in .cfg file to /boot/initramfs-3.11.2-201.fc19.x86_64.img vm will start but os inside will not boot. In a tail of xl console I get [ OK ] Reached target Basic System. dracut-initqueue[130]: Warning: Could not boot. dracut-initqueue[130]: Warning: /dev/disk/by-uuid/085883ad-73ca-45cc-8bc5-e6249f869b26 does not exist dracut-initqueue[130]: Warning: /dev/fedora/root does not exist dracut-initqueue[130]: Warning: /dev/fedora/swap does not exist dracut-initqueue[130]: Warning: /dev/mapper/fedora-root does not exist dracut-initqueue[130]: Warning: /dev/mapper/fedora-swap does not exist dracut-initqueue[130]: Warning: /dev/xvda2 does not exist Starting Dracut Emergency Shell... Warning: /dev/disk/by-uuid/085883ad-73ca-45cc-8bc5-e6249f869b26 does not exist Warning: /dev/fedora/root does not exist Warning: /dev/fedora/swap does not exist Warning: /dev/mapper/fedora-root does not exist Warning: /dev/mapper/fedora-swap does not exist Warning: /dev/xvda2 does not exist Generating "/run/initramfs/sosreport.txt" Entering emergency mode. Exit the shell to continue. Type "journalctl" to view system logs. You might want to save "/run/initramfs/sosreport.txt" to a USB stick or /boot after mounting them and attach it to a bug report. dracut:/# .img files in /xen/domains/debian-wheezy exists and listed in disk section of debian-wheezy.cfg So what should i do? Update: I've found that xl does not mount images. In debian-wheezy.cfg I have that: root = '/dev/xvda2 ro' disk = [ 'file:/xen/domains/debian-wheezy/disk.img,xvda2,w', 'file:/xen/domains/debian-wheeze/swap.img,xvda1,w', ] And there is no /dev/xvda* or /dev/sda* or /dev/hda* files in VM.
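
    A hedged sketch of one direction to investigate: the ramdisk being used is the Fedora Dom0's host-only initramfs, which looks for the host's "fedora" LVM volumes instead of the guest's /dev/xvda2. Building a generic initramfs that includes the Xen PV disk driver, and pointing the ramdisk= line at it, is one way around that (the output file name is illustrative):

        dracut --no-hostonly --add-drivers "xen-blkfront" \
            /boot/initramfs-domU-3.11.2-201.fc19.x86_64.img 3.11.2-201.fc19.x86_64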

    Read the article

  • How do I add values from two separate queries in SQL

    - by fishhead
    Below is my attempt at adding two values from separate select statements... it's not working and I can't see why. Looking for some direction, thanks. select (v1.Value + v2.Value) as total from ( (Select Max(Value) as [Value] from History WHERE Datetime>='Apr 11 2010 6:05AM' and Datetime<='Apr 11 2010 6:05PM' and Tagname ='RWQ272017DTD' ) as v1 (Select Max(Value) as [Value] from History WHERE Datetime>='Apr 11 2010 6:05AM' and Datetime<='Apr 11 2010 6:05PM' and Tagname ='RU282001DTD' ) as v2 ) Boy, do I feel foolish... I asked the same question a few days ago... now I can't delete this.
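
    For reference, a sketch of the corrected query: the two derived tables need to be joined (each returns exactly one row, so a CROSS JOIN is enough), and the stray outer parentheses removed:

        select (v1.Value + v2.Value) as total
        from
          (select Max(Value) as [Value] from History
            where Datetime >= 'Apr 11 2010 6:05AM' and Datetime <= 'Apr 11 2010 6:05PM'
              and Tagname = 'RWQ272017DTD') as v1
          cross join
          (select Max(Value) as [Value] from History
            where Datetime >= 'Apr 11 2010 6:05AM' and Datetime <= 'Apr 11 2010 6:05PM'
              and Tagname = 'RU282001DTD') as v2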

    Read the article

  • Generate lags in R

    - by Btibert3
    Hi All, I hope this is basic; just need a nudge in the right direction. I have read in a database table from MS Access into a data frame using RODBC. Here is a basic structure of what I read in: PRODID PROD Year Week QTY SALES INVOICE Here is the structure: str(data) 'data.frame': 8270 obs. of 7 variables: $ PRODID : int 20001 20001 20001 100001 100001 100001 100001 100001 100001 100001 ... $ PROD : Factor w/ 1239 levels "1% 20qt Box",..: 335 335 335 128 128 128 128 128 128 128 ... $ Year : int 2010 2010 2010 2009 2009 2009 2009 2009 2009 2010 ... $ Week : int 12 18 19 14 15 16 17 18 19 9 ... $ QTY : num 1 1 0 135 300 270 300 270 315 315 ... $ SALES : num 15.5 0 -13.9 243 540 ... $ INVOICES: num 1 1 2 5 11 11 10 11 11 12 ... Here are the top few rows: head(data, n=10) PRODID PROD Year Week QTY SALES INVOICES 1 20001 Dolie 12" 2010 12 1 15.46 1 2 20001 Dolie 12" 2010 18 1 0.00 1 3 20001 Dolie 12" 2010 19 0 -13.88 2 4 100001 Cage Free Eggs 2009 14 135 243.00 5 5 100001 Cage Free Eggs 2009 15 300 540.00 11 6 100001 Cage Free Eggs 2009 16 270 486.00 11 7 100001 Cage Free Eggs 2009 17 300 540.00 10 8 100001 Cage Free Eggs 2009 18 270 486.00 11 9 100001 Cage Free Eggs 2009 19 315 567.00 11 10 100001 Cage Free Eggs 2010 9 315 569.25 12 I simply want to generate lags for QTY, SALES, INVOICE for each product but I am not sure where to start. I know R is great with Time Series, but I am not sure where to start. I have two questions: 1- I have the raw invoice data but have aggregated it for reporting purposes. Would it be easier if I didn't aggregate the data? 2- Regardless of aggregation or not, what functions will I need to loop over each product and generate the lags as I need them? In short, I want to loop over a set of records, calculate lags for a product (if possible), append the lags (as they apply) to the current record for each product, and write the results back to a table in my database for my reporting software to use. Any help you can provide will be greatly appreciated! Many thanks in advance, Brock
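
    A base-R sketch of generating one-week lags per product, using the column names from the structure shown above and no extra packages: sort by product and time, then shift each series down one position within each PRODID group.

        data <- data[order(data$PRODID, data$Year, data$Week), ]
        lag1 <- function(x) c(NA, head(x, -1))
        data$QTY_lag1      <- ave(data$QTY,      data$PRODID, FUN = lag1)
        data$SALES_lag1    <- ave(data$SALES,    data$PRODID, FUN = lag1)
        data$INVOICES_lag1 <- ave(data$INVOICES, data$PRODID, FUN = lag1)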

    Read the article

  • MySQL won't use index for query?

    - by Jack Sleight
    I have this table: CREATE TABLE `point` ( `id` INT(11) NOT NULL AUTO_INCREMENT, `siteid` INT(11) NOT NULL, `lft` INT(11) DEFAULT NULL, `rgt` INT(11) DEFAULT NULL, `level` SMALLINT(6) DEFAULT NULL, PRIMARY KEY (`id`), KEY `point_siteid_site_id` (`siteid`), CONSTRAINT `point_siteid_site_id` FOREIGN KEY (`siteid`) REFERENCES `site` (`id`) ON DELETE CASCADE ) ENGINE=INNODB AUTO_INCREMENT=35 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci And this query: SELECT * FROM `point` WHERE siteid = 1; Which results in this EXPLAIN information: +----+-------------+-------+------+----------------------+------+---------+------+------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+------+----------------------+------+---------+------+------+-------------+ | 1 | SIMPLE | point | ALL | point_siteid_site_id | NULL | NULL | NULL | 6 | Using where | +----+-------------+-------+------+----------------------+------+---------+------+------+-------------+ Question is, why isn't the query using the point_siteid_site_id index?
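
    Most likely nothing is wrong: with only 6 rows in the table the optimizer estimates a full scan is cheaper than an index lookup, so skipping the index is expected, and the plan will change once the table grows. To confirm the index itself is usable, an index hint can force it (a sketch):

        SELECT * FROM `point` FORCE INDEX (point_siteid_site_id) WHERE siteid = 1;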

    Read the article

  • ASA5505 Novice. Setting up Outside/Inside/and DMZ as Guest Network

    - by GriffJ
    I need a little help in developing a config for our ASA5505. I'm an MCSA/MCITPAS but I don't have a lot of practical cisco experience. Here is what I need help with, we currently have a PIX as our boarder gateway and well it's antiquated and it only has a 50 user license which means I'm constantly clearing local-host throughout the day as people complain. I discovered that the last IT person bought at couple ASA5505s and they've been sitting in the back of a cupboard. So far I've duplicated the configuration from the pix to the asa but as I was going to be going this far I thought I'd go further and remove another old cisco router that was used only for the guest network, I know the asa can do both jobs. So I'm going to paste a scenario I wrote up with the actual IPs changed to protect the innocent. ... Outside Network: 1.2.3.10 255.255.255.248 (we have a /29) Inside Network: 10.10.36.0 255.255.252.0 DMZ Network: 192.168.15.0 255.255.255.0 Outside Network on e0/0 DMZ Network on e0/1 Inside Network on e0/2-7 DMZ Network has DHCPD Enabled. DMZ DHCPD Pool is 192.168.15.50-192.168.15.250 DMZ Network needs to be able to see DNS on Inside Network at 10.10.37.11 and 10.10.37.12 DMZ Network needs to be able to access webmail on inside network at 10.10.37.15 DMZ Network needs to be able to access business website on inside network at 10.10.37.17 DMZ Network needs to be able to access the outside network (access to the internet). Inside Network has NO DHCPD. (dhcp is handled by domain controller) Inside Network needs to be able to see anything on the DMZ network. Inside Network needs to be able to access the outside network (access to the internet). There is some access-list stuff already, some static mapping already. Maps external IPs from our ISP to our inside server IPs static (inside,outside) 1.2.3.11 10.10.37.15 netmask 255.255.255.255 static (inside,outside) 1.2.3.12 10.10.37.17 netmask 255.255.255.255 static (inside,outside) 1.2.3.13 10.10.37.20 netmask 255.255.255.255 Allows access to our Webserver/Mailserver/VPN from the Outside. access-list 108 permit tcp any host 1.2.3.11 eq https access-list 108 permit tcp any host 1.2.3.11 eq smtp access-list 108 permit tcp any host 1.2.3.11 eq 993 access-list 108 permit tcp any host 1.2.3.11 eq 465 access-list 108 permit tcp any host 1.2.3.12 eq www access-list 108 permit tcp any host 1.2.3.12 eq https access-list 108 permit tcp any host 1.2.3.13 eq pptp Here is all the NAT and route stuff I have so far. global (outside) 1 interface global (outside) 2 1.2.3.11-1.2.3.14 netmask 255.255.255.248 nat (inside) 1 0.0.0.0 0.0.0.0 nat (dmz) 1 0.0.0.0 0.0.0.0 route outside 0.0.0.0 0.0.0.0 1.2.3.9 1
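
    A hedged configuration sketch for the guest DMZ on ASA 8.x software (VLAN numbers, interface names and security levels are illustrative, and the 5505 Base license restricts how a third VLAN may communicate): give the DMZ a lower security level than inside, serve DHCP on it, and apply an inbound ACL that permits only the required inside services plus general Internet access.

        interface Vlan3
         nameif dmz
         security-level 50
         ip address 192.168.15.1 255.255.255.0
        !
        dhcpd address 192.168.15.50-192.168.15.250 dmz
        dhcpd dns 10.10.37.11 10.10.37.12 interface dmz
        dhcpd enable dmz
        !
        access-list dmz_in extended permit udp 192.168.15.0 255.255.255.0 host 10.10.37.11 eq domain
        access-list dmz_in extended permit udp 192.168.15.0 255.255.255.0 host 10.10.37.12 eq domain
        access-list dmz_in extended permit tcp 192.168.15.0 255.255.255.0 host 10.10.37.15 eq https
        access-list dmz_in extended permit tcp 192.168.15.0 255.255.255.0 host 10.10.37.17 eq www
        access-list dmz_in extended deny ip 192.168.15.0 255.255.255.0 10.10.36.0 255.255.252.0
        access-list dmz_in extended permit ip 192.168.15.0 255.255.255.0 any
        access-group dmz_in in interface dmz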

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
Then we invoke these functions one by one using async.js, and once all of them have executed successfully we make another ajax call to the backend service to commit all these chunks (blocks) as one blob in Windows Azure Storage.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk)
        // with the index, file name and index list for future use
        ... ...
        // define the function array and push all chunk upload operations into this array
        ... ...
        // invoke the functions one by one
        // then invoke the commit ajax call to put blocks into blob in azure storage
        async.series(putBlocks, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});

That's all on the client side. The outline of our logic is:
- Calculate the start and end byte index of each chunk based on the block size.
- Define the functions that read each chunk from the file and upload its content to the backend service through ajax.
- Execute the functions defined in the previous step with "async.js".
- Finally, commit the chunks by invoking the backend service, which assembles the blob in Windows Azure Storage.
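Before moving to the server side, one small refinement worth sketching (not part of the original sample): since every upload function reports back through its callback, we can surface progress instead of waiting for the final "done!" alert. The #progress element below is hypothetical, and the file, blocks and putBlocks variables are the same as in the samples above.

// sketch: reuse the chunk upload loop, but update a (hypothetical) #progress element
// as each chunk completes; file, blocks and putBlocks come from the samples above
var completed = 0;
blocks.forEach(function (block) {
    putBlocks.push(function (callback) {
        var blob = file.slice(block.start, block.end);
        var fd = new FormData();
        fd.append("name", block.name);
        fd.append("index", block.index);
        fd.append("file", blob);
        $.ajax({
            url: "/Home/UploadInFormData",
            data: fd,
            processData: false,
            contentType: "multipart/form-data",
            type: "POST",
            success: function (result) {
                completed++;
                $("#progress").text(Math.round(completed * 100 / blocks.length) + "%");
                // pass any service-side error to async.js, otherwise report success
                callback(result.success ? null : result.error, block.index);
            }
        });
    });
});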
Save Chunks as Blocks into Blob Storage

Above we finished the client-side JavaScript code; it uploads the file in chunks to the backend service, which we are going to implement in this step. We will use ASP.NET MVC as our backend service: it receives the chunks, uploads them into Windows Azure Blob Storage as blocks, and finally commits them as one blob. Since the client side uploads chunks by making ajax calls to the URL "/Home/UploadInFormData", I created a new action under the Home controller which only accepts HTTP POST requests.

[HttpPost]
public JsonResult UploadInFormData()
{
    var error = string.Empty;
    try
    {
    }
    catch (Exception e)
    {
        error = e.ToString();
    }

    return new JsonResult()
    {
        Data = new
        {
            success = string.IsNullOrWhiteSpace(error),
            error = error
        }
    };
}

Then I retrieved the file name, index and chunk content from the Request.Form object, which were passed from the client side. Next I used the Windows Azure SDK to create a blob container (in this case we use the container named "test") and a blob reference with the blob name (the same as the file name), and uploaded the chunk as a block of this blob with the index. In Blob Storage each block must have an ID associated with it, so that finally we can put all blocks together as one blob by specifying the block ID list.

[HttpPost]
public JsonResult UploadInFormData()
{
    var error = string.Empty;
    try
    {
        var name = Request.Form["name"];
        var index = int.Parse(Request.Form["index"]);
        var file = Request.Files[0];
        var id = Convert.ToBase64String(BitConverter.GetBytes(index));

        var container = _client.GetContainerReference("test");
        container.CreateIfNotExists();
        var blob = container.GetBlockBlobReference(name);
        blob.PutBlock(id, file.InputStream, null);
    }
    catch (Exception e)
    {
        error = e.ToString();
    }

    return new JsonResult()
    {
        Data = new
        {
            success = string.IsNullOrWhiteSpace(error),
            error = error
        }
    };
}

Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieved the blob name from the Request.Form, as well as the chunk ID list (the block ID list) as a comma-separated string, split it into a list, and then invoked the CloudBlockBlob.PutBlockList method. After that our blob appears in the container, ready to be downloaded.

[HttpPost]
public JsonResult Commit()
{
    var error = string.Empty;
    try
    {
        var name = Request.Form["name"];
        var list = Request.Form["list"];
        var ids = list
            .Split(',')
            .Where(id => !string.IsNullOrWhiteSpace(id))
            .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
            .ToArray();

        var container = _client.GetContainerReference("test");
        container.CreateIfNotExists();
        var blob = container.GetBlockBlobReference(name);
        blob.PutBlockList(ids);
    }
    catch (Exception e)
    {
        error = e.ToString();
    }

    return new JsonResult()
    {
        Data = new
        {
            success = string.IsNullOrWhiteSpace(error),
            error = error
        }
    };
}

Now we have finished all the code we need, and the whole uploading process works as outlined above. Below is the full client-side JavaScript code.
<script type="text/javascript" src="~/Scripts/async.js"></script>
<script type="text/javascript">
    $(function () {
        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            if (window.File && window.Blob && window.FormData) {
                alert("Your browser is awesome, let's rock!");
            }
            else {
                alert("Oh man, please update to a modern browser before trying this cool stuff out.");
                return;
            }

            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;

                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                var blockSizeInKB = $("#block_size").val();
                var blockSize = blockSizeInKB * 1024;
                var blocks = [];
                var offset = 0;
                var index = 0;
                var list = "";
                while (offset < fileSize) {
                    var start = offset;
                    var end = Math.min(offset + blockSize, fileSize);

                    blocks.push({
                        name: fileName,
                        index: index,
                        start: start,
                        end: end
                    });
                    list += index + ",";

                    offset = end;
                    index++;
                }

                // define the function array and push all chunk upload operations into this array
                var putBlocks = [];
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                        // load the blob based on the start and end index of each chunk
                        var blob = file.slice(block.start, block.end);
                        // put the file name, index and blob into a temporary form
                        var fd = new FormData();
                        fd.append("name", block.name);
                        fd.append("index", block.index);
                        fd.append("file", blob);
                        // post the form to the backend service (asp.net mvc controller action)
                        $.ajax({
                            url: "/Home/UploadInFormData",
                            data: fd,
                            processData: false,
                            contentType: "multipart/form-data",
                            type: "POST",
                            success: function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                callback(null, block.index);
                            }
                        });
                    });
                });

                // invoke the functions one by one
                // then invoke the commit ajax call to put blocks into blob in azure storage
                async.series(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });
    });
</script>

And below is the full ASP.NET MVC controller code.
public class HomeController : Controller
{
    private CloudStorageAccount _account;
    private CloudBlobClient _client;

    public HomeController()
        : base()
    {
        _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
        _client = _account.CreateCloudBlobClient();
    }

    public ActionResult Index()
    {
        ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

        return View();
    }

    [HttpPost]
    public JsonResult UploadInFormData()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var index = int.Parse(Request.Form["index"]);
            var file = Request.Files[0];
            var id = Convert.ToBase64String(BitConverter.GetBytes(index));

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlock(id, file.InputStream, null);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }

    [HttpPost]
    public JsonResult Commit()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var list = Request.Form["list"];
            var ids = list
                .Split(',')
                .Where(id => !string.IsNullOrWhiteSpace(id))
                .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                .ToArray();

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlockList(ids);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }
}

If we select a file in the browser, we can see the application upload chunks of the specified size to the server through background ajax calls and then commit them as one blob, which we can then find in our Windows Azure Blob Storage.

Optimized by Parallel Upload

In the previous example we uploaded the file in chunks in sequence. This solves both the ASP.NET MVC request content size limitation and the Windows Azure load balancer timeout, but it may hurt performance since the chunks are uploaded one after another. To improve upload performance we can modify the client-side code a bit so that the upload operations are invoked in parallel. The good news is that the "async.js" library also provides a parallel execution function. If you remember, the code that invokes the service to upload chunks used "async.series", which executes all functions in sequence. Now we change it to "async.parallel", which invokes all functions in parallel.
$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk)
        // with the index, file name and index list for future use
        ... ...
        // define the function array and push all chunk upload operations into this array
        ... ...
        // invoke the functions in parallel
        // then invoke the commit ajax call to put blocks into blob in azure storage
        async.parallel(putBlocks, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});

In this way all chunks are uploaded to the server side at the same time, maximizing bandwidth usage. This works well if the file is not very large and the chunk size is not very small, but for a large file it introduces another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk)
        // with the index, file name and index list for future use
        ... ...
        // define the function array and push all chunk upload operations into this array
        ... ...
        // invoke the functions in parallel with a concurrency limit of 4
        // then invoke the commit ajax call to put blocks into blob in azure storage
        async.parallelLimit(putBlocks, 4, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});

Summary

In this post we discussed how to upload files in chunks to a backend service which then uploads them into Windows Azure Blob Storage as blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:
- File.slice: read part of the file by specifying the start and end byte index.
- Blob: a file-like interface which contains part of the file content.
- FormData: a temporary form element through which we can pass a chunk along with some metadata to the backend service.

Then we discussed the performance aspects of chunked uploading. Sequential upload cannot maximize upload speed, while unlimited parallel upload might overwhelm the browser and the server if there are too many chunks. So we finally arrived at the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to use the "async.js" JavaScript library to control the asynchronous calls and the parallel limit.

Regarding the chunk size and the parallel limit value there is no single "best" value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine cores and the server side (Windows Azure Cloud Service virtual machine) cores, memory and bandwidth.
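As a rough way to compare those combinations (a small sketch, not part of the original sample; it assumes the same putBlocks, fileName and list variables as above), one can record the elapsed time around a whole upload run:

var startedAt = Date.now();
async.parallelLimit(putBlocks, 4, function (error, result) {
    $.post("/Home/Commit", { name: fileName, list: list }, function (result) {
        // log how long the upload plus commit took for this configuration
        var seconds = (Date.now() - startedAt) / 1000;
        console.log("uploaded " + fileName + " in " + seconds + "s");
    });
});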
Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1 core CPU, 1.75GB memory, 100Mbps bandwidth). The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads).
- Chunk format: base64 string, binary.
- Target file: 100MB.
- Each case was tested 3 times.

Some thoughts from the results, but not guidance or best practice:
- Parallel gives better performance than sequential upload.
- There is no significant performance difference between parallel with 4 threads and with 8 threads.
- Transferring the chunks as binary gives better performance than base64.
- In all cases, a chunk size of 1MB - 2MB gives the best performance.
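For reference, the base64 chunk format used in that comparison is not shown in the post; a rough sketch of how it might look, assuming a FileReader-based read of each chunk and a hypothetical /Home/UploadInBase64 action that accepts the string, would be:

putBlocks.push(function (callback) {
    var blob = file.slice(block.start, block.end);
    var reader = new FileReader();
    reader.onload = function (e) {
        // e.target.result is a data URL ("data:...;base64,XXXX"); keep the part after the comma
        var base64 = e.target.result.substring(e.target.result.indexOf(",") + 1);
        // "/Home/UploadInBase64" is a hypothetical action name for this sketch
        $.post("/Home/UploadInBase64",
            { name: block.name, index: block.index, file: base64 },
            function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                callback(null, block.index);
            });
    };
    reader.readAsDataURL(blob);
});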
Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • Cannot get libcurl-devl on OpenSUSE 11.3

    - by Dai
    I have a server running OpenSUSE 11.3 that I can't really upgrade to a newer version of OpenSUSE (it's a managed appliance). I have some PHP shell scripts that need to run on the server that have a dependency on both cURL and OpenSSL.

    I discovered that the PHP 5.3.3 binaries on the server did not include OpenSSL but did include cURL. I downloaded the latest PHP sources, extracted them, and ran

    ./configure --with-openssl --with-zlib --with-bcmath --with-curl --with-readline --with-libxml --enable-sockets

    This failed: the configure script complained that it couldn't find cURL:

    checking for cURL support... yes
    checking for cURL in default path... not found
    configure: error: Please reinstall the libcurl distribution - easy.h should be in /include/curl/

    I tried to install libcurl by running

    zypper install libcurl-devl

    This failed too:

    doom:~/phpworksite/php-5.5.15 # zypper install libcurl-devl
    Loading repository data...
    Warning: Repository 'Updates for openSUSE 11.3 11.3-1.82' appears to outdated. Consider using a different mirror or server.
    Warning: Repository 'openSUSE_11.3_Updates' appears to outdated. Consider using a different mirror or server.
    Reading installed packages...
    'libcurl-devl' not found in package names. Trying capabilities.
    No provider of 'libcurl-devl' found.
    Resolving package dependencies...
    Nothing to do.

    However, libcurl-devl is listed when I run zypper search curl.

    doom:~/phpworksite/php-5.5.15 # zypper search curl
    Loading repository data...
    Warning: Repository 'Updates for openSUSE 11.3 11.3-1.82' appears to outdated. Consider using a different mirror or server.
    Warning: Repository 'openSUSE_11.3_Updates' appears to outdated. Consider using a different mirror or server.
    Reading installed packages...
    S | Name                        | Summary                                                   | Type
    --+-----------------------------+-----------------------------------------------------------+--------
    i | curl                        | A Tool for Transferring Data from URLs                    | package
      | curlftpfs                   | Filesystem for mounting FTP hosts using FUSE and libcurl  | package
      | libcurl-devel               | A Tool for Transferring Data from URLs                    | package
    i | libcurl4                    | cURL shared library version 4                             | package
    i | perl-WWW-Curl               | Perl extension interface for libcurl                      | package
    i | php5-curl                   | PHP5 Extension Module                                     | package
      | python-curl                 | Python module interface to the cURL library               | package
      | python-curl-doc             | Documentation for python-curl                             | package
      | xmms2-plugin-curl           | Curl Support for xmms2                                    | package
      | xmms2-plugin-curl-debuginfo | Debug information for package xmms2-plugin-curl           | package

    Here are the current repositories.

    doom:~/phpworksite/php-5.5.15 # zypper repos
    #  | Alias                                        | Name                                         | Enabled | Refresh
    ---+----------------------------------------------+----------------------------------------------+---------+--------
    1  | PHP_extensions_(openSUSE_11.3)               | PHP_extensions_(openSUSE_11.3)               | No      | Yes
    2  | Packman_11.3                                 | Packman_11.3                                 | Yes     | Yes
    3  | Updates for openSUSE 11.3 11.3-1.82          | Updates for openSUSE 11.3 11.3-1.82          | Yes     | Yes
    4  | openSUSE_11.3_OSS                            | openSUSE_11.3_OSS                            | Yes     | Yes
    5  | openSUSE_11.3_Updates                        | openSUSE_11.3_Updates                        | Yes     | Yes
    6  | openSUSE_BuildService_-_devel:languages:perl | openSUSE_BuildService_-_devel:languages:perl | No      | Yes
    7  | repo-debug                                   | openSUSE-11.3-Debug                          | No      | Yes
    8  | repo-non-oss                                 | openSUSE-11.3-Non-Oss                        | Yes     | Yes
    9  | repo-oss                                     | openSUSE-11.3-Oss                            | Yes     | Yes
    10 | repo-source                                  | openSUSE-11.3-Source                         | No      | Yes

    BTW, I did try building PHP without cURL, however it broke a lot of things, so apparently I really need cURL.
My question: how can I install libcurl-devl (or just install cURL) so that I can build PHP?


  • /phpTest/zologize/axa.php? Another botnet?

    - by M132
    Staring at the log made me wonder: what is /phpTest/zologize/axa.php, and why are bots looking for it? Previously I had lots of /HNAP1/ requests; requesting /HNAP1/ from the IPs in the log revealed that all of them were sent by Linksys routers, and 3 months later these requests turned out to be generated by a router worm called TheMoon. But requesting /phpTest/zologize/axa.php from these servers returns a 404 error. How did these servers get infected, and how can I protect mine from this?

    124.11.224.69 - - [02/Feb/2014:00:37:16 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
    140.113.238.121 - - [21/Feb/2014:01:24:32 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
    77.121.132.79 - - [22/Feb/2014:00:03:56 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
    142.4.201.210 - - [24/Feb/2014:21:54:33 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
    212.83.168.39 - - [24/Feb/2014:23:16:00 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
    87.117.229.210 - - [26/Feb/2014:06:34:58 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    78.100.82.99 - - [26/Feb/2014:08:25:48 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    198.50.205.219 - - [26/Feb/2014:09:59:11 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    210.60.142.107 - - [27/Feb/2014:00:12:12 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    101.109.4.73 - - [27/Feb/2014:08:50:46 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    61.91.128.158 - - [27/Feb/2014:08:59:15 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    201.188.41.175 - - [27/Feb/2014:11:25:42 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    220.133.137.2 - - [27/Feb/2014:12:12:46 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    203.156.104.88 - - [28/Feb/2014:18:11:49 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    61.19.52.58 - - [28/Feb/2014:22:02:56 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    84.2.92.40 - - [28/Feb/2014:23:04:17 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    58.64.205.11 - - [01/Mar/2014:06:08:33 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
    113.61.200.151 - - [01/Mar/2014:18:25:25 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
    178.33.219.12 - - [03/Mar/2014:14:41:48 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
    74.63.220.132 - - [04/Mar/2014:01:16:44 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
    187.141.230.106 - - [04/Mar/2014:15:39:26 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
    103.22.181.146 - - [09/May/2014:17:16:56 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 502 166 "-" "-"
    176.31.200.14 - - [10/May/2014:19:52:24 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 68 "-" "-"
    124.120.92.70 - - [12/May/2014:16:19:40 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 68 "-" "-"
    219.85.198.142 - - [15/May/2014:19:21:22 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    80.84.53.226 - - [23/May/2014:08:58:25 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    87.213.11.165 - - [25/May/2014:06:20:27 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    122.116.220.106 - - [25/May/2014:07:10:21 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    58.8.128.30 - - [29/May/2014:02:43:49 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    142.4.197.135 - - [29/May/2014:11:36:45 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    178.32.243.65 - - [30/May/2014:01:59:53 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    58.8.164.221 - - [30/May/2014:14:04:16 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    140.127.182.15 - - [01/Jun/2014:14:45:40 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    218.166.43.21 - - [01/Jun/2014:16:07:52 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    178.32.188.140 - - [01/Jun/2014:19:11:46 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    94.23.211.173 - - [05/Jun/2014:00:52:52 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    120.117.105.201 - - [05/Jun/2014:04:39:39 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    187.172.27.146 - - [05/Jun/2014:10:20:22 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    203.195.219.91 - - [05/Jun/2014:10:53:42 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"

