Search Results

Search found 4304 results on 173 pages for 'bytes'.


  • Why does the MaxReceiveMessageSize in WCF matter in case of Streaming

    The default value of MaxReceiveMessageSize in WCF is 65,536. When you choose streaming as the TransferMode, the WCF runtime creates an 8192-byte buffer. The WCF channel stack then reads the first 8192 bytes and decodes the first couple of bytes as the size of the entire envelope. It then does a size check and sends back a fault if the actual size exceeds the limit. According to the MSDN documentation, MaxReceiveMessageSize is something that prevents a DoS attack,...
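
    For reference, here is a minimal C# sketch of the binding settings this excerpt is describing (on the binding object the property is spelled MaxReceivedMessageSize; the 64 MB value below is just an example limit):

        using System;
        using System.ServiceModel;

        class StreamingBindingSketch
        {
            static void Main()
            {
                // Even with streamed transfer, the envelope size check described above
                // is still governed by this limit, so raise it for large payloads.
                var binding = new BasicHttpBinding
                {
                    TransferMode = TransferMode.Streamed,
                    MaxReceivedMessageSize = 64L * 1024 * 1024 // the default is 65,536 bytes
                };

                Console.WriteLine($"{binding.TransferMode}, {binding.MaxReceivedMessageSize}");
            }
        }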

    Read the article

  • How can I switch memory modules to 1600 MHz?

    - by Salvador
    Some months ago I bought 4 memory modules of 4 GB DDR3-1600 Kingston HyperX. The official Kingston manual says: "This module has been tested to run at DDR3-1600 at a low latency timing of 9-9-9-27 at 1.65V. The SPD is programmed to JEDEC standard latency DDR3-1333 timing of 9-9-9." I cannot find out the real speed of my memory modules; several tools normally report that the real speed is 1333 MHz: srs@ubuntu:~$ sudo dmidecode -t memory # dmidecode 2.9 SMBIOS 2.6 present. Handle 0x005D, DMI type 16, 15 bytes Physical Memory Array Location: System Board Or Motherboard Use: System Memory Error Correction Type: None Maximum Capacity: 32 GB Error Information Handle: 0x005F Number Of Devices: 4 Handle 0x005C, DMI type 17, 28 bytes Memory Device Array Handle: 0x005D Error Information Handle: 0x0060 Total Width: 64 bits Data Width: 64 bits Size: 4096 MB Form Factor: DIMM Set: None Locator: ChannelA-DIMM0 Bank Locator: BANK 0 Type: <OUT OF SPEC> Type Detail: Synchronous Speed: 1333 MHz (0.8 ns) Manufacturer: Kingston Serial Number: 07288F23 Asset Tag: 9876543210 Part Number: 9905403-439.A00LF How can I switch the memory modules to 1600 MHz?

    Read the article

  • CUPS: HP printer DNS url

    - by wintersolutions
    The URL for my printer generated by hp-makeuri looks like this: hp:/net/Officejet_6500_E710n-z?ip=192.168.178.30 But the printer is on a DHCP-enabled WiFi network, so its IP address can and does change. On the other hand, my WiFi router seems smart enough to provide some sort of DNS: $ ping hp-6500a PING hp-6500a.fritz.box (192.168.178.30) 56(84) bytes of data. 64 bytes from hp-6500a.fritz.box (192.168.178.30): icmp_req=1 ttl=255 time=11.3 ms I tried to use the hostname in the CUPS URL/DeviceUID but it failed. Any suggestions as to whether this is possible and what the correct format is?

    Read the article

  • calling calloc - memory leak valgrind

    - by Mike
    The following code is an example from the NCURSES menu library. I'm not sure what could be wrong with the code, but valgrind reports some problems. Any ideas... ==4803== 1,049 (72 direct, 977 indirect) bytes in 1 blocks are definitely lost in loss record 25 of 36 ==4803== at 0x4C24477: calloc (vg_replace_malloc.c:418) ==4803== by 0x400E93: main (in /home/gerardoj/a.out) ==4803== ==4803== LEAK SUMMARY: ==4803== definitely lost: 72 bytes in 1 blocks ==4803== indirectly lost: 977 bytes in 10 blocks ==4803== possibly lost: 0 bytes in 0 blocks ==4803== still reachable: 64,942 bytes in 262 blocks Source code: #include <curses.h> #include <menu.h> #define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0])) #define CTRLD 4 char *choices[] = { "Choice 1", "Choice 2", "Choice 3", "Choice 4", "Choice 5", "Choice 6", "Choice 7", "Exit", } ; int main() { ITEM **my_items; int c; MENU *my_menu; int n_choices, i; ITEM *cur_item; /* Initialize curses */ initscr(); cbreak(); noecho(); keypad(stdscr, TRUE); /* Initialize items */ n_choices = ARRAY_SIZE(choices); my_items = (ITEM **)calloc(n_choices + 1, sizeof(ITEM *)); for (i = 0; i < n_choices; ++i) { my_items[i] = new_item(choices[i], choices[i]); } my_items[n_choices] = (ITEM *)NULL; my_menu = new_menu((ITEM **)my_items); /* Make the menu multi valued */ menu_opts_off(my_menu, O_ONEVALUE); mvprintw(LINES - 3, 0, "Use <SPACE> to select or unselect an item."); mvprintw(LINES - 2, 0, "<ENTER> to see presently selected items(F1 to Exit)"); post_menu(my_menu); refresh(); while ((c = getch()) != KEY_F(1)) { switch (c) { case KEY_DOWN: menu_driver(my_menu, REQ_DOWN_ITEM); break; case KEY_UP: menu_driver(my_menu, REQ_UP_ITEM); break; case ' ': menu_driver(my_menu, REQ_TOGGLE_ITEM); break; case 10: { char temp[200]; ITEM **items; items = menu_items(my_menu); temp[0] = '\0'; for (i = 0; i < item_count(my_menu); ++i) if(item_value(items[i]) == TRUE) { strcat(temp, item_name(items[i])); strcat(temp, " "); } move(20, 0); clrtoeol(); mvprintw(20, 0, temp); refresh(); } break; } } free_item(my_items[0]); free_item(my_items[1]); free_menu(my_menu); endwin(); }

    Read the article

  • Shrink NTFS Windows 7 Partition with GParted

    - by user15961
    I am running a dual-boot system with Windows 7 and Ubuntu 10.10. Initially I allocated about 20GB for my Ubuntu partition; however, I quickly ran out of that space and am now looking to expand my partition. Currently my NTFS partition (450GB) has about 130GB of free space. I tried using GParted to shrink the partition but encountered the following error. I booted into windows so I could run chkdsk but the countdown freezes at 1 upon reboot. I tried multiple methods to resolve that issue but nothing seems to work. Finally I gave up, and now I just want to know what is the best way for me to force GParted to shrink the partition regardless of the errors. I don't really have anything important and I don't mind risking the data. I just don't want to wipe the entire NTFS partition because I don't have a Windows install CD and might require Windows later on for some programs. I tried using sudo ntfsresize but that spews out the same error as GParted... Any ideas? Check and repair file system (ntfs) on /dev/sda2 00:00:09 ( ERROR ) calibrate /dev/sda2 00:00:00 ( SUCCESS ) path: /dev/sda2 start: 36944325 end: 976771119 size: 939826795 (448.14 GiB) check file system on /dev/sda2 for errors and (if possible) fix them 00:00:09 ( ERROR ) ntfsresize -P -i -f -v /dev/sda2 ntfsresize v2.0.0 (libntfs 10:0:0) Device name : /dev/sda2 NTFS volume version: 3.1 Cluster size : 4096 bytes Current volume size: 481191318016 bytes (481192 MB) Current device size: 481191319040 bytes (481192 MB) Checking for bad sectors ... Checking filesystem consistency ... Cluster 63468 is referenced multiple times! Cluster 63469 is referenced multiple times! Cluster 63465 is referenced multiple times! Cluster 63466 is referenced multiple times! Cluster 63467 is referenced multiple times! Cluster 165621 is referenced multiple times! Cluster 165622 is referenced multiple times! Cluster 165623 is referenced multiple times! Cluster 165624 is referenced multiple times! ERROR: Filesystem check failed! ERROR: 9 clusters are referenced multiply times. NTFS is inconsistent. Run chkdsk /f on Windows then reboot it TWICE! The usage of the /f parameter is very IMPORTANT! No modification was and will be made to NTFS by this software until it gets repaired.

    Read the article

  • Atheros 922 PCI WIFI is disabled in Unity but enabled in terminal - How to get it to work?

    - by zewone
    I am trying to get my PCI Wireless Atheros 922 card to work. It is disabled in Unity: both the network utility and the desktop (see screenshot http://www.amisdurailhalanzy.be/Screenshot%20from%202012-10-25%2013:19:54.png) I tried many different advises on many different forums. Installed 12.10 instead of 12.04, enabled all interfaces... etc. I have read about the aht9 driver... The terminal shows no hw or sw lock for the Atheros card, nevertheless, it is still disabled. Nothing worked so far, the card is still disabled. Any help is much appreciated. Here are more tech details: myuser@adri1:~$ sudo lshw -C network *-network:0 DISABLED description: Wireless interface product: AR922X Wireless Network Adapter vendor: Atheros Communications Inc. physical id: 2 bus info: pci@0000:03:02.0 logical name: wlan1 version: 01 serial: 00:18:e7:cd:68:b1 width: 32 bits clock: 66MHz capabilities: pm bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.5.0-17-generic firmware=N/A latency=168 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:18 memory:d8000000-d800ffff *-network:1 description: Ethernet interface product: VT6105/VT6106S [Rhine-III] vendor: VIA Technologies, Inc. physical id: 6 bus info: pci@0000:03:06.0 logical name: eth0 version: 8b serial: 00:11:09:a3:76:4a size: 10Mbit/s capacity: 100Mbit/s width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=via-rhine driverversion=1.5.0 duplex=half latency=32 link=no maxlatency=8 mingnt=3 multicast=yes port=MII speed=10Mbit/s resources: irq:18 ioport:d300(size=256) memory:d8013000-d80130ff *-network DISABLED description: Wireless interface physical id: 1 bus info: usb@1:8.1 logical name: wlan0 serial: 00:11:09:51:75:36 capabilities: ethernet physical wireless configuration: broadcast=yes driver=rt2500usb driverversion=3.5.0-17-generic firmware=N/A link=no multicast=yes wireless=IEEE 802.11bg myuser@adri1:~$ sudo rfkill list all 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: phy1: Wireless LAN Soft blocked: no Hard blocked: yes 2: phy0: Wireless LAN Soft blocked: no Hard blocked: no myuser@adri1:~$ dmesg | grep wlan0 [ 15.114235] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready myuser@adri1:~$ dmesg | egrep 'ath|firm' [ 14.617562] ath: EEPROM regdomain: 0x30 [ 14.617568] ath: EEPROM indicates we should expect a direct regpair map [ 14.617572] ath: Country alpha2 being used: AM [ 14.617575] ath: Regpair used: 0x30 [ 14.637778] ieee80211 phy0: >Selected rate control algorithm 'ath9k_rate_control' [ 14.639410] Registered led device: ath9k-phy0 myuser@adri1:~$ dmesg | grep wlan1 [ 15.119922] IPv6: ADDRCONF(NETDEV_UP): wlan1: link is not ready myuser@adri1:~$ lspci -nn | grep 'Atheros' 03:02.0 Network controller [0280]: Atheros Communications Inc. 
AR922X Wireless Network Adapter [168c:0029] (rev 01) myuser@adri1:~$ sudo ifconfig eth0 Link encap:Ethernet HWaddr 00:11:09:a3:76:4a inet addr:192.168.2.2 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::211:9ff:fea3:764a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5457 errors:0 dropped:0 overruns:0 frame:0 TX packets:2548 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3425684 (3.4 MB) TX bytes:282192 (282.1 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:590 errors:0 dropped:0 overruns:0 frame:0 TX packets:590 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:53729 (53.7 KB) TX bytes:53729 (53.7 KB) myuser@adri1:~$ sudo iwconfig wlan0 IEEE 802.11bg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=off Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:on lo no wireless extensions. eth0 no wireless extensions. wlan1 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=0 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off myuser@adri1:~$ lsmod | grep "ath9k" ath9k 116549 0 mac80211 461161 3 rt2x00usb,rt2x00lib,ath9k ath9k_common 13783 1 ath9k ath9k_hw 376155 2 ath9k,ath9k_common ath 19187 3 ath9k,ath9k_common,ath9k_hw cfg80211 175375 4 rt2x00lib,ath9k,mac80211,ath myuser@adri1:~$ iwlist scan wlan0 Failed to read scan data : Network is down lo Interface doesn't support scanning. eth0 Interface doesn't support scanning. wlan1 Failed to read scan data : Network is down myuser@adri1:~$ lsb_release -d Description: Ubuntu 12.10 myuser@adri1:~$ uname -mr 3.5.0-17-generic i686 ![Schizophrenic Ubuntu](http://www.amisdurailhalanzy.be/Screenshot%20from%202012-10-25%2013:19:54.png) Any help much appreciated... Thanks, Philippe 31-10-2012 ... I have some more updates. When I do the following command it does see my Wifi router... So even if it is still disabled... the card seems to work and see the router (ESSID:"5791BC26-CE9C-11D1-97BF-0000F81E") See below: sudo iwlist wlan1 scanning wlan1 Scan completed : Cell 01 - Address: 00:19:70:8F:B0:EA Channel:10 Frequency:2.457 GHz (Channel 10) Quality=51/70 Signal level=-59 dBm Encryption key:on ESSID:"5791BC26-CE9C-11D1-97BF-0000F81E" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s 9 Mb/s; 12 Mb/s; 18 Mb/s Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=000000025dbf2188 Extra: Last beacon: 108ms ago IE: Unknown: 002035373931424332362D434539432D313144312D393742462D3030303046383145 IE: Unknown: 010882848B960C121824 IE: Unknown: 03010A IE: Unknown: 0706424520010D14 IE: IEEE 802.11i/WPA2 Version 1 Group Cipher : TKIP Pairwise Ciphers (2) : CCMP TKIP Authentication Suites (1) : PSK IE: Unknown: 2A0100 IE: Unknown: 32043048606C IE: Unknown: DD180050F2020101030003A4000027A4000042435E0062322F00 IE: Unknown: DD0900037F01010000FF7F IE: Unknown: DD0A00037F04010000000000

    Read the article

  • EPM 11.1.2 - Issues during configuration when using Oracle DB if not using UTF8

    - by Ahmed A
    If you see issues during configuration when using an Oracle DB that is not using UTF8, here is the workaround: a. During configuration of EPM products, a warning message is displayed if the Oracle DB is not UTF8 enabled. If you continue with the configuration, certain products will not work, as they will not be able to read the contents of the tables because the format is wrong. b. The Oracle DB must be set up to use AL32UTF8 or a superset that contains AL32UTF8. c. The only difference between the AL32UTF8 and UTF8 character sets is that AL32UTF8 stores characters beyond U+FFFF as four bytes (exactly as Unicode defines UTF-8). Oracle’s “UTF8” stores these characters as a sequence of two UTF-16 surrogate characters encoded using UTF-8 (or six bytes per character). Besides this storage difference, another difference is better support for supplementary characters in the AL32UTF8 character set.
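
    For illustration only, a small C# sketch of the four-byte versus six-byte difference described in point c, using an arbitrary supplementary character:

        using System;
        using System.Text;

        class SupplementaryCharBytes
        {
            static void Main()
            {
                // U+1D11E (musical G clef) lies beyond U+FFFF, so in a .NET string it is
                // a surrogate pair of two UTF-16 code units.
                string clef = char.ConvertFromUtf32(0x1D11E);
                Console.WriteLine(clef.Length);                      // 2 UTF-16 code units

                // Standard UTF-8 (what AL32UTF8 stores) encodes it as four bytes.
                Console.WriteLine(Encoding.UTF8.GetByteCount(clef)); // 4

                // Oracle's legacy "UTF8" instead encodes each surrogate code unit
                // separately at three bytes apiece, i.e. six bytes for the character.
                Console.WriteLine(clef.Length * 3);                  // 6
            }
        }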

    Read the article

  • What is a safe ulimit ceiling?

    - by Kaustubh P
    This is the output of ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 16382 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) unlimited virtual memory (kbytes, -v) unlimited file locks (-x) unlimited This is a 64bit install, and I would like to increase the max-open files from 1024 to a more heady limit such as 5000. Will that be any problem? Will it cause instability? Thanks.

    Read the article

  • Ubuntu 11.10 is not detecting wired connection on Lenovo G580

    - by Dilip
    I am not able to see the wired connection option in the header menu. I can see the wireless option, but it is still not working. Could anyone suggest what I am missing? Please find my ifconfig output below: =================================== root@rails-dev:/home/rails_work/rdfnet# ifconfig lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:41944 errors:0 dropped:0 overruns:0 frame:0 TX packets:41944 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:4602835 (4.6 MB) TX bytes:4602835 (4.6 MB) ===================================

    Read the article

  • Extremely slow transfer speed ubuntu -> Windows

    - by Hailwood
    I have two laptops, One is running Ubuntu 12.04 (EXT4) the other is running Windows 7 (NTFS). I am copying over 40gb of data (one file) from the Ubuntu laptop to the Windows Laptop. (Browse the shared folder on Ubuntu using Windows copy/paste) But I am getting transfer speeds topping out at ~700kb/s Surely this is not right. I am transferring via wifi on both laptops. My download speeds can reach 7-8mb/s on both laptops, so I know it is not the wifi cards or the router topping out. wlan0 Link encap:Ethernet HWaddr 84:4b:f5:db:b4:85 inet addr:192.168.1.66 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::864b:f5ff:fedb:b485/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:11941185 errors:0 dropped:0 overruns:0 frame:0 TX packets:11306693 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:10087111370 (10.0 GB) TX bytes:7843524888 (7.8 GB)

    Read the article

  • Why do the interfaces show ipv6 address along with ipv4

    - by nixnotwin
    I have manually specified only an IPv4 address for my interfaces, but all the interfaces automatically show an inet6 address as well. Does this mean that Ubuntu starts an IPv6 tunnel by default? If it does, isn't that dangerous, as IPv6 assigns public IPs to all LAN clients? I only have a firewall on my NAT router, and my clients, whose interfaces show an IPv6 address, do not have firewalls. Here is the relevant output: eth0 Link encap:Ethernet HWaddr 34:dc:47:2e:ad:13 inet6 addr: fe80::28cf:38ff:fb7b:da19/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5783 errors:0 dropped:0 overruns:0 frame:0 TX packets:6098 errors:0 dropped:0 overruns:0 carrier:1 collisions:0 txqueuelen:1000 RX bytes:2961324 (2.9 MB) TX bytes:1573757 (1.5 MB) Interrupt:46 Note: For privacy reasons I have modified the HWaddr and inet6 addr values.

    Read the article

  • Cannot decode complete cipher list in .NET SslStream handshake.

    - by karmasponge
    While attempting to move from a 'C' based SSL implementation to C# using the .NET SslStream and we have run into what look like cipher compatibility issues with the .NET SslStream and a AS400 machine we are trying to connect to (which worked previously). When we call SslStream.AuthenticateAsClient it is sending the following: 16 03 00 00 37 01 00 00 33 03 00 4d 2c 00 ee 99 4e 0c 5d 83 14 77 78 5c 0f d3 8f 8b d5 e6 b8 cd 61 0f 29 08 ab 75 03 f7 fa 7d 70 00 00 0c 00 05 00 0a 00 13 00 04 00 02 00 ff 01 00 Which decodes as (based on http://www.mozilla.org/projects/security/pki/nss/ssl/draft302.txt) [16] Record Type [03 00] SSL Version [00 37] Body length [01] SSL3_MT_CLIENT_HELLO [00 00 33] Length (51 bytes) [03 00] Version number = 768 [4d 2c 00 ee] 4 Bytes unix time [… ] 28 Bytes random number [00] Session number [00 0c] 12 bytes (2 * 6 Cyphers)? [00 05, 00 0a, 00 13, 00 04, 00 02, 00 ff] - [RC4, PBE-MD5-DES, RSA, MD5, PKCS, ???] [01 00] Null compression method The as400 server responds back with: 15 03 00 00 02 02 28 [15] SSL3_RT_ALERT [03 00] SSL Version [00 02] Body Length (2 Bytes) [02 28] 2 = SSL3_RT_FATAL, 40 = SSL3_AD_HANDSHAKE_FAILURE I'm specifically looking to decode the '00 FF' at the end of the cyphers. Have I decoded it correctly? What does, if anything, '00 FF' decode too? I am using the following code to test/reproduce: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Net.Sockets; using System.Net.Security; using System.Security.Authentication; using System.IO; using System.Diagnostics; using System.Security.Cryptography.X509Certificates; namespace TestSslStreamApp { class DebugStream : Stream { private Stream AggregatedStream { get; set; } public DebugStream(Stream stream) { AggregatedStream = stream; } public override bool CanRead { get { return AggregatedStream.CanRead; } } public override bool CanSeek { get { return AggregatedStream.CanSeek; } } public override bool CanWrite { get { return AggregatedStream.CanWrite; } } public override void Flush() { AggregatedStream.Flush(); } public override long Length { get { return AggregatedStream.Length; } } public override long Position { get { return AggregatedStream.Position; } set { AggregatedStream.Position = value; } } public override int Read(byte[] buffer, int offset, int count) { int bytesRead = AggregatedStream.Read(buffer, offset, count); return bytesRead; } public override long Seek(long offset, SeekOrigin origin) { return AggregatedStream.Seek(offset, origin); } public override void SetLength(long value) { AggregatedStream.SetLength(value); } public override void Write(byte[] buffer, int offset, int count) { AggregatedStream.Write(buffer, offset, count); } } class Program { static void Main(string[] args) { const string HostName = "as400"; TcpClient tcpClient = new TcpClient(HostName, 992); SslStream sslStream = new SslStream(new DebugStream(tcpClient.GetStream()), false, null, null, EncryptionPolicy.AllowNoEncryption); sslStream.AuthenticateAsClient(HostName, null, SslProtocols.Ssl3, false); } } }

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was then to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time. When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required. As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g: -------------------------------------------------------------------------------| Id | Operation | Name | Bytes | Cost |-------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 108K| 939 || 1 | SORT ORDER BY | | 108K| 939 || 2 | NESTED LOOPS OUTER | | 108K| 938 ||* 3 | HASH JOIN RIGHT OUTER | | 103K| 762 || 4 | VIEW | ALL_EXTERNAL_LOCATIONS | 2058 | 3 ||* 20 | HASH JOIN RIGHT OUTER | | 73472 | 759 || 21 | VIEW | ALL_EXTERNAL_TABLES | 2097 | 3 ||* 34 | HASH JOIN RIGHT OUTER | | 39920 | 755 || 35 | VIEW | ALL_MVIEWS | 51 | 7 || 58 | NESTED LOOPS OUTER | | 39104 | 748 || 59 | VIEW | ALL_TABLES | 6704 | 668 || 89 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS | 2025 | 5 || 106 | VIEW | ALL_PART_TABLES | 277 | 11 |------------------------------------------------------------------------------- And the same query on 9i: -------------------------------------------------------------------------------| Id | Operation | Name | Bytes | Cost |-------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 16P| 55G|| 1 | SORT ORDER BY | | 16P| 55G|| 2 | NESTED LOOPS OUTER | | 16P| 862M|| 3 | NESTED LOOPS OUTER | | 5251G| 992K|| 4 | NESTED LOOPS OUTER | | 4243M| 2578 || 5 | NESTED LOOPS OUTER | | 2669K| 1440 ||* 6 | HASH JOIN OUTER | | 398K| 302 || 7 | VIEW | ALL_TABLES | 342K| 276 || 29 | VIEW | ALL_MVIEWS | 51 | 20 ||* 50 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS | 2043 | ||* 66 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_TABLES | 1777K| ||* 80 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_LOCATIONS | 1744K| ||* 96 | VIEW | ALL_PART_TABLES | 852K| |------------------------------------------------------------------------------- Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan. 
On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan: -------------------------------------------------------------------------------| Id | Operation | Name | Bytes | Cost |-------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 7587K| 3704 || 1 | SORT ORDER BY | | 7587K| 3704 ||* 2 | HASH JOIN OUTER | | 7587K| 822 ||* 3 | HASH JOIN OUTER | | 5262K| 616 ||* 4 | HASH JOIN OUTER | | 2980K| 465 ||* 5 | HASH JOIN OUTER | | 710K| 432 ||* 6 | HASH JOIN OUTER | | 398K| 302 || 7 | VIEW | ALL_TABLES | 342K| 276 || 29 | VIEW | ALL_MVIEWS | 51 | 20 || 50 | VIEW | ALL_PART_TABLES | 852K| 104 || 78 | VIEW | ALL_TAB_COMMENTS | 2043 | 14 || 93 | VIEW | ALL_EXTERNAL_LOCATIONS | 1744K| 31 || 106 | VIEW | ALL_EXTERNAL_TABLES | 1777K| 28 |------------------------------------------------------------------------------- That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input. To try and get some ideas, we asked some oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure & a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed. All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner & name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need. 
This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for schemas with 10,000 tables that means an extra 1.4MB of memory being used during population). Next: fun with the 9i dictionary views.
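
    This is not Red Gate's actual implementation, but a minimal C# sketch of the client-side left hash join idea described above, using made-up row types keyed on owner and name:

        using System;
        using System.Collections.Generic;

        // Hypothetical row types standing in for rows read from ALL_TABLES and ALL_PART_TABLES.
        record TableRow(string Owner, string Name, string Tablespace);
        record PartTableRow(string Owner, string Name, string PartitioningType);

        class ClientSideHashJoin
        {
            static void Main()
            {
                var tables = new List<TableRow>
                {
                    new("HR", "EMPLOYEES", "USERS"),
                    new("HR", "JOBS", "USERS"),
                };
                var partTables = new List<PartTableRow>
                {
                    new("HR", "EMPLOYEES", "RANGE"),
                };

                // Hash the subsidiary rows once, keyed on (owner, name)...
                var partitioningByKey = new Dictionary<(string, string), string>();
                foreach (var p in partTables)
                    partitioningByKey[(p.Owner, p.Name)] = p.PartitioningType;

                // ...then stream the base rows past it; rows with no match keep a null,
                // which preserves the LEFT JOIN semantics of the original queries.
                foreach (var t in tables)
                {
                    partitioningByKey.TryGetValue((t.Owner, t.Name), out var partitioning);
                    Console.WriteLine($"{t.Owner}.{t.Name}: partitioning = {partitioning ?? "none"}");
                }
            }
        }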

    Read the article

  • Processing Kinect v2 Color Streams in Parallel

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/08/20/processing-kinect-v2-color-streams-in-parallel.aspxProcessing Kinect v2 Color Streams in Parallel I've really been enjoying being a part of the Kinect for Windows Developer's Preview. The new hardware has some really impressive capabilities. However, with great power comes great system specs. Unfortunately, my little laptop that could is not 100% up to the task; I've had to get a little creative. The most disappointing thing I've run into is that I can't always cleanly display the color camera stream in managed code. I managed to strip the code down to what I believe is the bear minimum: using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) { if( null == _ColorFrame ) return;   BitmapToDisplay.Lock(); _ColorFrame.CopyConvertedFrameDataToIntPtr( BitmapToDisplay.BackBuffer, Convert.ToUInt32( BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight ), ColorImageFormat.Bgra ); BitmapToDisplay.AddDirtyRect( new Int32Rect( 0, 0, _ColorFrame.FrameDescription.Width, _ColorFrame.FrameDescription.Height ) ); BitmapToDisplay.Unlock(); } With this snippet, I'm placing the converted Bgra32 color stream directly on the BackBuffer of the WriteableBitmap. This gives me pretty smooth playback, but I still get the occasional freeze for half a second. After a bit of profiling, I discovered there were a few problems. The first problem is the size of the buffer along with the conversion on the buffer. At this time, the raw image format of the data from the Kinect is Yuy2. This is great for direct video processing. It would be ideal if I had a WriteableVideo object in WPF. However, this is not the case. Further digging led me to the real problem. It appears that the SDK is converting the input serially. Let's think about this for a second. The color camera is a 1080p camera. As we should all know, this give us a native resolution of 1920 x 1080. This produces 2,073,600 pixels. Yuy2 uses 4 bytes per 2 pixel, for a buffer size of 4,147,200 bytes. Bgra32 uses 4 bytes per pixel, for a buffer size of 8,294,400 bytes. The SDK appears to be doing this on one thread. I started wondering if I chould do this better myself. I mean, I have 8 cores in my system. Why can't I use them all? The first problem is converting a Yuy2 frame into a Bgra32 frame. It is NOT trivial. I spent a day of research of just how to do this. In the end, I didn't even produce the best algorithm possible, but it did work. After I managed to get that to work, I knew my next step was the get the conversion operation off the UI Thread. This was a simple process of throwing the work into a Task. Of course, this meant I had to marshal the final write to the WriteableBitmap back to the UI thread. Finally, I needed to vectorize the operation so I could run it safely in parallel. This was, mercifully, not quite as hard as I thought it would be. I had my loop return an index to a pair of pixels. From there, I had to tell the loop to do everything for this pair of pixels. If you're wondering why I did it for pairs of pixels, look back above at the specification for the Yuy2 format. I won't go into full detail on why each 4 bytes contains 2 pixels of information, but rest assured that there is a reason why the format is described in that way. The first working attempt at this algorithm successfully turned my poor laptop into a space heater. I very quickly brought and maintained all 8 cores up to about 97% usage. 
That's when I remembered that obscure option in the Task Parallel Library where you could limit the amount of parallelism used. After a little trial and error, I discovered 4 parallel tasks was enough for most cases. This yielded the follow code: private byte ClipToByte( int p_ValueToClip ) { return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue : ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) ); }   private void ColorFrameArrived( object sender, ColorFrameArrivedEventArgs e ) { if( null == e.FrameReference ) return;   // If you do not dispose of the frame, you never get another one... using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) { if( null == _ColorFrame ) return;   byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel]; byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight]; _ColorFrame.CopyRawFrameDataToArray( _InputImage );   Task.Factory.StartNew( () => { ParallelOptions _ParallelOptions = new ParallelOptions(); _ParallelOptions.MaxDegreeOfParallelism = 4;   Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => { // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16; int _U = _InputImage[( _Index << 2 ) + 1] - 128; int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16; int _V = _InputImage[( _Index << 2 ) + 3] - 128;   byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 ); byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 ); byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 0] = _B; _OutputImage[( _Index << 3 ) + 1] = _G; _OutputImage[( _Index << 3 ) + 2] = _R; _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A   _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 ); _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 ); _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 4] = _B; _OutputImage[( _Index << 3 ) + 5] = _G; _OutputImage[( _Index << 3 ) + 6] = _R; _OutputImage[( _Index << 3 ) + 7] = 0xFF; } );   Application.Current.Dispatcher.Invoke( () => { BitmapToDisplay.WritePixels( new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ), _OutputImage, BitmapToDisplay.BackBufferStride, 0 ); } ); } ); } } This seemed to yield a results I wanted, but there was still the occasional stutter. This lead to what I realized was the second problem. There is a race condition between the UI Thread and me locking the WriteableBitmap so I can write the next frame. Again, I'm writing approximately 8MB to the back buffer. Then, I started thinking I could cheat. The Kinect is running at 30 frames per second. The WPF UI Thread runs at 60 frames per second. This made me not feel bad about exploiting the Composition Thread. I moved the bulk of the code from the FrameArrived handler into CompositionTarget.Rendering. Once I was in there, I polled from a frame, and rendered it if it existed. Since, in theory, I'm only killing the Composition Thread every other hit, I decided I was ok with this for cases where silky smooth video performance REALLY mattered. This ode looked like this: private byte ClipToByte( int p_ValueToClip ) { return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue : ( ( p_ValueToClip > byte.MaxValue ) ? 
byte.MaxValue : p_ValueToClip ) ); }   void CompositionTarget_Rendering( object sender, EventArgs e ) { using( ColorFrame _ColorFrame = FrameReader.AcquireLatestFrame() ) { if( null == _ColorFrame ) return;   byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel]; byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight]; _ColorFrame.CopyRawFrameDataToArray( _InputImage );   ParallelOptions _ParallelOptions = new ParallelOptions(); _ParallelOptions.MaxDegreeOfParallelism = 4;   Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => { // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16; int _U = _InputImage[( _Index << 2 ) + 1] - 128; int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16; int _V = _InputImage[( _Index << 2 ) + 3] - 128;   byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 ); byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 ); byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 0] = _B; _OutputImage[( _Index << 3 ) + 1] = _G; _OutputImage[( _Index << 3 ) + 2] = _R; _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A   _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 ); _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 ); _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 4] = _B; _OutputImage[( _Index << 3 ) + 5] = _G; _OutputImage[( _Index << 3 ) + 6] = _R; _OutputImage[( _Index << 3 ) + 7] = 0xFF; } );   BitmapToDisplay.WritePixels( new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ), _OutputImage, BitmapToDisplay.BackBufferStride, 0 ); } }

    Read the article

  • Is it a good practice to use smaller data types for variables to save memory?

    - by ThePlan
    When I learned the C++ language for the first time, I learned that besides int, float, etc., smaller or bigger versions of these data types exist within the language. For example, I could declare a variable x as int x; or short int x; The main difference is that short int takes 2 bytes of memory while int takes 4 bytes, and short int has a smaller range, but we could also declare it to make it even more restrictive: int x; short int x; unsigned short int x; My question here is whether it is good practice to use separate data types according to what values your variable takes within the program. Is it a good idea to always declare variables according to these data types?
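
    For comparison, a short C# sketch of the same size trade-off (C# fixes these widths in its specification, whereas C++ only guarantees minimum sizes):

        using System;

        class TypeSizes
        {
            static void Main()
            {
                // sizeof on the built-in numeric types is a compile-time constant in C#.
                Console.WriteLine($"short:  {sizeof(short)} bytes");  // 2
                Console.WriteLine($"ushort: {sizeof(ushort)} bytes"); // 2
                Console.WriteLine($"int:    {sizeof(int)} bytes");    // 4
                Console.WriteLine($"long:   {sizeof(long)} bytes");   // 8

                // A single variable saves only a couple of bytes; the choice matters in bulk.
                Console.WriteLine($"1,000,000 shorts: {1_000_000 * sizeof(short):N0} bytes");
                Console.WriteLine($"1,000,000 ints:   {1_000_000 * sizeof(int):N0} bytes");
            }
        }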

    Read the article

  • What are some potential issues in blocking all incoming requests from the Amazon cloud?

    - by ElHaix
    Recently I, along with the rest of the world, have seen a significant increase in what appears to be scraping from Amazon AWS-related sources. So simply put, I blocked all incoming requests from the Amazon cloud for our hosted application. I know that some good services/bots are now hosted on the cloud, and I'm wondering if certain IP addresses should be allowed, as they may gather data that would in the end benefit our site's SEO rankings? -- UPDATE -- I added a feature to block requests from the following hosts: Amazon Softlayer ServerDeals GigAvenue Since then, I have seen my network traffic decrease (monitored by network out bytes). Average operation is around 10,000,000 bytes. You can see where last week I was not blocking, then started blocking. I've since removed the blocks and will see what the outcome is.

    Read the article

  • Cannot connect to ethernet port

    - by Jnir
    I'm running Ubuntu 12.04 LTS. I'm trying to connect via Ethernet for the first time but have had no success. ifconfig eth0 returns: eth0 Link encap:Ethernet HWaddr: 00:01:2e:3f:f1:a0 inet6 addr: fe80::201:2eff:fe3f:f1a0 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:68 errors:0 dropped:0 overruns:0 frame:0 TX packets:183 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:8374 (8.3 KB) TX bytes:42944 (42.9KB) Interrupt:16 Base address:0x6c00 /etc/network/interfaces has auto lo iface lo inef loopback auto eth0 iface eth0 inet dhcp sudo /etc/init.d/networking restart prints: * Running /etc/init.d/networking restart is deprecated because it may not enable again some interfaces * Reconfiguring network interfaces... Failed to bring up eth0 [OK]

    Read the article

  • Collaborative Whiteboard using WebSocket in GlassFish 4 - Text/JSON and Binary/ArrayBuffer Data Transfer (TOTD #189)

    - by arungupta
    This blog has published a few blogs on using JSR 356 Reference Implementation (Tyrus) as its integrated in GlassFish 4 promoted builds. TOTD #183: Getting Started with WebSocket in GlassFish TOTD #184: Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark TOTD #185: Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket TOTD #186: Custom Text and Binary Payloads using WebSocket One of the typical usecase for WebSocket is online collaborative games. This Tip Of The Day (TOTD) explains a sample that can be used to build such games easily. The application is a collaborative whiteboard where different shapes can be drawn in multiple colors. The shapes drawn on one browser are automatically drawn on all other peer browsers that are connected to the same endpoint. The shape, color, and coordinates of the image are transfered using a JSON structure. A browser may opt-out of sharing the figures. Alternatively any browser can send a snapshot of their existing whiteboard to all other browsers. Take a look at this video to understand how the application work and the underlying code. The complete sample code can be downloaded here. The code behind the application is also explained below. The web page (index.jsp) has a HTML5 Canvas as shown: <canvas id="myCanvas" width="150" height="150" style="border:1px solid #000000;"></canvas> And some radio buttons to choose the color and shape. By default, the shape, color, and coordinates of any figure drawn on the canvas are put in a JSON structure and sent as a message to the WebSocket endpoint. The JSON structure looks like: { "shape": "square", "color": "#FF0000", "coords": { "x": 31.59999942779541, "y": 49.91999053955078 }} The endpoint definition looks like: @WebSocketEndpoint(value = "websocket",encoders = {FigureDecoderEncoder.class},decoders = {FigureDecoderEncoder.class})public class Whiteboard { As you can see, the endpoint has decoder and encoder registered that decodes JSON to a Figure (a POJO class) and vice versa respectively. The decode method looks like: public Figure decode(String string) throws DecodeException { try { JSONObject jsonObject = new JSONObject(string); return new Figure(jsonObject); } catch (JSONException ex) { throw new DecodeException("Error parsing JSON", ex.getMessage(), ex.fillInStackTrace()); }} And the encode method looks like: public String encode(Figure figure) throws EncodeException { return figure.getJson().toString();} FigureDecoderEncoder implements both decoder and encoder functionality but thats purely for convenience. But the recommended design pattern is to keep them in separate classes. In certain cases, you may even need only one of them. On the client-side, the Canvas is initialized as: var canvas = document.getElementById("myCanvas");var context = canvas.getContext("2d");canvas.addEventListener("click", defineImage, false); The defineImage method constructs the JSON structure as shown above and sends it to the endpoint using websocket.send(). An instant snapshot of the canvas is sent using binary transfer with WebSocket. The WebSocket is initialized as: var wsUri = "ws://localhost:8080/whiteboard/websocket";var websocket = new WebSocket(wsUri);websocket.binaryType = "arraybuffer"; The important part is to set the binaryType property of WebSocket to arraybuffer. This ensures that any binary transfers using WebSocket are done using ArrayBuffer as the default type seem to be blob. 
The actual binary data transfer is done using the following: var image = context.getImageData(0, 0, canvas.width, canvas.height);var buffer = new ArrayBuffer(image.data.length);var bytes = new Uint8Array(buffer);for (var i=0; i<bytes.length; i++) { bytes[i] = image.data[i];}websocket.send(bytes); This comprehensive sample shows the following features of JSR 356 API: Annotation-driven endpoints Send/receive text and binary payload in WebSocket Encoders/decoders for custom text payload In addition, it also shows how images can be captured and drawn using HTML5 Canvas in a JSP. How could this be turned in to an online game ? Imagine drawing a Tic-tac-toe board on the canvas with two players playing and others watching. Then you can build access rights and controls within the application itself. Instead of sending a snapshot of the canvas on demand, a new peer joining the game could be automatically transferred the current state as well. Do you want to build this game ? I built a similar game a few years ago. Do somebody want to rewrite the game using WebSocket APIs ? :-) Many thanks to Jitu and Akshay for helping through the WebSocket internals! Here are some references for you: JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds) Subsequent blogs will discuss the following topics (not necessary in that order) ... Error handling Interface-driven WebSocket endpoint Java client API Client and Server configuration Security Subprotocols Extensions Other topics from the API

    Read the article

  • Given two sets of DNA, what does it take to computationally "grow" that person from a fertilised egg and see what they become? [closed]

    - by Nicholas Hill
    My question is essentially entirely in the title, but let me add some points to prevent some "why on earth would you want to do that" sort of answers: This is more of a mind experiment than an attempt to implement real software. For fun. Don't worry about computational speed or the number of available memory bytes. Computers get faster and better all of the time. Imagine we have two data files: Mother.dna and Father.dna. What else would be required? (Bonus point for someone who tells me approx how many GB each file will be, and if the size of the files are exactly the same number of bytes for everyone alive on Earth!) There would ideally need to be a way to see what the egg becomes as it becomes a human adult. If you fancy, feel free to outline the design. I am initially thinking that there'd need to be some sort of volumetric voxel-based 3D environment for simulation purposes.

    Read the article

  • What's the maximum filename length in encrypted ext4 partition?

    - by fdlm
    I'm using a two-partition setup with Ubuntu (one partition for the system, one for my home directories). Until now I had an ext3-formatted home partition, which I'm currently trying to migrate to an encrypted ext4 partition. However, when copying the files the system keeps throwing errors complaining that filenames are too long. As far as I could find out from Wikipedia, the maximum filename length is 255 bytes for ext3 and 256 bytes for ext4. So where's the problem, and how can I solve it? Thanks!

    Read the article

  • Dynamic obfuscation by self-modifying code

    - by Fallout2
    Hi all, Here what's i am trying to do: assume you have two fonction void f1(int *v) { *v = 55; } void f2(int *v) { *v = 44; } char *template; template = allocExecutablePages(...); char *allocExecutablePages (int pages) { template = (char *) valloc (getpagesize () * pages); if (mprotect (template, getpagesize (), PROT_READ|PROT_EXEC|PROT_WRITE) == -1) { perror (“mprotect”); } } I would like to do a comparison between f1 and f2 (so tell what is identical and what is not) (so get the assembly lines of those function and make a line by line comparison) And then put those line in my template. Is there a way in C to do that? THanks Update Thank's for all you answers guys but maybe i haven't explained my need correctly. basically I'm trying to write a little obfuscation method. The idea consists in letting two or more functions share the same location in memory. A region of memory (which we will call a template) is set up containing some of the machine code bytes from the functions, more specifically, the ones they all have in common. Before a particular function is executed, an edit script is used to patch the template with the necessary machine code bytes to create a complete version of that function. When another function assigned to the same template is about to be executed, the process repeats, this time with a different edit script. To illustrate this, suppose you want to obfuscate a program that contains two functions f1 and f2. The first one (f1) has the following machine code bytes Address Machine code 0 10 1 5 2 6 3 20 and the second one (f2) has Address Machine code 0 10 1 9 2 3 3 20 At obfuscation time, one will replace f1 and f2 by the template Address Machine code 0 10 1 ? 2 ? 3 20 and by the two edit scripts e1 = {1 becomes 5, 2 becomes 6} and e2 = {1 becomes 9, 2 becomes 3}. #include <stdlib.h> #include <string.h> typedef unsigned int uint32; typedef char * addr_t; typedef struct { uint32 offset; char value; } EDIT; EDIT script1[200], script2[200]; char *template; int template_len, script_len = 0; typedef void(*FUN)(int *); int val, state = 0; void f1_stub () { if (state != 1) { patch (script1, script_len, template); state = 1; } ((FUN)template)(&val); } void f2_stub () { if (state != 2) { patch (script2, script_len, template); state = 2; } ((FUN)template)(&val); } int new_main (int argc, char **argv) { f1_stub (); f2_stub (); return 0; } void f1 (int *v) { *v = 99; } void f2 (int *v) { *v = 42; } int main (int argc, char **argv) { int f1SIZE, f2SIZE; /* makeCodeWritable (...); */ /* template = allocExecutablePages(...); */ /* Computed at obfuscation time */ diff ((addr_t)f1, f1SIZE, (addr_t)f2, f2SIZE, script1, script2, &script_len, template, &template_len); /* We hide the proper code */ memset (f1, 0, f1SIZE); memset (f2, 0, f2SIZE); return new_main (argc, argv); } So i need now to write the diff function. that will take the addresses of my two function and that will generate a template with the associated script. So that is why i would like to compare bytes by bytes my two function Sorry for my first post who was not very understandable! Thank you
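
    As an illustration of just the template and edit-script construction described in the question, here is a C# sketch that works on plain byte arrays of equal length (a real diff would operate on the machine code of the two functions and would also have to handle differing lengths):

        using System;
        using System.Collections.Generic;

        // One patch instruction: "the byte at Offset becomes Value".
        record Edit(int Offset, byte Value);

        class TemplateDiff
        {
            // Build a shared template plus one edit script per input function.
            static (byte[] Template, List<Edit> Script1, List<Edit> Script2)
                Diff(byte[] code1, byte[] code2, byte placeholder = 0x00)
            {
                if (code1.Length != code2.Length)
                    throw new ArgumentException("this sketch assumes equal-length functions");

                var template = new byte[code1.Length];
                var script1 = new List<Edit>();
                var script2 = new List<Edit>();

                for (int i = 0; i < code1.Length; i++)
                {
                    if (code1[i] == code2[i])
                    {
                        template[i] = code1[i];       // byte shared by both functions
                    }
                    else
                    {
                        template[i] = placeholder;    // hole patched at run time
                        script1.Add(new Edit(i, code1[i]));
                        script2.Add(new Edit(i, code2[i]));
                    }
                }
                return (template, script1, script2);
            }

            static void Main()
            {
                // The worked example from the question: f1 = 10 5 6 20, f2 = 10 9 3 20.
                var (template, e1, e2) = Diff(new byte[] { 10, 5, 6, 20 },
                                              new byte[] { 10, 9, 3, 20 });
                Console.WriteLine(string.Join(" ", template));  // 10 0 0 20
                Console.WriteLine(string.Join(", ", e1.ConvertAll(e => $"{e.Offset} becomes {e.Value}")));
                Console.WriteLine(string.Join(", ", e2.ConvertAll(e => $"{e.Offset} becomes {e.Value}")));
            }
        }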

    Read the article

  • Error when I compile kernel 3.3.2 in ubuntu 12.04

    - by rock-alternativo
    I think this is not a bug in Ubuntu. This is the output: OBJCOPY arch/x86/boot/vmlinux.bin HOSTCC arch/x86/boot/tools/build BUILD arch/x86/boot/bzImage Setup is 16800 bytes (padded to 16896 bytes). System is 4599 kB CRC f77d64c0 Kernel: arch/x86/boot/bzImage is ready (#1) Building modules, stage 2. MODPOST 3268 modules ERROR: "__modver_version_show" [drivers/staging/rts5139/rts5139.ko] undefined! WARNING: modpost: Found 4 section mismatch(es). To see full details build your kernel with: 'make CONFIG_DEBUG_SECTION_MISMATCH=y' make[1]: *** [__modpost] Error 1 make: *** [modules] Error 2

    Read the article

  • Why would Linux VM in vSphere ESXi 5.5 show dramatically increased disk i/o latency?

    - by mhucka
    I'm stumped and I hope someone else will recognize the symptoms of this problem. Hardware: new Dell T110 II, dual-core Pentium G860 2.9 GHz, onboard SATA controller, one new 500 GB 7200 RPM cabled hard drive inside the box, other drives inside but not mounted yet. No RAID. Software: fresh CentOS 6.5 virtual machine under VMware ESXi 5.5.0 (build 174 + vSphere Client). 2.5 GB RAM allocated. The disk is how CentOS offered to set it up, namely as a volume inside an LVM Volume Group, except that I skipped having a separate /home and simply have / and /boot. CentOS is patched up, ESXi patched up, latest VMware tools installed in the VM. No users on the system, no services running, no files on the disk but the OS installation. I'm interacting with the VM via the VM virtual console in vSphere Client. Before going further, I wanted to check that I configured things more or less reasonably. I ran the following command as root in a shell on the VM: for i in 1 2 3 4 5 6 7 8 9 10; do dd if=/dev/zero of=/test.img bs=8k count=256k conv=fdatasync done I.e., just repeat the dd command 10 times, which results in printing the transfer rate each time. The results are disturbing. It starts off well: 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 20.451 s, 105 MB/s 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 20.4202 s, 105 MB/s ... but after 7-8 of these, it then prints 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GG) copied, 82.9779 s, 25.9 MB/s 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 84.0396 s, 25.6 MB/s 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 103.42 s, 20.8 MB/s If I wait a significant amount of time, say 30-45 minutes, and run it again, it again goes back to 105 MB/s, and after several rounds (sometimes a few, sometimes 10+), it drops to ~20-25 MB/s again. Plotting the disk latency in vSphere's interface, it shows periods of high disk latency hitting 1.2-1.5 seconds during the times that dd reports the low throughput. (And yes, things get pretty unresponsive while that's happening.) What could be causing this? I'm comfortable that it is not due to the disk failing, because I also had configured two other disks as an additional volume in the same system. At first I thought I did something wrong with that volume, but after commenting the volume out from /etc/fstab and rebooting, and trying the tests on / as shown above, it became clear that the problem is elsewhere. It is probably an ESXi configuration problem, but I'm not very experienced with ESXi. It's probably something stupid, but after trying to figure this out for many hours over multiple days, I can't find the problem, so I hope someone can point me in the right direction. (P.S.: yes, I know this hardware combo won't win any speed awards as a server, and I have reasons for using this low-end hardware and running a single VM, but I think that's besides the point for this question [unless it's actually a hardware problem].) ADDENDUM #1: Reading other answers such as this one made me try adding oflag=direct to dd. However, it makes no difference in the pattern of results: initially the numbers are higher for many rounds, then they drop to 20-25 MB/s. (The initial absolute numbers are in the 50 MB/s range.) ADDENDUM #2: Adding sync ; echo 3 > /proc/sys/vm/drop_caches into the loop does not make a difference at all. 
ADDENDUM #3: To take out further variables, I now run dd such that the file it creates is larger than the amount of RAM on the system. The new command is dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct. Initial throughput numbers with this version of the command are ~50 MB/s. They drop to 20-25 MB/s when things go south. ADDENDUM #4: Here is the output of iostat -d -m -x 1 running in another terminal window while performance is "good" and then again when it's "bad". (While this is going on, I'm running dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct.) First, when things are "good", it shows this: When things go "bad", iostat -d -m -x 1 shows this:

    Read the article

  • PMDB Block Size Choice

    - by Brian Diehl
    Choosing a block size for the P6 PMDB database is not a difficult task. In fact, taking the default of 8k is going to be just fine. Block size is one of those things that is always hotly debated. Everyone has their personal preference and can cite plenty of good reasons for their choice. To add to the confusion, Oracle supports multiple block sizes within the same instance. So how to decide, and what is the justification? Like most OLTP systems, Oracle Primavera P6 has a wide variety of data. A typical table's average row size may be less than 50 bytes or upwards of 500 bytes. There are also several tables with BLOB types but the LOB data tends not to be very large. It is likely that no single block size would be perfect for every table. So how to choose? My preference is for the 8k (8192 bytes) block size. It is a good compromise that is not too small for the wider rows, yet not too big for the thin rows. It is also important to remember that database blocks are the smallest unit of change and caching. I prefer to have more, individual "working units" in my database. For an instance with 4 GB of buffer cache, an 8k block will provide 524,288 blocks of cache. The following SQL*Plus script returns the average, median, min, and max rows per block. column "AVG(CNT)" format 999.99 set verify off select avg(cnt), median(cnt), min(cnt), max(cnt), count(*) from ( select dbms_rowid.ROWID_RELATIVE_FNO(rowid) , dbms_rowid.ROWID_BLOCK_NUMBER(rowid) , count(*) cnt from &tab group by dbms_rowid.ROWID_RELATIVE_FNO(rowid) , dbms_rowid.ROWID_BLOCK_NUMBER(rowid) ) Running this for the TASK table, I get this result on a database with an 8k block size. Each block, on average, holds about 19 rows (activities). Enter value for tab: task AVG(CNT) MEDIAN(CNT) MIN(CNT) MAX(CNT) COUNT(*) -------- ----------- ---------- ---------- ---------- 18.72 19 3 28 415917 I recommend an 8k block size for the P6 transactional database. All of our internal performance and scalability tests are done with this block size. This does not mean that other block sizes will not work. Instead, like many other parameters, this is the safest choice.

    Read the article
