Search Results

Search found 2551 results on 103 pages for 'hailstone sequence'.


  • Windows XP Boot Issue - Diagnosing A Hard Drive Failure

    - by duffymo
    My five-year-old HP desktop running Windows XP SP3 wouldn't boot from the hard drive yesterday afternoon. I would see the boot sequence begin, then nothing but a black screen. Fortunately, I had just done an Acronis backup to my external drive in the morning, and I have a bootable USB key. I put the USB key into a port, powered up the machine, and put the USB key first in the boot order. Voila! My machine came alive.

    But now I'm confused about what the problem is and what to do next. I assumed that my hard drive was toast, but now that the machine is alive I can see files on my C: drive with changes I made just yesterday. Clearly the drive is not dead. Here are my questions:

    1. What could explain my inability to boot from the hard drive?
    2. What would a remedy be? What's my best course of action? Should I replace the hard drive with a new one?
    3. If I replace the hard drive, do I reinstall the OS and then apply the backup I did yesterday?
    4. If I decide that re-installing Windows XP makes no sense, how do I get back the Acronis backup that I did yesterday? I don't want to lose that.

    UPDATE: I just learned one more key fact. I'm having some work done on my house, and I neglected to shut my machine down before the contractor came. My wife said he shut down the power to work on a circuit and then powered the house back up. I have a surge protector, but is it possible that cycling the power did some damage?
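
    Since the data is still readable, the failure may be limited to the file system or boot code rather than the disk hardware. A first diagnostic step, assuming you can boot the XP Recovery Console from installation media, is a sketch like this (standard Recovery Console commands; run against your system drive):

      chkdsk C: /r     :: scan for bad sectors and recover readable information
      fixboot C:       :: write a new partition boot sector
      fixmbr           :: rewrite the master boot record (use with care)

    If chkdsk reports unrecoverable errors, or the drive vendor's SMART diagnostic fails, the safer course is to replace the drive and restore yesterday's Acronis image onto it (Acronis bootable rescue media can restore a full-disk image to a blank drive, so a separate OS reinstall should not be needed in that path).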


  • MAMP MySQL won't start

    - by Tony
    I uninstalled MAMP completely, downloaded a fresh copy of MAMP 2 from the MAMP website, and did a clean install. However, when I try to start MySQL, I get the following error log:

      111120 21:37:49 mysqld_safe Starting mysqld daemon with databases from /Applications/MAMP/db/mysql
      111120 21:37:50 [Warning] You have forced lower_case_table_names to 0 through a command-line option, even though your file system '/Applications/MAMP/db/mysql/' is case insensitive. This means that you can corrupt a MyISAM table by accessing it with different cases. You should consider changing lower_case_table_names to 1 or 2
      111120 21:37:50 [Note] Plugin 'FEDERATED' is disabled.
      111120 21:37:50 InnoDB: The InnoDB memory heap is disabled
      111120 21:37:50 InnoDB: Mutexes and rw_locks use GCC atomic builtins
      111120 21:37:50 InnoDB: Compressed tables use zlib 1.2.3
      111120 21:37:50 InnoDB: Initializing buffer pool, size = 128.0M
      111120 21:37:50 InnoDB: Completed initialization of buffer pool
      111120 21:37:50 InnoDB: highest supported file format is Barracuda.
      111120 21:37:50 InnoDB: Waiting for the background threads to start
      111120 21:37:51 InnoDB: 1.1.5 started; log sequence number 1595675
      111120 21:37:51 [ERROR] /Applications/MAMP/Library/bin/mysqld: unknown option '--skip-locking'
      111120 21:37:51 [ERROR] Aborting
      111120 21:37:51 InnoDB: Starting shutdown...
      111120 21:37:51 InnoDB: Shutdown completed; log sequence number 1595675
      111120 21:37:51 [Note] /Applications/MAMP/Library/bin/mysqld: Shutdown complete
      111120 21:37:51 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended

    I have no clue why this is happening. I googled around and made sure that no instance of MySQL is running. Nothing seems to help.
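
    The fatal line is the unknown option '--skip-locking': that option was removed in MySQL 5.5, which is the server MAMP 2 bundles (the InnoDB 1.1.x line in the log is the 5.5 series), and its modern spelling is skip-external-locking. A hedged sketch of a fix, assuming the stale option lives in a leftover config file somewhere on the machine (the exact path on your install may differ):

      # find where the obsolete option is being set
      grep -r "skip-locking" /Applications/MAMP /etc/my.cnf ~/.my.cnf 2>/dev/null

      # in whichever file turns up, replace the removed option:
      # skip-locking             <- delete this line
      skip-external-locking      # <- the MySQL 5.5 equivalent

    A my.cnf left behind in /etc or the home directory by a previous MySQL install would survive even a complete MAMP reinstall, which would explain why the clean install did not help.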


  • VMWare steals IP addresses

    - by Ishan Amin
    I'm having a peculiar problem that I think I have narrowed down to VMware. For the past year, every once in a while we lose our internet connection, and not all users (about 10 of them) go down at the same time; it's usually one by one. First someone will call me and say "Internet is down", then we go reset the router, modem, and switch, and it works again for a while, then goes down again without any pattern or repeatable sequence. We repeat the steps again to get everyone in the office running. We called our Internet Service Provider and they constantly say "We see your modem and we see your router, and from their end everything is OK." We have replaced our router, switch, and modem, twice!

    Last Friday it dawned on me that every time we turn on a VMware machine, this sequence of taking everyone down starts, which also explains the "IP Conflict Found" message my users get. So we did a lot of VMware testing and, lo and behold, it takes my internet down. My Yahoo and Gtalk keep working, but the web is down when the VMware machines are started. I use bridged networking for all the VMware machines, but I don't know what else to set it to.

    Now, sorry for the long rambling, but does anyone have any clue how to stop this? thanks, IA
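
    An "IP Conflict Found" with bridged networking usually means a guest is claiming an address something else on the LAN already owns, for example a cloned VM carrying a static IP, or a copied VM reusing a MAC address and therefore an old DHCP lease. One way to take the guests off the office subnet entirely is to switch them to NAT; a sketch of the relevant setting, assuming ethernet0 is the adapter in question (edit the VM's .vmx file while it is powered off, or use the VM's network settings dialog):

      ethernet0.connectionType = "nat"    # was "bridged"

    If the VMs genuinely need to be on the LAN, the alternative is to keep bridged mode but give each guest either a DHCP-assigned address or a static IP outside the router's DHCP pool, and to answer "I copied it" when VMware asks about a moved/copied VM so it generates a fresh MAC address.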


  • mysqld crashes on any statement

    - by ??iu
    I restarted my slave to change configuration settings: to skip reverse hostname lookups on connecting and to enable the slow query log. I edited /etc/my.cnf, making only these changes, then restarted mysqld with /etc/init.d/mysql restart. All appeared to be well, but when I connect to mysqld, remotely or locally, it connects okay; the slight problem is that mysqld crashes whenever you try to issue any kind of statement. The client session looks like:

      Reading table information for completion of table and column names
      You can turn off this feature to get a quicker startup with -A

      Welcome to the MySQL monitor.  Commands end with ; or \g.
      Your MySQL connection id is 3
      Server version: 5.1.31-1ubuntu2-log

      Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

      mysql> show tables;
      ERROR 2006 (HY000): MySQL server has gone away
      No connection. Trying to reconnect...
      Connection id:    1
      Current database: mydb
      ERROR 2006 (HY000): MySQL server has gone away
      No connection. Trying to reconnect...
      ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61)
      ERROR: Can't connect to the server
      ERROR 2006 (HY000): MySQL server has gone away
      No connection. Trying to reconnect...
      ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61)
      ERROR: Can't connect to the server
      ERROR 2006 (HY000): MySQL server has gone away
      Bus error

    The mysqld error log looks like:

      101210 16:35:51 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY).
      101210 16:35:51 InnoDB: Assertion failure in thread 140245598570832 in file handler/ha_innodb.cc line 2595
      InnoDB: Failing assertion: error == DB_SUCCESS
      InnoDB: We intentionally generate a memory trap.
      InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
      InnoDB: If you get repeated assertion failures or crashes, even
      InnoDB: immediately after the mysqld startup, there may be
      InnoDB: corruption in the InnoDB tablespace. Please refer to
      InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
      InnoDB: about forcing recovery.
      101210 16:35:51 - mysqld got signal 6 ;
      This could be because you hit a bug. It is also possible that this binary
      or one of the libraries it was linked against is corrupt, improperly built,
      or misconfigured. This error can also be caused by malfunctioning hardware.
      We will try our best to scrape up some info that will hopefully help diagnose
      the problem, but since we have already crashed, something is definitely wrong
      and this may fail.

      key_buffer_size=16777216
      read_buffer_size=131072
      max_used_connections=3
      max_threads=600
      threads_connected=3
      It is possible that mysqld could use up to
      key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory
      Hope that's ok; if not, decrease some variables in the equation.

      thd: 0x18209220
      Attempting backtrace. You can use the following information to find out
      where mysqld died. If you see no messages after this, something went
      terribly wrong...
      stack_bottom = 0x7f8d791580d0 thread_stack 0x20000
      /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
      /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
      /lib/libpthread.so.0 [0x7f902a76a080]
      /lib/libc.so.6(gsignal+0x35) [0x7f90291f8fb5]
      /lib/libc.so.6(abort+0x183) [0x7f90291fabc3]
      /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b]
      /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
      /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
      /usr/sbin/mysqld [0x63f281]
      /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
      /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
      /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
      /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
      /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
      /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
      /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
      /lib/libpthread.so.0 [0x7f902a7623ba]
      /lib/libc.so.6(clone+0x6d) [0x7f90292abfcd]
      Trying to get some variables.
      Some pointers may be invalid and cause the dump to abort...
      thd->query at 0x18213c70 = thd->thread_id=3
      thd->killed=NOT_KILLED
      The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
      information that should help you find out what is causing the crash.
      101210 16:35:51 mysqld_safe Number of processes running now: 0
      101210 16:35:51 mysqld_safe mysqld restarted
      InnoDB: The log sequence number in ibdata files does not match
      InnoDB: the log sequence number in the ib_logfiles!
      101210 16:35:54 InnoDB: Database was not shut down normally!
      InnoDB: Starting crash recovery.
      InnoDB: Reading tablespace information from the .ibd files...
      InnoDB: Restoring possible half-written data pages from the doublewrite
      InnoDB: buffer...
      101210 16:35:56 InnoDB: Started; log sequence number 456 143528628
      101210 16:35:56 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode.
      101210 16:35:56 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
      101210 16:35:56 [Note] Event Scheduler: Loaded 0 events
      101210 16:35:56 [Note] /usr/sbin/mysqld: ready for connections.
      Version: '5.1.31-1ubuntu2-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
      101210 16:36:11 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY).
      101210 16:36:11 InnoDB: Assertion failure in thread 139955151501648 in file handler/ha_innodb.cc line 2595
      InnoDB: Failing assertion: error == DB_SUCCESS
      InnoDB: We intentionally generate a memory trap.
      InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
      InnoDB: If you get repeated assertion failures or crashes, even
      InnoDB: immediately after the mysqld startup, there may be
      InnoDB: corruption in the InnoDB tablespace. Please refer to
      InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
      InnoDB: about forcing recovery.
      101210 16:36:11 - mysqld got signal 6 ;
      This could be because you hit a bug. It is also possible that this binary
      or one of the libraries it was linked against is corrupt, improperly built,
      or misconfigured. This error can also be caused by malfunctioning hardware.
      We will try our best to scrape up some info that will hopefully help diagnose
      the problem, but since we have already crashed, something is definitely wrong
      and this may fail.

      key_buffer_size=16777216
      read_buffer_size=131072
      max_used_connections=1
      max_threads=600
      threads_connected=1
      It is possible that mysqld could use up to
      key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory
      Hope that's ok; if not, decrease some variables in the equation.

      thd: 0x18588720
      Attempting backtrace. You can use the following information to find out
      where mysqld died. If you see no messages after this, something went
      terribly wrong...
      stack_bottom = 0x7f49d916f0d0 thread_stack 0x20000
      /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
      /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
      /lib/libpthread.so.0 [0x7f4c8a73f080]
      /lib/libc.so.6(gsignal+0x35) [0x7f4c891cdfb5]
      /lib/libc.so.6(abort+0x183) [0x7f4c891cfbc3]
      /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b]
      /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
      /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
      /usr/sbin/mysqld [0x63f281]
      /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
      /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
      /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
      /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
      /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
      /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
      /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
      /lib/libpthread.so.0 [0x7f4c8a7373ba]
      /lib/libc.so.6(clone+0x6d) [0x7f4c89280fcd]
      Trying to get some variables.
      Some pointers may be invalid and cause the dump to abort...
      thd->query at 0x18599950 = thd->thread_id=1
      thd->killed=NOT_KILLED
      The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
      information that should help you find out what is causing the crash.
      101210 16:36:11 mysqld_safe Number of processes running now: 0
      101210 16:36:11 mysqld_safe mysqld restarted

    The config is:

      [mysqld_safe]
      socket          = /var/run/mysqld/mysqld.sock
      nice            = 0

      [mysqld]
      innodb_file_per_table
      innodb_buffer_pool_size=10G
      innodb_log_buffer_size=4M
      innodb_flush_log_at_trx_commit=2
      innodb_thread_concurrency=8
      skip-slave-start
      server-id=3
      #
      # * IMPORTANT
      #   If you make changes to these settings and your system uses apparmor, you may
      #   also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
      #
      user            = mysql
      pid-file        = /var/run/mysqld/mysqld.pid
      socket          = /var/run/mysqld/mysqld.sock
      port            = 3306
      basedir         = /usr
      datadir         = /DB2/mysql
      tmpdir          = /tmp
      skip-external-locking
      #
      # Instead of skip-networking the default is now to listen only on
      # localhost which is more compatible and is not less secure.
      #bind-address           = 127.0.0.1
      #
      # * Fine Tuning
      #
      key_buffer              = 16M
      max_allowed_packet      = 16M
      thread_stack            = 128K
      thread_cache_size       = 8
      # This replaces the startup script and checks MyISAM tables if needed
      # the first time they are touched
      myisam-recover          = BACKUP
      max_connections         = 600
      #table_cache            = 64
      #thread_concurrency     = 10
      #
      # * Query Cache Configuration
      #
      query_cache_limit       = 1M
      query_cache_size        = 32M
      #
      skip-federated
      slow-query-log
      skip-name-resolve

    Update: I followed the instructions at http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html and set innodb_force_recovery = 4; the logs show a different error, but the behavior is still the same:

      101210 19:14:15 mysqld_safe mysqld restarted
      101210 19:14:19 InnoDB: Started; log sequence number 456 143528628
      InnoDB: !!! innodb_force_recovery is set to 4 !!!
      101210 19:14:19 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode.
      101210 19:14:19 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
      101210 19:14:19 [Note] Event Scheduler: Loaded 0 events
      101210 19:14:19 [Note] /usr/sbin/mysqld: ready for connections.
      Version: '5.1.31-1ubuntu2-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
      101210 19:14:32 InnoDB: error: space object of table mydb/__twitter_friend,
      InnoDB: space id 1602 did not exist in memory. Retrying an open.
      101210 19:14:32 InnoDB: error: space object of table mydb/access_request,
      InnoDB: space id 1318 did not exist in memory. Retrying an open.
      101210 19:14:32 InnoDB: error: space object of table mydb/activity,
      InnoDB: space id 1595 did not exist in memory. Retrying an open.
      101210 19:14:32 - mysqld got signal 11 ;
      This could be because you hit a bug. It is also possible that this binary
      or one of the libraries it was linked against is corrupt, improperly built,
      or misconfigured. This error can also be caused by malfunctioning hardware.
      We will try our best to scrape up some info that will hopefully help diagnose
      the problem, but since we have already crashed, something is definitely wrong
      and this may fail.

      key_buffer_size=16777216
      read_buffer_size=131072
      max_used_connections=1
      max_threads=600
      threads_connected=1
      It is possible that mysqld could use up to
      key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory
      Hope that's ok; if not, decrease some variables in the equation.

      thd: 0x1753c070
      Attempting backtrace. You can use the following information to find out
      where mysqld died. If you see no messages after this, something went
      terribly wrong...
      stack_bottom = 0x7f7a0b5800d0 thread_stack 0x20000
      /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
      /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
      /lib/libpthread.so.0 [0x7f7cbc350080]
      /usr/sbin/mysqld(ha_innobase::innobase_get_index(unsigned int)+0x46) [0x77c516]
      /usr/sbin/mysqld(ha_innobase::innobase_initialize_autoinc()+0x40) [0x77c640]
      /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x3f3) [0x781f23]
      /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
      /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
      /usr/sbin/mysqld [0x63f281]
      /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
      /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
      /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
      /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
      /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
      /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
      /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
      /lib/libpthread.so.0 [0x7f7cbc3483ba]
      /lib/libc.so.6(clone+0x6d) [0x7f7cbae91fcd]
      Trying to get some variables.
      Some pointers may be invalid and cause the dump to abort...
      thd->query at 0x1754d690 = thd->thread_id=1
      thd->killed=NOT_KILLED
      The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
      information that should help you find out what is causing the crash.
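
    Both backtraces converge on ha_innobase::innobase_initialize_autoinc: the server dies while reading MAX(job_id) to initialize a table's auto-increment counter at open time, which points at index or tablespace corruption (or a 5.1.31-era InnoDB bug tickled by it) rather than at the two config changes. The usual escape route under forced recovery is dump, re-initialize, reload; a hedged sketch using the datadir from the config above (paths are illustrative, and since this box is a slave, re-seeding from a fresh master snapshot may be simpler still):

      # 1. with innodb_force_recovery still set, dump everything while the server is up
      mysqldump --all-databases > /root/all-databases.sql

      # 2. stop mysqld, set the damaged datadir aside, and remove the
      #    innodb_force_recovery line from /etc/my.cnf
      /etc/init.d/mysql stop
      mv /DB2/mysql /DB2/mysql.damaged      # keep the old files for forensics
      mysql_install_db --user=mysql --datadir=/DB2/mysql

      # 3. restart with a fresh tablespace and reload the dump
      /etc/init.d/mysql start
      mysql < /root/all-databases.sql

    If the dump itself crashes the server, raising innodb_force_recovery one step at a time (and dumping database by database to isolate the broken table) is the documented fallback.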


  • MAMP Pro mysqld won't start on OS X Lion

    - by Mike
    I'm getting a "Start MySQL Failed" error in the GUI. When I attempt to start mysqld from the CLI, I get the following error:

      $ /Applications/MAMP/Library/bin/mysqld
      120623 23:12:47 [Warning] Setting lower_case_table_names=2 because file system for /Applications/MAMP/db/mysql/ is case insensitive
      120623 23:12:47 [Note] Plugin 'FEDERATED' is disabled.
      120623 23:12:47 InnoDB: The InnoDB memory heap is disabled
      120623 23:12:47 InnoDB: Mutexes and rw_locks use GCC atomic builtins
      120623 23:12:47 InnoDB: Compressed tables use zlib 1.2.3
      120623 23:12:47 InnoDB: Initializing buffer pool, size = 128.0M
      120623 23:12:47 InnoDB: Completed initialization of buffer pool
      120623 23:12:47 InnoDB: highest supported file format is Barracuda.
      120623 23:12:47 InnoDB: Waiting for the background threads to start
      120623 23:12:48 InnoDB: 1.1.5 started; log sequence number 1595675
      120623 23:12:48 [ERROR] /Applications/MAMP/Library/bin/mysqld: unknown option '--skip-locking'
      120623 23:12:48 [ERROR] Aborting
      120623 23:12:48 InnoDB: Starting shutdown...
      120623 23:12:49 InnoDB: Shutdown completed; log sequence number 1595675
      120623 23:12:49 [Note] /Applications/MAMP/Library/bin/mysqld: Shutdown complete

    I have deleted the mysql.pid file located at /Applications/MAMP/tmp/mysql/mysql.pid, and I still get the error above. I can't find where MAMP has set --skip-locking; my.cnf doesn't have it anywhere. Activity Monitor shows a mysqld process running as me, and every time I kill the process, both via Activity Monitor and via kill -9 pid, it starts right back up. Sampling the process points back to the MAMP mysqld. WTF?! I'm about to throw MAMP out the window and boot up a VM of CentOS =)
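
    Two separate things seem to be going on here: something (probably launchd, if MAMP Pro installed a LaunchDaemon) keeps resurrecting mysqld every time it is killed, and a generated config is still passing the --skip-locking option that MySQL 5.5 removed (its replacement is skip-external-locking). MAMP Pro regenerates my.cnf from a template each start, so editing the live my.cnf may not stick; the template is edited from the GUI instead. A hedged sketch for hunting both problems down (paths and the plist name are guesses about a typical MAMP Pro install):

      # find every file under MAMP that still mentions the removed option
      grep -rl "skip-locking" /Applications/MAMP "/Library/Application Support/appsolute" 2>/dev/null

      # see what keeps restarting mysqld, and unload it
      launchctl list | grep -i mysql
      sudo launchctl unload /Library/LaunchDaemons/de.appsolute.mamppro.mysql.plist   # name is a guess

    In MAMP Pro itself, File > Edit Template > MySQL (my.cnf) is where the generated config comes from; removing or renaming the stale option there should survive restarts.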


  • What XMonad Configuration Best Replicates Default Ion3 Behavior and Feature Set?

    - by mtp
    Not being very familiar with Haskell, and lamenting that Ion 3 is now abandonware, I am curious whether anyone out there has found a way of replicating the default Ion 3 behavior and aesthetics in XMonad. If I can't have a near-exact replica of Ion 3-style behavior in XMonad, here is what would be critical to me:

    1. Virtual desktops that are empty by default and that spawn full-screen applications, which can be split horizontally or vertically into even halves, leaving an empty adjacent pane.
    2. Panes, which house open windows, that are manually resizable, preferably via the keyboard.
    3. Panes that exhibit tabbed behavior, meaning they can house multiple windows.
    4. Windows that can be tagged and moved between panes / virtual desktops via a keyboard sequence.
    5. A given window that may be temporarily exploded into full-screen mode via a keyboard sequence.
    6. Each new virtual desktop starting in the same state, i.e. with one pane.
    7. Each virtual desktop with its panes divided independently of the other virtual desktops.

    From my investigation, it appears that there are several configurations that provide #3. As much as I want to spend the time to familiarize myself with Haskell, I simply don't have the time. Any suggestions would be greatly appreciated. As far as I can tell, Ion has no conception of a master pane or window, so that behavior is not desired.
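
    For point 3 alone, the tabbed layout from xmonad-contrib is enough; a minimal xmonad.hs sketch, assuming xmonad and xmonad-contrib are installed (a fuller Ion-like setup would likely also pull in modules such as XMonad.Layout.SubLayouts for tabs nested inside splits):

      import XMonad
      import XMonad.Layout.Tabbed          -- tabbed panes (xmonad-contrib)
      import XMonad.Layout.ResizableTile   -- splits resizable from the keyboard

      main :: IO ()
      main = xmonad defaultConfig
          { layoutHook = simpleTabbed ||| ResizableTall 1 (3/100) (1/2) []
          }

    The default mod-space keybinding cycles between the two layouts, and Full (reachable the same way in the stock config) covers the temporary full-screen case; per-workspace independent layouts, as in point 7, are XMonad's default behavior.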


  • Apache and MySQL not working well after extending filesystem

    - by xtrimsky
    I had 4GB on my /var filesystem (/dev/mapper/vg00-var) and I wanted to extend it to 160GB. I did it following this tutorial: http://faq.1and1.com/dedicated_servers/root_server/linux_admin_help/7.html

    Now I have the space:

      Filesystem            Size  Used Avail Use% Mounted on
      /dev/md1              4.0G  424M  3.6G  11% /
      /dev/mapper/vg00-usr  4.3G  1.4G  3.0G  32% /usr
      /dev/mapper/vg00-var  198G  6.5G  192G   4% /var
      /dev/mapper/vg00-home 4.3G  4.4M  4.3G   1% /home
      none                  1.1G     0  1.1G   0% /tmp

    But now I have a problem: for Apache to work, each time I reboot I also need to restart it with "apachectl -k restart", which is already terrible. I think this is because /var contains the htdocs. The worst part is that MySQL is not starting at all; MySQL also has some files in /var. What have I done wrong?? :( Thank you

    EDIT: Attaching /var/log/mysqld.log:

      120602 11:17:44 InnoDB: Waiting for the background threads to start
      120602 11:17:45 InnoDB: 1.1.8 started; log sequence number 8354009
      120602 11:17:45 [ERROR] /usr/libexec/mysqld: unknown variable 'set-variable=local-infile=0'
      120602 11:17:45 [ERROR] Aborting
      120602 11:17:45 InnoDB: Starting shutdown...
      120602 11:17:46 InnoDB: Shutdown completed; log sequence number 8354009
      120602 11:17:46 [Note] /usr/libexec/mysqld: Shutdown complete
      120602 11:17:46 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
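
    The MySQL failure, at least, looks unrelated to the resize itself: the set-variable= prefix is pre-MySQL-5.5 syntax that this server (InnoDB 1.1.8 is the 5.5 line) rejects outright, so a my.cnf restored or edited during the migration still carries the old form. A hedged fix; the file location assumes a Red Hat-style layout, given /usr/libexec/mysqld:

      # /etc/my.cnf
      [mysqld]
      # old syntax, rejected by MySQL 5.5:
      # set-variable=local-infile=0
      # modern equivalent:
      local-infile=0

    For the Apache symptom, it is worth checking that /var is mounted (via /etc/fstab) before the httpd init script runs at boot; if Apache starts before /var/www exists, it will fail until restarted by hand, which matches the behavior described.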


  • Cannot destroy ZFS snapshot: dataset already exists

    - by Morven
    I have a server (a T5220, though I doubt it matters) running Solaris 10 8/07, and I have a ZFS pool, "mysql", on internal disk. Within it I have a filesystem "mysql/data/4.1.12", which I snapshot hourly with a script from cron. I have one snapshot, created as one of those hourly snaps, that will not destroy. I have renamed it out of sequence to "mysql/data/4.1.12@wibble" so that my script will not try and fail to destroy it, but it was originally within the sequence, though I doubt that matters. It renames successfully. The snapshot can be successfully navigated and read from through the .zfs/snapshot directory. It has no clones based on it. Trying to destroy it does this:

      (265) root@web-mysql4:/# zfs destroy mysql/data/4.1.12@wibble
      cannot destroy 'mysql/data/4.1.12@wibble': dataset already exists
      (266) root@web-mysql4:/#

    which is apparently nonsensical: of course it already exists, that's the point! Has anyone seen anything like this before? Web searches show nothing obviously similar. I can provide the list of installed patches if necessary.
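
    The "dataset already exists" wording is how older zfs destroy reports a hidden dependent: typically a clone whose origin is the snapshot, possibly one created and half-forgotten via zfs send/receive. A hedged way to check, and to release the snapshot if a clone turns up:

      # list the origin of every dataset in the pool; anything cloned from the
      # stubborn snapshot will show it here
      zfs get -r origin mysql | grep '@wibble'

      # promoting a discovered clone transfers ownership of the snapshot to it,
      # after which the snapshot can be destroyed or renamed from the clone's side
      zfs promote mysql/data/someclone    # hypothetical clone name

    If nothing shows an origin of @wibble, a pool scrub and the installed patch list would be the next things to look at, since this release of Solaris 10 had early ZFS code.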


  • Cannot Install Windows 7 SP1 (64-bit)

    - by Clever Human
    I have tried every way I know to get Windows 7 SP1 (64-bit) to install. It fails every time. Below is what looks like the relevant contents of the CBS.Log file. If there are further details that would help, or more information I can gather, I will get it.

      2011-08-15 10:32:52, Info CBS Startup: Package: Package_for_KB976902~31bf3856ad364e35~amd64~~6.1.1.17514 completed startup processing, new state: Installed, original: Installed, targeted: Installed. hr = 0x80070490
      2011-08-15 10:32:52, Info CBS WER: Generating failure report for package: Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, status: 0x80070490, failure source: CBS Other, start state: Partially Installed, target state: Installed, client id: SP Coordinater Engine
      2011-08-15 10:32:52, Info CBS Failed to query DisableWerReporting flag. Assuming not set... [HRESULT = 0x80070002 - ERROR_FILE_NOT_FOUND]
      2011-08-15 10:32:52, Info CBS Failed to add %windir%\winsxs\pending.xml to WER report because it is missing. Continuing without it...
      2011-08-15 10:32:52, Info CBS Failed to add %windir%\winsxs\pending.xml.bad to WER report because it is missing. Continuing without it...
      2011-08-15 10:32:52, Info CBS SQM: Reporting package change completion for package: Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, current: Partially Installed, original: Partially Installed, target: Installed, status: 0x80070490, failure source: CBS Other, failure details: "(null)", client id: SP Coordinater Engine, initiated offline: False, execution sequence: 517, first merged sequence: 517
      2011-08-15 10:32:52, Info CBS SQM: Upload requested for report: PackageChangeEnd_Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, session id: 101457924, sample type: Standard
      2011-08-15 10:32:52, Info CBS SQM: Ignoring upload request because the sample type is not enabled: Standard

    I have downloaded the service pack and run it from the EXE, I have installed it from Windows Update, and I have run all the troubleshooters I could find. Nothing has worked so far. Any advice would be appreciated.
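
    Status 0x80070490 is ERROR_NOT_FOUND, which in a CBS context usually indicates corruption in the servicing store (WinSxS). On Windows 7, before DISM /RestoreHealth existed, the standard tooling was the System File Checker plus the System Update Readiness Tool (KB947821), which repairs broken CBS metadata; a sketch of the usual sequence, from an elevated command prompt:

      sfc /scannow    :: verify and repair protected system files first

      :: then install the System Update Readiness Tool (KB947821) and review
      :: %windir%\Logs\CBS\CheckSUR.log for anything it could not fix

    If CheckSUR.log reports unrepairable packages, retrying SP1 from the standalone installer after those fixes is the typical next step, with an in-place repair install as the last resort.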


  • How to consume PHP SOAP service using WCF

    - by mr.b
    I am new to web services, so forgive me if I am making some cardinal mistake here, hehe. I have built a SOAP service using PHP. The service is SOAP 1.2 compatible, and I have a WSDL available. I have enabled sessions so that I can track login status, etc. I don't need super security here (i.e. message-level security); all I need is transport security (HTTPS), since this service will be used infrequently and performance is not much of an issue.

    I am having difficulty making it work at all. C# throws an unclear exception ("Server returned an invalid SOAP Fault. Please see InnerException for more details.", which in turn says "Unbound prefix used in qualified name 'rpc:ProcedureNotPresent'."), but consuming the service using a PHP SOAP client behaves as expected (including the session and all). So far, I have the following code (note: due to the amount of real code, I am posting a minimal configuration).

    PHP SOAP server (using the Zend Soap Server library), including the class(es) exposed via the service:

      <?php
      class Verification_LiteralDocumentProxy
      {
          protected $instance;

          public function __call($methodName, $args)
          {
              if ($this->instance === null) {
                  $this->instance = new Verification();
              }
              $result = call_user_func_array(array($this->instance, $methodName), $args[0]);
              return array($methodName.'Result' => $result);
          }
      }

      class Verification
      {
          private $guid = '';
          private $hwid = '';

          /**
           * Initialize connection
           *
           * @param string GUID
           * @param string HWID
           * @return bool
           */
          public function Initialize($guid, $hwid)
          {
              $this->guid = $guid;
              $this->hwid = $hwid;
              return true;
          }

          /**
           * Closes session
           *
           * @return void
           */
          public function Close()
          {
              // if session is working, $this->hwid and $this->guid
              // should contain non-empty values
          }
      }

      // start up session stuff
      $sess = Session::instance();

      require_once 'Zend/Soap/Server.php';
      $server = new Zend_Soap_Server('https://www.somesite.com/api?wsdl');
      $server->setClass('Verification_LiteralDocumentProxy');
      $server->setPersistence(SOAP_PERSISTENCE_SESSION);
      $server->handle();

    WSDL:

      <definitions xmlns="http://schemas.xmlsoap.org/wsdl/" xmlns:tns="https://www.somesite.com/api"
          xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          xmlns:soap-enc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
          name="Verification" targetNamespace="https://www.somesite.com/api">
        <types>
          <xsd:schema targetNamespace="https://www.somesite.com/api">
            <xsd:element name="Initialize">
              <xsd:complexType>
                <xsd:sequence>
                  <xsd:element name="guid" type="xsd:string"/>
                  <xsd:element name="hwid" type="xsd:string"/>
                </xsd:sequence>
              </xsd:complexType>
            </xsd:element>
            <xsd:element name="InitializeResponse">
              <xsd:complexType>
                <xsd:sequence>
                  <xsd:element name="InitializeResult" type="xsd:boolean"/>
                </xsd:sequence>
              </xsd:complexType>
            </xsd:element>
            <xsd:element name="Close">
              <xsd:complexType/>
            </xsd:element>
          </xsd:schema>
        </types>
        <portType name="VerificationPort">
          <operation name="Initialize">
            <documentation>Initializes connection with server</documentation>
            <input message="tns:InitializeIn"/>
            <output message="tns:InitializeOut"/>
          </operation>
          <operation name="Close">
            <documentation>Closes session between client and server</documentation>
            <input message="tns:CloseIn"/>
          </operation>
        </portType>
        <binding name="VerificationBinding" type="tns:VerificationPort">
          <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
          <operation name="Initialize">
            <soap:operation soapAction="https://www.somesite.com/api#Initialize"/>
            <input>
              <soap:body use="literal"/>
            </input>
            <output>
              <soap:body use="literal"/>
            </output>
          </operation>
          <operation name="Close">
            <soap:operation soapAction="https://www.somesite.com/api#Close"/>
            <input>
              <soap:body use="literal"/>
            </input>
            <output>
              <soap:body use="literal"/>
            </output>
          </operation>
        </binding>
        <service name="VerificationService">
          <port name="VerificationPort" binding="tns:VerificationBinding">
            <soap:address location="https://www.somesite.com/api"/>
          </port>
        </service>
        <message name="InitializeIn">
          <part name="parameters" element="tns:Initialize"/>
        </message>
        <message name="InitializeOut">
          <part name="parameters" element="tns:InitializeResponse"/>
        </message>
        <message name="CloseIn">
          <part name="parameters" element="tns:Close"/>
        </message>
      </definitions>

    And finally, the WCF C# consumer code:

      [ServiceContract(SessionMode = SessionMode.Required)]
      public interface IVerification
      {
          [OperationContract(Action = "Initialize", IsInitiating = true)]
          bool Initialize(string guid, string hwid);

          [OperationContract(Action = "Close", IsInitiating = false, IsTerminating = true)]
          void Close();
      }

      class Program
      {
          static void Main(string[] args)
          {
              WSHttpBinding whb = new WSHttpBinding(SecurityMode.Transport, true);
              ChannelFactory<IVerification> cf = new ChannelFactory<IVerification>(
                  whb, "https://www.somesite.com/api");
              IVerification client = cf.CreateChannel();

              Console.WriteLine(client.Initialize("123451515", "15498518").ToString());
              client.Close();
          }
      }

    Any ideas? What am I doing wrong here? Thanks!
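
    One likely culprit, offered as a hypothesis rather than a confirmed diagnosis: WSHttpBinding speaks SOAP 1.2 plus WS-Addressing (and here, with the second constructor argument true, reliable sessions as well), none of which PHP's SOAP extension implements, so the PHP side faults and WCF then chokes on the fault's unbound rpc: prefix. BasicHttpBinding sends plain SOAP 1.1 over HTTPS, which is much closer to what the Zend server expects; a sketch of the client change:

      // plain SOAP 1.1 over HTTPS; no WS-Addressing headers for PHP to trip on
      BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
      binding.AllowCookies = true;   // let the PHP session cookie round-trip

      // note: the [ServiceContract] must drop SessionMode.Required, since
      // BasicHttpBinding cannot provide a WS-* session; the cookie carries it
      ChannelFactory<IVerification> cf = new ChannelFactory<IVerification>(
          binding, "https://www.somesite.com/api");
      IVerification client = cf.CreateChannel();

    Since the WSDL declares soapAction values of the form "https://www.somesite.com/api#Initialize", the Action properties on the [OperationContract] attributes would also need to match those full URIs, not the bare method names.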


  • What pseudo-operators exist in Perl 5?

    - by Chas. Owens
    I am currently documenting all of Perl 5's operators (see the perlopref GitHub project) and I have decided to include Perl 5's pseudo-operators as well. To me, a pseudo-operator in Perl is anything that looks like an operator but is really more than one operator or some other piece of syntax. I have documented the four I am familiar with already:

      ()=   the countof operator
      =()=  the goatse/countof operator
      ~~    the scalar context operator
      }{    the Eskimo-kiss operator

    What other names exist for these pseudo-operators, and do you know of any pseudo-operators I have missed?

    =head1 Pseudo-operators

    There are idioms in Perl 5 that appear to be operators, but are really a
    combination of several operators or pieces of syntax. These pseudo-operators
    have the precedence of the constituent parts.

    =head2 ()= X

    =head3 Description

    This pseudo-operator is the list assignment operator (aka the countof
    operator). It is made up of two items: C<()> and C<=>. In scalar context
    it returns the number of items in the list X. In list context it returns an
    empty list. It is useful when you have something that returns a list and
    you want to know the number of items in that list and don't care about the
    list's contents. It is needed because the comma operator returns the last
    item in the sequence rather than the number of items in the sequence when
    it is placed in scalar context.

    It works because the assignment operator returns the number of items
    available to be assigned when its left hand side has list context. In the
    following example there are five values in the list being assigned to the
    list C<($x, $y, $z)>, so C<$count> is assigned C<5>.

        my $count = my ($x, $y, $z) = qw/a b c d e/;

    The empty list (the C<()> part of the pseudo-operator) triggers this
    behavior.

    =head3 Example

        sub f { return qw/a b c d e/ }

        my $count = ()= f();              #$count is now 5

        my $string = "cat cat dog cat";
        my $cats = ()= $string =~ /cat/g; #$cats is now 3

        print scalar( ()= f() ), "\n";    #prints "5\n"

    =head3 See also

    L</X = Y> and L</X =()= Y>

    =head2 X =()= Y

    This pseudo-operator is often called the goatse operator for reasons better
    left unexamined; it is also called the list assignment or countof operator.
    It is made up of three items: C<=>, C<()>, and C<=>. When X is a scalar
    variable, the number of items in the list Y is returned. If X is an array
    or a hash, it returns an empty list. It is useful when you have something
    that returns a list and you want to know the number of items in that list
    and don't care about the list's contents. It is needed because the comma
    operator returns the last item in the sequence rather than the number of
    items in the sequence when it is placed in scalar context.

    It works because the assignment operator returns the number of items
    available to be assigned when its left hand side has list context. In the
    following example there are five values in the list being assigned to the
    list C<($x, $y, $z)>, so C<$count> is assigned C<5>.

        my $count = my ($x, $y, $z) = qw/a b c d e/;

    The empty list (the C<()> part of the pseudo-operator) triggers this
    behavior.

    =head3 Example

        sub f { return qw/a b c d e/ }

        my $count =()= f();               #$count is now 5

        my $string = "cat cat dog cat";
        my $cats =()= $string =~ /cat/g;  #$cats is now 3

    =head3 See also

    L</=> and L</()=>

    =head2 ~~X

    =head3 Description

    This pseudo-operator is named the scalar context operator. It is made up of
    two bitwise negation operators. It provides scalar context to the
    expression X. It works because the first bitwise negation operator provides
    scalar context to X and performs a bitwise negation of the result; since
    the result of two bitwise negations is the original item, the value of the
    original expression is preserved.

    With the addition of the Smart match operator, this pseudo-operator is even
    more confusing. The C<scalar> function is much easier to understand and you
    are encouraged to use it instead.

    =head3 Example

        my @a = qw/a b c d/;
        print ~~@a, "\n"; #prints 4

    =head3 See also

    L</~X>, L</X ~~ Y>, and L<perlfunc/scalar>

    =head2 X }{ Y

    =head3 Description

    This pseudo-operator is called the Eskimo-kiss operator because it looks
    like two faces touching noses. It is made up of a closing brace and an
    opening brace. It is used when using C<perl> as a command-line program with
    the C<-n> or C<-p> options. It has the effect of running X inside of the
    loop created by C<-n> or C<-p> and running Y at the end of the program.

    It works because the closing brace closes the loop created by C<-n> or
    C<-p> and the opening brace creates a new bare block that is closed by the
    loop's original ending. You can see this behavior by using the
    L<B::Deparse> module. Here is the command C<perl -ne 'print $_;'> deparsed:

        LINE: while (defined($_ = <ARGV>)) {
            print $_;
        }

    Notice how the original code was wrapped with the C<while> loop. Here is
    the deparsing of C<perl -ne '$count++ if /foo/; }{ print "$count\n"'>:

        LINE: while (defined($_ = <ARGV>)) {
            ++$count if /foo/;
        }
        {
            print "$count\n";
        }

    Notice how the C<while> loop is closed by the closing brace we added and
    the opening brace starts a new bare block that is closed by the closing
    brace that was originally intended to close the C<while> loop.

    =head3 Example

        # count unique lines in the file FOO
        perl -nle '$seen{$_}++ }{ print "$_ => $seen{$_}" for keys %seen' FOO

        # sum all of the lines until the user types control-d
        perl -nle '$sum += $_ }{ print $sum'

    =head3 See also

    L<perlrun> and L<perlsyn>

    =cut


  • Client no longer getting data from Web Service after introducing targetNamespace in XSD

    - by Laurence
    Sorry if there is way too much info in this post: there's a load of story before I get to the actual problem. I thought I'd include everything that might be relevant, as I don't have much clue what is wrong.

    I had a working web service and client (both written with VS 2008 in C#) for passing product data to an e-commerce site. The XSD started like this:

      <xs:schema id="Ecommerce" elementFormDefault="qualified"
          xmlns:mstns="http://tempuri.org/Ecommerce.xsd"
          xmlns:xs="http://www.w3.org/2001/XMLSchema">
        <xs:element name="eur">
          <xs:complexType>
            <xs:sequence>
              <xs:element ref="sec" minOccurs="1" maxOccurs="1"/>
            </xs:sequence>
      etc

    Here's a sample document sent from client to service:

      <eur xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          class="ECommerce_WebService" type="product" method="GetLastDateSent"
          chunk_no="1" total_chunks="1" date_stamp="2010-03-10T17:16:34.523" version="1.1">
        <sec guid="BFBACB3C-4C17-4786-ACCF-96BFDBF32DA5" company_name="Company" version="1.1">
          <data />
        </sec>
      </eur>

    Then I had to give the service a targetNamespace. Actually, I don't know if I "had" to set it, but I added (to the same VS project) some code to act as a client to a completely unrelated service (which also had no namespace), and the project would not build until I gave my service a namespace. Now the XSD starts like this:

      <xs:schema id="Ecommerce" elementFormDefault="qualified"
          xmlns:mstns="http://tempuri.org/Ecommerce.xsd"
          xmlns:xs="http://www.w3.org/2001/XMLSchema"
          targetNamespace="http://www.company.com/ecommerce"
          xmlns:ecom="http://www.company.com/ecommerce">
        <xs:element name="eur">
          <xs:complexType>
            <xs:sequence>
              <xs:element ref="ecom:sec" minOccurs="1" maxOccurs="1" />
            </xs:sequence>
      etc

    As you can see above, I also updated all the xs:element ref attributes to give them the "ecom" prefix. Now the project builds again.

    I found the client needed some modification after this. The client uses a SQL stored procedure to generate the XML, which is then deserialized into an object of the correct type for the service's "get_data" method. The object's type used to be "eur", but after updating the web reference to the service it became "get_dataEur". And sure enough, the parent element in the XML had to be changed to "get_dataEur" to be accepted. Then, bizarrely, I also had to put the xmlns attribute containing my namespace on the "sec" element (the immediate child of the parent element) rather than on the parent element.

    Here's a sample document now sent from client to service:

      <get_dataEur xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          class="ECommerce_WebService" type="product" method="GetLastDateSent"
          chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:23:20.653" version="1.1">
        <sec xmlns="http://www.company.com/ecommerce"
            guid="BFBACB3C-4C17-4786-ACCF-96BFDBF32DA5" company_name="Company" version="1.1">
          <data />
        </sec>
      </get_dataEur>

    If in the service's get_data method I then serialize the incoming object, I see this (the parent element is "eur" and the xmlns attribute is on the parent element):

      <eur xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          xmlns="http://www.company.com/ecommerce" class="ECommerce_WebService" type="product"
          method="GetLastDateSent" chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:23:20.653" version="1.1">
        <sec guid="BFBACB3C-4C17-4786-ACCF-96BFDBF32DA5" company_name="Company" version="1.1">
          <data />
        </sec>
      </eur>

    The service then prepares a reply to go back to the client. The XML looks like this (the important data being sent back is the date_stamp attribute in the last_sent element):

      <eur xmlns="http://www.company.com/ecommerce" xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" class="ECommerce_WebService" type="product"
          method="GetLastDateSent" chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:22:57.530" version="1.1">
        <sec version="1.1" xmlns="">
          <data>
            <last_sent date_stamp="2010-02-25T15:15:10.193" />
          </data>
        </sec>
      </eur>

    Now finally, here's the problem!!! The client does not see any data; all it sees is the parent element with nothing inside it. If I serialize the reply object in the client code it looks like this:

      <get_dataResponseEur xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          class="ECommerce_WebService" type="product" method="GetLastDateSent" chunk_no="1" total_chunks="1"
          date_stamp="2010-03-10T18:22:57.53" version="1.1" />

    So, my questions are:

    1. Why isn't my client seeing the contents of the reply document?
    2. How do I fix it?
    3. Why do I have to put the xmlns attribute on a child element rather than the parent element in the outgoing document?

    Here's a bit more possibly relevant info. The client code (pre-namespace) called the service method like this:

      XmlSerializer serializer = new XmlSerializer(typeof(eur));
      XmlReader reader = xml.CreateReader();
      eur eur = (eur)serializer.Deserialize(reader);

      service.Credentials = new NetworkCredential(login, pwd);
      service.Url = url;
      rc = service.get_data(ref eur);

    After the namespace was added, I had to change it to this:

      XmlSerializer serializer = new XmlSerializer(typeof(get_dataEur));
      XmlReader reader = xml.CreateReader();
      get_dataEur eur = (get_dataEur)serializer.Deserialize(reader);
      get_dataResponseEur eur1 = new get_dataResponseEur();

      service.Credentials = new NetworkCredential(login, pwd);
      service.Url = url;
      rc = service.get_data(eur, out eur1);
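
    A pattern worth checking, offered as a hypothesis from the symptoms rather than a confirmed answer: with elementFormDefault="qualified", every child element (sec, data, last_sent) must live in the http://www.company.com/ecommerce namespace, but the reply shows <sec ... xmlns="">, i.e. children in no namespace, so the client's XmlSerializer silently skips them during deserialization and the reply object comes back empty. The same mismatch would explain why the outgoing document only worked once sec itself carried the xmlns. A sketch of making the expectation explicit on the proxy classes (the class and member names mirror the post; Sec is assumed to be the generated child type):

      using System.Xml.Serialization;

      [XmlRoot("eur", Namespace = "http://www.company.com/ecommerce")]
      public partial class get_dataResponseEur
      {
          // the child must be qualified too, or the deserializer discards it
          [XmlElement("sec", Namespace = "http://www.company.com/ecommerce")]
          public Sec sec { get; set; }
      }

    The cleaner fix, if the service side is under your control, is usually to make it emit <sec> and its descendants inside the target namespace instead of resetting it with xmlns="", so the generated proxies work unmodified.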


  • XSD and plain text

    - by Paul Knopf
    I have a REST/XML service that gives me the following:

      <verse-unit unit-id="38009001">
        <marker class="begin-verse" mid="v38009001"/>
        <begin-chapter num="9"/><heading>Judgment on Israel&apos;s Enemies</heading>
        <begin-block-indent/>
        <begin-paragraph class="line-group"/>
        <begin-line/><verse-num begin-chapter="9">1</verse-num>The burden of the word of the <span class="divine-name">Lord</span> is against the land of Hadrach<end-line class="br"/>
        <begin-line class="indent"/>and Damascus is its resting place.<end-line class="br"/>
        <begin-line/>For the <span class="divine-name">Lord</span> has an eye on mankind<end-line class="br"/>
        <begin-line class="indent"/>and on all the tribes of Israel,<footnote id="f1"> A slight emendation yields <i> For to the <span class="divine-name">Lord</span> belongs the capital of Syria and all the tribes of Israel </i> </footnote><end-line class="br"/>
      </verse-unit>

    I used Visual Studio to generate a schema from this and used XSD.EXE to generate classes that I can use to deserialize this mess into programmable stuff. I got everything to work, and it is deserialized perfectly (almost). The problem I have is with the random text mixed throughout the child nodes. The generated verse-unit objects give me a list of objects (begin-line, begin-block-indent, etc.), and also another list of string objects that represent the bits of text throughout the XML. Here is my schema:

      <xs:element maxOccurs="unbounded" name="verse-unit">
        <xs:complexType mixed="true">
          <xs:sequence>
            <xs:choice maxOccurs="unbounded">
              <xs:element name="marker">
                <xs:complexType>
                  <xs:attribute name="class" type="xs:string" use="required" />
                  <xs:attribute name="mid" type="xs:string" use="required" />
                </xs:complexType>
              </xs:element>
              <xs:element name="begin-chapter">
                <xs:complexType>
                  <xs:attribute name="num" type="xs:unsignedByte" use="required" />
                </xs:complexType>
              </xs:element>
              <xs:element name="heading">
                <xs:complexType mixed="true">
                  <xs:sequence minOccurs="0">
                    <xs:element name="span">
                      <xs:complexType>
                        <xs:simpleContent>
                          <xs:extension base="xs:string">
                            <xs:attribute name="class" type="xs:string" use="required" />
                          </xs:extension>
                        </xs:simpleContent>
                      </xs:complexType>
                    </xs:element>
                  </xs:sequence>
                </xs:complexType>
              </xs:element>
              <xs:element name="begin-block-indent" />
              <xs:element name="begin-paragraph">
                <xs:complexType>
                  <xs:attribute name="class" type="xs:string" use="required" />
                </xs:complexType>
              </xs:element>
              <xs:element name="begin-line">
                <xs:complexType>
                  <xs:attribute name="class" type="xs:string" use="optional" />
                </xs:complexType>
              </xs:element>
              <xs:element name="verse-num">
                <xs:complexType>
                  <xs:simpleContent>
                    <xs:extension base="xs:unsignedByte">
                      <xs:attribute name="begin-chapter" type="xs:unsignedByte" use="optional" />
                    </xs:extension>
                  </xs:simpleContent>
                </xs:complexType>
              </xs:element>
              <xs:element name="end-line">
                <xs:complexType>
                  <xs:attribute name="class" type="xs:string" use="optional" />
                </xs:complexType>
              </xs:element>
              <xs:element name="end-paragraph" />
              <xs:element name="end-block-indent" />
              <xs:element name="end-chapter" />
            </xs:choice>
          </xs:sequence>
          <xs:attribute name="unit-id" type="xs:unsignedInt" use="required" />
        </xs:complexType>
      </xs:element>

    WHAT I NEED IS THIS: I need the random text that is NOT surrounded by an XML node to be represented by an object, so that I know the order everything is in. I know this is complicated, so let me try to simplify it:

      <field name="test_field_0">
        Some text I'm sure you don't want.
        <subfield>Some text.</subfield>
        More text you don't want.
      </field>

    I need the XSD to generate a field object with items that can hold either a text object or a subfield object. I need to know where the random text is within the child nodes.
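
    XmlSerializer can actually preserve that ordering if the class collects text and elements in one array: an object[] member marked with [XmlText(typeof(string))] plus one [XmlElement] per child type is populated in document order, interleaving strings with typed children. A hand-written sketch of the simplified field example (names mirror the sample; you would either hand-edit the generated class to this shape or adjust the schema so xsd.exe produces it):

      using System.Xml.Serialization;

      public class Subfield
      {
          [XmlText]
          public string Value { get; set; }
      }

      [XmlRoot("field")]
      public class Field
      {
          [XmlAttribute("name")]
          public string Name { get; set; }

          // one array, filled in document order: strings for bare text,
          // Subfield objects for <subfield> elements
          [XmlText(typeof(string))]
          [XmlElement("subfield", typeof(Subfield))]
          public object[] Items { get; set; }
      }

    After deserializing, walking Items in order tells you exactly where each run of loose text sits relative to the child elements, which is the ordering information the separate Text array from xsd.exe throws away.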


  • OWB 11gR2 - Early Arriving Facts

    - by Dawei Sun
    A common challenge when building ETL components for a data warehouse is how to handle early arriving facts. OWB 11gR2 introduced a new feature to address this for dimensional objects, called Orphan Management. An orphan record is one that does not have a corresponding existing parent record. Orphan management automates the process of handling source rows that do not meet the requirements necessary to form a valid dimension or cube record.

    In this article, a simple example shows how to use Orphan Management in OWB. We first import a sample MDL file that contains all the objects we need, then take some time to examine the objects. After that, we prepare the source data and deploy the target tables and the dimension/cube loading maps. Finally, we run the loading maps and check the data in the target dimension and cube tables. OK, let's start...

    1. Import the MDL file and examine the sample project

    First, download the zip file from here, which includes an MDL file and three source data files. Then open OWB Design Center and import orphan_management.mdl using the menu File->Import->Warehouse Builder Metadata. Now we have several objects in the BI_DEMO project:

    - Mapping LOAD_CHANNELS_OM: the mapping for dimension loading.
    - Mapping LOAD_SALES_OM: the mapping for cube loading.
    - Dimension CHANNELS_OM: the dimension that contains channels data.
    - Cube SALES_OM: the cube that contains sales data.
    - Table CHANNELS_OM: the star implementation table of dimension CHANNELS_OM.
    - Table SALES_OM: the star implementation table of cube SALES_OM.
    - Table SRC_CHANNELS: the source table of channels data, which will be loaded into dimension CHANNELS_OM.
    - Tables SRC_ORDERS and SRC_ORDER_ITEMS: the source tables of sales data, which will be loaded into cube SALES_OM.
    - Sequence CLASS_OM_DIM_SEQ: the sequence used for loading dimension CHANNELS_OM.

    Dimension CHANNELS_OM

    This dimension has a hierarchy with three levels: TOTAL, CLASS and CHANNEL. Each level has three attributes: ID (surrogate key), NAME and SOURCE_ID (business key). It has a standard star implementation. The orphan management policy and the default parent setting are shown in the accompanying screenshots. The orphan management policy options that you can set for loading are:

    - Reject Orphan: the record is not inserted.
    - Default Parent: you can specify a default parent record, which is used as the parent for any record that does not have an existing parent. If the default parent record does not exist, Warehouse Builder creates it; you specify its attribute values when defining the dimensional object. If any ancestor of the default parent does not exist, Warehouse Builder creates that record as well.
    - No Maintenance: the default behavior; Warehouse Builder does not actively detect, reject, or fix orphan records.

    While removing data from a dimension, you can select one of the following orphan management policies:

    - Reject Removal: Warehouse Builder does not allow you to delete the record if it has existing child records.
    - No Maintenance: the default behavior; Warehouse Builder does not actively detect, reject, or fix orphan records.

    (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#insertedID1)

    Cube SALES_OM

    This cube references dimension CHANNELS_OM and has three measures: AMOUNT, QUANTITY and COST. The orphan management policy settings are shown in the accompanying screenshot. The options that you can set for loading are:

    - No Maintenance: Warehouse Builder does not actively detect, reject, or fix orphan rows.
    - Default Dimension Record: Warehouse Builder assigns a default dimension record for any row that has an invalid or null dimension key value. Use the Settings button to define the default parent row.
    - Reject Orphan: Warehouse Builder does not insert the row if it does not have an existing dimension record.

    (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#BABEACDG)

    Mapping LOAD_CHANNELS_OM

    This mapping loads source data from table SRC_CHANNELS into dimension CHANNELS_OM. The operator CHANNELS_IN is bound to table SRC_CHANNELS; CHANNELS_OUT is bound to dimension CHANNELS_OM. The TOTALS operator generates a constant value for the top level of the dimension. The CLASS_FILTER operator filters out the "invalid" class name, so we can see what happens when channel records with an "invalid" parent are loaded into the dimension. Some properties of the dimension operator in this mapping are important to orphan management:

    - Create Default Level Records: if YES, default level records are created. This property must be set to YES for dimensions and cubes if one of their orphan management policies is "Default Parent" or "Default Dimension Record". It is NO by default, so the user may need to set it to YES manually.
    - LOAD policy for INVALID keys / LOAD policy for NULL keys: these two properties have the same meaning as in the dimension editor. Their values are copied from the dimension when the user drops it into the mapping; the user does not need to modify them.
    - Record Error Rows: if YES, error rows are inserted into the error table when loading the dimension.
    - REMOVE Orphan Policy: used when removing data from a dimension. Since the dimension loading type is LOAD in this example, this property is disabled.

    Mapping LOAD_SALES_OM

    This mapping loads source data from tables SRC_ORDERS and SRC_ORDER_ITEMS into cube SALES_OM. The mapping looks a little complicated, but the operators in the red rectangle simply filter out and generate the records with "invalid" or "null" dimension keys. Some properties of the cube operator in a mapping are important to orphan management:

    - Enable Source Aggregation: should be checked in this example. If the default dimension record orphan policy is set for the cube operator, it is recommended that source aggregation also be enabled; otherwise the orphan management processing may produce multiple fact rows with the same default dimension references, causing an "unstable rowset" execution error in the database, since the dimension refs are used as the update match attributes when updating the fact table.
    - LOAD policy for INVALID keys / LOAD policy for NULL keys: these have the same meaning as in the cube editor. Their values are copied from the cube when the user drops it into the mapping; the user does not need to modify them.
    - Record Error Rows: if YES, error rows are inserted into the error table when loading the cube.

    2. Deploy objects and mappings

    We can now deploy the objects. First, make sure the location SALES_WH_LOCAL has been correctly configured. Then open Control Center Manager using the menu Tools->Control Center Manager, expand BI_DEMO->SALES_WH_LOCAL, and click the SALES_WH node in the project tree. Deploy all the objects in the following order:

    1. Sequence CLASS_OM_DIM_SEQ
    2. Tables CHANNELS_OM, SALES_OM, SRC_CHANNELS, SRC_ORDERS, SRC_ORDER_ITEMS
    3. Dimension CHANNELS_OM
    4. Cube SALES_OM
    5. Mappings LOAD_CHANNELS_OM, LOAD_SALES_OM

    Note that we deployed the source tables as well. Normally, we import source tables from a database instead of deploying them to the target schema; in this example, however, we designed the source tables in OWB and deployed them to the database for the purpose of this demonstration.

    3. Prepare and examine the source data

    Before running the mappings, we need to populate and examine the source data. Run SRC_CHANNELS.sql, SRC_ORDERS.sql and SRC_ORDER_ITEMS.sql as the target user, then check the data in these three tables.

    Table SRC_CHANNELS:

      SQL> select rownum, id, class, name from src_channels;

    Records 1-5 are correct; they should be loaded into the dimension without error. Records 6, 7 and 8 have null parents; they should be loaded into the dimension with a default parent value and inserted into the error table at the same time. Records 9, 10 and 11 have "invalid" parents; they should be rejected by the dimension and inserted into the error table.

    Tables SRC_ORDERS and SRC_ORDER_ITEMS:

      SQL> select rownum, a.id, a.channel, b.amount, b.quantity, b.cost
           from src_orders a, src_order_items b where a.id = b.order_id;

    Record 178 has a null dimension reference; it should be loaded into the cube with a default dimension reference and inserted into the error table at the same time. Record 179 has an "invalid" dimension reference; it should be rejected by the cube and inserted into the error table. The other records should be aggregated and loaded into the cube correctly.

    4. Run the mappings and examine the target data

    In Control Center Manager, expand BI_DEMO->SALES_WH_LOCAL->SALES_WH->Mappings, right-click the LOAD_CHANNELS_OM node, and click Start. Run mapping LOAD_SALES_OM the same way. When they finish successfully, we can check the data in the target tables.

    Table CHANNELS_OM:

      SQL> select rownum, total_id, total_name, total_source_id, class_id, class_name,
           class_source_id, channel_id, channel_name, channel_source_id
           from channels_om order by abs(dimension_key);

    Records 1, 2 and 3 are the default dimension records for the three levels. Records 8, 10 and 15 are the loaded records that originally had null parents; their parent name (class_name) is set to DEF_CLASS_NAME. The records whose CHANNEL_NAME is Special_4, Special_5 or Special_6 are not loaded into this table because of the invalid parent.

    Error table CHANNELS_OM_ERR:

      SQL> select rownum, class_source_id, channel_id, channel_name, channel_source_id,
           err$$$_error_reason from channels_om_err order by channel_name;

    All the records with a null or invalid parent are inserted into this error table. The error reason is "Default parent used for record" for the first three records, and "No parent found for record" for the last three.

    Table SALES_OM:

      SQL> select a.*, b.channel_name from sales_om a, channels_om b
           where a.channels = b.channel_id;

    The order record with a null channel_name has been loaded into the target table with a default channel_name. The one with an "invalid" channel_name is not loaded.

    Error table SALES_OM_ERR:

      SQL> select a.amount, a.cost, a.quantity, a.channels, b.channel_name,
           a.err$$$_error_reason from sales_om_err a, channels_om b
           where a.channels = b.channel_id(+);

    The order records with a null or invalid channel_name are inserted into the error table. If the dimension reference column is null, the error reason is "Default dimension record used for fact"; if it is invalid, the error reason is "Dimension record not found for fact".

    Summary

    In summary, this article illustrated the Orphan Management feature in OWB 11gR2. Automated orphan management policies improve ETL developer and administrator productivity by addressing an important cause of cube and dimension load failures, without requiring developers to explicitly build logic to handle these orphan rows.

    Read the article

  • boost fusion: strange problem depending on number of elements on a vector

    - by ChAoS
    I am trying to use Boost::Fusion (Boost v1.42.0) in a personal project. I get an interesting error with this code:

    #include "boost/fusion/include/sequence.hpp"
    #include "boost/fusion/include/make_vector.hpp"
    #include "boost/fusion/include/insert.hpp"
    #include "boost/fusion/include/invoke_procedure.hpp"
    #include "boost/fusion/include/make_vector.hpp"
    #include <iostream>

    class Class1 {
    public:
        typedef boost::fusion::vector<int,float,float,char,int,int> SequenceType;
        SequenceType s;
        Class1(SequenceType v):s(v){}
    };

    class Class2 {
    public:
        Class2(){}
        void met(int a, float b, float c, char d, int e, int f) {
            std::cout << a << " " << b << " " << c << " " << d << " " << e << std::endl;
        }
    };

    int main(int argn, char**) {
        Class2 p;
        Class1 t(boost::fusion::make_vector(9,7.66f,8.99f,'s',7,6));
        boost::fusion::begin(t.s); //OK
        boost::fusion::insert(t.s, boost::fusion::begin(t.s), &p); //OK
        boost::fusion::invoke_procedure(&Class2::met, boost::fusion::insert(t.s, boost::fusion::begin(t.s), &p)); //FAILS
    }

    It fails to compile (gcc 4.4.1):

    In file included from /home/thechaos/Escriptori/of_preRelease_v0061_linux_FAT/addons/ofxTableGestures/ext/boost/fusion/include/invoke_procedure.hpp:10, from problema concepte.cpp:11:
    /home/thechaos/Escriptori/of_preRelease_v0061_linux_FAT/addons/ofxTableGestures/ext/boost/fusion/functional/invocation/invoke_procedure.hpp: In function ‘void boost::fusion::invoke_procedure(Function, const Sequence&) [with Function = void (Class2::*)(int, float, float, char, int, int), Sequence = boost::fusion::joint_view<boost::fusion::joint_view<boost::fusion::iterator_range<boost::fusion::vector_iterator<const boost::fusion::vector<int, float, float, char, int, int, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_>, 0>, boost::fusion::vector_iterator<boost::fusion::vector<int, float, float, char, int, int, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_>, 0> >, const boost::fusion::single_view<Class2*> >, boost::fusion::iterator_range<boost::fusion::vector_iterator<boost::fusion::vector<int, float, float, char, int, int, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_>, 0>, boost::fusion::vector_iterator<const boost::fusion::vector<int, float, float, char, int, int, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_>, 6> > >]’:
    problema concepte.cpp:39: instantiated from here
    /home/thechaos/Escriptori/of_preRelease_v0061_linux_FAT/addons/ofxTableGestures/ext/boost/fusion/functional/invocation/invoke_procedure.hpp:88: error: incomplete type ‘boost::fusion::detail::invoke_procedure_impl<void (Class2::*)(int, float, float, char, int, int), const boost::fusion::joint_view<boost::fusion::joint_view<boost::fusion::iterator_range<boost::fusion::vector_iterator<const boost::fusion::vector<int, float, float, char, int, int, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_>, 0>, boost::fusion::vector_iterator<boost::fusion::vector<int, float, float, char, int, int, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_>, 0> >, const boost::fusion::single_view<Class2*> >, boost::fusion::iterator_range<boost::fusion::vector_iterator<boost::fusion::vector<int, float, float, char, int, int, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_>, 0>, boost::fusion::vector_iterator<const boost::fusion::vector<int, float, float, char, int, int, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_>, 6> > >, 7, true, false>’ used in nested name specifier

    However, if I change the number of arguments in the vectors and the method from 6 to 5 (from int,float,float,char,int,int to int,float,float,char,int), I can compile it without problems. I suspected the maximum number of arguments was the limitation, but I tried to change it by defining FUSION_MAX_VECTOR_SIZE, without success. I am unable to see what I am doing wrong. Can you reproduce this? Can it be a boost bug (I doubt it, but it is not impossible)?

    Read the article

  • Add Service Reference is generating Message Contracts

    - by JohnIdol
    OK, this has been haunting me for a while; I can't find much on Google and I am starting to lose hope, so I am reverting to the SO community. When I import a given service using "Add Service Reference" in Visual Studio 2008 (SP1), all the Request/Response messages are being unnecessarily wrapped into Message Contracts (named as "operationName" + "Request"/"Response" + "1" at the end). The code generator says:

    // CODEGEN: Generating message contract since the operation XXX is neither RPC nor document wrapped.

    The guys who are generating the wsdl from a Java service say they are specifying DOCUMENT-LITERAL/WRAPPED. Any help/pointer/clue would be highly appreciated.

    Update: this is a sample of my wsdl for one of the operations that looks suspicious. Note the mismatch on the message element attribute for the request, compared to the response.

    <!-- imports namespaces and defines elements -->
    <wsdl:types>
      <xsd:schema targetNamespace="http://WHATEVER/" xmlns:xsd_1="http://WHATEVER_1/" xmlns:xsd_2="http://WHATEVER_2/">
        <xsd:import namespace="http://WHATEVER_1/" schemaLocation="WHATEVER_1.xsd"/>
        <xsd:import namespace="http://WHATEVER_2/" schemaLocation="WHATEVER_2.xsd"/>
        <xsd:element name="myOperationResponse" type="xsd_1:MyOperationResponse"/>
        <xsd:element name="myOperation" type="xsd_1:MyOperationRequest"/>
      </xsd:schema>
    </wsdl:types>
    <!-- declares messages - NOTE the mismatch on the request element attribute compared to the response -->
    <wsdl:message name="myOperationRequest">
      <wsdl:part element="tns:myOperation" name="request"/>
    </wsdl:message>
    <wsdl:message name="myOperationResponse">
      <wsdl:part element="tns:myOperationResponse" name="response"/>
    </wsdl:message>
    <!-- operations -->
    <wsdl:portType name="MyService">
      <wsdl:operation name="myOperation">
        <wsdl:input message="tns:myOperationRequest"/>
        <wsdl:output message="tns:myOperationResponse"/>
        <wsdl:fault message="tns:myOperationFault" name="myOperationFault"/>
        <wsdl:fault message="tns:myOperationFault1" name="myOperationFault1"/>
      </wsdl:operation>
    </wsdl:portType>

    Update 2: I pulled all the types that I had in my imported namespace (they were in a separate xsd) into the wsdl, as I suspected the import could be triggering the message contract generation. To my surprise that was not the case, and having all the types defined in the wsdl did not change anything. I then (out of desperation) started constructing wsdls from scratch, and by playing with the maxOccurs attributes of elements contained in a sequence I was able to reproduce the undesired message contract generation behavior. Here's a sample of an element:

    <xsd:element name="myElement">
      <xsd:complexType>
        <xsd:sequence>
          <xsd:element minOccurs="0" maxOccurs="1" name="arg1" type="xsd:string"/>
        </xsd:sequence>
      </xsd:complexType>
    </xsd:element>

    Playing with maxOccurs on elements that are used as messages (all requests and responses, basically), the following happens:

    maxOccurs = "1" does not trigger the wrapping
    maxOccurs > 1 triggers the wrapping
    maxOccurs = "unbounded" triggers the wrapping

    I was not able to reproduce this on my production wsdl yet because the nesting of the types goes very deep, and it's gonna take me time to inspect it thoroughly. In the meanwhile I am hoping it might ring a bell - any help highly appreciated.
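    For orientation, here is roughly the shape of the mismatch the CODEGEN comment describes - illustrative only, not exact generated code, with names just following the "operationName" + "Request"/"Response" + "1" pattern mentioned above:

    // What the proxy looks like when document/literal wrapped is NOT recognized:
    [MessageContract(WrapperName = "myOperation", IsWrapped = true)]
    public class myOperationRequest1
    {
        [MessageBodyMember]
        public string arg1;
    }
    // ... myOperationResponse1 generated analogously ...

    [OperationContract]
    myOperationResponse1 myOperation(myOperationRequest1 request);

    // versus the flat signature produced when the wrapped style IS recognized:
    [OperationContract]
    string myOperation(string arg1);

    Whenever the importer decides an operation is "neither RPC nor document wrapped" - as in the maxOccurs cases listed above - you end up with the first form.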

    Read the article

  • Point in polygon OR point on polygon using LINQ

    - by wageoghe
    As noted in an earlier question, How to Zip enumerable with itself, I am working on some math algorithms based on lists of points. I am currently working on point in polygon. I have the code for how to do that and have found several good references here on SO, such as this link Hit test. So, I can figure out whether or not a point is in a polygon. As part of determining that, I want to determine if the point is actually on the polygon. This I can also do. If I can do all of that, what is my question, you might ask? Can I do it efficiently using LINQ? I can already do something like the following (assuming a Pairwise extension method as described in my earlier question as well as in links to which my question/answers link, and assuming a Position type that has X and Y members). I have not tested much, so the lambda might not be 100% correct. Also, it does not take very small differences into account.

    public static PointInPolygonLocation PointInPolygon(IEnumerable<Position> pts, Position pt)
    {
        int numIntersections = pts.Pairwise(
            (p1, p2) =>
            {
                if (p1.Y != p2.Y)
                {
                    if ((p1.Y >= pt.Y && p2.Y < pt.Y) || (p1.Y < pt.Y && p2.Y >= pt.Y))
                    {
                        if (p1.X < pt.X && p2.X < pt.X)
                        {
                            return 1;
                        }
                        if (p1.X < pt.X || p2.X < pt.X)
                        {
                            if ((((pt.Y - p1.Y) * ((p1.X - p2.X) / (p1.Y - p2.Y))) + p1.X) < pt.X)
                            {
                                return 1;
                            }
                        }
                    }
                }
                return 0;
            }).Sum();

        if (numIntersections % 2 == 0)
        {
            return PointInPolygonLocation.Outside;
        }
        else
        {
            return PointInPolygonLocation.Inside;
        }
    }

    This function, PointInPolygon, takes the input Position, pt, iterates over the input sequence of position values, and uses the Jordan Curve method to determine how many times a ray extended from pt to the left intersects the polygon. The lambda expression will yield, into the "zipped" list, 1 for every segment that is crossed, and 0 for the rest. The sum of these values determines if pt is inside or outside of the polygon (odd == inside, even == outside). So far, so good. Now, for any consecutive pair of position values in the sequence (i.e. in any execution of the lambda), we can also determine if pt is ON the segment p1, p2. If that is the case, we can stop the calculation because we have our answer. Ultimately, my question is this: Can I perform this calculation (maybe using Aggregate?) such that we will only iterate over the sequence no more than 1 time AND can we stop the iteration if we encounter a segment that pt is ON? In other words, if pt is ON the very first segment, there is no need to examine the rest of the segments because we have the answer. It might very well be that this operation (particularly the requirement/desire to possibly stop the iteration early) does not really lend itself well to the LINQ approach. It just occurred to me that maybe the lambda expression could yield a tuple, the intersection value (1 or 0 or maybe true or false) and the "on" value (true or false). Maybe then I could use TakeWhile(anontype.PointOnPolygon == false). If I Sum the tuples and if ON == 1, then the point is ON the polygon. Otherwise, the oddness or evenness of the sum of the other part of the tuple tells if the point is inside or outside.
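    For what it is worth, one way to get both goals - a single pass and an early stop - is to lean on LINQ's deferred execution: project each pair to a small result and stop enumerating the moment an "on polygon" pair shows up. A rough sketch only, assuming the same Pairwise extension, a hypothetical OnPolygon enum member, and hypothetical IsOnSegment/CountsAsCrossing helpers that encapsulate the tests from the lambda above:

    public static PointInPolygonLocation LocatePoint(IEnumerable<Position> pts, Position pt)
    {
        int crossings = 0;

        // Pairwise is lazy: pairs are produced one at a time as the loop pulls them.
        var edges = pts.Pairwise((p1, p2) => new
        {
            On = IsOnSegment(p1, p2, pt),            // is pt on the segment p1-p2?
            Crossing = CountsAsCrossing(p1, p2, pt)  // does the leftward ray cross p1-p2?
        });

        foreach (var edge in edges)
        {
            if (edge.On)
                return PointInPolygonLocation.OnPolygon; // early exit: the rest is never evaluated
            if (edge.Crossing)
                crossings++;
        }

        return crossings % 2 == 0
            ? PointInPolygonLocation.Outside
            : PointInPolygonLocation.Inside;
    }

    Aggregate cannot short-circuit, but because the pairwise projection is deferred, breaking out of the enumeration (or wrapping it in TakeWhile as suggested) means the sequence is walked at most once and no further than the first "on" hit.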

    Read the article

  • Question about how to use strong typed dataset in N-tier application for .NET

    - by sb
    Hello All, I need some expert advice on strongly typed data sets in ADO.NET that are generated by Visual Studio. Here are the details. Thank you in advance. I want to write an N-tier application where the Presentation layer is in C#/Windows Forms, the Business Layer is a web service, and the Data Access Layer is a SQL db. So, I used Visual Studio 2005 for this and created 3 projects in a solution. Project 1 is the Data Access Layer. In this I have used the Visual Studio data set generator to create a strongly typed data set and table adapter (to test, I created this on the Customers table in Northwind). The data set is called NorthWindDataSet and the table inside is CustomersTable. Project 2 has the web service, which exposes only 1 method, GetCustomersDataSet. This uses the project 1 library's table adapter to fill the data set and return it to the caller. To be able to use the NorthWindDataSet and table adapter, I added a reference to project 1. Project 3 is a WinForms app, and this uses the web service as a reference and calls that service to get the data set. In the process of building this application, in the PL, I added a reference to the DataSet generated above in project 1, and in the form's load I call the web service and assign the received DataSet from the web service to this dataset. But I get the error: "Cannot implicitly convert type 'PL.WebServiceLayerReference.NorthwindDataSet' to 'BL.NorthwindDataSet' e:\My Documents\Visual Studio 2008\Projects\DataSetWebServiceExample\PL\Form1.cs". Both data sets are the same, but because I added references from different locations, I am getting the above error I think. So, what I did was add a reference to project 1 (which defines the data set) to project 3 (the UI) and used the web service to get the DataSet and assign it to the right type; now when project 3 (which has the web form) runs, I get the runtime exception below.

    "System.InvalidOperationException: There is an error in XML document (1, 5058). --- System.Xml.Schema.XmlSchemaException: Multiple definition of element 'http://tempuri.org/NorthwindDataSet.xsd:Customers' causes the content model to become ambiguous. A content model must be formed such that during validation of an element information item sequence, the particle contained directly, indirectly or implicitly therein with which to attempt to validate each item in the sequence in turn can be uniquely determined without examining the content or attributes of that item, and without any information about the items in the remainder of the sequence."

    I think this might be because of some cross-referencing errors. My question is: is there a way to use the Visual Studio generated DataSets in such a way that I can use the same DataSet in all layers (for reuse) but separate the Table Adapter logic into the Data Access Layer, so that the front end is abstracted from all this by the web service? If I have to hand-write the code I lose the goodness the data set generator gives, and also if there are columns added later I need to add them by hand etc., so I want to use the Visual Studio wizard as much as possible. Thanks for any help on this. sb
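    For reference, the sharing the question aims at looks roughly like this on the web service side - a sketch only, with hypothetical adapter/namespace names following the default typed-dataset conventions, and assuming the project 1 assembly is the one place NorthWindDataSet is defined:

    // Inside the asmx service class in project 2, which references project 1.
    [WebMethod]
    public NorthWindDataSet GetCustomersDataSet()
    {
        var ds = new NorthWindDataSet();
        // Typed table adapter generated alongside the data set in project 1.
        var adapter = new NorthWindDataSetTableAdapters.CustomersTableAdapter();
        adapter.Fill(ds.CustomersTable);
        return ds;
    }

    The catch the question runs into is that "Add Web Reference" regenerates its own NorthwindDataSet proxy type from the service's schema, so the client ends up with two distinct types and two copies of the schema - which looks consistent with both the compile error and the "Multiple definition of element" exception quoted above.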

    Read the article

  • SQLAlchemy session management in long-running process

    - by codeape
    Scenario: A .NET-based application server (Wonderware IAS/System Platform) hosts automation objects that communicate with various equipment on the factory floor. CPython is hosted inside this application server (using Python for .NET). The automation objects have scripting functionality built in (using a custom, .NET-based language). These scripts call Python functions. The Python functions are part of a system to track Work-In-Progress on the factory floor. The purpose of the system is to track the produced widgets along the process, ensure that the widgets go through the process in the correct order, and check that certain conditions are met along the process. The widget production history and widget state is stored in a relational database; this is where SQLAlchemy plays its part. For example, when a widget passes a scanner, the automation software triggers the following script (written in the application server's custom scripting language):

    ' widget_id and scanner_id provided by automation object
    ' ExecFunction() takes care of calling a CPython function
    retval = ExecFunction("WidgetScanned", widget_id, scanner_id);
    ' if the python function raises an Exception, ErrorOccured will be true
    ' in this case, any errors should cause the production line to stop.
    if (retval.ErrorOccured) then
        ProductionLine.Running = False;
        InformationBoard.DisplayText = "ERROR: " + retval.Exception.Message;
        InformationBoard.SoundAlarm = True
    end if;

    The script calls the WidgetScanned python function:

    # pywip/functions.py
    from pywip.database import session
    from pywip.model import Widget, WidgetHistoryItem
    from pywip import validation, StatusMessage
    from datetime import datetime

    def WidgetScanned(widget_id, scanner_id):
        widget = session.query(Widget).get(widget_id)
        validation.validate_widget_passed_scanner(widget, scanner) # raises exception on error
        widget.history.append(WidgetHistoryItem(timestamp=datetime.now(), action=u"SCANNED", scanner_id=scanner_id))
        widget.last_scanner = scanner_id
        widget.last_update = datetime.now()
        return StatusMessage("OK")
    # ... there are a dozen similar functions

    My question is: How do I best manage SQLAlchemy sessions in this scenario? The application server is a long-running process, typically running months between restarts. The application server is single-threaded. Currently, I do it the following way: I apply a decorator to the functions I make available to the application server:

    # pywip/iasfunctions.py
    from pywip import functions

    def ias_session_handling(func):
        def _ias_session_handling(*args, **kwargs):
            try:
                retval = func(*args, **kwargs)
                session.commit()
                return retval
            except:
                session.rollback()
                raise
        return _ias_session_handling

    # ... actually I populate this module with decorated versions of all the functions in pywip.functions dynamically
    WidgetScanned = ias_session_handling(functions.WidgetScanned)

    Question: Is the decorator above suitable for handling sessions in a long-running process? Should I call session.remove()? The SQLAlchemy session object is a scoped session:

    # pywip/database.py
    from sqlalchemy.orm import scoped_session, sessionmaker
    session = scoped_session(sessionmaker())

    I want to keep the session management out of the basic functions. For two reasons: There is another family of functions, sequence functions. The sequence functions call several of the basic functions. One sequence function should equal one database transaction. I need to be able to use the library from other environments. a) From a TurboGears web application.
In that case, session management is done by TurboGears. b) From an IPython shell. In that case, commit/rollback will be explicit. (I am truly sorry for the long question. But I felt I needed to explain the scenario. Perhaps not necessary?)

    Read the article

  • JAXB: Unmarshalling does not always populate certain classes?

    - by user278458
    Hello, I have a JAXB class generation problem I was hoping to get some help with. Here's the part of the XML schema that is the source of my problem... Code:

    <xs:complexType name="IDType">
      <xs:choice minOccurs="0" maxOccurs="2">
        <xs:element name="DriversLicense" minOccurs="0" maxOccurs="1" type="an..35" />
        <xs:element name="SSN" minOccurs="0" maxOccurs="1" type="an..35" />
        <xs:element name="CompanyID" minOccurs="0" maxOccurs="1" type="an..35" />
      </xs:choice>
    </xs:complexType>
    <xs:simpleType name="an..35">
      <xs:restriction base="an">
        <xs:maxLength value="35" />
      </xs:restriction>
    </xs:simpleType>
    <xs:simpleType name="an">
      <xs:restriction base="xs:string">
        <xs:pattern value="[ !-~]*" />
      </xs:restriction>
    </xs:simpleType>

    Now this will generate JAXBElement types due to the "choice" with a maxOccurs > 1. I want to avoid those, so I did that by modifying the code to use a "Wrapper" element and moving the maxOccurs up to a sequence tag as follows... Code:

    <xs:complexType name="IDType">
      <xs:sequence maxOccurs="2">
        <xs:element name="Wrapper">
          <xs:complexType>
            <xs:choice>
              <xs:element name="DriversLicense" minOccurs="0" maxOccurs="1" type="an..35" />
              <xs:element name="SSN" minOccurs="0" maxOccurs="1" type="an..35" />
              <xs:element name="CompanyID" minOccurs="0" maxOccurs="1" type="an..35" />
            </xs:choice>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
    <xs:simpleType name="an..35">
      <xs:restriction base="an">
        <xs:maxLength value="35" />
      </xs:restriction>
    </xs:simpleType>
    <xs:simpleType name="an">
      <xs:restriction base="xs:string">
        <xs:pattern value="[ !-~]*" />
      </xs:restriction>
    </xs:simpleType>

    For class generation it looks like it works great - the JAXBElement is replaced with a list of wrappers as String (i.e. List) and compiles fine. However, when I unmarshall the actual XML data into the generated classes, the data in the wrapper class is not populated - yet JAXB does not throw an exception. My question is: Do I need to change the schema a different way to make this work? Or is there something I can add/change/delete in the generated code or annotations? Appreciate any help you can offer! Thanks.

    Read the article

  • multiple inner joins 3 or more crashes mysql server 5.1.30 opensolaris

    - by user331849
    When doing a simple query on 4 inner-joined tables, the server crashes with the output below appearing in the mysql .err file. E.g.:

    select * from table1
    inner join table2 on table1.a = table2.a and table1.b = table2.b
    inner join table3 on table2.a = table3.a and table2.c = table3.c
    inner join table4 on table3.a = table4.a and table3.d = table4.d

    If I remove one of the tables, it executes fine. Likewise if I remove a different table, it executes fine. Though all tables have been checked anyway, this would suggest that it is not a problem specifically with one of the tables.

    mysql.err trace:

    100503 18:13:19 - mysqld got signal 11 ;
    This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail.
    key_buffer_size=1572864000
    read_buffer_size=2097152
    max_used_connections=11
    max_threads=151
    threads_connected=10
    It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 2155437 K bytes of memory
    Hope that's ok; if not, decrease some variables in the equation.
    thd: 0x72febda8
    Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
    stack_bottom = fe07efb0 thread_stack 0x40000
    Trying to get some variables. Some pointers may be invalid and cause the dump to abort...
    thd->query at be1021f0 = explain select * from business inner join timetable on business.id = timetable.business_id inner join timetableentry on timetable.business_id = timetableentry.business_id and timetable.kid = timetableentry.parent inner join staff on timetable.business_id = staff.business_id and timetable.staff_person = staff.kid where business.id = '3050bb04fda41df64a9c1c149150026c'
    thd->thread_id=9
    thd->killed=NOT_KILLED
    The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash.
    100503 18:13:19 mysqld_safe mysqld restarted
    100503 18:13:20 InnoDB: Failed to set DIRECTIO_ON on file ./ibdata1: OPEN: Inappropriate ioctl for device, continuing anyway
    100503 18:13:20 InnoDB: Failed to set DIRECTIO_ON on file ./ibdata1: OPEN: Inappropriate ioctl for device, continuing anyway
    InnoDB: The log sequence number in ibdata files does not match
    InnoDB: the log sequence number in the ib_logfiles!
    100503 18:13:20 InnoDB: Database was not shut down normally!
    InnoDB: Starting crash recovery.
    InnoDB: Reading tablespace information from the .ibd files...
    InnoDB: Restoring possible half-written data pages from the doublewrite
    InnoDB: buffer...
    InnoDB: Last MySQL binlog file position 0 2731, file name ./mysql-bin.000093
    100503 18:13:20 InnoDB: Started; log sequence number 0 2650338426
    100503 18:13:20 [Note] Recovering after a crash using mysql-bin
    100503 18:13:20 [Note] Starting crash recovery...
    100503 18:13:20 [Note] Crash recovery finished.

    This is on opensolaris SunOS 5.11 snv_111b i86pc i386 i86pc, MySQL 5.1.30. Here is a snippet from the my.cnf file:

    key_buffer = 1500M
    max_allowed_packet = 1M
    thread_stack = 256K
    thread_cache_size = 8
    sort_buffer_size = 2M
    read_buffer_size = 2M
    read_rnd_buffer_size = 8M
    table_cache = 512
    tmp_table_size = 400M
    max_heap_table_size = 64M
    query_cache_limit = 20M
    query_cache_size = 200M

    Is this a bug or a configuration issue?
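    As a side note, the memory estimate in the trace checks out arithmetically against the configuration above: key_buffer_size = 1572864000 bytes ≈ 1,536,000 K, plus (read_buffer_size + sort_buffer_size) × max_threads = (2048 K + 2048 K) × 151 ≈ 618,496 K, gives roughly 2,154,496 K - essentially the 2155437 K mysqld printed. So that line is just the sum of the configured buffers, not in itself evidence of what caused the signal 11.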

    Read the article

  • Getting text position while parsing pdf with Quartz 2D

    - by Koteg
    Hi guys, another question regarding pdf parsing... I just read PDF Reference version 1.7, "5.3.1 Text-Positioning Operators", and I am a little bit confused. I wrote some code to get the transformation matrix and initial text position.

    CGPDFOperatorTableSetCallback (table, "MP", &op_MP); //Define marked-content point
    CGPDFOperatorTableSetCallback (table, "DP", &op_DP); //Define marked-content point with property list
    CGPDFOperatorTableSetCallback (table, "BMC", &op_BMC); //Begin marked-content sequence
    CGPDFOperatorTableSetCallback (table, "BDC", &op_BDC); //Begin marked-content sequence with property list
    CGPDFOperatorTableSetCallback (table, "EMC", &op_EMC); //End marked-content sequence

    //Text state operators
    CGPDFOperatorTableSetCallback(table, "Tc", &op_Tc);
    CGPDFOperatorTableSetCallback(table, "Tw", &op_Tw);
    CGPDFOperatorTableSetCallback(table, "Tz", &op_Tz);
    CGPDFOperatorTableSetCallback(table, "TL", &op_TL);
    CGPDFOperatorTableSetCallback(table, "Tf", &op_Tf);
    CGPDFOperatorTableSetCallback(table, "Tr", &op_Tr);
    CGPDFOperatorTableSetCallback(table, "Ts", &op_Ts);

    //Text showing operators
    CGPDFOperatorTableSetCallback(table, "TJ", &op_TJ);
    CGPDFOperatorTableSetCallback(table, "Tj", &op_Tj);
    CGPDFOperatorTableSetCallback(table, "'", &op_apostrof);
    CGPDFOperatorTableSetCallback(table, "\"", &op_double_apostrof);

    //Text positioning operators
    CGPDFOperatorTableSetCallback(table, "Td", &op_Td);
    CGPDFOperatorTableSetCallback(table, "TD", &op_TD);
    CGPDFOperatorTableSetCallback(table, "Tm", &op_Tm);
    CGPDFOperatorTableSetCallback(table, "T*", &op_T);

    //Text object operators
    CGPDFOperatorTableSetCallback(table, "BT", &op_BT); //Begin text object
    CGPDFOperatorTableSetCallback(table, "ET", &op_ET); //End text object

    So this is the output after application launch:

    2010-09-02 15:09:23.041 testSearch[8251:207] op_BT begin Integer value: 0
    2010-09-02 15:09:23.043 testSearch[8251:207] op_BT end
    2010-09-02 15:09:23.043 testSearch[8251:207] op_Tf begin Integer value: 1
    2010-09-02 15:09:23.044 testSearch[8251:207] op_Tf end
    2010-09-02 15:09:23.044 testSearch[8251:207] op_Tm begin Float value: 557.364197
    2010-09-02 15:09:23.045 testSearch[8251:207] op_Tm end
    2010-09-02 15:09:23.045 testSearch[8251:207] op_TJ begin
    2010-09-02 15:09:23.046 testSearch[8251:207] Array string value [0]: F
    2010-09-02 15:09:23.046 testSearch[8251:207] Array integer value [1]: 94985208
    2010-09-02 15:09:23.047 testSearch[8251:207] Array string value [2]: r
    2010-09-02 15:09:23.047 testSearch[8251:207] Array integer value [3]: 94985208
    2010-09-02 15:09:23.048 testSearch[8251:207] Array string value [4]: o
    2010-09-02 15:09:23.048 testSearch[8251:207] Array integer value [5]: 94985208
    2010-09-02 15:09:23.049 testSearch[8251:207] Array string value [6]: m s
    2010-09-02 15:09:23.049 testSearch[8251:207] Array integer value [7]: 94985208
    2010-09-02 15:09:23.049 testSearch[8251:207] Array string value [8]: a
    2010-09-02 15:09:23.050 testSearch[8251:207] Array integer value [9]: 94985208
    2010-09-02 15:09:23.050 testSearch[8251:207] Array string value [10]: m
    2010-09-02 15:09:23.051 testSearch[8251:207] Array integer value [11]: 94985208
    2010-09-02 15:09:23.051 testSearch[8251:207] Array string value [12]: p
    2010-09-02 15:09:23.052 testSearch[8251:207] Array integer value [13]: 94985208
    2010-09-02 15:09:23.053 testSearch[8251:207] Array string value [14]: l
    2010-09-02 15:09:23.054 testSearch[8251:207] Array integer value [15]: 94985208
    2010-09-02 15:09:23.055 testSearch[8251:207] Array string value [16]: e t
    2010-09-02 15:09:23.055 testSearch[8251:207] Array integer value [17]: 94985208
    2010-09-02 15:09:23.057 testSearch[8251:207] Array string value [18]: o r
    2010-09-02 15:09:23.057 testSearch[8251:207] Array integer value [19]: 94985208
    2010-09-02 15:09:23.058 testSearch[8251:207] Array string value [20]: e
    2010-09-02 15:09:23.058 testSearch[8251:207] Array integer value [21]: 94985208
    2010-09-02 15:09:23.059 testSearch[8251:207] Array string value [22]: s
    2010-09-02 15:09:23.059 testSearch[8251:207] Array integer value [23]: 94985208
    2010-09-02 15:09:23.060 testSearch[8251:207] Array string value [24]: u
    2010-09-02 15:09:23.061 testSearch[8251:207] Array integer value [25]: 94985208
    2010-09-02 15:09:23.061 testSearch[8251:207] Array string value [26]: l
    2010-09-02 15:09:23.062 testSearch[8251:207] Array integer value [27]: 94985208
    2010-09-02 15:09:23.062 testSearch[8251:207] Array string value [28]: t
    2010-09-02 15:09:23.063 testSearch[8251:207] op_TJ end

    If someone is familiar with the text matrix and text positioning operators, it would be nice to explain how all those things work. How do you calculate the text position (or glyph position?) using Tm (the transformation matrix and other data)?
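    Regarding the last question above, the core math from section 5.3.1 of the spec is compact enough to sketch. Tm takes six operands a b c d e f (in content-stream order) and sets the text matrix; a point (x, y) in text space then maps to user space as [x' y' 1] = [x y 1] × [a b 0; c d 0; e f 1]. A minimal sketch of that mapping, written here in C# only because the math is language-neutral (the helper name is mine, not part of the Quartz API):

    static (double X, double Y) ApplyTextMatrix(
        double a, double b, double c, double d, double e, double f,
        double x, double y)
    {
        // [x' y' 1] = [x y 1] · [a b 0; c d 0; e f 1]
        return (a * x + c * y + e, b * x + d * y + f);
    }

    Right after op_Tm fires, the pen sits at the text-space origin, so its user-space position is ApplyTextMatrix(a, b, c, d, e, f, 0, 0) == (e, f), still before the page's CTM is applied. Td with operands (tx, ty) then updates the line matrix by prepending a translation by (tx, ty), and within a run the position advances by each glyph's displacement.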

    Read the article

  • Implementing Release Notes in TFS Team Build 2010

    - by Jakob Ehn
    In TFS Team Build (all versions), each build is associated with changesets and work items. To determine which changesets should be associated with the current build, Team Build finds the label of the "Last Good Build" and then aggregates all changesets up until the label for the current build. Basically this means that if your build is failing, every changeset that is checked in will be accumulated in this list until the build is successful. All well, but there is a dimension missing here regarding releases. Often you run several release builds until you actually deploy the result of a build to a test or production system. When you do this, wouldn’t it be nice to be able to send the customer a nice release note that contains all work items and changesets since the previously deployed version?

    At our company, we have developed a Release Repository, which basically is a simple web site with a SQL database as storage. Every time we run a Release Build, the resulting installers, zip files, SQL scripts etc. get pushed into the release repository together with the relevant build information. This information contains things such as start time, who triggered the build etc. Also, it contains the associated changesets and work items. When deploying the MSIs for a new version, we mark the build as Deployed in the release repository. The deployed status is stored in the release repository database, but it could also have been implemented by setting the Build Quality for that build to Deployed. When generating the release notes, the web site simply runs through each release build back to the previous build that was marked as Deployed, and aggregates the work items and changesets. Here is a sample screenshot of how this looks for a sample build/application. The web site is available both for us and also for the customers and testers, which means that they can easily get the latest version of a particular application and at the same time see what changes are included in this version.

    There is a lot going on in the Release Build Process that drives this in our TFS 2010 server, but in this post I will show how you can access and read the changeset and work item information in a custom activity. Since Team Build associates changesets and work items for each build, this information is (partially) available inside the build process template. The Associate Changesets and Work Items for non-Shelveset Builds activity (located inside the Try Compile, Test, and Associate Changesets and Work Items activity) defines and populates a variable called associatedWorkItems. You can see that this variable is an IList containing instances of the Changeset class (from the Microsoft.TeamFoundation.VersionControl.Client namespace). Now, if you want to access this variable later on in the build process template, you need to declare a new variable in the corresponding scope and then assign the value to this variable. In this sample, I declared a variable called assocChangesets in the RunAgent sequence, which basically covers the whole compile, test and drop part of the build process.

    Now, you need to assign the value from AssociatedChangesets to this variable. This is done using the Assign workflow activity. Now you can add a custom activity anywhere inside the RunAgent sequence and use this variable. NB: Of course your activity must be placed somewhere after the variable has been populated.
    To finish off, here is a code snippet that shows how you can read the changeset and work item information from the variable. First you add an InArgument on your activity through which you can pass in the variable that we defined:

    [RequiredArgument]
    public InArgument<IList<Changeset>> AssociatedChangesets { get; set; }

    Then you can traverse all the changesets in the list, and for each changeset use the WorkItems property to get the work items that were associated in that changeset:

    foreach (Changeset ch in associatedChangesets)
    {
        // Add change
        theChangesets.Add(
            new AssociatedChangeset(ch.ChangesetId,
                                    ch.ArtifactUri,
                                    ch.Committer,
                                    ch.Comment,
                                    ch.ChangesetId));

        foreach (var wi in ch.WorkItems)
        {
            theWorkItems.Add(
                new AssociatedWorkItem(wi["System.AssignedTo"].ToString(),
                                       wi.Id,
                                       wi["System.State"].ToString(),
                                       wi.Title,
                                       wi.Type.Name,
                                       wi.Id,
                                       wi.Uri));
        }
    }

    NB: AssociatedChangeset and AssociatedWorkItem are custom classes that we use internally for storing this information that is eventually pushed to the release repository.
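    If it helps, the surrounding activity shell might look like this minimal sketch - a plain WF4 CodeActivity with the argument pulled out of the workflow context before the loop (the class name is hypothetical, not from the original post):

    using System.Activities;
    using System.Collections.Generic;
    using Microsoft.TeamFoundation.VersionControl.Client;

    public sealed class CollectReleaseNotes : CodeActivity
    {
        [RequiredArgument]
        public InArgument<IList<Changeset>> AssociatedChangesets { get; set; }

        protected override void Execute(CodeActivityContext context)
        {
            // Materialize the workflow argument into a local before iterating.
            IList<Changeset> associatedChangesets = AssociatedChangesets.Get(context);

            // ... the foreach over changesets and work items shown above goes here ...
        }
    }

    In the build process template, the assocChangesets variable is then bound to the AssociatedChangesets argument in the activity's property grid.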

    Read the article

  • The Incremental Architect’s Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx The functionality of programs is entered via Entry Points. So what we´re talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases: Analyzing the requirements to find the Entry Points and their signatures Designing the functionality to be executed when those Entry Points get triggered Implementing the functionality according to the design aka coding I presume, you´re familiar with phase 1 in some way. And I guess you´re proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. “Designing functionality? What´s that supposed to mean?” you might already have thought. Here´s my definition: To design functionality (or functional design for short) means thinking about… well, functions. You find a solution for what´s supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution that is, because those functions only exist in your head (or on paper) during this phase. But you may have guess that, because it´s “design” not “coding”. And here is, what functional design is not: It´s not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). Also I consider calling external APIs as logic. It´s equally basic. It´s what code needs to do in order to deliver some functionality or quality. Logic is what´s doing that needs to be done by software. Transformations are either done through expressions or API-calls. And then there is alternative control flow depending on the result of some expression. Basically it´s just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do). But calling your own function is not logic. It´s not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn´t it. What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher unterstandability, easier to keep code consistent). That´s no small feat, however. Evolvable code can hardly be overestimated. That´s why to me functional design is so important. It´s at the core of software development. To sum this up: Functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design an logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need back out through refactoring. Functional design on the other hand is bloodless without actual code. It´s just a theory with no experiments to prove it. But how to do functional design? An example of functional design Let´s assume a program to de-duplicate strings. 
The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a. And the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line C:\>deduplicate "a, b, a, c, d, b, e, c, a" a, b, c, d, e …or by clicking on a GUI button. This leads to the Entry Point function to get called. It´s the program´s main function in case of the batch version or a button click event handler in the GUI version. That´s the physical Entry Point so to speak. It´s inevitable. What then happens is a three step process: Transform the input data from the user into a request. Call the request handler. Transform the output of the request handler into a tangible result for the user. Or to phrase it a bit more generally: Accept input. Transform input into output. Present output. This does not mean any of these steps requires a lot of effort. Maybe it´s just one line of code to accomplish it. Nevertheless it´s a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP). Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it´s still on a meta-level. The application to the domain at hand is easy, though: Accept string list from command line De-duplicate Present de-duplicated strings on standard output And this concrete list of processing steps can easily be transformed into code:static void Main(string[] args) { var input = Accept_string_list(args); var output = Deduplicate(input); Present_deduplicated_string_list(output); } Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you´re not so sure how to go about with the de-duplication for example. Then just implement what´s easy right now, e.g.private static string Accept_string_list(string[] args) { return args[0]; } private static void Present_deduplicated_string_list( string[] output) { var line = string.Join(", ", output); Console.WriteLine(line); } Accept_string_list() contains logic in the form of an API-call. Present_deduplicated_string_list() contains logic in the form of an expression and an API-call. And then repeat the functional design for the remaining processing step. What´s left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design you´re left with just functions. So which functions could make up the de-duplication? Here´s a suggestion: De-duplicate Parse the input string into a true list of strings. Register each string in a dictionary/map/set. That way duplicates get cast away. Transform the data structure into a list of unique strings. Processing step 2 obviously was the core of the solution. That´s where real creativity was needed. That´s the core of the domain. 
But now after this refinement the implementation of each step is easy again:private static string[] Parse_string_list(string input) { return input.Split(',') .Select(s => s.Trim()) .ToArray(); } private static Dictionary<string,object> Compile_unique_strings(string[] strings) { return strings.Aggregate( new Dictionary<string, object>(), (agg, s) => { agg[s] = null; return agg; }); } private static string[] Serialize_unique_strings( Dictionary<string,object> dict) { return dict.Keys.ToArray(); } With these three additional functions Main() now looks like this:static void Main(string[] args) { var input = Accept_string_list(args); var strings = Parse_string_list(input); var dict = Compile_unique_strings(strings); var output = Serialize_unique_strings(dict); Present_deduplicated_string_list(output); } I think that´s very understandable code: just read it from top to bottom and you know how the solution to the problem works. It´s a mirror image of the initial design: Accept string list from command line Parse the input string into a true list of strings. Register each string in a dictionary/map/set. That way duplicates get cast away. Transform the data structure into a list of unique strings. Present de-duplicated strings on standard output You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too. So much for an initial concrete example. Now it´s time for some theory. Because there is method to this madness ;-) The above has only scratched the surface. Introducing Flow Design Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic, a process on the other hand consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1] In its simplest form a process can be written as a bullet point list of steps, e.g. Get data from user Output result to user Transform data Parse data Map result for output Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next etc.? Get data from user Parse data Transform data Map result for output Output result to user That´s great for a start into functional design. It´s better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a functionn is too large. But how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion? My recommendation: For any given function you´re supposed to implement first do a functional design. Then, once you´re confident you know the processing steps - which are pretty small - refine and code them using TDD. You´ll see that´s much, much easier - and leads to cleaner code right away. 
For more information on this approach I call “Informed TDD” read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I´d say. It´s more according to the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with. So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always that simple to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their “dimensionality”: text just has one dimension, it´s sequential. Alternatives and parallelism are hard to encode in text. In addition the functional design using numbered lists lacks data. It´s not visible what´s the input, output, and state of the processing steps. That´s why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient. Visualizing processes The building block of the functional design notation is a functional unit. I mostly draw it like this: Something is done, it´s clear what goes in, it´s clear what comes out, and it´s clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That´s why I call this approach to functional design Flow Design. It´s about data flow instead of control flow. Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That´s what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview. But what about all the details? As Robert C. Martin rightly said: “Programming is abot detail”. Detail is a matter of code. Once you start coding the processing steps you designed you can worry about all the detail you want. Functional design does not eliminate all the nitty gritty. It just postpones tackling them. To me that´s also an example of the SRP. Function design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API-call). TDD unfortunately mixes both responsibilities. It´s just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that´s one reason why TDD has failed to deliver on its promise for many developers. Using functional units as building blocks of functional design processes can be depicted very easily. Here´s the initial process for the example problem: For each processing step draw a functional unit and label it. Choose a verb or an “action phrase” as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step. 
Finally think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean “no data is flowing”, but nevertheless a signal is sent. A name like “list” or “strings” in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like “String” or “Customer” on the other hand signifies a data type. If you like, you also can combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent. Flows wired-up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It´s like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does make much sense. When should it start to work? What´s the trigger? That´s why in the above process even the first processing step has an input. If you like, view such 1D-flows as pipelines. Data is flowing through them from left to right. But as you can see, it´s not always the same data. It get´s transformed along its passage: (args) becomes a (list) which is turned into (strings). The Principle of Mutual Oblivion A very characteristic trait of flows put together from function units is: no functional units knows another one. They are all completely independent of each other. Functional units don´t know where their input is coming from (or even when it´s gonna arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don´t know where their output is going. They just produce it in their own time independent of other functional units. That means at least conceptually all functional units work in parallel. Functional units don´t know their “deployment context”. They now nothing about the overall flow they are place in. They are just consuming input from some upstream, and producing output for some downstream. That makes functional units very easy to test. At least as long as they don´t depend on state or resources. I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as an overall context/purpose. They are just parts of a whole focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units. By building software in such a manner, functional design interestingly follows nature. Nature´s building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell “controlling” a muscle cell for example:[2] The nerve cell does not know anything about muscle cells, let alone the specific muscel cell it is “attached to”. Likewise the muscle cell does not know anything about nerve cells, let a lone a specific nerve cell “attached to” it. Saying “the nerve cell is controlling the muscle cell” thus only makes sense when viewing both from the outside. “Control” is a concept of the whole, not of its parts. Control is created by wiring-up parts in a certain way. Both cells are mutually oblivious. 
Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, where it´s coming from neither cell cares about. Million years of evolution have led to this kind of division of labor. And million years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism. How and why this happened in nature is a mystery. For our software, though, it´s clear: functional and quality requirements needs to be fulfilled. So we as developers have to become “intelligent designers” of “software cells” which we put together to form a “software organism” which responds in satisfying ways to triggers from it´s environment. My bet is: If nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler “machines”? So my rule is: Wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That´s what Flow Design is about. In that way it´s even universal, I´d say. Its notation can also be applied to biology: Never mind labeling the functional units with nouns. That´s ok in Flow Design. You´ll do that occassionally for functional units on a higher level of abstraction or when their purpose is close to hardware. Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 “software cells” (functional units). Both, though, follow the same basic principle. Translating functional units into code Moving from functional design to code is no rocket science. In fact it´s straightforward. There are two simple rules: Translate an input port to a function. Translate an output port either to a return statement in that function or to a function pointer visible to that function. The simplest translation of a functional unit is a function. That´s what you saw in the above example. Functions are mutually oblivious. That why Functional Programming likes them so much. It makes them composable. Which is the reason, nature works according to the PoMO. Let´s be clear about one thing: There is no dependency injection in nature. For all of an organism´s complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks. Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case, where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called “stop words”. In the above picture the optionality of the output is signified by the astrisk outside the brackets. It means: Any number of (word) data items can flow from the functional unit for each input data item. It might be none or one or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return. Because a function always needs to return a value. 
So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]void filter_stop_words( string word, Action<string> onNoStopWord) { if (...check if not a stop word...) onNoStopWord(word); } If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you´re right. Conceptually, though, it´s not an injection. Because the subroutine is not functionally dependent on the continuation. Firstly continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly the name of the formal parameter is chosen in a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only. Translating output ports into function pointers helps keeping functional units mutually oblivious in cases where output is optional or produced asynchronically. Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it´s called an event. In C# that´s even an explicit feature.class Filter { public void filter_stop_words( string word) { if (...check if not a stop word...) onNoStopWord(word); } public event Action<string> onNoStopWord; } When to use a continuation and when to use an event dependens on how a functional unit is used in flows and how it´s packed together with others into classes. You´ll see examples further down the Flow Design road. Another example of 1D functional design Let´s see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:string WordWrap(string text, int maxLineLength) {...} That´s not an Entry Point since we don´t see an application with an environment and users. Nevertheless it´s a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than a the maximum line length it can be split in multiple parts each fitting in a line. Flow Design Let´s start by brainstorming the process to accomplish the feat of reformatting the text. What´s needed? Words need to be assembled into lines Words need to be extracted from the input text The resulting lines need to be assembled into the output text Words too long to fit in a line need to be split Does sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let´s tackle “average words” (words not longer than a line). Here´s the Flow Design for this increment: The the first three bullet points turned into functional units with explicit data added. As the signature requires a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That´s good to see because thereby the domain is clearly represented in the design. The requirements are talking about words and lines and here they are. But note the asterisk! It´s not outside the brackets but inside. 
Another example of 1D functional design

Let's see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:

    string WordWrap(string text, int maxLineLength) {...}

That's not an Entry Point, since we don't see an application with an environment and users. Nevertheless it's a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length, it can be split into multiple parts, each fitting in a line.

Flow Design

Let's start by brainstorming the process to accomplish the feat of reformatting the text. What's needed?

- Words need to be assembled into lines
- Words need to be extracted from the input text
- The resulting lines need to be assembled into the output text
- Words too long to fit in a line need to be split

Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let's tackle "average words" (words not longer than a line). Here's the Flow Design for this increment: the first three bullet points turned into functional units, with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That's good to see, because thereby the domain is clearly represented in the design. The requirements are talking about words and lines, and here they are.

But note the asterisk! It's not outside the brackets but inside. That means it's not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It's an implementation detail.

Does any processing step require further refinement? I don't think so. They all look pretty "atomic" to me. And if not… I can always backtrack and refine a process step using functional design later, once I've gained more insight into a sub-problem.

Implementation

The implementation is straightforward, as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls.
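A sketch of what that can look like - take the helper names and bodies as placeholders, one possible way to fill in the three steps:

    using System.Collections.Generic;

    static class WordWrapper {
        public static string WordWrap(string text, int maxLineLength) {
            var words = extract_words(text);
            var lines = assemble_lines(words, maxLineLength);
            return assemble_text(lines);
        }

        // Extract words: text -> word*
        static IEnumerable<string> extract_words(string text) {
            return text.Split(' ');
        }

        // Assemble lines: word* -> line*, no line longer than maxLineLength
        // (words longer than a line still spill over; that's increment 2)
        static IEnumerable<string> assemble_lines(IEnumerable<string> words, int maxLineLength) {
            var line = "";
            foreach (var word in words) {
                if (line.Length == 0)
                    line = word;
                else if (line.Length + 1 + word.Length <= maxLineLength)
                    line += " " + word;
                else {
                    yield return line;
                    line = word;
                }
            }
            if (line.Length > 0) yield return line;
        }

        // Assemble text: line* -> text
        static string assemble_text(IEnumerable<string> lines) {
            return string.Join("\n", lines);
        }
    }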
Easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it's easy to evolve, as you'll see.

Flow Design - Increment 2

So far only texts consisting of "average words" are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it's there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it's time to add it to deliver the full scope.

Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It's easy to extend it - instead of changing well tested code. How's that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead where extensions might need to be made in the future. I just "phase in" the remaining processing step: since neither Extract words nor Reformat knows of its environment, neither needs to be touched due to the "detour". The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.

Implementation - Increment 2

A trivial implementation checking the assumption that this works does not do anything to split long words. The input is just passed on.
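A sketch of this trivial step, together with the grown integration (inside the WordWrapper class from above, names again illustrative):

    // Trivial first implementation of the new step: no splitting yet,
    // every word is just passed along unchanged.
    static IEnumerable<string> split_long_words(IEnumerable<string> words, int maxLineLength) {
        foreach (var word in words)
            yield return word;
    }

    // The integration grows by one call; the existing steps stay untouched.
    public static string WordWrap(string text, int maxLineLength) {
        var words = extract_words(text);
        var fittingWords = split_long_words(words, maxLineLength);
        var lines = assemble_lines(fittingWords, maxLineLength);
        return assemble_text(lines);
    }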
Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with.

Compare this to Robert C. Martin's solution.[4] How does his solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach. Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin's. At least it seems so, because his solution does not cover all the "word wrap situations" the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then… is a difference in LOC that important as long as it's in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it's also clarity in design. But don't take my word for it. Try Flow Design on larger problems and compare for yourself. What's the easier, more straightforward way to clean code? And keep in mind: you ain't seen all yet ;-) There's more to Flow Design than described in this chapter.

In closing

I hope I was able to give you an impression of functional design that makes you hungry for more. To me it's an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that's necessary read my blog article here.

For now let me point out to you - if you haven't already noticed - that Flow Design is a general purpose declarative language. It's "programming by intention" (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps you become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We're talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.

PS: If you like what you read, consider getting my ebook "The Incremental Architect's Napkin". It's where I compile all the articles in this series for easier reading.

[1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also, to me it seems the real world is full of state and side effects. So why give them such a bad image? That's why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don't put too much of it into a single processing step.

[2] Image taken from www.physioweb.org

[3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type, matching functions with signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you're using a Functional Programming language it's of course a no brainer.

[4] Taken from his blog post "The Craftsman 62, The Dark Path".
    Read the article

  • Simple task framework - building software from reusable pieces

    - by RuslanD
    I'm writing a web service with several APIs, and they will be sharing some of the implementation code. In order not to copy-paste, I would ideally like to implement each API call as a series of tasks, which are executed in a sequence determined by the business logic. One obvious question is whether that's the best strategy for code reuse, or whether I can look at it in a different way. But assuming I want to go with tasks, several issues arise:

    - What's a good task interface to use?
    - How do I pass data computed in one task to another task in the sequence that might need it?

    In the past, I've worked with task interfaces like:

        interface Task<T, U> {
            U execute(T input);
        }

    Then I also had sort of a "task context" object which had getters and setters for any kind of data my tasks needed to produce or consume, and it got passed to all tasks. I'm aware that this suffers from a host of problems. So I wanted to figure out a better way to implement it this time around.

    My current idea is to have a TaskContext object which is a type-safe heterogeneous container (as described in Effective Java). Each task can ask for an item from this container (task input), or add an item to the container (task output). That way, tasks don't need to know about each other directly, and I don't have to write a class with dozens of methods for each data item. A rough sketch of what I mean follows below. There are, however, several drawbacks:

    - Each item in this TaskContext container has to be a complex type that wraps the actual item data. If task A uses a String for some purpose, and task B uses a String for something entirely different, then just storing a mapping between String.class and some object doesn't work for both tasks. The other reason is that I can't use that kind of container for generic collections directly, so they need to be wrapped in another object.
    - This means that, based on how many tasks I define, I would also need to define a number of classes for the task items that may be consumed or produced, which may lead to code bloat and duplication. For instance, if a task takes some Long value as input and produces another Long value as output, I would have to have two classes that simply wrap around a Long, which IMO can spiral out of control pretty quickly as the codebase evolves.

    I briefly looked at workflow engine libraries, but they seem like a heavy hammer for this particular nail. How would you go about writing a simple task framework with the following requirements:

    - Tasks should be as self-contained as possible, so they can be composed in different ways to create different workflows.
    - That being said, some tasks may perform expensive computations that are prerequisites for other tasks. We want to have a way of storing the results of intermediate computations done by tasks so that other tasks can use those results for free.
    - The task framework should be light, i.e. growing the code doesn't involve introducing many new types just to plug into the framework.
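    Here's the rough sketch of the container idea mentioned above. I've written it in C# syntax for brevity; the Java version keyed by Class<T>, as in Effective Java's Favorites example, is analogous. Names are made up:

        using System;
        using System.Collections.Generic;

        public class TaskContext {
            private readonly Dictionary<Type, object> items = new Dictionary<Type, object>();

            // A task publishes an output item under the item's type.
            public void Put<T>(T value) {
                items[typeof(T)] = value;
            }

            // A downstream task asks for its input by type.
            public T Get<T>() {
                return (T)items[typeof(T)];
            }
        }

        // Because the type itself is the key, each distinct piece of data
        // needs its own wrapper type - which is exactly the bloat described above:
        public sealed class InputWords {
            public readonly IReadOnlyList<string> Value;
            public InputWords(IReadOnlyList<string> value) { Value = value; }
        }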

    Read the article
