Search Results

Search found 10549 results on 422 pages for 'recovery console'.

  • disk-to-disk backup without costly backup redundancy?

    - by AaronLS
    A good backup strategy involves a combination of:

    1) disconnected backups/snapshots that will not be affected by bugs, viruses, and/or security breaches
    2) geographically distributed backups to protect against local disasters
    3) test restores to ensure that backups can actually be restored when needed

    Generally I take an onsite backup daily and an offsite backup weekly, and I do test restores periodically. In the rare circumstance that I need to restore files, I do so from the local backup. Should a catastrophic event destroy the servers and local backups, the offsite weekly tape backup would be used to restore the files.

    I don't need multiple offsite backups with redundancy. I ALREADY HAVE REDUNDANCY THROUGH THE USE OF BOTH LOCAL AND REMOTE BACKUPS. I keep recovery blocks and par files with the backups, so I already have protection against a small percentage of corrupt bits, and I perform test restores to ensure the backups function properly. Should the remote backups suffer a data loss, I can replace them with one of the local backups. There are historical offsite backups as well, so if a data loss went unnoticed for a few weeks (such as a bug, security breach, or virus), the data could be restored from an older backup.

    With this arrangement, the only scenario that poses a risk of complete data loss is one where the local backups, the remote backups, and the servers all lose data in the same time period. I'm willing to risk that happening, since the odds of that trifecta are negligibly small and the data isn't THAT valuable to me. So I hope I have made clear why I don't need redundancy in my offsite backups: I have covered all the bases. I know this exact technique is employed by numerous businesses. Of course some take multiple offsite backups because their data is so incredibly valuable that they won't even risk the trifecta disaster, but in the majority of cases the trifecta disaster is an accepted risk. I HAD TO COVER ALL THIS BECAUSE SOME PEOPLE DON'T READ!!! I think I have justified my backup strategy, and the majority of businesses that use offsite tape backups do not have any additional redundancy beyond what is mentioned above (recovery blocks, par files, historical snapshots).

    Now I would like to eliminate the use of tapes for offsite backups and instead use a backup service. Most, however, are extremely costly in $/GB/month of storage. I don't mind paying for transfer bandwidth, but the cost of storage is way too high. All of them advertise that they maintain backups of the data, and I imagine they use RAID as well. Obviously if you were using them to host servers this would all be necessary, but in my scenario I am simply replacing my offsite backups with such a service. So there is no need for RAID, and absolutely no value in another layer of backups of backups.

    My one and only question: "Are there online data-storage/backup services that do not use redundancy or offer backups (backups of my backups) as part of their packages, and thus are more reasonably priced?"

    NOT my question: "Is this a flawed strategy?" I don't care whether you think this is a good strategy or not. I know it's pretty standard. Very few people make an extra copy of their offsite backups; they already have local backups they can use to replace the remote backups if something catastrophic happens at the remote site. Please limit your responses to the question posed.

    Sorry if I seem a little abrasive, but I had some trolls in my last post who read neither my requirements nor my question and tried to answer a totally different one. I made it pretty clear, but didn't try to justify my strategy, because I didn't ask whether my strategy was justifiable. So I apologize for the length, which really shouldn't have been necessary, but there are so many trolls here who sidetrack questions by responding without addressing the question at hand.
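
    For readers unfamiliar with the "recovery blocks and par files" mentioned above, this is the kind of parity protection involved - a minimal sketch using the par2 command-line tool (the archive name is hypothetical):

        # Create PAR2 recovery data with ~10% redundancy alongside the backup
        par2 create -r10 backup-week27.tar.par2 backup-week27.tar

        # Later: verify the archive, and repair it if a few bits have rotted
        par2 verify backup-week27.tar.par2
        par2 repair backup-week27.tar.par2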

    Read the article

  • KVM machine does not start ssh, network is started, used to work

    - by lleto
    I have been searching and pulling my hair out for the last 6 hours. I have a virtual machine that has been running fine for the last six months. I was happily ssh'ing into it and it was running a database and some small apps. Tonight ssh stopped working, so I decided to reboot the machine. I now have the following situation:

    - virsh list --all states the machine as running
    - I can ping the machine and get a reply
    - When I ssh to the machine I see "ssh: connect to host [myserver] port 22: Connection refused"
    - nmap does not show port 22 as open

    I have tried to:

    - reboot the machine once more (no luck)
    - mount the filesystem and check /etc/ssh/sshd.conf (it has not changed since the working situation)
    - use virsh console, however this does not seem to work

    When I mount the fs directly using losetup, the strange thing is that the file dates in /var/log/ seem to be frozen around the time of the crash. If I look in /var/run/ I can see an sshd.pid, but its time is 6 hours ago (and numerous reboots ago). My virsh XML looks like this:

        <domain type='kvm' id='21'>
          <name>myserver</name>
          <uuid>09678c8d-a99b-1d18-a7af-88d027cc8f93</uuid>
          <memory>1048576</memory>
          <currentMemory>1048576</currentMemory>
          <vcpu>1</vcpu>
          <os>
            <type arch='x86_64' machine='pc-1.0'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
          </features>
          <clock offset='utc'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>destroy</on_crash>
          <devices>
            <emulator>/usr/bin/kvm</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw'/>
              <source file='/dev/disk01/myserver'/>
              <target dev='hda' bus='ide'/>
              <alias name='ide0-0-0'/>
              <address type='drive' controller='0' bus='0' unit='0'/>
            </disk>
            <controller type='ide' index='0'>
              <alias name='ide0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
            </controller>
            <interface type='bridge'>
              <mac address='52:54:00:e3:13:86'/>
              <source bridge='br0'/>
              <target dev='vnet0'/>
              <model type='virtio'/>
              <alias name='net0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
            </interface>
            <serial type='pty'>
              <source path='/dev/pts/1'/>
              <target port='0'/>
              <alias name='serial0'/>
            </serial>
            <console type='pty' tty='/dev/pts/1'>
              <source path='/dev/pts/1'/>
              <target type='serial' port='0'/>
              <alias name='serial0'/>
            </console>
            <input type='mouse' bus='ps2'/>
            <graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
              <listen type='address' address='127.0.0.1'/>
            </graphics>
            <video>
              <model type='cirrus' vram='9216' heads='1'/>
              <alias name='video0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
            </video>
            <memballoon model='virtio'>
              <alias name='balloon0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
            </memballoon>
          </devices>
          <seclabel type='dynamic' model='apparmor' relabel='yes'>
            <label>libvirt-09678c8d-a99b-1d18-a7af-88d027cc8f93</label>
            <imagelabel>libvirt-09678c8d-a99b-1d18-a7af-88d027cc8f93</imagelabel>
          </seclabel>
        </domain>

    I'm sort of lost as to where I can look to get the machine up and running again. On the same instance of kvm I have another server running which is working fine. Both are Ubuntu 12.04. All help is welcome....
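
    One way to dig further is to inspect the guest's filesystem offline and check whether sshd is still set to start at boot. A rough sketch, assuming the libguestfs tools are available on the host and /dev/disk01/myserver is the guest disk (the mount point is arbitrary):

        # Mount the guest filesystem read-only on the host (libguestfs)
        mkdir -p /mnt/guest
        guestmount --ro -a /dev/disk01/myserver -i /mnt/guest

        # Is sshd still enabled for the default runlevel?
        ls /mnt/guest/etc/rc2.d | grep ssh

        # Any clues from the guest's last boot attempts?
        tail -n 50 /mnt/guest/var/log/syslog

        # Unmount when done (older libguestfs versions: fusermount -u /mnt/guest)
        guestunmount /mnt/guest

    Since the domain XML above already defines a serial console, virsh console myserver should also work once the guest runs a getty on ttyS0; if the console connects but stays blank, the guest side is missing that getty.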

    Read the article

  • MySQL 5.5 (Percona) assertion failure log.. what would cause this?

    - by Tom Geee
    256GB RAM, 64 cores, AMD, running Ubuntu 12.04 with Percona MySQL 5.5.28. Below is the assertion failure. We just had a second assertion failure (different "in file", position, etc.) while running a large set of inserts. After the first failure, MySQL restarted only after a reboot, having continuously looped on the same error while trying to recover. I decided to do a mysqlcheck with -o for optimize. Since these are all InnoDB tables (very large tables, 60+GB), this does an ALTER TABLE on all tables. In the middle of this, the assertion failure below happened again:

        121115 22:30:31  InnoDB: Assertion failure in thread 140086589445888 in file btr0pcur.c line 452
        InnoDB: Failing assertion: btr_page_get_prev(next_page, mtr) == buf_block_get_page_no(btr_pcur_get_block(cursor))
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
        InnoDB: about forcing recovery.
        03:30:31 UTC - mysqld got signal 6 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.
        Please help us make Percona Server better by reporting any
        bugs at http://bugs.percona.com/

        key_buffer_size=536870912
        read_buffer_size=131072
        max_used_connections=404
        max_threads=500
        thread_count=90
        connection_count=90
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1618416 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        Thread pointer: 0x14edeb710
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 7f687366ce80 thread_stack 0x30000
        /usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x7b52ee]
        /usr/sbin/mysqld(handle_fatal_signal+0x484)[0x68f024]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f9cbb23fcb0]
        /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7f9cbaea6425]
        /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7f9cbaea9b8b]
        /usr/sbin/mysqld[0x858463]
        /usr/sbin/mysqld[0x804513]
        /usr/sbin/mysqld[0x808432]
        /usr/sbin/mysqld[0x7db8bf]
        /usr/sbin/mysqld(_Z13rr_sequentialP11READ_RECORD+0x1d)[0x755aed]
        /usr/sbin/mysqld(_Z17mysql_alter_tableP3THDPcS1_P24st_ha_create_informationP10TABLE_LISTP10Alter_infojP8st_orderb+0x216b)[0x60399b]
        /usr/sbin/mysqld(_Z20mysql_recreate_tableP3THDP10TABLE_LIST+0x166)[0x604bd6]
        /usr/sbin/mysqld[0x647da1]
        /usr/sbin/mysqld(_ZN24Optimize_table_statement7executeEP3THD+0xde)[0x64891e]
        /usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x1168)[0x59b558]
        /usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x30c)[0x5a132c]
        /usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1620)[0x5a2a00]
        /usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x14f)[0x63ce6f]
        /usr/sbin/mysqld(handle_one_connection+0x51)[0x63cf31]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f9cbb237e9a]
        /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f9cbaf63cbd]

        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort.
        Query (7f6300004b60): is an invalid pointer
        Connection ID (thread ID): 876
        Status: NOT_KILLED

        You may download the Percona Server operations manual by visiting
        http://www.percona.com/software/percona-server/. You may find information
        in the manual which will help you identify the cause of the crash.
        121115 22:31:07 [Note] Plugin 'FEDERATED' is disabled.
        121115 22:31:07 InnoDB: The InnoDB memory heap is disabled
        121115 22:31:07 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        ..

    Then it recovered, without a reboot this time. From the log, what would cause this? I am currently running a dump to see if the problem resurfaces.

    Edit: the data partition is all in /, since this is a hosted, default file system, unfortunately:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/vda3       742G  445G  260G  64% /
        udev            121G  4.0K  121G   1% /dev
        tmpfs            49G  248K   49G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            121G     0  121G   0% /run/shm
        /dev/vda1        99M   54M   40M  58% /boot

    my.cnf:

        [client]
        port    = 3306
        socket  = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket  = /var/run/mysqld/mysqld.sock
        nice    = 0

        [mysqld]
        skip-name-resolve
        innodb_file_per_table
        default_storage_engine=InnoDB
        user      = mysql
        socket    = /var/run/mysqld/mysqld.sock
        port      = 3306
        basedir   = /usr
        datadir   = /data/mysql
        tmpdir    = /tmp
        skip-external-locking
        key_buffer          = 512M
        max_allowed_packet  = 128M
        thread_stack        = 192K
        thread_cache_size   = 64
        myisam-recover      = BACKUP
        max_connections     = 500
        table_cache         = 812
        table_definition_cache = 812
        #query_cache_limit  = 4M
        #query_cache_size   = 512M
        join_buffer_size    = 512K
        innodb_additional_mem_pool_size = 20M
        innodb_buffer_pool_size = 196G
        #innodb_file_io_threads = 4
        #innodb_thread_concurrency = 12
        innodb_flush_log_at_trx_commit = 1
        innodb_log_buffer_size = 8M
        innodb_log_file_size   = 1024M
        innodb_log_files_in_group = 2
        innodb_max_dirty_pages_pct = 90
        innodb_lock_wait_timeout = 120
        log_error = /var/log/mysql/error.log
        long_query_time = 5
        slow_query_log = 1
        slow_query_log_file = /var/log/mysql/slowlog.log

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]

        [isamchk]
        key_buffer = 16M
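
    If the assertion keeps InnoDB from staying up long enough to copy the data out, the forced-recovery procedure that the log points to looks roughly like this (a sketch, not a substitute for the manual page; start at level 1 and raise it only if the server still crashes):

        # /etc/mysql/my.cnf - temporary setting while salvaging data
        [mysqld]
        innodb_force_recovery = 1    # levels 1-6; higher levels are increasingly destructive

        # with the server up in forced-recovery mode, dump everything:
        #   mysqldump --all-databases > salvage.sql
        # then remove the setting, rebuild the instance, and reload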

    Read the article

  • MySQL Memory usage

    - by Rob Stevenson-Leggett
    Our MySQL server seems to be using a lot of memory. I've tried looking for slow queries and queries with no index, and have halved the peak CPU usage and Apache memory usage, but the MySQL memory stays constant at 2.2GB (~51% of available memory on the server). Here's the graph from Plesk. Running top in the SSH window shows the same figures. Does anyone have any ideas on why the memory usage is constant like this, rather than showing peaks and troughs with usage of the app? Here's the output of the MySQL Tuning Primer script:

        -- MYSQL PERFORMANCE TUNING PRIMER --
             - By: Matthew Montgomery -

        MySQL Version 5.0.77-log x86_64

        Uptime = 1 days 14 hrs 4 min 21 sec
        Avg. qps = 22
        Total Questions = 3059456
        Threads Connected = 13

        Warning: Server has not been running for at least 48hrs.
        It may not be safe to use these recommendations

        To find out more information on how each of these
        runtime variables effects performance visit:
        http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html
        Visit http://www.mysql.com/products/enterprise/advisors.html
        for info about MySQL's Enterprise Monitoring and Advisory Service

        SLOW QUERIES
        The slow query log is enabled.
        Current long_query_time = 1 sec.
        You have 6 out of 3059477 that take longer than 1 sec. to complete
        Your long_query_time seems to be fine

        BINARY UPDATE LOG
        The binary update log is NOT enabled.
        You will not be able to do point in time recovery
        See http://dev.mysql.com/doc/refman/5.0/en/point-in-time-recovery.html

        WORKER THREADS
        Current thread_cache_size = 0
        Current threads_cached = 0
        Current threads_per_sec = 2
        Historic threads_per_sec = 0
        Threads created per/sec are overrunning threads cached
        You should raise thread_cache_size

        MAX CONNECTIONS
        Current max_connections = 100
        Current threads_connected = 14
        Historic max_used_connections = 20
        The number of used connections is 20% of the configured maximum.
        Your max_connections variable seems to be fine.

        INNODB STATUS
        Current InnoDB index space = 6 M
        Current InnoDB data space = 18 M
        Current InnoDB buffer pool free = 0 %
        Current innodb_buffer_pool_size = 8 M
        Depending on how much space your innodb indexes take up it may be safe
        to increase this value to up to 2 / 3 of total system memory

        MEMORY USAGE
        Max Memory Ever Allocated : 2.07 G
        Configured Max Per-thread Buffers : 274 M
        Configured Max Global Buffers : 2.01 G
        Configured Max Memory Limit : 2.28 G
        Physical Memory : 3.84 G
        Max memory limit seem to be within acceptable norms

        KEY BUFFER
        Current MyISAM index space = 4 M
        Current key_buffer_size = 7 M
        Key cache miss rate is 1 : 40
        Key buffer free ratio = 81 %
        Your key_buffer_size seems to be fine

        QUERY CACHE
        Query cache is supported but not enabled
        Perhaps you should set the query_cache_size

        SORT OPERATIONS
        Current sort_buffer_size = 2 M
        Current read_rnd_buffer_size = 256 K
        Sort buffer seems to be fine

        JOINS
        Current join_buffer_size = 132.00 K
        You have had 16 queries where a join could not use an index properly
        You should enable "log-queries-not-using-indexes"
        Then look for non indexed joins in the slow query log.
        If you are unable to optimize your queries you may want to increase your
        join_buffer_size to accommodate larger joins in one pass.
        Note! This script will still suggest raising the join_buffer_size when
        ANY joins not using indexes are found.

        OPEN FILES LIMIT
        Current open_files_limit = 1024 files
        The open_files_limit should typically be set to at least 2x-3x
        that of table_cache if you have heavy MyISAM usage.
        Your open_files_limit value seems to be fine

        TABLE CACHE
        Current table_cache value = 64 tables
        You have a total of 426 tables
        You have 64 open tables.
        Current table_cache hit rate is 1%, while 100% of your table cache is in use
        You should probably increase your table_cache

        TEMP TABLES
        Current max_heap_table_size = 16 M
        Current tmp_table_size = 32 M
        Of 15134 temp tables, 9% were created on disk
        Effective in-memory tmp_table_size is limited to max_heap_table_size.
        Created disk tmp tables ratio seems fine

        TABLE SCANS
        Current read_buffer_size = 128 K
        Current table scan ratio = 2915 : 1
        read_buffer_size seems to be fine

        TABLE LOCKING
        Current Lock Wait ratio = 1 : 142213
        Your table locking seems to be fine

    The app is a Facebook game with about 50-100 concurrent users. Thanks, Rob
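
    The flat memory line itself is expected behavior: MySQL allocates its global buffers (key buffer, InnoDB buffer pool, caches) once at startup and holds them, so resident size stays constant rather than tracking load. Translating the primer's suggestions into configuration, a my.cnf sketch might look like this (values are illustrative assumptions to be sized against the 3.84 G of physical memory, not drop-in recommendations):

        [mysqld]
        thread_cache_size = 8            # primer: threads created/sec overrunning the cache
        table_cache       = 512          # primer: 426 tables, 1% hit rate on 64 slots
        query_cache_size  = 32M          # primer: query cache supported but not enabled
        innodb_buffer_pool_size = 64M    # primer: buffer pool 0% free at only 8M
        log-queries-not-using-indexes    # surface the 16 non-indexed joins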

    Read the article

  • Mount TMPFS instead of ro /dev

    - by schiggn
    I am working on an ARM-based embedded system with a custom Debian Linux based on kernel 2.6.31. In the final system, the root file system is stored as squashfs on flash. Now, the folder /dev is created by udev, but since no hot-plugging functionality is needed and boot time is critical, I wanted to delete udev and "hard code" the /dev folder (read here, page 5). Because I still need to change parameters of the devices (with ioctl / sysfs), that alone does not work for me in this case. So I thought of mounting a tmpfs on /dev and changing the parameters there. Is this possible, and how is it best done? My approach would be:

    1. delete /dev from the RFS
    2. create a tar containing the basic devices
    3. mount a tmpfs on /dev
    4. untar the tar file into /dev
    5. change the parameters

    Could this work? Do you see any problems? I found out that you can mount on top of an already mounted mount point; is it somehow possible to take the existing data with you while mounting the new file system? If so, that would be very convenient!

    Thanks

    Update: I just tried that out, but I'm stuck at a certain point. I packed all my devices into devices.tar, put it into /usr of my squashfs, and added the following lines to mountkernfs.sh, which is executed right after INIT:

        #mount /dev on tmpfs
        echo -n "Mounting /dev on tmpfs..."
        mount -o size=5M,mode=0755 -t tmpfs tmpfs /dev
        mknod -m 600 /dev/console c 5 1
        mknod -m 600 /dev/null c 1 3
        echo "done."
        echo -n "Populating /dev..."
        tar -xf /usr/devices.tar -C /dev
        echo "done."

    This works fine on the version over NFS: if I place printf's in the code, I can see it executing, and if I comment out the extracting part, it complains about missing devices.

    Booting OK:

        mmc0: new high speed SDHC card at address 0007
        mmcblk0: mmc0:0007 SD04G 3.67 GiB
        mmcblk0: p1
        IP-Config: Unable to set interface netmask (-22).
        Looking up port of RPC 100003/2 on 192.168.1.234
        Looking up port of RPC 100005/1 on 192.168.1.234
        VFS: Mounted root (nfs filesystem) on device 0:14.
        Freeing init memory: 136K
        INIT: version 2.86 booting
        Mounting /dev on tmpfs...done.
        Populating /dev...done.
        Initializing /var...done.
        Setting the system clock.
        System Clock set to: Thu Sep 13 11:26:23 UTC 2012.
        INIT: Entering runlevel: 2
        UBI: attaching mtd8 to ubi0

    Commenting out the extraction of the tar:

        mmc0: new high speed SDHC card at address 0007
        mmcblk0: mmc0:0007 SD04G 3.67 GiB
        mmcblk0: p1
        IP-Config: Unable to set interface netmask (-22).
        Looking up port of RPC 100003/2 on 192.168.1.234
        Looking up port of RPC 100005/1 on 192.168.1.234
        VFS: Mounted root (nfs filesystem) on device 0:14.
        Freeing init memory: 136K
        INIT: version 2.86 booting
        Mounting /dev on tmpfs...done.
        Populating /dev...done.
        Initializing /var...done.
        Setting the system clock.
        Cannot access the Hardware Clock via any known method.
        Use the --debug option to see the details of our search for an access method.
        Unable to set System Clock to: Thu Sep 13 12:24:00 UTC 2012 ... (warning).
        INIT: Entering runlevel: 2
        libubi: error!: cannot open "/dev/ubi_ctrl"

    So far so good. But if I pack the whole story into a squashfs and boot from there, it acts strange. It tells me while booting that it is unable to open an initial console, and it throws errors on mounting the UBIFS devices, but finally provides a login anyway. On top of that, my echo's are not executed. If I then log in, /dev is mounted as tmpfs as desired and all the devices reside inside. When I redo the "mount" command to mount the UBIFS partitions, it executes without problem and is usable.

    From squashfs:

        VFS: Mounted root (squashfs filesystem) readonly on device 31:15.
        Freeing init memory: 136K
        Warning: unable to open an initial console.
        mmc0: new high speed SDHC card at address 0007
        mmcblk0: mmc0:0007 SD04G 3.67 GiB
        mmcblk0: p1
        UBIFS error (pid 484): ubifs_get_sb: cannot open "ubi1_0", error -19

    Additionally, a part of the rest of the boot scripts is still executed, but not all of them. Does anyone have a clue why? Other question: is 5MB enough/too much for /dev?
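
    On the aside about taking the existing data with you when mounting over a mount point: the original /dev is only shadowed, not lost, so one sketch (on any kernel with bind-mount support) is to re-expose it with a bind mount and copy it into the tmpfs, instead of shipping a separate tar:

        # populate the tmpfs /dev from the nodes baked into the read-only root
        mount -o size=5M,mode=0755 -t tmpfs tmpfs /dev
        mount --bind / /mnt        # the original /dev is visible again under /mnt
        cp -a /mnt/dev/. /dev/
        umount /mnt

    As for the squashfs oddities: "Warning: unable to open an initial console" is printed by the kernel before any init script runs, when it cannot open /dev/console on the root filesystem itself. Deleting /dev from the squashfs removes that node, which may also explain why the echo output never appears; keeping at least static console and null nodes in the baked-in /dev should avoid it.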

    Read the article

  • XFS disk becomes unavailable after a while

    - by Guard
    Ubuntu 12.04 (but the same happened on 11.10 before upgrading). WD MyBook, 2TB, no RAID (or RAID0, not completely sure; anyway no mirroring, both 1TB disks are in use, mounted as a single device). Formatted to XFS, normally used for big movie files. Connected via Firewire 800.

    At some point the LED started going up and down as if constantly reading/writing, and the device gives an access error. When unplugged (cable, then holding the power button for a while, then unplugging the power) and re-connected, it becomes available again. xfs_check found nothing. xfs_repair did something, but it doesn't look like it fixed any error. Then, after a massive read (checking a 1.5GB torrent file for integrity), it becomes unavailable again.

    Any ideas what's wrong? Drives? Cables? Motherboard? OS?

    UPD: not sure how relevant this is, but here is the dmesg output:

        [14380.632816] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
        [14380.633356] SGI XFS Quota Management subsystem
        [14421.812220] firewire_core: phy config: card 0, new root=ffc1, gap_count=5
        [14441.890596] firewire_core: phy config: card 0, new root=ffc1, gap_count=5
        [14441.896858] firewire_core: phy config: card 0, new root=ffc1, gap_count=5
        [14453.895347] firewire_core: created device fw1: GUID 0090a99500a35518, S400, 9 config ROM retries
        [14453.904818] scsi6 : SBP-2 IEEE-1394
        [14453.905014] scsi7 : SBP-2 IEEE-1394
        [14454.139993] firewire_sbp2: fw1.0: logged in to LUN 0000 (0 retries)
        [14454.158769] scsi 6:0:0:0: Direct-Access WD My Book 1015 PQ: 0 ANSI: 4
        [14454.159251] sd 6:0:0:0: Attached scsi generic sg3 type 0
        [14454.162391] firewire_sbp2: fw1.1: logged in to LUN 0001 (0 retries)
        [14454.167453] sd 6:0:0:0: [sdc] 3907017568 512-byte logical blocks: (2.00 TB/1.81 TiB)
        [14454.178822] sd 6:0:0:0: [sdc] Write Protect is off
        [14454.178826] sd 6:0:0:0: [sdc] Mode Sense: 10 00 00 00
        [14454.186830] scsi 7:0:0:1: Enclosure WD My Book Device 1015 PQ: 0 ANSI: 4
        [14454.186995] scsi 7:0:0:1: Attached scsi generic sg4 type 13
        [14454.190078] sd 6:0:0:0: [sdc] Cache data unavailable
        [14454.190087] sd 6:0:0:0: [sdc] Assuming drive cache: write through
        [14454.202176] sd 6:0:0:0: [sdc] Cache data unavailable
        [14454.202185] sd 6:0:0:0: [sdc] Assuming drive cache: write through
        [14454.239940] sdc: [mac] sdc1 sdc2 sdc3 sdc4
        [14454.271262] sd 6:0:0:0: [sdc] Cache data unavailable
        [14454.271270] sd 6:0:0:0: [sdc] Assuming drive cache: write through
        [14454.271354] sd 6:0:0:0: [sdc] Attached SCSI disk
        [14454.272149] ses 7:0:0:1: Attached Enclosure device
        [14606.090024] XFS (sdc3): Mounting Filesystem
        [14612.048343] XFS (sdc3): Starting recovery (logdev: internal)
        [14620.697636] XFS (sdc3): Ending recovery (logdev: internal)
        [14748.120957] e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx
        [14748.120963] e1000e 0000:00:19.0: eth0: 10/100 speed: disabling TSO
        [14752.568382] uhci_hcd 0000:00:1a.0: PCI INT A disabled
        [14752.568579] uhci_hcd 0000:00:1a.1: PCI INT B disabled
        [14752.568738] ehci_hcd 0000:00:1a.7: PCI INT C disabled
        [14752.568779] ehci_hcd 0000:00:1a.7: PME# enabled
        [14752.584526] uhci_hcd 0000:00:1d.1: PCI INT B disabled
        [14752.584689] uhci_hcd 0000:00:1d.2: PCI INT C disabled
        [14752.680079] ehci_hcd 0000:00:1a.7: BAR 0: set to [mem 0xe4641000-0xe46413ff] (PCI address [0xe4641000-0xe46413ff])
        [14752.680104] ehci_hcd 0000:00:1a.7: restoring config space at offset 0xf (was 0x300, writing 0x30b)
        [14752.680136] ehci_hcd 0000:00:1a.7: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900002)
        [14752.680170] ehci_hcd 0000:00:1a.7: PME# disabled
        [14752.680182] ehci_hcd 0000:00:1a.7: PCI INT C -> GSI 18 (level, low) -> IRQ 18
        [14752.680190] ehci_hcd 0000:00:1a.7: setting latency timer to 64
        [14752.710334] uhci_hcd 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
        [14752.710342] uhci_hcd 0000:00:1a.0: setting latency timer to 64
        [14752.749186] uhci_hcd 0000:00:1a.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
        [14752.749194] uhci_hcd 0000:00:1a.1: setting latency timer to 64
        [14752.790231] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 22 (level, low) -> IRQ 22
        [14752.790239] uhci_hcd 0000:00:1d.1: setting latency timer to 64
        [14752.829170] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18
        [14752.829178] uhci_hcd 0000:00:1d.2: setting latency timer to 64
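
    When the volume wedges like this, one cautious path (a sketch; copy off what you can first, and unmount before repairing) is to let xfs_repair report before it changes anything:

        umount /dev/sdc3
        xfs_repair -n /dev/sdc3    # dry run: report problems, change nothing

        # only if the dry run finds real damage:
        xfs_repair /dev/sdc3       # -L (zero the log) is a last resort; it discards in-flight transactions

    Given that the failures follow heavy sequential reads, a dying drive or a flaky Firewire link is at least as plausible as filesystem damage, so checking SMART data on the member disks (smartctl -a) before trusting any repair is worthwhile.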

    Read the article

  • RHCS: GFS2 in A/A cluster with common storage. Configuring GFS with rgmanager

    - by Pavel A
    I'm configuring a two node A/A cluster with common storage attached via iSCSI, which uses GFS2 on top of clustered LVM. So far I have prepared a simple configuration, but am not sure which is the right way to configure the gfs resource. Here is the rm section of /etc/cluster/cluster.conf:

        <rm>
            <failoverdomains>
                <failoverdomain name="node1" nofailback="0" ordered="0" restricted="1">
                    <failoverdomainnode name="rhc-n1"/>
                </failoverdomain>
                <failoverdomain name="node2" nofailback="0" ordered="0" restricted="1">
                    <failoverdomainnode name="rhc-n2"/>
                </failoverdomain>
            </failoverdomains>
            <resources>
                <script file="/etc/init.d/clvm" name="clvmd"/>
                <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs" device="/dev/vg-cs/lv-gfs"/>
            </resources>
            <service name="shared-storage-inst1" autostart="0" domain="node1" exclusive="0" recovery="restart">
                <script ref="clvmd">
                    <clusterfs ref="gfs"/>
                </script>
            </service>
            <service name="shared-storage-inst2" autostart="0" domain="node2" exclusive="0" recovery="restart">
                <script ref="clvmd">
                    <clusterfs ref="gfs"/>
                </script>
            </service>
        </rm>

    This is what I mean: when using the clusterfs resource agent to handle a GFS partition, it is not unmounted by default (unless the force_unmount option is given). This way, when I issue

        clusvcadm -s shared-storage-inst1

    clvm is stopped, but GFS is not unmounted, so the node cannot alter the LVM structure on shared storage anymore, but can still access data. And even though a node can do it quite safely (dlm is still running), this seems rather inappropriate to me, since clustat reports that the service on that node is stopped. Moreover, if I later try to stop cman on that node, it will find a dlm lock produced by GFS and fail to stop.

    I could simply have added force_unmount="1", but I would like to know the reason behind the default behavior. Why is it not unmounted? Most of the examples out there silently use force_unmount="0", some don't, but none of them give any clue on how the decision was made. Apart from that, I have found sample configurations where people manage GFS partitions with the gfs2 init script - https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Defining_The_Resources - or even as simply as enabling services such as clvm and gfs2 to start automatically at boot (http://pbraun.nethence.com/doc/filesystems/gfs2.html), like:

        chkconfig gfs2 on

    If I understand the latter approach correctly, such a cluster only controls whether nodes are still alive and can fence errant ones, but it has no control over the status of its resources. I have some experience with Pacemaker, and I'm used to all resources being controlled by the cluster, so that action can be taken not only when there are connectivity issues, but when any of the resources misbehave.

    So, which is the right way for me to go:

    1. leave the GFS partition mounted (any reasons to do so?)
    2. set force_unmount="1" (won't this break anything? why is this not the default?) - see the snippet after this list
    3. use a script resource <script file="/etc/init.d/gfs2" name="gfs"/> to manage the GFS partition
    4. start it at boot and don't include it in cluster.conf (any reasons to do so?)

    This may be the sort of question that cannot be answered unambiguously, so it would also be of much value to me if you shared your experience or thoughts on the issue. How does, for example, /etc/cluster/cluster.conf look when configuring gfs with Conga or ccs (they are not available to me, since for now I have to use Ubuntu for the cluster)? Thank you very much!
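
    For reference, option 2 is a one-attribute change to the resource definition above (force_unmount is a documented clusterfs attribute; test a controlled service stop and relocation after setting it):

        <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs"
                   device="/dev/vg-cs/lv-gfs" force_unmount="1"/>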

    Read the article

  • Proliant server will not accept new hard disks in RAID 1+0?

    - by Leigh
    I have an HP ProLiant DL380 G5 with two logical drives configured with RAID. One logical drive is RAID 1+0 with two 72 GB 10k SAS 1-port drives (spare no. 376597-001). I had one hard disk fail and ordered a replacement. The configuration utility showed an error and would not rebuild the RAID. I presumed a hard disk fault and ordered a replacement again. In the meantime I put the original failed disk back in the server and this started rebuilding. It currently shows OK status, however in the log I can see hardware errors. The new disk has arrived and I again have the same problem of the hard disk not being accepted. I have updated the P400 controller with the latest firmware (7.24), but still no luck. The only difference I can see is that the original drive has firmware 0103 (same as the RAID drive) and the new one has HPD2. Any advice would be appreciated. Thanks in advance.

    Logs from the server:

        ctrl all show config

        Smart Array P400 in Slot 1    (sn: PAFGK0P9VWO0UQ)

           array A (SAS, Unused Space: 0 MB)
              logicaldrive 1 (68.5 GB, RAID 1, Interim Recovery Mode)
              physicaldrive 2I:1:1 (port 2I:box 1:bay 1, SAS, 73.5 GB, OK)
              physicaldrive 2I:1:2 (port 2I:box 1:bay 2, SAS, 72 GB, Failed)

           array B (SAS, Unused Space: 0 MB)
              logicaldrive 2 (558.7 GB, RAID 5, OK)
              physicaldrive 1I:1:5 (port 1I:box 1:bay 5, SAS, 300 GB, OK)
              physicaldrive 2I:1:3 (port 2I:box 1:bay 3, SAS, 300 GB, OK)
              physicaldrive 2I:1:4 (port 2I:box 1:bay 4, SAS, 300 GB, OK)

        ctrl all show config detail

        Smart Array P400 in Slot 1
           Bus Interface: PCI
           Slot: 1
           Serial Number: PAFGK0P9VWO0UQ
           Cache Serial Number: PA82C0J9VWL8I7
           RAID 6 (ADG) Status: Disabled
           Controller Status: OK
           Hardware Revision: E
           Firmware Version: 7.24
           Rebuild Priority: Medium
           Expand Priority: Medium
           Surface Scan Delay: 15 secs
           Surface Scan Mode: Idle
           Wait for Cache Room: Disabled
           Surface Analysis Inconsistency Notification: Disabled
           Post Prompt Timeout: 0 secs
           Cache Board Present: True
           Cache Status: OK
           Cache Status Details: A cache error was detected. Run more information.
           Cache Ratio: 100% Read / 0% Write
           Drive Write Cache: Disabled
           Total Cache Size: 256 MB
           Total Cache Memory Available: 208 MB
           No-Battery Write Cache: Disabled
           Battery/Capacitor Count: 0
           SATA NCQ Supported: True

           Array: A
              Interface Type: SAS
              Unused Space: 0 MB
              Status: Failed Physical Drive
              Array Type: Data
              One of the drives on this array have failed or has

              Logical Drive: 1
                 Size: 68.5 GB
                 Fault Tolerance: RAID 1
                 Heads: 255
                 Sectors Per Track: 32
                 Cylinders: 17594
                 Strip Size: 128 KB
                 Full Stripe Size: 128 KB
                 Status: Interim Recovery Mode
                 Caching: Enabled
                 Unique Identifier: 600508B10010503956574F305551
                 Disk Name: \\.\PhysicalDrive0
                 Mount Points: C:\ 68.5 GB
                 Logical Drive Label: A0100539PAFGK0P9VWO0UQ0E93
                 Mirror Group 0:
                    physicaldrive 2I:1:2 (port 2I:box 1:bay 2, S
                 Mirror Group 1:
                    physicaldrive 2I:1:1 (port 2I:box 1:bay 1, S
                 Drive Type: Data

              physicaldrive 2I:1:1
                 Port: 2I
                 Box: 1
                 Bay: 1
                 Status: OK
                 Drive Type: Data Drive
                 Interface Type: SAS
                 Size: 73.5 GB
                 Rotational Speed: 10000
                 Firmware Revision: 0103
                 Serial Number: B379P8C006RK
                 Model: HP DG072A9B7
                 PHY Count: 2
                 PHY Transfer Rate: Unknown, Unknown

              physicaldrive 2I:1:2
                 Port: 2I
                 Box: 1
                 Bay: 2
                 Status: Failed
                 Drive Type: Data Drive
                 Interface Type: SAS
                 Size: 72 GB
                 Rotational Speed: 15000
                 Firmware Revision: HPD9
                 Serial Number: D5A1PCA04SL01244
                 Model: HP EH0072FARUA
                 PHY Count: 2
                 PHY Transfer Rate: Unknown, Unknown

           Array: B
              Interface Type: SAS
              Unused Space: 0 MB
              Status: OK
              Array Type: Data

              Logical Drive: 2
                 Size: 558.7 GB
                 Fault Tolerance: RAID 5
                 Heads: 255
                 Sectors Per Track: 32
                 Cylinders: 65535
                 Strip Size: 64 KB
                 Full Stripe Size: 128 KB
                 Status: OK
                 Caching: Enabled
                 Parity Initialization Status: Initialization Co
                 Unique Identifier: 600508B10010503956574F305551
                 Disk Name: \\.\PhysicalDrive1
                 Mount Points: E:\ 558.7 GB
                 Logical Drive Label: AF14FD12PAFGK0P9VWO0UQD007
                 Drive Type: Data

              physicaldrive 1I:1:5
                 Port: 1I
                 Box: 1
                 Bay: 5
                 Status: OK
                 Drive Type: Data Drive
                 Interface Type: SAS
                 Size: 300 GB
                 Rotational Speed: 10000
                 Firmware Revision: HPD4
                 Serial Number: 3SE07QH300009923X1X3
                 Model: HP DG0300BALVP
                 Current Temperature (C): 32
                 Maximum Temperature (C): 45
                 PHY Count: 2
                 PHY Transfer Rate: Unknown, Unknown

              physicaldrive 2I:1:3
                 Port: 2I
                 Box: 1
                 Bay: 3
                 Status: OK
                 Drive Type: Data Drive
                 Interface Type: SAS
                 Size: 300 GB
                 Rotational Speed: 10000
                 Firmware Revision: HPD4
                 Serial Number: 3SE0AHVH00009924P8F3
                 Model: HP DG0300BALVP
                 Current Temperature (C): 34
                 Maximum Temperature (C): 47
                 PHY Count: 2
                 PHY Transfer Rate: Unknown, Unknown

              physicaldrive 2I:1:4
                 Port: 2I
                 Box: 1
                 Bay: 4
                 Status: OK
                 Drive Type: Data Drive
                 Interface Type: SAS
                 Size: 300 GB
                 Rotational Speed: 10000
                 Firmware Revision: HPD4
                 Serial Number: 3SE08NAK00009924KWD6
                 Model: HP DG0300BALVP
                 Current Temperature (C): 35
                 Maximum Temperature (C): 47
                 PHY Count: 2
                 PHY Transfer Rate: Unknown, Unknown
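
    Since the replacement drive carries different firmware (HPD2) and a different model family, a first check is what the controller actually reports for that bay once the new disk is inserted. A short hpacucli sketch, using the slot and bay numbers from the output above:

        # What does the controller see in the bay that refuses to rebuild?
        hpacucli ctrl slot=1 pd 2I:1:2 show detail

        # Re-check array A's status afterwards
        hpacucli ctrl slot=1 array A show

    If the drive is visible but the rebuild never starts, comparing the replacement's model and firmware against HP's compatibility matrix for the P400 is the next step; the behavior described suggests the controller is rejecting the drive rather than failing mid-rebuild.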

    Read the article

  • Windows 8.1 Update 1 Disk Usage 100%

    - by Gookjin Jeong
    Background information / computer specs

    I have a 14-inch Samsung Series 5 Ultra: Core i5 CPU, 750GB HDD, 8GB RAM, Intel HD Graphics 4000. I've had the computer for about 1.5 years with no major problems.

    Problem

    The issue appeared at the beginning of April this year, when I updated the OS to Windows 8.1 Update 1 (not from 8 to 8.1). After being on continually (except for at night, when I put it in sleep mode) for about 48 hours, the disk usage as seen by Task Manager hits 100%. When this happens, everything from opening/closing applications to typing and even bringing up the Start screen by pressing the Windows key becomes extremely slow. The only way to make the disk usage decrease is to restart the computer. Then the problem repeats.

    I've used my current laptop (as well as my previous laptops) this way -- putting it in sleep mode at night and restarting it only when Windows needs to install updates -- for a long time. So I know the 100% disk usage is not due to the way I use the computer.

    The thing that causes the spike varies. Sometimes it's System, sometimes it's one of the various applications I installed (e.g. Chrome, Evernote, Spotify, Wunderlist, iTunes, etc.), and sometimes it's Antimalware Service Executable, etc.

    Tried solutions

    I think I tried almost every solution out there for this problem:

    - Running the check disk command (chkdsk /b /f /v /scan c:) from an Admin Command Prompt
    - Running Windows Memory Diagnostic
    - Disabling Superfetch and Windows Search from services.msc
    - Running "Fix problems with Windows Update" from Control Panel -- Troubleshooting
    - Updating and rolling back the graphics driver (Intel HD 4000)
    - Disabling "Use hardware acceleration when available" in Chrome settings
    - Disabling Intel Rapid Storage Technology
    - Running the SFC /SCANNOW command as recommended here
    - Running a quick scan and a full scan with Windows Defender (no threats found)
    - Taking the hard drive out and putting it back
    - Refreshing the computer, from the Update and recovery -- Recovery option in Windows settings

    NONE of the above worked for me. I was about to give up but then noticed that one of the main culprits of the disk usage spike, as shown in the "Disk Activity" section of Resource Monitor, was C:\System (pagefile.sys). I googled around and found that one of the recommended solutions was to disable the pagefile. I then went to Control Panel -- System and Security -- System -- Advanced system settings -- Advanced tab -- Performance settings -- Advanced tab -- "Change" under Virtual memory, and discovered that the number for "Currently allocated" at the bottom was 1280MB, although the number for "Recommended" was 4533MB. I immediately changed it to 4533MB and checked my family members' computers to see what their numbers looked like. All of theirs had a currently allocated space only slightly smaller than the recommended space. See screenshot below:

    [screenshot of the Virtual Memory dialog]

    This might fix the problem; I'll have to wait a couple more days. But if it doesn't, what in the world should I do next? I'm guessing the hard drive isn't failing, because:

    1. this computer is less than 2 years old; and
    2. Speccy says that the status of the HDD is good.

    Update 5/27/2014

    The "4533MB" solution did not work. I had to reboot the computer about 30 minutes ago because the disk usage again hit 100%. When I opened Resource Monitor, C:\System (pagefile.sys) was again shown to be the culprit. I have now disabled the pagefile entirely via the same window shown above in the screenshot. The number for "Currently allocated" is now 0MB.

    Will update again in a couple of days, or if the problem occurs again, whichever comes sooner.
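
    For anyone chasing the same pagefile suspicion, the current settings can be read non-destructively from an admin command prompt with wmic (these query the standard Win32_PageFileUsage and Win32_ComputerSystem properties):

        rem pagefile location, allocated size and current usage
        wmic pagefile list /format:list

        rem is Windows auto-managing the pagefile?
        wmic computersystem get AutomaticManagedPagefile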

    Read the article

  • ASA 5540 v8.4(3) VPN to ASA 5505 v8.2(5), tunnel up but I can't ping from 5505 to IP on other side

    - by user223833
    I am having problems pinging from a 5505 (remote) to IP 10.160.70.10 in the network behind the 5540 (HQ side).

    5505 inside IP: 10.56.0.1, outside: 71.43.109.226
    5540 inside IP: 10.1.0.8, outside: 64.129.214.27

    I can ping from the 5540 to the 5505 inside address 10.56.0.1. I also ran the ASDM packet tracer in both directions: it is OK from 5540 to 5505, but it drops the packet from 5505 to 5540. It gets through the ACL and dies at the NAT. Here is the 5505 config; I am sure it is something simple I am missing.

        ASA Version 8.2(5)
        !
        hostname ASA-CITYSOUTHDEPOT
        domain-name rngint.net
        names
        !
        interface Ethernet0/0
         switchport access vlan 2
        !
        interface Ethernet0/1
        !
        interface Ethernet0/2
        !
        interface Ethernet0/3
        !
        interface Ethernet0/4
        !
        interface Ethernet0/5
        !
        interface Ethernet0/6
        !
        interface Ethernet0/7
        !
        interface Vlan1
         nameif inside
         security-level 100
         ip address 10.56.0.1 255.255.0.0
        !
        interface Vlan2
         nameif outside
         security-level 0
         ip address 71.43.109.226 255.255.255.252
        !
        banner motd ***ASA-CITYSOUTHDEPOT***
        banner asdm CITY SOUTH DEPOT ASA5505
        ftp mode passive
        clock timezone EST -5
        clock summer-time EDT recurring
        dns server-group DefaultDNS
         domain-name rngint.net
        access-list outside_1_cryptomap extended permit ip host 71.43.109.226 host 10.1.0.125
        access-list outside_1_cryptomap extended permit ip 10.56.0.0 255.255.0.0 10.0.0.0 255.0.0.0
        access-list outside_1_cryptomap extended permit ip 10.56.0.0 255.255.0.0 10.106.70.0 255.255.255.0
        access-list outside_1_cryptomap extended permit ip 10.56.0.0 255.255.0.0 10.106.130.0 255.255.255.0
        access-list outside_1_cryptomap extended permit ip host 71.43.109.226 host 10.160.70.10
        access-list inside_nat0_outbound extended permit ip host 71.43.109.226 host 10.1.0.125
        access-list inside_nat0_outbound extended permit ip 10.56.0.0 255.255.0.0 10.0.0.0 255.0.0.0
        access-list inside_nat0_outbound extended permit ip 10.56.0.0 255.255.0.0 10.106.130.0 255.255.255.0
        access-list inside_nat0_outbound extended permit ip 10.56.0.0 255.255.0.0 10.106.70.0 255.255.255.0
        access-list inside_nat0_outbound extended permit ip host 71.43.109.226 10.106.70.0 255.255.255.0
        pager lines 24
        logging enable
        logging buffer-size 25000
        logging buffered informational
        logging asdm warnings
        mtu inside 1500
        mtu outside 1500
        icmp unreachable rate-limit 1 burst-size 1
        icmp permit any inside
        no asdm history enable
        arp timeout 14400
        global (outside) 1 interface
        nat (inside) 0 access-list inside_nat0_outbound
        nat (inside) 1 0.0.0.0 0.0.0.0
        route outside 0.0.0.0 0.0.0.0 71.43.109.225 1
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
        timeout tcp-proxy-reassembly 0:01:00
        timeout floating-conn 0:00:00
        dynamic-access-policy-record DfltAccessPolicy
        aaa-server TACACS+ protocol tacacs+
        aaa-server TACACS+ (inside) host 10.106.70.36
         key *****
        aaa authentication http console LOCAL
        aaa authentication ssh console LOCAL
        aaa authorization exec authentication-server
        http server enable
        http 192.168.1.0 255.255.255.0 inside
        http 10.0.0.0 255.0.0.0 inside
        http 0.0.0.0 0.0.0.0 outside
        snmp-server host inside 10.106.70.7 community *****
        no snmp-server location
        no snmp-server contact
        snmp-server community *****
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto ipsec transform-set ESP-AES-128-SHA esp-aes esp-sha-hmac
        crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac
        crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac
        crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac
        crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac
        crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac
        crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac
        crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac
        crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        crypto map outside_map 1 match address outside_1_cryptomap
        crypto map outside_map 1 set pfs group1
        crypto map outside_map 1 set peer 64.129.214.27
        crypto map outside_map 1 set transform-set ESP-3DES-SHA
        crypto map outside_map interface outside
        crypto isakmp enable outside
        crypto isakmp policy 1
         authentication pre-share
         encryption des
         hash md5
         group 2
         lifetime 86400
        telnet timeout 5
        ssh 10.0.0.0 255.0.0.0 inside
        ssh 0.0.0.0 0.0.0.0 outside
        ssh timeout 5
        console timeout 0
        management-access inside
        dhcpd auto_config outside
        !
        dhcpd address 10.56.0.100-10.56.0.121 inside
        dhcpd dns 10.1.0.125 interface inside
        dhcpd auto_config outside interface inside
        !
        dhcprelay server 10.1.0.125 outside
        dhcprelay enable inside
        dhcprelay setroute inside
        dhcprelay timeout 60
        threat-detection basic-threat
        threat-detection statistics access-list
        no threat-detection statistics tcp-intercept
        tftp-server inside 10.1.1.25 CITYSOUTHDEPOT-ASA-Confg
        webvpn
        tunnel-group 64.129.214.27 type ipsec-l2l
        tunnel-group 64.129.214.27 ipsec-attributes
         pre-shared-key *****
        !
        !
        prompt hostname context
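
    To see exactly which rule eats the packet, the CLI packet tracer on the 5505 gives more detail than the ASDM version (a sketch; 10.56.0.100 is just a sample inside host from the DHCP pool above):

        packet-tracer input inside icmp 10.56.0.100 8 0 10.160.70.10 detailed

    One thing worth checking in the config above: the outside_1_cryptomap and inside_nat0_outbound entries for 10.160.70.10 use host 71.43.109.226 (the outside interface) as the source rather than the inside network. Traffic from 10.56.0.0/16 to 10.160.70.10 still matches the broader 10.0.0.0/8 entries, but crypto ACLs that are not exact mirror images of the peer's are a common cause of exactly this one-way behavior.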

    Read the article

  • SQL Server Log File Won't Shrink because "records are pending replication" on a non-replicated DB?

    - by user796466
    I have a non-mission-critical, 9am-5pm SQL Server database that I have set up to do nightly full backups and log backups every 30 minutes during business hours. The database is in full recovery, and normally I have no reason to truncate/shrink logs unless I do some heavy maintenance; log backups manage the size with no issue. However, I had not been at this client for several weeks, and upon inspection I noticed that the log had grown to about 10 times the size of the .mdf file. I poked around: backups had been running, and I had not gotten any severity error alerts (SQL mail). I attempted to put the DB in simple recovery and shrink the log; this was no good. I proceeded to try a log backup and I got:

        The log was not truncated because records at the beginning of the log are pending
        replication or Change Data Capture. Ensure the Log Reader Agent or capture job is
        running or use sp_repldone to mark transactions as distributed or captured.

    Restart SQL Server, rinse, repeat, same thing... and I said ??? Replication is not, nor has it ever been, set up on this DB or server. So the log backups have not been flushing the .ldf. I did a couple hours of research and found:

    http://www.sqlmonster.com/Uwe/Forum.aspx/sql-server/5445/Log-file-is-not-truncated-inspite-of-regular-log-backup
    http://www.eggheadcafe.com/software/aspnet/30708322/the-log-was-not-truncated-because-records-at-the-beginning-of-the-log-are-pending-replication.aspx

    It seems to be some kind of poorly documented bug?? The solution seems to have been to run exec sp_repldone, more precisely:

        EXEC sp_repldone @xactid = NULL, @xact_segno = NULL,
             @numtrans = 0, @time = 0, @reset = 1

    "This procedure can be used in emergency situations to allow truncation of the transaction log when transactions pending replication are present. Using this procedure prevents Microsoft SQL Server 2000 from replicating the database until the database is unpublished and republished." ~ MSDN

    When I do that I get the following:

        Msg 18757, Level 16, State 1, Procedure sp_repldone, Line 1
        Unable to execute procedure. The database is not published. Execute the procedure
        in a database that is published for replication.

    Which makes sense, because the DB has never been published for replication. I have several questions:

    A) First and foremost: WTF is going on? What is causing this? I am interested in knowing the why here. Is this genuinely a bug, or is there some aspect of the backup that is not functioning properly that causes the DB to mimic a replicated state? Someone please edify me on this.

    B) Second: do I really have to publish/replicate this DB to exec this SP and fix this? Sounds crazy. Or is there some T-SQL that can put it in a published state, exec the proc, and be on my way?

    C) Third: if I do indeed have to publish this database to exec the SP to release this unneeded, misflagged log and get my .ldf file and backups back on track, how do I publish the database without the online host that it is asking for? I don't generally do this kind of database administration and need some guidance.

    Sorry if this is too verbose, but just voicing the question helps me clarify it... Thank you in advance for your help.
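
    On question B: the workaround usually cited for this situation does not require setting up real replication. It temporarily flags the database as published, runs sp_repldone, then clears the flag. A T-SQL sketch (assumes sysadmin rights and that the replication components are installed; the database name and backup path are hypothetical, so verify against your SQL Server version first):

        -- temporarily mark the database as published (no actual publication is created)
        EXEC sp_replicationdboption @dbname = N'MyDatabase',
             @optname = N'publish', @value = N'true';

        USE MyDatabase;
        -- mark all pending transactions as replicated so the log can truncate
        EXEC sp_repldone @xactid = NULL, @xact_segno = NULL,
             @numtrans = 0, @time = 0, @reset = 1;

        -- clear the publish flag again
        EXEC sp_replicationdboption @dbname = N'MyDatabase',
             @optname = N'publish', @value = N'false';

        -- a log backup should now be able to truncate the log
        BACKUP LOG MyDatabase TO DISK = N'C:\Backups\MyDatabase_log.trn';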

    Read the article

  • EJB 2.0 deployment issues on JBoss 5.1

    - by Ravi
    I am deploying an EAR application on JBoss 5.1.0 and am facing some issues. I have two EARs: one I copied to the deploy folder, and the other to deploy-hasingleton. The EAR in deploy-hasingleton is throwing some errors. When I searched Google I came to know that there is some issue with EJB 2.x on JBoss 5.1, but I was not able to find the solution. Below is the log:

        profileservice-secured.jar
        11:16:17,162 INFO [JBossASKernel] installing bean: jboss.j2ee:jar=profileservice-secured.jar,name=SecureManagementView,service=EJB3
        11:16:17,162 INFO [JBossASKernel]   with dependencies:
        11:16:17,162 INFO [JBossASKernel]   and demands:
        11:16:17,162 INFO [JBossASKernel]     jboss.ejb:service=EJBTimerService
        11:16:17,162 INFO [JBossASKernel]   and supplies:
        11:16:17,162 INFO [JBossASKernel]     jndi:SecureManagementView/remote-org.jboss.deployers.spi.management.ManagementView
        11:16:17,162 INFO [JBossASKernel]     Class:org.jboss.deployers.spi.management.ManagementView
        11:16:17,162 INFO [JBossASKernel]     jndi:SecureManagementView/remote
        11:16:17,162 INFO [JBossASKernel] Added bean(jboss.j2ee:jar=profileservice-secured.jar,name=SecureManagementView,service=EJB3) to KernelDeployment of: profileservice-secured.jar
        11:16:17,162 INFO [EJB3EndpointDeployer] Deploy AbstractBeanMetaData@17cabbb{name=jboss.j2ee:jar=profileservice-secured.jar,name=SecureProfileService,service=EJB3_endpoint bean=org.jboss.ejb3.endpoint.deployers.impl.EndpointImpl properties=[container] constructor=null autowireCandidate=true}
        11:16:17,162 INFO [EJB3EndpointDeployer] Deploy AbstractBeanMetaData@1fedd5c{name=jboss.j2ee:jar=profileservice-secured.jar,name=SecureDeploymentManager,service=EJB3_endpoint bean=org.jboss.ejb3.endpoint.deployers.impl.EndpointImpl properties=[container] constructor=null autowireCandidate=true}
        11:16:17,162 INFO [EJB3EndpointDeployer] Deploy AbstractBeanMetaData@1ef4b31{name=jboss.j2ee:jar=profileservice-secured.jar,name=SecureManagementView,service=EJB3_endpoint bean=org.jboss.ejb3.endpoint.deployers.impl.EndpointImpl properties=[container] constructor=null autowireCandidate=true}
        11:16:17,833 INFO [SessionSpecContainer] Starting jboss.j2ee:jar=profileservice-secured.jar,name=SecureDeploymentManager,service=EJB3
        11:16:17,833 INFO [EJBContainer] STARTED EJB: org.jboss.profileservice.ejb.SecureDeploymentManager ejbName: SecureDeploymentManager
        11:16:18,066 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:
            SecureDeploymentManager/remote - EJB3.x Default Remote Business Interface
            SecureDeploymentManager/remote-org.jboss.deployers.spi.management.deploy.DeploymentManager - EJB3.x Remote Business Interface
        11:16:18,129 INFO [SessionSpecContainer] Starting jboss.j2ee:jar=profileservice-secured.jar,name=SecureManagementView,service=EJB3
        11:16:18,129 INFO [EJBContainer] STARTED EJB: org.jboss.profileservice.ejb.SecureManagementView ejbName: SecureManagementView
        11:16:18,160 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:
            SecureManagementView/remote - EJB3.x Default Remote Business Interface
            SecureManagementView/remote-org.jboss.deployers.spi.management.ManagementView - EJB3.x Remote Business Interface
        11:16:18,206 INFO [SessionSpecContainer] Starting jboss.j2ee:jar=profileservice-secured.jar,name=SecureProfileService,service=EJB3
        11:16:18,206 INFO [EJBContainer] STARTED EJB: org.jboss.profileservice.ejb.SecureProfileServiceBean ejbName: SecureProfileService
        11:16:18,238 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:
            SecureProfileService/remote - EJB3.x Default Remote Business Interface
            SecureProfileService/remote-org.jboss.profileservice.spi.ProfileService - EJB3.x Remote Business Interface
        11:16:18,534 INFO [TomcatDeployment] deploy, ctxPath=/admin-console
        11:16:18,612 INFO [config] Initializing Mojarra (1.2_12-b01-FCS) for context '/admin-console'
        11:16:21,759 INFO [TomcatDeployment] deploy, ctxPath=/
        11:16:21,853 INFO [TomcatDeployment] deploy, ctxPath=/jmx-console
        11:16:21,993 INFO [JBossASKernel] Created KernelDeployment for: hapi-0.5.jar
        11:16:21,993 INFO [JBossASKernel] installing bean: jboss.j2ee:ear=jca-ear-1.3-SNAPSHOT.ear,jar=hapi-0.5.jar,name=hapi-0.5,service=EJB3
        11:16:21,993 INFO [JBossASKernel]   with dependencies:
        11:16:21,993 INFO [JBossASKernel]   and demands:
        11:16:21,993 INFO [JBossASKernel]   and supplies:
        11:16:21,993 INFO [JBossASKernel] Added bean(jboss.j2ee:ear=jca-ear-1.3-SNAPSHOT.ear,jar=hapi-0.5.jar,name=hapi-0.5,service=EJB3) to KernelDeployment of: hapi-0.5.jar
        11:16:23,302 INFO [ClientENCInjectionContainer] STARTED CLIENT ENC CONTAINER: hapi-0.5
        11:16:23,473 INFO [SystemEventService] NODE_STARTED on node [HCA-5C1P1BS]
        11:16:23,489 INFO [AbstractConnector] [aware] connector started
        11:16:23,536 INFO [AbstractConnector] [datacaptor] connector started
        11:16:23,536 INFO [AbstractConnector] [intellivue] connector started
        11:16:23,972 ERROR [ProfileServiceBootstrap] Failed to load profile: Summary of incomplete deployments (SEE PREVIOUS ERRORS FOR DETAILS):

        DEPLOYMENTS MISSING DEPENDENCIES:
          Deployment "gehc.com:service=KernelServiceMBean" is missing the following dependencies:
            Dependency "jboss.j2ee:module=kernel-ejb-1.3-SNAPSHOT.jar,service=EjbModule" (should be in state "Create", but is actually in state " NOT FOUND Depends on 'jboss.j2ee:module=kernel-ejb-1.3-SNAPSHOT.jar,service=EjbModule' ")
          Deployment "jboss.j2ee:module="kernel-ejb-1.3-SNAPSHOT.jar",service=EjbModule" is missing the following dependencies:
            Dependency "gehc.com:service=KernelServiceMBean" (should be in state "Create", but is actually in state "Configured")

        DEPLOYMENTS IN ERROR:
          Deployment "jboss.j2ee:module=kernel-ejb-1.3-SNAPSHOT.jar,service=EjbModule" is in error due to the following reason(s):
            ** NOT FOUND Depends on 'jboss.j2ee:module=kernel-ejb-1.3-SNAPSHOT.jar,service=EjbModule' **

        11:16:24,003 INFO [Http11Protocol] Starting Coyote HTTP/1.1 on http-127.0.0.1-8080
        11:16:24,034 INFO [AjpProtocol] Starting Coyote AJP/1.3 on ajp-127.0.0.1-8009
        11:16:24,050 INFO [ServerImpl] JBoss (Microcontainer) [5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221053)] Started in 1m:49s:575ms

    The errors are the DEPLOYMENTS MISSING DEPENDENCIES / DEPLOYMENTS IN ERROR sections above (originally marked in bold); note the circular dependency between gehc.com:service=KernelServiceMBean and the kernel-ejb EjbModule. Thanks, Ravi S

    Read the article

  • Overflow exception while performing parallel factorization using the .NET Task Parallel Library (TPL)

    - by Aviad P.
    Hello, I'm trying to write a not-so-smart factorization program and trying to do it in parallel using TPL. However, after about 15 minutes of running on a Core 2 Duo machine, I am getting an aggregate exception with an overflow exception inside it. All the entries in the stack trace are part of the .NET framework; the overflow does not come from my code. Any help would be appreciated in figuring out why this happens. Here's the commented code, hopefully it's simple enough to understand:

        class Program
        {
            static List<Tuple<BigInteger, int>> factors = new List<Tuple<BigInteger, int>>();

            static void Main(string[] args)
            {
                BigInteger theNumber = BigInteger.Parse(
                    "653872562986528347561038675107510176501827650178351386656875178" +
                    "568165317809518359617865178659815012571026531984659218451608845" +
                    "719856107834513527");

                Stopwatch sw = new Stopwatch();
                bool isComposite = false;

                sw.Start();
                do
                {
                    /* Print out the number we are currently working on. */
                    Console.WriteLine(theNumber);

                    /* Find a factor, stop when at least one is found
                       (using the Any operator). */
                    isComposite = Range(theNumber)
                        .AsParallel()
                        .Any(x => CheckAndStoreFactor(theNumber, x));

                    /* Of the factors found, take the one with the lowest base. */
                    var factor = factors.OrderBy(x => x.Item1).First();
                    Console.WriteLine(factor);

                    /* Divide the number by the factor. */
                    theNumber = BigInteger.Divide(
                        theNumber,
                        BigInteger.Pow(factor.Item1, factor.Item2));

                    /* Clear the discovered factors cache, and keep looking. */
                    factors.Clear();
                } while (isComposite);
                sw.Stop();

                Console.WriteLine(isComposite + " " + sw.Elapsed);
            }

            static IEnumerable<BigInteger> Range(BigInteger squareOfTarget)
            {
                BigInteger two = BigInteger.Parse("2");
                BigInteger element = BigInteger.Parse("3");

                while (element * element < squareOfTarget)
                {
                    yield return element;
                    element = BigInteger.Add(element, two);
                }
            }

            static bool CheckAndStoreFactor(BigInteger candidate, BigInteger factor)
            {
                BigInteger remainder, dividend = candidate;
                int exponent = 0;

                do
                {
                    dividend = BigInteger.DivRem(dividend, factor, out remainder);
                    if (remainder.IsZero)
                    {
                        exponent++;
                    }
                } while (remainder.IsZero);

                if (exponent > 0)
                {
                    lock (factors)
                    {
                        factors.Add(Tuple.Create(factor, exponent));
                    }
                }

                return exponent > 0;
            }
        }

    Here's the exception thrown:

        Unhandled Exception: System.AggregateException: One or more errors occurred. ---> System.OverflowException: Arithmetic operation resulted in an overflow.
           at System.Linq.Parallel.PartitionedDataSource`1.ContiguousChunkLazyEnumerator.MoveNext(T& currentElement, Int32& currentKey)
           at System.Linq.Parallel.AnyAllSearchOperator`1.AnyAllSearchOperatorEnumerator`1.MoveNext(Boolean& currentElement, Int32& currentKey)
           at System.Linq.Parallel.StopAndGoSpoolingTask`2.SpoolingWork()
           at System.Linq.Parallel.SpoolingTaskBase.Work()
           at System.Linq.Parallel.QueryTask.BaseWork(Object unused)
           at System.Linq.Parallel.QueryTask.<.cctor>b__0(Object o)
           at System.Threading.Tasks.Task.InnerInvoke()
           at System.Threading.Tasks.Task.Execute()
           --- End of inner exception stack trace ---
           at System.Linq.Parallel.QueryTaskGroupState.QueryEnd(Boolean userInitiatedDispose)
           at System.Linq.Parallel.SpoolingTask.SpoolStopAndGo[TInputOutput,TIgnoreKey](QueryTaskGroupState groupState, PartitionedStream`2 partitions, SynchronousChannel`1[] channels, TaskScheduler taskScheduler)
           at System.Linq.Parallel.DefaultMergeHelper`2.System.Linq.Parallel.IMergeHelper<TInputOutput>.Execute()
           at System.Linq.Parallel.MergeExecutor`1.Execute[TKey](PartitionedStream`2 partitions, Boolean ignoreOutput, ParallelMergeOptions options, TaskScheduler taskScheduler, Boolean isOrdered, CancellationState cancellationState, Int32 queryId)
           at System.Linq.Parallel.PartitionedStreamMerger`1.Receive[TKey](PartitionedStream`2 partitionedStream)
           at System.Linq.Parallel.AnyAllSearchOperator`1.WrapPartitionedStream[TKey](PartitionedStream`2 inputStream, IPartitionedStreamRecipient`1 recipient, Boolean preferStriping, QuerySettings settings)
           at System.Linq.Parallel.UnaryQueryOperator`2.UnaryQueryOperatorResults.ChildResultsRecipient.Receive[TKey](PartitionedStream`2 inputStream)
           at System.Linq.Parallel.ScanQueryOperator`1.ScanEnumerableQueryOperatorResults.GivePartitionedStream(IPartitionedStreamRecipient`1 recipient)
           at System.Linq.Parallel.UnaryQueryOperator`2.UnaryQueryOperatorResults.GivePartitionedStream(IPartitionedStreamRecipient`1 recipient)
           at System.Linq.Parallel.QueryOperator`1.GetOpenedEnumerator(Nullable`1 mergeOptions, Boolean suppressOrder, Boolean forEffect, QuerySettings querySettings)
           at System.Linq.Parallel.QueryOpeningEnumerator`1.OpenQuery()
           at System.Linq.Parallel.QueryOpeningEnumerator`1.MoveNext()
           at System.Linq.Parallel.AnyAllSearchOperator`1.Aggregate()
           at System.Linq.ParallelEnumerable.Any[TSource](ParallelQuery`1 source, Func`2 predicate)
           at PFact.Program.Main(String[] args) in d:\myprojects\PFact\PFact\Program.cs:line 34

    Any help would be appreciated. Thanks!
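
    A plausible reading of the trace: PLINQ's partitioner tracks element positions in an Int32 (note the Int32& currentKey parameter in ContiguousChunkLazyEnumerator.MoveNext), and for a 160-digit target the Range enumerable yields far more than int.MaxValue candidates, so the position counter itself overflows. A sketch of a workaround under that assumption (not a confirmed fix) is to search in bounded chunks, so no single PLINQ query ever enumerates more than int.MaxValue elements:

        // replaces the single unbounded query in Main;
        // ChunkSize is an illustrative choice, not a tuned value
        const int ChunkSize = 10000000;   // odd candidates per PLINQ query
        bool isComposite = false;
        BigInteger start = 3;
        while (!isComposite && start * start < theNumber)
        {
            BigInteger chunkStart = start;   // capture a copy for the lambda
            isComposite = Enumerable.Range(0, ChunkSize)
                .AsParallel()
                .Any(i => CheckAndStoreFactor(theNumber, chunkStart + 2 * i));
            start += 2 * ChunkSize;
        }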

    Read the article

  • Performance surprise with "as" and nullable types

    - by Jon Skeet
    I'm just revising chapter 4 of C# in Depth which deals with nullable types, and I'm adding a section about using the "as" operator, which allows you to write: object o = ...; int? x = o as int?; if (x.HasValue) { ... // Use x.Value in here } I thought this was really neat, and that it could improve performance over the C# 1 equivalent, using "is" followed by a cast - after all, this way we only need to ask for dynamic type checking once, and then a simple value check. This appears not to be the case, however. I've included a sample test app below, which basically sums all the integers within an object array - but the array contains a lot of null references and string references as well as boxed integers. The benchmark measures the code you'd have to use in C# 1, the code using the "as" operator, and just for kicks a LINQ solution. To my astonishment, the C# 1 code is 20 times faster in this case - and even the LINQ code (which I'd have expected to be slower, given the iterators involved) beats the "as" code. Is the .NET implementation of isinst for nullable types just really slow? Is it the additional unbox.any that causes the problem? Is there another explanation for this? At the moment it feels like I'm going to have to include a warning against using this in performance sensitive situations... Results: Cast: 10000000 : 121 As: 10000000 : 2211 LINQ: 10000000 : 2143 Code: using System; using System.Diagnostics; using System.Linq; class Test { const int Size = 30000000; static void Main() { object[] values = new object[Size]; for (int i = 0; i < Size - 2; i += 3) { values[i] = null; values[i+1] = ""; values[i+2] = 1; } FindSumWithCast(values); FindSumWithAs(values); FindSumWithLinq(values); } static void FindSumWithCast(object[] values) { Stopwatch sw = Stopwatch.StartNew(); int sum = 0; foreach (object o in values) { if (o is int) { int x = (int) o; sum += x; } } sw.Stop(); Console.WriteLine("Cast: {0} : {1}", sum, (long) sw.ElapsedMilliseconds); } static void FindSumWithAs(object[] values) { Stopwatch sw = Stopwatch.StartNew(); int sum = 0; foreach (object o in values) { int? x = o as int?; if (x.HasValue) { sum += x.Value; } } sw.Stop(); Console.WriteLine("As: {0} : {1}", sum, (long) sw.ElapsedMilliseconds); } static void FindSumWithLinq(object[] values) { Stopwatch sw = Stopwatch.StartNew(); int sum = values.OfType<int>().Sum(); sw.Stop(); Console.WriteLine("LINQ: {0} : {1}", sum, (long) sw.ElapsedMilliseconds); } }

    Read the article

  • Problem with socket communication between C# and Flex

    - by Chris Lee
    Hi all, I am implementing a simulated b/s stock data system, using Flex and C# for the client and server sides. I found that Flash has a security policy, and I handled the policy-file-request in my server code, but it doesn't seem to work: the code gets stuck at "socket.Receive(b)" after the connection. I've tried sending a message from the client in the connection handler, and in that case the server receives the message correctly. But the auto-generated "policy-file-request" is never received, and the client gets no data from the server. Here are my code snippets. My ActionScript code: public class StockClient extends Sprite { private var hostName:String = "192.168.84.103"; private var port:uint = 55555; private var socket:XMLSocket; public function StockClient() { socket = new XMLSocket(); configureListeners(socket); socket.connect(hostName, port); } public function send(data:Object) : void{ socket.send(data); } private function configureListeners(dispatcher:IEventDispatcher):void { dispatcher.addEventListener(Event.CLOSE, closeHandler); dispatcher.addEventListener(Event.CONNECT, connectHandler); dispatcher.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler); dispatcher.addEventListener(ProgressEvent.PROGRESS, progressHandler); dispatcher.addEventListener(SecurityErrorEvent.SECURITY_ERROR, securityErrorHandler); dispatcher.addEventListener(ProgressEvent.SOCKET_DATA, dataHandler); } private function closeHandler(event:Event):void { trace("closeHandler: " + event); } private function connectHandler(event:Event):void { trace("connectHandler: " + event); //following testing message can be received, but client can't invoke data handler //send("<policy-file-request/>"); } private function dataHandler(event:ProgressEvent):void { //never fired trace("dataHandler: " + event); } private function ioErrorHandler(event:IOErrorEvent):void { trace("ioErrorHandler: " + event); } private function progressHandler(event:ProgressEvent):void { trace("progressHandler loaded:" + event.bytesLoaded + " total: " + event.bytesTotal); } private function securityErrorHandler(event:SecurityErrorEvent):void { trace("securityErrorHandler: " + event); } } My C# code: const int PORT_NUMBER = 55555; const String BEGIN_REQUEST = "begin"; const String END_REQUEST = "end"; const String POLICY_REQUEST = "<policy-file-request/>\u0000"; const String POLICY_FILE = "<?xml version=\"1.0\"?>\n" + "<!DOCTYPE cross-domain-policy SYSTEM \"http://www.adobe.com/xml/dtds/cross-domain-policy.dtd\">\n" + "<cross-domain-policy> \n" + " <allow-access-from domain=\"*\" to-ports=\"55555\"/> \n" + "</cross-domain-policy>\u0000"; ................ 
private void startListening() { provider = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); provider.Bind(new IPEndPoint(IPAddress.Parse("192.168.84.103"), PORT_NUMBER)); provider.Listen(10); isListened = true; while (isListened) { Socket socket = provider.Accept(); Console.WriteLine("connect!"); byte[] b = new byte[1024]; int receiveLength = 0; try { // the code blocks at this statement receiveLength = socket.Receive(b); } catch (Exception e) { Debug.WriteLine(e.ToString()); } String request = System.Text.Encoding.UTF8.GetString(b, 0, receiveLength); Console.WriteLine("request:"+request); if (request == POLICY_REQUEST) { socket.Send(Encoding.UTF8.GetBytes(POLICY_FILE)); Console.WriteLine("response:" + POLICY_FILE); } else if (request == END_REQUEST) { Dispose(socket); } else { StartSocket(socket); break; } } } Sorry for the long code; can someone please help with it? Thanks a million.
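    One detail worth checking (an assumption drawn from the constants above, not a confirmed diagnosis): Flash terminates the policy request with a NUL byte, and a single Receive call may return before the whole request, or without the terminator, so a straight == against a "\u0000"-suffixed constant can fail. A sketch that reads up to the terminator before comparing:

        // Sketch: accumulate bytes until the NUL terminator Flash appends,
        // then compare the decoded text without the terminator.
        static string ReadRequest(Socket socket)
        {
            var bytes = new List<byte>();
            byte[] one = new byte[1];
            while (socket.Receive(one, 0, 1, SocketFlags.None) > 0 && one[0] != 0)
            {
                bytes.Add(one[0]);
            }
            return Encoding.UTF8.GetString(bytes.ToArray());
        }

        // Usage in the accept loop:
        // if (ReadRequest(socket) == "<policy-file-request/>")
        //     socket.Send(Encoding.UTF8.GetBytes(POLICY_FILE));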

    Read the article

  • How can I properly implement inetcpl.cpl as an external dll?

    - by Kyt
    I have the following 2 sets of code, both of which produce the same results: using System.Linq; using System.Runtime.InteropServices; using System.Text; using System.Threading.Tasks; using System.Windows.Forms; namespace ResetIE { class Program { [DllImport("InetCpl.cpl", SetLastError=true, CharSet=CharSet.Unicode, EntryPoint="ClearMyTracksByProcessW")] public static extern long ClearMyTracksByProcess(IntPtr hwnd, IntPtr hinst, ref TargetHistory lpszCmdLine, FormWindowState nCmdShow); static void Main(string[] args) { TargetHistory th = TargetHistory.CLEAR_TEMPORARY_INTERNET_FILES; ClearMyTracksByProcessW(Process.GetCurrentProcess().Handle, Marshal.GetHINSTANCE(typeof(Program).Module), ref th, FormWindowState.Maximized); Console.WriteLine("Done."); } } and ... static class NativeMethods { [DllImport("kernel32.dll")] public static extern IntPtr LoadLibrary(string dllToLoad); [DllImport("kernel32.dll")] public static extern IntPtr GetProcAddress(IntPtr hModule, string procedureName); [DllImport("kernel32.dll")] public static extern bool FreeLibrary(IntPtr hModule); } public class CallExternalDLL { [UnmanagedFunctionPointer(CallingConvention.Cdecl)] public delegate long ClearMyTracksByProcessW(IntPtr hwnd, IntPtr hinst, ref TargetHistory lpszCmdLine, FormWindowState nCmdShow); public static void Clear_IE_Cache() { IntPtr pDll = NativeMethods.LoadLibrary(@"C:\Windows\System32\inetcpl.cpl"); if (pDll == IntPtr.Zero) { Console.WriteLine("An Error has Occurred."); } IntPtr pAddressOfFunctionToCall = NativeMethods.GetProcAddress(pDll, "ClearMyTracksByProcessW"); if (pAddressOfFunctionToCall == IntPtr.Zero) { Console.WriteLine("Function Not Found."); } ClearMyTracksByProcessW cmtbp = (ClearMyTracksByProcessW)Marshal.GetDelegateForFunctionPointer(pAddressOfFunctionToCall, typeof(ClearMyTracksByProcessW)); TargetHistory q = TargetHistory.CLEAR_TEMPORARY_INTERNET_FILES; long result = cmtbp(Process.GetCurrentProcess().Handle, Marshal.GetHINSTANCE(typeof(ClearMyTracksByProcessW).Module), ref q, FormWindowState.Normal); } } both use the following Enum: public enum TargetHistory { CLEAR_ALL = 0xFF, CLEAR_ALL_WITH_ADDONS = 0x10FF, CLEAR_HISTORY = 0x1, CLEAR_COOKIES = 0x2, CLEAR_TEMPORARY_INTERNET_FILES = 0x8, CLEAR_FORM_DATA = 0x10, CLEAR_PASSWORDS = 0x20 } Both methods of doing this compile and run just fine, offering no errors, but both churn endlessly never returning from their work. The PInvoke code was ported from the following VB, which was fairly difficult to track down: Option Explicit Private Enum TargetHistory CLEAR_ALL = &HFF& CLEAR_ALL_WITH_ADDONS = &H10FF& CLEAR_HISTORY = &H1& CLEAR_COOKIES = &H2& CLEAR_TEMPORARY_INTERNET_FILES = &H8& CLEAR_FORM_DATA = &H10& CLEAR_PASSWORDS = &H20& End Enum Private Declare Function ClearMyTracksByProcessW Lib "InetCpl.cpl" _ (ByVal hwnd As OLE_HANDLE, _ ByVal hinst As OLE_HANDLE, _ ByRef lpszCmdLine As Byte, _ ByVal nCmdShow As VbAppWinStyle) As Long Private Sub Command1_Click() Dim b() As Byte Dim o As OptionButton For Each o In Option1 If o.Value Then b = o.Tag ClearMyTracksByProcessW Me.hwnd, App.hInstance, b(0), vbNormalFocus Exit For End If Next End Sub Private Sub Form_Load() Command1.Caption = "??" Option1(0).Caption = "?????????????" Option1(0).Tag = CStr(CLEAR_TEMPORARY_INTERNET_FILES) Option1(1).Caption = "Cookie" Option1(1).Tag = CStr(CLEAR_COOKIES) Option1(2).Caption = "??" Option1(2).Tag = CStr(CLEAR_HISTORY) Option1(3).Caption = "???? ???" Option1(3).Tag = CStr(CLEAR_HISTORY) Option1(4).Caption = "?????" 
Option1(4).Tag = CStr(CLEAR_PASSWORDS) Option1(5).Caption = "?????" Option1(5).Tag = CStr(CLEAR_ALL) Option1(2).Value = True End Sub The question is simply what am I doing wrong? I need to clear the internet cache, and would prefer to use this method as I know it does what I want it to when it works (rundll32 inetcpl.cpl,ClearMyTracksByProcess 8 works fine). I've tried running both as normal user and admin to no avail. This project is written using C# in VS2012 and compiled against .NET3.5 (must remain at 3.5 due to client restrictions)
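    For what it's worth, the working VB declaration passes lpszCmdLine as the bytes of a string ("8", via CStr), and rundll32-style entry points conventionally receive the command line as text: (HWND, HINSTANCE, LPWSTR, int). If ClearMyTracksByProcessW follows that convention, an assumption, but one consistent with the rundll32 invocation that works, then marshaling the enum with ref hands the function a pointer to an integer where it expects a string, which would explain the hang. A hedged sketch:

        // Sketch, assuming the rundll32 entry-point convention
        // void CALLBACK FnW(HWND hwnd, HINSTANCE hinst, LPWSTR lpszCmdLine, int nCmdShow):
        [DllImport("InetCpl.cpl", CharSet = CharSet.Unicode,
                   EntryPoint = "ClearMyTracksByProcessW")]
        static extern void ClearMyTracksByProcess(
            IntPtr hwnd, IntPtr hinst, string lpszCmdLine, int nCmdShow);

        // Usage: pass the flag as text, exactly as
        // "rundll32 inetcpl.cpl,ClearMyTracksByProcess 8" does:
        // ClearMyTracksByProcess(IntPtr.Zero, IntPtr.Zero,
        //     ((int)TargetHistory.CLEAR_TEMPORARY_INTERNET_FILES).ToString(), 1);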

    Read the article

  • Why does C# thread die?

    - by JackN
    This is my 1st C# project so I may be doing something obviously improper in the code below. I am using .NET, WinForms (I think), and this is a desktop application until I get the bugs out. UpdateGui() uses Invoke((MethodInvoker)delegate to update various GUI controls based on received serial data and sends a GetStatus() command out the serial port 4 times a second. Thread Read() reads the response from serial port whenever it arrives which should be near immediate. SerialPortFixer is a SerialPort IOException Workaround in C# I found at http://zachsaw.blogspot.com/2010/07/serialport-ioexception-workaround-in-c.html. After one or both threads die I'll see something like The thread 0x1288 has exited with code 0 (0x0). in the debug code output. Why do UpdateGui() and/or Read() eventually die? public partial class UpdateStatus : Form { private readonly byte[] Command = new byte[32]; private readonly byte[] Status = new byte[32]; readonly Thread readThread; private static readonly Mutex commandMutex = new Mutex(); private static readonly Mutex statusMutex = new Mutex(); ... public UpdateStatus() { InitializeComponent(); SerialPortFixer.Execute("COM2"); if (serialPort1.IsOpen) { serialPort1.Close(); } try { serialPort1.Open(); } catch (Exception e) { labelWarning.Text = LOST_COMMUNICATIONS + e; labelStatus.Text = LOST_COMMUNICATIONS + e; labelWarning.Visible = true; } readThread = new Thread(Read); readThread.Start(); new Timer(UpdateGui, null, 0, 250); } static void ProcessStatus(byte[] status) { Status.State = (State) status[4]; Status.Speed = status[6]; // MSB Status.Speed *= 256; Status.Speed += status[5]; var Speed = Status.Speed/GEAR_RATIO; Status.Speed = (int) Speed; ... } public void Read() { while (serialPort1 != null) { try { serialPort1.Read(Status, 0, 1); if (Status[0] != StartCharacter[0]) continue; serialPort1.Read(Status, 1, 1); if (Status[1] != StartCharacter[1]) continue; serialPort1.Read(Status, 2, 1); if (Status[2] != (int)Command.GetStatus) continue; serialPort1.Read(Status, 3, 1); ... statusMutex.WaitOne(); ProcessStatus(Status); Status.update = true; statusMutex.ReleaseMutex(); } catch (Exception e) { Console.WriteLine(@"ERROR! Read() " + e); } } } public void GetStatus() { const int parameterLength = 0; // For GetStatus statusMutex.WaitOne(); Status.update = false; statusMutex.ReleaseMutex(); commandMutex.WaitOne(); if (!SendCommand(Command.GetStatus, parameterLength)) { Console.WriteLine(@"ERROR! SendCommand(GetStatus)"); } commandMutex.ReleaseMutex(); } private void UpdateGui(object x) { try { Invoke((MethodInvoker)delegate { Text = DateTime.Now.ToLongTimeString(); statusMutex.WaitOne(); if (Status.update) { if (Status.Speed > progressBarSpeed.Maximum) { Status.Speed = progressBarSpeed.Maximum; } progressBarSpeed.Value = Status.Speed; labelSpeed.Text = Status.Speed + RPM; ... } else { labelWarning.Text = LOST_COMMUNICATIONS; labelStatus.Text = LOST_COMMUNICATIONS; labelWarning.Visible = true; } statusMutex.ReleaseMutex(); GetStatus(); }); } catch (Exception e) { Console.WriteLine(@"ERROR! UpdateGui() " + e); } } }
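    One hedged guess from the constructor above: new Timer(UpdateGui, null, 0, 250) is never assigned to a field, and a System.Threading.Timer that nothing references is eligible for garbage collection, after which its callbacks silently stop, which would look exactly like a thread quietly dying. Holding the timer in a field rules that out:

        // Sketch: keep a live reference so the GC cannot collect the timer.
        private System.Threading.Timer guiTimer;

        public UpdateStatus()
        {
            // ... existing initialization ...
            readThread = new Thread(Read);
            readThread.Start();
            guiTimer = new System.Threading.Timer(UpdateGui, null, 0, 250);
        }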

    Read the article

  • C# SQLite file import prevent duplicates

    - by jakesankey
    Hi, I am attempting to get a directory (which is ever-growing) full of .txt comma delimited files to import into my SQLite db. I now have all of the files importing ok, however I need to have some way of excluding the files that have been previously added to db. I have a column in the db called FileName where the name and extension are stored next to each record from each file. Now I need to say 'If the code finds XXX.txt and XXX.txt is already in db, then skip this file'. Can I somehow add this logic to the getfiles command or is there another easy way? using (SQLiteCommand insertCommand = con.CreateCommand()) { SQLiteCommand cmdd = con.CreateCommand(); string[] files = Directory.GetFiles(@"C:\Documents and Settings\js91162\Desktop\", "R303717*.txt*", SearchOption.AllDirectories); foreach (string file in files) { string FileNameExt1 = Path.GetFileName(file); cmdd.CommandText = @" SELECT COUNT(*) FROM Import WHERE FileName = @FileExt;"; cmdd.Parameters.Add(new SQLiteParameter("@FileExt", FileNameExt1)); int count = Convert.ToInt32(cmdd.ExecuteScalar()); //int count = ((IConvertible)insertCommand.ExecuteScalar().ToInt32(null)); if (count == 0) { Console.WriteLine("Parsing CMM data for SQL database... Please wait."); insertCommand.CommandText = @" INSERT INTO Import (FeatType, FeatName, Value, Actual, Nominal, Dev, TolMin, TolPlus, OutOfTol, PartNumber, CMMNumber, Date, FileName) VALUES (@FeatType, @FeatName, @Value, @Actual, @Nominal, @Dev, @TolMin, @TolPlus, @OutOfTol, @PartNumber, @CMMNumber, @Date, @FileName);"; insertCommand.Parameters.Add(new SQLiteParameter("@FeatType", DbType.String)); insertCommand.Parameters.Add(new SQLiteParameter("@FeatName", DbType.String)); insertCommand.Parameters.Add(new SQLiteParameter("@Value", DbType.String)); insertCommand.Parameters.Add(new SQLiteParameter("@Actual", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@Nominal", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@Dev", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@TolMin", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@TolPlus", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@OutOfTol", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@Comment", DbType.String)); string FileNameExt = Path.GetFileName(file); string RNumber = Path.GetFileNameWithoutExtension(file); string RNumberE = RNumber.Split('_')[0]; string RNumberD = RNumber.Split('_')[1]; string RNumberDate = RNumber.Split('_')[2]; DateTime dateTime = DateTime.ParseExact(RNumberDate, "yyyyMMdd", Thread.CurrentThread.CurrentCulture); string cmmDate = dateTime.ToString("dd-MMM-yyyy"); string[] lines = File.ReadAllLines(file); bool parse = false; foreach (string tmpLine in lines) { string line = tmpLine.Trim(); if (!parse && line.StartsWith("Feat. Type,")) { parse = true; continue; } if (!parse || string.IsNullOrEmpty(line)) { continue; } Console.WriteLine(tmpLine); foreach (SQLiteParameter parameter in insertCommand.Parameters) { parameter.Value = null; } string[] values = line.Split(new[] { ',' }); for (int i = 0; i < values.Length - 1; i++) { SQLiteParameter param = insertCommand.Parameters[i]; if (param.DbType == DbType.Decimal) { decimal value; param.Value = decimal.TryParse(values[i], out value) ? 
value : 0; } else { param.Value = values[i]; } } insertCommand.Parameters.Add(new SQLiteParameter("@PartNumber", RNumberE)); insertCommand.Parameters.Add(new SQLiteParameter("@CMMNumber", RNumberD)); insertCommand.Parameters.Add(new SQLiteParameter("@Date", cmmDate)); insertCommand.Parameters.Add(new SQLiteParameter("@FileName", FileNameExt)); // insertCommand.ExecuteNonQuery(); } } } Console.WriteLine("CMM data successfully imported to SQL database..."); } con.Close(); }
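    Rather than querying COUNT(*) once per file, the already-imported check can be done once up front: read the distinct FileName values into a HashSet and filter the file list against it. A sketch (table and column names taken from the code above; needs using System.Linq):

        // Sketch: one query for all known file names, then an in-memory filter.
        var imported = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
        using (SQLiteCommand cmd = con.CreateCommand())
        {
            cmd.CommandText = "SELECT DISTINCT FileName FROM Import;";
            using (SQLiteDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    imported.Add(reader.GetString(0));
                }
            }
        }

        string[] files = Directory.GetFiles(
            @"C:\Documents and Settings\js91162\Desktop\",
            "R303717*.txt*", SearchOption.AllDirectories);

        foreach (string file in files.Where(f => !imported.Contains(Path.GetFileName(f))))
        {
            // ... parse and insert exactly as before ...
        }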

    Read the article

  • Linq2SQL vs NHibernate performance (have I gone mad?)

    - by HeavyWave
    I have written the following tests to compare performance of Linq2SQL and NHibernate, and I find the results to be somewhat strange. Mappings are straightforward and identical for both. Both are running against a live DB. I'm not deleting Campaigns in the case of Linq, but that shouldn't affect performance by more than 10 ms. Linq: [Test] public void Test1000ReadsWritesToAgentStateLinqPrecompiled() { Stopwatch sw = new Stopwatch(); Stopwatch swIn = new Stopwatch(); sw.Start(); for (int i = 0; i < 1000; i++) { swIn.Reset(); swIn.Start(); ReadWriteAndDeleteAgentStateWithLinqPrecompiled(); swIn.Stop(); Console.WriteLine("Run ReadWriteAndDeleteAgentState: " + swIn.ElapsedMilliseconds + " ms"); } sw.Stop(); Console.WriteLine("Total Time: " + sw.ElapsedMilliseconds + " ms"); Console.WriteLine("Average time to execute queries: " + sw.ElapsedMilliseconds / 1000 + " ms"); } private static readonly Func<AgentDesktop3DataContext, int, EntityModel.CampaignDetail> GetCampaignById = CompiledQuery.Compile<AgentDesktop3DataContext, int, EntityModel.CampaignDetail>( (ctx, sessionId) => (from cd in ctx.CampaignDetails join a in ctx.AgentCampaigns on cd.CampaignDetailId equals a.CampaignDetailId where a.AgentStateId == sessionId select cd).FirstOrDefault()); private void ReadWriteAndDeleteAgentStateWithLinqPrecompiled() { int id = 0; using (var ctx = new AgentDesktop3DataContext()) { EntityModel.AgentState agentState = new EntityModel.AgentState(); var campaign = new EntityModel.CampaignDetail { CampaignName = "Test" }; var campaignDisposition = new EntityModel.CampaignDisposition { Code = "123" }; campaignDisposition.Description = "abc"; campaign.CampaignDispositions.Add(campaignDisposition); agentState.CallState = 3; campaign.AgentCampaigns.Add(new AgentCampaign { AgentState = agentState }); ctx.CampaignDetails.InsertOnSubmit(campaign); ctx.AgentStates.InsertOnSubmit(agentState); ctx.SubmitChanges(); id = agentState.AgentStateId; } using (var ctx = new AgentDesktop3DataContext()) { var dbAgentState = ctx.GetAgentStateById(id); Assert.IsNotNull(dbAgentState); Assert.AreEqual(dbAgentState.CallState, 3); var campaignDetails = GetCampaignById(ctx, id); Assert.AreEqual(campaignDetails.CampaignDispositions[0].Description, "abc"); } using (var ctx = new AgentDesktop3DataContext()) { ctx.DeleteSessionById(id); } } NHibernate (the loop is the same): private void ReadWriteAndDeleteAgentState() { var id = WriteAgentState().Id; StartNewTransaction(); var dbAgentState = agentStateRepository.Get(id); Assert.IsNotNull(dbAgentState); Assert.AreEqual(dbAgentState.CallState, 3); Assert.AreEqual(dbAgentState.Campaigns[0].Dispositions[0].Description, "abc"); var campaignId = dbAgentState.Campaigns[0].Id; agentStateRepository.Delete(dbAgentState); NHibernateSession.Current.Transaction.Commit(); Cleanup(campaignId); NHibernateSession.Current.BeginTransaction(); } Results: NHibernate: Total Time: 9469 ms Average time to execute 13 queries: 9 ms Linq: Total Time: 127200 ms Average time to execute 13 queries: 127 ms Linq lost by 13.5 times! Even with precompiled queries (both read queries are precompiled). This can't be right. Although I expected NHibernate to be faster, this is just too big a difference, considering the mappings are identical and NHibernate actually executes more queries against the DB.
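    Two knobs worth eliminating before trusting the comparison (suggestions, not a diagnosis): Linq to SQL change tracking, which is paid on materialization even for read-only work, and any context construction inside the timed region. For the read-only verification step, something like:

        // Sketch: disable change tracking on a context used purely for reads.
        using (var ctx = new AgentDesktop3DataContext())
        {
            ctx.ObjectTrackingEnabled = false; // reads only: no change tracking
            var dbAgentState = ctx.GetAgentStateById(id);
            var campaignDetails = GetCampaignById(ctx, id);
        }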

    Read the article

  • C# Asynchronous Network IO and OutOfMemoryException

    - by The.Anti.9
    I'm working on a client/server application in C#, and I need to get asynchronous sockets working so I can handle multiple connections at once. Technically it works the way it is now, but I get an OutOfMemoryException after about 3 minutes of running. MSDN says to use a WaitHandle to call WaitOne() after the socket.BeginAccept(), but it doesn't actually let me do that. When I try to do that in the code it says WaitHandle is an abstract class, and I can't instantiate it. I thought maybe I'd try a static reference, but it doesn't have the WaitOne() method, just WaitAll() and WaitAny(). The main problem is that the docs don't give a full code snippet, so you can't actually see where their "wait handler" comes from. It's just a variable called allDone, which also has a Reset() method in the snippet, which a WaitHandle doesn't have. After digging around in the docs, I found some related thing about an AutoResetEvent in the Threading namespace. It has a WaitOne() and a Reset() method. So I tried that around the while(true) { ... socket.BeginAccept( ... ); ... }. Unfortunately this makes it only take one connection at a time. So I'm not really sure where to go. Here's my code: class ServerRunner { private Byte[] data = new Byte[2048]; private int size = 2048; private Socket server; static AutoResetEvent allDone = new AutoResetEvent(false); public ServerRunner() { server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); IPEndPoint iep = new IPEndPoint(IPAddress.Any, 33333); server.Bind(iep); Console.WriteLine("Server initialized.."); } public void Run() { server.Listen(100); Console.WriteLine("Listening..."); while (true) { //allDone.Reset(); server.BeginAccept(new AsyncCallback(AcceptCon), server); //allDone.WaitOne(); } } void AcceptCon(IAsyncResult iar) { Socket oldserver = (Socket)iar.AsyncState; Socket client = oldserver.EndAccept(iar); Console.WriteLine(client.RemoteEndPoint.ToString() + " connected"); byte[] message = Encoding.ASCII.GetBytes("Welcome"); client.BeginSend(message, 0, message.Length, SocketFlags.None, new AsyncCallback(SendData), client); } void SendData(IAsyncResult iar) { Socket client = (Socket)iar.AsyncState; int sent = client.EndSend(iar); client.BeginReceive(data, 0, size, SocketFlags.None, new AsyncCallback(ReceiveData), client); } void ReceiveData(IAsyncResult iar) { Socket client = (Socket)iar.AsyncState; int recv = client.EndReceive(iar); if (recv == 0) { client.Close(); server.BeginAccept(new AsyncCallback(AcceptCon), server); return; } string receivedData = Encoding.ASCII.GetString(data, 0, recv); //process received data here byte[] message2 = Encoding.ASCII.GetBytes("reply"); client.BeginSend(message2, 0, message2.Length, SocketFlags.None, new AsyncCallback(SendData), client); } }
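    The snippet MSDN truncates uses a ManualResetEvent, a concrete class with both Reset() and WaitOne(); the abstract type is WaitHandle, which is why it can't be instantiated. That wait is also the likely source of the OutOfMemoryException: without it, the while (true) loop posts BeginAccept continuously, piling up pending accepts. A sketch of the intended shape, with Set() in the callback so accepts still overlap with connection handling:

        // Sketch of the MSDN accept loop: the callback's Set() releases the
        // loop to post exactly one new BeginAccept per accepted connection.
        static ManualResetEvent allDone = new ManualResetEvent(false);

        public void Run()
        {
            server.Listen(100);
            while (true)
            {
                allDone.Reset();
                server.BeginAccept(new AsyncCallback(AcceptCon), server);
                allDone.WaitOne(); // block until AcceptCon has run
            }
        }

        void AcceptCon(IAsyncResult iar)
        {
            allDone.Set(); // unblock Run() so it can post the next accept
            Socket listener = (Socket)iar.AsyncState;
            Socket client = listener.EndAccept(iar);
            // ... BeginSend/BeginReceive as before ...
        }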

    Read the article

  • Serializing WPF DataTemplates and {Binding Expressions} (from PowerShell?)

    - by Jaykul
    Ok, here's the deal: I have code that works in C#, but when I call it from PowerShell, it fails. I can't quite figure it out, but it's something specific to PowerShell. Here's the relevant code calling the library (assuming you've added a reference ahead of time) from C#: public class Test { [STAThread] public static void Main() { Console.WriteLine( PoshWpf.XamlHelper.RoundTripXaml( "<TextBlock Text=\"{Binding FullName}\" xmlns=\"http://schemas.microsoft.com/winfx/2006/xaml/presentation\"/>" ) ); } } Compiled into an executable, that works fine ... but if you call that method from PowerShell, it returns with no {Binding FullName} for the Text! add-type -path .\PoshWpf.dll [PoshWpf.Test]::Main() I've pasted below the entire code for the library, all wrapped up in a PowerShell Add-Type call so you can just compile it by pasting it into PowerShell (you can leave off the first and last lines if you want to paste it into a new console app in Visual Studio. To output (from PowerShell 2) as an executable, just change the -OutputType parameter to ConsoleApplication and the -OutputAssembly to PoshWpf.exe (or something). Thus, you can see that running the SAME CODE from the executable gives you the correct output. But running the two lines as above or manually calling [PoshWpf.XamlHelper]::RoundTripXaml or [PoshWpf.XamlHelper]::ConvertToXaml from PowerShell just doesn't seem to work at all ... HELP?! Add-Type -TypeDefinition @" using System; using System.ComponentModel; using System.Globalization; using System.Linq; using System.Windows; using System.Windows.Data; using System.Windows.Markup; namespace PoshWpf { public class Test { [STAThread] public static void Main() { Console.WriteLine( PoshWpf.XamlHelper.RoundTripXaml( "<TextBlock Text=\"{Binding FullName}\" xmlns=\"http://schemas.microsoft.com/winfx/2006/xaml/presentation\"/>" ) ); } } public class BindingTypeDescriptionProvider : TypeDescriptionProvider { private static readonly TypeDescriptionProvider _DEFAULT_TYPE_PROVIDER = TypeDescriptor.GetProvider(typeof(Binding)); public BindingTypeDescriptionProvider() : base(_DEFAULT_TYPE_PROVIDER) { } public override ICustomTypeDescriptor GetTypeDescriptor(Type objectType, object instance) { ICustomTypeDescriptor defaultDescriptor = base.GetTypeDescriptor(objectType, instance); return instance == null ? defaultDescriptor : new BindingCustomTypeDescriptor(defaultDescriptor); } } public class BindingCustomTypeDescriptor : CustomTypeDescriptor { public BindingCustomTypeDescriptor(ICustomTypeDescriptor parent) : base(parent) { } public override PropertyDescriptorCollection GetProperties(Attribute[] attributes) { PropertyDescriptor pd; var pdc = new PropertyDescriptorCollection(base.GetProperties(attributes).Cast<PropertyDescriptor>().ToArray()); if ((pd = pdc.Find("Source", false)) != null) { pdc.Add(TypeDescriptor.CreateProperty(typeof(Binding), pd, new Attribute[] { new DefaultValueAttribute("null") })); pdc.Remove(pd); } return pdc; } } public class BindingConverter : ExpressionConverter { public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType) { return (destinationType == typeof(MarkupExtension)) ? 
true : false; } public override object ConvertTo(ITypeDescriptorContext context, CultureInfo culture, object value, Type destinationType) { if (destinationType == typeof(MarkupExtension)) { var bindingExpression = value as BindingExpression; if (bindingExpression == null) throw new Exception(); return bindingExpression.ParentBinding; } return base.ConvertTo(context, culture, value, destinationType); } } public static class XamlHelper { static XamlHelper() { // this is absolutely vital: TypeDescriptor.AddProvider(new BindingTypeDescriptionProvider(), typeof(Binding)); TypeDescriptor.AddAttributes(typeof(BindingExpression), new Attribute[] { new TypeConverterAttribute(typeof(BindingConverter)) }); } public static string RoundTripXaml(string xaml) { return XamlWriter.Save(XamlReader.Parse(xaml)); } public static string ConvertToXaml(object wpf) { return XamlWriter.Save(wpf); } } } "@ -language CSharpVersion3 -reference PresentationCore, PresentationFramework, WindowsBase -OutputType Library -OutputAssembly PoshWpf.dll Again, you can get an executable by just altering the last line like so: "@ -language CSharpVersion3 -reference PresentationCore, PresentationFramework, WindowsBase -OutputType ConsoleApplication -OutputAssembly PoshWpf.exe

    Read the article

  • Entity Framework Generic Repository Error

    - by Jeff Ancel
    I am trying to create a very generic generics repository for my Entity Framework repository that has the basic CRUD statements and uses an Interface. I have hit a brick wall head first and been knocked over. Here is my code, written in a console application, using a Entity Framework Model, with a table named Hurl. Simply trying to pull back the object by its ID. Here is the full application code. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Data.Objects; using System.Linq.Expressions; using System.Reflection; using System.Data.Objects.DataClasses; namespace GenericsPlay { class Program { static void Main(string[] args) { var hs = new HurlRepository(new hurladminEntity()); var hurl = hs.Load<Hurl>(h => h.Id == 1); Console.Write(hurl.ShortUrl); Console.ReadLine(); } } public interface IHurlRepository { T Load<T>(Expression<Func<T, bool>> expression); } public class HurlRepository : IHurlRepository, IDisposable { private ObjectContext _objectContext; public HurlRepository(ObjectContext objectContext) { _objectContext = objectContext; } public ObjectContext ObjectContext { get { return _objectContext; } } private Type GetBaseType(Type type) { Type baseType = type.BaseType; if (baseType != null && baseType != typeof(EntityObject)) { return GetBaseType(type.BaseType); } return type; } private bool HasBaseType(Type type, out Type baseType) { Type originalType = type.GetType(); baseType = GetBaseType(type); return baseType != originalType; } public IQueryable<T> GetQuery<T>() { Type baseType; if (HasBaseType(typeof(T), out baseType)) { return this.ObjectContext.CreateQuery<T>("[" + baseType.Name.ToString() + "]").OfType<T>(); } else { return this.ObjectContext.CreateQuery<T>("[" + typeof(T).Name.ToString() + "]"); } } public T Load<T>(Expression<Func<T, bool>> whereCondition) { return this.GetQuery<T>().Where(whereCondition).First(); } public void Dispose() { if (_objectContext != null) { _objectContext.Dispose(); } } } } Here is the error that I am getting: System.Data.EntitySqlException was unhandled Message="'Hurl' could not be resolved in the current scope or context. Make sure that all referenced variables are in scope, that required schemas are loaded, and that namespaces are referenced correctly., near escaped identifier, line 3, column 1." Source="System.Data.Entity" Column=1 ErrorContext="escaped identifier" ErrorDescription="'Hurl' could not be resolved in the current scope or context. Make sure that all referenced variables are in scope, that required schemas are loaded, and that namespaces are referenced correctly." This is where I am attempting to extract this information from. http://blog.keithpatton.com/2008/05/29/Polymorphic+Repository+For+ADONet+Entity+Framework.aspx
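    The error message hints that 'Hurl' is not a valid query root, and Entity SQL query roots are entity set names (often pluralized, e.g. Hurls), not entity type names. A hedged fix is to resolve the set name from the metadata instead of deriving it from typeof(T). A sketch, assuming the repository keeps its existing GetBaseType helper (requires using System.Data.Metadata.Edm):

        // Sketch: find the entity set whose element type matches the (base)
        // entity type, then build the query from the set's real name.
        public IQueryable<T> GetQuery<T>()
        {
            EntityContainer container = ObjectContext.MetadataWorkspace
                .GetEntityContainer(ObjectContext.DefaultContainerName, DataSpace.CSpace);
            Type baseType = GetBaseType(typeof(T));
            string setName = container.BaseEntitySets
                .First(s => s.ElementType.Name == baseType.Name).Name;
            return ObjectContext.CreateQuery<T>("[" + setName + "]").OfType<T>();
        }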

    Read the article

  • use .net webbrowser control by multithread, throw "EXCEPTION code=ACCESS_VIOLATION"

    - by user1507827
    I want to make a console program that monitors a webpage's HTML source code. Because some of the page content is created by JavaScript, I have to use the WebBrowser control, as in: View Generated Source (After AJAX/JavaScript) in C#. My code is below: public class WebProcessor { public string GeneratedSource; public string URL; public DateTime beginTime; public DateTime endTime; public object GetGeneratedHTML(object url) { URL = url.ToString(); try { Thread[] t = new Thread[10]; for (int i = 0; i < 10; i++) { t[i] = new Thread(new ThreadStart(WebBrowserThread)); t[i].SetApartmentState(ApartmentState.STA); t[i].Name = "Thread" + i.ToString(); t[i].Start(); //t[i].Join(); } } catch (Exception ex) { Console.WriteLine(ex.ToString()); } return GeneratedSource; } private void WebBrowserThread() { WebBrowser wb = new WebBrowser(); wb.ScriptErrorsSuppressed = true; wb.DocumentCompleted += new WebBrowserDocumentCompletedEventHandler( wb_DocumentCompleted); while (true) { beginTime = DateTime.Now; wb.Navigate(URL); while (wb.ReadyState != WebBrowserReadyState.Complete) { Thread.Sleep(new Random().Next(10,100)); Application.DoEvents(); } } //wb.Dispose(); } private void wb_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e) { WebBrowser wb = (WebBrowser)sender; if (wb.ReadyState == WebBrowserReadyState.Complete) { GeneratedSource = wb.Document.Body.InnerHtml; endTime = DateTime.Now; Console.WriteLine("WebBrowser " + (endTime-beginTime).Milliseconds + Thread.CurrentThread.Name + wb.Document.Title); } } } When it runs, after a while (20-50 iterations), it throws an exception like this: EXCEPTION code=ACCESS_VIOLATION (null)(null)(null)... [a long run of repeated "(null)" markers elided] BACKTRACE: 33 stack frames: #0 0x083dba8db0 at MatchExactGetIDsOfNames in mshtml.dll #1 0x0879f9b837 at StrongNameErrorInfo in mscorwks.dll #2 0x0879f9b8e3 at StrongNameErrorInfo in mscorwks.dll #3 0x0879f9b93a at StrongNameErrorInfo in mscorwks.dll #4 0x0879f9b9e0 at StrongNameErrorInfo in mscorwks.dll #5 0x0879f9b677 at StrongNameErrorInfo in mscorwks.dll #6 0x0879f9b785 at StrongNameErrorInfo in mscorwks.dll #7 0x0879f192a8 at InstallCustomModule in mscorwks.dll #8 0x0879f19444 at InstallCustomModule in mscorwks.dll #9 0x0879f194ab at InstallCustomModule in mscorwks.dll #10 0x0879fa6491 at StrongNameErrorInfo in mscorwks.dll #11 0x0879f44bcf at DllGetClassObjectInternal in mscorwks.dll #12 0x089bbafa at in #13 0x087b18cc10 at in System.Windows.Forms.ni.dll #14 0x087b91f4c1 at in System.Windows.Forms.ni.dll #15 0x08d00669 at in #16 0x08792d6e46 at in mscorlib.ni.dll #17 0x08792e02cf at in mscorlib.ni.dll #18 0x08792d6dc4 at in mscorlib.ni.dll #19 0x0879e71b4c at in mscorwks.dll #20 0x0879e896ce at in mscorwks.dll #21 0x0879e96ea9 at CoUninitializeEE in mscorwks.dll #22 0x0879e96edc at CoUninitializeEE in mscorwks.dll #23 0x0879e96efa at CoUninitializeEE in mscorwks.dll #24 0x0879f88357 at GetPrivateContextsPerfCounters in mscorwks.dll #25 0x0879e9cc8f at CoUninitializeEE in mscorwks.dll #26 0x0879e9cc2b at CoUninitializeEE in mscorwks.dll #27 0x0879e9cb51 at CoUninitializeEE in mscorwks.dll #28 0x0879e9ccdd at CoUninitializeEE in mscorwks.dll #29 0x0879f88128 at GetPrivateContextsPerfCounters in mscorwks.dll #30 0x0879f88202 at GetPrivateContextsPerfCounters in mscorwks.dll #31 0x0879f0e255 at InstallCustomModule in mscorwks.dll #32 0x087c80b729 at GetModuleFileNameA in KERNEL32.dll I have tried lots of methods to solve the problem. Finally, I found that if the thread sleeps for more milliseconds, it runs for a longer time, but the exception is still thrown. I hope somebody can tell me how to solve this. Thanks very much!
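    A cautious observation rather than a definite fix: WebBrowser hosts the mshtml ActiveX control (note mshtml.dll in frame #0), which expects an STA thread running a real message pump; polling ReadyState with Application.DoEvents from ten threads at once is a known recipe for access violations. A sketch of one page load per properly pumped STA thread:

        // Sketch: give each WebBrowser a genuine message loop instead of a
        // DoEvents polling loop; ExitThread ends the loop when the page is done.
        static void MonitorOnce(string url)
        {
            var t = new Thread(() =>
            {
                var wb = new WebBrowser { ScriptErrorsSuppressed = true };
                wb.DocumentCompleted += (s, e) =>
                {
                    Console.WriteLine(wb.DocumentTitle);
                    Application.ExitThread(); // stop this thread's message pump
                };
                wb.Navigate(url);
                Application.Run(); // pump messages until ExitThread
            });
            t.SetApartmentState(ApartmentState.STA);
            t.Start();
            t.Join();
        }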

    Read the article

  • tastypie posting and full example

    - by Justin M
    Is there a full tastypie django example site and setup available for download? I have been wrestling with wrapping my head around it all day. I have the following code. Basically, I have a POST form that is handled with ajax. When I click "submit" on my form and the ajax request runs, the call returns "POST http://192.168.1.110:8000/api/private/client_basic_info/ 404 (NOT FOUND)" I have the URL configured alright, I think. I can access http://192.168.1.110:8000/api/private/client_basic_info/?format=json just fine. Am I missing some settings or making some fundamental errors in my methods? My intent is that each user can fill out/modify one and only one "client basic information" form/model. a page: {% extends "layout-column-100.html" %} {% load uni_form_tags sekizai_tags %} {% block title %}Basic Information{% endblock %} {% block main_content %} {% addtoblock "js" %} <script language="JavaScript"> $(document).ready( function() { $('#client_basic_info_form').submit(function (e) { form = $(this) form.find('span.error-message, span.success-message').remove() form.find('.invalid').removeClass('invalid') form.find('input[type="submit"]').attr('disabled', 'disabled') e.preventDefault(); var values = {} $.each($(this).serializeArray(), function(i, field) { values[field.name] = field.value; }) $.ajax({ type: 'POST', contentType: 'application/json', data: JSON.stringify(values), dataType: 'json', processData: false, url: '/api/private/client_basic_info/', success: function(data, status, jqXHR) { form.find('input[type="submit"]') .after('<span class="success-message">Saved successfully!</span>') .removeAttr('disabled') }, error: function(jqXHR, textStatus, errorThrown) { console.log(jqXHR) console.log(textStatus) console.log(errorThrown) var errors = JSON.parse(jqXHR.responseText) for (field in errors) { var field_error = errors[field][0] $('#id_' + field).addClass('invalid') .after('<span class="error-message">'+ field_error +'</span>') } form.find('input[type="submit"]').removeAttr('disabled') } }) // end $.ajax() }) // end $('#client_basic_info_form').submit() }) // end $(document).ready() </script> {% endaddtoblock %} {% uni_form form form.helper %} {% endblock %} resources from residence.models import ClientBasicInfo from residence.forms.profiler import ClientBasicInfoForm from tastypie import fields from tastypie.resources import ModelResource from tastypie.authentication import BasicAuthentication from tastypie.authorization import DjangoAuthorization, Authorization from tastypie.validation import FormValidation from tastypie.resources import ModelResource, ALL, ALL_WITH_RELATIONS from django.core.urlresolvers import reverse from django.contrib.auth.models import User class UserResource(ModelResource): class Meta: queryset = User.objects.all() resource_name = 'user' fields = ['username'] filtering = { 'username': ALL, } include_resource_uri = False authentication = BasicAuthentication() authorization = DjangoAuthorization() def dehydrate(self, bundle): forms_incomplete = [] if ClientBasicInfo.objects.filter(user=bundle.request.user).count() < 1: forms_incomplete.append({'name': 'Basic Information', 'url': reverse('client_basic_info')}) bundle.data['forms_incomplete'] = forms_incomplete return bundle class ClientBasicInfoResource(ModelResource): user = fields.ForeignKey(UserResource, 'user') class Meta: authentication = BasicAuthentication() authorization = DjangoAuthorization() include_resource_uri = False queryset = ClientBasicInfo.objects.all() resource_name = 'client_basic_info' 
validation = FormValidation(form_class=ClientBasicInfoForm) list_allowed_methods = ['get', 'post', ] detail_allowed_methods = ['get', 'post', 'put', 'delete'] Edit: My resources file is now: from residence.models import ClientBasicInfo from residence.forms.profiler import ClientBasicInfoForm from tastypie import fields from tastypie.resources import ModelResource from tastypie.authentication import BasicAuthentication from tastypie.authorization import DjangoAuthorization, Authorization from tastypie.validation import FormValidation from tastypie.resources import ModelResource, ALL, ALL_WITH_RELATIONS from django.core.urlresolvers import reverse from django.contrib.auth.models import User class UserResource(ModelResource): class Meta: queryset = User.objects.all() resource_name = 'user' fields = ['username'] filtering = { 'username': ALL, } include_resource_uri = False authentication = BasicAuthentication() authorization = DjangoAuthorization() #def apply_authorization_limits(self, request, object_list): # return object_list.filter(username=request.user) def dehydrate(self, bundle): forms_incomplete = [] if ClientBasicInfo.objects.filter(user=bundle.request.user).count() < 1: forms_incomplete.append({'name': 'Basic Information', 'url': reverse('client_basic_info')}) bundle.data['forms_incomplete'] = forms_incomplete return bundle class ClientBasicInfoResource(ModelResource): # user = fields.ForeignKey(UserResource, 'user') class Meta: authentication = BasicAuthentication() authorization = DjangoAuthorization() include_resource_uri = False queryset = ClientBasicInfo.objects.all() resource_name = 'client_basic_info' validation = FormValidation(form_class=ClientBasicInfoForm) #list_allowed_methods = ['get', 'post', ] #detail_allowed_methods = ['get', 'post', 'put', 'delete'] def apply_authorization_limits(self, request, object_list): return object_list.filter(user=request.user) I made the user field of the ClientBasicInfo nullable and the POST seems to work. I want to try updating the entry now. Would that just be appending the pk to the ajax url? For example /api/private/client_basic_info/21/? When I submit that form I get a 501 NOT IMPLEMENTED message. What exactly haven't I implemented? I am subclassing ModelResource, which should have all the ORM-related functions implemented according to the docs.

    Read the article

  • Append <ul> and <li> in recursive loop

    - by Batman
    I have a site collection, and I was told I need a recursive loop to walk it. This is what I've tried: when the site loads, getSiteTree() passes the top-level website to my getSubSite() function, where I check whether there are any subsites. (I have a boolean, but I'm not really using it for anything yet; I've just seen one used before for this type of work.) If there are no subsites, I log the end of the branch; if there are, I call the function again using the new URL and repeat the process. Looking at my console, it seems to work as intended. function getSiteTree(){ var tree = $('#treeviewList'); var rootsite = window.location.protocol + "//" + window.location.hostname; var siteEnd = false; getSubSite(rootsite); } function getSubSite(url){ $().SPServices({ operation: "GetWebCollection", webURL: url, async: true, completefunc: function(xData, Status) { var siteUrl; var siteCount = $(xData.responseXML).find("Web").length; if(siteCount == 0){ console.log("end of branch"); siteEnd = true; }else{ $(xData.responseXML).find("Web").each(function() { siteUrl = $(this).attr("Url"); console.log(siteUrl); getSubSite(siteUrl); }); } } }); } My question: now that I have my sites, I need to take them and create something like this, but I'm not sure how to accomplish it. <ul> <li>Site 1 <ul> <li>sub 1.1</li> <li>sub 1.2</li> <li>sub 1.3 <ul> <li>1.3.1</li> </ul> </li> <li>sub 1.4</li> <li>sub 1.5</li> </ul> </li> <li>Site 2 <ul> <li>sub 2.1</li> <li>sub 2.2</li> <li>sub 2.3 <ul> <li>2.3.1</li> <li>2.3.2</li> </ul> </li> </ul> </li> </ul> I have this initial HTML: <div id="treeviewDiv" style="width:200px;height:150px;overflow:scroll"> <ul id="treeviewList"></ul> </div>

    Read the article
