Search Results

Search found 2130 results on 86 pages for 'swap'.


  • setting up Ubuntu 10.10 as paravirtualized guest in Xen on RHEL5 host - what kernel?

    - by kostmo
    I've discovered the tool ubuntu-vm-builder, which I've installed and then invoked on an Ubuntu workstation as: sudo vmbuilder xen ubuntu --suite maverick --flavour virtual --arch amd64 --mem=512 --rootsize 8192 This workstation is not the intended target host of the virtual machine, however; I would like to host the guest on a Red Hat Enterprise Linux 5 machine that is running Xen 3.0.3. The output of this command appears to be a folder named ubuntu-xen containing three files: tmpXXXXXX, a very large file which I assume is the root partition image tmpYYYYYY, a somewhat large file which I assume is the swap partition image xen.conf, a text file I have copied the xen.conf file to the RHEL server's /etc/xen directory under the new name newvm, adjusting the paths of tempXXXXXX and tempYYYYYYin the file after also copying them from my local workstation to the RHEL server. When I launch the Virtual Machine Manager virt-manager, I can see the newvm virtual machine listed underneath the Dom0 machine. When I try to start newvm, I get the error: Error starting domain: virDomainCreate() failed POST operation failed: (xend.err 'Error creating domain: Kernel image does not exist: None') Indeed, there exists an entry kernel = 'None' in the xen.conf file. How do I find out what the path of the kernel should be? Is this path supposed to be to a kernel stored on the local filesystem of the RHEL5 host, or is it supposed to be a path inside the guest image? I see that the vmbuilder command provides for a --xen-kernel option, along with a --xen-ramdisk option, but I'm not sure what to use for either. I think I should be able to get this to work, since Ubuntu is said to be supported as a Xen guest, even though the Xen 4.0.1 docs state support for only a limited set of distributions, Ubuntu excluded. Update 1 When running vmbuilder on my local workstation, I did observe an output line saying: Calling hook: install_kernel and later, output lines saying: update-initramfs: Generating /boot/initrd.img-2.6.35-23-virtual [...] run-parts: executing /etc/kernel/postinst.d/initramfs-tools 2.6.35-23-virtual /boot/vmlinuz-2.6.35-23-virtual So in the xen.conf file, I tried setting the lines: kernel = '/boot/vmlinuz-2.6.35-23-virtual' ramdisk = '/boot/initrd.img-2.6.35-23-virtual' When trying to start the VM, I got an error similar to last time: Error starting domain: virDomainCreate() failed POST operation failed: (xend.err 'Error creating domain: Kernel image does not exist: /boot/vmlinuz-2.6.35-23-virtual') This makes me think that the RHEL5 machine is looking for local files, rather than a file within the binary guest disk image. After running sudo updatedb on my workstation, neither of those files were found. If the vmbuilder tool had tried to install them, it must have failed. Update 2 I was able to extract the kernel and initrd images from the guest disk binary by mounting it: mkdir mnt_tmp sudo mount ubuntu-xen/tmpXXXXXX mnt_tmp/ -o loop cp mnt_tmp/boot/vmlinuz-2.6.35-23-virtual virtual_kernel_ubuntu cp mnt_tmp/boot/initrd.img-2.6.35-23-virtual virtual_initrd_ubuntu These two files I copied to the RHEL5 server, and edited the xen.conf file to point to them as kernel and ramdisk. With this done, I could "run" the newvm virtual machine from within virt-manager, but was met with the message Console Not Configured For Guest when I double clicked the entry to open the Virtual Machine Console. 
As suggested by a forum, I then added the line vfb = [ 'type=vnc' ] to the configuration file, recreated the virtual machine (a ~10 min process), and this time got the message: Connecting to console for guest This remained indefinitely; after selecting View - Serial Console, I found a kernel panic: [5442621.272173] Kernel panic - not syncing: Attempted to kill the idle task! [5442621.272179] Pid: 0, comm: swapper Tainted: G D 2.6.35-23-virtual #41-Ubuntu [5442621.272184] Call Trace: [5442621.272191] [<ffffffff815a1b81>] panic+0x90/0x111 [5442621.272199] [<ffffffff810652ee>] do_exit+0x3be/0x3f0 [5442621.272204] [<ffffffff815a5e20>] oops_end+0xb0/0xf0 [5442621.272211] [<ffffffff8100ddeb>] die+0x5b/0x90 [5442621.272216] [<ffffffff815a56c4>] do_trap+0xc4/0x170 [5442621.272221] [<ffffffff8100ba35>] do_invalid_op+0x95/0xb0 [5442621.272227] [<ffffffff8130851c>] ? intel_idle+0xac/0x180 [5442621.272232] [<ffffffff810072bf>] ? xen_restore_fl_direct_end+0x0/0x1 [5442621.272239] [<ffffffff815a48fe>] ? _raw_spin_unlock_irqrestore+0x1e/0x30 [5442621.272247] [<ffffffff8108dfb7>] ? tick_broadcast_oneshot_control+0xc7/0x120 [5442621.272253] [<ffffffff8100ad5b>] invalid_op+0x1b/0x20 [5442621.272259] [<ffffffff8130851c>] ? intel_idle+0xac/0x180 [5442621.272264] [<ffffffff813084e0>] ? intel_idle+0x70/0x180 [5442621.272269] [<ffffffff810072bf>] ? xen_restore_fl_direct_end+0x0/0x1 [5442621.272275] [<ffffffff8148a147>] cpuidle_idle_call+0xa7/0x140 [5442621.272281] [<ffffffff81008d93>] cpu_idle+0xb3/0x110 [5442621.272286] [<ffffffff815873aa>] rest_init+0x8a/0x90 [5442621.272291] [<ffffffff81b04c9d>] start_kernel+0x387/0x390 [5442621.272297] [<ffffffff81b04341>] x86_64_start_reservations+0x12c/0x130 [5442621.272303] [<ffffffff81b08002>] xen_start_kernel+0x55d/0x561 Update 3 I tried an i386 architecture instead of amd64, but got the same kernel panic. Also, it seems the Virtual Machine Manager pays attention to the format of the filename of the kernel; for the same kernel binary, I tried simply naming it vmlinuz-virtual, which threw out an error box about an invalid kernel. When I named it vmlinuz-2.6.35-23-virtual, it did not throw the error, but it did still result in the kernel panic shortly thereafter.
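    Pulling Updates 1-3 together: the kernel and ramdisk entries in xen.conf are resolved on the dom0 (RHEL5) filesystem, not inside the guest image, which is why the copies extracted in Update 2 are needed at all. The hand-edited part of the file ends up looking roughly like this sketch (the paths are placeholders for wherever the extracted files were copied on the RHEL5 host):

        kernel  = '/path/on/rhel5/virtual_kernel_ubuntu'    # vmlinuz-2.6.35-23-virtual pulled out of tmpXXXXXX
        ramdisk = '/path/on/rhel5/virtual_initrd_ubuntu'    # initrd.img-2.6.35-23-virtual from the same image
        vfb     = [ 'type=vnc' ]                            # added so virt-manager gets a graphical console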

    Read the article

  • Windows 8, NVIDIA graphics recognition fails

    - by Roy Grubb
    I just installed Windows 8 Pro OEM 64-bit (clean install) and it won't properly recognize my graphics adapter. When I installed Win8, it automatically installed the BasicDisplay.sys driver dated 6/21/2006. 6.2.9200.16384 (win8_rtm.120725-1247). Hardware - Mobo:MSi G41M-P33 Combo CPU:Intel CoreDuo 6600 Graphics:NVIDIA GeForce 9400GT *OS* - Windows 8 Pro 64-bit OEM The graphics adapter worked fine in Windows XP. The PC is a generic box, bought locally and its mobo failed recently, so I replaced it with the G41M. Microsoft wouldn't let me re-activate Windows XP with a different mobo, so I installed Win8, which appears to work except as described next. Win8 only partially recognizes the graphics adapter and won't allow NVIDIA latest driver installer to see that it's an NVIDIA card. As a result, OpenGL doesn't work, and this is needed by the software I most use. Other than that the graphics look OK. When I say 'partially recognizes', I mean that via the Control Panel, I can see that the adapter is described as NVIDIA, but the driver remains stuck at Microsoft Basic Display Adapter no matter what I try, including "Update driver..." in adapter properties. Display Screen Resolution Advanced Settings Adapter shows: Adapter Type: Microsoft Basic Display Adapter Chip Type: NVIDIA DAC Type: NVIDIA Corporation Bios Information: G27 Board - p381n17 Don't know what this means ... no mention of 9400GT Total Available Graphics Memory: 256 MB Dedicated Video Memory: 0 MB In fact the adapter has 512MB on-board video memory. System Video Memory: 0 MB Shared System Memory: 256 MB And Control Panel Device Manager Display adapters just shows Microsoft Basic Display Adapter. No other graphics adapter, and no unknown device or yellow question mark. What I have tried so far: 1. Cleared CMOS and reset. Updated BIOS and all mobo drivers as follows: 1st I used Driver Reviver to see if any driver updates were required. It found some but I didn't use that to get the drivers. Then I switched to MSi's own mobo driver utility Live Update 5. This also showed the board needed to update several so I used it to fetch the new drivers. After that it showed that everything was up to date and I checked with Driver Reviver again, which also reported no drivers now needed updating. Rebooted. Went to the NVIDIA site to get the latest graphics adapter driver. Their auto-detect "Option 2: Automatically find drivers for my NVIDIA products" said "The NVIDIA Smart Scan was unable to evaluate your system hardware. Please use Option 1 to manually find drivers for your NVIDIA products." So I downloaded 310.70-desktop-win8-win7-winvista-64bit-international-whql.exe, which lists 9400 GT under supported products, but when I run it, it says: "NVIDIA Installer cannot continue This graphics driver could not find compatible graphics hardware." Connected the display to the on-board Intel graphics (G41 Intel Express), removed the NVIDIA card and rebooted, changed to internal graphics in CMOS. Again it installs the MS Basic Display Adapter, and can't properly run my s/w that needs OpenGL. It runs on other machines with Intel Express graphics (WinXP and 7) Shut down and pulled out the power cord. Held start button to discharge all capacitors. Removed and re-inserted NVIDIA adapter in PCI-E slot and made sure properly seated. Connected the monitor to the card, screwed plug to socket. Reconnected power cord. Started and checked in BIOS that Primary Graphics Adapter was set to PCI-E. Started Windows. 
Uninstalled MS Basic Display Adapter in Device Manager. Screen blanks briefly, reappears. No Graphics adapter entry was then visible in Device Manager. Restarted PC. MS Basic Display Adapter Visible again in Device Manager. Clicked in Device Manager View Show hidden devices. No other graphics adapter appears, no unknown devices. Rebooted. Tried Scan for Hardware changes. None detected. Tried right-click on MS Basic Display Adapter Properties Driver Update Driver... Search automatically. It replied that it had determined driver was up to date. I checked that there were no graphic driver-related entries in Programs and Features that I could delete (none). Searched for any other drivers with nvidia in their name and deleted them, just keeping the 306.97 installer exe file. Did a Windows Update. Ran GPU-Z which shows (main items): Microsoft Basic Display Adapter GPU G72 BIOS 5.72.22.76.88 Device ID 10DE - 01D5 DDR2 Bus Width 32 Bit Memory size 64MB Driver Version nvlddmkm 6.2.9200.16384 (ForceWare 0.00) / Win8 64 NVIDIA SLI Unknown in the drop-down at the foot, "Microsoft Basic Display Adapter" is the only option If I swap hard disks in that machine to one with a Ubuntu 10.4 installation (originally installed on the same PC), lspci shows "VGA compatible controller as NVIDIA Corporation Device 01d5 (rev a1) (prog-if 00 [VGA controller])" and "kernel driver in use: nvidia" I'm out of ideas for new things to try and would be really grateful of suggestions. Thanks!

    Read the article

  • Various problems with software raid1 array built with Samsung 840 Pro SSDs

    - by Andy B
    I am bringing to ServerFault a problem that is tormenting me for 6+ months. I have a CentOS 6 (64bit) server with an md software raid-1 array with 2 x Samsung 840 Pro SSDs (512GB). Problems: Serious write speed problems: root [~]# time dd if=arch.tar.gz of=test4 bs=2M oflag=sync 146+1 records in 146+1 records out 307191761 bytes (307 MB) copied, 23.6788 s, 13.0 MB/s real 0m23.680s user 0m0.000s sys 0m0.932s When doing the above (or any other larger copy) the load spikes to unbelievable values (even over 100) going up from ~ 1. When doing the above I've also noticed very weird iostat results: Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1589.50 0.00 54.00 0.00 13148.00 243.48 0.60 11.17 0.46 2.50 sdb 0.00 1627.50 0.00 16.50 0.00 9524.00 577.21 144.25 1439.33 60.61 100.00 md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 md2 0.00 0.00 0.00 1602.00 0.00 12816.00 8.00 0.00 0.00 0.00 0.00 md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 And it keeps it this way until it actually writes the file to the device (out from swap/cache/memory). The problem is that the second SSD in the array has svctm and await roughly 100 times larger than the second. For some reason the wear is different between the 2 members of the array root [~]# smartctl --attributes /dev/sda | grep -i wear 177 Wear_Leveling_Count 0x0013 094% 094 000 Pre-fail Always - 180 root [~]# smartctl --attributes /dev/sdb | grep -i wear 177 Wear_Leveling_Count 0x0013 070% 070 000 Pre-fail Always - 1005 The first SSD has a wear of 6% while the second SSD has a wear of 30%!! It's like the second SSD in the array works at least 5 times as hard as the first one as proven by the first iteration of iostat (the averages since reboot): Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 10.44 51.06 790.39 125.41 8803.98 1633.11 11.40 0.33 0.37 0.06 5.64 sdb 9.53 58.35 322.37 118.11 4835.59 1633.11 14.69 0.33 0.76 0.29 12.97 md1 0.00 0.00 1.88 1.33 15.07 10.68 8.00 0.00 0.00 0.00 0.00 md2 0.00 0.00 1109.02 173.12 10881.59 1620.39 9.75 0.00 0.00 0.00 0.00 md0 0.00 0.00 0.41 0.01 3.10 0.02 7.42 0.00 0.00 0.00 0.00 What I've tried: I've updated the firmware to DXM05B0Q (following reports of dramatic improvements for 840Ps after this update). I have looked for "hard resetting link" in dmesg to check for cable/backplane issues but nothing. I have checked the alignment and I believe they are aligned correctly (1MB boundary, listing below) I have checked /proc/mdstat and the array is Optimal (second listing below). root [~]# fdisk -ul /dev/sda Disk /dev/sda: 512.1 GB, 512110190592 bytes 255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00026d59 Device Boot Start End Blocks Id System /dev/sda1 2048 4196351 2097152 fd Linux raid autodetect Partition 1 does not end on cylinder boundary. /dev/sda2 * 4196352 4605951 204800 fd Linux raid autodetect Partition 2 does not end on cylinder boundary. 
/dev/sda3 4605952 814106623 404750336 fd Linux raid autodetect root [~]# fdisk -ul /dev/sdb Disk /dev/sdb: 512.1 GB, 512110190592 bytes 255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0003dede Device Boot Start End Blocks Id System /dev/sdb1 2048 4196351 2097152 fd Linux raid autodetect Partition 1 does not end on cylinder boundary. /dev/sdb2 * 4196352 4605951 204800 fd Linux raid autodetect Partition 2 does not end on cylinder boundary. /dev/sdb3 4605952 814106623 404750336 fd Linux raid autodetect /proc/mdstat root # cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sdb2[1] sda2[0] 204736 blocks super 1.0 [2/2] [UU] md2 : active raid1 sdb3[1] sda3[0] 404750144 blocks super 1.0 [2/2] [UU] md1 : active raid1 sdb1[1] sda1[0] 2096064 blocks super 1.1 [2/2] [UU] unused devices: Running a read test with hdparm root [~]# hdparm -t /dev/sda /dev/sda: Timing buffered disk reads: 664 MB in 3.00 seconds = 221.33 MB/sec root [~]# hdparm -t /dev/sdb /dev/sdb: Timing buffered disk reads: 288 MB in 3.01 seconds = 95.77 MB/sec But look what happens if I add --direct root [~]# hdparm --direct -t /dev/sda /dev/sda: Timing O_DIRECT disk reads: 788 MB in 3.01 seconds = 262.08 MB/sec root [~]# hdparm --direct -t /dev/sdb /dev/sdb: Timing O_DIRECT disk reads: 534 MB in 3.02 seconds = 176.90 MB/sec Both tests increase but /dev/sdb doubles while /dev/sda increases maybe 20%. I just don't know what to make of this. As suggested by Mr. Wagner I've done another read test with dd this time and it confirms the hdparm test: root [/home2]# dd if=/dev/sda of=/dev/null bs=1G count=10 10+0 records in 10+0 records out 10737418240 bytes (11 GB) copied, 38.0855 s, 282 MB/s root [/home2]# dd if=/dev/sdb of=/dev/null bs=1G count=10 10+0 records in 10+0 records out 10737418240 bytes (11 GB) copied, 115.24 s, 93.2 MB/s So sda is 3 times faster than sdb. Or maybe sdb is doing also something else besides what sda does. Is there some way to find out if sdb is doing more than what sda does? UPDATE Again, as suggested by Mr. Wagner, I have swapped the 2 SSDs. And as he thought it would happen, the problem moved from sdb to sda. So I guess I'll RMA one of the SSDs. I wonder if the cage might be problematic. What is wrong with this array? Please help!
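    To answer the last question directly (is sdb doing more than sda): watching both members with extended iostat while the dd test runs, and checking per-process I/O, is a cheap way to see whether anything besides md itself is touching sdb. Nothing below is specific to this setup, just standard sysstat/iotop usage:

        # in one shell: run the dd write test
        # in another: compare the two raid members every 2 seconds
        iostat -x sda sdb 2

        # per-process I/O (requires the iotop package); -o shows only processes actually doing I/O
        sudo iotop -o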

    Read the article

  • GitLab on a fresh Ubuntu 13 EC2 instance

    - by Polly
    I've spun up a fresh Amazon EC2 instance for a micro Ubuntu 13 server to be used as a GitLab server. I know the specs are a little low, but it should serve well for my purposes. It has an elastic (static) IP address that I have created an A record for git.mydomain.com. The first thing I did to the instance was add 1GB of swap to keep it happy from a memory perspective. I then set the hostname of the box to be git.mydomain.com and followed https://github.com/gitlabhq/gitlabhq/blob/6-2-stable/doc/install/installation.md to the letter. Everything seems to have worked, except for the web server side of things. Doing a gitlab:check shows the following: Checking Environment ... Git configured for git user? ... yes Has python2? ... yes python2 is supported version? ... yes Checking Environment ... Finished Checking GitLab Shell ... GitLab Shell version >= 1.7.4 ? ... OK (1.7.4) Repo base directory exists? ... yes Repo base directory is a symlink? ... no Repo base owned by git:git? ... yes Repo base access is drwxrws---? ... yes update hook up-to-date? ... yes update hooks in repos are links: ... can't check, you have no projects Running /home/git/gitlab-shell/bin/check Check GitLab API access: /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED) from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `open' from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `block in connect' from /usr/local/lib/ruby/2.0.0/timeout.rb:52:in `timeout' from /usr/local/lib/ruby/2.0.0/net/http.rb:877:in `connect' from /usr/local/lib/ruby/2.0.0/net/http.rb:862:in `do_start' from /usr/local/lib/ruby/2.0.0/net/http.rb:851:in `start' from /home/git/gitlab-shell/lib/gitlab_net.rb:62:in `get' from /home/git/gitlab-shell/lib/gitlab_net.rb:29:in `check' from /home/git/gitlab-shell/bin/check:11:in `<main>' gitlab-shell self-check failed Try fixing it: Make sure GitLab is running; Check the gitlab-shell configuration file: sudo -u git -H editor /home/git/gitlab-shell/config.yml Please fix the error above and rerun the checks. Checking GitLab Shell ... Finished Checking Sidekiq ... Running? ... yes Number of Sidekiq processes ... 1 Checking Sidekiq ... Finished Checking GitLab ... Database config exists? ... yes Database is SQLite ... no All migrations up? ... yes GitLab config exists? ... yes GitLab config outdated? ... no Log directory writable? ... yes Tmp directory writable? ... yes Init script exists? ... yes Init script up-to-date? ... yes projects have namespace: ... can't check, you have no projects Projects have satellites? ... can't check, you have no projects Redis version >= 2.0.0? ... yes Your git bin path is "/usr/bin/git" Git version >= 1.7.10 ? ... yes (1.8.3) Checking GitLab ... Finished It seems like I'm very nearly there. Searching on this error I have only found advice that unfortunately hasn't helped. I'm not using any kind of SSL setup, which a lot of the posts I found were about. I have tried appending 127.0.0.1 git.mydomain.com to /etc/hosts and giving the instance a reboot but there was no change. My config/gitlab.yml file has host: git.mydomain.com in it, and my gitlab-shell/config.yml has gitlab_url: "http://git.mydomain.com/" in it. I'm sure I'm missing something simple, but I've been through every relevant link I can find and have had no positive results; thank you in advance for any help!
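    The ECONNREFUSED in the self-check only says that nothing accepted a TCP connection when gitlab-shell called the API at the configured gitlab_url, so before changing any GitLab config it is worth confirming, from the box itself, that the application and a web server are actually listening. A quick sanity check might look like this (plain curl/netstat/service, nothing GitLab-specific; the port numbers are the usual defaults for this kind of setup, not verified against this install):

        # does anything answer on the URL gitlab-shell uses?
        curl -I http://git.mydomain.com/
        curl -I http://127.0.0.1/

        # is anything listening on the usual ports (80 for nginx; unicorn may be on 8080 or a unix socket, depending on unicorn.rb)?
        sudo netstat -tnlp | grep -E ':80|:8080'

        # is the GitLab application (unicorn + sidekiq) running at all?
        sudo service gitlab status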

    Read the article

  • Linux/Apache performance very slow even on local network

    - by klausch
    I have an Ubuntu server machine running Apache and MYSQL. System and version info is as follows: Linux kernel 3.0.0.-12 Apache/2.2.20 MySQL Ver 14.14.Distrib 5.1.58 I am running a few websites on this server, some HTML only, some PHP/MySQL. THe [problem is that response time is very slow, both on static as well as the dynamic sites. Sometimes it takes more than 10 seconds before a response is given, this makes the sites very slow and almost unusable. The problem occurs even when requesting from the local network. I have added the involved subdomains to my /etc/hosts file, and abolve all the problem is not solved by using IP numbers instead of URL's. So there is no DNS lookup issue. I have modified the log format by showing the response times and sometimes a files takes 12 seconds to be served, see the jquery~.js file in the example screenshot. I have no explanation for this extremely long response time, but is is not even the only issue here, some other files takes a long time to be served too, but do not show a long response time in the log file. So probably different tissues are involved here. I cannot find a solution until now, any suggestions??? THanx in advance, Klaas link to screenshot picture from access logfile Some extra configuration info: apache2.conf (comment is removed) LockFile ${APACHE_LOCK_DIR}/accept.lock PidFile ${APACHE_PID_FILE} Timeout 300 KeepAlive On MaxKeepAliveRequests 100 KeepAliveTimeout 5 <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 </IfModule> <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> <IfModule mpm_event_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} AccessFileName .htaccess <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> DefaultType text/plain HostnameLookups Off ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn Include mods-enabled/*.load Include mods-enabled/*.conf Include httpd.conf Include ports.conf LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %T/%D" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent Include conf.d/ Include sites-enabled/ And the virtual hostfile for one of the slow sites, in fact it is pretty straightforward... <VirtualHost *:80> ServerAdmin [email protected] ServerSignature EMail ServerName toenjoy.drsklaus.nl DocumentRoot /var/www/toenjoy.drsklaus.nl <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/toenjoy.drsklaus.nl/> Options Indexes FollowSymLinks MultiViews AllowOverride AuthConfig AuthType Basic AuthName "To Enjoy" AuthUserFile /etc/.htpasswd Require user petraaa Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. 
LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> And the output of free -m: klaas@ubuntu-server:/etc/apache2$ free -m total used free shared buffers cached Mem: 1997 1401 595 0 144 1017 -/+ buffers/cache: 238 1758 Swap: 2035 0 2035 and I have no indication that swapping occurs at the moments the site is slow. I have run top and it does not appear to be a CPU issue. I have the impression that the spawning of an Apache thread could maybe be the bottleneck, but that is just a guess. Maybe this gives some extra information! EDIT: The problem seemed to be gone for some time but occurs again! And not only with Apache: connecting over SSH also takes a tremendous time, sometimes up to 15 seconds before the passphrase prompt appears. scp is very slow as well. The behaviour is really unpredictable and makes the server very hard to use. Any ideas...?
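    Since the access-log format above already appends %T/%D, one extra data point that separates name lookup, TCP connect and server time on a single request is curl's write-out timers, run from the same local-network client (standard curl options; the 401 that the Basic auth on this vhost returns is fine for timing purposes):

        curl -s -o /dev/null \
             -w 'namelookup:%{time_namelookup}  connect:%{time_connect}  starttransfer:%{time_starttransfer}  total:%{time_total}\n' \
             http://toenjoy.drsklaus.nl/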

    Read the article

  • overriding ctype<wchar_t>

    - by Potatoswatter
    I'm writing a lambda calculus interpreter for fun and practice. I got iostreams to properly tokenize identifiers by adding a ctype facet which defines punctuation as whitespace: struct token_ctype : ctype<char> { mask t[ table_size ]; token_ctype() : ctype<char>( t ) { for ( size_t tx = 0; tx < table_size; ++ tx ) { t[tx] = isalnum( tx )? alnum : space; } } }; (classic_table() would probably be cleaner but that doesn't work on OS X!) And then swap the facet in when I hit an identifier: locale token_loc( in.getloc(), new token_ctype ); … locale const &oldloc = in.imbue( token_loc ); in.unget() >> token; in.imbue( oldloc ); There seems to be surprisingly little lambda calculus code on the Web. Most of what I've found so far is full of Unicode λ characters. So I thought to try adding Unicode support. But ctype<wchar_t> works completely differently from ctype<char>. There is no master table; there are four methods: do_is x2, do_scan_is, and do_scan_not. So I did this: struct token_ctype : ctype< wchar_t > { typedef ctype<wchar_t> base; bool do_is( mask m, char_type c ) const { return base::do_is(m,c) || (m&space) && ( base::do_is(punct,c) || c == L'λ' ); } const char_type* do_is (const char_type* lo, const char_type* hi, mask* vec) const { base::do_is(lo,hi,vec); for ( mask *vp = vec; lo != hi; ++ vp, ++ lo ) { if ( *vp & punct || *lo == L'λ' ) *vp |= space; } return hi; } const char_type *do_scan_is (mask m, const char_type* lo, const char_type* hi) const { if ( m & space ) m |= punct; hi = do_scan_is(m,lo,hi); if ( m & space ) hi = find( lo, hi, L'λ' ); return hi; } const char_type *do_scan_not (mask m, const char_type* lo, const char_type* hi) const { if ( m & space ) { m |= punct; while ( * ( lo = base::do_scan_not(m,lo,hi) ) == L'λ' && lo != hi ) ++ lo; return lo; } return base::do_scan_not(m,lo,hi); } }; (Apologies for the flat formatting; the preview converted the tabs differently.) The code is WAY less elegant. It does better express the notion that only punctuation is additional whitespace, but that would've been fine in the original had I had classic_table. Is there a simpler way to do this? Do I really need all those overloads? (Testing showed do_scan_not is extraneous here, but I'm thinking more broadly.) Am I abusing facets in the first place? Is the above even correct? Would it be better style to implement less logic?
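    For what it's worth, a minimal way to exercise the wide facet above, mirroring the imbue pattern used for the narrow-char version (the input string is made up; <sstream>, <iostream>, <string> and <locale> headers assumed):

        std::wistringstream in(L"(and x (not y))");
        in.imbue(std::locale(in.getloc(), new token_ctype));  // the locale takes ownership of the facet
        std::wstring token;
        while (in >> token)
            std::wcout << token << L'\n';  // parentheses now count as whitespace, so tokens come out one by one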

    Read the article

  • How to optimize code with introspection + heavy alloc on iPhone

    - by mamcx
    I have a problem. I try to display a UITable that could have 2000-20000 records (typicall numbers.) I have a SQLite database similar to the Apple contacts application. I do all the tricks I know to get a smoth scroll, but I have a problem. I load the data in 50 recods blocks. Then, when the user scroll, request next 50 until finish the list. However, load that 50 records cause a notable "pause" in loading and scrolling. Everything else works fine. I cache the data, have opaque cells, draw it by code, etc... I swap the code loading the same data in dicts and have a performance boost but wonder if I could keep my object oriented aproach and improve the actual code. This is the code I think have the performance problem: -(NSArray *) loadAndFill: (NSString *)sql theClass: (Class)cls { [self openDb]; NSMutableArray *list = [NSMutableArray array]; NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; DbObject *ds; Class myClass = NSClassFromString([DbObject getTableName:cls]); FMResultSet *rs = [self load:sql]; while ([rs next]) { ds = [[myClass alloc] init]; NSDictionary *props = [ds properties]; NSString *fieldType = nil; id fieldValue; for (NSString *fieldName in [props allKeys]) { fieldType = [props objectForKey: fieldName]; fieldValue = [self ValueForField:rs Name:fieldName Type:fieldType]; [ds setValue:fieldValue forKey:fieldName]; } [list addObject :ds]; [ds release]; } [rs close]; [pool drain]; return list; } And I think the main culprit is: -(id) ValueForField: (FMResultSet *)rs Name:(NSString *)fieldName Type:(NSString *)fieldType { id fieldValue = nil; if ([fieldType isEqualToString:@"i"] || // int [fieldType isEqualToString:@"I"] || // unsigned int [fieldType isEqualToString:@"s"] || // short [fieldType isEqualToString:@"S"] || // unsigned short [fieldType isEqualToString:@"f"] || // float [fieldType isEqualToString:@"d"] ) // double { fieldValue = [NSNumber numberWithInt: [rs longForColumn:fieldName]]; } else if ([fieldType isEqualToString:@"B"]) // bool or _Bool { fieldValue = [NSNumber numberWithBool: [rs boolForColumn:fieldName]]; } else if ([fieldType isEqualToString:@"l"] || // long [fieldType isEqualToString:@"L"] || // usigned long [fieldType isEqualToString:@"q"] || // long long [fieldType isEqualToString:@"Q"] ) // unsigned long long { fieldValue = [NSNumber numberWithLong: [rs longForColumn:fieldName]]; } else if ([fieldType isEqualToString:@"c"] || // char [fieldType isEqualToString:@"C"] ) // unsigned char { fieldValue = [rs stringForColumn:fieldName]; //Is really a boolean? 
if ([fieldValue isEqualToString:@"0"] || [fieldValue isEqualToString:@"1"]) { fieldValue = [NSNumber numberWithInt: [fieldValue intValue]]; } } else if ([fieldType hasPrefix:@"@"] ) // Object { NSString *className = [fieldType substringWithRange:NSMakeRange(2, [fieldType length]-3)]; if ([className isEqualToString:@"NSString"]) { fieldValue = [rs stringForColumn:fieldName]; } else if ([className isEqualToString:@"NSDate"]) { NSDateFormatter* dateFormatter = [[NSDateFormatter alloc] init]; [dateFormatter setDateFormat:@"yyyy-MM-dd'T'HH:mm:ss"]; NSString *theDate = [rs stringForColumn:fieldName]; if (theDate) { fieldValue = [dateFormatter dateFromString: theDate]; } else { fieldValue = nil; } [dateFormatter release]; } else if ([className isEqualToString:@"NSInteger"]) { fieldValue = [NSNumber numberWithInt: [rs intForColumn :fieldName]]; } else if ([className isEqualToString:@"NSDecimalNumber"]) { fieldValue = [rs stringForColumn :fieldName]; if (fieldValue) { fieldValue = [NSDecimalNumber decimalNumberWithString:[rs stringForColumn :fieldName]]; } } else if ([className isEqualToString:@"NSNumber"]) { fieldValue = [NSNumber numberWithDouble: [rs doubleForColumn:fieldName]]; } else { //Is a relationship one-to-one? if (![fieldType hasPrefix:@"NS"]) { id rel = class_createInstance(NSClassFromString(className), sizeof(unsigned)); Class theClass = [rel class]; if ([rel isKindOfClass:[DbObject class]]) { fieldValue = [rel init]; //Load the record... NSInteger Id = [rs intForColumn:[theClass relationName]]; if (Id>0) { [fieldValue release]; Db *db = [Db currentDb]; fieldValue = [db loadById: theClass theId:Id]; } } } else { NSString *error = [NSString stringWithFormat:@"Err Can't get value for field %@ of type %@", fieldName, fieldType]; NSLog(error); NSException *e = [NSException exceptionWithName:@"DBError" reason:error userInfo:nil]; @throw e; } } } return fieldValue; }
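    One relatively cheap first step, before giving up on the object-oriented approach, is to hoist the per-class introspection out of the row loop: the property map depends only on the class, not on the row, so it can be computed once per query. A sketch against the code above (it assumes -properties has no per-instance state); the same idea applies to the NSDateFormatter in ValueForField:, which is expensive to construct and is currently rebuilt for every date column of every row:

        Class myClass = NSClassFromString([DbObject getTableName:cls]);

        // Build the field list once per query instead of once per row.
        DbObject *prototype = [[[myClass alloc] init] autorelease];
        NSDictionary *props = [prototype properties];
        NSArray *fieldNames = [props allKeys];

        while ([rs next]) {
            ds = [[myClass alloc] init];
            for (NSString *fieldName in fieldNames) {
                NSString *fieldType = [props objectForKey:fieldName];
                [ds setValue:[self ValueForField:rs Name:fieldName Type:fieldType] forKey:fieldName];
            }
            [list addObject:ds];
            [ds release];
        }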

    Read the article

  • SDL side-scroller scrolls inconsistently

    - by SDLFunTimes
    So I'm working on an upgrade from my previous project (that I posted here for code review) this time implementing a repeating background (like what is used on cartoons) so that SDL doesn't have to load really big images for a level. There's a strange inconsistency in the program, however: the first time the user scrolls all the way to the right 2 less panels are shown than is specified. Going backwards (left) the correct number of panels is shown (that is the panels repeat the number of times specified in the code). After that it appears that going right again (once all the way at the left) the correct number of panels is shown and same going backwards. Here's some selected code and here's a .zip of all my code constructor: Game::Game(SDL_Event* event, SDL_Surface* scr, int level_w, int w, int h, int bpp) { this->event = event; this->bpp = bpp; level_width = level_w; screen = scr; w_width = w; w_height = h; //load images and set rects background = format_surface("background.jpg"); person = format_surface("person.png"); background_rect_left = background->clip_rect; background_rect_right = background->clip_rect; current_background_piece = 1; //we are displaying the first clip rect_in_view = &background_rect_right; other_rect = &background_rect_left; person_rect = person->clip_rect; background_rect_left.x = 0; background_rect_left.y = 0; background_rect_right.x = background->w; background_rect_right.y = 0; person_rect.y = background_rect_left.h - person_rect.h; person_rect.x = 0; } and here's the move method which is probably causing all the trouble: void Game::move(SDLKey direction) { if(direction == SDLK_RIGHT) { if(move_screen(direction)) { if(!background_reached_right()) { //move background right background_rect_left.x += movement_increment; background_rect_right.x += movement_increment; if(rect_in_view->x >= 0) { //move the other rect in to fill the empty space SDL_Rect* temp; other_rect->x = -w_width + rect_in_view->x; temp = rect_in_view; rect_in_view = other_rect; other_rect = temp; current_background_piece++; std::cout << current_background_piece << std::endl; } if(background_overshoots_right()) { //sees if this next blit is past the surface //this is used only for re-aligning the rects when //the end of the screen is reached background_rect_left.x = 0; background_rect_right.x = w_width; } } } else { //move the person instead person_rect.x += movement_increment; if(get_person_right_side() > w_width) { //person went too far right person_rect.x = w_width - person_rect.w; } } } else if(direction == SDLK_LEFT) { if(move_screen(direction)) { if(!background_reached_left()) { //moves background left background_rect_left.x -= movement_increment; background_rect_right.x -= movement_increment; if(rect_in_view->x <= -w_width) { //swap the rect in view SDL_Rect* temp; rect_in_view->x = w_width; temp = rect_in_view; rect_in_view = other_rect; other_rect = temp; current_background_piece--; std::cout << current_background_piece << std::endl; } if(background_overshoots_left()) { background_rect_left.x = 0; background_rect_right.x = w_width; } } } else { //move the person instead person_rect.x -= movement_increment; if(person_rect.x < 0) { //person went too far left person_rect.x = 0; } } } } without the rest of the code this doesn't make too much sense. Since there is too much of it I'll upload it here for testing. Anyway does anyone know how I could fix this inconsistency?

    Read the article

  • Python dictionary key missing

    - by Greg K
    I thought I'd put together a quick script to consolidate the CSS rules I have distributed across multiple CSS files, then I can minify it. I'm new to Python but figured this would be a good exercise to try a new language. My main loop isn't parsing the CSS as I thought it would. I populate a list with selectors parsed from the CSS files to return the CSS rules in order. Later in the script, the list contains an element that is not found in the dictionary. for line in self.file.readlines(): if self.hasSelector(line): selector = self.getSelector(line) if selector not in self.order: self.order.append(selector) elif selector and self.hasProperty(line): # rules.setdefault(selector,[]).append(self.getProperty(line)) property = self.getProperty(line) properties = [] if selector not in rules else rules[selector] if property not in properties: properties.append(property) rules[selector] = properties # print "%s :: %s" % (selector, "".join(rules[selector])) return rules Error encountered: $ css-combine combined.css test1.css test2.css Traceback (most recent call last): File "css-combine", line 108, in <module> c.run(outfile, stylesheets) File "css-combine", line 64, in run [(selector, rules[selector]) for selector in parser.order], KeyError: 'p' Swap the inputs: $ css-combine combined.css test2.css test1.css Traceback (most recent call last): File "css-combine", line 108, in <module> c.run(outfile, stylesheets) File "css-combine", line 64, in run [(selector, rules[selector]) for selector in parser.order], KeyError: '#header_.title' I've done some quirky things in the code like sub spaces for underscores in dictionary key names in case it was an issue - maybe this is a benign precaution? Depending on the order of the inputs, a different key cannot be found in the dictionary. 
The script: #!/usr/bin/env python import optparse import re class CssParser: def __init__(self): self.file = False self.order = [] # store rules assignment order def parse(self, rules = {}): if self.file == False: raise IOError("No file to parse") selector = False for line in self.file.readlines(): if self.hasSelector(line): selector = self.getSelector(line) if selector not in self.order: self.order.append(selector) elif selector and self.hasProperty(line): # rules.setdefault(selector,[]).append(self.getProperty(line)) property = self.getProperty(line) properties = [] if selector not in rules else rules[selector] if property not in properties: properties.append(property) rules[selector] = properties # print "%s :: %s" % (selector, "".join(rules[selector])) return rules def hasSelector(self, line): return True if re.search("^([#a-z,\.:\s]+){", line) else False def getSelector(self, line): s = re.search("^([#a-z,:\.\s]+){", line).group(1) return "_".join(s.strip().split()) def hasProperty(self, line): return True if re.search("^\s?[a-z-]+:[^;]+;", line) else False def getProperty(self, line): return re.search("([a-z-]+:[^;]+;)", line).group(1) class Consolidator: """Class to consolidate CSS rule attributes""" def run(self, outfile, files): parser = CssParser() rules = {} for file in files: try: parser.file = open(file) rules = parser.parse(rules) except IOError: print "Cannot read file: " + file finally: parser.file.close() self.serialize( [(selector, rules[selector]) for selector in parser.order], outfile ) def serialize(self, rules, outfile): try: f = open(outfile, "w") for rule in rules: f.write( "%s {\n\t%s\n}\n\n" % ( " ".join(rule[0].split("_")), "\n\t".join(rule[1]) ) ) except IOError: print "Cannot write output to: " + outfile finally: f.close() def init(): op = optparse.OptionParser( usage="Usage: %prog [options] <output file> <stylesheet1> " + "<stylesheet2> ... <stylesheetN>", description="Combine CSS rules spread across multiple " + "stylesheets into a single file" ) opts, args = op.parse_args() if len(args) < 3: if len(args) == 1: print "Error: No input files specified.\n" elif len(args) == 2: print "Error: One input file specified, nothing to combine.\n" op.print_help(); exit(-1) return [opts, args] if __name__ == '__main__': opts, args = init() outfile, stylesheets = [args[0], args[1:]] c = Consolidator() c.run(outfile, stylesheets) Test CSS file 1: body { background-color: #e7e7e7; } p { margin: 1em 0em; } File 2: body { font-size: 16px; } #header .title { font-family: Tahoma, Geneva, sans-serif; font-size: 1.9em; } #header .title a, #header .title a:hover { color: #f5f5f5; border-bottom: none; text-shadow: 2px 2px 3px rgba(0, 0, 0, 1); } Thanks in advance.
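    A selector ends up in self.order but never in rules whenever none of its lines pass hasProperty, and that is exactly the combination the KeyError reports. One easy way for that to happen with this pattern is indentation: \s? tolerates at most one leading whitespace character, so a declaration indented by two or more spaces is silently skipped. A standalone check (the multi-space indentation is an assumption about the real files, since the listings above are flattened):

        import re

        def has_property(line):
            # same pattern as CssParser.hasProperty
            return bool(re.search(r"^\s?[a-z-]+:[^;]+;", line))

        print has_property(" margin: 1em 0em;")     # True  - one leading space is fine
        print has_property("    margin: 1em 0em;")  # False - more than one whitespace char fails

    If that matches the real files, changing \s? to \s* in hasProperty would be the corresponding one-character fix.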

    Read the article

  • A RenderTargetView cannot be created from a NULL Resource

    - by numerical25
    I am trying to create my render target view but I get this error from direct X A RenderTargetView cannot be created from a NULL Resource To my knowledge it seems that I must fill the rendertarget pointer with data before passing it. But I am having trouble figure out how. Below is my declaration and implementation declaration #pragma once #include "stdafx.h" #include "resource.h" #include "d3d10.h" #include "d3dx10.h" #include "dinput.h" #define MAX_LOADSTRING 100 class RenderEngine { protected: RECT m_screenRect; //direct3d Members ID3D10Device *m_pDevice; // The IDirect3DDevice10 // interface ID3D10Texture2D *m_pBackBuffer; // Pointer to the back buffer ID3D10RenderTargetView *m_pRenderTargetView; // Pointer to render target view IDXGISwapChain *m_pSwapChain; // Pointer to the swap chain RECT m_rcScreenRect; // The dimensions of the screen ID3DX10Font *m_pFont; // The font used for rendering text // Sprites used to hold font characters ID3DX10Sprite *m_pFontSprite; ATOM RegisterEngineClass(); void Present(); public: static HINSTANCE m_hInst; HWND m_hWnd; int m_nCmdShow; TCHAR m_szTitle[MAX_LOADSTRING]; // The title bar text TCHAR m_szWindowClass[MAX_LOADSTRING]; // the main window class name void DrawTextString(int x, int y, D3DXCOLOR color, const TCHAR *strOutput); //static functions static LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam); static INT_PTR CALLBACK About(HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam); bool InitWindow(); bool InitDirectX(); bool InitInstance(); int Run(); RenderEngine() { m_screenRect.right = 800; m_screenRect.bottom = 600; } }; my implementation bool RenderEngine::InitDirectX() { //potential error. You did not set to zero memory and you did not set the scaling property DXGI_MODE_DESC bd; bd.Width = m_screenRect.right; bd.Height = m_screenRect.bottom; bd.Format = DXGI_FORMAT_R8G8B8A8_UNORM; bd.RefreshRate.Numerator = 60; bd.RefreshRate.Denominator = 1; DXGI_SAMPLE_DESC sd; sd.Count = 1; sd.Quality = 0; DXGI_SWAP_CHAIN_DESC swapDesc; ZeroMemory(&swapDesc, sizeof(swapDesc)); swapDesc.BufferDesc = bd; swapDesc.SampleDesc = sd; swapDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; swapDesc.OutputWindow = m_hWnd; swapDesc.BufferCount = 1; swapDesc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD, swapDesc.Windowed = true; swapDesc.Flags = 0; HRESULT hr; hr = D3D10CreateDeviceAndSwapChain(NULL, D3D10_DRIVER_TYPE_HARDWARE, NULL, D3D10_CREATE_DEVICE_DEBUG, D3D10_SDK_VERSION , &swapDesc, &m_pSwapChain, &m_pDevice); if(FAILED(hr)) return false; // Create a render target view hr = m_pDevice->CreateRenderTargetView( m_pBackBuffer, NULL, &m_pRenderTargetView); // FAILS RIGHT HERE // if(FAILED(hr)) return false; return true; }
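    The NULL resource is m_pBackBuffer itself: it is declared but never assigned before being handed to CreateRenderTargetView. In the usual D3D10 sequence the back buffer is fetched from the swap chain first, then the view is created and the texture released (the view keeps its own reference). A sketch using the same member names as above:

        // Fetch the swap chain's back buffer; CreateRenderTargetView needs a real resource.
        hr = m_pSwapChain->GetBuffer(0, __uuidof(ID3D10Texture2D), (LPVOID*)&m_pBackBuffer);
        if(FAILED(hr)) return false;

        hr = m_pDevice->CreateRenderTargetView(m_pBackBuffer, NULL, &m_pRenderTargetView);
        m_pBackBuffer->Release();   // the render target view holds its own reference to the texture
        if(FAILED(hr)) return false;

        m_pDevice->OMSetRenderTargets(1, &m_pRenderTargetView, NULL);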

    Read the article

  • 42 passed to TerminateProcess, but sometimes GetExitCodeProcess returns 0

    - by Emil
    After I get a handle returned by CreateProcess, I call TerminateProcess, passing 42 for the process exit code. Then, I use WaitForSingleObject for the process to terminate, and finally I call GetExitCodeProcess. None of the function calls report errors. The child process is an infinite loop and does not terminate on its own. The problem is that sometimes GetExitCodeProcess returns 42 for the exit code (as it should) and sometimes it returns 0. Any idea why? #include <string> #include <sstream> #include <iostream> #include <assert.h> #include <windows.h> void check_call( bool result, char const * call ); #define CHECK_CALL(call) check_call(call,#call); int main( int argc, char const * argv[] ) { if( argc>1 ) { assert( !strcmp(argv[1],"inf") ); for(;;) { } } int err=0; for( int i=0; i!=200; ++i ) { STARTUPINFO sinfo; ZeroMemory(&sinfo,sizeof(STARTUPINFO)); sinfo.cb=sizeof(STARTUPINFO); PROCESS_INFORMATION pe; char cmd_line[32768]; strcat(strcpy(cmd_line,argv[0])," inf"); CHECK_CALL((CreateProcess(0,cmd_line,0,0,TRUE,0,0,0,&sinfo,&pe)!=0)); CHECK_CALL((CloseHandle(pe.hThread)!=0)); CHECK_CALL((TerminateProcess(pe.hProcess,42)!=0)); CHECK_CALL((WaitForSingleObject(pe.hProcess,INFINITE)==WAIT_OBJECT_0)); DWORD ec=0; CHECK_CALL((GetExitCodeProcess(pe.hProcess,&ec)!=0)); CHECK_CALL((CloseHandle(pe.hProcess)!=0)); err += (ec!=42); } std::cout << err; return 0; } std::string get_last_error_str( DWORD err ) { std::ostringstream s; s << err; LPVOID lpMsgBuf=0; if( FormatMessageA( FORMAT_MESSAGE_ALLOCATE_BUFFER|FORMAT_MESSAGE_FROM_SYSTEM|FORMAT_MESSAGE_IGNORE_INSERTS, 0, err, MAKELANGID(LANG_NEUTRAL,SUBLANG_DEFAULT), (LPSTR)&lpMsgBuf, 0, 0) ) { assert(lpMsgBuf!=0); std::string msg; try { std::string((LPCSTR)lpMsgBuf).swap(msg); } catch( ... ) { } LocalFree(lpMsgBuf); if( !msg.empty() && msg[msg.size()-1]=='\n' ) msg.resize(msg.size()-1); if( !msg.empty() && msg[msg.size()-1]=='\r' ) msg.resize(msg.size()-1); s << ", \"" << msg << '"'; } return s.str(); } void check_call( bool result, char const * call ) { assert(call && *call); if( !result ) { std::cerr << call << " failed.\nGetLastError:" << get_last_error_str(GetLastError()) << std::endl; exit(2); } }

    Read the article

  • Facelet selectOneMenu with POJOs and converter problem

    - by c0d3x
    Hi, I want to have a facelet page with a formular and a dropdown menu. With the dropdown menu the user shoul select a POJO of the type Lieferant: public class Lieferant extends AbstractPersistentWarenwirtschaftsObject { private String firma; public Lieferant(WarenwirtschaftDatabaseLayer database, String firma) { this(database, null, firma); } public Lieferant(WarenwirtschaftDatabaseLayer database, Long primaryKey, String firma) { super(database, primaryKey); this.firma = firma; } public String getFirma() { return firma; } @Override public String toString() { return getFirma(); } @Override public int hashCode() { final int prime = 31; int result = 1; result = prime * result + ((firma == null) ? 0 : firma.hashCode()); return result; } @Override public boolean equals(Object obj) { if (this == obj) return true; if (obj == null) return false; if (getClass() != obj.getClass()) return false; Lieferant other = (Lieferant) obj; if (firma == null) { if (other.firma != null) return false; } else if (!firma.equals(other.firma)) return false; return true; } } Here is the facelet code that I wrote: <h:selectOneMenu> tag. This tag should display a list of POJOs (not beans) of the type Lieferant. Here is the facelet code: <h:selectOneMenu id="lieferant" value="#{lieferantenBestellungBackingBean.lieferant}"> <f:selectItems var="lieferant" value="#{lieferantenBackingBean.lieferanten}" itemLabel="#{lieferant.firma}" itemValue="#{lieferant.primaryKey}" /> <f:converter converterId="LieferantConverter" /> </h:selectOneMenu> Here is the refenrenced managed backing bean @ManagedBean @RequestScoped public class LieferantenBackingBean extends AbstractWarenwirtschaftsBackingBean { private List<Lieferant> lieferanten; public List<Lieferant> getLieferanten() { if (lieferanten == null) { lieferanten = getApplication().getLieferanten(); } return lieferanten; } } As far as I know, I need a custom converter, to swap beetween POJO and String representations of the Lieferant objects. Here is what the converter looks like: @FacesConverter(value="LieferantConverter") public class LieferantConverter implements Converter { @Override public Object getAsObject(FacesContext context, UIComponent component, String value) { long primaryKey = Long.parseLong(value); WarenwirtschaftApplicationLayer application = WarenwirtschaftApplication.getInstance(); Lieferant lieferant = application.getLieferant(primaryKey); return lieferant; } @Override public String getAsString(FacesContext context, UIComponent component, Object value) { return value.toString(); } } The page loads without any error. When I fill out the formular and submit it, there is an error message displayed on the page: Bestellung:lieferantenBestellungForm:lieferant: Validierungsfehler: Wert ist keine gültige Auswahl translated: validation error: value is not a valid selection Unfortunaltely it does not say which value it is talking about. The converter seems to work correctly. I found this similar question from stackoverflow and this article about selectOneMenu and converters, but I was not able to find the problem in my code. Why is List<SelectItem> used in the example from the second link. Gives the same error for me. Any help would be appreciated. Thanks in advance.
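    One thing that commonly triggers "Wert ist keine gültige Auswahl" with a custom converter is an asymmetric round trip: here itemValue submits the primary key, getAsObject turns that key back into a Lieferant, but JSF then compares that Lieferant against the available item values, which are Long primary keys, so equals() never matches. Making the object itself the item value and letting the converter translate primary key <-> object in both directions is one consistent arrangement (sketch; getPrimaryKey() is assumed to exist on the base class, since the page already reads lieferant.primaryKey):

        <!-- dropdown: pass the Lieferant itself as the item value -->
        <f:selectItems var="lieferant" value="#{lieferantenBackingBean.lieferanten}"
                       itemLabel="#{lieferant.firma}" itemValue="#{lieferant}" />

        // converter: render the primary key, so getAsObject and getAsString mirror each other
        @Override
        public String getAsString(FacesContext context, UIComponent component, Object value) {
            return String.valueOf(((Lieferant) value).getPrimaryKey());
        }

    The existing equals()/hashCode() on Lieferant is what makes the comparison against the list from getLieferanten() succeed once both sides are Lieferant objects.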

    Read the article

  • Boost::Spirit::Qi autorules -- avoiding repeated copying of AST data structures

    - by phooji
    I've been using Qi and Karma to do some processing on several small languages. Most of the grammars are pretty small (20-40 rules). I've been able to use autorules almost exclusively, so my parse trees consist entirely of variants, structs, and std::vectors. This setup works great for the common case: 1) parse something (Qi), 2) make minor manipulations to the parse tree (visitor), and 3) output something (Karma). However, I'm concerned about what will happen if I want to make complex structural changes to a syntax tree, like moving big subtrees around. Consider the following toy example: A grammar for s-expr-style logical expressions that uses autorules... // Inside grammar class; rule names match struct names... pexpr %= pand | por | var | bconst; pand %= lit("(and ") >> (pexpr % lit(" ")) >> ")"; por %= lit("(or ") >> (pexpr % lit(" ")) >> ")"; pnot %= lit("(not ") >> pexpr >> ")"; ... which leads to parse tree representation that looks like this... struct var { std::string name; }; struct bconst { bool val; }; struct pand; struct por; struct pnot; typedef boost::variant<bconst, var, boost::recursive_wrapper<pand>, boost::recursive_wrapper<por>, boost::recursive_wrapper<pnot> > pexpr; struct pand { std::vector<pexpr> operands; }; struct por { std::vector<pexpr> operands; }; struct pnot { pexpr victim; }; // Many Fusion Macros here Suppose I have a parse tree that looks something like this: pand / ... \ por por / \ / \ var var var var (The ellipsis means 'many more children of similar shape for pand.') Now, suppose that I want negate each of the por nodes, so that the end result is: pand / ... \ pnot pnot | | por por / \ / \ var var var var The direct approach would be, for each por subtree: - create pnot node (copies por in construction); - re-assign the appropriate vector slot in the pand node (copies pnot node and its por subtree). Alternatively, I could construct a separate vector, and then replace (swap) the pand vector wholesale, eliminating a second round of copying. All of this seems cumbersome compared to a pointer-based tree representation, which would allow for the pnot nodes to be inserted without any copying of existing nodes. My question: Is there a way to avoid copy-heavy tree manipulations with autorule-compliant data structures? Should I bite the bullet and just use non-autorules to build a pointer-based AST (e.g., http://boost-spirit.com/home/2010/03/11/s-expressions-and-variants/)?

    Read the article

  • Advice on Factory Method

    - by heath
    Using php 5.2, I'm trying to use a factory to return a service to the controller. My request uri would be of the format www.mydomain.com/service/method/param1/param2/etc. My controller would then call a service factory using the token sent in the uri. From what I've seen, there are two main routes I could go with my factory. Single method: class ServiceFactory { public static function getInstance($token) { switch($token) { case 'location': return new StaticPageTemplateService('location'); break; case 'product': return new DynamicPageTemplateService('product'); break; case 'user' return new UserService(); break; default: return new StaticPageTemplateService($token); } } } or multiple methods: class ServiceFactory { public static function getLocationService() { return new StaticPageTemplateService('location'); } public static function getProductService() { return new DynamicPageTemplateService('product'); } public static function getUserService() { return new UserService(); } public static function getDefaultService($token) { return new StaticPageTemplateService($token); } } So, given this, I will have a handful of generic services in which I will pass that token (for example, StaticPageTemplateService and DynamicPageTemplateService) that will probably implement another factory method just like this to grab templates, domain objects, etc. And some that will be specific services (for example, UserService) which will be 1:1 to that token and not reused. So, this seems to be an ok approach (please give suggestions if it is not) for a small amount of services. But what about when, over time and my site grows, I end up with 100s of possibilities. This no longer seems like a good approach. Am I just way off to begin with or is there another design pattern that would be a better fit? Thanks. UPDATE: @JSprang - the token is actually sent in the uri like mydomain.com/location would want a service specific to loction and mydomain.com/news would want a service specific to news. Now, for a lot of these, the service will be generic. For instance, a lot of pages will call a StaticTemplatePageService in which the token is passed in to the service. That service in turn will grab the "location" template or "links" template and just spit it back out. Some will need DynamicTemplatePageService in which the token gets passed in, like "news" and that service will grab a NewsDomainObject, determine how to present it and spit that back out. Others, like "user" will be specific to a UserService in which it will have methods like Login, Logout, etc. So basically, the token will be used to determine which service is needed AND if it is generic service, that token will be passed to that service. Maybe token isn't the correct terminology but I hope you get the purpose. I wanted to use the factory so I can easily swap out which Service I need in case my needs change. I just worry that after the site grows larger (both pages and functionality) that the factory will become rather bloated. But I'm starting to feel like I just can't get away from storing the mappings in an array (like Stephen's solution). That just doesn't feel OOP to me and I was hoping to find something more elegant.
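    For the concern about hundreds of cases: both the switch and the one-method-per-service variant can collapse into a data-driven factory, which keeps the construction logic in one place while the token-to-class mapping becomes configuration. A rough sketch (class names are the ones from the question; the map contents are illustrative):

        class ServiceFactory {
            private static $map = array(
                'location' => array('StaticPageTemplateService', true),   // true = constructor takes the token
                'product'  => array('DynamicPageTemplateService', true),
                'user'     => array('UserService', false),
            );

            public static function getInstance($token) {
                if (isset(self::$map[$token])) {
                    list($class, $passToken) = self::$map[$token];
                    return $passToken ? new $class($token) : new $class();
                }
                return new StaticPageTemplateService($token);   // same default as the switch version
            }
        }

    Whether the map lives in code or in a config file, callers still only see getInstance(), so which service backs a token stays swappable.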

    Read the article

  • unresolved external symbol _D3D10CreateDeviceAndSwapChain@32 referenced in function "public: bool

    - by numerical25
    Having trouble creating my swap chain. I receive the following error. DX3dApp.obj : error LNK2019: unresolved external symbol _D3D10CreateDeviceAndSwapChain@32 referenced in function "public: bool __thiscall DX3dApp::InitDirect3D(void)" (?InitDirect3D@DX3dApp@@QAE_NXZ) Below is the code ive done so far. #include "DX3dApp.h" bool DX3dApp::Init(HINSTANCE hInstance, int width, int height) { mhInst = hInstance; mWidth = width; mHeight = height; if(!WindowsInit()) { return false; } if(!InitDirect3D()) { return false; } } int DX3dApp::Run() { MSG msg = {0}; while (WM_QUIT != msg.message) { while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE) == TRUE) { TranslateMessage(&msg); DispatchMessage(&msg); } Render(); } return (int) msg.wParam; } bool DX3dApp::WindowsInit() { WNDCLASSEX wcex; wcex.cbSize = sizeof(WNDCLASSEX); wcex.style = CS_HREDRAW | CS_VREDRAW; wcex.lpfnWndProc = (WNDPROC)WndProc; wcex.cbClsExtra = 0; wcex.cbWndExtra = 0; wcex.hInstance = mhInst; wcex.hIcon = 0; wcex.hCursor = LoadCursor(NULL, IDC_ARROW); wcex.hbrBackground = (HBRUSH)(COLOR_WINDOW+1); wcex.lpszMenuName = NULL; wcex.lpszClassName = TEXT("DirectXExample"); wcex.hIconSm = 0; RegisterClassEx(&wcex); // Resize the window RECT rect = { 0, 0, mWidth, mHeight }; AdjustWindowRect(&rect, WS_OVERLAPPEDWINDOW, FALSE); // create the window from the class above mMainhWnd = CreateWindow(TEXT("DirectXExample"), TEXT("DirectXExample"), WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, rect.right - rect.left, rect.bottom - rect.top, NULL, NULL, mhInst, NULL); if (!mMainhWnd) { return false; } ShowWindow(mMainhWnd, SW_SHOW); UpdateWindow(mMainhWnd); return true; } bool DX3dApp::InitDirect3D() { DXGI_SWAP_CHAIN_DESC scd; ZeroMemory(&scd, sizeof(scd)); scd.BufferCount = 1; scd.BufferDesc.Width = mWidth; scd.BufferDesc.Height = mHeight; scd.BufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM; scd.BufferDesc.RefreshRate.Numerator = 60; scd.BufferDesc.RefreshRate.Denominator = 1; scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; scd.OutputWindow = mMainhWnd; scd.SampleDesc.Count = 1; scd.SampleDesc.Quality = 0; scd.Windowed = TRUE; HRESULT hr = D3D10CreateDeviceAndSwapChain(NULL,D3D10_DRIVER_TYPE_REFERENCE, NULL, 0, D3D10_SDK_VERSION, &scd, &mpSwapChain, &mpD3DDevice); if(!hr != S_OK) { return FALSE; } ID3D10Texture2D *pBackBuffer; return TRUE; } void DX3dApp::Render() { } LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam) { switch (message) { // Allow the user to press the escape key to end the application case WM_KEYDOWN: switch(wParam) { // Check if the user hit the escape key case VK_ESCAPE: PostQuitMessage(0); break; } break; // The user hit the close button, close the application case WM_DESTROY: PostQuitMessage(0); break; } return DefWindowProc(hWnd, message, wParam, lParam); }
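    The LNK2019 here is a linking problem rather than a code problem: d3d10.h supplies the declaration, but the import library that actually resolves _D3D10CreateDeviceAndSwapChain@32 is d3d10.lib, which is not in the linker inputs. Adding it under Project Properties -> Linker -> Input -> Additional Dependencies, or pulling it in from source with the MSVC pragma, is the usual fix:

        // at the top of DX3dApp.cpp
        #pragma comment(lib, "d3d10.lib")     // resolves D3D10CreateDeviceAndSwapChain
        // #pragma comment(lib, "d3dx10.lib") // only needed once D3DX* functions are called

    Separately, note that the success check if(!hr != S_OK) returns FALSE exactly when the call succeeds; the conventional test is if(FAILED(hr)).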

    Read the article

  • I try to change the activity to the next page, but it doesn't work.

    - by Daisy
    I try to change page on android application. It have error but look like its swap a little while. public class gps_gui extends Activity implements View.OnClickListener{ /** Called when the activity is first created. */ private static final int ACTIVITY_CREATE = 0; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); final Button login = (Button) findViewById(R.id.login); login.setOnClickListener((OnClickListener) this); } public void onClick(View v){ //Toast.makeText(this, "Already Login",Toast.LENGTH_SHORT).show(); Intent i = new Intent(this, SecondPage.class); startActivityForResult(i, ACTIVITY_CREATE); } } public class SecondPage extends Activity { @Override protected void onCreate(Bundle savedInstanceState) { // TODO Auto-generated method stub super.onCreate(savedInstanceState); setContentView(R.layout.second_page); } } In AndriodManifest.xml <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="gps.GUI" android:versionCode="1" android:versionName="1.0"> <application android:icon="@drawable/icon" android:label="@string/app_name"> <activity android:name=".gps_gui" android:label="@string/app_name"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:name="second_page"></activity> </application> <uses-sdk android:minSdkVersion="8" /> </manifest> Anyone can help me ? thanks Errors: 01-29 13:56:57.709: ERROR/AndroidRuntime(393): FATAL EXCEPTION: main 01-29 13:56:57.709: ERROR/AndroidRuntime(393): android.content.ActivityNotFoundException: Unable to find explicit activity class {gps.GUI/gps.GUI.SecondPage}; have you declared this activity in your AndroidManifest.xml? 01-29 13:56:57.709: ERROR/AndroidRuntime(393): at android.app.Instrumentation.checkStartActivityResult(Instrumentation.java:1404) 01-29 13:56:57.709: ERROR/AndroidRuntime(393): at android.app.Instrumentation.execStartActivity(Instrumentation.java:1378) 01-29 13:56:57.709: ERROR/AndroidRuntime(393): at android.app.Activity.startActivityForResult(Activity.java:2817) 01-29 13:56:57.709: ERROR/AndroidRuntime(393): at gps.GUI.gps_gui$1.onClick(gps_gui.java:30) 01-29 13:56:57.709: ERROR/AndroidRuntime(393): at android.view.View.performClick(View.java:2408) 01-29 13:56:57.709: ERROR/AndroidRuntime(393): at android.view.View$PerformClick.run(View.java:8816) 01-29 13:56:57.709: ERROR/AndroidRuntime(393): at android.os.Handler.handleCallback(Handler.java:587) 01-29 13:56:57.709: ERROR/AndroidRuntime(393): at android.os.Handler.dispatchMessage(Handler.java:92) 01-29 13:56:57.709: ERROR/AndroidRuntime(393): at android.os.Looper.loop(Looper.java:123) 01-29 13:56:57.709: ERROR/AndroidRuntime(393): at android.app.ActivityThread.main(ActivityThread.java:4627) 01-29 13:56:57.709: ERROR/AndroidRuntime(393): at java.lang.reflect.Method.invokeNative(Native Method) 01-29 13:56:57.709: ERROR/AndroidRuntime(393): at java.lang.reflect.Method.invoke(Method.java:521) 01-29 13:56:57.709: ERROR/AndroidRuntime(393): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868) 01-29 13:56:57.709: ERROR/AndroidRuntime(393): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626) 01-29 13:56:57.709: ERROR/AndroidRuntime(393): at dalvik.system.NativeStart.main(Native Method)
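    The ActivityNotFoundException spells out the cause: the launched component is gps.GUI.SecondPage, but the manifest registers an activity named "second_page", which matches no class. Declaring the activity by its real class name should fix it; a sketch of the corrected manifest entry:

        <!-- Corrected declaration: ".SecondPage" resolves to gps.GUI.SecondPage -->
        <activity android:name=".SecondPage" />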

    Read the article

  • iPhone: controlling text with delay problem with UIWebView

    - by James B.
    Hi, I've opted to use UIWebView so I can control the layout of text I've got contained in a local html document in my bundle. I want the text to display within a UIWebView I've got contained within my view. So the text isn't the whole view, just part of it. Everything runs fine, but when the web page loads I get a blank screen for a second before the text appears. This looks really bad. can anyone give me an example of how to stop this happening? I'm assuming I have to somehow hide the web view until it has fully loaded? Could someone one tell me how to do this? At the moment I'm calling my code through the viewDidLoad like this: [myUIWebView loadRequest: [NSURLRequest requestWithURL:[NSURL fileURLWithPath:[[NSBundle mainBundle]pathForResource:@"localwebpage" ofType:@"html"] isDirectory:NO]]]; Any help is much appreciated. I've read round a few forums and not seen a good answer to this question, and it seems like it recurs a lot as an issue for beginners like myself. Thanks for taking the time to read this post! UPDATED info Thanks for your response. The suggestions below solves the problem but creates a new one for me as now when my view loads it is totally hidden until I click on my toggle switch. to understand this it's maybe most helpful if I post all my code. Before this though let me explain the setup of my view. I've got a standard view within which I've also got two web views, one on top of the other. each web view contains different text with different styling. the user flicks between views using a toggle switch, which hides/reveals the web views. I'm using the web views because I want to control the style/layout of the text. Below is my full .m code, I can't figure out where it's going wrong. My web views are called oxford & harvard I'm sure its something to do with how/when I'm hiding/revealing views. I've played around with this but can't seem to get it right. Maybe my approach is wrong. A bit of advice ironing this out would be really appreciated: @implementation ReferenceViewController @synthesize oxford; @synthesize harvard; // Implement viewDidLoad to do additional setup after loading the view, typically from a nib. - (void)viewDidLoad { [super viewDidLoad]; [oxford loadRequest: [NSURLRequest requestWithURL:[NSURL fileURLWithPath:[[NSBundle mainBundle]pathForResource:@"Oxford" ofType:@"html"] isDirectory:NO]]]; [harvard loadRequest: [NSURLRequest requestWithURL:[NSURL fileURLWithPath:[[NSBundle mainBundle]pathForResource:@"Harvard" ofType:@"html"] isDirectory:NO]]]; [oxford setHidden:YES]; [harvard setHidden:YES]; } - (void)webViewDidFinishLoad:(UIWebView *)webView { if([webView hidden]) { [oxford setHidden:NO]; [harvard setHidden:NO]; } } //Toggle controls for toggle switch in UIView to swap between webviews - (IBAction)toggleControls:(id)sender { if ([sender selectedSegmentIndex] == kSwitchesSegmentIndex) { oxford.hidden = NO; harvard.hidden = YES; } else { oxford.hidden = YES; harvard.hidden = NO; } } - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc that aren't in use. } - (void)viewDidUnload { [super viewDidUnload]; // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; } - (void)dealloc { [super dealloc]; [oxford release]; [harvard release]; } @end
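    Two likely culprits in the updated code: webViewDidFinishLoad: only fires if each web view's delegate is actually set to this controller (in the nib, or via oxford.delegate = self), and when it does fire, the current check un-hides both views at once instead of just the one that should be showing. A minimal sketch of the callback, with the delegates assumed to be wired up:

        // Sketch: show only the default view once its content is ready;
        // harvard stays hidden until the segmented control flips it.
        - (void)webViewDidFinishLoad:(UIWebView *)webView {
            if (webView == oxford) {
                oxford.hidden = NO;
            }
        }

    Also, the posted dealloc calls [super dealloc] before the release calls; under manual reference counting [super dealloc] should come last.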

    Read the article

  • Exit code 3 (not my return value, looking for source)

    - by Kathoz
    Greetings, my program exits with the code 3. No error messages, no exceptions, and the exit is not initiated by my code. The problem occurs when I am trying to read extremely long integer values from a text file (the text file is present and correctly opened, with successful prior reading). I am using very large amounts of memory (in fact, I think that this might be the cause, as I am nearly sure I go over the 2GB per process memory limit). I am also using the GMP (or, rather, MPIR) library to multiply bignums. I am fairly sure that this is not a file I/O problem as I got the same error code on a previous program version that was fully in-memory. System: MS Visual Studio 2008 MS Windows Vista Home Premium x86 MPIR 2.1.0 rc2 4GB RAM Where might this error code originate from? EDIT: this is the procedure that exits with the code void condenseBinSplitFile(const char *sourceFilename, int partCount){ //condense results file into final P and Q std::string tempFilename; std::string inputFilename(sourceFilename); std::string outputFilename(BIN_SPLIT_FILENAME_DATA2); mpz_class *P = new mpz_class(0); mpz_class *Q = new mpz_class(0); mpz_class *PP = new mpz_class(0); mpz_class *QQ = new mpz_class(0); FILE *sourceFile; FILE *resultFile; fpos_t oldPos; int swapCount = 0; while (partCount > 1){ std::cout << partCount << std::endl; sourceFile = fopen(inputFilename.c_str(), "r"); resultFile = fopen(outputFilename.c_str(), "w"); for (int i=0; i<partCount/2; i++){ //Multiplication order: //Get Q, skip P //Get QQ, mul Q and QQ, print Q, delete Q //Jump back to P, get P //Mul P and QQ, delete QQ //Skip QQ, get PP //Mul P and PP, delete P and PP //Get Q, skip P mpz_inp_str(Q->get_mpz_t(), sourceFile, CALC_BASE); fgetpos(sourceFile, &oldPos); skipLine(sourceFile); skipLine(sourceFile); //Get QQ, mul Q and QQ, print Q, delete Q mpz_inp_str(QQ->get_mpz_t(), sourceFile, CALC_BASE); (*Q) *= (*QQ); mpz_out_str(resultFile, CALC_BASE, Q->get_mpz_t()); fputc('\n', resultFile); (*Q) = 0; //Jump back to P, get P fsetpos(sourceFile, &oldPos); mpz_inp_str(P->get_mpz_t(), sourceFile, CALC_BASE); //Mul P and QQ, delete QQ (*P) *= (*QQ); (*QQ) = 0; //Skip QQ, get PP skipLine(sourceFile); skipLine(sourceFile); mpz_inp_str(PP->get_mpz_t(), sourceFile, CALC_BASE); //Mul P and PP, delete PP, print P, delete P (*P) += (*PP); (*PP) = 0; mpz_out_str(resultFile, CALC_BASE, P->get_mpz_t()); fputc('\n', resultFile); (*P) = 0; } partCount /= 2; fclose(sourceFile); fclose(resultFile); //swap filenames tempFilename = inputFilename; inputFilename = outputFilename; outputFilename = tempFilename; swapCount++; } delete P; delete Q; delete PP; delete QQ; remove(BIN_SPLIT_FILENAME_RESULTS); if (swapCount%2 == 0) rename(sourceFilename, BIN_SPLIT_FILENAME_RESULTS); else rename(BIN_SPLIT_FILENAME_DATA2, BIN_SPLIT_FILENAME_RESULTS); }
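    On Windows, abort() in the Microsoft C runtime terminates the process with exit code 3, and an unhandled std::bad_alloc from operator new ends up there via terminate(), which fits the theory of a 32-bit process running out of address space near the 2 GB mark. A small diagnostic sketch (it does not fix anything, it just makes an out-of-memory failure visible); whether the failing allocation comes from the mpz_class objects or from GMP's own internal allocator would still need checking:

        #include <new>
        #include <cstdio>
        #include <cstdlib>

        // If the silent "exit code 3" is really operator new failing, this handler
        // fires and reports it before the process goes down.
        static void onOutOfMemory() {
            std::fprintf(stderr, "operator new failed - likely out of address space\n");
            std::exit(EXIT_FAILURE);
        }

        int main() {
            std::set_new_handler(onOutOfMemory);
            // ... open the input file and call condenseBinSplitFile(...) as before ...
        }

    If that is the cause, the usual remedies are building the program as x64 or, failing that, linking with /LARGEADDRESSAWARE to raise the 32-bit limit.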

    Read the article

  • How to reduce virtual memory by optimising my PHP code?

    - by iCeR
    My current code (see below) uses 147MB of virtual memory! My provider has allocated 100MB by default and the process is killed once run, causing an internal error. The code is utilising curl multi and must be able to loop with more than 150 iterations whilst still minimizing the virtual memory. The code below is only set at 150 iterations and still causes the internal server error. At 90 iterations the issue does not occur. How can I adjust my code to lower the resource use / virtual memory? Thanks! <?php function udate($format, $utimestamp = null) { if ($utimestamp === null) $utimestamp = microtime(true); $timestamp = floor($utimestamp); $milliseconds = round(($utimestamp - $timestamp) * 1000); return date(preg_replace('`(?<!\\\\)u`', $milliseconds, $format), $timestamp); } $url = 'https://www.testdomain.com/'; $curl_arr = array(); $master = curl_multi_init(); for($i=0; $i<150; $i++) { $curl_arr[$i] = curl_init(); curl_setopt($curl_arr[$i], CURLOPT_URL, $url); curl_setopt($curl_arr[$i], CURLOPT_RETURNTRANSFER, 1); curl_setopt($curl_arr[$i], CURLOPT_SSL_VERIFYHOST, FALSE); curl_setopt($curl_arr[$i], CURLOPT_SSL_VERIFYPEER, FALSE); curl_multi_add_handle($master, $curl_arr[$i]); } do { curl_multi_exec($master,$running); } while($running > 0); for($i=0; $i<150; $i++) { $results = curl_multi_getcontent ($curl_arr[$i]); $results = explode("<br>", $results); echo $results[0]; echo "<br>"; echo $results[1]; echo "<br>"; echo udate('H:i:s:u'); echo "<br><br>"; usleep(100000); } ?> Processor Information Total processors: 8 Processor #1 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #2 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #3 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #4 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #5 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #6 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #7 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #8 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Memory Information Memory for crash kernel (0x0 to 0x0) notwithin permissible range Memory: 8302344k/9175040k available (2176k kernel code, 80272k reserved, 901k data, 228k init, 7466304k highmem) System Information Linux server3.server.com 2.6.18-194.17.1.el5PAE #1 SMP Wed Sep 29 13:31:51 EDT 2010 i686 i686 i386 GNU/Linux Physical Disks SCSI device sda: 1952448512 512-byte hdwr sectors (999654 MB) sda: Write Protect is off sda: Mode Sense: 03 00 00 08 SCSI device sda: drive cache: write back SCSI device sda: 1952448512 512-byte hdwr sectors (999654 MB) sda: Write Protect is off sda: Mode Sense: 03 00 00 08 SCSI device sda: drive cache: write back sd 0:1:0:0: Attached scsi disk sda sd 4:0:0:0: Attached scsi removable disk sdb sd 0:1:0:0: Attached scsi generic sg4 type 0 sd 4:0:0:0: Attached scsi generic sg7 type 0 Current Memory Usage total used free shared buffers cached Mem: 8306672 7847384 459288 0 487912 6444548 -/+ buffers/cache: 914924 7391748 Swap: 4095992 496 4095496 Total: 12402664 7847880 4554784 Current Disk Usage Filesystem Size Used Avail Use% Mounted on /dev/mapper/VolGroup00-LogVol00 898G 307G 546G 36% / /dev/sda1 
99M 19M 76M 20% /boot none 4.0G 0 4.0G 0% /dev/shm /var/tmpMnt 4.0G 1.8G 2.0G 48% /tmp
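    With CURLOPT_RETURNTRANSFER set, every one of the 150 handles keeps its complete response buffered in memory until the script ends, so memory use scales with the batch size. One way to stay under the 100 MB cap, sketched below, is to run the requests in smaller chunks and release each handle as soon as its body has been read; the chunk size of 30 is an arbitrary placeholder, and the echo/usleep output loop is elided:

        <?php
        $url   = 'https://www.testdomain.com/';
        $total = 150;
        $chunk = 30; // placeholder batch size
        for ($offset = 0; $offset < $total; $offset += $chunk) {
            $master  = curl_multi_init();
            $handles = array();
            for ($i = 0; $i < $chunk; $i++) {
                $ch = curl_init();
                curl_setopt($ch, CURLOPT_URL, $url);
                curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
                curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
                curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
                curl_multi_add_handle($master, $ch);
                $handles[] = $ch;
            }
            do {
                curl_multi_exec($master, $running);
            } while ($running > 0);
            foreach ($handles as $ch) {
                $body = curl_multi_getcontent($ch);
                // ... explode("<br>", $body) and echo as in the original ...
                curl_multi_remove_handle($master, $ch);
                curl_close($ch); // frees the buffered response
            }
            curl_multi_close($master);
        }
        ?>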

    Read the article

  • Doing XML extracts with XSLT without having to read the whole DOM tree into memory?

    - by Thorbjørn Ravn Andersen
    I have a situation where I want to extract some information from some very large but regular XML files (just had to do it with a 500 Mb file), and where XSLT would be perfect. Unfortunately those XSLT implementations I am aware of (except the most expensive version of Saxon) does not support only having the necessary part of the DOM read in but reads in the whole tree. This cause the computer to swap to death. The XPath in question is //m/e[contains(.,'foobar') so it is essentially just a grep. Is there an XSLT implementation which can do this? Or an XSLT implementation which given suitable "advice" can do this trick of pruning away the parts in memory which will not be needed again? I'd prefer a Java implementation but both Windows and Linux are viable native platforms. EDIT: The input XML looks like: <log> <!-- Fri Jun 26 12:09:27 CEST 2009 --> <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'> <m>Registering Catalina:type=Manager,path=/axsWHSweb-20090626,host=localhost</m></e> <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'> <m>Force random number initialization starting</m></e> <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'> <m>Getting message digest component for algorithm MD5</m></e> <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'> <m>Completed getting message digest component</m></e> <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'> <m>getDigest() 0</m></e> ...... </log> Essentialy I want to select some m-nodes (and I know the XPath is wrong for that, it was just a quick hack), but maintain the XML layout. EDIT: It appears that STX may be what I am looking for (I can live with another transformation language), and that Joost is an implementation hereof. Any experiences? EDIT: I found that Saxon 6.5.4 with -Xmx1500m could load my XML, so this allowed me to use my XPaths right now. This is just a lucky stroke so I'd still like to solve this generically - this means scriptable which in turn means no handcrafted Java filtering first. EDIT: Oh, by the way. This is a log file very similar to what is generated by the log4j XMLLayout. The reason for XML is to be able to do exactly this, namely do queries on the log. This is the initial try, hence the simple question. Later I'd like to be able to ask more complex questions - therefore I'd like the query language to be able to handle the input file.
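    Since the query is essentially a grep, a streaming (StAX) pass avoids building any tree at all. The poster explicitly wants to avoid hand-written Java filtering, so treat the sketch below only as an illustration of what a streaming pass looks like, or as a one-off pre-filter that shrinks the 500 MB log into something any XSLT processor can handle. The file name and search string are placeholders; it prints the text of every <m> element containing the search term.

        import java.io.FileInputStream;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        // Streaming "grep" over the log: constant memory, no DOM.
        public class LogGrep {
            public static void main(String[] args) throws Exception {
                XMLStreamReader reader = XMLInputFactory.newInstance()
                        .createXMLStreamReader(new FileInputStream("catalina-log.xml"));
                StringBuilder text = new StringBuilder();
                boolean inMessage = false;
                while (reader.hasNext()) {
                    int event = reader.next();
                    if (event == XMLStreamConstants.START_ELEMENT
                            && "m".equals(reader.getLocalName())) {
                        inMessage = true;
                        text.setLength(0);
                    } else if (event == XMLStreamConstants.CHARACTERS && inMessage) {
                        text.append(reader.getText());
                    } else if (event == XMLStreamConstants.END_ELEMENT
                            && "m".equals(reader.getLocalName())) {
                        inMessage = false;
                        if (text.indexOf("foobar") >= 0) {
                            System.out.println(text);
                        }
                    }
                }
                reader.close();
            }
        }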

    Read the article

  • Sidebar with CSS3 transform

    - by Malcoda
    Goal I'm working on a collapsible sidebar using jQuery for animation. I would like to have vertical text on the sidebar that acts as a label and can swap on the animateOut/animateIn effect. Normally I would use an image of the text that I've simply swapped vertically, and switch it out on animation, but with CSS3 transforms I'd like to get it to work instead. Problem The problem I'm facing is that setting the height on my rotated container makes it expand horizontally (as it's rotated 90deg) so that doesn't work. I then tried to set the width (hoping it would expand vertically, acting as height), but that has an odd effect of causing the width of my parent container to expand as well. Fix? Anyone know why this happens and also what the fix/workaround could be without setting max-widths and overflow: hidden? I've got a fairly good understanding of both html elements behavior and css3, but this is stumping me. Live Example Here's a fiddle that demonstrates my problem: Fiddle The collapse-pane class is what I have rotated and contains the span I have my text inside. You'll notice it has a width set, that widens the border, but also affects the parent container. The code: CSS: .right-panel{ position:fixed; right:0; top:0; bottom:0; border:1px solid #ccc; background-color:#efefef; } .collapse-pane{ margin-top:50px; width:30px; border:1px solid #999; cursor:pointer; /* Safari */ -webkit-transform: rotate(-90deg); /* Firefox */ -moz-transform: rotate(-90deg); /* IE */ -ms-transform: rotate(-90deg); /* Opera */ -o-transform: rotate(-90deg); /* Internet Explorer */ filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=3); } .collapse-pane span{ padding:5px; } HTML <div class="right-panel"> <div class="collapse-pane"> <span class="expand">Expand</span> </div> <div style="width:0;" class="panel-body"> <div style="display:none;" class="panel-body-inner"> adsfasdfasdf </div> </div> </div> JavaScript (Thought not really relevant) $(document).ready(function(){ var height = $(".right-panel").height(); $(".collapse-pane").css({marginTop: height/2 - 20}); $('.collapse-pane').click(function(){ if($(".collapse-pane span").html() == "Expand"){ $(".panel-body").animate({width:200}, 400); $(".panel-body-inner").fadeIn(500); $(".collapse-pane span").html("Collapse"); }else{ $(".panel-body").animate({width:00}, 400); $(".panel-body-inner").fadeOut(300); $(".collapse-pane span").html("Expand"); } }); }); I hope this was clear... Thanks for any help!
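    The parent widens because CSS transforms are purely visual: layout still uses the element's un-rotated box, so the width on .collapse-pane also feeds into .right-panel's shrink-to-fit width. One workaround, sketched below, is to take the label out of normal flow (so its box no longer sizes the parent) and rotate it about a known corner with transform-origin; the 120px width (which becomes the visual height once rotated) and the offsets are placeholder values that will need tuning.

        /* Sketch: out-of-flow label, rotated about its top-left corner */
        .right-panel   { position: fixed; right: 0; top: 0; bottom: 0; }
        .collapse-pane {
            position: absolute;      /* no longer affects the parent's width */
            top: 50%;
            left: 0;
            width: 120px;            /* visual "height" of the vertical label */
            -webkit-transform: rotate(-90deg);
               -moz-transform: rotate(-90deg);
                -ms-transform: rotate(-90deg);
                 -o-transform: rotate(-90deg);
                     transform: rotate(-90deg);
            -webkit-transform-origin: left top;
               -moz-transform-origin: left top;
                    transform-origin: left top;
        }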

    Read the article

  • Calling popToRootViewControllerAnimated causing crash. How should I be doing this?

    - by Lewis42
    The app is for taking body measurements. The user can say I want to measure: legs, arms and neck, in the settings tab and in the main tab there is a view which loops round to take each measurement. This is achieved like so: i have tab controller the first tab has a navigation controller the first view controller on the storyboard and has one segue to itself the board loops round until it has all the measurements then it segues to a different controller the problem is: if the user changes which measurements they are taking in the settings tab, the first tab needs to completely reload, as if the app was just starting up, clearing down the whole nav stack etc. at the moment the tab controller calls popToRootViewControllerAnimated on the navigation controller in the measurements tab, but this is causing a crash. Each screen has a slider control and a call to titleForRow:forComponent: is being called on a deleted view causing it to crash. What am I doing wrong?! Here's the tab bar controller code // TabBarController.m // #import "TabBarController.h" #import "TodaysMeasurementObject.h" #import "AppDelegateProtocol.h" #import "AddMeasurementViewController.h" #import "ReadPerson.h" #import "AppDelegate.h" @interface TabBarController () <UITabBarControllerDelegate> @end @implementation TabBarController bool resetWizardView = false; - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil { self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]; if (self) { } return self; } - (void)viewDidLoad { [super viewDidLoad]; // Do any additional setup after loading the view. self.delegate = self; [[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(orientationChanged:) name:UIDeviceOrientationDidChangeNotification object:nil]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(measurementsSettingsUpdated:) name:@"MeasurementsSettingsUpdated" object:nil]; } - (void) measurementsSettingsUpdated:(NSNotification *) notification { // UINavigationController *navigationController = [self.viewControllers objectAtIndex:0]; // AddMeasurementViewController *addMeasurement = [[AddMeasurementViewController alloc] init]; // [navigationController setViewControllers: [[NSArray alloc] initWithObjects:addMeasurement, nil]]; resetWizardView = YES; } - (void) viewDidAppear:(BOOL)animated { if (![ReadPerson userHasRecords]) { [self setSelectedIndex:3]; } } - (void)orientationChanged:(NSNotification *)notification { // We must add a delay here, otherwise we'll swap in the new view // too quickly and we'll get an animation glitch [self performSelector:@selector(showGraphs) withObject:nil afterDelay:0]; } - (void)showGraphs { UIDeviceOrientation deviceOrientation = [UIDevice currentDevice].orientation; if (deviceOrientation == UIDeviceOrientationLandscapeLeft && !isShowingLandscapeView) { [self performSegueWithIdentifier: @"toGraph" sender: self]; isShowingLandscapeView = YES; } else if (deviceOrientation != UIDeviceOrientationLandscapeLeft && isShowingLandscapeView) { [self dismissModalViewControllerAnimated:YES]; isShowingLandscapeView = NO; } } - (void)viewDidUnload { [super viewDidUnload]; // Release any retained subviews of the main view. 
} - (void)dealloc { [[NSNotificationCenter defaultCenter] removeObserver:self]; [[UIDevice currentDevice] endGeneratingDeviceOrientationNotifications]; } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { if(interfaceOrientation == UIInterfaceOrientationLandscapeRight) { [self performSegueWithIdentifier: @"toGraph" sender: self]; } return false; } - (void)tabBarController:(UITabBarController *)tabBarController didSelectViewController:(UIViewController *)viewController { int tbi = tabBarController.selectedIndex; if (tbi == 0) { [[viewController view] setNeedsDisplay]; if (resetWizardView) { [(UINavigationController*)[self.viewControllers objectAtIndex:0] popToRootViewControllerAnimated: NO]; // ******* POP CALLED HERE ****** resetWizardView = false; } } } - (TodaysMeasurementObject*) theAppDataObject { id<AppDelegateProtocol> theDelegate = (id<AppDelegateProtocol>) [UIApplication sharedApplication].delegate; TodaysMeasurementObject* theDataObject; theDataObject = (TodaysMeasurementObject*) theDelegate.theAppDataObject; return theDataObject; } - (BOOL)shouldAutorotate { return NO; } - (NSUInteger)supportedInterfaceOrientations { return UIInterfaceOrientationMaskPortrait; } @end UPDATED - (void) measurementsSettingsUpdated:(NSNotification *) notification { NSMutableArray *viewControllers = [[NSMutableArray alloc] initWithArray: self.viewControllers]; UINavigationController *navigationController = [viewControllers objectAtIndex:0]; AddMeasurementViewController *addMeasurement = [[AddMeasurementViewController alloc] init]; [navigationController setViewControllers: [[NSArray alloc] initWithObjects:addMeasurement, nil]]; [viewControllers setObject:navigationController atIndexedSubscript:0]; self.viewControllers = viewControllers; } and removed the code from - tabBarController:didSelectViewController: but still the same error. I think the problem is that it's trying to get a value for the slide control after the view has been deleted. But some part of the view must still be alive...? Anyway to kill that off? Or leave it all alive??
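    The trace ("titleForRow:forComponent: ... on a deleted view") points at a UIPickerView that is still calling back into one of the popped measurement controllers after popToRootViewControllerAnimated: has torn it down. A common fix, sketched with a hypothetical outlet name (measurementPicker), is to detach the picker from the controller before it goes away:

        // Sketch for each measurement screen that owns the picker:
        // stop delegate/dataSource callbacks into a controller that is being popped.
        - (void)viewWillDisappear:(BOOL)animated {
            [super viewWillDisappear:animated];
            self.measurementPicker.delegate = nil;
            self.measurementPicker.dataSource = nil;
        }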

    Read the article

  • "Undefined reference to"

    - by user1332364
    I know that there are a lot of questions somewhat related to this one, but they answers are a bit hard for me to make sense of. I'm receiving the following error for a few different lines of code: C:\Users\Jeff\AppData\Local\Temp\ccAixtmT.o:football.cpp:(.text+0x6f0): undefined reference to `Player::set_values(int, std::string, float)' From these blocks of code: class Player { int playerNum; string playerPos; float playerRank; public: void set_values(int, string, float); float get_rank(){ return playerRank; }; bool operator == (const Player &p1/*, const Player &p2*/) const { if(&p1.playerNum == &playerNum && &p1.playerPos == &playerPos && &p1.playerRank == &playerRank) return true; else return false; }; }; And this being the main function referencing the subclass: int main() { ifstream infile; infile.open ("input.txt", ifstream::in); int numTeams; string command; while(!infile.fail() && !infile.eof()){ infile >> numTeams; string name; Player p; int playNum; string playPos; float playRank; Player all[11]; float ranks[11]; Team allTeams[numTeams]; for(int i=0; i<numTeams; i++){ infile >> name; for(int j=0; j<11; j++){ infile >> playNum; infile >> playPos; infile >> playRank; if(playPos == "QB") p.set_values(playNum, playPos, (playRank*2.0)); else if(playPos == "RB") p.set_values(playNum, playPos, (playRank*1.5)); else if(playPos == "WR") p.set_values(playNum, playPos, (playRank/1.8)); else if(playPos == "TE") p.set_values(playNum, playPos, (playRank*1.1)); else if(playPos == "GD") p.set_values(playNum, playPos, (playRank/2.0)); else if(playPos == "TC") p.set_values(playNum, playPos, (playRank/2.2)); else if(playPos == "CR") p.set_values(playNum, playPos, (playRank/1.2)); all[j] = p; allTeams[i].set_values(all, name); } } infile >> command; if (command == "play"){ int t1; int t2; infile >> t1; infile >> t2; play(allTeams[t1], allTeams[t2]); } else { int t1; int p1; int t2; int p2; swap(allTeams[t1], allTeams[t1].get_player(p1), allTeams[t2], allTeams[t2].get_player(p2)); } } }
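    The linker error just means Player::set_values is declared but never defined in any translation unit being linked; the class body only has the prototype. A sketch of the missing definition (field assignments inferred from the member names), which belongs after the class or in the .cpp that is actually compiled:

        // Definition the linker is looking for
        void Player::set_values(int num, std::string pos, float rank) {
            playerNum  = num;
            playerPos  = pos;
            playerRank = rank;
        }

    As a side note, operator== compares member addresses (&p1.playerNum == &playerNum), which will be false for two distinct Player objects; comparing the values themselves is almost certainly what was meant.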

    Read the article

  • (This is for a project, so yes it is homework) How would I finish this Java code?

    - by user2924318
    The task is to create arrays using user input (which I was able to do), then for the second part, use a separate method to sort the array in ascending order then output it. I have gotten it to do everything I need except I don't know how I would get it to sort. The directions say to use a while loop from 0 to the length to find the minimum value then swap that with the 1st, but I don't know how to do this. This is what I have so far: public static void main(String[] args) { Scanner in = new Scanner(System.in); int storage = getNumDigits(in); if(storage == 0){ System.out.print("No digits to store? OK, goodbye!"); System.exit(0); } int []a = new int [storage]; a = getDigits(a, in); displayDigits(a); selectionSort(a); } private static int getNumDigits(Scanner inScanner) { System.out.print("Please enter the number of digits to be stored: "); int stored = inScanner.nextInt(); while(stored < 0){ System.out.println("ERROR! You must enter a non-negative number of digits!"); System.out.println(); System.out.print("Please enter the number of digits to be stored: "); stored = inScanner.nextInt(); } return stored; } private static int[] getDigits(int[] digits, Scanner inScanner) { int length = digits.length; int count = 0; int toBeStored = 0; while(count < length){ System.out.print("Enter integer " +count +": "); toBeStored = inScanner.nextInt(); digits[count] = toBeStored; count++; } return digits; } private static void displayDigits(int[] digits) { int len = digits.length; System.out.println(); System.out.println("Array before sorting:"); System.out.println("Number of digits in array: " +len); System.out.print("Digits in array: "); for(int cnt = 0; cnt < len-1; cnt++){ System.out.print(digits[cnt] + ", "); } System.out.println(digits[len-1]); } private static void selectionSort(int[] digits) { int l = digits.length; System.out.println(); System.out.println("Array after sorting:"); System.out.println("Number of digits in array: " +l); System.out.print("Digits in array: "); int index = 0; int value = digits[0]; int indVal = digits[index]; while(index < l){ indVal = digits[index]; if(indVal <= value){ indVal = value; digits[index] = value; index++; } else if(value < indVal){ index++; } System.out.print(value); //This is where I don't know what to do. } }
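    One way to finish selectionSort along the lines the assignment describes (an outer pass over each position, an inner while loop that finds the index of the minimum in the unsorted remainder, then a swap) is sketched below; the printing of the sorted digits can then reuse the same comma-separated loop as displayDigits. This is a sketch of the standard algorithm, not the only acceptable answer.

        // Selection sort using while loops, as the directions describe.
        private static void selectionSort(int[] digits) {
            int i = 0;
            while (i < digits.length) {
                int minIndex = i;
                int j = i + 1;
                while (j < digits.length) {      // find the smallest remaining value
                    if (digits[j] < digits[minIndex]) {
                        minIndex = j;
                    }
                    j++;
                }
                int temp = digits[i];            // swap it into position i
                digits[i] = digits[minIndex];
                digits[minIndex] = temp;
                i++;
            }
            // then print digits[] here, in the same style as displayDigits
        }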

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion. For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively “walking” the tree. Some algorithms work this way on flat data structures, such as arrays, as well. This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, “dividing” the total work in each recursive step, and “conquering” the work when the remaining work is small enough to be solved easily. Recursive algorithms, especially ones based on a form of divide and conquer, are often a very good candidate for parallelization. This is apparent from a common sense standpoint. Since we’re dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme. Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data. Implementing this type of algorithm is fairly simple. The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke. This method works by taking any number of delegates defined as an Action, and operating them all in parallel. The method returns when every delegate has completed: Parallel.Invoke( () => { Console.WriteLine("Action 1 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); }, () => { Console.WriteLine("Action 2 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); }, () => { Console.WriteLine("Action 3 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); } ); Running this simple example demonstrates the ease of using this method. For example, on my system, I get three separate thread IDs when running the above code. By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer. We can divide our work in each step, and execute each task in parallel, recursively. For example, suppose we wanted to implement our own quicksort routine. The quicksort algorithm can be designed based on divide and conquer. In each iteration, we pick a pivot point, and use that to partition the total array. We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
For example, let’s look at this simple, sequential implementation of quicksort: public static void QuickSort<T>(T[] array) where T : IComparable<T> { QuickSortInternal(array, 0, array.Length - 1); } private static void QuickSortInternal<T>(T[] array, int left, int right) where T : IComparable<T> { if (left >= right) { return; } SwapElements(array, left, (left + right) / 2); int last = left; for (int current = left + 1; current <= right; ++current) { if (array[current].CompareTo(array[left]) < 0) { ++last; SwapElements(array, last, current); } } SwapElements(array, left, last); QuickSortInternal(array, left, last - 1); QuickSortInternal(array, last + 1, right); } static void SwapElements<T>(T[] array, int i, int j) { T temp = array[i]; array[i] = array[j]; array[j] = temp; } Here, we implement the quicksort algorithm in a very common, divide and conquer approach.  Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework’s sort routine is slightly faster).  On my system, for example, I can use framework’s sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average. Looking at this routine, though, there is a clear opportunity to parallelize.  At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen.  This can be rewritten to use Parallel.Invoke by simply changing it to: // Code above is unchanged... SwapElements(array, left, last); Parallel.Invoke( () => QuickSortInternal(array, left, last - 1), () => QuickSortInternal(array, last + 1, right) ); } This routine will now run in parallel.  When executing, we now see the CPU usage across all cores spike while it executes.  However, there is a significant problem here – by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds!  We’re using more resources as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time. This occurs because parallelization adds overhead.  Each time we split this array, we spawn two new tasks to parallelize this algorithm!  This is far, far too many tasks for our cores to operate upon at a single time.  In effect, we’re “over-parallelizing” this routine.  This is a common problem when working with divide and conquer algorithms, and leads to an important observation: When parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system. This can be done with a few different approaches, in this case.  Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach.  Since the first few recursions will all still be parallelized, our “deeper” recursive tasks will be running in parallel, and can take full advantage of the machine.  This also dramatically reduces the overhead added by parallelizing, since we’re only adding overhead for the first few recursive calls.  There are two basic approaches we can take here.  The first approach would be to look at the total work size, and if it’s smaller than a specific threshold, revert to our serial implementation.  In this case, we could just check right-left, and if it’s under a threshold, call the methods directly instead of using Parallel.Invoke. 
The second approach is to track how “deep” in the “tree” we are currently at, and if we are below some number of levels, stop parallelizing.  This approach is a more general-purpose approach, since it works on routines which parse trees as well as routines working off of a single array, but may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly. This can be written very easily.  If we pass a maxDepth parameter into our internal routine, we can restrict the amount of times we parallelize by changing the recursive call to: // Code above is unchanged... SwapElements(array, left, last); if (maxDepth < 1) { QuickSortInternal(array, left, last - 1, maxDepth); QuickSortInternal(array, last + 1, right, maxDepth); } else { --maxDepth; Parallel.Invoke( () => QuickSortInternal(array, left, last - 1, maxDepth), () => QuickSortInternal(array, last + 1, right, maxDepth)); } We no longer allow this to parallelize indefinitely – only to a specific depth, at which time we revert to a serial implementation.  By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total amount of parallel operations significantly, but still provide adequate work for each processing core. With this final change, my timings are much better.  On average, I get the following timings: Framework via Array.Sort: 7.3 seconds Serial Quicksort Implementation: 9.3 seconds Naive Parallel Implementation: 14 seconds Parallel Implementation Restricting Depth: 4.7 seconds Finally, we are now faster than the framework’s Array.Sort implementation.
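    For completeness, the article never shows the public entry point for the depth-limited version; a small assumed wrapper that starts the recursion with maxDepth set to Environment.ProcessorCount, as described above, would look like this:

        // Assumed entry point (not shown in the article): one level of parallel
        // splitting per processor core, then serial recursion below that depth.
        public static void ParallelQuickSort<T>(T[] array) where T : IComparable<T>
        {
            QuickSortInternal(array, 0, array.Length - 1, Environment.ProcessorCount);
        }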

    Read the article

< Previous Page | 78 79 80 81 82 83 84 85 86  | Next Page >