Search Results

Search found 9824 results on 393 pages for 'space partitioning'.


  • Linux commands show different results

    - by ClydeFrog
    I'm really having a hard time processing these results on my Ubuntu server. I have a major problem with my JBoss server where I get FileNotFoundExceptions along with "No space left on device" errors. I thought "maybe I'm out of disk space", and used the df command to figure out how much I have left (the Swedish locale output below is translated):

        root@ubuntu1:/# df -h
        Filesystem                 Size  Used Avail Use% Mounted on
        /dev/mapper/ubuntu1-root    36G   13G   21G  38% /
        none                       2.0G  192K  2.0G   1% /dev
        none                       2.0G     0  2.0G   0% /dev/shm
        none                       2.0G   64K  2.0G   1% /var/run
        none                       2.0G     0  2.0G   0% /var/lock
        /dev/sda1                  228M   23M  193M  11% /boot
        /dev/mapper/vgdata-lvdata   79G  9.2G   66G  13% /data

    As you can see, I have plenty of space left. I also checked whether I'm out of inodes:

        root@ubuntu1:/# df -i
        Filesystem                  Inodes  IUsed    IFree IUse% Mounted on
        /dev/mapper/ubuntu1-root   2346512  61992  2284520    3% /
        none                        505380    773   504607    1% /dev
        none                        507383      1   507382    1% /dev/shm
        none                        507383     30   507353    1% /var/run
        none                        507383      2   507381    1% /var/lock
        /dev/sda1                   124496    230   124266    1% /boot
        /dev/mapper/vgdata-lvdata 10486784 233945 10252839    3% /data

    But then I used du:

        root@ubuntu1:/# du -s -h /*
        7.5M /bin
        23M  /boot
        19G  /data
        192K /dev
        11G  /eniro
        5.3M /etc
        112K /home
        0    /initrd.img
        183M /lib
        0    /lib64
        16K  /lost+found
        12K  /media
        4.0K /mnt
        4.0K /opt
        du: cannot access "/proc/20452/task/20452/fd/3": No such file or directory
        du: cannot access "/proc/20452/task/20452/fdinfo/3": No such file or directory
        du: cannot access "/proc/20452/fd/3": No such file or directory
        du: cannot access "/proc/20452/fdinfo/3": No such file or directory
        0    /proc
        18M  /root
        8.2M /sbin
        4.0K /selinux
        8.0K /srv
        0    /sys
        40K  /tmp
        691M /usr
        1.2G /var
        0    /vmlinuz

    Notice that /data and /eniro are 30G combined! How is that possible? Do I have a memory leak somewhere? Or is it something else?

    EDIT 1: OK, I figured out that /data has its own mount, so it doesn't make sense to add /data and /eniro together, because they aren't on the same filesystem. But how come the first command says 9.2G used for /data when the third says 19G for the same directory?
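
    A quick way to narrow down a df/du mismatch like this is to keep du on a single filesystem and to look for deleted-but-still-open files; a minimal sketch (standard GNU coreutils plus lsof, run as root):

        # du normally descends into nested mounts; -x stays on /data's own filesystem
        du -xsh /data
        # anything mounted on or under /data would inflate the plain du figure
        mount | grep /data
        # files that are deleted but still held open count in df yet never appear in du
        lsof +L1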


  • Migrating away from LVM

    - by Kye
    I have an Ubuntu home media server set up with 4.5TB split across a few hard drives (1x3TB, 2x1TB), and I'm using LVM2 to manage the volumes. I have recently added a 60GB SSD to my server, and I wish to use it to house the root partition of my server (which is currently in the LVM group).

    I don't want to simply add it to the LVM volume group, because (AFAIK) there's no way to ensure that the SSD will be used for the root filesystem. If I just throw it at the VG, it may end up housing my media, which would defeat the purpose of having the SSD in the first place. I feel that my only solution is to somehow remove my root partition from the LVM setup and copy it across to the SSD. My boot partition is, of course, not part of the LVM group. My disk setup is as follows:

        60GB SSD: empty.
        1TB HDD: /boot, LVM space.
        1TB HDD: LVM space.
        3TB HDD: LVM space.

    I have a few logical volumes: my root (/), a 'media' volume for my media collection, a backup one for my network backups, etc. Does anyone have any advice on how to go about this? My end goal is to have the 60GB SSD used for my boot and root partitions, with everything else on the 3TB/1TB/1TB hard drives.
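
    For what it's worth, LVM can pin a logical volume to a particular physical volume, so a full migration away from LVM may not be necessary; a hedged sketch (volume-group, LV and device names here are hypothetical):

        # make the SSD a physical volume and add it to the existing group
        pvcreate /dev/sdd1
        vgextend vg0 /dev/sdd1
        # move only the root LV onto the SSD; pvmove accepts a destination PV
        pvmove -n root /dev/sda2 /dev/sdd1

    Done this way, the root filesystem ends up on the SSD while the media and backup volumes stay on the spinning disks.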


  • Sparing level on HP EVA 4000

    - by Samuel
    One of the disks of our EVA4000 died today. The disk group (all volumes vraid5 with sparing level 1 and almost no space left for more volumes; 1TiB drives) is being rebuilt with "spare space" right now, and it will take at least 15 hours to do the leveling/rebuilding. We can't get a new disk until Friday.

    So, the question is: what would happen if another disk dies before the leveling completes? Would we lose data? And after that, how many additional disks could die before losing data, 1 or 2? In "usual" RAID we would be vulnerable to data loss while the rebuild takes place, but in this case the space reserved for sparing is two times the size of the biggest disk, so at the very least the effect should be the same as having two spares. Thanks in advance.

    Update: I have found some interesting threads about this question but still can't answer it, so I'm starting a bounty.

        http://blog.thestoragearchitect.com/2008/10/27/understanding-eva/
        http://www.experts-exchange.com/Storage/Storage_Technology/Q_25548177.html (Experts Exchange question, via Google)


  • mysqld crashes on any statement

    - by ??iu
    I restarted my slave to change configuration settings, to skip reverse hostname lookups on connecting and to enable the slow query log. I edited /etc/my.cnf making only these changes, then restarted mysqld with /etc/init.d/mysql restart. All appeared to be well, but when I connect to mysqld remotely or locally it connects okay; the slight problem is that mysqld crashes whenever you try to issue any kind of statement. The client looks like:

        Reading table information for completion of table and column names
        You can turn off this feature to get a quicker startup with -A
        Welcome to the MySQL monitor. Commands end with ; or \g.
        Your MySQL connection id is 3
        Server version: 5.1.31-1ubuntu2-log
        Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

        mysql> show tables;
        ERROR 2006 (HY000): MySQL server has gone away
        No connection. Trying to reconnect...
        Connection id: 1
        Current database: mydb
        ERROR 2006 (HY000): MySQL server has gone away
        No connection. Trying to reconnect...
        ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61)
        ERROR: Can't connect to the server
        ERROR 2006 (HY000): MySQL server has gone away
        No connection. Trying to reconnect...
        ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61)
        ERROR: Can't connect to the server
        ERROR 2006 (HY000): MySQL server has gone away
        Bus error

    The mysqld error log looks like:

        101210 16:35:51 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY).
        101210 16:35:51 InnoDB: Assertion failure in thread 140245598570832 in file handler/ha_innodb.cc line 2595
        InnoDB: Failing assertion: error == DB_SUCCESS
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        InnoDB: about forcing recovery.
        101210 16:35:51 - mysqld got signal 6 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.

        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=3
        max_threads=600
        threads_connected=3
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        thd: 0x18209220
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 0x7f8d791580d0 thread_stack 0x20000
        /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
        /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
        /lib/libpthread.so.0 [0x7f902a76a080]
        /lib/libc.so.6(gsignal+0x35) [0x7f90291f8fb5]
        /lib/libc.so.6(abort+0x183) [0x7f90291fabc3]
        /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b]
        /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
        /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
        /usr/sbin/mysqld [0x63f281]
        /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
        /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
        /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
        /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
        /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
        /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
        /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
        /lib/libpthread.so.0 [0x7f902a7623ba]
        /lib/libc.so.6(clone+0x6d) [0x7f90292abfcd]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort...
        thd->query at 0x18213c70 = thd->thread_id=3
        thd->killed=NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.

        101210 16:35:51 mysqld_safe Number of processes running now: 0
        101210 16:35:51 mysqld_safe mysqld restarted
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        101210 16:35:54 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        101210 16:35:56 InnoDB: Started; log sequence number 456 143528628
        101210 16:35:56 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode.
        101210 16:35:56 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
        101210 16:35:56 [Note] Event Scheduler: Loaded 0 events
        101210 16:35:56 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)
        101210 16:36:11 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY).
        101210 16:36:11 InnoDB: Assertion failure in thread 139955151501648 in file handler/ha_innodb.cc line 2595
        InnoDB: Failing assertion: error == DB_SUCCESS
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        InnoDB: about forcing recovery.
        101210 16:36:11 - mysqld got signal 6 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.

        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=1
        max_threads=600
        threads_connected=1
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        thd: 0x18588720
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 0x7f49d916f0d0 thread_stack 0x20000
        /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
        /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
        /lib/libpthread.so.0 [0x7f4c8a73f080]
        /lib/libc.so.6(gsignal+0x35) [0x7f4c891cdfb5]
        /lib/libc.so.6(abort+0x183) [0x7f4c891cfbc3]
        /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b]
        /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
        /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
        /usr/sbin/mysqld [0x63f281]
        /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
        /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
        /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
        /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
        /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
        /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
        /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
        /lib/libpthread.so.0 [0x7f4c8a7373ba]
        /lib/libc.so.6(clone+0x6d) [0x7f4c89280fcd]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort...
        thd->query at 0x18599950 = thd->thread_id=1
        thd->killed=NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.

        101210 16:36:11 mysqld_safe Number of processes running now: 0
        101210 16:36:11 mysqld_safe mysqld restarted

    The config is:

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        innodb_file_per_table
        innodb_buffer_pool_size=10G
        innodb_log_buffer_size=4M
        innodb_flush_log_at_trx_commit=2
        innodb_thread_concurrency=8
        skip-slave-start
        server-id=3
        #
        # * IMPORTANT
        # If you make changes to these settings and your system uses apparmor, you may
        # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
        #
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /DB2/mysql
        tmpdir = /tmp
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #bind-address = 127.0.0.1
        #
        # * Fine Tuning
        #
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 128K
        thread_cache_size = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        max_connections = 600
        #table_cache = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size = 32M
        #
        skip-federated
        slow-query-log
        skip-name-resolve

    Update: I followed the instructions at http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html and set innodb_force_recovery = 4. The logs show a different error, but the behavior is still the same:

        101210 19:14:15 mysqld_safe mysqld restarted
        101210 19:14:19 InnoDB: Started; log sequence number 456 143528628
        InnoDB: !!! innodb_force_recovery is set to 4 !!!
        101210 19:14:19 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode.
        101210 19:14:19 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
        101210 19:14:19 [Note] Event Scheduler: Loaded 0 events
        101210 19:14:19 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)
        101210 19:14:32 InnoDB: error: space object of table mydb/__twitter_friend,
        InnoDB: space id 1602 did not exist in memory. Retrying an open.
        101210 19:14:32 InnoDB: error: space object of table mydb/access_request,
        InnoDB: space id 1318 did not exist in memory. Retrying an open.
        101210 19:14:32 InnoDB: error: space object of table mydb/activity,
        InnoDB: space id 1595 did not exist in memory. Retrying an open.
        101210 19:14:32 - mysqld got signal 11 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.

        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=1
        max_threads=600
        threads_connected=1
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        thd: 0x1753c070
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 0x7f7a0b5800d0 thread_stack 0x20000
        /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
        /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
        /lib/libpthread.so.0 [0x7f7cbc350080]
        /usr/sbin/mysqld(ha_innobase::innobase_get_index(unsigned int)+0x46) [0x77c516]
        /usr/sbin/mysqld(ha_innobase::innobase_initialize_autoinc()+0x40) [0x77c640]
        /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x3f3) [0x781f23]
        /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
        /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
        /usr/sbin/mysqld [0x63f281]
        /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
        /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
        /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
        /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
        /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
        /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
        /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
        /lib/libpthread.so.0 [0x7f7cbc3483ba]
        /lib/libc.so.6(clone+0x6d) [0x7f7cbae91fcd]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort...
        thd->query at 0x1754d690 = thd->thread_id=1
        thd->killed=NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.
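
    Given that even innodb_force_recovery = 4 still crashes on table open, the usual next step is to raise the recovery level just long enough to salvage a dump and then rebuild the tablespace; a hedged sketch (the paths and the chosen level are assumptions, and levels above 4 are riskier, so back up the raw datadir first):

        # in /etc/my.cnf under [mysqld], raise the level:  innodb_force_recovery = 6
        /etc/init.d/mysql restart

        # salvage everything while the server is up
        mysqldump --all-databases > /root/salvage.sql

        # then stop mysqld, move ibdata*/ib_logfile* and the .ibd files aside,
        # remove innodb_force_recovery from my.cnf, restart, and reload the dump
        mysql < /root/salvage.sql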


  • Missing MB on a GPT partitioned SSD

    - by pisswillis
    I recently installed Arch Linux on an Intel 40GB SSD. I used GPT for partitioning (via GNU parted) and created the following partitions:

        /dev/sda1 :   1MB, no FS, flag=bios_grub
        /dev/sda2 :  30MB, /boot, ext2, flag=boot
        /dev/sda3 :  20GB, /home, ext4
        /dev/sda4 : ~20GB, /, ext4

    After struggling to install grub2 from the live CD environment (which I finally did via grub-install /dev/sda --root-directory=/mnt/ --no-floppy --force) I got a working system. However, when I was inspecting disk usage with df I noticed that my home partition had around 170MB of used space on it. This surprised me, because the only things on /home were one user's .bashrc, .bash_history, and .lesshst. du confirmed that there were only a few KB of space being used on /home.

    Why does df report approximately 170MB used when du does not? Is this space gone forever, or can I regain it by repartitioning and/or reinstalling?

    When I installed grub2 it said something along the lines of "your embed area is too small", and that I could "use BLOCKLISTS, but BLOCKLISTS are UNRELIABLE". In the end the only way I could get a system booting from the SSD was to use blocklists via the grub-install --force flag. Is this related to the mysterious missing 170MB? Thanks
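
    On a freshly made ext4 filesystem, df's "used" figure includes the journal and other filesystem metadata that du can never see (a default journal is often around 128MB), which plausibly accounts for most of the 170MB; a hedged sketch for inspecting that overhead (device name taken from the question):

        # show the journal inode and the reserved block count
        tune2fs -l /dev/sda3 | egrep -i 'journal|reserved block'

        # separately, ext4 reserves 5% of blocks for root by default; shrinking it
        # to 1% reclaims "available" space (a judgment call, not a fix for "used")
        tune2fs -m 1 /dev/sda3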


  • Unexpected(?) high 'wasted' memory in memcached

    - by Nanne
    Looking at our memcached stats, I think I have found an issue I was not aware of before. It seems that we have a strangely high amount of wasted space. I checked with phpmemcacheadmin for a change, and found this staring at me: [phpmemcacheadmin screenshot, not reproduced here; it reports roughly 76% wasted space]

    Now I was under the impression that the worst-case scenario would be 50% waste, although I am the first to admit not knowing all the details. I have read, amongst others, a page on slab allocation which is indeed somewhat old, but so is our version of memcached. I believe I understand how the system works, but I have a hard time understanding how we could get to 76% wasted space. The eviction rate that phpmemcacheadmin shows is 2 ev/s, so there is some problem here.

    The primary question is: what can I do to fix this? I could throw more memory at it (there is some extra available, I think); maybe I should fiddle with the slab config (is that even possible with this version?); maybe there are other options? Upgrading the memcached version is not a quickly available option.

    The secondary question, out of curiosity, is of course whether a rate of 75% (and rising) wasted space is expected, and if so, why.

    System (this is currently not something I can do anything about; I know the memcached version isn't the newest, but these are the cards I've been dealt):

        Memcached 1.4.5
        Apache 2.2.17
        PHP 5.3.5
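
    Slab waste usually means item sizes fall awkwardly between chunk sizes, so it helps to read the per-slab stats directly and, if needed, restart memcached with a finer chunk growth factor; a hedged sketch (host, port, and the chosen factor are assumptions, and -f/-n are startup-only options in 1.4.x):

        # per-slab chunk sizes, used vs. total chunks, and item counts
        printf 'stats slabs\r\nquit\r\n' | nc localhost 11211
        printf 'stats items\r\nquit\r\n' | nc localhost 11211

        # restart with a gentler growth factor and a smaller minimum item size
        memcached -d -m 1024 -f 1.10 -n 32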


  • How do I batch-downsize images on linux, while keeping small images small?

    - by Gabriel
    I have a whole lot of photos and it's time to clean up the mess and free some disk space. I know mogrify is great for batch-resizing things down. The problem is, in some directories I have small images mixed with the big ones. I'd like to batch-downsize all the big ones but not upsize the small ones.

    As an example, I have a directory with tens of multi-megabyte pictures at around 3000x2000. Some of them I have already downsized so I could email them; they may be 1024x768. I'd like to downsize the big ones to 1600x1200, a disk-space-to-quality tradeoff I like. But then, with mogrify or convert, the small ones will be upsized, which would be a waste of disk space. I found some tricky ways to use identify with cut and some scripting to filter the small pics out and mogrify the others, but man, there's got to be a way to tell mogrify not to upsize my pics... How? Is there some other tool better suited?
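
    ImageMagick's geometry syntax can express exactly this: appending > to the size makes the resize apply only to images larger than that geometry (quote it so the shell doesn't treat it as a redirect). A minimal sketch:

        # shrink anything bigger than 1600x1200; smaller images are left untouched
        mogrify -resize '1600x1200>' *.jpg

    The same '1600x1200>' geometry works with convert as well, so no identify/cut scripting is needed.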


  • Disk doesn't contain a valid partition table

    - by Jeevan Dongre
    I was running an m1.small Ubuntu EC2 instance. I was running out of disk space, so I upgraded my instance to medium. When I upgraded I actually got 429.5 GB of space, and after that I added a 10 GB volume too. When I run "sudo fdisk -l" I get these results:

        Disk /dev/sda1: 8589 MB, 8589934592 bytes
        255 heads, 63 sectors/track, 1044 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/sda1 doesn't contain a valid partition table

        Disk /dev/sda2: 429.5 GB, 429461078016 bytes
        255 heads, 63 sectors/track, 52212 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/sda2 doesn't contain a valid partition table

        Disk /dev/sdf: 10.7 GB, 10737418240 bytes
        255 heads, 63 sectors/track, 1305 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

    sda1 is the primary partition and sda2 is what I got when upgrading my system to medium. But the problem persists: I am not able to pull the code from git, and it gives me this error:

        remote: Counting objects: 409, done.
        remote: Compressing objects: 100% (236/236), done.
        fatal: write error: No space left on device
        fatal: index-pack failed
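
    On EC2, /dev/sda1, /dev/sda2 and attached EBS volumes are typically exposed as bare block devices holding a filesystem directly, so "doesn't contain a valid partition table" is normal; the extra space still has to be formatted and mounted before it relieves the full 8 GB root device. A hedged sketch (device names from the question; check "mount" and "df -h" first, because mkfs is destructive if the device already holds data):

        # the big ephemeral device, if it is not already formatted and mounted
        sudo mkfs.ext4 /dev/sda2
        sudo mkdir -p /mnt/big && sudo mount /dev/sda2 /mnt/big

        # same for the new 10 GB EBS volume
        sudo mkfs.ext4 /dev/sdf
        sudo mkdir -p /mnt/extra && sudo mount /dev/sdf /mnt/extra

        df -h    # confirm where the free space actually lives, then move the repo there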


  • Live Messenger SimilarityTable2 file

    - by adrianbanks
    I am trying to free up some space on my laptop's hard disk and am using a tool (SpaceMonger) that shows me a treemap of the whole disk. The problem I have comes from Live Messenger's SimilarityTable2 file. I have no idea what it is for, but I know that it is a sparse file, meaning that it shows as taking up 8GB of disk space but actually only takes up 132KB on disk. The problem is that because SpaceMonger thinks this file is 8GB, it swamps the other files and takes up most of the treemap, making it hard to see the files that really are large.

    Is this file safe to delete? If not, how do I make its actual size on disk match its reserved size? If that's not possible, how can I make SpaceMonger (or another treemap tool) use the real size of the file and not the reserved size?

    EDIT: I've just realised that I have some NTFS junctions set up, meaning that the same set of directories appears twice. Is there any way to stop this happening as well?


  • How to start MSSQL Server with corrupt model db

    - by Jordan McGuigan
    After moving some databases around (restoring, deleting, etc.) we experienced an issue creating new databases. Specifically, when trying to create a new database, MSSQL Server failed because "The database 'model' is marked RESTORING and is in a state that does not allow recovery to be run".

    As some online solutions suggested, we tried to stop and start the MSSQL service. The service would not restart because "Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive" (FYI: the drive has 100 GB of free space). We tried restarting the machine MSSQL Server is running on; when the server came back online, we received the same error. We have tried deleting tempdb.mdf and restoring the model database from the templates folder, but neither of these solved the issue.

    We are unable to connect to the database, even in single-user mode. Many of the online solutions have us running SQL commands against the server, but we cannot connect (even in single-user mode) to run them. Specific error messages:

        Database 'model' cannot be opened. It is in the middle of a restore. (Microsoft SQL Server, Error: 927)

        The SQL Server (MSSQLSERVER) service is starting.
        The SQL Server (MSSQLSERVER) service could not be started.
        A service specific error occurred: 1814.

    We need the server up and running again ASAP.
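
    Since tempdb is rebuilt from model at every startup, a model database stuck in RESTORING will keep the service from starting (error 1814 is "could not create tempdb"). A hedged command-line sketch of the usual escape route, starting SQL Server with minimal configuration plus trace flag 3608 so only master is recovered, then recovering model (run from an elevated prompt; /f means single-user, so stop anything else that might grab the connection):

        net stop MSSQLSERVER
        net start MSSQLSERVER /f /T3608

        sqlcmd -E -Q "RESTORE DATABASE model WITH RECOVERY"

        net stop MSSQLSERVER
        net start MSSQLSERVER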


  • Using multiple USB webcams in Linux

    - by rachelderp
    Running more than one USB webcam in Debian/Linux results in the following error:

        libv4l2: error turning on stream: No space left on device
        VIDIOC_STREAMON: No space left on device

    What initially seemed to be a programming issue in OpenCV turned into a quest for a mysterious hardware/software problem after the same errors were produced by running cheese and xawtv. Apparently it's caused by webcams requesting all the available bandwidth on the USB host controller. With that in mind, I decided to run wireshark and capinfos to see just how much bandwidth a single camera used:

        4 megabits per second at 320x240
        14 megabits per second at 640x480
        32 megabits per second at 1920x1080

    Interesting! That might explain why two cameras at 320x240 work but any higher resolution fails. It's as if my USB controller is only operating at USB 1 speeds, yet lsusb shows both webcams belonging to a device which supposedly supports 480 megabits per second.

    One proposed solution is forcing the webcams to calculate their bandwidth usage instead of requesting their maximum, by running the following commands:

        sudo rmmod uvcvideo
        sudo modprobe uvcvideo quirks=128

    Unfortunately that made no difference, so I decided to try another solution. A post on Stack Overflow suggested telling my webcams to use a lower FPS or a compressed video format like MJPEG, but after running v4lctl list it doesn't appear that either of my webcams supports changing their video mode. And that's where I'm stuck. Why would two webcams operating well below the maximum speed of USB 2 produce this error?

    PS: It's not a disk space issue; df displays no change when the webcams are started.
    PPS: If it makes a difference, here's the output of lsusb
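
    For cameras whose drivers do expose format and frame-rate controls, v4l2-ctl (from the v4l-utils package) is a more direct tool than v4lctl for requesting MJPEG and a lower frame rate, which cuts the isochronous bandwidth each camera reserves; a hedged sketch (the cameras may simply not support MJPG, as the question suggests):

        # list what each camera actually supports
        v4l2-ctl -d /dev/video0 --list-formats-ext

        # ask for compressed frames at a modest rate
        v4l2-ctl -d /dev/video0 --set-fmt-video=width=640,height=480,pixelformat=MJPG
        v4l2-ctl -d /dev/video0 --set-parm=15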


  • Killing a process that has been running too long or is using too much memory

    - by Vedant Terkar
    I am not sure whether this question belongs on Stack Overflow or here, but here we go. I am designing an online C compiler, which will compile and invoke the program if compilation succeeds. Here is the code I am using for that:

        $str=shell_exec("gcc path/to/file.c -o path/to/file.exe 2>&1");
        if(file_exists("path/to/file.exe")){
            $res=shell_exec("path/to/file.exe <inputfile 2>&1");
            echo $res;
        }

    This seems to work fine with simple program files, but when file.c (the source code entered) contains an infinite loop, this script crashes the server and uses a lot of memory and time. So here are my questions:

        1. Is there any way to detect how long the process file.exe has been running?
        2. How much memory is being used by that process, file.exe?
        3. Is there any way to kill the process file.exe if its space and time utilization increase beyond a certain limit?

    That means: if we allocate at most 2.5 sec of time and 40 MB of space for the process file.exe, and either of those two constraints is violated, we should display an appropriate error message to the client. Is that possible? I am using WAMP (Windows 7).
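
    WAMP on Windows 7 doesn't ship these tools, but the underlying technique is to let the operating system enforce the limits rather than PHP; on a Linux host the whole run can be wrapped in coreutils timeout plus ulimit, a hedged sketch using the limits from the question:

        # kill after 2.5 seconds of wall time; cap the address space at 40 MB (ulimit -v is in KB)
        bash -c 'ulimit -v 40960; exec timeout 2.5s ./file.exe < inputfile' 2>&1
        # timeout exits with status 124 when the time limit was hit
        if [ $? -eq 124 ]; then echo "Time limit exceeded"; fi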


  • cannot make ubuntu 64-bit v12.04 install work

    - by honestann
    I decided it was time to update my Ubuntu (single boot) computer from 64-bit v10.04 to 64-bit v12.04. Unfortunately, for some reason (or reasons), I just can't make it work. Note that I am attempting a fresh install of 64-bit v12.04 onto a new 3TB hard disk, not an upgrade of the 1TB hard disk that contains my working 64-bit v10.04 installation. To perform the attempted install of v12.04 I unplug the SATA cable from the 1TB drive and plug it into the 3TB drive (to avoid risking damage to my working v10.04 installation).

    I downloaded the Ubuntu 64-bit v12.04 install DVD ISO file (~1.6 GB) from the Ubuntu releases webpage and burned it onto a DVD. I have downloaded the DVD ISO file 3 times and burned 3 of these installation DVDs (twice with v10.04 and once with my winxp64 system), but none of them work. I run the "check disk" on the DVDs at the beginning of the installation process to assure the DVD is valid. When installation completes and the system boots the 3TB drive, it reports "unknown filesystem". After installation on the 250GB drives, the system boots up fine. During every install I plug the same SATA cable (sda) into only one disk drive (the 3TB or one of the 250GB drives) and leave the other disk drives unconnected (for simplicity).

    It is my understanding that 64-bit Ubuntu (and 64-bit Linux in general) has no problem with 3TB disk drives. In the BIOS I have tried having EFI set to "enabled" and "auto" with no apparent difference (no success). I never bothered setting the BIOS to "non-EFI". I have tried partitioning the drive in a few ways to see if that makes a difference, but so far it has not mattered. Typically I manually create partitions something like this:

        8GB    /boot  ext4
        8GB    swap
        3TB    /      ext4

    But I've also tried the following, just in case it matters:

        8GB    boot   efi
        8GB    swap
        8GB    /boot  ext4
        3TB    /      ext4

    Note: In the partition dialog I specify bootup on the same drive I am partitioning and installing Ubuntu v12.04 onto. It is a VERY DANGEROUS FACT that the default for this always comes up with the wrong drive (some other drive, generally the external drive). Unless I'm stupid or misunderstanding something, this is very wrong and very dangerous default behavior.

    Note: If I connect the SATA cable to the 1TB drive that has been my Ubuntu 64-bit v10.04 system drive for the past 2 years, it boots up and runs fine.

    I guess there must be a log file somewhere, and maybe it gives some hints as to what the problem is. I should be able to boot off the 1TB drive with the 3TB drive connected as a secondary (non-boot) drive and get the log file, assuming there is one and someone tells me the name (and where to find it if the name is very generic).

    After installation on the 3TB drive completes and the system reboots, the following prints out on a black screen:

        Loading Operating System ...
        Boot from CD/DVD :
        Boot from CD/DVD :
        error: unknown filesystem
        grub rescue>

    Note: I have two DVD burners in the system, hence the duplicate line above.

    Note: I install and boot 64-bit Ubuntu v12.04 on both of my 250GB drives in this same system, but still cannot make the 3TB drive boot. Sigh. Any ideas?

        ==========
        motherboard == gigabyte 990FXA-UD7
        CPU == AMD FX-8150 8-core bulldozer @ 3.6 GHz
        RAM == 8GB of DDR3 in 2 sticks (matched pair)
        HDD == seagate 3TB SATA3 @ 7200 rpm (new install 64-bit v12.04 FAILS)
        HDD == seagate 1TB SATA3 @ 7200 rpm (64-bit v10.04 WORKS for two years)
        HDD == seagate 250GB SATA2 @ 7200 rpm (new install 64-bit v12.04 WORKS)
        HDD == seagate 250GB SATA2 @ 7200 rpm (new install 64-bit v12.04 WORKS)
        GPU == nvidia GTX-285
        ??? == no overclocking or other funky business
        USB == external seagate 2TB HDD for making backups
        DVD == one bluray burner (SATA)
        DVD == one DVD burner (SATA)

    64-bit Ubuntu v10.04 has booted and run fine on the seagate 1TB drive for 2 years.
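
    A 3TB disk must use GPT, and GRUB booting a GPT disk from a legacy BIOS needs a small unformatted bios_grub partition to hold its core image; without one, grub-install falls back to unreliable blocklists, which matches the "unknown filesystem / grub rescue" symptom above. A hedged parted sketch (device name and sizes are assumptions; adapt before running):

        parted /dev/sda -- mklabel gpt
        parted /dev/sda -- mkpart grub 1MiB 3MiB
        parted /dev/sda -- set 1 bios_grub on          # tiny, raw, no filesystem
        parted /dev/sda -- mkpart boot ext4 3MiB 515MiB
        parted /dev/sda -- mkpart swap linux-swap 515MiB 8GiB
        parted /dev/sda -- mkpart root ext4 8GiB 100%
        grub-install /dev/sda                          # should no longer need --force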


  • help: cannot make ubuntu 64-bit v12.04 install work

    - by honestann
    I decided it was time to update my Ubuntu (single boot) computer from 64-bit v10.04 to 64-bit v12.04. Unfortunately, for some reason (or reasons), I just can't make it work. Note that I am attempting a fresh install of 64-bit v12.04 onto a new 3TB hard disk, not an upgrade of the 1TB hard disk that has contained my 64-bit v10.04 installation. To perform the attempted install of v12.04 I unplug the SATA cable from the 1TB drive and plug it into the 3TB drive (to avoid risking damage to my working v10.04 installation).

    I downloaded the Ubuntu 64-bit v12.04 install DVD ISO file (~1.6 GB) from the Ubuntu releases webpage and burned it onto a DVD. I have downloaded the DVD ISO file 3 times and burned 3 of these installation DVDs (twice with v10.04 and once with my winxp64 system), but none of them work. I run the "check disk" on the DVDs at the beginning of the installation process to assure the DVD is valid.

    I also tried to install on two older 250GB Seagate drives in the same computer. During every attempt I plug the same SATA cable (sda) into only one disk drive (the 3TB or one of the 250GB drives) and leave the other disk drives unconnected (for simplicity). Installation takes about 30 minutes on the 250GB drives and about 60 minutes on the 3TB drive; I'm not sure why. When I install on the 250GB drives, the install process finishes and the computer reboots (after the install DVD is removed), but I get a grub error 15.

    It is my understanding that 64-bit Ubuntu (and 64-bit Linux in general) has no problem with 3TB disk drives. In the BIOS I have tried having EFI set to "enabled" and "auto" with no apparent difference (no success). I have tried partitioning the drive in a few ways to see if that makes a difference, but so far it has not mattered. Typically I manually create partitions something like this:

        8GB    swap
        8GB    /boot  ext4
        3TB    /      ext4

    But I've also tried the following, just in case it matters:

        100MB  boot   efi
        8GB    swap
        8GB    /boot  ext4
        3TB    /      ext4

    Note: In the partition dialog I specify bootup on the same drive I am partitioning and installing Ubuntu v12.04 onto. It is a VERY DANGEROUS FACT that the default for this always comes up with the wrong drive (some other drive, generally the external drive). Unless I'm stupid or misunderstanding something, this is very wrong and very dangerous default behavior.

    Note: If I connect the SATA cable to the 1TB drive that has been my Ubuntu 64-bit v10.04 system drive for the past 2 years, it boots up and runs fine.

    I guess there must be a log file somewhere, and maybe it gives some hints as to what the problem is. I should be able to boot off the 1TB drive with the 3TB drive connected as a secondary (non-boot) drive and get the log file, assuming there is one and someone tells me the name (and where to find it if the name is very generic).

    After installation on the 3TB drive completes and the system reboots, the following prints out on a black screen:

        Loading Operating System ...
        Boot from CD/DVD :
        Boot from CD/DVD :
        error: unknown filesystem
        grub rescue

    Note: I have two DVD burners in the system, hence the duplicate line above. The same install and reboot on the 250GB drives generates "grub error 15". Sigh. Any ideas?

        ==========
        motherboard == gigabyte 990FXA-UD7
        CPU == AMD FX-8150 8-core bulldozer @ 3.6 GHz
        RAM == 8GB of DDR3 in 2 sticks (matched pair)
        HDD == seagate 3TB SATA3 @ 7200 rpm (new install 64-bit v12.04)
        HDD == seagate 1TB SATA3 @ 7200 rpm (current install 64-bit v10.04)
        GPU == nvidia GTX-285
        ??? == no overclocking or other funky business
        USB == external seagate 2TB HDD for making backups
        DVD == one bluray burner (SATA)
        DVD == one DVD burner (SATA)

    The current Ubuntu 64-bit v10.04 system boots and runs fine on a Seagate 1TB.


  • Inappropriate Updates?

    - by Tony Davis
    A recent Simple-Talk article by Kathi Kellenberger dissected the fastest SQL solution, submitted by Peter Larsson as part of Phil Factor's SQL Speed Phreak challenge, to the classic "running total" problem. In its analysis of the code, the article re-ignited a heated debate regarding the techniques that should, and should not, be deemed acceptable in your search for fast SQL code.

    Peter's code for the running total calculation uses a variation of a somewhat contentious technique, sometimes referred to as a "quirky update":

        SET @Subscribers = Subscribers = @Subscribers + PeopleJoined - PeopleLeft

    This form of the UPDATE statement, @variable = column = expression, is documented, and it allows you to set a variable to the value returned by the expression. Microsoft does not guarantee the order in which rows are updated by this technique because, in relational theory, a table doesn't have a natural order to its rows and the UPDATE statement has no means of specifying one. Traditionally, in cases where a specific order is required, such as for running aggregate calculations, programmers who used the technique relied on the fact that the UPDATE statement, without a WHERE clause, is executed in the order imposed by the clustered index, or in heap order if there isn't one. Peter wasn't satisfied with this, and so used the ingenious device of assuring the order of the UPDATE by the use of an "ordered CTE", based on an underlying temporary staging table (a heap). However, in either case, the ordering is still not guaranteed and, in addition, would be broken under conditions of parallelism or partitioning.

    Many argue, with validity, that this reliance on a given order where none can ever be guaranteed is an abuse of basic relational principles, and so is a bad practice; perhaps even irresponsible. More importantly, Microsoft doesn't wish to support the technique and offers no guarantee that it will always work. If you put it into production and it breaks in a later version, you can't file a bug. As such, many believe that the technique should never be tolerated in a production system, under any circumstances.

    Is this attitude justified? After all, both forms of the technique, using a clustered index to guarantee the order or using an ordered CTE, have been tested rigorously and are proven to be robust; although not guaranteed by Microsoft, the ordering is reliable, provided none of the conditions that are known to break it are violated. In Peter's particular case, the technique is being applied to a temporary table, where the developer has full control of the data ordering and indexing, and knows that the table will never be subject to parallelism or partitioning. It might be argued that, in such circumstances, the technique is not really "quirky" at all, and that to ban it from your systems would serve no real purpose other than to deprive yourself of a reliable technique whose uses extend well beyond running total calculations.

    Of course, it is doubly important that such a technique, including its unsupported status and the assumptions that underpin its success, is fully and clearly documented, preferably even when posting it online in a competition or forum post. Ultimately, however, this technique has been available to programmers throughout the time Sybase and SQL Server have existed, and so cannot be lightly cast aside, even if one sympathises with Microsoft for the awkwardness of maintaining an archaic way of doing updates. After all, a table hint could easily be devised that, if specified in the WITH (<Table_Hint_Limited>) clause, could be used to request that the database engine do the update in the conventional order. Then perhaps everyone would be satisfied.

    Cheers, Tony.
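
    To make the technique concrete, here is a minimal, hedged sketch of a quirky update over a clustered staging table, wrapped in sqlcmd to stay in the shell idiom used elsewhere on this page (table and column names are hypothetical, it assumes SQL Server 2008+ syntax, and, as the piece stresses, the row order is relied upon rather than guaranteed):

        sqlcmd -E -d tempdb -Q "
        CREATE TABLE #s (MonthNo int PRIMARY KEY CLUSTERED,
                         PeopleJoined int, PeopleLeft int, Subscribers int);
        INSERT #s VALUES (1, 100, 10, 0), (2, 50, 20, 0), (3, 30, 5, 0);
        DECLARE @Subscribers int = 0;
        -- the quirky update: rows are visited in clustered-index order (unsupported!)
        UPDATE #s SET @Subscribers = Subscribers
                                   = @Subscribers + PeopleJoined - PeopleLeft;
        SELECT MonthNo, Subscribers FROM #s ORDER BY MonthNo;"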


  • SQL SERVER – Partition Parallelism Support in expressor 3.6

    - by pinaldave
    I am very excited to learn that there is a new version of expressor's data integration platform coming out in March of this year. It will be version 3.6, and I look forward to using it and telling everyone about it. Let me describe a little bit more about what will be so great in expressor 3.6:

        Greatly enhanced user interface
        Parallel processing
        Bulk artifact upgrading

    The User Interface

    First let me cover the most obvious enhancements. The expressor Studio user interface (UI) has had some significant work done. Kudos to the expressor engineering team; the entire UI is a visual masterpiece that is very responsive and intuitive. The improvements are more than just eye candy; they provide significant productivity gains when developing expressor Dataflows:

        Operator shape icons now include a description that identifies the function of each operator, instead of having to guess at the function from the icon.
        Operator shapes and highlighting depict the current function and status: disabled, enabled, complete, incomplete, and error. Each status displays an appropriate message in the message panel with correction suggestions.
        Floating or docking property panels provide descriptive tool tips for each property, as well as auto-resizing when adjusting the canvas, without having to search Help or scroll around to get access to the property.
        Progress and status indicators let you know when an operation is working.
        A "no limit" canvas with snap-to-grid allows automatic sizing and accurate positioning when you have numerous operators in the Dataflow.
        The inline toolbar offers quick access to pan, zoom, fit and overview functions.
        Selecting multiple artifacts with a right-click context menu allows you to manage your workspace more efficiently.

    Partitioning and Parallel Processing

    Partitioning allows each operator to process multiple subsets of records in parallel, as opposed to processing all records that flow through that operator as a single sequential set. This capability allows the user to configure the expressor Dataflow to run in a way that most efficiently utilizes the resources of the hardware where the Dataflow is running. Partitions can exist in most individual operators. Using partitions increases the speed of an expressor data integration application, therefore improving performance and load times. With the expressor 3.6 Enterprise Edition, expressor simplifies enabling parallel processing by adding intuitive partition settings that are easy to configure.

    Bulk Artifact Upgrading

    Bulk artifact upgrading sounds a bit intimidating, but it actually is not, and it is a welcome addition to expressor Studio. In past releases, users were prompted to confirm that they wanted to upgrade their individual artifacts only when opened. This was a cumbersome and repetitive process. Now with bulk artifact upgrading, a user can easily select an artifact or a group of artifacts to upgrade all at once.

    As you can see, there are many new features and upgrade options that will prove to make expressor Studio quicker and more efficient. I hope I'm not the only one who is excited about all these new upgrades, and I hope you will try expressor and share your experience with me.

    Reference: Pinal Dave (http://blog.sqlauthority.com)


  • build error with boost spirit grammar (boost 1.43 and g++ 4.4.1)

    - by lurscher
    I'm having issues getting a small spirit/qi grammar to compile. The build stack trace is ugly enough not to make any sense to me (despite some assertion_failed I could notice in there, but that didn't bring much information).

    The input grammar header, inputGrammar.h:

        #include <boost/config/warning_disable.hpp>
        #include <boost/spirit/include/qi.hpp>
        #include <boost/spirit/include/phoenix_core.hpp>
        #include <boost/spirit/include/phoenix_operator.hpp>
        #include <boost/spirit/include/phoenix_fusion.hpp>
        #include <boost/spirit/include/phoenix_stl.hpp>
        #include <boost/fusion/include/adapt_struct.hpp>
        #include <boost/variant/recursive_variant.hpp>
        #include <boost/foreach.hpp>

        #include <iostream>
        #include <fstream>
        #include <string>
        #include <vector>

        namespace sp = boost::spirit;
        namespace qi = boost::spirit::qi;
        using namespace boost::spirit::ascii;
        //using namespace boost::spirit::arg_names;
        namespace fusion = boost::fusion;
        namespace phoenix = boost::phoenix;
        using phoenix::at_c;
        using phoenix::push_back;

        template< typename Iterator , typename ExpressionAST >
        struct InputGrammar : qi::grammar<Iterator, ExpressionAST(), space_type> {
            InputGrammar() : InputGrammar::base_type( block ) {
                tag = sp::lexeme[+(alpha) [sp::_val += sp::_1]]; //[+(char_ - '<') [_val += _1]];
                block = sp::lit("block") [ at_c<0>(sp::_val) = sp::_1]
                        >> "(" >> *instruction[ push_back( at_c<1>(sp::_val) , sp::_1 ) ] >> ")";
                command = tag [ at_c<0>(sp::_val) = sp::_1]
                        >> "(" >> *instruction [ push_back( at_c<1>(sp::_val) , sp::_1 )] >> ")";
                instruction = ( command | tag ) [sp::_val = sp::_1];
            }

            qi::rule< Iterator , std::string() , space_type > tag;
            qi::rule< Iterator , ExpressionAST() , space_type > block;
            qi::rule< Iterator , ExpressionAST() , space_type > function_def;
            qi::rule< Iterator , ExpressionAST() , space_type > command;
            qi::rule< Iterator , ExpressionAST() , space_type > instruction;
        };

    The test build program (it seems the build fails at qi::phrase_parse; I am using boost 1.43 and g++ 4.4.1):

        #include <iostream>
        #include <string>
        #include <vector>
        using namespace std;

        //my grammar
        #include <InputGrammar.h>

        struct MockExpressionNode {
            std::string name;
            std::vector< MockExpressionNode > operands;

            typedef std::vector< MockExpressionNode >::iterator iterator;
            typedef std::vector< MockExpressionNode >::const_iterator const_iterator;

            iterator begin() { return operands.begin(); }
            const_iterator begin() const { return operands.begin(); }
            iterator end() { return operands.end(); }
            const_iterator end() const { return operands.end(); }

            bool is_leaf() const { return ( operands.begin() == operands.end() ); }
        };

        BOOST_FUSION_ADAPT_STRUCT(
            MockExpressionNode,
            (std::string, name)
            (std::vector<MockExpressionNode>, operands)
        )

        int const tabsize = 4;

        void tab(int indent) {
            for (int i = 0; i < indent; ++i) std::cout << ' ';
        }

        template< typename ExpressionNode >
        struct ExpressionNodePrinter {
            ExpressionNodePrinter(int indent = 0) : indent(indent) { }
            void operator()(ExpressionNode const& node) const {
                cout << " tag: " << node.name << endl;
                for (int i = 0 ; i < node.operands.size() ; i++ ) {
                    tab( indent );
                    cout << " arg " << i << ": ";
                    ExpressionNodePrinter(indent + 2)( node.operands[i] );
                    cout << endl;
                }
            }
            int indent;
        };

        int test() {
            MockExpressionNode root;
            InputGrammar< string::const_iterator , MockExpressionNode > g();

            std::string litA = "litA";
            std::string litB = "litB";
            std::string litC = "litC";
            std::string litD = "litD";
            std::string litE = "litE";
            std::string litF = "litF";

            std::string source = litA+"( "+litB+" ,"+litC+" , "+
                                 litD+" ( "+litE+", "+litF+" ) "+
                                 " )";

            string::const_iterator iter = source.begin();
            string::const_iterator end = source.end();
            bool r = qi::phrase_parse( iter , end , g , root , space );

            ExpressionNodePrinter< MockExpressionNode > np;
            np( root );
        };

        int main() {
            test();
        }

    Finally, the build error is the following:

/usr/bin/make -f nbproject/Makefile-linux_amd64_devel.mk SUBPROJECTS= .build-conf
make[1]: Entering directory `/home/mineq/NetBeansProjects/InputParserTests'
/usr/bin/make -f nbproject/Makefile-linux_amd64_devel.mk dist/linux_amd64_devel/GNU-Linux-x86/vpuinputparsertests
make[2]: Entering directory `/home/mineq/NetBeansProjects/InputParserTests'
mkdir -p build/linux_amd64_devel/GNU-Linux-x86
rm -f build/linux_amd64_devel/GNU-Linux-x86/tests_main.o.d
g++ `llvm-config --cxxflags` `pkg-config --cflags unittest-cpp` `pkg-config --cflags boost-1.43` `pkg-config --cflags boost-coroutines` -c -g -I../InputParser -MMD -MP -MF build/linux_amd64_devel/GNU-Linux-x86/tests_main.o.d -o build/linux_amd64_devel/GNU-Linux-x86/tests_main.o tests_main.cpp
from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/auto.hpp:16, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi.hpp:15, from /home/mineq/third_party/boost_1_43_0/boost/spirit/include/qi.hpp:16, from ../InputParser/InputGrammar.h:12, from tests_main.cpp:14:
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp: In function ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]’:
In file included from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/detail/parse_auto.hpp:14,
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:125: instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::spirit::ascii::space_type]’
tests_main.cpp:206: instantiated from here
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:99: error: no matching function for call to ‘assertion_failed(mpl_::failed************ (boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal,
boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]::error_invalid_expression::************)(InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode> (*)()))’ /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:125: instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::spirit::ascii::space_type]’ tests_main.cpp:206: instantiated from here /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:100: error: no matching function for call to ‘assertion_failed(mpl_::failed************ (boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]::error_invalid_expression::************)(MockExpressionNode))’ from /home/mineq/third_party/boost_1_43_0/boost/proto/proto.hpp:12, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/support/meta_compiler.hpp:17, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/meta_compiler.hpp:14, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/action/action.hpp:14, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/action.hpp:14, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi.hpp:14, from /home/mineq/third_party/boost_1_43_0/boost/spirit/include/qi.hpp:16, from ../InputParser/InputGrammar.h:12, from tests_main.cpp:14: /home/mineq/third_party/boost_1_43_0/boost/proto/detail/expr0.hpp: At global scope: /home/mineq/third_party/boost_1_43_0/boost/proto/proto_fwd.hpp: In instantiation of ‘boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>()>, 0l>’: In file included from /home/mineq/third_party/boost_1_43_0/boost/proto/core.hpp:13, /home/mineq/third_party/boost_1_43_0/boost/utility/enable_if.hpp:59: instantiated from ‘boost::disable_if<boost::proto::result_of::is_expr<boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>()>, 0l>, void>, void>’ /home/mineq/third_party/boost_1_43_0/boost/spirit/home/support/meta_compiler.hpp:200: instantiated from ‘boost::spirit::result_of::compile<boost::spirit::qi::domain, InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, 
std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), boost::fusion::unused_type, void>’
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:107:   instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]’
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:125:   instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::spirit::ascii::space_type]’
tests_main.cpp:206:   instantiated from here
/home/mineq/third_party/boost_1_43_0/boost/proto/detail/expr0.hpp:64: error: field ‘boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>()>, 0l>::child0’ invalidly declared function type

In file included from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/detail/parse_auto.hpp:14,
                 from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/auto.hpp:16,
                 from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi.hpp:15,
                 from /home/mineq/third_party/boost_1_43_0/boost/spirit/include/qi.hpp:16,
                 from ../InputParser/InputGrammar.h:12,
                 from tests_main.cpp:14:
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp: In function ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]’:
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:125:   instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::spirit::ascii::space_type]’
tests_main.cpp:206:   instantiated from here
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:107: error: request for member ‘parse’ in ‘boost::spirit::compile [with Domain = boost::spirit::qi::domain, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>()](((InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode> (&)())((InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode> (*)())expr)))’, which is of non-class type ‘InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>()’

In file included from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/auto/auto.hpp:19,
                 from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/auto.hpp:15,
                 from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi.hpp:15,
                 from /home/mineq/third_party/boost_1_43_0/boost/spirit/include/qi.hpp:16,
                 from ../InputParser/InputGrammar.h:12,
                 from tests_main.cpp:14:
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/skip_over.hpp: In function ‘void boost::spirit::qi::skip_over(Iterator&, const Iterator&, const T&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, T = boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]::skipper_type]’:
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:112:   instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]’
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:125:   instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::spirit::ascii::space_type]’
tests_main.cpp:206:   instantiated from here
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/skip_over.hpp:27: error: ‘const struct MockExpressionNode’ has no member named ‘parse’
make[2]: *** [build/linux_amd64_devel/GNU-Linux-x86/tests_main.o] Error 1
make[2]: Leaving directory `/home/mineq/NetBeansProjects/InputParserTests'
make[1]: *** [.build-conf] Error 2
make[1]: Leaving directory `/home/mineq/NetBeansProjects/InputParserTests'
make: *** [.build-impl] Error 2
BUILD FAILED (exit value 2, total time: 1m 48s)
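
Judging purely from the template arguments in these diagnostics, two things look suspicious: Expr = InputGrammar<...>() is a function type rather than a grammar object (the classic "most vexing parse" from declaring or passing the grammar with empty parentheses), and the skipper/attribute slots appear swapped (Skipper = MockExpressionNode, Attr = const ascii::space_type). A minimal sketch of what the call at tests_main.cpp:206 presumably intends; variable and function names here are illustrative, not from the original code:

    #include <string>
    #include <boost/spirit/include/qi.hpp>
    #include "../InputParser/InputGrammar.h"  // assumed to define InputGrammar; MockExpressionNode assumed visible

    bool parse_input(const std::string& input)
    {
        namespace qi = boost::spirit::qi;
        namespace ascii = boost::spirit::ascii;

        typedef std::string::const_iterator It;

        InputGrammar<It, MockExpressionNode> grammar;  // no "()": with parentheses this declares a function
        MockExpressionNode result;                     // the attribute to fill; it is not a skipper

        It first = input.begin(), last = input.end();

        // skipper (ascii::space) and attribute (result) go in this order
        return qi::phrase_parse(first, last, grammar, ascii::space, result);
    }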

    Read the article

  • How to find long-running transactions in WebSphere MQ?

    - by raistlin
    In a J2EE environment the MQ server log shows the following: Process(954584.5) User(mqm) Program(amqzmuc0) AMQ7469: Transactions rolled back to release log space. .... While increasing the log file size might be a temporary workaround, the real fix must be to identify the culprit process/queue that is holding the long-running transaction. Is there any tool or technique for this? Note: MQ is used via JMS only.
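
    One way to narrow down the owner of a long-running unit of work, assuming WebSphere MQ v6 or later (an assumption; check your version), is to ask the queue manager which connections have a unit of work open and when it started, via runmqsc (QMGR_NAME is a placeholder for your queue manager):

        $ runmqsc QMGR_NAME
        DISPLAY CONN(*) WHERE(UOWSTATE EQ ACTIVE) APPLTAG PID UOWSTDA UOWSTTI UOWLOGDA UOWLOGTI

    Connections whose unit-of-work start date/time (UOWSTDA/UOWSTTI) lies far in the past are the ones pinning the log; APPLTAG and PID point at the owning application. For XA transactions that are already in doubt, dspmqtrn -m QMGR_NAME lists them.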

    Read the article

  • How do you adjust the frame or vertical alignment of a UIBarButtonItem contained by a UIToolbar instance?

    - by akaii
    Horizontal positioning of UIBarButtonItems is no problem; I can simply pad the space with fixed/flexible space items. However, I can't seem to adjust the toolbar items vertically: UIToolbar has no alignment property, and UIBarButtonItem has no way of setting its frame. I need to do this because we're using a mix of custom icons created with initWithImage, and standard icons created with initWithBarButtonSystemItem. The custom icons aren't being centered properly (they're offset upwards relative to the system icons, which are centered correctly), so the toolbar looks awkward.
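
    One workaround, sketched here on the assumption that the custom icons can live in a custom view (the icon name and inset values are made up; tune them by eye): wrap the image in a plain UIButton, nudge it with imageEdgeInsets, and hand the button to the bar as a custom view.

        UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
        [button setImage:[UIImage imageNamed:@"custom-icon.png"]
                forState:UIControlStateNormal];
        button.frame = CGRectMake(0.0f, 0.0f, 32.0f, 32.0f);
        // shift the image down 2 points relative to the button's bounds
        button.imageEdgeInsets = UIEdgeInsetsMake(2.0f, 0.0f, -2.0f, 0.0f);
        [button addTarget:self
                   action:@selector(iconTapped:)
         forControlEvents:UIControlEventTouchUpInside];

        UIBarButtonItem *item =
            [[[UIBarButtonItem alloc] initWithCustomView:button] autorelease];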

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    OK, so I've just been on a SQL Server course where we discussed the usage scenarios for multiple filegroups and files over local RAID and local disks, but we didn't touch on SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file; the log file is on the same volume. My interpretation is that separate data files should be spread across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't have the same disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can work on each file in parallel. Which leads to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you.
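
    Not an answer to the "how many files" question, but for reference the mechanics of moving busy tables onto their own filegroup with several data files are straightforward; a T-SQL sketch with made-up names, paths, and sizes:

        ALTER DATABASE MyDb ADD FILEGROUP HotTables;

        ALTER DATABASE MyDb ADD FILE
            (NAME = HotTables1, FILENAME = 'H:\Data\HotTables1.ndf', SIZE = 10GB),
            (NAME = HotTables2, FILENAME = 'I:\Data\HotTables2.ndf', SIZE = 10GB)
        TO FILEGROUP HotTables;

        -- rebuilding the clustered index moves the table to the new filegroup
        CREATE UNIQUE CLUSTERED INDEX PK_Orders
            ON dbo.Orders (OrderID)
            WITH (DROP_EXISTING = ON)
            ON HotTables;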

    Read the article

  • TreeView Root Node Style

    - by ScG
    I have a tree view: <asp:TreeView ID="TreeView1" runat="server" DataSourceID="SiteMapDataSource1" ShowExpandCollapse="False"> </asp:TreeView> As you can see, the root node is hidden. However, when the tree renders there is whitespace to the left of each leaf node, probably because the hidden root node still takes up space. How do I remove that whitespace?
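
    On the assumption that the root is currently being hidden through styling while still being rendered (the SiteMapDataSource isn't shown), one thing to try is to stop the starting node from being emitted at all, and to drop the indent reserved for the disabled expand images; a sketch:

        <asp:SiteMapDataSource ID="SiteMapDataSource1" runat="server"
            ShowStartingNode="false" />

        <asp:TreeView ID="TreeView1" runat="server"
            DataSourceID="SiteMapDataSource1"
            ShowExpandCollapse="False"
            NodeIndent="0">
        </asp:TreeView>

    With ShowStartingNode="false" the root never renders, so it cannot reserve space, and NodeIndent="0" removes the per-level indent that otherwise appears to the left of each leaf node.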

    Read the article

< Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >