Search Results

Search found 9557 results on 383 pages for 'x86 64'.


  • Can't connect to certain HTTPS sites

    - by mind.blank
    I've just moved to a new apartment with an internet connection via a router, and I'm finding that I can't connect to quite a few sites that use SSL. For example, trying to connect to PayPal:

        curl -v https://paypal.com
        * About to connect() to paypal.com port 443 (#0)
        *   Trying 66.211.169.3... connected
        * successfully set certificate verify locations:
        *   CAfile: none
            CApath: /etc/ssl/certs
        * SSLv3, TLS handshake, Client hello (1):
        * Unknown SSL protocol error in connection to paypal.com:443
        * Closing connection #0
        curl: (35) Unknown SSL protocol error in connection to paypal.com:443

    curl -v -ssl https://paypal.com gives the same output. For some sites it works:

        curl -v https://www.google.com
        * About to connect() to www.google.com port 443 (#0)
        *   Trying 74.125.235.112... connected
        * successfully set certificate verify locations:
        *   CAfile: none
            CApath: /etc/ssl/certs
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Server key exchange (12):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using ECDHE-RSA-RC4-SHA
        * Server certificate:
        *   subject: C=US; ST=California; L=Mountain View; O=Google Inc; CN=www.google.com
        *   start date: 2011-10-26 00:00:00 GMT
        *   expire date: 2013-09-30 23:59:59 GMT
        *   common name: www.google.com (matched)
        *   issuer: C=ZA; O=Thawte Consulting (Pty) Ltd.; CN=Thawte SGC CA
        *   SSL certificate verify ok.
        > GET / HTTP/1.1
        > User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
        > Host: www.google.com
        > Accept: */*
        >
        < HTTP/1.1 302 Found
        < Location: https://www.google.co.jp/
        ...

    I'm using Ubuntu 12.04, with Windows 7 installed as well. These sites work on Windows :( Not sure if this information helps, but I ran ifconfig and got the following:

        eth0      Link encap:Ethernet  HWaddr 1c:c1:de:bc:e2:4f
                  inet6 addr: 2408:c3:7fff:991:686b:8d18:81b3:8dd1/64 Scope:Global
                  inet6 addr: 2408:c3:7fff:991:1ec1:deff:febc:e24f/64 Scope:Global
                  inet6 addr: fe80::1ec1:deff:febc:e24f/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:87075 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:54522 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:78167937 (78.1 MB)  TX bytes:10016891 (10.0 MB)
                  Interrupt:46 Base address:0x4000

        eth1      Link encap:Ethernet  HWaddr ac:81:12:0d:93:80
                  inet6 addr: fe80::ae81:12ff:fe0d:9380/64 Scope:Link
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:498
                  TX packets:0 errors:26 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:17

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:630 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:630 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:39592 (39.5 KB)  TX bytes:39592 (39.5 KB)

        ppp0      Link encap:Point-to-Point Protocol
                  inet addr:180.57.228.200  P-t-P:118.23.8.175  Mask:255.255.255.255
                  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1492  Metric:1
                  RX packets:39631 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:22391 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:3
                  RX bytes:43462054 (43.4 MB)  TX bytes:2834628 (2.8 MB)
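
    In case it helps, the failing handshake can also be reproduced outside curl with a short Python 3 script (a rough sketch using only the standard library, assuming a reasonably current Python 3; the hostname is just the example from above):

        import socket, ssl

        # attempt a TLS handshake and report the negotiated protocol and cipher
        ctx = ssl.create_default_context()
        with socket.create_connection(("www.paypal.com", 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname="www.paypal.com") as tls:
                print(tls.version(), tls.cipher())

    On the affected sites this should fail with an SSLError at the same point where curl reports error 35.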

  • Wireless doesn't work on a Broadcom BCM4312

    - by Boderick
    As stated, I've just upgraded to 12.04 and my Dell Inspiron 1545 isn't recognising its wireless card; I was wondering if anybody could help? Edit: Okay, so I found the wireless card by using lspci -vvv, which returned this:

        0c:00.0 Network controller: Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01)
                Subsystem: Dell Wireless 1397 WLAN Mini-Card
                Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
                Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast TAbort- SERR-
                Kernel modules: ssb

    lsmod:

        Module                 Size   Used by
        dm_crypt               22528  0
        joydev                 17393  0
        dell_wmi               12601  0
        sparse_keymap          13658  1 dell_wmi
        dell_laptop            13671  0
        dcdbas                 14098  1 dell_laptop
        psmouse                72919  0
        uvcvideo               67203  0
        serio_raw              13027  0
        videodev               86588  1 uvcvideo
        snd_hda_codec_idt      60251  1
        mac_hid                13077  0
        snd_hda_intel          32765  5
        snd_hda_codec         109562  2 snd_hda_codec_idt,snd_hda_intel
        snd_hwdep              13276  1 snd_hda_codec
        parport_pc             32114  0
        rfcomm                 38139  0
        bnep                   17830  2
        ppdev                  12849  0
        snd_pcm                80845  3 snd_hda_intel,snd_hda_codec
        bluetooth             158438  10 rfcomm,bnep
        snd_seq_midi           13132  0
        snd_rawmidi            25424  1 snd_seq_midi
        snd_seq_midi_event     14475  1 snd_seq_midi
        snd_seq                51567  2 snd_seq_midi,snd_seq_midi_event
        snd_timer              28931  2 snd_pcm,snd_seq
        snd_seq_device         14172  3 snd_seq_midi,snd_rawmidi,snd_seq
        binfmt_misc            17292  1
        snd                    62064  18 snd_hda_codec_idt,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        soundcore              14635  1 snd
        snd_page_alloc         14108  2 snd_hda_intel,snd_pcm
        lp                     17455  0
        parport                40930  3 parport_pc,ppdev,lp
        sky2                   53628  0
        ums_realtek            17920  0
        uas                    17699  0
        i915                  414603  3
        wmi                    18744  1 dell_wmi
        drm_kms_helper         45466  1 i915
        drm                   197692  4 i915,drm_kms_helper
        i2c_algo_bit           13199  1 i915
        video                  19068  1 i915
        usb_storage            39646  1 ums_realtek

    ifconfig -a:

        eth0      Link encap:Ethernet  HWaddr 00:23:ae:24:71:45
                  inet addr:192.168.1.158  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::223:aeff:fe24:7145/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:14340 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:10191 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:15403754 (15.4 MB)  TX bytes:1262570 (1.2 MB)
                  Interrupt:18

        ham0      Link encap:Ethernet  HWaddr 7a:79:05:2d:b0:f7
                  inet addr:5.45.176.247  Bcast:5.255.255.255  Mask:255.0.0.0
                  inet6 addr: fe80::7879:5ff:fe2d:b0f7/64 Scope:Link
                  inet6 addr: 2620:9b::52d:b0f7/96 Scope:Global
                  UP BROADCAST RUNNING MULTICAST  MTU:1404  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:179 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:500
                  RX bytes:0 (0.0 B)  TX bytes:27480 (27.4 KB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:433 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:433 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:60051 (60.0 KB)  TX bytes:60051 (60.0 KB)

    iwconfig:

        lo        no wireless extensions.
        ham0      no wireless extensions.
        eth0      no wireless extensions.

    The results for sudo lshw -class network:

        *-network
             description: Wireless interface
             product: BCM4312 802.11b/g LP-PHY
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:0c:00.0
             logical name: eth1
             version: 01
             serial: 00:22:5f:77:1f:e6
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 latency=0 multicast=yes wireless=IEEE 802.11bg
             resources: irq:17 memory:f69fc000-f69fffff
        *-network
             description: Ethernet interface
             product: 88E8040 PCI-E Fast Ethernet Controller
             vendor: Marvell Technology Group Ltd.
             physical id: 0
             bus info: pci@0000:09:00.0
             logical name: eth0
             version: 13
             serial: 00:23:ae:24:71:45
             size: 100Mbit/s
             capacity: 100Mbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=sky2 driverversion=1.30 duplex=full firmware=N/A ip=192.168.1.158 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
             resources: irq:45 memory:f68fc000-f68fffff ioport:de00(size=256)
        *-network
             description: Ethernet interface
             physical id: 2
             logical name: ham0
             serial: 7a:79:05:2d:b0:f7
             size: 10Mbit/s
             capabilities: ethernet physical
             configuration: autonegotiation=off broadcast=yes driver=tun driverversion=1.6 duplex=full firmware=N/A ip=5.45.176.247 link=yes multicast=yes port=twisted pair speed=10Mbit/s

    And the results of rfkill list all:

        0: brcmwl-0: Wireless LAN
                Soft blocked: yes
                Hard blocked: yes
        1: dell-wifi: Wireless LAN
                Soft blocked: yes
                Hard blocked: yes

  • I am not able to delete a corrupt NTFS partition on my pen drive. How can I force its deletion?

    - by yesuraj
    I formatted my 16GB pen drive with the NTFS file system in Windows Vista. After that I started copying some files, but only a few were copied before the copy operation hung, so I cancelled it. Now I am unable to use the pen drive. I don't really need any of the files I copied to the pen drive; I just want to be able to use it again. I have tried using Ubuntu to format the pen drive, but when I use fdisk to delete the partition, it looks like it is working fine when in fact it does not delete the partition. I am also unable to format it with any other file system. When I tried to use gparted, it threw the following error:

        Error mounting: mount exited with exit code 14: The disk contains an unclean file system (0,0).
        The file system wasn't safely closed on Windows. Fixing.
        ntfs_attr_pread_i: ntfs_pread failed: Input/output error
        Failed to read NTFS $Bitmap: Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware.
        In the first case run chkdsk /f on Windows then reboot into Windows twice.
        The usage of the /f parameter is very important!
        If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper directory, (e.g. /dev/mapper/nvidia_eahaabcc1).
        Please see the dmraid documentation for more details.

    When I searched the Internet I found help on how to recover the data, but I don't want to recover it; I want to format the drive again. When I pressed w after deleting the partition, it took more time than previously. After that I removed the pen drive and re-inserted it, but the partition I had deleted was still present. If I simply type the command fdisk /dev/sdb without removing the pen drive after the partition is deleted, it returns the error message "Unable to open /dev/sdb". Here are the steps that I followed:

        root@yesuraj-ubuntu:~# fdisk /dev/sdb
        Command (m for help): d
        Selected partition 1
        Command (m for help): w
        The partition table has been altered!
        Calling ioctl() to re-read partition table.
        Syncing disks.

    The dmesg prints are as follows:

        [ 6139.774753] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
        [ 6154.816941] usb 2-1.3: device descriptor read/64, error -110
        [ 6169.968908] usb 2-1.3: device descriptor read/64, error -110
        [ 6170.158427] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
        [ 6185.200638] usb 2-1.3: device descriptor read/64, error -110
        [ 6200.352572] usb 2-1.3: device descriptor read/64, error -110
        [ 6200.542093] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
        [ 6205.559460] usb 2-1.3: device descriptor read/8, error -110

    I used the dd command and it erased the partition table, but now when I connect the pen drive, dmesg contains this error message:

        [88143.437001] sdb: unknown partition table

    I am not able to create a partition using fdisk /dev/sdb; the error message says that it is unable to find the node. Other messages from dmesg follow below:

        [87100.531596] usb 2-1.3: new high speed USB device number 39 using ehci_hcd
        [87130.915257] usb 2-1.3: new high speed USB device number 40 using ehci_hcd
        [87135.932647] usb 2-1.3: device descriptor read/8, error -110

  • MySQL Server 5.6 defaults changes

    - by user12626240
    We're improving the MySQL Server defaults, as announced by Tomas Ulin at MySQL Connect. Here's what we're changing (old default → new default):

        back_log                       50 → 50 + (max_connections / 5), capped at 900
        binlog_checksum                off → CRC32. New variable in 5.6
        binlog_row_event_max_size      1k → 8k
        flush_time                     1800 → 0 on Windows. Was already 0 on other platforms
        host_cache_size                128 → 128 + 1 for each of the first 500 max_connections + 1 for every 20 max_connections over 500, capped at 2000. New variable in 5.6
        innodb_autoextend_increment    8 → 64. Now affects *.ibd files; 64 is 64 megabytes
        innodb_buffer_pool_instances   0 → 8. On 32-bit Windows only, if innodb_buffer_pool_size is greater than 1300M, the default is innodb_buffer_pool_size / 128M
        innodb_concurrency_tickets     500 → 5000
        innodb_file_per_table          off → on
        innodb_log_file_size           5M → 48M. InnoDB will always change the size to match the my.cnf value. Also see innodb_log_compressed_pages and binlog_row_image
        innodb_old_blocks_time         0 → 1000 (1 second)
        innodb_open_files              300 → 300; if innodb_file_per_table is ON, the higher of table_open_cache or 300
        innodb_purge_batch_size        20 → 300
        innodb_purge_threads           0 → 1
        innodb_stats_on_metadata       on → off
        join_buffer_size               128k → 256k
        max_allowed_packet             1M → 4M
        max_connect_errors             10 → 100
        open_files_limit               0 → 5000. See note 1
        query_cache_size               0 → 1M
        query_cache_type               on/1 → off/0
        sort_buffer_size               2M → 256k
        sql_mode                       none → NO_ENGINE_SUBSTITUTION. See later post about default my.cnf for STRICT_TRANS_TABLES
        sync_master_info               0 → 10000. Recommend: master_info_repository=table
        sync_relay_log                 0 → 10000
        sync_relay_log_info            0 → 10000. Recommend: relay_log_info_repository=table. Also see Replication Relay and Status Logs
        table_definition_cache         400 → 400 + table_open_cache / 2, capped at 2000
        table_open_cache               400 → 2000. Also see table_open_cache_instances
        thread_cache_size              0 → 8 + max_connections / 100, capped at 100

    Note 1: in 5.5 there was already a rule to make open_files_limit 10 + max_connections + table_cache_size * 2 if that was higher than the user-specified value. It now uses the higher of that and (5000 or what you specify).

    We are also adding a new default my.cnf file and guided instructions on the key settings to adjust. More on this in a later post. We're also providing a page with suggestions for settings to improve backwards compatibility. The old example files like my-huge.cnf are obsolete.

    Some of the improvements are present from 5.6.6 and the rest are coming. These are ideas, and until they are in an official GA release, they are subject to change. As part of this work I reviewed every old server setting, plus many hundreds of emails of feedback and testing results from inside and outside Oracle's MySQL Support team, and the many excellent blog entries and comments from others over the years, including from many MySQL Gurus out there, like Baron, Sheeri, Ronald, Schlomi, Giuseppe and Mark Callaghan.

    With these changes we're trying to make it easier to set up the server by adjusting only a few settings that will cause others to be set. This happens only at server startup and only applies to variables where you haven't set a value. You'll see a similar approach used for the Performance Schema. The Gurus don't need this, but for many newcomers the defaults will be very useful.

    Possibly the most unusual change is the way we vary the setting for innodb_buffer_pool_instances for 32-bit Windows. This is because we've found that DLLs with specified load addresses often fragment the limited four-gigabyte 32-bit address space and make it impossible to allocate more than about 1300 megabytes of contiguous address space for the InnoDB buffer pool. The smaller requests for many pools are more likely to succeed. If you change the value of innodb_log_file_size in my.cnf you will see a message like this in the error log file at the next restart, instead of the old error message:

        [Warning] InnoDB: Resizing redo log from 2*64 to 5*128 pages, LSN=5735153

    One of the biggest challenges for the defaults is the millions of installations on a huge range of systems, from point-of-sale terminals and routers through shared hosting or end-user systems and on to major servers with lots of CPU cores, hundreds of gigabytes of RAM and terabytes of fast disk space. Our past defaults were for the smaller systems; these changes shift the target to larger shared hosting or shared end-user systems, still with a bias towards the smaller end. There is a bias in favour of OLTP workloads, so reporting systems may need more changes. Where there is a conflict between the best settings for benchmarks and normal use, we've favoured production, not benchmarks. We're very interested in your feedback, comments and suggestions.
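
    To make the auto-sizing concrete, here is a rough sketch of how a few of the formulas above scale with max_connections (an approximation for illustration only; the server's exact rounding may differ):

        def back_log(max_connections):
            # 50 + (max_connections / 5), capped at 900
            return min(50 + max_connections // 5, 900)

        def host_cache_size(max_connections):
            # 128 + 1 per connection up to 500, then 1 per 20 connections, capped at 2000
            first_500 = min(max_connections, 500)
            over_500 = max(max_connections - 500, 0) // 20
            return min(128 + first_500 + over_500, 2000)

        def thread_cache_size(max_connections):
            # 8 + max_connections / 100, capped at 100
            return min(8 + max_connections // 100, 100)

        for m in (151, 500, 2000):
            print(m, back_log(m), host_cache_size(m), thread_cache_size(m))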

  • How to parse a CSV file containing serialized PHP? [migrated]

    - by garbetjie
    I've just started dabbling in Perl, to try and gain some exposure to different programming languages - so forgive me if some of the following code is horrendous. I needed a quick and dirty CSV parser that could receive a CSV file and split it into file batches containing "X" number of CSV lines (taking into account that entries could contain embedded newlines). I came up with a working solution, and it was going along just fine. However, one of the CSV files that I'm trying to split contains serialized PHP code, and this seems to break the CSV parsing. As soon as I remove the serialization, the CSV file is parsed correctly. Are there any tricks I need to know when it comes to parsing serialized data in CSV files? Here is a shortened sample of the code:

        use strict;
        use warnings;
        use Text::CSV_XS;

        my $csv = Text::CSV_XS->new({ eol => $/, always_quote => 1, binary => 1 });

        my $lines = 0;
        my ($in, $out);
        open $in, "<:encoding(utf8)", "infile.csv" or die "cannot open input file infile.csv";
        open $out, ">outfile.000";
        binmode($out, ":utf8");

        while (my $line = $csv->getline($in)) {
            $lines++;
            $csv->print($out, $line);
        }

    I'm never able to get into the while loop shown above. As soon as I remove the serialized data, I suddenly am able to get into the loop. Edit: an example of a line that is causing me trouble (taken straight from Vim - hence the ^M):

        "26","other","1","20,000 Subscriber Plan","Some text here.^M\
        Some more text","on","","18","","0","","0","0","recurring","0","","payment","totalsend","0","tsadmin","R34bL9oq","37","0","0","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:18:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"73\";i:17;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","6","1"
        "27","other","1","35,000 Subscriber Plan","Some test here.^M\
        Some more text","on","","18","","0","","0","0","recurring","0","","payment","totalsend","0","tsadmin","R34bL9oq","38","0","0","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:18:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"73\";i:17;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","7","1"
        "28","other","1","50,000 Subscriber Plan","Some text here.^M\
        Some more text","on","","18","","0","","0","0","recurring","0","","payment","totalsend","0","tsadmin","R34bL9oq","39","0","0","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:18:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"73\";i:17;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","8","1"
        "73","other","8","10,000,000","","","","0","","0","","0","0","recurring","0","","payment","","0","","","75","0","10000000","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:17:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","14","0"

  • Command does not execute in crontab while command itself works just fine

    - by fuzzybee
    I have this script from Colin Johnson on GitHub - https://github.com/colinbjohnson/aws-missing-tools/tree/master/ec2-automate-backup It seems great. I have modified it to send email to myself every time an EBS snapshot is created or deleted. The following works like a charm:

        ec2-automate-backup.sh -v "vol-myvolumeid" -k 3

    However, it does not execute at all as part of my crontab (I didn't receive any emails):

        #some command that got commented out
        */5 * * * * ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;
        * * * * * date >> /root/logs/crontab.log;
        */5 * * * * date >> /root/logs/crontab2.log

    Please note that the two date commands execute just fine, as I can see the date and time in the log files. What could I have missed here? The full ec2-automate-backup.sh is as follows:

        #!/bin/bash -
        # Author: Colin Johnson / [email protected]
        # Date: 2012-09-24
        # Version 0.1
        # License Type: GNU GENERAL PUBLIC LICENSE, Version 3
        #
        #confirms that executables required for succesful script execution are available
        prerequisite_check() {
          for prerequisite in basename ec2-create-snapshot ec2-create-tags ec2-describe-snapshots ec2-delete-snapshot date
          do
            #use of "hash" chosen as it is a shell builtin and will add programs to hash table, possibly speeding execution. Use of type also considered - open to suggestions.
            hash $prerequisite &> /dev/null
            if [[ $? == 1 ]] #hash exits with exit status of 70, executable was not found
            then
              echo "In order to use `basename $0`, the executable \"$prerequisite\" must be installed." 1>&2 | mailx -s "Error happened 0" [email protected] ; exit 70
            fi
          done
        }

        #get_EBS_List gets a list of available EBS instances depending upon the selection_method of EBS selection that is provided by user input
        get_EBS_List() {
          case $selection_method in
            volumeid)
              if [[ -z $volumeid ]]
              then
                echo "The selection method \"volumeid\" (which is $app_name's default selection_method of operation or requested by using the -s volumeid parameter) requires a volumeid (-v volumeid) for operation. Correct usage is as follows: \"-v vol-6d6a0527\", \"-s volumeid -v vol-6d6a0527\" or \"-v \"vol-6d6a0527 vol-636a0112\"\" if multiple volumes are to be selected." 1>&2 | mailx -s "Error happened 1" [email protected] ; exit 64
              fi
              ebs_selection_string="$volumeid"
              ;;
            tag)
              if [[ -z $tag ]]
              then
                echo "The selected selection_method \"tag\" (-s tag) requires a valid tag (-t key=value) for operation. Correct usage is as follows: \"-s tag -t backup=true\" or \"-s tag -t Name=my_tag.\"" 1>&2 | mailx -s "Error happened 2" [email protected] ; exit 64
              fi
              ebs_selection_string="--filter tag:$tag"
              ;;
            *)
              echo "If you specify a selection_method (-s selection_method) for selecting EBS volumes you must select either \"volumeid\" (-s volumeid) or \"tag\" (-s tag)." 1>&2 | mailx -s "Error happened 3" [email protected] ; exit 64
              ;;
          esac
          #creates a list of all ebs volumes that match the selection string from above
          ebs_backup_list_complete=`ec2-describe-volumes --show-empty-fields --region $region $ebs_selection_string 2>&1`
          #takes the output of the previous command
          ebs_backup_list_result=`echo $?`
          if [[ $ebs_backup_list_result -gt 0 ]]
          then
            echo -e "An error occured when running ec2-describe-volumes. The error returned is below:\n$ebs_backup_list_complete" 1>&2 | mailx -s "Error happened 4" [email protected] ; exit 70
          fi
          ebs_backup_list=`echo "$ebs_backup_list_complete" | grep ^VOLUME | cut -f 2`
          #code to right will output list of EBS volumes to be backed up: echo -e "Now outputting ebs_backup_list:\n$ebs_backup_list"
        }

        create_EBS_Snapshot_Tags() {
          #snapshot tags holds all tags that need to be applied to a given snapshot - by aggregating tags we ensure that ec2-create-tags is called only once
          snapshot_tags=""
          #if $name_tag_create is true then append ec2ab_${ebs_selected}_$date_current to the variable $snapshot_tags
          if $name_tag_create
          then
            ec2_snapshot_resource_id=`echo "$ec2_create_snapshot_result" | cut -f 2`
            snapshot_tags="$snapshot_tags --tag Name=ec2ab_${ebs_selected}_$date_current"
          fi
          #if $purge_after_days is true, then append $purge_after_date to the variable $snapshot_tags
          if [[ -n $purge_after_days ]]
          then
            snapshot_tags="$snapshot_tags --tag PurgeAfter=$purge_after_date --tag PurgeAllow=true"
          fi
          #if $snapshot_tags is not zero length then set the tag on the snapshot using ec2-create-tags
          if [[ -n $snapshot_tags ]]
          then
            echo "Tagging Snapshot $ec2_snapshot_resource_id with the following Tags:"
            ec2-create-tags $ec2_snapshot_resource_id --region $region $snapshot_tags
            #echo "Snapshot tags successfully created" | mailx -s "Snapshot tags successfully created" [email protected]
          fi
        }

        date_command_get() {
          #finds full path to date binary
          date_binary_full_path=`which date`
          #command below is used to determine if date binary is gnu, macosx or other
          date_binary_file_result=`file -b $date_binary_full_path`
          case $date_binary_file_result in
            "Mach-O 64-bit executable x86_64") date_binary="macosx" ;;
            "ELF 64-bit LSB executable, x86-64, version 1 (SYSV)"*) date_binary="gnu" ;;
            *) date_binary="unknown" ;;
          esac
          #based on the installed date binary the case statement below will determine the method to use to determine "purge_after_days" in the future
          case $date_binary in
            gnu) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;;
            macosx) date_command="date -v+${purge_after_days}d -u +%Y-%m-%d" ;;
            unknown) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;;
            *) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;;
          esac
        }

        purge_EBS_Snapshots() {
          #snapshot_tag_list is a string that contains all snapshots with either the key PurgeAllow or PurgeAfter set
          snapshot_tag_list=`ec2-describe-tags --show-empty-fields --region $region --filter resource-type=snapshot --filter key=PurgeAllow,PurgeAfter`
          #snapshot_purge_allowed is a list of all snapshot_ids with PurgeAllow=true
          snapshot_purge_allowed=`echo "$snapshot_tag_list" | grep .*PurgeAllow'\t'true | cut -f 3`
          for snapshot_id_evaluated in $snapshot_purge_allowed
          do
            #gets the "PurgeAfter" date which is in UTC with YYYY-MM-DD format (or %Y-%m-%d)
            purge_after_date=`echo "$snapshot_tag_list" | grep .*$snapshot_id_evaluated'\t'PurgeAfter.* | cut -f 5`
            #if purge_after_date is not set then we have a problem. Need to alert user.
            if [[ -z $purge_after_date ]]
            #Alerts user to the fact that a Snapshot was found with PurgeAllow=true but with no PurgeAfter date.
            then
              echo "A Snapshot with the Snapshot ID $snapshot_id_evaluated has the tag \"PurgeAllow=true\" but does not have a \"PurgeAfter=YYYY-MM-DD\" date. $app_name is unable to determine if $snapshot_id_evaluated should be purged." 1>&2 | mailx -s "Error happened 5" [email protected]
            else
              #convert both the date_current and purge_after_date into epoch time to allow for comparison
              date_current_epoch=`date -j -f "%Y-%m-%d" "$date_current" "+%s"`
              purge_after_date_epoch=`date -j -f "%Y-%m-%d" "$purge_after_date" "+%s"`
              #perform comparison - if $purge_after_date_epoch is a lower number than $date_current_epoch then the PurgeAfter date is earlier than the current date - and the snapshot can be safely removed
              if [[ $purge_after_date_epoch < $date_current_epoch ]]
              then
                echo "The snapshot \"$snapshot_id_evaluated\" with the Purge After date of $purge_after_date will be deleted."
                ec2-delete-snapshot --region $region $snapshot_id_evaluated
                echo "Old snapshots successfully deleted for $volumeid" | mailx -s "Old snapshots successfully deleted for $volumeid" [email protected]
              fi
            fi
          done
        }

        #calls prerequisite_check function to ensure that all executables required for script execution are available
        prerequisite_check

        app_name=`basename $0`
        #sets defaults
        selection_method="volumeid"
        region="ap-southeast-1"
        #date_binary allows a user to set the "date" binary that is installed on their system and, therefore, the options that will be given to the date binary to perform date calculations
        date_binary=""
        #sets the "Name" tag set for a snapshot to false - using "Name" requires that ec2-create-tags be called in addition to ec2-create-snapshot
        name_tag_create=false
        #sets the Purge Snapshot feature to false - this feature will eventually allow the removal of snapshots that have a "PurgeAfter" tag that is earlier than current date
        purge_snapshots=false
        #handles options processing
        while getopts :s:r:v:t:k:pn opt
        do
          case $opt in
            s) selection_method="$OPTARG";;
            r) region="$OPTARG";;
            v) volumeid="$OPTARG";;
            t) tag="$OPTARG";;
            k) purge_after_days="$OPTARG";;
            n) name_tag_create=true;;
            p) purge_snapshots=true;;
            *) echo "Error with Options Input. Cause of failure is most likely that an unsupported parameter was passed or a parameter was passed without a corresponding option." 1>&2 ; exit 64;;
          esac
        done

        #sets date variable
        date_current=`date -u +%Y-%m-%d`
        #sets the PurgeAfter tag to the number of days that a snapshot should be retained
        if [[ -n $purge_after_days ]]
        then
          #if the date_binary is not set, call the date_command_get function
          if [[ -z $date_binary ]]
          then
            date_command_get
          fi
          purge_after_date=`$date_command`
          echo "Snapshots taken by $app_name will be eligible for purging after the following date: $purge_after_date."
        fi

        #get_EBS_List gets a list of EBS instances for which a snapshot is desired. The list of EBS instances depends upon the selection_method that is provided by user input
        get_EBS_List

        #the loop below is called once for each volume in $ebs_backup_list - the currently selected EBS volume is passed in as "ebs_selected"
        for ebs_selected in $ebs_backup_list
        do
          ec2_snapshot_description="ec2ab_${ebs_selected}_$date_current"
          ec2_create_snapshot_result=`ec2-create-snapshot --region $region -d $ec2_snapshot_description $ebs_selected 2>&1`
          if [[ $? != 0 ]]
          then
            echo -e "An error occured when running ec2-create-snapshot. The error returned is below:\n$ec2_create_snapshot_result" 1>&2 ; exit 70
          else
            ec2_snapshot_resource_id=`echo "$ec2_create_snapshot_result" | cut -f 2`
            echo "Snapshots successfully created for volume $volumeid" | mailx -s "Snapshots successfully created for $volumeid" [email protected]
          fi
          create_EBS_Snapshot_Tags
        done

        #if purge_snapshots is true, then run purge_EBS_Snapshots function
        if $purge_snapshots
        then
          echo "Snapshot Purging is Starting Now."
          purge_EBS_Snapshots
        fi

    Cron log:

        Oct 23 10:24:01 ip-10-130-153-227 CROND[28214]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;))
        Oct 23 10:24:01 ip-10-130-153-227 CROND[28215]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:25:01 ip-10-130-153-227 CROND[28228]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:25:01 ip-10-130-153-227 CROND[28229]: (root) CMD (date >> /root/logs/crontab2.log)
        Oct 23 10:26:01 ip-10-130-153-227 CROND[28239]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:27:01 ip-10-130-153-227 CROND[28247]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;))
        Oct 23 10:27:01 ip-10-130-153-227 CROND[28248]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:28:01 ip-10-130-153-227 CROND[28263]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:29:01 ip-10-130-153-227 CROND[28275]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:30:01 ip-10-130-153-227 CROND[28292]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;))
        Oct 23 10:30:01 ip-10-130-153-227 CROND[28293]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:30:01 ip-10-130-153-227 CROND[28294]: (root) CMD (date >> /root/logs/crontab2.log)
        Oct 23 10:31:01 ip-10-130-153-227 CROND[28312]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:32:01 ip-10-130-153-227 CROND[28319]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:33:01 ip-10-130-153-227 CROND[28325]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:33:01 ip-10-130-153-227 CROND[28324]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;))
        Oct 23 10:34:01 ip-10-130-153-227 CROND[28345]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:35:01 ip-10-130-153-227 CROND[28362]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:35:01 ip-10-130-153-227 CROND[28363]: (root) CMD (date >> /root/logs/crontab2.log)

    Mails to root:

        From [email protected] Tue Oct 23 06:00:01 2012
        Return-Path: <[email protected]>
        Date: Tue, 23 Oct 2012 06:00:01 GMT
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <root@ip-10-130-153-227> root ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3
        Content-Type: text/plain; charset=UTF-8
        Auto-Submitted: auto-generated
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=root>
        X-Cron-Env: <USER=root>
        Status: R

        /bin/sh: root: command not found
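
    For reference, the trailing /bin/sh: root: command not found in the cron mail is what /etc/crontab-style lines (which include a user field) produce when pasted into a user crontab. A hedged sketch of a root user-crontab entry with no user field, an explicit PATH, and an absolute script path (the paths here are illustrative, not the asker's actual ones):

        # edited with crontab -e as root; user crontabs take no user column
        PATH=/usr/local/bin:/usr/bin:/bin
        */5 * * * * /root/aws-missing-tools/ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3 >> /root/logs/ec2ab.log 2>&1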

  • mercurial .hgrc notify hook

    - by Eeyore
    Could someone tell me what is incorrect in my .hgrc configuration? I am trying to use Gmail to send an e-mail after each push and/or commit. .hgrc:

        [paths]
        default = ssh://www.domain.com/repo/hg

        [ui]
        username = intern <[email protected]>
        ssh = "C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub"

        [extensions]
        hgext.notify =

        [hooks]
        changegroup.notify = python:hgext.notify.hook
        incoming.notify = python:hgext.notify.hook

        [email]
        from = [email protected]

        [smtp]
        host = smtp.gmail.com
        username = [email protected]
        password = sure
        port = 587
        tls = true

        [web]
        baseurl = http://dev/...

        [notify]
        sources = serve push pull bundle
        test = False
        config = /path/to/subscription/file
        template = \ndetails: {baseurl}{webroot}/rev/{node|short}\nchangeset: {rev}:{node|short}\nuser: {author}\ndate: {date|date}\ndescription:\n{desc}\n
        maxdiff = 300

    Error:

        Incoming command failed for P/project.
        running ""C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub" [email protected] "hg -R repo/hg serve --stdio""
        sending hello command
        sending between command
        remote: FATAL ERROR: Server unexpectedly closed network connection
        abort: no suitable response from remote hg! , error code: -1
        running ""C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub" [email protected] "hg -R repo/hg serve --stdio""
        sending hello command
        sending between command
        remote: FATAL ERROR: Server unexpectedly closed network connection
        abort: no suitable response from remote hg!

  • Temperature anomaly calculation of time series data

    - by neel
    I have a time series like the following:

        Data <- structure(list(
            Year = c(1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L,
                     1991L, 1991L, 1991L, 1991L, 1992L, 1992L, 1992L, 1992L, 1992L,
                     1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L,
                     1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L,
                     1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L,
                     1992L, 1992L, 1992L, 1992L),
            Month = c(8L, 9L, 9L, 9L, 10L, 10L, 10L, 11L, 11L, 11L, 12L, 12L, 12L,
                      1L, 1L, 1L, 2L, 2L, 2L, 3L, 3L, 3L, 4L, 4L, 4L, 5L, 5L, 5L,
                      6L, 6L, 6L, 7L, 7L, 7L, 8L, 8L, 8L, 9L, 9L, 9L, 10L, 10L, 10L,
                      11L, 11L, 11L, 12L, 12L, 12L),
            Day = c(30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L,
                    10L, 20L, 30L, 10L, 20L, 28L, 10L, 20L, 30L, 10L, 20L, 30L, 10L,
                    20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L,
                    30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L),
            Hour = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
                     0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
                     0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
                     0L, 0L, 0L, 0L),
            temperature = c(72.5, 64, 62.5, 64, 64, 53, 52, 52, 45.5, 49, 50, 50,
                            59, 63.5, 69.5, 61, 61, NaN, NaN, 39.5, 37, 45.5, 45,
                            39, 43.5, 52, 53, 56, 64, 66, 66.5, 73.5, 81, 85, 89.5,
                            87.5, 88.5, 83, 84.5, 74, 60.5, 59, 53, 60.5, 62.5,
                            64.5, 63, 62, 65.5)),
          .Names = c("Year", "Month", "Day", "Hour", "temperature"),
          class = "data.frame", row.names = c(NA, -49L))

    and I have to calculate the standardized anomaly. The steps to calculate the anomalies are the following:

        1. Monthly temperature departures from the long-term (1991-2007) average are obtained.
        2. These are then standardized by dividing by the standard deviation of monthly temperature.
        3. The standardized monthly anomalies are then weighted by multiplying by the fraction of the average temperature for the given month.
        4. These weighted anomalies are then summed over a 3-month time period.

    Can you please help me?
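
    In case it clarifies what is being asked, here is one way to sketch the four steps in Python/pandas (a rough interpretation for illustration; in particular, reading "fraction of the average temperature" in step 3 as the month's mean divided by the overall mean is an assumption):

        import pandas as pd

        # a few rows in the same shape as Data above (full series omitted)
        df = pd.DataFrame({
            "Year":        [1991, 1991, 1991, 1992, 1992, 1992],
            "Month":       [8, 9, 9, 1, 1, 1],
            "temperature": [72.5, 64.0, 62.5, 63.5, 69.5, 61.0],
        })

        by_month = df.groupby("Month")["temperature"]
        # steps 1-2: departure from the long-term monthly mean, standardized
        df["std_anom"] = (df["temperature"] - by_month.transform("mean")) / by_month.transform("std")
        # step 3: weight by the month's share of the overall mean temperature (assumed reading)
        df["weighted"] = df["std_anom"] * (by_month.transform("mean") / df["temperature"].mean())
        # step 4: sum over a 3-month window (rows must be in time order)
        anom_3mo = df["weighted"].rolling(3).sum()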

  • Problem: Vectorizing Code with Intel Visual FORTRAN for X64

    - by user313209
    I'm compiling my Fortran 90 code using Intel Visual Fortran on Windows Server 2003 Enterprise x64 Edition. When I compile the code for the 32-bit architecture with the automatic and manual vectorization options, the code compiles and vectorizes, and when I run it on an 8-core system the compiled code uses 70% of the CPU, which shows me that the vectorizing is working. But when I compile the code with the 64-bit compiler, it says that the code is vectorized, yet when I run it, it only shows CPU usage of about 12%, which is full usage for one core out of 8. So while the compiler says the code is vectorized, the vectorization is not working. This is strange to me because it's on an x64 Edition of Windows, and I was expecting to see the reverse result: I thought it should be better to run code compiled for the 64-bit architecture on 64-bit Windows. Does anyone have any idea why the compiled code is not able to use the full power of multiple cores in the 64-bit version? Thanks in advance for your responses.

  • Strange behavior using getchar() and -O3

    - by Eduardo
    I have these two functions:

        void set_dram_channel_width(int channel_width){
            printf("one\n");
            getchar();
        }

        void set_dram_transaction_granularity(int cacheline_size){
            printf("two\n");
            getchar();
        }

    Output:

        one
        f          // my keyboard input
        two
        one
        f          // keyboard input
        two
        one
        f          // keyboard input
        // No more calls

    Then I change the functions to:

        void set_dram_channel_width(int channel_width){
            printf("one\n");
        }

        void set_dram_transaction_granularity(int cacheline_size){
            printf("two\n");
            getchar();
        }

    Output:

        one
        two
        f          // keyboard input
        // No more calls

    Both functions are called by external code; the code for both programs is the same, just changing the getchar(). I get those two different outputs. Is this possible, or is there something really wrong in my code? Thanks. This is the output I get with GDB. For the first code:

        (gdb) break mem-dram.c:374
        Breakpoint 1 at 0x71c810: file build/ALPHA_FS/mem/dramsim/mem-dram.c, line 374.
        (gdb) break mem-dram.c:381
        Breakpoint 2 at 0x71c7b0: file build/ALPHA_FS/mem/dramsim/mem-dram.c, line 381.
        (gdb) run -d ./tmp/MyBench2/
        one
        f
        [Switching to Thread 47368811512112 (LWP 17507)]
        Breakpoint 1, set_dram_channel_width (channel_width=64)
        (gdb) c
        Continuing.
        two
        one
        f
        Breakpoint 2, set_dram_transaction_granularity (cacheline_size=64)
        (gdb) c
        Continuing.
        Breakpoint 1, set_dram_channel_width (channel_width=8)
        374     void set_dram_channel_width(int channel_width){
        (gdb) c
        Continuing.
        two
        one
        f

    For the second code:

        (gdb) break mem-dram.c:374
        Breakpoint 1 at 0x71c7b6: file build/ALPHA_FS/mem/dramsim/mem-dram.c, line 374.
        (gdb) break mem-dram.c:380
        Breakpoint 2 at 0x71c7f0: file build/ALPHA_FS/mem/dramsim/mem-dram.c, line 380.
        (gdb) run
        one
        two
        f
        [Switching to Thread 46985688772912 (LWP 17801)]
        Breakpoint 1, set_dram_channel_width (channel_width=64)
        (gdb) c
        Continuing.
        Breakpoint 2, set_dram_transaction_granularity (cacheline_size=64)
        (gdb) c
        Continuing.
        Breakpoint 1, set_dram_channel_width (channel_width=8)
        (gdb) c
        Continuing.

  • New projects not built when target platform is set explicitly

    - by stiank81
    I create a new solution with one project, and then change the target platform from "Any CPU" to "x86". After this, newly added projects don't get built by default, and their target platform doesn't follow the global setting. Why?! Looking at the configuration manager, new projects added are not checked to "Build", and they get the target platform "Any CPU" instead of the globally set x86. Why is this happening? I expect new projects to get the globally set and defined x86 target platform too. Some things I've tried:

    - Toggle the global platform back to Any CPU, and then to x86 again. No change.
    - Choosing the platform explicitly for the new project. x86 is not available in the list, and when I say <New..> and try adding it, I'm not allowed, as ".. a solution platform with the same name already exists.".
    - On the build properties for the new project I can't change the platform in the Configuration section, but I can set "Platform target" to x86 in the General section. It is, however, not clear whether this actually makes a difference, and it wouldn't respond if I change the target platform globally later.

    Initially I thought this was a problem from converting my solution from VS2008 to VS2010, but the problem applies in both places; i.e. when I create a solution in VS2008 and just stay in VS2008, I still get the problem.

  • Encoding/Decoding hex packet

    - by Roberto Pulvirenti
    I want to send this hex packet:

        00 38 60 dc 00 00 04 33 30 3c 00 00 00 20 63 62
        39 62 33 61 36 37 34 64 31 36 66 32 31 39 30 64
        30 34 30 63 30 39 32 66 34 66 38 38 32 62 00 06
        35 2e 31 33 2e 31 00 00 02 3c

    So I build the string:

        string packet = "003860dc0000" + textbox1.Text + "00000020" + textbox2.Text + "0006" + textbox3.Text;

    then "convert" it to ASCII:

        conn_str = HexString2Ascii(packet);

    then I send the packet... but I receive this (note that "dc" has become "c3 9c" and a trailing "0a" has appeared):

        00 38 60 c3 9c 00 00 04 33 30 3c 00 00 00 20 63
        62 39 62 33 61 36 37 34 64 31 36 66 32 31 39 30
        64 30 34 30 63 30 39 32 66 34 66 38 38 32 62 00
        06 35 2e 31 33 2e 31 00 00 02 3c 0a

    Why?? Thank you! P.S. The function is:

        private string HexString2Ascii(string hexString)
        {
            byte[] tmp;
            int j = 0;
            int length = hexString.Length - 2;
            tmp = new byte[hexString.Length / 2];
            for (int i = 0; i <= length; i += 2)
            {
                tmp[j] = (byte)Convert.ToChar(Int32.Parse(hexString.Substring(i, 2), System.Globalization.NumberStyles.HexNumber));
                j++;
            }
            return Encoding.GetEncoding(1252).GetString(tmp);
        }
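
    For what it's worth, "c3 9c" is exactly what the single byte 0xdc becomes when it is re-encoded as UTF-8 text, and the stray "0a" is a newline appended somewhere on the way out, so the byte-to-string round trip is the likely culprit. A sketch of the difference in Python (for illustration only; in C# the equivalent fix is to keep the packet as a byte[] and never pass it through a text Encoding):

        packet_hex = "003860dc00000433303c"      # field values here are illustrative
        payload = bytes.fromhex(packet_hex)      # raw bytes; no text encoding involved
        print(payload.hex())                     # round-trips exactly: 003860dc00000433303c
        print(payload.decode("latin-1").encode("utf-8").hex())  # the 'dc' becomes 'c39c'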

  • [C#] Three System.Drawing methods manifest slow drawing or flicker: Solutions? or Other Options?

    - by Luke Mcneice
    Hi all, I am doing a little graphing via System.Drawing and I'm having a few problems. I'm holding data in a Queue and I'm drawing (graphing) that data onto three picture boxes. This method fills the picture box and then scrolls the graph across. So as not to draw on top of the previous drawings (gradually looking messier), I found 2 solutions to draw the graph:

        1. Call plot.Clear(BACKGOUNDCOLOR) before the draw loop [block commented below], although this causes a flicker to appear from the time it takes to do the actual drawing loop.
        2. Call plot.DrawLine(channelPen[5], j, 140, j, 0); just before each DrawLine [commented below], although this causes the drawing to start OK and then slow down very quickly to a crawl, as if a wait command had been placed before the draw command.

    Here is the code for reference:

        /*
        plotx.Clear(BACKGOUNDCOLOR);
        ploty.Clear(BACKGOUNDCOLOR);
        plotz.Clear(BACKGOUNDCOLOR);
        */
        for (int j = 1; j < 599; j++)
        {
            if (j > RealTimeBuffer.Count - 1) break;
            QueueEntity past = RealTimeBuffer.ElementAt(j - 1);
            QueueEntity current = RealTimeBuffer.ElementAt(j);
            if (j == 1)
            {
                //plotx.DrawLine(channelPen[5], 0, 140, 0, 0);
                //ploty.DrawLine(channelPen[5], 0, 140, 0, 0);
                //plotz.DrawLine(channelPen[5], 0, 140, 0, 0);
            }
            //plotx.DrawLine(channelPen[5], j, 140, j, 0);
            plotx.DrawLine(channelPen[0], j - 1, (((past.accdata.X - 0x7FFF) / 256) + 64), j, (((current.accdata.X - 0x7FFF) / 256) + 64));
            //ploty.DrawLine(channelPen[5], j, 140, j, 0);
            ploty.DrawLine(channelPen[1], j - 1, (((past.accdata.Y - 0x7FFF) / 256) + 64), j, (((current.accdata.Y - 0x7FFF) / 256) + 64));
            //plotz.DrawLine(markerPen, j, 140, j, 0);
            plotz.DrawLine(channelPen[2], j - 1, (((past.accdata.Z - 0x7FFF) / 256) + 94), j, (((current.accdata.Z - 0x7FFF) / 256) + 94));
        }

    Are there any tricks to avoid these overheads? If not, would there be any other/better solutions?

  • How can I generate a FindBugs report that shows me the bugs removed between two revisions in the bug database?

    - by David Deschenes
    I am attempting to execute a combination of the FindBugs commands filterBugs and convertXmlToText, against a bug database that I created, to generate a report that shows me the all of the bugs removed between two revisions of the system that I am working on. Unfortunately, the resulting report does not show any bug details. It appears that the convertXmlToText throws away all bugs that are dead (aka inactive)... the exact set of bugs that I'd like to see. Below is what I see when I pass the results of the filterBugs command to the mineBugHistory command: build/findbugs/bin> ./filterBugs -before r39921 -absent r41558 -active:false ../../../mmfg/bugDB-2.xml | ./mineBugHistory seq version time classes NCSS added newCode fixed removed retained dead active 0 r39764 1271169398000 438 74069 0 64 0 0 0 0 64 1 r39921 1271186932000 441 74333 0 0 22 0 42 0 42 2 r40149 1271185876000 449 74636 0 0 3 0 39 22 39 3 r40344 1271180332000 452 74789 0 0 7 0 32 25 32 4 r40558 1271179612000 452 74806 0 0 1 0 31 32 31 5 r40793 1271178818000 464 75610 0 0 20 0 11 33 11 6 r41016 1271176154000 467 75712 0 0 4 0 7 53 7 7 r41303 1271175616000 481 76931 0 0 7 0 0 57 0 8 r41558 1271175026000 486 77793 0 0 0 0 0 64 0 What I'd like to see in the HTML report is the list of the 64 bugs that are shown as active in version r39764 (sequence # 0). Below is the command line that I am using to generate the HTML report: build/findbugs/bin> ./filterBugs -before r39921 -absent r41558 -active:false ../../../mmfg/bugDB-2.xml | ./convertXmlToText -html:fancy-hist.xsl > ../../../mmfg/bugDB-removed.html

  • LuaEdit can't find module when Lua files all in the same folder

    - by joverboard
    I downloaded LuaEdit to use as an IDE and debug tool; however, I'm having trouble using it for even the simplest things. I've created a solution with 2 files in it, all of which are stored in the same folder. My files are as follows:

        --startup.lua
        require("foo")
        test("Testing", "testing", "one, two, three")

        --foo.lua
        foo = {}
        print("In foo.lua")
        function test(a,b,c)
            print(a,b,c)
        end

    This works fine in my C++ compiler when accessed through some embed code; however, when I attempt to use the same code in LuaEdit, it crashes on line 3, require("foo"), with an error stating:

        module 'foo' not found:
            no field package.preload['foo']
            no file 'C:\Program Files (x86)\LuaEdit 2010\lua\foo.lua'
            no file 'C:\Program Files (x86)\LuaEdit 2010\lua\foo\init.lua'
            no file 'C:\Program Files (x86)\LuaEdit 2010\foo.lua'
            no file 'C:\Program Files (x86)\LuaEdit 2010\foo\init.lua'
            no file '.\foo.lua'
            no file 'C:\Program Files (x86)\LuaEdit 2010\foo.dll'
            no file 'C:\Program Files (x86)\LuaEdit 2010\loadall.dll'
            no file '.\battle.dll'

    I have also tried creating these files prior to adding them to a solution and still get the same error. Is there some setting I'm missing? It would be great to have an IDE/debugger, but it's useless to me if it can't run linked functions.

  • Classify data (cell array) based on years in MATLAB

    - by user2991243
    Suppose that we have this cell array of data:

        a = {43 432 2006; 254 12 2008; 65 35 2000; 64 34 2000; 23 23 2006; 64 2 2010; 32 5 2006; 22 2 2010}

    The last column of this cell array holds years. I want to classify the data (rows) based on years, like this:

        a_2006 = {43 432 2006; 23 23 2006; 32 5 2006}
        a_2008 = {254 12 2008}
        a_2000 = {65 35 2000; 64 34 2000}
        a_2010 = {64 2 2010; 22 2 2010}

    I have different years in column three in every cell array (this cell array is a sample), so I want an automatic method to determine the years and classify the rows into a_yearA, a_yearB, etc., or some other naming so that I can distinguish the years and call the data for a year easily. How can I do this? Thanks.
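
    As an aside on the approach, grouping by the year value usually works out simpler than generating a_2006-style variable names. Here is the idea sketched in Python for illustration (in MATLAB, a containers.Map keyed on the year, or logical indexing per year, plays the same role):

        from collections import defaultdict

        rows = [[43, 432, 2006], [254, 12, 2008], [65, 35, 2000], [64, 34, 2000],
                [23, 23, 2006], [64, 2, 2010], [32, 5, 2006], [22, 2, 2010]]

        by_year = defaultdict(list)
        for row in rows:
            by_year[row[-1]].append(row)      # key on the year column

        print(by_year[2006])   # [[43, 432, 2006], [23, 23, 2006], [32, 5, 2006]]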

  • Is there an IDE/compiler PC benchmark I can use to compare my PC's performance?

    - by RickL
    I'm looking for a benchmark (and results on other PCs) which would give me an idea of the development performance gain I could get by upgrading my PC; the benchmark could also be used to justify the upgrade to my boss. I use Visual Studio 2008 for my development, so I'd like to get an idea of by what factor the build times would be improved, and it would also be good if the benchmark could incorporate IDE performance (i.e. when editing, using IntelliSense, opening code files, etc.) into its result. I currently have an AMD 3800x2, with 2GB RAM, on Vista 32. For example, I'd like to know what kind of performance gain I'd see in Visual Studio 2008 with a Q6600 and 4GB RAM on Vista 64. And also with other processors and other RAM sizes... and whether hard disk performance is a big factor. EDIT: I mentioned Vista 64 because I'm aware that Vista 32 can only use 3GB RAM maximum. So I'd presume that wanting to use more RAM would require Vista 64, but perhaps it could still be slower overall if there is a large overhead in using the 32-bit VS 2008 on a 64-bit OS.

  • Cannot generate/run migrations on rails 2.3.4

    - by Brian Roisentul
    I used to work with Rails 2.3.2 and then I decided to upgrade to version 2.3.4. Today I tried to generate a migration (I could do this fine with version 2.3.2) and I got the following error message:

        C:/Program Files (x86)/NetBeans 6.8/ruby2/jruby-1.4.0/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:812:in `const_missing': uninitialized constant ActiveSupport (NameError)
            from D:/Proyectos/Cursometro/www/config/environment.rb:33
            from C:/Program Files (x86)/NetBeans 6.8/ruby2/jruby-1.4.0/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:111:in `run'
            from D:/Proyectos/Cursometro/www/config/environment.rb:15
            from D:/Proyectos/Cursometro/www/config/environment.rb:31:in `require'
            from C:/Program Files (x86)/NetBeans 6.8/ruby2/jruby-1.4.0/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from C:/Program Files (x86)/NetBeans 6.8/ruby2/jruby-1.4.0/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/commands/generate.rb:1
            from C:/Program Files (x86)/NetBeans 6.8/ruby2/jruby-1.4.0/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/commands/generate.rb:31:in `require'
            from C:/Program Files (x86)/NetBeans 6.8/ruby2/jruby-1.4.0/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from script\generate:3

    I don't know why this is happening. Everything worked fine in 2.3.2 and now it doesn't.

  • How do you convert an unsigned int[16] of hexadecimal to an unsigned char array without losing any information?

    - by user1068636
    I have an unsigned int[16] array that, when printed out, looks like this:

        4418703544ED3F688AC208F53343AA59

    The code used to print it out is this:

        for (i = 0; i < 16; i++)
            printf("%X", CipherBlock[i] / 16), printf("%X", CipherBlock[i] % 16);
        printf("\n");

    I need to pass this unsigned int array "CipherBlock" into a decrypt() method that only takes unsigned char *. How do I correctly memcpy everything from the "CipherBlock" array into an unsigned char array without losing any information? My understanding is an unsigned int is 4 bytes and an unsigned char 1 byte. Since "CipherBlock" is 16 unsigned integers, the total size in bytes = 16 * 4 = 64 bytes. Does this mean my unsigned char[] array needs to be 64 in length? If so, would the following work?

        unsigned char arr[64] = { '\0' };
        memcpy(arr, CipherBlock, 64);

    This does not seem to work. For some reason it only copies the first byte of "CipherBlock" into "arr". The rest of "arr" is '\0' thereafter.
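
    A note on the layout: each entry of CipherBlock holds a single byte-sized value (the print loop emits exactly two hex digits per element), so on a little-endian machine a raw memcpy interleaves every data byte with three 0x00 padding bytes, and anything that treats arr as a C string stops at the first '\0'. A sketch of the two layouts in Python for illustration (assumes 4-byte little-endian ints):

        import struct

        cipher_block = [0x44, 0x18, 0x70, 0x35]        # first four elements, one byte-sized value per unsigned int
        as_memcpy = struct.pack("<4I", *cipher_block)  # what memcpy copies: each value followed by padding
        narrowed  = bytes(cipher_block)                # one byte per element
        print(as_memcpy.hex())   # 44000000180000007000000035000000
        print(narrowed.hex())    # 44187035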

  • Ways to access a 32bit DLL from a 64bit exe

    - by bufferz
    I have a project that must be compiled and run in 64-bit mode. Unfortunately, I am required to call upon a DLL that is only available in 32-bit mode, so there's no way I can house everything in one Visual Studio project. I am working to find the best way to wrap the 32-bit DLL in its own exe/service and issue remote (although on the same machine) calls to that exe/service from my 64-bit app. My OS is Win7 Pro 64-bit. The required calls to this 32-bit process are several dozen per second, but low data volume. This is a realtime image analysis application, so response time is critical despite the low volume: lots of sending/receiving of single primitives. Ideally, I would host a WCF service to house this DLL, but on a 64-bit OS one cannot force the service to run as x86! (Source.) That is really unfortunate, since I timed function calls to the WCF service at only 4ms on my machine. I have experimented with named pipes in .NET and found them to be 40-50 times slower than WCF (unusable for me). Any other options or suggestions for the best way to approach my puzzle?

  • Oracle’s Sun Server X4-8 with Built-in Elastic Computing

    - by kgee
    We are excited to announce the release of Oracle's new 8-socket server, Sun Server X4-8. It’s the most flexible 8-socket x86 server Oracle has ever designed, and also the most powerful. Not only does it use the fastest Intel® Xeon® E7 v2 processors, but its memory, I/O, and storage subsystems are also designed for maximum performance and throughput. Like its predecessor, the Sun Server X4-8 uses a “glueless” design that allows maximum performance for Oracle Database while also reducing power consumption and improving reliability. The specs are impressive: Sun Server X4-8 supports 120 cores (or 240 threads), 6 TB of memory, 9.6 TB of HDD capacity or 3.2 TB of SSD capacity, contains 16 PCIe Gen 3 I/O expansion slots, and accommodates up to 6.4 TB of Sun Flash Accelerator F80 PCIe cards. The Sun Server X4-8 is also the densest x86 server available: its 5U chassis allows 60% higher rack-level core and DIMM slot density than the competition. There has been a lot of innovation in Oracle’s x86 product line, but the latest and most significant is a capability called elastic computing, which is built into each Sun Server X4-8. Elastic computing starts with the Intel processor. While Intel provides a wide range of processors, each with a fixed combination of core count, operating frequency, and power consumption, customers have been forced to make tradeoffs when selecting a particular processor: they have had to make educated guesses about which combination of core count, frequency, and cache size would best suit the workload they intend to run on the server. Oracle and Intel worked jointly to define a new processor for the Sun Server X4-8, the Intel Xeon E7-8895 v2, which has unique characteristics and effectively combines the capabilities of three different Xeon processors into a single processor. Oracle system design engineers worked closely with Oracle’s operating system development teams to achieve the ability to vary the core count and operating frequency of the Xeon E7-8895 v2 processor over time, without the need for a system-level reboot. Along with the new processor, enhancements have been made to the system BIOS, Oracle Solaris, and Oracle Linux that allow the processors in the system to dynamically clock up to faster speeds as cores are disabled, and to reach higher maximum turbo frequencies for the remaining active cores. One customer, a stock market trading company, will take advantage of the elastic computing capability of the Sun Server X4-8 by repurposing servers daily between daytime stock trading activity and nighttime stock portfolio processing, achieving maximum performance for each workload. To learn more about Sun Server X4-8, you can find more details, including the data sheet and white papers, here. Josh Rosen is a Principal Product Manager for Oracle’s x86 servers, focusing on Oracle’s operating systems and software. He previously spent more than a decade as a developer and architect of system management software, and has worked on system management for many of Oracle's hardware products, ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

  • Oracle VM networking under the hood and 3 new templates

    - by Chris Kawalek
    We have a few cool things to tell you about. First up: have you ever wondered what happens behind the scenes in the network when you live migrate your Oracle VM server workload? Or how Oracle VM implements the network infrastructure you configure through your point-and-click actions in the GUI? Really, how do they do this? For an in-depth view of the Oracle VM for x86 networking model, look ‘Under the Hood’ at Networking in Oracle VM Server for x86 with our best practices engineer in a blog post on OTN Garage. Next, making things simple in Oracle VM is what we strive to deliver to our user community every day. With that, we are pleased to bring you updates on three new Oracle Application templates. E-Business Suite 12.1.3 for Oracle Exalogic: Oracle VM templates for Oracle E-Business Suite 12.1.3 (x86 64-bit for Oracle Exalogic Elastic Cloud) contain all the required elements to create an Oracle E-Business Suite R12 demonstration system on an Exalogic server. You can use these templates to quickly build an EBS 12.1.3 demonstration environment, bypassing the operating system and software install (via the EBS Rapid Install). For further details, please review the announcement. JD Edwards EnterpriseOne 9.1 and JD Edwards EnterpriseOne Tools 9.1.2.1 for x86 servers and Oracle Exalogic: The Oracle VM templates for JD Edwards EnterpriseOne provide a method to rapidly install JD Edwards EnterpriseOne 9.1 and Tools 9.1.2.1. The complete stack includes Oracle Database 11g R2 and Oracle WebLogic Server 10.3.5 running on Oracle Linux 5. The templates can be installed on Oracle VM Server for x86 release 3.x and on the Oracle Exalogic Elastic Cloud. PeopleSoft PeopleTools 8.5.2.10 for Oracle Exalogic: This virtual deployment package delivers a "quick start" PeopleSoft middle-tier template on Oracle Linux for the Oracle Exalogic Elastic Cloud. And lastly, are you wondering why we say "fast" and "rapid" when we refer to using Oracle VM templates to virtualize Oracle applications? Read the Evaluator Group Lab Validation report, which quantifies deployment speeds up to 10x faster than with VMware vSphere. Or check out our on-demand webcast, Quantifying the Value of Application-Driven Virtualization.

    Read the article

  • How can I remove old kernels/install new ones when /boot is full?

    - by Marcel
    I know this question has been asked many times before, but my situation is a bit different, I guess. # df -h Filesystem Size Used Avail Use% Mounted on /dev/sda3 224G 5.2G 208G 3% / udev 1.9G 4.0K 1.9G 1% /dev tmpfs 777M 260K 777M 1% /run none 5.0M 0 5.0M 0% /run/lock none 1.9G 0 1.9G 0% /run/shm /dev/sda2 90M 88M 0 100% /boot /dev/sda6 1.9G 514M 1.3G 29% /tmp My boot partition is full. Current kernel: # uname -r 3.2.0-35-generic All kernels: # dpkg --list | grep linux-image ii linux-image-3.2.0-32-generic 3.2.0-32.51 Linux kernel image for version 3.2.0 on 64 bit x86 SMP ii linux-image-3.2.0-34-generic 3.2.0-34.53 Linux kernel image for version 3.2.0 on 64 bit x86 SMP ii linux-image-3.2.0-35-generic 3.2.0-35.55 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iF linux-image-3.2.0-37-generic 3.2.0-37.58 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iF linux-image-3.2.0-38-generic 3.2.0-38.60 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iU linux-image-generic 3.2.0.37.45 Generic Linux kernel image So I thought of removing the 3.2.0-32-generic kernel with: # sudo apt-get purge linux-image-3.2.0-32-generic Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: linux-generic : Depends: linux-headers-generic (= 3.2.0.37.45) but 3.2.0.38.46 is to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). No success. When I try apt-get -f install it also fails: # apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: linux-headers-3.2.0-34 linux-headers-3.2.0-35 linux-headers-3.2.0-34-generic linux-headers-3.2.0-35-generic Use 'apt-get autoremove' to remove them. The following extra packages will be installed: linux-generic linux-image-generic The following packages will be upgraded: linux-generic linux-image-generic 2 upgraded, 0 newly installed, 0 to remove and 9 not upgraded. 5 not fully installed or removed. Need to get 0 B/4,334 B of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y Setting up initramfs-tools (0.99ubuntu13.1) ... update-initramfs: deferring update (trigger activated) Setting up linux-image-3.2.0-37-generic (3.2.0-37.58) ... Running depmod.
update-initramfs: deferring update (hook will be called later) The link /initrd.img is a dangling link to /boot/initrd.img-3.2.0-38-generic Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-37-generic /boot/vmlinuz-3.2.0-37-generic update-initramfs: Generating /boot/initrd.img-3.2.0-37-generic gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-37-generic with 1. run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-37-generic.postinst line 1010. dpkg: error processing linux-image-3.2.0-37-generic (--configure): subprocess installed post-installation script returned error exit status 2 Setting up linux-image-3.2.0-38-generic (3.2.0-38.60) ... Running depmod. update-initramfs: deferring update (hook will be called later) The link /initrd.img is a dangling link to /boot/initrd.img-3.2.0-37-generic Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-38-generic /boot/vmlinuz-3.2.0-38-generic update-initramfs: Generating /boot/initrd.img-3.2.0-38-generic gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-38-generic with 1. run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-38-generic.postinst line 1010. dpkg: error processing linux-image-3.2.0-38-generic (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.2.0-37-generic; however: Package linux-image-3.2.0-37-generic is not configured yet. dpkg: error processing linux-image-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.2.0.37.45); however: Package linux-image-generic is not configured yet. linux-generic depends on linux-headers-generic (= 3.2.0.37.45); however: Version of linux-headers-generic on system is 3.2.0.38.46. dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured Processing triggers for initramfs-tools ... No apport report written because the error message indicates its a followup error from a previous failure. No apport report written because MaxReports is reached already update-initramfs: Generating /boot/initrd.img-3.2.0-35-generic gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-35-generic with 1. dpkg: error processing initramfs-tools (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: linux-image-3.2.0-37-generic linux-image-3.2.0-38-generic linux-image-generic linux-generic initramfs-tools E: Sub-process /usr/bin/dpkg returned an error code (1) Any help would really be appreciated. Update: I did: sudo rm /boot/*-3.2.0-32-generic /boot/*-3.2.0-34-generic After that, I get the following problem with apt-get -f install: root@localhost:/# apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: linux-generic The following packages will be upgraded: linux-generic 1 upgraded, 0 newly installed, 0 to remove and 9 not upgraded. 1 not fully installed or removed. Need to get 0 B/1,722 B of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.2.0.37.45); however: Version of linux-image-generic on system is 3.2.0.38.46. linux-generic depends on linux-headers-generic (= 3.2.0.37.45); however: Version of linux-headers-generic on system is 3.2.0.38.46.
dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: linux-generic E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article
