Search Results

Search found 7277 results on 292 pages for 'operating room analytics'.

  • Outlook Web Access, reverse proxy and browser

    - by M'vy
    Hi SF'ers! We recently moved an Exchange server behind a reverse proxy due to the loss of a public IP. I've managed to configure the reverse proxy (httpd with proxy_http), but there is a problem with the SSL configuration. When accessing the OWA interface with Firefox, all is OK and working. When accessing with MSIE or Chrome, they do not retrieve the correct SSL certificate. I think this is due to the multiple virtual hosts for httpd. Is there a workaround to make sure MSIE/Chrome request the certificate for the correct domain name, like Firefox does? Already tested in the SSL virtual host:

        SetEnvIf User-Agent ".*MSIE.*" value BrowserMSIE
        Header unset WWW-Authenticate
        Header add WWW-Authenticate "Basic realm=exchange.domain.com"

    and:

        ProxyPreserveHost On

    also:

        BrowserMatch ".*MSIE.*" \
            nokeepalive ssl-unclean-shutdown \
            downgrade-1.0 force-response-1.0

    or:

        SetEnvIf User-Agent ".*MSIE.*" \
            nokeepalive ssl-unclean-shutdown \
            downgrade-1.0 force-response-1.0

    And lots of ProxyPass and ProxyPassReverse on /exchweb, /exchange, /public, etc. It still doesn't seem to work. Any clue? Thanks.

    Edit 1: version details:

        # openssl version
        OpenSSL 0.9.8k-fips 25 Mar 2009
        # /usr/sbin/httpd -v
        Server version: Apache/2.2.11 (Unix)
        Server built:   Mar 17 2009 09:15:10

    Browser versions: MSIE 8.0.6001; Opera 11.01 (revision 1190); Firefox 3.6.15; Chrome 10.0.648.151. Operating system: Windows Vista 32-bit. They are all SNI compliant; I tested them this afternoon at https://sni.velox.ch/. You're right, Shane Madden: I have multiple sites on the same public IP (and the same port as well). The server itself is just a reverse proxy that rewrites addresses to internal servers. The default host is a dev site, configured with a certificate that does not match the OWA (of course... that would have been too easy):

        <VirtualHost *:443>
            ServerName dev2.domain.com
            ServerAdmin [email protected]
            CustomLog "| /usr/sbin/rotatelogs /var/log/httpd/access-%y%m%d.log 86400" combined
            ErrorLog "| /usr/sbin/rotatelogs /var/log/httpd/error-%y%m%d.log 86400"
            LogLevel warn
            RewriteEngine on
            SetEnvIfNoCase X-Forwarded-For .+ proxy=yes
            SSLEngine on
            SSLProtocol -all +SSLv3 +TLSv1
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL:+SSLv3
            SSLCertificateFile /etc/httpd/ssl/domain.com.crt
            SSLCertificateKeyFile /etc/httpd/ssl/domain.com.key
            RewriteCond %{HTTP_HOST} dev2\.domain\.com
            RewriteRule ^/(.*)$ http://dev2.domain.com/$1 [L,P]
        </VirtualHost>

    The certificate for that host is a *.domain.com wildcard. The second vHost is:

        <VirtualHost *:443>
            ServerName exchange.domain2.com
            ServerAdmin [email protected]
            CustomLog "| /usr/sbin/rotatelogs /var/log/httpd/exchange/access-%y%m%d.log 86400" combined
            ErrorLog "| /usr/sbin/rotatelogs /var/log/httpd/exchange/error-%y%m%d.log 86400"
            LogLevel warn
            SSLEngine on
            SSLProxyEngine On
            SSLProtocol -all +SSLv3 +TLSv1
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL:+SSLv3
            SSLCertificateFile /etc/httpd/ssl/exchange.pem
            SSLCertificateKeyFile /etc/httpd/ssl/exchange.key
            RewriteEngine on
            SetEnvIfNoCase X-Forwarded-For .+ proxy=yes
            RewriteCond %{HTTP_HOST} exchange\.domain2\.com
            RewriteRule ^/(.*)$ https://exchange.domain2.com/$1 [L,P]
        </VirtualHost>

    and its certificate is for exchange.domain2.com only. I presume SNI is somehow not activated on my server. The versions of OpenSSL and Apache seem to be OK for SNI support. The only thing I do not know is whether httpd has been compiled with the right options (I assume it's a Fedora package).
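
    A quick way to tell whether the proxy is actually serving different certificates per hostname is to connect once with SNI and once without, and compare what comes back. Below is a minimal diagnostic sketch, not part of the original question, assuming Python is available on a client machine; the hostname is the one from the question.

        import socket, ssl

        def served_cert(host, port=443, use_sni=True):
            # Return the DER bytes of whatever certificate the server presents.
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE  # we want to inspect, not validate
            raw = socket.create_connection((host, port), timeout=5)
            # With use_sni=True the ClientHello carries the hostname (what Firefox
            # does); without it the server can only return its default vhost's cert.
            tls = ctx.wrap_socket(raw, server_hostname=host if use_sni else None)
            try:
                return tls.getpeercert(binary_form=True)
            finally:
                tls.close()

        host = "exchange.domain2.com"
        if served_cert(host, use_sni=True) != served_cert(host, use_sni=False):
            print("server honours SNI - clients that don't send it get the default cert")
        else:
            print("same certificate either way - SNI is not in effect on the server")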

  • File Server - Storage configuration: RAID vs LVM vs ZFS vs something else...?

    - by privatehuff
    We are a small company that does video editing, among other things, and need a place to keep backup copies of large media files and make it easy to share them. I've got a box set up with Ubuntu Server and 4 x 500 GB drives. They're currently set up with Samba as four shared folders that Mac/Windows workstations can see fine, but I want a better solution. There are two major reasons for this:

    1. 500 GB is not really big enough (some projects are larger).
    2. The current setup is cumbersome to manage, because individual hard drives have different amounts of free space and duplicated data (for backup). It is confusing now, and that will only get worse once there are multiple servers ("the project is on server2 in share4", etc.).

    So, I need a way to combine hard drives such that the failure of a single drive does not mean complete data loss, and such that users see only a single share on each server. I've done Linux software RAID5 and had a bad experience with it, but would try it again. LVM looks OK, but it seems like no one uses it. ZFS seems interesting, but it is relatively "new". What is the most efficient and least risky way to combine the drives that is convenient for my users? A rough capacity comparison is sketched below.

    Edit: The goal here is basically to create servers that contain an arbitrary number of hard drives but limit complexity from an end-user perspective (i.e. they see one "folder" per server). Backing up data is not an issue here, but how each solution responds to hardware failure is a serious concern. That is why I lump RAID, LVM, ZFS, and who-knows-what together. My prior experience with RAID5 was also on an Ubuntu Server box, and a tricky and unlikely set of circumstances led to complete data loss. I could avoid that again, but was left with the feeling that I had added an unnecessary additional point of failure to the system.

    I haven't used RAID10, but we are on commodity hardware and the number of data drives per box is pretty much fixed at 6. We've got a lot of 500 GB drives, and 1.5 TB is pretty small (still an option for at least one server, however). I have no experience with LVM and have read conflicting reports on how it handles drive failure. If a (non-striped) LVM setup could survive a single drive failing and lose only whichever files had a portion stored on that drive (storing most files on a single drive only), we could even live with that. But as long as I have to learn something totally new, I may as well go all the way to ZFS. Unlike LVM, though, I would also have to change my operating system (?), which increases the distance between where I am and where I want to be. I used a version of Solaris at uni and wouldn't mind it terribly, though. On the other end of the IT spectrum, I may also explore FreeNAS and/or Openfiler, but that doesn't really solve the how-to-combine-drives issue.
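
    Since the tradeoff being weighed is usable space versus how many drive failures each layout survives, a back-of-the-envelope comparison for the fixed 6-drive, 500 GB case makes the options concrete. A toy sketch only; filesystem overhead and ZFS metadata padding are ignored:

        drives, size_gb = 6, 500

        layouts = {
            # name: (usable capacity in GB, drive failures survived)
            "JBOD / linear LVM": (drives * size_gb, 0),        # a lost drive loses whatever lived on it
            "RAID5 / RAID-Z1":   ((drives - 1) * size_gb, 1),  # one drive's worth of parity
            "RAID6 / RAID-Z2":   ((drives - 2) * size_gb, 2),  # two drives' worth of parity
            "RAID10":            (drives // 2 * size_gb, 1),   # mirrored pairs; 1 guaranteed, up to 3 if lucky
        }

        for name, (usable, survives) in layouts.items():
            print(f"{name:18} {usable:5d} GB usable, survives {survives} failure(s)")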

  • How can I recover an ext4 filesystem corrupted after a fsck?

    - by Regan
    I have an ext4 filesystem on LUKS over software RAID5. The filesystem was operating "just fine" for several years until I began to run out of space. I had a 9T volume on 6x2T drives. I began upgrading to 3T drives by doing the mdadm fail, remove, add, rebuild, repeat process until I had a larger array. I then grew the LUKS container, and when I unmounted and tried to resize2fs I was given the message that the filesystem was dirty and needed e2fsck. Without thinking I just did:

        e2fsck -y /dev/mapper/candybox

    and it began spewing all kinds of "inode being removed" type messages (I can't remember exactly). I killed e2fsck and tried to remount the filesystem to back up the data I was concerned about. When trying to mount at this point I get:

        # mount /dev/mapper/candybox /candybox
        mount: wrong fs type, bad option, bad superblock on /dev/mapper/candybox,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    Looking back at my older logs, I noticed the filesystem was giving this error each time the machine booted:

        kernel: [79137.275531] EXT4-fs (dm-2): warning: mounting fs with errors, running e2fsck is recommended

    So shame on me for not paying attention :( I then tried to mount using every backup superblock (one after another) and each attempt left this in my log:

        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 0 failed (26534!=65440)
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 1 failed (38021!=36729)
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 2 failed (18336!=39845)
        ...
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 11911 failed (28743!=44098)
        BUG: soft lockup - CPU#0 stuck for 23s! [mount:2939]

    Attempts to restart e2fsck result in:

        # e2fsck /dev/mapper/candybox
        e2fsck 1.41.14 (22-Dec-2010)
        e2fsck: Group descriptors look bad... trying backup blocks...
        candy: recovering journal
        e2fsck: unable to set superblock flags on candy

    At this point, I decided it best to order some more drives and make an image using ddrescue. Now, two weeks later, I have an image of the LUKS partition in a .img file:

        # ls -lh
        total 14T
        -rw-r--r-- 1 root root  14T Oct 25 01:57 candybox.img
        -rw-r--r-- 1 root root  271 Oct 20 14:32 candybox.logfile

    After numerous attempts using everything I could find online, I could not coerce e2fsck to do anything on the image, so I used:

        mkfs.ext4 -L candy candybox.img -m 0 -S

    and I was able to mount the dirty filesystem read-only without the journal and recover 960G of data. It gave all kinds of errors about various directories not existing and so forth, but I was able to get some stuff, which gave me some hope! I then ran e2fsck again; it had to recreate the root inode and gave a massive list of group-count corrections. I accepted the root inode creation and said no to everything else, leaving a completely empty filesystem. I re-ran it and said yes to all questions, with the same result, but now a "clean" yet empty filesystem. extundelete gives me 0 recoverable inodes found.

    And now I'm stuck again. I can't come up with any other methods besides dropping to something like photorec, which will give me an absolute mess given how large the filesystem was. I'm willing to re-copy the image from the original array and start over if I can get any suggestions or ideas on a way to get more of my files back.
    I wish I could give more detailed logs of the commands that have run, but the output has long scrolled past except for what gets logged to syslog, and my memory is not as detailed due to the timeframe this has occurred over. Any help is greatly appreciated!
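
    Since e2fsck is rejecting every backup superblock it tries, one thing that can be done offline against the ddrescue image is to scan for superblock candidates directly: ext2/3/4 superblocks carry the magic 0xEF53 at byte offset 56, and superblocks are 1024-byte aligned. A minimal sketch, an addition to the question and not something the poster ran, assuming the image path shown above; candidate offsets can then be handed to e2fsck -b (offset divided by the block size), always against a fresh copy of the image:

        IMAGE = "candybox.img"   # ddrescue image from the question
        STEP = 1024              # superblocks sit on 1024-byte boundaries
        CHUNK = 1 << 20          # scan 1 MiB at a time

        with open(IMAGE, "rb") as f:
            base = 0
            while True:
                buf = f.read(CHUNK)
                if not buf:
                    break
                for off in range(0, len(buf) - 57, STEP):
                    # magic 0xEF53 is stored little-endian: 0x53 then 0xEF
                    if buf[off + 56] == 0x53 and buf[off + 57] == 0xEF:
                        print(f"candidate superblock at byte {base + off} "
                              f"(block {(base + off) // 4096} for 4k blocks)")
                base += len(buf)
        # Expect false positives: in practice, check more superblock fields
        # (block count, mount time) before trusting a candidate.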

  • MySQL reserves too much RAM

    - by Buddy
    I have a cheap VPS with 128 MB RAM and 256 MB burst. MySQL starts and reserves about 110 MB but uses no more than 20 MB of it. My VPS control panel shows that I use 127 MB (I'm also running nginx and sphinx). I know that it shows reserved RAM, but when I go over 128 MB my VPS reboots automatically every 4 hours. So I want to force MySQL to reserve less RAM. How can I do that? I did some tweaks in my.cnf but they didn't help much.

    top output:

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
            1 root      15   0  2156  668  572 S  0.0  0.3  0:00.03 init
        11311 root      15   0 11212  356  228 S  0.0  0.1  0:00.00 vzctl
        11312 root      18   0  3712 1484 1248 S  0.0  0.6  0:00.01 bash
        11347 root      18   0  2284  916  732 R  0.0  0.3  0:00.00 top
        13978 root      17  -4  2248  552  344 S  0.0  0.2  0:00.00 udevd
        14262 root      15   0  1812  564  472 S  0.0  0.2  0:00.03 syslogd
        14293 sphinx    15   0 11816 1172  672 S  0.0  0.4  0:00.07 searchd
        14305 root      25   0  7192 1036  636 S  0.0  0.4  0:00.00 sshd
        14321 root      25   0  2832  836  668 S  0.0  0.3  0:00.00 xinetd
        15389 root      18   0  3708 1300 1132 S  0.0  0.5  0:00.00 mysqld_safe
        15441 mysql     15   0  113m  16m 4440 S  0.0  6.4  0:00.15 mysqld
        15489 root      21   0 13056 1456  340 S  0.0  0.6  0:00.00 nginx
        15490 nginx     18   0 13328 2388  992 S  0.0  0.9  0:00.06 nginx
        15507 nginx     25   0 19520 5888 4244 S  0.0  2.2  0:00.00 php-cgi
        15508 nginx     18   0 19636 4876 2748 S  0.0  1.9  0:00.12 php-cgi
        15509 nginx     15   0 19668 4872 2716 S  0.0  1.9  0:00.11 php-cgi
        15518 root      18   0  4492 1116  568 S  0.0  0.4  0:00.01 crond

    MySQLTuner output:

        >>  MySQLTuner 1.0.1 - Major Hayden <[email protected]>
        >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >>  Run with '--help' for additional options and output filtering
        Please enter your MySQL administrative login: root
        Please enter your MySQL administrative password:

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.0.77
        [OK] Operating on 32-bit architecture with less than 2GB RAM

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 1M (Tables: 1)
        [OK] Total fragmented tables: 0

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 38m 43s (37 q [0.016 qps], 20 conn, TX: 4M, RX: 3K)
        [--] Reads / Writes: 100% / 0%
        [--] Total buffers: 28.1M global + 832.0K per thread (100 max threads)
        [OK] Maximum possible memory usage: 109.4M (42% of installed RAM)
        [OK] Slow queries: 0% (0/37)
        [OK] Highest usage of available connections: 1% (1/100)
        [OK] Key buffer size / total MyISAM indexes: 128.0K/64.0K
        [OK] Query cache efficiency: 42.1% (8 cached / 19 selects)
        [OK] Query cache prunes per day: 0
        [!!] Temporary tables created on disk: 27% (3 on disk / 11 total)
        [!!] Thread cache is disabled
        [OK] Table cache hit rate: 57% (8 open / 14 opened)
        [OK] Open file limit used: 1% (12/1K)
        [OK] Table locks acquired immediately: 100% (22 immediate / 22 locks)
        [!!] Connections aborted: 10%
        [OK] InnoDB data size / buffer pool: 1.5M/8.0M

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            When making adjustments, make tmp_table_size/max_heap_table_size equal
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Set thread_cache_size to 4 as a starting value
            Your applications are not closing MySQL connections properly
        Variables to adjust:
            tmp_table_size (> 32M)
            max_heap_table_size (> 16M)
            thread_cache_size (start at 4)

    I think if I do what MySQLTuner says, MySQL will use more RAM.
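
    For context, the tuner's "Maximum possible memory usage" line comes from a simple formula: global buffers plus per-thread buffers times max_connections. A back-of-the-envelope sketch, not part of the original post, using the 28.1M global / 832K per-thread / 100 connection figures reported above, shows which knob matters most on a box this small:

        global_buffers = 28.1          # MB: key_buffer + innodb_buffer_pool + query_cache + ...
        per_thread     = 832 / 1024    # MB: sort + read + read_rnd + join buffers + thread stack
        max_connections = 100

        worst_case = global_buffers + per_thread * max_connections
        print(f"worst case: {worst_case:.1f} MB")   # ~109.4 MB, matching the tuner output

        # Lowering max_connections is the biggest single lever on a 128 MB VPS:
        for conns in (100, 50, 20):
            print(conns, "connections ->",
                  round(global_buffers + per_thread * conns, 1), "MB")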

  • need assistance with my.cnf - 1500% CPU usage

    - by Alan Long
    I'm running into a few issues with our new database server. It is an HP G8 with 2 Intel Xeon E5-2650 processors and 32 GB of RAM. This server is dedicated as a MySQL server (5.1.69) for our intranet portal. I have been having issues with this server staying alive: I notice high CPU usage during certain times of day (8% ~ 1500%+) and very low memory usage (7 ~ 15%) based on the 'top' command. When the CPU usage passes 1000%, that is usually when the app dies. I'm trying to see what I'm doing wrong in the config file; hopefully one of the experts can chime in and let me know what they think. See below for the my.cnf file:

        [mysqld]
        default-storage-engine=InnoDB
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        #user=mysql
        large-pages
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0
        max_connections=275
        tmp_table_size=1G
        key_buffer_size=384M
        key_buffer=384M
        thread_cache_size=1024
        long_query_time=5
        low_priority_updates=1
        max_heap_table_size=1G
        myisam_sort_buffer_size=8M
        concurrent_insert=2
        table_cache=1024
        sort_buffer_size=8M
        read_buffer_size=5M
        read_rnd_buffer_size=6M
        join_buffer_size=16M
        table_definition_cache=6k
        open_files_limit=8k
        slow_query_log
        #skip-name-resolve

        # Innodb Settings
        innodb_buffer_pool_size=18G
        innodb_thread_concurrency=0
        innodb_log_file_size=1G
        innodb_log_buffer_size=16M
        innodb_flush_log_at_trx_commit=2
        innodb_lock_wait_timeout=50
        innodb_file_per_table
        #innodb_buffer_pool_instances=4
        # eliminating double buffering
        innodb_flush_method = O_DIRECT
        flush_time=86400
        innodb_additional_mem_pool_size=40M
        #innodb_io_capacity = 5000
        #innodb_read_io_threads = 64
        #innodb_write_io_threads = 64
        # increase until threads_created doesnt grow anymore
        thread_cache=1024
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=256M
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 0
        wait_timeout = 1800
        connect_timeout = 10
        interactive_timeout = 60

        [mysqldump]
        max_allowed_packet=32M

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
        log-slow-queries=/var/log/mysql/slow-queries.log
        long_query_time = 1
        log-queries-not-using-indexes

    We connect to one database with 75 tables; the largest table has 1,150,000 entries and the second largest has 128,036. I have also verified that our PHP queries are optimized as well as possible. Reference - MySQLTuner:

        >>  MySQLTuner 1.2.0 - Major Hayden <[email protected]>
        >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >>  Run with '--help' for additional options and output filtering

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.69-log
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 420M (Tables: 75)
        [!!] Total fragmented tables: 75

        -------- Security Recommendations -------------------------------------------
        [!!] User '[email protected]' has no password set.

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1h 14m 50s (8M q [1K qps], 705 conn, TX: 6B, RX: 892M)
        [--] Reads / Writes: 68% / 32%
        [--] Total buffers: 19.7G global + 35.2M per thread (275 max threads)
        [!!] Maximum possible memory usage: 29.1G (93% of installed RAM)
        [OK] Slow queries: 0% (472/8M)
        [OK] Highest usage of available connections: 66% (183/275)
        [OK] Key buffer size / total MyISAM indexes: 384.0M/91.0K
        [OK] Key buffer hit rate: 100.0% (173 cached / 0 reads)
        [OK] Query cache efficiency: 96.2% (7M cached / 7M selects)
        [!!] Query cache prunes per day: 553614
        [OK] Sorts requiring temporary tables: 0% (3 temp sorts / 1K sorts)
        [!!] Temporary tables created on disk: 49% (3K on disk / 7K total)
        [OK] Thread cache hit rate: 74% (183 created / 705 connections)
        [OK] Table cache hit rate: 97% (231 open / 238 opened)
        [OK] Open file limit used: 0% (17/8K)
        [OK] Table locks acquired immediately: 100% (432K immediate / 432K locks)
        [OK] InnoDB data size / buffer pool: 420.9M/18.0G

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce your overall MySQL memory footprint for system stability
            Increasing the query_cache size over 128M may reduce performance
            Temporary table size is already large - reduce result set size
            Reduce your SELECT DISTINCT queries without LIMIT clauses
        Variables to adjust:
            *** MySQL's maximum memory usage is dangerously high ***
            *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 256M) [see warning above]

    Thanks in advance for your help!
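
    Of the flagged numbers, the query cache is a classic suspect for CPU spikes on MySQL 5.1: with 553,614 prunes per day on a 256M cache and a 32% write workload, every write invalidates cached results under a single global mutex. A small monitoring sketch to watch the counters during a spike; the pymysql driver and the credentials are assumptions, not from the question:

        import time
        import pymysql  # assumption: pip install pymysql

        conn = pymysql.connect(host="localhost", user="monitor", password="...")

        def qcache_stats(cur):
            cur.execute("SHOW GLOBAL STATUS LIKE 'Qcache%'")
            return dict(cur.fetchall())

        with conn.cursor() as cur:
            before = qcache_stats(cur)
            time.sleep(60)
            after = qcache_stats(cur)

        for key in ("Qcache_hits", "Qcache_inserts", "Qcache_lowmem_prunes"):
            print(key, int(after[key]) - int(before[key]), "in the last minute")
        # If prunes keep pace with inserts while CPU spikes, shrinking or
        # disabling the query cache is usually the first thing to try.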

  • A failed disk (Pay for professional service or SpinRite?) (new edit)

    - by huggie
    EDIT: After much negotiating and begging and seeing through the promotional smoke screen, thanks to the nice representative who took my case, I now know that the engineer has already fixed my NTFS partition (I guess it might have been a bad block in the partition table?). She told me that the problem was considered minor, and I should be able to boot normally and just copy stuff out. Whew... I'm glad I didn't agree to the NTD $16,000 deal.

    New question (should this be in a new thread?): is it safer to use the Linux "dd" command, or is it better to boot normally into Windows XP and just copy stuff out?

    EDIT2: Thanks for all the help. I gave the best answer to Console as it's most directly related to my question, but many suggestions were helpful and informative.

    ---- ORIGINAL POST BELOW ----

    Hi, in my previous post (you don't need to read it, but it's at http://superuser.com/questions/48838/windows-xp-a-disk-read-error-occurred), I said that my hard disk was not booting and was showing "a disk read error occurred". I took it to a recovery professional. A representative told me today that the NTFS partitions have a "NTFS partition system crash". I have no idea what that means. The engineer handling my drive will not be available for contact till tomorrow.

    Now, the company charges NTD (New Taiwan Dollar) $16,000 to recover the lost data. That's kind of a lot, considering that my graduate student monthly stipend is currently NTD $32,000 (the max allowed by regulation; it may be lower and may change depending on funding). Now I'm weighing the options:

    Option A: Let the professionals recover it for half of my monthly stipend. If the files/directories I designated are not recovered, I don't pay a penny (other than the initial examination fee of NTD $1,000, which I've already paid).

    Option B: Let me try SpinRite; if it fails, back to Option A.

    I spoke to the representative at the company and they recommended I not handle it on my own (yeah, of course that's what they all say, right?), and that at this price tag the disk error is probably relatively minor and the data recoverable. But the representative really did not have detailed information about the disk failure, so I didn't take her recommendation readily. One thing I did heed was that she said what they would do is duplicate the disk before attempting recovery, so there would be no data loss. (Is this true? Can't duplicating invoke further data loss?) That sounds very good to me. Or maybe a third option:

    Option C: Negotiate to pay them to duplicate the disk, hopefully for a much smaller price tag. Let me try SpinRite; if it fails, back to Option A.

    This is a difficult decision. Ultimately I want my data back, but if a cheaper way is available to achieve the same thing... Can operating with SpinRite also corrupt data in some way? I've no idea what happened to my drive. I'll attempt to contact the engineer, hope to get it clarified, and make an edit here.
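
    On the new question (dd versus booting Windows and copying): the usual advice matches what the recovery company described, namely image the failing disk once, reading it as little as possible, and work only from the copy. A minimal sketch of what a retry-and-skip imager does, added here for illustration and assuming a Linux live environment where the failing disk appears as /dev/sdX (a hypothetical device name); a real tool like GNU ddrescue does this far better and also keeps a log of bad regions:

        SRC, DST = "/dev/sdX", "disk.img"   # hypothetical source device and image path
        BLOCK = 1 << 16                     # 64 KiB reads; real tools shrink near bad areas

        with open(SRC, "rb", buffering=0) as src, open(DST, "wb") as dst:
            pos = 0
            while True:
                src.seek(pos)
                try:
                    chunk = src.read(BLOCK)
                except OSError:             # unreadable region: zero-fill and move on
                    dst.seek(pos)
                    dst.write(b"\x00" * BLOCK)
                    pos += BLOCK
                    continue
                if not chunk:               # end of device
                    break
                dst.seek(pos)
                dst.write(chunk)
                pos += len(chunk)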

  • mySQL Optimization Suggestions

    - by Brian Schroeter
    I'm trying to optimize our MySQL configuration for our large Magento website. The reason I believe MySQL needs further configuration is that New Relic has shown our SELECT queries taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results... (Disclaimer: I restarted MySQL earlier after tweaking some settings, so the results here may not be 100% accurate):

        >>  MySQLTuner 1.3.0 - Major Hayden <[email protected]>
        >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >>  Run with '--help' for additional options and output filtering
        [OK] Currently running supported MySQL version 5.5.37-35.0
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
        [--] Data in MyISAM tables: 7G (Tables: 332)
        [--] Data in InnoDB tables: 213G (Tables: 8714)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [--] Data in MEMORY tables: 0B (Tables: 353)
        [!!] Total fragmented tables: 5492

        -------- Security Recommendations -------------------------------------------
        [!!] User '@host5.server1.autopartsnetwork.com' has no password set.
        [!!] User '@localhost' has no password set.
        [!!] User 'root@%' has no password set.

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B)
        [--] Reads / Writes: 95% / 5%
        [--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads)
        [!!] Maximum possible memory usage: 220.0G (174% of installed RAM)
        [OK] Slow queries: 0% (6K/5M)
        [OK] Highest usage of available connections: 5% (61/1024)
        [OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G
        [OK] Key buffer hit rate: 100.0% (102M cached / 45K reads)
        [OK] Query cache efficiency: 66.9% (3M cached / 5M selects)
        [!!] Query cache prunes per day: 3486361
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts)
        [!!] Joins performed without indexes: 1328
        [OK] Temporary tables created on disk: 11% (126K on disk / 1M total)
        [OK] Thread cache hit rate: 99% (61 created / 42K connections)
        [!!] Table cache hit rate: 19% (9K open / 49K opened)
        [OK] Open file limit used: 2% (712/25K)
        [OK] Table locks acquired immediately: 100% (5M immediate / 5M locks)
        [!!] InnoDB buffer pool / data size: 32.0G/213.4G
        [OK] InnoDB log waits: 0

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce your overall MySQL memory footprint for system stability
            Enable the slow query log to troubleshoot bad queries
            Increasing the query_cache size over 128M may reduce performance
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
            Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
        Variables to adjust:
            *** MySQL's maximum memory usage is dangerously high ***
            *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 512M) [see warning above]
            join_buffer_size (> 128.0M, or always use indexes with joins)
            table_cache (> 12288)
            innodb_buffer_pool_size (>= 213G)

    My my.cnf configuration is as follows...

        [client]
        port = 3306

        [mysqld_safe]
        nice = 0

        [mysqld]
        tmpdir = /var/lib/mysql/tmp
        user = mysql
        port = 3306
        skip-external-locking
        character-set-server = utf8
        collation-server = utf8_general_ci
        event_scheduler = 0
        key_buffer = 512M
        max_allowed_packet = 64M
        thread_stack = 512K
        thread_cache_size = 512
        sort_buffer_size = 24M
        read_buffer_size = 8M
        read_rnd_buffer_size = 24M
        # for some nightly processes client sessions set the join buffer to 8 GB
        join_buffer_size = 128M
        auto-increment-increment = 1
        auto-increment-offset = 1
        myisam-recover = BACKUP
        max_connections = 1024
        # max connect errors artificially high to support behaviors of NetScaler monitors
        max_connect_errors = 999999
        concurrent_insert = 2
        connect_timeout = 5
        wait_timeout = 180
        net_read_timeout = 120
        net_write_timeout = 120
        back_log = 128
        # this table_open_cache might be too low because of MySQL bugs #16244691 and #65384
        table_open_cache = 12288
        tmp_table_size = 512M
        max_heap_table_size = 512M
        bulk_insert_buffer_size = 512M
        open-files-limit = 8192
        open-files = 1024
        query_cache_type = 1
        # large query limit supports SOAP and REST API integrations
        query_cache_limit = 4M
        # larger than 512 MB query cache size is problematic; this is typically ~60% full
        query_cache_size = 512M
        # set to true on read slaves
        read_only = false
        slow_query_log_file = /var/log/mysql/slow.log
        slow_query_log = 0
        long_query_time = 0.2
        expire_logs_days = 10
        max_binlog_size = 1024M
        binlog_cache_size = 32K
        sync_binlog = 0
        # SSD RAID10 technically has a write capacity of 10000 IOPS
        innodb_io_capacity = 400
        innodb_file_per_table
        innodb_table_locks = true
        innodb_lock_wait_timeout = 30
        # These servers have 80 CPU threads; match 1:1
        innodb_thread_concurrency = 48
        innodb_commit_concurrency = 2
        innodb_support_xa = true
        innodb_buffer_pool_size = 32G
        innodb_file_per_table
        innodb_flush_log_at_trx_commit = 1
        innodb_log_buffer_size = 2G
        skip-federated

        [mysqldump]
        quick
        quote-names
        single-transaction
        max_allowed_packet = 64M

    I have a monster of a server here to power our site because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!
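
    The tuner's first recommendation, defragmenting the 5,492 flagged tables, can be scripted rather than done by hand; the tuner's "fragmented" heuristic corresponds to tables with reclaimable space (data_free > 0). A sketch of one way to do it, with the pymysql driver and the credentials being assumptions, not from the question; on a 213 GB InnoDB dataset this rebuilds tables one at a time, so run it off-peak:

        import pymysql  # assumption: pip install pymysql

        conn = pymysql.connect(host="localhost", user="root", password="...",
                               autocommit=True)

        with conn.cursor() as cur:
            cur.execute("""
                SELECT table_schema, table_name
                FROM information_schema.TABLES
                WHERE data_free > 0
                  AND table_schema NOT IN
                      ('mysql', 'information_schema', 'performance_schema')
            """)
            for schema, table in cur.fetchall():
                # For InnoDB, OPTIMIZE TABLE maps to a rebuild + analyze;
                # it copies the table, so it is an off-peak operation.
                cur.execute(f"OPTIMIZE TABLE `{schema}`.`{table}`")
                print("optimized", schema, table)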

  • a friendly elaboration on how iceweasel misses, as a recent request discovered in a posted tutorial

    - by v3x3n
    I would like to specify what you asked about regarding where Iceweasel misses, in hopes that you will consider the answer valid, given that a package should run efficiently for a novice user, especially when it forms a first impression of new software, a browser, or a package they are given by default.

    First let me say that I am very grateful for all Debian has done and continues to do for its user base, and I am not complaining, only hoping that mentioning this will eventually help improve the disappointing experience Iceweasel gave me in multiple ways right from the start. I am sure tremendous amounts of hard work went into making it work as well as it does, and tremendous effort continues to go into it, like the rest of the package and distros such as this... Having said that, I aim to keep my examples as simple as possible while explaining them in enough detail not to leave out important information. (Thanks for understanding if I sound like a novice to Debian; I am, but I continue to learn and advance by trial and error each day, and of course with the wonderful help of all the great minds available here and there.)

    Well, my bone to pick with Iceweasel (unfortunately, as I disdain being a complainer when so much has been done for us end users) is mainly three problems noticed immediately during completely normal browsing conduct for an average user. First, as soon as I started a task with Iceweasel (heading directly to Google Images, searching for a few web-sized images, then saving a few, no more than 800 to 900 pixels in dimensions, a typical procedure I would imagine), I experienced a horrendous recurring freeze and lag that more than once almost made me kill the process, and it compounded so that a few attempts later even a single image save locked up the entire browser. I ultimately gave up on this simple procedure with a bit of aggravation, unsure whether I had done something wrong beforehand or during installation.

    Secondly, visiting video player sites (in this case YouTube) will not play any video feed whatsoever, and again I experienced a slight freeze that eventually dissipated as long as I didn't press play or load another video... Very annoying, and possibly related to my first issue (which could come down to my own lack of setup knowledge, but that troubles me if so).

    Lastly, as another user has already stated (which is what led you to ask what exactly the problem was in the first place), the final straw that broke the weasel's back, leading me here to find an updated version of Firefox, is that Iceweasel (apparently a build of Firefox 17.x.x) leaves the user out of luck when installing their favorite or most useful add-ons and plugins. As the other user clearly stated, there are almost zero options and barely any luck finding compatible plugins for such an outdated version... I can only imagine that if, as a novice or standard Linux user and beginner to Debian (which I am all for, btw!), I am discovering major usability issues that hinder normal use right off the bat, there is probably a bundle or more of security vulnerabilities that would unravel if I gave Mr Hommey's package any further attention or continued use...
    This is a deal breaker on any account, unless all three of the issues stated are due to a very n00berish setting or the lack thereof (and if that is the case, it was not properly explained for beginners anywhere in the installation tutorials I followed, such as the LVM installers, which I felt lacked a clear explanation of the why and how rather than a fix for a potentially broken out-of-the-box installation)... I really think that is irrelevant in any case, however, as there is much step-by-step support for that kind of thing; just saying... it is very challenging alongside a job with deadlines, plus the need to learn a better and more customizable system that was not as popular when I became a dev via Windows (OK, OK, I know that's a huge mistake, and now I am way off the original topic, but anyway)...

    Thanks very much for taking the time to read this, and I hope it honestly gives you a few good and valid reasons why users want and seek a way around Iceweasel altogether... Finally, if I am missing something vital that renders all I've stated invalid or useless, please feel free to shove that elephant in the room my direction! Thanks for all you do, as I am sure it is very useful and dedicated work, you being a Debian maintainer and super user! Much love for intelligence, freedom, and sharing the secrets of the web! May it remain unregulated and a good place to grow and learn for generations to come! Stay true! Take care now!

    -v3x (v3x @ gmx .com)

  • How to export SQL Server data from a corrupted database (with disk write error)

    - by damitamit
    IT realised there was a disk write error on our production SQL Server 2005, which was causing the backups to fail. By the time they realised this, the nightly backup was old, so they were not able to simply restore the backup on another server. The database is still running and being used constantly. However, DBCC CHECKDB fails. The SQL Server backup task fails, Copy Database fails, and the Export Data wizard fails. It seems, though, that all the data can be read from the tables (i.e. using bcp, etc.).

    Another observation I have made is that the transaction log is nearly double the size of the database. (Does that mean all the changes aren't being written to the MDF?)

    What would be the best plan of attack to get the database to a state where backups are working and the data is safe?

    1. Take the database offline and use the MDF/LDF to somehow create the database on another SQL Server?
    2. Export the data from the database using bcp. Create the database on another SQL Server (using the Generate Scripts function on the corrupt DB to create the schema on the new DB) and use bcp again to import the data.
    3. Some other option that is the right course of action in this situation?

    The IT manager says the data is safe because, if the server fails, the data can be restored from the MDF/LDF. I'm not sure, so I insisted that we start exporting the data each night as a failsafe (using bcp, for example). IT are also having issues on the hardware side of things, as supposedly the disk error is on a virtualized disk and can't be rebuilt like a normal RAID array (or something like that). Please excuse my use of incorrect terminology and any incorrect assumptions about how SQL Server operates; I'm the application developer and have been called in to help (as it seems IT know less about SQL Server than I do). Many thanks, Amit

    Results of DBCC CHECKDB:

        Msg 1823, Level 16, State 2, Line 1
        A database snapshot cannot be created because it failed to start.
        Msg 7928, Level 16, State 1, Line 1
        The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline.
        Msg 5030, Level 16, State 12, Line 1
        The database could not be exclusively locked to perform the operation.
        Msg 7926, Level 16, State 1, Line 1
        Check statement aborted. The database could not be checked as a database snapshot could not be created and the database or table could not be locked. See Books Online for details of when this behavior is expected and what workarounds exist. Also see previous errors for more details.
        Msg 823, Level 24, State 3, Line 1
        The operating system returned error 1(error not found) to SQL Server during a write at offset 0x00000674706000 in file 'G:\AX40_Dynamics_Live.mdf'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
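
    The nightly bcp failsafe mentioned above can be automated by enumerating the tables and shelling out to bcp per table; bcp reads rows through the normal query path, which still works here even though the backup machinery fails. A sketch under stated assumptions: the pyodbc driver, the server name PRODSQL, and the export path are hypothetical (only the database name comes from the error message above):

        import subprocess
        from datetime import date

        import pyodbc  # assumption: pip install pyodbc; connection string is hypothetical

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=PRODSQL;"
            "DATABASE=AX40_Dynamics_Live;Trusted_Connection=yes"
        )
        tables = [row.TABLE_NAME for row in conn.cursor().execute(
            "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES "
            "WHERE TABLE_TYPE = 'BASE TABLE'"
        )]

        stamp = date.today().isoformat()
        for table in tables:
            out = rf"D:\failsafe\{stamp}_{table}.bcp"  # hypothetical export location
            # -n keeps native types, -T uses Windows auth, -S names the server.
            subprocess.run(
                ["bcp", f"AX40_Dynamics_Live.dbo.{table}", "out", out,
                 "-S", "PRODSQL", "-T", "-n"],
                check=True,
            )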

  • What needs updating when moving a bootable Windows 7 (or Vista) partition?

    - by SuperTempel
    When I move a bootable NTFS partition with Windows on it to a different block offset, what needs updating to make it bootable again?

    In particular, here's what I tried. I have a disk with several partitions, one of which is the NTFS partition with Windows on it, and the disk uses the plain old MBR block 0 for the partition layout (no more than 4 partitions). Now I format and partition a new, larger disk. There I make room for the NTFS partition, copy the contents from the old disk's NTFS Windows partition into it, and mark the partition "active". However, when I try to boot from this disk, I get a "read error" message immediately and booting stops; the exact text is:

        A disk read error occurred
        Press Ctrl+Alt+Del to restart

    I verified that both disks have the same boot sector code in block 0. It seems to me that something else needs updating. I guess that somewhere there's an absolute block reference that I need to update, probably pointing to the next-level loader or to the NT kernel.

    Update: I found this article going into quite some depth on what I want to know. However, it says to modify boot.ini, but I have Windows 7 installed here, where such things appear to have changed: there is no boot.ini, but a folder called System Volume Information with GUIDs and other data in it that sounds related to my problem. Going to keep digging...

    Update 2: Thanks to the terrible-looking but very informative website by starman, I was able to figure out the first step: the NTFS boot sector has a field for "hidden" sectors. This field has to contain the sector number of the boot sector itself. That solves the "read error" message. Now, however, I get a "BOOTMGR is missing" error instead. It looks like there's another place where a block number has to be adjusted, but I can't find anything in the code listing about this. I do find a lot of help sites suggesting Windows tools for fixing this "BOOTMGR is missing" problem, but none seem to know what goes on behind the scenes, rather like suggesting a Windows re-install whenever there's a little problem with it. At least those fixes seem to work, mostly involving the bcdedit and bootrec tools. Now, who knows what they do, especially the latter, with regard to a moved partition?

    Update 3: After lots of trial-and-error attempts, I believe the solution lies in the BCD-Template registry file, usually residing inside \Windows\System32\config. If I get this updated using the "bcdboot" command, Windows starts up from it. I am now in the middle of figuring out what information this registry contains that is relevant to the above question. Any pointers to the contents of this registry are welcome.

    Update 4: It turns out that while the BCD-Template file gets rewritten and has different binary contents than its predecessor, the values inside do not change. So it must be something else that bcdboot writes. I had already checked whether it changes the first 32 boot blocks of the partition, but they appear to remain unchanged. The partition map doesn't get changed either. So what is it that bcdboot modifies besides the BCD registry? Any tips on how I can trace that? Are there low-level tools that show me what files a program writes to?

    Update 5: The answer seems to be: C:\Boot\BCD is also changed, and that appears to be the key file for the boot manager's process. I'll investigate this later...

    Update 6: It seems to be an important detail that I originally had two partitions created when I installed Windows 7: a small partition of 204800 sectors, which appears to be a bootstrap partition, followed by the actual, large partition containing the Windows system (drive C:). When I tried to transfer this installation to a new, larger disk, I kept the same two partitions intact on the new drive, although they ended up at a different offset. This alone led to the "BOOTMGR is missing" message. Since then, I've used bcdboot.exe only on the Windows partition, which added the \Boot\BCD file on that partition. That file (and folder) originally existed only on the smaller partition. Hence, this problem may be more complicated in my case, as one partition (the bootstrap) refers to another partition (the one containing the OS), whereas other people may only have to deal with one partition containing both, where the solution may be simpler.

    Update 7: Found one more detail: the \Boot\BCD file records the MBR's serial number. If that number doesn't match, the system won't boot. Next I'll test whether there's also an absolute block reference stored in there.
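
    The fix from Update 2 can be scripted: the NTFS boot sector's "hidden sectors" field is a single little-endian 32-bit value at byte offset 0x1C, and it must hold the partition's new absolute start sector. A sketch added for illustration, assuming a raw image of the new disk and a hypothetical start LBA; run it only against a copy:

        import struct

        IMAGE = "disk.img"        # hypothetical raw image of the new disk
        PART_START_LBA = 206848   # new absolute start sector of the NTFS partition (assumption)
        SECTOR = 512

        with open(IMAGE, "r+b") as f:
            f.seek(PART_START_LBA * SECTOR)
            boot = bytearray(f.read(SECTOR))
            # Sanity check: the NTFS OEM ID sits at bytes 3..10 of the boot sector.
            assert boot[3:11] == b"NTFS    ", "not an NTFS boot sector"
            old = struct.unpack_from("<I", boot, 0x1C)[0]
            struct.pack_into("<I", boot, 0x1C, PART_START_LBA)  # hidden sectors = partition start
            f.seek(PART_START_LBA * SECTOR)
            f.write(boot)
            print(f"hidden sectors: {old} -> {PART_START_LBA}")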

  • iCalendar not readable by Google Calendar

    - by Sagar
    Operating system: WinXP. Program and version used to access Google Calendar: Firefox 3.5.

    I'm developing a script (based on an existing vCal ASP.NET class I found online) to generate an .ics file. This file works perfectly when importing into Outlook 2003. When I try to import into Google Calendar, I get the following error: "Failed to import events: Unable to process your iCal/CSV file." I don't know too much about the vCal format or syntax, but everything looks fine to me. I'll post the sample test calendar .ics below:

        BEGIN:VCALENDAR
        PRODID:-//jpalm.se//iCalendar example with ASP.NET MVC//EN
        VERSION:2.0
        CALSCALE:GREGORIAN
        METHOD:PUBLISH
        X-MS-OLK-FORCEINSPECTOROPEN:TRUE
        BEGIN:VEVENT
        DTSTART:20100304T000000Z
        DTEND:20100304T000000Z
        TRANSP:OPAQUE
        SEQUENCE:0
        UID:7c9d6dd7-41f2-4171-8ae4-35820974efa4
        DESCRIPTION:uba:Project20100321:sagar .
        SUMMARY:First Milestone
        END:VEVENT
        X-MS-OLK-FORCEINSPECTOROPEN:TRUE
        BEGIN:VEVENT
        DTSTART:20100330T230000Z
        DTEND:20100330T230000Z
        TRANSP:OPAQUE
        SEQUENCE:0
        UID:8a982519-b99b-429a-8dad-c0f95c50d0e6
        DESCRIPTION:uba:Project20100321:imanage2010 pm
        SUMMARY:upcoming milestones
        END:VEVENT
        X-MS-OLK-FORCEINSPECTOROPEN:TRUE
        BEGIN:VEVENT
        DTSTART:20100329T230000Z
        DTEND:20100329T230000Z
        TRANSP:OPAQUE
        SEQUENCE:0
        UID:588750a1-6f10-4b5d-8a51-3f3818024726
        DESCRIPTION:uba:Project20100321:sagar .
        SUMMARY:test
        END:VEVENT
        X-MS-OLK-FORCEINSPECTOROPEN:TRUE
        BEGIN:VEVENT
        DTSTART:20100407T230000Z
        DTEND:20100407T230000Z
        TRANSP:OPAQUE
        SEQUENCE:0
        UID:36eaa726-a0a0-40a1-ba7c-09857f8ed006
        DESCRIPTION:uba:Project20100321:imanage2010 pm
        SUMMARY:Rad apps devs
        END:VEVENT
        X-MS-OLK-FORCEINSPECTOROPEN:TRUE
        BEGIN:VEVENT
        DTSTART:20100408T125632Z
        DTEND:20100408T125632Z
        TRANSP:OPAQUE
        SEQUENCE:0
        UID:8521ad53-916a-43cc-8eeb-42c1b3d670d3
        DESCRIPTION:uba:Project20100321:imanage2010 pm
        SUMMARY:this is a test ms
        END:VEVENT
        X-MS-OLK-FORCEINSPECTOROPEN:TRUE
        BEGIN:VEVENT
        DTSTART:20100415T125643Z
        DTEND:20100415T125643Z
        TRANSP:OPAQUE
        SEQUENCE:0
        UID:e4b295d8-2271-4393-9899-3e9c858f4e8c
        DESCRIPTION:uba:Project20100321:imanage2010 pm
        SUMMARY:Test msssss
        END:VEVENT
        X-MS-OLK-FORCEINSPECTOROPEN:TRUE
        BEGIN:VEVENT
        DTSTART:20100430T055201Z
        DTEND:20100430T055201Z
        TRANSP:OPAQUE
        SEQUENCE:0
        UID:1e464698-1064-4cb2-8166-2a843b63ca5a
        DESCRIPTION:uba:Project20100321:imanage2010 pm
        SUMMARY:this is a new milestones for testing on 30th april
        END:VEVENT
        X-MS-OLK-FORCEINSPECTOROPEN:TRUE
        BEGIN:VEVENT
        DTSTART:20100731T093917Z
        DTEND:20100731T093917Z
        TRANSP:OPAQUE
        SEQUENCE:0
        UID:5262ef58-73bc-4d66-a207-4e884e249629
        DESCRIPTION:uba:Project20100321:imanage2010 pm
        SUMMARY:555555555555555555
        END:VEVENT
        X-MS-OLK-FORCEINSPECTOROPEN:TRUE
        BEGIN:VEVENT
        DTSTART:20100328T230000Z
        DTEND:20100328T230000Z
        TRANSP:OPAQUE
        SEQUENCE:0
        UID:f654262d-714e-41d9-9690-005bb467f8aa
        DESCRIPTION:uba:Untitled project:imanage2010 pm
        SUMMARY:first milestone
        END:VEVENT
        X-MS-OLK-FORCEINSPECTOROPEN:TRUE
        BEGIN:VEVENT
        DTSTART:20100401T095537Z
        DTEND:20100401T095537Z
        TRANSP:OPAQUE
        SEQUENCE:0
        UID:3f4a6c16-f460-457d-a281-b4c010958796
        DESCRIPTION:uba:ProjectIcal:imanage2010 pm
        SUMMARY:new ms ical
        END:VEVENT
        X-MS-OLK-FORCEINSPECTOROPEN:TRUE
        BEGIN:VEVENT
        DTSTART:20100331T230000Z
        DTEND:20100331T230000Z
        TRANSP:OPAQUE
        SEQUENCE:0
        UID:e5bf28d1-3559-48e9-90f8-2b5233489a13
        DESCRIPTION:uba:ProjectIcal:imanage2010 pm
        SUMMARY:new ms 2 ical
        END:VEVENT
        END:VCALENDAR

    And the source for generating the above, which is nothing but the MVC view:

        <%@ Import Namespace="iManageProjectPM.Controllers" %>
        <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage" %>
        BEGIN:VCALENDAR
        VERSION:2.0
        <% if (Model.Events.Count > 1) { %>
        CALSCALE:GREGORIAN
        METHOD:PUBLISH
        <% } %>
        X-MS-OLK-FORCEINSPECTOROPEN:TRUE
        <% foreach (var evnt in Model.Events) { %>
        BEGIN:VEVENT
        DTSTART<%= Model.GetTimeString(evnt.StartTime) %>
        DTEND<%= Model.GetTimeString(evnt.EndTime) %>
        TRANSP:OPAQUE
        SEQUENCE:0
        UID:<%= evnt.UID %>
        DESCRIPTION:<%= evnt.Desc %>
        SUMMARY:<%= evnt.Title %>
        END:VEVENT
        <% } %>
        END:VCALENDAR
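
    One thing that stands out against RFC 5545 is that every VEVENT above is missing the required DTSTAMP property, which may be why Outlook accepts the file while Google's stricter importer rejects it. Generated files can be checked mechanically by parsing them with a validating library; a minimal sketch, assuming the third-party icalendar package and the file name, neither of which comes from the question:

        from icalendar import Calendar  # assumption: pip install icalendar

        REQUIRED = ("DTSTAMP", "UID", "DTSTART")  # RFC 5545 musts for a VEVENT

        with open("test.ics", "rb") as f:
            cal = Calendar.from_ical(f.read())    # raises on gross syntax errors

        for event in cal.walk("VEVENT"):
            missing = [p for p in REQUIRED if p not in event]
            if missing:
                print(event.get("SUMMARY"), "is missing", ", ".join(missing))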

  • Handwritten linked list is segfaulting and I don't understand why

    - by Born2Smile
    Hi, I was working on a bit of fun, making an interface to run gnuplot from within C++, and for some reason my linked list implementation fails. The code below fails on the line plots->append(&plot). Stepping through the code I discovered that for some reason the destructor ~John() is called immediately after the constructor John(), and I cannot seem to figure out why. The code included below is a stripped-down version operating only on Plot*. Originally I made the linked list as a template class, and it worked fine as ll<int> and ll<char*>, but for some reason it fails as ll<Plot*>. Could you please help me figure out why it fails? And perhaps help me understand how to make it work? In advance: thanks a heap! //B2S

        #include <string.h>

        class Plot {
            char title[80];
        public:
            Plot() { }
        };

        class Link {
            Plot* element;
            Link* next;
            Link* prev;
            friend class ll;
        };

        class ll {
            Link* head;
            Link* tail;
        public:
            ll() {
                head = tail = new Link();
                head->prev = tail->prev = head->next = tail->next = head;
            }
            ~ll() {
                while (head != tail) {
                    tail = tail->prev;
                    delete tail->next;
                }
                delete head;
            }
            void append(Plot* element) {
                tail->element = element;
                tail->next = new Link();
                tail->next->prev = tail;
                tail->next = tail;
            }
        };

        class John {
            ll* plots;
        public:
            John() {
                plots = new ll();
            }
            ~John() {
                delete plots;
            }
            John(Plot* plot) {
                John();
                plots->append(plot);
            }
        };

        int main() {
            Plot p;
            John k(&p);
        }

  • Succinct introduction to C++/CLI for C#/Haskell/F#/JS/C++/... programmer

    - by Henrik
    Hello everybody, I'm trying to write integrations with the operating system and with things like Active Directory and Ocropus. I know a bunch of programming languages, including those listed in the title. I'm trying to learn exactly how C++/CLI works, but can't find succinct, exact, and accurate descriptions online from the searching that I have done. So I ask here: could you tell me the pitfalls and features of C++/CLI? Assume I know all of C# and start from there. I'm not an expert in C++, so some of my questions' answers might be "just like C++", but treat me as coming from C#. I would like to know things like:

    - Converting C++ pointers to CLI pointers; any differences in passing by value / doubly indirect pointers / CLI pointers from C#/C++, and what is "recommended".
    - How do gcnew, __gc, and __nogc work with polymorphism, structs, inner classes, and interfaces?
    - The "fixed" keyword; does that exist?
    - Is compiling DLLs loaded into the kernel possible with C++/CLI? Loaded as device drivers? Invoked by the kernel? What does it mean to load something into the kernel exactly, and how do I know whether my code is?
    - L"my string" versus "my string"? wchar_t? How many types of chars are there? Are we safe in treating chars as uint32s, or what should one treat them as to guarantee language indifference in code?
    - Finalizers (~ClassName() {}) are discouraged in C# because there are no guarantees they will run deterministically, but since in C++ I have to use "delete" or use copy constructors so as to stack-allocate memory, what are the recommendations for C#/C++ interactions?
    - What are the pitfalls when using reflection in C++/CLI?
    - How well does C++/CLI work with the IDisposable pattern and with SafeHandle and SafeHandleZeroOrMinusOneIsInvalid?
    - I've read briefly about asynchronous exceptions when doing DMA operations; what are these?
    - Are there limitations you impose upon yourself when using C++ with CLI integration rather than just doing plain C++?
    - Attributes in C++ similar to attributes in C#?
    - Can I use the full metaprogramming patterns available in C++ through templates now and still have it compile like ordinary C++?
    - Have you tried writing C++/CLI with Boost? What are the optimal ways of interfacing the Boost library with C++/CLI; can you give me an example of passing a lambda expression to an iterator/foldr function?
    - What is the preferred way of exception handling? Can C++/CLI catch managed exceptions now?
    - How well does dynamic IL generation work with C++/CLI? Does it run on Mono?
    - Any other things I ought to know about?

  • zLib on iPhone stops at first BLOCK

    - by cedric
    I am trying to call the iPhone zlib to decompress the zlib stream from our HTTP-based server, but the code always stops after finishing the first zlib block. Obviously, the iPhone SDK is using the standard open zlib. My suspicion is that the parameter for inflateInit2 is not appropriate here. I spent lots of time reading the zlib manual, but it isn't that helpful. Here are the details; your help is appreciated.

    (1) The HTTP request:

        NSURL *url = [NSURL URLWithString:@"http://192.168.0.98:82/WIC?query=getcontacts&PIN=12345678&compression=Y"];

    (2) The data I get from the server is something like this (if decompressed). The stream was compressed by the C# zlib class DeflateStream:

        $REC_TYPE=SYS
        Status=OK
        Message=OK
        SetID=
        IsLast=Y
        StartIndex=0
        LastIndex=6
        EOR
        ......
        $REC_TYPE=CONTACTSDISTLIST
        ID=2
        Name=CTU+L%2EA%2E
        OnCallEnabled=Y
        OnCallMinUsers=1
        OnCallEditRight=
        OnCallEditDLRight=D
        Fields=
        CL=
        OnCallStatus=
        EOR

    (3) However, I only get the first block. The code for decompression on iPhone (copied from a code piece from somewhere here) is as follows; the loop between lines 23~38 always breaks on the second execution:

        + (NSData *) uncompress: (NSData*) data
        {
        1     if ([data length] == 0) return nil;
        2     NSInteger length = [data length];
        3     unsigned full_length = length;
        4     unsigned half_length = length / 2;
        5     NSMutableData *decompressed = [NSMutableData dataWithLength: 5*full_length + half_length];
        6     BOOL done = NO;
        7     int status;
        8     z_stream strm;
        9     length = length - 4;
        10    void* bytes = malloc(length);
        11    NSRange range;
        12    range.location = 4;
        13    range.length = length;
        14    [data getBytes: bytes range: range];
        15    strm.next_in = bytes;
        16    strm.avail_in = length;
        17    strm.total_out = 0;
        18    strm.zalloc = Z_NULL;
        19    strm.zfree = Z_NULL;
        20    strm.data_type = Z_BINARY;
        21    // if (inflateInit(&strm) != Z_OK) return nil;
        22    if (inflateInit2(&strm, (-15)) != Z_OK) return nil; // It won't work if I change -15 to positive numbers.
        23    while (!done)
        24    {
        25        // Make sure we have enough room and reset the lengths.
        26        if (strm.total_out >= [decompressed length])
        27            [decompressed increaseLengthBy: half_length];
        28        strm.next_out = [decompressed mutableBytes] + strm.total_out;
        29        strm.avail_out = [decompressed length] - strm.total_out;
        30
        31        // Inflate another chunk.
        32        status = inflate(&strm, Z_SYNC_FLUSH); // Z_SYNC_FLUSH --> Z_BLOCK won't work either
        33        if (status == Z_STREAM_END) {
        34
        35            done = YES;
        36        }
        37        else if (status != Z_OK) break;
        38    }
        39    if (inflateEnd(&strm) != Z_OK) return nil;
        40    // Set real length.
        41    if (done)
        42    {
        43        [decompressed setLength: strm.total_out];
        44        return [NSData dataWithData: decompressed];
        45    }
        46    else return nil;
        47 }
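
    The suspicion about inflateInit2's window-bits parameter is easy to test outside the app: negative window bits mean raw deflate (no zlib header, no Adler-32 trailer), which is what C#'s DeflateStream emits, while positive values expect a zlib-wrapped stream. A quick demonstration of the two modes, sketched here in Python since its zlib module wraps the same C library the SDK uses:

        import zlib

        payload = b"$REC_TYPE=SYS\nStatus=OK\n" * 50   # stand-in for the server response

        wrapped = zlib.compress(payload)               # zlib header + deflate data + Adler-32 trailer
        raw = wrapped[2:-4]                            # strip to raw deflate, like DeflateStream output

        # wbits=-15 (the value passed to inflateInit2 in the question): raw deflate only.
        assert zlib.decompressobj(-15).decompress(raw) == payload

        # wbits=15: zlib-wrapped streams only; feeding it raw deflate fails immediately.
        assert zlib.decompressobj(15).decompress(wrapped) == payload
        try:
            zlib.decompressobj(15).decompress(raw)
        except zlib.error as e:
            print("raw stream with wbits=15:", e)      # "incorrect header check"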

  • Stand-alone Java code formatter/beautifier/pretty printer?

    - by Greg Mattes
    I'm interested in learning about the available choices of high-quality, stand-alone source code formatters for Java. The formatter must be stand-alone, that is, it must support a "batch" mode that is decoupled from any particular development environment. Ideally it should be independent of any particular operating system as well. So, a built-in formatter for the IDE du jour is of little interest here (unless that IDE supports batch-mode formatter invocation, perhaps from the command line). A formatter written in closed-source C/C++ that only runs on, say, Windows is not ideal, but is somewhat interesting.

    To be clear, a "formatter" (or "beautifier") is not the same as a "style checker." A formatter accepts source code as input, applies styling rules, and produces styled source code that is semantically equivalent to the original source code. A style checker also applies styling rules, but it simply reports rule violations without producing modified source code as output. So the picture looks like this:

        Formatter (produces modified source code that conforms to styling rules):
            Read Source Code → Apply Styling Rules → Write Styled Source Code

        Style Checker (does not produce modified source code):
            Read Source Code → Apply Styling Rules → Write Rule Violations

    Further clarifications: solutions must be highly configurable. I want to be able to specify my own style, not simply select from a canned list. Also, I'm not looking for a general-purpose pretty-printer written in Java that can pretty-print many things; I want to style Java code. I'm also not necessarily interested in a grand unified formatter for many languages. I suppose it might be nice for a solution to have support for languages other than Java, but that is not a requirement. Furthermore, tools that only perform code highlighting are right out. I'm also not interested in a web service; I want a tool that I can run locally. Finally, solutions need not be restricted to open source, public domain, shareware, free software, commercial, or anything else. All forms of licensing are acceptable.

  • Backbone.js Model change events in nested collections not firing as expected

    - by Pallavi Kaushik
    I'm trying to use backbone.js in my first "real" application and I need some help debugging why certain model change events are not firing as I would expect.

    I have a web service at /employees/{username}/tasks which returns a JSON array of task objects, with each task object nesting a JSON array of subtask objects. For example:

        [{
            "id":45002,
            "name":"Open Dining Room",
            "subtasks":[
                {"id":1,"status":"YELLOW","name":"Clean all tables"},
                {"id":2,"status":"RED","name":"Clean main floor"},
                {"id":3,"status":"RED","name":"Stock condiments"},
                {"id":4,"status":"YELLOW","name":"Check / replenish trays"}
            ]
        },{
            "id":47003,
            "name":"Open Registers",
            "subtasks":[
                {"id":1,"status":"YELLOW","name":"Turn on all terminals"},
                {"id":2,"status":"YELLOW","name":"Balance out cash trays"},
                {"id":3,"status":"YELLOW","name":"Check in promo codes"},
                {"id":4,"status":"YELLOW","name":"Check register promo placards"}
            ]
        }]

    Another web service allows me to change the status of a specific subtask in a specific task, and looks like this:

        /tasks/45002/subtasks/1/status/red

    [aside - I intend to change this to an HTTP POST-based service, but the current implementation is easier for debugging]

    I have the following classes in my JS app. Subtask Model and Subtask Collection:

        var Subtask = Backbone.Model.extend({});
        var SubtaskCollection = Backbone.Collection.extend({
            model: Subtask
        });

    Task Model with a nested instance of a Subtask Collection:

        var Task = Backbone.Model.extend({
            initialize: function() {
                // each Task has a reference to a collection of Subtasks
                this.subtasks = new SubtaskCollection(this.get("subtasks"));
                // status of each Task is based on the status of its Subtasks
                this.update_status();
            },
            ...
        });
        var TaskCollection = Backbone.Collection.extend({
            model: Task
        });

    Task View to render the item and listen for change events on the model:

        var TaskView = Backbone.View.extend({
            tagName: "li",
            template: $("#TaskTemplate").template(),
            initialize: function() {
                _.bindAll(this, "on_change", "render");
                this.model.bind("change", this.on_change);
            },
            ...
            on_change: function(e) {
                alert("task model changed!");
            }
        });

    When the app launches, I instantiate a TaskCollection (using the data from the first web service listed above), bind a listener for change events to the TaskCollection, and set up a recurring setTimeout to fetch() the TaskCollection instance:

        ...
        TASKS = new TaskCollection();
        TASKS.url = ".../employees/" + username + "/tasks"
        TASKS.fetch({
            success: function() {
                APP.renderViews();
            }
        });
        TASKS.bind("change", function() {
            alert("collection changed!");
            APP.renderViews();
        });

        // Poll every 5 seconds to keep the models up-to-date.
        setInterval(function() {
            TASKS.fetch();
        }, 5000);
        ...

    Everything renders as expected the first time. But at this point I would expect either (or both) a collection change event or a model change event to get fired if I change a subtask's status using my second web service, but this does not happen.

    Funnily, I did get change events to fire when I added one additional level of nesting, with the web service returning a single object that has the Tasks collection embedded, for example:

        "employee":"pkaushik", "tasks":[{"id":45002,"subtasks":[{"id":1.....

    But this seems kludgey... and I'm afraid I haven't architected my app right. I'll include more code if it helps, but this question is already rather verbose. Thoughts?
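    One detail worth noticing, independent of Backbone's specifics: the nested SubtaskCollection is built once in initialize(), and nothing forwards its events to the owning Task, so a subtask changing can never, by itself, fire the task's "change" event. The sketch below illustrates that forwarding idea in Python with deliberately simplified, hypothetical names; it is a model of the concept, not Backbone code.

        class Subtask:
            def __init__(self, status):
                self.status = status

        class Task:
            """Parent that re-fires a 'change' callback when a child changes."""
            def __init__(self, subtasks, on_change):
                self.subtasks = subtasks
                self.on_change = on_change

            def set_subtask_status(self, i, status):
                if self.subtasks[i].status != status:
                    self.subtasks[i].status = status
                    # Forward the child change to the parent's listener;
                    # without this hop, parent observers never hear about it.
                    self.on_change(self)

        task = Task([Subtask("YELLOW")], on_change=lambda t: print("task model changed!"))
        task.set_subtask_status(0, "RED")   # prints: task model changed!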

  • What is "elegant" code?

    - by Breton
    I see a lot of lip service and talk about the most "elegant" way to do this or that. I think if you spend enough time programming you begin to develop an intuitive feel for what it is we call "elegance". But I'm curious: even if we can look at a bit of code and say instinctively "that's elegant" or "that's messy", I wonder if any of us really understands what that means. Is there a precise definition for this "elegance" we keep referring to? If there is, what is it? By a precise definition, I mean a series of statements which can be used to derive questions about a piece of code, or a program as a whole, and determine objectively, or as objectively as possible, whether that code is "elegant" or not.

    May I assert that perhaps no such definition exists, and it's all just personal preference. In that case, I ask you a slightly different question: is there a better word for "elegance", or a better set of attributes for judging code quality, that is perhaps more objective than merely appealing to individual intuition and taste?

    Perhaps code quality is a matter of taste, and the answer to both of my questions is "no". But I can't help feeling that we could be doing better than just expressing wishy-washy feelings about our code quality. For example, user interface design looks to a broad range of people for all the world like a field of study that ought to be a 100% subjective matter of taste. But this is shockingly and brutally not the case, and there are in fact many objective measures that can be applied to a user interface to determine its quality. A series of tests could be written to give a definitive and repeatable score to user interface quality. (See GOMS, for instance.)

    Now, okay: is elegance simply "code quality", or is it something more? Is it something that can be measured? Or is it a matter of taste? Does our profession have room for taste? Maybe I'm asking the wrong questions altogether. Help me out here.

    Bonus Round

    If there is such a thing as elegance in code, and that concept is useful, do you think that justifies classifying the field of programming as an "Art" with a capital A, or merely a "craft"? Or is it just an engineering field populated by a bunch of wishful-thinking humans? Consider this question in the light of your thoughts about the elegance question. Please note that there is a distinction between code which is considered "art" in itself, and code that was written merely in the service of creating an artful program. When I ask this question, I ask whether the code itself justifies calling programming an art.

    Bounty Note

    I liked the answers to this question so much, I think I'd like to make a photographic essay book from it - released as a free PDF, and published on some kind of on-demand printing service of course, such as "zazz" or "tiggle" or "printley" or something. I'd like some more answers, please!

  • Technology to communicate with someone with expressive aphasia?

    - by rascher
    A family member had a stroke a few years back and now has expressive aphasia. She understands what is said to her and is cognizant of what is going on, but cannot express herself. She is able to respond to yes/no questions (do you want to go shopping? are you looking for your earrings?). She is not, however, able to read (English is not her native language, and she hasn't read Hindi for decades).

    I am the technologist in the family, and I intend to come up with something to help us communicate. The idea is to have some sort of picture book where she can point to what she wants.

    My first question: does some sort of assistive technology for people with expressive aphasia already exist? This could be a hardware or a software device.

    If not, then such software doesn't seem difficult to write. My initial thought is to have an interface with pictures - maybe separated by category (food, shopping) - where she can point to an individual picture to indicate what she needs. We could easily add more items with such software, and we could have an interface where she (or we) could "flip pages". Which suggests that the best solution would use a touch screen rather than a mouse; it would be really difficult to train her to aim a mouse or find keys on a keyboard.

    We're thinking of maybe getting a tablet and writing some basic software. But tablet computers are expensive and fragile - I'm not sure one would be able to withstand spills or being knocked about in a nursing home. So my next question: what kind of tablet-like devices are out there that I can program on? I don't know anything about hardware, but if there is something suitable then we could special-order it. What would be safe and durable for such a project? We could do something on an iPod or cell phone, but I feel like that interface would be too small.

    Finally, does anyone here have experience with this kind of assistive technology? Are there things I might not anticipate when designing such a system?

    edit: I've added a (pretty hefty!) bounty. I'd kinda like to open this question up to any suggestions, comments, and experiences that people might have. This is a pretty real and important project, so while we are working on a solution, any insights would be particularly helpful. Right now the plan is to mount a screen in her room. We'll either teach her to use a trackball or use a touch-screen panel, after seeing what she is able to use with a simple prototype. Then software akin to an old "hypercard" deck:

        ----------------------------------------------------------------
        |    --------------        --------------                      |
        |    |  Clothes   |        |   Food     |    ...               |
        |    --------------        --------------                      |
        |                                                              |
        |   Pic of item 1     Pic of item 2     Pic of item 3          |
        |                                                              |
        |                                                              |
        |   Pic of item 4     Pic of item 5     Pic of item 6          |
        |                                                              |
        |                                                              |
        |   <-Back                                          Next->     |
        ----------------------------------------------------------------
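    For the prototype stage, the "hypercard deck" above is small enough to mock up quickly. Here is a minimal Python/Tkinter sketch of one such card - category buttons across the top and a grid of large touch targets below. Every category and item name here is a hypothetical placeholder, and a real version would use photos (tk.PhotoImage) instead of text so that no reading is required.

        import tkinter as tk

        # Hypothetical placeholder content; real cards would show pictures.
        DECK = {
            "Clothes": ["Sweater", "Shoes", "Scarf", "Coat", "Socks", "Hat"],
            "Food": ["Tea", "Rice", "Fruit", "Soup", "Bread", "Water"],
        }

        def show_category(frame, items):
            for widget in frame.winfo_children():
                widget.destroy()
            for i, item in enumerate(items):
                # Big buttons make far easier targets than a mouse pointer would.
                button = tk.Button(frame, text=item, font=("Helvetica", 24), height=3,
                                   command=lambda it=item: print("She chose:", it))
                button.grid(row=i // 3, column=i % 3, sticky="nsew", padx=8, pady=8)
            for col in range(3):
                frame.grid_columnconfigure(col, weight=1)

        root = tk.Tk()
        tabs = tk.Frame(root)
        tabs.pack(fill="x")
        grid = tk.Frame(root)
        grid.pack(fill="both", expand=True)
        for name, items in DECK.items():
            tk.Button(tabs, text=name, font=("Helvetica", 20),
                      command=lambda it=items: show_category(grid, it)).pack(side="left")
        show_category(grid, DECK["Clothes"])
        root.mainloop()

    A touch-screen panel running something like this needs no keyboard or mouse at all, which matches the trackball/touch-screen plan above.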

  • Installing my sdist from PyPI puts the files in the wrong places

    - by Tartley
    Hey. My problem is that when I upload my Python package to PyPI and then install it from there using pip, my app breaks because it installs my files into completely different locations than when I simply install the exact same package from a local sdist.

    Installing from the local sdist puts files on my system like this:

        /Python27/
            Lib/
                site-packages/
                    gloopy-0.1.alpha-py2.7.egg/  (egg and install info files)
                        data/      (images and shader source)
                        doc/       (html)
                        examples/  (.py scripts that use the library)
                        gloopy/    (source)

    This is much as I'd expect, and works fine (e.g. my source can find my data dir, because they lie next to each other, just like they do in development).

    If I upload the same sdist to PyPI and then install it from there using pip, things look very different:

        /Python27/
            data/      (images and shader source)
            doc/       (html)
            examples/  (.py scripts that use the library)
            Lib/
                site-packages/
                    gloopy-0.1.alpha-py2.7.egg/  (egg and install info files)
                        gloopy/  (source files)

    This doesn't work at all - my app can't find its data files, plus obviously it's a mess, polluting the top-level /Python27 directory with all my junk. What am I doing wrong? How do I make the pip install behave like the local sdist install? Is that even what I should be trying to achieve?

    Details

    I have setuptools installed, and also distribute, and I'm calling distribute_setup.use_setuptools(). Windows XP, Python 2.7. My development directory looks like this:

        /gloopy
            /data      (image files and GLSL shader source read at runtime)
            /doc       (html files)
            /examples  (some scripts to show off the library)
            /gloopy    (the library itself)

    My MANIFEST.in mentions all the files I want to be included in the sdist, including everything in the data, examples and doc directories:

        recursive-include data *.*
        recursive-include examples *.py
        recursive-include doc/html *.html *.css *.js *.png
        include LICENSE.txt
        include TODO.txt

    My setup.py is quite verbose, but I guess the best thing is to include it here, right? It also includes duplicate references to the same data / doc / examples directories as are mentioned in the MANIFEST.in, because I understand this is required in order for these files to be copied from the sdist to the system during install.

        NAME = 'gloopy'
        VERSION = __import__(NAME).VERSION
        RELEASE = __import__(NAME).RELEASE
        SCRIPT = None
        CONSOLE = False

        def main():
            import sys
            from glob import glob
            from pprint import pprint
            from setup_utils import distribute_setup
            from setup_utils.sdist_setup import get_sdist_config
            distribute_setup.use_setuptools()
            from setuptools import setup, find_packages

            description, long_description = read_description()
            config = dict(
                name=NAME,
                version=VERSION,
                description=description,
                long_description=long_description,
                keywords='',
                packages=find_packages(),
                data_files=[
                    ('examples', glob('examples/*.py')),
                    ('data/shaders', glob('data/shaders/*.*')),
                    ('doc', glob('doc/html/*.*')),
                    ('doc/_images', glob('doc/html/_images/*.*')),
                    ('doc/_modules', glob('doc/html/_modules/*.*')),
                    ('doc/_modules/gloopy', glob('doc/html/_modules/gloopy/*.*')),
                    ('doc/_modules/gloopy/geom', glob('doc/html/_modules/gloopy/geom/*.*')),
                    ('doc/_modules/gloopy/move', glob('doc/html/_modules/gloopy/move/*.*')),
                    ('doc/_modules/gloopy/shapes', glob('doc/html/_modules/gloopy/shapes/*.*')),
                    ('doc/_modules/gloopy/util', glob('doc/html/_modules/gloopy/util/*.*')),
                    ('doc/_modules/gloopy/view', glob('doc/html/_modules/gloopy/view/*.*')),
                    ('doc/_static', glob('doc/html/_static/*.*')),
                    ('doc/_api', glob('doc/html/_api/*.*')),
                ],
                # see classifiers http://pypi.python.org/pypi?:action=list_classifiers
                classifiers=[
                    'Development Status :: 1 - Planning',
                    'Intended Audience :: Developers',
                    'License :: OSI Approved :: BSD License',
                    'Operating System :: Microsoft :: Windows',
                    'Programming Language :: Python :: 2.7',
                ],
            )
            config.update(dict(
                author='Jonathan Hartley',
                author_email='[email protected]',
                url='http://bitbucket.org/tartley/gloopy',
                license='New BSD',
            ))
            if '--verbose' in sys.argv:
                pprint(config)
            setup(**config)

        if __name__ == '__main__':
            main()
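    For what it's worth, the usual way to keep data next to the installed code - so that a PyPI/pip install behaves like the local sdist install - is to move the data directories inside the package and declare them with package_data instead of data_files, since data_files paths are interpreted relative to sys.prefix. A minimal sketch of that approach, assuming data/ and examples/ were moved under the gloopy/ package directory (an assumption about the project layout, not something the question states):

        # Sketch: ship data inside the package so it installs next to the code.
        from setuptools import setup, find_packages

        setup(
            name='gloopy',
            version='0.1',
            packages=find_packages(),
            # Paths here are relative to the gloopy/ package directory.
            package_data={
                'gloopy': [
                    'data/shaders/*.*',
                    'examples/*.py',
                ],
            },
        )

    At runtime the package can then locate its files relative to its own __file__, which works identically for a local sdist install and a pip install from PyPI.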

  • How to declare and implement a COM interface on C# that inherits from another COM interface?

    - by Carlos Loth
    I'm trying to understand the correct way to implement COM interfaces from C# code. It is straightforward when the interface doesn't inherit from another base interface, like this one:

        [ComImport, Guid("2047E320-F2A9-11CE-AE65-08002B2E1262"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
        public interface IShellFolderViewCB
        {
            long MessageSFVCB(uint uMsg, int wParam, int lParam);
        }

    However, things start to become weird when I need to implement an interface that inherits from other COM interfaces. For example, here is how I would declare the IPersistFolder2 interface (which inherits from IPersistFolder, which inherits from IPersist) in ordinary C#:

        [ComImport, Guid("0000010c-0000-0000-C000-000000000046"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
        public interface IPersist
        {
            void GetClassID([Out] out Guid classID);
        }

        [ComImport, Guid("000214EA-0000-0000-C000-000000000046"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
        public interface IPersistFolder : IPersist
        {
            void Initialize([In] IntPtr pidl);
        }

        [ComImport, Guid("1AC3D9F0-175C-11d1-95BE-00609797EA4F"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
        public interface IPersistFolder2 : IPersistFolder
        {
            void GetCurFolder([Out] out IntPtr ppidl);
        }

    With these declarations, the operating system is not able to call the methods on my object implementation. When I'm debugging I can see the constructor of my IPersistFolder2 implementation being called many times, but the interface methods I've implemented aren't called. I'm implementing IPersistFolder2 as follows:

        [Guid("A4603CDB-EC86-4E40-80FE-25D5F5FA467D")]
        public class PersistFolder : IPersistFolder2
        {
            void IPersistFolder2.GetClassID(ref Guid classID) { ... }
            void IPersistFolder2.Initialize(IntPtr pidl) { ... }
            void IPersistFolder2.GetCurFolder(out IntPtr ppidl) { ... }
        }

    What seems strange is that when I declare the COM interface imports as follows, it works:

        [ComImport, Guid("0000010c-0000-0000-C000-000000000046"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
        internal interface IPersist
        {
            void GetClassID([Out] out Guid classID);
        }

        [ComImport, Guid("000214EA-0000-0000-C000-000000000046"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
        internal interface IPersistFolder : IPersist
        {
            new void GetClassID([Out] out Guid classID);
            void Initialize([In] IntPtr pidl);
        }

        [ComImport, Guid("1AC3D9F0-175C-11d1-95BE-00609797EA4F"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
        internal interface IPersistFolder2 : IPersistFolder
        {
            new void GetClassID([Out] out Guid classID);
            new void Initialize([In] IntPtr pidl);
            void GetCurFolder([Out] out IntPtr ppidl);
        }

    I don't know why it works when I declare the COM interfaces that way (hiding the base interface methods using new). Maybe it is related to the way IUnknown works. Does anyone know the correct way of implementing COM interfaces in C# that inherit from other COM interfaces, and why?

  • How do I make this scroll layout work?

    - by JuiCe
    I am currently trying to get my UI to have a title bar and a bottom button bar, with a ScrollView in between. I can get bits and pieces of it to work, but once I get a different piece working, the old part goes back to not showing up. Here is a picture of my UI on the left, with what I want it to look like on the right... (sorry for the sloppiness, I edited it in MS Paint :P)

    To sum it up, I want the Version and Type fields to be moved with room for the other TextViews in the XML file, and I want both buttons to appear on the bottom bar.

    EDIT: The buttons on the bottom should be equal in size; I'm not too talented at making boxes in MS Paint.

    EDIT 2: Sorry... here is my XML file:

        <?xml version="1.0" encoding="UTF-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:orientation="vertical"
            android:weightSum="1.0" >

            <LinearLayout
                android:layout_width="fill_parent"
                android:layout_height="wrap_content" >

                <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:gravity="center_horizontal" android:text="SN : " />
                <TextView android:id="@+id/serialNumberView" android:layout_width="wrap_content" android:layout_height="wrap_content" android:gravity="center_horizontal" />
                <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:gravity="center_horizontal" android:text="Ver : " />
                <TextView android:id="@+id/versionView" android:layout_width="wrap_content" android:layout_height="wrap_content" android:gravity="center_horizontal" />
                <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:gravity="center_horizontal" android:text="Type : " />
                <TextView android:id="@+id/typeView" android:layout_width="wrap_content" android:layout_height="wrap_content" android:gravity="center_horizontal" />
            </LinearLayout>

            <ScrollView
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:layout_weight="1" >

                <LinearLayout
                    android:layout_width="wrap_content"
                    android:layout_height="wrap_content"
                    android:orientation="vertical"
                    android:layout_weight="1" >

                    <CheckBox android:id="@+id/floatCheck" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Float" />
                    <CheckBox android:id="@+id/tripCheck" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Trip" />
                    <CheckBox android:id="@+id/closeCheck" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Close" />
                    <CheckBox android:id="@+id/blockedCheck" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Blocked" />
                    <CheckBox android:id="@+id/hardTripCheck" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Hard Trip" />
                    <CheckBox android:id="@+id/phaseAngleCheck" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Phase angle wrong for closing" />
                    <CheckBox android:id="@+id/diffVoltsCheck" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Differential volts too low" />
                    <CheckBox android:id="@+id/networkVoltsCheck" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Network volts too low to close" />
                    <CheckBox android:id="@+id/usingDefaultsCheck" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Using Defaults( Reprogram )" />
                    <CheckBox android:id="@+id/relaxedCloseActiveCheck" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Relaxed Close Active" />
                    <CheckBox android:id="@+id/commBoardDetectedCheck" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Comm Board Detected" />
                    <CheckBox android:id="@+id/antiPumpBlock" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Anti-Pump Block" />
                    <CheckBox android:id="@+id/motorCutoffCheck" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Motor Cutoff Inhibit" />
                    <CheckBox android:id="@+id/phaseRotationCheck" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Phase Rotation Wrong" />
                    <CheckBox android:id="@+id/usingDefaultDNPCheck" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Using Default DNP Profile" />
                </LinearLayout>
            </ScrollView>

            <LinearLayout
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"
                android:orientation="horizontal"
                android:layout_weight="1" >

                <Button android:id="@+id/button3" android:layout_width="fill_parent" android:layout_height="fill_parent" android:text="Back" />
                <Button android:id="@+id/button3" android:layout_width="fill_parent" android:layout_height="fill_parent" android:text="Read" />
            </LinearLayout>
        </LinearLayout>

  • file.createNewFile() creates files with last-modified time before actual creation time

    - by Kaleb Pederson
    I'm using JPoller to detect changes to files in a specific directory, but it's missing files because they end up with a timestamp earlier than their actual creation time. Here's how I test:

        public static void main(String[] files) {
            for (String file : files) {
                File f = new File(file);
                if (f.exists()) {
                    System.err.println(file + " exists");
                    continue;
                }
                try {
                    // find out the current time; I would hope to assume that the
                    // last-modified time on the file will definitely be later than this
                    System.out.println("-----------------------------------------");
                    long time = System.currentTimeMillis();

                    // create the file
                    System.out.println("Creating " + file + " at " + time);
                    f.createNewFile();

                    // let's see what the timestamp actually is (I've only seen it < time)
                    System.out.println(file + " was last modified at: " + f.lastModified());

                    // well, ok, what if I explicitly set it to time?
                    f.setLastModified(time);
                    System.out.println("Updated modified time on " + file + " to " + time
                            + " with actual " + f.lastModified());
                } catch (IOException e) {
                    System.err.println("Unable to create file");
                }
            }
        }

    And here's what I get for output:

        -----------------------------------------
        Creating test.7 at 1272324597956
        test.7 was last modified at: 1272324597000
        Updated modified time on test.7 to 1272324597956 with actual 1272324597000
        -----------------------------------------
        Creating test.8 at 1272324597957
        test.8 was last modified at: 1272324597000
        Updated modified time on test.8 to 1272324597957 with actual 1272324597000
        -----------------------------------------
        Creating test.9 at 1272324597957
        test.9 was last modified at: 1272324597000
        Updated modified time on test.9 to 1272324597957 with actual 1272324597000

    The result is a race condition:

    1. JPoller records the time of the last check as xyz...123
    2. File created at xyz...456
    3. File last-modified timestamp actually reads xyz...000
    4. JPoller looks for new/updated files with a timestamp greater than xyz...123
    5. JPoller ignores the newly added file because xyz...000 is less than xyz...123
    6. I pull my hair out for a while

    I tried digging into the code, but both lastModified() and createNewFile() eventually resolve to native calls, so I'm left with little information. For test.9, I lose 957 milliseconds. What kind of accuracy can I expect? Are my results going to vary by operating system or file system? Suggested workarounds?

    NOTE: I'm currently running Linux with an XFS filesystem. I wrote a quick program in C and the stat system call shows st_mtime as truncate(xyz...000/1000).
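    The same truncation is easy to demonstrate outside Java. Here is a small Python sketch that compares the wall clock, in milliseconds, against the mtime actually stored for a freshly created file; how much is lost depends on the OS and filesystem, as the question suspects.

        import os
        import time

        path = "granularity-test.tmp"
        before_ms = int(time.time() * 1000)

        with open(path, "w"):
            pass  # create the file, like File.createNewFile()

        stored_ms = int(os.stat(path).st_mtime * 1000)
        print("wall clock just before create:", before_ms)
        print("stored last-modified time:    ", stored_ms)
        # On a filesystem that stores one-second timestamps, stored_ms is
        # truncated to a whole second, so it can read *earlier* than before_ms,
        # which is exactly the race described above.
        os.remove(path)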

  • get_or_create generic relations in Django & python debugging in general

    - by rabidpebble
    I ran the code to create the generically related objects from this demo: http://www.djangoproject.com/documentation/models/generic_relations/

    Everything is good initially:

        >>> bacon.tags.create(tag="fatty")
        <TaggedItem: fatty>
        >>> tag, newtag = bacon.tags.get_or_create(tag="fatty")
        >>> tag
        <TaggedItem: fatty>
        >>> newtag
        False

    But then the use case that I'm interested in for my app:

        >>> tag, newtag = bacon.tags.get_or_create(tag="wholesome")
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/manager.py", line 123, in get_or_create
            return self.get_query_set().get_or_create(**kwargs)
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 343, in get_or_create
            raise e
        IntegrityError: app_taggeditem.content_type_id may not be NULL

    I tried a bunch of random things after looking at other code:

        >>> tag, newtag = bacon.tags.get_or_create(tag="wholesome", content_type=TaggedItem)
        ValueError: Cannot assign "<class 'generics.app.models.TaggedItem'>": "TaggedItem.content_type" must be a "ContentType" instance.

    or:

        >>> tag, newtag = bacon.tags.get_or_create(tag="wholesome", content_type=TaggedItem.content_type)
        InterfaceError: Error binding parameter 3 - probably unsupported type.

    etc. I'm sure somebody can give me the correct syntax, but the real problem here is that I have no idea what is going on. I have developed in strongly typed languages for over ten years (x86 assembly, C++ and C#) but am new to Python, and I find it really difficult to follow what is going on in Python when things like this break. In the languages I mentioned previously it's fairly straightforward to figure things like this out -- check the method signature and check your parameters. Looking at the Django documentation for half an hour left me just as lost. Looking at the source for get_or_create(self, **kwargs) didn't help either, since there is no method signature and the code appears very generic. A next step would be to debug the method and try to figure out what is happening, but this seems a bit extreme... I seem to be missing some fundamental operating principle here... what is it? How do I resolve issues like this on my own in the future?
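    For illustration, one pattern that avoids the NULL content_type is to call get_or_create on TaggedItem directly and spell out the generic-relation fields using ContentType. This is a sketch against the models in the linked demo, with the import path taken from the question's traceback; it is an assumption, not a tested fix for this exact setup.

        from django.contrib.contenttypes.models import ContentType
        from generics.app.models import TaggedItem  # path as in the traceback above

        # Look up the ContentType row for bacon's model, then create the
        # TaggedItem with the generic foreign key fields spelled out.
        ctype = ContentType.objects.get_for_model(bacon)
        tag, created = TaggedItem.objects.get_or_create(
            tag="wholesome",
            content_type=ctype,
            object_id=bacon.pk,
        )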

  • Having problem loading data from AppDelegate using UITableView into a flip view, loads first view bu

    - by Ms. Ryann
    AppDelegate:

        @implementation Ripe_ProduceGuideAppDelegate

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            Greens *apricot = [[Greens alloc] init];
            apricot.produceName = @"Apricot";
            apricot.produceSight = @"Deep orange or yellow orange in appearance, may have red tinge, no marks or bruises.";
            apricot.produceTouch = @"Firm to touch and give to gentle pressure, plump.";
            apricot.produceSmell = @"Should be Fragrant";
            apricot.produceHtoP = @"raw, salads, baked, sauces, glazes, desserts, poached, stuffing.";
            apricot.produceStore = @"Not ripe: place in brown paper bag, at room temperature and out of direct sunlight, close bag for 2 - 3 days. Last for a week. Warning: Only refrigerate ripe apricots.";
            apricot.produceBest = @"Spring & Summer";
            apricot.producePic = [UIImage imageNamed:@"apricot.jpg"];

            Greens *artichoke = [[Greens alloc] init];
            artichoke.produceName = @"Artichoke";
            artichoke.produceSight = @"Slightly glossy dark green color and sheen, tight petals that are not too open, no marks, no brown petals or dried out look. Stem should not be dark brown or black.";
            artichoke.produceTouch = @"No soft spots";
            artichoke.produceSmell = @"Should not smell";
            artichoke.produceHtoP = @"steam, boil, grill, saute, soups";
            artichoke.produceStore = @"Stand up in vase of cold water, keeps for 2 - 3 days. Or, place in refrigerator loose without plastic bag. May be frozen, if cooked but not raw.";
            artichoke.produceBest = @"Spring";
            artichoke.producePic = [UIImage imageNamed:@"artichoke.jpg"];

            self.produce = [[NSMutableArray alloc] initWithObjects:apricot, artichoke, nil];
            [apricot release];
            [artichoke release];

    First view:

        @implementation ProduceView

        - (id)initWithIndexPath:(NSIndexPath *)indexPath {
            if (self == [super init]) {
                index = indexPath;
            }
            return self;
        }

        - (void)viewDidLoad {
            Ripe_ProduceGuideAppDelegate *delegate =
                (Ripe_ProduceGuideAppDelegate *)[[UIApplication sharedApplication] delegate];
            Greens *thisProduce = [delegate.produce objectAtIndex:index.row];
            self.title = thisProduce.produceName;
            sightView.text = thisProduce.produceSight;
            touchView.text = thisProduce.produceTouch;
            smellView.text = thisProduce.produceSmell;
            picView.image = thisProduce.producePic;
        }

    Flip view:

        @implementation FlipsideViewController

        @synthesize flipDelegate;

        - (id)initWithIndexPath:(NSIndexPath *)indexPath {
            if (self == [super init]) {
                index = indexPath;
            }
            return self;
        }

        - (void)viewDidLoad {
            Ripe_ProduceGuideAppDelegate *delegate =
                (Ripe_ProduceGuideAppDelegate *)[[UIApplication sharedApplication] delegate];
            Greens *thisProduce = [delegate.produce objectAtIndex:index.row];
            self.title = thisProduce.produceName;
            bestView.text = thisProduce.produceBest;
            htopView.text = thisProduce.produceHtoP;
            storeView.text = thisProduce.produceStore;
            picView.image = thisProduce.producePic;
        }

    The app works, but the flip view for Artichoke shows the information for Apricot. I've been working on this for two days. I have been working with iPhone apps for two months now and would very much appreciate any assistance with this problem. Thank you very much.
