Search Results

Search found 1999 results on 80 pages for 'temporary'.


  • Is wiper.sh working?

    - by Aleksander Blomskøld
    I'm setting up a server running Ubuntu Precise, and I'm trying to verify whether SSD TRIM is working. fstrim is failing: ~ sudo fstrim -v / fstrim: /: FITRIM ioctl failed: Operation not supported So I tried wiper.sh from the hdparm package: wiper-3.5 sudo ./wiper.sh --verbose --commit /dev/sda1 wiper.sh: Linux SATA SSD TRIM utility, version 3.5, by Mark Lord. rootdev=/dev/sda1 fsmode2: fsmode=read-write /: fstype=ext4 freesize = 169502088 KB, reserved = 1695020 KB Preparing for online TRIM of free space on /dev/sda1 (ext4 mounted read-write at /). This operation could silently destroy your data. Are you sure (y/N)? y Creating temporary file (167807068 KB).. Syncing disks.. Beginning TRIM operations.. get_trimlist=/sbin/hdparm --fibmap WIPER_TMPFILE.11503 /dev/sda: trimming 3211263 sectors from 64 ranges succeeded trimming 3571713 sectors from 64 ranges succeeded trimming 3915776 sectors from 64 ranges succeeded (...) trimming 3657913 sectors from 60 ranges succeeded Removing temporary file.. Syncing disks.. Done. It seems to be working, but I'm wondering if it really is. Are there any cases where wiper.sh should work when fstrim doesn't? Is there any way I can check whether the TRIM actually succeeded (other than trusting the wiper.sh log)?
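
    One way to spot-check the result, assuming the SSD returns zeroes for trimmed sectors (not every drive guarantees this), is to note the LBAs of a test file before deleting it and read those sectors back afterwards. A rough sketch; the device names, paths and the example LBA are placeholders:

        # run as root; device names, paths and the example LBA below are placeholders
        dd if=/dev/urandom of=/trim-test bs=1M count=1 && sync
        hdparm --fibmap /trim-test              # note a begin_LBA value, e.g. 123456
        rm /trim-test && sync
        ./wiper.sh --commit /dev/sda1           # or fstrim, where it works
        hdparm --read-sector 123456 /dev/sda    # all zeroes afterwards suggests the TRIM took effect

    As for the first question: the log above shows wiper.sh issuing TRIM through hdparm's ATA commands rather than through the kernel's FITRIM ioctl, which is presumably why it can work where fstrim reports "Operation not supported".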

    Read the article

  • Fixed and dynamic IPs in ISC DHCPD lead to double lease

    - by GorillaPatch
    I would like to have a small dynamic address part, while most clients are assigned a fixed IP address. My dhcpd.conf looks like this: use-host-decl-names on; authoritative; allow client-updates; ddns-updates on; # Settings for DHCP leases default-lease-time 3600; max-lease-time 86400; lease-file-name "/var/lib/dhcpd/dhcpd.leases"; subnet 192.168.11.0 netmask 255.255.255.0 { ddns-updates on; pool { # IP range which will be assigned statically range 192.168.11.1 192.168.11.240; deny all clients; } pool { # small dynamic range range 192.168.11.241 192.168.11.254; # used for temporary devices } } group { host pc1 { hardware ethernet xx:xx:xx:xx:xx:xx; fixed-address 192.168.11.11; } } The motivation for the pool declaration with deny all clients comes from the ISC DHCPD homepage http://www.isc.org/files/auth.html This allows hosts to be added to the network first, where they receive a temporary IP from the 241-254 address range, before I later write an explicit host declaration for them. Upon the next connect they receive the right configuration. The problem is that I am getting error messages that 192.168.11.13 has a dynamic and a static lease. I am a bit confused, as I expected the pool declaration with deny all clients would not count as dynamic. Dynamic and static leases present for 192.168.11.13. Remove host declaration pc1 or remove 192.168.11.13 from the dynamic address pool for 192.168.11.0/24 Is there a way to have the DHCP server send a DHCPNAK to clients if they have a host statement, and retain this dynamic range?
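
    The warning usually means a dynamic lease for 192.168.11.13 was already recorded in dhcpd.leases before the matching host declaration was added, so the address now exists in both places. A cleanup sketch; the service name and lease-file path vary by distribution:

        service dhcpd stop
        grep -n -A 8 'lease 192.168.11.13' /var/lib/dhcpd/dhcpd.leases   # find the stale dynamic lease
        # remove that lease block (or let it expire), then:
        service dhcpd start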

    Read the article

  • MySQL query, 2 similar servers, 2 minute difference in execution times

    - by mr12086
    I had a similar question on stack overflow, but it seems to be more server/mysql setup related than coding. The queries below all execute instantly on our development server where as they can take upto 2 minutes 20 seconds. The query execution time seems to be affected by home ambiguous the LIKE string's are. If they closely match a country that has few matches it will take less time, and if you use something like 'ge' for germany - it will take longer to execute. But this doesn't always work out like that, at times its quite erratic. Sending data appears to be the culprit but why and what does that mean. Also memory on production looks to be quite low (free memory)? Production: Intel Quad Xeon E3-1220 3.1GHz 4GB DDR3 2x 1TB SATA in RAID1 Network speed 100Mb Ubuntu Development Intel Core i3-2100, 2C/4T, 3.10GHz 500 GB SATA - No RAID 4GB DDR3 UPDATE 2 : mysqltuner output: [prod] -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.61-0ubuntu0.10.04.1 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 103M (Tables: 180) [--] Data in InnoDB tables: 491M (Tables: 19) [!!] Total fragmented tables: 38 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 77d 4h 6m 1s (53M q [7.968 qps], 14M conn, TX: 87B, RX: 12B) [--] Reads / Writes: 98% / 2% [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 463.8M (11% of installed RAM) [OK] Slow queries: 0% (12K/53M) [OK] Highest usage of available connections: 22% (34/151) [OK] Key buffer size / total MyISAM indexes: 16.0M/10.6M [OK] Key buffer hit rate: 98.7% (162M cached / 2M reads) [OK] Query cache efficiency: 20.7% (7M cached / 36M selects) [!!] Query cache prunes per day: 3934 [OK] Sorts requiring temporary tables: 1% (3K temp sorts / 230K sorts) [!!] Joins performed without indexes: 71068 [OK] Temporary tables created on disk: 24% (3M on disk / 13M total) [OK] Thread cache hit rate: 99% (690 created / 14M connections) [!!] Table cache hit rate: 0% (64 open / 85M opened) [OK] Open file limit used: 12% (128/1K) [OK] Table locks acquired immediately: 99% (16M immediate / 16M locks) [!!] InnoDB data size / buffer pool: 491.9M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Variables to adjust: query_cache_size (> 16M) join_buffer_size (> 128.0K, or always use indexes with joins) table_cache (> 64) innodb_buffer_pool_size (>= 491M) [dev] -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.62-0ubuntu0.11.10.1 [!!] 
Switch to 64-bit OS - MySQL cannot currently use all of your RAM -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 185M (Tables: 632) [--] Data in InnoDB tables: 967M (Tables: 38) [!!] Total fragmented tables: 73 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 1d 2h 26m 9s (5K q [0.058 qps], 1K conn, TX: 4M, RX: 1M) [--] Reads / Writes: 99% / 1% [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 463.8M (11% of installed RAM) [OK] Slow queries: 0% (0/5K) [OK] Highest usage of available connections: 1% (2/151) [OK] Key buffer size / total MyISAM indexes: 16.0M/18.6M [OK] Key buffer hit rate: 99.9% (60K cached / 36 reads) [OK] Query cache efficiency: 44.5% (1K cached / 2K selects) [OK] Query cache prunes per day: 0 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 44 sorts) [OK] Temporary tables created on disk: 24% (162 on disk / 666 total) [OK] Thread cache hit rate: 99% (2 created / 1K connections) [!!] Table cache hit rate: 1% (64 open / 4K opened) [OK] Open file limit used: 8% (88/1K) [OK] Table locks acquired immediately: 100% (1K immediate / 1K locks) [!!] InnoDB data size / buffer pool: 967.7M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Increase table_cache gradually to avoid file descriptor limits Variables to adjust: table_cache (> 64) innodb_buffer_pool_size (>= 967M) UPDATE 1: When testing the queries listed here there is usually no more than one other query taking place, and usually none. Because production is actually handling apache requests that development gets very few of as it's only myself and 1 other who accesses it - could the 4GB of RAM be getting exhausted by using the single machine for both apache and mysql server? Production: sudo hdparm -tT /dev/sda /dev/sda: Timing cached reads: 24872 MB in 2.00 seconds = 12450.72 MB/sec Timing buffered disk reads: 368 MB in 3.00 seconds = 122.49 MB/sec sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 24786 MB in 2.00 seconds = 12407.22 MB/sec Timing buffered disk reads: 350 MB in 3.00 seconds = 116.53 MB/sec Server version(mysql + ubuntu versions): 5.1.61-0ubuntu0.10.04.1 Development: sudo hdparm -tT /dev/sda /dev/sda: Timing cached reads: 10632 MB in 2.00 seconds = 5319.40 MB/sec Timing buffered disk reads: 400 MB in 3.01 seconds = 132.85 MB/sec Server version(mysql + ubuntu versions): 5.1.62-0ubuntu0.11.10.1 ORIGINAL DATA : This query is NOT the query in question but is related so ill post it. 
SELECT f.form_question_has_answer_id FROM form_question_has_answer f INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29') AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%') AND f.form_question_has_answer_form_id = '174' And the explain plan for the above query is, run on both dev and production produce the same plan. +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ | 1 | SIMPLE | p2 | const | PRIMARY | PRIMARY | 4 | const | 1 | Using index | | 1 | SIMPLE | f | ref | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | const | 796 | Using where | | 1 | SIMPLE | u | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using index | | 1 | SIMPLE | p | ref | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where | | 1 | SIMPLE | f2 | ref | form_project_id | form_project_id | 4 | const | 15 | Using where | | 1 | SIMPLE | c | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ This query takes 2 minutes ~20 seconds to execute. 
The query that is ACTUALLY being run on the server is this one: SELECT COUNT(*) AS num_results FROM (SELECT f.form_question_has_answer_id FROM form_question_has_answer f INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29') AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%') AND f.form_question_has_answer_form_id = '174' GROUP BY f.form_question_has_answer_id;) dctrn_count_query; With explain plans (again same on dev and production): +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ | 1 | PRIMARY | NULL | NULL | NULL | NULL | NULL | NULL | NULL | Select tables optimized away | | 2 | DERIVED | p2 | const | PRIMARY | PRIMARY | 4 | | 1 | Using index | | 2 | DERIVED | f | ref | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | | 797 | Using where | | 2 | DERIVED | p | ref | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id,project_company_has_user_garbage_collection | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where | | 2 | DERIVED | f2 | ref | form_project_id | form_project_id | 4 | | 15 | Using where | | 2 | DERIVED | c | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where | | 2 | DERIVED | u | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_user_id | 1 | Using where; Using index | +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ On the production server the information I have is as follows. 
Upon execution: +-------------+ | num_results | +-------------+ | 3 | +-------------+ 1 row in set (2 min 14.28 sec) Show profile: +--------------------------------+------------+ | Status | Duration | +--------------------------------+------------+ | starting | 0.000016 | | checking query cache for query | 0.000057 | | Opening tables | 0.004388 | | System lock | 0.000003 | | Table lock | 0.000036 | | init | 0.000030 | | optimizing | 0.000016 | | statistics | 0.000111 | | preparing | 0.000022 | | executing | 0.000004 | | Sorting result | 0.000002 | | Sending data | 136.213836 | | end | 0.000007 | | query end | 0.000002 | | freeing items | 0.004273 | | storing result in query cache | 0.000010 | | logging slow query | 0.000001 | | logging slow query | 0.000002 | | cleaning up | 0.000002 | +--------------------------------+------------+ On development the results are as follows. +-------------+ | num_results | +-------------+ | 3 | +-------------+ 1 row in set (0.08 sec) Again the profile for this query: +--------------------------------+----------+ | Status | Duration | +--------------------------------+----------+ | starting | 0.000022 | | checking query cache for query | 0.000148 | | Opening tables | 0.000025 | | System lock | 0.000008 | | Table lock | 0.000101 | | optimizing | 0.000035 | | statistics | 0.001019 | | preparing | 0.000047 | | executing | 0.000008 | | Sorting result | 0.000005 | | Sending data | 0.086565 | | init | 0.000015 | | optimizing | 0.000006 | | executing | 0.000020 | | end | 0.000004 | | query end | 0.000004 | | freeing items | 0.000028 | | storing result in query cache | 0.000005 | | removing tmp table | 0.000008 | | closing tables | 0.000008 | | logging slow query | 0.000002 | | cleaning up | 0.000005 | +--------------------------------+----------+ If i remove user and/or project innerjoins the query is reduced to 30s. Last bit of information I have: Mysqlserver and Apache are on the same box, there is only one box for production. Production output from top: before & after. top - 15:43:25 up 78 days, 12:11, 4 users, load average: 1.42, 0.99, 0.78 Tasks: 162 total, 2 running, 160 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 50.4%sy, 0.0%ni, 49.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3772580k used, 265288k free, 243704k buffers Swap: 3905528k total, 265384k used, 3640144k free, 1207944k cached top - 15:44:31 up 78 days, 12:13, 4 users, load average: 1.94, 1.23, 0.87 Tasks: 160 total, 2 running, 157 sleeping, 0 stopped, 1 zombie Cpu(s): 0.2%us, 50.6%sy, 0.0%ni, 49.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3834300k used, 203568k free, 243736k buffers Swap: 3905528k total, 265384k used, 3640144k free, 1207804k cached But this isn't a good representation of production's normal status so here is a grab of it from today outside of executing the queries. top - 11:04:58 up 79 days, 7:33, 4 users, load average: 0.39, 0.58, 0.76 Tasks: 156 total, 1 running, 155 sleeping, 0 stopped, 0 zombie Cpu(s): 3.3%us, 2.8%sy, 0.0%ni, 93.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3676136k used, 361732k free, 271480k buffers Swap: 3905528k total, 268736k used, 3636792k free, 1063432k cached Development: This one doesn't change during or after. 
top - 15:47:07 up 110 days, 22:11, 7 users, load average: 0.17, 0.07, 0.06 Tasks: 210 total, 2 running, 208 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4111972k total, 1821100k used, 2290872k free, 238860k buffers Swap: 4183036k total, 66472k used, 4116564k free, 921072k cached
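
    Both tuner reports flag the same thing: innodb_buffer_pool_size is still at the 8M default while the InnoDB data is roughly 0.5-1 GB, and on production the long "Sending data" phase is consistent with InnoDB reading rows from disk instead of from the buffer pool. A hedged sketch for checking and raising it; the config path and the 1G value are only examples, and MySQL 5.1 needs a restart for this variable to take effect:

        mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
        # in /etc/mysql/my.cnf, under [mysqld]:
        #   innodb_buffer_pool_size = 1G
        sudo service mysql restart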

    Read the article

  • Server not accepting uploads

    - by Tatu Ulmanen
    I'm having a strange problem with my VPS: I can download files from it, I can use PuTTy to connect to it and all behaves normally. But sometimes, when I try to upload a file to the server or save a file via SFTP, the connection inexplicably fails. I am using jEdit to edit files remotely via SFTP. When it works, it works fine. When it doesn't, I get an error message: Cannot save: java.io.IOException: inputstream is closed Cannot save: java.io.IOException: 4: I can see that a temporary save file (#file.php#save#) is created on the server with a filesize of 0. So the connection works, but when it comes to sending the actual data, something fails. The same thing with WinSCP, but the error is different: Copying file fatally failed. Copying files to remote side failed. And I can always browse the server with PuTTy without a problem. I see nothing abnormal in any log files. Auth.log shows this when I try to save: sshd[32638]: Accepted password for - from - port 62272 ssh2 sshd[32638]: pam_unix(sshd:session): session opened for user - by (uid=0) sshd[32640]: subsystem request for sftp sshd[32638]: pam_unix(sshd:session): session closed for user - When I wait for a while (say, an hour), everything works fine again. It can't be a temporary ban, as I am still allowed to connect to the server, right? I know this may not be enough info to solve the problem, but I am grateful for any clues or bits of information that might help me. What are the possible causes for this kind of behaviour, what log files can I check for clues etc.. I'm running out of ideas!
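
    Failures that hit only writes, while downloads and shell logins keep working, are often a disk-side problem (full filesystem, exhausted inodes, or a quota) rather than SSH itself. A few quick server-side checks worth running while an upload is failing; a sketch, with the username as a placeholder:

        df -h /                # free space on the target filesystem
        df -i /                # inode exhaustion also produces zero-byte files
        quota -u youruser      # per-user quota, if quotas are enabled (placeholder username)
        dmesg | tail -n 30     # I/O errors or OOM kills around the time of the failure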

    Read the article

  • Configure samba server for Unix group

    - by Bird Jaguar IV
    I'm trying to set up a Samba server with access for users in the Linux (RHEL 6) "wheel" group. I am basing smb.conf on the example here where it goes through the [accounting] example. In my smb.conf I have [tmp] comment = temporary files path = /var/share valid users = @wheel read only = No create mask = 0664 directory mask = 02777 max connections = 0 (rest of the output from $ testparm /etc/samba/smb.conf is here). And groups `whoami` returns user01 : wheel. When I use the following command from another machine (Mac OS) as the Linux user (user01): $ smbclient -L NETBIOSNAME/tmp it asks for a password, I hit return without a password, and get: Enter user01's password: Anonymous login successful Domain=[DOMAIN] OS=[Unix] Server=[Samba 3.6.9-151.el6_4.1] Sharename Type Comment --------- ---- ------- tmp Disk temporary files IPC$ IPC IPC Service (Samba Server Version 3.6.9-151.el6_4.1) But when I try $ smbclient //NETBIOSNAME/tmp and enter the password I use for the Linux login, I get a bunch of stuff logged, including check_sam_security: Couldn't find user 'user01' in passdb. ... session setup failed: NT_STATUS_LOGON_FAILURE (I can give more logging information if it would be helpful.) I can't find any reference in that resource to additional steps needed for adding group users. Should I be manually adding Samba users for the group somehow? Thank you
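
    With the default passdb backend, Samba keeps its own password database separate from /etc/passwd, which is what the "Couldn't find user 'user01' in passdb" line points at. A minimal sketch, assuming the stock tdbsam backend and that you run this as root on the RHEL box:

        smbpasswd -a user01    # add the Unix user to Samba's password database and set a password
        smbpasswd -e user01    # make sure the account is enabled
        # then, from the client:
        smbclient //NETBIOSNAME/tmp -U user01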

    Read the article

  • Server slowdown

    - by Clinton Bosch
    I have a GWT application running on Tomcat on a cloud linux(Ubuntu) server, recently I released a new version of the application and suddenly my server response times have gone from 500ms average to 15s average. I have run every monitoring tool I know. iostat says my disks are 0.03% utilised mysqltuner.pl says I am OK other see below top says my processor is 99% idle and load average: 0.20, 0.31, 0.33 memory usage is 50% (-/+ buffers/cache: 3997 3974) mysqltuner output [OK] Logged in using credentials from debian maintenance account. -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.63-0ubuntu0.10.04.1-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 370M (Tables: 52) [--] Data in InnoDB tables: 697M (Tables: 1749) [!!] Total fragmented tables: 1754 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 19h 25m 41s (1M q [28.122 qps], 1K conn, TX: 2B, RX: 1B) [--] Reads / Writes: 98% / 2% [--] Total buffers: 1.0G global + 2.7M per thread (500 max threads) [OK] Maximum possible memory usage: 2.4G (30% of installed RAM) [OK] Slow queries: 0% (1/1M) [OK] Highest usage of available connections: 34% (173/500) [OK] Key buffer size / total MyISAM indexes: 16.0M/279.0K [OK] Key buffer hit rate: 99.9% (50K cached / 40 reads) [OK] Query cache efficiency: 61.4% (844K cached / 1M selects) [!!] Query cache prunes per day: 553779 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 34K sorts) [OK] Temporary tables created on disk: 4% (4K on disk / 102K total) [OK] Thread cache hit rate: 84% (185 created / 1K connections) [!!] Table cache hit rate: 0% (256 open / 27K opened) [OK] Open file limit used: 0% (20/2K) [OK] Table locks acquired immediately: 100% (692K immediate / 692K locks) [OK] InnoDB data size / buffer pool: 697.2M/1.0G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Enable the slow query log to troubleshoot bad queries Increase table_cache gradually to avoid file descriptor limits Variables to adjust: query_cache_size (> 16M) table_cache (> 256)
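
    Since CPU, disk and the MySQL counters all look largely idle, it may help to capture where the extra seconds actually go: the slow query log on the MySQL side and a thread dump on the Tomcat side. A sketch; the log path and the process lookup are only examples:

        mysql -e "SET GLOBAL slow_query_log = 'ON'; SET GLOBAL long_query_time = 1;"
        tail -f /var/log/mysql/mysql-slow.log
        # thread dump of Tomcat while a slow request is in flight:
        jstack $(pgrep -f catalina) > /tmp/tomcat-threads.txt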

    Read the article

  • Windows 7 Paging file apparently not being used

    - by Daniel F.
    I'm running Windows 7 Home Premium 32bit on a mobo with 24GB RAM. Of those 24GB, 20GB are assigned as a RAMDISK via ASRock XFastRAM. This RAMDISK has the drive letter X assigned to it. On X:\ I'm storing the temporary files folder, as well as pagefile.sys. Pagefile.sys is 6GB in size. X:\ usually has around 14GB of free space, so the temporary files are negligible; it's mostly the browsers storing their caches there. Now my issue is that Firefox is crashing a lot on me; no error message pops up, but I know that this is because it's out of memory. I could kind of live with that, but now that I switched from using Eclipse to Android Studio, I know that I'm in trouble, because Java can't allocate the memory it needs, and Android Studio, together with the Java instances it launches, is quite a memory hog. So I tried to figure out what's wrong, and apparently Windows isn't swapping out memory onto the paging file. While my applications are crashing (Firefox) / not starting (Java VMs), the paging file constantly sits at only around 15% usage (checked with the performance monitor). 15% equals roughly 1GB. I know that the correct solution would be to switch to 64-bit Windows, but I had to use the 32-bit version because of driver issues I had about two years ago, and I guess that I'll have them again if I reformat and install the 64-bit version. Also, the machine is running quite stably, and the only issue is the memory, so I'd like to use it as it is (with the apps installed and configured). Is there a way to make Windows use the paging file more efficiently? None of my processes require more than 1GB; I'd just like it to swap out some seldom-used stuff, like GoogleCrashHandler.exe, in order to have "more physical memory available". Is that possible?

    Read the article

  • Help configuring Mercury mail or similar with XAMPP to send e-mail outside of localhost

    - by user291040
    I'm building a PHP/MySQL driven website for my department at work (installed via XAMPP). I need to be able to send mail to outside e-mail addresses (e.g., Yahoo, Hotmail, etc.) using the PHP mail() function. As I see it I have two solutions: Configure the SMTP directive in php.ini to point to the server running at my work. Configure/run a mail server that can send e-mails outside of localhost (I'm trying Mercury because it comes installed with XAMPP). Here are the problems I've come up against: I took a guess at our SMTP server name, and when calling PHP mail(), I get the error SMTP server response: 530 5.7.1 Client was not authenticated I can't be sure, however, that the SMTP server name is correct (I can't get help from our IT guys because of politics). I have tried to use Mercury Mail. Mercury seems to be picking up the request, but it doesn't want to forward the e-mail to the outside. I keep getting a Temporary error 240 (temporary MX resolution error). I've searched high and low but still can't find a definitive answer on how to send e-mails outside of localhost. Any help is greatly appreciated.
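
    The 530 5.7.1 response means that server wants the client to authenticate before it will relay, which PHP's plain SMTP directive cannot do. Before changing anything it can help to talk to the server by hand and confirm what it expects; a sketch, with the host and port as placeholders for the work server:

        openssl s_client -starttls smtp -crlf -connect smtp.example.com:587
        # then type:  EHLO test                    -> look for AUTH in the reply
        #             MAIL FROM:<you@example.com>  -> another 530 here confirms auth is mandatory

    If authentication is required, an SMTP library that supports it (rather than mail() alone), or an authenticated relay configured in Mercury, is the usual way forward.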

    Read the article

  • MySQLTuner and query_cache_size dilemma

    - by wbad
    On a busy mysql server MySQLTuner 1.2.0 always recommends to add query_cache_size no matter how I increase the value (I tried up to 512MB). On the other hand it warns that : Increasing the query_cache size over 128M may reduce performance Here are the last results: >> MySQLTuner 1.2.0 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.5.25-1~dotdeb.0-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in InnoDB tables: 6G (Tables: 195) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [!!] Total fragmented tables: 51 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 1d 19h 17m 8s (254M q [1K qps], 5M conn, TX: 139B, RX: 32B) [--] Reads / Writes: 89% / 11% [--] Total buffers: 24.2G global + 92.2M per thread (1200 max threads) [!!] Maximum possible memory usage: 132.2G (139% of installed RAM) [OK] Slow queries: 0% (2K/254M) [OK] Highest usage of available connections: 32% (391/1200) [OK] Key buffer size / total MyISAM indexes: 128.0M/92.0K [OK] Key buffer hit rate: 100.0% (8B cached / 0 reads) [OK] Query cache efficiency: 79.9% (181M cached / 226M selects) [!!] Query cache prunes per day: 1033203 [OK] Sorts requiring temporary tables: 0% (341 temp sorts / 4M sorts) [OK] Temporary tables created on disk: 14% (760K on disk / 5M total) [OK] Thread cache hit rate: 99% (676 created / 5M connections) [OK] Table cache hit rate: 22% (1K open / 8K opened) [OK] Open file limit used: 0% (49/13K) [OK] Table locks acquired immediately: 99% (64M immediate / 64M locks) [OK] InnoDB data size / buffer pool: 6.1G/19.5G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Reduce your overall MySQL memory footprint for system stability Increasing the query_cache size over 128M may reduce performance Variables to adjust: *** MySQL's maximum memory usage is dangerously high *** *** Add RAM before increasing MySQL buffer variables *** query_cache_size (> 192M) [see warning above] The server has 76GB ram and dual E5-2650. The load is usually below 2. I appreciate your hints to interpret the recommendation and optimize the database configs.
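
    The prune figure MySQLTuner complains about comes straight from the Qcache status counters, and looking at them directly can show whether prunes come from genuinely low cache memory or from fragmentation (free blocks too scattered to reuse), which a larger cache does not fix. A quick sketch:

        mysql -e "SHOW GLOBAL STATUS LIKE 'Qcache%'; SHOW GLOBAL VARIABLES LIKE 'query_cache%';"
        # high Qcache_lowmem_prunes alongside plenty of Qcache_free_memory points at fragmentation;
        # FLUSH QUERY CACHE defragments the cache without emptying it.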

    Read the article

  • Can a non-redundant RAID5 cause any serious problems (compared to RAID0)?

    - by leemes
    I used to have a three-disc RAID5 (mdadm) in my computer for personal media storage (music, videos, photos, programs, games, ...). It had three discs with 750 GB each, resulting in an array capacity of 1.5 TB. One day (one year ago), I needed one of those discs to install another operating system. I thought, I don't need the redundancy anymore since I backup the most important stuff (personal photos e.g.) on an external disc anyway. So I decided to remove one of the three discs without converting the RAID to RAID0 or even two separate discs, because I had no temporary storage (since one cannot simply convert the RAID5 to RAID0 AFAIK). So now, for about one year, I have a non-redundant RAID5 with 2 of 3 discs running. Sometimes, one of the discs has a defective contact at the power cable or something similar causing the drive to stop working temporarily (I don't know exactly what it is). Since it still works when rebooting the computer and in most cases by calling some mdadm commands, it wasn't that problematic. Note that the data is not very critical, since I still have a backup of the most important stuff. But in the last few weeks, one of the drives fails very frequently (every few hours), so it gets really annoying to manage this. My questions are: Is there any disadvantage (apart from the annoying management) of a non-redundant RAID5 (with one drive less than typical) over a RAID0? If I understand it correctly, both have no redundancy and the same capacity. On a temporary drive failure, I can restart the array in both cases, assuming that the drive itself still works after the failure. Can it happen that the drive contents alter on a drive failure, making the array inconsistent? If so, can I tell mdadm to check the array for failures (without a file system level checking tool)? Since the drive most probably only has a defective contact causing it to fail for a second only, can I tell mdadm to automatically restart the array, so I will not even notice the failure if no application wanted to access the file system during the failure?
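
    For the recurring transient failure itself, mdadm can re-attach the dropped member and send mail when the array degrades, which at least removes the manual babysitting. A sketch; the array and member device names are placeholders:

        mdadm --detail /dev/md0                          # confirm which member dropped and the array state
        mdadm /dev/md0 --re-add /dev/sdc1                # re-attach the member after a transient failure
        mdadm --monitor --daemonise --mail root --scan   # get mailed on Fail/DegradedArray events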

    Read the article

  • SQL Server 2012 memory usage steadily growing

    - by pgmo
    I am very worried about the SQL Server 2012 Express instance on which my database is running: the SQL Server process memory usage is growing steadily (1.5GB after only 2 days of operation). The database consists of seven tables, each having a bigint primary key (Identity) and at least one non-unique index with some included columns to serve the majority of incoming queries. An external application calls some stored procedures via Microsoft OLE DB, each of which does some calculations using intermediate temporary tables and/or table variables and finally does an upsert (UPDATE....IF @@ROWCOUNT=0 INSERT.....) - I never DROP those temporary tables explicitly: the frequency of those calls is about 100 calls every 5 seconds (I saw that the DLL used by the external application opens a connection to SQL Server, does the call and then closes the connection for each and every call). The database files are organized in only one filegroup, and the recovery model is set to simple. Some questions to diagnose the problem: is that steadily growing memory normal? did I make any mistake in database design which probably led to this behaviour? (no explicit temp-table drop, filegroup organization, etc.) can SQL Server manage such a stored procedure call rate (100 calls every 5 seconds, i.e. 100 upserts every 5 seconds, beyond intermediate calculations)? does the continuous "open connection/do sp call/close connection" pattern disturb SQL Server? is it possible to diagnose what is causing such memory usage? Perhaps queues of waiting requests? (I ran sp_who2, but I didn't see a big number of orphan connections from the external application) if I restrict the amount of memory which SQL Server is allowed to use, may I sooner or later get into trouble?
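
    Steady growth is largely by design: SQL Server caches data pages and plans in whatever memory it is allowed to take and only releases it under operating-system pressure, so a climbing working set on its own is not proof of a leak. If it needs to be bounded, max server memory caps the buffer pool; a sketch, where the instance name and the 1024 MB value are only examples:

        sqlcmd -S .\SQLEXPRESS -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
        sqlcmd -S .\SQLEXPRESS -Q "EXEC sp_configure 'max server memory (MB)', 1024; RECONFIGURE;"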

    Read the article

  • Odd one: IE6 showing the "view source" command enabled when I was expecting it to be disabled.

    - by caerphilly
    Hi, OK, I admit this problem is odd. I'm just trying to understand why IE is behaving the way it is. I realise that logic may not apply here :) I have Internet Explorer 6 (Sp1) running on Windows 2000. The IE option "Do not store encrypted pages to disk" is checked (enabled). The temporary internet files folder is empty. TEMP and TMP environment variables are set to valid folders. I'm connected to a web server over SSL. The web server is serving a page over SSL with the HTTP cache-control header set to "no-cache, no-store". I was expecting the "view source" command to be greyed out in this circumstance (as it is on another machine). But it works. When I "view source", I get an entry in the Temporary Internet Files folder with an "internet address" property of "view-source:https://myserver/...." and the content of the page. I wasn't expecting that. I can't understand why one machine is different to another in this regard. Obviously there is some environment/setup difference, but I can't track it down. Anyone have any bright ideas?

    Read the article

  • Map Subdomains to Folders Owned/Run by Other Apache/PHP/Cpanel Users

    - by kristofferR
    I run a small service for Norwegian customers where they get automatically installed and configured Wordpress blogs on their own domains, ready immediately after payment is finished. It's quite similar to Page.ly and WPEngine, just aimed at Norwegian customers with Norwegian Wordpress, support, billing etc. The backend is WHM/cPanel (Apache, PHP, MySQL), with a script running immediately after payment that installs and configures Wordpress and sends the customer an email with their username and password. Newly registered domains take some time to propagate though, so for a day or two my customers unfortunately have to use a temporary URL before I can switch them over to their own domains. Right now my system uses mod_userdir ("serverIP/~cpanelusername"). However, it's not an optimal solution. It looks unprofessional, is confusing, and is quite problematic for both my customers and me. I'd prefer the temporary URL for their blogs to be "theirdomainwithoutextension.host.no", with "host.no" being a domain I own and served from the same server as the customer sites. I can easily modify the script to create the subdomains on my "host.no" domain, but how can I seamlessly map the subdomains to folders owned and run by different cPanel/Apache/PHP users?
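
    One approach worth sketching (with assumed paths, and assuming each customer's cPanel username matches their temporary subdomain) is a wildcard DNS record for *.host.no plus a mod_vhost_alias catch-all vhost placed in cPanel's custom Apache include:

        # contents for /usr/local/apache/conf/includes/pre_virtualhost_global.conf (assumed path):
        #   <VirtualHost *:80>
        #       ServerAlias *.host.no
        #       UseCanonicalName Off
        #       VirtualDocumentRoot /home/%1/public_html
        #   </VirtualHost>
        /scripts/rebuildhttpdconf && service httpd restart

    The caveat is ownership: a catch-all vhost has no per-customer SuexecUserGroup, so depending on the suPHP/suEXEC setup PHP may not run as each customer's own user, which is worth testing before relying on it.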

    Read the article

  • In Which We Demystify A Few Docupresentment Settings And Learn the Ethos of the Author

    - by Andy Little
    It's no secret that Docupresentment (part of the Oracle Documaker suite) is powerful tool for integrating on-demand and interactive applications for publishing with the Oracle Documaker framework.  It's also no secret there are are many details with respect to the configuration of Docupresentment that can elude even the most erudite of of techies.  To be sure, Docupresentment will work for you right out of the box, and in most cases will suit your needs without toying with a configuration file.  But, where's the adventure in that?   With this inaugural post to That's The Way, I'm going to introduce myself, and what my aim is with this blog.  If you didn't figure it out already by checking out my profile, my name is Andy and I've been with Oracle (nee Skywire Software nee Docucorp nee Formmaker) since the formative years of 1998.  Strangely, it doesn't seem that long ago, but it's certainly a lifetime in the age of technology.  I recall running a BBS from my parent's basement on a 1200 baud modem, and the trepidation and sweaty-palmed excitement of upgrading to the power and speed of 2400 baud!  Fine, I'll admit that perhaps I'm inflating the experience a bit, but I was kid!  This is the stuff of War Games and King's Quest I and the demise of TI-99 4/A.  Exciting times.  So fast-forward a bit and I'm 12 years into a career in the world of document automation and publishing working for the best (IMHO) software company on the planet.  With That's The Way I hope to shed a little light and peek under the covers of some of the more interesting aspects of implementations involving the tech space within the Oracle Insurance Global Business Unit (IGBU), which includes Oracle Documaker, Rating & Underwriting, and Policy Administration to name a few.  I may delve off course a bit, and you'll likely get a dose of humor (at least in my mind) but I hope you'll glean at least a tidbit of usefulness with each post.  Feel free to comment as I'm a fairly conversant guy and happy to talk -- it's stopping the talking that's the hard part... So, back to our regularly-scheduled post, already in progress.  By this time you've visited Oracle's E-Delivery site and acquired your properly-licensed version of Oracle Documaker.  Wait -- you didn't find it?  Understandable -- navigating the voluminous download library within Oracle can be a daunting task.  It's pretty simple once you’ve done it a few times.  Login to the e-delivery site, and accept the license terms and restrictions.  Then, you’ll be able to select the Oracle Insurance Applications product pack and your appropriate platform. Click Go and you’ll see a list of applicable products, and you’ll click on Oracle Documaker Media Pack (as I went to press with this article the version is 11.4): Finally, click the Download button next to Docupresentment (again, version at press time is 2.2 p5). This should give you a ZIP file that contains the installation packages for the Docupresentment Server and Client, cryptically named IDSServer22P05W32.exe and IDSClient22P05W32.exe. At this time, I’d like to take a little detour and explain that the world of Oracle, like most technical companies, is rife with acronyms.  One of the reasons Skywire Software was a appealing to Oracle was our use of many acronyms, including the occasional use of multiple acronyms with the same meaning.  I apologize in advance and will try to point these out along the way.  
Here’s your first sticky note to go along with that: IDS = Internet Document Server = Docupresentment Once you’ve completed the installation, you’ll have a shiny new Docupresentment server and client, and if you installed the default location it will be living in c:\docserv. Unix users, I’m one of you!  You’ll find it by default in  ~/docupresentment/docserv.  Forging onward with the meat of this post is learning about some special configuration options.  By now you’ve read the documentation included with the download (specifically ids_book.pdf) which goes into some detail of the rubric of the configuration file and in fact there’s even a handy utility that provides an interface to the configuration file (see Running IDSConfig in the documentation).  But who wants to deal with a configuration utility when we have the tools and technology to edit the file <gasp> by hand! I shall now proceed with the standard Information Technology Under the Hood Disclaimer: Please remember to back up any files before you make changes.  I am not responsible for any havoc you may wreak! Go to your installation directory, and locate your docserv.xml file.  Open it in your favorite XML editor.  I happen to be fond of Notepad++ with the XML Tools plugin.  Almost immediately you will behold the splendor of the configuration file.  Just take a moment and let that sink in.  Ok – moving on.  If you reviewed the documentation you know that inside the root <configuration> node there are multiple <section> nodes, each containing a specific group of settings.  Let’s take a look at <section name=”DocumentServer”>: There are a few entries I’d like to discuss.  First, <entry name=”StartCommand”>. This should be pretty self-explanatory; it’s the name of the executable that’s run when you fire up Docupresentment.  Immediately following that is <entry name=”StartArguments”> and as you might imagine these are the arguments passed to the executable.  A few things to point out: The –Dids.configuration=docserv.xml parameter specifies the name of your configuration file. The –Dlogging.configuration=logconf.xml parameter specifies the name of your logging configuration file (this uses log4j so bone up on that before you delve here). The -Djava.endorsed.dirs=lib/endorsed parameter specifies the path where 3rd party Java libraries can be located for use with Docupresentment.  More on that in another post. The <entry name=”Instances”> allows you to specify the number of instances of Docupresentment that will be started.  By default this is two, and generally two instances per CPU is adequate, however you will always need to perform load testing to determine the sweet spot based on your hardware and types of transactions.  You may have many, many more instances than 2. Time for a sidebar on instances.  An instance is nothing more than a separate process of Docupresentment.  The Docupresentment service that you fire up with docserver.bat or docserver.sh actually starts a watchdog process, which is then responsible for starting up the actual Docupresentment processes.  Each of these act independently from one another, so if one crashes, it does not affect any others.  In the case of a crashed process, the watchdog will start up another instance so the number of configured instances are always running.  Bottom line: instance = Docupresentment process. And now, finally, to the settings which gave me pause on an not-too-long-ago implementation!  
Docupresentment includes a feature that watches configuration files (such as docserv.xml and logconf.xml) and will automatically restart its instances to load the changes.  You can configure the time that Docupresentment waits to check these files using the setting <entry name=”FileWatchTimeMillis”>.  By default the number is 12000ms, or 12 seconds.  You can save yourself a few CPU cycles by extending this time, or by disabling  the check altogether by setting the value to 0.  This may or may not be appropriate for your environment; if you have 100% uptime requirements then you probably don’t want to bring down an entire set of processes just to accept a new configuration value, so it’s best to leave this somewhere between 12 seconds to a few minutes.  Another point to keep in mind: if you are using Documaker real-time processing under Docupresentment the Master Resource Library (MRL) files and INI options are cached, and if you need to affect a change, you’ll have to “restart” Docupresentment.  Touching the docserv.xml file is an easy way to do this (other methods including using the RSS request, but that’s another post). The next item up: <entry name=”FilePurgeTimeSeconds”>.  You may already know that the Docupresentment system can generate many temporary files based on certain request types that are processed through the system.  What you may not know is how those files are cleaned up.  There are many rules in Docupresentment that cause the creation of temporary files.  When these files are created, Docupresentment writes an entry into a properties file called the file cache.  This file contains the name, creation date, and expiration time of each temporary file created by each instance of Docupresentment.  Periodically Docupresentment will check the file cache to determine if there are files that are past the expiration time, not unlike that block of cheese festering away in the back of my refrigerator.  However, unlike my ‘fridge cleaning tendencies, Docupresentment is quick to remove files that are past their expiration time.  You, my friend, have the power to control how often Docupresentment inspects the file cache.  Simply set the value for <entry name=”FilePurgeTimeSeconds”> to the number of seconds appropriate for your requirements and you’re set.  Note that file purging happens on a separate thread from normal request processing, so this shouldn’t interfere with response times unless the CPU happens to be really taxed at the point of cache processing.  Finally, after all of this, we get to the final setting I’m going to address in this post: <entry name=”FilePurgeList”>.  The default is “filecache.properties”.  This establishes the root name for the Docupresentment file cache that I mentioned previously.  Docupresentment creates a separate cache file for each instance based on this setting.  If you have two instances, you’ll see two files created: filecache.properties.1 and filecache.properties.2.  Feel free to open these up and check them out. I hope you’ve enjoyed this first foray into the configuration file of Docupresentment.  If you did enjoy it, feel free to drop a comment, I welcome feedback.  If you have ideas for other posts you’d like to see, please do let me know.  You can reach me at [email protected]. ‘Til next time! ###
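
    Tying the FileWatch behaviour together: since the watchdog recycles instances once it notices a change to a watched file, simply touching docserv.xml is enough to force the cached MRL and INI state to be re-read. A trivial sketch using the default Unix install path mentioned above:

        touch ~/docupresentment/docserv/docserv.xml   # picked up within FileWatchTimeMillis, unless that is set to 0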

    Read the article

  • Solaris X86 64-bit Assembly Programming

    - by danx
    Solaris X86 64-bit Assembly Programming This is a simple example on writing, compiling, and debugging Solaris 64-bit x86 assembly language with a C program. This is also referred to as "AMD64" assembly. The term "AMD64" is used in an inclusive sense to refer to all X86 64-bit processors, whether AMD Opteron family or Intel 64 processor family. Both run Solaris x86. I'm keeping this example simple mainly to illustrate how everything comes together—compiler, assembler, linker, and debugger when using assembly language. The example I'm using here is a C program that calls an assembly language program passing a C string. The assembly language program takes the C string and calls printf() with it to print the string. AMD64 Register Usage But first let's review the use of AMD64 registers. AMD64 has several 64-bit registers, some special purpose (such as the stack pointer) and others general purpose. By convention, Solaris follows the AMD64 ABI in register usage, which is the same used by Linux, but different from Microsoft Windows in usage (such as which registers are used to pass parameters). This blog will only discuss conventions for Linux and Solaris. The following chart shows how AMD64 registers are used. The first six parameters to a function are passed through registers. If there's more than six parameters, parameter 7 and above are pushed on the stack before calling the function. The stack is also used to save temporary "stack" variables for use by a function. 64-bit Register Usage %rip Instruction Pointer points to the current instruction %rsp Stack Pointer %rbp Frame Pointer (saved stack pointer pointing to parameters on stack) %rdi Function Parameter 1 %rsi Function Parameter 2 %rdx Function Parameter 3 %rcx Function Parameter 4 %r8 Function Parameter 5 %r9 Function Parameter 6 %rax Function return value %r10, %r11 Temporary registers (need not be saved before used) %rbx, %r12, %r13, %r14, %r15 Temporary registers, but must be saved before use and restored before returning from the current function (usually with the push and pop instructions). 32-, 16-, and 8-bit registers To access the lower 32-, 16-, or 8-bits of a 64-bit register use the following: 64-bit register Least significant 32-bits Least significant 16-bits Least significant 8-bits %rax%eax%ax%al %rbx%ebx%bx%bl %rcx%ecx%cx%cl %rdx%edx%dx%dl %rsi%esi%si%sil %rdi%edi%di%axl %rbp%ebp%bp%bp %rsp%esp%sp%spl %r9%r9d%r9w%r9b %r10%r10d%r10w%r10b %r11%r11d%r11w%r11b %r12%r12d%r12w%r12b %r13%r13d%r13w%r13b %r14%r14d%r14w%r14b %r15%r15d%r15w%r15b %r16%r16d%r16w%r16b There's other registers present, such as the 64-bit %mm registers, 128-bit %xmm registers, 256-bit %ymm registers, and 512-bit %zmm registers. Except for %mm registers, these registers may not present on older AMD64 processors. Assembly Source The following is the source for a C program, helloas1.c, that calls an assembly function, hello_asm(). $ cat helloas1.c extern void hello_asm(char *s); int main(void) { hello_asm("Hello, World!"); } The assembly function called above, hello_asm(), is defined below. 
$ cat helloas2.s /* * helloas2.s * To build: * cc -m64 -o helloas2-cpp.s -D_ASM -E helloas2.s * cc -m64 -c -o helloas2.o helloas2-cpp.s */ #if defined(lint) || defined(__lint) /* ARGSUSED */ void hello_asm(char *s) { } #else /* lint */ #include <sys/asm_linkage.h> .extern printf ENTRY_NP(hello_asm) // Setup printf parameters on stack mov %rdi, %rsi // P2 (%rsi) is string variable lea .printf_string, %rdi // P1 (%rdi) is printf format string call printf ret SET_SIZE(hello_asm) // Read-only data .text .align 16 .type .printf_string, @object .printf_string: .ascii "The string is: %s.\n\0" #endif /* lint || __lint */ In the assembly source above, the C skeleton code under "#if defined(lint)" is optionally used for lint to check the interfaces with your C program--very useful to catch nasty interface bugs. The "asm_linkage.h" file includes some handy macros useful for assembly, such as ENTRY_NP(), used to define a program entry point, and SET_SIZE(), used to set the function size in the symbol table. The function hello_asm calls C function printf() by passing two parameters, Parameter 1 (P1) is a printf format string, and P2 is a string variable. The function begins by moving %rdi, which contains Parameter 1 (P1) passed hello_asm, to printf()'s P2, %rsi. Then it sets printf's P1, the format string, by loading the address the address of the format string in %rdi, P1. Finally it calls printf. After returning from printf, the hello_asm function returns itself. Larger, more complex assembly functions usually do more setup than the example above. If a function is returning a value, it would set %rax to the return value. Also, it's typical for a function to save the %rbp and %rsp registers of the calling function and to restore these registers before returning. %rsp contains the stack pointer and %rbp contains the frame pointer. Here is the typical function setup and return sequence for a function: ENTRY_NP(sample_assembly_function) push %rbp // save frame pointer on stack mov %rsp, %rbp // save stack pointer in frame pointer xor %rax, %r4ax // set function return value to 0. mov %rbp, %rsp // restore stack pointer pop %rbp // restore frame pointer ret // return to calling function SET_SIZE(sample_assembly_function) Compiling and Running Assembly Use the Solaris cc command to compile both C and assembly source, and to pre-process assembly source. You can also use GNU gcc instead of cc to compile, if you prefer. The "-m64" option tells the compiler to compile in 64-bit address mode (instead of 32-bit). $ cc -m64 -o helloas2-cpp.s -D_ASM -E helloas2.s $ cc -m64 -c -o helloas2.o helloas2-cpp.s $ cc -m64 -c helloas1.c $ cc -m64 -o hello-asm helloas1.o helloas2.o $ file hello-asm helloas1.o helloas2.o hello-asm: ELF 64-bit LSB executable AMD64 Version 1 [SSE FXSR FPU], dynamically linked, not stripped helloas1.o: ELF 64-bit LSB relocatable AMD64 Version 1 helloas2.o: ELF 64-bit LSB relocatable AMD64 Version 1 $ hello-asm The string is: Hello, World!. Debugging Assembly with MDB MDB is the Solaris system debugger. It can also be used to debug user programs, including assembly and C. The following example runs the above program, hello-asm, under control of the debugger. In the example below I load the program, set a breakpoint at the assembly function hello_asm, display the registers and the first parameter, step through the assembly function, and continue execution. 
$ mdb hello-asm # Start the debugger > hello_asm:b # Set a breakpoint > ::run # Run the program under the debugger mdb: stop at hello_asm mdb: target stopped at: hello_asm: movq %rdi,%rsi > $C # display function stack ffff80ffbffff6e0 hello_asm() ffff80ffbffff6f0 0x400adc() > $r # display registers %rax = 0x0000000000000000 %r8 = 0x0000000000000000 %rbx = 0xffff80ffbf7f8e70 %r9 = 0x0000000000000000 %rcx = 0x0000000000000000 %r10 = 0x0000000000000000 %rdx = 0xffff80ffbffff718 %r11 = 0xffff80ffbf537db8 %rsi = 0xffff80ffbffff708 %r12 = 0x0000000000000000 %rdi = 0x0000000000400cf8 %r13 = 0x0000000000000000 %r14 = 0x0000000000000000 %r15 = 0x0000000000000000 %cs = 0x0053 %fs = 0x0000 %gs = 0x0000 %ds = 0x0000 %es = 0x0000 %ss = 0x004b %rip = 0x0000000000400c70 hello_asm %rbp = 0xffff80ffbffff6e0 %rsp = 0xffff80ffbffff6c8 %rflags = 0x00000282 id=0 vip=0 vif=0 ac=0 vm=0 rf=0 nt=0 iopl=0x0 status=<of,df,IF,tf,SF,zf,af,pf,cf> %gsbase = 0x0000000000000000 %fsbase = 0xffff80ffbf782a40 %trapno = 0x3 %err = 0x0 > ::dis # disassemble the current instructions hello_asm: movq %rdi,%rsi hello_asm+3: leaq 0x400c90,%rdi hello_asm+0xb: call -0x220 <PLT:printf> hello_asm+0x10: ret 0x400c81: nop 0x400c85: nop 0x400c88: nop 0x400c8c: nop 0x400c90: pushq %rsp 0x400c91: pushq $0x74732065 0x400c96: jb +0x69 <0x400d01> > 0x0000000000400cf8/S # %rdi contains Parameter 1 0x400cf8: Hello, World! > [ # Step and execute 1 instruction mdb: target stopped at: hello_asm+3: leaq 0x400c90,%rdi > [ mdb: target stopped at: hello_asm+0xb: call -0x220 <PLT:printf> > [ The string is: Hello, World!. mdb: target stopped at: hello_asm+0x10: ret > [ mdb: target stopped at: main+0x19: movl $0x0,-0x4(%rbp) > :c # continue program execution mdb: target has terminated > $q # quit the MDB debugger $ In the example above, at the start of function hello_asm(), I display the stack contents with "$C", display the registers contents with "$r", then disassemble the current function with "::dis". The first function parameter, which is a C string, is passed by reference with the string address in %rdi (see the register usage chart above). The address is 0x400cf8, so I print the value of the string with the "/S" MDB command: "0x0000000000400cf8/S". I can also print the contents at an address in several other formats. Here's a few popular formats. For more, see the mdb(1) man page for details. address/S C string address/C ASCII character (1 byte) address/E unsigned decimal (8 bytes) address/U unsigned decimal (4 bytes) address/D signed decimal (4 bytes) address/J hexadecimal (8 bytes) address/X hexadecimal (4 bytes) address/B hexadecimal (1 bytes) address/K pointer in hexadecimal (4 or 8 bytes) address/I disassembled instruction Finally, I step through each machine instruction with the "[" command, which steps over functions. If I wanted to enter a function, I would use the "]" command. Then I continue program execution with ":c", which continues until the program terminates. MDB Basic Cheat Sheet Here's a brief cheat sheet of some of the more common MDB commands useful for assembly debugging. There's an entire set of macros and more powerful commands, especially some for debugging the Solaris kernel, but that's beyond the scope of this example. $C Display function stack with pointers $c Display function stack $e Display external function names $v Display non-zero variables and registers $r Display registers ::fpregs Display floating point (or "media" registers). Includes %st, %xmm, and %ymm registers. 
::status Display program status ::run Run the program (followed by optional command line parameters) $q Quit the debugger address:b Set a breakpoint address:d Delete a breakpoint $b Display breakpoints :c Continue program execution after a breakpoint [ Step 1 instruction, but step over function calls ] Step 1 instruction address::dis Disassemble instructions at an address ::events Display events Further Information "Assembly Language Techniques for Oracle Solaris on x86 Platforms" by Paul Lowik (2004). Good tutorial on Solaris x86 optimization with assembly. The Solaris Operating System on x86 Platforms An excellent, detailed tutorial on X86 architecture, with Solaris specifics. By an ex-Sun employee, Frank Hofmann (2005). "AMD64 ABI Features", Solaris 64-bit Developer's Guide contains rules on data types and register usage for Intel 64/AMD64-class processors. (available at docs.oracle.com) Solaris X86 Assembly Language Reference Manual (available at docs.oracle.com) SPARC Assembly Language Reference Manual (available at docs.oracle.com) System V Application Binary Interface (2003) defines the AMD64 ABI for UNIX-class operating systems, including Solaris, Linux, and BSD. Google for it—the original website is gone. cc(1), gcc(1), and mdb(1) man pages.
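One piece the compile commands above reference but this excerpt never shows is the C driver, helloas1.c. A minimal sketch of what it might look like follows; the prototype matches the lint skeleton in helloas2.s, but everything else here is an assumption, not the original listing.
/* helloas1.c -- hypothetical sketch of the C caller; not the original listing. */
extern void hello_asm(char *s);    /* assembly routine defined in helloas2.s */

int
main(void)
{
        char msg[] = "Hello, World!";

        hello_asm(msg);
        return (0);
}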

    Read the article

  • unexpected EOF and end of document

    - by WASasquatch
    I have been fiddling with this code for a few days. Mind you I am a beginner. I just want to get my script to be able to download a remote file, and scan MineCraft plugins. I got the scan plugins to work, but I'm having two other issues. One, I can't get the mc_addplugin to work correctly, and I get a Unexpected EOF and unexpected end of document when running any other command besides mc_scanplugins or mc_start bash: -c: line 0: unexpected EOF while looking for matching `"' bash: -c: line 1: syntax error: unexpected end of file Help would be so much appreciated! Thanks in advance. #!/bin/bash # /etc/init.d/craftbukkit # version 0.9.1 2012-07-06 (YYYY-MM-DD) ### BEGIN INIT INFO # Provides: craftbukkit # Required-Start: $local_fs $remote_fs # Required-Stop: $local_fs $remote_fs # Should-Start: $network # Should-Stop: $network # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Starts craftbukkit server # Description: Starts and controls the craftbukkit server ### END INIT INFO # SETTINGS SERVICE='craftbukkit-1.2.5-R1.0.jar' OPTIONS='nogui' USERNAME='smith' # LIST ALL THE WORLDS IN YOUR CRAFTBUKKIT SERVER FOLDER WORLDS[1]='world' WORLDS[2]='world_nether' WORLDS[3]='world_the_end' WORLDS[4]='flat_world' MCPATH='/var/www/servers/Foundation' PLUGINSPATH='/var/www/servers/Foundation/plugins' TEMPPLUGINS='/var/www/servers/Foundationplugins/temp_plugins' BACKUPPATH='/var/www/servers/Foundation/backup' CPU_COUNT=2 INVOCATION="java -Xmx2024M -Xms2024M -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalPacing -XX:ParallelGCThreads=$CPU_COUNT -XX:+AggressiveOpts -jar $SERVICE $OPTIONS" ME=`whoami` as_user() { if [ $ME == $USERNAME ] ; then bash -c "$1" else su - $USERNAME -c "$1" fi } mc_start() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is already running!" else echo "Starting $SERVICE..." cd $MCPATH as_user "cd $MCPATH && screen -dmS craftbukkit $INVOCATION" sleep 7 if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is now running." else echo "Error! Could not start $SERVICE!" fi fi } mc_saveoff() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running... suspending saves" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"say The server is preforming a backup. Server going to read-only mode. Do not build...\"\015'" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"save-off\"\015'" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"save-all\"\015'" sync sleep 10 else echo "$SERVICE is not running. Not suspending saves." fi } mc_save() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running... Saving worlds..." as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"save-all\"\015'" sync sleep 10 echo "Save complete!" else echo "$SERVICE is not running. Cannot save!" fi } mc_saveon() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running... re-enabling saves" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"save-on\"\015'" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"say Server backup has completed. Server going to read-write mode. You can now continue building...\"\015'" else echo "$SERVICE is not running. Not resuming saves." fi } mc_stop() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "Stopping $SERVICE" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"say $SERVERNAME is shutting down in 30 seconds! Please stop what you are doing. 
Check back later, we'll be back!\"\015'" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"save-all\"\015'" sleep 30 as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"stop\"\015'" sleep 7 else echo "$SERVICE was not running." fi if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "Error! $SERVICE could not be stopped." else echo "$SERVICE is stopped." fi } mc_update() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running! Will not start update." else MC_SERVER_URL=http://dl.bukkit.org/latest-rb/craftbukkit.jar as_user "cd $MCPATH && wget -q -O $MCPATH/craftbukkit_server.jar.update $MC_SERVER_URL" if [ -f $MCPATH/craftbukkit_server.jar.update ] then if `diff $MCPATH/$SERVICE $MCPATH/craftbukkit_server.jar.update >/dev/null` then echo "You are already running the latest version of $SERVICE. Update anyway? [Y/n]" select yn in "Yes" "No"; do case $yn in Yes ) as_user "mv $MCPATH/$SERVICE $MCPATH/${SERVICE}_old.jar" as_user "mv $MCPATH/craftbukkit_server.jar.update $MCPATH/$SERVICE" echo "$SERVICE updated successfully!"; break;; No ) echo "The update was not installed! Removing temporary files and exiting..." as_user "rm $MCPATH/craftbukkit_server.jar.update" exit;; esac done else as_user "mv $MCPATH/$SERVICE $MCPATH/${SERVICE}_old.jar" as_user "mv $MCPATH/craftbukkit_server.jar.update $MCPATH/$SERVICE" echo "$SERVICE updated successfully!" fi else echo "$SERVICE update could not be downloaded." fi fi } mc_addplugin() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running! Please stop the service before adding a plugin." else echo "Paste the URL to the .JAR Plugin..." read JARURL JARNAME=$(basename "$JARURL") if [ -d "$TEMPPLUGINS" ] then as_user "cd $PLUGINSPATH && wget -r -A.jar $JARURL -o temp_plugins/$JARNAME" else as_user "cd $PLUGINSPATH && mkdir $TEMPPLUGINS && wget -r -A.jar $JARURL -o temp_plugins/$JARNAME" fi if [ -f "$TMPDIR/$JARNAME" ] then if [ -f "$PLUGINSPATH/$JARNAME" ] then if `diff $PLUGINSPATH/$JARNAME $TMPDIR/$JARNAME >/dev/null` then echo "You are already running the latest version of $JARNAME." else NOW=`date "+%Y-%m-%d_%Hh%M"` echo "Are you sure you want to overwrite this plugin? [Y/n]" echo "Note: Your old plugin will be moved to the "$TEMPPLUGINS" folder with todays date." select yn in "Yes" "No"; do case $yn in Yes ) as_user "mv $PLUGINSPATH/$JARNAME $TEMPPLUGINS/${JARNAME}_${NOW} && mv $TEMPPLUGINS/$JARNAME $PLUGINSPATH/$JARNAME"; break;; No ) echo "The plugin has not been installed! Removing temporary plugin and exiting..." as_user "rm $TEMPPLUGINS/$JARNAME"; exit;; esac done echo "Would you like to start the $SERVICE now? [Y/n]" select yn in "Yes" "No"; do case $yn in Yes ) mc_start; break;; No ) "$SERVICE not running! To start the service run: /etc/init.d/craftbukkit start"; exit;; esac done fi else echo "Are you sure you want to add this new plugin? [Y/n]" select yn in "Yes" "No"; do case $yn in Yes ) as_user "mv $PLUGINSPATH/$JARNAME $TEMPPLUGINS/${JARNAME}_${NOW} && mv $TEMPPLUGINS/$JARNAME $PLUGINSPATH/$JARNAME"; break;; No ) echo "The plugin has not been installed! Removing temporary plugin and exiting..." as_user "rm $TEMPPLUGINS/$JARNAME"; exit;; esac done echo "Would you like to start the $SERVICE now? [Y/n]?" select yn in "Yes" "No"; do case $yn in Yes ) mc_start; break;; No ) "$SERVICE not running! To start the service run: /etc/init.d/craftbukkit start"; exit;; esac done fi else echo "Failed to download the plugin from the URL you specified!" 
exit; fi fi } mc_scanplugins() { if [ "$(ls -A $PLUGINSPATH)" ] then shopt -s nullglob PLUGINS=($PLUGINSPATH/*.jar) i=1 for f in "${PLUGINS[@]}" do echo "${i}: $f" PLUGIN[$i]=$f i=$(( $i + 1 )) done echo "Enter the ID of a plugin you want removed, or any other key to cancel." read INPUT if [ ! -z "${INPUT##*[!0-9]*}" ] then if [ -f "${PLUGIN[INPUT]}" ] then echo "Removing plugin..." JAR=$(basename ${PLUGIN[INPUT]}) JARNAME=${JAR%.jar} as_user "rm -f ${PLUGIN[INPUT]}" sleep 2 as_user "cd $PLUGINSPATH && rm -rf ./${JARNAME}" if [ -f "${PLUGINSPATH}/${JARNAME}" ] then echo "Plugin folder could not be removed..." fi echo "Plugin removed." else echo "${PLUGIN[INPUT]}" echo "Invalid plugin! Does not exist! Canceling..." exit; fi else echo "Canceling..." exit; fi else echo "You have no plugins installed." exit; fi } mc_backup() { mc_saveoff for i in "${WORLDS[@]}"; do NOW=`date "+%Y-%m-%d_%Hh%M"` BACKUP_FILE="$BACKUPPATH/${i}_${NOW}.tar" echo "Backing up world: $i..." #as_user "cd $MCPATH && cp -r $i $BACKUPPATH/${i}_`date "+%Y.%m.%d_%H.%M""` as_user "tar -C \"$MCPATH\" -cf \"$BACKUP_FILE\" $i" done echo "Backing up $SERVICE" as_user "tar -C \"$MCPATH\" -rf \"$BACKUP_FILE\" $SERVICE" #as_user "cp \"$MCPATH/$SERVICE\" \"$BACKUPPATH/craftbukkit_server_${NOW}.jar\"" mc_saveon echo "Compressing backup..." as_user "tar -cvzf $BACKUPPATH/server_backup_${NOW}.tar.gz $MCPATH" echo "Backup has completed successfully." } mc_command() { command="$1"; if pgrep -u $USERNAME -f $SERVICE > /dev/null then pre_log_len=`wc -l "$MCPATH/server.log" | awk '{print $1}'` echo "$SERVICE is running... executing command" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"$command\"\015'" sleep .1 # assumes that the command will run and print to the log file in less than .1 seconds # print output tail -n $[`wc -l "$MCPATH/server.log" | awk '{print $1}'`-$pre_log_len] "$MCPATH/server.log" fi } #Start-Stop here case "$1" in start) mc_start ;; stop) mc_stop ;; restart) mc_stop mc_start ;; save) mc_save ;; update) mc_stop mc_backup mc_update mc_start ;; scanplugins) mc_scanplugins ;; addplugin) mc_addplugin ;; backup) mc_backup ;; status) if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running." else echo "$SERVICE is not running." fi ;; command) if [ $# -gt 1 ]; then shift mc_command "$*" else echo "Must specify server command (try 'help'?)" fi ;; *) echo "Usage: $0 {start|stop|update|backup|status|restart|command \"server command\"}" exit 1 ;; esac exit 0
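    One hedged observation on the EOF error itself: the shutdown message in mc_stop contains an apostrophe (we'll) inside the single-quoted screen command, and once as_user re-parses that string with bash -c or su -c the quoting no longer balances, which is one common way to get exactly the "unexpected EOF while looking for matching" complaint. An illustrative sketch, not part of the original script:
# Illustrative only -- the apostrophe ends the inner single-quoted section early
# when the string is re-parsed by `bash -c "$1"` inside as_user:
as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"say we'll be back soon\"\015'"   # quoting unbalances on re-parse

# Rewording the message (or escaping it differently) keeps the quoting balanced:
as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"say we will be back soon\"\015'" # parses cleanly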

    Read the article

  • Slow queries in Rails - not sure if my indexes are being used.

    - by Max Williams
    I'm doing a quite complicated find with lots of includes, which rails is splitting into a sequence of discrete queries rather than do a single big join. The queries are really slow - my dataset isn't massive, with none of the tables having more than a few thousand records. I have indexed all of the fields which are examined in the queries but i'm worried that the indexes aren't helping for some reason: i installed a plugin called "query_reviewer" which looks at the queries used to build a page, and lists problems with them. This states that indexes AREN'T being used, and it features the results of calling 'explain' on the query, which lists various problems. Here's an example find call: Question.paginate(:all, {:page=>1, :include=>[:answers, :quizzes, :subject, {:taggings=>:tag}, {:gradings=>[:age_group, :difficulty]}], :conditions=>["((questions.subject_id = ?) or (questions.subject_id = ? and tags.name = ?))", "1", 19, "English"], :order=>"subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id", :per_page=>30}) And here are the generated sql queries: SELECT DISTINCT `questions`.id FROM `questions` LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question' LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id WHERE (((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English'))) ORDER BY subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id LIMIT 0, 30 SELECT `questions`.`id` AS t0_r0 <..etc...> FROM `questions` LEFT OUTER JOIN `answers` ON answers.question_id = questions.id LEFT OUTER JOIN `quiz_questions` ON (`questions`.`id` = `quiz_questions`.`question_id`) LEFT OUTER JOIN `quizzes` ON (`quizzes`.`id` = `quiz_questions`.`quiz_id`) LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question' LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id LEFT OUTER JOIN `age_groups` ON `age_groups`.id = `gradings`.age_group_id LEFT OUTER JOIN `difficulties` ON `difficulties`.id = `gradings`.difficulty_id WHERE (((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English'))) AND `questions`.id IN (602, 634, 666, 698, 730, 762, 613, 645, 677, 709, 741, 592, 624, 656, 688, 720, 752, 603, 635, 667, 699, 731, 763, 614, 646, 678, 710, 742, 593, 625) ORDER BY subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id SELECT count(DISTINCT `questions`.id) AS count_all FROM `questions` LEFT OUTER JOIN `answers` ON answers.question_id = questions.id LEFT OUTER JOIN `quiz_questions` ON (`questions`.`id` = `quiz_questions`.`question_id`) LEFT OUTER JOIN `quizzes` ON (`quizzes`.`id` = `quiz_questions`.`quiz_id`) LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question' LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id LEFT OUTER JOIN `age_groups` ON `age_groups`.id = `gradings`.age_group_id LEFT OUTER JOIN `difficulties` ON `difficulties`.id = `gradings`.difficulty_id WHERE 
(((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English'))) Actually, looking at these all nicely formatted here, there's a crazy amount of joining going on here. This can't be optimal surely. Anyway, it looks like i have two questions. 1) I have an index on each of the ids and foreign key fields referred to here. The second of the above queries is the slowest, and calling explain on it (doing it directly in mysql) gives me the following: +----+-------------+----------------+--------+---------------------------------------------------------------------------------+-------------------------------------------------+---------+------------------------------------------------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+----------------+--------+---------------------------------------------------------------------------------+-------------------------------------------------+---------+------------------------------------------------+------+----------------------------------------------+ | 1 | SIMPLE | questions | range | PRIMARY,index_questions_on_subject_id | PRIMARY | 4 | NULL | 30 | Using where; Using temporary; Using filesort | | 1 | SIMPLE | answers | ref | index_answers_on_question_id | index_answers_on_question_id | 5 | millionaire_development.questions.id | 2 | | | 1 | SIMPLE | quiz_questions | ref | index_quiz_questions_on_question_id | index_quiz_questions_on_question_id | 5 | millionaire_development.questions.id | 1 | | | 1 | SIMPLE | quizzes | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.quiz_questions.quiz_id | 1 | | | 1 | SIMPLE | subjects | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.questions.subject_id | 1 | | | 1 | SIMPLE | taggings | ref | index_taggings_on_taggable_id_and_taggable_type,index_taggings_on_taggable_type | index_taggings_on_taggable_id_and_taggable_type | 263 | millionaire_development.questions.id,const | 1 | | | 1 | SIMPLE | tags | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.taggings.tag_id | 1 | Using where | | 1 | SIMPLE | gradings | ref | index_gradings_on_question_id | index_gradings_on_question_id | 5 | millionaire_development.questions.id | 2 | | | 1 | SIMPLE | age_groups | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.gradings.age_group_id | 1 | | | 1 | SIMPLE | difficulties | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.gradings.difficulty_id | 1 | | +----+-------------+----------------+--------+---------------------------------------------------------------------------------+-------------------------------------------------+---------+------------------------------------------------+------+----------------------------------------------+ The query_reviewer plugin has this to say about it - it lists several problems: Table questions: Using temporary table, Long key length (263), Using filesort MySQL must do an extra pass to find out how to retrieve the rows in sorted order. To resolve the query, MySQL needs to create a temporary table to hold the result. The key used for the index was rather long, potentially affecting indices in memory 2) It looks like rails isn't splitting this find up in a very optimal way. Is it, do you think? Am i better off doing several find queries manually rather than one big combined one? Grateful for any advice, max
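    One hedged, illustrative idea on the indexing side (not a diagnosis): in the EXPLAIN output the gradings join only has question_id indexed, so a composite index covering the grading columns used in the ORDER BY is sometimes tried; the index name below is made up, and because the sort mixes columns from several joined tables the filesort itself may well remain.
-- Illustrative only; compare the EXPLAIN output before and after.
CREATE INDEX index_gradings_on_question_age_difficulty
  ON gradings (question_id, age_group_id, difficulty_id);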

    Read the article

  • PHP and XPath Loop

    - by user1794852
    Thank you all in advance. I've got some great answers to my sometimes stupid questions, so thank you again. I'm trying to parse a SOAP response using PHP, Xpath (namespaces) and SimpleXML. Down below is a snippet of the Response. What I need to do is loop through each <ns1:file></ns1:file> and add it to a DB. But I'm not sure how to do that. Please help! Namespace Stuff $x = simplexml_load_string($response); $x->registerXPathNamespace('ns1', 'http://ws.icontent.idefense.com/V3/2'); Here's the response: <ns1:mal_files> <ns1:file> <ns1:id>2895144</ns1:id> <ns1:md5>2189c3d3857ba0cabd19c8aa031d63cd</ns1:md5> <ns1:sha1>c20b26148caa059ecf85e9b29df4e28e8354d655</ns1:sha1> <ns1:path>%WINDIR%\Temp\Temporary Internet Files\Content.IE5\K9ANOPQB\1219831[1].htm</ns1:path> <ns1:size>110</ns1:size> <ns1:code_available>true</ns1:code_available> <ns1:len_char>fixed</ns1:len_char> </ns1:file> <ns1:file> <ns1:id>2895147</ns1:id> <ns1:md5>a533825ef1752630a300125b3eef6825</ns1:md5> <ns1:sha1>ec7feb1414b3c2e720f6c06e2750421b73634f87</ns1:sha1> <ns1:path>%WINDIR%\Temp\Temporary Internet Files\Content.IE5\4PMB8T67\bg[1].jpg</ns1:path> <ns1:size>707</ns1:size> <ns1:code_available>true</ns1:code_available> <ns1:len_char>fixed</ns1:len_char> </ns1:file> <ns1:file> <ns1:id>2895155</ns1:id> <ns1:md5>c88724e985efcc82173c0d3aa0b77dfd</ns1:md5> <ns1:sha1>ecc042ca06aac988cd4593bb2b25fa39c4b2a819</ns1:sha1> <ns1:path>%WINDIR%\Prefetch\24604775.EXE-3ADEC0C2.pf</ns1:path> <ns1:size>44940</ns1:size> <ns1:code_available>true</ns1:code_available> <ns1:len_char>fixed</ns1:len_char> </ns1:file> <ns1:file> <ns1:id>2895158</ns1:id> <ns1:md5>422a011793af6195ea517c2c4a26bdbc</ns1:md5> <ns1:sha1>f449240c091c27e2531342d6b291d6a7e4655834</ns1:sha1> <ns1:path>%WINDIR%\Temp\Temporary Internet Files\Content.IE5\KPYV0TM7\gamesleap[1].png</ns1:path> <ns1:size>2676</ns1:size> <ns1:code_available>true</ns1:code_available> <ns1:len_char>fixed</ns1:len_char> </ns1:file> </ns1:mal_files> PHP Code $md5 = $x->xpath('//ns1:mal_files/ns1:file/ns1:md5'); // Returns array :( The PHP code returns an Array of all MD5 for every <ns1:file></ns1:file> and that's not what I need. What I need is to loop through each of the (nodes?) for each of the <ns1:file> keying on the <ns1:id> and add all their associated elements (id, md5, sha1, etc) as a record, then move onto the next <ns1:file> and repeat. And obviously the PHP I'm using returns an array, which doesn't work here. Hopefully I've made my question clear... I appreciate any help. Thanks!
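    A sketch of the kind of loop that usually works here (based on the snippet above; the DB call at the end is only a placeholder, not a real function):
$x = simplexml_load_string($response);
$x->registerXPathNamespace('ns1', 'http://ws.icontent.idefense.com/V3/2');

// XPath to the <ns1:file> nodes themselves, then read each node's children
// through the same namespace URI.
foreach ($x->xpath('//ns1:mal_files/ns1:file') as $file) {
    $f = $file->children('http://ws.icontent.idefense.com/V3/2');

    $record = array(
        'id'   => (string) $f->id,
        'md5'  => (string) $f->md5,
        'sha1' => (string) $f->sha1,
        'path' => (string) $f->path,
        'size' => (int)    $f->size,
    );

    // save_record_to_db($record);  // placeholder for whatever DB layer is in use
}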

    Read the article

  • C++ copy-construct construct-and-assign question

    - by Andy
    Blockquote Here is an extract from item 56 of the book "C++ Gotchas": It's not uncommon to see a simple initialization of a Y object written any of three different ways, as if they were equivalent. Y a( 1066 ); Y b = Y(1066); Y c = 1066; In point of fact, all three of these initializations will probably result in the same object code being generated, but they're not equivalent. The initialization of a is known as a direct initialization, and it does precisely what one might expect. The initialization is accomplished through a direct invocation of Y::Y(int). The initializations of b and c are more complex. In fact, they're too complex. These are both copy initializations. In the case of the initialization of b, we're requesting the creation of an anonymous temporary of type Y, initialized with the value 1066. We then use this anonymous temporary as a parameter to the copy constructor for class Y to initialize b. Finally, we call the destructor for the anonymous temporary. To test this, I did a simple class with a data member (program attached at the end) and the results were surprising. It seems that for the case of b, the object was constructed by the copy constructor rather than as suggested in the book. Does anybody know if the language standard has changed or is this simply an optimisation feature of the compiler? I was using Visual Studio 2008. Code sample: #include <iostream> class Widget { std::string name; public: // Constructor Widget(std::string n) { name=n; std::cout << "Constructing Widget " << this->name << std::endl; } // Copy constructor Widget (const Widget& rhs) { std::cout << "Copy constructing Widget from " << rhs.name << std::endl; } // Assignment operator Widget& operator=(const Widget& rhs) { std::cout << "Assigning Widget from " << rhs.name << " to " << this->name << std::endl; return *this; } }; int main(void) { // construct Widget a("a"); // copy construct Widget b(a); // construct and assign Widget c("c"); c = a; // copy construct! Widget d = a; // construct! Widget e = "e"; // construct and assign Widget f = Widget("f"); return 0; } Output: Constructing Widget a Copy constructing Widget from a Constructing Widget c Assigning Widget from a to c Copy constructing Widget from a Constructing Widget e Constructing Widget f Copy constructing Widget from f I was most surprised by the results of constructing d and e.
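    For what it's worth, the results for d, e and f are consistent with copy elision, which the standard has always permitted for copy-initialization from a temporary; the compiler is allowed (not required) to skip the temporary, so nothing in the language changed. A hedged restatement using the names from the program above:
// Assumes the Widget class defined in the question above; the comments are the point here.
int main()
{
    Widget a("a");           // direct-initialization: exactly one constructor call
    Widget e = "e";          // copy-initialization: notionally a temporary Widget then a copy,
                             // but the temporary may be elided, hence the single "Constructing Widget e"
    Widget f = Widget("f");  // copy-initialization from an explicit temporary; also eligible for
                             // elision, which is why different compilers/settings print different things
    return 0;
}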

    Read the article

  • How can I make the storage of C++ lambda objects more efficient?

    - by Peter Ruderman
    I've been thinking about storing C++ lambda's lately. The standard advice you see on the Internet is to store the lambda in a std::function object. However, none of this advice ever considers the storage implications. It occurred to me that there must be some seriously black voodoo going on behind the scenes to make this work. Consider the following class that stores an integer value: class Simple { public: Simple( int value ) { puts( "Constructing simple!" ); this->value = value; } Simple( const Simple& rhs ) { puts( "Copying simple!" ); this->value = rhs.value; } Simple( Simple&& rhs ) { puts( "Moving simple!" ); this->value = rhs.value; } ~Simple() { puts( "Destroying simple!" ); } int Get() const { return this->value; } private: int value; }; Now, consider this simple program: int main() { Simple test( 5 ); std::function<int ()> f = [test] () { return test.Get(); }; printf( "%d\n", f() ); } This is the output I would hope to see from this program: Constructing simple! Copying simple! Moving simple! Destroying simple! 5 Destroying simple! Destroying simple! First, we create the value test. We create a local copy on the stack for the temporary lambda object. We then move the temporary lambda object into memory allocated by std::function. We destroy the temporary lambda. We print our output. We destroy the std::function. And finally, we destroy the test object. Needless to say, this is not what I see. When I compile this on Visual C++ 2010 (release or debug mode), I get this output: Constructing simple! Copying simple! Copying simple! Copying simple! Copying simple! Destroying simple! Destroying simple! Destroying simple! 5 Destroying simple! Destroying simple! Holy crap that's inefficient! Not only did the compiler fail to use my move constructor, but it generated and destroyed two apparently superfluous copies of the lambda during the assignment. So, here finally are the questions: (1) Is all this copying really necessary? (2) Is there some way to coerce the compiler into generating better code? Thanks for reading!
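    As a hedged comparison, here are two workarounds that usually cut down the churn with the same Simple class; how much they actually help still depends on the particular std::function implementation:
// Sketch only -- assumes the Simple class from the question is in scope.
#include <cstdio>
#include <functional>
#include <utility>

int main()
{
    Simple test( 5 );

    // 1. Build the closure once, then move it into the std::function, so the
    //    captured copy is moved rather than copied again (implementation permitting).
    auto lambda = [test] () { return test.Get(); };
    std::function<int ()> f( std::move( lambda ) );

    // 2. Or capture by reference and copy nothing at all -- safe only while
    //    'test' outlives the std::function.
    std::function<int ()> g( [&test] () { return test.Get(); } );

    printf( "%d %d\n", f(), g() );
    return 0;
}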

    Read the article

  • Normalizing a table

    - by Alex
    I have a legacy table, which I can't change. The values in it can be modified from legacy application (application also can't be changed). Due to a lot of access to the table from new application (new requirement), I'd like to create a temporary table, which would hopefully speed up the queries. The actual requirement, is to calculate number of business days from X to Y. For example, give me all business days from Jan 1'st 2001 until Dec 24'th 2004. The table is used to mark which days are off, as different companies may have different days off - it isn't just Saturday + Sunday) The temporary table would be created from a .NET program, each time user enters the screen for this query (user may run query multiple times, with different values, table is created once), so I'd like it to be as fast as possible. Approach below runs in under a second, but I only tested it with a small dataset, and still it takes probably close to half a second, which isn't great for UI - even though it's just the overhead for first query. The legacy table looks like this: CREATE TABLE [business_days]( [country_code] [char](3) , [state_code] [varchar](4) , [calendar_year] [int] , [calendar_month] [varchar](31) , [calendar_month2] [varchar](31) , [calendar_month3] [varchar](31) , [calendar_month4] [varchar](31) , [calendar_month5] [varchar](31) , [calendar_month6] [varchar](31) , [calendar_month7] [varchar](31) , [calendar_month8] [varchar](31) , [calendar_month9] [varchar](31) , [calendar_month10] [varchar](31) , [calendar_month11] [varchar](31) , [calendar_month12] [varchar](31) , misc. ) Each month has 31 characters, and any day off (Saturday + Sunday + holiday) is marked with X. Each half day is marked with an 'H'. For example, if a month starts on a Thursday, than it will look like (Thursday+Friday workdays, Saturday+Sunday marked with X): ' XX XX ..' I'd like the new table to look like so: create table #Temp (country varchar(3), state varchar(4), date datetime, hours int) And I'd like to only have rows for days which are off (marked with X or H from previous query) What I ended up doing, so far is this: Create a temporary-intermediate table, that looks like this: create table #Temp_2 (country_code varchar(3), state_code varchar(4), calendar_year int, calendar_month varchar(31), month_code int) To populate it, I have a union which basically unions calendar_month, calendar_month2, calendar_month3, etc. Than I have a loop which loops through all the rows in #Temp_2, after each row is processed, it is removed from #Temp_2. To process the row there is a loop from 1 to 31, and substring(calendar_month, counter, 1) is checked for either X or H, in which case there is an insert into #Temp table. 
[edit added code] Declare @country_code char(3) Declare @state_code varchar(4) Declare @calendar_year int Declare @calendar_month varchar(31) Declare @month_code int Declare @calendar_date datetime Declare @day_code int WHILE EXISTS(SELECT * From #Temp_2) -- where processed = 0) BEGIN Select Top 1 @country_code = t2.country_code, @state_code = t2.state_code, @calendar_year = t2.calendar_year, @calendar_month = t2.calendar_month, @month_code = t2.month_code From #Temp_2 t2 -- where processed = 0 set @day_code = 1 while @day_code <= 31 begin if substring(@calendar_month, @day_code, 1) = 'X' begin set @calendar_date = convert(datetime, (cast(@month_code as varchar) + '/' + cast(@day_code as varchar) + '/' + cast(@calendar_year as varchar))) insert into #Temp (country, state, date, hours) values (@country_code, @state_code, @calendar_date, 8) end if substring(@calendar_month, @day_code, 1) = 'H' begin set @calendar_date = convert(datetime, (cast(@month_code as varchar) + '/' + cast(@day_code as varchar) + '/' + cast(@calendar_year as varchar))) insert into #Temp (country, state, date, hours) values (@country_code, @state_code, @calendar_date, 4) end set @day_code = @day_code + 1 end delete from #Temp_2 where @country_code = country_code AND @state_code = state_code AND @calendar_year = calendar_year AND @calendar_month = calendar_month AND @month_code = month_code --update #Temp_2 set processed = 1 where @country_code = country_code AND @state_code = state_code AND @calendar_year = calendar_year AND @calendar_month = calendar_month AND @month_code = month_code END I am not an expert in SQL, so I'd like to get some input on my approach, and maybe even a much better approach suggestion. After having the temp table, I'm planning to do (dates would be coming from a table): select cast(convert(datetime, ('01/31/2012'), 101) -convert(datetime, ('01/17/2012'), 101) as int) - ((select sum(hours) from #Temp where date between convert(datetime, ('01/17/2012'), 101) and convert(datetime, ('01/31/2012'), 101)) / 8) Besides the solution of normalizing the table, the other solution I implemented for now, is a function which does all this logic of getting the business days by scanning the current table. It runs pretty fast, but I'm hesitant to call a function, if I can instead add a simpler query to get result. (I'm currently trying this on MSSQL, but I would need to do same for Sybase ASE and Oracle)
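For comparison, the per-row WHILE loops above can usually be collapsed into one set-based INSERT driven by a small numbers table holding 1..31. A hedged T-SQL sketch follows, reusing the temp-table names from the question; the Numbers helper table is an assumption, and the Sybase ASE and Oracle versions would need the date conversion rewritten.
-- Illustrative sketch only (T-SQL flavour); assumes a helper table Numbers(n) with values 1..31.
INSERT INTO #Temp (country, state, date, hours)
SELECT  t2.country_code,
        t2.state_code,
        CONVERT(datetime,
                CAST(t2.month_code    AS varchar(2)) + '/' +
                CAST(n.n              AS varchar(2)) + '/' +
                CAST(t2.calendar_year AS varchar(4)), 101),
        CASE SUBSTRING(t2.calendar_month, n.n, 1) WHEN 'X' THEN 8 ELSE 4 END
FROM    #Temp_2 AS t2
JOIN    Numbers AS n
  ON    n.n BETWEEN 1 AND 31
WHERE   SUBSTRING(t2.calendar_month, n.n, 1) IN ('X', 'H');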

    Read the article

  • can't integrate base64decode into my class

    - by MaheshBabu
    Hi folks, i am getting the image that is in base64 encoded format. I need to decode it. i am writing the code for decoding is + (NSData *) base64DataFromString: (NSString *)string { unsigned long ixtext, lentext; unsigned char ch, input[4], output[3]; short i, ixinput; Boolean flignore, flendtext = false; const char *temporary; NSMutableData *result; if (!string) return [NSData data]; ixtext = 0; temporary = [string UTF8String]; lentext = [string length]; result = [NSMutableData dataWithCapacity: lentext]; ixinput = 0; while (true) { if (ixtext >= lentext) break; ch = temporary[ixtext++]; flignore = false; if ((ch >= 'A') && (ch <= 'Z')) ch = ch - 'A'; else if ((ch >= 'a') && (ch <= 'z')) ch = ch - 'a' + 26; else if ((ch >= '0') && (ch <= '9')) ch = ch - '0' + 52; else if (ch == '+') ch = 62; else if (ch == '=') flendtext = true; else if (ch == '/') ch = 63; else flignore = true; if (!flignore) { short ctcharsinput = 3; Boolean flbreak = false; if (flendtext) { if (ixinput == 0) break; if ((ixinput == 1) || (ixinput == 2)) { ctcharsinput = 1; else ctcharsinput = 2; ixinput = 3; flbreak = true; } input[ixinput++] = ch; if (ixinput == 4) ixinput = 0; output[0] = (input[0] << 2) | ((input[1] & 0x30) >> 4); output[1] = ((input[1] & 0x0F) << 4) | ((input[2] & 0x3C) >> 2); output[2] = ((input[2] & 0x03) << 6) | (input[3] & 0x3F); for (i = 0; i < ctcharsinput; i++) [result appendBytes: &output[i] length: 1]; } if (flbreak) break; } return result; } i am calling this in my method like this NSData *data = [base64DataFromString:theXML]; theXML is encoded data. but it shows error decodeBase64 undeclared. How can i use this method. can any one pls help me. Thank u in advance.
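    One hedged note on the "undeclared" error itself: the call shown has no receiver, and a class method has to be invoked on the class (and declared in the header so the caller can see it). A sketch, with MyDecoder standing in for whatever class actually holds the method:
// In the header -- declare the class method so callers see the selector:
@interface MyDecoder : NSObject
+ (NSData *) base64DataFromString: (NSString *) string;
@end

// At the call site -- the class itself is the receiver:
NSData *data = [MyDecoder base64DataFromString: theXML];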

    Read the article

  • SQLAuthority News – The Best Quotes of “Who Wrote This?” Contest

    - by pinaldave
    I am a frequent reader of Brent Ozar PLF, it is one of my favorite blogs. A recent post announced a “Who Wrote This?” contest to see if readers could tell their three contributors apart based on some writing samples. Here are my favorite lines from the sample paragraphs, from each of the three “mystery authors.” Topic 1: Working with Bad Managers Mystery Author A – “Working with bad managers means working against my own happiness, and I’ve come to learn that there’s no changing bad managers.” I love this line because, as anyone who has had a bad manager knows, often a lot of self-doubt rises up. We all have to remember that sometimes the problem is out of our control. Mystery Author B – “Mentor your manager just like you would mentor a junior DBA.” Having a bad manager can be extremely depressing, and we often feel out of control. But we all need to remember that our work is a two-way street, and that sometimes we can subtly influence those above us. Mystery Author C – “The trick to working for all bad managers is to remember that they aren’t your parent. Take charge of your career.” We all also need to learn not to play the blame game. Would you rather stay in a place where you are unhappy, or would you rather take charge of your life? I hope most people would pick the latter. Topic 2: Working with Remote Teams Mystery Author A – “Like almost anything else the key is to make sure that everyone on the team has an understanding of how and when communication will occur.” Communication is so important. I cannot over emphasize how much. And this one line captures how I feel and even communicates the idea clearly! Mystery Author B – “The key to remote team success is verifiable trust: feeling confident that invisible team members are doing the right amount of the right thing at the right time.” I think this line not only captures the key aspects of remote work – verifiable work and trust – but there were so many lines that followed that I loved and could not fit here. The whole paragraph is a list for successful remote work. Everyone could benefit from reading it. Mystery Author C – “What seems clear, precise, and specific in one time zone comes across as vague, soupy, and just plain weird in another.” You know what? I just love this description. The author is right – sometimes vague e-mails really do seem soupy and weird! Topic 3: Working with Your Nemesis Mystery Author A – “Every job is temporary, but your reputation stays with you.” Everyone needs to remember this. The workplace is meant to be a professional arena, and many people have the opinion that work is temporary and disposable. No one wants to work with co-worker like that. Mystery Author B – “Unhealthy conflict is going to lead to leaving three week old tuna fish sandwiches in someone’s desk drawer.” Sometimes humor really is the best policy! Mystery Author C – “Oh no, it’s that guy.” This might seem like a weird phrase to choose as my favorite from an entire paragraph. But the whole piece was written in the form of a story of co-workers getting drunk and plotting against a nemesis. It was too funny to overlook, but too long to post here. A must read! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • The type 'XXX' is defined in an assembly that is not referenced exception after upgrade to ASP.NET 4

    - by imran_ku07
       Introduction: I found two posts in the ASP.NET MVC forums complaining that they were getting the exception, The type XXX is defined in an assembly that is not referenced, after upgrading their applications to Visual Studio 2010 and .NET Framework 4.0, at here and here. Description: The reason they are getting the above exception is the use of the new, clean web.config without referencing the assemblies which were present in the ASP.NET 3.5 web.config. The quick solution for this problem is to add the old assemblies to the new web.config: <assemblies> <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Data.Entity, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" /> <add assembly="System.Data.Linq, Version=4.0.0.0, Culture=neutral, publicKeyToken=b77a5c561934e089" /> </assemblies> How It Works: Currently I have not tested the above scenario in ASP.NET 4.0 because I do not have it yet. But the above scenario can easily be tested and verified in VS 2008. Just open the root web.config and remove <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> Even if you add a reference to System.Core in your project, you will still get the above exception because aspx pages are compiled into a separate assembly. You can check this yourself by checking Show Detailed Compiler Output: below in the yellow screen of death, you will find something like /out:"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\e907aee4\5fa0acc8\App_Web_y5rd6bdg.dll" This shows that aspx pages are compiled into a separate assembly in Temporary ASP.NET Files. Summary: After getting the above exception, make sure to add the assemblies in web.config or add the Assembly directive at Page level. Hopefully this will help solve your problem.
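    For the page-level alternative mentioned in the Summary, the directive takes the full assembly name, for example (using the same version/culture tokens as the web.config entries above):
<%@ Assembly Name="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" %>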

    Read the article

  • Inappropriate Updates?

    - by Tony Davis
    A recent Simple-talk article by Kathi Kellenberger dissected the fastest SQL solution, submitted by Peter Larsson as part of Phil Factor's SQL Speed Phreak challenge, to the classic "running total" problem. In its analysis of the code, the article re-ignited a heated debate regarding the techniques that should, and should not, be deemed acceptable in your search for fast SQL code. Peter's code for running total calculation uses a variation of a somewhat contentious technique, sometimes referred to as a "quirky update": SET @Subscribers = Subscribers = @Subscribers + PeopleJoined - PeopleLeft This form of the UPDATE statement, @variable = column = expression, is documented and it allows you to set a variable to the value returned by the expression. Microsoft does not guarantee the order in which rows are updated in this technique because, in relational theory, a table doesn’t have a natural order to its rows and the UPDATE statement has no means of specifying the order. Traditionally, in cases where a specific order is requires, such as for running aggregate calculations, programmers who used the technique have relied on the fact that the UPDATE statement, without the WHERE clause, is executed in the order imposed by the clustered index, or in heap order, if there isn’t one. Peter wasn’t satisfied with this, and so used the ingenious device of assuring the order of the UPDATE by the use of an "ordered CTE", based on an underlying temporary staging table (a heap). However, in either case, the ordering is still not guaranteed and, in addition, would be broken under conditions of parallelism, or partitioning. Many argue, with validity, that this reliance on a given order where none can ever be guaranteed is an abuse of basic relational principles, and so is a bad practice; perhaps even irresponsible. More importantly, Microsoft doesn't wish to support the technique and offers no guarantee that it will always work. If you put it into production and it breaks in a later version, you can't file a bug. As such, many believe that the technique should never be tolerated in a production system, under any circumstances. Is this attitude justified? After all, both forms of the technique, using a clustered index to guarantee the order or using an ordered CTE, have been tested rigorously and are proven to be robust; although not guaranteed by Microsoft, the ordering is reliable, provided none of the conditions that are known to break it are violated. In Peter's particular case, the technique is being applied to a temporary table, where the developer has full control of the data ordering, and indexing, and knows that the table will never be subject to parallelism or partitioning. It might be argued that, in such circumstances, the technique is not really "quirky" at all and to ban it from your systems would server no real purpose other than to deprive yourself of a reliable technique that has uses that extend well beyond the running total calculations. Of course, it is doubly important that such a technique, including its unsupported status and the assumptions that underpin its success, is fully and clearly documented, preferably even when posting it online in a competition or forum post. Ultimately, however, this technique has been available to programmers throughout the time Sybase and SQL Server has existed, and so cannot be lightly cast aside, even if one sympathises with Microsoft for the awkwardness of maintaining an archaic way of doing updates. 
After all, a Table hint could easily be devised that, if specified in the WITH (<Table_Hint_Limited>) clause, could be used to request the database engine to do the update in the conventional order. Then perhaps everyone would be satisfied. Cheers, Tony.

    Read the article

< Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >