Search Results

Search found 19871 results on 795 pages for 'commit log'.


  • using munin-plugins-rails to monitor rails app performance

    - by user2099762
    I have been trying to configure munin-plugins-rails to monitor the performance of our Rails apps from Munin. The graphs appear, but no data is shown in them. The log files show error output from: 2013/06/27-15:39:06 [5540] Request-log-analyzer, by Willem van Bergen and $ 2013/06/27-15:39:06 [5540] Website: http://railsdoctors.com I have tried running request-log-analyzer manually, pointing it at the production log file, and it reports just a % for every item, even though there is data in the log file. I have also tried changing the version of the installed gems and the type of the log file, but no luck. Any ideas, anyone? Thanks

    Read the article

  • Default Keyboard for new users in Windows 7

    - by xited
    I just installed Windows 7 and I want all users signing in to the computer to see the Language Bar customized with the following three languages: "English (American)" "French (Standard)" "Chinese (Simplified PRC)" I am running the following four lines of code at logon in order to change the registry so that each user will see the language bar and then have access to the three keyboard layouts mentioned above. reg add "HKCU\Software\Microsoft\CTF\LangBar" /v ShowStatus /t REG_DWORD /d 4 /f reg add "HKCU\Keyboard Layout\Preload" /v 2 /d 0000040c reg add "HKCU\Keyboard Layout\Preload" /v 3 /d 00000c0a reg add "HKCU\Keyboard Layout\Preload" /v 4 /d 00000804 The above works fine, but with one small yet major inconvenience: the user has to log off and then log back on in order for these changes to take effect and for the language bar to appear as described above. The question becomes: how can I force these changes to take effect so that users don't have to log off and log back in to see the language bar? This has to happen automatically when users log in.
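    As a rough illustration of the same per-user registry change outside cmd, here is a minimal Python sketch using the standard winreg module. It is not from the original question, it writes the same keys and values as the reg add lines above, and it does not remove the log-off requirement by itself.

        # Sketch: apply the same HKCU changes as the four "reg add" commands above.
        # Assumes it runs in the target user's session; a log-off is normally still
        # needed before the Preload list is re-read by the input framework.
        import winreg

        def set_language_bar():
            # HKCU\Software\Microsoft\CTF\LangBar\ShowStatus = 4 (REG_DWORD)
            with winreg.CreateKey(winreg.HKEY_CURRENT_USER,
                                  r"Software\Microsoft\CTF\LangBar") as key:
                winreg.SetValueEx(key, "ShowStatus", 0, winreg.REG_DWORD, 4)

            # HKCU\Keyboard Layout\Preload values 2..4 are REG_SZ layout IDs
            layouts = {"2": "0000040c", "3": "00000c0a", "4": "00000804"}
            with winreg.CreateKey(winreg.HKEY_CURRENT_USER,
                                  r"Keyboard Layout\Preload") as key:
                for name, layout_id in layouts.items():
                    winreg.SetValueEx(key, name, 0, winreg.REG_SZ, layout_id)

        if __name__ == "__main__":
            set_language_bar()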

    Read the article

  • seaudit report detail

    - by user1014130
    I've just started using SELinux in the last 6 months and am getting to grips with it. However, using sealert on a new CentOS 6 server, I'm not getting the level of detail I was with CentOS 5. To illustrate, running sealert -a /var/log/audit/audit.log on CentOS 5 I get: Summary: SELinux is preventing postdrop (postfix_postdrop_t) "getattr" to /var/log/httpd/error_log (httpd_log_t). Detailed Description: SELinux denied access requested by postdrop. It is not expected that this access is required by postdrop and this access may signal an intrusion attempt. It is also possible that the specific version or configuration of the application is causing it to require additional access. Allowing Access: Sometimes labeling problems can cause SELinux denials. You could try to restore the default system file context for /var/log/httpd/error_log, restorecon -v '/var/log/httpd/error_log' If this does not work, there is currently no automatic way to allow this access. Instead, you can generate a local policy module to allow this access - see FAQ (http://fedora.redhat.com/docs/selinux-faq-fc5/#id2961385) Or you can disable SELinux protection altogether. Disabling SELinux protection is not recommended. Please file a bug report (http://bugzilla.redhat.com/bugzilla/enter_bug.cgi) against this package. Additional Information: Source Context root:system_r:postfix_postdrop_t Target Context system_u:object_r:httpd_log_t Target Objects /var/log/httpd/error_log [ file ] Source postdrop Source Path /usr/sbin/postdrop Port Host Source RPM Packages postfix-2.3.3-2.1.el5_2 Target RPM Packages Policy RPM selinux-policy-2.4.6-279.el5_5.1 Selinux Enabled True Policy Type targeted MLS Enabled True Enforcing Mode Enforcing Plugin Name catchall_file Host Name server109-228-26-144.live-servers.net Platform Linux server109-228-26-144.live-servers.net 2.6.18-194.8.1.el5 #1 SMP Thu Jul 1 19:04:48 EDT 2010 x86_64 x86_64 Alert Count 1 First Seen Wed Jun 13 11:43:55 2012 Last Seen Wed Jun 13 11:43:55 2012 But on CentOS 6 I get only the Summary, Detailed Description and Allowing Access sections, word for word the same as above, with no "Additional Information" block at all. I'm running exactly the same command. Does anyone have any idea why I'm not getting the "Additional Information" that I do with CentOS 5? Thanks in advance, Dylan
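    This does not explain why sealert drops the extra section on CentOS 6, but as a stopgap the raw AVC records in audit.log still carry the source and target contexts. A rough Python sketch to pull them out, assuming the default auditd log location and root access:

        # Rough sketch: print raw AVC denial records from the audit log, since
        # the CentOS 6 sealert output above omits the "Additional Information"
        # block. Assumes /var/log/audit/audit.log and running as root.
        AUDIT_LOG = "/var/log/audit/audit.log"

        def avc_denials(path=AUDIT_LOG):
            with open(path, errors="replace") as fh:
                for line in fh:
                    if "avc:  denied" in line or "avc: denied" in line:
                        yield line.rstrip()

        if __name__ == "__main__":
            for record in avc_denials():
                print(record)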

    Read the article

  • Couldn't get past the sign-in screen on Ubuntu

    - by Amokrane
    I have an issue with my computer running Ubuntu 10.10 on a 64-bit machine. When I start it I get the login screen and enter my credentials, but instead of starting the session it just reloads the login screen. I checked the disk using fsck and it seems clean. How should I proceed to diagnose and repair this issue? Thanks! [Edit] I went to the log files, and this is what I got: auth.log pam_unix (gdm:session): session opened for user amokrane by (uid=0) pam_ck_connector (gdm:session): nox11 mode, ignoring PAM_TTY :0 pam_unix (gdm:session) :session closed for user amokrane messages.log No ACPI video bus found I also took a photo of the black screen that appears between the two login screens; it says something like: fsck from util-linux-ng 2.17.2 /dev/sdc4 : propre, xxxx files, xxxx blocs Starting AppArmor profiles Skipping profiles in /etc/apparmor.d/disable: usr.bin.firefox Setting sensors limits Starting postgreSQL ... /var/log/Xorg.0.log [ 25.375] (II) intel(0): Modeline "1920x1080"x60.0 172.80 ... [ 28.850] (II) Power Button: Close [ 28.850] (II) UnloadModule: "evdev" [ 29.910] (II) Power Button: Close [ 28.910] (II) UnloadModule: "evdev" [ 28.941] (II) AT Translated Set 2 keyboard: Close [ 29.000] (II) ImPS/2 Generic Wheel Mouse: Close [ 29.000] (II) UnloadModule: "evdev" [ 29.039] ddxSigGiveUp: Closing log Update: I tried the following on the login screen: Ctrl-Alt-F1 (to get a console), then sudo pkill startx, sudo rm /tmp/.X0-locl, startx. But it tells me that the X server is already running.

    Read the article

  • Unable to get to remote samba share

    - by tubaguy50035
    I have a remote VPS that I would like to set up Samba on and only allow my IP to access it. I currently have this in my smb.conf: [global] netbios name = apollo security = user encrypt passwords = true socket options = TCP_NODELAY printing = bsd log level = 3 log file = /var/log/samba/log/%m debug timestamp = yes max log size = 100 [hosting] path = /hosting/ comment = Hosting Folder browseable = yes read only = yes guest account = yes valid users = nick I have the ports (137, 138, 139, 445) open in iptables (they're open to everyone right now while I debug) and I see nothing in the syslog about iptables blocking my requests. When I try to open a file browser to my address \\ipaddress, it hangs for a good thirty seconds and then opens a login box. I enter my username and password for the server and hit OK. It then opens the same box; I enter my credentials again and hit Enter. Windows then tells me it could not connect. My user account has already been added to Samba. Does anybody have any suggestions for what I can do to get this working?
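    One quick way to separate a firewall or network problem from a Samba configuration or authentication problem is to confirm from the client side that the SMB ports actually answer. A small Python sketch, not part of the original question; the server address is a placeholder:

        # Sketch: check whether the SMB ports on the VPS accept TCP connections
        # from this client. A timeout or refusal points at iptables or the
        # network; a clean connect points back at the Samba config/auth side.
        # "203.0.113.10" is a placeholder for the real server address.
        import socket

        SERVER = "203.0.113.10"
        PORTS = [139, 445]          # TCP ports used for SMB sessions

        for port in PORTS:
            try:
                with socket.create_connection((SERVER, port), timeout=5):
                    print(f"port {port}: open")
            except OSError as exc:
                print(f"port {port}: {exc}")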

    Read the article

  • Problem scheduling a Java application with crontab

    - by koma
    The Java application is started from its main method. The main method initializes the log and then checks whether another instance of the process is already running. The job is scheduled to run every 10 minutes through the Linux crontab, and each run is certain to finish within its 10-minute window. Under normal circumstances it prints a log like this: the 10-minute run starts; the 10-minute run ends normally; the 20-minute run starts; the 20-minute run ends normally; the 30-minute run starts; the 30-minute run ends normally; and so on. But now I am seeing this instead: the 10-minute run starts; the 10-minute run ends normally; the 30-minute run starts but detects that a process is already running, so it exits; the 40-minute run starts but detects that a process is already running, so it exits; and so on. Strangely, the 20-minute run prints no log at all, yet it was launched: checking with ps -ef | grep java shows a Java thread from the 20-minute run that appears to be locked. So why is there no log from it? The Linux cron log also shows no record of the 20-minute run being scheduled.
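    The symptom (later runs see a process "already in operation" while the 20-minute run never logged anything) is what overlapping cron jobs guarded by a racy process check tend to look like. A common alternative is an OS-level lock file. The sketch below shows the idea in Python rather than the original Java, with a hypothetical lock path:

        # Sketch of a lock-file guard against overlapping cron runs, as an
        # alternative to scanning the process list. The path is hypothetical.
        # An flock is released automatically when the holding process exits,
        # so a crashed run cannot leave a stale "already running" state behind.
        import fcntl
        import sys

        LOCK_PATH = "/var/run/myapp.lock"   # hypothetical lock file

        def main():
            print("job started")            # real work would go here

        if __name__ == "__main__":
            lock = open(LOCK_PATH, "w")
            try:
                fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
            except BlockingIOError:
                print("previous run still holds the lock; exiting")
                sys.exit(0)
            main()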

    Read the article

  • CentOS 6 NFS: logs not showing anywhere

    - by ancillary
    Can someone please tell me where NFS logs on CentOS 6? Or perhaps where I can tell NFS to send logs? At the present time, there appears to be no such setting. Trying to get the thing to work without logs is quite frustrating. [root@houston netshare]# locate nfs| grep log [root@houston netshare]# [root@houston netshare]# grep -Rni "nfs" /var/log /var/log/anaconda.storage.log:23:20:41:33,962 DEBUG : registered device format class NFS as nfs /var/log/anaconda.storage.log:24:20:41:33,962 DEBUG : registered device format class NFSv4 as nfs4 This is a day-old CentOS 6 install from the live CD, and yum update has been run.

    Read the article

  • how to get this working if exist tasklist=="notepad %DATE%.txt" [closed]

    - by blade19899
    @echo off
    title Log file creator
    if exist "%CD%\%DATE%.txt" (msg * "the log of today(%DATE%.txt) Has already been made" goto :notepad) else (goto :start)
    goto :eof
    :start
    echo %date%>>"%date%".txt
    echo %username% >>"%date%".txt
    echo.>>"%date%".txt
    echo.>>"%date%".txt
    echo ----------------------------------------Reports---------------------------------------- >>"%date%".txt
    echo.>>"%date%".txt
    echo.>>"%date%".txt
    echo.>>"%date%".txt
    echo.>>"%date%".txt
    echo.>>"%date%".txt
    echo.>>"%date%".txt
    echo.>>"%date%".txt
    echo.>>"%date%".txt
    echo ----------------------------------------TO-DO---------------------- >>"%date%".txt
    echo.>>"%date%".txt
    echo.>>"%date%".txt
    :notepad
    if exist tasklist=="notepad %DATE%.txt" (msg * "the log of today(%DATE%.txt) Has already been made, en is opened") else (goto :start)
    goto :eof
    :start
    start notepad %DATE%.txt
    Right now this code pops up the "the log of today (%DATE%.txt) has already been made" message, and after clicking OK it does nothing more. It should instead show the "the log of today (%DATE%.txt) has already been made, and is opened" message, because Notepad really is open: Process Explorer shows notepad with the date .txt file loaded. My question is how to make it show the "has already been made, and is opened" box, and perhaps bring Notepad to the foreground? PS: not sure if this question belongs here; apologies if it doesn't!
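    The "if exist tasklist==..." line is not valid batch, which is why the second message never appears. One way to express the intended check is to actually run tasklist and look at its output. A rough Python sketch of that logic; the Notepad window-title format and the date format are assumptions (Notepad usually shows "<file> - Notepad", while %DATE% is locale-dependent):

        # Sketch: ask tasklist whether a notepad window whose title contains
        # today's date is already open, which is what the batch line above
        # seems to be trying to express.
        import datetime
        import subprocess

        today = datetime.date.today().isoformat()   # adjust to match your %DATE% format

        result = subprocess.run(
            ["tasklist", "/V", "/FI", "IMAGENAME eq notepad.exe"],
            capture_output=True, text=True)

        if today in result.stdout:
            print(f"the log of today ({today}.txt) has already been made and is open")
        else:
            subprocess.Popen(["notepad", f"{today}.txt"])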

    Read the article

  • Did a bunch of wrong work, should I keep it?

    - by Droogans
    I have forked a repo and branched that clone to code a story, and because I didn't understand the problem, I wrote code that doesn't solve my task at hand but may prove useful later. Should I: 1. Delete it and not worry about it, then commit without the extra code. 2. Make yet another branch for just that work and commit it, but don't post a pull request for it. 3. Just commit it with the existing code and worry about the extra "fluff" later. I was thinking #2. If I understand correctly, I could just keep the extra code on a branch I never use on my clone, and dig it up later if something comes up that may benefit from using it.

    Read the article

  • Recovering OCR and Votedisk in 11.2 RAC when the ASM Diskgroup that stores them is damaged

    - by Liu Maclean
    ????????Oracle Allstarts??????????ocr?votedisk?ASM diskgroup??11gR2 RAC cluster?????????,????«?11gR2 RAC???ASM DISK Path????»??????,??????CRS??????11.2??ASM???????, ????????????”crsctl start crs -excl -nocrs “; ?????????,??ASM????ocr?????votedisk?????,??11.2????ocr?votedisk???ASM?,?ASM???????ocr?votedisk,?????ocr?votedisk????????cluter??????;???????????CRS????,?????diskgroup??????????,?????????????????? ??:?????????????????ASM LUN DISK,???OCR?????,????????4??????????,???????$GI_HOME,?????????;????votedisk?? ????: ??dd????ocr?votedisk??diskgroup header,??diskgroup corruption: 1. ??votedisk? ocr?? [root@vrh1 ~]# crsctl query css votedisk ## STATE File Universal Id File Name Disk group -- ----- ----------------- --------- --------- 1. ONLINE a853d6204bbc4feabfd8c73d4c3b3001 (/dev/asm-diskh) [SYSTEMDG] 2. ONLINE a5b37704c3574f0fbf21d1d9f58c4a6b (/dev/asm-diskg) [SYSTEMDG] 3. ONLINE 36e5c51ff0294fc3bf2a042266650331 (/dev/asm-diski) [SYSTEMDG] 4. ONLINE af337d1512824fe4bf6ad45283517aaa (/dev/asm-diskj) [SYSTEMDG] 5. ONLINE 3c4a349e2e304ff6bf64b2b1c9d9cf5d (/dev/asm-diskk) [SYSTEMDG] Located 5 voting disk(s). su - grid [grid@vrh1 ~]$ ocrconfig -showbackup PROT-26: Oracle Cluster Registry backup locations were retrieved from a local copy vrh1 2012/08/09 01:59:56 /g01/11.2.0/maclean/grid/cdata/vrh-cluster/backup00.ocr vrh1 2012/08/08 21:59:56 /g01/11.2.0/maclean/grid/cdata/vrh-cluster/backup01.ocr vrh1 2012/08/08 17:59:55 /g01/11.2.0/maclean/grid/cdata/vrh-cluster/backup02.ocr vrh1 2012/08/08 05:59:54 /g01/11.2.0/grid/cdata/vrh-cluster/day.ocr vrh1 2012/08/08 05:59:54 /g01/11.2.0/grid/cdata/vrh-cluster/week.ocr PROT-25: Manual backups for the Oracle Cluster Registry are not available 2. ??????????clusterware ,OHASD crsctl stop has -f 3. GetAsmDH.sh ==> GetAsmDH.sh?ASM disk header????? ????????,????????asm header [grid@vrh1 ~]$ ./GetAsmDH.sh ############################################ 1) Collecting Information About the Disks: ############################################ SQL*Plus: Release 11.2.0.3.0 Production on Thu Aug 9 03:28:13 2012 Copyright (c) 1982, 2011, Oracle. All rights reserved. SQL> Connected. SQL> SQL> SQL> SQL> SQL> SQL> SQL> 1 0 /dev/asm-diske 1 1 /dev/asm-diskd 2 0 /dev/asm-diskb 2 1 /dev/asm-diskc 2 2 /dev/asm-diskf 3 0 /dev/asm-diskh 3 1 /dev/asm-diskg 3 2 /dev/asm-diski 3 3 /dev/asm-diskj 3 4 /dev/asm-diskk SQL> SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Real Application Clusters and Automatic Storage Management options -rw-r--r-- 1 grid oinstall 1048 Aug 9 03:28 /tmp/HC/asmdisks.lst ############################################ 2) Generating asm_diskh.sh script. ############################################ -rwx------ 1 grid oinstall 666 Aug 9 03:28 /tmp/HC/asm_diskh.sh ############################################ 3) Executing asm_diskh.sh script to generate dd dumps. 
############################################ -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_1_0.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_1_1.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_2_0.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_2_1.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_2_2.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_3_0.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_3_1.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_3_2.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_3_3.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_3_4.dd ############################################ 4) Compressing dd dumps in the next format: (asm_dd_header_all_.tar) ############################################ /tmp/HC/dsk_1_0.dd /tmp/HC/dsk_1_1.dd /tmp/HC/dsk_2_0.dd /tmp/HC/dsk_2_1.dd /tmp/HC/dsk_2_2.dd /tmp/HC/dsk_3_0.dd /tmp/HC/dsk_3_1.dd /tmp/HC/dsk_3_2.dd /tmp/HC/dsk_3_3.dd /tmp/HC/dsk_3_4.dd ./GetAsmDH.sh: line 81: compress: command not found ls: /tmp/HC/*.Z: No such file or directory [grid@vrh1 ~]$ 4. ??dd ?? ??ocr?votedisk??diskgroup [root@vrh1 ~]# dd if=/dev/zero of=/dev/asm-diskh bs=1024k count=1 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.00423853 seconds, 247 MB/s [root@vrh1 ~]# dd if=/dev/zero of=/dev/asm-diskg bs=1024k count=1 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0045179 seconds, 232 MB/s [root@vrh1 ~]# dd if=/dev/zero of=/dev/asm-diski bs=1024k count=1 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.00469976 seconds, 223 MB/s [root@vrh1 ~]# dd if=/dev/zero of=/dev/asm-diskj bs=1024k count=1 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.00344262 seconds, 305 MB/s [root@vrh1 ~]# dd if=/dev/zero of=/dev/asm-diskk bs=1024k count=1 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0053518 seconds, 196 MB/s 5. ????????????HAS [root@vrh1 ~]# crsctl start has CRS-4123: Oracle High Availability Services has been started. 
????ocr?votedisk??diskgroup??,??CSS???????,???????: alertvrh1.log [cssd(5162)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /g01/11.2.0/grid/log/vrh1/cssd/ocssd.log 2012-08-09 03:35:41.207 [cssd(5162)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /g01/11.2.0/grid/log/vrh1/cssd/ocssd.log 2012-08-09 03:35:56.240 [cssd(5162)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /g01/11.2.0/grid/log/vrh1/cssd/ocssd.log 2012-08-09 03:36:11.284 [cssd(5162)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /g01/11.2.0/grid/log/vrh1/cssd/ocssd.log 2012-08-09 03:36:26.305 [cssd(5162)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /g01/11.2.0/grid/log/vrh1/cssd/ocssd.log 2012-08-09 03:36:41.328 ocssd.log 2012-08-09 03:40:26.662: [ CSSD][1078700352]clssnmReadDiscoveryProfile: voting file discovery string(/dev/asm*) 2012-08-09 03:40:26.662: [ CSSD][1078700352]clssnmvDDiscThread: using discovery string /dev/asm* for initial discovery 2012-08-09 03:40:26.662: [ SKGFD][1078700352]Discovery with str:/dev/asm*: 2012-08-09 03:40:26.662: [ SKGFD][1078700352]UFS discovery with :/dev/asm*: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskf: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskb: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskj: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskh: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskc: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskd: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diske: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskg: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diski: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskk: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]OSS discovery with :/dev/asm*: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Handle 0xdf22a0 from lib :UFS:: for disk :/dev/asm-diskf: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Handle 0xf412a0 from lib :UFS:: for disk :/dev/asm-diskb: 2012-08-09 03:40:26.666: [ SKGFD][1078700352]Handle 0xf3a680 from lib :UFS:: for disk :/dev/asm-diskj: 2012-08-09 03:40:26.666: [ SKGFD][1078700352]Handle 0xf93da0 from lib :UFS:: for disk :/dev/asm-diskh: 2012-08-09 03:40:26.667: [ CSSD][1078700352]clssnmvDiskVerify: Successful discovery of 0 disks 2012-08-09 03:40:26.667: [ CSSD][1078700352]clssnmCompleteInitVFDiscovery: Completing initial voting file discovery 2012-08-09 03:40:26.667: [ CSSD][1078700352]clssnmvFindInitialConfigs: No voting files found 2012-08-09 03:40:26.667: [ CSSD][1078700352](:CSSNM00070:)clssnmCompleteInitVFDiscovery: Voting file not found. Retrying discovery in 15 seconds ?????ocr?votedisk??diskgroup?????: 1. ?-excl -nocrs ????cluster,??????ASM?? ????CRS [root@vrh1 vrh1]# crsctl start crs -excl -nocrs CRS-4123: Oracle High Availability Services has been started. 
CRS-2672: Attempting to start 'ora.mdnsd' on 'vrh1' CRS-2676: Start of 'ora.mdnsd' on 'vrh1' succeeded CRS-2672: Attempting to start 'ora.gpnpd' on 'vrh1' CRS-2676: Start of 'ora.gpnpd' on 'vrh1' succeeded CRS-2672: Attempting to start 'ora.cssdmonitor' on 'vrh1' CRS-2672: Attempting to start 'ora.gipcd' on 'vrh1' CRS-2676: Start of 'ora.cssdmonitor' on 'vrh1' succeeded CRS-2676: Start of 'ora.gipcd' on 'vrh1' succeeded CRS-2672: Attempting to start 'ora.cssd' on 'vrh1' CRS-2672: Attempting to start 'ora.diskmon' on 'vrh1' CRS-2676: Start of 'ora.diskmon' on 'vrh1' succeeded CRS-2676: Start of 'ora.cssd' on 'vrh1' succeeded CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'vrh1' CRS-2672: Attempting to start 'ora.ctssd' on 'vrh1' CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'vrh1' succeeded CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'vrh1' CRS-2676: Start of 'ora.ctssd' on 'vrh1' succeeded CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'vrh1' succeeded CRS-2672: Attempting to start 'ora.asm' on 'vrh1' CRS-2676: Start of 'ora.asm' on 'vrh1' succeeded 2.???ocr?votedisk??diskgroup,??compatible.asm???11.2: [root@vrh1 vrh1]# su - grid [grid@vrh1 ~]$ sqlplus / as sysasm SQL*Plus: Release 11.2.0.3.0 Production on Thu Aug 9 04:16:58 2012 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Real Application Clusters and Automatic Storage Management options SQL> create diskgroup systemdg high redundancy disk '/dev/asm-diskh','/dev/asm-diskg','/dev/asm-diski','/dev/asm-diskj','/dev/asm-diskk' ATTRIBUTE 'compatible.rdbms' = '11.2', 'compatible.asm' = '11.2'; 3.?ocr backup???ocr??ocrcheck??: [root@vrh1 ~]# ocrconfig -restore /g01/11.2.0/grid/cdata/vrh-cluster/backup00.ocr [root@vrh1 ~]# ocrcheck Status of Oracle Cluster Registry is as follows : Version : 3 Total space (kbytes) : 262120 Used space (kbytes) : 3180 Available space (kbytes) : 258940 ID : 1238458014 Device/File Name : +systemdg Device/File integrity check succeeded Device/File not configured Device/File not configured Device/File not configured Device/File not configured Cluster registry integrity check succeeded Logical corruption check succeeded 4. ????votedisk ,??????????: [grid@vrh1 ~]$ crsctl replace votedisk +SYSTEMDG CRS-4602: Failed 27 to add voting file 2e4e0fe285924f86bf5473d00dcc0388. CRS-4602: Failed 27 to add voting file 4fa54bb0cc5c4fafbf1a9be5479bf389. CRS-4602: Failed 27 to add voting file a109ead9ea4e4f28bfe233188623616a. CRS-4602: Failed 27 to add voting file 042c9fbd71b54f5abfcd3ab3408f3cf3. CRS-4602: Failed 27 to add voting file 7b5a8cd24f954fafbf835ad78615763f. Failed to replace voting disk group with +SYSTEMDG. CRS-4000: Command Replace failed, or completed with errors. ????????ASM???,???ASM: SQL> alter system set asm_diskstring='/dev/asm*'; System altered. SQL> create spfile from memory; File created. 
SQL> startup force mount; ORA-32004: obsolete or deprecated parameter(s) specified for ASM instance ASM instance started Total System Global Area 283930624 bytes Fixed Size 2227664 bytes Variable Size 256537136 bytes ASM Cache 25165824 bytes ASM diskgroups mounted SQL> show parameter spfile NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ spfile string /g01/11.2.0/grid/dbs/spfile+AS M1.ora [grid@vrh1 trace]$ crsctl replace votedisk +SYSTEMDG CRS-4256: Updating the profile Successful addition of voting disk 85edc0e82d274f78bfc58cdc73b8c68a. Successful addition of voting disk 201ffffc8ba44faabfe2efec2aa75840. Successful addition of voting disk 6f2a25c589964faabf6980f7c5f621ce. Successful addition of voting disk 93eb315648454f25bf3717df1a2c73d5. Successful addition of voting disk 3737240678964f88bfbfbd31d8b3829f. Successfully replaced voting disk group with +SYSTEMDG. CRS-4256: Updating the profile CRS-4266: Voting file(s) successfully replaced 5. ??has??,??cluster????: [root@vrh1 ~]# crsctl check crs CRS-4638: Oracle High Availability Services is online CRS-4537: Cluster Ready Services is online CRS-4529: Cluster Synchronization Services is online CRS-4533: Event Manager is online [root@vrh1 ~]# crsctl query css votedisk ## STATE File Universal Id File Name Disk group -- ----- ----------------- --------- --------- 1. ONLINE 85edc0e82d274f78bfc58cdc73b8c68a (/dev/asm-diskh) [SYSTEMDG] 2. ONLINE 201ffffc8ba44faabfe2efec2aa75840 (/dev/asm-diskg) [SYSTEMDG] 3. ONLINE 6f2a25c589964faabf6980f7c5f621ce (/dev/asm-diski) [SYSTEMDG] 4. ONLINE 93eb315648454f25bf3717df1a2c73d5 (/dev/asm-diskj) [SYSTEMDG] 5. ONLINE 3737240678964f88bfbfbd31d8b3829f (/dev/asm-diskk) [SYSTEMDG] Located 5 voting disk(s). [root@vrh1 ~]# crsctl stat res -t -------------------------------------------------------------------------------- NAME TARGET STATE SERVER STATE_DETAILS -------------------------------------------------------------------------------- Local Resources -------------------------------------------------------------------------------- ora.BACKUPDG.dg ONLINE ONLINE vrh1 ora.DATA.dg ONLINE ONLINE vrh1 ora.LISTENER.lsnr ONLINE ONLINE vrh1 ora.LSN_MACLEAN.lsnr ONLINE ONLINE vrh1 ora.SYSTEMDG.dg ONLINE ONLINE vrh1 ora.asm ONLINE ONLINE vrh1 Started ora.gsd OFFLINE OFFLINE vrh1 ora.net1.network ONLINE ONLINE vrh1 ora.ons ONLINE ONLINE vrh1 -------------------------------------------------------------------------------- Cluster Resources -------------------------------------------------------------------------------- ora.LISTENER_SCAN1.lsnr http://www.askmaclean.com 1 ONLINE ONLINE vrh1 ora.cvu 1 OFFLINE OFFLINE ora.oc4j 1 OFFLINE OFFLINE ora.scan1.vip 1 ONLINE ONLINE vrh1 ora.vprod.db 1 ONLINE OFFLINE 2 ONLINE OFFLINE ora.vrh1.vip 1 ONLINE ONLINE vrh1 ora.vrh2.vip 1 ONLINE INTERMEDIATE vrh1 FAILED OVER

    Read the article

  • Lots of MySQL Sleep processes

    - by user259284
    Hello, I am still having trouble with my mysql server. It seems that since i optimize it, the tables were growing and now sometimes is very slow again. I have no idea of how to optimize more. mySQL server has 48GB of RAM and mysqld is using about 8, most of the tables are innoDB. Site has about 2000 users online. I also run explain on every query and every one of them is indexed. mySQL processes: http://www.pik.ba/mysqlStanje.php my.cnf: # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp language = /usr/share/mysql/english skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = 10.100.27.30 # # * Fine Tuning # key_buffer = 64M key_buffer_size = 512M max_allowed_packet = 16M thread_stack = 128K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP max_connections = 1000 table_cache = 1000 join_buffer_size = 2M tmp_table_size = 2G max_heap_table_size = 2G innodb_buffer_pool_size = 3G innodb_additional_mem_pool_size = 128M innodb_log_file_size = 100M log-slow-queries = /var/log/mysql/slow.log sort_buffer_size = 5M net_buffer_length = 5M read_buffer_size = 2M read_rnd_buffer_size = 12M thread_concurrency = 10 ft_min_word_len = 3 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 512M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. #log = /var/log/mysql/mysql.log # # Error logging goes to syslog. This is a Debian improvement :) # # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * BerkeleyDB # # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12. skip-bdb # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. 
# Read the manual for more InnoDB related options. There are many! # You might want to disable InnoDB to shrink the mysqld process by circa 100MB. #skip-innodb # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * NDB Cluster # # See /usr/share/doc/mysql-server-*/README.Debian for more information. # # The following configuration is read by the NDB Data Nodes (ndbd processes) # not from the NDB Management Nodes (ndb_mgmd processes). # # [MYSQL_CLUSTER] # ndb-connectstring=127.0.0.1 # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/
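    The Sleep entries are usually idle connections the application is holding open rather than the server misbehaving; a quick way to see how long they have been idle is SHOW FULL PROCESSLIST. A rough sketch using the third-party PyMySQL driver, with placeholder credentials and a placeholder 300-second threshold:

        # Sketch: list connections that have been sitting in "Sleep" for a
        # while, to see whether clients are holding connections open.
        # Uses the third-party PyMySQL driver; credentials are placeholders.
        import pymysql

        conn = pymysql.connect(host="127.0.0.1", user="root", password="secret")
        try:
            with conn.cursor() as cur:
                cur.execute("SHOW FULL PROCESSLIST")
                # Columns: Id, User, Host, db, Command, Time, State, Info
                for row in cur.fetchall():
                    pid, user, host, db, command, idle, state, info = row[:8]
                    if command == "Sleep" and idle > 300:
                        print(f"connection {pid} from {host} idle for {idle}s")
        finally:
            conn.close()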

    Read the article

  • Apple Push Notification feedback service not working

    - by Yassmeen
    Hi, I am developing an iPhone App that uses Apple Push Notifications. On the iPhone side everything is fine, on the server side I have a problem. Notifications are sent correctly however when I try to query the feedback service to obtain a list of devices from which the App has been uninstalled, I always get zero results. I know that I should obtain one result as the App has been uninstalled from one of my test devices. After 24 hours and more I still have no results from the feedback service.. Any ideas? Does anybody know how long it takes for the feedback service to recognize that my App has been uninstalled from my test device? Note: I have another push notification applications on the device so I know that my app is not the only app. The code - C#: public static string CheckFeedbackService(string certaName, string hostName) { SYLogger.Log("Check Feedback Service Started"); ServicePointManager.ServerCertificateValidationCallback = new RemoteCertificateValidationCallback(ValidateServerCertificate); // Create a TCP socket connection to the Apple server on port 2196 TcpClient tcpClientF = null; SslStream sslStreamF = null; string result = string.Empty; //Contect to APNS& Add the Apple cert to our collection X509Certificate2Collection certs = new X509Certificate2Collection { GetServerCert(certaName) }; //Set up byte[] buffer = new byte[38]; int recd = 0; DateTime minTimestamp = DateTime.Now.AddYears(-1); // Create a TCP socket connection to the Apple server on port 2196 try { using (tcpClientF = new TcpClient(hostName, 2196)) { SYLogger.Log("Client Connected ::" + tcpClientF.Connected); // Create a new SSL stream over the connection sslStreamF = new SslStream(tcpClientF.GetStream(), true,ValidateServerCertificate); // Authenticate using the Apple cert sslStreamF.AuthenticateAsClient(hostName, certs, SslProtocols.Default, false); SYLogger.Log("Stream Readable ::" + sslStreamF.CanRead); SYLogger.Log("Host Name ::"+hostName); SYLogger.Log("Cert Name ::" + certs[0].FriendlyName); if (sslStreamF != null) { SYLogger.Log("Connection Started"); //Get the first feedback recd = sslStreamF.Read(buffer, 0, buffer.Length); SYLogger.Log("Buffer length ::" + recd); //Continue while we have results and are not disposing while (recd > 0) { SYLogger.Log("Reading Started"); //Get our seconds since 1970 ? byte[] bSeconds = new byte[4]; byte[] bDeviceToken = new byte[32]; Array.Copy(buffer, 0, bSeconds, 0, 4); //Check endianness if (BitConverter.IsLittleEndian) Array.Reverse(bSeconds); int tSeconds = BitConverter.ToInt32(bSeconds, 0); //Add seconds since 1970 to that date, in UTC and then get it locally var Timestamp = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc).AddSeconds(tSeconds).ToLocalTime(); //Now copy out the device token Array.Copy(buffer, 6, bDeviceToken, 0, 32); string deviceToken = BitConverter.ToString(bDeviceToken).Replace("-", "").ToLower().Trim(); //Make sure we have a good feedback tuple if (deviceToken.Length == 64 && Timestamp > minTimestamp) { SYLogger.Log("Feedback " + deviceToken); result = deviceToken; } //Clear array to reuse it Array.Clear(buffer, 0, buffer.Length); //Read the next feedback recd = sslStreamF.Read(buffer, 0, buffer.Length); } SYLogger.Log("Reading Ended"); } } } catch (Exception e) { SYLogger.Log("Authentication failed - closing the connection::" + e); return "NOAUTH"; } finally { // The client stream will be closed with the sslStream // because we specified this behavior when creating the sslStream. 
if (sslStreamF != null) sslStreamF.Close(); if (tcpClientF != null) tcpClientF.Close(); //Clear array on error Array.Clear(buffer, 0, buffer.Length); } SYLogger.Log("Feedback ended "); return result; }
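    For reference, the byte layout the Array.Copy calls above are unpacking is a fixed 38-byte feedback tuple: a 4-byte big-endian timestamp, a 2-byte token length, and the 32-byte device token. A compact Python sketch of the same parsing, offered only as a cross-check of that record layout:

        # Sketch: parse one 38-byte tuple from the legacy APNs feedback service:
        # 4-byte big-endian timestamp, 2-byte token length, 32-byte device token.
        import datetime
        import struct

        def parse_feedback_record(record: bytes):
            if len(record) != 38:
                raise ValueError("feedback records are exactly 38 bytes")
            timestamp, token_length, token = struct.unpack("!IH32s", record)
            return (datetime.datetime.utcfromtimestamp(timestamp),
                    token[:token_length].hex())

        # Example with a dummy record: token length 32, token of 0xAB bytes.
        raw = struct.pack("!IH32s", 1340000000, 32, b"\xab" * 32)
        print(parse_feedback_record(raw))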

    Read the article

  • Unable to start ServiceIntent android

    - by Mj1992
    I don't know why my service class is not working although it was working fine before. I've the following service class. public class MyIntentService extends IntentService { private static PowerManager.WakeLock sWakeLock; private static final Object LOCK = MyIntentService.class; public MyIntentService() { super("MuazzamService"); } @Override protected void onHandleIntent(Intent intent) { try { String action = intent.getAction(); <-- breakpoint if (action.equals("com.google.android.c2dm.intent.REGISTRATION")) <-- passes by { handleRegistration(intent); } else if (action.equals("com.google.android.c2dm.intent.RECEIVE")) { handleMessage(intent); } } finally { synchronized(LOCK) { sWakeLock.release(); } } } private void handleRegistration(Intent intent) { try { String registrationId = intent.getStringExtra("registration_id"); String error = intent.getStringExtra("error"); String unregistered = intent.getStringExtra("unregistered"); if (registrationId != null) { this.SendRegistrationIDViaHttp(registrationId); Log.i("Regid",registrationId); } if (unregistered != null) {} if (error != null) { if ("SERVICE_NOT_AVAILABLE".equals(error)) { Log.e("ServiceNoAvail",error); } else { Log.i("Error In Recieveing regid", "Received error: " + error); } } } catch(Exception e) { Log.e("ErrorHai(MIS0)",e.toString()); e.printStackTrace(); } } private void SendRegistrationIDViaHttp(String regID) { HttpClient httpclient = new DefaultHttpClient(); try { HttpGet httpget = new HttpGet("http://10.116.27.107/php/GCM/AndroidRequest.php?registrationID="+regID+"&[email protected]"); //test purposes k liye muazzam HttpResponse response = httpclient.execute(httpget); HttpEntity entity=response.getEntity(); if(entity!=null) { InputStream inputStream=entity.getContent(); String result= convertStreamToString(inputStream); Log.i("finalAnswer",result); // Toast.makeText(getApplicationContext(),regID, Toast.LENGTH_LONG).show(); } } catch (ClientProtocolException e) { Log.e("errorhai",e.getMessage()); e.printStackTrace(); } catch (IOException e) { Log.e("errorhai",e.getMessage()); e.printStackTrace(); } } private static String convertStreamToString(InputStream is) { BufferedReader reader = new BufferedReader(new InputStreamReader(is)); StringBuilder sb = new StringBuilder(); String line = null; try { while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } } catch (IOException e) { Log.e("ErrorHai(MIS)",e.toString()); e.printStackTrace(); } finally { try { is.close(); } catch (IOException e) { Log.e("ErrorHai(MIS2)",e.toString()); e.printStackTrace(); } } return sb.toString(); } private void handleMessage(Intent intent) { try { String score = intent.getStringExtra("score"); String time = intent.getStringExtra("time"); Toast.makeText(getApplicationContext(), "hjhhjjhjhjh", Toast.LENGTH_LONG).show(); Log.e("GetExtraScore",score.toString()); Log.e("GetExtratime",time.toString()); } catch(NullPointerException e) { Log.e("je bat",e.getMessage()); } } static void runIntentInService(Context context,Intent intent){ synchronized(LOCK) { if (sWakeLock == null) { PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE); sWakeLock = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "my_wakelock"); } } sWakeLock.acquire(); intent.setClassName(context, MyIntentService.class.getName()); context.startService(intent); } } and here's how I am calling the service as mentioned in the android docs. 
Intent registrationIntent = new Intent("com.google.android.c2dm.intent.REGISTER"); registrationIntent.putExtra("app", PendingIntent.getBroadcast(this, 0, new Intent(), 0)); registrationIntent.putExtra("sender",Sender_ID); startService(registrationIntent); I've declared the service in the manifest file inside the application tag. <service android:name="com.pack.gcm.MyIntentService" android:enabled="true"/> I placed a breakpoint in my IntentService class but it never goes there.But if I declare my registrationIntent like this Intent registrationIntent = new Intent(getApplicationContext(),com.pack.gcm.MyIntentService); It works and goes to the breakpoint I've placed but intent.getAction() contains null and hence it doesn't go into the if condition placed after those lines. It says 07-08 02:10:03.755: W/ActivityManager(60): Unable to start service Intent { act=com.google.android.c2dm.intent.REGISTER (has extras) }: not found in the logcat.

    Read the article

  • How can I get `find` to ignore .svn directories?

    - by John Kugelman
    I often use the find command to search through source code, delete files, whatever. Annoyingly, because Subversion stores duplicates of each file in its .svn/text-base/ directories my simple searches end up getting lots of duplicate results. For example, I want to recursively search for uint in multiple messages.h and messages.cpp files: # find -name 'messages.*' -exec grep -Iw uint {} + ./messages.cpp: Log::verbose << "Discarding out of date message: id " << uint(olderMessage.id) ./messages.cpp: Log::verbose << "Added to send queue: " << *message << ": id " << uint(preparedMessage->id) ./messages.cpp: Log::error << "Received message with invalid SHA-1 hash: id " << uint(incomingMessage.id) ./messages.cpp: Log::verbose << "Received " << *message << ": id " << uint(incomingMessage.id) ./messages.cpp: Log::verbose << "Sent message: id " << uint(preparedMessage->id) ./messages.cpp: Log::verbose << "Discarding unsent message: id " << uint(preparedMessage->id) ./messages.cpp: for (uint i = 0; i < 10 && !_stopThreads; ++i) { ./.svn/text-base/messages.cpp.svn-base: Log::verbose << "Discarding out of date message: id " << uint(olderMessage.id) ./.svn/text-base/messages.cpp.svn-base: Log::verbose << "Added to send queue: " << *message << ": id " << uint(preparedMessage->id) ./.svn/text-base/messages.cpp.svn-base: Log::error << "Received message with invalid SHA-1 hash: id " << uint(incomingMessage.id) ./.svn/text-base/messages.cpp.svn-base: Log::verbose << "Received " << *message << ": id " << uint(incomingMessage.id) ./.svn/text-base/messages.cpp.svn-base: Log::verbose << "Sent message: id " << uint(preparedMessage->id) ./.svn/text-base/messages.cpp.svn-base: Log::verbose << "Discarding unsent message: id " << uint(preparedMessage->id) ./.svn/text-base/messages.cpp.svn-base: for (uint i = 0; i < 10 && !_stopThreads; ++i) { ./virus/messages.cpp:void VsMessageProcessor::_progress(const string &fileName, uint scanCount) ./virus/messages.cpp:ProgressMessage::ProgressMessage(const string &fileName, uint scanCount) ./virus/messages.h: void _progress(const std::string &fileName, uint scanCount); ./virus/messages.h: ProgressMessage(const std::string &fileName, uint scanCount); ./virus/messages.h: uint _scanCount; ./virus/.svn/text-base/messages.cpp.svn-base:void VsMessageProcessor::_progress(const string &fileName, uint scanCount) ./virus/.svn/text-base/messages.cpp.svn-base:ProgressMessage::ProgressMessage(const string &fileName, uint scanCount) ./virus/.svn/text-base/messages.h.svn-base: void _progress(const std::string &fileName, uint scanCount); ./virus/.svn/text-base/messages.h.svn-base: ProgressMessage(const std::string &fileName, uint scanCount); ./virus/.svn/text-base/messages.h.svn-base: uint _scanCount; How can I tell find to ignore the .svn directories?
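    The usual find-level answer is to prune the .svn directories from the traversal before descending into them (find's -prune predicate). The same idea in a short Python sketch, not from the original question:

        # Sketch: recursive search for "uint" in messages.* files that skips
        # any .svn directory entirely, mirroring what the find command is after.
        import os

        def search(root=".", needle="uint"):
            for dirpath, dirnames, filenames in os.walk(root):
                # Pruning in place stops os.walk from descending into .svn at all.
                dirnames[:] = [d for d in dirnames if d != ".svn"]
                for name in filenames:
                    if name.startswith("messages."):
                        path = os.path.join(dirpath, name)
                        with open(path, errors="replace") as fh:
                            for lineno, line in enumerate(fh, 1):
                                if needle in line:
                                    print(f"{path}:{lineno}: {line.rstrip()}")

        if __name__ == "__main__":
            search()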

    Read the article

  • Optimistic non-locking copy of InnoDB .frm files

    - by jothir
    MySQL Enterprise Backup(MEB) does hot backup of innodb data and log files. Till MEB 3.6.1, the user backs up the only innodb tables in a 3 step process: STEP 1. Take backup using --only-innodb option STEP 2. Temporarily make the table read only by executing “FLUSH TABLES WITH READ LOCK” MEB 3.7.0 has an enhancement to innodb file copying. The .frm files gets copied along with the hot backup done for innodb files. I would like to make the blog a little interactive by explaining the feature as answers: 1. What are these .frm files? The files containing the metadata, such as the table definition, of a MySQL table. For backups, the full set of .frm files are always required along with the backup data, to be able to restore tables that are altered or dropped after the backup. 2. Can the .frm files not be copied by MEB itself? --only-innodb-with-frm is the new option introduced in MEB 3.7.1 to do a copy of .frm files without locking the tables during backup operation itself. This is to reduce the pain of manually copying the .frm files. The option is intended for backups where you can ensure that no ALTER TABLE, CREATE TABLE, DROP TABLE, or other DDL statements modify the .frm files for InnoDB tables during the backup operation. 3. How is data consistency ensured? MEB does validation of the .frm files after copying by comparing with the server directory to see if the timestamps of any of the .frm files is greater than the saved system time (check .frm time).  This change in timestamp of the .frm files will show if a table is altered during the process of backup. The total number of frm files in the server directory is also verified against the copied contents. If the number of .frm files is less compared to server directory, it shows that table/tables have been dropped during the process of backup. If the number of .frm files is more compared to server directory, it shows that new table/tables have been created during backup operation. 4. How does MEB handle data inconsistency? MEB copies the .frm files through several iterations,  does the validation and throws a WARNING if there is any inconsistency found in .frm files at the end of backup operation. This means the user is warned of some DDL operations that had occurred during backup operation, and has to manually copy the .frm files or do a backup again. 5. What is the option and explain its usage? The option introduced is --only-innodb-with-frm which does optimistic copy of .frm files without locking. This can be used when the user wants to backup only innodb tables along with .frm files. The option can take one of the 2 values: all | related. --only-innodb-with-frm=all does copy of all .frm files of all innodb tables. --only-innodb-with-frm=related works in conjunction with --include option.This is to allow partial backup of .frm files corresponding to the tables specified in --include. Let me show the usage with example output: ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll --only-innodb-with-frm=all backup MySQL Enterprise Backup version 3.7.1 [2012/06/05] Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved. INFO: Starting with following command line ... ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll        --only-innodb-with-frm=all backup INFO: Got some server configuration information from running server. IMPORTANT: Please check that mysqlbackup run completes successfully.            At the end of a successful 'backup' run mysqlbackup            prints "mysqlbackup completed OK!". 
--------------------------------------------------------------------                       Server Repository Options: --------------------------------------------------------------------  datadir                          =  /mysql/trydb/  innodb_data_home_dir             =    innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /mysql/trydb/  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 --------------------------------------------------------------------                       Backup Config Options: --------------------------------------------------------------------  datadir                          =  /logs/backupWithFrmAll/datadir  innodb_data_home_dir             =  /logs/backupWithFrmAll/datadir  innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /logs/backupWithFrmAll/datadir  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 mysqlbackup: INFO: Unique generated backup id for this is 13451979804504860 mysqlbackup: INFO: Uses posix_fadvise() for performance optimization. mysqlbackup: INFO: System tablespace file format is Antelope. mysqlbackup: INFO: Found checkpoint at lsn 1656792. mysqlbackup: INFO: Starting log scan from lsn 1656320. 120817 15:36:22 mysqlbackup: INFO: Copying log... 120817 15:36:22 mysqlbackup: INFO: Log copied, lsn 1656792.          We wait 1 second before starting copying the data files... 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/ibdata1 (Antelope file format). 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table2.ibd (Antelope file format). 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table3.ibd (Antelope file format). 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table1.ibd (Antelope file format). mysqlbackup: INFO: Opening backup source directory '/mysql/trydb/' 120817 15:36:23 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb/ mysqlbackup: INFO: Copying innodb data and logs during final stage ... mysqlbackup: INFO: A copied database page was modified at 1656792.          (This is the highest lsn found on page)          Scanned log up to lsn 1656792.          Was able to parse the log up to lsn 1656792.          Maximum page number for a log record 0 mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds 120817 15:36:25 mysqlbackup: INFO: Full backup completed! mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrmAll' -------------------------------------------------------------   Parameters Summary          -------------------------------------------------------------   Start LSN                  : 1656320   End LSN                    : 1656792 ------------------------------------------------------------- mysqlbackup completed OK! bash$ ls /logs/backupWithFrmAll/datadir/innodb1/ table1.frm  table1.ibd  table2.frm  table2.ibd  table3.frm  table3.ibd Here the backup directory contains all the .frm files of all the innodb tables. ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrm --include="innodb1.table3.*" --only-innodb-with-frm=related backup MySQL Enterprise Backup version 3.7.1 [2012/06/05] Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved. INFO: Starting with following command line ... 
./mysqlbackup -uroot --backup-dir=/logs/backup371frm        --include=innodb1.table3.* --only-innodb-with-frm=related backup INFO: Got some server configuration information from running server. IMPORTANT: Please check that mysqlbackup run completes successfully.            At the end of a successful 'backup' run mysqlbackup            prints "mysqlbackup completed OK!". --------------------------------------------------------------------                       Server Repository Options: --------------------------------------------------------------------  datadir                          = /mysql/trydb/  innodb_data_home_dir             =    innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /mysql/trydb  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 --------------------------------------------------------------------                       Backup Config Options: --------------------------------------------------------------------  datadir                          =  /logs/backupWithFrm/datadir  innodb_data_home_dir             =  /logs/backupWithFrm/datadir  innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /logs/backupWithFrm/datadir  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 mysqlbackup: INFO: Unique generated backup id for this is 13451973458118162 mysqlbackup: INFO: Uses posix_fadvise() for performance optimization. mysqlbackup: INFO: The --include option specified: innodb1.table3.* mysqlbackup: INFO: System tablespace file format is Antelope. mysqlbackup: INFO: Found checkpoint at lsn 1656792. mysqlbackup: INFO: Starting log scan from lsn 1656320. 120817 15:25:47 mysqlbackup: INFO: Copying log... 120817 15:25:47 mysqlbackup: INFO: Log copied, lsn 1656792.          We wait 1 second before starting copying the data files... 120817 15:25:48 mysqlbackup: INFO: Copying /mysql/trydbibdata1 (Antelope file format). 120817 15:25:49 mysqlbackup: INFO: Copying /mysql/trydbinnodb1/table3.ibd (Antelope file format). mysqlbackup: INFO: Opening backup source directory '/mysql/trydb' 120817 15:25:49 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb mysqlbackup: INFO: Copying innodb data and logs during final stage ... mysqlbackup: INFO: A copied database page was modified at 1656792.          (This is the highest lsn found on page)          Scanned log up to lsn 1656792.          Was able to parse the log up to lsn 1656792.          Maximum page number for a log record 0 mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds 120817 15:25:51 mysqlbackup: INFO: Full backup completed! mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrm' -------------------------------------------------------------   Parameters Summary          -------------------------------------------------------------   Start LSN                  : 1656320   End LSN                    : 1656792 ------------------------------------------------------------- mysqlbackup completed OK! bash$ ls /logs/backupWithFrm/datadir/innodb1/ table3.frm table3.ibd Thus the backup directory contains only the .frm file matching the innodb table name specified in --include option. In a nutshell, we present our great new option --only-innodb-with-frm which is a true hot InnoDB-only backup with .frm files, but with an additional check, if any DDL happened during the backup. 
If DDL has happened, the DBA can decide whether to repeat the backup or to live with the potential inconsistency. This is the ideal solution for users that have all their "real" data in InnoDB and seldom change their schemas. You may also like: http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/backup-partial-options.html

    Read the article

  • Using logarithms to normalize a vector to avoid overflow

    - by muscicapa
    http://stackoverflow.com/questions/2293762/problem-with-arithmetic-using-logarithms-to-avoid-numerical-underflow-take-2 Having seen the above, and having seen softmax normalization, I was trying to normalize a vector while avoiding overflow - that is, for (x1, x2, x3, x4, ..., xn) the normalized form should have a sum of squares equal to 1.0. So what I thought of doing is s = (2*log(x1) + 2*log(x2) + ... + 2*log(xn))/2, so the factor of two can be taken off, and finally the normalized vector is exp(log(x1)-s), ..., exp(log(xn)-s), but I am evidently doing something wrong here - what?
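    For comparison, here is a minimal sketch of the usual log-sum-exp approach (shown in Python; the function name and test values are only illustrative, and it assumes every xi is strictly positive since logs are taken). The logs of the squares have to be combined with log-sum-exp rather than simply summed, which appears to be where the formula above goes astray:

    import math

    def normalize_unit_norm(xs):
        # log of each squared component: 2*log(x_i)
        log_sq = [2.0 * math.log(x) for x in xs]
        m = max(log_sq)
        # log(sum(x_i^2)) via log-sum-exp, so no exponential can overflow
        log_sum_sq = m + math.log(sum(math.exp(v - m) for v in log_sq))
        s = 0.5 * log_sum_sq  # log of the Euclidean norm
        return [math.exp(math.log(x) - s) for x in xs]

    print(normalize_unit_norm([1e200, 2e200, 2e200]))  # -> roughly [0.333, 0.667, 0.667]

    Squaring these inputs directly would overflow a double, but the log-domain version returns the unit-norm vector without trouble.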

    Read the article

  • is there a size limit to a text file?

    - by chicane
    Hi all, I am creating a log file for our website which will log every log-in by all the users to our orders area. I wish to know whether you think it's good to enter this log info in just one single file, or whether it should be split up once the log file hits a certain size. My concern is that this file will get rather large over time, but I'm not certain of the size limit of a text file. Thank you.
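    Size-based rotation is the usual way to keep one logical log without letting a single file grow without bound. As an illustration only, here is a sketch using Python's standard logging module (the site itself may well be written in another language, and the file name, size threshold, and retention count are arbitrary examples):

    import logging
    from logging.handlers import RotatingFileHandler

    logger = logging.getLogger("orders_logins")
    logger.setLevel(logging.INFO)

    # Roll over once the file reaches ~5 MB, keeping the 10 most recent old files.
    handler = RotatingFileHandler("orders_logins.log", maxBytes=5 * 1024 * 1024, backupCount=10)
    handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
    logger.addHandler(handler)

    logger.info("user jsmith logged in to the orders area")

    As for the size limit itself: plain text has no format-imposed maximum; the practical limits come from the filesystem and from the tools used to read the file, which is why rotation by size is the common practice.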

    Read the article

  • SocketChannel in Java sends data, but it doesn't get to destination application

    - by Peterson
    Hi Everybody, I'm suffering a lot to create a simple ChatServer in Java, using the NIO libraries. Wonder if someone could help me. I am doing that by using SocketChannel and Selector to handle multiple clients in a single thread. The problem is: I am able to accept new connections and get it's data, but when I try to send data back, the SocketChannel simply doesn't work. In the method write(), it returns a integer that is the same size of the data i'm passing to it, but the client never receives that data. Strangely, when I close the server application, the client receives the data. It's like the socketchannel maintains a buffer, and it only get flushed when I close the application. Here are some more details, to give you more information to help. I'm handling the events in this piece of code: private void run() throws IOException { ServerSocketChannel ssc = ServerSocketChannel.open(); // Set it to non-blocking, so we can use select ssc.configureBlocking( false ); // Get the Socket connected to this channel, and bind it // to the listening port this.serverSocket = ssc.socket(); InetSocketAddress isa = new InetSocketAddress( this.port ); serverSocket.bind( isa ); // Create a new Selector for selecting this.masterSelector = Selector.open(); // Register the ServerSocketChannel, so we can // listen for incoming connections ssc.register( masterSelector, SelectionKey.OP_ACCEPT ); while (true) { // See if we've had any activity -- either // an incoming connection, or incoming data on an // existing connection int num = masterSelector.select(); // If we don't have any activity, loop around and wait // again if (num == 0) { continue; } // Get the keys corresponding to the activity // that has been detected, and process them // one by one Set keys = masterSelector.selectedKeys(); Iterator it = keys.iterator(); while (it.hasNext()) { // Get a key representing one of bits of I/O // activity SelectionKey key = (SelectionKey)it.next(); // What kind of activity is it? if ((key.readyOps() & SelectionKey.OP_ACCEPT) == SelectionKey.OP_ACCEPT) { // Aceita a conexão Socket s = serverSocket.accept(); System.out.println( "LOG: Conexao TCP aceita de " + s.getInetAddress() + ":" + s.getPort() ); // Make sure to make it non-blocking, so we can // use a selector on it. SocketChannel sc = s.getChannel(); sc.configureBlocking( false ); // Registra a conexao no seletor, apenas para leitura sc.register( masterSelector, SelectionKey.OP_READ ); } else if ( key.isReadable() ) { SocketChannel sc = null; // It's incoming data on a connection, so // process it sc = (SocketChannel)key.channel(); // Verifica se a conexão corresponde a um cliente já existente if((clientsMap.getClient(key)) != null){ boolean closedConnection = !processIncomingClientData(key); if(closedConnection){ int id = clientsMap.getClient(key); closeClient(id); } } else { boolean clientAccepted = processIncomingDataFromNewClient(key); if(!clientAccepted){ // Se o cliente não foi aceito, sua conexão é simplesmente fechada sc.socket().close(); sc.close(); key.cancel(); } } } } // We remove the selected keys, because we've dealt // with them. keys.clear(); } } This piece of code is simply handles new clients that wants to connect to the chat. So, a client makes a TCP connection to the server, and once it gets accepted, it sends data to the server following a simply text protocol, informing his id and asking to get registrated to the server. I handle this in the method processIncomingDataFromNewClient(key). 
I'm also keeping a map of clients and its connections in a data structure similar to a hashtable. I? doing that because I need to recover a client Id from a connection and a connection from a client Id. This is can be shown in: clientsMap.getClient(key). But the problem itself resides in the method processIncomingDataFromNewClient(key). There, I simply read the data that the client sent to me, validate it, and if it's ok, I send a message back to the client to tell that it is connected to the chat server. Here is a similar piece of code: private boolean processIncomingDataFromNewClient(SelectionKey key){ SocketChannel sc = (SocketChannel) key.channel(); String connectionOrigin = sc.socket().getInetAddress() + ":" + sc.socket().getPort(); int id = 0; //id of the client buf.clear(); int bytesRead = 0; try { bytesRead = sc.read(buf); if(bytesRead<=0){ System.out.println("Conexão fechada pelo: " + connectionOrigin); return false; } System.out.println("LOG: " + bytesRead + " bytes lidos de " + connectionOrigin); String msg = new String(buf.array(),0,bytesRead); // Do validations with the client sent me here // gets the client id }catch (Exception e) { e.printStackTrace(); System.out.println("LOG: Oops. Cliente não conhece o protocolo. Fechando a conexão: " + connectionOrigin); System.out.println("LOG: Primeiros 10 caracteres enviados pelo cliente: " + msg); return false; } } } catch (IOException e) { System.out.println("LOG: Erro ao ler dados da conexao: " + connectionOrigin); System.out.println("LOG: "+ e.getLocalizedMessage()); System.out.println("LOG: Fechando a conexão..."); return false; } // If it gets to here, the protocol is ok and we can add the client boolean inserted = clientsMap.addClient(key, id); if(!inserted){ System.out.println("LOG: Não foi possível adicionar o cliente. Ou ele já está conectado ou já têm clientes demais. Id: " + id); System.out.println("LOG: Fechando a conexão: " + connectionOrigin); return false; } System.out.println("LOG: Novo cliente conectado! Enviando mesnsagem de confirmação. Id: " + id + " Conexao: " + connectionOrigin); /* Here is the error */ sendMessage(id, "Servidor pet: connection accepted"); System.out.println("LOG: Novo cliente conectado! Id: " + id + " Conexao: " + connectionOrigin); return true; } And finally, the method sendMessage(SelectionKey key) looks like this: private void sendMessage(int destId, String msg) { Charset charset = Charset.forName("ISO-8859-1"); CharBuffer charBuffer = CharBuffer.wrap(msg, 0, msg.length()); ByteBuffer bf = charset.encode(charBuffer); //bf.flip(); int bytesSent = 0; SelectionKey key = clientsMap.getClient(destId); SocketChannel sc = (SocketChannel) key.channel(); try { / int total_bytes_sent = 0; while(total_bytes_sent < msg.length()){ bytesSent = sc.write(bf); total_bytes_sent += bytesSent; } System.out.println("LOG: Bytes enviados para o cliente " + destId + ": "+ total_bytes_sent + " Tamanho da mensagem: " + msg.length()); } catch (IOException e) { System.out.println("LOG: Erro ao mandar mensagem para: " + destId); System.out.println("LOG: " + e.getLocalizedMessage()); } } So, what is happening is that the server, when send a message, prints something like this: LOG: Bytes sent to the client: 28 Size of the message: 28 So, it tells that it sent the data, but the chat client keeps blocking, waiting in the recv() method. So, the data never gets to it. When I close the server application, though, all the data appears in the client. I wonder why. 
It is important to say that the client is in C and the server in Java, and I'm running both on the same machine, an Ubuntu guest in VirtualBox under Windows. I have also run both under a Windows host and under Linux hosts, and keep getting the same strange problem. I'm sorry for the great length of this question, but I have already searched a lot of places for an answer, found a lot of tutorials and questions, including here at StackOverflow, but couldn't find a reasonable explanation. I am really not liking this Java NIO, and I saw a lot of people complaining about it too. I am thinking that if I had done this in C it would have been a lot easier :-D So, if someone could help me and even discuss this behavior, it would be great! :-) Thanks everybody in advance, Péterson

    Read the article

  • Method interception in PHP 5.*

    - by Rolf
    Hi everybody, I'm implementing a Log system for PHP, and I'm a bit stuck. All the configuration is defined in an XML file, that declares every method to be logged. XML is well parsed and converted into a multidimensionnal array (classname = array of methods). So far, so good. Let's take a simple example: #A.php class A { public function foo($bar) { echo ' // Hello there !'; } public function bar($foo) { echo " $ù$ùmezf$z !"; } } #B.php class B { public function far($boo) { echo $boo; } } Now, let's say I've this configuration file: <interceptor> <methods class="__CLASS_DIR__A.php"> <method name="foo"> <log-level>INFO</log-level> <log-message>Transaction init</log-message> </method> </methods> <methods class="__CLASS_DIR__B.php"> <method name="far"> <log-level>DEBUG</log-level> <log-message>Useless</log-message> </method> </methods> </interceptor> The thing I'd like AT RUNTIME ONLY (once the XML parser has done his job) is: #Logger.php (its definitely NOT a final version) -- generated by the XML parser class Logger { public function __call($name,$args) { $log_level = args[0]; $args = array_slice($args,1); switch($method_name) { case 'foo': case 'far': //case ..... //write in log files break; } //THEN, RELAY THE CALL TO THE INITIAL METHOD } } #"dynamic" A.php class A extends Logger { public function foo($log_level, $bar) { echo ' // Hello there !'; } public function bar($foo) { echo " $ù$ùmezf$z !"; } } #"dynamic" B.php class B extends Logger { public function far($log_level, $boo) { echo $boo; } } The big challenge here is to transform A and B into their "dynamic" versions, once the XML parser has completed its job. The ideal would be to achieve that without modifying the code of A and B at all (I mean, in the files) - or at least find a way to come back to their original versions once the program is finished. To be clear, I wanna find the most proper way to intercept method calls in PHP. What are your ideas about it ??? Thanks in advance, Rolf

    Read the article

  • XY-Scatter Chart In SSRS Won't Display Points

    - by Dalin Seivewright
    I'm a bit confused with this one. I have a Dataset with a BackupDate and a BackupTime as well as a BackupType. The BackupDate is comprised of 12 characters from the left of a datetime string within a table. The BackupTime is comprised of 8 characters from the right of that same datetime string. So for example: BackupDate would be 'December 12 2008' and the BackupTime would be '12:53PM.' I have added an XY-scatter chart to the report. I've added a 'series' value for the BackupType (so one can distinguish between a Full/Incr/Log backup). I've added a category value of BackupDate and set the Scale for the X-axis from the Min of BackupDate to the Max of BackupDate. I've then added an item to the Values with the Y variable set to BackupTime and the X variable set to BackupDate. The interval for the Y-axis is 12:00AM to 11:59PM and the formatting for the labels is 'hh:mmtt'. The BackupTime matches the format of the Y-axis. The BackupDate matches the format of the X-axis. 10 entries are retrieved by my Dataset and the Legend is properly populated by the BackupType field. No points are being plotted on the graph and no markers/pointers are shown if they are enabled. There should be a point on the graph for every point in time of each day there is a backup of a specific type. Am I missing something? Does anyone know of a good tutorial dealing specifically with XY-scatter graphs and using them in a way I intend? I am using the 2005 version of SSRS rather than the 2008 version. Screenshot of what my chart currently looks like: In case it could be dataset related: SELECT TOP (10) backup_type, LTRIM(RTRIM(LEFT(backup_finish_date, 12))) AS BackupDate, LTRIM(RTRIM(RIGHT(backup_finish_date, 8))) AS BackupTime FROM DBARepository.Backup_History As requested, here are the results of this query. There is a Where clause to constrain the results to a specific database of a specific server that was not included in the above SQL Query. Log Dec 26 2008 12:00PM Log Dec 27 2008 4:00AM Log Dec 27 2008 8:00AM Log Dec 27 2008 12:00PM Log Dec 27 2008 4:00PM Log Dec 27 2008 8:00PM Database Dec 27 2008 10:01PM Log Dec 28 2008 12:00AM Log Dec 28 2008 4:00AM Log Dec 28 2008 8:00AM

    Read the article

  • Logging a certain event only once in ruby

    - by Andrew Grimm
    Are there any logging frameworks in ruby that allow you to log a specific event type only once? logger = IdealLogger.new logger.log(:happy_path, "We reached the happy path") # => logs this message logger.log(:happy_path, "We reached the happy path yet again") # => Doesn't log this logger.log(:sad_path, "We've encountered a sad path!") # => logs this message Also, is there a term for the concept of logging a certain event type only once?
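    The idea itself is small enough to wrap around any logger. A minimal sketch (in Python for brevity; the class and method names are only illustrative, and the hypothetical IdealLogger above could be built the same way in Ruby by delegating to the standard Logger) that remembers which event types have already been logged:

    import logging

    class LogOnce:
        def __init__(self, logger):
            self._logger = logger
            self._seen = set()

        def log(self, event_type, message, level=logging.INFO):
            # Drop the message if this event type has already been logged.
            if event_type in self._seen:
                return
            self._seen.add(event_type)
            self._logger.log(level, message)

    logging.basicConfig(level=logging.INFO)
    logger = LogOnce(logging.getLogger("app"))
    logger.log("happy_path", "We reached the happy path")            # logged
    logger.log("happy_path", "We reached the happy path yet again")  # suppressed
    logger.log("sad_path", "We've encountered a sad path!")          # logged

    The concept is often described as log deduplication or duplicate-message suppression; classic syslog daemons do something related when they collapse repeats into "last message repeated N times".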

    Read the article

  • SQL SERVER – WRITELOG – Wait Type – Day 17 of 28

    - by pinaldave
    WRITELOG is one of the most interesting wait types. So far we have seen a lot of different wait types, but this one is associated with the log file, which makes it interesting to deal with.
    From Book On-Line: WRITELOG occurs while waiting for a log flush to complete. Common operations that cause log flushes are checkpoints and transaction commits.
    WRITELOG Explanation: This wait type is usually seen in heavily transactional databases. When data is modified, it is written both to the log cache and to the buffer cache. This wait type occurs while data in the log cache is being flushed to disk; during this time, the session has to wait on WRITELOG. I have recently seen this wait type persist at a client's place, where one of the long-running transactions was stopped by the user, causing it to roll back. In the future, I will see if I can re-create this situation on my own machine to validate the relation.
    Reducing WRITELOG wait: There are several suggestions to reduce this wait type:
    Move the transaction log to a separate disk from the mdf and other files.
    Avoid cursor-like coding methodology and frequent committing of statements.
    Find the most active file based on IO stall time, using the script written over here. You can also use fn_virtualfilestats to find IO-related issues using the script mentioned over here.
    Check the IO-related counters (PhysicalDisk: Avg. Disk Queue Length, PhysicalDisk: Disk Read Bytes/sec and PhysicalDisk: Disk Write Bytes/sec) for additional details. Read about them over here.
    There are two excellent resources by Paul Randal; I suggest you learn the subject from those videos. The links to the videos are here and here.
    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • ASP.NET MVC 3 Hosting :: How to Deploy Web Apps Using ASP.NET MVC 3, Razor and EF Code First - Part I

    - by mbridge
    First, you can download the source code from http://efmvc.codeplex.com. The following frameworks will be used for this step by step tutorial. public class Category {     public int CategoryId { get; set; }     [Required(ErrorMessage = "Name Required")]     [StringLength(25, ErrorMessage = "Must be less than 25 characters")]     public string Name { get; set;}     public string Description { get; set; }     public virtual ICollection<Expense> Expenses { get; set; } } Expense Class public class Expense {             public int ExpenseId { get; set; }            public string  Transaction { get; set; }     public DateTime Date { get; set; }     public double Amount { get; set; }     public int CategoryId { get; set; }     public virtual Category Category { get; set; } }    Define Domain Model Let’s create domain model for our simple web application Category Class We have two domain entities - Category and Expense. A single category contains a list of expense transactions and every expense transaction should have a Category. In this post, we will be focusing on CRUD operations for the entity Category and will be working on the Expense entity with a View Model object in the later post. And the source code for this application will be refactored over time. The above entities are very simple POCO (Plain Old CLR Object) classes and the entity Category is decorated with validation attributes in the System.ComponentModel.DataAnnotations namespace. Now we want to use these entities for defining model objects for the Entity Framework 4. Using the Code First approach of Entity Framework, we can first define the entities by simply writing POCO classes without any coupling with any API or database library. This approach lets you focus on domain model which will enable Domain-Driven Development for applications. EF code first support is currently enabled with a separate API that is runs on top of the Entity Framework 4. EF Code First is reached CTP 5 when I am writing this article. Creating Context Class for Entity Framework We have created our domain model and let’s create a class in order to working with Entity Framework Code First. For this, you have to download EF Code First CTP 5 and add reference to the assembly EntitFramework.dll. You can also use NuGet to download add reference to EEF Code First. public class MyFinanceContext : DbContext {     public MyFinanceContext() : base("MyFinance") { }     public DbSet<Category> Categories { get; set; }     public DbSet<Expense> Expenses { get; set; }         }   The above class MyFinanceContext is derived from DbContext that can connect your model classes to a database. The MyFinanceContext class is mapping our Category and Expense class into database tables Categories and Expenses using DbSet<TEntity> where TEntity is any POCO class. When we are running the application at first time, it will automatically create the database. EF code-first look for a connection string in web.config or app.config that has the same name as the dbcontext class. If it is not find any connection string with the convention, it will automatically create database in local SQL Express database by default and the name of the database will be same name as the dbcontext class. You can also define the name of database in constructor of the the dbcontext class. Unlike NHibernate, we don’t have to use any XML based mapping files or Fluent interface for mapping between our model and database. 
The model classes of Code First are working on the basis of conventions and we can also use a fluent API to refine our model. The convention for primary key is ‘Id’ or ‘<class name>Id’.  If primary key properties are detected with type ‘int’, ‘long’ or ‘short’, they will automatically registered as identity columns in the database by default. Primary key detection is not case sensitive. We can define our model classes with validation attributes in the System.ComponentModel.DataAnnotations namespace and it automatically enforces validation rules when a model object is updated or saved. Generic Repository for EF Code First We have created model classes and dbcontext class. Now we have to create generic repository pattern for data persistence with EF code first. If you don’t know about the repository pattern, checkout Martin Fowler’s article on Repository Let’s create a generic repository to working with DbContext and DbSet generics. public interface IRepository<T> where T : class     {         void Add(T entity);         void Delete(T entity);         T GetById(long Id);         IEnumerable<T> All();     } RepositoryBasse – Generic Repository class protected MyFinanceContext Database {     get { return database ?? (database = DatabaseFactory.Get()); } } public virtual void Add(T entity) {     dbset.Add(entity);            }        public virtual void Delete(T entity) {     dbset.Remove(entity); }   public virtual T GetById(long id) {     return dbset.Find(id); }   public virtual IEnumerable<T> All() {     return dbset.ToList(); } } DatabaseFactory class public class DatabaseFactory : Disposable, IDatabaseFactory {     private MyFinanceContext database;     public MyFinanceContext Get()     {         return database ?? (database = new MyFinanceContext());     }     protected override void DisposeCore()     {         if (database != null)             database.Dispose();     } } Unit of Work If you are new to Unit of Work pattern, checkout Fowler’s article on Unit of Work . According to Martin Fowler, the Unit of Work pattern "maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems." Let’s create a class for handling Unit of Work public interface IUnitOfWork {     void Commit(); } UniOfWork class public class UnitOfWork : IUnitOfWork {     private readonly IDatabaseFactory databaseFactory;     private MyFinanceContext dataContext;       public UnitOfWork(IDatabaseFactory databaseFactory)     {         this.databaseFactory = databaseFactory;     }       protected MyFinanceContext DataContext     {         get { return dataContext ?? (dataContext = databaseFactory.Get()); }     }       public void Commit()     {         DataContext.Commit();     } } The Commit method of the UnitOfWork will call the commit method of MyFinanceContext class and it will execute the SaveChanges method of DbContext class.   Repository class for Category In this post, we will be focusing on the persistence against Category entity and will working on other entities in later post. Let’s create a repository for handling CRUD operations for Category using derive from a generic Repository RepositoryBase<T>. 
public class CategoryRepository: RepositoryBase<Category>, ICategoryRepository     {     public CategoryRepository(IDatabaseFactory databaseFactory)         : base(databaseFactory)         {         }                } public interface ICategoryRepository : IRepository<Category> { } If we need additional methods than generic repository for the Category, we can define in the CategoryRepository. Dependency Injection using Unity 2.0 If you are new to Inversion of Control/ Dependency Injection or Unity, please have a look on my articles at http://weblogs.asp.net/shijuvarghese/archive/tags/IoC/default.aspx. I want to create a custom lifetime manager for Unity to store container in the current HttpContext. public class HttpContextLifetimeManager<T> : LifetimeManager, IDisposable {     public override object GetValue()     {         return HttpContext.Current.Items[typeof(T).AssemblyQualifiedName];     }     public override void RemoveValue()     {         HttpContext.Current.Items.Remove(typeof(T).AssemblyQualifiedName);     }     public override void SetValue(object newValue)     {         HttpContext.Current.Items[typeof(T).AssemblyQualifiedName] = newValue;     }     public void Dispose()     {         RemoveValue();     } } Let’s create controller factory for Unity in the ASP.NET MVC 3 application.                 404, String.Format(                     "The controller for path '{0}' could not be found" +     "or it does not implement IController.",                 reqContext.HttpContext.Request.Path));       if (!typeof(IController).IsAssignableFrom(controllerType))         throw new ArgumentException(                 string.Format(                     "Type requested is not a controller: {0}",                     controllerType.Name),                     "controllerType");     try     {         controller= container.Resolve(controllerType) as IController;     }     catch (Exception ex)     {         throw new InvalidOperationException(String.Format(                                 "Error resolving controller {0}",                                 controllerType.Name), ex);     }     return controller; }   } Configure contract and concrete types in Unity Let’s configure our contract and concrete types in Unity for resolving our dependencies. private void ConfigureUnity() {     //Create UnityContainer               IUnityContainer container = new UnityContainer()                 .RegisterType<IDatabaseFactory, DatabaseFactory>(new HttpContextLifetimeManager<IDatabaseFactory>())     .RegisterType<IUnitOfWork, UnitOfWork>(new HttpContextLifetimeManager<IUnitOfWork>())     .RegisterType<ICategoryRepository, CategoryRepository>(new HttpContextLifetimeManager<ICategoryRepository>());                 //Set container for Controller Factory                ControllerBuilder.Current.SetControllerFactory(             new UnityControllerFactory(container)); } In the above ConfigureUnity method, we are registering our types onto Unity container with custom lifetime manager HttpContextLifetimeManager. Let’s call ConfigureUnity method in the Global.asax.cs for set controller factory for Unity and configuring the types with Unity. protected void Application_Start() {     AreaRegistration.RegisterAllAreas();     RegisterGlobalFilters(GlobalFilters.Filters);     RegisterRoutes(RouteTable.Routes);     ConfigureUnity(); } Developing web application using ASP.NET MVC 3 We have created our domain model for our web application and also have created repositories and configured dependencies with Unity container. 
Now we have to create controller classes and views for doing CRUD operations against the Category entity. Let’s create controller class for Category Category Controller public class CategoryController : Controller {     private readonly ICategoryRepository categoryRepository;     private readonly IUnitOfWork unitOfWork;           public CategoryController(ICategoryRepository categoryRepository, IUnitOfWork unitOfWork)     {         this.categoryRepository = categoryRepository;         this.unitOfWork = unitOfWork;     }       public ActionResult Index()     {         var categories = categoryRepository.All();         return View(categories);     }     [HttpGet]     public ActionResult Edit(int id)     {         var category = categoryRepository.GetById(id);         return View(category);     }       [HttpPost]     public ActionResult Edit(int id, FormCollection collection)     {         var category = categoryRepository.GetById(id);         if (TryUpdateModel(category))         {             unitOfWork.Commit();             return RedirectToAction("Index");         }         else return View(category);                 }       [HttpGet]     public ActionResult Create()     {         var category = new Category();         return View(category);     }           [HttpPost]     public ActionResult Create(Category category)     {         if (!ModelState.IsValid)         {             return View("Create", category);         }                     categoryRepository.Add(category);         unitOfWork.Commit();         return RedirectToAction("Index");     }       [HttpPost]     public ActionResult Delete(int  id)     {         var category = categoryRepository.GetById(id);         categoryRepository.Delete(category);         unitOfWork.Commit();         var categories = categoryRepository.All();         return PartialView("CategoryList", categories);       }        } Creating Views in Razor Now we are going to create views in Razor for our ASP.NET MVC 3 application.  Let’s create a partial view CategoryList.cshtml for listing category information and providing link for Edit and Delete operations. CategoryList.cshtml @using MyFinance.Helpers; @using MyFinance.Domain; @model IEnumerable<Category>      <table>         <tr>         <th>Actions</th>         <th>Name</th>          <th>Description</th>         </tr>     @foreach (var item in Model) {             <tr>             <td>                 @Html.ActionLink("Edit", "Edit",new { id = item.CategoryId })                 @Ajax.ActionLink("Delete", "Delete", new { id = item.CategoryId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divCategoryList" })                           </td>             <td>                 @item.Name             </td>             <td>                 @item.Description             </td>         </tr>         }       </table>     <p>         @Html.ActionLink("Create New", "Create")     </p> The delete link is providing Ajax functionality using the Ajax.ActionLink. This will call an Ajax request for Delete action method in the CategoryCotroller class. In the Delete action method, it will return Partial View CategoryList after deleting the record. We are using CategoryList view for the Ajax functionality and also for Index view using for displaying list of category information. 
Let’s create Index view using partial view CategoryList  Index.chtml @model IEnumerable<MyFinance.Domain.Category> @{     ViewBag.Title = "Index"; }    <h2>Category List</h2>    <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script>    <div id="divCategoryList">               @Html.Partial("CategoryList", Model) </div> We can call the partial views using Html.Partial helper method. Now we are going to create View pages for insert and update functionality for the Category. Both view pages are sharing common user interface for entering the category information. So I want to create an EditorTemplate for the Category information. We have to create the EditorTemplate with the same name of entity object so that we can refer it on view pages using @Html.EditorFor(model => model) . So let’s create template with name Category. Category.cshtml @model MyFinance.Domain.Category <div class="editor-label"> @Html.LabelFor(model => model.Name) </div> <div class="editor-field"> @Html.EditorFor(model => model.Name) @Html.ValidationMessageFor(model => model.Name) </div> <div class="editor-label"> @Html.LabelFor(model => model.Description) </div> <div class="editor-field"> @Html.EditorFor(model => model.Description) @Html.ValidationMessageFor(model => model.Description) </div> Let’s create view page for insert Category information @model MyFinance.Domain.Category   @{     ViewBag.Title = "Save"; }   <h2>Create</h2>   <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>   @using (Html.BeginForm()) {     @Html.ValidationSummary(true)     <fieldset>         <legend>Category</legend>                @Html.EditorFor(model => model)               <p>             <input type="submit" value="Create" />         </p>     </fieldset> }   <div>     @Html.ActionLink("Back to List", "Index") </div> ViewStart file In Razor views, we can add a file named _viewstart.cshtml in the views directory  and this will be shared among the all views with in the Views directory. The below code in the _viewstart.cshtml, sets the Layout page for every Views in the Views folder.     @{     Layout = "~/Views/Shared/_Layout.cshtml"; } Tomorrow, we will cotinue the second part of this article. :)

    Read the article

  • SQL SERVER – LOGBUFFER – Wait Type – Day 18 of 28

    - by pinaldave
    At first, I was not planning to write about this wait type. The reason was simple: I have faced this only once in my lifetime so far, maybe because it is not one of the top 5 wait types. I am not sure if it is a common wait type or not, but in the samples I had it really looks rare to me.
    From Book On-Line: LOGBUFFER occurs when a task is waiting for space in the log buffer to store a log record. Consistently high values may indicate that the log devices cannot keep up with the amount of log being generated by the server.
    LOGBUFFER Explanation: The Book On-Line definition of LOGBUFFER seems to be very accurate. On the system where I faced this wait type, the log file (LDF) was put on the local disk, and the data files (MDF, NDF) were put on SAN drives. My client was not familiar with how the files were supposed to be distributed. Once we moved the LDF to a faster drive, this wait type disappeared.
    Reducing LOGBUFFER wait: There are several suggestions to reduce this wait type:
    Move the transaction log to a separate disk from the mdf and other files. (Make sure the drive holding your LDF has no IO bottleneck issues.)
    Avoid cursor-like coding methodology and frequent commit statements.
    Find the most active file based on IO stall time, as shown in the script written over here. You can also use fn_virtualfilestats to find IO-related issues using the script mentioned over here.
    Check the IO-related counters (PhysicalDisk: Avg. Disk Queue Length, PhysicalDisk: Disk Read Bytes/sec and PhysicalDisk: Disk Write Bytes/sec) for additional details. Read about them over here.
    If you have noticed, my suggestions for reducing LOGBUFFER are very similar to those for WRITELOG. Although the procedures for reducing them are alike, I am not suggesting that LOGBUFFER and WRITELOG are the same wait type. From the definitions of the two, you will see their difference; however, they are both related to the log, and both of them can severely degrade performance.
    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)   Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

< Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >