Search Results

Search found 6107 results on 245 pages for 'reserved words'.


  • Solaris syslog.conf. What are root and operator?

    - by cjavapro
    In /etc/syslog.conf:

        #ident  "@(#)syslog.conf        1.5     98/12/14 SMI"   /* SunOS 5.0 */
        #
        # Copyright (c) 1991-1998 by Sun Microsystems, Inc.
        # All rights reserved.
        #
        # syslog configuration file.
        #
        # This file is processed by m4 so be careful to quote (`') names
        # that match m4 reserved words. Also, within ifdef's, arguments
        # containing commas must be quoted.
        #
        *.err;kern.notice;auth.notice                   /dev/sysmsg
        *.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
        *.alert;kern.err;daemon.err                     operator
        *.alert                                         root
        *.emerg                                         *
        # if a non-loghost machine chooses to have authentication messages
        # sent to the loghost machine, un-comment out the following line:
        #auth.notice                    ifdef(`LOGHOST', /var/log/authlog, @loghost)
        mail.debug                      ifdef(`LOGHOST', /var/log/syslog, @loghost)
        #
        # non-loghost machines will use the following lines to cause "user"
        # log messages to be logged locally.
        #
        ifdef(`LOGHOST', ,
        user.err                                        /dev/sysmsg
        user.err                                        /var/adm/messages
        user.alert                                      `root, operator'
        user.emerg                                      *
        )

    I googled some and it seems that root and operator mean email to root and to operator. Is this correct?

    Read the article

  • Update OCZ Vertex LE capacity via firmware update

    - by Ben Voigt
    I have an OCZ Vertex LE 100GB drive. It's actually 128GiB of NAND flash, with a whopping 28%+ reserved for write combining. Most 128GiB drives are actually ~115GB usable (and marketed as 120GB or 128GB). There were rumors that the reserved fraction could be decreased on OCZ 100GB drives. Can anyone provide a link to firmware that does that, or an official statement that no such firmware exists? (NB: I recently installed the 1.24 firmware from the OCZ site; it didn't affect the capacity, possibly because the rumors say the capacity change is destructive to existing content.) Of possible interest: flashing the firmware was more of a pain than it should have been -- the tool didn't detect the disk until I booted an older Windows install off a secondary hard disk. I suspect the Intel SATA driver is the issue and that the tool only works with the msahci.sys driver.

    Read the article

  • What am I doing wrong trying to reinstall Windows 7 Starter onto my Acer Aspire One netbook?

    - by Robbie Roberts
    I have been having some issues with my netbook, so I figured I would reformat it. I downloaded a copy of Windows 7 Starter, inserted it into my USB DVD drive and started my netbook. I made it as far as "Where do you want to install Windows?" and it seems like the computer just freezes. It shows:

        Disk 0 Partition 1: PQSERVICE        13.0 GB   OEM (Reserved)
        Disk 0 Partition 2: SYSTEM RESERVED  101.0 MB  System
        Disk 0 Partition 3                   218.8 GB  Primary

    I cannot click on any of them. What am I doing wrong?

    Read the article

  • Why does diskpart set the volume attributes on all volumes?

    - by Nick
    I was trying to migrate a Win7 OS from an HDD to an SSD. I created two partitions with a 1024 KB offset using diskpart: a 100 MB System Reserved partition and a 60 GB partition for C:. I cloned their contents using EASEUS Disk Copy. I then booted the Windows 7 DVD and wanted to use diskpart to drop the drive letter for the System Reserved partition and make it hidden:

        select volume 0
        detail volume
        attribute volume set nodefaultdriveletter
        attribute volume set hidden

    These two attribute set commands acted on both volumes (0 and 1, System Reserved and C:) instead of only the selected one, and vice versa: when I tried to clear these attributes from volume 1, it cleared them from volume 0 as well. Why does DiskPart behave this way?

    Read the article

  • How can the Private Bytes of a process be significantly less than its effect on the system commit charge?

    - by bacar
    On a 64-bit Windows Server 2003, I can see using taskmgr or process explorer that the total commit charge is around 3.5GB, yet when I sum the Private Bytes consumed by each process (by running pslist -m and adding all values under the Priv column) the total comes in at 1.6GB. I know which process seems to be causing this (sqlservr.exe) as when I kill the process, the commit charge drops dramatically. However the process in question is consuming only ~220MB of Private Bytes yet killing the process drops the commit charge by ~1.6GB. How is this possible? How can the commit charge be so significantly greater than Private Bytes, which should represent the amount of committed memory? If some other factor contributes to the commit charge, what is that factor and how can I view its impact in process explorer? Note: I claim that I understand the difference between reserved and committed memory already: my investigations above relate specifically to Private Bytes which includes only committed memory and excludes reserved memory. the Virtual Size of the process in this case is over 4GB, but this should be irrelevant - Virtual Size in procexp represents reserved, not committed memory, and should not contribute to the commit charge. I'm particularly interested in generalised answers to this question: I'm assuming that if sqlservr.exe can behave in this way, that any process potentially could. Further Investigations I notice that pointing Sysinternals VMMap at this process reports a committed "Private Data" of 1.6GB despite Procexp's reported a Private Bytes of 220MB. This is particularly strange given that the documentation for this field in the "Windows® Sysinternals Administrator's Reference" states that: Private Data memory is memory that is allocated by VirtualAlloc and that is not further handled by the Heap Manager or the .NET runtime, or assigned to the Stack category... VMMap’s definition of “Private Data” is more granular than that of Process Explorer’s “private bytes.” Procexp’s “private bytes” includes all private committed memory belonging to the process. i.e. that VMMap's committed "Private Data" should be smaller than procexp's "Private Bytes". Also, after reading the 'Process committed memory' section of Mark Russinovich's excellent Pushing the Limits of Windows: Virtual Memory, he highlights two cases which won't show up in Private Bytes: File mapping views with copy-on-write semantics (however, according to VMMap there is no significant space allocated to Mapped Files). pagefile-backed virtual memory (however, I tried testlimit with the -l flag as suggested, and no significant memory is consumed by pagefile-backed sections)

    Read the article

  • Windows 8.1 - Why are there multiple recovery partitions in the system?

    - by Abhiram
        DISKPART> list partition

          Partition ###  Type              Size     Offset
          -------------  ----------------  -------  -------
          Partition 1    System            500 MB   1024 KB
          Partition 2    OEM               40 MB    501 MB
          Partition 3    Reserved          128 MB   541 MB
          Partition 4    Recovery          490 MB   669 MB
          Partition 5    Primary           920 GB   1159 MB
          Partition 6    Recovery          350 MB   921 GB
          Partition 7    Recovery          9 GB     921 GB

    Above is the list of partitions on my system that I recently upgraded to Windows 8.1. Why are there multiple recovery partitions (4, 6, 7)? Shouldn't there be just one recovery partition? And what is the Reserved partition (#3) for?

    Read the article

  • Does anybody know why the sectors of the IBM floppy disk are numbered 1 to 8 (and not 0 to 7)?

    - by Olivier Briand
    I am now programming (as a hobby) on an 8-bit Z80 computer running CP/M 2.2, and the floppy disk format is IBM: 40 tracks, 8 sectors per track, 512 bytes per sector. Free space is 154 KB on each side of the disk. Why are the sectors indexed 1 to 8 (and not zero to seven, as is usual with computers)? The catalog of the floppy disk is on track 1 (sectors 1 to 4, 64 entries). I'm wondering why the catalog isn't on track zero. Is track zero reserved to hold a system (just as tracks 0 & 1 are reserved for the system on a standard CP/M floppy disk, with the catalog on track 2)?
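    As a rough cross-check of those numbers (my own arithmetic, not from the original post, and only one way the figures could line up):

        40 \times 8 \times 512 = 163\,840 \text{ bytes} = 160 \text{ KiB per side}
        160 \text{ KiB} - 4 \text{ KiB (one reserved track)} - 2 \text{ KiB (directory: } 64 \times 32 \text{ bytes)} = 154 \text{ KiB}

    In other words, the quoted 154 KB of free space is consistent with one full track plus a 64-entry, 32-bytes-per-entry directory being set aside, though the post itself does not confirm that split.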

    Read the article

  • How to change a DomU's memory at runtime

    - by saffron
    I have a Xen server with xen-4.1.3, linux-image-3.2.0-3-amd64, Debian Squeeze and 16 GB of RAM. Domain-0 has 1 GB of RAM; the rest of the memory belongs to the hypervisor. I want to start a guest domain with a minimal amount of memory and increase it later at runtime. When I start a guest domain with 256 MB of RAM and run xm mem-set domu 4Gb, I get only ~3 GB in the domU, and free in the guest domain says:

        root@test:~# free
                     total       used       free     shared    buffers     cached
        Mem:       2830620      72868    2757752          0       2432      43504
        -/+ buffers/cache:      26932    2803688
        Swap:      1048572          0    1048572

    And the guest domain's dmesg says:

        [    0.000000] Memory: 175912k/2883584k available (3527k kernel code, 448k absent, 2707224k reserved, 3210k data, 612k init)

    When I start a guest domain with 2 GB of RAM, I can run xm mem-set domu 7Gb and get ~7 GB of RAM in the guest domain:

        root@test:~# free
                     total       used       free     shared    buffers     cached
        Mem:       6828228      74944    6753284          0       1328      12568
        -/+ buffers/cache:      61048    6767180
        Swap:      1048572          0    1048572

    And the guest domain's dmesg:

        [    0.000000] Memory: 1674960k/16651264k available (3527k kernel code, 448k absent, 14975856k reserved, 3210k data, 612k init)

    How can I start a guest domain with a minimal amount of RAM (256 MB) and increase it later, up to 15 GB?

    Read the article

  • Using MS Standalone profiler in VS2008 Professional

    - by fishdump
    I am trying to profile my .NET dll while running it from VS unit testing tools but I am having problems. I am using the standalone command-line profiler as VS2008 Professional does not come with an inbuilt profiler. I have an open CMD window and have run the following commands (I instrumented it earlier which is why vsinstr gave the warning that it did): C:\...\BusinessRules\obj\Debug>vsperfclrenv /samplegclife /tracegclife /globalsamplegclife /globaltracegclife Enabling VSPerf Sampling Attach Profiling. Allows to 'attaching' to managed applications. Current Profiling Environment variables are: COR_ENABLE_PROFILING=1 COR_PROFILER={0a56a683-003a-41a1-a0ac-0f94c4913c48} COR_LINE_PROFILING=1 COR_GC_PROFILING=2 C:\...\BusinessRules\obj\Debug>vsinstr BusinessRules.dll Microsoft (R) VSInstr Post-Link Instrumentation 9.0.30729 x86 Copyright (C) Microsoft Corp. All rights reserved. Error VSP1018 : VSInstr does not support processing binaries that are already instrumented. C:\...\BusinessRules\obj\Debug>vsperfcmd /start:trace /output:foo.vsp Microsoft (R) VSPerf Command Version 9.0.30729 x86 Copyright (C) Microsoft Corp. All rights reserved. C:\...\BusinessRules\obj\Debug> I then ran the unit tests that exercised the instrumented code. When the unit tests were complete, I did... C:\...\BusinessRules\obj\Debug>vsperfcmd /shutdown Microsoft (R) VSPerf Command Version 9.0.30729 x86 Copyright (C) Microsoft Corp. All rights reserved. Waiting for process 4836 ( C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\vstesthost.exe) to shutdown... It was clearly waiting for VS2008 to close so I closed it... Shutting down the Profile Monitor ------------------------------------------------------------ C:\...\BusinessRules\obj\Debug> All looking good, there was a 3.2mb foo.vsp file in the directory. I next did... C:\...\BusinessRules\obj\Debug>vsperfreport foo.vsp /summary:all Microsoft (R) VSPerf Report Generator, Version 9.0.0.0 Copyright (C) Microsoft Corporation. All rights reserved. VSP2340: Environment variables were not properly set during profiling run and managed symbols may not resolve. Please use vsperfclrenv before profiling. File opened Successfully opened the file. A report file, foo_Header.csv, has been generated. A report file, foo_MarksSummary.csv, has been generated. A report file, foo_ProcessSummary.csv, has been generated. A report file, foo_ThreadSummary.csv, has been generated. Analysis completed A report file, foo_FunctionSummary.csv, has been generated. A report file, foo_CallerCalleeSummary.csv, has been generated. A report file, foo_CallTreeSummary.csv, has been generated. A report file, foo_ModuleSummary.csv, has been generated. C:\...\BusinessRules\obj\Debug> Notice the warning about environment variables and using vsperfclrenv? But I had run it! Maybe I used the wrong switches? I don't know. Anyway, loading the csv files into Excel or using the perfconsole tool gives loads of useful info with useless symbol names: *** Loading commands from: C:\temp\PerfConsole\bin\commands\timebytype.dll *** Adding command: timebytype *** Loading commands from: C:\temp\PerfConsole\bin\commands\partition.dll *** Adding command: partition Welcome to PerfConsole 1.0 (for bugs please email: [email protected]), for help type: ?, for a quickstart type: ?? 
> load foo.vsp *** Couldn't match to either expected sampled or instrumented profile schema, defaulting to sampled *** Couldn't match to either expected sampled or instrumented profile schema, defaulting to sampled *** Profile loaded from 'foo.vsp' into @foo > > functions @foo >>>>> Function Name Exclusive Inclusive Function Name Module Name -------------------- -------------------- -------------- --------------- 900,798,600,000.00 % 900,798,600,000.00 % 0x0600003F 20397910 14,968,500,000.00 % 44,691,540,000.00 % 0x06000040 14736385 8,101,253,000.00 % 14,836,330,000.00 % 0x06000041 5491345 3,216,315,000.00 % 6,876,929,000.00 % 0x06000042 3924533 <snip> 71,449,430.00 % 71,449,430.00 % 0x0A000074 42572 52,914,200.00 % 52,914,200.00 % 0x0A000073 0 14,791.00 % 13,006,010.00 % 0x0A00007B 0 199,177.00 % 6,082,932.00 % 0x2B000001 5350072 2,420,116.00 % 2,420,116.00 % 0x0A00008A 0 836.00 % 451,888.00 % 0x0A000045 0 9,616.00 % 399,436.00 % 0x0A000039 0 18,202.00 % 298,223.00 % 0x06000046 1479900 I am so close to being able to find the bottlenecks, if only it will give me the function and module names instead of hex numbers! What am I doing wrong? --- Alistair.

    Read the article

  • making an array controller the target of a button

    - by ian
    I am working through a chapter of COCOA PROGRAMMING FOR MAC OS X (3RD EDITION) on NSArrayController and it tells me to: Control-Drag to make the array controller become the target of the Add New Employee button. Set the action to add: However when I drag over the array controller it does not highlight so I get no target options. How do I do this correctly in the new XCode full size image document.h: // // Document.h // RaiseMan // // Created by user on 11/12/11. // Copyright (c) 2011 __MyCompanyName__. All rights reserved. // #import <Cocoa/Cocoa.h> @interface Document : NSDocument { NSMutableArray *employees; } @end document.m: // // Document.m // RaiseMan // // Created by user on 11/12/11. // Copyright (c) 2011 __MyCompanyName__. All rights reserved. // #import "Document.h" @implementation Document - (id)init { self = [super init]; if (self) { employees = [[NSMutableArray alloc] init]; } return self; } - (void)dealloc { [self setEmployees:nil]; [super dealloc]; } -(void)setEmployees:(NSMutableArray *)a { //this is an unusual setter method we are goign to ad a lot of smarts in the next chapter if (a == employees) return; [a retain]; [employees release]; employees = a; } - (NSString *)windowNibName { // Override returning the nib file name of the document // If you need to use a subclass of NSWindowController or if your document supports multiple NSWindowControllers, you should remove this method and override -makeWindowControllers instead. return @"Document"; } - (void)windowControllerDidLoadNib:(NSWindowController *)aController { [super windowControllerDidLoadNib:aController]; // Add any code here that needs to be executed once the windowController has loaded the document's window. } - (NSData *)dataOfType:(NSString *)typeName error:(NSError **)outError { /* Insert code here to write your document to data of the specified type. If outError != NULL, ensure that you create and set an appropriate error when returning nil. You can also choose to override -fileWrapperOfType:error:, -writeToURL:ofType:error:, or -writeToURL:ofType:forSaveOperation:originalContentsURL:error: instead. */ NSException *exception = [NSException exceptionWithName:@"UnimplementedMethod" reason:[NSString stringWithFormat:@"%@ is unimplemented", NSStringFromSelector(_cmd)] userInfo:nil]; @throw exception; return nil; } - (BOOL)readFromData:(NSData *)data ofType:(NSString *)typeName error:(NSError **)outError { /* Insert code here to read your document from the given data of the specified type. If outError != NULL, ensure that you create and set an appropriate error when returning NO. You can also choose to override -readFromFileWrapper:ofType:error: or -readFromURL:ofType:error: instead. If you override either of these, you should also override -isEntireFileLoaded to return NO if the contents are lazily loaded. */ NSException *exception = [NSException exceptionWithName:@"UnimplementedMethod" reason:[NSString stringWithFormat:@"%@ is unimplemented", NSStringFromSelector(_cmd)] userInfo:nil]; @throw exception; return YES; } + (BOOL)autosavesInPlace { return YES; } - (void)setEmployees:(NSMutableArray *)a; @end person.h: // // Person.h // RaiseMan // // Created by user on 11/12/11. // Copyright (c) 2011 __MyCompanyName__. All rights reserved. 
// #import <Foundation/Foundation.h> @interface Person : NSObject { NSString *personName; float expectedRaise; } @property (readwrite, copy) NSString *personName; @property (readwrite) float expectedRaise; @end person.m: // // Person.m // RaiseMan // // Created by user on 11/12/11. // Copyright (c) 2011 __MyCompanyName__. All rights reserved. // #import "Person.h" @implementation Person - (id) init { self = [super init]; expectedRaise = 5.0; personName = @"New Person"; return self; } - (void)dealloc { [personName release]; [super dealloc]; } @synthesize personName; @synthesize expectedRaise; @end

    Read the article

  • Invitation: Manageability Partner Community

    - by pfolgado
    WELCOME TO THE NEW ORACLE EMEA MANAGEABILITY PARTNER COMMUNITY

    Dear partner,

    You are receiving this message because you are a registered member of the Oracle Applications & Systems Management Partner Community in EMEA. On the occasion of the announcement of Oracle Enterprise Manager 12c, we are revitalizing and rebranding our EMEA Applications & Systems Management Partner Community. To do this we have improved the community platform for better and increased collaboration:

    - The EMEA Applications & Systems Management Partner Community has been renamed "Manageability Partner Community EMEA".
    - We have created a Manageability Community blog and a Collaboration Workspace:
      - The EMEA Manageability Partner Community blog is a public blog that we use to provide quick and easy communication to community members. (Please bookmark it or subscribe to the RSS feeds.)
      - The EMEA Manageability Partner Community Collaborative Workspace is a restricted area that only community members can access. It contains materials from community events, sales kits and implementation experiences, reserved for community members. It also allows partners to share content and collaborate with other community members. As a registered member of the community, you have already been granted access to this restricted area.
    - A dedicated team manages the EMEA Manageability community on a continuous basis.

    What do you have to do? All you have to do now is bookmark the EMEA Manageability Partner Community blog page or subscribe to the blog's RSS feeds, and use this as your central point of contact for Manageability information from Oracle. I look forward to developing a strong community in the Manageability area, where Oracle Manageability partners can share experiences and mutually benefit.

    Best regards,

    Javier Puerta, Director Core Technology Partner Programs, Alliances & Channels EMEA (Phone: +34 916 312 41, Mobile: +34 609 062 373)
    Patrick Rood, EMEA Partner Programs for Manageability, Oracle EMEA Technology (Phone: +31 306 627 969, Mobile: +31 611 954 277)

    Read the article

  • Determine All SQL Server Table Sizes

    I'm doing some work to migrate and optimize a large-ish (40GB) SQL Server database at the moment. Moving such a database between data centers over the Internet is not without its challenges. In my case, virtually all of the size of the database is the result of one table, which has over 200M rows of data. To determine the size of this table on disk, you can run the sp_spaceused stored procedure, like so:

        EXEC sp_spaceused lq_ActivityLog

    This results in the following output (not reproduced in this excerpt). Of course, this only shows one table; if you have a lot of tables and need to know which ones are taking up the most space, it would be nice if you could run a query to list all of the tables, perhaps ordered by the space they're taking up. Thanks to Mitchel Sellers (and Gregg Stark's CURSOR template) and a tiny bit of my own edits, now you can! Create the stored procedure below and call it to see a listing of all user tables in your database, ordered by their reserved space.

        -- Lists Space Used for all user tables
        CREATE PROCEDURE GetAllTableSizes
        AS
        DECLARE @TableName VARCHAR(100)

        DECLARE tableCursor CURSOR FORWARD_ONLY
        FOR select [name]
            from dbo.sysobjects
            where OBJECTPROPERTY(id, N'IsUserTable') = 1
        FOR READ ONLY

        CREATE TABLE #TempTable
        (
            tableName varchar(100),
            numberofRows varchar(100),
            reservedSize varchar(50),
            dataSize varchar(50),
            indexSize varchar(50),
            unusedSize varchar(50)
        )

        OPEN tableCursor
        WHILE (1=1)
        BEGIN
            FETCH NEXT FROM tableCursor INTO @TableName
            IF(@@FETCH_STATUS<>0) BREAK;
            INSERT #TempTable
                EXEC sp_spaceused @TableName
        END
        CLOSE tableCursor
        DEALLOCATE tableCursor

        UPDATE #TempTable
        SET reservedSize = REPLACE(reservedSize, ' KB', '')

        SELECT tableName 'Table Name',
               numberofRows 'Total Rows',
               reservedSize 'Reserved KB',
               dataSize 'Data Size',
               indexSize 'Index Size',
               unusedSize 'Unused Size'
        FROM #TempTable
        ORDER BY CONVERT(bigint, reservedSize) DESC

        DROP TABLE #TempTable
        GO
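    A quick usage sketch (my own addition, not from the original article), assuming the procedure was created in the default dbo schema:

        -- List every user table, largest reserved space first
        EXEC dbo.GetAllTableSizes;

        -- For comparison, the single-table figure shown earlier in the post
        EXEC sp_spaceused 'lq_ActivityLog';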

    Read the article

  • Amazon EC2 vs. a dedicated server at Hetzner: what's the point of EC2?

    - by C-Blu
    After searching the web I still can't find the reason to use EC2. What's the point of scaling with EC2? "If you expect a huge burst in traffic," they say. OK, but what if you already have a couple of sites with good traffic and, for example, a medium reserved EC2 instance is not enough? You are paying $36.60 (medium reserved for 1 year) in the EU (Ireland) + traffic + optional expenses for databases and S3 if you use them. Of course, at some point, while you are under $56.60-$66.10, you can optimize your hosting costs with Amazon EC2. But if at some point you purchase an EX4 server from Hetzner, it will surpass your performance needs for a long time, before you get massive traffic. (Am I wrong?)

        CPU:     i7-2600 quad-core (3.4-3.8 GHz)
        RAM:     16 GB
        HDD:     2x3 TB SATA (6 Gbit/s) - I think the disk performance of a dedicated server is better than that of Amazon EBS
        Traffic: 10 TiB per month included

    This is what you get from Hetzner for $56 (excluding 19% VAT) or $66 for EU residents. Please tell me, what's the reason to use Amazon? What load won't a server from Hetzner take, but Amazon Auto Scaling will? Is the maintenance of a dedicated server vs. EC2 still the same? Or is it that a hardware failure at Amazon won't ruin your EBS storage? I'm still not at the level where I need expensive hosting, but I want to know beforehand, just to be sure whether the Amazon infrastructure is better than the pure performance of Hetzner's hardware.

    Read the article

  • SQL SERVER – Signal Wait Time Introduction with Simple Example – Wait Type – Day 2 of 28

    - by pinaldave
    In this post, let's delve a bit more in depth into wait stats. The very first question: when do wait stats occur? Here is the simple answer: when SQL Server is executing any task, and for any reason it has to wait for resources to execute that task, this wait is recorded by SQL Server along with the reason for the delay. Later on we can analyze these wait stats to understand why the task was delayed, and maybe we can eliminate the wait for SQL Server. It is not always possible to remove a wait type 100%, but there are a few suggestions that can help. Before we continue learning about wait types and wait stats, we need to understand three important milestones of the query life-cycle:

    - Running - a query which is being executed on a CPU is called a running query. This query is responsible for CPU time.
    - Runnable - a query which is ready to execute and waiting for its turn to run is called a runnable query. This query is responsible for Signal Wait time. (In other words, the query is ready to run but the CPU is servicing another query.)
    - Suspended - a query which is waiting, for whatever reason (learning that reason is why we study wait stats), to be converted to runnable is a suspended query. This query is responsible for wait time. (In other words, this is the time we are trying to reduce.)

    In simple words, query execution time is the sum of the query's Executing CPU Time (Running) + Query Wait Time (Suspended) + Query Signal Wait Time (Runnable). Again, a query may pass through all of these states multiple times.

    Let us try to understand the whole thing with a simple analogy of a taxi and a passenger. Two friends, Tom and Danny, go to the mall together. When they leave the mall, they decide to take a taxi. Tom and Danny both stand in line waiting for their turn to get into the taxi. This is the Signal Wait Time, as they are ready to get into the taxi but the taxis are currently serving other customers and they have to wait for their turn. In other words, they are in a runnable state. Now, when it is their turn to get into the taxi, the taxi driver informs them he does not take credit cards and only cash is accepted. Neither Tom nor Danny has enough cash, so they both cannot get into the vehicle. Tom waits outside in the queue and Danny goes to an ATM to fetch the cash. During this time the taxi cannot wait; it has to let other passengers get in. As Tom and Danny are both outside in the queue, this is the Query Wait Time and they are in the suspended state. They cannot do anything till they get the cash. Once Danny gets the cash, they are both standing in line again, creating one more Signal Wait Time. This time, when their turn comes, they can pay the taxi driver in cash and reach their destination. The time taken for the taxi to get from the mall to the destination is running time (CPU time), and the taxi is running. I hope this analogy makes the wait stats a bit clearer.

    You can check the signal wait stats using the following query from Glenn Berry:

        -- Signal Waits for instance
        SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms)
                    AS NUMERIC(20,2)) AS [%signal (cpu) waits],
               CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms) / SUM(wait_time_ms)
                    AS NUMERIC(20,2)) AS [%resource waits]
        FROM sys.dm_os_wait_stats
        OPTION (RECOMPILE);

    High signal wait stats are not good for the system; a very high value indicates CPU pressure. In my experience, when systems are running smoothly and without any glitch, the signal wait stat is lower than 20%. Again, this number can be debated (it comes from my experience and is not documented anywhere). In other words, lower is better and higher is not good for the system. In future articles we will discuss in detail the various wait types and wait stats and their resolution. Read all the posts in the Wait Types and Queue series.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
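    One practical note if you experiment with Glenn Berry's query above (my own addition, not from the original post): sys.dm_os_wait_stats accumulates values since the last SQL Server restart, so for a fresh measurement window the counters can be cleared first:

        -- Reset the cumulative wait statistics (requires appropriate permissions)
        DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);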

    Read the article

  • SQL Developer Data Modeler: On Notes, Comments, and Comments in RDBMS

    - by thatjeffsmith
    Ah, the beautiful data model. They say a picture is worth a thousand words. And then we have our diagrams; how many words are they worth?

    [Figure: our friends from the Human Relations sample schema]

    So our models describe how the data 'works' - whether that be at a logical-business level or a technical-physical level. Developers like to say that their code is self-documenting. These would be very lazy or very bad (or both) developers. Models are the same way: you should document your models with comments and notes! I have 3 basic options:

    - Comments
    - Comments in RDBMS
    - Notes

    So what's the difference?

    Comments: You're describing the entity/table or attribute/column. This information will NOT be published in the database. It will only be available to the model, and hence to folks with access to the model. [Figure: table comments, in the design only!]

    Comments in RDBMS: You're doing the same thing as above, but your words will be stored IN the data dictionary of the database. Oracle allows you to store comments on the table and column definitions. So your awesome documentation is going to be viewable to anyone with access to the database. (RDBMS is an acronym for Relational Database Management System, of which Oracle is one of the first commercial examples.) If the DDL is produced and run against a database, these comments WILL be stored in the data dictionary.

    Notes: A place for you to add notes, maybe from a design meeting. Or maybe you're using this as a to-do or requirements list. Basically it's for anything that doesn't literally describe the object at hand - that's what the comments are for. [Figure: example notes - "I totally made these up."] Now, these are free-text fields and you can put whatever you want here. Just make sure you put stuff here that's worth reading. And it will live on… forever.
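    For the "Comments in RDBMS" option above, the underlying Oracle syntax is the standard COMMENT statement; the following is a minimal hand-written sketch (my own illustration, with names borrowed from the HR sample schema, not DDL generated by the tool):

        COMMENT ON TABLE employees
          IS 'One row per current or former employee.';

        COMMENT ON COLUMN employees.hire_date
          IS 'Date the employee started; used by the seniority report.';

        -- The comments land in the data dictionary and can be read back later:
        SELECT comments
          FROM user_tab_comments
         WHERE table_name = 'EMPLOYEES';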

    Read the article

  • A better name in social networking: quora.com

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/07/23/better-name-of-social-networking--quora.com.aspx

    After writing my recent post "Facebook allow you to abuse in Non-English words?", I am looking for a better social networking site. I found that Facebook is nothing more than a waste of time: people share links and posts, and nowadays Facebook is a marketing tool. Twitter is not very useful either when you can't say anything meaningful within its 140-character limit. Now look at quora.com; it's a very good site compared to the other two. You can read a lot of discussions there, on plenty of topics you want to follow. I am happy to use Quora.com, with no need for Facebook and Twitter. Both are dying already.

    Read the article

  • Are there any non-English programming languages? [closed]

    - by samarudge
    Possible Duplicate: Non-English-based programming languages

    Not sure if this is the right place to ask this, but I'm going to anyway. Without fail, every programming language I've ever seen, used or heard of has its keywords based around English. if, else, while, for, query, foreach, image, path, extension - the list goes on - are all based around English words. Are there any languages, or ports of languages, that base their core keywords on non-English words to lower the barrier for non-English-speaking programmers? This is mostly out of interest (English is my first language, so it's no problem for me). Are these languages popular locally (i.e. might a software development house in Germany use a programming language based on German over one based on English)?

    Read the article

  • Marshalling to a native library in C#

    - by Daniel Baulig
    I'm having trouble calling functions of a native library from within managed C# code. I am developing for the 3.5 compact framework (Windows Mobile 6.x) just in case this would make any difference. I am working with the waveIn* functions from coredll.dll (these are in winmm.dll in regular Windows I believe). This is what I came up with: // namespace winmm; class winmm [StructLayout(LayoutKind.Sequential)] public struct WAVEFORMAT { public ushort wFormatTag; public ushort nChannels; public uint nSamplesPerSec; public uint nAvgBytesPerSec; public ushort nBlockAlign; public ushort wBitsPerSample; public ushort cbSize; } [StructLayout(LayoutKind.Sequential)] public struct WAVEHDR { public IntPtr lpData; public uint dwBufferLength; public uint dwBytesRecorded; public IntPtr dwUser; public uint dwFlags; public uint dwLoops; public IntPtr lpNext; public IntPtr reserved; } public delegate void AudioRecordingDelegate(IntPtr deviceHandle, uint message, IntPtr instance, ref WAVEHDR wavehdr, IntPtr reserved2); [DllImport("coredll.dll")] public static extern int waveInAddBuffer(IntPtr hWaveIn, ref WAVEHDR lpWaveHdr, uint cWaveHdrSize); [DllImport("coredll.dll")] public static extern int waveInPrepareHeader(IntPtr hWaveIn, ref WAVEHDR lpWaveHdr, uint Size); [DllImport("coredll.dll")] public static extern int waveInStart(IntPtr hWaveIn); // some other class private WinMM.WinMM.AudioRecordingDelegate waveIn; private IntPtr handle; private uint bufferLength; private void setupBuffer() { byte[] buffer = new byte[bufferLength]; GCHandle bufferPin = GCHandle.Alloc(buffer, GCHandleType.Pinned); WinMM.WinMM.WAVEHDR hdr = new WinMM.WinMM.WAVEHDR(); hdr.lpData = bufferPin.AddrOfPinnedObject(); hdr.dwBufferLength = this.bufferLength; hdr.dwFlags = 0; int i = WinMM.WinMM.waveInPrepareHeader(this.handle, ref hdr, Convert.ToUInt32(Marshal.SizeOf(hdr))); if (i != WinMM.WinMM.MMSYSERR_NOERROR) { this.Text = "Error: waveInPrepare"; return; } i = WinMM.WinMM.waveInAddBuffer(this.handle, ref hdr, Convert.ToUInt32(Marshal.SizeOf(hdr))); if (i != WinMM.WinMM.MMSYSERR_NOERROR) { this.Text = "Error: waveInAddrBuffer"; return; } } private void setupWaveIn() { WinMM.WinMM.WAVEFORMAT format = new WinMM.WinMM.WAVEFORMAT(); format.wFormatTag = WinMM.WinMM.WAVE_FORMAT_PCM; format.nChannels = 1; format.nSamplesPerSec = 8000; format.wBitsPerSample = 8; format.nBlockAlign = Convert.ToUInt16(format.nChannels * format.wBitsPerSample); format.nAvgBytesPerSec = format.nSamplesPerSec * format.nBlockAlign; this.bufferLength = format.nAvgBytesPerSec; format.cbSize = 0; int i = WinMM.WinMM.waveInOpen(out this.handle, WinMM.WinMM.WAVE_MAPPER, ref format, Marshal.GetFunctionPointerForDelegate(waveIn), 0, WinMM.WinMM.CALLBACK_FUNCTION); if (i != WinMM.WinMM.MMSYSERR_NOERROR) { this.Text = "Error: waveInOpen"; return; } setupBuffer(); WinMM.WinMM.waveInStart(this.handle); } I read alot about marshalling the last few days, nevertheless I do not get this code working. When my callback function is called (waveIn) when the buffer is full, the hdr structure passed back in wavehdr is obviously corrupted. Here is an examlpe of how the structure looks like at that point: - wavehdr {WinMM.WinMM.WAVEHDR} WinMM.WinMM.WAVEHDR dwBufferLength 0x19904c00 uint dwBytesRecorded 0x0000fa00 uint dwFlags 0x00000003 uint dwLoops 0x1990f6a4 uint + dwUser 0x00000000 System.IntPtr + lpData 0x00000000 System.IntPtr + lpNext 0x00000000 System.IntPtr + reserved 0x7c07c9a0 System.IntPtr This obiously is not what I expected to get passed. 
I am clearly concerned about the order of the fields in the view. I do not know if Visual Studio .NET cares about actual memory order when displaying the record in the "local"-view, but they are obviously not displayed in the order I speciefied in the struct. Then theres no data pointer and the bufferLength field is far to high. Interestingly the bytesRecorded field is exactly 64000 - bufferLength and bytesRecorded I'd expect both to be 64000 though. I do not know what exactly is going wrong, maybe someone can help me out on this. I'm an absolute noob to managed code programming and marshalling so please don't be too harsh to me for all the stupid things I've propably done. Oh here's the C code definition for WAVEHDR which I found here, I believe I might have done something wrong in the C# struct definition: /* wave data block header */ typedef struct wavehdr_tag { LPSTR lpData; /* pointer to locked data buffer */ DWORD dwBufferLength; /* length of data buffer */ DWORD dwBytesRecorded; /* used for input only */ DWORD_PTR dwUser; /* for client's use */ DWORD dwFlags; /* assorted flags (see defines) */ DWORD dwLoops; /* loop control counter */ struct wavehdr_tag FAR *lpNext; /* reserved for driver */ DWORD_PTR reserved; /* reserved for driver */ } WAVEHDR, *PWAVEHDR, NEAR *NPWAVEHDR, FAR *LPWAVEHDR; If you are used to work with all those low level tools like pointer-arithmetic, casts, etc starting writing managed code is a pain in the ass. It's like trying to learn how to swim with your hands tied on your back. Some things I tried (to no effect): .NET compact framework does not seem to support the Pack = 2^x directive in [StructLayout]. I tried [StructLayout(LayoutKind.Explicit)] and used 4 bytes and 8 bytes alignment. 4 bytes alignmentgave me the same result as the above code and 8 bytes alignment only made things worse - but that's what I expected. Interestingly if I move the code from setupBuffer into the setupWaveIn and do not declare the GCHandle in the context of the class but in a local context of setupWaveIn the struct returned by the callback function does not seem to be corrupted. I am not sure however why this is the case and how I can use this knowledge to fix my code. I'd really appreciate any good links on marshalling, calling unmanaged code from C#, etc. Then I'd be very happy if someone could point out my mistakes. What am I doing wrong? Why do I not get what I'd expect.

    Read the article

  • How long should my HTML page title really be?

    - by RandomBen
    How long should the text within my <title></title> tags really be? I know Google cuts it off at some point, but when? When I use IIS7's SEO Toolkit 1.0, I get an error stating my title should be under 65 characters. I have a book by Bruce Clay that states I should use 62-70 characters and roughly 9 +/- 3 words. I have also used SenSEO's Firefox add-on, and it states I should use a max of 65 characters or roughly 15 words. What is the max really? I have 2 sources saying 65 and 1 saying 72, but Bruce Clay is generally held in high regard.

    Read the article

  • Bewildering SegFault involving STL sort algorithm.

    - by just_wes
    Hello everybody, I am completely perplexed at a seg fault that I seem to be creating. I have: vector<unsigned int> words; and global variable string input; I define my custom compare function: bool wordncompare(unsigned int f, unsigned int s) { int n = k; while (((f < input.size()) && (s < input.size())) && (input[f] == input[s])) { if ((input[f] == ' ') && (--n == 0)) { return false; } f++; s++; } return true; } When I run the code: sort(words.begin(), words.end()); The program exits smoothly. However, when I run the code: sort(words.begin(), words.end(), wordncompare); I generate a SegFault deep within the STL. The GDB back-trace code looks like this: #0 0x00007ffff7b79893 in std::string::size() const () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/libstdc++.so.6 #1 0x0000000000400f3f in wordncompare (f=90, s=0) at text_gen2.cpp:40 #2 0x000000000040188d in std::__unguarded_linear_insert<__gnu_cxx::__normal_iterator<unsigned int*, std::vector<unsigned int, std::allocator<unsigned int> > >, unsigned int, bool (*)(unsigned int, unsigned int)> (__last=..., __val=90, __comp=0x400edc <wordncompare(unsigned int, unsigned int)>) at /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/bits/stl_algo.h:1735 #3 0x00000000004018df in std::__unguarded_insertion_sort<__gnu_cxx::__normal_iterator<unsigned int*, std::vector<unsigned int, std::allocator<unsigned int> > >, bool (*)(unsigned int, unsigned int)> (__first=..., __last=..., __comp=0x400edc <wordncompare(unsigned int, unsigned int)>) at /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/bits/stl_algo.h:1812 #4 0x0000000000402562 in std::__final_insertion_sort<__gnu_cxx::__normal_iterator<unsigned int*, std::vector<unsigned int, std::allocator<unsigned int> > >, bool (*)(unsigned int, unsigned int)> (__first=..., __last=..., __comp=0x400edc <wordncompare(unsigned int, unsigned int)>) at /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/bits/stl_algo.h:1845 #5 0x0000000000402c20 in std::sort<__gnu_cxx::__normal_iterator<unsigned int*, std::vector<unsigned int, std::allocator<unsigned int> > >, bool (*)(unsigned int, unsigned int)> (__first=..., __last=..., __comp=0x400edc <wordncompare(unsigned int, unsigned int)>) at /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/bits/stl_algo.h:4822 #6 0x00000000004012d2 in main (argc=1, args=0x7fffffffe0b8) at text_gen2.cpp:70 I have similar code in another program, but in that program I am using a vector instead of vector. For the life of me I can't figure out what I'm doing wrong. Thanks!

    Read the article

  • Xcode Video From UITableView [migrated]

    - by Luis Felipe Beltran
    I have created different UITableViews. Each of these is a dynamic table that contains NSArrays of words. Right now, when a cell is tapped it takes the user to another view (a simple view controller) that has a label and a UIImageView that displays a picture. My problem is that I also want to show video. Some of these words need to show pictures, and others need to show video. I have shown video before, but I did it using a button. I need help! I would appreciate it if you guys could help here. Again, I'm trying to show a video from a UITableView after the user taps a cell. (I have many videos, not just one, so doing it dynamically would be better if possible.)

    Read the article

  • Beyond Syntax Highlighting - What other code representations are possible today?

    - by Mathieu Hélie
    Despite GUI applications having been around for 30-ish years, software is still written as lines of text instructions, for various valid reasons. But we've also found that manipulating these text instructions is mind-blowingly difficult unless we apply a layer of coloring to different words to represent their syntax, thus allowing us to quickly parse through these text files without having to read each word in full. Yet besides the Sublime Text minimap feature, I've yet to see any innovation in the visual representation of code since colors came around on CRT monitors. I can think of one obviously essential representation that modern graphics technology allows: visual hierarchies for nested structures. If we make nested text slightly smaller than its outer context, and zoom in on it when the cursor is focused on the line, then we will be able to browse huge files of nested statements very quickly. This becomes even more essential as languages based on closures and anonymous functions become filled with deeply nested statements. Has anyone attempted to implement this in a text editor? Do you know of any other useful improvements in representing code text graphically?

    Read the article

  • Does your company name in an article title damage search engine relevance?

    - by user492681
    I've been wondering about this for a while but have never come across a solid answer. Many websites include their name in all the title tags of their articles. This is often apparent in WordPress blogs, etc., e.g.:

        Tsunami hits Japan and leaves thousands homeless | My Website Name

    The issue I have is that search engines strip the stop words out of this sentence to leave the words that they compare to the body text. So if I want my article to rank well and be relevant - in this case about the terrible tsunami that has recently struck Japan - what is to stop the "My Website Name" section of the title de-valuing the relevance of the article? Am I over-worrying, or should I take this into consideration? Thanks in advance for any advice.

    Read the article
