Search Results

Search found 93708 results on 3749 pages for 'state server'.


  • How to find a user's (or my own) access rights on Windows Server 2008?

    - by Faiz
    I was given access to a Windows Server 2008 box and I need to check what permissions I have on that box (and, if possible, in the entire domain). I don't have access to the domain controller and I don't want to write LDAP queries - just some GUI option or some command-line stuff. Is there any way? PS: I am not into network administration; I am a BI developer. Pardon me if I asked a stupid question.
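
    A hedged starting point, using only stock Windows commands that need no domain-admin rights; the folder path below is a placeholder:

        rem Groups and privileges held by your current logon token
        whoami /groups
        whoami /priv

        rem Resultant group policy applied to you on this box
        gpresult /r

        rem NTFS permissions on a specific folder (example path)
        icacls "D:\SomeFolder"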


  • Exchange Server Address Resolving to Random String

    - by William
    I am running Exchange 2013 on Windows Server 2012 R2. When I add my Exchange account to Outlook, it seems to work perfectly (sending/receiving email, syncing everything), but when I open the account settings it has the following set as the Server: [email protected] I would have expected this to be mail.domain.com, since that is the DNS A record pointing to the IP of my server. Where is it getting this server name?


  • Remove some junk characters from server console log

    - by Jayakrishnan T
    Please look into the picture: here I am trying to open (with the vi editor) my server console log file (around 100 MB), and it takes more than two minutes to open because of the many special characters. After deleting the first line (by typing "dd"), I can easily view the file, and its size is also greatly reduced. My server OS is RHEL 5.4 and JBoss is running on it. Please help me avoid such junk characters in my server console log files; that would also save valuable disk space on the server.
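
    If the junk consists of ANSI terminal escape sequences (typical of JBoss console output with colour codes), a hedged one-liner to strip them after the fact; file names are examples and \x escapes assume GNU sed:

        # strip ANSI colour/control escape sequences from the log
        sed -r 's/\x1B\[[0-9;]*[A-Za-z]//g' console.log > console.clean.log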


  • Nginx-Virtualmin : virtual server template

    - by Zulkhaery Basrul
    I want to edit the nginx server template: I want every new virtual server created to use some specific nginx server configs and location settings. I want to add these lines each time Virtualmin creates a server:

        if (!-f $request_filename) {
            rewrite ^/(.*)$ /index.php?q=$1 last;
            break;
        }

    Where can I find it? Or what is the best way to add these lines automatically?
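
    If memory serves, Virtualmin keeps per-virtual-server directives under System Settings -> Server Templates (edit the nginx section of the template in use), though the exact location depends on the nginx plugin version. Separately, the same front-controller rewrite is usually expressed with try_files these days; a hedged equivalent of the block above:

        location / {
            # equivalent of the if/rewrite block, avoiding "if" in location context
            try_files $uri $uri/ /index.php?q=$uri&$args;
        }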


  • Database server size

    - by David
    I received a recommendation from my hosting provider noting that a general rule in production server management is to ensure that the memory on your database server is larger than the total size of your database. Our database is 1800 MB (and growing) and we're being told to purchase a 2 GB server. This didn't sit right with me, but I'm not an expert in production server management. We're running a MySQL db with mostly InnoDB tables and some MyISAM ones. Thanks!
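
    A hedged way to check the numbers yourself before buying: compare total data-plus-index size per engine against the configured InnoDB buffer pool (MyISAM indexes are cached separately via key_buffer_size):

        -- total data + index size per storage engine, in MB
        SELECT engine, SUM(data_length + index_length) / 1024 / 1024 AS total_mb
        FROM information_schema.tables
        GROUP BY engine;

        -- how much memory InnoDB is currently allowed to use for caching
        SHOW VARIABLES LIKE 'innodb_buffer_pool_size';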


  • Is there a way to save "work sessions" in Linux?

    - by mike
    I regularly work on different projects, using different software. For project 1 I need to open, for example: Filezilla, Gedit and Nautilus (set to a specific folder). For project 2 I need to open, for example: Gimp and Nautilus (set to another specific folder), etc. What I would like is a kind of session manager where I could create entries "project 1", "project 2", etc., and with one click or command open all the software I need. Perhaps there's an easy way to write a batch file for this? Any idea is welcome :)
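
    Absent a dedicated session manager, one plain shell script per project does the job; a minimal sketch (paths are examples):

        #!/bin/bash
        # project1.sh - launch the tools for project 1
        filezilla &
        gedit &
        nautilus ~/projects/project1 &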


  • ESXi VM NTP Server

    - by jstorch
    Hello all. We need to set up an internal NTP server. Because this server will pretty much ONLY be a time server, I was thinking of just creating a VM for it. However, Googling around, it appears there might be serious clock-drift issues with VM servers. So, is running our NTP server in a VM a completely dumb idea? Thanks, John
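
    If you do run it in a VM, the usual guest-side mitigations (hedged - check your hypervisor vendor's current guidance) are to disable the hypervisor's own guest time sync and tell ntpd not to bail out on the large offsets a resumed VM can see; pool server names are examples:

        # /etc/ntp.conf (guest) - sketch
        tinker panic 0                 # don't abort on large clock jumps (VM resume)
        server 0.pool.ntp.org iburst
        server 1.pool.ntp.org iburst
        driftfile /var/lib/ntp/drift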


  • Can't connect to the svn server

    - by nax
    I have a VPS with an svn server installed on it. When I try to connect to it I get a connection error like:

        Unable to connect to a repository at URL 'svn://nax.tolstoy.eu'

    I know it is not my home connection, because I have another svn server working. Anyway, the server is listening on the port:

        tcp 0 0 :::3690 :::* LISTEN 6023/svnserve

    And I can't access it via telnet either. (The server doesn't have iptables rules, as far as I can see.) Any idea?


  • How to securely control access to a backend key server?

    - by andy
    I need to securely encrypt data in my database so that if the database is dumped, hackers are unable to decrypt the data. I'm planning on creating a simple key server on a different machine, and allowing the DB server access to it (restricted by IP address on the key server to permit the DB server). The key server would contain the key required to encrypt/decrypt data. However, if a hacker were able to get a shell on the DB server, they could request the key from the key server and therefore decrypt the data in the database. How could I prevent this (assuming all firewalls are in place, DB is not connected directly to the internet, etc)? i.e. is there some method I could use that could secure a request from the DB server to the key server so that even if a hacker had a shell on the DB server they'd be unable to make those same requests? Signed requests from the DB server could make issuing these requests less trivial - I suppose that'd help increase the amount of time it'd take to compromise the key server, something a hacker probably wouldn't have much of. As far as I can see, if someone can get a shell on the DB server everything's lost anyway. This could be mitigated by using one key per data item in the DB so at least there's not a single "master" key, but multiple keys that the hacker would need to access. What would be a secure method of ensuring requests from the DB server to the key server were authentic and could be trusted?
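
    For illustration only - a minimal sketch of the signed-request idea discussed above, per-item keys fetched via an HMAC-signed, timestamped request; every name here is hypothetical and, as the question itself notes, none of it survives a full compromise of the DB host:

        import hashlib
        import hmac
        import time

        SHARED_SECRET = b"provisioned-out-of-band"  # hypothetical, never stored in the DB

        def sign_key_request(item_id: str) -> dict:
            """Build a timestamped, HMAC-signed request for one item's key."""
            ts = str(int(time.time()))
            mac = hmac.new(SHARED_SECRET, f"{item_id}|{ts}".encode(),
                           hashlib.sha256).hexdigest()
            return {"item_id": item_id, "ts": ts, "mac": mac}

        # The key server would verify the MAC, enforce a short timestamp window
        # and a per-client rate limit, and serve one key per data item - raising
        # the attacker's cost per record rather than guarding a single master key.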


  • Message from Nagios Server

    - by user12213
    A Nagios server is monitoring my server, which hosts Windows SharePoint. I am getting the following 2 alerts in my inbox from the Nagios server:

    1. Service: C:\ Drive Space - State: CRITICAL - Additional Info: CRITICAL - Socket timeout after 10 seconds
    2. Service: CPU Load - State: CRITICAL - Additional Info: CRITICAL - Socket timeout after 10 seconds

    What do I infer from these?
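
    A hedged reading: "Socket timeout" means the check itself never got an answer from the monitoring agent on the SharePoint box, not that the disk is full or the CPU pegged. Assuming the standard NSClient++ agent on its default port 12489, one way to test by hand from the Nagios host (plugin path and thresholds are examples):

        /usr/local/nagios/libexec/check_nt -H <sharepoint-host> -p 12489 -v CPULOAD -l 5,80,90 -t 30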


  • determining server config

    - by tristan625
    I have recently set up a Linux server for my college, which also hosts the college website and other downloads. This server is linked to the outside world via a leased-line connection of 1 Mbps capacity. My problem is that users complain that downloads from the server are very slow. How can I improve this? What things should I be looking at? My current server config: Dell Vostro 260s with 4 GB of RAM and an i3 processor.
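
    One number worth checking first: a 1 Mbps line delivers at most 1,000,000 / 8 = 125 KB/s in aggregate, so ten simultaneous downloaders would each average roughly 12.5 KB/s no matter how fast the Vostro is. The uplink, not the server hardware, is the likely bottleneck.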


  • Auto-blocking attacking IP addresses

    - by dong
    This is to share my PowerShell code online. I originally asked this question on the MSDN forum (or TechNet?) here: http://social.technet.microsoft.com/Forums/en-US/winserversecurity/thread/f950686e-e3f8-4cf2-b8ec-2685c1ed7a77

    In short, this script finds attacking IP addresses and adds them to a firewall block rule. So I suppose:

    1. You are running a Windows Server 2008 box facing the Internet.
    2. You have some ports open for services, e.g. TCP 21 for FTP and TCP 3389 for Remote Desktop. You can see in my code I'm only dealing with these two, since that's what I opened. You can add further port numbers if you like, but the way to process them might differ from these two.
    3. I strongly suggest you use a STRONG password and follow all security best practices. This ps1 code is NOT for adding security to your server, but for reducing the nuisance of brute-force attacks and making the sysadmin's life easier: your FTP log won't hold megabytes of nonsense, and your Windows Security log will not roll over so fast that it can only tell you what happened in the last month.
    4. You are comfortable setting up Windows Firewall rules. In my code the rule is named "MY BLACKLIST"; you need to set up a similar one and set it to BLOCK everything.
    5. My rule is dangerous because it carries the risk of blocking me out as well. I have a backup plan, i.e. the DELL DRAC5, so that if that happens I can still remote-console to my server and reset the firewall.
    6. By no means is the code perfect: the coding style, the use of PowerShell skills, the hard-coded parts can all be improved. It's just good enough for me already - it has been running on my server for more than 7 MONTHS.
    7. The current code still has a problem I haven't solved yet; more on this after the code. :)

        #Dong Xie, March 2012
        #my simple code to monitor attack and deal with it
        #Windows Server 2008 Logon Type
        #8: NetworkCleartext, i.e. FTP
        #10: RemoteInteractive, i.e. RDP

        $tick = 0;
        "Start to run at: " + (get-date);

        $regex1 = [regex] "192\.168\.100\.(?:101|102):3389\s+(\d+\.\d+\.\d+\.\d+)";
        $regex2 = [regex] "Source Network Address:\t(\d+\.\d+\.\d+\.\d+)";

        while($True) {
          $blacklist = @();

          "Running... (tick:" + $tick + ")"; $tick+=1;

          #Port 3389
          $a = @()
          netstat -no | Select-String ":3389" | ? { $m = $regex1.Match($_); `
            $ip = $m.Groups[1].Value; if ($m.Success -and $ip -ne "10.0.0.1") {$a = $a + $ip;} }
          if ($a.count -gt 0) {
            $ips = get-eventlog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+10"} | foreach { `
              $m = $regex2.Match($_.Message); $ip = $m.Groups[1].Value; $ip; } | Sort-Object | Tee-Object -Variable list | Get-Unique
            foreach ($ip in $a) {
              if ($ips -contains $ip) {
                if (-not ($blacklist -contains $ip)) {
                  $attack_count = ($list | Select-String $ip -SimpleMatch | Measure-Object).count;
                  "Found attacking IP on 3389: " + $ip + ", with count: " + $attack_count;
                  if ($attack_count -ge 20) {$blacklist = $blacklist + $ip;}
                }
              }
            }
          }

          #FTP
          $now = (Get-Date).AddMinutes(-5); #check only last 5 mins.
          #Get-EventLog has built-in switches for EventID, Message, Time, etc. but using any of them is VERY slow.
          $count = (Get-EventLog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+8" -and `
            $_.TimeGenerated.CompareTo($now) -gt 0} | Measure-Object).count;
          if ($count -gt 50) #threshold
          {
            $ips = @();
            $ips1 = dir "C:\inetpub\logs\LogFiles\FPTSVC2" | Sort-Object -Property LastWriteTime -Descending `
              | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `
              | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `
              | where {$_.Count -ge 10} | select -ExpandProperty Name;
            $ips2 = dir "C:\inetpub\logs\LogFiles\FTPSVC3" | Sort-Object -Property LastWriteTime -Descending `
              | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `
              | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `
              | where {$_.Count -ge 10} | select -ExpandProperty Name;
            $ips += $ips1; $ips += $ips2; $ips = $ips | where {$_ -ne "10.0.0.1"} | Sort-Object | Get-Unique;
            foreach ($ip in $ips) {
              if (-not ($blacklist -contains $ip)) {
                "Found attacking IP on FTP: " + $ip;
                $blacklist = $blacklist + $ip;
              }
            }
          }

          #Firewall change
          <#
          $current = (netsh advfirewall firewall show rule name="MY BLACKLIST" | where {$_ -match "RemoteIP"}).replace("RemoteIP:", "").replace(" ","").replace("/255.255.255.255","");
          #inside $current there is no \r or \n to remove.
          foreach ($ip in $blacklist) {
            if (-not ($current -match $ip) -and -not ($ip -like "10.0.0.*")) {
              "Adding this IP into firewall blocklist: " + $ip;
              $c = 'netsh advfirewall firewall set rule name="MY BLACKLIST" new RemoteIP="{0},{1}"' -f $ip, $current;
              Invoke-Expression $c;
            }
          }
          #>

          foreach ($ip in $blacklist) {
            $fw = New-Object -comObject HNetCfg.FwPolicy2; # http://blogs.technet.com/b/jamesone/archive/2009/02/18/how-to-manage-the-windows-firewall-settings-with-powershell.aspx
            $myrule = $fw.Rules | where {$_.Name -eq "MY BLACKLIST"} | select -First 1; # Potential bug here?
            if (-not ($myrule.RemoteAddresses -match $ip) -and -not ($ip -like "10.0.0.*")) {
              "Adding this IP into firewall blocklist: " + $ip;
              $myrule.RemoteAddresses += ("," + $ip);
            }
          }

          Wait-Event -Timeout 30 #pause 30 secs

        } # end of top while loop.

    Further points:

    1. I suppose the server is listening on port 3389 on server IPs 192.168.100.101 and 192.168.100.102; you need to replace these with your real IPs.
    2. I suppose you Remote Desktop to this server from a workstation with IP 10.0.0.1. Please replace that as well.
    3. The threshold for a 3389 attack is 20 - you don't want to block yourself just because you typed your password wrong 3 times. You can change this threshold with your own reasoning.
    4. The FTP check looks at the log for attacks only in the last 5 minutes; you can change that as well.
    5. I suppose the server is serving FTP on both IP addresses, with log paths C:\inetpub\logs\LogFiles\FPTSVC2 and C:\inetpub\logs\LogFiles\FTPSVC3. Change accordingly.
    6. The FTP check only reads the last 200 lines of the log, and the threshold is 10; change as you wish.
    7. The code runs in a loop; you can set the loop interval on the last line.

    To run this code, copy and paste it into your editor, finish all the editing, get it to your server, open a CMD window and type: powershell.exe -file your_powershell_file_name.ps1. It will start running, and you can Ctrl-C to break it. While it runs it prints a tick line each pass; when it detects an attack you see the "Adding this IP into firewall blocklist" message (the original post includes screenshots of both).

    Regarding the design of the code:

    1. There are many ways to detect an attack, but adding an IP to a block rule is no small thing; you need to think hard before doing it. Reasons include: you don't want to block yourself, and you don't want to block your customers/users, i.e. the good guys.
    2. Thus, for each service/port, I double-check. For 3389, the IP first needs to show up in netstat.exe, then in the event log; for FTP, first the event log, then the FTP log files.
    3. In three places I make sure I'm not adding myself to the block rule: -ne with a single IP, -like with a subnet.

    Now the final bit:

    1. The code will stop working after a while (depending on how heavily you are attacked: weeks, months, or days?!). It will throw a red error message in CMD. Don't panic - it does no harm, but it also no longer blocks new attacks. The reason (not confirmed with MS people): the COM object that manages the firewall will only accept a list of IP addresses around 32 KB long, I think; once it reaches the limit, you get the error message.
    2. This is in fact my second solution, using the COM object. The first solution is still in the comment block for your reference; it used netsh, and fails because, being run from CMD, you can only pass it a list of IPs up to 8 KB.
    3. I haven't worked out the workaround yet. Some ideas: wrap the RemoteAddresses-setting line with error checking, and once it reaches the limit, use the newly detected IPs as the whole list instead of appending to it. This resets your block rule to ground zero and loses the previous bad IPs. That does less harm than it sounds, because if, after a certain period, any of these bad IPs still haven't repented and continue attacking you, they only get 30 seconds, or 20 guesses at your password, before you block them again. And there is the benefit that a bad IP may return to good hands, so you are not blocking a potential customer, or your CEO's home PC, just because it was once a zombie. Thus the ZEN of blocking: never block any IP for too long.
    4. But if you insist on blocking the ugly forever, my other ideas include: call MS support and ask how to set an arbitrary-length list of IP addresses in a rule - at least from my experience on the forum, they don't know and they don't care, because they think dynamic blocking should be done by some expensive hardware. Or, from a programming perspective, you can create a new rule once the old one is full; then you'll have MY BLACKLIST1, MY BLACKLIST2, MY BLACKLIST3, ... etc. Once in a while you can compile them together and start a business selling your blacklist on the market!

    Enjoy the code!

    p.s. (PowerShell is REALLY REALLY GREAT!)
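
    For what it's worth, a minimal sketch of the reset workaround described in point 3, reusing the script's own $myrule and $ip variables; the 30000-character cut-off is an assumed safety margin under the reported ~32 KB limit, not a confirmed figure:

        # Hypothetical guard for the RemoteAddresses length limit (see point 3):
        # once the list is near the assumed ceiling, start over with the new IP.
        $maxLen = 30000   # assumption, just below the reported ~32 KB limit
        if (($myrule.RemoteAddresses.Length + $ip.Length + 1) -gt $maxLen) {
          "RemoteAddresses near the limit; resetting blocklist to ground zero.";
          $myrule.RemoteAddresses = $ip;     # lose the old bad IPs, as discussed
        } else {
          $myrule.RemoteAddresses += ("," + $ip);
        }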


  • Bash script throws "syntax error near unexpected token `}'" when run

    - by Tab00
    I am trying to write a script to monitor some battery statuses on a laptop running as a server. To accomplish this, I have already started to write this code:

        #! /bin/bash
        # A script to monitor battery statuses and send out email notifications

        #take care of looping the script
        for (( ; ; ))
        do
          #First, we check to see if the battery is present...
          if(cat /proc/acpi/battery/BAT0/state | grep 'present: *' == present: yes)
          {
            #Code to execute if battery IS present
            #No script needed for our application
            #you may add scripts to run
          }
          else
          {
            #if the battery IS NOT present, run this code
            sendemail -f [email protected] -t 214*******@txt.att.net -u NTA TV Alert -m "The battery from the computer is either missing, or removed. Please check ASAP." -s smtp.gmail.com -o tls=yes -xu [email protected] -xp ***********
          }
          #Second, we check into the current state of the battery
          if(cat /proc/acpi/battery/BAT0/state | grep 'charging state: *' == 'charging state: charging')
          {
            #Code to execute if battery is charging
            sendemail -f [email protected] -t 214*******@txt.att.net -u NTA TV Alert -m "The battery from the computer is charging. This MIGHT mean that something just happened" -s smtp.gmail.com -o tls=yes -xu [email protected] -xp ***********
          }
          #If it isn't charging, is it discharging?
          else if(cat /proc/acpi/battery/BAT0/state | grep 'charging state: *' == 'charging state: discharging')
          {
            #Code to run if the battery is discharging
            sendemail -f [email protected] -t 214*******@txt.att.net -u NTA TV Alert -m "The battery from the computer is discharging. This shouldn't be happening. Please check ASAP." -s smtp.gmail.com -o tls=yes -xu [email protected] -xp ***********
          }
          #If it isn't charging or discharging, is it charged?
          else if(cat /proc/acpi/battery/BAT0/state | grep 'charging state: *' == 'charging state: charged')
          {
            #Code to run if battery is charged
          }
        done

    I'm pretty sure that most of the other stuff works correctly, but I haven't been able to try it because it will not run. Whenever I try to run the script, this is the error that I get:

        ./BatMon.sh: line 15: syntax error near unexpected token `}'
        ./BatMon.sh: ` }'

    Is the error something super simple like a forgotten semicolon? Thanks -Tab00
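
    For reference, a hedged sketch of the same logic in valid bash: shell `if` blocks use then/fi rather than C-style braces, and the string comparisons belong inside a test or a `grep -q`; the asker's `if(...) { }` blocks are exactly what triggers the reported syntax error. Email commands are elided and the poll interval is an assumption:

        #!/bin/bash
        # Sketch: valid bash structure for the battery monitor.
        while true; do
          state_file=/proc/acpi/battery/BAT0/state
          if ! grep -q 'present: *yes' "$state_file"; then
            echo "battery missing"        # sendemail ... here
          fi
          if grep -q 'charging state: *charging' "$state_file"; then
            echo "battery charging"       # sendemail ... here
          elif grep -q 'charging state: *discharging' "$state_file"; then
            echo "battery discharging"    # sendemail ... here
          elif grep -q 'charging state: *charged' "$state_file"; then
            :                             # battery charged; nothing to do
          fi
          sleep 300                       # poll every 5 minutes (assumed interval)
        done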


  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this DMV, which can help a DBA keep databases performing well and systems online as the users need them.

    When the dynamic management views relating to index statistics became available in SQL Server 2005, there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns, is irrelevant: this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this DMV can be executed as:

        SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL) AS ddips

    The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scanning every database; simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs - or, more accurately, in order to choose when to have the NULLs - you need to specify a value for the last parameter. It takes one of 4 values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the DMV with each of these values you can see some interesting details in the times taken to complete each step.

        DECLARE @Start DATETIME
        DECLARE @First DATETIME
        DECLARE @Second DATETIME
        DECLARE @Third DATETIME
        DECLARE @Finish DATETIME
        SET @Start = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
        SET @First = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
        SET @Second = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        SET @Third = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
        SET @Finish = GETDATE()
        SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
             , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
             , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
             , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you 4 result sets: DEFAULT will have 12 columns full of data and then NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and NULLs in the remainder; DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is applied when you query the view using a NULL parameter) is the same as using LIMITED.

    Viewing the final result set reveals details worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter, and the duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first.

    Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database ID, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there, or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
    These are pretty self-explanatory, and we can wrap them in some code to make things a little easier to read:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    which gives us:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
        INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
               AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the DMV. If you specify an object_id and an index_id in these, you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: despite me showing that functions can be placed directly in the parameters for this DMV, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

        DECLARE @db_id SMALLINT;
        DECLARE @object_id INT;
        SET @db_id = DB_ID(N'AdventureWorks_2008');
        SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
        IF @db_id IS NULL
        BEGIN
            PRINT N'Invalid database';
        END
        ELSE IF @object_id IS NULL
        BEGIN
            PRINT N'Invalid object';
        END
        ELSE
        BEGIN
            SELECT * FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
        END;
        GO

    In cases where the results of querying this DMV don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area), it will be noticed when the results are not consistent with the expected results, and for this blog that is the method I have used.

    So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the DMV are all about. We'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level, as this is a quick look at the DMV and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are:

    - avg_fragmentation_in_percent: the amount that the index is logically fragmented. It will show NULL when the DMV is queried in SAMPLED mode.
    - fragment_count: the number of pieces that the index is broken into. It will show NULL when the DMV is queried in SAMPLED mode.
    - avg_fragment_size_in_pages: the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the DMV is queried in SAMPLED mode.
    - page_count: total number of index or data pages in use.

    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages: we see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
    This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72 KB, then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data and the speed of access would not be too poor. If the number of fragments increases, then obviously the amount of data in each piece decreases, which means the amount of work the disks must do to retrieve the data increases, and performance starts to fall.

    This information is useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account multiple files, the type of allocation unit, and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA investigating performance issues. It is possible that some tables have indexes that suffer rapid increases in fragmentation as part of normal daily business and need regular defragmentation work to stay in good order; in other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this DMV, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren, which I wholly recommend you review for much further detail on how to care for your SQL Server indexes.

    Right, let's get back on track. Querying the DMV with the fifth parameter value of 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. There is a correlation between these columns: column 1 (page_count) is the number of 8 KB pages used by the index, column 2 is how full each page is (how much of the 8 KB has actual data written on it), column 3 is how many records are recorded in the index, and column 4 is the average size of each record. This approximates to: ((Col1*8) * 1024 * (Col2/100)) / Col3 = Col4*.

    avg_page_space_used_in_percent is an important column to review, as it indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the FILL_FACTOR parameter given when creating an index. avg_record_size_in_bytes is important, as you can use it to get an idea of how many records are in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: details of the smallest and largest records in the index.
    These are purely offered as a guide to help the DBA better understand the storage practices taking place.

    So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider:

    - the FILL_FACTOR of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly;
    - the columns used in the index, which should be analysed so that new records need not be inserted into the middle of the index but can always be added to the end.

    (A hedged sketch of such a threshold-based check appears at the end of this entry.)

    * - it's approximate, as there are many factors associated with things like the type of data and other database settings that affect this slightly.

    Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford - a free ebook or paperback from Simple Talk.

    Disclaimer - Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
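
    A hedged sketch of the threshold-based check described above, using the commonly cited Books Online guidance (roughly: over 30% fragmentation, REBUILD; 5-30%, REORGANIZE); the page_count filter is an assumed noise cut-off, not part of that guidance:

        SELECT OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , [ddips].[avg_fragmentation_in_percent]
             , CASE WHEN [ddips].[avg_fragmentation_in_percent] > 30 THEN 'REBUILD'
                    WHEN [ddips].[avg_fragmentation_in_percent] > 5  THEN 'REORGANIZE'
                    ELSE 'NONE'
               END AS [SuggestedAction]
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        INNER JOIN [sys].[indexes] AS i
                ON [ddips].[object_id] = [i].[object_id]
               AND [ddips].[index_id] = [i].[index_id]
        WHERE [ddips].[page_count] > 100   -- assumed: ignore tiny indexes
          AND [i].[index_id] > 0;          -- skip heaps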


  • Android - Resuming application state - SL4A

    - by toyotajon93
    Please don't harpoon me for a noob-ish question. I am working on an Android application using SL4A; when my application starts, it runs in the background while the script is being executed. I'm not sure where to start, but each time I click my icon it re-starts my application. I have tried using different launch modes with nothing different happening. I'm thinking it has to do with the onCreate code and the setting of the notification. I need help saving my application state and then resuming on either a re-click of the icon or a click from the notification bar. I've tried everything and had to turn here for help. I am not a pro at Android programming by any means. Thanks guys, be gentle ;)

        public void onCreate() {
          super.onCreate();
          mInterpreterConfiguration = ((BaseApplication) getApplication())
              .getInterpreterConfiguration();
        }

        @Override
        public void onStart(Intent intent, final int startId) {
          super.onStart(intent, startId);
          String fileName = Script.getFileName(this);
          Interpreter interpreter = mInterpreterConfiguration
              .getInterpreterForScript(fileName);
          if (interpreter == null || !interpreter.isInstalled()) {
            mLatch.countDown();
            if (FeaturedInterpreters.isSupported(fileName)) {
              Intent i = new Intent(this, DialogActivity.class);
              i.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
              i.putExtra(Constants.EXTRA_SCRIPT_PATH, fileName);
              startActivity(i);
            } else {
              Log.e(this, "Cannot find an interpreter for script " + fileName);
            }
            stopSelf(startId);
            return;
          }

          // Copies script to internal memory.
          fileName = InterpreterUtils.getInterpreterRoot(this).getAbsolutePath()
              + "/" + fileName;
          File script = new File(fileName);
          // TODO(raaar): Check size here!
          if (!script.exists()) {
            script = FileUtils.copyFromStream(fileName, getResources()
                .openRawResource(Script.ID));
          }
          copyResourcesToLocal(); // Copy all resources
          if (Script.getFileExtension(this).equals(HtmlInterpreter.HTML_EXTENSION)) {
            HtmlActivityTask htmlTask = ScriptLauncher.launchHtmlScript(script,
                this, intent, mInterpreterConfiguration);
            mFacadeManager = htmlTask.getRpcReceiverManager();
            mLatch.countDown();
            stopSelf(startId);
          } else {
            mProxy = new AndroidProxy(this, null, true);
            mProxy.startLocal();
            mLatch.countDown();
            ScriptLauncher.launchScript(script, mInterpreterConfiguration, mProxy,
                new Runnable() {
                  @Override
                  public void run() {
                    mProxy.shutdown();
                    stopSelf(startId);
                  }
                });
          }
        }

        RpcReceiverManager getRpcReceiverManager() throws InterruptedException {
          mLatch.await();
          if (mFacadeManager == null) { // Facade manager may not be available on startup.
            mFacadeManager = mProxy.getRpcReceiverManagerFactory()
                .getRpcReceiverManagers().get(0);
          }
          return mFacadeManager;
        }

        @Override
        protected Notification createNotification() {
          Notification notification = new Notification(R.drawable.script_logo_48,
              this.getString(R.string.loading), System.currentTimeMillis());
          // This contentIntent is a noop.
          PendingIntent contentIntent = PendingIntent.getService(this, 0,
              new Intent(), 0);
          notification.setLatestEventInfo(this, this.getString(R.string.app_name),
              this.getString(R.string.loading), contentIntent);
          notification.flags = Notification.FLAG_ONGOING_EVENT;
          return notification;
        }


  • Visual State Manager in WPF not working for me

    - by Román
    Hi, in a WPF project I have this XAML code:

        <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            mc:Ignorable="d"
            xmlns:ic="clr-namespace:Microsoft.Expression.Interactivity.Core;assembly=Microsoft.Expression.Interactions"
            x:Class="WpfApplication1.MainWindow"
            xmlns:vsm="clr-namespace:System.Windows;assembly=WPFToolkit"
            x:Name="Window" Title="MainWindow" Width="640" Height="480">
          <vsm:VisualStateManager.VisualStateGroups>
            <vsm:VisualStateGroup x:Name="VisualStateGroup">
              <vsm:VisualState x:Name="Loading">
                <Storyboard>
                  <ObjectAnimationUsingKeyFrames BeginTime="00:00:00" Duration="00:00:00.0010000" Storyboard.TargetName="control" Storyboard.TargetProperty="(UIElement.Visibility)">
                    <DiscreteObjectKeyFrame KeyTime="00:00:00" Value="{x:Static Visibility.Visible}"/>
                  </ObjectAnimationUsingKeyFrames>
                  <ObjectAnimationUsingKeyFrames BeginTime="00:00:00" Duration="00:00:00.0010000" Storyboard.TargetName="button" Storyboard.TargetProperty="(UIElement.Visibility)">
                    <DiscreteObjectKeyFrame KeyTime="00:00:00" Value="{x:Static Visibility.Collapsed}"/>
                  </ObjectAnimationUsingKeyFrames>
                  <ObjectAnimationUsingKeyFrames BeginTime="00:00:00" Duration="00:00:00.0010000" Storyboard.TargetName="button1" Storyboard.TargetProperty="(UIElement.Visibility)">
                    <DiscreteObjectKeyFrame KeyTime="00:00:00" Value="{x:Static Visibility.Visible}"/>
                  </ObjectAnimationUsingKeyFrames>
                </Storyboard>
              </vsm:VisualState>
              <VisualState x:Name="Normal">
                <Storyboard>
                  <ObjectAnimationUsingKeyFrames BeginTime="00:00:00" Duration="00:00:00.0010000" Storyboard.TargetName="control" Storyboard.TargetProperty="(UIElement.Visibility)">
                    <DiscreteObjectKeyFrame KeyTime="00:00:00" Value="{x:Static Visibility.Collapsed}"/>
                  </ObjectAnimationUsingKeyFrames>
                </Storyboard>
              </VisualState>
            </vsm:VisualStateGroup>
          </vsm:VisualStateManager.VisualStateGroups>
          <Grid x:Name="LayoutRoot">
            <Grid.Resources>
              <ControlTemplate x:Key="loadingAnimation">
                <Image x:Name="content" Opacity="1">
                  <Image.Source>
                    <DrawingImage>
                      <DrawingImage.Drawing>
                        <DrawingGroup>
                          <GeometryDrawing Brush="Transparent">
                            <GeometryDrawing.Geometry>
                              <RectangleGeometry Rect="0,0,1,1"/>
                            </GeometryDrawing.Geometry>
                          </GeometryDrawing>
                          <DrawingGroup>
                            <DrawingGroup.Transform>
                              <RotateTransform x:Name="angle" Angle="0" CenterX="0.5" CenterY="0.5"/>
                            </DrawingGroup.Transform>
                            <GeometryDrawing Geometry="M0.9,0.5 A0.4,0.4,90,1,1,0.5,0.1">
                              <GeometryDrawing.Pen>
                                <Pen Brush="Green" Thickness="0.1"/>
                              </GeometryDrawing.Pen>
                            </GeometryDrawing>
                            <GeometryDrawing Brush="Green" Geometry="M0.5,0 L0.7,0.1 L0.5,0.2"/>
                          </DrawingGroup>
                        </DrawingGroup>
                      </DrawingImage.Drawing>
                    </DrawingImage>
                  </Image.Source>
                </Image>
                <ControlTemplate.Triggers>
                  <Trigger Property="Visibility" Value="Visible">
                    <Trigger.EnterActions>
                      <BeginStoryboard x:Name="animation">
                        <Storyboard>
                          <DoubleAnimation From="0" To="359" Duration="0:0:1.5" RepeatBehavior="Forever" Storyboard.TargetName="angle" Storyboard.TargetProperty="Angle"/>
                        </Storyboard>
                      </BeginStoryboard>
                    </Trigger.EnterActions>
                    <Trigger.ExitActions>
                      <StopStoryboard BeginStoryboardName="animation"/>
                    </Trigger.ExitActions>
                  </Trigger>
                </ControlTemplate.Triggers>
              </ControlTemplate>
            </Grid.Resources>
            <Grid.ColumnDefinitions>
              <ColumnDefinition MinWidth="76.128" Width="Auto"/>
              <ColumnDefinition MinWidth="547.872" Width="Auto"/>
            </Grid.ColumnDefinitions>
            <Grid.RowDefinitions>
              <RowDefinition Height="0.05*"/>
              <RowDefinition Height="0.95*"/>
            </Grid.RowDefinitions>
            <Button x:Name="button" Margin="0,0,1,0.04" Width="100" Content="Load" d:LayoutOverrides="Height" Click="Button_Click"/>
            <Button x:Name="button1" HorizontalAlignment="Left" Margin="0,0,0,0.04" Width="100" Content="Stop" Grid.Column="1" d:LayoutOverrides="Height" Click="Button2_Click" Visibility="Collapsed"/>
            <Control x:Name="control" Margin="10" Height="100" Grid.Row="1" Grid.ColumnSpan="2" Width="100" Template="{DynamicResource loadingAnimation}" Visibility="Collapsed"/>
          </Grid>
        </Window>

    and the following code-behind on the window:

        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                this.InitializeComponent();
            }

            private void Button1_Click(object sender, System.Windows.RoutedEventArgs e)
            {
                VisualStateManager.GoToState(this, "Loading", true);
            }

            private void Button2_Click(object sender, System.Windows.RoutedEventArgs e)
            {
                VisualStateManager.GoToState(this, "Normal", true);
            }
        }

    However, when I click the first button (button1) the state change is not being triggered. What am I doing wrong? Thanks in advance.


  • Plan Caching and Query Memory Part II (Hash Match) – When not to use stored procedure - Most common performance mistake SQL Server developers make.

    - by sqlworkshops
    SQL Server estimates the memory requirement at compile time; when a stored procedure or another plan-caching mechanism like sp_executesql or a prepared statement is used, the memory requirement is estimated based on the first set of execution parameters. This is a common reason for spill over to tempdb and hence poor performance. Common memory-allocating queries are those that perform Sort and Hash Match operations such as Hash Join, Hash Aggregation or Hash Union. This article covers Hash Match operations with examples. It is recommended to read Plan Caching and Query Memory Part I before this article, which covers an introduction and query memory for Sort. In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of a spill over tempdb, unless the memory requirement for a query does not change significantly based on predicates.

    This article covers underestimation / overestimation of memory for the Hash Match operation; Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to a spill over tempdb and hence negatively impacts performance, while overestimation of memory affects the memory needs of other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can actually lead to poor performance.

    To read additional articles I wrote click here. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Let's create a CustomersState table that has 99% of customers in NY and the remaining 1% in WA. The Customers table used in Part I of this article is also used here. To observe Hash Warnings, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.

        --Example provided by www.sqlworkshops.com
        drop table CustomersState
        go
        create table CustomersState (CustomerID int primary key, Address char(200), State char(2))
        go
        insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers
        update CustomersState set State = 'NY' where CustomerID % 100 != 1
        update CustomersState set State = 'WA' where CustomerID % 100 = 1
        go
        update statistics CustomersState with fullscan
        go

    Let's create a stored procedure that joins Customers with the CustomersState table with a predicate on State.

        --Example provided by www.sqlworkshops.com
        create proc CustomersByState @State char(2) as
        begin
          declare @CustomerID int
          select @CustomerID = e.CustomerID
          from Customers e
          inner join CustomersState es on (e.CustomerID = es.CustomerID)
          where es.State = @State
          option (maxdop 1)
        end
        go

    Let's execute the stored procedure first with parameter value 'WA', which will select 1% of the data.

        set statistics time on
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 294 ms to complete. It was granted 6704 KB based on 8000 rows being estimated. The estimated number of rows, 8000, is similar to the actual number of rows, 8000, so the memory estimation is fine. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'NY', which will select 99% of the data.
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    The stored procedure took 2922 ms to complete. It was granted 6704 KB based on 8000 rows being estimated. The estimated number of rows, 8000, is way off from the actual number of rows, 792000, because the estimation is based on the first set of parameter values supplied to the stored procedure, which was 'WA' in our case. This underestimation leads to a spill over tempdb, resulting in poor performance. There was a Hash Warning (Recursion) in SQL Profiler.

    Let's recompile the stored procedure and then execute it first with parameter value 'NY'. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile; refer to our webcasts, www.sqlworkshops.com/webcasts, for further details.

        exec sp_recompile CustomersByState
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    Now the stored procedure took only 1046 ms instead of 2922 ms. It was granted 146752 KB of memory, and the estimated number of rows, 792000, is similar to the actual number of rows, 792000. The better performance of this execution is due to the better memory estimate and avoiding the spill over tempdb. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'WA'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms. This execution was granted more memory (146752 KB) than necessary (6704 KB), because parameter value 'NY' (792000 rows) was used for estimation instead of parameter value 'WA' (8000 rows) - again, the estimation is based on the first set of parameter values supplied after the recompile, which was 'NY' in this case. This overestimation hurts the performance of the Hash Match operation, and it might also affect the performance of other concurrently executing queries requiring memory; hence overestimation is not recommended. The estimated number of rows, 792000, is much more than the actual number of rows, 8000.

    Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint or the optimize for hint to allocate memory for a predefined data range.

    Let's recreate the stored procedure with the recompile hint.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
          declare @CustomerID int
          select @CustomerID = e.CustomerID
          from Customers e
          inner join CustomersState es on (e.CustomerID = es.CustomerID)
          where es.State = @State
          option (maxdop 1, recompile)
        end
        go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 297 ms and 1102 ms, in line with the previous optimal execution times. The execution with parameter value 'WA' has a good estimation like before: the estimated number of rows, 8000, is similar to the actual number of rows, 8000.
    The execution with parameter value 'NY' also has a good estimation and memory grant like before, because the stored procedure was recompiled with the current set of parameter values: the estimated number of rows, 792000, is similar to the actual number of rows, 792000. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit. There was no Hash Warning in SQL Profiler.

    Let's recreate the stored procedure with an optimize for hint of 'NY'.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
          declare @CustomerID int
          select @CustomerID = e.CustomerID
          from Customers e
          inner join CustomersState es on (e.CustomerID = es.CustomerID)
          where es.State = @State
          option (maxdop 1, optimize for (@State = 'NY'))
        end
        go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 353 ms with parameter value 'WA', much slower than the optimal execution time of 294 ms we observed previously; this is because of overestimation of memory. The execution with parameter value 'NY' has the optimal execution time like before. The execution with parameter value 'WA' overestimates the rows because of the optimize for hint value of 'NY': unlike before, more memory than needed was granted based on the hint. The execution with parameter value 'NY' has a good estimation because of the same hint: the estimated number of rows, 792000, is similar to the actual number of rows, 792000, and the optimal amount of memory was granted. There was no Hash Warning in SQL Profiler.

    This article covers underestimation / overestimation of memory for the Hash Match operation; Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to a spill over tempdb and hence negatively impacts performance, while overestimation of memory affects the memory needs of other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can actually lead to poor performance.

    Summary: a cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, but that leads to compilation overhead; however, in most cases it might be acceptable to pay for compilation rather than spilling a sort over tempdb, which can be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but if one processes more data than the hint anticipates, this will still lead to a spill; and on the other side there is the possibility of overestimation causing unnecessary memory pressure for other concurrently executing queries. In the case of Hash Match operations, this overestimation of memory might itself lead to poor performance.
    When the values used in the optimize for hint are later archived out of the database, the estimation will be wrong and lead to the worst performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in that case.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3-day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan
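
    For readers reproducing these demos, the memory grants quoted above can be observed live in the standard DMV sys.dm_exec_query_memory_grants; a hedged sketch, to be run from a second session while the stored procedure executes:

        --watch requested vs. granted vs. actually used memory, per session
        select session_id, requested_memory_kb, granted_memory_kb,
               used_memory_kb, max_used_memory_kb
        from sys.dm_exec_query_memory_grants;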


  • SQLAuthority News – #TechEdIn – TechEd India 2012 Memories and Photos

    - by pinaldave
    TechEd India 2012 was held in Bangalore last March 21 to 23, 2012. Just like every year, this event was bigger, grander and more inspiring.

    [Photo: Pinal Dave at TechEd India 2012]

    Family Event

    Every single year, TechEd is a special affair for my entire family. Four months before the start of TechEd, I usually start to build a mental image of the event and think about various things. What excites me most is presenting a session and meeting friends. Seriously, I start thinking about presenting my session 4 months before the event! I work on my presentation day and night. I want to make sure that what I present is accurate and that I have experienced it firsthand. My wife and my daughter also contribute to my efforts. For us, TechEd is a family event, and the two of them feel equally responsible as well. They give up their family time so I can bring out the best content for the community.

    [Photo: Pinal, Shaivi and Nupur at TechEd India 2012]

    Guinea Pigs (My Experiment Victims)

    I do not rehearse my session, ever. However, I test my demo almost every single day up to the last moment before I have to present it. I sometimes go over the demo more than 2-3 times a day even though the event is more than a month away. I have two "guinea pigs": 1) Nupur Dave and 2) Vinod Kumar. When I am at home, I present my demos to my wife Nupur. At times I feel that while other people back up their demos, in my case I have backup demo presenters. In the office during lunch time, I present the demos to Vinod. I am sure he could walk through my demos with his eyes closed.

    [Photo: Pinal and Vinod at TechEd India 2012]

    My Sessions

    I have been determined to present my sessions in a real and practical manner. I prefer to present a subject that I myself would be eager to attend and sit through if I were in the audience. Keeping that principle in mind, I created two sessions this year.

    SQL Server Misconceptions and Resolutions

    [Photo: Pinal and Vinod at TechEd India 2012]

    We believe all kinds of stuff - that the earth is flat, that the forbidden fruit was an apple, that the big bang theory explains the origin of the universe, and so many other things. Just like these, we have plenty of misconceptions in SQL Server as well. I have had a dream of co-presenting a session with Vinod Kumar for the past 3 years. I have been asking him every year if we could present a session together, but we never got it to work out, until this year. Fortunately, we got a chance to stand on the same stage and present a single subject. I believe that Vinod Kumar and I have excellent synergy when we are working together: we know each other's strengths and weaknesses, and we know when the other person will speak and when he will keep quiet. The reason behind this synergy is that we have worked on 2 video learning courses (SQL Server Indexes and SQL Server Questions and Answers) and authored 1 book (SQL Server Questions and Answers) together.

    [Photo: Crowd outside the session hall]

    This session was inspired by the "Laurel and Hardy" show, so we performed a role-play of those famous characters. We had an excellent time on stage and, for sure, the audience had a wonderful time, too. We had an extremely large audience for this session and had a great time interacting with them.

    Speed Up! - Parallel Processes and Unparalleled Performance

    [Photo: Pinal Dave at TechEd India 2012]

    I wanted to approach this session at level 400, and I was very determined to do so.
    The biggest challenge I had was that this was a 60-minute session and the audience profile was very generic; I had to present at level 100 as well as at 400. I worked hard to tune up these demos. I wanted to make sure that my messages would land perfectly in the minds of the attendees, and that when they walked out of the session they could use the knowledge I shared on their own servers. After the session, I felt extreme satisfaction as I received lots of positive feedback at the event. At one point, so many people rushed towards me that I was a bit scared the stage might break and someone would get injured. Fortunately, nothing like that happened and I was able to shake hands with everybody.

    [Photo: Pinal Dave at TechEd India 2012]
    [Photo: Crowd rushing to Pinal at TechEd India 2012]

    Networking

    This is one of the primary reasons many of us visit the annual TechEd event. I had a fantastic time meeting SQL Server enthusiasts. Well, it was a terrific time meeting old friends, user group members, MVPs and SQL enthusiasts. I have taken many photographs with lots of people, but have received very few back. If you are reading this blog and have a photo of us from the event, would you please send it to me so I can keep it in my memory lane?

    [Photo: SQL Track Speakers: Jacob and Pinal at TechEd India 2012]
    [Photo: SQL Community: Pinal, Tejas, Nakul, Jacob, Balmukund, Manas, Sudeepta, Sahal at TechEd India 2012]
    [Photo: Star Speakers: Amit and Balmukund at TechEd India 2012]
    [Photo: TechEd Rockstars: Nakul, Tejas and Pinal at TechEd India 2012]

    I guess TechEd is a mix of family affair and culture for me! Hamara TechEd (Our TechEd). Please tell me which photo you like the most!

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, SQLServer, T SQL, Technology. Tagged: TechEd, TechEdIn


  • MSSQL: Copying data from one database to another

    - by DigiMortal
    I have a database with data imported from another server using the Import and Export Wizard of SQL Server Management Studio. There is also an empty database with the same tables, but that one also has primary keys, foreign keys and indexes. How do I get the data from the first database into the other? Here is the description of my crusade - and believe me, it is not a nice one.

    Bugs in the Import and Export Wizard

    There are some awful bugs in the Import and Export Wizard that make data imports and exports possible only in a very limited manner:

    - the wizard is not able to analyze foreign keys;
    - the wizard always wants to create tables, whatever you say in the settings.

    The result is a faulty and useless package. Now let's go step by step and make things work in our scenario.

    Databases

    There are two databases. Let's name them like this:

    - PLAIN - contains data imported from the remote server (no indexes, no keys, no nothing, just plain dumb data)
    - CORRECT - empty database with the same structure as the remote database (indexes, keys and everything else, but no data)

    Our goal is to get data from PLAIN to CORRECT.

    1. Create the import and export package

    At this point we will create the faulty SSIS package using SQL Server Management Studio:

    - Run the Import and Export Wizard and let it create an SSIS package that reads data from CORRECT and writes it to, let's say, CORRECT-2.
    - Make sure you enable identity insert.
    - Make sure there are no views selected.
    - Make sure you don't let the package create tables (you can miss this step because it wants to create tables anyway).
    - Save the package to SSIS.

    2. Modify the import and export package

    Now let's clean up the package and remove all the faulty crap:

    - Connect SQL Server Management Studio to the SSIS instance.
    - Select the package you just saved and export it to your hard disc.
    - Run Business Intelligence Studio.
    - Create a new SSIS project (DON'T MISS THIS STEP).
    - Add the package from disc as an existing item to the project and open it.
    - Move to the Control Flow page and do one of the following: remove all preparation SQL tasks and connect the Data Flow tasks, or modify all preparation SQL tasks so the existence of tables is checked before a table is created (yes, you have to do it manually).
    - Add a new Execute SQL task as the first task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command, then save the task:

        EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
        GO

        EXEC sp_MSForEachTable 'DELETE FROM ?'
        GO

    - Add a new Execute SQL task as the last task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command, then save the task:

        EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
        GO

    - Now connect the first Execute SQL task with the first Data Flow task, and the last Data Flow task with the second Execute SQL task.
    - Move to the Package Explorer tab and change the connections under the Connection Managers folder: make the source connection use database PLAIN and the destination connection use database CORRECT.
    - Save the package and rebuild the project.
    - Update the package using SQL Server Management Studio. Some hints: make sure you take the package from the solution folder, because it is saved there now; don't overwrite the existing package - use a numeric suffix and let Management Studio create a new version of it.

    Now you are done with your package. Run it to test it and clean out all the errors you find.

    TRUNCATE vs DELETE

    You can see that I used DELETE FROM instead of TRUNCATE. Why?
Because TRUNCATE has some nasty limits (taken from MSDN): “You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint; instead, use DELETE statement without a WHERE clause. Because TRUNCATE TABLE is not logged, it cannot activate a trigger. TRUNCATE TABLE may not be used on tables participating in an indexed view.” As I am not sure what tables you have and how they are used I provided here the solution that should work for all scenarios. If you need better performance then in some cases you can use TRUNCATE table instead of DELETE. Conclusion My conclusion is bitter this time although I am very positive guy. It is A.D. 2010 and still we have to write stupid hacks for simple things. Simple tools that existed before are long gone and we have to live mysterious bloatware that is our only choice when using default tools. If you take a look at the length of this posting and the count of steps I had to do for one easy thing you should treat it as a signal that something has went wrong in last years. Although I got my job done I would be still more happy if out of box tools are more intelligent one day. References T-SQL Trick for Deleting All Data in Your Database (Mauro Cardarelli) TRUNCATE TABLE (MSDN Library) Error Handling in SQL 2000 – a Background (Erland Sommarskog) Disable/Enable Foreign Key and Check constraints in SQL Server (Decipher)
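    As a quick supplement, here is a consolidated T-SQL sketch of the constraint-toggle pattern the package wraps around the data flow. This is a hedged illustration: sp_MSForEachTable is an undocumented Microsoft procedure, and the WITH CHECK variant in the last step is an optional extra (not used in the package above) that revalidates the loaded rows so the constraints stay trusted.

        -- Run in the destination (CORRECT) database.
        -- 1. Disable all FOREIGN KEY and CHECK constraints so rows can load in any order.
        EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';

        -- 2. Clear existing rows (DELETE rather than TRUNCATE; see the note above).
        EXEC sp_MSForEachTable 'DELETE FROM ?';

        -- 3. (The SSIS data flow copies the rows from PLAIN to CORRECT at this point.)

        -- 4. Re-enable the constraints; WITH CHECK also revalidates the loaded data.
        EXEC sp_MSForEachTable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';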

    Read the article

  • C++ SMTP connection state - STARTTLS

    - by Jackell
    Hi all! I am using OpenSSL to build secure SMTP connections to gmail.com:25. I can successfully connect to the server and send the STARTTLS command (I receive "220 2.0.0 Ready to start TLS"). Then, without disconnecting, I execute the following code:

        SSL_METHOD* method = NULL;
        SSL_CTX* ctx = NULL;   // ctx and ssl are declared here for completeness;
        SSL* ssl = NULL;       // the original snippet assumed they already existed

        SSL_library_init();
        SSL_load_error_strings();

        method = SSLv23_client_method();
        ctx = SSL_CTX_new(method);
        if (ctx == NULL) {
            ERR_print_errors_fp(stderr);
        }
        SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2);

        ssl = SSL_new(ctx);
        if (!SSL_set_fd(ssl, socket)) {
            ERR_print_errors_fp(stderr);
            return;
        }
        if (ssl) {
            if (SSL_connect((SSL*)ssl) < 1) {
                ERR_print_errors_fp(stderr);
            }
            // then I think I need to send EHLO
        }

    But after calling SSL_connect I get this error:

        24953:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:s23_clnt.c:601:

    Why? What am I doing wrong?
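    A likely explanation, offered as a hedged note: the "unknown protocol" failure from SSL23_GET_SERVER_HELLO usually means the first bytes read during the TLS handshake were not a TLS ServerHello, i.e. the socket still carried plaintext SMTP data, which typically happens when SSL_connect is started before the server's replies have been read in full. The sketch below (assuming a blocking POSIX socket; read_smtp_reply is a hypothetical helper, not an OpenSSL call) shows the plaintext dialogue that must be completely drained before the handshake:

        #include <string>
        #include <sys/types.h>
        #include <sys/socket.h>

        // Hypothetical helper: read one complete (possibly multi-line) SMTP reply.
        // A reply is finished when its last line starts with "NNN " (code + space)
        // rather than "NNN-" (a continuation line).
        static std::string read_smtp_reply(int sock) {
            std::string reply;
            char buf[512];
            for (;;) {
                ssize_t n = recv(sock, buf, sizeof(buf), 0);
                if (n <= 0) break;                         // closed connection or error
                reply.append(buf, static_cast<size_t>(n));
                if (reply.size() >= 4 && reply[reply.size() - 1] == '\n') {
                    size_t line = reply.rfind('\n', reply.size() - 2);
                    line = (line == std::string::npos) ? 0 : line + 1;
                    if (line + 3 < reply.size() && reply[line + 3] == ' ')
                        break;                             // e.g. "220 ...", not "250-..."
                }
            }
            return reply;
        }

        // The plaintext dialogue that must complete before SSL_connect():
        void starttls_preamble(int sock) {
            read_smtp_reply(sock);                          // 220 greeting
            send(sock, "EHLO client.example\r\n", 21, 0);   // hostname is a placeholder
            read_smtp_reply(sock);                          // 250-... capability list
            send(sock, "STARTTLS\r\n", 10, 0);
            read_smtp_reply(sock);                          // 220 2.0.0 Ready to start TLS
            // Only now is the socket "clean": the very next bytes exchanged belong
            // to the TLS handshake, so SSL_connect() can safely take over.
        }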

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble

    This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck, along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.

    Why Columnstore?

    As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows (DW, reporting, aggregations, grouping, scans, etc.), SQL Server has never had a good mechanism until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014 the potential blockers have been largely removed, and columnstore indexes are going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore and to document why it is a compelling reason to upgrade to SQL Server 2014.

    The Customer

    DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)

    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries

    DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.

    SSRS, SSAS, & MDX

    Conventional relational structures were unable to provide adequate performance for user interaction with the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.

    Ad Hoc Queries

    Even though the fact table is relatively small (only 22 million rows & 33GB), the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).

    The Details

    Classic vs. columnstore before-&-after metrics are impressive:

        Scenario        Conventional Structures          Columnstore     Δ
        SSRS via SSAS   10 - 12 seconds                  1 second        >10x
        Ad Hoc          5 - 7 minutes (300 - 420 sec)    1 - 2 seconds   >100x

    Here are two charts characterizing this data graphically. The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes. As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics. Just to make it fair, here's the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren't visible.

    The Wins

    - Performance: Even prior to columnstore implementation, the 10 - 12 second canned report performance against the SSAS cube was tolerable, yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience for ad hoc interrogation. The difference between several minutes and one or two seconds is a game changer, literally changing the way users interact with their data: no mental context switching, no wondering when the results will appear, no preoccupation with the spinning, mind-numbing, hurry-up-&-wait indicators. As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.
    - Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.
    - PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014.

    DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: "What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways."

    Summary

    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second of a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, including canned queries, and for which the time-to-results for ad hoc queries dropped from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.
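    For readers who want to experiment, here is a minimal T-SQL sketch of the kind of index involved. This is a hedged illustration: the table and column names are hypothetical, not DevCon's actual schema.

        -- A wide, hypothetical DW fact table. For ad hoc workloads the SQL Server 2012
        -- nonclustered columnstore index is typically built over all (or most) columns,
        -- so any column combination can be served from the columnstore.
        CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_FactSecurityEvent
        ON dbo.FactSecurityEvent
            (EventDate, CustomerKey, SiteKey, EventTypeKey, DurationSeconds
             /* , ...the remaining fact columns... */);

        -- Caveat: in SQL Server 2012 this index makes the table read-only, which is why
        -- nightly refreshes rely on partition switching (as DevCon does). SQL Server 2014
        -- adds updatable clustered columnstore indexes, removing that blocker.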

    Read the article

  • Making hover state of hidden list visible when page is active

    - by Joel
    Hi guys, one day I hope to not be such a newbie on this stuff, but some of it feels so insurmountable sometimes! OK: I have a nav bar with hidden li items that are visible when hovered over. Here's the live site: http://www.rattletree.com

    Here's the code for the nav:

        <ul id="navbar">
          <li id="iex"><a href="index.php">About Rattletree</a></li>
          <li id="upcomgshows"><a href="upcomingshows.php">Calendar</a></li>
          <li id="sods"><a href="#">Sights &amp; Sounds</a>
            <ul class="innerlist">
              <li class="innerlist"><img class="arrowAdjust" src="images/curved_arrow.png" alt="" /></li>
              <li class="innerlist"><a href="/playlist.m3u" target="_blank" onclick="javascript:BatmoAudioPop('Rattletree Marimba',this.href,'1'); return false">Listen</a></li>
              <li class="innerlist"><a href="/new_pictures.php">Photos</a></li>
              <li class="innerlist"><a href="/video.php">Video</a></li>
              <li class="innerlist"><a href="/press.php">Press</a></li>
            </ul>
          </li>
          <li id="bookin"><a href="#">Contact</a>
            <ul class="innerlist">
              <li class="innerlist"><img class="arrowAdjust" src="images/curved_arrow.png" alt="" /></li>
              <li class="innerlist"><a href="/booking.php">Booking Info</a></li>
              <li class="innerlist"><a href="/media.php">Media Inquiries</a></li>
            </ul>
          </li>
          <li id="ste"><a href="/sounds.php">Store</a></li>
          <li id="instrumes"><a href="/instruments.php">The Instruments</a></li>
          <li id="classe"><a href="classes.php">Workshops</a></li>
        </ul>

    CSS:

        div#navbar2 {
          background-color: #546F8B;
          border-bottom: 1px solid #546F8B;
          border-top: 1px solid #000000;
          display: inline-block;
          position: relative;
          width: 100%;
        }
        div#navbar2 ul#navbar {
          color: #FFFFFF;
          font-family: Arial, Helvetica, sans-serif;
          font-size: 16px;
          letter-spacing: 1px;
          margin: 10px 0;
          padding: 0;
          white-space: nowrap;
        }
        div#navbar2 ul#navbar li ul.innerlist {
          color: #000000;
          display: none;
          position: relative;
          z-index: 20;
        }
        div#navbar2 ul#navbar li {
          display: inline;
          list-style-type: none;
          margin: 0;
          padding: 0;
          position: relative;
        }

    Now it's a bit tricky what I want to do: if a user navigates to one of the innerlist pages, I'd like that innerlist ul to remain visible (with the specific li displaying the hovered state). I think I could figure that out on my own, but you can see on the live page that if the user is on a page from the innerlist and that list stayed visible, then if they hovered over another nav tab the two innerlists would overlap. This is a problem. Hopefully that last sentence makes sense!

    In short: I need to keep the inner list of the active page displaying, BUT if the user hovers over another nav button WITH its own inner list, then the active page's innerlist needs to disappear. Clear as mud?
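    One CSS-only approach, offered as a hedged sketch rather than a definitive answer: have the server add a page-specific class to <body> (the class name pg-sights below is hypothetical), pin that section's submenu open by default, and retract it whenever the nav is hovered so only the hovered submenu shows. The selectors are deliberately written to out-rank the display:none rule above, and source order matters for the last two rules, which tie on specificity:

        /* 1. Pin: on the Sights & Sounds pages, show that submenu by default. */
        body.pg-sights div#navbar2 #sods ul.innerlist { display: block; }

        /* 2. Retract: while the pointer is anywhere over the nav, hide the pinned
              submenu so it cannot overlap a hovered neighbour. */
        div#navbar2 ul#navbar:hover #sods ul.innerlist { display: none; }

        /* 3. Hover: show whichever submenu is being hovered; the second selector
              ties rule 2's specificity and is declared later, so hovering the
              pinned item itself still reopens its own submenu. */
        div#navbar2 ul#navbar li:hover ul.innerlist,
        div#navbar2 ul#navbar #sods:hover ul.innerlist { display: block; }

    Rule 1 (and the #sods parts of rules 2 and 3) would be repeated per section, e.g. a hypothetical pg-contact class pinning #bookin's list.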

    Read the article

  • View state in ASP.NET MVC application

    - by Shetty
    Hi, I have read that there is no view state in an ASP.NET MVC application. I am doing model validation. Now, if I have two text boxes on my page, I can do required-field validation for both of them in the model. This validation is done on the server side on the click of a button. I fill in one text box and click the submit button; the validation runs and returns a result saying the second field is required. At this point the value of the first text box is retained. So can you tell me how this text box retains its value even after the postback, if there is no view state?
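    The short explanation, illustrated with a hedged sketch (the controller, action and model names below are hypothetical): MVC needs no ViewState for this because the posted form values are still available on the server. During model binding they are copied into ModelState, and the Html helpers (TextBoxFor etc.) read ModelState before the model when rendering, so a redisplayed form echoes back exactly what the user typed.

        using System.Web.Mvc;

        public class AccountController : Controller
        {
            // POST handler: on validation failure the same view is returned, and
            // Html.TextBoxFor(m => m.FirstName) re-renders the value the user just
            // posted -- it comes from ModelState, not from any ViewState.
            [HttpPost]
            public ActionResult Register(RegisterModel model)   // RegisterModel is hypothetical
            {
                if (!ModelState.IsValid)
                {
                    // Redisplay the form; posted values round-trip via ModelState.
                    return View(model);
                }
                return RedirectToAction("Success");
            }
        }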

    Read the article

< Previous Page | 404 405 406 407 408 409 410 411 412 413 414 415  | Next Page >