
  • File Server Resource Manager attempting to access quota.xml on System Reserved partition?

    - by pmellett
    I've got a new install of Server 2008 R2 that is designed to be our quota server for user home directories and shared areas. I installed FSRM and set up a few quotas to try out. They worked fine, but at some point over the weekend it stopped loading the FSRM console quota screen and gives the following error, with Event ID 8228: File Server Resource Manager was unable to access the following file or volume: '\\?\Volume{73649de6-7f04-11e1-a344-005056b10310}\System Volume Information\SRM\quota.xml'. This file or volume might be locked by another application right now, or you might need to give Local System access to it. I have removed and reinstalled the FSRM role service, cleared the \System Volume Information\SRM folder on each volume, and am on the verge of just starting again. I'd rather not, since then I'd have to go through and set up all my NTFS permissions again. Since it looks like the service is trying to access the System Reserved partition, which I assume won't have any files it could possibly need, how do I remove the System Reserved partition as a volume to be monitored by the quota service? (I am not aware of having configured it that way originally, though!)

  • Change the logical name of sql server express 2005 database file?

    - by oob
    In Microsoft SQL Server Management Studio Express for SQL Server Express 2005, I needed to copy a database for testing and keep it on the same server as the old database. I did the following:

    1. Right-clicked on Databases
    2. Created a new database
    3. Detached the database I wanted to copy
    4. "Restored" my new database from the backup file of my old database

    I did this by checking the 'Overwrite the existing database' box on the Options pane, and I changed the paths in the 'restore as' options so that they pointed to my new .mdf and .ldf files. Everything is working like I want. The problem is, when I go to Right-click - Properties - Files on my new database, the logical name of the .mdf file is the same as the logical name of the old .mdf file. They are actually different files; they just share the same logical name. I guess this isn't a short-term problem, but I can see it confusing somebody down the road. Is there any way to change the logical name of the .mdf file? UPDATE/EDIT - Apparently you can just change the logical name through the GUI by, get this, clicking on it and typing a new name. I could swear that was not possible when I posted this, but maybe it was and I somehow missed it! Either way, the solution below should still work, but doing it through the GUI is also an option.
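
    As a hedged T-SQL sketch of the non-GUI route (the database and logical file names here are placeholders, not from the original post), renaming a logical file name is a single ALTER DATABASE statement:

        -- Rename the logical data file of the copied database
        ALTER DATABASE TestCopyDB
        MODIFY FILE (NAME = N'OldDB_Data', NEWNAME = N'TestCopyDB_Data');
        GO

        -- Verify the change
        SELECT name AS logical_name, physical_name
        FROM TestCopyDB.sys.database_files;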

  • Faster way to transfer table data from linked server

    - by spender
    After much fiddling, I've managed to install the right ODBC driver and have successfully created a linked server on SQL Server 2008 by which I can access my PostgreSQL db from SQL Server. I'm copying all of the data from some of the tables in the PgSQL DB into SQL Server using merge statements that take the following form:

        with mbRemote as
        (
            select *
            from openquery(someLinkedDb, 'select * from someTable')
        )
        merge into someTable mbLocal
        using mbRemote
        on mbLocal.id = mbRemote.id
        when matched
            /*edit*/
            /*clause below really speeds things up when many rows are unchanged*/
            /*can you think of anything else?*/
            and not (mbLocal.field1 = mbRemote.field1
                 and mbLocal.field2 = mbRemote.field2
                 and mbLocal.field3 = mbRemote.field3
                 and mbLocal.field4 = mbRemote.field4)
            /*end edit*/
            then update set
                mbLocal.field1 = mbRemote.field1,
                mbLocal.field2 = mbRemote.field2,
                mbLocal.field3 = mbRemote.field3,
                mbLocal.field4 = mbRemote.field4
        when not matched then
            insert (id, field1, field2, field3, field4)
            values (mbRemote.id, mbRemote.field1, mbRemote.field2, mbRemote.field3, mbRemote.field4)
        when not matched by source then
            delete;

    After this statement completes, the local (SQL Server) copy is fully in sync with the remote (PgSQL) server. A few questions about this approach: Is it sane? It strikes me that an update will be run over all fields in local rows that haven't necessarily changed; the only prerequisite is that the local and remote id fields match. Is there a more fine-grained approach, or a way of constraining the merge statement to only update rows that have actually changed?

  • Debugging SQL Server Slowness: Same Database, Different Servers

    - by Craig Walker
    For a while now we've been having anecdotal slowness on our newly minted (VMware-based) SQL Server 2005 database servers. Recently the problem has come to a head and I've started looking for the root cause of the issue. Here's the weird part: on the stored procedure that I'm using as a performance test case, I get a 30x difference in execution speed depending on which DB server I run it on. This is using the same database (mdf) and log (ldf) files, detached, copied, and reattached from the slow server to the fast one. This doesn't appear to be a (virtualized) hardware issue: the slow server has 4x the CPU capacity and 2x the memory of the fast one. As best as I can tell, the problem lies in the environment/configuration of the servers (either the operating system or the SQL Server installation). However, I've checked a bunch of variables (SQL Server config options, running services, disk fragmentation) and found nothing that has made a difference in testing. What things should I be looking at? What tools can I use to investigate why this is happening?
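
    One hedged starting point for the tooling question (my addition, not the poster's): compare the top wait types on the two servers using the SQL Server 2005 DMVs, since a 30x gap usually shows up as a lopsided wait profile on the slow side.

        -- Top waits since the last service restart; run on both servers and compare
        SELECT TOP (10)
               wait_type,
               wait_time_ms,
               waiting_tasks_count
        FROM sys.dm_os_wait_stats
        WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP')  -- filter common idle waits
        ORDER BY wait_time_ms DESC;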

  • SQL Server 2005 Reporting Services (x64) on Windows 2K8 -> CleanCurrentUserName() not found

    - by Steven Pardo
    I have installed SQL Server 2005 three times now on the same box, cleaning up registry settings, files, you name it. All along I have been trying to install SQL Server 2005 Database and Reporting Services (x64) on a Windows 2008 server. I have also applied the SP3 patch, installing and restarting the server at every step. I have installed multiple instances (SQLDEV64, SQLQA64, SQLSTAGE64) of the Database and Reporting Services. I then went through the Reporting Services Configuration Manager, installing the Reporting database and setting up IIS. When I go to test the website (http://localhost/reportserver), I get the following, and therein lies my question: how can I get around this error?

    Reporting Services Error: An internal error occurred on the report server. See the error log for more details. (rsInternalError) Method not found: 'Void Microsoft.ReportingServices.Diagnostics.UserUtil.CleanCurrentUserName()'.

    Any help would be greatly appreciated.

  • SQL Server: What locale should be used to format numeric values into SQL Server format?

    - by Ian Boyd
    It seems that SQL Server does not accept numbers formatted using any particular locale. It also doesn't support locales that have digits other than 0-9. For example, if the current locale is Bengali, then the number 123456789 would come out as "?????????" - and that's just the digits, never mind what the digit grouping would be. The same problem happens for numbers in the invariant locale, which formats numbers as "123,456,789" - which SQL Server won't accept. Is there a culture that matches what SQL Server accepts for numeric values? Or will I have to create some custom "sql server" culture, generating rules for that culture myself from lower-level formatting routines? If I were in .NET (which I'm not), I could peruse the standard numeric format strings. The format codes available in .NET:

    c (Currency): $123.46
    d (Decimal): 1234
    e (Exponential): 1.052033E+003
    f (Fixed Point): 1234.57
    g (General): 123.456
    n (Number): 1,234.57
    p (Percent): 100.00 %
    r (Round Trip): 123456789.12345678
    x (Hexadecimal): FF

    Only 6 accept all numeric types:

    c (Currency): $123.46
    e (Exponential): 1.052033E+003
    f (Fixed Point): 1234.57
    g (General): 123.456
    n (Number): 1,234.57
    p (Percent): 100.00 %

    And of those, only 2 generate string representations (in the en-US locale, anyway) that would be accepted by SQL Server:

    f (Fixed Point): 1234.57
    g (General): 123.456

    Of the remaining two, fixed point is dependent on the locale's digits rather than on the number being used, leaving the General (g) format:

    g (General): 123.456

    And I can't even say for certain that the g format won't add digit groupings (e.g. 1,234). Is there a locale that formats numbers in the way SQL Server expects? Is there a .NET format code? A Java format code? A Delphi format code? A VB format code? A stdio format code?
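
    To make the constraint concrete, here is a hedged T-SQL illustration (my addition, not part of the original question) of what the server will and won't parse:

        -- Plain digits with a period parse fine
        SELECT CAST('1234567.89' AS DECIMAL(18,2));    -- returns 1234567.89

        -- Digit-grouped strings do not
        SELECT CAST('1,234,567.89' AS DECIMAL(18,2));  -- fails: error converting data type varchar to numeric

        -- Exponential notation is accepted for float, but not for decimal
        SELECT CAST('1.052033E+003' AS FLOAT);         -- returns 1052.033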

  • Remote connection to SQL Server Express fails

    - by worlds-apart89
    I have two computers that share the same Internet IP address. Using one of the computers, I can remotely connect to a SQL Server database on the other. Here is my connection string:

        SqlConnection connection = new SqlConnection(@"Data Source=192.168.1.101\SQLEXPRESSNI,1433;Network Library=DBMSSOCN;Initial Catalog=FirstDB;Persist Security Info=True;User ID=username;Password=password;");

    192.168.1.101 is the server, SQLEXPRESSNI is the SQL Server instance name, and FirstDB is the name of the database. Now, I have another computer with a different Internet IP address. I want to connect to the server above using that third computer, which does not belong to my local area network. I don't have access to the third computer at the moment, so I want to use (if possible) the client computer in the LAN again. The following does not work:

        SqlConnection connection = new SqlConnection(@"Data Source=SharedInternetIP\SQLEXPRESSNI,1433;Network Library=DBMSSOCN;Initial Catalog=FirstDB;Persist Security Info=True;User ID=username;Password=password;");

    Note that I am a beginner, so I am not quite sure what I am doing, even though I know what I want to do. By passing the Internet IP to the SqlConnection object rather than the local IP address, how can I successfully connect to the server computer using the client computer in the same network? Also note that my ultimate goal is to connect to the server with an external client, but I don't have access to that computer right now. I'd appreciate any help.

  • C++ virtual function

    - by user2950788
    Masters of C++: I am trying to implement polymorphism in C++. I want to write a base class with a virtual function, redefine that function in the child class, and then demonstrate dynamic binding in my driver program. But I just couldn't get it to work. I know how to do this in C#, so I figured that I might have made some syntactical mistakes where I used C#'s syntax in my C++ code, but these mistakes are not obvious to me at all. So I'd greatly appreciate it if you would correct my mistakes.

        class polyTest
        {
        public:
            polyTest();
            virtual void type();
            virtual ~polyTest();
        };

        void polyTest::type()
        {
            cout << "first gen";
        }

        class polyChild : public polyTest
        {
        public:
            void type();
        };

        void polyChild::type()
        {
            cout << "second gen";
        }

        int main()
        {
            polyChild * ptr1;
            polyChild * ptr2;
            ptr1 = new polyTest();
            ptr2 = new polyChild();
            ptr1 -> type();
            ptr2 -> type();
        }
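
    For reference, a minimal corrected sketch (my addition, assuming the intent is dynamic dispatch through a base-class pointer): the original won't compile because a polyChild* cannot point at a new polyTest, it won't link because the declared constructor and destructor have no definitions, and it needs <iostream>.

        #include <iostream>
        using std::cout;

        class polyTest
        {
        public:
            virtual void type() { cout << "first gen\n"; }
            virtual ~polyTest() {}  // virtual destructor: safe deletion through a base pointer
        };

        class polyChild : public polyTest
        {
        public:
            void type() { cout << "second gen\n"; }  // overrides the base version
        };

        int main()
        {
            polyTest* ptr1 = new polyTest();
            polyTest* ptr2 = new polyChild();  // base pointer to a derived object
            ptr1->type();                      // prints "first gen"
            ptr2->type();                      // dynamic binding: prints "second gen"
            delete ptr1;
            delete ptr2;
        }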

  • Distributing my Application inside a Debian Virtual Machine Image-- How to meet GPL obligations?

    - by bdk
    I have a Linux application I've developed, and I have created a standalone VMware image that people can download to try out the application without needing to install and configure a Linux server. I created this VMware image by starting with a base Debian system, installing a bunch of packages, and then configuring all the packages and daemons my application depends on. Upon load, the VMware image boots right into an X server running only my application and no window manager, so it's more of a "virtual appliance" than a normal Linux desktop environment. Users will generally never see a command prompt or any application other than my own. (My application itself is one whose licensing issues I have a handle on.) Now I would like to distribute this image, but I'm not sure how to meet my GPL obligations (and those of the other licenses the various Debian components are released under). As I understand it, I have two primary obligations to meet:

    1. Providing copyright and license information for each component I use. As I understand it, all the information I am required to present is located in the /usr/share directory in Debian, but since my users will generally never touch a console or terminal, they will never see it. Does providing a text file containing a concatenation of all the files inside /usr/share meet this obligation?

    2. Making source code available for all components I distribute. Since I am not creating the image from source, but from binary packages, I can't provide the actual source code that results in exactly my image being generated. Does providing an FTP mirror, and an offer to send that mirror on DVDs, of the Debian source debs for all the packages I use meet this obligation?

    Is there anything else I'm required to do to legally distribute this image?

  • How small (spec wise) can a virtual machine be and still boot up and run some sort of OS?

    - by IllvilJa
    One of the advantages of virtual machines is that you can be very flexible with their sizes. If the host system permits it, you can have a very large virtual machine with a lot of virtual RAM and disk. You can also go the other way around: give the virtual machine a very modest amount of RAM and disk and then choose and configure the OS appropriately. The question is, how small a virtual machine have people managed to set up (and get to both boot up and run)? Virtual machines doing something useful are preferable, even if I know "useful" in this context is awfully subjective, but laboratory cases with a configuration stripped beyond common sense could be interesting as well, just to see what people manage to boot and run. It's quite an open-ended and academic question, but think of it: an extremely small VM (which still does something useful) takes very little memory and disk and can be quite quickly saved to and restored from disk. If it's also gentle on CPU resources, one might consider having a huge number of such VMs up and running on a host. (Imagine a VM running just an old Commodore 64 or Commodore Amiga in it. OK, way wrong CPU architecture for modern virtualization software running on an x86-based PC, but still an interesting thought. You could have quite a few such small VMs running on a modern PC.)

  • Hyper-V Virtual Machine Networking issues related to Max Ethernet Frame Size

    - by Goatmale
    I fixed an issue earlier today, but I'm interested in learning WHY the fix worked. We set up a new Hyper-V virtual machine only to discover that HTTP traffic wasn't working. HTTPS, pings, everything else was working fine. After months of prodding around, I took a shot in the dark. On the Hyper-V host server, the physical NIC had an advanced setting of "Max Ethernet Frame Size" set to 1500. After setting this to 1514 the issue was fixed. Alternatively, setting it to 1512 did not solve the issue; 1514 is the magic number. My best guess is that when this setting was at 1500, incoming pings got through because their data payload was a lot smaller than, say, HTTP traffic. As for HTTPS traffic, I read about something called "Path MTU Discovery", which I'm going to assume is why HTTPS traffic was getting through fine, albeit slower. Looking at this post, people agree that 1518 is the max total frame size. Why didn't I need to change this setting to 1518 instead of 1514 bytes? And why is the default frame size 1500 if that's the max size of the Ethernet payload and not the max size of the frame?

  • Very uneven CPU utilization with SQL Server 2012 on 2 processor computer with 16 cores / processor

    - by cooplarsh
    After installing SQL Server Enterprise 2012 with the Server + CAL license model on a computer with 2 processors, each with 16 cores (and no hyperthreading involved), and putting the server under extremely heavy load, the 16 cores on the first processor were very underutilized, the first 4 cores on the second CPU were heavily utilized, and the last 12 cores were not used at all (because of the 20-core limit for this SQL Server version). Total CPU utilization was displaying as around 25%. Unfortunately, the server suffered from extremely poor performance, even though if the tasks had been evenly distributed across the 20 cores it wouldn't have been anywhere near as bad. The Windows server was running on a VMware virtual image under ESX Server, but all of the CPU was allocated to the Windows server. We tried changing affinity settings (e.g., allocating most cores to CPU and the others to I/O), but that didn't help solve the performance problems. Upgrading the product edition to SQL Server Enterprise Core 2012 not only allowed SQL Server to utilize the 12 previously unused cores on the second processor, but it also resulted in a much more even distribution of tasks across all of the processors. To get through the backlog of requests, CPU utilization jumped to around 90% and then came down to around 33% once it was caught up; performance improved dramatically once we failed over to the newly upgraded version, and the performance issues went away. I was wondering if anyone knows what might cause SQL Server to distribute the load so unevenly, relying almost exclusively on the first 4 cores of the second processor (which had 12 cores idle) while allocating only a few tasks to each of the 16 cores on the first processor. Also, is there any way we could have more evenly distributed the load across the 20 cores that were being used without the product edition upgrade? The flip side of that question is: what did the product upgrade do that caused SQL Server to start evenly distributing the load across all of the cores it recognized? Thanks for any insight and/or links that might help me make sense of what was happening.
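
    A hedged aside (my addition, not from the original post): one way to see which schedulers SQL Server considers online, and how work is spread across them, is the sys.dm_os_schedulers DMV.

        -- One row per scheduler; offline schedulers correspond to cores the edition won't use
        SELECT scheduler_id,
               cpu_id,
               status,                 -- e.g. VISIBLE ONLINE vs VISIBLE OFFLINE
               current_tasks_count,
               runnable_tasks_count
        FROM sys.dm_os_schedulers
        WHERE scheduler_id < 255;      -- exclude hidden/internal schedulers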

  • Hyper-V Virtual Machine won't respond over network

    - by Brad Gignac
    Recently, one of our Hyper-V virtual machines has periodically stopped responding over the network. It seems to be happening every few days, and it occasionally happens up to several times a day. I am by no means a sysadmin, so any direction you guys could provide would be very welcome. I've included everything I know to include below; if you need any additional information, I'll be glad to include it.

    - I can connect through the Hyper-V console.
    - I can't connect to network shares or IIS web apps, or connect using RDP or ping.
    - Memory usage seems to be normal (3 of 4 GB).
    - Processor usage seems low.

    We don't know the exact time the server goes down, but the following error appears consistently around the time it goes down:

    Error 5719, NETLOGON: This computer was not able to set up a secure session with a domain controller in domain *** due to the following: There are currently no logon servers available to service the logon request. This may lead to authentication problems. Make sure that this computer is connected to the network. If this problem persists, please contact your domain administrator.

  • Process PHP files from a network share in a vmware virtual machine

    - by nhinkle
    As a testing environment, I have set up a VMware virtual machine running Windows Server 2008 R2, with Apache and PHP installed (as part of the XAMPP package). I am doing the development outside of the VM, and so I want Apache to serve PHP files from a VM shared folder (which appears as a network share in the VM). I have done this by creating an NTFS symbolic link in Apache's htdocs directory. I can access this directory from the browser, and plain-text files are readable. However, PHP fails to process files, instead returning the following error:

    Warning: Unknown: failed to open stream: No such file or directory in Unknown on line 0
    Fatal error: Unknown: Failed opening required 'C:/xampplite/htdocs/path/to/file.php' (include_path='.;C:\xampplite\php\PEAR') in Unknown on line 0

    It appears to be a permissions issue: PHP doesn't seem to be allowed to read the file in order to process it, yet Apache has no problem opening files in the directory. I cannot figure out how to give PHP the necessary permissions to process the file. Does anybody know of a way to make this work, or else another solution for getting the files into the VM automatically while I develop on the host machine?

  • nginx virtual hosts are not working, all vhosts goes to the default one

    - by Adirael
    Hello, I just did a clean install of nginx + php-fpm on a VPS running Ubuntu 10.10. nginx is serving and PHP is working fine, but I'm not able to add vhosts to it. Well, I can add them, but only one works; the rest go to that first one. This is my first vhost, for host1:

        server {
            listen 80;
            server_name host1;

            access_log /var/log/nginx/host1.log;
            error_log /var/log/nginx/host1.error.log;

            location / {
                root /var/www/vhosts/host1/;
                index index.html index.htm index.php;
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_param SCRIPT_FILENAME /var/www/vhosts/host1/$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_index index.php;
            }
        }

    And the second one, for host2:

        server {
            listen 80;
            server_name host2;

            access_log /var/log/nginx/host2.log;
            error_log /var/log/nginx/host2.error.log;

            location / {
                root /var/www/vhosts/host2/;
                index index.html index.htm index.php;
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_param SCRIPT_FILENAME /var/www/vhosts/host2/$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_index index.php;
            }
        }

    The problem is, when I go to http://host1 everything is fine, but http://host2 just shows host1! I don't have Apache installed and everything comes from repos. Any pointers?

  • linux networking: how to redirect incoming connections from old server to new server?

    - by aliz
    Hi, I'm in the process of moving my old server to a new server, but I will keep the old server running for database replication, load balancing, etc. Each server has a separate Internet connection with a static IP, and they are connected to each other through a local Ethernet connection. I've got Ubuntu 8.04 32-bit running on the old server and Debian 6.0 64-bit on the new one. The Shorewall firewall is installed on both servers. There are some outdoor devices which periodically send data to port 43597 at the old server's IP address. I can run multiple instances of the network service which is responsible for receiving data from the devices on one server, but on different ports. Here's the question: how can I run the service on the new server and have connections coming to the old server redirected to it, while new devices can still connect to the new server's IP address, preferably on the same port and the same service, until all devices get updated to send to the new server? I've tried a Shorewall DNAT rule, but it seems the new server's default route would have to be changed to the Ethernet connection, which breaks other things. I also found out about the redir utility, but I haven't tried it yet. Is there any best practice or simple solution for such a scenario that I'm not aware of? Thanks in advance.
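
    A hedged sketch of one common answer (my addition, not the poster's; the 10.0.0.x LAN addresses are placeholders): DNAT the port on the old server and also SNAT the forwarded traffic, so replies return through the old server and the new server's default route never has to change.

        # On the old server: forward incoming TCP 43597 to the new server over the LAN link
        iptables -t nat -A PREROUTING -p tcp --dport 43597 -j DNAT --to-destination 10.0.0.2:43597

        # Rewrite the source so the new server replies back through the old server
        iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.2 --dport 43597 -j SNAT --to-source 10.0.0.1

        # Forwarding must be enabled on the old server
        sysctl -w net.ipv4.ip_forward=1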

  • Configuring SQL Server Express 2005

    - by MrTognio
    What's the proper way to configure SQL Server Express 2005 so that it can allow a number of clients to connect to the server? I have my application running both on the server machine and on the client machines. Given the nature of my application, the clients are branches geographically distant from each other and from the server itself. Every operation a client records must be reported to the server, because the server needs total control over usage and production. What should I consider when configuring the connection on both sides, the server and the client? I'm not used to SQL Server (I'm a beginner), but I have set the main options through SQL Server Configuration Manager without success. The problem seems to be related to trusted connections, even though I have set the server to support both Windows and SQL Server authentication. When the client tries to connect using Windows authentication, it displays no tables; when it connects using a password (SQL Server authentication), tables are successfully displayed but no access is allowed... Thanks in advance!
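
    Since "tables are displayed but no access is allowed" usually means the SQL Server login exists but has no permissions inside the database, here is a hedged T-SQL sketch (all names and the password are placeholders, not from the original post) of granting a branch client read/write access:

        -- On the server: create a SQL Server authentication login and map it into the database
        CREATE LOGIN branch_user WITH PASSWORD = N'StrongP@ssw0rd!';
        GO
        USE BranchDB;
        GO
        CREATE USER branch_user FOR LOGIN branch_user;
        EXEC sp_addrolemember N'db_datareader', N'branch_user';
        EXEC sp_addrolemember N'db_datawriter', N'branch_user';
        GO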

  • SQL SERVER – Extending SQL Azure with Azure worker role – Guest Post by Paras Doshi

    - by pinaldave
    This is a guest post by Paras Doshi. Paras Doshi is a research intern at SolidQ.com and a Microsoft Student Partner. He is currently working in the domain of SQL Azure. SQL Azure is nothing but SQL Server in the cloud. SQL Azure provides benefits such as on-demand rapid provisioning, cost-effective scalability, high availability, and reduced management overhead. For an introduction to SQL Azure, check out the post by Pinal here.

    In this article, we are going to discuss how to extend SQL Azure with the Azure worker role. In other words, we will attempt to write custom code and host it in the Azure worker role; the aim is to add some features that are not available with SQL Azure currently, or features that need to be customized for flexibility. This way we extend the SQL Azure capability by building solutions that run on Azure as worker roles. To understand the Azure worker role, think of it as a Windows service in the cloud: it can perform background processes, which makes it an ideal tool for handling processes such as synchronization and backup.

    First, we will focus on writing worker role code that synchronizes SQL Azure databases. Before we do so, let's see some scenarios in which synchronization between SQL Azure databases is beneficial: scaling out access over multiple databases enables us to handle workload efficiently; and, since a SQL Azure database can currently be hosted in any one of six datacenters, synchronizing databases located in different datacenters extends the data by enabling access to geographically distributed copies. Let us also see some scenarios in which SQL Server to SQL Azure synchronization is beneficial: backing up a SQL Azure database onto local infrastructure; letting the cloud handle increased workloads rather than investing in local infrastructure; and extending data to datacenters located across the world to enable efficient data access from remote locations.

    Now, let us develop a cloud-based app that synchronizes SQL Azure databases. For an introduction to developing cloud-based apps, click here. In this article, I aim to provide a bird's-eye view of what code that synchronizes SQL Azure databases looks like, and then list resources that can help you develop the solution from scratch. If you newly add a worker role to the cloud-based project, this is how the code will look. (Note: I have added comments to the skeleton code to point out the modifications that will be required to carry out the SQL Azure synchronization. Note the placement of the Setup() and Sync() functions.) Click here (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-1-for-extending-sql-azure-with-azure-worker-role1.pdf).

    Enabling SQL Azure database synchronization through the Sync Framework is a two-step process. In the first step, the database is provisioned: the Sync Framework creates tracking tables, stored procedures, triggers, and tables to store metadata that enable synchronization. This is a one-time step, and its code goes in the Setup() function, which is called once when the worker role starts. The second step is continuous (or on-demand) synchronization of the SQL Azure databases by propagating changes between them; this is done on a continuous basis by calling the Sync() function in the while loop, and the logic to synchronize changes between the databases goes in Sync(). Discussing the coding part step by step is out of the scope of this article, so let me suggest a resource, which is given here. Also, note that before you start developing the code, you will need to install the Sync Framework 2.1 SDK (download here), and you will reference some libraries before you start coding; details are available in the article I just pointed to. You will be charged for data transfers if the databases are not in the same datacenter; for pricing information, go here.

    Currently, a tool named Data Sync, which is built on top of the Sync Framework, is available in CTP and allows SQL Azure <-> SQL Server and SQL Azure <-> SQL Azure synchronization without writing a single line of code; however, in some cases the custom code shown in this blog post provides flexibility that is not available with Data Sync. For instance, filtering is not supported in the SQL Azure Data Sync CTP2; if you need such functionality now, you have the option of developing custom code using the Sync Framework.

    This code can easily be extended to synchronize on a schedule. Let us say we want the databases to be synchronized every day at 10:00 PM; this is what the code looks like now: (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-2-for-extending-sql-azure-with-azure-worker-role.pdf). Don't you think that by writing such code we are imitating the functionality provided by the SQL Server Agent for SQL Server? Think about it: we are scheduling our administrative task by writing custom code - in other words, we have developed a "lightweight SQL Server Agent for SQL Azure!" Since the SQL Server Agent is not currently available in the cloud, we have developed a solution that enables us to schedule tasks, and thus we have extended SQL Azure with the Azure worker role! If you wish to track jobs, you can do so by storing that data in SQL Azure (or Azure tables); the reason is that Windows Azure is a stateless platform, and we need to store the state of the job ourselves, the choices being SQL Azure or Azure tables. Note that this solution requires custom code and is not UI driven; however, for now, it can act as a temporary solution until the SQL Server Agent is made available in the cloud. Moreover, this solution does not encompass all the functionality that a SQL Server Agent provides, but it does open up an interesting avenue to schedule tasks such as backup and synchronization of SQL Azure databases by writing some custom code in the Azure worker role.

    Now, let us see one more possibility: running BCP through a worker role in Azure-hosted services and then uploading the backup files either locally or to blobs. If you upload locally, consider the data transfer cost. If you upload to blobs residing in the same datacenter, no transfer cost applies, but the cost of blob storage does. So, before choosing an option, evaluate your preferences keeping the cost of each option in mind.

    In this article, I have shown that an Azure worker role solution can be developed to synchronize SQL Azure databases and that a lightweight SQL Server Agent for SQL Azure can be built, and we discussed the possibility of running BCP through a worker role in Azure-hosted services for backing up our precious SQL Azure data. Thus, we can extend SQL Azure with the Azure worker role. But remember: you will be charged for running Azure worker roles. So at the end of the day, you need to ask: am I willing to build custom code and pay money to achieve this functionality? I hope you found this blog post interesting. If you have any questions/feedback, you can comment below or mail me at Paras[at]student-partners[dot]com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Azure, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
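
    The linked PDFs carry the actual snippets, so what follows is only a hedged sketch of the skeleton shape the article describes; the method names Setup() and Sync() come from the article, while the class name, the timing logic, and everything else are illustrative assumptions.

        using System;
        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WorkerRole : RoleEntryPoint
        {
            public override void Run()
            {
                Setup();  // one-time provisioning: tracking tables, triggers, and sync metadata

                while (true)
                {
                    Sync();               // propagate changes between the SQL Azure databases
                    Thread.Sleep(60000);  // or compare DateTime.UtcNow against a 10:00 PM schedule
                }
            }

            private void Setup() { /* provision the databases with the Sync Framework */ }
            private void Sync()  { /* synchronize changes between the databases */ }
        }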

  • SQL SERVER – Weekly Series – Memory Lane – #051

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorites and listed them here with additional notes below each. Let me know which of the following is your favorite article from memory lane.

    2007

    Explanation and Understanding NOT NULL Constraint
    NOT NULL is an integrity constraint. It does not allow creating a row where the column contains a NULL value. The most discussed question about NULL is: what is NULL? I will not go into an in-depth analysis of it. Simply put, NULL is unknown or missing data. When NULL is present in database columns, it can affect the integrity of the database. I really do not prefer NULLs in the database unless they are absolutely necessary.

    Three T-SQL Script to Create Primary Keys on Table
    I have always enjoyed writing about three topics: constraints and keys, backup and restore, and datetime functions. Primary key constraints prevent duplicate values for columns and provide a unique identifier to each row, and they also create a clustered index on the columns.

    2008

    Get Numeric Value From Alpha Numeric String – UDF for Get Numeric Numbers Only
    SQL is great with string operations. Many times, I use T-SQL to do my string operations. Let us see a user-defined function, which I wrote a few days ago, that returns only the numeric values from an alphanumeric string.

    Introduction and Example of UNION and UNION ALL
    It is very interesting when I get requests from blog readers to rewrite my previous articles. I have received a few requests to rewrite my article SQL SERVER – Union vs. Union All – Which is better for performance? with examples. I request you to read my previous article first to understand the concept, and then read this article to understand the same concept with an example.

    Downgrade Database for Previous Version
    The main question is how to downgrade from SQL Server 2005 to SQL Server 2000. The answer: not possible.

    Get Common Records From Two Tables Without Using Join
    Following is my scenario: suppose Table1 and Table2 have the same column, e.g. Column1. Following are the queries:

    1. Select column1, column2 From Table1
    2. Select column1 From Table2

    I want to find common records from these tables, but I don't want to use the Join clause, because for that I need to specify the column name in the Join condition. Will you help me get common records without using a Join condition? I am using SQL Server 2005.

    Retrieve – Select Only Date Part From DateTime – Best Practice – Part 2
    A year ago I wrote a post, SQL SERVER – Retrieve – Select Only Date Part From DateTime – Best Practice, where I discussed two different methods of getting the date part from a datetime.

    Introduction to CLR – Simple Example of CLR Stored Procedure
    CLR is an abbreviation of Common Language Runtime. In SQL Server 2005 and later versions, database objects can be created in the CLR: stored procedures, functions, and triggers can all be coded in CLR. CLR is faster than T-SQL in many cases, and it is mainly used to accomplish tasks which are not possible in T-SQL or which would use lots of resources. The CLR is usually implemented where there are intense string operations, thread management, or iteration methods that would be complicated for T-SQL. Implementing CLR also provides more security for extended stored procedures.

    2009

    Comic Slow Query – SQL Joke (two panels: Before Presentation, After Presentation)

    Enable Automatic Statistic Update on Database
    In one recent project, I found that despite adding good indexes and optimizing the query, I could not achieve optimized performance and still received an unoptimized response from SQL Server. On examination, I figured out that the culprit was statistics: the database I was trying to optimize had automatic updating of statistics disabled.

    Recently Executed T-SQL Query
    Please refer to the blog post for a query that lists the recently executed T-SQL queries on a database.

    Change Collation of Database Column – T-SQL Script – Consolidating Collations – Extension Script
    At some point in your DBA career, you may sit back and realize that your database collations have somehow run amok, or you may be faced with the ever-annoying CANNOT RESOLVE COLLATION message when trying to join data of varying collation settings.

    2010

    Visiting Alma Mater – Delivering Session on Database Performance and Career – Nirma Institute of Technology
    Everyone dreams of visiting the school and college where they once studied. It is a great feeling to see the college again, where you spent the wonderful golden years of your time. College time is filled with studies, education, emotions, and several plans to build a future. I consider myself fortunate, as I got the opportunity to study at some of the best places in the world.

    Change Column DataTypes
    There are times when I feel like writing that I am a day older in SQL Server. In fact, there are many who are looking for a solution that is simple enough. Have you ever searched online for something very simple? I often do, and I enjoy doing things which are straightforward and easy to change.

    2011

    Three DMVs – sys.dm_server_memory_dumps – sys.dm_server_services – sys.dm_server_registry
    In this blog post we will see three new DMVs introduced in Denali. The DMVs are very simple and there is not much to describe about them. So here is a simple game: I will ask you a question after seeing the result of each DMV, and you help me complete this blog post.

    A Simple Quiz – T-SQL Brain Trick
    If you have some time, I strongly suggest you try this quiz out, as it will for sure twist your brain.

    2012

    List All The Column With Specific Data Types in Database
    5 years ago I wrote the script SQL SERVER – 2005 – List All The Column With Specific Data Types; when I read it again, it is still very relevant, and I liked it. This is one of those scripts which every developer would like to keep handy. I have upgraded the script a bit more, including some additional information which I believe I should have added from the beginning. It is difficult to visualize the final script when writing it the first time.

    Find First Non-Numeric Character from String
    The function PATINDEX has existed for quite a long time in SQL Server, but I hardly see it being used. Well, at least I use it, and I am comfortable using it. Here is a simple script which I use when I have to identify the first non-numeric character.

    Finding Different ColumnName From Almost Identical Tables
    Well, here is an interesting example of how we can use the sys.columns catalog view to get the details of a newly added column. I have previously written about EXCEPT over here, which is very similar to Oracle's MINUS.

    Storing Data and Files in Cloud – Dropbox – Personal Technology Tip
    I thought long and hard about doing a Personal Technology Tips series for this blog. I have so many tips I'd like to share. I am on my computer almost all day, every day, so I have a treasure trove of interesting tidbits I like to share if given the chance. The only thing holding me back: which tip to share first? The first tip obviously has the weight of seeming like the most important, but this would mean choosing among my favorite tricks and shortcuts. This is a hard task.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
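
    As a hedged aside on two of the items above (the table names follow the reader's own example; the snippets are mine, not from the original posts): INTERSECT returns the common records without a JOIN, and PATINDEX locates the first non-numeric character.

        -- Common records from two tables without a JOIN (SQL Server 2005 and later)
        SELECT column1 FROM Table1
        INTERSECT
        SELECT column1 FROM Table2;

        -- Position of the first non-numeric character (returns 0 if the string is all digits)
        SELECT PATINDEX('%[^0-9]%', '123ABC456');  -- returns 4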

  • AD-Integrated DNS failure: "Access was Denied"

    - by goldPseudo
    I have a single Windows 2008 R2 server configured as a domain controller with Active Directory Domain Services and DNS Server. The DNS Server role was recently uninstalled and reinstalled in an attempt to fix a (possibly unrelated) problem; the event log had previously been flooded with errors (#4000, "The DNS Server was unable to open Active Directory..."), which reinstalling did not fix. However, while before it was at least showing and resolving names from the local network (slowly), now it's showing nothing at all. (The original trouble started with a #4015 error, "The DNS server has encountered a critical error from the Active Directory," followed by a long string of #4000 and a few #4004 errors. This may have been caused when a new DNS name was recently added, but I can't be sure of the timing.) Attempting to manage the DNS through Administrative Tools > DNS brings up an error:

    The server SERVERNAME could not be contacted. The error was: Access was denied. Would you like to add it anyway?

    Selecting yes just puts a SERVERNAME item on the list, but with all the configuration options grayed out. I attempted editing my hosts file as per this post, but to no avail. Running dcdiag, it does identify the home server properly, but fails right away testing connectivity with:

    Starting test: Connectivity
    The host blahblahblahyaddayaddayadda could not be resolved to an IP address. Check the DNS server, DHCP, server name, etc. Got error while checking LDAP and RPC connectivity. Please check your firewall settings.
    ......................... SERVERNAME failed test Connectivity

    Adding the blahblahblahyaddayaddayadda address to hosts (pointing at 127.0.0.1), the connectivity test succeeded, but it didn't seem to solve the fundamental problem ("Access was denied"), so I hashed it out again. The primary DNS server is properly pointing at 127.0.0.1 according to ipconfig /all. And the DNS server is forwarding requests to external addresses properly (if slowly), but the resolving of local network names is borked. The DNS database itself is small enough that I am (grudgingly) able to rebuild it if need be, but the DNS Server doesn't seem willing to let me work with (or around) it at all. (And yes, before you ask: there are no system backups available.) Where do I go from here? As requested, my (slightly obfuscated) dcdiag output:

    Directory Server Diagnosis
    Performing initial setup:
    Trying to find home server...
    Home Server = bulgogi
    * Identified AD Forest. Done gathering initial info.
    Doing initial required tests
    Testing server: Obfuscated\BULGOGI
    Starting test: Connectivity
    The host a-whole-lot-of-numbers._msdcs.obfuscated.address could not be resolved to an IP address. Check the DNS server, DHCP, server name, etc. Got error while checking LDAP and RPC connectivity. Please check your firewall settings.
    ......................... BULGOGI failed test Connectivity
    Doing primary tests
    Testing server: Obfuscated\BULGOGI
    Skipping all tests, because server BULGOGI is not responding to directory service requests.
    Running partition tests on : ForestDnsZones
    Starting test: CheckSDRefDom
    ......................... ForestDnsZones passed test CheckSDRefDom
    Starting test: CrossRefValidation
    ......................... ForestDnsZones passed test CrossRefValidation
    Running partition tests on : DomainDnsZones
    Starting test: CheckSDRefDom
    ......................... DomainDnsZones passed test CheckSDRefDom
    Starting test: CrossRefValidation
    ......................... DomainDnsZones passed test CrossRefValidation
    Running partition tests on : Schema
    Starting test: CheckSDRefDom
    ......................... Schema passed test CheckSDRefDom
    Starting test: CrossRefValidation
    ......................... Schema passed test CrossRefValidation
    Running partition tests on : Configuration
    Starting test: CheckSDRefDom
    ......................... Configuration passed test CheckSDRefDom
    Starting test: CrossRefValidation
    ......................... Configuration passed test CrossRefValidation
    Running partition tests on : obfuscated
    Starting test: CheckSDRefDom
    ......................... obfuscated passed test CheckSDRefDom
    Starting test: CrossRefValidation
    ......................... obfuscated passed test CrossRefValidation
    Running enterprise tests on : obfuscated.address
    Starting test: LocatorCheck
    ......................... obfuscated.address passed test LocatorCheck
    Starting test: Intersite
    ......................... obfuscated.address passed test Intersite

    And my hosts file (minus the hashed lines for brevity):

    127.0.0.1 localhost
    ::1 localhost

    And, for the sake of completion, here are selected chunks of my netstat -a -n output:

    TCP 0.0.0.0:88 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:135 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:389 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:445 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:464 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:593 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:636 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:3268 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:3269 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:3389 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:9389 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:47001 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:49152 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:49153 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:49154 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:49155 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:49157 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:49158 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:49164 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:49178 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:49179 0.0.0.0:0 LISTENING
    TCP 0.0.0.0:50480 0.0.0.0:0 LISTENING
    TCP 127.0.0.1:53 0.0.0.0:0 LISTENING
    TCP 192.168.12.127:53 0.0.0.0:0 LISTENING
    TCP 192.168.12.127:139 0.0.0.0:0 LISTENING
    TCP 192.168.12.127:445 192.168.12.50:51118 ESTABLISHED
    TCP 192.168.12.127:3389 192.168.12.4:33579 ESTABLISHED
    TCP 192.168.12.127:3389 192.168.12.100:1115 ESTABLISHED
    TCP 192.168.12.127:50784 192.168.12.50:49174 ESTABLISHED
    <snip ipv6>
    UDP 0.0.0.0:123 *:*
    UDP 0.0.0.0:500 *:*
    UDP 0.0.0.0:1645 *:*
    UDP 0.0.0.0:1645 *:*
    UDP 0.0.0.0:1646 *:*
    UDP 0.0.0.0:1646 *:*
    UDP 0.0.0.0:1812 *:*
    UDP 0.0.0.0:1812 *:*
    UDP 0.0.0.0:1813 *:*
    UDP 0.0.0.0:1813 *:*
    UDP 0.0.0.0:4500 *:*
    UDP 0.0.0.0:5355 *:*
    UDP 0.0.0.0:59638 *:*
    <snip a few thousand lines>
    UDP 0.0.0.0:62140 *:*
    UDP 127.0.0.1:53 *:*
    UDP 127.0.0.1:49540 *:*
    UDP 127.0.0.1:49541 *:*
    UDP 127.0.0.1:53655 *:*
    UDP 127.0.0.1:54946 *:*
    UDP 127.0.0.1:58345 *:*
    UDP 127.0.0.1:63352 *:*
    UDP 127.0.0.1:63728 *:*
    UDP 127.0.0.1:63729 *:*
    UDP 127.0.0.1:64215 *:*
    UDP 127.0.0.1:64646 *:*
    UDP 192.168.12.127:53 *:*
    UDP 192.168.12.127:67 *:*
    UDP 192.168.12.127:68 *:*
    UDP 192.168.12.127:88 *:*
    UDP 192.168.12.127:137 *:*
    UDP 192.168.12.127:138 *:*
    UDP 192.168.12.127:389 *:*
    UDP 192.168.12.127:464 *:*
    UDP 192.168.12.127:2535 *:*
    <snip ipv6 again>

  • SQL SERVER – What is AdventureWorks?

    - by pinaldave
    NOTE: If you know the answer to this question, then I request you to stop reading this post right now. Please do not leave a comment saying this blog post was not useful to you if you already knew the answer. A few days ago, I received a DM asking what the AdventureWorks database is and why I use it in all my examples instead of any other database (e.g. Pubs or Northwind). As a matter of fact, when I went back to my list of not-yet-answered questions, there were a few more variations of this same question. AdventureWorks is a sample database shipped with SQL Server, and it can be downloaded from the http://codeplex.com site. AdventureWorks replaced Northwind and Pubs as the sample database in SQL Server 2005, and the Microsoft team keeps updating it as they release new versions. Here are some quick links:

    AdventureWorks SQL Server 2008 SR4
    AdventureWorks 2008R2 November CTP
    AdventureWorks for SQL Azure (December CTP)
    AdventureWorks for SQL Server 2005 SP2A
    SQL SERVER – 2008 – Download and Install Samples Database AdventureWorks 2005 – Detail Tutorial

    I have previously written a few other articles on the same subject. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • SQL SERVER – Information Related to DATETIME and DATETIME2

    - by pinaldave
    I recently received an interesting comment on the blog regarding a workaround to overcome the precision issue when dealing with DATETIME and DATETIME2. I have written on this subject earlier, here:

    SQL SERVER – Difference Between GETDATE and SYSDATETIME
    SQL SERVER – Difference Between DATETIME and DATETIME2 – WITH GETDATE
    SQL SERVER – Difference Between DATETIME and DATETIME2

    SQL expert Jing Sheng Zhong left the following comment: The issue you found with SQL Server's new datetime type is related to time-source function precision. Folks have found the root reason of the problem: when datetime values are converted (implicitly or explicitly) between different data types, some precision is lost, so the results cannot match each other as expected. Here I would like to give a workaround to solve the problem which the developers met.

        -- Declare and loop
        DECLARE @Intveral INT, @CurDate DATETIMEOFFSET;
        CREATE TABLE #TimeTable (FirstDate DATETIME, LastDate DATETIME2, GlobalDate DATETIMEOFFSET)
        SET @Intveral = 10000
        WHILE (@Intveral > 0)
        BEGIN
            ----SET @CurDate = SYSDATETIMEOFFSET(); -- higher precision, for future use only
            SET @CurDate = TODATETIMEOFFSET(GETDATE(), DATEDIFF(N, GETUTCDATE(), GETDATE())); -- lower precision to match the existing date process
            INSERT #TimeTable (FirstDate, LastDate, GlobalDate) VALUES (@CurDate, @CurDate, @CurDate)
            SET @Intveral = @Intveral - 1
        END
        GO

        -- Distinct values
        SELECT COUNT(DISTINCT FirstDate) D_DATETIME,
               COUNT(DISTINCT LastDate) D_DATETIME2,
               COUNT(DISTINCT GlobalDate) D_SYSGETDATE
        FROM #TimeTable
        GO

        -- Join
        SELECT DISTINCT a.FirstDate, b.LastDate, b.GlobalDate,
               CAST(b.GlobalDate AS DATETIME) GlobalDateASDateTime
        FROM #TimeTable a
        INNER JOIN #TimeTable b ON a.FirstDate = CAST(b.GlobalDate AS DATETIME)
        GO

        -- Select
        SELECT * FROM #TimeTable
        GO

        -- Clean up
        DROP TABLE #TimeTable
        GO

    If you read my blog SQL SERVER – Difference Between DATETIME and DATETIME2 you will notice that I have achieved the same thing using GETDATE(). Are you using DATETIME2 in your production environment? If yes, I am interested to know about the use case. Reference: Pinal Dave (http://www.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • Problem with setup VPN on Ubuntu Server 12.04

    - by Yozone W.
    I have a problem with setup VPN server on my Ubuntu VPS, here is my server environments: Ubuntu Server 12.04 x86_64 xl2tpd 1.3.1+dfsg-1 pppd 2.4.5-5ubuntu1 openswan 1:2.6.38-1~precise1 After install software and configuration: ipsec verify Checking your system to see if IPsec got installed and started correctly: Version check and ipsec on-path [OK] Linux Openswan U2.6.38/K3.2.0-24-virtual (netkey) Checking for IPsec support in kernel [OK] SAref kernel support [N/A] NETKEY: Testing XFRM related proc values [OK] [OK] [OK] Checking that pluto is running [OK] Pluto listening for IKE on udp 500 [OK] Pluto listening for NAT-T on udp 4500 [OK] Checking for 'ip' command [OK] Checking /bin/sh is not /bin/dash [WARNING] Checking for 'iptables' command [OK] Opportunistic Encryption Support [DISABLED] /var/log/auth.log message: Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [RFC 3947] method set to=115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] meth=114, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-08] meth=113, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-07] meth=112, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-06] meth=111, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-05] meth=110, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-04] meth=109, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: ignoring Vendor ID payload [FRAGMENTATION 80000000] Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [Dead Peer Detection] Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: responding to Main Mode from unknown peer [My IP Address] Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1 Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R1: sent MR1, expecting MI2 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): peer is NATed Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R2: sent MR2, expecting MI3 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: ignoring informational payload, type 
IPSEC_INITIAL_CONTACT msgid=00000000
Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: Main mode peer ID is ID_IPV4_ADDR: '192.168.12.52'
Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: switched from "L2TP-PSK-NAT" to "L2TP-PSK-NAT"
Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0}
Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: transition from state STATE_MAIN_R2 to state STATE_MAIN_R3
Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: new NAT mapping for #5, was [My IP Address]:2251, now [My IP Address]:2847
Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=aes_256 prf=oakley_sha group=modp1024}
Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: Dead Peer Detection (RFC 3706): enabled
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: the peer proposed: [My Server IP Address]/32:17/1701 -> 192.168.12.52/32:17/0
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: NAT-Traversal: received 2 NAT-OA. using first, ignoring others
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: responding to Quick Mode proposal {msgid:8579b1fb}
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: us: [My Server IP Address]<[My Server IP Address]>:17/1701
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: them: [My IP Address][192.168.12.52]:17/65280===192.168.12.52/32
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: Dead Peer Detection (RFC 3706): enabled
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2
Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x08bda158 <0x4920a374 xfrm=AES_256-HMAC_SHA1 NATOA=192.168.12.52 NATD=[My IP Address]:2847 DPD=enabled}
Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA(0x08bda158) payload: deleting IPSEC State #6
Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: ERROR: netlink XFRM_MSG_DELPOLICY response for flow eroute_connection delete included errno 2: No such file or directory
Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received and ignored informational message
Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA payload: deleting ISAKMP State #5
Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address]: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0}
Oct 16 06:51:16 vpn pluto[3963]: packet from [My IP Address]:2847: received and ignored informational message

Output of xl2tpd -D:

xl2tpd[4289]: Enabling IPsec SAref processing for L2TP transport mode SAs
xl2tpd[4289]: IPsec SAref does not work with L2TP kernel mode yet, enabling forceuserspace=yes
xl2tpd[4289]: setsockopt recvref[30]: Protocol not available
xl2tpd[4289]: This binary does not support kernel L2TP.
xl2tpd[4289]: xl2tpd version xl2tpd-1.3.1 started on vpn.netools.me PID:4289
xl2tpd[4289]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc.
xl2tpd[4289]: Forked by Scott Balmos and David Stipp, (C) 2001
xl2tpd[4289]: Inherited by Jeff McAdams, (C) 2002
xl2tpd[4289]: Forked again by Xelerance (www.xelerance.com) (C) 2006
xl2tpd[4289]: Listening on IP address [My Server IP Address], port 1701

It just stops there with no further output, and I can't connect the VPN from my Mac client. The /var/log/system.log messages on the client are:

Oct 16 15:17:36 azone-iMac.local configd[17]: SCNC: start, triggered by SystemUIServer, type L2TP, status 0
Oct 16 15:17:36 azone-iMac.local pppd[3799]: pppd 2.4.2 (Apple version 596.13) started by azone, uid 501
Oct 16 15:17:38 azone-iMac.local pppd[3799]: L2TP connecting to server 'vpn.netools.me' ([My Server IP Address])...
Oct 16 15:17:38 azone-iMac.local pppd[3799]: IPSec connection started
Oct 16 15:17:38 azone-iMac.local racoon[359]: Connecting.
Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 started (Initiated by me).
Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 1).
Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 2).
Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 3).
Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 4).
Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 5).
Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 AUTH: success. (Initiator, Main-Mode Message 6).
Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 6).
Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 Initiator: success. (Initiator, Main-Mode).
Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 established (Initiated by me).
Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 started (Initiated by me).
Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1).
Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Quick-Mode message 2).
Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3).
Oct 16 15:17:39 azone-iMac.local racoon[359]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode).
Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 established (Initiated by me).
Oct 16 15:17:39 azone-iMac.local pppd[3799]: IPSec connection established
Oct 16 15:17:59 azone-iMac.local pppd[3799]: L2TP cannot connect to the server
Oct 16 15:17:59 azone-iMac.local racoon[359]: IPSec disconnecting from server [My Server IP Address]
Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message).
Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete IPSEC-SA).
Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message).
Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA).

Can anyone help? Thanks a million!
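
For reference, the connection the pluto log keeps naming ("L2TP-PSK-NAT") is conventionally defined in /etc/ipsec.conf along the following lines. This is a sketch of the widely used Openswan L2TP/IPsec transport-mode recipe, not the poster's actual file, so every value below is an assumption:

conn L2TP-PSK-NAT
    rightsubnet=vhost:%priv     # accept clients behind NAT, consistent with the NAT mapping lines above
    also=L2TP-PSK-noNAT

conn L2TP-PSK-noNAT
    authby=secret               # pre-shared key, matching auth=OAKLEY_PRESHARED_KEY in the log
    type=transport              # matches "IPsec SA established transport mode"
    left=%defaultroute
    leftprotoport=17/1701       # UDP 1701, the L2TP port xl2tpd listens on
    right=%any
    rightprotoport=17/%any      # NATed clients arrive from an arbitrary source port
    pfs=no
    auto=add

Comparing a working definition like this against the server's actual file is a reasonable first step, since the log shows IPsec succeeding and only the L2TP stage going silent.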

    Read the article

  • SQLAuthority News – Download Whitepaper – Understanding and Controlling Parallel Query Processing in SQL Server

    - by pinaldave
    My recent article SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database has received many good comments regarding MAXDOP 1 and MAXDOP 0. I really enjoyed reading the comments, as they came from industry leaders and gurus. While researching the subject further, I ended up at the following white paper written by Microsoft: Understanding and Controlling Parallel Query Processing in SQL Server. Data warehousing and general reporting applications tend to be CPU intensive because they need to read and process a large number of rows. To facilitate quick data processing for queries that touch a large amount of data, Microsoft SQL Server exploits the power of multiple logical processors to provide parallel query processing operations such as parallel scans. Through extensive testing, we have learned that, for most large queries that are executed in a parallel fashion, SQL Server can deliver linear or nearly linear response time speedup as the number of logical processors increases. However, some queries in high parallelism scenarios perform suboptimally. There are also some parallelism issues that can occur in a multi-user parallel query workload. This white paper describes parallel performance problems you might encounter when you run such queries and workloads, and it explains why these issues occur. In addition, it presents how data warehouse developers can detect these issues, and how they can work around them or mitigate them. To review the document, please download the Understanding and Controlling Parallel Query Processing in SQL Server Word document. Note: The above abstract has been taken from here. The real question is: have parallel queries made the DBA's life much simpler, or are they viewed as a potential source of performance degradation? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology
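
    For readers who want to experiment with the settings the comments debate, here is a minimal T-SQL sketch; the table name and the value 4 are illustrative assumptions, not recommendations from the whitepaper:

    -- How much time has this instance spent coordinating parallel workers?
    SELECT wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type = 'CXPACKET';

    -- Server-wide cap on parallelism (0 = let SQL Server use all schedulers).
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 4; -- 4 is an illustrative cap
    RECONFIGURE;

    -- Per-query override: the MAXDOP hint wins over the server setting.
    SELECT COUNT(*)
    FROM dbo.BigFactTable -- hypothetical large table
    OPTION (MAXDOP 1);    -- force a serial plan for this query only

    The hint-based approach is usually the safer experiment, since it confines the change to a single query instead of the whole instance.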

    Read the article

  • SQLAuthority News – Free eBook Download – Introducing Microsoft SQL Server 2008 R2

    - by pinaldave
    Microsoft Press has published a FREE eBook on the most awaited release of SQL Server 2008 R2. The book is written by Ross Mistry and Stacia Misner. Ross is a personal friend of mine and one of the most active book writers in the SQL Server domain. When I see his name on any book, I am sure it will be a high-quality, easy-to-read book. The details about the book are here: Introducing Microsoft SQL Server 2008 R2, by Ross Mistry and Stacia Misner. The book contains 10 chapters and 216 pages.
    PART I   Database Administration
    CHAPTER 1   SQL Server 2008 R2 Editions and Enhancements
    CHAPTER 2   Multi-Server Administration
    CHAPTER 3   Data-Tier Applications
    CHAPTER 4   High Availability and Virtualization Enhancements
    CHAPTER 5   Consolidation and Monitoring
    PART II   Business Intelligence Development
    CHAPTER 6   Scalable Data Warehousing
    CHAPTER 7   Master Data Services
    CHAPTER 8   Complex Event Processing with StreamInsight
    CHAPTER 9   Reporting Services Enhancements
    CHAPTER 10   Self-Service Analysis with PowerPivot
    More details about the book are listed here. You can download the eBook in XPS format here and in PDF format here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Pinal Dave, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article
