Search Results

Search found 109076 results on 4364 pages for 'ubuntu server edition'.


  • How to get Visual Studio 2010 Express Edition on Windows 7

    - by thanigai
    Visual Studio 2010 is an amazing release from Microsoft. I tried beta 1 and beta 2 of Visual Studio 2010, and now the full version has been released. I am also interested in the latest edition of Windows, which is none other than Windows 7; after Vista, I like this version very much. Out of curiosity I had installed a pre-release build of Windows 7. I tried installing the Express edition there and it failed, which was disappointing. After two or three attempts I decided to download the trial version of Windows 7, and on it I could install the Visual Studio 2010 Express edition easily. I have given the link below from where I downloaded the file. http://www.microsoft.com/express/downloads/ The link here installs through the Web PI installer; the other option is to download the ISO image file and burn it to a disc or use a virtual disc. Visual Web Developer provides only the SQL Server engine; to get SQL Server Management Studio, use the following link: http://www.microsoft.com/express/Database/InstallOptions.aspx That's it, everything necessary for web application programming is ready. Ah, I almost forgot to mention Silverlight. Please find the latest Silverlight 4 tools at the links below (WCF RIA Services is the main update): http://www.silverlight.net/getstarted/ Silverlight 4 Tools (http://www.microsoft.com/downloads/details.aspx?FamilyID=eff8a0da-0a4d-48e8-8366-6ddf2ecad801&displaylang=en) Expression Blend 4 trial (http://www.microsoft.com/downloads/details.aspx?FamilyID=88484825-1b3c-4e8c-8b14-b05d025e1541&displaylang=en) I hope the reader enjoyed reading about how to get these things. Please let me know if any of this is unclear. Thanks, Thani

    Read the article

  • Oracle Database 11g Express Edition (XE)

    - by Yuichi Hayashi
    Oracle Database 11g (11.2.0.2) is available as Oracle Database 11g Express Edition (XE), a free edition of Oracle Database built on the same 11.2.0.2 code line (v$version reports 11.2.0.2). It can be downloaded from the Oracle Technology Network (OTN): http://www.oracle.com/technetwork/database/express-edition/downloads/index.html XE comes with the following limits: it uses at most one CPU even on multi-CPU machines, only one installation (one database) per machine is allowed, user data is limited to 11 GByte, memory usage is limited to 1 GByte of RAM, and the supported platforms are Windows (x86) and Linux (x86_64). Oracle 11g XE includes a browser-based administration console for basic management, and SQL Developer can be used as a client tool; see: http://blogs.oracle.com/oracle4engineer/entry/sql_developersql Oracle Application Express (APEX) is also bundled with Oracle 11g XE for building web applications; for an introduction, features, and download information on APEX, see: http://blogs.oracle.com/oracle4engineer/entry/cat_apex Backup and recovery are supported through RMAN (Recovery Manager), so routine DBA tasks such as backup and restore can also be practiced on XE. In short, Oracle 11g XE is an easy way to start using and evaluating Oracle Database.

    Read the article

  • Upgrading from SQL Server 2008 Express to 2008 Developer

    - by josecortesp
    Hey guys, this one is a quick question: what is the best way (or THE way) to change my SQL Server 2008 Express (with advanced...) installation to the 2008 Developer edition? I need to keep the databases, along with the logins and so on. I need to upgrade because I want to use all the features in TFS 2010. Do I have to back up all the data, uninstall Express and install Developer? Is there a quicker way? Thanks in advance. SOLVED: In the SQL Server Installation Center there is an Edition Upgrade option under Maintenance. The only thing is that you have to choose x86 as the processor type in the Installation Center options (as Express is x86 only). Now my SQL Server is Developer edition.
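    As a quick sanity check around the Edition Upgrade step mentioned above, the edition the instance reports can be queried before and after the upgrade. This is a minimal sketch using the standard SERVERPROPERTY function; nothing specific to the asker's setup is assumed.

        -- Report the edition, version and service-pack level of the connected instance
        SELECT SERVERPROPERTY('Edition')        AS Edition,         -- e.g. Express vs. Developer edition
               SERVERPROPERTY('ProductVersion') AS ProductVersion,  -- e.g. 10.0.xxxx for SQL Server 2008
               SERVERPROPERTY('ProductLevel')   AS ProductLevel;    -- RTM or SPx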

    Read the article

  • SQL Server CLR Integration to achieve Encryption/Decryption

    - by Aakash
    I have a requirement to store data in encrypted form in database tables. I want to do it at the database level, but these are the problems I am facing: (a) the data type of the encrypted field has to be VARBINARY; (b) encryption is not supported by the Workgroup edition; (c) is it even possible to encrypt numeric fields? I want to access the encrypted data in tables from views and stored procedures for some processing, but because of the above problems I am not able to. Here is my environment: development platform - ASP.NET, .NET Framework 3.5, Visual Studio 2008; server operating system - Windows Server 2008; database - SQL Server 2008 Workgroup edition. I was also thinking of a different approach to resolve this issue (yet to test its feasibility). I was wondering if I could create a CLR function (which takes parameters and encrypts/decrypts data using the cryptography types provided in the .NET Framework), use the CLR integration feature of SQL Server, and call that function from stored procedures and views. I am not sure if I am thinking in the right direction? Any advice on this as well, please.
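    For the CLR-based approach described above, CLR integration first has to be enabled on the instance. The sketch below shows that step plus a hypothetical assembly and function registration; the assembly path, class, function names and permission set are placeholders (not from the original question), and cryptography classes may require a permission set higher than SAFE.

        -- Step 1: enable CLR integration (it is off by default)
        EXEC sp_configure 'clr enabled', 1;
        RECONFIGURE;
        GO
        -- Step 2 (hypothetical names): register the .NET assembly that wraps
        -- System.Security.Cryptography and expose one of its methods as a T-SQL function
        CREATE ASSEMBLY CryptoHelpers
            FROM 'C:\Assemblies\CryptoHelpers.dll'
            WITH PERMISSION_SET = SAFE;   -- may need EXTERNAL_ACCESS/UNSAFE depending on the crypto classes used
        GO
        CREATE FUNCTION dbo.fn_Encrypt (@plain NVARCHAR(MAX), @key NVARCHAR(128))
        RETURNS VARBINARY(MAX)
        AS EXTERNAL NAME CryptoHelpers.[CryptoHelpers.CryptoFunctions].Encrypt;
        GO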

    Read the article

  • Installation error on Ubuntu 11.10

    - by Abhishek Chanda
    I upgraded to Ubuntu 11.10 and now, when I try to install or uninstall a software, I get this error installArchives() failed: (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 158945 files and directories currently installed.) Removing aisleriot ... Processing triggers for gconf2 ... Processing triggers for man-db ... Processing triggers for hicolor-icon-theme ... Processing triggers for libglib2.0-0 ... Processing triggers for gnome-menus ... Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Setting up flashplugin-downloader (11.0.1.152ubuntu1) ... Downloading... --2012-05-02 18:47:29-- http://archive.canonical.com/pool/partner/a/adobe-flashplugin/adobe-flashplugin_11.0.1.152.orig.tar.gz Resolving archive.canonical.com... 91.189.92.150, 91.189.92.191 Connecting to archive.canonical.com|91.189.92.150|:80... connected. HTTP request sent, awaiting response... 404 Not Found 2012-05-02 18:47:29 ERROR 404: Not Found. download failed The Flash plugin is NOT installed. dpkg: error processing flashplugin-downloader (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of flashplugin-installer: flashplugin-installer depends on flashplugin-downloader (>= 11.0.1.152ubuntu1); however: Package flashplugin-downloader is not configured yet. dpkg: error processing flashplugin-installer (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: flashplugin-downloader flashplugin-installer Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1) Setting up flashplugin-downloader (11.0.1.152ubuntu1) ... Downloading... --2012-05-02 18:47:33-- http://archive.canonical.com/pool/partner/a/adobe-flashplugin/adobe-flashplugin_11.0.1.152.orig.tar.gz Resolving archive.canonical.com... 91.189.92.191, 91.189.92.150 Connecting to archive.canonical.com|91.189.92.191|:80... connected. HTTP request sent, awaiting response... 404 Not Found 2012-05-02 18:47:34 ERROR 404: Not Found. download failed The Flash plugin is NOT installed. dpkg: error processing flashplugin-downloader (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of flashplugin-installer: flashplugin-installer depends on flashplugin-downloader (>= 11.0.1.152ubuntu1); however: Package flashplugin-downloader is not configured yet. dpkg: error processing flashplugin-installer (--configure): dependency problems - leaving unconfigured This seems to be a bug that has been reported.Does anyone know a workaround?

    Read the article

  • Very uneven CPU utilization with SQL Server 2012 on a 2-processor computer with 16 cores per processor

    - by cooplarsh
    After installing SQL Server 2012 Enterprise with the Server + CAL license model on a computer with 2 processors, each with 16 cores (and no hyperthreading involved), and putting the server under extremely heavy load, the 16 cores on the first processor were very underutilized, the first 4 cores on the second CPU were heavily utilized, and the last 12 cores were not used at all (because of the 20-core limit for this SQL Server version). Total CPU utilization was displaying as around 25%. Unfortunately, the server suffered from extremely poor performance, even though if the tasks had been evenly distributed across the 20 cores it wouldn't have been anywhere near as bad. The Windows Server was running as a VMware virtual machine under ESX Server, but all of the CPUs were allocated to the Windows server. We tried changing affinity settings (e.g., allocating most cores to CPU and the others to I/O), but that didn't help solve the performance problems. Upgrading the product edition to SQL Server 2012 Enterprise Core not only allowed SQL Server to utilize the 12 previously unused cores on the second processor, but also resulted in a much more even distribution of tasks across all of the processors. To get through the backlog of requests, CPU utilization jumped to around 90% and then came down to around 33% once it was caught up; performance improved dramatically once we failed over to the newly upgraded instance, and the performance issues went away. I was wondering if anyone knows what might cause SQL Server to distribute the load so unevenly, relying almost exclusively on the first 4 cores of the second processor while its other 12 cores sat idle, and allocating only a few tasks to each of the 16 cores on the first processor. Also, is there any way we could have more evenly distributed the load across the 20 cores that were being used without the product edition upgrade? The flip side of that question is: what did the product upgrade do that caused SQL Server to start evenly distributing the load across all of the cores that it recognized? Thanks for any insight and/or links that might help me better understand what was happening.
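    One hedged way to see the situation described above from inside SQL Server is to look at scheduler status per NUMA node; under the 20-core license limit, some schedulers show up as VISIBLE OFFLINE. This is only a diagnostic sketch, not a fix.

        -- Count schedulers per NUMA node and status; VISIBLE OFFLINE schedulers
        -- are cores the edition/license will not use for query work
        SELECT parent_node_id AS numa_node,
               [status],
               COUNT(*)       AS scheduler_count
        FROM sys.dm_os_schedulers
        WHERE [status] LIKE 'VISIBLE%'      -- ignore hidden/internal schedulers
        GROUP BY parent_node_id, [status]
        ORDER BY parent_node_id, [status];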

    Read the article

  • Migrating from SQL Server to Firebird: pros and cons

    - by user193655
    I am considering the migration for 4 reasons: 1) SQL Server installation is a nightmare, especially for 1-user installations (even though I typically have 3-20 users, sometimes I sell my software to single users: it is incredible to have trouble installing the DB while installing the application means copying an exe...). (Note: my largest installation is 100 users, but there is no upper limit.) The software installs in 10 seconds, SQL Server in 1 hour. Firebird installation is much easier. 2) SQL Server runs on Windows servers only. 3) My customers all have the Express edition. 4) I am not using any advanced features. I am now starting to use FILESTREAM, but the main reason for this is that the Express edition has a 4/10 GB database size limit. So these are all pros of moving to Firebird. What are the cons? I could also plan to support both platforms, but I fear this will backfire.
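    Regarding reason 4, a quick way to see how close each database is to the Express data-file cap is to sum the data-file sizes from sys.master_files. A minimal sketch follows; the 4 GB limit applies up to SQL Server 2008 Express and 10 GB from 2008 R2 Express onward, and log files are not counted toward it.

        -- Data-file size per database, in MB (size is stored in 8 KB pages)
        SELECT d.name                  AS database_name,
               SUM(mf.size) * 8 / 1024 AS data_size_mb
        FROM sys.master_files AS mf
        JOIN sys.databases    AS d ON d.database_id = mf.database_id
        WHERE mf.type_desc = 'ROWS'    -- data files only; the limit ignores log files
        GROUP BY d.name
        ORDER BY data_size_mb DESC;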

    Read the article

  • hostapd running on Ubuntu Server 13.04 only allows a single station to connect when using WPA

    - by user450688
    Problem Only a single station can connect to hostapd at a time. Any single station can connect (W8, OSX, iOS, Nexus) but when two or more hosts are connected at the same time the first client loses its connectivity. However there are no connectivity issues when WPA is not used. Setup Linux (Ubuntu server 13.04) wireless router (with separate networks for wired WAN, wired LAN, and Wireless LAN. iptables-save output: *nat :PREROUTING ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] -A POSTROUTING -s 10.0.0.0/24 -o p4p1 -j MASQUERADE -A POSTROUTING -s 10.0.1.0/24 -o p4p1 -j MASQUERADE COMMIT *mangle :PREROUTING ACCEPT [13:916] :INPUT ACCEPT [9:708] :FORWARD ACCEPT [4:208] :OUTPUT ACCEPT [9:3492] :POSTROUTING ACCEPT [13:3700] COMMIT *filter :INPUT DROP [0:0] :FORWARD DROP [0:0] :OUTPUT ACCEPT [9:3492] -A INPUT -i p4p1 -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -i p4p1 -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT -A INPUT -i eth0 -j ACCEPT -A INPUT -i wlan0 -j ACCEPT -A INPUT -i lo -j ACCEPT -A FORWARD -i p4p1 -m state --state RELATED,ESTABLISHED -j ACCEPT -A FORWARD -i eth0 -j ACCEPT -A FORWARD -i wlan0 -j ACCEPT -A FORWARD -i lo -j ACCEPT COMMIT /etc/hostapd/hostapd.conf #Wireless Interface interface=wlan0 driver=nl80211 ssid=<removed> hw_mode=g channel=6 max_num_sta=15 auth_algs=3 ieee80211n=1 wmm_enabled=1 wme_enabled=1 #Configure Hardware Capabilities of Interface ht_capab=[HT40+][SMPS-STATIC][GF][SHORT-GI-20][SHORT-GI-40][RX-STBC12] #Accept all MAC address macaddr_acl=0 #Shared Key Authentication wpa=1 wpa_passphrase=<removed> wpa_key_mgmt=WPA-PSK wpa_pairwise=CCMP rsn_pairwise=CCMP ###IPad Connectivevity Repair ieee8021x=0 eap_server=0 Wireless Card #lshw output product: RT2790 Wireless 802.11n 1T/2R PCIe vendor: Ralink corp. 
physical id: 0 bus info: pci@0000:03:00.0 logical name: mon.wlan0 version: 00 serial: <removed> width: 32 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list logical wireless ethernet physical configuration: broadcast=yes driver=rt2800pci driverversion=3.8.0-25-generic firmware=0.34 ip=10.0.1.254 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn #iw list output Band 1: Capabilities: 0x272 HT20/HT40 Static SM Power Save RX Greenfield RX HT20 SGI RX HT40 SGI RX STBC 2-streams Max AMSDU length: 3839 bytes No DSSS/CCK HT40 Maximum RX AMPDU length 65535 bytes (exponent: 0x003) Minimum RX AMPDU time spacing: 2 usec (0x04) HT RX MCS rate indexes supported: 0-15, 32 TX unequal modulation not supported HT TX Max spatial streams: 1 HT TX MCS rate indexes supported may differ Frequencies: * 2412 MHz [1] (27.0 dBm) * 2417 MHz [2] (27.0 dBm) * 2422 MHz [3] (27.0 dBm) * 2427 MHz [4] (27.0 dBm) * 2432 MHz [5] (27.0 dBm) * 2437 MHz [6] (27.0 dBm) * 2442 MHz [7] (27.0 dBm) * 2447 MHz [8] (27.0 dBm) * 2452 MHz [9] (27.0 dBm) * 2457 MHz [10] (27.0 dBm) * 2462 MHz [11] (27.0 dBm) * 2467 MHz [12] (disabled) * 2472 MHz [13] (disabled) * 2484 MHz [14] (disabled) Bitrates (non-HT): * 1.0 Mbps * 2.0 Mbps (short preamble supported) * 5.5 Mbps (short preamble supported) * 11.0 Mbps (short preamble supported) * 6.0 Mbps * 9.0 Mbps * 12.0 Mbps * 18.0 Mbps * 24.0 Mbps * 36.0 Mbps * 48.0 Mbps * 54.0 Mbps max # scan SSIDs: 4 max scan IEs length: 2257 bytes Coverage class: 0 (up to 0m) Supported Ciphers: * WEP40 (00-0f-ac:1) * WEP104 (00-0f-ac:5) * TKIP (00-0f-ac:2) * CCMP (00-0f-ac:4) Available Antennas: TX 0 RX 0 Supported interface modes: * IBSS * managed * AP * AP/VLAN * WDS * monitor * mesh point software interface modes (can always be added): * AP/VLAN * monitor valid interface combinations: * #{ AP } <= 8, total <= 8, #channels <= 1 Supported commands: * new_interface * set_interface * new_key * new_beacon * new_station * new_mpath * set_mesh_params * set_bss * authenticate * associate * deauthenticate * disassociate * join_ibss * join_mesh * set_tx_bitrate_mask * set_tx_bitrate_mask * action * frame_wait_cancel * set_wiphy_netns * set_channel * set_wds_peer * Unknown command (84) * Unknown command (87) * Unknown command (85) * Unknown command (89) * Unknown command (92) * testmode * connect * disconnect Supported TX frame types: * IBSS: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * managed: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * AP: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * AP/VLAN: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * mesh point: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * P2P-client: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * P2P-GO: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * Unknown mode (10): 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 Supported RX frame types: * IBSS: 0x40 0xb0 0xc0 0xd0 * managed: 0x40 0xd0 * AP: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0 * AP/VLAN: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0 * mesh point: 0xb0 0xc0 0xd0 * P2P-client: 0x40 0xd0 * P2P-GO: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0 * Unknown mode (10): 0x40 0xd0 Device supports RSN-IBSS. 
HT Capability overrides: * MCS: ff ff ff ff ff ff ff ff ff ff * maximum A-MSDU length * supported channel width * short GI for 40 MHz * max A-MPDU length exponent * min MPDU start spacing Device supports TX status socket option. Device supports HT-IBSS.

    Read the article

  • Linux networking: how to redirect incoming connections from an old server to a new server?

    - by aliz
    Hi, I'm in the process of moving my old server to a new server, but I will keep the old server running for database replication, load balancing, etc. Each server has a separate internet connection with a static IP, and they are connected to each other through a local Ethernet connection. I've got Ubuntu 8.04 32-bit running on the old server and Debian 6.0 64-bit on the new one. The Shorewall firewall is installed on both servers. There are some outdoor devices which periodically send data to port 43597 on the old server's IP address. I can run multiple instances of the network service which is responsible for receiving data from the devices on one server, but only on different ports. Here's the question: how can I run the service on the new server and have connections coming in to the old server redirected to it, while new devices can still connect to the new server's IP address, preferably on the same port and the same service, until all devices get updated to send to the new server? I've tried a Shorewall DNAT rule, but it seems the new server's default route would have to be changed to the Ethernet connection, which breaks other things. I also found out about the redir utility, but I haven't tried it yet. Is there any best practice or simple solution for such a scenario that I'm not aware of? Thanks in advance.

    Read the article

  • Configuring SQL Server Express 2005

    - by MrTognio
    What's the proper way to configure SQL Server Express 2005 so that it allows a number of clients to connect to the server? I have my application running both on the server machine and on the client machines. Given the nature of my application, the clients are branches that are geographically distant from each other and from the server itself. Every operation a client records must be reported to the server, because the server needs total control over usage and production. But what should I consider when configuring the connection on both sides, the server and the client? I'm not very used to SQL Server (I'm a beginner); I have set the main options through SQL Server Configuration Manager, but without success. The problem seems to be related to trusted connections, even though I have set the server to support both Windows and SQL Server authentication. When the client tries to connect to the server using Windows authentication, it displays no tables; when it tries to communicate using a password (SQL Server authentication), the tables are successfully displayed but no access is allowed... Thanks in advance!
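    Since SQL Server authentication is the mode that partially works in the scenario above, here is a minimal sketch of provisioning a branch client with its own SQL login and database user. The login, password and database names are placeholders, and the instance must already be set to mixed-mode authentication with TCP/IP connections enabled.

        -- Create a SQL Server login for a branch client (placeholder names)
        USE master;
        CREATE LOGIN branch_client WITH PASSWORD = 'Str0ng!Passw0rd', CHECK_POLICY = ON;
        GO
        -- Map the login to a user in the application database and grant read/write access
        USE BranchDB;
        CREATE USER branch_client FOR LOGIN branch_client;
        EXEC sp_addrolemember 'db_datareader', 'branch_client';
        EXEC sp_addrolemember 'db_datawriter', 'branch_client';
        GO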

    Read the article

  • SQL SERVER – Extending SQL Azure with Azure worker role – Guest Post by Paras Doshi

    - by pinaldave
    This is guest post by Paras Doshi. Paras Doshi is a research Intern at SolidQ.com and a Microsoft student partner. He is currently working in the domain of SQL Azure. SQL Azure is nothing but a SQL server in the cloud. SQL Azure provides benefits such as on demand rapid provisioning, cost-effective scalability, high availability and reduced management overhead. To see an introduction on SQL Azure, check out the post by Pinal here In this article, we are going to discuss how to extend SQL Azure with the Azure worker role. In other words, we will attempt to write a custom code and host it in the Azure worker role; the aim is to add some features that are not available with SQL Azure currently or features that need to be customized for flexibility. This way we extend the SQL Azure capability by building some solutions that run on Azure as worker roles. To understand Azure worker role, think of it as a windows service in cloud. Azure worker role can perform background processes, and to handle processes such as synchronization and backup, it becomes our ideal tool. First, we will focus on writing a worker role code that synchronizes SQL Azure databases. Before we do so, let’s see some scenarios in which synchronization between SQL Azure databases is beneficial: scaling out access over multiple databases enables us to handle workload efficiently As of now, SQL Azure database can be hosted in one of any six datacenters. By synchronizing databases located in different data centers, one can extend the data by enabling access to geographically distributed data Let us see some scenarios in which SQL server to SQL Azure database synchronization is beneficial To backup SQL Azure database on local infrastructure Rather than investing in local infrastructure for increased workloads, such workloads could be handled by cloud Ability to extend data to different datacenters located across the world to enable efficient data access from remote locations Now, let us develop cloud-based app that synchronizes SQL Azure databases. For an Introduction to developing cloud based apps, click here Now, in this article, I aim to provide a bird’s eye view of how a code that synchronizes SQL Azure databases look like and then list resources that can help you develop the solution from scratch. Now, if you newly add a worker role to the cloud-based project, this is how the code will look like. (Note: I have added comments to the skeleton code to point out the modifications that will be required in the code to carry out the SQL Azure synchronization. Note the placement of Setup() and Sync() function.) Click here (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-1-for-extending-sql-azure-with-azure-worker-role1.pdf ) Enabling SQL Azure databases synchronization through sync framework is a two-step process. In the first step, the database is provisioned and sync framework creates tracking tables, stored procedures, triggers, and tables to store metadata to enable synchronization. This is one time step. The code for the same is put in the setup() function which is called once when the worker role starts. Now, the second step is continuous (or on demand) synchronization of SQL Azure databases by propagating changes between databases. This is done on a continuous basis by calling the sync() function in the while loop. The code logic to synchronize changes between SQL Azure databases should be put in the sync() function. Discussing the coding part step by step is out of the scope of this article. 
Therefore, let me suggest you a resource, which is given here. Also, note that before you start developing the code, you will need to install SYNC framework 2.1 SDK (download here). Further, you will reference some libraries before you start coding. Details regarding the same are available in the article that I just pointed to. You will be charged for data transfers if the databases are not in the same datacenter. For pricing information, go here Currently, a tool named DATA SYNC, which is built on top of sync framework, is available in CTP that allows SQL Azure <-> SQL server and SQL Azure <-> SQL Azure synchronization (without writing single line of code); however, in some cases, the custom code shown in this blogpost provides flexibility that is not available with Data SYNC. For instance, filtering is not supported in the SQL Azure DATA SYNC CTP2; if you wish to have such a functionality now, then you have the option of developing a custom code using SYNC Framework. Now, this code can be easily extended to synchronize at some schedule. Let us say we want the databases to get synchronized every day at 10:00 pm. This is what the code will look like now: (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-2-for-extending-sql-azure-with-azure-worker-role.pdf) Don’t you think that by writing such a code, we are imitating the functionality provided by the SQL server agent for a SQL server? Think about it. We are scheduling our administrative task by writing custom code – in other words, we have developed a “Light weight SQL server agent for SQL Azure!” Since the SQL server agent is not currently available in cloud, we have developed a solution that enables us to schedule tasks, and thus we have extended SQL Azure with the Azure worker role! Now if you wish to track jobs, you can do so by storing this data in SQL Azure (or Azure tables). The reason is that Windows Azure is a stateless platform, and we will need to store the state of the job ourselves and the choice that you have is SQL Azure or Azure tables. Note that this solution requires custom code and also it is not UI driven; however, for now, it can act as a temporary solution until SQL server agent is made available in the cloud. Moreover, this solution does not encompass functionalities that a SQL server agent provides, but it does open up an interesting avenue to schedule some of the tasks such as backup and synchronization of SQL Azure databases by writing some custom code in the Azure worker role. Now, let us see one more possibility – i.e., running BCP through a worker role in Azure-hosted services and then uploading the backup files either locally or on blobs. If you upload it locally, then consider the data transfer cost. If you upload it to blobs residing in the same datacenter, then no transfer cost applies but the cost on blob size applies. So, before choosing the option, you need to evaluate your preferences keeping the cost associated with each option in mind. In this article, I have shown that Azure worker role solution could be developed to synchronize SQL Azure databases. Moreover, a light-weight SQL server agent for SQL Azure can be developed. Also we discussed the possibility of running BCP through a worker role in Azure-hosted services for backing up our precious SQL Azure data. Thus, we can extend SQL Azure with the Azure worker role. But remember: you will be charged for running Azure worker roles. 
So at the end of the day, you need to ask – am I willing to build a custom code and pay money to achieve this functionality? I hope you found this blog post interesting. If you have any questions/feedback, you can comment below or you can mail me at Paras[at]student-partners[dot]com Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Azure, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #051

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Explanation and Understanding NOT NULL Constraint NOT NULL is integrity CONSTRAINT. It does not allow creating of the row where column contains NULL value. Most discussed questions about NULL is what is NULL? I will not go in depth analysis it. Simply put NULL is unknown or missing data. When NULL is present in database columns, it can affect the integrity of the database. I really do not prefer NULL in the database unless they are absolutely necessary. Three T-SQL Script to Create Primary Keys on Table I have always enjoyed writing about three topics Constraint and Keys, Backup and Restore and Datetime Functions. Primary Keys constraints prevent duplicate values for columns and provides a unique identifier to each column, as well it creates clustered index on the columns. 2008 Get Numeric Value From Alpha Numeric String – UDF for Get Numeric Numbers Only SQL is great with String operations. Many times, I use T-SQL to do my string operation. Let us see User Defined Function, which I wrote a few days ago, which will return only Numeric values from Alpha Numeric values. Introduction and Example of UNION and UNION ALL It is very much interesting when I get requests from blog reader to re-write my previous articles. I have received few requests to rewrite my article SQL SERVER – Union vs. Union All – Which is better for performance? with examples. I request you to read my previous article first to understand what is the concept and read this article to understand the same concept with an example. Downgrade Database for Previous Version The main questions is how they can downgrade the from SQL Server 2005 to SQL Server 2000? The answer is : Not Possible. Get Common Records From Two Tables Without Using Join Following is my scenario, Suppose Table 1 and Table 2 has same column e.g. Column1 Following is the query, 1. Select column1,column2 From Table1 2. Select column1 From Table2 I want to find common records from these tables, but I don’t want to use the Join clause because for that I need to specify the column name for Join condition. Will you help me to get common records without using Join condition? I am using SQL Server 2005. Retrieve – Select Only Date Part From DateTime – Best Practice – Part 2 A year ago I wrote a post about SQL SERVER – Retrieve – Select Only Date Part From DateTime – Best Practice where I have discussed two different methods of getting the date part from datetime. Introduction to CLR – Simple Example of CLR Stored Procedure CLR is an abbreviation of Common Language Runtime. In SQL Server 2005 and later version of it database objects can be created which are created in CLR. Stored Procedures, Functions, Triggers can be coded in CLR. CLR is faster than T-SQL in many cases. CLR is mainly used to accomplish tasks which are not possible by T-SQL or can use lots of resources. The CLR can be usually implemented where there is an intense string operation, thread management or iteration methods which can be complicated for T-SQL. Implementing CLR provides more security to the Extended Stored Procedure. 
2009 Comic Slow Query – SQL Joke Before Presentation After Presentation Enable Automatic Statistic Update on Database In one of the recent projects, I found out that despite putting good indexes and optimizing the query, I could not achieve an optimized performance and I still received an unoptimized response from the SQL Server. On examination, I figured out that the culprit was statistics. The database that I was trying to optimize had auto update of the statistics was disabled. Recently Executed T-SQL Query Please refer to blog post  query to recently executed T-SQL query on database. Change Collation of Database Column – T-SQL Script – Consolidating Collations – Extention Script At some time in your DBA career, you may find yourself in a position when you sit back and realize that your database collations have somehow run amuck, or are faced with the ever annoying CANNOT RESOLVE COLLATION message when trying to join data of varying collation settings. 2010 Visiting Alma Mater – Delivering Session on Database Performance and Career – Nirma Institute of Technology Everyone always dreams of visiting their school and college, where they have studied once. It is a great feeling to see the college once again – where you have spent the wonderful golden years of your time. College time is filled with studies, education, emotions and several plans to build a future. I consider myself fortunate as I got the opportunity to study at some of the best places in the world. Change Column DataTypes There are times when I feel like writing that I am a day older in SQL Server. In fact, there are many who are looking for a solution that is simple enough. Have you ever searched online for something very simple. I often do and enjoy doing things which are straight forward and easy to change. 2011 Three DMVs – sys.dm_server_memory_dumps – sys.dm_server_services – sys.dm_server_registry In this blog post we will see three new DMVs which are introduced in Denali. The DMVs are very simple and there is not much to describe them. So here is the simple game. I will be asking a question back to you after seeing the result of the each of the DMV and you help me to complete this blog post. A Simple Quiz – T-SQL Brain Trick If you have some time, I strongly suggest you try this quiz out as it is for sure twists your brain. 2012 List All The Column With Specific Data Types in Database 5 years ago I wrote script SQL SERVER – 2005 – List All The Column With Specific Data Types, when I read it again, it is very much relevant and I liked it. This is one of the script which every developer would like to keep it handy. I have upgraded the script bit more. I have included few additional information which I believe I should have added from the beginning. It is difficult to visualize the final script when we are writing it first time. Find First Non-Numeric Character from String The function PATINDEX exists for quite a long time in SQL Server but I hardly see it being used. Well, at least I use it and I am comfortable using it. Here is a simple script which I use when I have to identify first non-numeric character. Finding Different ColumnName From Almost Identitical Tables Well here is the interesting example of how we can use sys.column catalogue views and get the details of the newly added column. I have previously written about EXCEPT over here which is very similar to MINUS of Oracle. 
Storing Data and Files in Cloud – Dropbox – Personal Technology Tip I thought long and hard about doing a Personal Technology Tips series for this blog.  I have so many tips I’d like to share.  I am on my computer almost all day, every day, so I have a treasure trove of interesting tidbits I like to share if given the chance.  The only thing holding me back – which tip to share first?  The first tip obviously has the weight of seeming like the most important.  But this would mean choosing amongst my favorite tricks and shortcuts.  This is a hard task. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
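    Several of the Memory Lane items above describe small T-SQL techniques. As one illustration, here is a minimal sketch of the PATINDEX idea from the 2012 "Find First Non-Numeric Character from String" item; the sample string and aliases are made up.

        -- Locate the first character that is not a digit
        DECLARE @s VARCHAR(50) = '12345ABC678';
        SELECT PATINDEX('%[^0-9]%', @s)                   AS first_non_numeric_position,   -- 6
               SUBSTRING(@s, PATINDEX('%[^0-9]%', @s), 1) AS first_non_numeric_character;  -- 'A'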

    Read the article

  • AD-Integrated DNS failure: "Access was Denied"

    - by goldPseudo
    I have a single Windows 2008 R2 server configured as a domain controller with Active Directory Domain Services and DNS Server. The DNS Server was recently uninstalled and reinstalled in an attempt to fix a (possibly unrelated) problem; the event log was previously flooded with errors (#4000, "The DNS Server was unable to open Active Directory...") which reinstalling did not fix. However, while before it was at least showing and resolving names from the local network (slowly), now it's showing nothing at all. (The original error started with a #4015 error "The DNS server has encountered a critical error from the Active Directory," followed by a long string of #4000 and a few #4004. This may have been caused when a new DNS name was recently added, but I can't be sure of the timing.) Attempting to manage the DNS through Administrative Tools > DNS brings up an error: The server SERVERNAME could not be contacted. The error was: Access was denied. Would you like to add it anyway? Selecting yes just puts a SERVERNAME item on the list, but with all the configuration options grayed out. I attempted editing my hosts file as per this post but to no avail. Running dcdiag, it does identify the home server properly, but fails right away testing connectivity with: Starting test: Connectivity The host blahblahblahyaddayaddayadda could not be resolved to an IP address. Check the DNS server, DHCP, server name, etc. Got error while checking LDAP and RPC connectivity. Please check your firewall settings. ......................... SERVERNAME failed test Connectivity Adding the blahblahblahyaddayaddayadda address to hosts (pointing at 127.0.0.1), the connectivity test succeeded but it didn't seem to solve the fundamental problem (Access was denied) so I hashed it out again. Primary DNS server is properly pointing at 127.0.0.1 according to ipconfig /all. And the DNS server is forwarding requests to external addresses properly (if slowly), but the resolving of local network names is borked. The DNS database itself is small enough that I am (grudgingly) able to rebuild it if need be, but the DNS Server doesn't seem willing to let me work with (or around) it at all. (and yes before you ask there are no system backups available) Where do I go from here? As requested, my (slightly obfuscated) dcdiag output: Directory Server Diagnosis Performing initial setup: Trying to find home server... Home Server = bulgogi * Identified AD Forest. Done gathering initial info. Doing initial required tests Testing server: Obfuscated\BULGOGI Starting test: Connectivity The host a-whole-lot-of-numbers._msdcs.obfuscated.address could not be resolved to an IP address. Check the DNS server, DHCP, server name, etc. Got error while checking LDAP and RPC connectivity. Please check your firewall settings. ......................... BULGOGI failed test Connectivity Doing primary tests Testing server: Obfuscated\BULGOGI Skipping all tests, because server BULGOGI is not responding to directory service requests. Running partition tests on : ForestDnsZones Starting test: CheckSDRefDom ......................... ForestDnsZones passed test CheckSDRefDom Starting test: CrossRefValidation ......................... ForestDnsZones passed test CrossRefValidation Running partition tests on : DomainDnsZones Starting test: CheckSDRefDom ......................... DomainDnsZones passed test CheckSDRefDom Starting test: CrossRefValidation ......................... 
DomainDnsZones passed test CrossRefValidation Running partition tests on : Schema Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation Running partition tests on : Configuration Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation Running partition tests on : obfuscated Starting test: CheckSDRefDom ......................... obfuscated passed test CheckSDRefDom Starting test: CrossRefValidation ......................... obfuscated passed test CrossRefValidation Running enterprise tests on : obfuscated.address Starting test: LocatorCheck ......................... obfuscated.address passed test LocatorCheck Starting test: Intersite ......................... obfuscated.address passed test Intersite And my hosts file (minus the hashed lines for brevity): 127.0.0.1 localhost ::1 localhost And, for the sake of completion, here's selected chunks of my netstat -a -n output: TCP 0.0.0.0:88 0.0.0.0:0 LISTENING TCP 0.0.0.0:135 0.0.0.0:0 LISTENING TCP 0.0.0.0:389 0.0.0.0:0 LISTENING TCP 0.0.0.0:445 0.0.0.0:0 LISTENING TCP 0.0.0.0:464 0.0.0.0:0 LISTENING TCP 0.0.0.0:593 0.0.0.0:0 LISTENING TCP 0.0.0.0:636 0.0.0.0:0 LISTENING TCP 0.0.0.0:3268 0.0.0.0:0 LISTENING TCP 0.0.0.0:3269 0.0.0.0:0 LISTENING TCP 0.0.0.0:3389 0.0.0.0:0 LISTENING TCP 0.0.0.0:9389 0.0.0.0:0 LISTENING TCP 0.0.0.0:47001 0.0.0.0:0 LISTENING TCP 0.0.0.0:49152 0.0.0.0:0 LISTENING TCP 0.0.0.0:49153 0.0.0.0:0 LISTENING TCP 0.0.0.0:49154 0.0.0.0:0 LISTENING TCP 0.0.0.0:49155 0.0.0.0:0 LISTENING TCP 0.0.0.0:49157 0.0.0.0:0 LISTENING TCP 0.0.0.0:49158 0.0.0.0:0 LISTENING TCP 0.0.0.0:49164 0.0.0.0:0 LISTENING TCP 0.0.0.0:49178 0.0.0.0:0 LISTENING TCP 0.0.0.0:49179 0.0.0.0:0 LISTENING TCP 0.0.0.0:50480 0.0.0.0:0 LISTENING TCP 127.0.0.1:53 0.0.0.0:0 LISTENING TCP 192.168.12.127:53 0.0.0.0:0 LISTENING TCP 192.168.12.127:139 0.0.0.0:0 LISTENING TCP 192.168.12.127:445 192.168.12.50:51118 ESTABLISHED TCP 192.168.12.127:3389 192.168.12.4:33579 ESTABLISHED TCP 192.168.12.127:3389 192.168.12.100:1115 ESTABLISHED TCP 192.168.12.127:50784 192.168.12.50:49174 ESTABLISHED <snip ipv6> UDP 0.0.0.0:123 *:* UDP 0.0.0.0:500 *:* UDP 0.0.0.0:1645 *:* UDP 0.0.0.0:1645 *:* UDP 0.0.0.0:1646 *:* UDP 0.0.0.0:1646 *:* UDP 0.0.0.0:1812 *:* UDP 0.0.0.0:1812 *:* UDP 0.0.0.0:1813 *:* UDP 0.0.0.0:1813 *:* UDP 0.0.0.0:4500 *:* UDP 0.0.0.0:5355 *:* UDP 0.0.0.0:59638 *:* <snip a few thousand lines> UDP 0.0.0.0:62140 *:* UDP 127.0.0.1:53 *:* UDP 127.0.0.1:49540 *:* UDP 127.0.0.1:49541 *:* UDP 127.0.0.1:53655 *:* UDP 127.0.0.1:54946 *:* UDP 127.0.0.1:58345 *:* UDP 127.0.0.1:63352 *:* UDP 127.0.0.1:63728 *:* UDP 127.0.0.1:63729 *:* UDP 127.0.0.1:64215 *:* UDP 127.0.0.1:64646 *:* UDP 192.168.12.127:53 *:* UDP 192.168.12.127:67 *:* UDP 192.168.12.127:68 *:* UDP 192.168.12.127:88 *:* UDP 192.168.12.127:137 *:* UDP 192.168.12.127:138 *:* UDP 192.168.12.127:389 *:* UDP 192.168.12.127:464 *:* UDP 192.168.12.127:2535 *:* <snip ipv6 again>

    Read the article

  • Tomcat 7 on Ubuntu 12.04 with JRE 7 not starting

    - by Andreas Krueger
    I am running a virtual server in the web on Ubuntu 12.04 LTS / 32 Bit. After a clean install of JRE 7 and Tomcat 7, following the instructions on http://www.sysadminslife.com, I don't get Tomcat 7 up and running. > java -version java version "1.7.0_09" Java(TM) SE Runtime Environment (build 1.7.0_09-b05) Java HotSpot(TM) Client VM (build 23.5-b02, mixed mode) > /etc/init.d/tomcat start Starting Tomcat Using CATALINA_BASE: /usr/local/tomcat Using CATALINA_HOME: /usr/local/tomcat Using CATALINA_TMPDIR: /usr/local/tomcat/temp Using JRE_HOME: /usr/lib/jvm/java-7-oracle Using CLASSPATH: /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar > telnet localhost 8080 Trying ::1... Trying 127.0.0.1... telnet: Unable to connect to remote host: Connection refused netstat sometimes shows a Java process, most of the times not. If it does, nothing works either. Does anyone have a solution or encountered similar situations? Here are the contents of catalina.out: 16.11.2012 18:36:39 org.apache.catalina.core.AprLifecycleListener init INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/lib/jvm/java-6-oracle/lib/i386/client:/usr/lib/jvm/java-6-oracle/lib/i386:/usr/lib/jvm/java-6-oracle/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib 16.11.2012 18:36:40 org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["http-bio-8080"] 16.11.2012 18:36:40 org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["ajp-bio-8009"] 16.11.2012 18:36:40 org.apache.catalina.startup.Catalina load INFO: Initialization processed in 1509 ms 16.11.2012 18:36:40 org.apache.catalina.core.StandardService startInternal INFO: Starting service Catalina 16.11.2012 18:36:40 org.apache.catalina.core.StandardEngine startInternal INFO: Starting Servlet Engine: Apache Tomcat/7.0.29 16.11.2012 18:36:40 org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory /usr/local/tomcat/webapps/manager Here come the results of ps -ef, iptables --list and netstat -plut: > ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 Nov16 ? 00:00:00 init root 2 1 0 Nov16 ? 00:00:00 [kthreadd/206616] root 3 2 0 Nov16 ? 00:00:00 [khelper/2066167] root 4 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 5 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 6 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 7 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 8 2 0 Nov16 ? 00:00:00 [nfsiod/2066167] root 119 1 0 Nov16 ? 00:00:00 upstart-udev-bridge --daemon root 125 1 0 Nov16 ? 00:00:00 /sbin/udevd --daemon root 157 125 0 Nov16 ? 00:00:00 /sbin/udevd --daemon root 158 125 0 Nov16 ? 00:00:00 /sbin/udevd --daemon root 205 1 0 Nov16 ? 00:00:00 upstart-socket-bridge --daemon root 276 1 0 Nov16 ? 00:00:00 /usr/sbin/sshd -D root 335 1 0 Nov16 ? 00:00:00 /usr/sbin/xinetd -dontfork -pidfile /var/run/xinetd.pid -stayalive -inetd root 348 1 0 Nov16 ? 00:00:00 cron syslog 368 1 0 Nov16 ? 00:00:00 /sbin/syslogd -u syslog root 472 1 0 Nov16 ? 00:00:00 /usr/lib/postfix/master postfix 482 472 0 Nov16 ? 00:00:00 qmgr -l -t fifo -u root 520 1 0 Nov16 ? 00:00:04 /usr/sbin/apache2 -k start www-data 523 520 0 Nov16 ? 00:00:00 /usr/sbin/apache2 -k start www-data 525 520 0 Nov16 ? 00:00:00 /usr/sbin/apache2 -k start www-data 526 520 0 Nov16 ? 00:00:00 /usr/sbin/apache2 -k start tomcat 1074 1 0 Nov16 ? 00:01:08 /usr/lib/jvm/java-6-oracle/bin/java -Djava.util.logging.config.file=/usr/ postfix 1351 472 0 Nov16 ? 
00:00:00 tlsmgr -l -t unix -u -c postfix 3413 472 0 17:00 ? 00:00:00 pickup -l -t fifo -u -c root 3457 276 0 17:31 ? 00:00:00 sshd: root@pts/0 root 3459 3457 0 17:31 pts/0 00:00:00 -bash root 3470 3459 0 17:31 pts/0 00:00:00 ps -ef > iptables --list Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp dpt:http-alt ACCEPT tcp -- anywhere anywhere tcp dpt:8005 ACCEPT tcp -- anywhere anywhere tcp dpt:http-alt Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination > netstat -plut Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 *:smtp *:* LISTEN 472/master tcp 0 0 *:3213 *:* LISTEN 276/sshd tcp6 0 0 [::]:smtp [::]:* LISTEN 472/master tcp6 0 0 [::]:8009 [::]:* LISTEN 1074/java tcp6 0 0 [::]:3213 [::]:* LISTEN 276/sshd tcp6 0 0 [::]:http-alt [::]:* LISTEN 1074/java tcp6 0 0 [::]:http [::]:* LISTEN 520/apache2

    Read the article

  • SQL SERVER – What is AdventureWorks?

    - by pinaldave
    NOTE: If you know the answer to this question, then I request you to stop reading this post right now. Please do not leave a comment saying this blog post was not useful to you if you already knew the answer. A few days ago, I received a DM asking: what is the AdventureWorks database, and why do I use it in all my examples instead of any other database (e.g. Pubs or Northwind)? As a matter of fact, when I went back to my list of not-yet-answered questions, there were a few more variations of this same question. AdventureWorks is a sample database shipped with SQL Server, and it can be downloaded from the http://codeplex.com site. AdventureWorks replaced Northwind and Pubs as the sample database starting with SQL Server 2005. The Microsoft team keeps updating the sample database as they release new versions. Here are some quick links: AdventureWorks SQL Server 2008 SR4 AdventureWorks 2008R2 November CTP AdventureWorks for SQL Azure (December CTP) AdventureWorks for SQL Server 2005 SP2A SQL SERVER – 2008 – Download and Install Samples Database AdventureWorks 2005 – Detail Tutorial I have previously written a few other articles on the same subject; you can find them easily here: [email protected] Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
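    As a hedged example of how the sample database is typically used in demos, here is a minimal query that assumes the AdventureWorks2008R2 schema; older 2005-era versions expose Person.Contact instead of Person.Person.

        -- Quick sanity check after restoring/attaching the sample database
        USE AdventureWorks2008R2;
        GO
        SELECT TOP (5) BusinessEntityID, FirstName, LastName
        FROM Person.Person
        ORDER BY BusinessEntityID;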

    Read the article

  • SQL SERVER – Information Related to DATETIME and DATETIME2

    - by pinaldave
    I recently received interesting comment on the blog regarding workaround to overcome the precision issue while dealing with DATETIME and DATETIME2. I have written over this subject earlier over here. SQL SERVER – Difference Between GETDATE and SYSDATETIME SQL SERVER – Difference Between DATETIME and DATETIME2 – WITH GETDATE SQL SERVER – Difference Between DATETIME and DATETIME2 SQL Expert Jing Sheng Zhong has left following comment: The issue you found in SQL server new datetime type is related time source function precision. Folks have found the root reason of the problem – when data time values are converted (implicit or explicit) between different data type, which would lose some precision, so the result cannot match each other as thought. Here I would like to gave a work around solution to solve the problem which the developers met. -- Declare and loop DECLARE @Intveral INT, @CurDate DATETIMEOFFSET; CREATE TABLE #TimeTable (FirstDate DATETIME, LastDate DATETIME2, GlobalDate DATETIMEOFFSET) SET @Intveral = 10000 WHILE (@Intveral > 0) BEGIN ----SET @CurDate = SYSDATETIMEOFFSET(); -- higher precision for future use only SET @CurDate = TODATETIMEOFFSET(GETDATE(),DATEDIFF(N,GETUTCDATE(),GETDATE())); -- lower precision to match exited date process INSERT #TimeTable (FirstDate, LastDate, GlobalDate) VALUES (@CurDate, @CurDate, @CurDate) SET @Intveral = @Intveral - 1 END GO -- Distinct Values SELECT COUNT(DISTINCT FirstDate) D_DATETIME, COUNT(DISTINCT LastDate) D_DATETIME2, COUNT(DISTINCT GlobalDate) D_SYSGETDATE FROM #TimeTable GO -- Join SELECT DISTINCT a.FirstDate,b.LastDate, b.GlobalDate, CAST(b.GlobalDate AS DATETIME) GlobalDateASDateTime FROM #TimeTable a INNER JOIN #TimeTable b ON a.FirstDate = CAST(b.GlobalDate AS DATETIME) GO -- Select SELECT * FROM #TimeTable GO -- Clean up DROP TABLE #TimeTable GO If you read my blog SQL SERVER – Difference Between DATETIME and DATETIME2 you will notice that I have achieved the same using GETDATE(). Are you using DATETIME2 in your production environment? If yes, I am interested to know the use case. Reference: Pinal Dave (http://www.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
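    A minimal sketch of the precision mismatch the comment above works around: DATETIME is accurate only to roughly 3 milliseconds while DATETIME2 keeps up to 100 nanoseconds, so the same instant stored in both types may no longer compare as equal (requires SQL Server 2008 or later).

        -- Assigning a DATETIME2 value to DATETIME loses precision
        DECLARE @d2 DATETIME2 = SYSDATETIME();
        DECLARE @d1 DATETIME  = @d2;   -- implicit conversion, rounded to 1/300th of a second
        SELECT @d1 AS datetime_value,
               @d2 AS datetime2_value,
               CASE WHEN @d1 = @d2 THEN 'equal' ELSE 'different' END AS comparison;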

    Read the article

  • SQLAuthority News – Download Whitepaper – Understanding and Controlling Parallel Query Processing in SQL Server

    - by pinaldave
    My recent article SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database has received many good comments regarding MAXDOP 1 and MAXDOP 0. I really enjoyed reading the comments, as they came from industry leaders and gurus. I was researching the subject further and ended up on the following white paper written by Microsoft. Understanding and Controlling Parallel Query Processing in SQL Server Data warehousing and general reporting applications tend to be CPU intensive because they need to read and process a large number of rows. To facilitate quick data processing for queries that touch a large amount of data, Microsoft SQL Server exploits the power of multiple logical processors to provide parallel query processing operations such as parallel scans. Through extensive testing, we have learned that, for most large queries that are executed in a parallel fashion, SQL Server can deliver linear or nearly linear response time speedup as the number of logical processors increases. However, some queries in high parallelism scenarios perform suboptimally. There are also some parallelism issues that can occur in a multi-user parallel query workload. This white paper describes parallel performance problems you might encounter when you run such queries and workloads, and it explains why these issues occur. In addition, it presents how data warehouse developers can detect these issues, and how they can work around them or mitigate them. To review the document, please download the Understanding and Controlling Parallel Query Processing in SQL Server Word document. Note: The above abstract has been taken from here. The real question is: have parallel queries made the DBA's life much simpler, or are they seen as a potential source of performance degradation? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology
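    To make the MAXDOP discussion above concrete, here is a hedged sketch of the two usual knobs: the instance-wide 'max degree of parallelism' setting and the per-query MAXDOP hint. The example value of 4 and the table name are illustrative only; the query assumes the AdventureWorks sample database.

        -- Instance-wide default degree of parallelism (0 = use all available schedulers)
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max degree of parallelism', 4;   -- example value, tune per workload
        RECONFIGURE;
        GO
        -- Per-query override: force a serial plan for just this statement
        SELECT COUNT(*)
        FROM Sales.SalesOrderDetail
        OPTION (MAXDOP 1);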

    Read the article

  • SQLAuthority News – Free eBook Download – Introducing Microsoft SQL Server 2008 R2

    - by pinaldave
    Microsoft Press has published a FREE eBook on the much-awaited release of SQL Server 2008 R2. The book is written by Ross Mistry and Stacia Misner. Ross is a personal friend of mine and one of the most active book writers in the SQL Server domain; when I see his name on any book, I am sure it will be a high-quality, easy-to-read book. The details about the book are here: Introducing Microsoft SQL Server 2008 R2, by Ross Mistry and Stacia Misner. The book contains 10 chapters and 216 pages. PART I Database Administration: CHAPTER 1 SQL Server 2008 R2 Editions and Enhancements; CHAPTER 2 Multi-Server Administration; CHAPTER 3 Data-Tier Applications; CHAPTER 4 High Availability and Virtualization Enhancements; CHAPTER 5 Consolidation and Monitoring. PART II Business Intelligence Development: CHAPTER 6 Scalable Data Warehousing; CHAPTER 7 Master Data Services; CHAPTER 8 Complex Event Processing with StreamInsight; CHAPTER 9 Reporting Services Enhancements; CHAPTER 10 Self-Service Analysis with PowerPivot. More details about the book are listed here. You can download the eBook in XPS format here and in PDF format here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Pinal Dave, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Windows Server 2008 R2 + IIS 7.5 + ASP.NET 4.0 = HTTP Error 500.0

    - by Dave
    I am having an impossible time getting ASP.NET 4.0 to work in any fashion at all. In fact, I completely wiped my server, reinstalled with Server 2008 R2 Standard (running on a VMware ESXi box, not that it should matter), and cannot even get a test .aspx page to work. Here is exactly what I did:
    1. Installed 2008 R2 Standard
    2. Activated Windows and enabled Remote Desktop
    3. Installed the Web Server role with the necessary role services (common HTTP, ASP.NET, logging, tracing, management service and FTP)
    4. Enabled the management service
    5. Installed .NET Framework 4.0 via the web executable
    6. Added FTP publishing to the default web site
    7. Switched the default web site application pool to ASP.NET 4.0 (Integrated)
    8. Added a 'test.aspx' file to the inetpub\wwwroot folder (contents below)
    9. Opened a browser to http://localhost/test.aspx and received a 500.0 error (also below)
    What am I missing? I haven't touched IIS in a while (3+ years), so it could be something stupid/trivial. Please point it out, call me a noob; my ego can take it. Thanks, Dave

    test.aspx:
        <%@ Page Language="C#" %>
        <html>
        <head>
            <title>Test.aspx</title>
        </head>
        <body>
            <asp:label runat="server" text="This is an asp.net 4.0 label" />
        </body>
        </html>

    Error page:
        Module          AspNetInitClrHostFailureModule
        Notification    BeginRequest
        Handler         PageHandlerFactory-Integrated-4.0
        Error Code      0x80070002
        Requested URL   http://localhost:80/test.aspx
        Physical Path   C:\inetpub\wwwroot\test.aspx
        Logon Method    Not yet determined
        Logon User      Not yet determined

    Trace: in my trace file I get:
        96. Warning - SET_RESPONSE_ERROR_DESCRIPTION
            ErrorDescription    An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur.
        97. Warning - MODULE_SET_RESPONSE_ERROR_STATUS
            ModuleName          AspNetInitClrHostFailureModule
            Notification        1
            HttpStatus          500
            HttpReason          Internal Server Error
            HttpSubStatus       0
            ErrorCode           2147942402
            ConfigExceptionInfo
            Notification        BEGIN_REQUEST
            ErrorCode           The system cannot find the file specified. (0x80070002)

    The application event log shows:
        Log Name:      Application
        Source:        Microsoft-Windows-IIS-W3SVC-WP
        Date:          5/28/2010 2:08:10 PM
        Event ID:      2299
        Task Category: None
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Computer:      win-ltfkdo1dnfp
        Description:   An application has reported as being unhealthy. The worker process will now request a recycle. Reason given: An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur. The data is the error.
        Event Xml:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Microsoft-Windows-IIS-W3SVC-WP" Guid="{670080D9-742A-4187-8D16-41143D1290BD}" EventSourceName="W3SVC-WP" />
            <EventID Qualifiers="49152">2299</EventID>
            <Version>0</Version>
            <Level>2</Level>
            <Task>0</Task>
            <Opcode>0</Opcode>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2010-05-28T21:08:10.000000000Z" />
            <EventRecordID>1663</EventRecordID>
            <Correlation />
            <Execution ProcessID="0" ThreadID="0" />
            <Channel>Application</Channel>
            <Computer>win-ltfkdo1dnfp</Computer>
            <Security />
          </System>
          <EventData>
            <Data Name="Reason">An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur.</Data>
            <Binary>02000780</Binary>
          </EventData>
        </Event>
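    For whatever it is worth to readers hitting the same 0x80070002 from AspNetInitClrHostFailureModule, here is a minimal command-line sketch of re-registering ASP.NET 4.0 with IIS and pointing the application pool at the v4.0 runtime. It simply mirrors steps 5 and 7 above rather than being a confirmed fix, and the paths and pool name assume a default 64-bit install with the DefaultAppPool:
        rem Re-register ASP.NET 4.0 with IIS (use the Framework folder instead of Framework64 on 32-bit)
        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i
        rem Point the application pool at the .NET 4.0 runtime with the integrated pipeline
        %windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /managedRuntimeVersion:v4.0 /managedPipelineMode:Integrated
        rem Restart IIS so the worker process picks up the change
        iisreset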

    Read the article

  • SQL SERVER – 2012 – Summary of All the Analytic Functions – MSDN and SQLAuthority

    - by pinaldave
    SQL Server 2012 (RC0 available here) has introduced new analytic functions. These functions were long awaited and I am glad that they are here. Previously, when any of these functions was needed, people used to write long T-SQL code to simulate it; now there is no need for that. Having a native function available also helps performance as well as readability. In the last few days I have written many articles on this subject on my blog. The goal was to make these complex analytic functions easy to understand and widely accepted. As these new functions are available, and as awareness spreads, we should start using them. Here is the quick list of the new functions, each covered both on SQLAuthority and on MSDN: CUME_DIST, FIRST_VALUE, LAST_VALUE, LEAD, LAG, PERCENTILE_CONT, PERCENTILE_DISC, PERCENT_RANK. I also enjoyed three different puzzles during the course of this series, which gave a clear idea of the SQL Server 2012 analytic functions:
    SQL SERVER – Puzzle to Win Print Book – Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY
    SQL SERVER – Puzzle to Win Print Book – Write T-SQL Self Join Without Using LEAD and LAG
    SQL SERVER – Puzzle to Win Print Book – Explain Value of PERCENTILE_CONT() Using Simple Example
    This series will always be a dear one to me, as during it I went through the very unique experience of my book going out of stock and becoming available again after 48 hours. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
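    To give a flavor of the syntax, here is a minimal T-SQL sketch against a hypothetical Sales(SalesDate, Amount) table; the table and column names are illustrative and not taken from the original series:
        SELECT
            SalesDate,
            Amount,
            LAG(Amount)  OVER (ORDER BY SalesDate) AS PreviousAmount,  -- value from the prior row
            LEAD(Amount) OVER (ORDER BY SalesDate) AS NextAmount,      -- value from the next row
            FIRST_VALUE(Amount) OVER (ORDER BY SalesDate) AS FirstAmount,
            CUME_DIST() OVER (ORDER BY Amount) AS CumulativeDistribution
        FROM dbo.Sales
        ORDER BY SalesDate;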

    Read the article

  • Configure SQL Server to Allow Remote Connections

    - by Ben Griswold
    Okay. This post isn’t about configuring SQL to allow remote connections, but wait, I still may be able to help you out. "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server)" I love this exception. It summarizes the issue and leads you down a path to solving the problem.  I do wish the bit about allowing remote connections were left out of the message, though. I can’t think of a time when having remote connections disabled caused me grief.  Heck, I can’t ever remember how to enable remote connections unless I Google for the answer. Anyway, 9 out of 10 times, SQL Server simply isn’t running.  That’s why the exception occurs.  The next time this exception pops up, open up the services console and make sure SQL Server is started.  And if that’s not the problem, only then start digging into the other possible reasons for the failure.
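    As a quick illustration of that first check, a couple of commands for a default (non-named) instance (a named instance would use MSSQL$InstanceName instead) show whether the service is running and start it if it is not:
        sc query MSSQLSERVER
        net start MSSQLSERVER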

    Read the article

  • SQLAuthority News – SQL Server Interview Questions And Answers Book Summary

    - by pinaldave
    Today we are using computers for various activities, motor vehicles for traveling to places, and mobile phones for conversation. How many of us can claim the invention of the micro-processor, the basic wheel, or the telegraph? Similarly, this book was not written overnight. The journey of this book goes back many years, and there are many individuals to be thanked. To begin with, we want to thank all those interviewers who reject interviewees by saying they need to know ‘the key things’ regardless of having high grades in class. The whole concept of interview questions and answers revolves around knowing those ‘key things’. The core concept of this book will continue to evolve over time. I am sure many of you will come along with us on this journey and submit your suggestions to us to make this book a key reference for anybody who wants to start with SQL Server. Today we want to acknowledge the fact that you will help us keep this book alive forever with the latest updates. We want to thank everyone who participates in this journey with us. Though each of the chapters is geared towards convenience, we highly recommend reading every section irrespective of the role you perform, since each section has some interesting trivia about working with SQL Server. In the industry the role of the accidental DBA (especially with SQL Server) is very common. Hence, if you have performed the role of DBA for a short stint and want to brush up your fundamentals, the upcoming sections will be a great review.
    Table Of Contents
    Database Concepts With SQL Server
    Common Generic Questions & Answers
    Common Developer Questions
    Common Tricky Questions
    Miscellaneous Questions On SQL Server 2008
    DBA Skills Related Questions
    Data Warehousing Interview Questions & Answers
    General Best Practices
    [Amazon] | [Flipkart] Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • SQLAuthority News – 5 days of SQL Server Reporting Service (SSRS) Summary

    - by Pinal Dave
    Earlier this week, I wrote a five-day series on SQL Server Reporting Services. The series is based on the book Beginning SSRS by Kathi Kellenberger. Supporting files are available as a free download from the www.Joes2Pros.com web site. I just completed reading the book – it is a fantastic book and I am loving every bit of it. I knew SSRS and I also knew how it works; however, what I did not know were the finer details of how to get the maximum out of the subject. This book has personally given me the knowledge that was missing from my background. Here is a question back to you – how many of you are working with SSRS and, when you have a question, are left with no help online? There are not enough blogs or books available on this subject. The way Kathi has written this book, it attempts to solve your day-to-day problems and makes you think about how to take your daily problem to the next level. Here is the article series which I have written on this subject:
    SQL SERVER – What is SSRS and Why SSRS is asked for in many Job Opening?
    Determine if SSRS 2012 is Installed on your SQL Server
    Installing SQL Server Data Tools and SSRS
    Create a Very First Report with the Report Wizard
    How to Add an Identity Column to a Table in SQL Server
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Service, SSRS
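    The identity-column article in that list covers the kind of change that fits in a couple of lines of T-SQL. Here is a minimal sketch with hypothetical table and column names, not taken from the series itself:
        -- Hypothetical table that was created without a key column
        CREATE TABLE dbo.Customers (Name NVARCHAR(100));

        -- Add an auto-numbering identity column; existing rows are numbered automatically
        ALTER TABLE dbo.Customers ADD CustomerID INT IDENTITY(1,1) NOT NULL;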

    Read the article

  • How to manage own bots at the server?

    - by Nikolay Kuznetsov
    There is a game server and people can play in game rooms of 2, 3 or 4. When a client connects to the server he can send a request specifying the number of people, or the range, he wants to play with. One of these values is valid: {2-4, 2-3, 3-4, 2, 3, 4}. So the server maintains 3 separate queues for game rooms with 2, 3 and 4 people; we can denote the queues as #2, #3 and #4. It works the following way. If a client sends the request 3-4, then two separate requests are added, one to queue #3 and one to queue #4. If queue #3 now has 3 requests from different people, then a game room with 3 players is created, and all other requests from those players are removed from all queues. Right now not many people are online simultaneously, so they apply for a game, wait for some time and quit because the game does not start in a reasonable time. That is why a simple bot has been developed as a start. So there is a need to patch the server code to run bots if someone requests a game but no humans are online. Input: a request from a human, one of {2-4, 2-3, 3-4, 2, 3, 4}. Output: the number of bots to run and the time to wait before each one connects, depending on the state of the queues. The problem is that I don't know how to manage my own bots properly on the server. Example: #3 has 1 request and #4 has 1 request. A request from a user is {3,4}; then the server can add one bot to play a game with 3 people, or two bots to play a game of 4. Example: #3 has 1 request and #4 has 2 requests. A request from a user is {3,4}; then in either case just one bot is needed, so a game with 4 players is preferable.
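    As one illustration of the "how many bots" decision, here is a minimal C# sketch that picks, among the room sizes the requesting player accepts, the size needing the fewest bots, breaking ties toward the larger room (as in the second example above). The names and the tie-breaking rule are assumptions for illustration, and a real server would still have to schedule when each bot connects:
        using System;
        using System.Collections.Generic;

        static class BotPlanner
        {
            // queueCounts maps room size (2, 3, 4) to the number of humans already waiting.
            // acceptedSizes is the set of sizes in the new player's request, e.g. {3, 4}.
            // Returns the number of bots to start and reports the chosen room size.
            public static int ChooseBots(IDictionary<int, int> queueCounts,
                                         IEnumerable<int> acceptedSizes,
                                         out int chosenRoomSize)
            {
                int bestBots = int.MaxValue;
                chosenRoomSize = 0;
                foreach (int size in acceptedSizes)
                {
                    int humans = queueCounts[size] + 1;            // waiting humans plus the new request
                    int botsNeeded = Math.Max(0, size - humans);   // bots required to fill this room
                    bool better = botsNeeded < bestBots ||
                                  (botsNeeded == bestBots && size > chosenRoomSize);
                    if (better)
                    {
                        bestBots = botsNeeded;
                        chosenRoomSize = size;
                    }
                }
                return bestBots;
            }
        }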

    Read the article

  • Turn-based Client-Server Card Game - Unicast (TCP) or Multicast (UDP)

    - by LDM91
    I am currently planning to make a card game project where the clients will communicate with the server in a turn-based and synchronous manner, using messages sent over sockets. The problem I have is how to handle the following scenario (a client takes its turn and sends its action to the server):
    1. The client sends a message telling the server its move for the turn (e.g. it plays the card 5 from its hand, which needs to be placed onto the table).
    2. The server receives the message and updates the game state (the server holds all game state).
    3. The server iterates through its list of connected clients and sends a message to tell each of them of the change in state.
    4. The clients all refresh to display the new state.
    This is all based on using TCP, and looking at it now it seems a bit like the Observer pattern. The reason this seems to be an issue to me is that this message doesn't seem to be point-to-point like the others, as I want to send it to all the clients, and sending the same message to each client that way doesn't seem very efficient. I was thinking about using multicasting with UDP, as then I could send the message to all the clients at once; however, wouldn't this mean that the clients would in theory be able to message each other? There is of course the synchronous aspect as well, though this could be put on top of the UDP I guess. Basically, I would like to know what would be good practice, as this project is really all about learning, and even though it won't be big enough to encounter performance issues from this, I would like to consider them anyway. However, please note I am not interested in using message-oriented middleware as a solution (I have experience with using MOM and I'm interested in considering other options excluding MOM, if TCP sockets are a bad idea!).
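    For what the per-client TCP send in step 3 might look like, here is a minimal C# sketch that writes the same state-update message to every connected client, assuming the server keeps a list of TcpClient objects and uses simple newline-delimited text framing; all names and the framing choice are illustrative assumptions rather than a recommended design:
        using System;
        using System.Collections.Generic;
        using System.Net.Sockets;
        using System.Text;

        static class Broadcaster
        {
            // Sends one state-update message to each connected client over its own TCP stream.
            // For a handful of players per room, one write per client is perfectly reasonable.
            public static void Broadcast(IEnumerable<TcpClient> clients, string stateMessage)
            {
                byte[] payload = Encoding.UTF8.GetBytes(stateMessage + "\n"); // newline-delimited framing
                foreach (TcpClient client in clients)
                {
                    try
                    {
                        client.GetStream().Write(payload, 0, payload.Length);
                    }
                    catch (Exception)
                    {
                        // A failed write usually means the client dropped; real code would remove it from the list.
                    }
                }
            }
        }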

    Read the article

< Previous Page | 152 153 154 155 156 157 158 159 160 161 162 163  | Next Page >