Search Results

Search found 99285 results on 3972 pages for 'server load'.

  • Updating Ubuntu server from 8.10 to 10.04

    - by Ward
    I have a VPS that has Ubuntu 8.10 Server Edition installed on it and I would like to upgrade it to 10.04. What would be the correct way of doing this? I only have ssh access to it and a "Start/Shutdown VPS" button in the vendor's client panel; in other words, I do not have physical access to it. Also worth noting: I apparently cannot install programs any more, since the sources (osuosl.org?) are not online. Not the ones this server has set, anyway.

      # apt-get update
      Ign http://ubuntu.osuosl.org intrepid Release.gpg
      Ign http://ubuntu.osuosl.org intrepid/main Translation-en_US
      Ign http://ubuntu.osuosl.org intrepid/universe Translation-en_US
      Ign http://ubuntu.osuosl.org intrepid Release
      Ign http://ubuntu.osuosl.org intrepid/main Packages
      Ign http://ubuntu.osuosl.org intrepid/universe Packages
      Err http://ubuntu.osuosl.org intrepid/main Packages 404 Not Found
      Err http://ubuntu.osuosl.org intrepid/universe Packages 404 Not Found
      W: Failed to fetch http://ubuntu.osuosl.org/ubuntu/dists/intrepid/main/binary-amd64/Packages.gz 404 Not Found
      W: Failed to fetch http://ubuntu.osuosl.org/ubuntu/dists/intrepid/universe/binary-amd64/Packages.gz 404 Not Found
      E: Some index files failed to download, they have been ignored, or old ones used instead.
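
    A hedged sketch of the usual way out (assuming a plain install whose only mirror is the osuosl.org host shown above): intrepid is past end-of-life, so its packages were moved to old-releases.ubuntu.com. Point APT there first, then upgrade one release at a time (8.10 to 9.04 to 9.10 to 10.04); when run over ssh, do-release-upgrade also offers to start a fallback sshd on port 1022 in case the session drops.

      # repoint APT at the end-of-life archive
      sudo sed -i 's/ubuntu\.osuosl\.org/old-releases.ubuntu.com/g' /etc/apt/sources.list
      sudo apt-get update
      sudo apt-get install update-manager-core
      # 8.10 cannot jump straight to 10.04; repeat this through 9.04 and 9.10
      sudo do-release-upgrade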

  • How to troubleshoot SQL Server issues

    - by joe
    I have an ASP.NET application with a SQL Server database, and I am wondering if you can give your ideas on how to troubleshoot the following issue: I can insert / update / delete from any table, but I have one page that uses transactions to insert into different tables. The C# code is correct and very simple, but it fails. I used the SQL profiler to see how my app interacts with the DB (especially since the app is using stored procedures). I can catch the exec procedure statement and run it manually from SSMS and it works fine, but the same stored procedure fails from the application! Which led me to think the issue is coming from the user account and settings. I am no expert in SQL Server and am wondering if anyone can explain how to verify the required settings for the user account. Thanks

    EDIT: in web.config, here is how I reference my connection:

      <connectionStrings>
        <add name="Conn" connectionString="Data Source=localhost;Initial Catalog=myDB;Persist Security Info=True;User ID=DbUser;Password=password1254_3" providerName="System.Data.SqlClient" />
      </connectionStrings>

    EDIT: I will try to describe the process here:
    1. I begin a transaction.
    2. I call a stored proc to insert (which succeeds) and return the scope identity (which will be used in the next step).
    3. I call another stored procedure to insert some more info plus the scope identity from step 2, which is a foreign key here.
    4. I get an error: foreign key violation.
    5. The transaction is rolled back; now the tables are empty again... thanks
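
    Not something the asker posted, but the symptom (a foreign key violation that only appears under the application's transaction, while the same procs work in SSMS) is typical of the second call not sharing the first call's connection and transaction, or of the new key never making it back from the first proc. A minimal sketch in C# (proc and parameter names are hypothetical; requires using System.Data and System.Data.SqlClient):

      using (var conn = new SqlConnection(connectionString))
      {
          conn.Open();
          using (var tran = conn.BeginTransaction())
          {
              // first proc: inserts the parent row and returns SCOPE_IDENTITY() via an output parameter
              var insertParent = new SqlCommand("dbo.InsertParent", conn, tran) { CommandType = CommandType.StoredProcedure };
              var newId = new SqlParameter("@NewId", SqlDbType.Int) { Direction = ParameterDirection.Output };
              insertParent.Parameters.Add(newId);
              insertParent.ExecuteNonQuery();

              // second proc: must use the SAME conn and tran, or it cannot see the uncommitted parent row
              var insertChild = new SqlCommand("dbo.InsertChild", conn, tran) { CommandType = CommandType.StoredProcedure };
              insertChild.Parameters.AddWithValue("@ParentId", (int)newId.Value);
              insertChild.ExecuteNonQuery();

              tran.Commit(); // without this, the rollback on dispose empties the tables again
          }
      }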

  • SQL SERVER – Excel Losing Decimal Values When Value Pasted from SSMS ResultSet

    - by pinaldave
    No! It is not a SQL Server issue or an SSMS issue. It is just how things work, and there is a simple trick to resolve it. It is very common that when users copy a resultset to Excel, the floating point or decimal values are lost. The solution is very simple and requires a small adjustment in Excel. By default Excel is very smart: when it detects that the value being pasted is numeric, it changes the column format to accommodate it. Since trailing zeros after the decimal point have no numeric value, Excel automatically hides them. To prevent this from happening, the user has to convert the columns to Text format so the formatting is preserved. Here is how you can do it. Right-click the corner between A and 1; it will select the complete spreadsheet. (If you want to change the format of a single column, you can select an individual column the same way.) In the menu, click on Format Cells... In the dialog that comes up, the selected columns will default to General; change that to Text. It will change the format of all the cells to Text. Now paste the values from SSMS into Excel again. This time it will preserve the decimal values from SSMS. Solved! Do you know any other trick to preserve the decimal values? Please leave a comment. Reference: Pinal Dave (http://blog.SQLAuthority.com)

  • Frequent disconnects with oneiric server using wlan AR9285

    - by John Neil
    I'm getting a large number of disconnects from my AR9285 wireless LAN device since I switched to oneiric server (I did not see these happen with oneiric desktop). Here is the syslog snippet:

      Oct 17 09:43:17 weather kernel: [ 1537.329138] wlan0: deauthenticated from 00:12:17:7a:8e:42 (Reason: 7)
      Oct 17 09:43:17 weather kernel: [ 1537.340409] cfg80211: All devices are disconnected, going to restore regulatory settings
      Oct 17 09:43:17 weather kernel: [ 1537.340423] cfg80211: Restoring regulatory settings
      Oct 17 09:43:17 weather kernel: [ 1537.340435] cfg80211: Calling CRDA to update world regulatory domain
      Oct 17 09:43:17 weather kernel: [ 1537.348571] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain
      Oct 17 09:43:17 weather kernel: [ 1537.348581] cfg80211: World regulatory domain updated:
      Oct 17 09:43:17 weather kernel: [ 1537.348586] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
      Oct 17 09:43:17 weather kernel: [ 1537.348594] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      Oct 17 09:43:17 weather kernel: [ 1537.348600] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
      Oct 17 09:43:17 weather kernel: [ 1537.348607] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
      Oct 17 09:43:17 weather kernel: [ 1537.348613] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      Oct 17 09:43:17 weather kernel: [ 1537.348620] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)

    Here is the relevant lspci output:

      # lspci | grep Atheros
      02:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01)

    I have done quite a bit of searching and saw discussions for previous versions of Ubuntu that recommended installing the linux-backports-modules package. However, this does not appear to be available for oneiric (just the headers are listed as a package). Any advice on how to achieve a stable wireless connection for this server? Its location rules out a wired connection.
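
    Two workarounds frequently suggested for ath9k/AR9285 deauthentication loops, offered as a sketch to try rather than a confirmed fix for this exact setup (apply one at a time):

      # 1) turn off wifi power management for the current session
      sudo iwconfig wlan0 power off
      # 2) disable hardware crypto in ath9k, a commonly reported fix for AR9285 drops
      echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k.conf
      sudo modprobe -r ath9k && sudo modprobe ath9k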

  • Problem connecting to isp server using xl2tpd as client. Ubuntu server 13.04

    - by Deon Pretorius
    I have followed guides found on Google and Ubuntu support pages and can get an xl2tpd connection up, but only under the following conditions:

    1. The ADSL modem must be configured and connected to the ISP, or
    2. With the ADSL modem in bridge mode, I must have an existing PPPoE connection established.

    If neither of the above is active, xl2tpd won't trigger pppd and connect to the ISP, and thus the tunnel fails to connect to the ISP's L2TP server. Am I doing something wrong?

    /etc/ppp/options.l2tpd.axxess:

      ipcp-accept-local
      ipcp-accept-remote
      refuse-eap
      refuse-chap
      require-pap
      noccp
      noauth
      idle 1800
      mtu 1200
      mru 1200
      defaultroute
      usepeerdns
      debug
      lock
      connect-delay 5000
      name (name used for ppp connection)

    /etc/ppp/pap-secrets:

      # *   password
      (name used for ppp connection as above)   *   (ppp password supplied by isp)

    /etc/xl2tpd/xl2tpd.conf:

      [global]
      ; Global parameters:
      auth file = /etc/xl2tpd/l2tp-secrets   ; * Where our challenge secrets are
      access control = yes                   ; * Refuse connections without IP match
      debug tunnel = yes

      [lac axxess]
      lns = 196.30.121.50                    ; * Who is our LNS?
      redial = yes                           ; * Redial if disconnected?
      redial timeout = 5                     ; * Wait n seconds between redials
      max redials = 5                        ; * Give up after n consecutive failures
      hidden bit = yes                       ; * Use hidden AVP's?
      length bit = yes                       ; * Use length bit in payload?
      require pap = yes                      ; * Require PAP auth. by peer
      require chap = no                      ; * Require CHAP auth. by peer
      refuse chap = yes                      ; * Refuse CHAP authentication
      require authentication = yes           ; * Require peer to authenticate
      name = BLA85003@axxess                 ; * Report this as our hostname
      ppp debug = yes                        ; * Turn on PPP debugging
      pppoptfile = /etc/ppp/options.l2tpd.axxess   ; * ppp options file for this lac

    /etc/xl2tpd/l2tp-secrets:

      # Secrets for authenticating l2tp tunnels
      # us        them        secret
      # *         marko       blah2
      # zeus      marko       blah
      # *         *           interop
      *           vzb_l2tp    (*** secret supplied by isp)
                  ^ isp server host name

    Any help will be greatly appreciated.
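
    One note, offered as a hedged explanation rather than a confirmed diagnosis: xl2tpd only builds an L2TP tunnel over an IP path that already exists; it never dials the underlying ADSL link itself, which would explain why a routed modem or a live PPPoE session is a precondition here. Once that path is up, the tunnel is normally kicked off through xl2tpd's control file:

      # with IP connectivity to the LNS in place, tell the running xl2tpd to dial the lac
      echo "c axxess" > /var/run/xl2tpd/l2tp-control
      # and to drop it again
      echo "d axxess" > /var/run/xl2tpd/l2tp-control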

  • Set UFW before.rules without restart of server

    - by enedene
    I use UFW on my Ubuntu server. Unfortunately there are no rules in UFW to port forward to another machine. What you need to do is edit /etc/ufw/before.rules and put routing commands there, for example:

      # nat Table rules
      *nat
      :POSTROUTING ACCEPT [0:0]

      # Forward traffic from eth0 through eth1.
      -A POSTROUTING -s 192.168.0.0/24 -o eth1 -j MASQUERADE
      -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to 192.168.0.200:80
      -A PREROUTING -i eth1 -p udp --dport 10090 -j DNAT --to 192.168.0.202:22
      -A PREROUTING -i eth1 -p tcp --dport 10090 -j DNAT --to 192.168.0.202:22
      -A PREROUTING -i eth1 -p tcp --dport 443 -j DNAT --to 192.168.0.200:443
      -A PREROUTING -i eth1 -p udp --dport 443 -j DNAT --to 192.168.0.200:443
      -A PREROUTING -i eth1 -p tcp --dport 57626 -j DNAT --to 192.168.0.2:57626
      -A PREROUTING -i eth1 -p udp --dport 57626 -j DNAT --to 192.168.0.2:57626
      -A PREROUTING -i eth1 -p tcp --dport 3306 -j DNAT --to 192.168.0.200:3306
      -A PREROUTING -i eth1 -p udp --dport 3306 -j DNAT --to 192.168.0.200:3306
      COMMIT

    My problem is that I can't find a way to load new forwarding rules without restarting the server, which I very much hate doing. So please help me: is there a way?
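
    For what it's worth, a server restart should never be needed here: ufw can re-apply its rule files in place. A short sketch (ufw reload briefly reprocesses the ruleset, so test from a second session if you manage the box over ssh):

      # re-read /etc/ufw/before.rules without a reboot
      sudo ufw reload
      # or, on older ufw versions without the reload command
      sudo ufw disable && sudo ufw enable
      # verify the nat table afterwards
      sudo iptables -t nat -L -n -v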

  • Installing 10.04 server on HP xw9400 Workstation with RAID 5

    - by Dave Long
    I have a workstation that was given to me that is a friggen powerhouse, so I figured I would set it up as my development and demo server. This is my first experience installing Ubuntu onto a RAID array, and so far it has not been a fun one. I have been following the Advanced Installation guide for installing Ubuntu 10.04 Server, and it says that there will be an option on the Partition Disks screen to manually create the partitions, but the only options I have are:

    - Configure iSCSI volumes
    - Undo changes to partitions
    - Finish partitioning and write changes to disk

    Just before I got to that screen I got a message that said:

      One or more drives containing Serial ATA RAID configurations have been found. Do you wish to activate these RAID devices?

    It doesn't matter whether I answer yes or no to that; I still get the same Partition Disks screen. When I try to select "Finish partitioning and write changes to disk" I just get the "No root file system" error. Has anyone else experienced this, and how do I get past it? Can I not run Ubuntu on this machine?

  • Moving away from PHP and running towards server-side JavaScript [on hold]

    - by Sosukodo
    I've decided to start moving away from PHP, and server-side JavaScript looks like an attractive replacement. However, I'm having a hard time wrapping my head around how others are using Node.js for web applications. I'm currently using Lighttpd with FastCGI PHP. The one thing I like about PHP is that I can "inline" my scripts in the document like so: <?php echo 'Hello, World!'; ?> My question is: is there any server-side JavaScript solution that I can use in this manner? For instance, I'd love to be able to do this: <?js print('Hello, World!'); ?> Does such a thing exist? I'm not looking for opinions about which is better; I just want to know what's out there, and I'll explore each of them on my own. The important thing is that I'd like to use it like I demonstrated above. Links to the software along with implementation examples will be considered above other answers.
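
    One concrete example of this PHP-style inline approach: templating engines such as EJS embed JavaScript inside <% %> tags in the markup and can be rendered from Node. A minimal, hedged sketch (the template string and port are illustrative, not a recommendation):

      // npm install ejs
      var http = require('http');
      var ejs = require('ejs');

      // <%= expr %> interpolates, much like <?php echo ... ?>
      var template = '<html><body><p>Hello, <%= name %>!</p></body></html>';

      http.createServer(function (req, res) {
          res.writeHead(200, {'Content-Type': 'text/html'});
          res.end(ejs.render(template, { name: 'World' }));
      }).listen(8080);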

  • Setting console resolution in Ubuntu Server 13.10 within VMware

    - by user205625
    I've completed an install of Ubuntu Server 13.10 within VMware and am running into a problem configuring the console (non-graphical) resolution. When I was running Ubuntu Server 13.04 I ran into the same problem, posted the question here, and later solved it by editing /etc/default/grub thus:

      GRUB_CMDLINE_LINUX_DEFAULT="splash vga=789"

    I then ran sudo update-grub and sudo reboot, and 13.04 stuck in a larger-size console mode... just what I wanted. BUT when I run the same commands in 13.10, during the reboot it changes to the new screen resolution, BUT the screen stays black and I can't interact with it. I power down the VM, go back to a previous snapshot, and try again... and again. Since the hwinfo package is no longer available, I can't run sudo hwinfo --framebuffer to see what options are available. Ideas, anyone? Here are the uncommented settings in my /etc/default/grub file at this moment:

      GRUB_DEFAULT=0
      GRUB_HIDDEN_TIMEOUT=0
      GRUB_HIDDEN_TIMEOUT_QUIET=false
      GRUB_TIMEOUT=10
      GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
      GRUB_CMDLINE_LINUX_DEFAULT="splash"
      GRUB_CMDLINE_LINUX="find_preseed=/preseed.cfg"
      GRUB_DISABLE_LINUX_RECOVERY=false
      GRUB_GFXMODE=800x600
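
    A hedged suggestion rather than a verified fix for this particular VM: the vga= boot parameter is deprecated on recent kernels, and the GRUB-native way to carry a console resolution into the booted kernel is GRUB_GFXMODE plus GRUB_GFXPAYLOAD_LINUX=keep:

      # in /etc/default/grub; the resolution value is an example
      GRUB_GFXMODE=1024x768
      GRUB_GFXPAYLOAD_LINUX=keep

      # then apply and reboot
      sudo update-grub
      sudo reboot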

  • Sending Emails from existing SMTP host from Ubuntu Server

    - by ezgoodnight
    I feel like my problem is very simple, but I've been trying for quite some time and haven't cracked it. You experienced server guys will probably laugh at this, but I'm finally at the point where I need help or I'll never get anywhere. I have a little box running 12.04 LTS, and I want to script some status checks, have the server send me an email, and schedule this with cron. I basically want a command-line mail client that I can set up as easily as Thunderbird to send through my existing SMTP host; something that can easily be rolled into my bash scripts. I already have a remote host handling our email: SMTP, MTA, all that garbage. I don't particularly want to set up a relay just to send email when I have one that everyone else in the company already uses. I've tried, but there are too many aspects I don't understand, AND I don't see why I should set up something local when we already pay for a remote host to do these things. If I absolutely have to set up sendmail or postfix, then so be it, but I'd appreciate a simpler alternative. I'm open to practically anything at this point.
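
    msmtp is one lightweight fit for exactly this: it talks to an existing SMTP host and behaves like sendmail, so cron jobs and scripts can simply pipe mail into it. A sketch with placeholder host and credentials (chmod 600 the file):

      # ~/.msmtprc
      defaults
      auth on
      tls on
      tls_trust_file /etc/ssl/certs/ca-certificates.crt
      account company
      host smtp.example.com
      port 587
      from server@example.com
      user server@example.com
      password secret
      account default : company

      # send a status mail from a script
      printf "Subject: disk report\n\n%s\n" "$(df -h /)" | msmtp admin@example.com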

  • syspolicy_purge_history generates failed logins

    - by jbrown414
    I have a development server with 3 instances: Default, A and B. It is a physical server, non-clustered. Whenever the syspolicy_purge_history job runs at 2 am, I get failed login alerts. Looking at the job steps, all are successfully completed. It appears that at some point during the step "Erase Phantom System Health Records" the failed logins occur. syspolicy_purge_history on instance B works OK. syspolicy_purge_history on the Default instance seems to want to connect to instance B, resulting in:

      Error: 18456, Severity: 14, State: 11. Login failed for user 'Machinename\sqlsvc-B'. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: <local machine>]

    No errors are reported by PowerShell. syspolicy_purge_history on the A instance seems to want to connect to the Default instance, resulting in:

      Error: 18456, Severity: 14, State: 11. Login failed for user 'Machinename\sqlsvc-Default'. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: <local machine>]

    Then it tries to connect to the B instance, resulting in:

      Error: 18456, Severity: 14, State: 11. Login failed for user 'Machinename\sqlsvc-B'. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: <local machine>]

    No errors are reported by PowerShell. I tried the steps posted here, hoping they would fix it: http://support.microsoft.com/kb/955726. But again, this is not a virtual server, nor is it in a cluster. Do you have any suggestions? Thanks.

  • AclPermissionsFacet fault install SQL-2008-R2

    - by photo_tom
    While attempting to do an installation repair of SQL 2008 R2, I'm failing the pre-check rules. The module that is failing is AclPermissionsFacet, with this message: "The SQL Server registry keys from a prior installation cannot be modified. To continue, see SQL Server Setup documentation about how to fix registry keys." In the log file "Detail_GlobalRules.txt" I was able to find the following error messages:

      2010-09-05 07:24:39 Slp: Could not open sub key key HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSearch.
      2010-09-05 07:24:39 Slp: Could not open sub key key HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\SQLServerSCP.
      2010-09-05 07:24:39 Slp: Could not open sub key key HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQLServer.
      2010-09-05 07:24:39 Slp: Could not open sub key key HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\SQLServerAgent.

    When I look at these keys in the registry, all of their permissions are blank. My problem is that I cannot find any good information on how to reset these keys. This is my new home dev box, and I think these settings got corrupted during the migration from my previous machine. In reviewing the web there doesn't seem to be good information, and what there is suggests using subinacl.exe. But after trying it and seeing it is an XP-based program, I'm at a loss on how to continue. Configuration: Windows 7/64-bit Home Edition, SQL 2008 R2, 6 GB RAM. Suggestions?
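
    Offered as a sketch only (try it on one key first, and export the keys beforehand): PowerShell can rebuild permissions on a registry key without subinacl.exe, since Get-Acl/Set-Acl work against the registry provider. The group name below is an assumption about what SQL Setup expects; the usual baseline is full control for Administrators and SYSTEM.

      # back up the keys first
      reg export "HKLM\Software\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER" mssql-keys.reg

      $key  = 'HKLM:\Software\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQLServer'
      $acl  = Get-Acl $key
      $rule = New-Object System.Security.AccessControl.RegistryAccessRule('BUILTIN\Administrators','FullControl','ContainerInherit','None','Allow')
      $acl.SetAccessRule($rule)
      Set-Acl -Path $key -AclObject $acl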

  • Activation context generation failed for "C:\php\php-cgi.exe". Dependent Assembly

    - by Eyla
    Greetings, I have Windows Server 2008 Server Core. I want to configure this server to host PHP websites using IIS 7. I installed and configured IIS 7 to run PHP using the steps in this website: http://blogs.msdn.com/b/philpenn/archive/2009/07/19/deploying-iis-7-5-fastcgi-php-on-server-core.aspx. Now I'm facing a problem: when I request my PHP website I get this error:

      Server Error 500 - Internal server error. There is a problem with the resource you are looking for, and it cannot be displayed.

    I checked the event log and found these details too:

      Activation context generation failed for "C:\php\php-cgi.exe". Dependent Assembly Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8" could not be found. Please use sxstrace.exe for detailed diagnosis.

    I searched for this error and found a suggested solution, which is to install the Microsoft Visual C++ 2008 SP1 Redistributable Package (x86). I installed it but I'm still getting the same error. Please help me solve this problem, and let me know if you want more info about my issue.
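
    A hedged observation that may explain why the SP1 package did not help: the manifest asks for CRT version 9.0.21022.8, which is the original (RTM) Visual C++ 2008 runtime, while the SP1 redistributable installs the 9.0.30729 line. A frequently reported fix for exactly this event-log message is installing the original, non-SP1 Visual C++ 2008 x86 redistributable (or using a PHP build linked against the CRT already present). sxstrace, which the event log itself suggests, will confirm which version is being requested:

      rem capture a side-by-side trace while reproducing the 500 error
      sxstrace trace -logfile:sxs.etl
      rem ...request the PHP page, then press Enter to stop the trace...
      sxstrace parse -logfile:sxs.etl -outfile:sxs.txt
      notepad sxs.txt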

  • cloning a kvm guest os to a vmdk file

    - by Bond
    I have a production environment where I have 4 guest OSes running on an Ubuntu server which uses KVM. These guests are in an LVM-based setup. I want these virtual machines to exist in vmdk format as well, so that people can do experiments with them in a VMware environment (or it could be Xen too), separate from the KVM server. I would not have any control over that other environment, so I want to give people vmdk images of these virtual machines. The production virtual machines will keep running on the KVM server, but the VMs used for experiments would be vmdk (vmdk is a constraint). Here is the output of lvscan:

      ACTIVE '/dev/abcd/lvm1' [100.00 GiB] inherit
      ACTIVE '/dev/abcd/lvm2' [150.00 GiB] inherit
      ACTIVE '/dev/abcd/lvm3' [50.00 GiB] inherit
      ACTIVE '/dev/abcd/lvm4' [100.00 GiB] inherit

    I was reading the man page of qemu-img, and what I understand is that I need to first create a qcow image file, populate it, and then convert that to a vmdk file. Is that understanding correct? Now suppose /dev/abcd/lvm4 is the virtual machine with which I am going to start this experiment. I can shut down the production VMs for some time to do this. So is the following the correct way to proceed on server 1 (where KVM is running):

      qemu-img convert -c -f raw -O vmdk /dev/abcd/lvm4 /backup/lvm4.img

    or will it affect lvm4 on KVM server 1? I do not want the VM running on the original server to lose any of its content, but I also want a vmdk file for each of the guest OSes on KVM. Before I proceed with any of the above on the production machine, I just want to make sure that I am doing the correct thing, so I am asking here.
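
    Two hedged clarifications, plus a sketch: qemu-img convert only reads its source, so the logical volume itself is not modified, and no qcow intermediate is needed; raw-to-vmdk is a direct conversion. To keep the image consistent, either shut the guest down for the duration or convert from an LVM snapshot (a snapshot taken while the guest runs is only crash-consistent):

      # take a point-in-time copy while lvm4 stays in production
      lvcreate --snapshot --name lvm4-snap --size 10G /dev/abcd/lvm4
      # convert straight from raw to vmdk; the source is only read, never written
      qemu-img convert -f raw -O vmdk /dev/abcd/lvm4-snap /backup/lvm4.vmdk
      lvremove /dev/abcd/lvm4-snap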

  • How to recover the plesk database?

    - by Kau-Boy
    When I try to launch the Plesk administration page of our server I get the following error:

      ERROR: PleskMainDBException MySQL query failed: MySQL server has gone away

    The MySQL server itself is working well, although it seems that the Plesk (psa) database is somehow corrupt, and any action on this database results in a restart of the mysql process, so even queries to other databases on the same MySQL server are lost. If I try to connect to the psa database using phpMyAdmin, I can only see the number of tables the database originally had, but I am not able to open the table listing; as soon as I try, the mysql process crashes again. Connecting to the database over ssh works: I can even run a SELECT statement against any table and get a result. I don't know if it is a Plesk error, an error in the psa database, or even in the MySQL server. Can you give me any tips on how to recover the Plesk system? Should I try to repair the Plesk installation first? And if so, how can I do it, and will all my settings get lost in doing so?

    EDIT: Trying to dump the database, I get the following error:

      mysqldump: Got error: 2013: Lost connection to MySQL server during query when using LOCK TABLES
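
    A sketch of the usual rescue path when one database keeps killing mysqld, offered with the caveat that the right steps depend on the storage engine (for MyISAM tables, mysqlcheck --repair psa would be the first attempt; for InnoDB, forced recovery mode often lets a dump complete):

      # /etc/my.cnf: start in read-only recovery mode; raise the level
      # cautiously only if 1 does not keep mysqld up
      [mysqld]
      innodb_force_recovery = 1

      # then try to extract a dump without LOCK TABLES
      mysqldump --skip-lock-tables psa > psa-rescue.sql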

  • Home Server: cpu virtualisation, what to choose?

    - by Huygens
    I'm looking for virtualisation solutions for storage and OS for a home server: a sort of private cloud where I manage the storage space independently of the VM one. This question focuses on VM (or compute instance) management and what would best suit my needs. (I have another question related to the storage management.) My use cases are:

    - A backup server: rsync and other services running.
    - A personal cloud server: a kind of self-hosted dropbox system, à la ownCloud, for a small number of users.
    - A media server: streaming videos and displaying photos.

    Here are my environment and wishes:

    - Server: HP ProLiant MicroServer with 8 GB RAM (AMD Turion dual core with AMD-V technology).
    - OS types: only Linux (perhaps a *BSD VM in the future). Linux distributions do not matter; I'm familiar with RHEL, Fedora, Suse and Ubuntu, but any other recommendation is fine.
    - 2-3 VMs foreseen: backup server, ownCloud server and media server (optional). These are only servers, so no graphical console is needed (I don't need VirtualBox).
    - By VM I mean a virtualised environment like KVM, Xen, etc., or a compute instance like with OpenStack.
    - Storage should be "virtualised/cloudified"; see my other question.
    - A VM should be able to be migrated to another server in the future if the current server can no longer meet the performance requirements.
    - It does not matter if installation of such a setup is complicated, as long as the management tools allow for easy maintenance.
    - I don't have Windows at home, so the solution should be Linux friendly; web-based management would be nice, but native apps are OK too.
    - The system should be easy to enhance by adding a new server and migrating some of the VMs to it. So it's really a kind of private cloud on which I could run some Linux OSes.
    - I would prefer free (libre, as in free speech) and open source tools, but it does not have to be free as in free beer.

    So Xen, KVM, VirtualBox or OpenStack? What would you recommend?
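
    Given the AMD-V CPU, KVM plus libvirt is the usual baseline answer for a Linux-only setup like this, and offline migration between hosts comes with it. A hedged sketch of checking and bootstrapping it (package names per Ubuntu of that era; paths and sizes are examples):

      # confirm hardware virtualisation is exposed; > 0 means KVM runs accelerated
      egrep -c '(vmx|svm)' /proc/cpuinfo
      # install the hypervisor and management stack
      sudo apt-get install qemu-kvm libvirt-bin virtinst
      # create a first guest with a serial console instead of a graphical one
      sudo virt-install --name backup --ram 2048 \
           --disk path=/var/lib/libvirt/images/backup.img,size=20 \
           --location http://archive.ubuntu.com/ubuntu/dists/precise/main/installer-amd64/ \
           --graphics none --extra-args 'console=ttyS0'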

  • MSMQ on Win2008 R2 won't receive messages from older clients

    - by Graffen
    I'm battling a really weird problem here. I have a Windows 2008 R2 server with Message Queueing installed. On another machine, running Windows 2003 is a service that is set up to send messages to a public queue on the 2008 server. However, messages never show up on the server. I've written a small console app that just sends a "Hello World" message to a test queue on the 2008 machine. Running this app on XP or 2003 results in absolutely nothing. However, when I try running the app on my Windows 7 machine, a message is delivered just fine. I've been through all sorts of security settings, disabled firewalls on all machines etc. The event log shows nothing of interest, and no exceptions are being thrown on the clients. Running a packet sniffer (WireShark) on the server reveals only a little. When trying to send a message from XP or 2003 I only see an ICMP error "Port Unreachable" on port 3527 (which I gather is an MQPing packet?). After that, silence. Wireshark shows a nice little stream of packets when I try from my Win7 client (as expected - messages get delivered just fine from Win7). I've enabled MSMQ End2End logging on the server, but only entries from the messages sent from my Win7 machine are appearing in the log. So somehow it seems that messages are being dropped silently somewhere along the route from XP or 2003 to my 2008 server. Does anyone have any clues as to what might be causing this mysterious behaviour? -- Jesper

  • SQL SERVER – Disabled Index and Update Statistics

    - by pinaldave
    Have you ever come across a situation where a conversation never gets over, and it continues even though the original point of discussion has passed? I am facing the same situation in the case of the Disabled Index. Here are the links to the original conversations: SQL SERVER – Disable Clustered Index and Data Insert (a reader had an issue with a disabled index) and SQL SERVER – Understanding ALTER INDEX ALL REBUILD with Disabled Clustered Index (the same reader asked about the effect of rebuilding indexes). That reader asked me today: "I understood what the disabled indexes do; what is their effect on statistics? Is it true that even though indexes are disabled, they continue updating the statistics?" The answer is very interesting:

    - If you have a disabled clustered index, you will not be able to update the statistics at all, for any index.
    - If you have an enabled clustered index and a disabled nonclustered index, then when you update the statistics of the table, it automatically updates the statistics of ALL the indexes on the table, disabled and enabled both.

    If you are not satisfied with the answer, let us go over a simple example. I have written the necessary comments in the code itself to give a clear idea.

      USE tempdb
      GO
      -- Drop Table if Exists
      IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[TableName]') AND type IN (N'U'))
      DROP TABLE [dbo].[TableName]
      GO
      -- Create Table
      CREATE TABLE [dbo].[TableName](
      [ID] [int] NOT NULL,
      [FirstCol] [varchar](50) NULL
      )
      GO
      -- Insert Some data
      INSERT INTO TableName
      SELECT 1, 'First'
      UNION ALL
      SELECT 2, 'Second'
      UNION ALL
      SELECT 3, 'Third'
      UNION ALL
      SELECT 4, 'Fourth'
      UNION ALL
      SELECT 5, 'Five'
      GO
      -- Create Clustered Index
      ALTER TABLE [TableName]
      ADD CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC)
      GO
      -- Create Nonclustered Index
      CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] ([FirstCol] ASC)
      GO
      -- Check that all the indexes are enabled
      SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
      FROM sys.indexes
      WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
      GO

    Now let us update the statistics of the table and check the statistics update date.

      -- Update the stats of table
      UPDATE STATISTICS TableName WITH FULLSCAN
      GO
      -- Check Statistics Last Updated Datetime
      SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
      FROM sys.indexes
      WHERE OBJECT_ID = OBJECT_ID('TableName')
      GO

    Now let us disable the indexes and check that they are disabled using sys.indexes.

      -- Disable Nonclustered Index
      ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE
      GO
      -- Disable Clustered Index
      ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE
      GO
      -- Check that all the indexes are disabled
      SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
      FROM sys.indexes
      WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
      GO

    Let us try to update the statistics of the table.

      -- Update the stats of table
      UPDATE STATISTICS TableName WITH FULLSCAN
      GO
      /*
      -- The above operation throws the following error:
      Msg 1974, Level 16, State 1, Line 1
      Cannot perform the specified operation on table 'TableName' because its clustered index 'PK_TableName' is disabled.
      */

    When we try to update the statistics, it throws an error because the clustered index is disabled. Now let us enable the clustered index only and attempt to update the statistics of the table right after that.

      -- Now let us rebuild clustered index only
      ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD
      GO
      -- Check the status of all the indexes
      SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
      FROM sys.indexes
      WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
      GO
      -- Check Statistics Last Updated Datetime
      SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
      FROM sys.indexes
      WHERE OBJECT_ID = OBJECT_ID('TableName')
      GO
      -- Update the stats of table
      UPDATE STATISTICS TableName WITH FULLSCAN
      GO
      -- Check Statistics Last Updated Datetime
      SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
      FROM sys.indexes
      WHERE OBJECT_ID = OBJECT_ID('TableName')
      GO

    We can clearly see that even though the nonclustered index is disabled, its statistics are updated as well. If you do not need a nonclustered index, I suggest you drop it, as keeping it disabled is an overhead on your system: every time the statistics are updated for the table, the statistics for disabled indexes are also updated.

      -- Clean up
      DROP TABLE [TableName]
      GO

    For easy reference, the complete script is simply the sections above run in order. Reference: Pinal Dave (http://blog.SQLAuthority.com)

  • SQL SERVER – Solution – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
    Earlier I asked a puzzle about why statistics are not updated. Read the complete details over here: Statistics are not Updated but are Created Once. In the question I demonstrated that statistics which should have been updated after lots of inserts into the table were not updated. (Read the details in SQL SERVER – When are Statistics Updated – What triggers Statistics to Update.) In that example I created the following situation:

    - Create a table
    - Insert 1,000 records
    - Check the statistics
    - Now insert 10 times more: 10,000 records
    - Check the statistics: they will NOT be updated
    - Auto Update Statistics and Auto Create Statistics for the database are TRUE

    I requested two things in the example: 1) Why is this happening? 2) How can this issue be fixed? There are many answers; here is the fix that resolved the issue for me. NOTE: There are multiple answers to this problem and I will do my best to list them all. Solution: create a nonclustered index on column City. Here is the working example, with added explanation at the end.

      -- Execution Plans Difference
      -- Estimated Execution Plan Vs Actual Execution Plan
      -- Create Sample Database
      CREATE DATABASE SampleDB
      GO
      USE SampleDB
      GO
      -- Create Table
      CREATE TABLE ExecTable (ID INT,
      FirstName VARCHAR(100),
      LastName VARCHAR(100),
      City VARCHAR(100))
      GO
      CREATE NONCLUSTERED INDEX IX_ExecTable1 ON ExecTable (City);
      GO
      -- Insert One Thousand Records -- INSERT 1
      INSERT INTO ExecTable (ID, FirstName, LastName, City)
      SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
      'Bob',
      CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
      CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
      ELSE 'Houston' END
      FROM sys.all_objects a
      CROSS JOIN sys.all_objects b
      GO
      -- Display statistics of the table
      sp_helpstats N'ExecTable', 'ALL'
      GO
      -- Select Statement
      SELECT FirstName, LastName, City
      FROM ExecTable
      WHERE City = 'New York'
      GO
      -- Display statistics of the table
      sp_helpstats N'ExecTable', 'ALL'
      GO
      -- Replace your Statistics over here
      DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1);
      GO
      --------------------------------------------------------------
      -- Round 2
      -- Insert One Thousand Records -- INSERT 2
      INSERT INTO ExecTable (ID, FirstName, LastName, City)
      SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
      'Bob',
      CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
      CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
      WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
      ELSE 'Houston' END
      FROM sys.all_objects a
      CROSS JOIN sys.all_objects b
      GO
      -- Select Statement
      SELECT FirstName, LastName, City
      FROM ExecTable
      WHERE City = 'New York'
      GO
      -- Display statistics of the table
      sp_helpstats N'ExecTable', 'ALL'
      GO
      -- Replace your Statistics over here
      DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1);
      GO
      -- Clean up Database
      DROP TABLE ExecTable
      GO

    When I created the nonclustered index on the column City, it also created statistics on the same column with the same name as the index. When we populate the data in the column, the index is updated, invalidating the execution plan, and this leads to the statistics being updated on the next execution of the SELECT. This behavior does not happen on a heap or on a column where the statistics are auto created. If you explicitly update the index, you can often see the statistics updated as well. You can see this happening for sure if you follow John Sansom's suggestion:

    "That was fun! Although the column statistics are invalidated by the time the second select statement is executed, the query is not compiled/recompiled; instead the existing query plan is reused. It is the next compiled query against the column statistics that will see that they are out of date and will then in turn instantiate the action of updating statistics. You can see this in action by forcing the second statement to recompile:

      SELECT FirstName, LastName, City
      FROM ExecTable
      WHERE City = 'New York' OPTION(RECOMPILE)
      GO"

    Kevin Cross also had another suggestion:

    "I agree with John. It is reusing the execution plan. Aside from OPTION(RECOMPILE), clearing the execution plan cache before the subsequent tests will also work, i.e., run this before round 2:

      -- Clear execution plan cache before next test
      DBCC FREEPROCCACHE WITH NO_INFOMSGS;

    Nice puzzle! Kevin"

    As this was a puzzle, John and Kevin both got the correct answer; there was no condition that the answer be part of best practices. I know John, and he is among the finest DBAs around; his tremendous knowledge has always impressed me. John and Kevin will both agree that clearing the cache with DBCC FREEPROCCACHE and recompiling each query every time is for sure not good advice on a production server. It is a correct answer, but not a best practice. By the way, if you have a better solution or a better suggestion, please advise. I am open to changing my answer and publishing further improvements to this solution. On a very separate note, I like to have a clustered index on my primary key, which I have not mentioned here as it is out of the scope of this puzzle. Reference: Pinal Dave (http://blog.SQLAuthority.com)

  • SQL SERVER – SmallDateTime and Precision – A Continuous Confusion

    - by pinaldave
    Some kinds of confusion never go away. Here is one of the ancient confusing things in SQL. The precision of SMALLDATETIME is one concept that confuses a lot of people, proven by the many messages I receive every day on this subject. Let me start with the question: what is the precision of the SMALLDATETIME datatype? What is your answer? Write it down on your notepad. If you do not want to continue reading the blog post, head to my previous blog post over here: SQL SERVER – Precision of SMALLDATETIME.

    A Social Media Question

    Since the increase of social media conversations, I have noticed that the number of comments I receive on this blog is a bit staggering. I receive lots of questions on Facebook, Twitter or Google+. One very interesting question yesterday was asked on Facebook by Raghavendra. I am re-organizing his script and asking all of the questions he has asked me. Let us see if we can help him with his question:

      CREATE TABLE #temp (name VARCHAR(100), registered smalldatetime)
      GO
      DECLARE @test smalldatetime
      SET @test = GETDATE()
      INSERT INTO #temp VALUES ('Value1', @test)
      INSERT INTO #temp VALUES ('Value2', @test)
      GO
      SELECT *
      FROM #temp
      ORDER BY registered DESC
      GO
      DROP TABLE #temp
      GO

    When the above script is run, the rows do not come back in the expected order. The expectation was that the row inserted last would be returned as the first row in the result set, since the ORDER BY is descending. Side note: because the requirement is to get the latest data, we can't use any column other than the smalldatetime column in the ORDER BY. If we use the name column in the ORDER BY, we will get an incorrect result, as it can be any name.

    My Initial Reaction

    My initial reaction was as follows:

    1) Datatype DATETIME2: if finer precision is expected from a column which stores date and time, it should not be smalldatetime. The precision of smalldatetime is one minute (read here); for finer precision use the DATETIME or DATETIME2 datatype. Here is the code which includes the above suggestion:

      CREATE TABLE #temp (name VARCHAR(100), registered datetime2)
      GO
      DECLARE @test datetime2
      SET @test = GETDATE()
      INSERT INTO #temp VALUES ('Value1', @test)
      INSERT INTO #temp VALUES ('Value2', @test)
      GO
      SELECT *
      FROM #temp
      ORDER BY registered DESC
      GO
      DROP TABLE #temp
      GO

    2) Tie-breaker identity: there is always the possibility that two rows were inserted at the same time. In that case, you may need a tie breaker. If you have an increasing identity column, you can use that as the tie breaker.

      CREATE TABLE #temp (ID INT IDENTITY(1,1), name VARCHAR(100), registered datetime2)
      GO
      DECLARE @test datetime2
      SET @test = GETDATE()
      INSERT INTO #temp VALUES ('Value1', @test)
      INSERT INTO #temp VALUES ('Value2', @test)
      GO
      SELECT *
      FROM #temp
      ORDER BY ID DESC
      GO
      DROP TABLE #temp
      GO

    Those were the two quick suggestions I provided. It is not necessary to use both pieces of advice: it is possible to use only the DATETIME datatype, or the identity column can have a datatype of BIGINT, or there can be another tie breaker.

    An Alternate NO Solution

    In the Facebook thread this was also discussed as one of the solutions:

      CREATE TABLE #temp (name VARCHAR(100), registered smalldatetime)
      GO
      DECLARE @test smalldatetime
      SET @test = GETDATE()
      INSERT INTO #temp VALUES ('Value1', @test)
      INSERT INTO #temp VALUES ('Value2', @test)
      GO
      SELECT name, registered,
      ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number"
      FROM #temp
      ORDER BY 3 DESC
      GO
      DROP TABLE #temp
      GO

    However, I believe this is not the solution, and it can be further misleading if used on a production server. Here is an example of why it is not a good solution:

      CREATE TABLE #temp (name VARCHAR(100) NOT NULL, registered smalldatetime)
      GO
      DECLARE @test smalldatetime
      SET @test = GETDATE()
      INSERT INTO #temp VALUES ('Value1', @test)
      INSERT INTO #temp VALUES ('Value2', @test)
      GO
      -- Before Index
      SELECT name, registered,
      ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number"
      FROM #temp
      ORDER BY 3 DESC
      GO
      -- Create Index
      ALTER TABLE #temp ADD CONSTRAINT [PK_#temp] PRIMARY KEY CLUSTERED (name DESC)
      GO
      -- After Index
      SELECT name, registered,
      ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number"
      FROM #temp
      ORDER BY 3 DESC
      GO
      DROP TABLE #temp
      GO

    Now examine the result sets. You will notice that an index created on the base table (which is indeed a schema change) can affect the result set. As you can see, an index can change the result set, so this method is not a reliable way to get the latest inserted rows.

    No Schema Change Requirement

    After giving these two suggestions, I was waiting for feedback from the asker. However, the asker's requirement is that there cannot be any schema change, because the application is used by many other applications. I validated again, and of course, the requirement is no schema change at all: no addition of columns and no change of datatypes of any other columns, and there is no further help available. This is indeed an interesting question. I personally can't think of any solution I could provide him given the requirement of no schema change. Can you think of any other solution to this?

    Need of a Database Designer

    This question once again brings up another ancient question: "Do we need a database designer?" I often come across databases which are facing major performance problems or have redundant data. Normalization is often ignored when a database is built fast under a very tight deadline. Often I come across a database which has tables with unnecessary columns and performance problems. While working as a Developer Lead in my earlier jobs, I saw developers adding columns to tables without anybody's consent and retrieving them with SELECT *. There is a lot to discuss on this subject in detail, but for now, let's discuss the question first. Do you have any suggestions for the above question? Reference: Pinal Dave (http://blog.sqlauthority.com)

  • SQL SERVER – Recover the Accidentally Renamed Table

    - by pinaldave
    I had no answer to the following question. I saw a desperate email marked as urgent delivered to my mailbox: "I accidentally renamed a table in SSMS. I was scrolling very fast and I made a mistake. It was either because I double-clicked or pressed F2 (the shortcut key for renaming). However, I have made the mistake and now I have no idea how to fix this. I am in big trouble. Help me get my original table name back." I have seen many similar scenarios in my life, and they give me a very good opportunity to preach wisdom; but when the house is burning, we cannot talk about how we should have conserved the water earlier. The goal at that point is to put out the fire as fast as we can. I decided to answer this email with my best knowledge. If you have renamed the table, I think you are pretty much out of luck. Here are a few things you can do which, if you are lucky, can give you an idea of what your table name was.

    Method 1 (not recommended, but try your luck): Check the naming conventions of your system. I have often seen organizations name their indexes as IX_TableName_Cols or their keys as FK_TableName1_TableName2_Cols. If your organization follows the same convention, you can recover the name from your table's keys and indexes. Again, note that it is quite possible that the table was renamed earlier and the keys were never updated; this can easily lead you to select an incorrect name. Follow this only if you are confident, or move on to the next method.

    Method 2 (not recommended, but try your luck): This method is also based on your organization's naming conventions. If you use the name of the table in any column name (some organizations use the table name in their incremental identity column name), you can get the name from there.

    Method 3 (not recommended, but try your luck): If you know where your table was used in your stored procedures, you can script those stored procedures and find the name of the table there.

    Method 4 (try your luck): All the best organizations first create a data model of the schema, and there is a good chance this table is documented there; take your chances and refer to the original document. If your organization is good at managing docs or source code, you will get the name of the table back for sure.

    Method 5 (it WORKS, but try it on a development server): There is no sure way to get back the name of a table which you accidentally renamed; however, there is one way which will work for sure. Take your latest full backup and restore it on your development server (remember: not on production, and not where you renamed the table). Restore the latest differential backup on top of the full backup. Then restore the log files one by one, making sure to stop before the point in time at which you renamed the table. Now explore the database, and it will give you the name of the table which you renamed. If you are confident that the table existed with the same name when the last full backup was taken, you do not have to go through all the steps; you can get the name of the table directly from the restore of the last full backup. Read my article about the Backup Timeline.

    Wisdom: How can I miss the opportunity to preach wisdom? Here are a few points to remember:

    - Use a different account to explore the production environment. Do not use the account which has all the rights and permissions all the time. Use an account which has read-only permissions if no modifications are required.
    - Use policy based management to prevent accidental changes. If there were a policy of valid names, the accidental rename of the table would not have been possible unless it was an intentional, deliberate change.
    - Have proper auditing of the system in place. You can use DDL triggers, but be careful with their usage (get them reviewed properly first).
    - (Add your suggestion here)

    I guess Method 5 will work every time (using a point-in-time restore). Everything else is a matter of luck, and if your luck is bad, you will end up with a further incorrect name. Now go back and read the first line of this blog: out of the five methods, four are just lucky guesses. Method 5 will work, but it is a lengthy process if the size of the database is huge or if you do not have a full backup. Did I miss anything obvious? Please leave a comment and I will publish your answer with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com)
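
    As an illustration of the DDL-trigger point above, a minimal sketch of an audit trigger that records every rename in a database (the audit table name is made up; RENAME is the DDL event raised by sp_rename on SQL Server 2008 and later):

      CREATE TABLE dbo.DDLAudit (EventDate DATETIME DEFAULT GETDATE(), EventData XML);
      GO
      CREATE TRIGGER trgAuditRename ON DATABASE
      FOR RENAME
      AS
      -- EVENTDATA() carries the old and new names of the renamed object
      INSERT INTO dbo.DDLAudit (EventData) VALUES (EVENTDATA());
      GO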

  • Better way to load level content in XNA?

    - by user2002495
    Currently I load all my assets in XNA in the main Game class. What I want to achieve later is to load only specific assets for specific levels (the game will consist of many levels). Here is how I load my main assets in the main class:

      protected override void LoadContent()
      {
          spriteBatch = new SpriteBatch(GraphicsDevice);
          plane = new Player(Content.Load<Texture2D>(@"Player/playerSprite"), 6, 8);
          plane.animation = "down";
          plane.pos = new Vector2(400, 500);
          plane.fps = 15;
          Global.currentPos = plane.pos;
          lvl1 = new Level1(Content.Load<Texture2D>(@"Levels/bgLvl1"),
                            Content.Load<Texture2D>(@"Levels/bgLvl1-other"),
                            new Vector2(0, 0), new Vector2(0, -600));
          CommonBullet.LoadContent(Content);
          CommonEnemyBullet.LoadContent(Content);
      }

      protected override void UnloadContent() { }

      protected override void Update(GameTime gameTime)
      {
          if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
              this.Exit();
          plane.Update(gameTime);
          lvl1.Update(gameTime);
          foreach (CommonEnemy ce in cel)
          {
              if (ce.CollidesWith(plane)) { ce.hasSpawn = false; }
              foreach (CommonBullet b in plane.commonBulletList)
              {
                  if (b.CollidesWith(ce)) { ce.hasSpawn = false; }
              }
              ce.Update(gameTime);
          }
          LoadCommonEnemy();
          base.Update(gameTime);
      }

      private void LoadCommonEnemy()
      {
          int randY = rand.Next(-600, -10);
          int randX = rand.Next(0, 750);
          if (cel.Count < 3)
          {
              cel.Add(new CommonEnemy(Content.Load<Texture2D>(@"Enemy/Common/commonEnemySprite"), 7, 2, "left", randX, randY));
          }
          for (int i = 0; i < cel.Count; i++)
          {
              if (!cel[i].hasSpawn) { cel.RemoveAt(i); i--; }
          }
      }

      protected override void Draw(GameTime gameTime)
      {
          GraphicsDevice.Clear(Color.Black);
          spriteBatch.Begin();
          lvl1.Draw(spriteBatch);
          plane.Draw(spriteBatch);
          foreach (CommonEnemy ce in cel) { ce.Draw(spriteBatch); }
          spriteBatch.End();
          base.Draw(gameTime);
      }

    I wish to load my players, enemies, all of it, in the Level1 class. However, when I move my player and enemy code into the Level1 class, the gameTime returns null. Here is my Level1 class:

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Audio;
      using Microsoft.Xna.Framework.Content;
      using Microsoft.Xna.Framework.Graphics;
      using Microsoft.Xna.Framework.Media;
      using Microsoft.Xna.Framework.Input;
      using SpaceShooter_Beta.Animation.PlayerCollection;
      using SpaceShooter_Beta.Animation.EnemyCollection.Common;

      namespace SpaceShooter_Beta.Levels
      {
          public class Level1
          {
              public Texture2D bgTexture1, bgTexture2;
              public Vector2 bgPos1, bgPos2;
              public float speed = 5f;
              Player plane;

              public Level1(Texture2D texture1, Texture2D texture2, Vector2 pos1, Vector2 pos2)
              {
                  this.bgTexture1 = texture1;
                  this.bgTexture2 = texture2;
                  this.bgPos1 = pos1;
                  this.bgPos2 = pos2;
              }

              public void LoadContent(ContentManager cm)
              {
                  plane = new Player(cm.Load<Texture2D>(@"Player/playerSprite"), 6, 8);
                  plane.animation = "down";
                  plane.pos = new Vector2(400, 500);
                  plane.fps = 15;
                  Global.currentPos = plane.pos;
              }

              public void Draw(SpriteBatch sb)
              {
                  sb.Draw(bgTexture1, bgPos1, Color.White);
                  sb.Draw(bgTexture2, bgPos2, Color.White);
                  plane.Draw(sb);
              }

              public void Update(GameTime gt)
              {
                  bgPos1.Y += speed;
                  bgPos2.Y += speed;
                  if (bgPos1.Y >= 600) { bgPos1.Y = -600; }
                  if (bgPos2.Y >= 600) { bgPos2.Y = -600; }
                  plane.Update(gt);
              }
          }
      }

    Of course, when I did this, I deleted all my player code in the main Game class. All of that compiles fine (no errors), except that the game cannot start. The debugger says that plane.Update(gt); in the Level1 class has a null GameTime, and the same thing happens in the Draw method of the Level class. Please help; I appreciate your time. [EDIT] I know that using a switch in the main class could be a solution, but I would prefer a cleaner one, since using a switch still means I need to load all the assets through the main class, and the code will be A LOT later on, for each level.
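
    A hedged observation on the crash itself: nothing in the main LoadContent shown above ever calls lvl1.LoadContent(...), so plane inside Level1 stays null, and a NullReferenceException at plane.Update(gt) can easily read as "gameTime is null" in the debugger. A sketch of the missing call:

      protected override void LoadContent()
      {
          spriteBatch = new SpriteBatch(GraphicsDevice);
          lvl1 = new Level1(Content.Load<Texture2D>(@"Levels/bgLvl1"),
                            Content.Load<Texture2D>(@"Levels/bgLvl1-other"),
                            new Vector2(0, 0), new Vector2(0, -600));
          lvl1.LoadContent(Content);   // without this, Level1.plane is never created
          CommonBullet.LoadContent(Content);
          CommonEnemyBullet.LoadContent(Content);
      }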

  • Windows Server 2012 Branchcache vs. DFS-R

    - by TheCleaner
    Warning, subjective question ahead! But hopefully a good one that won't get closed.

    SCENARIO: I have a branch office that currently has no on-premise server. They access everything, including a DC, across a 12 Mbps WAN link (MPLS). The link isn't saturated, averaging around 20% utilization. The circuit is very stable and has a high SLA and excellent uptime. However, large file transfers (mainly reads, not writes) from the file server across the WAN can be slow. We don't currently utilize DFS.

    RESEARCH DONE: I'm aware of WAN acceleration, using either dedicated hardware (Riverbed) or a dedicated software VM (Silver Peak, for example). But the pricing is outside our current budget, and from our perspective the need isn't quite there yet (since the issue is mainly a "pull" scenario, not push/pull). I'm mainly looking at deploying a Windows server at this branch office and utilizing either DFS-R or BranchCache. Looking at a table comparison, and assuming a "hosted BranchCache server" rather than simply distributed mode, it would appear there are benefits to both, even if both are "hosted" on a server.

    QUESTIONS I ACTUALLY HAVE:

    1. In what scenarios does each of these techs shine, and when do you choose one over the other?
    2. Looking at a hosted BranchCache server, can you set "pre-fetching" of certain folders/files on the central file server so that they are immediately accessible locally at the branch? Do you have to do this on a schedule (if it is possible at all)?
    3. Looking at DFS-R, my concern (apparently solved with 3rd-party apps) is file locking and making sure a file gets updated properly during a write operation (i.e., if both copies are accessed and both are written to, which file takes precedence and what happens to the changes?). Ideally, it would seem, you would lock any alternate replicas of the data, but is it really that big of an issue?
    4. Does BranchCache lock the central file for editing?
    5. Does BranchCache transmit back to the central file only the deltas of what has changed?
    6. Would either technology be ill-advised if the branch office server was also going to be a domain controller?
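
    On question 2, a hedged sketch of how hosted-mode BranchCache is stood up and how content can be pre-seeded on Server 2012 (the share path is a placeholder; Publish-BCFileContent plus the Export/Import-BCCachePackage cmdlets are the documented pre-hashing route, and they can of course be scheduled like any other PowerShell):

      # on the branch server
      Install-WindowsFeature BranchCache
      Enable-BCHostedServer -RegisterSCP

      # on the head-office file server: pre-hash a folder and stage the data
      Publish-BCFileContent -Path D:\Shares\Engineering -StageData
      Export-BCCachePackage -Destination D:\Staging

      # ship the package to the branch and import it into the hosted cache
      Import-BCCachePackage -Path D:\Staging\PeerDistPackage.zip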

  • The proxy server received an invalid response from an upstream server

    - by chandank
    I have a Tomcat server behind Apache. I am using mod_ssl and a reverse proxy to Tomcat. Everything runs on the default ports. The full error is as follows:

      Proxy Error
      The proxy server received an invalid response from an upstream server. The proxy server could not handle the request POST /pages/doeditpage.action.
      Reason: Error reading from remote server

    If I clear the browser cache the error goes away, and it comes back after a few attempts. I tested the same on Chrome/Firefox/IE on Windows. Strangely, it works perfectly on Linux-based Chrome/Firefox. I googled a lot; there are a few answers on Stack Overflow, but I was not able to find my answer. Is this a server-side problem? Because so many browsers can't all be wrong at the same time on Windows.
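
    Two knobs frequently suggested for intermittent "Error reading from remote server" with mod_proxy in front of Tomcat, offered as a sketch to try rather than a confirmed diagnosis; both stop a half-closed keepalive socket to the backend from poisoning a POST:

      # inside the relevant VirtualHost
      SetEnv proxy-nokeepalive 1
      SetEnv force-proxy-request-1.0 1
      # or tune the worker instead of disabling reuse entirely
      ProxyPass /pages http://localhost:8080/pages retry=0 ttl=60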

  • log shipping of biztalk database on SQL server 2008 standard edition

    - by Manjot
    Hi, I want to set up log shipping for the BizTalk databases from one SQL Server 2008 Standard Edition instance (server A) to another SQL Server 2008 Standard Edition instance (server B). I was told that for BizTalk, log shipping is not like standard log shipping. I was able to find 2 links:

      http://msdn.microsoft.com/en-us/library/cc296836%28v=BTS.10%29.aspx
      http://msdn.microsoft.com/en-us/library/cc296741%28v=BTS.10%29.aspx

    but they are not talking about SQL 2008 servers. Can anyone please help with this? Thanks in advance.
