Search Results

Search found 1342 results on 54 pages for 'till helge helwig'.

  • Testing Hibernate DAO, without building the universe around it.

    - by Varun Mehta
    We have an application built using Spring/Hibernate/MySQL, and now we want to test the DAO layer, but here are a few shortcomings we face. Consider the use case of multiple objects connected to one another, e.g. Book has Pages. The Page object cannot exist without the Book, as book_id is a mandatory FK in Page, so to test a Page I have to create a Book. This simple use case is easy to manage, but once you start building a Library, you cannot test anything until you have created the whole universe surrounding the Book and Page! So to test Page: create Library, create Section, create Genre, create Author, create Book, create Page, and only now test Page. Is there an easy way to bypass this "universe creation" and just test the Page object in isolation? I also want to be able to test HQL related to Page, e.g.: SELECT new com.test.BookPage(book.id, page.name) FROM Book book, Page page. JUnit is supposed to run in isolation, so I have to write the whole setup in each test case just to create the Page. Any tips will be useful.

    Read the article

  • Timer not beginning from 00:00

    - by studentProgrammer
    I have a game in which there is a timer involved. For every two minutes that pass, a new round between different players begins. So I have a text box starting at 00:00 that changes each second until it equals 02:00. Now, I want to save the state of the game if the user closes the form in the middle of a round. What I need is that upon loading, the text box starts at the time the user left the game last time and continues up to 02:00 as normal. How can I do this? This is what I have so far, where Tournament is the Form:

        public Tournament()
        {
            _timer = new System.Windows.Forms.Timer();
            _timer.Interval = 1000;
            _timer.Tick += Timer_Tick;
            _myDateTime = DateTime.Now;
            newDate = new DateTime();
            newDate = newDate.AddMinutes(2.00);
            _timer.Start();
            InitializeComponent();
        }

        void Timer_Tick(object sender, EventArgs e)
        {
            var diff = DateTime.Now.Subtract(_myDateTime);
            this.textBox1.Text = diff.ToString(@"mm\:ss");
            DateTime dt = Convert.ToDateTime(diff.ToString());
            if (newDate.Minute == dt.Minute)
            {
                _timer.Stop();
                _myDateTime = DateTime.Now;
                displayPointsOrResults();
                this.textBox1.Text = diff.ToString(@"mm\:ss");
            }
        }

    And in my LoadGame method, where timePassed is what I have written in the text box:

        string[] splitted6 = timePassed.Split(':');
        if (splitted6[0] == "00")
        {
            int remainingTime = 120 - Convert.ToInt32(splitted6[1]);
            DateTime time = DateTime.Now.Date;
            time = time.AddMinutes(remainingTime);
            _myDateTime = time;
        }
        else
        {
            int leftTime = Convert.ToInt32(splitted[0].Trim('0') + splitted[1]);
            int remainingTime = 120 - leftTime;
            DateTime time = DateTime.Now.Date;
            time = time.AddMinutes(remainingTime);
            _myDateTime = time;
        }
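
    One way to get the "resume" behaviour (a sketch, not from the original post; it assumes .NET 4's TimeSpan.ParseExact is available and that timePassed holds the saved "mm:ss" text) is to back-date _myDateTime by the elapsed amount instead of computing a remaining time, so the existing Timer_Tick arithmetic keeps working unchanged:

        // Sketch: resume the round timer from a saved "mm:ss" value.
        private void ResumeTimer(string timePassed)
        {
            TimeSpan elapsed = TimeSpan.ParseExact(timePassed, @"mm\:ss",
                System.Globalization.CultureInfo.InvariantCulture);

            // Pretend the round started 'elapsed' ago; Timer_Tick's
            // DateTime.Now.Subtract(_myDateTime) then continues from that point.
            _myDateTime = DateTime.Now - elapsed;
            _timer.Start();
        }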

    Read the article

  • yet another logic.

    - by Sunil
    I'm working on a research problem out of curiosity and I don't know how to program the logic I have in mind. Let me explain it to you: I have 4 vectors, say v1 = 1 1 1 1, v2 = 2 2 2 2, v3 = 3 3 3 3, v4 = 4 4 4 4. Now what I want to do is add them combination-wise, i.e. v12 = v1+v2, v13 = v1+v3, v14 = v1+v4, v23 = v2+v3, v24 = v2+v4, v34 = v3+v4. Up to this step it is just fine. The problem/trick is that at the end of each iteration I give the obtained vectors to a black-box function and it returns only a few of them, say v12, v13 and v34. Now I want to add to each of these vectors one of v1, v2, v3, v4 that hasn't been added to it before. For example, v3 and v4 haven't been added to v12, so I want to create v123 and v124. Similarly for all the vectors: v12 should become v123 = v12+v3 and v124 = v12+v4; v13 should become v134 = v13+v4 (v132 should not occur because I already have v123); v14, v23 and v24 cannot be considered because they were deleted by the black-box function, so all we have to work with is v12, v13 and v34; and v34 should become v342 = v34+v2 (v341 cannot occur because we have v134). It is important that I do not do it all in one step at the start; for example I could enumerate all (4 choose 3) = 4C3 combinations and finish it off, but I want to do it step by step at each iteration. I've asked a modified version of this question before (without the black-box function) and got answers here. Can anybody tell me how to do it when the black-box function is included? A modification of the previous answer would also be great. Thanks in advance.
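
    One way to program the incremental step (a sketch in C#, since the post doesn't name a language; Combiner, Extend, survivors and seen are all made-up names) is to identify each combined vector by the sorted set of source indices summed into it, e.g. v124 as {0,1,3}, and keep a global set of those keys so that v132 is recognised as the already-built v123:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class Combiner
        {
            // Extends each surviving combination with every source vector it does
            // not already contain, skipping index sets that were built before.
            public static List<int[]> Extend(List<int[]> survivors, int sourceCount,
                                             HashSet<string> seen)
            {
                var next = new List<int[]>();
                foreach (int[] combo in survivors)
                {
                    for (int v = 0; v < sourceCount; v++)
                    {
                        if (Array.IndexOf(combo, v) >= 0)
                            continue; // this source vector is already in the sum
                        int[] extended = combo.Concat(new[] { v })
                                              .OrderBy(i => i).ToArray();
                        string key = string.Join(",",
                            Array.ConvertAll(extended, i => i.ToString()));
                        if (seen.Add(key)) // false if e.g. {0,1,2} was built before
                            next.Add(extended);
                    }
                }
                return next;
            }
        }

    Each iteration would then sum the actual vectors for every index set in the result, hand those sums to the black box, and call Extend again with whatever survives.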

    Read the article

  • A better way of handling the below program (maybe with Take/Skip/TakeWhile... or anything better)

    - by Newbie
    I have a data table which has only one row, but 44 columns. My task is to get the values after the first four columns, through to the end. I have written the program below, which suits my requirement (kindly note that dt is the DataTable):

        List<decimal> lstDr = new List<decimal>();
        Enumerable.Range(0, dt.Columns.Count).ToList().ForEach(i =>
        {
            if (i > 3)
                lstDr.Add(Convert.ToDecimal(dt.Rows[0][i]));
        });

    There is nothing wrong with the program; it works fine. But I feel there may be a better way of handling it, maybe with Skip or Take or TakeWhile or anything better. I am looking for a better solution than the one I implemented. Is it possible? I am using C# 3.0. Thanks.
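
    With LINQ's Skip, the same thing collapses to a single expression (a sketch; ItemArray exposes the row's values as object[], and Skip/Select are available in C# 3.0 / .NET 3.5):

        // Skip the first four columns, convert the rest to decimal.
        List<decimal> lstDr = dt.Rows[0].ItemArray
                                .Skip(4)
                                .Select(Convert.ToDecimal)
                                .ToList();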

    Read the article

  • Click event for dynamically added li element?

    - by user1774460
    I have 20 links in total. The first 10 links are directly visible to the user, and the remaining 10 are shown when the user hovers over a down-arrow image (used for hover). When the user clicks any hover link, the links up to the currently clicked one are moved to the left side (another down arrow is used to add the right-side links to the left side dynamically by creating li elements). This works fine, but it does not work vice versa: when I click a left-side link, it should navigate back to the right side, but the click event is not firing for the li elements I created dynamically. Can anyone help me? My sample code:

        // To append the line from the right hover to the looplink div
        $('#loop_link').append(''+$('#pagelinkli_'+val3).html()+'');
        // To hide the link in the right hover div once it is selected and appended in the loop link div
        $('#pagelink_a #pagelinkli_'+val3).css('display','none');
        // This line moves the link from the loop link to the left hover div
        $('#pagelink_a_left ul').prepend((''+$('#pagelinkli_'+val6).html()+''));
        // This line hides the link in the looplink div
        $('#loop_link #pagelinkli_'+val6).css('display','none');

    This code navigates a link from the right hover to the tab bar, from the tab bar to the left hover, and vice versa.

    Read the article

  • ajax panel update during the middle of a function C# ASP.net site

    - by user2615302
    This is the button click. I would like to set LbError.Text to "" before the rest of the function continues. This is my current code:

        protected void BUpload_Click(object sender, EventArgs e)
        {
            LbError.Text = "";
            UpdatePanel1.Update();
            // I need it to update here before it moves on,
            // but it waits till the end to update the label
            // Execute functions.....
            .......
            .......
        }

        <asp:UpdatePanel ID="UpdatePanel1" runat="server" UpdateMode="Conditional">
          <ContentTemplate>
            <asp:Label ID="LbError" runat="server" CssClass="failureNotification" Text=""></asp:Label>
            <br /><br />
            <br /><br />
            <asp:TextBox ID="NewData" runat="server"></asp:TextBox><br />
            then click
            <asp:Button ID="BUpload" runat="server" Text="Upload New Data" OnClick="BUpload_Click"/><br />

    Things I have tried include adding another UpdatePanel, with the same results. Any help would be greatly appreciated.
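
    For what it's worth, UpdatePanel1.Update() only marks the panel for re-rendering; nothing is sent to the browser until BUpload_Click returns, so no server-side call can repaint the label mid-handler. One common workaround (a sketch, not from the original post) is to blank the label client-side just before the async postback begins:

        // Sketch: clear the label in the browser before the postback starts,
        // so the user sees it empty while BUpload_Click runs on the server.
        protected void Page_Load(object sender, EventArgs e)
        {
            BUpload.OnClientClick =
                "document.getElementById('" + LbError.ClientID + "').innerHTML = '';";
        }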

    Read the article

  • Having trouble deleting a node from a linked list

    - by Requiem
    I've been working on this code for the shell I'm creating, and for some reason it isn't working. I'm implementing a watchuser function that watches a user when an argument is given (args[1]). However, when a second argument (args[2]) of "off" is given, the user should be deleted from the linked list and should no longer be watched.

        struct userList * goList;
        goList = userInventory;
        do {
            if (strcmp(userInventory->username, args[1]) == 0) {
                printf("%s\n", args[1]);
                printf("%s\n", userInventory->username);
                struct userList * temp2;
                temp2 = userInventory->next;
                if (userInventory->next != NULL) {
                    userInventory->next = temp2->next;
                    userInventory->next->prev = userInventory;
                }
                free(temp2);
            }
            goList = goList->next;
        } while (goList != userInventory);

    My global struct is as follows:

        struct userList {
            char * username;
            struct userList * prev;
            struct userList * next;
        };

    For some reason, this code won't delete the user node from my linked list. Adding works, but this remove function won't, and I'm not sure why. The print statements are there just to make sure it's executing the condition, which it is. If anyone could help me find the reason behind my error, I'd greatly appreciate it. Till then, I'll be trying to debug this. Thanks.

    Read the article

  • cron doesn't execute its commands

    - by Silvio Keller
    I created my own small server with Debian. Last night I updated it. The update produced an error while generating the initrd, and the machine didn't boot. Today I booted from another filesystem and did dpkg --configure -a with chroot. I also checked the filesystem. Now everything should be OK, but cron doesn't work :-( It is the same /etc/crontab file, but it doesn't work. I reinstalled cron and tried many things. Is there a way to see cron's log? I have only read about rsyslog, but I do not have rsyslog installed, because the server is based on a minimal system (Freeagent Dockstar). Does someone have an idea? Best regards, Silvio Keller

    Update: There is no file /var/log/syslog, and dpkg -l | grep syslog gives me no output, so I think syslog is not installed; it is only a minimal system. cron -l gives:

        cron: can't lock /var/run/crond.pid, otherpid may be 687: Resource temporarily unavailable

    So I stopped cron with /etc/init.d/cron stop and executed cron -l again; this gives no output. At this moment I tried to start cron with /etc/init.d/cron start:

        Starting periodic command scheduler: cron failed!

    But there is no additional error info. I do see that there is now a process called cron -l running in the background. If I stop it, /etc/init.d/cron start works:

        Starting periodic command scheduler: cron.

    I used the crontab file /etc/crontab, which always worked for me, until I updated my kernel and the initrd. The file's content is:

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        # m h dom mon dow user  command
        17 * * * *    root  cd / && run-parts --report /etc/cron.hourly
        25 6 * * *    root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6 * * 7    root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6 1 * *    root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        00 5 * * *    root  dummy
        23 45 * * 7   root  dummy
        00 * * * *    root  dummy
        */1 * * * *   root  dummy
        00 1 * * *    root  dummy
        00 4 * * *    root  dummy
        */5 * * * *   root  dummy
        #00 */10 * * * root dummy
        01 0 * * *    root  dummy
        00 5 * * *    root  dummy
        00 4 * * *    root  dummy
        #

    If I start crontab -e, it creates a new file /tmp/crontab.vn87tv/crontab, which is unfortunately on a tmpfs and which also doesn't work. Thanks & best regards

    Read the article

  • Double Layer DVD+R burning problem - I/O Error

    - by Mehper C. Palavuzlar
    I have a modern PC (quad-core CPU, 4 GB RAM, Win7 Home Premium 64-bit), but I have a problem burning .dvd images to double-layer (8.5 GB) DVDs. I have wasted many DVD+R DL discs, to no avail. Here is a short explanation of what I did: I'm using ImgBurn v2.5.0.0 (the latest version). I'm trying to burn an image file (.dvd), which sits together with the related .iso file in the same folder. In ImgBurn, I select the file with the .dvd extension and set the writing speed to 2.4x. The burning process starts normally, but at around 7% it gives an I/O Write Error (a screenshot of the error was attached). I wasted 3 discs (Magic, made in Taiwan, DVD+R DL, 8.5 GB) trying the same thing. My DVD writer is an LG GH22NP20 with an IDE connection. I updated its firmware from 1.04 to 2.00, but again no success in burning. Then my cousin brought his LG (an older model) which, he claims, successfully wrote DL discs of the same brand (Magic). I unplugged my LG, plugged the older one in, and tried to burn the image again. It also gave an I/O Error, without even reaching 7%. I tried another burning program (CloneCD), but failed again. Then I bought other brands (TDK and Verbatim) and tried to burn the image. The burning process started successfully but failed again at around 14% (for Verbatim) and 25% (for TDK). (A screenshot from ImgBurn was attached.) I've burned lots of 4.7 GB DVD+Rs and DVD-Rs with this LG writer successfully, without even a single error, so this case is very bothersome for me. What should I do? Should I buy a new DVD writer other than LG? Could this be related to Windows or my hardware configuration? Thanks for your help. Edit: My burner works on my cousin's machine, so the problem must be related to my system. What could be the reason? Latest news: I borrowed an external USB DVD writer from a friend, a PHILIPS SPD3000CC (an old model). Guess what! It's burning DVD+R DLs successfully! How come an internal DVD writer in a brand-new computer system cannot burn DL DVDs? Now I'm considering buying a new internal DVD writer with not an IDE but a SATA connection...

    Read the article

  • missing files after reassembly of RAID-5

    - by Kris_R
    I had to open my file server's housing on Sunday to replace a faulty fan. What I didn't see was that one of the SATA cables was not properly connected. The first thing I did after a reboot was a check of the RAID status, and it showed immediately that one drive was missing. Up to this moment the device had not been used (however, it was mounted, so I'm not 100% sure the system did nothing). I stopped md0 and re-plugged the cable:

        mdadm --stop /dev/md0
        poweroff

    After another reboot I checked the removed drive with mdadm --examine /dev/sdd1:

        ...
        Checksum : 3276bc1d - correct
        Events : 315782
        Layout : left-symmetric
        Chunk Size : 32K

        Number   Major   Minor   RaidDevice   State
        this  0     8      49        0         active sync   /dev/sdd1
        0     0     8      49        0         active sync   /dev/sdd1
        1     1     8      65        1         active sync   /dev/sde1
        2     2     8      33        2         active sync   /dev/sdc1
        3     3     8      17        3         active sync   /dev/sdb1

    I was a bit surprised that it was shown as active (even though mdadm had earlier said this device was removed from the array) and that its checksum was OK. I recreated the RAID with:

        mdadm --assemble /dev/md0 --scan

    The command mdadm --detail /dev/md0 showed that all drives were running and the system was in "clean" state. I mounted the device md0, and then came the hiccup. I wanted to work on one of the last files I had been using before this whole situation, and it was not there. In another place I was actually missing all files from the directory where I had been working. As far as I can see, most of the files that are older than a few days are intact, but some newer ones are missing. Now the big question: what would be your advice? Is there a way to get this data back? I thought about removing the drive that was earlier labeled by mdadm and rebuilding the array with another empty HDD. I've found that after re-assembly the "broken" drive has another label (changed from sdd to sdb). Can this influence the rebuild process? If yes, how do I reassemble the array properly? I'm sure the SATA cables are still connected to the controller in the same order. P.S. Please no advice like "restore from backup". I do backups on Sunday nights, and this happened in the late afternoon, so a backup is not really an option for me. P.P.S. I asked this question on Unix & Linux, but no answer came up during the last two days, and I'm getting quite anxious. Sorry for duplicating if any of you read the other forum.

    Read the article

  • Windows Server 2008 - one MAC Address, assign multiple external IP's to VirtualBoxes running as guests on host

    - by Sise
    Couldn't find any help at Google or here. The scenario: Windows Server 2008 Std x64 on an i7-975, 12 GB RAM. The server is running in a data centre. One hardware NIC (RealTek PCIe GBE), one MAC address. The data centre provides us 4 static external IPs. The first is assigned to the host by default, of course. I have ordered all 4 IPs; the data centre can assign the available IPs to the physical MAC address of the given NIC only. This means one NIC, one MAC address, 4 IPs. Everything works fine so far. Now, what I would like to have: VirtualBox installed with 1-3 guests running, each getting its own external IP assigned. Each of them should be a standalone Win Server 2008. It looks like the easiest way would be to put the guests into a virtual subnet and route all data coming to the 2nd till 4th external IPs through to these guests using their subnet IPs. I have been through the VirtualBox User Manual regarding networking. What's not working: I can't use bridged networking on its own, because the IPs are assigned to the one MAC address only. I can't use NAT networking, because it does not allow access from outside or from the host to the guest, and I do not want to use port forwarding. Host-only networking by itself would not allow internet access; by sharing the host's default internet connection, internet is granted from the guest to the outside, but not from outside or the host to the guest. Internal networking is not really an option here. What I have tried is to create an additional MS Loopback adapter for a routed subnet where the VBox guests sit; the idea was then to NAT the internet connection to the loopback 'subnet'. But I can't ping the gateway from the guests. Using the route command in the command shell or RRAS (static route, NAT), I didn't get there either. Solutions like the following do work for the one way, but not for the way back: "For your situation, it might be best to use the Host-Only adapter for ICS. Go to the preferences of VB itself and select network. There you can change the configuration for the interface. Set the IP address to 192.168.0.1, netmask 255.255.255.0. Disable the DHCP server if it isn't already, and that's it. Now the Guest should get an IP from Windows itself and be able to get onto the internet, while you can also access the Host." Slowly I'm getting pretty stuck on this topic. There is a possibility I've just overlooked something or haven't found it by trying, especially using RRAS, but it's kinda hard to find useful howtos on the web. Thanks in advance! Best regards, Simon

    Read the article

  • Issue with VMware vSphere and NFS: recurring APD state

    - by Bastian N.
    I am experiencing issues with VMware vSphere 5.1 and NFS storage on 2 different setups, which result in an "All Paths Down" state for the NFS shares. At first this happened once or twice a day, but lately it occurs much more frequently, especially when Acronis Backup jobs are running. Setup 1 (production): 2 ESXi 5.1 hosts (Essentials Plus) + OpenFiler with NFS as storage. Setup 2 (lab): 1 ESXi 5.1 host + Ubuntu 12.04 LTS with NFS as storage. Here is an example from the vmkernel.log:

        2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 248: APD Timer started for ident [987c2dd0-02658e1e]
        2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 395: Device or filesystem with identifier [987c2dd0-02658e1e] has entered the All Paths Down state.
        2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 846: APD Start for ident [987c2dd0-02658e1e]!
        2013-05-28T08:07:37.485Z cpu0:2052)NFSLock: 610: Stop accessing fd 0x410007e4cf28 3
        2013-05-28T08:07:37.485Z cpu0:2052)NFSLock: 610: Stop accessing fd 0x410007e4d0e8 3
        2013-05-28T08:07:41.280Z cpu1:2049)StorageApdHandler: 277: APD Timer killed for ident [987c2dd0-02658e1e]
        2013-05-28T08:07:41.280Z cpu1:2049)StorageApdHandler: 402: Device or filesystem with identifier [987c2dd0-02658e1e] has exited the All Paths Down state.
        2013-05-28T08:07:41.281Z cpu1:2049)StorageApdHandler: 902: APD Exit for ident [987c2dd0-02658e1e]!
        2013-05-28T08:07:52.300Z cpu1:3679)NFSLock: 570: Start accessing fd 0x410007e4d0e8 again
        2013-05-28T08:07:52.300Z cpu1:3679)NFSLock: 570: Start accessing fd 0x410007e4cf28 again

    As long as the issue occurred once or twice a day, it really wasn't a problem, but now it has an impact on the VMs. The VMs get slow or even hang, resulting in a reset through vCenter in the production environment. I have searched the web extensively and asked in forums, but so far nobody has been able to help me. Based on blog posts and VMware KB articles, I tried the following NFS settings:

        Net.TcpipHeapSize = 32
        Net.TcpipHeapMax = 128
        NFS.HeartbeatFrequency = 12
        NFS.HeartbeatMaxFailures = 10
        NFS.HeartbeatTimeout = 5
        NFS.MaxQueueDepth = 64

    Instead of NFS.MaxQueueDepth = 64 I also tried other settings, like NFS.MaxQueueDepth = 32 or even NFS.MaxQueueDepth = 1, unfortunately without any luck. It would be great if someone could help me with this issue. It is really annoying. Thanks in advance for all the help. [UPDATE] As I explained in the comment below, here is the network setup: on the production setup the NFS traffic is bound to a separate VLAN with ID 20. I am using an HP 1810 24-port switch. The OpenFiler system is connected to the VLAN with 4 Intel GbE NICs using dynamic LACP. The ESXi hosts each have 4 Intel GbE NICs, using 2 static LACP trunks of 2 NICs each; one pair is connected to the regular LAN and the other to VLAN 20. (The post also included screenshots of the vSwitch, the switch configuration, and the port configuration.) On the lab setup it's a single Intel NIC on each side, without VLAN but with a different IP subnet.

    Read the article

  • Intermittent Disconnection of Client Computers from Domain Server

    - by dilip nagle
    The background: I have a Windows 2008 Server Enterprise with 25 user CAL licenses. It hosts a domain with all users and a network-shared HP printer. The server has two network cards, and both cards as well as all client machines are on the 192.168.1.* addressing scheme with subnet mask 255.255.255.0. Of the two network cards, 192.168.1.231 and 192.168.1.233, only 192.168.1.231 is registered with DNS; 192.168.1.233 (i.e. the 2nd network card) has 192.168.1.231 as its default gateway and DNS address. The server has three hard disks with capacities of 500 GB, 500 GB and 1 TB, partitioned as (C,D,E), (F,G) and (K), with partition K holding all user data in various shared folders. Each of these folders (on partition K) is mapped onto each user's computer according to the access rights given to them. The problem: the server was installed about 6 months ago, and to date it has not hung or given any problem even once. All the client computers are able to run the web-based software via IP address, e.g. http://192.168.1.231/webERP/default.aspx. However, occasionally, when any client computer tries to browse the network mappings, it hangs. Again, there is no fixed pattern; this may happen after running smoothly for, say, 3 days. On each client machine, the network settings are as follows: IP address: 192.168.1.* (where * is 1, 2, 3, ...); subnet mask: 255.255.255.0; default gateway: 192.168.1.231 (which is a server card and the DNS address); preferred DNS server: 192.168.1.231. In the Advanced tab under WINS, LMHOSTS lookup is unticked and the default is radio-buttoned. Ideally, I would have loved to disable NetBIOS over TCP/IP, because disabling NetBIOS would drastically reduce the traffic of NetBIOS broadcasts to all the computers on the net for name resolution, but some network printers cannot be accessed if that option is enabled (i.e. radio-buttoned). On the server, WINS is running; I have scavenged records, verified database integrity, removed tombstoned records, etc. The critical errors, shown only once a day when the server is started, are 4224 (WINS) and 12923 (Server Licensing failed to update DNS record). I fail to understand why client machines hang when they try to browse mapped network shared folders on the K drive. Kindly advise.

    Read the article

  • Old CRT screen's high resolution doesn't work anymore on Windows 7

    - by Mixxiphoid
    One year ago I decided to switch from Windows XP to Windows 7. I had a 17" CRT monitor with a screen resolution of 1600x1200, which worked fine on Windows XP. While installing Windows 7, everything went well until Windows 7 went to install the video card driver. Windows 7 put the screen to its recommended resolution, and my screen went black. I waited a few minutes to be sure the installation was finished, turned off the computer by hand, and restarted the computer at a resolution of 800x600. When Windows 7 was done installing, I went to screen resolutions, and 1600x1200 was at the top of the list with '(recommended)' next to it. I tried setting it to 1600x1200, but again my screen went black. I installed all Windows 7 updates, including the video card driver from the NVIDIA site (NOT from Windows 7). I tried about everything to make it work at 1600x1200, but with no success. The highest resolution I got with the CRT monitor was 1280x1024. I had a TFT screen which had 1280x1024 as its max resolution and better colors, so I have used that one until today. My video card is a 9600GT and my power supply is more than sufficient. I even tried installing the driver I had on XP, to see if it worked, but no results. I tried classic mode on Windows 7 and changed the DPI, the frequency and the monitor settings, but nothing worked. I really like a vertical resolution of 1200, but it seems today I'm bound to all those standard monitors with a resolution of 1980x1024... Can anybody explain to me why it worked on Windows XP but not on Windows 7? And maybe offer a solution to the problem (I have actually given up on getting it fixed...). Thanks a lot in advance. SOLUTION: I downloaded the corresponding monitor driver and installed it. Next I rebooted my computer at low resolution (800x600) and connected the CRT monitor. When Windows 7 booted successfully, I went to Computer Management and 'updated the driver' of my monitor: I manually selected 'Generic PnP monitor' and made that one active. I then went to the advanced settings at 'Screen resolution' and selected the mode '1600x1200 (32-bit) 80 Hertz' (95 Hertz did not work). Now I had my resolution at 1600x1200. I repeated the earlier step to select the original monitor again instead of the generic one. Quite a way to solve this problem, but it worked! Thanks a lot, all of you.

    Read the article

  • Very slow printing from print server

    - by evolvd
    The print server is a VM on Xen; the VM is Windows 2003 32-bit. During the issue the VM is not being taxed in any way; CPU, memory, HD read/write, and network speed are all good. The problem that I see is the transfer of the print file from the print server to the printer. An 80 MB file is transferred from the client to the print server in about 2 minutes, but it then takes about 2 hours for that file to be sent to the printer. I can't figure out why this would just start to happen. The printer is rebooted every evening and is used just for one large print job in the morning. The server has been rebooted with no effect. I changed the spool option to send the entire spool to the server before printing starts, and it had no effect. This printer problem did happen to come about after some changes to the Xen environment: the Xen servers changed from using HBA NIC cards to software iSCSI, and a new switch was put in. I don't think this is related to the problem, since all the speeds on the VMs are better now. The change happened on Saturday and the first print to this printer happened on Monday morning. I'm just putting that out there, but like I said, I don't think it is related; I just don't want to rule it out. At this point I don't have many other options besides the physical layer. I can switch out the network cable that goes to the printer, and I might be able to print the same job to another printer, but I won't be able to test those things until this afternoon. Any other ideas or tests I could do to try to find the reason for the slow speed? I forgot to say that this is only happening when printing to this one printer. ===Update=== I found out that there are a few printers that currently have this issue, not just the one. There are over 30 printers on the server, though, so I know it's not happening to all of them. I printed a large PDF document from the server and it was able to print at the normal speed. If the client machine sends the large print request, it gets to the server fine but is then slow to get from the server to the printer; if it is sent directly from the server, it gets to the printer at the normal speed. The question now is why there is a speed difference when it comes from the client machine, and why it would start now.

    Read the article

  • "log on as a batch job" user rights removed by what GPO?

    - by LarsH
    I am not much of a server administrator, but I get my feet wet when I have to. Right now I'm running some COTS software on a Windows 2008 Server machine. The software installer creates a few user accounts for running its processes, and then gives those users the right to "log on as a batch job". Every so often (e.g. yesterday at 2:52pm and this morning at 7:50am), those rights disappear, and the software then stops working. I can verify that the user rights are gone by using

        secedit /export /cfg e:\temp\uraExp.inf /areas USER_RIGHTS

    and I have a script that does this every 30 seconds and logs the results with a timestamp, so I know when the rights disappear. What I see from the export is that in the "good" state, i.e. after I install the software and it's working correctly, the line for SeBatchLogonRight in the secedit export includes the user accounts created by the software. But every few hours (sometimes more often), those user accounts are removed from that line. The same thing can be seen with the GUI tool Local Security Policy > Security Settings > Local Policies > User Rights Assignment > Log on as a batch job: in the "good" state, that policy includes the needed user accounts, and in the bad state, it does not. Based on the above-mentioned logging script and the timestamps at which the user rights are being removed, I can see clearly that some GPOs are causing the change. The GPO Operational log shows GPOs being processed at exactly the right times, e.g.:

        Starting Registry Extension Processing.
        List of applicable GPOs: (Changes were detected.)
             Local Group Policy

    I have run GPOs on demand using gpupdate /force and was able to verify that this caused the user rights to be removed. We have looked over local group policies till our eyes are crossed, trying to figure out which one might be stripping these user rights to "log on as a batch job". We have not configured any local group policies on this machine, that we know of; so is there a default local group policy that might typically do such a thing? Are there typical domain policies that would do this? I have been working with our IT staff colleagues to troubleshoot the problem, but none of them are really GPO experts... they wear many hats, and they do what they need to do in order to keep most things running. Any suggestions would be greatly appreciated!
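
    For reference, the 30-second logging script mentioned above could look something like this (a hedged C# sketch; the post doesn't say what the real script is written in, and the log path is made up):

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Linq;
        using System.Threading;

        class UraMonitor
        {
            static void Main()
            {
                while (true)
                {
                    // Export the current user rights assignments, as in the post.
                    Process p = Process.Start("secedit",
                        @"/export /cfg e:\temp\uraExp.inf /areas USER_RIGHTS");
                    p.WaitForExit();

                    // Log the SeBatchLogonRight line with a timestamp.
                    string line = File.ReadAllLines(@"e:\temp\uraExp.inf")
                                      .FirstOrDefault(l => l.StartsWith("SeBatchLogonRight"));
                    File.AppendAllText(@"e:\temp\ura.log",
                        DateTime.Now.ToString("s") + "  " + (line ?? "<missing>")
                        + Environment.NewLine);

                    Thread.Sleep(30000);
                }
            }
        }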

    Read the article

  • My server's been hacked EMERGENCY

    - by Grant unwin
    I'm on my way into work at 9.30 p.m. on a Sunday because our server has been compromised somehow, resulting in a DOS attack on our provider. The server's access to the Internet has been shut down, which means over 500-600 of our clients' sites are now down. Now this could be an FTP hack, or some weakness in code somewhere; I'm not sure till I get there. How can I track this down quickly? We're in for a whole lot of litigation if I don't get the server back up ASAP. Any help is appreciated. UPDATE: Thanks to everyone for your help. Luckily I WASN'T the only person responsible for this server, just the nearest. We managed to resolve this problem, although it may not apply to many others in a different situation. I'll detail what we did. We unplugged the server from the net. It was performing (attempting to perform) a denial-of-service attack on another server in Indonesia, and the guilty party was also based there. We first tried to identify where on the server this was coming from; considering we have over 500 sites on the server, we expected to be moonlighting for some time. However, with SSH access still available, we ran a command to find all files edited or created in the time frame when the attacks started. Luckily, the offending file was created over the winter holidays, which meant that not many other files were created on the server at that time. We were then able to identify the offending file, which was inside the uploaded images folder of a ZenCart website. After a short cigarette break, we concluded that, due to the file's location, it must have been uploaded via a file upload facility that was inadequately secured. After some googling, we found that there was a security vulnerability that allowed files to be uploaded within the ZenCart admin panel, via the picture upload for a record company (a section that was never really even used). Posting this form just uploaded any file: it did not check the extension of the file, and didn't even check whether the user was logged in. This meant that any files could be uploaded, including a PHP file for the attack. We secured the vulnerability with ZenCart on the infected site and removed the offending files. The job was done, and I was home by 2 a.m. The moral: always apply security patches for ZenCart, or any other CMS for that matter, because when security updates are released, the whole world is made aware of the vulnerability; always do backups, and back up your backups; and employ or arrange for someone who will be there in times like these, to prevent anyone from having to rely on a panicky post on Server Fault. Happy servering!

    Read the article

  • Program crash on deque from queue

    - by SwedishGit
    My first question asked here, so please excuse me if I fail to include something... I'm working on a homework project, which basically consists of creating a "Jukebox" (importing/exporting albums from txt files, creating and "playing" a playlist, etc.). I've become stuck on one point: when "playing" the playlist, which consists of a self-made Queue, a copy of it is made, from which songs are dequeued and printed out with a time delay. This appears to run fine on the first run through the program, but if the "play" option is chosen again (with the same playlist, created from a different menu option), it crashes before managing to print the first song. It also crashes when creating a new playlist, but then it manages to print some songs before crashing (the number seems to depend on the number of songs in the first/new playlists...). With printouts I've been able to track the crash down to the "item = n->data" call in the deque function... but I can't get my head around why this would crash. Below is the code I think should be relevant... let me know if there are other parts that would help if I include them.

    Edit: The debug error shown on crash is: R6010 - abort() has been called

    The method to play from the playlist:

        void Jukebox::playList()
        {
            if (songList.getNodes() > 0)
            {
                Queue tmpList(songList);
                Song tmpSong;
                while (tmpList.deque(tmpSong))
                {
                    clock_t temp;
                    temp = clock() + 2 * CLOCKS_PER_SEC;
                    while (clock() < temp) {}
                }
            }
            else
                cout << "There are no songs in the playlist!" << endl;
        }

    Queue:

        // Queue.h - project assignment
        // Håkan Sjölin 2014-05-31
        #ifndef queue_h
        #define queue_h
        #include "Song.h"
        using namespace std;

        typedef Song Item;
        class Node;

        class Queue
        {
        private:
            Node *first;
            Node *last;
            int nodes;
        public:
            Queue() : first(nullptr), last(nullptr), nodes(0) {};
            ~Queue();
            void enque(Item item);
            bool deque(Item &item);
            int getNodes() const { return nodes; }
            void empty();
        };
        #endif

        // Queue.cpp - project assignment
        // Håkan Sjölin 2014-05-31
        #include "queue.h"
        using namespace std;

        class Node
        {
        public:
            Node *next;
            Item data;
            Node(Node *n, Item newData) : next(n), data(newData) {}
        };

        // Destructor
        Queue::~Queue()
        {
            while (first != 0)
            {
                Node *tmp = first;
                first = first->next;
                delete tmp;
            }
        }

        // Add data at the end of the queue
        void Queue::enque(Item item)
        {
            Node *pNew = new Node(0, item);
            if (getNodes() < 1)
                first = pNew;
            else
                last->next = pNew;
            last = pNew;
            nodes++;
        }

        // Remove data from the front of the queue
        bool Queue::deque(Item &item)
        {
            if (getNodes() < 1)
                return false;
            //cout << "deque: test2" << endl;
            Node *n = first;
            //cout << "deque: test3" << endl;
            //cout << "item = " << item << endl;
            //cout << "first = " << first << endl;
            //cout << "n->data = " << n->data << endl;
            item = n->data;
            //cout << "deque: test4" << endl;
            first = first->next;
            //delete n;
            nodes--;
            if (getNodes() < 1) // the queue BECAME empty
                last = nullptr;
            return true;
        }

        // Empty the queue
        void Queue::empty()
        {
            while (getNodes() > 0)
            {
                Item item;
                deque(item);
            }
        }

    Song:

        // Song.h - project assignment
        // Håkan Sjölin 2014-05-15
        #ifndef song_h
        #define song_h
        #include "Time.h"
        #include <string>
        #include <iostream>
        using namespace std;

        class Song
        {
        private:
            string title;
            string artist;
            Time length;
        public:
            Song();
            Song(string pTitle, string pArtist, Time pLength);
            // Setters
            void setTitle(string pTitle);
            void setArtist(string pArtist);
            void setLength(Time pLength);
            // Getters
            string getTitle() const { return title; }
            string getArtist() const { return artist; }
            Time getLength() const { return length; }
        };
        ostream &operator<<(ostream &os, const Song &song);
        istream &operator>>(istream &is, Song &song);
        #endif

        // Song.cpp - project assignment
        // Håkan Sjölin 2014-05-15
        #include "Song.h"
        #include "Constants.h"
        #include <iostream>

        // Default constructor
        Song::Song()
        {
        }

        // Initializing constructor
        Song::Song(string pTitle, string pArtist, Time pLength)
        {
            title = pTitle;
            artist = pArtist;
            length = pLength;
        }

        // Set the title
        void Song::setTitle(string pTitle)
        {
            title = pTitle;
        }

        // Set the artist
        void Song::setArtist(string pArtist)
        {
            artist = pArtist;
        }

        // Set the length
        void Song::setLength(Time pLength)
        {
            length = pLength;
        }

        // Overloaded output operator
        ostream &operator<<(ostream &os, const Song &song)
        {
            os << song.getTitle() << DELIM << song.getArtist() << DELIM << song.getLength();
            return os;
        }

        // Overloaded input operator
        istream &operator>>(istream &is, Song &song)
        {
            string tmpString;
            Time tmpLength;
            getline(is, tmpString, DELIM);
            song.setTitle(tmpString);
            getline(is, tmpString, DELIM);
            song.setArtist(tmpString);
            is >> tmpLength;
            is.get();
            song.setLength(tmpLength);
            return is;
        }

    Album:

        // Album.h - project assignment
        // Håkan Sjölin 2014-05-17
        #ifndef album_h
        #define album_h
        #include "Song.h"
        #include <string>
        #include <vector>
        #include <iostream>
        using namespace std;

        class Album
        {
        private:
            string name;
            vector<Song> songs;
        public:
            Album();
            Album(string pNameTitle, vector<Song> pSongs);
            // Setters
            void setName(string pName);
            // Getters
            string getName() const { return name; }
            vector<Song> getSongs() const { return songs; }
            int getNumberOfSongs() const { return songs.size(); }
            Time getTotalTime() const;
            void addSong(Song pSong);
            bool operator<(const Album &album) const;
        };
        ostream &operator<<(ostream &os, const Album &album);
        istream &operator>>(istream &is, Album &album);
        #endif

        // Album.cpp - project assignment
        // Håkan Sjölin 2014-05-17
        #include "Album.h"
        #include "Constants.h"
        #include <iostream>
        #include <string>

        // Default constructor
        Album::Album()
        {
        }

        // Initializing constructor
        Album::Album(string pName, vector<Song> pSongs)
        {
            name = pName;
            songs = pSongs;
        }

        // Set the name
        void Album::setName(string pName)
        {
            name = pName;
        }

        // Add a song
        void Album::addSong(Song pSong)
        {
            songs.push_back(pSong);
        }

        // Return the total playing time
        Time Album::getTotalTime() const
        {
            Time tTime(0, 0, 0);
            for (Song s : songs)
            {
                tTime = tTime + s.getLength();
            }
            return tTime;
        }

        // Less than
        bool Album::operator<(const Album &album) const
        {
            return getTotalTime() < album.getTotalTime();
        }

        // Overloaded output operator
        ostream &operator<<(ostream &os, const Album &album)
        {
            os << album.getName() << endl;
            os << album.getNumberOfSongs() << endl;
            for (size_t i = 0; i < album.getSongs().size(); i++)
                os << album.getSongs().at(i) << endl;
            return os;
        }

        // Overloaded input operator
        istream &operator>>(istream &is, Album &album)
        {
            string tmpString;
            int tmpNumberOfSongs;
            Song tmpSong;
            getline(is, tmpString);
            album.setName(tmpString);
            is >> tmpNumberOfSongs;
            is.get();
            for (int i = 0; i < tmpNumberOfSongs; i++)
            {
                is >> tmpSong;
                album.addSong(tmpSong);
            }
            return is;
        }

    Time:

        // Time.h - project assignment
        // Håkan Sjölin 2014-05-15
        #ifndef time_h
        #define time_h
        #include <iostream>
        using namespace std;

        class Time
        {
        private:
            int hours;
            int minutes;
            int seconds;
        public:
            Time();
            Time(int pHour, int pMinute, int pSecond);
            // Setters
            void setHour(int pHour);
            void setMinute(int pMinute);
            void setSecond(int pSecond);
            // Getters
            int getHour() const { return hours; }
            int getMinute() const { return minutes; }
            int getSecond() const { return seconds; }
            Time operator+(const Time &time) const;
            bool operator==(const Time &time) const;
            bool operator<(const Time &time) const;
        };
        ostream &operator<<(ostream &os, const Time &time);
        istream &operator>>(istream &is, Time &Time);
        #endif

        // Time.cpp - project assignment
        // Håkan Sjölin 2014-05-15
        #include "Time.h"
        #include <iostream>

        // Default constructor
        Time::Time()
        {
        }

        // Initializing constructor
        Time::Time(int pHour, int pMinute, int pSecond)
        {
            setHour(pHour);
            setMinute(pMinute);
            setSecond(pSecond);
        }

        // Set the hour
        void Time::setHour(int pHour)
        {
            if (pHour > -1)
                hours = pHour;
            else
                hours = 0;
        }

        // Set the minute
        void Time::setMinute(int pMinute)
        {
            if (pMinute < 60 && pMinute > -1)
                minutes = pMinute;
            else
                minutes = 0;
        }

        // Set the second
        void Time::setSecond(int pSecond)
        {
            if (pSecond < 60 && pSecond > -1)
                seconds = pSecond;
            else
                seconds = 0;
        }

        // Overloaded output operator
        ostream &operator<<(ostream &os, const Time &time)
        {
            os << time.getHour()*3600 + time.getMinute()*60 + time.getSecond();
            return os;
        }

        // Overloaded input operator
        istream &operator>>(istream &is, Time &time)
        {
            int tmp;
            is >> tmp;
            time.setSecond(tmp % 60);
            time.setMinute((tmp / 60) % 60);
            time.setHour(tmp / 3600);
            return is;
        }

        // Equality
        bool Time::operator==(const Time &time) const
        {
            return hours == time.getHour() && minutes == time.getMinute()
                && seconds == time.getSecond();
        }

        // Less than
        bool Time::operator<(const Time &time) const
        {
            if (hours == time.getHour())
            {
                if (minutes == time.getMinute())
                    return seconds < time.getSecond();
                else
                    return minutes < time.getMinute();
            }
            else
                return hours < time.getHour();
        }

        // Addition
        Time Time::operator+(const Time &time) const
        {
            return Time(hours + time.getHour()
                        + (minutes + time.getMinute() + (seconds + time.getSecond()) / 60) / 60,
                        (minutes + time.getMinute() + (seconds + time.getSecond()) / 60) % 60,
                        (seconds + time.getSecond()) % 60);
        }

    Thanks in advance for any help!

    Edit 2: I didn't think of including the more detailed crash info (it isn't shown in the crash pop-up, so to say). Anyway, here it is:

    Output:

        'Jukebox.exe' (Win32): Loaded 'C:\Users\Håkan\Documents\Studier - IT\Objektbaserad programmering i C++\Inlämningsuppgifter\Projekt\Jukebox\Debug\Jukebox.exe'. Symbols loaded.
        'Jukebox.exe' (Win32): Loaded 'C:\Windows\SysWOW64\ntdll.dll'. Cannot find or open the PDB file.
        'Jukebox.exe' (Win32): Loaded 'C:\Windows\SysWOW64\kernel32.dll'. Cannot find or open the PDB file.
        'Jukebox.exe' (Win32): Loaded 'C:\Windows\SysWOW64\KernelBase.dll'. Cannot find or open the PDB file.
        'Jukebox.exe' (Win32): Loaded 'C:\Windows\SysWOW64\msvcp110d.dll'. Symbols loaded.
        'Jukebox.exe' (Win32): Loaded 'C:\Windows\SysWOW64\msvcr110d.dll'. Symbols loaded.
        The thread 0xe50 has exited with code 0 (0x0).
        Unhandled exception at 0x0083630C in Jukebox.exe: 0xC0000005: Access violation reading location 0x0000003C.

    Call stack:

        > Jukebox.exe!Song::getLength() Line 27  C++
          Jukebox.exe!operator<<(std::basic_ostream<char,std::char_traits<char> > & os, const Song & song) Line 59  C++
          Jukebox.exe!Queue::deque(Song & item) Line 55  C++
          Jukebox.exe!Jukebox::playList() Line 493  C++
          Jukebox.exe!Jukebox::play() Line 385  C++
          Jukebox.exe!Jukebox::run() Line 536  C++
          Jukebox.exe!main() Line 547  C++
          Jukebox.exe!__tmainCRTStartup() Line 536  C
          Jukebox.exe!mainCRTStartup() Line 377  C
          kernel32.dll!754d86e3() Unknown
          [Frames below may be incorrect and/or missing, no symbols loaded for kernel32.dll]
          ntdll.dll!7748bf39() Unknown
          ntdll.dll!7748bf0c() Unknown
    Read the article

  • Bulk inserting: best way to go about it? + Helping me understand fully what I found so far

    - by chobo2
    Hi So I saw this post here and read it and it seems like bulk copy might be the way to go. http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c I still have some questions and want to know how things actually work. So I found 2 tutorials. http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx First way uses 2 ado.net 2.0 features. BulkInsert and BulkCopy. the second one uses linq to sql and OpenXML. This sort of appeals to me as I am using linq to sql already and prefer it over ado.net. However as one person pointed out in the posts what he just going around the issue at the cost of performance( nothing wrong with that in my opinion) First I will talk about the 2 ways in the first tutorial I am using VS2010 Express, .net 4.0, MVC 2.0, SQl Server 2005 Is ado.net 2.0 the most current version? Based on the technology I am using, is there some updates to what I am going to show that would improve it somehow? Is there any thing that these tutorial left out that I should know about? BulkInsert I am using this table for all the examples. CREATE TABLE [dbo].[TBL_TEST_TEST] ( ID INT IDENTITY(1,1) PRIMARY KEY, [NAME] [varchar](50) ) SP Code USE [Test] GO /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50) ) AS BEGIN INSERT INTO TBL_TEST_TEST VALUES (@Name); END C# Code /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. adpt.UpdateBatchSize = 1000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } So first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? Like I am sending 500,000 records so I did a Batch size of 500,000. Next why does it crash when I do this? If I set it to 1000 for batch size it works just fine. System.Data.SqlClient.SqlException was unhandled Message="A transport-level error has occurred when sending the request to the server. 
(provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=20 LineNumber=0 Number=233 Server="" State=0 StackTrace: at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.Update(DataTable dataTable) at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124 at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16 InnerException: The time it took to insert 500,000 records with an insert batch size of 1000 was "2 mins and 54 seconds". Of course this is no official time; I sat there with a stop watch (I am sure there are better ways but I was too lazy to look up what they were). So I find that kinda slow compared to all my other ones (except the linq to sql insert one) and I am not really sure why. Next I looked at bulkcopy. /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTable to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } This one seemed to go really fast and did not even need an SP (can you use an SP with bulk copy? If you can, would it be better?). BatchCopy had no problem with a 500,000 batch size. So again, why make it smaller than the number of records you want to send? I found that with BatchCopy and a 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it only took 8 seconds. So much faster than the bulkinsert one above. Now I tried the other tutorial. USE [Test] GO /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText) AS DECLARE @hDoc int exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData INSERT INTO TBL_TEST_TEST(NAME) SELECT XMLProdTable.NAME FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2) WITH ( ID Int, NAME varchar(100) ) XMLProdTable EXEC sp_xml_removedocument @hDoc C# code. /// <summary> /// This is using linq to sql to make the table objects. 
/// It is then serialized to an xml document and sent to a stored procedure /// that then does a bulk insert (I think with OpenXML). /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } So I like this because I get to use objects, even though it is kinda redundant. I don't get how the SP works, though. Like, I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood, but I do not even know how to take this example SP and change it to fit my tables since, like I said, I don't know what is going on. I also don't know what would happen if the object has more tables in it. Like, say I have a ProductName table that has a relationship to a Product table or something like that. In linq to sql you could get the product name object and make changes to the Product table through that same object. So I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good: for 500,000 records it took 52 seconds. The last way, of course, was just using linq to do it all, and it was pretty bad. /// <summary> /// This is using linq to sql to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } I only did 50,000 records and that took over a minute to do. So I really narrowed it down to the linq to sql bulk insert way or bulk copy. I am just not sure how to do it when you have relationships, for either way. I am not sure how they both stand up when doing updates instead of inserts, as I have not gotten around to trying it yet. I don't think I will ever need to insert/update more than 50,000 records at one time, but at the same time I know I will have to do validation on records before inserting, so that will slow it down, and that sort of makes linq to sql nicer as you've got objects, especially if you're first parsing data from an xml file before you insert into the database. Full C# code using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Serialization; using System.Data; using System.Data.SqlClient; namespace TestIQueryable { class Program { private static string connectionString = ""; static void Main(string[] args) { BatchInsert(); Console.WriteLine("done"); } /// <summary> /// This is using linq to sql to insert lots of records. /// This way is slow as it uses no mass insert. 
/// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } /// <summary> /// This is using linq to sql to make the table objects. /// It is then serialized to an xml document and sent to a stored procedure /// that then does a bulk insert (I think with OpenXML). /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTable to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower than the "BatchBulkCopy" way, and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. 
adpt.UpdateBatchSize = 500000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } private static DataTable GetDataTable() { // You first need a DataTable that has all the insert values in it DataTable dtInsertRows = new DataTable(); dtInsertRows.Columns.Add("NAME"); for (int i = 0; i < 500000; i++) { DataRow drInsertRow = dtInsertRows.NewRow(); string name = "Name : " + i; drInsertRow["NAME"] = name; dtInsertRows.Rows.Add(drInsertRow); } return dtInsertRows; } static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e) { Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString()); } } }
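Since the timings above came from a hand-held stop watch, here is a hedged sketch of swapping it for System.Diagnostics.Stopwatch in the Main above (the method names refer to the code already shown):

static void Main(string[] args)
{
    // Time one strategy per run; swap in BatchBulkCopy(),
    // LinqInsertXMLBatch() or LinqInsertAll() as needed.
    var sw = System.Diagnostics.Stopwatch.StartNew();
    BatchInsert();
    sw.Stop();
    Console.WriteLine("done in " + sw.Elapsed);
}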

    Read the article

  • SQL SERVER – Signal Wait Time Introduction with Simple Example – Wait Type – Day 2 of 28

    - by pinaldave
    In this post, let’s delve a bit more in depth regarding wait stats. The very first question: when do the wait stats occur? Here is the simple answer. When SQL Server is executing any task, and if for any reason it has to wait for resources to execute the task, this wait is recorded by SQL Server along with the reason for the delay. Later on we can analyze these wait stats to understand the reason the task was delayed and maybe we can eliminate the wait for SQL Server. It is not always possible to remove a wait type 100%, but there are a few suggestions that can help. Before we continue learning about wait types and wait stats, we need to understand three important milestones of the query life-cycle. Running – a query which is being executed on a CPU is called a running query. This query is responsible for CPU time. Runnable – a query which is ready to execute and waiting for its turn to run is called a runnable query. This query is responsible for Signal Wait time. (In other words, the query is ready to run but the CPU is servicing another query). Suspended – a query which is waiting, due to any reason (to know the reason, we are learning wait stats), to be converted to runnable is a suspended query. This query is responsible for wait time. (In other words, this is the time we are trying to reduce). In simple words, query execution time is a summation of the query Executing CPU Time (Running) + Query Wait Time (Suspended) + Query Signal Wait Time (Runnable). Again, a query may go through all these states multiple times. Let us try to understand the whole thing with a simple analogy of a taxi and a passenger. Two friends, Tom and Danny, go to the mall together. When they leave the mall, they decide to take a taxi. Tom and Danny both stand in the line waiting for their turn to get into the taxi. This is the Signal Wait Time, as they are ready to get into the taxi but the taxis are currently serving other customers and they have to wait for their turn. In other words, they are in a runnable state. Now when it is their turn to get into the taxi, the taxi driver informs them he does not take credit cards and only cash is accepted. Neither Tom nor Danny has enough cash, so they both cannot get into the vehicle. Tom waits outside in the queue and Danny goes to an ATM to fetch the cash. During this time the taxi cannot wait; it has to let other passengers get in. As Tom and Danny are both outside in the queue, this is the Query Wait Time and they are in the suspended state. They cannot do anything till they get the cash. Once Danny gets the cash, they are both standing in the line again, creating one more Signal Wait Time. This time when their turn comes they can pay the taxi driver in cash and reach their destination. The time taken for the taxi to get from the mall to the destination is running time (CPU time) and the taxi is running. I hope this analogy makes the wait stats a bit clearer. You can check the signal wait stats using the following query from Glenn Berry. -- Signal Waits for instance SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM (wait_time_ms) AS NUMERIC(20,2)) AS [%signal (cpu) waits], CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms) / SUM (wait_time_ms) AS NUMERIC(20,2)) AS [%resource waits] FROM sys.dm_os_wait_stats OPTION (RECOMPILE); High signal wait stats are not good for the system. A very high value indicates CPU pressure. In my experience, when systems are running smooth and without any glitch, the signal wait stat is lower than 20%. 
Again, this number can be debated (it is from my experience and is not documented anywhere). In other words, lower is better and higher is not good for the system. In future articles we will discuss in detail the various wait types and wait stats and their resolution. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
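As a companion to the instance-level query above, the same split can be viewed per wait type using only columns that sys.dm_os_wait_stats exposes (wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms); a hedged sketch:

-- Per-wait-type split of resource waits vs. signal (CPU) waits,
-- ordered by total wait time; same DMV as the query above.
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms - signal_wait_time_ms AS resource_wait_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC
OPTION (RECOMPILE);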

    Read the article

  • SQLAuthority News – Learning Trip – Traveling to Learn SQL Server

    - by pinaldave
    I am currently traveling to Delhi to learn SQL Server in person from my friend. You can read more details about why I am learning SQL Server. I have signed up for the course End to End SQL Server Business Intelligence at Koenig Solutions. Yesterday I blogged about my registration experience and today I am going to write about my experience once I arrived at Delhi. From Ahmedabad to Delhi I stay with my wife and daughter in Bangalore (the IT hub of India); my hometown is Ahmedabad. My parents stay in a city near Ahmedabad. I decided to spend a few days with my folks before signing up for 3 days of solid learning. I had selected an early morning flight to Delhi and landed at 8:30 AM. As soon as I checked my email on my mobile I was really glad to see that I had received the details of my pick-up vehicle from Koenig. I walked out of the airport and noticed a driver waiting with a placard with my name and photo on it. He was in a Koenig uniform, so there was no chance of a mistake. Within minutes of landing in Delhi I was in my transport heading to the Koenig Training Center. After a quick introduction the driver handed me a bag (an eco-friendly bag, to be precise). The bag contained the following items: My registration form All necessary documents in print which I had received earlier A printed book for the next day's course INR 1000 (What?) I was glad to receive the bag but I was very confused by the Rs 1000. I decided to figure this out once I reached the training center. Arriving at Koenig Inn Deluxe Koenig registration fees include all the stay and meals. I had opted for Koenig Inn Deluxe as my stay as it was recommended by my friend and was also the right economical choice for me. When I reached my accommodation, they were well aware of my arrival and I was immediately led to my spacious room. The room is well equipped with all the amenities (hot water, air conditioning, coffee table, snacks, and free internet) and the staff is very friendly. I immediately got ready as I had to go to the Koenig Training Center to meet the Center Head for a quick introduction. Koenig Inn Deluxe Koenig Training Center The training center is within five minutes of the accommodation. I was led to the center head right away and had a very meaningful conversation with Ms Hema regarding my learning goals. She gave me a quick tour of the training center. I was amazed by the number of lab rooms they have in the center. The labs are spacious and give users the much-needed hands-on experience. I was led to the lab where I was supposed to have my class the very next day, and I was provided my trainer's profile. Mystery of Rs 1000 Well, after all this I had still not forgotten why I was given Rs 1000 when I arrived at the airport. When I asked about it, I was told that many students come from abroad and may not have Indian currency when they land at the airport. This is for their immediate use till they arrive at the training center. Later on they can get their currency converted to local currency at the Koenig Travel Desk. My curiosity was satisfied, but I had not expected this answer. I am amazed at the attention to detail. Koenig Travel Desk When I heard about the Koenig Travel Desk, I remembered that I have a few friends in Delhi and Gurgaon. I had completed all of the formalities, so I had the rest of the day on my hands. I asked the travel desk if they could arrange a day cab for me so I could visit my friends in Gurgaon. 
Within 10 minutes I was on my way to Gurgaon. Telerik India Office Visit What did I do in Gurgaon? I met my friends Abhishek Kant, Dhananjay Kumar and Amit Chowdhary. I visited the Telerik India office and we had an excellent conversation on various aspects of technology and community. The Telerik India office is very spacious and Abhishek Kant (Telerik India Country Manager) gave us a quick tour of the office. We had an excellent lunch and dinner. One thing is for sure – the day was well spent. Pinal Dave, Dhananjay Kumar and Abhishek Kant Later that evening I returned to my accommodation and decided to read up on a few of the topics which I was going to learn the next day. In tomorrow's blog post I will discuss my learning experience. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #005

    - by pinaldave
    Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane. 2006 SQL SERVER – Cursor to Kill All Process in Database I indeed wrote this cursor, and when I look back, I wonder how naive I was to write this. The reason for writing this cursor was to free up my database from any existing connections so I could do a database operation. This worked fine, but there can be a potentially big issue if an important transaction is killed by this process. There is another way to achieve the same thing, where we can use ALTER syntax to take the database into single user mode. Read more about that over here and here. 2007 Rules of Third Normal Form and Normalization Advantage – 3NF The rules of 3NF are mentioned here: Make a separate table for each set of related attributes, and give each table a primary key. If an attribute depends on only part of a multi-valued key, remove it to a separate table. If attributes do not contribute to a description of the key, remove them to a separate table. Correct Syntax for Stored Procedure SP Sometimes a simple question is the most important question. I often see incorrectly written stored procedures in industry. Some write code after the outermost BEGIN…END and some write code after the GO statement. In this brief blog post, I have attempted to explain the same. 2008 Switch Between Result Pan and Query Pan – SQL Shortcut Many times when I am writing a query I have to scroll the result displayed in the result set. Most developers use the mouse to switch between the Query Pane and the Result Pane. There are a few developers who are crazy about keyboard shortcuts. F6 is the key which can be used to switch between the query pane and the tabs of the result pane. Interesting Observation – Use of Index and Execution Plan Query Optimization is a complex game and it has its own rules. From the example in the article we discovered that the Query Optimizer does not use the clustered index to retrieve data; sometimes a non-clustered index provides optimal performance for retrieving the Primary Key. When all the rows and columns are selected, the Primary Key should be used to select data as it provides optimal performance. 2009 Interesting Observation – TOP 100 PERCENT and ORDER BY If you pull up any application or system where more than 100 SQL Server Views are created, I am very confident that in one or two places you will notice the scenario wherein, in a View, the ORDER BY clause is used with TOP 100 PERCENT. A SQL Server 2008 VIEW with an ORDER BY clause does not throw an error; moreover, it does not acknowledge the presence of it either. In this article we have taken three perfect examples and demonstrated which clause we should use when. Comma Separated Values (CSV) from Table Column A very common question: how to create comma separated values from a table in the database? The answer is also very common if we use XML. Check out this article for quick learning on the same subject. Azure Start Guide – Step by Step Installation Guide Though the Azure portal has changed quite a bit since I wrote this article, the concepts used in it are not old. They are still valid and many of the functions still work as mentioned in the article. I believe this one article will put you on the track to use Azure! 
Size of Index Table for Each Index – Solution Earlier I posted a small question on this blog and requested help from readers to participate and provide a solution. The puzzle was to write a query that will return the size of each index that is on any particular table. We need a query that will return an additional column in the above listed query and it should contain the size of the index. This article presents two of the best solutions from the puzzle. 2010 Well, this week in 2010 was the week of puzzles as I posted three interesting puzzles. Till today I am noticing pretty good interest in the puzzles. They are tricky but for sure bring great value if you have been a database developer for a long time. I suggest you go over these puzzles and their answers. Did you really know all of the answers? I am confident that reading the following three blog posts will for sure help you enhance your experience with T-SQL. SQL SERVER – Challenge – Puzzle – Usage of FAST Hint SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal SQL SERVER – Challenge – Puzzle – Why does RIGHT JOIN Exists 2011 DMV sys.dm_os_sys_info Column Name Changed in SQL Server 2012 Have you ever faced a situation where something does not work? When you try to fix it, you enjoy fixing it and start to appreciate the breaking changes. Well, this was exactly how I felt yesterday. Before I begin my story, I want to candidly state that I do not encourage anybody to use * in the SELECT statement. Now the disclaimer is over – I suggest you read the original story – you will love it! Get Directory Structure using Extended Stored Procedure xp_dirtree Here is a question for you: why would you do something in SQL Server when you can do the same task in the command prompt much more easily? Well, the answer is that sometimes there are real use cases where we have to do such things. This is a similar example where I have demonstrated how, in SQL Server 2012, we can use an extended stored procedure to retrieve the directory structure. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
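Two of the items above lend themselves to short snippets. For the 2006 cursor item, the safer ALTER-based alternative it mentions looks roughly like this (a hedged sketch; the database name is only an example):

-- Take the database to single user mode, rolling back open
-- transactions, instead of killing every session in a cursor.
ALTER DATABASE AdventureWorks SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- ... perform the maintenance operation ...
ALTER DATABASE AdventureWorks SET MULTI_USER;

And for the 2009 comma-separated-values item, the XML-based answer is typically the FOR XML PATH pattern; a sketch, assuming a hypothetical table TBL_SAMPLE with a NAME column:

-- Build 'a,b,c' from a column; STUFF strips the leading comma.
SELECT STUFF(
    (SELECT ',' + NAME FROM TBL_SAMPLE FOR XML PATH('')),
    1, 1, '') AS CsvList;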

    Read the article

  • Big Data – What is Big Data – 3 Vs of Big Data – Volume, Velocity and Variety – Day 2 of 21

    - by Pinal Dave
    Data is forever. Think about it – it is indeed true. Are you using any application as it is which was built 10 years ago? Are you using any piece of hardware which was built 10 years ago? The answer is most certainly No. However, if I ask you – are you using any data which was captured 50 years ago, the answer is most certainly Yes. For example, look at the history of our nation. I am from India and we have documented history which goes back over thousands of years. Well, just look at our birthday data – at least we are using it till today. Data never gets old and it is going to stay there forever. Applications which interpret and analyze data have changed, but the data has remained in its purest format in most cases. As organizations have grown, the data associated with them has also grown exponentially, and today there is a lot of complexity to their data. Most big organizations have data in multiple applications and in different formats. The data is also spread out so much that it is hard to categorize it with a single algorithm or logic. The mobile revolution which we are experiencing right now has completely changed how we capture data and build intelligent systems. Big organizations are indeed facing challenges to keep all their data on a platform which gives them a single consistent view of their data. This unique challenge, to make sense of all the data coming in from different sources and derive useful, actionable information out of it, is the revolution the Big Data world is facing. Defining Big Data The 3Vs that define Big Data are Variety, Velocity and Volume. Volume We currently see exponential growth in data storage, as data is now more than text data. We can find data in the format of videos, music and large images on our social media channels. It is very common for enterprises to have terabytes and petabytes of storage. As the database grows, the applications and architecture built to support the data need to be reevaluated quite often. Sometimes the same data is re-evaluated from multiple angles, and even though the original data is the same, the newfound intelligence creates an explosion of the data. This big volume indeed represents Big Data. Velocity The data growth and social media explosion have changed how we look at data. There was a time when we used to believe that data of yesterday is recent. As a matter of fact, newspapers still follow that logic. However, news channels and radio have changed how fast we receive news. Today, people rely on social media to update them with the latest happenings. On social media, sometimes messages that are only a few seconds old (a tweet, a status update, etc.) no longer interest users. They often discard old messages and pay attention to recent updates. The data movement is now almost real time and the update window has reduced to fractions of a second. This high velocity data represents Big Data. Variety Data can be stored in multiple formats. For example: database, excel, csv, access or, for that matter, a simple text file. Sometimes the data is not even in the traditional format we assume; it may be in the form of video, SMS, pdf or something we might not have thought about. It is the need of the organization to arrange it and make it meaningful. It would be easy to do so if all the data were in the same format; however, that is not the case most of the time. The real world has data in many different formats and that is the challenge we need to overcome with Big Data. 
This variety of data represents Big Data. Big Data in Simple Words Big Data is not just about lots of data; it is actually a concept providing an opportunity to find new insight into your existing data, as well as guidelines to capture and analyze your future data. It makes any business more agile and robust so it can adapt and overcome business challenges. Tomorrow In tomorrow’s blog post we will discuss the Evolution of Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

< Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >