Search Results

Search found 87045 results on 3482 pages for 'nx server'.

  • SQLAuthority News – MSDN Flash Mentions – TechNet Flash Mention – Top Community Contributors (Annual)

    - by pinaldave
    While going through my email to reach the famous Inbox(0), I found the TechNet Flash and MSDN Flash emails. I had kept them because those editions had mentioned me, and I quickly took screenshots of them. I am posting them here so I can refer back to them later; it is always a good idea to store important information for a walk down memory lane. As a recent update, Microsoft has named me one of the Top Community Contributors (Annual) winners. I could not have done this without your valuable contribution, and I want to dedicate the award to all of you; without your support I would not have been given this prestigious award. I also had the complete support of Jacob Sebastian (SQL Server MVP) for all of the community-related activity. Here are a few images. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • SQLAuthority News – TechED India 2012 – Bangalore – March 21-23, 2012

    - by pinaldave
    TechEd is one event that every developer and IT professional looks forward to attending. It is an opportunity of a lifetime, and no matter how many times one gets the chance to engage with it, it is never enough. I still remember every single moment of every TechEd I have attended so far. This year TechEd India 2012 will be held in Bangalore between March 21 and 23. There will be three days of lots of learning and fun. If you are a data professional, you will find yourself very fortunate, as every single day will have a data track for a different audience: Day 1 for Developers, Day 2 for Architects, and Day 3 for Database Administrators. Every day we will have plenty of learning from the industry's leading experts. How many of you know that the first TechEd was held in 1993 in Orlando, FL? There is plenty of similar interesting information on the Wiki page for TechEd. I will be presenting on my favorite subject of performance tuning. Just like every other time, this session will be unique and different; I will bring a lesser-known but very important aspect of performance tuning to light. Besides SQL Server, we will be covering lots of other technologies such as Windows 8, Windows Phone, Windows Azure, Visual Studio, System Center, Security, Private Cloud, etc. The biggest attractions of TechEd are the Keynote and the Demo Extravaganza; one cannot miss either of them when present at TechEd India. If you are attending TechEd India, I am looking forward to meeting you in person. It is always pleasant to meet the community face to face, and I promise to remember your name. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • Set umask, set permissions, and set ACL, but SAMBA isn't using those?

    - by Kris Anderson
    I'm running Ubuntu Server 12.04. I have a folder called Music and I want the default folder permissions to be 775 and default files to be 664. I set the permissions on the Music folder to 775 and configured the ACL to use these defaults as well:

        file: Music
        owner: kris
        group: kris
        flags: ss-
        user::rwx
        group::rwx
        other::r-x
        default:user::rwx
        default:group::rwx
        default:other::r-x

    I also changed the default umask for my user account, kris, to 002 in .profile. Shouldn't any new file/folder now use those permissions when written to the Samba share? ACL should work with Samba from what I can gather. Currently, if I write to that folder using my Mac, folders are getting 755 and files 644. Another app on my Mac, GoodSync, which is able to sync a local directory to a network Samba share, produces even worse permissions: files are written as 700. So it looks like Samba is allowing the host/program to determine the folder/file permissions. What changes do I need to make to force the permissions I want, regardless of what the host tries to write on the server?
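
    Samba applies its own share-level masks on top of whatever the client requests, so the usual way to force this is with the mask and force-mode parameters in smb.conf. A sketch against a hypothetical [music] share definition (the share name and path are assumptions; the parameters themselves are standard smb.conf options):

        [music]
            path = /srv/Music             ; assumed path
            read only = no
            create mask = 0664            ; maximum bits allowed on new files
            directory mask = 0775         ; maximum bits allowed on new directories
            force create mode = 0664      ; bits always set on new files
            force directory mode = 0775   ; bits always set on new directories
            inherit acls = yes

    After editing smb.conf, reload Samba (for example with sudo service smbd restart on 12.04) and test a fresh write from the Mac.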

    Read the article

  • Phpmyadmin Not Working

    - by glenbl54
    I recently installed phpMyAdmin onto Ubuntu Server 10.04 using sudo apt-get install phpmyadmin. The installation went fine and everything was working, including phpMyAdmin. I then restarted the server; now apache2 starts up, but when I navigate to http://192.168.1.72/phpmyadmin/ I get a 403 error. I have included the /etc/phpmyadmin/apache.conf file in /etc/apache2/apache2.conf. /etc/phpmyadmin/apache.conf:

        # phpMyAdmin default Apache configuration
        Alias /phpmyadmin /usr/share/phpmyadmin

        <Directory /usr/share/phpmyadmin>
            Options FollowSymLinks
            DirectoryIndex index.php
            <IfModule mod_php5.c>
                AddType application/x-httpd-php .php
                php_flag magic_quotes_gpc Off
                php_flag track_vars On
                php_flag register_globals Off
                php_value include_path .
            </IfModule>
        </Directory>

        # Authorize for setup
        <Directory /usr/share/phpmyadmin/setup>
            <IfModule mod_authn_file.c>
                AuthType Basic
                AuthName "phpMyAdmin Setup"
                AuthUserFile /etc/phpmyadmin/htpasswd.setup
            </IfModule>
            Require valid-user
        </Directory>

        # Disallow web access to directories that don't need it
        <Directory /usr/share/phpmyadmin/libraries>
            Order Deny,Allow
            Deny from All
        </Directory>
        <Directory /usr/share/phpmyadmin/setup/lib>
            Order Deny,Allow
            Deny from All
        </Directory>

    The only change made since phpMyAdmin was installed is that TimeTrex was installed. Is there any way to manually start phpMyAdmin, or should it already be working once Apache has started?
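
    There is nothing to start separately; phpMyAdmin is just PHP served by Apache, so if Apache is up, the usual suspects are the Include line and any rules added by another package (whether TimeTrex shipped its own Apache configuration is an assumption worth checking). A minimal sketch of the packaged approach on 10.04, assuming the default file locations:

        # Let Apache pick the config up from conf.d instead of editing apache2.conf,
        # if the packaged symlink is missing
        sudo ln -s /etc/phpmyadmin/apache.conf /etc/apache2/conf.d/phpmyadmin.conf
        sudo /etc/init.d/apache2 reload

        # Then check the error log for the exact reason behind the 403
        tail -n 50 /var/log/apache2/error.log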

    Read the article

  • Log shipping and shrinking transaction logs

    - by DavidWimbush
    I just solved a problem that had me worried for a bit. I'm log shipping from three primary servers to a single secondary server, and the transaction log disk on the secondary server was getting very full. I established that several primary databases had unused space that resulted from big, one-off updates, so I could shrink their logs. But would this action be log shipped and applied to the secondary database too? I thought probably not. And, more importantly, would it break log shipping? My secondary databases are in a Standby / Read Only state, so I didn't think I could shrink their logs. I RTFMd, Googled, and asked on a Q&A site (not the evil one) but was none the wiser. So I was facing a monumental round of shrink, full backup, full secondary restore, and restarting log shipping (which would leave us without a disaster recovery facility for the duration). Then I thought it might be worthwhile to take a non-essential database and just make absolutely sure a log shrink on the primary wouldn't ship over and occur on the secondary as well. So I did a DBCC SHRINKFILE and kept an eye on the secondary. Bingo! Log shipping didn't blink and the log on the secondary shrank too. I just love it when something turns out even better than I dared to hope. (And I guess this highlights something I need to learn about what activities are logged.)
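
    For the record, the shrink itself is a one-liner on each primary. A sketch with hypothetical database and log file names (the size target is in MB):

        USE MyPrimaryDb;   -- hypothetical database
        GO
        -- The logical log file name comes from sys.database_files
        SELECT name, type_desc FROM sys.database_files;
        -- Shrink the log file back to 1 GB
        DBCC SHRINKFILE (N'MyPrimaryDb_log', 1024);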

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 23 (sys.dm_db_index_usage_stats)

    - by Tamarick Hill
    The sys.dm_db_index_usage_stats Dynamic Management View is used to return usage information about the various indexes on your SQL Server instance. Let's have a look at this DMV against our AdventureWorks2012 database so we can examine the information returned.

        SELECT * FROM sys.dm_db_index_usage_stats
        WHERE database_id = db_id('AdventureWorks2012')

    The first three columns in the result set represent the database_id, object_id, and index_id of a given row. You can join these columns back to other system tables to extract the actual database, object, and index names. The next four columns are probably the most beneficial columns within this DMV. First, the user_seeks column represents the number of times that a user query caused a seek operation against a particular index. The user_scans column represents how many times a user query caused a scan operation on a particular index. The user_lookups column represents how many times an index was used to perform a lookup operation. The user_updates column refers to how many times an index had to be updated due to a write operation that affected a particular index. The last_user_seek, last_user_scan, last_user_lookup, and last_user_update columns provide you with DATETIME information about when the last user seek, scan, lookup, or update operation was performed. The remaining columns in the result set are the same as the ones we previously discussed, except that instead of the various operations being generated from user requests, they are generated from system background requests.

    This is an extremely useful DMV and one of my favorites when it comes to index maintenance. As we all know, indexes are extremely beneficial for improving the performance of your read operations. But indexes have a downside as well: they slow down the performance of your write operations, and they also require additional resources for storage. For this reason, in my opinion, it is important to regularly analyze the indexes on your system to make sure they are being used efficiently. My AdventureWorks2012 database is only used for demonstrating or testing things, so I don't have a lot of meaningful information here, but on a production system, if you see an index that never gets any seeks, scans, or lookups but constantly gets a ton of updates, it is more than likely a good candidate to consider removing. You would not be getting much benefit from the index, yet it incurs a cost on your system due to constantly having to be updated for your write operations, not to mention the additional storage it consumes. You should regularly analyze your indexes to keep your database systems as efficient and lean as possible.

    One thing to note is that these DMV statistics are reset every time SQL Server is restarted. It would therefore not be wise to make decisions about removing indexes right after a server reboot or a cluster roll. If you restart your SQL Server instances frequently, for example if you schedule weekly/monthly cluster rolls, then you may not capture indexes that are only used by weekly/monthly reports run for business users, and if you remove them, you may have some upset people at your desk on Monday morning.
    If you would like to begin analyzing your indexes to possibly remove the ones that your system is not using, I would recommend building a process to load this DMV information into a table on a scheduled basis, depending on how frequently you perform an operation that would reset these statistics. You can then analyze the data over a period of time to get a more accurate view of which indexes are really being used and which ones are not. For more information about this DMV, please see the Books Online link: http://msdn.microsoft.com/en-us/library/ms188755.aspx Follow me on Twitter @PrimeTimeDBA
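
    A minimal sketch of such a snapshot process (the table and its name are hypothetical, not from the post; the scheduled step could be a SQL Agent job):

        -- One-time setup: a history table for periodic snapshots
        CREATE TABLE dbo.IndexUsageHistory (
            capture_time DATETIME NOT NULL DEFAULT GETDATE(),
            database_id  SMALLINT NOT NULL,
            object_id    INT NOT NULL,
            index_id     INT NOT NULL,
            user_seeks   BIGINT NOT NULL,
            user_scans   BIGINT NOT NULL,
            user_lookups BIGINT NOT NULL,
            user_updates BIGINT NOT NULL
        );

        -- Scheduled step: append the current counters
        INSERT INTO dbo.IndexUsageHistory
            (database_id, object_id, index_id,
             user_seeks, user_scans, user_lookups, user_updates)
        SELECT database_id, object_id, index_id,
               user_seeks, user_scans, user_lookups, user_updates
        FROM sys.dm_db_index_usage_stats;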

    Read the article

  • SQL SERVER – Differences Between Left Join and Left Outer Join

    - by pinaldave
    There are a few questions that I had decided not to discuss on this blog because I think they are very simple and many of us already know them. Many times I even receive not-so-positive notes from several readers when I write about something simple. However, assuming that we know it all and that beginners should know everything is not the right attitude. Since day 1, I have been keeping a small journal of the questions I receive on this blog; there are around 200+ questions every day through emails, comments, and occasional phone calls. Yesterday, I received a comment with the following question: What are the differences between Left Join and Left Outer Join? Click here to read the original comment. This question has finally crossed the threshold of being received repeatedly, so here is the answer: there is absolutely no difference between LEFT JOIN and LEFT OUTER JOIN. The same is true for RIGHT JOIN and RIGHT OUTER JOIN. When you use the LEFT JOIN keyword in SQL Server, it means LEFT OUTER JOIN only. I have already written an in-depth visual diagram discussing the JOINs, and I encourage all of you to read the article for further understanding of the JOINs: Read Introduction to JOINs – Basic of JOINs Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Joins, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
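
    If you want to convince yourself quickly, here is a tiny sketch (the temp tables are hypothetical): the two queries below return identical results and produce identical execution plans.

        -- Hypothetical tables for illustration
        CREATE TABLE #a (id INT);
        CREATE TABLE #b (id INT);
        INSERT INTO #a VALUES (1), (2);
        INSERT INTO #b VALUES (1);

        -- These two statements are exactly equivalent:
        SELECT a.id, b.id FROM #a a LEFT JOIN #b b ON a.id = b.id;
        SELECT a.id, b.id FROM #a a LEFT OUTER JOIN #b b ON a.id = b.id;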

    Read the article

  • TFS Build 2010: BuildNumber and DropLocation

    - by javarg
    Automated builds for application releases are current practice in every major development shop nowadays, and using Team Foundation Server Build 2010 for this offers many opportunities to improve the quality of your releases. The following approach generates build drop folders that include the BuildNumber and the Changeset or Label provided, so we can quickly identify the generated binaries on the drop server with the corresponding version.

    1. Branch the DefaultTemplate.xaml and rename it to CustomDefaultTemplate.xaml.
    2. Open it for edit (check out).
    3. Go to the Set Drop Location activity and edit the DropLocation property. Write the following expression:

        BuildDetail.DropLocationRoot + "\" + BuildDetail.BuildDefinition.Name + "\" + If(String.IsNullOrWhiteSpace(GetVersion), BuildDetail.SourceGetVersion, GetVersion) + "_" + BuildDetail.BuildNumber

    4. Check in the branched template.
    5. Create a build definition named TestBuildForDev using the new template.

    The previous expression sets the DropLocation with the following format: (ChangesetNumber|LabelName)_BuildName_BuildNumber. The first part of the folder name will be the changeset number or the label name (if triggered using labels). Folder names will be generated as follows:

        C1850_TestBuildForDev_20111117.1 (changesets start with the letter C)
        LLabelname_TestBuildForDev_20111117.1 (labels start with the letter L)

    Try launching a build from a Changeset and from a Label. You can specify a Label in the GetVersion parameter in the Queue New Build wizard, on the Parameters tab (for labels, add the "L" prefix).

    Read the article

  • Runaway version store in tempdb

    - by DavidWimbush
    Today was really a new one. I got back from a week off and found our main production server's tempdb had gone from its usual 200MB to 36GB. Ironically I spent Friday at the most excellent SQLBits VI and one of the sessions I attended was Christian Bolton talking about tempdb issues - including runaway tempdb databases. How just-in-time was that?! I looked into the file growth history and it looks like the problem started when my index maintenance job was chosen as the deadlock victim. (Funny how they almost make it sound like you've won something.) That left tempdb pretty big but for some reason it grew several more times. And since I'd left the file growth at the default 10% (aaargh!) the worse it got the worse it got. The last regrowth event was 2.6GB. Good job I've got Instant Initialization on. Since the Disk Usage report showed it was 99% unallocated I went into the Shrink Files dialogue which helpfully informed me the data file was 250MB.  I'm afraid I've got a life (allegedly) so I restarted the SQL Server service and then immediately ran a script to make the initial size bigger and change the file growth to a number of MB. The script complained that the size was smaller than the current size. Within seconds! WTF? Now I had to find out what was using so much of it. By using the DMV sys.dm_db_file_space_usage I found the problem was in the version store, and using the DMV sys.dm_db_task_space_usage and the Top Transactions by Age report I found that the culprit was a 3rd party database where I had turned on read_committed_snapshot and then not bothered to monitor things properly. Just because something has always worked before doesn't mean it will work in every future case. This application had an implicit transaction that had been running for over 2 hours.
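
    In case you want to run the same check on your own tempdb, the version store query is quick. A sketch using the standard columns of that DMV (the page-to-MB arithmetic is 8 KB pages):

        -- How much of tempdb is version store vs. free vs. user objects (in MB)
        SELECT
            SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb,
            SUM(unallocated_extent_page_count)     * 8 / 1024 AS free_mb,
            SUM(user_object_reserved_page_count)   * 8 / 1024 AS user_objects_mb
        FROM tempdb.sys.dm_db_file_space_usage;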

    Read the article

  • SQLAuthority News – Community Tech Days – TechEd on The Road – Ahmedabad – June 11, 2011

    - by pinaldave
    TechEd on the Road is back, in Ahmedabad on June 11, 2011! We invite all professional developers, project managers, architects, IT managers, IT administrators, and implementers of Ahmedabad to be a part of Tech•Ed on the Road on 11th June, 2011. We have put together the best sessions from Tech•Ed India 2011 for you in your city. The focal point will be technologies like Database and BI, Windows 7, and ASP.NET. REGISTER HERE! Venue: Ahmedabad Management Association (AMA), Dr. Vikram Sarabhai Marg, University Area, Ahmedabad, Gujarat 380 015. Time: 9:30AM – 5:30PM. The biggest attraction of the event is the session HTML5 – Future of the Web by Harish Vaidyanathan. He is an Evangelist Lead at Microsoft and a hands-on developer himself. I strongly urge all of you to attend his session to understand the direction of the web and Microsoft's take on the subject. I (Pinal Dave) will be presenting a session on SQL Server Performance Tuning, and Jacob Sebastian will be presenting on T-SQL Worst Practices. Do not miss this opportunity. Those who have attended in the past know that for the last two years the venue has been jam-packed within the first few minutes; do come in early to get a better seat and reserve your spot. We will have a QUIZ during the event with various gifts: watches, USB drives, T-shirts, and many more interesting gifts. Review the agenda today and register right away. There will be no video recording, so come and attend the event in person. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Best Practices, Database, DBA, MVP, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • SQLAuthority News – Presenting at Virtual Tech Days TechEd Pre-Con – February 9, 2011

    - by pinaldave
    I will be presenting on the following subject at Virtual Tech Days TechEd Pre-Con on February 9, 2011. Auditing Made Easy: Change Tracking and Change Data Capture. Date and Time: February 9, 2011, 11:45am-12:45pm. Location: Online. In this fast-paced, demo-oriented session we will go over a few concepts related to real-life problems at customer sites. We often see developers and DBAs looking for details such as who dropped a table, who last modified an object, and what exactly was modified. SQL Server 2008 has all the answers: it has various new methods for auditing where you can learn not only what was changed but also who changed it, and by configuring auditing we can capture many more details. We can also prevent changes if proper policy management is configured. If you have ever attended my session on this subject before, this is going to be an absolutely new session and very much demo oriented. There will be a quiz at the end of the session, and I promise that if you attend, you will get all the answers correct. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Remote Access to MSSQL Database From 1&1 Hosting [duplicate]

    - by Zerkey
    This question already has an answer here: How to find web hosting that meets my requirements? 5 answers I just paid ($6/month) for shared Windows hosting through 1&1 Hosting. I was having trouble connecting to my database from home, so I sent an email to support and received the following response: "As we checked your concern here in our end, please be advised that due to limitation of Shared Hosting services, there is no option to connect the database to your SQL Management Studio or through Visual Studio. It is only possible for Dedicated Server package. You may only access the database using MyLittleAdmin at the Control Panel." A dedicated server is like $200 per month! What is the point of having database access only through a web console? I feel I am missing something here, or maybe the support agent is. Is there a way to access my MS SQL database on their servers through Visual Studio or SQL Server Management Studio from my machine? If not, is there a web host that allows this for less than $200 a month? EDIT: Marked as duplicate... I'm not asking for a list of web hosts, I'm asking how to remotely connect to my MSSQL database through 1&1's services.

    Read the article

  • Cannot FTP without simultaneous SSH connection?

    - by Lucas
    I'm trying to set up an old box as a backup server (running 10.04.4 LTS). I intend to use 3rd party software on my PC to periodically connect to my server via FTP(S) and mirror certain files. For some reason, all FTP connection attempts fail UNLESS I'm simultaneously connected via SSH. For example, if I use PuTTY to test the connection to port 21, the system hangs and times out. I get:

        220 Connected to LeServer
        USER lucas
        331 Please specify the password.
        PASS [password]
        <cursor>

    However, when I'm simultaneously logged in (in another session) everything works:

        220 Connected to LeServer
        USER lucas
        331 Please specify the password.
        PASS [password]
        230 Login successful.

    Basically, this means that my software will never be able to connect on its own, as intended. I know that the correct port is open because it works (sometimes), and nmap gives me:

        Starting Nmap 5.00 ( http://nmap.org ) at 2012-03-20 16:15 CDT
        Interesting ports on xx.xxx.xx.x:
        Not shown: 995 closed ports
        PORT    STATE SERVICE
        21/tcp  open  ftp
        22/tcp  open  ssh
        53/tcp  open  domain
        139/tcp open  netbios-ssn
        445/tcp open  microsoft-ds
        Nmap done: 1 IP address (1 host up) scanned in 0.15 seconds

    My only hypothesis is that this has something to do with iptables. Maybe it's allowing only established connections? I don't think that's how I set it up, but maybe? Here are my iptables rules for INPUT:

        lucas@rearden:~$ sudo iptables -L INPUT
        Chain INPUT (policy DROP)
        target                    prot opt source   destination
        fail2ban-ssh              tcp  --  anywhere anywhere     multiport dports ssh
        ufw-before-logging-input  all  --  anywhere anywhere
        ufw-before-input          all  --  anywhere anywhere
        ufw-after-input           all  --  anywhere anywhere
        ufw-after-logging-input   all  --  anywhere anywhere
        ufw-reject-input          all  --  anywhere anywhere
        ufw-track-input           all  --  anywhere anywhere
        ACCEPT                    tcp  --  anywhere anywhere     tcp dpt:ftp

    I'm using vsftpd. Any thoughts/resources on how I could fix this? L
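
    One pattern that matches these symptoms (a hypothesis, not something the post confirms) is an encrypted home directory: with ecryptfs, the home directory is only mounted while a login session holds it, so an open SSH session makes it appear and vsftpd can then complete the login. A quick way to check:

        # If this lists your user, the home directory is ecryptfs-encrypted
        ls /home/.ecryptfs 2>/dev/null

        # Compare the home directory contents with and without an SSH session open
        ls -a /home/lucas

        # vsftpd authenticates through this PAM stack; pam_ecryptfs would show up here
        cat /etc/pam.d/vsftpd

    If that turns out not to be it, the other usual FTP-behind-iptables fix is loading the FTP connection-tracking helper so data connections are recognised as related traffic:

        sudo modprobe nf_conntrack_ftp
        # to persist with ufw, add nf_conntrack_ftp to IPT_MODULES in /etc/default/ufw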

    Read the article

  • cPanel's Web Disk - Security issues?

    - by Tim Sparks
    I'm thinking of using Web Disk (built into the later versions of cPanel) to allow a Windows or Mac computer to map a network drive that is actually a folder on our website (above the public_html folder). We currently use an antiquated local server to store information, but it is only accessible from one location; we would like to be able to access it from other locations as well. I understand that folders above public_html are not accessible via HTTP, but I want to know how secure the access to these folders is when they are mapped as a network drive. There is potentially sensitive information involved, and we need to decide whether it is appropriate to store it here. The map-network-drive option seems to work well, as it behaves as if the files are on your own computer (i.e. you can open and save files without having to upload them, as that happens automatically). We have used Dropbox for similar purposes, but space is an issue with them, as is accountability, and so we haven't used it for sensitive information. Are there any notable security concerns with using Web Disk as a secure file server?

    Read the article

  • Troubleshoot broken ZFS

    - by BBK
    I have one zpool called tank, a RAIDZ1 of 5x1TB SATA HDDs. I'm using Ubuntu Server 11.10 Oneiric, kernel 3.0.0-15-server, with ZFS installed from the ppa, and I'm also using zfs-auto-snapshot. The ZFS file system now hangs my computer whenever the zfs module is loaded into the kernel. Before this started, I had created a few new file systems:

        zfs create -V 10G tank/iscsi1
        zfs create -V 10G tank/iscsi2
        zfs create -V 10G tank/iscsi3

    I shared them through iSCSI via the /dev/tank/iscsiX paths, and my computer sometimes started hanging when I used tank/iscsiX over iSCSI; I do not know exactly why. I switched off iSCSI and started to remove these file systems:

        zfs destroy tank/iscsi3

    Because I'm using zfs-auto-snapshot I had snapshots, and without the -r key my command would not destroy the FS. So I issued the next command:

        zfs destroy -r tank/iscsi3

    tank/iscsi3 was clean and contained nothing; it was destroyed without an issue. But tank/iscsi2 and tank/iscsi1 contained a lot of information. I tried:

        zfs destroy -r tank/iscsi2

    After some time my computer hung. I rebooted. It didn't boot very fast, and the HDDs started working like crazy, making a lot of noise; after 15 minutes they calmed down and the OS booted at last. All seemed to be OK: tank/iscsi2 was destroyed, the file systems in the tank were accessible, and zpool status showed no corruption. I issued the next command:

        zfs destroy -r tank/iscsi1

    The situation repeated itself: after some time my computer hung, but this time ZFS seems not to have healed itself. After the computer is switched on, it starts loading scripts and kernel modules, and as soon as ZFS starts working it hangs the computer. I need to recover the other ZFS file systems lying in the same zpool. A few months ago I backed up the OS to a flash drive; booting from the backed-up OS and importing the pool has the same result, the OS starts hanging. How can I recover my data from the ZFS tank?
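
    A sketch of one recovery approach, under two assumptions (the hang happens when the half-destroyed dataset is processed at import/mount time, and the pool itself is still importable): boot from a system where the pool is not auto-imported, then import it without mounting anything so data can be copied off dataset by dataset:

        # Prevent auto-import on boot: move the cached pool state aside first
        sudo mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak

        # Import the pool without mounting any file systems
        sudo zpool import -N tank

        # Mount only the datasets you need and copy their data off
        sudo zfs mount tank/somedataset    # hypothetical dataset name

    It is also worth knowing that destroying a large zvol can simply take a very long time on this ZFS version, so letting the apparent "hang" run overnight before declaring it dead is worth considering.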

    Read the article

  • Traffic estimation for a multiplayer flash game

    - by Steve Addington
    Hey, I want to know if my rough traffic estimations are right. This would be for a pretty simple realtime flash game in the style of haxball (but not a soccer game); here's a video of it: http://www.youtube.com/watch?v=z_xBdFg1RcI So here comes my estimation; I don't know if it is realistic and I hope someone can help me. Consider the packet attached as a typical one sent every 200ms: it is 148 bytes + 64 bytes of header, which makes around a 200-byte packet. The server will receive 200 bytes x 6 players x 5 times a second = 6000 bytes/s = 5.85 KB/s = 46.9 kbit/s, plus it has to send all of it back to the players, so at this point we are at 94 kbit/s. The server receives all the information, performs the definitive calculation, and sends the new positions to all players in a bigger packet of around 900 bytes that has to be delivered to the other 6, which makes 900 bytes x 6 players x 5 times a second = 27000 bytes/s = 26 KB/s = 210 kbit/s. Overall that would be 26 KB per second, which is like 130 MB of traffic per hour for a 6-player room. But somehow I think the numbers are too high? That would be really much traffic for such a simple game. Did I calculate something wrong?
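
    Sanity-checking the arithmetic under the post's own assumptions (200-byte client updates in, one ~900-byte state packet out to each of 6 players, 5 ticks per second):

        in:    200 B x 6 players x 5/s =  6,000 B/s ≈  5.9 KB/s ≈  47 kbit/s
        out:   900 B x 6 players x 5/s = 27,000 B/s ≈ 26.4 KB/s ≈ 211 kbit/s
        total:                           33,000 B/s ≈ 32.2 KB/s ≈ 258 kbit/s
        hour:  33,000 B/s x 3,600 s    ≈ 119 MB per hour for the room

    So each per-step figure in the post checks out; only the "overall 26 KB/s" counts the outbound side alone, and adding the inbound traffic puts the hourly total closer to 119 MB than 130 MB.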

    Read the article

  • SQL – Download NuoDB and Qualify for FREE Amazon Gift Cards

    - by Pinal Dave
    July has been a fantastic month, and Team NuoDB has really appreciated the active participation of the SQLAuthority.com reader base. Earlier we launched two contests with NuoDB, and both were very much appreciated by readers. There have been constant demands for more contests, and Team NuoDB is very excited to support them. Here are the details of the contests run earlier: What ACID stands in the Database? – Contest to Win 24 Amazon Gift Cards and Joes 2 Pros 2012 Kit; What is the latest Version of NuoDB? – A Quick Contest to Get Amazon Gift Cards. Based on the earlier successful contests, the kind folks at NuoDB decided to support one more round of giveaways for SQLAuthority.com contests. However, please note that this month's contest will end in the next 48 hours; you have to take part before July 31st, 2013 11:59:00 PM PST. Here is the quick contest: you just have to go and download NuoDB. The first 10 people to download NuoDB will get USD 10 Amazon gift cards. Everyone else will be entered into a lucky draw for Amazon gift cards of USD 50. Winners will be announced in the next 24 hours. To be eligible for this contest, please download NuoDB before July 31st, 2013 11:59:00 PM PST. Bonus Round: If you have entered the contest above, you can also enter to win the latest Beginning SSRS Joes 2 Pros book. You just have to leave a comment over here with a note about how many different platforms NuoDB supports. Here are a few of the blog posts I wrote earlier on the subject: Part 1 – Install NuoDB in 90 Seconds; Part 2 – Manage NuoDB Installation; Part 3 – Explore NuoDB Database; Part 4 – Migrate from SQL Server to NuoDB; Part 5 – NuoDB and Third Party Explorer – SQuirreL SQL Client, SQL Workbench/J and DbVisualizer. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Ubuntu and racadm

    - by lmqcn
    I recently purchased a used PowerEdge 1850 server, and it came with a DRAC card. After wiping the HDD and installing Ubuntu Server 12.04.3 LTS amd64 on it, I am now trying to gain access to the DRAC, which I believe is version 4. I have properly configured the DRAC to use its own IP on my LAN, and when I point my browser to the IP address I am greeted with the DRAC login page (it has the Dell logo and everything). However, the credentials root/calvin were denied, so I think the previous owners had set their own password. After doing some reading, it appears that I can reset the credentials to the default using:

        racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword

    but upon entering the command, I get this error:

        bash: /usr/sbin/racadm: No such file or directory

    This holds true even if I run sudo su prior to running the racadm command. If, however, I run:

        sudo racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword

    there are no errors. Yet when I try to log into the DRAC via the web interface using the credentials root/newpassword, I am still not granted access. I installed the Dell utilities via the guide at https://wiki.ubuntu.com/HardwareSupportMachinesServersDellNotes. I first tried to install the 64-bit version from the Dell repositories, but after that was unsuccessful, I just followed the guide verbatim. No errors were produced in either case. I even followed the information at the bottom of the guide by executing:

        sudo pppd /dev/ttyS1 1382400 crtscts noipdefault noauth lock persist connect 'chat -v "" CLIENT CLIENTSERVER "\\c"'

    but obviously replacing /dev/ttyS1 with the correct information for my system. Running ls -l /usr/sbin/ | grep racadm yields:

        -rwxr-xr-x 1 root root 87930 Sep 16 04:03 racadm

    I have tried these credentials after each attempt at changing the password: root/calvin, root/newpassword, admin/calvin, admin/newpassword. All have been unsuccessful. What is the next course of action I should take?
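
    A side note on the confusing shell behaviour, since it can hide the real problem here: "No such file or directory" for a file that ls clearly shows is the classic symptom of a 32-bit binary whose 32-bit loader is missing on an amd64 system (whether this particular racadm build is 32-bit is an assumption, but an easy one to test):

        # If this reports a 32-bit ELF, the missing piece is the interpreter
        file /usr/sbin/racadm

        # On Ubuntu 12.04 the 32-bit compatibility libraries come from
        sudo apt-get install ia32-libs

    If that is the case, it could also explain why the password reset silently did nothing: the command may never have actually run against the DRAC.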

    Read the article

  • MySQL – Grouping by Multiple Columns to Single Column as A String

    - by Pinal Dave
    In the post titled SQL SERVER – Grouping by Multiple Columns to Single Column as A String we have seen how to group multiple column data into comma-separated values in a single row, grouping by another column, by using the FOR XML clause. In this post we will see how we can produce the same result using the GROUP_CONCAT function in MySQL. Let us create the following table and data:

        CREATE TABLE TestTable (ID INT, Col VARCHAR(4));
        INSERT INTO TestTable (ID, Col)
        SELECT 1, 'A' UNION ALL
        SELECT 1, 'B' UNION ALL
        SELECT 1, 'C' UNION ALL
        SELECT 2, 'A' UNION ALL
        SELECT 2, 'B' UNION ALL
        SELECT 2, 'C' UNION ALL
        SELECT 2, 'D' UNION ALL
        SELECT 2, 'E';

    Now, to generate CSV values of the column Col for each ID, use the following code:

        SELECT ID, GROUP_CONCAT(col) AS CSV
        FROM TestTable
        GROUP BY ID;

    The result is:

        ID  CSV
        1   A,B,C
        2   A,B,C,D,E

    You can also change the delimiter. For example, if you want a pipe symbol (|) instead of a comma, use the following:

        SELECT ID, REPLACE(GROUP_CONCAT(col),',','|') AS CSV
        FROM TestTable
        GROUP BY ID;

    The result is:

        ID  CSV
        1   A|B|C
        2   A|B|C|D|E

    MySQL makes this very simple with its support of the GROUP_CONCAT function. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
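
    Worth noting: GROUP_CONCAT also accepts a SEPARATOR clause directly, which avoids the REPLACE call (and its edge case when the data itself contains commas):

        SELECT ID, GROUP_CONCAT(col SEPARATOR '|') AS CSV
        FROM TestTable
        GROUP BY ID;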

    Read the article

  • ecryptfs - decrypt and mount at boot with USB key

    - by Josh McGee
    I have a system running Ubuntu Server as a testbed for some services that I want to get familiar with. I decided to let the installation procedure set up encryption. I knew all along that I would have to decrypt it with the passphrase in order to get the system booted, but I assumed it wouldn't matter since it will only boot once or twice a month. However, my brother has informed me that he is a victim of power outages at the residence where this server is located. This means we have to explain to his girlfriend how to turn on the computer, attach a keyboard, connect a monitor (she just can't understand that she can type to the computer without a display, so whatever) and input the passphrase for us, while we are at work. I have arrived at the conclusion that I should just put together a USB key that can be plugged in before powering on the computer, to avoid all the trouble. Is this possible with ecryptfs? Is there a tutorial or simple list of instructions available so that I can knock this out and focus back on the stuff I care about? EDIT: I am aware that this is possible with LUKS and dm-crypt, but unfortunately the magical encryption that Ubuntu hands you during the installation is only ecryptfs so my question is specific to that.
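
    For what it's worth, the guided "encrypted LVM" option in the Ubuntu Server installer is LUKS/dm-crypt under the hood (ecryptfs is what the installer uses for encrypted home directories), so the LUKS route may apply after all. A sketch of the USB-key approach in that case, using the passdev keyscript shipped with cryptsetup; the device names, UUIDs, and key file path below are placeholders:

        # 1. Add a key file from the USB stick as an additional LUKS key
        sudo cryptsetup luksAddKey /dev/sda5 /media/KEYUSB/keyfile

        # 2. In /etc/crypttab, point the entry at the USB device and keyscript
        #    (for passdev the third field is device:path[:timeout])
        # sda5_crypt UUID=<luks-uuid> /dev/disk/by-label/KEYUSB:/keyfile:10 keyscript=passdev

        # 3. Rebuild the initramfs so the keyscript is available at boot
        sudo update-initramfs -u

    With that in place, the machine should boot unattended when the stick is inserted and fall back to the passphrase prompt when it is not.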

    Read the article

  • Use Those Schemas, People!

    - by BuckWoody
    Database schemas are just containers – they aren't users or anything else – think of a sub-directory on the hard drive. In early versions of SQL Server we "hid" schemas, placing all objects under "dbo", which gave the erroneous perception that schemas are users. In SQL Server 2005, we "un-hid" or re-introduced schemas within the database. Users can have a default schema (a place where their new objects go), you can add new schemas and transfer objects between them, and they have many other benefits. But I still see a lot of applications, developed by shops I know as well as vendors, that don't make use of a schema: everything is piled under dbo. I completely understand this – since permissions can be granted to a schema, they feel a lot like a user, so it's just easier not to worry about both users and schemas when you create a database. But if you use them properly you can make your application more understandable and portable. You should at least take a few minutes and read more about them – you owe it to your users: http://msdn.microsoft.com/en-us/library/ms190387.aspx
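
    A minimal sketch of the moving parts (the schema, table, role, and user names are hypothetical):

        -- Create a schema owned by dbo and move an existing table into it
        CREATE SCHEMA Sales AUTHORIZATION dbo;
        GO
        ALTER SCHEMA Sales TRANSFER dbo.Orders;           -- hypothetical table
        GO
        -- Permissions can then be granted once, at the schema level
        GRANT SELECT ON SCHEMA::Sales TO ReportingRole;   -- hypothetical role
        GO
        -- And a user can default into it for new objects
        ALTER USER SomeUser WITH DEFAULT_SCHEMA = Sales;  -- hypothetical user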

    Read the article

  • Controlling server configurations with IPS

    - by barts
    I recently received a customer question regarding how they could best control which packages, and which versions, were used on their production Solaris 11 servers. They had considered pointing each server at its own software repository, a common initial approach. A simpler method leverages one of the dependency mechanisms we introduced with Solaris 11, but it is not immediately obvious to most people.

    Typically, most internal IT departments qualify particular versions for production use. What this customer wanted to do was ensure that their operations staff only installed internally qualified versions of Solaris on their servers. The easiest way of doing this is to leverage the 'incorporate' type of dependency in a small package defined for each server type. From the reference "Packaging and Delivering Software With the Image Packaging System in Oracle® Solaris 11.1": The incorporate dependency specifies that if the given package is installed, it must be at the given version, to the given version accuracy. For example, if the dependent FMRI has a version of 1.4.3, then no version less than 1.4.3 or greater than or equal to 1.4.4 satisfies the dependency. Version 1.4.3.7 does satisfy this example dependency. The common way to use incorporate dependencies is to put many of them in the same package to define a surface in the package version space that is compatible. Packages that contain such sets of incorporate dependencies are often called incorporations. Incorporations are typically used to define sets of software packages that are built together and are not separately versioned. The incorporate dependency is heavily used in Oracle Solaris to ensure that compatible versions of software are installed together. An example incorporate dependency is:

        depend type=incorporate fmri=pkg:/driver/network/ethernet/[email protected],5.11-0.175.0.0.0.2.1

    So, to make sure only qualified versions are installed on a server, create a package that will be installed on the machines to be controlled. This package will contain an incorporate dependency on the "entire" package, which controls the various components used to build Solaris. Every time a new version of Solaris has been qualified for production use, create a new version of this package specifying the new version of "entire" that was qualified. Once this new control package is available in the repositories configured on the production server, the pkg update command will update that system to the specified version. Unless a new version of the control package is made available, pkg update will report that no updates are available, since no version of the control package can be installed that satisfies the incorporate constraint. Note that, if desired, the same package can be used to specify which packages must be present on the system by adding either "require" or "group" dependencies; the latter permits removal of some of the packages, the former does not. More details on this can be found in either the section 5 pkg man page or the previously mentioned reference document. This technique of using package dependencies to constrain system configuration leverages the SAT solver which is at the heart of IPS, and is basic to how we package Solaris itself.
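
    A sketch of what such a control package's manifest could look like (the publisher, package name, and version strings are hypothetical; the incorporate action pins "entire" to the qualified build):

        set name=pkg.fmri value=pkg://mysite/site/qualified-solaris@1.0
        set name=pkg.summary value="Pins this server to the qualified Solaris version"
        depend type=incorporate fmri=pkg:/entire@0.5.11-0.175.0.0.0.2.0

    Publish a new version of this package each time a new "entire" is qualified, and pkg update on the servers does the rest.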

    Read the article

  • SQL – What is the latest Version of NuoDB? – A Quick Contest to Get Amazon Gift Cards

    - by Pinal Dave
    We had a great contest earlier last week: What ACID stands in the Database? – Contest to Win 24 Amazon Gift Cards and Joes 2 Pros 2012 Kit. It received quite a few responses. Just like any other contest, not everyone was a winner, so the kind folks at NuoDB decided to give another chance to everyone who did not win the last contest. This means that if you missed taking part in the earlier contest, or if you took part and did not win, you still have one more chance to win an Amazon gift card. Here is the quick contest: you just have to go and download NuoDB. The first 10 people to download NuoDB will get USD 10 gift cards. Everyone else will be entered into a lucky draw for Amazon gift cards of USD 50. Winners will be announced in the next 24 hours. Bonus Round: If you have entered the contest above, you can also enter to win the latest Beginning SSRS Joes 2 Pros book. You just have to leave a comment over here about your experience with NuoDB and what the latest version of the product is. Here are a few of the blog posts I wrote earlier on the subject: Part 1 – Install NuoDB in 90 Seconds; Part 2 – Manage NuoDB Installation; Part 3 – Explore NuoDB Database; Part 4 – Migrate from SQL Server to NuoDB; Part 5 – NuoDB and Third Party Explorer – SQuirreL SQL Client, SQL Workbench/J and DbVisualizer. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Static IP configuration causing apt-get errors

    - by JPbuntu
    I am getting errors when running apt-get update or when installing new packages, although this only happens when the server is configured with a static IP address. Changing the configuration back to DHCP and restarting networking fixes the problem, but I want a static IP. Once it is working, I can change back to my static IP address and restart networking, although this only works until I restart the server (restarting the router is OK), and then I start getting the same errors and have to switch back to DHCP. Any ideas on what could be causing this, or tips on troubleshooting it? Thanks in advance. Here is my static IP configuration:

        auto eth0
        iface eth0 inet static
            address 192.168.2.2
            netmask 255.255.255.0
            gateway 192.168.2.1

    The apt-get update errors go something like this. A few of these:

        Ign http://us.archive.ubuntu.com precise-backports InRelease

    then a lot of these:

        Err http://security.ubuntu.com precise-security Release.gpg
          Something wicked happened resolving 'security.ubuntu.com:http' (-5 - No address associated with hostname)

    and a lot of these:

        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-backports/universe/i18n/Translation-en  Something wicked happened resolving 'us.archive.ubuntu.com:http' (-5 - No address associated with hostname)
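
    "No address associated with hostname" points at DNS rather than apt: DHCP supplies nameservers automatically, while a static stanza does not, so name resolution dies once the DHCP-provided resolver entry goes away (which would explain it surviving until a reboot). On 12.04 with resolvconf, the usual fix is to declare the nameservers in the static stanza itself; the addresses below are placeholders for whatever the router/ISP actually uses:

        auto eth0
        iface eth0 inet static
            address 192.168.2.2
            netmask 255.255.255.0
            gateway 192.168.2.1
            dns-nameservers 192.168.2.1 8.8.8.8

    followed by a networking restart (sudo /etc/init.d/networking restart) or a reboot.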

    Read the article

  • sp_ssiscatalog v1.0.1.0 now available for download

    - by jamiet
    13 days ago I wrote a blog post entitled Introducing sp_ssiscatalog (v1.0.0.0) in which I first made mention of sp_ssiscatalog, an open source stored procedure intended to make it easy to query the SSIS Catalog. I have been working on some enhancements since then, and hence v1.0.1.0 is now available for download from Codeplex.

    What's new in this release. This release includes the following enhancements:

    - [execution_id] now gets returned in a call to EXEC [dbo].[sp_ssiscatalog] @operation_type='exec';
    - Filter events by specifying packages to ignore: EXEC [dbo].[sp_ssiscatalog] @operation_type='exec',@exec_events_packagesexcluded='SomePackage.dtsx,AnotherPackage.dtsx';
    - [event_message_id] is now returned in a list of events
    - List of executions can now be filtered via a minimum and maximum execution_id: EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_minimum_execution_id=198,@execs_maximum_execution_id=201
    - Events resultsets now have a field, [event_message_context_xml], that contains an XML document containing all [event_message_context] info (if any exists)

    Installation instructions:

    1. Download the zip file at DB v1.0.1.0. It contains two files, SsisReportingPack.dacpac & SSISDB.dacpac
    2. Unzip to a folder of your choosing
    3. Open a command prompt and change to the directory into which you unzipped the files
    4. Execute: "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local) (/tsn specifies the target server. Change as appropriate.)

    If everything works OK you'll see output confirming the publish, which varies depending on whether the target database already exists or not. This will create a database called [SsisReportingPack] which contains [dbo].[sp_ssiscatalog]. Feedback is welcomed! @Jamiet

    Read the article
