Search Results

Search found 94141 results on 3766 pages for 'server storage systems'.

Page 213/3766 | < Previous Page | 209 210 211 212 213 214 215 216 217 218 219 220  | Next Page >

  • Using Google Docs as a Storage System like S3. Is there any limit?

    - by mickthomp
    Hi all, I'm considering upgrading to a Google Docs premium account (gDrive). I'm wondering if it can be used the way I'm using Amazon S3 at the moment: I'd like to upload images. Do you know if there is any limit on the number of images I can upload to my 200GB Google Docs account? I think it could be really useful to have something like that, and we could save some money on our web apps. Thank you ;)

    Read the article

  • SQL Server 2008 Restore hangs on 100%

    - by CL4NCY
    Hi, I have backed up a large database from SQL 2005 and am trying to restore it to a SQL 2008 database. It seems to work fine until it gets to 100%, at which point it hangs indefinitely. I've managed to restore smaller databases to this server without any trouble. Any ideas?
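
    One thing worth checking before assuming the restore is truly hung: the percentage reported covers the data-copy phase, and a large restore can then spend a long time replaying the transaction log in the recovery phase while still showing 100%. A minimal sketch, assuming SQL Server 2005 or later on the destination instance, is to look at the restore's own request from a second connection:

        -- Hedged sketch: is the RESTORE still doing work, or genuinely stuck?
        SELECT session_id,
               command,                                     -- 'RESTORE DATABASE' while it runs
               percent_complete,                            -- data-copy progress only
               wait_type,
               wait_time,
               estimated_completion_time / 60000.0 AS est_minutes_left
        FROM   sys.dm_exec_requests
        WHERE  command LIKE 'RESTORE%';

    If the request is still present and its waits keep changing, the restore is most likely working through recovery rather than hanging; the SQL Server error log also records the "percent processed" and recovery-phase messages as it goes.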

    Read the article

  • Oracle Solaris 11 ZFS Lab for Openworld 2012

    - by user12626122
    Preface
    This is the content from the Oracle Openworld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - that's probably because in writing it I became very time-conscious after the ASM/ACFS on Solaris extravaganza I ran last year, which was almost too long for mortal man to finish in the 1-hour session. Enjoy.
    Table of Contents
    Exercise Z.1: ZFS Pools
    Exercise Z.2: ZFS File Systems
    Exercise Z.3: ZFS Compression
    Exercise Z.4: ZFS Deduplication
    Exercise Z.5: ZFS Encryption
    Exercise Z.6: Solaris 11 Shadow Migration
    Introduction
    This set of exercises is designed to briefly demonstrate new features in the Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included is the creation of zpools and zfs file systems - the basic building blocks of the technology - and Compression, which is the complement of Deduplication. The exercises are just introductions - you are referred to the ZFS Administration Manual for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M), with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in a VirtualBox VM running Solaris 11 with 6 virtual 3 GB disks to play with.
    Exercise Z.1: ZFS Pools
    Task: You have several disks to use for your new file system. Create a new zpool and a file system within it.
    Lab: You will check the status of existing zpools, create your own pool and expand it.
    Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this:
    root@solaris:~# zpool list
    NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
    rpool 15.9G 6.62G 9.25G 41% 1.00x ONLINE -
    root@solaris:~# zpool status
    pool: rpool
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    rpool ONLINE 0 0 0
    c3t0d0s0 ONLINE 0 0 0
    errors: No known data errors
    Note the disk device the root pool is on - c3t0d0s0.
    Now you will create your own ZFS pool. First you will check what disks are available:
    root@solaris:~# echo | format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63> /pci@0,0/pci8086,2829@d/disk@0,0
    1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@2,0
    2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@3,0
    3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@4,0
    4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@5,0
    5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@6,0
    6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@7,0
    Specify disk (enter its number): Specify disk (enter its number):
    The root disk is numbered 0. The others are free for use.
    Try creating a simple pool and observe the error message:
    root@solaris:~# zpool create mypool c3t2d0 c3t3d0
    'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool
    So destroy that pool and create a mirrored pool instead:
    root@solaris:~# zpool destroy mypool
    root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0
    root@solaris:~# zpool status mypool
    pool: mypool
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    mypool ONLINE 0 0 0
    mirror-0 ONLINE 0 0 0
    c3t2d0 ONLINE 0 0 0
    c3t3d0 ONLINE 0 0 0
    errors: No known data errors
    Back to top
    Exercise Z.2: ZFS File Systems
    Task: You have to create file systems for later exercises. You can see that when a pool is created, a file system of the same name is created:
    root@solaris:~# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    mypool 86.5K 2.94G 31K /mypool
    Create your filesystems and mountpoints as follows:
    root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1
    The -o option sets the mount point and automatically creates the necessary directory.
    root@solaris:~# zfs list mypool/mydata1
    NAME USED AVAIL REFER MOUNTPOINT
    mypool/mydata1 31K 2.94G 31K /data1
    Back to top
    Exercise Z.3: ZFS Compression
    Task: Try out the different forms of compression available in ZFS.
    Lab: Create a 2nd filesystem with compression, fill both file systems with the same data, and observe the results.
    You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax:
    compression=on | off | lzjb | gzip | gzip-N | zle
    Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).
    Create a second filesystem with compression turned on. Note how you set and get your values separately:
    root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2
    root@solaris:~# zfs set compression=gzip-9 mypool/mydata2
    root@solaris:~# zfs get compression mypool/mydata1
    NAME PROPERTY VALUE SOURCE
    mypool/mydata1 compression off default
    root@solaris:~# zfs get compression mypool/mydata2
    NAME PROPERTY VALUE SOURCE
    mypool/mydata2 compression gzip-9 local
    Now you can copy the contents of /usr/lib into both your normal and compressing filesystems and observe the results. Don't forget the dot or period (".") in the find(1) command below:
    root@solaris:~# cd /usr/lib
    root@solaris:/usr/lib# find . -print | cpio -pdv /data1
    root@solaris:/usr/lib# find . -print | cpio -pdv /data2
    The copy into the compressing file system takes longer - as it has to perform the compression - but the results show the effect:
    root@solaris:/usr/lib# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    mypool 1.35G 1.59G 31K /mypool
    mypool/mydata1 1.01G 1.59G 1.01G /data1
    mypool/mydata2 341M 1.59G 341M /data2
    Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations, which are not covered in this lab but are covered extensively in the ZFS Administration Guide.
    Back to top
    Exercise Z.4: ZFS Deduplication
    The deduplication property is used to remove redundant data from a ZFS file system.
    With the property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared.
    Task: See how to implement deduplication and its effects.
    Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib.
    root@solaris:/usr/lib# zfs destroy mypool/mydata2
    root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1
    root@solaris:/usr/lib# rm -rf /data1/*
    root@solaris:/usr/lib# mkdir /data1/2nd-copy
    root@solaris:/usr/lib# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    mypool 1.02M 2.94G 31K /mypool
    mypool/mydata1 43K 2.94G 43K /data1
    root@solaris:/usr/lib# find . -print | cpio -pd /data1
    2142768 blocks
    root@solaris:/usr/lib# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    mypool 1.02G 1.99G 31K /mypool
    mypool/mydata1 1.01G 1.99G 1.01G /data1
    root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy
    2142768 blocks
    root@solaris:/usr/lib# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    mypool 1.99G 1.96G 31K /mypool
    mypool/mydata1 1.98G 1.96G 1.98G /data1
    You could go on creating copies for quite a while... but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool, and there is a zpool-wide property dedupratio:
    root@solaris:/usr/lib# zpool get dedupratio mypool
    NAME PROPERTY VALUE SOURCE
    mypool dedupratio 4.30x -
    Deduplication can also be checked using "zpool list":
    root@solaris:/usr/lib# zpool list
    NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
    mypool 2.98G 1001M 2.01G 32% 4.30x ONLINE -
    rpool 15.9G 6.66G 9.21G 41% 1.00x ONLINE -
    Before moving on to the next topic, destroy that dataset and free up some space:
    root@solaris:~# zfs destroy mypool/mydata1
    Back to top
    Exercise Z.5: ZFS Encryption
    Task: Encrypt sensitive data.
    Lab: Explore basic ZFS encryption.
    This lab only covers the basics of ZFS encryption. In particular it does not cover various aspects of key management. Please see the ZFS Administration Manual and the zfs_encrypt(1M) manual page for more detail on this functionality.
    root@solaris:~# zfs create -o encryption=on mypool/data2
    Enter passphrase for 'mypool/data2': ********
    Enter again: ********
    root@solaris:~#
    Creation of a descendant dataset shows that encryption is inherited from the parent:
    root@solaris:~# zfs create mypool/data2/data3
    root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2
    NAME PROPERTY VALUE SOURCE
    mypool/data2 encryption on local
    mypool/data2 keysource passphrase,prompt local
    mypool/data2 keystatus available -
    mypool/data2 checksum sha256-mac local
    mypool/data2/data3 encryption on inherited from mypool/data2
    mypool/data2/data3 keysource passphrase,prompt inherited from mypool/data2
    mypool/data2/data3 keystatus available -
    mypool/data2/data3 checksum sha256-mac inherited from mypool/data2
    You will find that the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session you may wish to explore the changing of a key using "zfs key -c mypool/data2".
    Back to top
    Exercise Z.6: Shadow Migration
    Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification to the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system.
    Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while maintaining access to it.
    Lab: Create the infrastructure for shadow migration and transfer one file system into another.
    First create the file system you want to migrate:
    root@solaris:~# zpool create oldstuff c3t4d0
    root@solaris:~# zfs create oldstuff/forgotten
    Then populate it with some files:
    root@solaris:~# cd /var/adm
    root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten
    You need the shadow-migration package installed:
    root@solaris:~# pkg install shadow-migration
    Packages to install: 1
    Create boot environment: No
    Create backup boot environment: No
    Services to change: 1
    DOWNLOAD PKGS FILES XFER (MB)
    Completed 1/1 14/14 0.2/0.2
    PHASE ACTIONS
    Install Phase 39/39
    PHASE ITEMS
    Package State Update Phase 1/1
    Image State Update Phase 2/2
    You then enable the shadowd service:
    root@solaris:~# svcadm enable shadowd
    root@solaris:~# svcs shadowd
    STATE STIME FMRI
    online 7:16:09 svc:/system/filesystem/shadowd:default
    Set the file system to be migrated to read-only:
    root@solaris:~# zfs set readonly=on oldstuff/forgotten
    Create a new zfs file system with the shadow property set to the file system to be migrated:
    root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered
    Use the shadowstat(1M) command to see the progress of the migration:
    root@solaris:~# shadowstat
    EST BYTES BYTES ELAPSED
    DATASET XFRD LEFT ERRORS TIME
    mypool/remembered 92.5M - - 00:00:59
    mypool/remembered 99.1M 302M - 00:01:09
    mypool/remembered 109M 260M - 00:01:19
    mypool/remembered 133M 304M - 00:01:29
    mypool/remembered 149M 339M - 00:01:39
    mypool/remembered 156M 86.4M - 00:01:49
    mypool/remembered 156M 8E 29 (completed)
    Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data.
    The procedure for migrating a file system over NFS is similar - see the ZFS Administration Manual.
    That concludes this lab session.

    Read the article

  • winusb formatting and installation error, lost data storage on 32gb flash drive

    - by Cary Felton
    I recently purchased a PNY-brand 32GB USB drive to use for configuring a dual boot on my computer and for backing up files from my HDD. I also recently installed Zorin OS 6.1 as my main OS, then downloaded WinUSB for Linux and set it to install the ISO for Windows 8 Release Preview (the activation code is still good until Jan. 2013), and there was an error during the installation. I rebooted the computer, plugged in the USB drive, and went from 32GB of usable space to somehow having a partitioned drive (mounted as two separate drives), one being 3.7GB and the other being, I believe, 25GB or so. I then formatted the drive with GParted back to FAT32, and it still read as a Windows USB with the installation files still on it. So I took my USB drive to my library, which runs Windows 8, installed BOOTICE, and completely reformatted the drive; now it is usable, but still not perfect. It reads that I only have 29.9GB of usable space. I have already considered the non-usable area, and that doesn't account for this, because when I first bought the drive and plugged it in on Linux it read the total drive space as 32.2GB with exactly 32GB usable. Somehow I am short by 2GB of space, which is very critical for what I am doing. I love Linux; I only need Windows for Blu-ray playback with DAPlayer, as it won't work well in Wine, but if I keep losing space on my USB drives I'm afraid I'll have to switch back. Any help would be appreciated, as I have already visited PNY's site and they have no support for this issue, and I'm not that fond of partitions, file systems, or formatting.

    Read the article

  • Windows 2008 R2 SMB / CIFS Logging to diagnose Brother MFC Network Scanning

    - by Steven Potter
    I am attempting to set up network scanning on a Brother MFC-9970CDW printer. According to the Brother documentation, the printer can be set up to connect to any CIFS network share. I applied all of the appropriate settings on the printer, however I get a "sending error" when I try to scan a document. When I look at the logs of the 2008 R2 server that I am attempting to connect to, I can see in the security log where the printer successfully authenticates, however nothing else is logged. I would assume that immediately after the authentication the printer is making a CIFS request and some sort of error is occurring, however I can't seem to find any way to log this information to find out what is going on. Is it possible to get Windows 2008 to log SMB/CIFS traffic?
    Follow-up: I installed Microsoft Netmon and captured the packets associated with the transaction:
    510 3:04:28 PM 7/9/2012 34.4277743 System 192.168.1.134 192.168.1.10 SMB SMB:C; Negotiate, Dialect = NT LM 0.12 {SMBOverTCP:30, TCP:29, IPv4:22}
    511 3:04:28 PM 7/9/2012 34.4281246 System 192.168.1.10 192.168.1.134 SMB SMB:R; Negotiate, Dialect is NT LM 0.12 (#0), SpnegoToken (1.3.6.1.5.5.2) {SMBOverTCP:30, TCP:29, IPv4:22}
    519 3:04:29 PM 7/9/2012 34.8986214 System 192.168.1.134 192.168.1.10 SMB SMB:C; Session Setup Andx, NTLM NEGOTIATE MESSAGE {SMBOverTCP:30, TCP:29, IPv4:22}
    520 3:04:29 PM 7/9/2012 34.8989310 System 192.168.1.10 192.168.1.134 SMB SMB:R; Session Setup Andx, NTLM CHALLENGE MESSAGE - NT Status: System - Error, Code = (22) STATUS_MORE_PROCESSING_REQUIRED {SMBOverTCP:30, TCP:29, IPv4:22}
    522 3:04:29 PM 7/9/2012 34.9022870 System 192.168.1.134 192.168.1.10 SMB SMB:C; Session Setup Andx, NTLM AUTHENTICATE MESSAGE, Version: v2, Domain: CORP, User: PRINTSUPOFF, Workstation: BRN001BA9AD1FE6 {SMBOverTCP:30, TCP:29, IPv4:22}
    523 3:04:29 PM 7/9/2012 34.9032421 System 192.168.1.10 192.168.1.134 SMB SMB:R; Session Setup Andx {SMBOverTCP:30, TCP:29, IPv4:22}
    525 3:04:29 PM 7/9/2012 34.9051855 System 192.168.1.134 192.168.1.10 SMB SMB:C; Tree Connect Andx, Path = \\192.168.1.10\IPC$, Service = ????? {SMBOverTCP:30, TCP:29, IPv4:22}
    526 3:04:29 PM 7/9/2012 34.9053083 System 192.168.1.10 192.168.1.134 SMB SMB:R; Tree Connect Andx, Service = IPC {SMBOverTCP:30, TCP:29, IPv4:22}
    528 3:04:29 PM 7/9/2012 34.9073573 System 192.168.1.134 192.168.1.10 DFSC DFSC:Get DFS Referral Request, FileName: \\192.168.1.10\NSCFILES, MaxReferralLevel: 3 {SMB:33, SMBOverTCP:30, TCP:29, IPv4:22}
    529 3:04:29 PM 7/9/2012 34.9152042 System 192.168.1.10 192.168.1.134 SMB SMB:R; Transact2, Get Dfs Referral - NT Status: System - Error, Code = (549) STATUS_NOT_FOUND {SMB:33, SMBOverTCP:30, TCP:29, IPv4:22}
    531 3:04:29 PM 7/9/2012 34.9169738 System 192.168.1.134 192.168.1.10 SMB SMB:C; Tree Disconnect {SMBOverTCP:30, TCP:29, IPv4:22}
    532 3:04:29 PM 7/9/2012 34.9170688 System 192.168.1.10 192.168.1.134 SMB SMB:R; Tree Disconnect {SMBOverTCP:30, TCP:29, IPv4:22}
    As you can see, the DFS referral fails and the transaction is shut down. I can't see any reason for the DFS referral to fail. The only reference I can find online is: https://bugzilla.samba.org/show_bug.cgi?id=8003 Anyone have any ideas for a solution?

    Read the article

  • Visual C# 2008 Express connection to SQL Server 2008 Express problem

    - by Phil
    Hi guys, I have a problem with Visual C# 2008 Express (SP1) connecting to SQL Server 2008 Express. The "Add Connection" window (wherever it is initiated from) doesn't list the existing SQL Server, and offers no option for SQL Server except the Compact edition. Note that I've got VWD 2008 Express (SP1) on the same machine, which shows the window correctly (with SQL Server listed), and SQL Server Management Studio works fine with the server as well. I've seen other similar posts and took some of the advice: reinstalled VC#, checked that the services are running, etc., but with no success with VC# so far. Again, on the same machine VWD shows the dialog with the SQL Server option, but VC# shows only 3 options in the "Change data source" dialog (1. Microsoft Access Database File (OLE DB), 2. Microsoft SQL Server Compact 3.5, 3. Microsoft SQL Server Database File). Any idea? Thanks in advance, Phil

    Read the article

  • How to implement dual communication between server and client via http

    - by Alex Sebastiano
    I have an AJAX client which must receive messages from the server. Some messages from the server don't fit the request-response pattern. For example, imagine a game that players can join: the server must send the client info about a player entering. But how can the server send a message to the client over HTTP without a request from the client? The only solution I can come up with: the client sends a request to the server (a getNewPlayerEnter request) with a big timeout; the server checks the state of the player set; if there are new players in the set, the server sends the info to the client; if not, the server 'sleeps' for some time and after 'sleeping' checks the player set again. I think my solution is a little naive (maybe more than a little). How do I implement this properly? P.S. Sorry for my English.

    Read the article

  • Adding user groups from a remote domain server to permissions of a remote desktop terminal server fails. why?

    - by doveyg
    I have 3 computers: two are servers running Windows Server 2008 and the other runs Windows 7. One of the servers has the following roles installed: Active Directory, DHCP and DNS. The other server has the Terminal Server role installed. I am trying to log on to the Terminal Server via Remote Desktop from the Windows 7 machine with credentials from the Active Directory server. Sounds simple enough, right? Well, no. Whenever I try to add users or groups from the Active Directory domain server to the Terminal Server's permissions for RDP, it seems to ignore, or forget, them. With the various methods I was able to find, it either adds a strange string of numbers after the user group or shows the icon to the left with a question mark on it, and reopening the dialogue box replaces the user group with the name of the domain. I am confident I have the domain set up correctly, as I am able to log on as users from Active Directory on other computers I have put in the domain, and when I attempt to browse the user objects from the domain I am prompted with a username/password field and am able to view the structure of Active Directory objects. Please advise.

    Read the article

  • SQL Server: Is it possible to prevent SQL Agent from failing a step on error?

    - by Kenneth
    I have a stored procedure that runs custom backups for around 60 SQL servers (a mix of 2000 through 2008 R2). Occasionally, due to issues outside of my control (backup device inaccessible, network error, etc.), an individual backup of one or two databases will fail. This causes the entire step to fail, which means any subsequent backup commands are not executed and half of the databases on a given server may not be backed up. On the 2005+ boxes I am using TRY/CATCH blocks to manage these problems and continue backing up the remaining databases. On a 2000 server, however, I have no way to prevent an error like this from failing the entire step:
    Msg 3201, Level 16, State 1, Line 1
    Cannot open backup device 'db-diff(\PATH\DB-DIFF-03-16-2010.DIF)'. Operating system error 5(Access is denied.).
    Msg 3013, Level 16, State 1, Line 1
    BACKUP DATABASE is terminating abnormally.
    I am simply asking whether anything like TRY/CATCH is possible in SQL 2000. I realize there are no built-in methods for this, so I guess I am looking for some creativity. Even when wrapping each backup (or any failing statement) in sp_executesql, the job fails instantly. Example:
    DECLARE @x INT, @iReturn INT
    PRINT 'Executing statement that will fail with 208.'
    EXEC @iReturn = sp_executesql N'SELECT * from TABLETHATDOESNTEXIST;'
    PRINT Cast(@iReturn AS NVARCHAR) -- In SSMS this return code prints. Executed as a job it fails and aborts before this statement.
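
    One workaround sometimes used on SQL 2000, where TRY/CATCH does not exist, is to run each BACKUP in its own connection so that a fatal error is contained there instead of aborting the calling batch or job step. A rough sketch of the idea is below; the server name, database name and backup path are illustrative only, and it assumes xp_cmdshell is permitted in your environment:

        -- Hedged sketch: isolate one backup in a separate osql connection.
        -- osql -b returns a non-zero exit code on error, which xp_cmdshell
        -- surfaces as a return value of 1, so the caller can log and carry on.
        DECLARE @cmd varchar(2000), @rc int
        SET @cmd = 'osql -E -S MYSERVER -b -Q "BACKUP DATABASE [MyDb] TO DISK = N''\\backupshare\MyDb.bak''"'
        EXEC @rc = master.dbo.xp_cmdshell @cmd, no_output
        IF @rc <> 0
            PRINT 'Backup of MyDb failed - recording the error and moving on to the next database.'

    A similar effect can be had by splitting the procedure into one Agent job step per database and setting each step's "on failure" action to continue with the next step, at the cost of a more awkward job definition.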

    Read the article

  • Tell me SQL Server Full-Text searcher is crazy, not me.

    - by Ian Boyd
    I have some customers with a particular address that the user is searching for: 123 generic way. There are 5 rows in the database that match:
    ResidentialAddress1
    =============================
    123 GENERIC WAY
    123 GENERIC WAY
    123 GENERIC WAY
    123 GENERIC WAY
    123 GENERIC WAY
    I run a full-text query to look for these rows. I'll show you each step as I add more criteria to the search:
    SELECT ResidentialAddress1 FROM Patrons WHERE CONTAINS(Patrons.ResidentialAddress1, '"123*"')
    ResidentialAddress1
    =========================
    123 MAPLE STREET
    12345 TEST
    123 MINE STREET
    123 GENERIC WAY
    123 FAKE STREET
    ...
    (30 row(s) affected)
    Okay, so far so good; now adding the word "generic":
    SELECT ResidentialAddress1 FROM Patrons WHERE CONTAINS(Patrons.ResidentialAddress1, '"123*"') AND CONTAINS(Patrons.ResidentialAddress1, '"generic*"')
    ResidentialAddress1
    =============================
    123 GENERIC WAY
    123 GENERIC WAY
    123 GENERIC WAY
    123 GENERIC WAY
    123 GENERIC WAY
    (5 row(s) affected)
    Excellent. And now I'll add the final keyword that the user wants to make sure exists:
    SELECT ResidentialAddress1 FROM Patrons WHERE CONTAINS(Patrons.ResidentialAddress1, '"123*"') AND CONTAINS(Patrons.ResidentialAddress1, '"generic*"') AND CONTAINS(Patrons.ResidentialAddress1, '"way*"')
    ResidentialAddress1
    ------------------------------
    (0 row(s) affected)
    Huh? No rows? What if I query for just "way*"?
    SELECT ResidentialAddress1 FROM Patrons WHERE CONTAINS(Patrons.ResidentialAddress1, '"way*"')
    ResidentialAddress1
    ------------------------------
    (0 row(s) affected)
    At first I thought that perhaps it's because of the *, and it's requiring that the root "way" have more characters after it. But that's not true:
    Searching for "123*" matches "123"
    Searching for "generic*" matches "generic"
    Books Online says: The asterisk matches zero, one, or more characters.
    What if I remove the * just for s&g:
    SELECT ResidentialAddress1 FROM Patrons WHERE CONTAINS(Patrons.ResidentialAddress1, '"way"')
    Server: Msg 7619, Level 16, State 1, Line 1
    A clause of the query contained only ignored words.
    So one might think that you are just not allowed to even search for "way", either alone or as a root. But this isn't true either:
    SELECT * FROM Patrons WHERE CONTAINS(Patrons.*, '"way*"')
    AccountNumber FirstName Lastname
    ------------- --------- --------
    33589 JOHN WAYNE
    So to sum up, the user is searching for rows that contain all the words: 123 generic way. Which I, correctly, translate into the WHERE clauses:
    SELECT * FROM Patrons WHERE CONTAINS(Patrons.*, '"123*"') AND CONTAINS(Patrons.*, '"generic*"') AND CONTAINS(Patrons.*, '"way*"')
    which returns no rows. Tell me this just isn't going to work, that it's not my fault, and SQL Server is crazy. Note: I've emptied the FT index and rebuilt it.
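
    For what it's worth, this looks like the classic full-text noise-word (stopword) behaviour rather than a bug in the queries: "way" is on the default English noise-word list, so CONTAINS silently ignores it, and a clause made up only of ignored words matches nothing - exactly the Msg 7619 shown above. A quick way to confirm and work around it on SQL Server 2008 is sketched below (on 2005 and earlier the list lives in the noiseENG.txt file instead); the stoplist name is illustrative:

        -- Is 'way' a system stopword for English (LCID 1033)?
        SELECT stopword, language_id
        FROM   sys.fulltext_system_stopwords
        WHERE  language_id = 1033 AND stopword = N'way';

        -- One possible fix: a custom stoplist without 'way', attached to the index.
        CREATE FULLTEXT STOPLIST AddressStoplist FROM SYSTEM STOPLIST;
        ALTER FULLTEXT STOPLIST AddressStoplist DROP N'way' LANGUAGE 1033;
        ALTER FULLTEXT INDEX ON Patrons SET STOPLIST AddressStoplist;

    Note that attaching a new stoplist normally triggers a repopulation of the full-text index, which can take a while on a large table.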

    Read the article

  • What are the arguments against the inclusion of server side scripting in JavaScript code blocks?

    - by James Wiseman
    I've been arguing for some time against embedding server-side tags in JavaScript code, but was put on the spot today by a developer who seemed unconvinced. The code in question was a legacy ASP application, although this is largely unimportant as it could equally apply to ASP.NET or PHP (for example). The example in question revolved around the use of a constant that they had defined in server-side code:
    'VB
    Const MY_CONST: MY_CONST = 1
    If sMyVbVar = MY_CONST Then
        'Do Something
    End If
    //JavaScript
    if (sMyJsVar === "<%= MY_CONST%>"){
        //DoSomething
    }
    My standard arguments against this are:
    1. Script injection: the server-side tag could include code that breaks the JavaScript.
    2. Unit testing: it is harder to isolate units of code for testing.
    3. Code separation: we should keep web page technologies apart as much as possible.
    The reason for doing this was so that the developer did not have to define the constant in two places. They reasoned that as it was a value that they controlled, it wasn't subject to script injection. This reduced my justification for (1) to "We're trying to keep the standards simple, and defining exception cases would confuse people". The unit testing and code separation arguments did not hold water either, as the page itself was a horrible amalgam of HTML, JavaScript, ASP.NET, CSS, XML... you name it, it was there. No code that was ever going to be included in this page could possibly be unit tested. So I found myself feeling like a bit of a pedant insisting that the code was changed, given the circumstances. Are there any further arguments that might support my reasoning, or am I, in fact, being a bit pedantic in this insistence?

    Read the article

  • How to close and open access to SQL Server 2008 in Windows application?

    - by hgulyan
    Hi, I have an MS Access 97 application (but the question is general) working directly with SQL Server 2008 (without an application server or anything in between). The number of users can be up to 1000. Windows Authentication is used. The question is how to handle modes, so that:
    - some users are only allowed to work in read-only mode;
    - some users won't have access to the DB at all for some period of time.
    My ideas so far:
    1. Use a table with a mode id for every group of users that should behave the same way; on Form Load the application queries that table for the mode id.
    2. Use triggers on the tables that must obey the mode; each trigger queries the mode value and rejects the change if access is closed or the group is in read-only mode.
    I know these are not the best solutions; that's why I'm asking for your advice. There's one more point: if the mode is changed to "access is closed" for a group of users, that group must not be able to query the DB from that moment on. With the first solution this won't work, because a user can already be in the application at that moment and no Form Load event will fire. How can I do this? Is there an optimal solution? Thank you, any help would be appreciated.
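
    Because Windows Authentication is in use, one option is to enforce the modes on the server rather than in the Access front end, so a change takes effect immediately for new connections regardless of what the client does. A rough sketch, assuming the affected users are covered by a Windows group login such as MYDOMAIN\AppUsers and the database is called AppDb (both names illustrative):

        USE master;
        -- Close access for one group from this moment on: new connections are refused,
        -- and existing sessions can be found in sys.dm_exec_sessions and killed if needed.
        DENY CONNECT SQL TO [MYDOMAIN\AppUsers];

        -- Read-only mode for the whole database (not per group):
        ALTER DATABASE AppDb SET READ_ONLY WITH ROLLBACK IMMEDIATE;

        -- Reopen later:
        ALTER DATABASE AppDb SET READ_WRITE;
        GRANT CONNECT SQL TO [MYDOMAIN\AppUsers];

    If read-only has to apply per group rather than to the whole database, denying INSERT, UPDATE and DELETE on the relevant schema to that group's database user gives much the same effect without changing the database state.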

    Read the article

  • least privilege account for WinRM remote calls on Windows 2008 Server

    - by aldrin
    ServerFault Windows experts: please consider the following use case.
    1. I have 2 Windows 2008 Server SP2 boxes; let's call them SOURCE and CLIENT.
    2. On SOURCE: I create a new user called 'normal'. Just a plain user - no special privileges.
    3. On CLIENT: I run the following from a command prompt: winrm get wmi/root/cimv2/Win32_UTCTime -r:SOURCE -u:normal -p:NormalPassword
    4. I get an output containing WSManFault: Message = Access is denied.
    5. On CLIENT: I repeat step 3 with the administrator identity, i.e. winrm get wmi/root/cimv2/Win32_UTCTime -r:SOURCE -u:Administrator -p:AdminPassword
    6. I get the current UTC time at SOURCE.
    The question is: what are the least privileges I need to assign to the user 'normal' to ensure that step 3 behaves like step 5? In other words, what is the least privilege needed to enable WinRM access for a non-Admin account?

    Read the article

  • Windows Server 2008 R2 Accessing NFS share without AD or NIS

    - by Jon Rhoades
    I'm trying to mount an NFS share on our NetApp SAN from Windows 2008 R2. Using XP I have no problem mounting this share without a username/NIS/passwd file etc., but the new functionality in 2008 seems to insist on using either AD or an NIS server (to "streamline" Services for NFS, MS removed user account mapping - see TechNet). When I go to map the share using "Map network drive", no combination of "root", no username, no password, or my own username works. Using the command line, mount -o anon \\172... :n or mount -o -u:root \\172... :n gives me either network error 53 or error 67. Is it possible with 2008 to mount an NFS share without AD or NIS? If so, what am I doing wrong? (Security is taken care of by IP address permissions and VLANs.)

    Read the article

  • Windows 2008 R2 DHCP server not responding to DHCP discover

    - by MartinSteel
    I've got two Windows 2008 R2 Enterprise servers, both running DNS and DHCP, called cod and lobster. DHCP is set up using the split-scope option introduced with 2008 R2, whereby both servers should respond and the first response provides the lease. The setup is as follows:
    Cod
    - IP: 192.168.0.231
    - Pool: 192.168.0.101 - 192.168.0.179, exclusion for 160-179
    - Response Delay: 0ms
    - Authorised in Active Directory (re-authorised to confirm)
    - Windows firewall disabled while testing
    Lobster
    - IP: 192.168.0.232
    - Pool: 192.168.0.101 - 192.168.0.179, exclusion for 101-159
    - Response Delay: 1000ms
    - Authorised in Active Directory
    All DHCP leases to clients are currently being issued by Lobster rather than Cod. Packet captures with Wireshark show the following (all to the broadcast address):
    Client - DHCP Discover
    Lobster - DHCP Offer (after 1s delay)
    Client - DHCP Request
    Lobster - DHCP Ack
    Client - DHCP Inform
    With this two-server setup I'd expect to see a DHCP Offer coming from Cod almost immediately after the DHCP Discover. Does anybody have any idea what would prevent the DHCP server from responding to the Discover?

    Read the article

  • Having IIS remote management problem with my vista machine managing Server 2008 IIS 7.5

    - by Breadtruck
    I am trying to use IIS 7 Remote Management installed on Vista Ultimate SP1. The connection is to IIS 7.5 on Windows Server 2008 R2 Web Server; I tried both the full and core installs. When I connect, the console wants to download and install new features: Microsoft.Web.Management.IisClient 7.5.0.0 and Microsoft.Web.Management.AspnetClient 7.5.0.0. I check the boxes and click OK, it downloads them and asks if I want to install them, but after I click Run it just quits. I tried choosing just one or the other - same thing. I ran the IIS remote management tool as administrator. These features installed correctly on my XP machine. Any ideas? UPDATE: If I had any rep I would offer like 500 rep to get this fixed!

    Read the article

  • Windows Server 2008: specifying the default IP address when NIC has multiple addresses

    - by Cédric Boivin
    I have a Windows Server which has ~10 IP addresses statically bound. The problem is I don't know how to specify the default IP address. Sometimes when I assign a new address to the NIC, the default IP address changes with the last IP entered in the advanced IP configuration on the NIC. This has the effect (since I use NAT) that the outgoing public IP changes too. Even though this problem is currently on Windows Server 2008 How can you set the default IP address on a NIC when it has multiple IP addresses bound? There is more explication on my probleme. Here is the ipconfig DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes IPv4 Address. . . . . . . . . . . : 192.168.99.49(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.51(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.52(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.53(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.54(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.55(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.56(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.57(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.58(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.59(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.60(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.61(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.62(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.64(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.65(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.66(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.67(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.68(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.70(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.71(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.100(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.108(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.109(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.112(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 IPv4 Address. . . . . . . . . . . : 192.168.99.63(Duplicate) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . 
: 192.168.99.1 If i do a pathping there is the answer, the first up is the 99.49, also if my default ip is 99.100 Tracing route to www.l.google.com [72.14.204.99] over a maximum of 30 hops: 0 Machine [192.168.99.49] There is the routing table on the machine Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.99.1 192.168.99.49 261 10.10.10.0 255.255.255.0 On-link 10.10.10.10 261 10.10.10.10 255.255.255.255 On-link 10.10.10.10 261 10.10.10.255 255.255.255.255 On-link 10.10.10.10 261 192.168.99.0 255.255.255.0 On-link 192.168.99.49 261 192.168.99.49 255.255.255.255 On-link 192.168.99.49 261 192.168.99.51 255.255.255.255 On-link 192.168.99.49 261 192.168.99.52 255.255.255.255 On-link 192.168.99.49 261 192.168.99.53 255.255.255.255 On-link 192.168.99.49 261 192.168.99.54 255.255.255.255 On-link 192.168.99.49 261 192.168.99.55 255.255.255.255 On-link 192.168.99.49 261 192.168.99.56 255.255.255.255 On-link 192.168.99.49 261 192.168.99.57 255.255.255.255 On-link 192.168.99.49 261 192.168.99.58 255.255.255.255 On-link 192.168.99.49 261 192.168.99.59 255.255.255.255 On-link 192.168.99.49 261 192.168.99.60 255.255.255.255 On-link 192.168.99.49 261 192.168.99.61 255.255.255.255 On-link 192.168.99.49 261 192.168.99.62 255.255.255.255 On-link 192.168.99.49 261 192.168.99.64 255.255.255.255 On-link 192.168.99.49 261 192.168.99.65 255.255.255.255 On-link 192.168.99.49 261 192.168.99.66 255.255.255.255 On-link 192.168.99.49 261 192.168.99.67 255.255.255.255 On-link 192.168.99.49 261 192.168.99.68 255.255.255.255 On-link 192.168.99.49 261 192.168.99.70 255.255.255.255 On-link 192.168.99.49 261 192.168.99.71 255.255.255.255 On-link 192.168.99.49 261 192.168.99.100 255.255.255.255 On-link 192.168.99.49 261 192.168.99.108 255.255.255.255 On-link 192.168.99.49 261 192.168.99.109 255.255.255.255 On-link 192.168.99.49 261 192.168.99.112 255.255.255.255 On-link 192.168.99.49 261 192.168.99.255 255.255.255.255 On-link 192.168.99.49 261 224.0.0.0 240.0.0.0 On-link 192.168.99.49 261 224.0.0.0 240.0.0.0 On-link 10.10.10.10 261 255.255.255.255 255.255.255.255 On-link 192.168.99.49 261 255.255.255.255 255.255.255.255 On-link 10.10.10.10 261 How i can be sure the ip use in the image ( suppose to be the default ip address ) will be use by my server as the default address ?

    Read the article

  • ESXi and Windows Server CPU parking

    - by Chris J
    For those that don't know, CPU parking is a feature in recent Windows Server releases that allows Windows to pretty much drop a CPU core to zero use and schedule nothing on it. It was introduced as a power-saving measure. There's more detail about it here, amongst other places. What I'm curious about, however, is whether this matters on a virtualised guest - or is CPU parking more of a hindrance than a help, given that the physical CPUs are managed by ESXi, not Windows, and that a parked CPU is less likely to pick up work unless the scheduler deems there's enough of it to unpark the CPU? I've not found anything about this - I do suspect it will depend very much on the workload, but I've not seen any discussion (unlike, say, whether hyper-threading has any effect, which seems to be discussed regularly). Whilst I do understand the "test with your workload" advice, I was wondering if there are any guidelines out there that I've missed.

    Read the article

  • Windows server reboot loop - uninstalling hotfixes

    - by Jack
    After installing 3 updates, my system is stuck in a reboot loop. I am using Server 2008 R2. I have tried deleting pending.xml from the Windows directory, so please don't suggest that. I tried dism /image:d:\ /cleanup-image /revertpendingactions, which completed successfully but did not solve my issue. I then tried dism /image:d:\ /Remove-Package /PackageName: and successfully removed one of the 3 updates. The two updates that are left are not listed by the dism /Get-Packages command, but are listed by /Get-AppPatches. I cannot find a way to uninstall them with dism, however. So my question is: how can I manually uninstall specific updates or hotfixes from within the WinRE environment?

    Read the article

  • Unlock file on Windows Server 2003 without rebooting

    - by BalusC
    We have several Windows Server 2003 machines running, each with its own purpose. There are scheduled jobs which synchronize some files over SFTP using WinSCP. Every so often a newly copied file is left locked in the "inbox" folder for no apparent reason. The machine's own background task (programmed in Java) can't move it to the "processed" folder after processing it. Manually moving it only yields the well-known error message: Cannot move [filename]: it is being used by another person or program. The only remedy so far is to reboot the machine, but we would of course like to avoid that. Any suggestions? I tried Unlocker, which works fine locally on WinXP, but it doesn't work on those Win2K3 machines over Remote Desktop (the unlock option doesn't show up in the right-click context menu).

    Read the article

  • Win a place at a SQL Server Masterclass with Kimberly Tripp and Paul Randal

    - by Testas
    The top things YOU need to know about managing SQL Server - in one place, on one day - presented by two of the best SQL Server industry trainers!And you could be there courtesy of UK SQL Server User Group and SQL Server Magazine! This week the UK SQL Server User Group will provide you with details of how to win a place at this must see seminar   You can also register for the seminar yourself at:www.regonline.co.uk/kimtrippsql More information about the seminar   Where: Radisson Edwardian Heathrow Hotel, London When: Thursday 17th June 2010 This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the affect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:·         Debunk many of the ingrained misconceptions around SQL Server's behaviour   ·         Show you disaster recovery techniques critical to preserving your company's life-blood - the data   ·         Explain how a common application design pattern can wreak havoc in the database ·         Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier! Please Note: Agenda may be subject to changeSessions AbstractsKEYNOTE: Bridging the Gap Between Development and Production  Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA that can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day. Discussing some of the issues that can arise, explaining how some can be avoided and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server, and troubleshoot when things go wrong.  SESSION ONE: SQL Server MythbustersIt's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table-scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices so these are just a few of many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained. SESSION TWO: Database Recovery Techniques Demo-Fest Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike. 
In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted! SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward Since the addition of the GUID (Microsoft’s implementation of the UUID), my life as a consultant and "tuner" has been busy. I’ve seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And, I know why GUIDs are chosen - it simplifies the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys! SESSION 4: Essential Database MaintenanceIn this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include: backups, shrinks, fragmentation, statistics, and much more! Focus will be on 2005 but we'll explain some of the key differences for 2000 and 2008 as well.    Speaker Biographies     Paul S.Randal  Kimberley L. Tripp Paul and Kimberly are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and ultimately with responsibility for core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification and throughout Microsoft.In their spare time, they like to find frogfish in remote corners of the world.  

    Read the article

< Previous Page | 209 210 211 212 213 214 215 216 217 218 219 220  | Next Page >