Search Results

Search found 18728 results on 750 pages for 'setup deployment'.


  • Is there a convenient way to manually copy the Log Shipping *.trn files from one SQL 2008 server to another?

    - by Rick
    We have a remote SQL 2008 server (ServerB) that needs to keep a warm (a 15-minute interval is OK) copy of production data (ServerA). ServerA is also SQL 2008. Log Shipping looks like it will do the job. We can only get to the destination ServerB with remote desktop. Is there a way to set this up when we can't get to both servers from one Management Studio? We want to be able to temporarily (until a VPN is set up between our network and the ServerB network) manually export a small .trn file, copy it via remote desktop to ServerB and then manually import those transactions from the .trn file. My supervisor says he saw a post saying this is possible. We were just trying to avoid doing a full database backup and copying that every time. Thanks in advance for any suggestions on this.
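    A minimal sketch of the manual workflow described above, assuming a database named ProdDB and a folder C:\logship on each server (both are placeholder names), and assuming the destination copy was first seeded from one full backup restored WITH NORECOVERY or WITH STANDBY; after that only the small .trn files need to be copied:

        REM On ServerA: back up just the transaction log to a small .trn file
        sqlcmd -S ServerA -E -Q "BACKUP LOG ProdDB TO DISK = N'C:\logship\ProdDB_0001.trn' WITH INIT"

        REM Copy ProdDB_0001.trn to ServerB over Remote Desktop, then apply it there
        sqlcmd -S ServerB -E -Q "RESTORE LOG ProdDB FROM DISK = N'C:\logship\ProdDB_0001.trn' WITH STANDBY = N'C:\logship\undo.dat'"

    WITH STANDBY leaves the ServerB copy readable between restores; WITH NORECOVERY also works if read access is not needed until proper log shipping over the VPN is in place.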

    Read the article

  • Linux distro for software development support?

    - by Xie Jilei
    I've spent too much time setting up & maintaining a development server, which contains the following tools: common services like SSH, BIND, rsync, etc.; Subversion, Git; an Apache server which runs CGit, Trac, Webmin, phpMyAdmin, phpPgAdmin, etc.; Jetty, which runs Archiva and Hudson; Bugzilla; PostgreSQL server, MySQL server. I've created a lot of Debian packages, like my-trac-utils, my-bugzilla-utils, my-bind9-utils, my-mysql-utils, etc., to make my life more convenient. However, I still feel I need a lot more utils, and I've spent a lot of time maintaining these packages, too. I think there may be many developers doing the same things, as tools like Subversion, Git and Trac are so common today. It's not too hard to install and configure each of them, but it takes a long time to install them all. And it's time consuming to maintain them: backing up the data, plotting the usage graph and generating web reports (gitstat, for example). So, I'd like to hear if there is any pre-configured distro for development-server purposes, i.e., something like BackTrack for hackers?

    Read the article

  • Disk quota problem in Windows Server SBS 2003

    - by deddebme
    I have got a new job and the existing SBS 2003 domain setup is insecure (i.e. everyone is a domain admin, etc.). There are lots of problems due to the inexperienced "network admin", and I am trying to fix them one by one. One issue I find quite weird is that the "Quota" tab exists for the C: (NTFS) drive but not the D: (NTFS) drive. I played around with gpedit to enable disk quotas (it was "not configured" before), but I still can't see that tab. Have you seen this problem before? How did you solve it?
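    As a quick command-line check (a diagnostic sketch only, not a guaranteed fix), you can ask the volume directly whether quota support is active; run this from an elevated prompt on the server (the Quota tab itself only appears for local, fixed NTFS volumes viewed with administrative rights):

        REM Confirm the volume is NTFS and reports quota support
        fsutil fsinfo volumeinfo D:
        REM Show the current default quota settings on D:, if any
        fsutil quota query D: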

    Read the article

  • How likely is it that my data can be recovered after Windows CHKDSK ran on a degraded RAID 5 array?

    - by chrisling106
    Hello there, we have a RAID 5 setup with 3 SATA disks; #2 went down, as reported on the pre-POST screen. Unfortunately, for some reason out of my control, the system was rebooted with a degraded RAID :-O Windows XP (64-bit) loaded, CHKDSK ran automatically and did its "recovery"! From that point onwards, the following error appears every time, even in Safe Mode: lsass.exe - The endpoint format is invalid. I took those 3 disks to a data recovery expert and need to wait at least 2-4 days for results. There are 2 VMs, stored as multiple files, in this RAID 5 array, and there's no backup! Sorry, I just inherited the system from an ex-staffer who left the company 2 months before I joined. How likely is it that the data can be recovered?

    Read the article

  • Tracking feature requests for small-scale components

    - by DXM
    I'm curious how other development teams (especially those that work in moderate to large development groups) track "future" features/wishlist functionality for internally developed frameworks or components. I know the standard advice is that a development team should find one good tool for tracking bugs/features and use that for everything, and I agree with that if the future requests are for the product itself. In my company we have an engineering department, which is broken up into multiple groups, and within each there can be one to several agile teams. The bug tracking product we use has been "a leader since 1997" (its UI/usability seems to also be evaluated against that year even today), but my agile team or even my group doesn't really control what is being used by the whole department. What we are looking to track is not necessarily product features but expansion/nice-to-have functionality for internal components that go into our product. To name a few: a framework/utility library on top of CppUnit which our developers share; a low-level IPC communications framework; a common development SDK that myself and several other team leads started to help share some common code/tools at the department-wide level (this SDK is released as an internal "product" to each of the groups). Is the standard practice to use the one bug tracking tool? Or would it make more sense to set up something more localized specifically for our needs and maintain it ourselves? It's also unclear how management will feel if developers start performing "IT" roles of maintaining software and servers. At the same time, right now, we use Excel files, an internal wiki and MS OneNote for this kind of stuff and that just doesn't feel right. (I'm afraid to ask for actual software recommendations, since that might make this question more localized or something. Also, developers need this way more than management, so it would be nice to find something either free or no more than the cost of a happy hour.)

    Read the article

  • What is the least irritating printer manufacturer?

    - by aireq
    Currently I have a Lexmark all-in-one printer/scanner which has some of the worst drivers I've seen for a printer. The installation takes forever. Then, once it's installed, the printer will only work if I keep the "Lexmark Productivity Studio" running in my system tray. Then, after I've scanned something, 99% of the time the "Save to PDF" button doesn't do anything when I click it. It is also a wireless printer, but of course the only way to set any of the wireless settings is during the driver setup. So if my WEP key changes then I have to go off and reinstall the entire printer driver. Lately I tried refilling one of the ink cartridges with a kit I bought off Amazon, and now both the printer and the drivers keep complaining about being out of "Official Lexmark Ink". This comic from The Oatmeal pretty much sums up my feelings about consumer printers and their drivers. This question is, of course, pretty subjective, but I'd like to know what (if any) consumer printer brands actually provide quality drivers and software with their printers.

    Read the article

  • Looking for event management application

    - by taudep
    Hello, I'm looking for a web-based event management application for managing events (or activities) on certain days that I set up, and then I want people to be able to sign up for them. I'm looking for something that can be embedded in my website, similar to a Google Calendar. Then, for a given event's day, I can click on it and see who's attending. Ideally, I wouldn't have to invite people to the event; they can just sign up for it. I'm not looking to use something like Evite. This application is going to be used to manage a schedule of bike races, and who from my club will sign up for them. Thanks for any suggestions.

    Read the article

  • Was API hooking done as needed for Stuxnet to work? I don't think so

    - by The Kaykay
    Caveat: I am a political science student and I have tried my level best to understand the technicalities; if I still sound naive please overlook that. In the Symantec report on Stuxnet, the authors say that once the worm infects a 32-bit Windows computer which has WinCC set up on it, Stuxnet does many things, and that it specifically hooks the function CreateFileA(). This function is the route which the worm uses to actually infect the .s7p project files that are used to program the PLCs, i.e. when the PLC programmer opens a file with .s7p, control transfers to the hooked function CreateFileA_hook() instead of CreateFileA(). Once Stuxnet gains control it covertly inserts code blocks into the PLC without the programmer's knowledge and hides them from his view. However, it should be noted that there is also one more function, called CreateFileW(), which does the same task as CreateFileA() but works on a different character set: CreateFileA works with the ASCII character set and CreateFileW works with wide characters, i.e. the Unicode character set. Farsi (the language of the Iranians) is a language that needs the Unicode character set and not ASCII characters. I'm assuming that the developers of any famous commercial software (for example, WinCC) that will be sold in many countries will take 'localization' and/or 'internationalization' into consideration while it is being developed in order to make the product fail-safe, i.e. the software developers would use Unicode while compiling their code and not just ASCII. Thus, I think that CreateFileW() would have been invoked on a WinCC system in Iran instead of CreateFileA(). Do you agree? My question is: if Stuxnet hooked only the function CreateFileA(), then based on the above assumption is there a significant chance that it did not work at all? I think my doubt will get clarified if: my assumption is proved wrong, or the Symantec report is proved incorrect. Please help me clarify this doubt. Note: I had posted this question on the general Stack Exchange website and did not get the kind of responses I was looking for, so I'm posting it here.

    Read the article

  • Sun Java keytool importing EV certificates into a single keystore

    - by ss0
    At my current job we are using Tomcat; customers have custom web portals set up on their own local machines. EV certs are new to me: they have a two-part intermediate and a primary certificate. For our product to work it appears I need to get all three parts installed under a single keystore entry. How can I roll all three parts into a single X.509-compliant file for import? The syntax I am using is as follows: /blah/system/j2sdk/bin/keytool -import -alias foo -keystore /zix/system/jdk1.5.0_06/jre/lib/security/cacerts -file certname.pem -trustcacerts where foo is the alias and certname.pem is the main cert. I have tried importing the intermediate certs under their own names into the keystore, and I don't know if it's just the product I have to work with (not vanilla Tomcat) or what, but it doesn't see those. I have seen a working system and all three certs were under the single keystore alias. Anyone have any ideas?
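    One commonly suggested approach, sketched here on the assumption that the alias foo already holds the private key the EV cert was issued for, is to combine the primary cert and both intermediates into a single file, in leaf-to-root order, and import that one file against the existing alias so the whole chain lands in one keystore entry:

        # Concatenate leaf + intermediates (the intermediate file names are placeholders)
        cat certname.pem intermediate1.pem intermediate2.pem > chain.pem
        /blah/system/j2sdk/bin/keytool -import -alias foo \
            -keystore /zix/system/jdk1.5.0_06/jre/lib/security/cacerts \
            -file chain.pem -trustcacerts

        # If the 1.5 keytool rejects the concatenated PEM, wrap the chain as PKCS#7 first and import that instead
        openssl crl2pkcs7 -nocrl -certfile certname.pem -certfile intermediate1.pem \
            -certfile intermediate2.pem -out chain.p7b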

    Read the article

  • What configuration management solutions exist in a non-networked environment?

    - by Rob Spieldenner
    My servers exist in an environment without outside network connectivity (this is a requirement), so when I deploy updates all packages, binaries, config files, etc. must be included on the delivered media. And of course I want some sort of configuration management so I can tell what has and hasn't been installed. So I was wondering if people had experience with Chef, Puppet, or another configuration management tool for dealing with this type of environment. Worst case, I deploy my updates as RPMs. EDIT: My setup has both Linux servers and Windows servers.
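    Both Puppet and Chef can run "masterless", which fits an air-gapped environment: ship the manifests/cookbooks (and all packages) on the delivered media and apply them locally on each host. A rough sketch, with /media/updates as an assumed mount point for the media:

        # Masterless Puppet: apply manifests shipped on the delivered media
        puppet apply --modulepath /media/updates/puppet/modules /media/updates/puppet/manifests/site.pp

        # Or Chef Solo, with its config and run list also living on the media
        chef-solo -c /media/updates/chef/solo.rb -j /media/updates/chef/node.json

    Either way, the applied manifest/run list doubles as a record of what was configured, which helps with the "what has and hasn't been installed" requirement.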

    Read the article

  • Store image installation error in UEC

    - by selvakumar
    For my college final-year project we planned to set up a private cloud on two machines. I recently installed Ubuntu Enterprise Cloud (UEC) on two of my machines. I was trying to install the store image through the WebUI. I was able to download the Ubuntu 10.04 (i386) image, but while installing it gives me the following error:
    Command 'euca-upload-bundle' returned status code 1: Checking bucket: image-store-1296600766
    Traceback (most recent call last):
      File "/usr/bin/euca-upload-bundle", line 231, in <module>
        main()
      File "/usr/bin/euca-upload-bundle", line 214, in main
        bucket_instance = ensure_bucket(conn, bucket, canned_acl)
      File "/usr/bin/euca-upload-bundle", line 87, in ensure_bucket
        bucket_instance = connection.get_bucket(bucket)
      File "/usr/lib/pymodules/python2.6/boto/s3/connection.py", line 275, in get_bucket
        rs = bucket.get_all_keys(headers, maxkeys=0)
      File "/usr/lib/pymodules/python2.6/boto/s3/bucket.py", line 204, in get_all_keys
        headers=headers, query_args=s)
      File "/usr/lib/pymodules/python2.6/boto/s3/connection.py", line 342, in make_request
        data, host, auth_path, sender)
      File "/usr/lib/pymodules/python2.6/boto/connection.py", line 459, in make_request
        return self._mexe(method, path, data, headers, host, sender)
      File "/usr/lib/pymodules/python2.6/boto/connection.py", line 437, in _mexe
        raise e
    socket.error: [Errno 110] Connection timed out
    Could anyone please help me?

    Read the article

  • Bad archive mirror using PXE boot method

    - by user11566
    I'm trying to automatically install Ubuntu on a client PC by using the PXE boot method. My objectives are below: I am following the steps given in this link: installation using PXE BOOT. The server will have a kickstart config file which contains the parameters for the OS installation and the files which are required for the OS installation. The client will have to detect this configuration along with the setup files and complete the installation without any input from the user. On my server I have installed DHCP3-server, Apache2 and TFTP to help me with the installation. I have nearly achieved my first objective: I am able to boot my client using the files stored on the server, but during the installation stage it asks me to CHOOSE A MIRROR OF UBUNTU ARCHIVE. I gave the server's IP address and the path on the server where the files are located, but then it gives me this error: BAD ARCHIVE MIRROR. So, instead of downloading all the files from the internet and storing them on my disk, can I use the files which come with the Ubuntu CD, and in what format should I store these files on the disk (should I zip them)? Secondly, I am also generating the ks.cfg which I want to give to the client for automatic installation of the OS. How should the configuration file be given to the installation process?
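    One rough way to avoid the internet mirror entirely (the paths, IP and ISO name below are illustrative assumptions) is to serve the CD contents over HTTP from the same server and point the installer at that, while the kickstart file is handed over on the kernel command line in the PXE config:

        # Serve the CD/ISO contents so the installer can use them as its archive mirror
        sudo mkdir -p /var/www/ubuntu
        sudo mount -o loop ubuntu-10.04-server-i386.iso /var/www/ubuntu

        # In pxelinux.cfg/default, append the kickstart file and the local mirror, e.g.:
        #   append ... ks=http://192.168.1.1/ks.cfg mirror/http/hostname=192.168.1.1 mirror/http/directory=/ubuntu

    Note that the CD does not carry every package a full mirror has, so anything beyond the base install would still need to be added to the served tree.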

    Read the article

  • Change the format of the date column in Thunderbird

    - by TheOmega
    I want to change the format of the "Date"-column in the messagelist in Thunderbird. For mails from today, I want to display only the time, not the date. For mails from before today, I want to display only the date, not the time. This is the same setup mutt uses. I know of the Date display format wiki article, which describes how to change the date format, but you can only switch between five predefined formats, and none of them is "Date only". I also know of the ConfigDate extension, but it's got the same limitations, you can't define a new date format.

    Read the article

  • Can non-IT people handle a wiki?

    - by Andrew Heath
    (I'm hoping that some of you will have encountered this issue before and can offer some insights...) My company is looking to improve their market research data management. Current data management style: "Hey Jimbo, where's that picture of our WhatZit 2.0?" "Yeah I remember that email about that company from that guy, gimme a few minutes to search my Outlook" "Who has the newest copy of the Important Competitor's product catalogue? Mine is from '09." ... "Colleen does, and she's on maternity leave. You'll have to call her to get her workstation password..." Desired data management style: data organized neatly by topic (legal, economic, industrial, competitor); for each topic, multiple media types stored together (company product images, press releases, contact info) but still neatly sorted by type; data editing histories; communal access (no data silos). I was thinking about setting up a department wiki for all users to access. It seems to satisfy the four criteria above, but I'm a little concerned about how user-friendly (read: decipherable to non-technical people) it is for the more advanced features like image galleries, article formatting, and the like. Has anyone here set up a wiki for non-IT people and had it not catch on fire//become a ghost town//look like Geocities? Bonus question: can you see any obvious drawbacks to my choice of MediaWiki (or any other wiki) for solving this problem? Thank you.

    Read the article

  • Passenger not working with SSL on Apache 2

    - by Zak
    I have a Rails app running on Passenger; it works as expected over unencrypted connections. I also have a working Apache SSL setup; I can access any static file available via http with https. When I try to access the Rails app via https, I get a 403 error (Directory index forbidden by rule). Turning on indexes for the directory simply causes Apache to display an index. I do have +ExecCGI set for the appropriate directory in the SSL version of the VirtualHost directive. I'm sure there's something obvious I'm overlooking; I'm just not sure where I need to be looking.
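    A frequent cause of that particular 403 is the SSL VirtualHost not pointing at the Rails app's public/ directory, so Passenger never handles the request and Apache falls back to directory indexing. A hedged sketch of what the SSL vhost usually needs (paths and ServerName are placeholders, not taken from the question):

        <VirtualHost *:443>
            ServerName example.com
            SSLEngine on
            # Must be the app's public/ directory, same as in the working non-SSL vhost
            DocumentRoot /var/www/myapp/public
            <Directory /var/www/myapp/public>
                Options -MultiViews
                Allow from all
            </Directory>
        </VirtualHost>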

    Read the article

  • Using physical disk with VMware Workstation

    - by chx
    I am using VMware Workstation 9.0 under Windows 7 and trying to boot my Linux install from Physicaldisk0. It boots: GRUB sees the two partitions on the disk (I checked in the command line), the kernel and the initrd load, and then it stops, saying "device not found", and drops me into an emergency shell. Indeed there is absolutely nothing in /dev: not the /sda device it expects, not /hda, nothing that looks like a disk. Edit: I can boot the Linux disk just fine if I boot it from the BIOS and not as a VM. Edit 2: The question is, how can I make this setup work?
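    One guess (an assumption, not a confirmed diagnosis): inside the VM the disk sits behind VMware's virtual SCSI/IDE controller, whose driver is missing from an initrd built for the physical hardware, so no /dev/sd* node ever appears. On a Debian/Ubuntu-style guest the fix would look roughly like this, run while booted on the real hardware:

        # Make sure the VMware controller drivers end up inside the initramfs
        echo mptspi   | sudo tee -a /etc/initramfs-tools/modules
        echo ata_piix | sudo tee -a /etc/initramfs-tools/modules
        sudo update-initramfs -u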

    Read the article

  • Domain changes required for SSL integration

    - by user131003
    Currently my site supports regular payment options (the user is taken to the payment gateway's (PG) website). Now I'm trying to implement "seamless" PG integration, and I need SSL for this. I have a dedicated server with 5 static IPs from HostGator (HG). Options: 1) I get SSL for www.my_domain.com. According to HG, I need to change the IP of the main site, as the current IP is not really dedicated (it is being shared by cPanel etc.), so they need to bind another dedicated IP to the main domain for SSL to work. This would require a DNS change for the main website and hence cause a few hours of downtime (which is OK). 2) I've noticed that most e-commerce websites use subdomains like secure.my_domain.com for SSL/HTTPS. This sounds like a better approach, but I've got a few doubts in this case: a) Would I need to re-register with the existing PGs (PayPal, Google Checkout, Authorize.net) if I switch to a subdomain? Re-registering is not an option for me. b) Would a DNS change be required for www.my_domain.com in this case? This confusion arose because of the following reply from HG: "If the sub domain secure.my_domain.com is added to an existing cPanel it will use the IP for that cPanel so as long as it is a Dedicated IP that will be fine. If secure.my_domain.com gets setup as its own cPanel it will need to be assigned to a Dedicated IP which would have a DNS change involved." Please suggest.

    Read the article

  • Wireless card not found on Ubuntu

    - by idealflip
    Hello all, I'm trying to set up Ubuntu on an old PC. I ran the following command, lspci, in order to get info on the wireless card installed: 00:0a.0 Ethernet controller: Marvell Technology Group Ltd. 88w8335 [Libertas] 802.11b/g Wireless (rev 03). Does anyone know where I can get drivers for this card/chip? And also, how do I install them (I'm rather new to Linux)? Also, I'm sure the card is working, because I just formatted over XP and everything was working well there. Thanks in advance!
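    The 88w8335 has historically had poor native Linux support, so the usual workaround is wrapping the Windows XP driver with ndiswrapper. A rough sketch (package names match Ubuntu releases of that era; the .inf path is a placeholder for wherever the XP driver files live):

        sudo apt-get install ndiswrapper-utils-1.9 ndisgtk
        # Point ndiswrapper at the Windows XP driver's .inf file
        sudo ndiswrapper -i /path/to/xp-driver/netmw335.inf
        ndiswrapper -l              # should report the driver as installed
        sudo modprobe ndiswrapper
        sudo ndiswrapper -m         # write the module alias so it loads at boot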

    Read the article

  • Recommendation for tuning 100s of SQL databases

    - by wayne
    Hi, I'm running several SQL Servers, each hosting a few hundred multi-gig databases for customers. They are all set up homogeneously as far as the schemas are concerned; however, customer usage of the data differs quite a lot from database to database. What would be the best way to auto-index / profile / tune this large number of databases? As there are at least 600 catalogs, I can't have someone manually profile and index each database as required by its usage patterns. I'm currently running SQL 2005 but will be moving to 2008, so solutions that work with either are fine!
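    One low-touch starting point that scales to hundreds of catalogs is to let SQL Server report what it thinks is missing, instead of profiling each database by hand. A hedged sketch using the missing-index DMVs (available on both 2005 and 2008), run once per instance:

        -- missing_indexes.sql: surface the most-wanted indexes across the whole instance
        -- run with:  sqlcmd -S MyInstance -E -i missing_indexes.sql
        SELECT TOP 20
               DB_NAME(d.database_id) AS db,
               d.statement            AS table_name,
               d.equality_columns, d.inequality_columns, d.included_columns,
               s.user_seeks, s.avg_user_impact
        FROM sys.dm_db_missing_index_details d
        JOIN sys.dm_db_missing_index_groups g ON d.index_handle = g.index_handle
        JOIN sys.dm_db_missing_index_group_stats s ON g.index_group_handle = s.group_handle
        ORDER BY s.user_seeks * s.avg_user_impact DESC;

    The suggestions are only hints (they ignore write cost and can overlap each other), so they still need a sanity check before anything is created.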

    Read the article

  • Access Control Service: Home Realm Discovery (HRD) Gotcha

    - by Your DisplayName here!
    I really like ACS 2. One feature that is very useful is home realm discovery. ACS provides a NASCAR-style list as well as discovery based on email addresses. You can take control of the home realm selection process yourself by downloading the JSON feed or by manually setting the home realm parameter. Plenty of options – the only option missing is turning it off… In other words, when you set up your ACS namespace and realm and register identity providers, there is no way to keep the list of identity providers secret. An interested “user” can always retrieve all registered identity providers (using the browser or downloading the JSON feed). This may not be an issue with web identity providers, but when you use ACS to federate with customers or business partners, you may not want to disclose that list to the public (or to other customers). This is an adoption blocker for certain situations. I hope this feature will be added soon. In addition I would also like to see a feature I call “home realm aliases”: some random string that I can use as a whr parameter instead of using the real issuer URI.
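    For reference, the manual override mentioned above is just the whr parameter appended to the WS-Federation sign-in request; a hypothetical example (the namespace, realm and issuer URI are made up):

        https://mynamespace.accesscontrol.windows.net/v2/wsfederation?wa=wsignin1.0&wtrealm=https%3A%2F%2Fwww.example.com%2F&whr=https%3A%2F%2Fpartner-sts.example.org%2F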

    Read the article

  • Nginx/FPM/PHP all php files say 'File not found.'

    - by Boon
    I just installed nginx 1.1.13 and PHP 5.4.0 on a CentOS 5.8 Final 64-bit machine. Nginx and PHP-FPM are running, and I can run PHP scripts via the SSH command line, but in the browser I keep getting 'File not found.' errors on all my PHP files. This is how my nginx.conf handles PHP scripts:
    location ~ \.php$ {
        root /opt/nginx/html;
        fastcgi_pass unix:/tmp/fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /opt/nginx/html$fastcgi_script_name;
        include fastcgi_params;
    }
    This is a direct copy/paste from my other servers, where it works fine with this setup (but they run older versions of PHP/FPM). Why am I getting those errors?
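    A useful way to tell whether nginx or FPM is at fault (a diagnostic sketch; it assumes the cgi-fcgi tool from the fcgi package is installed and that /opt/nginx/html/index.php exists) is to talk to the FPM socket directly, bypassing nginx:

        SCRIPT_NAME=/index.php \
        SCRIPT_FILENAME=/opt/nginx/html/index.php \
        REQUEST_METHOD=GET \
        cgi-fcgi -bind -connect /tmp/fpm.sock

    If this also answers "File not found.", the problem is on the FPM side (typically the pool running as a user that cannot read /opt/nginx/html, or a chroot/chdir setting in the pool config) rather than in the nginx location block.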

    Read the article

  • Why don't I have a loop error with these redirects?

    - by byronyasgur
    I know this may seem a bit of a question in reverse, but I don't actually seem to have a problem; I just want to make sure before I proceed. I have 2 domains, domain1.com and domain2.com, and a directory, my_directory, for domain2.com. I have domain2.com set up as an "addon domain" in the cPanel account of domain1.com, so that when I go to domain2.com I am taken to domain1.com/my_directory, but the browser shows domain2.com in the address bar, so it looks and acts like (and is) a separate site. However, when people browse to domain1.com/my_directory I want the address bar to show domain2.com, not domain1.com/my_directory. So I put a redirect in the .htaccess file to redirect domain1.com/my_directory to domain2.com and it works perfectly, but I think it shouldn't, and I'm worried I've done something wrong. My question is this: domain2.com was already redirected to domain1.com/my_directory in the first place (I see the redirect in my cPanel under addon domains), so by adding the second redirect in the .htaccess file I should be creating a loop! Could somebody please set my mind at rest and show me why not?
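    If loops are the worry, a belt-and-braces version of the redirect only fires when the request actually arrived under the domain1.com hostname, so anything already being served as domain2.com is left alone. A sketch of what that could look like in domain1.com's .htaccess (mod_rewrite assumed to be available):

        RewriteEngine On
        # Only redirect requests whose Host header is domain1.com, not the
        # addon-domain requests that already arrive as domain2.com
        RewriteCond %{HTTP_HOST} ^(www\.)?domain1\.com$ [NC]
        RewriteRule ^my_directory(/.*)?$ http://domain2.com$1 [R=301,L]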

    Read the article

  • Webcast Q&A: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter

    - by Kellsey Ruppel
    Last week we had the fourth webcast in our WebCenter in Action webcast series, "ResCare Solves Content Lifecycle Challenges with Oracle WebCenter", where customer Joe Lichtefeld from ResCare and Wayne Boerger & Doug Thompson from Oracle Partner TEAM Informatics shared how Oracle WebCenter is allowing ResCare to solve content lifecycle challenges, reduce compliance and business risks, and increase adoption of the intranet as a primary business communication tool. In case you missed it, here's a recap of the Q&A.
    Joe Lichtefeld, ResCare
    Q: Did you run into any issues in the deployment of the platform?
    A: We experienced very few issues when implementing the content management and search functionalities. There were some challenges in determining the metadata structure. We tried to find a fine balance between having enough fields to provide the functionality needed, but trying to limit the impact to the contributing members.
    Q: What has been the biggest benefit your end users have seen?
    A: The biggest benefit to date is two-fold. Content on the intranet can be maintained by the individual contributors more timely than in our old process of all requests being updated by IT. The other big benefit is the ability to find the most current version of a document instead of relying on emails and phone calls to track down the "current" version.
    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
    A: We experienced very little resistance. Most of our community groups were eager to be able to contribute and maintain their information. We had the normal hurdles of training and follow-up training with implementing a new system and process. As our second phase rolled out access to all employees, we have received more positive feedback on the accessibility of information.
    Wayne Boerger & Doug Thompson, TEAM Informatics
    Q: Can you integrate multiple repositories with the Google Search Appliance?
    A: Yes, the Google Search Appliance is designed to index lots of different repositories, from both public and internal sources. There are included connectors to many repositories, such as SharePoint, databases, file systems, LDAP, and with the TEAM GSA Connector and the Oracle Content Server. And the index for these repositories can be configured into different collections depending on the use cases that each customer has, and really, for each need within a customer environment.
    Q: How many different filters can you add when the search results are returned?
    A: Presuming this question is about the filtering on the search results: you can add as many filters as you like, and it can be done by collection or any number of other criteria. Most importantly, customers now have the ability to limit the returned content by a set metadata value.
    Q: With the TEAM Sites Connector, what types of content can you sync?
    A: There’s really no limit; if it can be checked into the content server, then it is eligible for sync into Sites. So basically, any digital file that has relevance to a Sites implementation can be checked into the WC Content central repository and then the connector can/will manage it.
    Q: Using the Connector, are there any limitations around where in Sites that synced content can be used?
    A: There are no limitations about where it can be used. When setting up your environment to use it, you just need to think through the different destinations on the Sites side that might use the content; that way you’ve got the right information to create the rules needed for the connector.
    If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!

    Read the article

  • Shared mailbox - users cannot create or view subfolders

    - by carlpett
    I've set up a shared mailbox on our Exchange 2010/SBS 2011 server. I've given some users Full Access permission on this mailbox, and when they open Outlook or log in to OWA the mailbox is automatically opened. Great stuff. However, only the Inbox folder is visible, and the option to create a folder is grayed out. If they open the mailbox explicitly (for instance, in OWA, by clicking "Open Other User's Mailbox") they can see other folders, as well as create new ones. What configuration is needed to be able to view and create subfolders directly?
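    From the Exchange Management Shell you can at least confirm what was actually granted, and grant Full Access (which covers all folders, including new subfolders) where it is missing; a sketch with placeholder names:

        # Show who currently has Full Access on the shared mailbox
        Get-MailboxPermission -Identity "Shared Mailbox" | Where-Object { $_.AccessRights -like "*FullAccess*" }

        # Grant Full Access to a user
        Add-MailboxPermission -Identity "Shared Mailbox" -User CONTOSO\jdoe -AccessRights FullAccess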

    Read the article

  • SQL SERVER – Unable to DELETE Project in Data Quality Projects (DQS)

    - by pinaldave
    Here is the email which made me write this blog post. When I write a blog post I keep in mind that if a developer is not familiar with the concept, he may attempt it on a development server. If for any reason you attempt it on any server other than your personal server, you should make sure you have complete confidence in your own expertise and understand the risk behind it. Well, let us read the email which I received. I have modified it a bit to remove information related to the organization and the individual. "I just read your blog post on Beginning DQS. I went ahead and followed every single screenshot and it worked fine. I was able to execute the DQS project successfully. However, the same blog post got me in trouble – a serious trouble. After the first successful deployment I went ahead and created a few of my own knowledge bases and projects. I played around a bit and then decided to get back to real work. Now, we had deployed DQS on the production server only, so I experimented on the production server. When I got back to my work, I forgot to close all the windows. My manager found the window open and saw my test projects. He asked me to delete my experiments immediately and said words which I cannot write to you. Here is the problem: I am not able to delete the project which I created earlier. I am able to open it and play with it, but the delete option is disabled and grayed out (see attached image). Now I believe there is nothing wrong with this project as it was just a test project. Would you please write to my manager that it is not harmful to leave that project there as it is? It is also not using any resources. I think he will believe you." As I said, this kind of email makes me uncomfortable. I do not want someone to execute anything on a production server. I often write notes and disclaimers on my posts when something is dangerous to execute on a production server. However, if someone is not an expert with SQL Server and attempts something new on a production server, I think the major issue here is with the person (admin) who gave the new developer permission on the production server. This has to be carefully avoided. Here was my response to the individual. "I cannot write anything to your manager as he has not asked me anything. Honestly, I believe he is correct in his behavior, as you should not have executed anything on the production server without prior approval and testing on a development server. Any R&D must be done on a local box or a development box. I suggest you request your manager to prevent access for users who do not need it. If he is a good manager, he might have already done so after this recent event. I also see your screenshot. Here is the issue: while you were playing with the project, you might have closed it halfway through, without completing it. For that reason it is locked. You can open it and continue from the same place where you left the project. If you do not need the project any more, right-click on it and click on Unlock the project. This will enable the DELETE option and now you can delete the project. Next time, be safe out there. It may be dangerous to have admin access to a production server when it is not needed." I have not yet heard from him but I believe he will take my words positively. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article
