Search Results

Search found 26214 results on 1049 pages for 'farm solution'.


  • How to connect a USB GDI printer to Linux over a D-Link print server?

    - by jpe
    The setup is the following:

        +-------------+       +-----------------+          +------------+
        | HP LJ P1005 |--USB--| D-Link DPR-1020 |---LAN--+--| PC Linux   |
        +-------------+       +-----------------+        |  +------------+
                                                         |  +------------+
                                                         +--| PC Windows |
                                                            +------------+

    The HP LJ P1005 is one of those GDI printers that requires the printer driver to do most of the work for it and is therefore a bit "special". The D-Link DPR-1020 is a print server with an Ethernet and a USB port that actually supports printing to challenged (read: GDI) printers using a utility called PS-Link. What the utility does is basically mirror a USB port over the network to the print server, so that the printer driver and the printer are both happy to talk to each other. The PCs are notebooks that come and go, i.e. they are not there all the time.

    Is there an equivalent of the D-Link PS-Link utility for Linux that could mirror a USB port over the network for a Linux host? And can such a solution be used with the D-Link DPR-1020? If not, then I basically wasted the money on the print server, because the goal was to share a small printer among a couple of users with diverse operating systems in an office. The print server specs say that it supports Linux and the LJ P1005, but the catch-22 appears to be the mechanism used for GDI printers...

    It should be noted that it is possible to print from Linux to the LJ P1005 directly over USB. So far, sharing has involved reconnecting the USB cable to the appropriate computer in order to print. Now one of the desks is separated, so the cable no longer reaches. Searching the net did not yield anything useful. Please do not suggest solutions involving a Windows machine (virtual or not); my question is whether a solution involving only a Linux machine exists.
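    For context, USB-over-IP between two Linux hosts is usually done with usbip (shipped with linux-tools). A minimal sketch is below, assuming both ends are Linux boxes with the usbip modules available and that 192.168.0.10 is the host that physically owns the printer; whether the DPR-1020 itself speaks a compatible protocol is exactly the open question here.

        # on the Linux host that physically has the printer attached
        sudo modprobe usbip-host
        sudo usbipd -D                     # start the USB/IP daemon
        usbip list -l                      # note the printer's bus ID, e.g. 1-1.2
        sudo usbip bind -b 1-1.2           # export that device over the network

        # on the Linux client that should "see" the printer locally
        sudo modprobe vhci-hcd
        usbip list -r 192.168.0.10         # list devices exported by the host
        sudo usbip attach -r 192.168.0.10 -b 1-1.2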

    Read the article

  • A lot of Apache processes are running and CPU usage is always more than 70%

    - by Barkat Ullah
    I am running a Plesk panel at 1and1. I have 120 sites running, all using Pligg CMS, and each site gets 600 visitors per day. The server specs are: 1000 GB HDD, 16 GB RAM, 6-core processor.

    I always see a lot of Apache processes running in my # top view, so the server seems overloaded. If I can reduce the number of Apache processes I think the server will be OK, but I don't know why so many Apache processes are running. Please see the link below for a screenshot of my # top view:

    http://dl.dropbox.com/u/26967109/%23Top-2.jpg

    Sometimes I saw too many connection errors in my Plesk control panel, so I added the line below to my [mysqld] section:

        set-variable=max_connections=416

    But that didn't solve it. I have also added MaxClients and ServerLimit of 416 in /etc/httpd/conf/httpd.conf, but still no solution. I have been researching for more than 7 days without finding a fix. In peak hours my sites take too long to load, but in off-peak hours they are OK. Please help me find the actual problem.
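    For reference, a minimal sketch of how MaxClients/ServerLimit are usually set for the prefork MPM in httpd.conf; the 416 figure is only the value mentioned above, and in practice these numbers should be derived from available RAM divided by the per-process footprint of Apache+PHP:

        # /etc/httpd/conf/httpd.conf -- prefork MPM sketch, values are examples only
        <IfModule prefork.c>
            StartServers          8
            MinSpareServers       5
            MaxSpareServers      20
            ServerLimit         416
            MaxClients          416
            MaxRequestsPerChild 4000
        </IfModule>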

    Read the article

  • Exchange 2003: recover mailbox when migrated to 2007

    - by lyngsie
    We have migrated almost all of our mailboxes from Exchange 2003 to 2007, so both are still up and co-existing in our domain. One mailbox is missing some mails, and I tried to recover it from an Exchange 2003 backup using ExMerge as usual, but it throws an error:

        Error opening message store (MSEMS). Verify that the Microsoft Exchange
        Information Store service is running and that you have the correct
        permissions to log on. (0x8004011d)

    This is apparently expected when a mailbox has been migrated to 2007 and you try to recover a 2003 backup, as described here:

    http://www.petri.co.il/restoring-exchange-2003-mailboxes-exchange-2007-exmerge.htm

    We then tried the solution given in the article (copying the value of "homeMDB" for the user and pasting it into "msExchOrigMDB" on the Recovery Storage Group object in ADSI Edit). Unfortunately, the error persists.

    So my question is: how can I extract a .pst from a mailbox in the Recovery Storage Group when both Exchange 2003 and 2007 are running? I assume this is why the solution in the article didn't work. If no solution is possible using the regular tools (ExMerge, eseutil, etc.), which third-party tool should we choose - preferably the cheapest, as we only need this one mailbox recovered.

    Read the article

  • Is there any way to distribute x264 encoding jobs across multiple computers (to increase the encoding speed)?

    - by Breakthrough
    Does anyone know of a current, actively maintained solution for encoding x264 video across many computers (via the network) to increase encoding FPS? Brownie points for cross-platform and open source, but just so you all know, I usually use Windows.

    Programs that I have heard of, and why I do not believe they are suitable:

    - x264farm: Not actively developed. Good interface, but it does not support two-pass encoding, and it fails with newer x264 builds.
    - ELDER: Again, not actively developed, but my issue was that it didn't work with new x264 builds, and it was very difficult to configure (read: randomly stopped working).

    While I don't absolutely need a program that is being actively developed, I would like one that supports two-pass encoding and works with new(er) x264 builds.

    Additional information: So far, I've offered (and awarded!) two separate bounties on this question since I first posted it over two years ago, and I still haven't found a solution to this problem. What I'm looking for is basically a simple program that lets me encode x264 video using the processing power of multiple computers connected over a LAN. Furthermore, it would be nice if it worked with new(er) x264 builds and supported two-pass encoding. If at any time someone has an updated answer, or a new solution to this problem, please post it and it will be given due consideration.

    Read the article

  • Does a VPN requirement kill the concept of having a Web Application in the Cloud?

    - by Christian
    Recently I posted a question on SO, but so far I have got no answers. I wonder if I'm asking the wrong question. This is the problem:

    We need to design an application which offers a public HTTP web service, but at the same time it must consume some services through a VPN connection from another existing company. There is no alternative but to use a VPN connection to access those services. We want to host our application on some cloud infrastructure like Heroku or Amazon EC2, but there is no direct way to access the other company's VPN services from there.

    The solution I'm considering, but don't like, is to have a separate server that exposes the services from that VPN. But this requires setting up another server, which I would prefer to avoid. If that is the way to go, can I use an Amazon EC2 instance to connect to a VPN?

    This is what I was thinking - is it correct? I don't have experience with VPNs, tunnels or that kind of networking. I would really appreciate it if you could propose an alternative solution, or just leave a comment.
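    For what it's worth, an EC2 instance can usually act as an ordinary VPN client itself, so a separate relay server is not strictly required. A minimal sketch, assuming the partner company runs an OpenVPN endpoint and has issued client credentials (both assumptions - they may use IPsec or something else entirely):

        # on the EC2 instance (apt-get or yum, depending on the AMI)
        sudo apt-get install openvpn
        sudo openvpn --config partner-vpn.ovpn --daemon

        # once the tunnel is up, the app reaches the partner services via the
        # tunnel address (e.g. http://10.8.0.1:8080/ -- illustrative only),
        # while still serving its own public HTTP endpoint as before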

    Read the article

  • nginx redirect proxy

    - by andrew
    I have a web app running on an nginx server at local IP 192.168.0.30:80. I have this in my /etc/hosts:

        127.0.0.1 w.myapp.in

    If someone accesses my app using the "w" subdomain, it shows a WebDAV interface; otherwise it runs normally. For example, http://myapp.in goes into the app and http://w.myapp.in goes into the WebDAV interface - this is done within the app, nginx has nothing to do with it.

    Because I don't have DNS or anything like that, users must access the app by IP. A problem appears if someone wants to access the WebDAV interface, because you cannot reach the app by a subdomain - unless you add a line to your local hosts file, which is not a solution.

    A possible solution: set up the nginx server so that if someone calls http://192.168.0.30 (on port 80), it goes into the app normally, but if a user accesses, say, http://192.168.0.30:81 (another defined port), it is redirected internally to w.myapp.in and the app sees the subdomain.

    Given the app, can this be done? If yes, what should I put in the nginx config file? And if you guys can think of a better solution, I'm open to anything.
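    A minimal sketch of the kind of second server block that is usually used for this - the port numbers and upstream address are the ones from the question, and setting the Host header is what makes the app believe the request arrived on the "w" subdomain:

        # sketch, not tested against the app
        server {
            listen 81;

            location / {
                proxy_pass       http://127.0.0.1:80;
                proxy_set_header Host w.myapp.in;        # app sees the subdomain
                proxy_set_header X-Real-IP $remote_addr;
            }
        }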

    Read the article

  • How to change a .htaccess file to work correctly on localhost?

    - by Manolo Salsas
    I have this snippet in my .htaccess file to prevent users from hotlinking the server's images:

        RewriteEngine On
        RewriteCond %{HTTP_REFERER} ^$ [OR]
        RewriteCond %{HTTP_REFERER} !^http://(www.)?itransformer.es/.*$ [NC]
        RewriteRule \.(gif|jpe?g|png|wbmp)$ http://itransformer.es [R,L]

    Of course, it is not working on my localhost, but I don't know how to achieve that. My guess is that I should replace the domain name with some wildcard. Any idea?

    Update: I've finally found the answer thanks to @Chris's solution:

        RewriteCond %{HTTP_REFERER} ^$ [OR]
        RewriteCond %{HTTP_REFERER} ^https?://%{HTTP_HOST}/.*/usuarios/.*$ [NC]
        RewriteRule \.(gif|jpe?g|png|wbmp)$ http://%{HTTP_HOST} [R=301,L]

    The /usuarios/ directory is there because I only want to deny direct access to files inside that directory.

    Update 2: For some reason, it doesn't work any more. In the end I think I found a better solution:

        RewriteCond %{REQUEST_FILENAME} .*/usuarios/.*$ [NC]
        RewriteRule \.(gif|jpe?g|png|wbmp)$ http://%{HTTP_HOST} [R=301,L]

    I say a better solution because what I want to deny is direct access to a file (image).

    Update 3: Well, after a while I discovered the above wasn't exactly what I wanted, so the following is the definitive version:

        RewriteCond %{HTTP_REFERER} ^$ [OR]
        RewriteCond %{HTTP_REFERER} !^https?://itransformer.*$ [NC]
        RewriteRule /usuarios/.*\.(gif|jpe?g|png|wbmp)$ - [R=404,L]

    Just two doubts. First, if I change the above to:

        RewriteCond %{HTTP_REFERER} ^$ [OR]
        RewriteCond %{HTTP_REFERER} !^https?://%{HTTP_HOST}.*$ [NC]
        RewriteRule /usuarios/.*\.(gif|jpe?g|png|wbmp)$ - [R=404,L]

    it doesn't work. I don't understand why, because %{HTTP_HOST} is equal to itransformer on my localhost, so it should work. The second doubt is why the default 404 page is shown instead of my custom page (which is shown for all other 404 responses).

    Read the article

  • Synchronize two directories on linux pc

    - by Gab
    I need a distributed filesystem (or a synchronization tool) that is capable of keeping a directory synchronized across 4 PCs. My requirements are:

    - offline access (data must be available offline on each PC)
    - preserve execution rights: some files are marked executable on a Linux partition, and this flag should be replicated
    - efficient sync strategy: some of my files are 20 GB and are changed quite often, but in very small parts (VirtualBox images); delta transmissions are welcome
    - efficient handling of space: no per-file history, and files shouldn't be copied to temp directories "just in case you break it"
    - it must propagate deletions of files
    - modifications can happen on any of the 4 PCs and should be propagated when the other PCs are connected

    Other specs of my setup: sync is over a LAN, and the total amount of data to be synced is around 180 GB in some ten thousand files. Changes are small, but can happen in big files. At the moment I'm interested in a Linux-only solution. Conflicts either don't happen or are resolved with "last one wins".

    I haven't found any good solution. I've been trying:

    - unison: the only one working at the moment, but during the hashing phase it hangs my PC for some minutes, with the disk light steady on
    - SparkleShare: doesn't handle large files nicely, and it keeps a history of all your changes that grows indefinitely; they promise this will be fixed in upcoming releases, but at the moment it still doesn't fit my needs
    - ownCloud: keeps a history of each file I change
    - Coda: help! I couldn't set it up correctly!
    - git-annex assistant: transforms all your files into symlinks and marks the original file as read-only ("just in case you make a mistake while you modify it"!); before you edit a file you have to issue a special command, "git annex unlock", which creates a local copy of the file, and you have to remember to lock it again if you want it synchronized

    What should I try next?
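    Since unison is the closest fit so far, here is a minimal profile sketch that addresses the two pain points above: fastcheck skips content hashing for files whose size and mtime are unchanged, and prefer implements "last one wins". Paths and host names are assumptions:

        # ~/.unison/shared.prf -- a sketch for one pair of machines
        root = /home/gab/shared
        root = ssh://pc2//home/gab/shared
        fastcheck = true     # skip rehashing when mtime+size are unchanged
        prefer = newer       # resolve conflicts with "last one wins"
        times = true         # preserve modification times
        perms = -1           # replicate permission bits, including the exec flag

    Pairwise profiles like this (e.g. a star topology with one PC acting as the hub) are the usual way to extend unison beyond two replicas.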

    Read the article

  • Allowing access to company files across the internet

    - by Renaud Bompuis
    The premise: I've been tasked with finding a solution to the following scenario:

    - our main file server is a Linux machine
    - on the LAN, users simply access the files using SMB
    - each user has an account on the file server and his/her own access rights
    - user accounts are simple passwd/group accounts, not NIS/LDAP

    The problem: we want to give users (or at least some of them, say if they belong to a particular group) the ability to access the files from the Internet while travelling. Ideally I'd like a seamless solution; something that allows the user to access a mapped drive would be ideal. A web-oriented solution is also fine, but it should present files in a way that is familiar to users, in an explorer-like fashion for instance. Security is a must, of course: users would be expected to log in, and the connection to the server should be encrypted.

    Does anyone have pointers to neat solutions? Any experiences?

    Edit: the client machines are Windows only.
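    One low-friction option, since the accounts already exist on the file server, is SFTP over SSH: Windows clients can browse it in an explorer-like way with WinSCP, or map it as a drive with tools such as sshfs-win. A minimal sshd_config sketch, assuming a dedicated "roaming" group is created for the users who get remote access:

        # /etc/ssh/sshd_config -- sketch; group name and chroot path are assumptions
        Subsystem sftp internal-sftp

        Match Group roaming
            ChrootDirectory /srv/files   # must be root-owned and not group-writable
            ForceCommand internal-sftp
            AllowTcpForwarding no
            X11Forwarding no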

    Read the article

  • Windows disk change monitoring for malware analysis

    - by SuperDuck
    Not sure if this question belongs here, because it has some relation to both Server Fault (system backups) and Stack Overflow (software analysis).

    I'm looking for a solution to monitor disk changes on a Windows system and selectively revert them:

    - it should be able to handle live files like registry hives, so it may need to be an offline backup tool
    - it shouldn't silently skip files that the current admin user doesn't have permissions on (files with no permission entries or owned by the 'system' user)
    - registry change tracking would be a bonus but is not a requirement

    I use virtual machines for malware analysis, and there isn't even a solution for listing file changes in disk snapshot files (delta VMDKs). I currently use Ashampoo for monitoring changes. Although it's the best among its peers, it's not a good piece of software and hasn't really evolved through the many 'Platinum' and 'Deluxe' versions released in the last 10 years (it even used non-resizable windows until the latest version). The real problem is that it misses some disk and registry changes. Perhaps it only compares modification dates and doesn't catch a change if the dates are preserved. So I think the solution should compare files using hashes, or at least file sizes.

    There is plenty of backup software out there and I'm sure one of them can handle this, offline or online.
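    For the hash-comparison part specifically, a before/after baseline is easy to build in PowerShell; a minimal sketch, with paths as assumptions (note that it cannot see files the current account is not allowed to read, which is exactly the limitation mentioned above):

        # take a baseline before running the sample
        Get-ChildItem -Path C:\Target -Recurse -File -ErrorAction SilentlyContinue |
            Get-FileHash -Algorithm SHA256 |
            Export-Csv C:\Baselines\before.csv -NoTypeInformation

        # take a second snapshot afterwards and diff the two
        Get-ChildItem -Path C:\Target -Recurse -File -ErrorAction SilentlyContinue |
            Get-FileHash -Algorithm SHA256 |
            Export-Csv C:\Baselines\after.csv -NoTypeInformation

        Compare-Object (Import-Csv C:\Baselines\before.csv) `
                       (Import-Csv C:\Baselines\after.csv) -Property Path, Hash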

    Read the article

  • Reverse proxy for mailserver (SMTP + HTTP for web client)

    - by ba
    I'm looking at doing some reverse proxy work for a mail server with a corresponding web client. Both servers run on the same machine; this is not a server with a high load. :)

    The solution I've discussed with friends is to keep the mail server/web client on our internal network, and to put a reverse proxy in the DMZ to handle both SMTP and web-client HTTP traffic to the mail server on the internal network. From what I understand, this is the recommended secure setup?

    So far my thinking for the SMTP-proxy part is to use Postfix, which will receive mail, apply Spamhaus and similar anti-spam measures, and, if everything checks out, pass the mail to the mail server on the inside. The mail server on the inside will send all outgoing mail to the proxy, which will then send it out to the Internet.

    For the web client I'm not sure which software I should run on the proxy machine. I've been thinking about Squid - but that's basically based on the fact that I know Squid is an HTTP proxy. The web client data will be sent out over SSL. Reading around here on Server Fault, I've seen other people use Apache with mod_proxy + mod_security for similar situations.

    Am I thinking correctly about this solution? What software would you use, and with which modules? Thanks in advance for the help! :)
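    For the HTTPS leg, a minimal Apache mod_proxy sketch of the DMZ vhost - hostnames, certificate paths and the internal address are assumptions, and mod_security would be layered on top of this:

        <VirtualHost *:443>
            ServerName webmail.example.com

            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/webmail.crt
            SSLCertificateKeyFile /etc/ssl/private/webmail.key

            ProxyRequests     Off          # reverse proxy only, never a forward proxy
            ProxyPreserveHost On
            ProxyPass        / http://10.0.0.5/
            ProxyPassReverse / http://10.0.0.5/
        </VirtualHost>

    On the SMTP side, the matching Postfix sketch on the DMZ box is a plain relay: relay_domains = example.com plus a transport map entry like "example.com smtp:[10.0.0.5]", with the internal server using the DMZ box as its relayhost for outgoing mail.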

    Read the article

  • Software/hardware to build video streaming server?

    - by Sasha Yanovets
    I am looking for a video streaming server solution - something like an online TV server, with the ability to do live broadcasts over the Internet. What software could you recommend for that? What kind of hardware should it run on, and is there anything special required? I am looking for a solution that could be scaled up to at least 1000 simultaneous users online with good video resolution. A general answer on which direction to choose would be good, but here are more details on my specific case:

    We are starting almost from scratch. We have some video content that we've produced, but it is not delivered over the Internet yet, and we are not tied to any particular vendor for now. We want to stream 24 hours a day - three 8-hour blocks, with the content changing every day - and we want the ability to do regular live broadcasts. I guess we will need several streaming quality options (low ~56 kb/s, mid ~273 kb/s). Some terms are simply foreign to me (like play-truncation rate), so if you could point out which parameters we should be aware of, that would be great. The uplink to the Internet is yet to be determined; we plan to start with something and scale up along the way.

    If you already run some kind of media streaming server, please describe its configuration here (hardware, OS, software) and the peak number of concurrent users it serves. I think that could help people approaching this task.
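    As one concrete direction (an assumption, not the only stack that fits): nginx built with the third-party nginx-rtmp module can ingest a live RTMP feed from an encoder and repackage it as HLS for web playback; scaling to around 1000 viewers then becomes mostly a question of uplink bandwidth and of adding edge caches in front. A minimal sketch:

        # nginx.conf fragment -- requires nginx compiled with the nginx-rtmp module
        rtmp {
            server {
                listen 1935;                 # RTMP ingest from the live encoder
                application live {
                    live on;
                    hls on;                  # repackage the stream as HLS
                    hls_path /var/www/hls;
                    hls_fragment 5s;
                }
            }
        }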

    Read the article

  • Accessing a web page folder containing a .htaccess file via Apache WebDAV?

    - by pingo
    I have set up WebDAV access in order to let an external user upload the content of his web page to his folder on my server, which is served to the web by Apache. This way he can update his web page via WebDAV. Now the problem is that the user requires a .htaccess file, and of course the .htaccess breaks WebDAV, probably because it overrides settings (new files can no longer be uploaded via WebDAV once the .htaccess shown below exists).

    I am running Apache 2.2.17 and this is my WebDAV config:

        Alias /folderDAV "d:/wamp/www/somewebsite/"
        <Location /folderDAV>
            Order Allow,Deny
            Allow from all
            Dav On
            AuthType Digest
            AuthName DAV-upload
            AuthUserFile "D:/wamp/passtore/user.passwd"
            AuthDigestProvider file
            require valid-user
        </Location>

    This config is part of my naive attempt at fixing the problem. The idea was to define an alias to the web page folder where WebDAV would be enabled, and then set AllowOverride to None so that the .htaccess would have no effect. Of course I then found out that the AllowOverride directive is not valid inside <Location />.

    The .htaccess file looks like this:

        #opencart settings
        Options +FollowSymlinks
        Options -Indexes
        <FilesMatch "\.(tpl|ini)">
            Order deny,allow
            Deny from all
        </FilesMatch>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]
        ErrorDocument 403 /403.html
        deny from 1.1.1.1/19
        allow from 2.2.2.2

    What would be the solution here? I would like the web page to be accessible from the web, but at the same time I want to be able to access and modify it via Apache's WebDAV (with digest auth). How would I do that? If possible, I would also like a solution that keeps the .htaccess in place, so that the user still has the power to set up access rules for his web page.

    Read the article

  • Make a drive from one machine appear as a physical disk in another machine.

    - by Roberto Sebestyen
    I want to take a physical disk (or part of a disk) in one machine (call it machine-A) and make it available in another machine (machine-B). But I don't want to map a network drive; I want it to appear in machine-B as a physical drive, even though it is not one.

    The reason I want to do this is that I want the ability to create shares on that drive in machine-B. Since I cannot do that on mapped drives, I need some utility that fools machine-B into thinking it is a physical drive and treating it as such. Both of these machines are Windows Server 2003.

    I have heard about NFS - it sounds like it could be the solution to my problem, but isn't that a Linux/Unix protocol? What tools can I use to make this happen? Are there any open source solutions? I don't care what the solution is, as long as it achieves the end result - preferably open source, though.

    Thanks for reading, guys and gals!

    Read the article

  • Large scale file replication with an option to "unsubscribe" from a replicated file on a given machine

    - by Alexander Gladysh
    I have 100+ GB of files per day incoming on one machine. (File size is arbitrary and can be adjusted as needed.) I have several other machines that do some work on these files. I need to reliably deliver each incoming file to the worker machines. A worker machine should be able to free its HDD of a file once it is done working with it. It is preferable that a file be uploaded to the worker only once, processed in place, and then deleted, without being copied anywhere else - to minimize the already high HDD load. (The worker itself requires quite a bit of bandwidth.)

    Please advise a solution that is not based on Java. None of the existing replication solutions that I've seen can do the "free the HDD of the file once processed" part - but maybe I'm missing something...

    A preferable solution should work with files (from the point of view of our business logic code) and not require the business logic to connect to some queue or the like. (Internally the solution may use whatever technology it needs to - except Java.)
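    For scale, the simplest non-Java shape of this is a pull model on each worker: fetch files it hasn't seen, process them in place, delete on success. A rough sketch - host names, paths and the process command are all assumptions, and a real deployment still needs locking so two workers don't grab the same file, plus a record of already-processed names so deleted files are not pulled again:

        #!/bin/sh
        # pull new files from the ingest box without re-copying ones already present
        rsync -a --ignore-existing ingest:/incoming/ /work/incoming/

        # process each file in place, freeing the disk only on success
        for f in /work/incoming/*; do
            [ -e "$f" ] || continue
            process "$f" && rm -f -- "$f"      # "process" is a placeholder command
        done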

    Read the article

  • Combat server downtime by duplicating server and re-routing when main server is down

    - by Wasim
    I have a CentOS server which at times either crashes or gets hit with DDoS attacks. At the moment I have an off-site backup which has filled up with 1.7 TB of data. I'm currently paying as much for the backup as I am for the server, and I'm looking for advice from experienced people on the best way to proceed from here.

    Would it be a viable solution to ditch the off-site backup and instead purchase an additional server which is an exact duplicate of the first? If the first server goes down, users are re-routed to the second server without noticing the first server is even down. This would give me an automatic backup of the first server (albeit not off-site) and remove the need for the expensive off-site backup.

    Is the above a real alternative to pricey backup, or is off-site backup absolutely necessary? How would I go about doing this (it's obviously fairly complex, so links to some reading material or the terminology for the procedure would be great)? I appreciate the help and advice.

    Read the article

  • SQL Server Replication Backup

    - by user18039
    Hi,

    We have a new system that runs on SQL Server 2008 R2 64-bit. There is a primary online transaction processing (OLTP) database that accepts a high volume of updates from several thousand point-of-sale systems at stores around the country. In order to protect this vital function, I have decided to introduce a dedicated reporting database server, from which multiple users will run some pretty complex reports. I realise there were a number of choices, but I decided to use transactional replication as the mechanism for copying the data from the OLTP database to the new reporting database - one-way replication. The solution has worked well in testing.

    I'm now being asked what changes need to be made to the backup policy to cover the architectural changes. I have read pages such as MSDN's "Strategies for Backing Up and Restoring Snapshot and Transactional Replication", but I think these are overkill for my solution. In fact, my current thinking is that we simply need to continue making backups of the OLTP data and logs. If the reporting database or any of the replication system databases (e.g. distribution) fail, it's no big deal - we can tear everything down and re-create the replication. I realise that taking a complete snapshot of the OLTP database would be time-consuming (approx. 5 hours), but I'd be more relaxed about that than about trying to restore backups of the various data and log files in the correct sequence.

    My view is that the complex strategies set out in the MSDN article would only be the way to go for a more complex replication topology than mine, e.g. multiple subscribers with two-way replication. Would you agree? I'd be grateful for any advice. Many thanks, Rob.
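    For reference, the "keep backing up the publisher's data and logs" plan boils down to the usual full + log backup pair on the OLTP database; the database and path names below are assumptions:

        -- full backup of the OLTP publisher database
        BACKUP DATABASE [POS_OLTP]
            TO DISK = N'E:\Backups\POS_OLTP_full.bak'
            WITH INIT, CHECKSUM, STATS = 10;

        -- frequent transaction log backups between fulls
        BACKUP LOG [POS_OLTP]
            TO DISK = N'E:\Backups\POS_OLTP_log.trn'
            WITH CHECKSUM, STATS = 10;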

    Read the article

  • Vim: auto-comment on a new line

    - by padde
    Vim automatically inserts a comment leader when I start a new line from a commented-out line, because I have set formatoptions=tcroql. For example (the cursor is *):

        // this is a comment*

    and after hitting <Enter> (insert mode) or o (normal mode) I am left with:

        // this is a comment
        // *

    This feature is very handy when writing long multi-line comments, but often I just want a single-line comment. Now if I want to end the comment series I have several options:

    - hit <Esc>S
    - hit <BS> three times

    Both of these take three keystrokes, which together with the <Enter> means four keystrokes for a new line - too many, I think. Ideally, I would like to just hit <Enter> a second time and be left with:

        // this is a comment
        *

    It is important that the solution also works at different indentation levels, i.e.

        int main(void) {
            // this is a comment*
        }

    hit <Enter>

        int main(void) {
            // this is a comment
            // *
        }

    hit <Enter>

        int main(void) {
            // this is a comment
            *
        }

    I think I saw this feature in some text editor a few years ago, but I do not recall which one. Is anyone aware of a solution that will do this for me in Vim? Pointers in the right direction on how to roll my own solution are also very welcome.
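    As a pointer for rolling your own: an <expr> mapping on <CR> can check whether the current line is nothing but an auto-inserted comment leader and, if so, wipe it instead of continuing the comment. A minimal sketch for //-style comments only (a robust version would add a popup-menu guard and per-filetype comment leaders):

        " in ~/.vimrc -- a second <Enter> on an empty '// ' line clears it,
        " leaving an indented blank line in insert mode
        inoremap <expr> <CR> getline('.') =~# '^\s*//\s*$' ? "\<Esc>S" : "\<CR>"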

    Read the article

  • DNAT from localhost (127.0.0.1)

    - by pts
    I'd like to set up a TCP DNAT from 127.0.0.1, port 4242, to 11.22.33.44, port 5353, on Linux 3.x (currently 3.2.52, but I can upgrade if needed). The simple DNAT rule setup doesn't seem to work: telnet 127.0.0.1 4242 hangs for a minute at "Trying 127.0.0.1...", and then times out. Maybe the kernel is discarding the returning packets (e.g. SYN+ACK) because it considers them Martian.

    I don't need an explanation of why the simple setup doesn't work; I need a solution, even if it's complicated (e.g. it involves creating many rules). I could set up a usual DNAT from another local IP address outside the 127.0.0.0/8 network, but here I need 127.0.0.1 as the destination address. I know I could set up a user-level port-forwarding process, but I need a solution that can be set up using iptables alone and doesn't need helper processes.

    I have been googling this for an hour. It has been asked multiple times, but I couldn't find any working solutions. There are also many questions about DNAT to 127.0.0.1, but I don't need that; I need the opposite.
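    For the record, the recipe that is usually tried for this combines a DNAT in the nat OUTPUT chain (where locally generated packets are rewritten) with an SNAT in POSTROUTING so the packets don't leave the box with a 127.0.0.1 source. A sketch only - the local LAN address is an assumption, and depending on the kernel the route_localnet sysctl may also be needed:

        # rewrite locally generated connections to 127.0.0.1:4242
        iptables -t nat -A OUTPUT -o lo -d 127.0.0.1 -p tcp --dport 4242 \
            -j DNAT --to-destination 11.22.33.44:5353

        # give the rewritten packets a routable source address
        iptables -t nat -A POSTROUTING -d 11.22.33.44 -p tcp --dport 5353 \
            -j SNAT --to-source 192.168.1.100    # this box's LAN address (assumed)

        # some kernels (3.6+) additionally require:
        sysctl -w net.ipv4.conf.all.route_localnet=1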

    Read the article

  • Why does my DL380 G3 crank the fans up so high (and how do I stop it)?

    - by smoofra
    I've got an HP DL380, and after I leave it on for a while it decides it needs to run the fans at a much higher speed than before. It's really loud and annoying. Is there any way to manually control the fan speed, or otherwise get it to stop doing that? Thanks.

    Edit: I guess I should have made it clear that this is a bit of a jury-rigged situation. I know the ideal solution is to put the thing in a server room with nice cool air. Unfortunately, that isn't happening. This is the sort of problem that calls for a jury-rigged fix, such as:

    - manually setting the fan speed and accepting the fact that it's going to run hot
    - shutting down a CPU (can I do that?)
    - spinning down the disks when they're not in use (is that possible?)

    Solution: install HP's system health monitoring daemon, hpasmd. I installed it to try to figure out what was going on, and just running it fixed the problem.

    Read the article

  • VM automatic provisioning advice

    - by jdgregson
    In my lab we have 24 workstations, each with five technician-maintained virtual machines set up in VMware Workstation. These create a lot of management overhead: we have to update them as well as the host operating systems every three months (at the start of the next quarter), which adds up to 144 systems to update instead of just 24. Whenever we need to reimage the hosts, the VMs add another 130 GB to each image, which is over 3 TB of extra data to send over the network and a lot more time to apply each image - and then we still have to boot all 120 VMs and assign each a unique IP address and host name.

    We would like to get the VMs off the hosts and onto a server, but after looking around for a few days I still don't know where to begin looking for a solution. There may be a better way to do this, but in my mind the ideal solution would be to replace the VMs on the host machines with five thin-client operating systems, each configured to connect to a server and be given, or connected to, a unique virtual machine. We can't have 120 VMs running on the server all the time, so the server would have to create a copy of the VM from a template whenever a student tries to boot one, and destroy the VM after the student is finished with it.

    If another client application has to be installed on the hosts, that is fine; the only reason I'd like to keep them in VMware Workstation is that students already know to look there for the VMs when they need them. What, if any, virtualization software will allow this? Is there some other solution I'm not seeing?

    Read the article

  • Find the kth smallest element in a binary search tree in an optimal way

    - by Bragaadeesh
    Hi, I need to find the kth smallest element in a binary search tree without using any static/global variable. How can I achieve this efficiently? The solution I have in mind runs in O(n) in the worst case, since I am planning to do an in-order traversal of the entire tree. But deep down I feel that I am not using the BST property here. Is my assumed solution correct, or is there a better one available?
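    A minimal sketch (in Python, not from the original post) of the traversal-based approach: an iterative in-order walk with an explicit stack keeps the counter in a local variable, so no static or global state is needed, and it visits only the k smallest nodes - O(h + k) time and O(h) extra space for a tree of height h. Using the BST property more aggressively (storing subtree sizes in each node) would bring this down to O(h) per query.

        class Node:
            def __init__(self, key, left=None, right=None):
                self.key, self.left, self.right = key, left, right

        def kth_smallest(root, k):
            stack, node = [], root
            while stack or node:
                while node:                  # descend to the leftmost unvisited node
                    stack.append(node)
                    node = node.left
                node = stack.pop()           # next key in ascending (in-order) order
                k -= 1
                if k == 0:
                    return node.key
                node = node.right
            raise ValueError("tree has fewer than k nodes")

        # usage: a small BST holding 1..5
        root = Node(4, Node(2, Node(1), Node(3)), Node(5))
        assert kth_smallest(root, 3) == 3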

    Read the article

  • NPOI: export from a DataTable

    - by Iulian
    I have an ASP.NET website that generates Excel files with 7-8 sheets of data. The best option so far seems to be NPOI: it can create Excel files without Excel being installed on the server, and it has a nice API similar to the Excel interop. However, I can't find a way to dump an entire DataTable into a sheet, similar to CopyFromRecordset. Any tips on how to do that, or a better solution than NPOI?
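    There is no single-call CopyFromRecordset equivalent in NPOI, but the usual pattern is a pair of loops over the DataTable's columns and rows. A minimal sketch - class, method and file names are assumptions, and every cell is written as text for simplicity:

        using System.Data;
        using System.IO;
        using NPOI.HSSF.UserModel;   // HSSF produces .xls workbooks
        using NPOI.SS.UserModel;

        static class DataTableExport
        {
            public static void WriteToSheet(DataTable table, string path)
            {
                IWorkbook workbook = new HSSFWorkbook();
                ISheet sheet = workbook.CreateSheet(
                    string.IsNullOrEmpty(table.TableName) ? "Sheet1" : table.TableName);

                // header row from the column names
                IRow header = sheet.CreateRow(0);
                for (int c = 0; c < table.Columns.Count; c++)
                    header.CreateCell(c).SetCellValue(table.Columns[c].ColumnName);

                // one sheet row per DataRow
                for (int r = 0; r < table.Rows.Count; r++)
                {
                    IRow row = sheet.CreateRow(r + 1);
                    for (int c = 0; c < table.Columns.Count; c++)
                        row.CreateCell(c).SetCellValue(table.Rows[r][c].ToString());
                }

                using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write))
                    workbook.Write(fs);
            }
        }

    Calling DataTableExport.WriteToSheet once per DataTable (one sheet per table) covers the 7-8 sheet case described above.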

    Read the article

  • SQL Server 2005 / Delphi 2010 dbExpress connection

    - by xsintill
    Trying to connect to SQL Server 2005 with Delphi 2010 gives me the following error after configuring the Data Explorer:

        Borland.Data.TDBXError: DBX Error: Driver could not be properly initialized.
        Client library may be missing, not installed properly, or wrong version.

    I know there is a workaround: installing SQL Server 2008 Management Studio. But this is not a good solution if every PC needs to have that prerequisite. Does anybody know a better solution or fix? Delphi 6 does not have this problem.

    Read the article
