Search Results

Search found 13808 results on 553 pages for 'remote storage'.

Page 80/553

  • Mac, VNC and multiple monitors

    - by MarqueIV
    I asked a similar question here before, but judging by the responses I wasn't as clear as I'd hoped, so I'll try again. I have a Mac Pro with four monitors which I would like to access remotely. I've been using VNC for this (either via Screen Sharing or a dedicated VNC client), which works, but VNC mirrors the physical layout and resolutions of the attached monitors. One of the things I like about Microsoft's Remote Desktop (Terminal Server) client is that when you connect, it blanks out the local screens and sets the resolution to a client-specified setting. In other words, when natively running Windows, even though I'm driving a physical 30" monitor flanked by two 24" monitors plus a 21" Cintiq, I can set the Remote Desktop resolution to match my notebook's screen, giving me a native, single-monitor configuration. As soon as I disconnect (and log back in locally), the desktop un-blanks and the resolution resets to the four physically attached monitors. Again, VNC works, and yes, I know I can use ports 5901, 5902...n to attach VNC to a specific monitor rather than the entire desktop, but I'm still at the mercy of viewing a 2560x1600 desktop on a 1280x800 screen. I'm left with either scaling (everything's too small) or panning and scrolling (it's like playing hide-and-seek with your documents!). So: does anyone know of any Mac-based remote software (client and server) that will let me connect to my Mac Pro and have the client reset the resolution, just like you can in Windows, or am I SOL?

  • Trouble connecting to a local SQL server instance from the web

    - by dfarney
    We have a small network behind a firewall (WatchGuard XTM 2 series) and a network switch. On our network we have multiple instances of SQL Server, but one in particular that I would like to access remotely from our website. We have a static IP address from our ISP, and all the machines on the network have locally assigned dynamic IP addresses. When connecting to the database from outside our network, how do I get the request directed to the proper machine and SQL instance? Is it a parameter in my connection string, or something in my firewall? A few things to rule out: 1) The firewall is allowing access from the website to our network. I added the site's IP and opened up port 1433; when connecting and monitoring the firewall, no exceptions come up as they did before I added the proper IP address. 2) Remote connections on the SQL Server have been set up and enabled. I've done a lot of reading on remote connections and I am sure they have been set up properly. I am currently getting this error message on my site: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.)
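
    One likely culprit, assuming a named instance behind NAT: the SQL Browser service (UDP 1434), which normally resolves instance names to ports, is usually not reachable through a firewall, so the connection string should target the public IP and the forwarded TCP port explicitly rather than an instance name. A minimal example (the IP, database, and credentials are placeholders, not values from the question):

        Server=203.0.113.10,1433;Database=AppDb;User Id=webuser;Password=...;

    If that still times out, a quick telnet 203.0.113.10 1433 from the web server will show whether the WatchGuard rule actually forwards the port to the right internal machine; each SQL instance listens on its own port, so the forward must point at the port of the specific instance you want.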

  • Windows Server 2012 Can't Print

    - by Chris
    I know this may sound incredibly stupid, and there is probably an easy solution, but I can't seem to find it. Friends of mine recently upgraded the server for their small business from the old one: new hardware and a change from Windows Server 2003 to Windows Server 2012. I've got everything they need transferred over and running except for printing. They need to be able to print from the server, via Remote Desktop, to printers in the vans their technicians use. In other words, a technician uses a laptop to remote-desktop into the server and needs to print invoices from the remote session to a printer attached locally to the laptop via USB. On the old server they just installed the identical driver and that was it; they could print as needed. On this server, no matter what we do, we can't get it to print remotely, and in the process we also discovered that the server can't even print to the network printer. It sees the printer on its network, and it sees (through redirection) the printers in the vans, but when you hit print it claims it did and nothing happens. There isn't an issue with the printers themselves, as every other device we have can print to them without issue. Is there some setting that is inhibiting the server from printing? Is there something I need to install (a print server role?) to add the functionality? Thanks in advance for helping me out here.
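
    Two things worth checking, offered as a sketch rather than a confirmed fix: the Print Server role may simply not be installed yet, and the session host has a group policy ("Do not allow client printer redirection", under Remote Desktop Session Host > Printer Redirection) that must not be enabled. From an elevated PowerShell prompt on the 2012 server:

        # hedged sketch for Server 2012; cmdlet names are standard, output will vary
        Install-WindowsFeature Print-Services      # adds the Print Server role
        Get-Printer                                # should list the network printer and the redirected van printers
        Restart-Service Spooler                    # a stuck spooler also produces silent "printed" jobs

    If Get-Printer shows the redirected printers but jobs vanish, missing or mismatched 64-bit drivers on the server are the usual suspect, since Server 2012 is x64-only.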

  • Configure X connections over TCP without using an X connection

    - by Darren Cook
    I want to run a GUI application on a remote machine I only have ssh access to. I don't need to, or want to, see the GUI window. (I know I could use something like ssh -C -X remote_server if I wanted the GUI on my client.) I know X is running on the remote machine, as ps shows this: root ... /usr/bin/Xorg :0 -br -audit 0 -auth /var/gdm/:0.Xauth -nolisten tcp vt7. I set DISPLAY=:0.0, but I then get "Xlib: connection to ":0.0" refused by server" when I try to use it. At "Get remote x display working in linux without ssh tunneling" and "Xserver doesn't work unless DISPLAY=0.0" I see the advice to use gdmsetup to allow X to listen on TCP. But gdmsetup is a GUI application! And trying to run it over ssh -X did not work ("X11 connection rejected because of wrong authentication"). So, is there a text file I can edit to remove -nolisten? And after editing it, how do I safely restart X remotely? (There is other stuff running on this machine, so requesting a reboot is possible but undesirable.) If not, should gdmsetup be able to run over ssh, and should I persevere in that direction? UPDATE: I had to run the ssh -X session as root (ssh as a normal user, then sudo or su, does not work), so I did the edit with gdmsetup. I then restarted X with gdm-restart, and I've also done xhost + from that ssh -X session. The ps line no longer shows the -nolisten tcp part, but still no luck connecting to it with either DISPLAY=:0 or DISPLAY=localhost:0.
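
    Two checks that often account for exactly these symptoms, sketched with typical values (the cookie path comes from the ps line above; everything else is an assumption about the setup):

        # is X actually listening on TCP now? display :0 = TCP port 6000
        netstat -ln | grep 6000

        # a host firewall may still be dropping the port
        iptables -L -n | grep 6000

        # authorize the connecting user with the server's own cookie,
        # then point a test client at the display
        xauth -f /var/gdm/:0.Xauth extract - :0 | xauth merge -
        DISPLAY=localhost:0 xterm

    Note also that xhost + run inside an ssh -X session talks to whatever $DISPLAY points at (the forwarded proxy display, e.g. localhost:10), not the real :0, unless DISPLAY was explicitly reset first; that would explain why it had no visible effect.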

  • Connecting Dell PowerVault NAS to ESXi

    - by Matt Fitz
    Just got a Dell PowerVault NAS storage device running Windows Storage Server Standard, which includes NFS. When I try to connect the ESXi server to it, I get the following message: Call "HostDatastoreSystem.CreateNasDatastore" for object "ha-datastoresystem" on ESXi "powerhouse" failed. Operation failed, diagnostics report: Unable to complete Sysinfo operation. Please see the VMkernel log file for more details. I am pretty sure it is a username/password type thing, but I'm not sure where to begin. Also, I am planning on using "Username Mapping" instead of Active Directory. Any ideas would be greatly appreciated.
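
    ESXi mounts NFS exports as root, so on the Windows side the share generally has to allow unmapped (anonymous/root) UNIX access for the VMkernel IP before the mount will succeed; that is worth verifying in Services for NFS before digging further. A hedged sketch of the mount itself from the ESXi 4 CLI (host, export path, and datastore name are placeholders):

        # add an NFS datastore, then list mounts to confirm
        esxcfg-nas -a -o powervault.example.local -s /nfs_export datastore1
        esxcfg-nas -l

    If the CLI fails with the same Sysinfo error, the VMkernel log referenced in the message usually names the exact NFS error underneath.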

  • Many partitions on a single filegroup: does it make sense?

    - by river0
    Hi, I'm designing a data warehouse solution and I'm a newbie in disk configuration issues, so let me explain. Our storage is spread over 6 storage enclosures, each holding 5 RAID-1 disk arrays, with 2 LUNs defined per disk array, for a total of 48 LUNs (this follows the Microsoft Fast Track recommendations for data warehouse architectures). I would like to partition my data; on other projects I have worked on, we always followed a one partition, one filegroup rule. The Microsoft Fast Track recommendations advise creating a filegroup and then, for that filegroup, one data file per LUN. But I intend to partition at the week level, and if I apply that rule I think I'll end up with too many files and a complex layout. I'm thinking of creating just one filegroup (with the 48 LUN data files) but still creating the partitions, since I want to keep some of the benefits of partitioning, like partition switching. Is this scenario not recommended? What would you suggest?
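
    For what it's worth, SQL Server supports this layout directly: a partition scheme can map every partition to a single filegroup with the ALL TO clause, and partition switching still works as long as source and target sit on the same filegroup. A minimal T-SQL sketch (the names, boundary dates, and filegroup are placeholders, assuming weekly RANGE RIGHT partitioning on SQL Server 2005/2008):

        -- weekly partitions, all mapped to one filegroup
        CREATE PARTITION FUNCTION pfWeekly (datetime)
        AS RANGE RIGHT FOR VALUES ('20100104', '20100111', '20100118');

        CREATE PARTITION SCHEME psWeekly
        AS PARTITION pfWeekly ALL TO ([DataFG]);

    The trade-off is the one the Fast Track paper is driving at: with one filegroup you give up per-partition placement and per-filegroup backup/restore granularity, but you keep switching, merge/split, and partition elimination.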

  • Windows 2008 terminal server - How to restrict access to DVD/floppy?

    - by test1839
    I have a very simple task: I need to block access to removable media (CD, DVD, floppy, USB drives, etc.) on a Windows 2008 R2 Terminal Server for users, while allowing it for admins. I tried to enable the following policy in a GPO: User Configuration / Administrative Templates / System / Removable Storage Access / "All Removable Storage classes: Deny all access" = Enabled. But it did not work. I tried different physical and virtual 2008 servers with the same result; it works on Windows 7 but not on Windows 2008. Has anyone had success with this setting on Windows 2008? Thank you.
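
    A hedged observation rather than a confirmed fix: on a terminal server, user-side policy from a GPO linked to the server's computer OU only applies when loopback processing is enabled, which would explain the Windows 7 vs. 2008 difference. The policy itself just writes a registry value, so you can at least check whether it is landing. A sketch in PowerShell (per-user hive shown; in practice this is deployed by the GPO, not by hand):

        # the value behind "All Removable Storage classes: Deny all access"
        $key = "HKCU:\Software\Policies\Microsoft\Windows\RemovableStorageDevices"
        Get-ItemProperty -Path $key -Name Deny_All -ErrorAction SilentlyContinue

    If Deny_All is absent after gpupdate /force while logged on as a test user, the GPO is not being applied at all (a scope/loopback problem) rather than being applied and ignored.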

  • Server vendor that allows 3rd party disks

    - by Alvin S
    As noted here, Dell is no longer allowing 3rd-party disks to be used with their latest servers. As in, they don't work, period. Which means that if you buy one of these boxes and want to upgrade the storage later, you have to buy disks from Dell at a significant premium. Dell has just given me a very strong reason to take my server business elsewhere. My company buys (instead of leasing) our servers and typically uses them for 5 years. I need to be able to upgrade or repurpose storage periodically, and I do not want to be locked in to whatever Dell might have in stock, at inflated prices to boot. As you will see in the comments of the link above, it seems HP is doing the same thing. I am looking for a server vendor that offers a 3-5 year warranty with same-day/next-day onsite service and allows me to use 3rd-party disks. Suggestions?

  • Best filesystem for an AWS S3-like service

    - by gucki
    Hi! I need to build a fault-tolerant, highly available key/value store (no POSIX semantics, only the same functionality as S3) using cheap existing hardware. The store should be able to handle several billion items. The maximum item size is around 1 GB; most are only several KB. What's the best software/filesystem for this task? I already had a brief look at MogileFS, MongoDB (GridFS) and GlusterFS, but I'm not really sure which is stable and fault-tolerant enough. The simpler the setup and later expansion, the better :). Corin

  • Advantage of using Nexenta vs. OpenSolaris

    - by jotango
    I am currently building a NAS for about 24 TB of storage: video files, slow access, long-term storage, no performance requirements. I am currently undecided between buying a JBOD case and installing OpenSolaris (because of ZFS), or purchasing a Nexenta license. The difference is about $12,500 for licenses over three years. What would you see as the main advantage in purchasing a Nexenta license, besides the support? Did Nexenta really enhance the basic OpenSolaris, or is it just a lot of marketing speak? No one has really wanted to answer that question.

  • Alternatives to FTP

    - by Jack Hickerson
    I need to share files with clients outside of my business, and unfortunately our FTP server is becoming too much of a hassle, with regard both to clients' use of an FTP client and to creating password-protected downloads based on customized account privileges. Essentially, I need:
      - a remote service that mimics an FTP server, with a web interface (easy for basic internet users to understand)
      - over 100 GB of storage
      - file transfer sizes over 2 GB
      - customizable user account privileges (password-protected downloads)
      - secure storage and data transfer
      - preferably less than $100/mo
    I have already looked into some services that almost meet my requirements (StreamFile.com, box.net, onehub.com, filesanywhere.com). Has anyone used a service they would recommend? cheers, jack

  • Efficient organization of spare cables and hardware

    - by Jake Wharton
    As many of you likely do, I have a growing collection of cables, hardware, and spare parts (screws, connectors, etc.). I'm looking for a good system of organization so that everything isn't a tangled mess, mismatched, and liable to be damaged. Since the three categories above all vary in size and degree of delicacy, this poses an interesting problem. Presently I have those cheap plastic storage bins you find at Wal-Mart for everything. Cables that were once wrapped neatly have become tangled due to numerous "I know I have a cable for this" moments. Hardware is mixed into other bins with odds and ends, with no protection from each other: NICs, CPUs, and HDDs are all knocking together and likely getting damaged. Finally, there are stray parts sprinkled amongst these two, both in plastic bags and loose. I'm looking to unify this storage into a controlled chaos. Here are my thoughts: Odds and ends are the easiest; screws, connectors, and small electronic parts lend themselves perfectly to tackle boxes and jewelry boxes, and since those are usually dynamically compartmentalized, I can adjust for the contents and label them on the outside or inside of the lid. Cables are easily wrangled with short velcro strips, but that doesn't stop them from being all mixed in together. Hardware is the worst offender: size, shape, and degree of delicacy change with nearly every piece, and I'm willing to sacrifice a bit of organization for a somewhat efficient arrangement. What are your thoughts? What is the best type of tackle or jewelry box to use? Most of them are cheap and flimsy; is there a better alternative? How can I organize cables so I know exactly (within reason) where one is? What about associating cables with hardware (wall adapter to router, etc.)? What kind of storage unit lends itself to all shapes of hardware? Do I need to separate by size or degree of delicacy for better organization?

  • VMware ESXi 4 On-Disk Data Deduplication - possible and supported?

    - by hurikhan77
    Environment: We are running multiple web, database, and application servers which usually share a pretty common installation (Gentoo Linux) and similar configuration on VMware ESXi 4. The differences are usually only a few installed features or differing component versions. To create a new server, I usually choose the most similar (by features) running server, rsync a copy of it into freshly mounted filesystems, run grub, reconfigure, and reboot. Problem: Over time this duplicates many on-disk data blocks, which probably adds up to several tens of gigabytes. I suppose that if I could use a base system as a template, with the actual machines layered on top of it and only changed blocks written to some sort of "diff image", performance should improve (increased cache hit rate) and storage efficiency should increase (deduplicated storage space). This would be similar to what ESXi already supports for RAM deduplication (page sharing). Question: Is there any way to easily do this on ESXi 4? I already share the Portage tree via NFS, but this would not work for the root fs.

  • Fast, reliable data transfers from/to China

    - by Nils
    We are a small company and we will need to transfer rather large amounts of data (10 GB+ each time) between Europe and China in the near future. As many have experienced, Internet connections to or from China can be rather unreliable and slow at times without any apparent reason. For example, while sending data to China via FTP generally works well, it can be painfully slow in the other direction. Currently, we are investigating new ways to get high transfer rates in both directions. So far we have tried:
      - FTP (see above)
      - FTP over VPN services (generally slower than direct connections)
      - F2F (like RetroShare or Freenet - slow!!)
      - Aspera (fast but expensive!)
      - BitTorrent (unreachable end nodes, because of firewalls which we are not allowed to reconfigure)
    We would like to try:
      - Cloud storage (e.g. Amazon S3, Google Storage) - are those services always and reliably reachable from inside China?
      - Point-to-point VPN (currently not possible, because of the network, see above)
    I'd be especially grateful to hear from people who have already dealt with this kind of problem.

  • What disk setup is needed / best practice for hypervisor-only servers?

    - by Luke404
    Planning to buy some servers to run a hypervisor (Citrix XenServer or VMware vSphere; we still have to decide between the two), we'd like to boot off the local redundant SD-card module offered by various vendors (e.g. Dell, HP, etc.). The actual VMs will run from an existing iSCSI SAN (which, by the way, can't support booting the servers directly off the SAN). What are the reasons, if any, to choose completely diskless servers vs. having some local storage? And what would be the guidelines for choosing that local storage (number of spindles, RAID level, etc.)?

  • Network Drive Via Ethernet Port for Speed?

    - by Yar
    I have a MacBook with FireWire 400 and USB 2.0, so the only way I can get fast external storage is through the Ethernet port. A really fast FireWire 800 drive on another computer is actually much faster than the built-in drive (according to XBench). So I thought I would go one better and buy an Ethernet-ready drive. I bought a Seagate GoFlex Home Network Storage System, and it seems the only way to get it to work is to plug it into a router. Can this drive be used without a router (i.e., connected directly to the computer)? Are there any drives that can be plugged directly into the Ethernet port for fast access? I don't want the drive on my router: I want it on my computer. Ideally I'd need 7200 rpm or faster, too. Update: Just chatted with Seagate and they said that this particular drive will not work that way. Will any others?

  • Any experience with SATA SAS Interposer Cards?

    - by korkman
    Driven by the current price difference between SATA and SAS disks on one side, and the potentially bad behaviour of SATA disks in bigger storage arrays on the other, I have found so-called SATA-to-SAS interposer cards. Advertised as "seamlessly adding SAS capabilities to existing SATA disk drives", I wonder if anyone here has experience with these or similar products. The major benefits I can identify are the increased cable voltage (if all drives are SAS-connected), the ability to power-cycle the drive, and multipath (if desired). Obviously the SATA drive will still have to be a RAID edition. The question is: do these cards indeed increase the overall reliability of a storage system, or will failing SATA disks cause trouble nevertheless? Edit: I'm not asking for hypothetical answers, only actual experience please. I'm well aware that the typical 10k SAS drive is more reliable (and better performing) than a 7200 rpm SATA drive. But how does a nearline SAS drive, which is physically the same disk as its SATA counterpart, compare to the SATA version with an interposer?

  • Expendable, Redundant, Easily recoverable

    - by MeIr
    I am desperate at this point; I have been looking for a "big storage" solution for a while on my own, and I can't find anything that would suit my needs. But now push has come to shove. Current situation: I have about 6 TB of data storage (already full) on a Drobo. Yesterday the Drobo died on me, which puts me in a bad situation: I can't recover my data without buying another Drobo. From extensive research online I realized that Drobo is not the safest bet, and by now it seems a very poor choice. I ordered a new Drobo to try to get my data back; however, I don't want to be in the same situation later, and continuing to use Drobo promises this event will re-occur. What I am looking for:
      1) Inexpensive setup.
      2) Dynamically extendable - add more drives and/or replace a drive with one of bigger capacity.
      3) Redundant - able to survive 1-3 drive failures, depending on the total number of drives. For the sake of argument, assume that for every 4 drives, one should be able to fail without data loss.
      4) Easy data recovery - if the unforeseen happens, I would like to be able to recover the data without buying new tools or replacements (example: a new Drobo).
      5) Should be USB or network-attached storage.
      6) No demands on speed. It doesn't have to be fast, as I am not doing video editing on this setup; but if the option exists, a decent speed would be nice.
    Afterthoughts: I reviewed a few options, and FreeNAS looks nice, but it doesn't have #2, dynamic extendability. There are workarounds with pools, but that seems complicated and unnecessary. Moreover, data safety seems to be a big question; I saw some horror stories. Please advise on what options I have and what seems like an optimal solution (if any). I don't care if it has to be a Windows or Linux box or any other OS and/or software running on top, but a simple solution is more attractive. Thank you! P.S.: Feel free to ignore "Afterthoughts".

  • What is the "real" difference between a NAS and NFS?

    - by warren
    From an end-user perspective, what is the difference between a NAS device and using NFS exports from a file server? They seem to accomplish the same end result. The difference between a SAN and other file storage is related (in my experience) to how they are connected to the server infrastructure. However, the difference between a NAS, connecting over standard ethernet, and NFS (sharing storage off specific servers, also over the network), seems more nebulous. Is there a good reason to pick a NAS filer over NFS on servers?

  • Email server - Disk quota sizes - suggestions?

    - by Ian H
    Working out a new server for an agency of 200 employees, with approximately 240 email accounts. Internally I'm arguing with myself over the amount of drive space to allocate to each user's disk quota, so I'm just looking for suggestions; once I have a quota size decided, it will define the storage solution. I've considered everything from 4 GB per account (which I feel is generous) down to 500 MB (which is rather restrictive in this day and age). Thing is, 4 GB per account works out to just under 1 TB of allocated storage for email alone (240 accounts x 4 GB = 960 GB). Does anyone follow a rule of thumb or have thoughts on this? Thanks in advance.

  • Amazon S3: allow users to upload on a restricted basis (per bucket maybe)?

    - by Tom
    Hi there, I'm thinking about signing up for the Amazon S3 storage service. What I want to do is create a service where other people can register their own bucket with a certain amount of storage. These users will install my software, which then uploads their files. Of course, the users may only upload what they have paid for. For this to work I would like to create a separate bucket for each customer, each with its own properties. Question 1: is this possible with the API? How? This also means that the installed software must have the rights needed to upload to my Amazon S3 account. Question 2: can I create individual authentication IDs for each bucket or customer, so that they can only upload within restrictions I have set? Thanks in advance.
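
    Both questions have a yes answer on today's AWS, though not with the bare S3 API of the time: buckets can be created programmatically, and per-customer restrictions are normally enforced by having your service hand out short-lived upload grants instead of shipping account credentials with the software. A minimal sketch with boto3 (the bucket name, size cap, and expiry are placeholder policy choices, not values from the question; region configuration is omitted):

        import boto3

        s3 = boto3.client("s3")
        s3.create_bucket(Bucket="customer-42-bucket")       # Question 1: one bucket per customer

        # Question 2: a presigned POST lets the installed client upload
        # exactly one object, size-capped, for one hour - no credentials shipped
        grant = s3.generate_presigned_post(
            Bucket="customer-42-bucket",
            Key="uploads/invoice-001.dat",
            Conditions=[["content-length-range", 0, 50 * 1024 * 1024]],
            ExpiresIn=3600,
        )
        # the client then HTTP-POSTs the file to grant["url"] with grant["fields"]

    IAM policies scoped to a bucket ARN are the other half of the answer if each customer instead gets long-lived credentials of their own.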

  • Explained: EF 6 and “Could not determine storage version; a valid storage connection or a version hint is required.”

    - by Ken Cox [MVP]
    I have a legacy ASP.NET 3.5 web site that I've upgraded to a .NET 4 web application. At the same time, I upgraded to Entity Framework 6. Suddenly one of the pages returned the following error:

        [ArgumentException: Could not determine storage version; a valid storage connection or a version hint is required.]
           System.Data.SqlClient.SqlVersionUtils.GetSqlVersion(String versionHint) +11372412
           System.Data.SqlClient.SqlProviderServices.GetDbProviderManifest(String versionHint) +91
           System.Data.Common.DbProviderServices.GetProviderManifest(String manifestToken) +92

        [ProviderIncompatibleException: The provider did not return a ProviderManifest instance.]
           System.Data.Common.DbProviderServices.GetProviderManifest(String manifestToken) +11431433
           System.Data.Metadata.Edm.Loader.InitializeProviderManifest(Action`3 addError) +11370982
           System.Data.EntityModel.SchemaObjectModel.Schema.HandleAttribute(XmlReader reader) +216

    A search for the error message didn't turn up anything helpful, except that someone mentioned the message was bogus in his case. The page in question uses the ASP.NET EntityDataSource control, consumed by a Telerik RadGrid, a fabulous combination for putting a huge amount of functionality on a page in a very short time. Unfortunately, the 6.0.1 release of EF6 doesn't support EntityDataSource. According to the people in charge, support is planned but there's no timeline for an EntityDataSource build that works with EF6. I'm not sure what to do in the meantime: should I back out EF6 or manually wire up the RadGrid? The upshot is that you might want to rethink plans to upgrade to Entity Framework 6 for Web Forms projects if they rely on that handy control. It might also help to cast a UserVoice vote here: http://data.uservoice.com/forums/72025-entity-framework-feature-suggestions/suggestions/3702890-support-for-asp-net-entitydatasource-and-dynamicda
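
    For anyone weighing the "manually wire up the RadGrid" option, the usual replacement for a declarative EntityDataSource is the grid's NeedDataSource event with an EF6 context behind it. A hedged sketch; the context and entity names are invented for illustration:

        // code-behind replacement for the EntityDataSource
        protected void RadGrid1_NeedDataSource(object sender, GridNeedDataSourceEventArgs e)
        {
            using (var ctx = new NorthwindContext())   // hypothetical EF6 DbContext
            {
                // ToList() materializes the query before the context is disposed
                RadGrid1.DataSource = ctx.Orders.AsNoTracking().ToList();
            }
        }

    You lose the automatic insert/update/delete plumbing the data source control provided, which is exactly the trade-off the post is weighing.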

  • Fortigate Remote VPN: no matching gateway for new request

    - by Kedare
    I am trying to configure a FortiGate 60C to act as an IPsec endpoint for remote VPN. I configured it like this:

        SCR-F0-FGT100C-1 # diagnose vpn ike config
        vd: root/0
        name: SCR-REMOTEVPN
        serial: 7
        version: 1
        type: dynamic
        mode: aggressive
        dpd: enable retry-count 3 interval 5000ms
        auth: psk
        dhgrp: 2
        xauth: server-auto
        xauth-group: VPN-group
        interface: wan1
        distance: 1
        priority: 0
        phase2s: SCR-REMOTEVPN-PH2
            proto 0 src 0.0.0.0/0.0.0.0:0 dst 0.0.0.0/0.0.0.0:0
            dhgrp 5 replay keep-alive dhcp
        policies: none

    Here is the configuration:

        config vpn ipsec phase1-interface
            edit "SCR-REMOTEVPN"
                set type dynamic
                set interface "wan1"
                set dhgrp 2
                set xauthtype auto
                set mode aggressive
                set proposal aes256-sha1 aes256-md5
                set authusrgrp "VPN-group"
                set psksecret ENC xxx
            next
        end

        config vpn ipsec phase2-interface
            edit "SCR-REMOTEVPN-PH2"
                set keepalive enable
                set phase1name "SCR-REMOTEVPN"
                set proposal aes256-sha1 aes256-md5
                set dhcp-ipsec enable
            next
        end

    But when I try to connect from a remote device (I tested with an Android phone), the phone fails to connect and the FortiGate returns this error:

        2012-07-20 13:08:51 log_id=0101037124 type=event subtype=ipsec pri=error vd="root"
        msg="IPsec phase 1 error" action="negotiate" rem_ip=xxx loc_ip=xxx rem_port=1049
        loc_port=500 out_intf="wan1" cookies="xxx" user="N/A" group="N/A" xauth_user="N/A"
        xauth_group="N/A" vpn_tunnel="N/A" status=negotiate_error
        error_reason=no matching gateway for new request peer_notif=INITIAL-CONTACT

    I tried searching the web, but I did not find anything relevant to this. Do you have any idea what the problem might be? I have tried many combinations of settings on the FortiGate without success.

  • Architecture for a business objects / database access layer

    - by gregmac
    For various reasons, we are writing a new business objects / data storage library. One of the requirements of this layer is to separate the logic of the business rules from the actual data storage layer. It is possible to have multiple data storage layers that implement access to the same object - for example, a main "database" storage source that implements most objects, and another "ldap" source that implements a User object. In this scenario, User can optionally come from an LDAP source, perhaps with slightly different functionality (e.g., it may not be possible to save/update the User object), but otherwise the application uses it the same way. Another data storage type might be a web service, or an external database. Here are the main ways we are looking at implementing this; a co-worker and I disagree on a fundamental level about which is correct, so I'd like some advice on which one is best. I'll try to keep my descriptions of each as neutral as possible, as I'm looking for objective viewpoints here.

    Option 1: Business objects are base classes, and data storage objects inherit from them; client code deals with the data storage objects. Common business rules are inherited by each data storage object, and it is the data storage objects that are directly used by the client code. This implies that client code determines which storage method to use for a given object, because it has to explicitly declare an instance of that type. Client code needs to explicitly know connection information for each data storage type it uses. If a storage layer implements different functionality for a given object, client code knows about it at compile time because the object looks different. If the data storage method changes, client code has to be updated.

    Option 2: Business objects encapsulate data storage objects. Business objects are directly used by the client application, which passes base connection information to the business layer. The decision about which data storage method a given object uses is made by business object code. Connection information would be a chunk of data taken from a config file (the client app does not really know or care about its details), which may be a single connection string for a database, or several connection strings for various data storage types. Additional data storage connection types could also be read from another spot - e.g., a configuration table in a database that specifies URLs to various web services. The benefit here is that if a new data storage method is added for an existing object, a configuration setting can select it at runtime, completely transparently to the client applications. Client apps do not need to be modified if the data storage method for a given object changes.

    Option 3: Business objects are base classes, data source objects inherit from them, but client code deals primarily with the base classes. This is similar to the first option, but client code declares variables of the base business-object types, and static Load()/Create()/etc. methods on the business objects return appropriately typed data source objects. The architecture is similar to option 1, but the main difference is that the decision about which data storage object to use for a given business object is made by the business layer, not the client code.
    I know there are already existing ORM libraries that provide some of this functionality, but please discount those for now (there is the possibility that a data storage layer would be implemented with one of these ORM libraries) - also note I'm deliberately not telling you what language is being used, other than that it is strongly typed. I'm looking for some general advice here on which method is better to use (or feel free to suggest something else), and why. A concrete sketch of what I mean follows below.
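
    To make the comparison concrete, here is a minimal sketch of option 2 in Python (chosen only for brevity; the question's language is deliberately unnamed, and all class and method names are invented): the business object owns a storage backend chosen by the business layer from configuration, and a read-only backend like LDAP simply refuses to save.

        from abc import ABC, abstractmethod

        class UserStore(ABC):
            """Data storage interface; one implementation per backend."""
            @abstractmethod
            def load(self, user_id: str) -> dict: ...
            def save(self, user_id: str, data: dict) -> None:
                raise NotImplementedError("this backend is read-only")

        class DatabaseUserStore(UserStore):
            def __init__(self, conn_str: str): self.conn_str = conn_str
            def load(self, user_id): return {"id": user_id}   # stub query
            def save(self, user_id, data): pass               # stub update

        class LdapUserStore(UserStore):
            def __init__(self, url: str): self.url = url
            def load(self, user_id): return {"id": user_id}   # stub search
            # save() deliberately not overridden: LDAP users can't be updated

        def store_from_config(cfg: dict) -> UserStore:
            """The business layer, not the client, picks the backend."""
            if cfg.get("user_source") == "ldap":
                return LdapUserStore(cfg["ldap_url"])
            return DatabaseUserStore(cfg["db_conn"])

        class User:
            """Business object: client code sees only this class."""
            def __init__(self, user_id: str, cfg: dict):
                self._store = store_from_config(cfg)
                self.data = self._store.load(user_id)
            def save(self):
                self._store.save(self.data["id"], self.data)

    Swapping the backend is then purely a configuration change; the trade-off against options 1 and 3 is that differences in backend capability surface at runtime (the NotImplementedError) rather than at compile time.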
