Search Results

Search found 433 results on 18 pages for 'centralized'.

Page 7/18

  • changed plesk root name, what DNS settings get modified?

    - by NRGdallas
    We recently changed our Plesk server's main URL from siteold.com to sitenew.com. Many websites had their NS set to ns1.siteold.com - does Plesk automatically update that to ns1.sitenew.com? Should I change the GoDaddy settings? Attempting to change them returns "Nameserver Not Registered" - is this simply a registration delay that needs to run its course? Lastly, when adding a new domain to Plesk, would one simply need to point the nameserver for that site in GoDaddy to ns1.sitenew.com, or to ns1.newdomain.com? (Does Plesk have a centralized name server, or does each site acquire its own?)

    Read the article

  • What kind of programmer job positions are there in professional game development? [on hold]

    - by skiwi
    I have been wondering about the following recently, seeing as I want to pursue a career in game development after university: what programmer job positions are there in professional game development? Think about AAA titles, etc. Which programming languages are the most commonly used in that area? I can think of some job aspects, like game engine, network, centralized server and artificial intelligence work. I am just wondering what options I have later on, and in which programming languages I should invest right now. I am quite proficient with Java, and am also wondering if that is of any help.

    Read the article

  • Desktop GUI Client - Remote RDBMS communication

    - by magom001
    Sorry if I am asking a trivial question, but I have been searching for a while without any luck. I need to design a system and I am looking for advice on the technology that should be used. The layout is very simple: it is a sales application with a centralized database and multiple clients. Each salesperson has a GUI app installed on his/her laptop that should be able to connect to the database to retrieve data and upload data (i.e. register new orders). My question is the following: how should the communication between the client and the server be implemented? I doubt that connecting directly to the RDBMS is a good idea... Should I use web services? XML-RPC? How should I implement authentication and encode the data? Thanks for your advice!
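
    To make the "thin service layer in front of the RDBMS" option concrete, here is a minimal sketch in Python using Flask; the endpoint names, table, and SQLite file are illustrative assumptions, not a prescription for the poster's actual stack, and a SOAP or XML-RPC service would play the same role.

        # Sketch of a thin API layer in front of the central database, so GUI
        # clients never open a direct RDBMS connection. The table schema and
        # paths are hypothetical; a real deployment would add TLS and per-user
        # authentication before touching the database.
        import sqlite3
        from flask import Flask, jsonify, request

        app = Flask(__name__)
        DB_PATH = "sales.db"  # hypothetical central database

        def db():
            conn = sqlite3.connect(DB_PATH)
            conn.row_factory = sqlite3.Row
            return conn

        @app.route("/orders", methods=["POST"])
        def create_order():
            payload = request.get_json(force=True)
            with db() as conn:  # commits on success
                conn.execute(
                    "INSERT INTO orders (salesperson, customer, amount) VALUES (?, ?, ?)",
                    (payload["salesperson"], payload["customer"], payload["amount"]),
                )
            return jsonify({"status": "created"}), 201

        @app.route("/orders/<salesperson>", methods=["GET"])
        def list_orders(salesperson):
            with db() as conn:
                rows = conn.execute(
                    "SELECT customer, amount FROM orders WHERE salesperson = ?",
                    (salesperson,),
                ).fetchall()
            return jsonify([dict(r) for r in rows])

        if __name__ == "__main__":
            app.run()

    The salesperson's GUI app then only ever speaks HTTP(S) to this one service, which is where authentication, encryption, and validation live, rather than exposing the RDBMS directly.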

    Read the article

  • How to share connection pool among multiple Java applications

    - by Wilson
    Hi all. I'm implementing several Java SE applications on a single server. Is it possible to set up a single connection pool (e.g. C3P0) and share it among these applications? I just want an easy way to manage the total number of DB connections. Are there any drawbacks to using such a centralized connection pool? Thank you, Wilson

    Read the article

  • User authentication and DHT

    - by mtasic
    Let's say that I only have a DHT (distributed hash table) implemented (in Python), and I want to build an authentication service over a P2P network, but without introducing a centralized authentication server for such a service. Can it be done, and if so, how can I achieve this? I'm familiar with how Skype and Wuala have done this, but I am looking for a decentralized solution without a single point of failure.
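
    The question is open-ended, but one common decentralized pattern is to store only a salted, slow-hashed verifier in the DHT under a key derived from the username, so no single node holds a credential database. A minimal sketch, assuming a dht object exposing put(key, value) and get(key) (a hypothetical API) and ignoring node trust and replay protection:

        import hashlib
        import os

        def register(dht, username, password):
            # Store a salted PBKDF2 verifier under a key derived from the username.
            # Real systems also need signing so other peers cannot overwrite it.
            salt = os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            key = hashlib.sha1(("auth:" + username).encode()).hexdigest()
            dht.put(key, salt.hex() + ":" + digest.hex())

        def authenticate(dht, username, password):
            key = hashlib.sha1(("auth:" + username).encode()).hexdigest()
            record = dht.get(key)
            if record is None:
                return False
            salt_hex, digest_hex = record.split(":")
            candidate = hashlib.pbkdf2_hmac(
                "sha256", password.encode(), bytes.fromhex(salt_hex), 100_000
            )
            return candidate.hex() == digest_hex

    This only shows where the credential data could live; without signatures any peer can overwrite the record, which is the gap that systems like Wuala close with client-side keys.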

    Read the article

  • Web-Frameworks for Education Management Systems?

    - by Indebi
    So, I'm working on an idea and I'll give a brief overview of it, but my question is: what are some good web frameworks for this situation? I have some experience in C# and Python - considerably more in C# than Python - however I am expecting to learn new things.

    My idea is this: a completely web-based, community-oriented Education Management System that focuses on making students' and teachers' day-to-day lives easier. For students it will provide a centralized place for them to do homework, study for tests, and reinforce concepts learned previously in class. For teachers it will give a centralized place to handle assignments, attendance, homework, tests, and all other major parts of classroom management. All of that, but in a community-oriented fashion: everything a teacher does is shared and open to constructive criticism, allowing other teachers to use their assignments/tests and allowing students or other teachers to comment on, rate and criticize those assignments. This encourages an environment of openness that will allow teachers to focus on teaching and students to focus on learning. The community wouldn't be limited to one school or school district; the system would be completely school-independent. Please note that I have no problem hearing constructive criticism on this idea, however I would prefer this post to stay focused on my question.

    I have explored the following options somewhat: Django, ASP.NET, Ruby on Rails and Silverlight.

    (1) Django: I have it installed and I played with it for a little bit. I really like how easy setting up databases is and how it handles the database completely for you (see the model sketch below). I don't really know how to use it very well yet and I don't quite understand its Model-View-Controller paradigm, but I haven't thought about it much. I also like the fact that it uses Python.

    (2) ASP.NET: I don't really like Visual Studio for developing in ASP.NET; I hate the way the web designer works and it just feels clunky and old. I do like the server-side development part, though. I don't like how expensive ASP.NET and Visual Studio are overall, even if I do get them for free for now through DreamSpark.

    (3) Ruby on Rails: I haven't been able to explore this much because I could not get Rails (or maybe Ruby) properly installed. I first installed it within RadRails and that didn't work, so I uninstalled RadRails, installed the latest version of Ruby from the official Windows installer, and then installed Rails through gem - and even after all that it still didn't work. I then installed NetBeans and attempted to use it there, but it still did not work.

    (4) Silverlight: I like Silverlight to some extent; I've played with this one the most and it's very similar to WPF (which I've used the most) in a lot of ways, but I don't like how database connectivity works, at least in comparison to Django. I also dislike how expensive everything from Microsoft is, even if I get it for free for now with DreamSpark.

    I would like to hear some suggestions from experienced web developers as to what I should use and why, or at least what some good options are for my scenario. Your help would be very much appreciated.
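
    To illustrate the point about Django handling the database for you (the part of option (1) the poster likes), here is a small, hypothetical model sketch for the assignment-sharing idea; the class and field names are invented for illustration, and Django generates the schema from them (syncdb in the Django of that era, migrate in current releases).

        # Hypothetical Django models for the education-management idea; no SQL
        # is written by hand, the ORM creates and queries the tables.
        from django.db import models

        class Teacher(models.Model):
            name = models.CharField(max_length=100)
            school = models.CharField(max_length=100)

        class Assignment(models.Model):
            teacher = models.ForeignKey(Teacher, on_delete=models.CASCADE)
            title = models.CharField(max_length=200)
            description = models.TextField()
            due_date = models.DateField(null=True, blank=True)
            shared_publicly = models.BooleanField(default=True)

        class Comment(models.Model):
            assignment = models.ForeignKey(Assignment, on_delete=models.CASCADE)
            author = models.CharField(max_length=100)
            body = models.TextField()
            rating = models.IntegerField(default=0)  # constructive criticism

    Django also calls its split Model-Template-View rather than MVC, which may be part of the paradigm confusion mentioned above: models hold the data, views hold the logic, and templates hold the presentation.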

    Read the article

  • service repository.

    - by SteveCav
    Hundreds of our clients around the country have a VB6/MS Access app. The boss needs them to talk to each other, e.g. client A creates a new task in client B's database, and status updates go back to A. I'm trying to design a WCF system that can accomplish this using a centralized service talking to a service of some kind installed at each client. What I'm wondering is: how does the central system know the addresses of the clients, i.e. how does it determine and consume services on the fly? What's a good architecture to fit these requirements?

    Read the article

  • c++ exceptions and program execution logic

    - by Andrew
    Hello everyone, I have been through a few questions but did not find an answer. I wonder how exception handling should be implemented in C++ software so that it is centralized and tracks the program's progress. For example, I want to process exceptions at four stages of the program and know that an exception happened at that specific stage: 1. Initialization, 2. Script processing, 3. Computation, 4. Wrap-up. At this point, I tried this:

        int main (...)
        {
            ...
            // start logging system
            try
            {
                ...
            }
            catch (exception &e)
            {
                cerr << "Error: " << e.what() << endl;
                cerr << "Could not start the logging system. Application terminated!\n";
                return -1;
            }
            catch (...)
            {
                cerr << "Unknown error occurred.\n";
                cerr << "Could not start the logging system. Application terminated!\n";
                return -2;
            }

            // open script file and acquire data
            try
            {
                ...
            }
            catch (exception &e)
            {
                cerr << "Error: " << e.what() << endl;
                cerr << "Could not acquire input parameters. Application terminated!\n";
                return -1;
            }
            catch (...)
            {
                cerr << "Unknown error occurred.\n";
                cerr << "Could not acquire input parameters. Application terminated!\n";
                return -2;
            }

            // computation
            try
            {
                ...
            }
            ...

    This is definitely not centralized and seems stupid. Or maybe it is not a good concept at all?

    Read the article

  • Is there a good way of displaying required field indicators when using DataAnnotations in MVC 2?

    - by Jeremy Gruenwald
    I've got validation working with DataAnnotations on all my models, but I'd like to display an indicator for required fields on page load. Since I've got all my validation centralized, I'd rather not hard-code indicators in the View. Calling validation on load would show the validation summary. Has anyone found a good way of letting the model determine what's required, but checking it upon rendering the view, similar to Html.ValidationMessageFor?

    Read the article

  • Multiple login locations for an online app.

    - by Goro
    Hello, I am working on a browser-based application that will have many users. The catch is that every user should have their own customized login page, but the actual application is the same for everyone and needs to be in a central location. What is the most secure way of doing this? Would it make more sense to have a copy of the application for each user and keep the database centralized? The projected number of users is not very high, probably around 20-80. Thank you,
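
    One way to read the requirement is a single centralized application and database with only the login page varying per user; a rough sketch of that routing idea follows (the framework, template name, and customer lookup are all assumptions made for illustration).

        # Sketch: one shared app and one central database, per-customer login
        # branding selected by URL. Flask is used only for illustration.
        from flask import Flask, abort, render_template, request

        app = Flask(__name__)

        # Hypothetical per-customer branding; in practice this would live in
        # the centralized database rather than a dict.
        CUSTOMERS = {
            "acme": {"title": "ACME Portal", "logo": "acme.png"},
            "globex": {"title": "Globex Portal", "logo": "globex.png"},
        }

        @app.route("/<customer>/login", methods=["GET", "POST"])
        def login(customer):
            branding = CUSTOMERS.get(customer)
            if branding is None:
                abort(404)
            if request.method == "POST":
                pass  # verify credentials against the one central database here
            # Same application code and template for everyone; only the
            # branding passed into the template differs.
            return render_template("login.html", **branding)

    Keeping one code base and one database, and customizing only the data passed to the login template, is generally easier to secure and maintain for 20-80 users than deploying a copy of the application per user.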

    Read the article

  • Linux-specific API reference

    - by Goofy
    Hello! Where can I find centralized and complete documentation about Linux-specific APIs? I'm preparing a Linux port of my application and I want to use as many Linux-specific features as possible. So far I have found that Linux provides the epoll and inotify APIs, which is great news for me, because my program works as a network server and monitors local file systems.
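
    The authoritative references for these are the Linux man-pages (epoll(7), epoll_ctl(2), inotify(7)) and the kernel documentation tree. As a quick feel for the epoll half, here is a tiny sketch using Python's standard-library wrapper (select.epoll), which sits on top of the same Linux-specific system calls; the port and buffer size are arbitrary.

        # Minimal epoll-based listener sketch (Linux-only), using the
        # standard-library binding around epoll(7).
        import select
        import socket

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", 8080))
        server.listen(16)
        server.setblocking(False)

        epoll = select.epoll()
        epoll.register(server.fileno(), select.EPOLLIN)

        connections = {}
        try:
            while True:
                for fd, event in epoll.poll(1.0):
                    if fd == server.fileno():
                        conn, _ = server.accept()
                        conn.setblocking(False)
                        epoll.register(conn.fileno(), select.EPOLLIN)
                        connections[conn.fileno()] = conn
                    elif event & select.EPOLLIN:
                        data = connections[fd].recv(4096)
                        if data:
                            pass  # handle the request data here
                        else:  # peer closed the connection
                            epoll.unregister(fd)
                            connections.pop(fd).close()
        finally:
            epoll.close()
            server.close()

    inotify has no wrapper in the standard library of that era; the inotify(7) man page plus a binding such as pyinotify covers the file-monitoring side.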

    Read the article

  • Network authentication + roaming home directory - which technology should I look into using?

    - by Brian
    I'm looking into software which provides a user with a single identity across multiple computers. That is, a user should have the same permissions on each computer, and the user should have access to all of his or her files (roaming home directory) on each computer. There seem to be many solutions for this general idea, but I'm trying to determine the best one for me. Here are some details along with requirements:

    - The machines on the network are Amazon EC2 instances running Ubuntu. We access the machines with SSH.
    - Some machines on this LAN may have different uses, but I am only discussing machines for a certain use (running a multi-tenancy platform).
    - The system will not necessarily have a constant number of machines. We may have to permanently or temporarily alter the number of machines running. This is the reason why I'm looking into centralized authentication/storage.
    - The implementation should be a secure one. We're unsure if users will have direct shell access, but their software will potentially be running (under restricted Linux user names, of course) on our systems, which is as good as direct shell access. Let's assume that their software could potentially be malicious for the sake of security.

    I have heard of several technologies/combinations to achieve my goal, but I'm unsure of the ramifications of each. An older ServerFault post recommended NFS & NIS, though the combination has security problems according to an old article by Symantec. That article suggests moving to NIS+, but, as it is old, a Wikipedia article cites statements suggesting a trend away from NIS+ by Sun. The recommended replacement is another thing I have heard of... LDAP. It looks like LDAP can be used to store user information in a centralized location on a network. NFS would still need to be used to cover the "roaming home folder" requirement, but I see references to them being used together. Since the Symantec article pointed out security problems in both NIS and NFS, is there software to replace NFS, or should I heed that article's suggestions for locking it down? I'm tending toward LDAP because another fundamental piece of our architecture, RabbitMQ, has an authentication/authorization plugin for LDAP. RabbitMQ will be accessible in a restricted manner to users on the system, so I would like to tie the security systems together if possible.

    Kerberos is another secure authentication protocol that I have heard of. I learned a bit about it some years ago in a cryptography class but don't remember much about it. I have seen suggestions online that it can be combined with LDAP in several ways. Is this necessary? What are the security risks of LDAP without Kerberos? I also remember Kerberos being used in another piece of software developed by Carnegie Mellon University... the Andrew File System, or AFS. OpenAFS is available for use, though its setup seems a bit complicated. At my university, AFS provides both requirements... I can log in to any machine, and my "AFS folder" is always available (at least when I acquire an AFS token).

    Along with suggestions for which path I should look into, does anybody have any guides which were particularly helpful? As noted above, LDAP looks to be the best choice, but I'm particularly interested in the implementation details (Kerberos? NFS?) with respect to security.

    Read the article

  • Choice and setup of version control

    - by Peter M
    I am about to set up a new laptop and in the process transition to a new version control system as part of a general cleanup. Currently I use a centralized version control system (yes, it is VSS, and yes, I know all the pros and cons of that system, but as a single-user system it works well for me). I have very few requirements for a new system and I am free to choose among any of the current mainstream players, but cost constraints will push me towards OSS. Some of my requirements are:

    - Runs on a single machine (i.e. the laptop in question) under Windows
    - I am not sharing things with other developers or workers - this is more for my own historical benefit
    - I want to version source code, documentation and binary files
    - I have a large hierarchy of projects that are unrelated (see below)
    - I have files within the hierarchy that don't need to be controlled (but could be)
    - Some projects use Visual Studio, so some integration there could be nice
    - There could be some sharing of files between jobs
    - I generally only need a small amount of branching in code files

    The directory hierarchy that I have at the moment is somewhat like:

        Root
        |
        |--Customer #1
        |  |
        |  |--Job #1
        |  |  |
        |  |  |--Data files received from Customer for Job (not controlled)
        |  |  |--Documentation files (controlled)
        |  |  |--Project information files (not controlled - but could be)
        |  |  |--Software Project Files (controlled)
        |  |  |--Scratch dir for job (not controlled)
        |  |
        |  |--Job #2
        |     (same structure as above)
        |
        |--Customer #2
        |  |..
        |
        |--Customer #n
           |..

    Currently I have about 22 customers with differing numbers of projects underneath them. At the moment I have a single VSS repository based at the root of the directory structure. If I kept with a centralized system (i.e. SVN) I believe that I should keep the same approach and continue with a single repository based at the root dir. Is this a valid approach?

    However, if I move to a distributed tool then I am unsure of how I should handle the situation. My initial guess is that I should not have a repository based on the root of my entire directory structure - but that is a guess, so I really don't know how valid it is. Should I pitch a distributed approach at the Root, Customer, Job or sub-Job directory level?

    Also, what I am not clear on with distributed tools (and perhaps with SVN as well) is whether I can branch parts of a repository. For example, I can see branching source code in software projects as being useful, but branching my documentation as not being useful. So if I pitch a repository at the Job level, can I just branch the Software Project Files? Or would all files in that Job be branched?

    Every time I look at distributed tools I get a nagging feeling that they are not suited to my style of setup. I am uncomfortable with the idea of having to manually set up something like 50 to 80 separate repositories (if I pitch at the Job level, or 20+ if at the Customer level) within my directory hierarchy. This feeling also extends to having all those repositories scattered around as well - however I do have a backup strategy that I trust, so this latter feeling is pretty well unfounded. So what advice can you all give me? Thanks in advance!

    Read the article

  • IPv6: Should I have private addresses?

    - by AlReece45
    Right now, we have a rack of servers. Every server currently has at least 2 IP addresses, one for the public interface and another for the private. The servers that have SSL websites on them have more IP addresses. We also have virtual servers, which are configured similarly.

    Private Network

    The private range is currently just used for backups and monitoring. It's a gigabit port, and the interface usage does not usually get very high. There are other technologies we're considering using that would use this port: iSCSI (implementations usually recommend dedicating an interface to it, which would be yet another IP network), VPN to get access to the private range (something I'd rather avoid), dedicated database servers, LDAP, centralized configuration (like Puppet), and centralized logging. We don't have any private addresses in our DNS records (only public addresses). For our servers to utilize the correct IP address for the right interface (and not hard-code the IP address) probably requires setting up a private DNS server (so now we add 2 different DNS entries to 2 different systems).

    Public Network

    Our public range has a variety of services including web, email, and FTP. There is a hardware firewall between our network and the "public" network. We have a (relatively secure) method to instruct the firewall to open and close administrative access (web interfaces, SSH, etc.) for our current IP address. With either solution discussed, the host-based firewalls will be configured as well. The public network currently runs at a dedicated 20Mbps link. There are a couple of legacy servers with fast-ethernet ports, but they are scheduled for decommissioning. All of the other production boxes have at least 2 Gigabit Ethernet ports. The more traffic-heavy servers have 4-6 available (none is using more than the 2 Gigabit ports right now).

    IPv6

    I want to get an IPv6 prefix from our ISP, so that every "server" has at least one IPv6 interface. We'll still need to keep the IPv4 addresses up and available for legacy clients (web servers and email at the very least). We have two IP networks right now. Adding the public IPv6 addresses would make it three.

    Just use IPv6?

    I'm thinking about just dumping the private IPv4 range and using the IPv6 range as the primary means of all communications. If an interface starts reaching its capacity, utilize the newly freed interfaces to create a trunk. This has the advantage of letting either the public or the private traffic exceed 1Gbps if it needs to. The traffic for each interface is already analyzed on a regular basis to predict future bandwidth use. In the rare instances where bandwidth unexpectedly peaks: utilize QoS to ensure traffic (like our limited SSH access) is prioritized correctly so the problem can be corrected (if possible; our WAN is the bottleneck right now). It also has the advantage of not needing to make an entry for every private address. We may have private DNS (or just LDAP), but it'll be much more limited in scope, with fewer entries to duplicate.

    Summary

    I'm trying to make this network as "simple" as possible. At the same time, I want to make sure it's reliable, upgradeable, scalable, and (eventually) redundant. Having one IPv6 network and a legacy IPv4 network seems to be the best solution to me. Regarding using assigned IPv6 addresses for both networks, sharing the available bandwidth on one (more trunked if needed):

    - Are there any technical disadvantages (limitations, buffers, scalability)?
    - Are there any other security considerations (aside from the firewalls mentioned above) to consider?
    - Are there regulations or other security requirements (like PCI-DSS) that this doesn't meet?
    - Is there typical software for setting up a Linux network that doesn't have IPv6 support yet? (logging, LDAP, Puppet)
    - Some other thing I didn't consider?

    Read the article

  • Unlocking Productivity

    - by Michael Snow
    Unlocking Productivity in Life Sciences with Consolidated Content Management
    by Joe Golemba, Vice President, Product Management, Oracle WebCenter

    As life sciences organizations look to become more operationally efficient, the ability to effectively leverage information is a competitive advantage. Whether data mining at the drug discovery phase or prepping the sales team before a product launch, content management can play a key role in developing, organizing, and disseminating vital information. The goal of content management is relatively straightforward: put the information that people need where they can find it. A number of issues can complicate this; information sits in many different systems, each of those systems has its own security, and the information in those systems exists in many different formats. Identifying and extracting pertinent information from mountains of far-flung data is no simple job, but the alternative—wasted effort or even regulatory compliance issues—is worse. An integrated information architecture can enable health sciences organizations to make better decisions, accelerate clinical operations, and be more competitive.

    Unstructured data matters

    Often when we think of drug development data, we think of structured data that fits neatly into one or more research databases. But structured data is often directly supported by unstructured data such as experimental protocols, reaction conditions, lot numbers, run times, analyses, and research notes. As life sciences companies seek integrated views of data, they are typically finding diverse islands of data that seemingly have no relationship to other data in the organization. Information like sales reports or call center reports can be locked into siloed systems, and unavailable to the discovery process. Additionally, in the increasingly networked clinical environment, Web pages, instant messages, videos, scientific imaging, sales and marketing data, collaborative workspaces, and predictive modeling data are likely to be present within an organization, and each source potentially possesses information that can help to better inform specific efforts. Historically, content management solutions that had 21 CFR Part 11 capabilities—electronic records and signatures—were focused mainly on content-enabling manufacturing-related processes. Today, life sciences companies have many standalone repositories, requiring different skills, service level agreements, and vendor support costs to manage them. With the amount of content doubling every three to six months, companies have recognized the need to manage unstructured content from the beginning, in order to increase employee productivity and operational efficiency. Using scalable and secure enterprise content management (ECM) solutions, organizations can better manage their unstructured content. These solutions can also be integrated with enterprise resource planning (ERP) systems or research systems, making content available immediately, in the context of the application and within the flow of the employee's typical business activity. Administrative safeguards—such as content de-duplication—can also be applied within ECM systems, so documents are never recreated, eliminating redundant efforts, ensuring one source of truth, and maintaining content standards in the organization.

    Putting it in context

    Consolidating structured and unstructured information in a single system can greatly simplify access to relevant information when it is needed through contextual search. Using contextual filters, results can include therapeutic area, position in the value chain, semantic commonalities, technology-specific factors, specific researchers involved, or potential business impact. The use of taxonomies is essential to organizing information and enabling contextual searches. Taxonomy solutions are composed of a hierarchical tree that defines the relationship between different life science terms. When overlaid with additional indexing related to research and/or business processes, it becomes possible to effectively narrow down the amount of data that is returned during searches, as well as prioritize results based on specific criteria and/or prior search history. Thus, search results are more accurate and relevant to an employee's day-to-day work. For example, a search for the word "tissue" by a lab researcher would return significantly different results than a search for the same word performed by someone in procurement. Of course, diverse data repositories, combined with the immense amounts of data present in an organization, necessitate that the data elements be regularly indexed and cached beforehand to enable reasonable search response times. In its simplest form, indexing of a single, consolidated data warehouse can be expected to be a relatively straightforward effort. However, organizations require the ability to index multiple data repositories, enabling a single search to reference multiple data sources and provide an integrated results listing.

    Security and compliance

    Beyond yielding efficiencies and supporting new insight, an enterprise search environment can support important security considerations as well as compliance initiatives. For example, the systems enable organizations to retain the relevance and the security of the indexed systems, so users can only see the results to which they are granted access. This is especially important as life sciences companies are working in an increasingly networked environment and need to provide secure, role-based access to information across multiple partners. Although not officially required by the 21 CFR Part 11 regulation, the U.S. Food and Drug Administration has begun to extend the type of content considered when performing relevant audits and discoveries. Having an ECM infrastructure that provides centralized management of all content enterprise-wide—with the ability to consistently apply records and retention policies along with the appropriate controls, validations, audit trails, and electronic signatures—is becoming increasingly critical for life sciences companies.

    Making the move

    Creating an enterprise-wide ECM environment requires moving large amounts of content into a single enterprise repository, a daunting and risk-laden initiative. The first key is to focus on data taxonomy, allowing content to be mapped across systems. The second is to take advantage of new tools which can dramatically speed up and reduce the cost of the data migration process through automation. Additional content need not be frozen while it is migrated, enabling productivity throughout the process. The ability to effectively leverage information into success has been gaining importance in the life sciences industry for years. The rapid adoption of enterprise content management, both in operational processes as well as in scientific management, is a clear indicator that companies are looking to use all available data to be better informed, improve decision making, minimize risk, and improve time to market, to maintain profitability and be more competitive. As more and more varieties and sources of information are brought under the strategic management umbrella, the ability to divine knowledge from the vast pool of information is increasingly difficult. Simple search engines and basic content management are increasingly unable to effectively extract the right information from the mountains of data available. By bringing these tools into context and integrating them with business processes and applications, we can effectively focus on the right decisions that make our organizations more profitable.

    More Information

    Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • Build a RAID-5 USB Array or NAS?

    - by TK Kocheran
    I'm looking to build a RAID-5 or RAID-6 hard disk array to be used as a centralized backup location for my home network. I have a lot of 4-8GB DVD ISO files which I intend to serve over the network to my TV using DLNA or something similar. I need decent write times, fast read times, and preferably a way to connect this array to my network router over USB. I'll probably be using ext3/4 on it (I'd use btrfs if my router supported it :-[). Is there a way to build a RAID array that is accessible over USB? I'm kind of new to this and most guides assume you're running a RAID array inside your machine. I need at least 5TB of space, and I need redundancy, so I prefer RAID-6. How can I go about accomplishing this?

    Read the article

  • Windows Server Certified as Secure Global Desktop Clients with EBS 12

    - by Steven Chan (Oracle Development)
    Oracle Secure Global Desktop provides secure access to centralized applications—Microsoft Windows, UNIX, mainframe, and midrange—from a wide variety of popular client devices, including Windows PCs, Oracle Solaris workstations, Linux PCs, and thin clients. Secure Global Desktop is certified for use with Microsoft Windows Server 2003 and 2008 virtualized environments acting as desktop clients connecting to Oracle E-Business Suite Release 12 environments. 32-bit and 64-bit versions of Microsoft Windows Server are certified. These combinations may also be used in conjunction with Oracle VM, if required.

    How does this work? For example, a Secure Desktop Client can connect to a Secure Global Desktop environment. That environment can be running Microsoft Server 2008. That environment can be used, in turn, as a "desktop client" to access Oracle E-Business Suite Release 12.1.3.

    Requirements

    EBS 12.1.3 + Windows Server 2008 R2 (64-bit)
    - Secure Global Desktop version 4.6 or higher
    - Internet Explorer 8 (32-bit and 64-bit) or Internet Explorer 9 (32-bit and 64-bit)
    - JRE Plug-in 1.6.0_32 (32-bit and 64-bit) or later 1.6 releases

    EBS 12.1.3 + Windows Server 2008 (32-bit)
    - Secure Global Desktop version 4.6 or higher
    - Internet Explorer 8 (32-bit) or Internet Explorer 9 (32-bit)
    - JRE Plug-in 1.6.0_32 (32-bit) or later 1.6 releases

    EBS 12.1.3 + Windows Server 2003 R2 (64-bit)
    - Secure Global Desktop version 4.6 or higher
    - Internet Explorer 8 (32-bit and 64-bit)
    - JRE Plug-in 1.6.0_32 (32-bit and 64-bit) or later 1.6 releases

    EBS 12.1.3 + Windows Server 2003 R2 (32-bit)
    - Secure Global Desktop version 4.6 or higher
    - Internet Explorer 8 (32-bit)
    - JRE Plug-in 1.6.0_32 (32-bit) or later 1.6 releases

    References
    - Oracle Secure Global Desktop with E-Business Suite Release 12.1.3 (Note 1491211.1)

    Related Articles
    - Oracle VM Templates Available for E-Business Suite 12.1.3
    - Support Policies for Virtualization Technologies and Oracle E-Business Suite
    - Webcast Replay Available: Virtualization and Cloud Deployments of Oracle E-Business Suite

    Read the article

  • On Server Disk Storage VS SAN Storage

    - by Justin
    Hello, I am looking at buying three servers, and trying to figure out which storage solution makes the most sense in terms of performance and cost. Total budget is around $10,000.

    Option 1: Dell servers with RAID 10 (4 drives each, 7200RPM SAS 500GB), for a total capacity of 1TB per server. Each server is approx. $3000. Total storage across all three servers is then 3TB.

    Option 2: The same Dell servers with a cheap single drive and no RAID for $2000, and go with a centralized SAN solution. The biggest problem is that I haven't been able to even find a SAN solution that is a reasonable price. Dell entry-level storage servers are like $15,000. I am thinking just iSCSI, not fiber (too expensive).

    What do you guys recommend?

    Read the article

  • Debug.Assert replacement for Phone and Store apps

    - by Daniel Moth
    I don't know about you, but all my code is, and always has been, littered with Debug.Assert statements. I think it all started way back in my (short-lived, but impactful to me) Eiffel days, when I was applying Design by Contract. Anyway, I can't live without Debug.Assert. Imagine my dismay when I upgraded my Windows Phone 7.x app (Translator By Moth) to Windows Phone 8 and discovered that my Debug.Assert statements would not display anything on the target and would not break in the debugger any longer! Luckily, the solution was simple and in this post I share it with you – feel free to tweak it to meet your needs.

    Steps to use:
    1. Add a new code file to your project, delete all its contents, and paste in the code from MyDebug.cs
    2. Perform a global search in your solution replacing Debug.Assert with MyDebug.Assert
    3. Build the solution and test

    Now, I do not know why this functionality was broken, but I do know that it exhibits the same broken characteristics for Windows Store apps. There is a simple workaround there: use Contract.Assert, which does display a message and offers an option to break in the debugger (although it doesn't output the message to the Output window). Because I plan on sharing code between Phone and Windows 8 projects, I prefer to have the conditional compilation centralized, so I added the Contract.Assert workaround directly in the MyDebug class, so that you can use this class for both platforms – enjoy and enhance!

    Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • If some standards apply when "it depends" then should I stick with custom approaches?

    - by Travis J
    If I have an unconventional approach which works better than the industry standard, should I just stick with it even though in principle it violates those standards? What I am talking about is referential integrity for relational database management systems. The standard for enforcing referential integrity is to CASCADE delete. In practice, this is just not going to work all the time. In my current case, it does not. The alternative suggested is to either change the reference to NULL or DEFAULT, or just to take NO ACTION - usually in the form of a "soft delete". I am all about enforcing referential integrity. Love it. However, sometimes it is just not practical to apply all of the standards in full. My approach has been to slightly abandon a small part of one of those practices - the part that forbids leaving "hanging references" around. Oops. The trade-off is plentiful in this situation, I believe. Instead of having deprecated data in the production database, a splattering of "soft delete" logic all across my controllers (and views sometimes, depending on how far down the chain the soft delete occurred), and the prospect of queries taking longer and longer - instead of all that - I now have a recycle bin and centralized logic. The only trade-off is that I must explicitly manage the possibility of "hanging references", which can be done through generics with one class. Any thoughts?
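
    To make the "recycle bin plus centralized logic" idea concrete, here is a rough sketch of the kind of single helper the post alludes to; it is a guess at the shape (the recycle_bin table, its columns, and the JSON payload are invented), not the poster's actual class.

        # Sketch: hard-delete a row but park a copy in a central recycle bin,
        # leaving referencing rows with a dangling id that lookups treat as
        # "deleted". Table and column names are illustrative only; the table
        # argument is assumed to come from code, never from user input.
        import json
        import sqlite3
        from datetime import datetime, timezone

        def recycle_delete(conn: sqlite3.Connection, table: str, row_id: int) -> None:
            cur = conn.execute(f"SELECT * FROM {table} WHERE id = ?", (row_id,))
            row = cur.fetchone()
            if row is None:
                return
            columns = [d[0] for d in cur.description]
            conn.execute(
                "INSERT INTO recycle_bin (source_table, source_id, payload, deleted_at) "
                "VALUES (?, ?, ?, ?)",
                (
                    table,
                    row_id,
                    json.dumps(dict(zip(columns, row))),
                    datetime.now(timezone.utc).isoformat(),
                ),
            )
            conn.execute(f"DELETE FROM {table} WHERE id = ?", (row_id,))
            conn.commit()

    The trade-off named in the post is visible here: referencing rows are left pointing at ids that no longer exist, so every lookup path has to know what a missing parent means, in exchange for keeping deprecated data and scattered soft-delete checks out of the production tables.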

    Read the article
