Search Results

Search found 787 results on 32 pages for 'budget'.


  • Database Mirroring – deprecated

    - by fatherjack
    Do you use mirroring on any of your databases? Do you use mirroring on SQL Server Standard Edition? I do, as a way of having a stand-by server ready to take over if there is a problem with the live server, so that business can continue despite whatever disaster may strike at our primary server location. In my experience it has been a great solution for us, as it is simple to implement, reliable and predictable. Mirroring has been around since SQL Server 2005 SP1, but with the release of SQL Server 2012 mirroring has now been placed on the deprecation list. That’s right, Microsoft are removing this feature from SQL Server. SQL Server 2012 had lots of improvements and new features around this sort of technology: the High Availability, Disaster Recovery and AlwaysOn features described in detail here by Brent Ozar and Microsoft’s own Customer Service and Support SQL Server Engineers. Now the bad news: the HADRON features are pretty much all wrapped up in the Enterprise Edition of SQL Server 2012. This is going to be a big issue for people, like me, who are only on Standard Edition of earlier versions, mostly due to our requirements and the budget (or lack thereof) required for Enterprise Edition licenses. No mirroring in Standard Edition means no upgrade. Don’t panic: there are two stages of deprecation, and they don’t happen fast. The first stage, Deprecation Announcement, means that Microsoft have decided that there is a limited future for a particular feature; this is your cue that new projects and developments should not be implemented on this technology, as it will cease to exist in the future. This is where mirroring currently stands. You have time to consider your options and start work on planning how you will move away from using this feature. This can be 2 or 3 versions of SQL Server, possibly more. The next stage is Deprecation Final Support: this is your last chance. When you see this, the next version of SQL Server will not have this feature in it, so you need to implement your plans to move to an alternative solution. While these two phases are taking place, Microsoft are open to feedback on how people use their products, and if enough people make the case for mirroring (or an equivalent technology) to be in the Standard Edition then they may make changes rather than lose customers or have customers cease upgrading in order to keep the functionality they need. Denny Cherry (@MrDenny) has published an article on this same topic here, with more detail than me, so I won’t go over old ground. All I will say is that you should read his article now and then follow the link to his own site, where he is collecting people’s information on how they use mirroring in Standard Edition so that our voice can be put to Microsoft.

    Read the article

  • Guiding Management to the Correct Decision

    - by Blumer
    My supervisor (also a developer) and I have a running joke about writing a book called "Managing From Beneath: Subversively Guiding Management to the Right Decision" and including a number of "techniques" we've developed for helping those who make the decisions to make the right ones. So far, we've got (cynicism warning!): BIC It! BIC stands for "Bury In Committee." When a bad idea comes up that someone wants to champion, we try to get it deferred to a committee for input. Typically it will either get killed outright (especially if other members of the committee are competing for you as a resource), or it will be hung up long enough that the proponent forgets about it. Smart, Stupid, or Expensive? When someone gets a visionary idea, offer them three ways to do it: a smart way, a stupid way, and an expensive way. The hope is that you've at least got a 2/3 shot of not having to do it the way that makes a piece of your soul die. All-Pro. It's a preemptive pro/con list in which you get into the mind of the (pr)opponent and think what would be cons against doing it your way. Twist them into pros and present them in your pro list before they have a chance to present them as cons. Dependicitis. Link pending decisions together, ideally with the proponent's pet project as the final link in the chain. Use this leverage to force action on those that have been put off. Preemptive Acceptance. Sometimes it's clear that management is going to go a particular direction regardless of advice to the contrary, and it's time to make the best of it. Take the opportunity to get something else you need, though. Approach the sponsor out of the blue and take the first step: "You know, I've been thinking about it, and while it's not the route I would advise, as long as we can get the schedule and budget for Project Awesome loosened up, I can work some magic to make your project fly." So ... what techniques have you come up with to try to head off the problem projects or make the best of what may come?

    Read the article

  • SOA, Cloud & Service Technology Symposium 2012 London

    - by JuergenKress
    Registration is now open, with special pricing for Oracle partners. For the exclusive Oracle discount, enter promo code DJMXZ370. OVERVIEW The International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops, all dedicated to empowering IT professionals to realize modern service technologies and practices in the real world. Click here for a two-page printable conference overview (PDF). KEYNOTES & SPEAKERS More than 80 international subject matter experts will be speaking at the Symposium. Below are the keynotes and speakers confirmed so far; over 50% of the agenda has not yet been finalized, and many more speakers are to come. View the partial program calendars on the Conference Agenda page. Keynotes and Speakers: Thomas Erl (Arcitura Education), "SOA, Cloud Computing & Semantic Web Technology: The Sequel - The Era of Intelligent Service Technology"; Markus Zirn (Oracle), "Big Data with CEP and SOA"; Clemens Utschig (Boehringer Ingelheim Pharma) and Manas Deb (Oracle), "The Successful Execution of the SOA and BPM Vision"; Tim E. Hall (Oracle), "Community Management: The Next Wave of SOA Governance and API Management". SPONSORSHIP OPPORTUNITIES The Symposium provides an excellent opportunity to promote your organization in the lead-up to the event, to delegates during the Symposium, and afterwards when the proceedings are made available on the Symposium web site. There are a limited number of premier sponsorship packages available, and a package can be tailored to your needs and budget. Download the Symposium Sponsorship Guide. SOA & BPM Partner Community For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Symposium,SOA Cloud Service Technology Symposium,Thomas Erl,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • How do we provide valid time estimates during Sprint Planning without doing "too much" design?

    - by Michael Edenfield
    My team is getting up to speed with Scrum, but most of us are more familiar with non-agile or "pseudo-"agile methodologies. The part that is the biggest hurdle for us is running an efficient Sprint Planning meeting where we break our backlog items into tasks and estimate hours. (I'm using the terminology from the VS2010 Scrum Template; apologies if I use the wrong word somewhere.) When we try to figure out how long a task is going to take, we often fall into the trap of designing the feature at the code level -- table layout, interfaces, etc. -- in order to figure out how long that's going to take. I'm pretty sure this is not the appropriate place to be doing that kind of design. We should be scheduling tasks for these design meetings during the sprint. However, we are having trouble figuring out how else to come up with meaningful estimates for the tasks. Are there any practical habits/techniques/etc. for making a judgement call about how long a feature is going to take, without knowing how you plan to implement it? If our time estimates are going to change significantly once the design has been completed, how can we properly budget our Sprint backlog ahead of time? EDIT: Just to clarify, since some of the comments/answers are very valid but, I think, addressing the wrong question. We know that what we're doing is not right and that we should be building time into the sprint for this design. Conceptually all of the developers understand that. We are also bringing in a team member with Scrum experience to keep us on track if we start going off into the weeds. The problem is that, without going through this design process, we are finding it difficult to provide concrete time estimates for anything. We are constantly saying things like "well, if we design it this way it might take 8 hours, but if we end up having to do it this other way instead it will take about 32, but it might not be as bad once we start trying to write it...". I also assume that this process will get better once we have some historical velocity to work from, but many of the technologies and architectural patterns we are using are new to us. But if potentially-wildly-wrong estimates are just a natural part of adopting this process then we will just need to recondition ourselves to accept that :)

    Read the article

  • Arçelik A.S. Uses Advanced Analytics to Improve Product Development

    - by Sylvie MacKenzie, PMP
    "Oracle’s Primavera P6 Enterprise Project Portfolio Management’s advanced analytics gives us better insight into the product development process by helping us to identify potential roadblocks.” – Iffet Iyigun Meydanli, Innovation and System Development Manager, R&D Center, Arçelik A.S. Founded in 1955, Arçelik A.S. is now the leading household appliance manufacturer in Turkey, and the third-largest household appliance company in Europe. It operates 14 production facilities in five countries (Turkey, Romania, Russia, China, and South Africa), with international sales and marketing offices in 20 countries. Additionally, the company manages 10 brands (Arçelik, Beko, Grundig, Blomberg, Elektrabregenz, Arctic, Leisure, Flavel, Defy, and Altus). The company has a household presence in more than 100 countries, including China and the United States. Arçelik’s Beko brand is among the top-10 household appliance brands in world, as a market leader for refrigerators, freezers, and washing machines in the United Kingdom. Arçelik implemented Oracle’s Primavera P6 Enterprise Project Portfolio Management for improved management of its design and manufacturing projects. With the solution, Arelik has improved its research and development (R&D) with the ability to evaluate technology risks when planning its projects. Also, it is now more easy to make plans for several locations, monitor all resources, and plan for future projects.  Challenges Improve monitoring of R&D resources?including human resources and critical laboratory equipment?to optimize management of the company’s R&D project portfolio Establish a transparent project platform to enable better product and process planning, gain insight into product performance, and facilitate advanced analytics that support R&D and overall business decisions Identify potential roadblocks for better risk management Solutions Worked with Oracle Partner PRM to implement Oracle’s Primavera P6 Enterprise Project Portfolio Management to manage the entire household-appliance, R&D project portfolio lifecycle, enabling managers and project leaders to better track and monitor resources and deliverables in real time Improved risk analysis and evaluation abilities for R&D projects Supported long-term planning needs Used advanced reporting features to capture data needed for budgeting and other project details, including employee performance evaluations Improved monitoring abilities and insight into the overall performance of products postproduction Enabled flexible, fast, and customized reporting with the P6 dashboard on a centralized platform to meet custom reporting needs for project leaders and support on-time and on-budget deliverables Integrated with other corporate departments, such as accounts payable, to upload project invoice data into the Primavera solution and the company’s e-mail system, so that project leaders will be alerted about milestones and other project related information Partner“Oracle Partner PRM provided us with a quick, reliable, and solution-focused approach to its support,” said Iffet Iyigun Meydanli, innovation and system development manager, R&D Center, Arçelik A.S. “The company’s service covered the entire spectrum of our needs, including implementation, training, configuration, problem solving, and integration.”

    Read the article

  • Fast Data Executive Round Table FY14 event kit

    - by JuergenKress
    We are very interested in running joint marketing events with you as our partners! At our SOA Community Workspace (SOA Community membership required) you can find a new Fast Data Executive Round Table FY14 event kit. This event is designed for a senior IT and executive audience, for the purposes of education, awareness, and thought leadership around the subject of big data, and a specific flavor of big data - Fast Data - that has begun to spark the imagination of many Oracle customers. Fast Data is not new. It’s a term that was initially coined by Ovum’s Tony Baer as a way to represent the collection of ‘high velocity’ solutions with respect to big data. For Oracle, the Fast Data campaign in FY13 began as a way to tie a broader set of solutions together (SOA/Business Process Management, Data Integration and Business Analytics) under a set of use cases focused on real-time, high-velocity data. It has helped to give Oracle a leap-frog advantage over many of the niche integration vendors (e.g. Informatica, Pega, Tibco, Software AG, Terracotta) who haven’t been able to address these types of end-to-end use cases, which rely on the combination of filtering, in-memory data processing, correlation, real-time data movement and transformation, end-to-end analytics, and business process management. Only Oracle can address all the dimensions of fast data, and only Oracle can provide a set of engineered solutions to address this space. This event is designed to continue that thought-leadership momentum and raise awareness about what Oracle Fast Data solutions are designed to solve. It’s designed to highlight real customer solutions and articulate the business benefits that fast data can address. This is not an event that gets into the esoteric technical standards of Hadoop, NoSQL, and in-memory data grids. This is an event that instead gets into the heart of business problems that big data has left unaddressed and charts the path for next steps in fast data. Get the Fast Data Executive Round Table FY14 event kit here. Support marketing campaigns We can support such events by: Oracle speakers - contact your partner manager Marketing budget - contact your A&C marketing manager Event location - free use of Oracle Customer Visitor Centers conference rooms Promote your event at events.oracle.com: http://tinyurl.com/eventspecialized E-Blast: invite customers to your event - contact your A&C marketing manager For additional marketing kits, e.g. for Business Process Management, please visit our SOA Community Workspace. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags:

    Read the article

  • Cloud Computing Business Benefits

    - by workflowman
    If you have been living under a rock for the past year, you wouldn't have heard about cloud computing. Cloud computing is a loose term that describes anything hosted in data centers and accessed via the internet. It is normally associated with developers, who draw clouds in diagrams to indicate where services live or how systems communicate with each other. Cloud computing also incorporates such well-known trends as Web 2.0 and Software as a Service (SaaS), and more recently Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). Its aim is to change the way we compute, moving from traditional desktop and on-premises servers to services and resources hosted in the cloud. Benefits of Cloud Computing There are clearly benefits in building applications using cloud computing, some of which are listed here: Zero up-front investment: Delivering a large-scale system costs a fortune in both time and money. Often IT departments are split into hardware/network and software services. The hardware team provisions servers and so forth to the requirements of the software team. Often the hardware team has a different budget that requires approval. Although hardware and software management are two separate disciplines, sometimes what happens is that developers are given the task of estimating CPU cycles, disk space, and so forth, which ends up in underutilized servers. Usage-based costing: You pay for what you use, no more, no less, because you never actually own the server. This is similar to car leasing, where in the long run you get a new car every three years and maintenance is never a worry. Potential for shrinking the processing time: If processes are split over multiple machines, parallel processing is performed, which decreases processing time. More office space: Walk into most offices, and you are guaranteed to find a medium-sized room dedicated to servers. Efficient resource utilization: Resource utilization is handled by a centralized cloud administrator who is in charge of deciding exactly the right amount of resources for a system. This takes the task away from local administrators, who would have to regularly monitor these servers. Just-in-time infrastructure: If your system is a success and needs to scale to meet demand, delays in provisioning can mean a slow-performing service. Cloud computing solves this because you can add more resources at any time. Lower environmental impact: If servers are centralized, an environmental initiative is more likely to succeed. As an example, if servers are placed in sunny or windy parts of the world, then why not use these resources to power those servers? Lower costs: Unfortunately, this is one point that administrators will not like. If you have people administrating your e-mail server and network along with support staff doing other cloud-based tasks, this workforce can be reduced. This saves costs, though it also reduces jobs.

    Read the article

  • Advice needed: ADSL and VPN for a small company

    - by Saajid Ismail
    Hi. I need advice on purchasing an ADSL modem/router for a small company. At the moment we are using the iBurst wireless service for internet connectivity. I have the iBurst desktop modem, which connects to my Netgear WNR2000 router via ethernet. I am using the Netgear WNR2000 to deploy a wireless network as well. I have also set up a VPN using Windows Server 2003 and enabled the VPN passthrough settings on the Netgear router. I am able to connect to the office network remotely without difficulty. However, the problem I've read about is that the Netgear WNR2000 only supports VPN passthrough for a single session. This is simply not good enough. I need to be able to support at least 3 concurrent VPN connections immediately, and up to 5 in the near future. Now I am cancelling my iBurst wireless service and have just had my ADSL line installed. I have to purchase an ADSL modem, and now is a good time to think about future-proofing my investment. I need a good ADSL modem that will allow me to support at least 5 concurrent VPN connections, or more, without breaking the bank. My budget is about 150-200 USD. I believe that my current Netgear WNR2000 router will be useless, except maybe to extend my wireless network a bit in the future. Is there a solution where I can still use my Netgear WNR2000 for WiFi, for example by connecting a cheaper non-WiFi ADSL modem to the Netgear router? If not, then which WiFi-enabled ADSL modem/router supporting at least 5 VPN passthroughs can you recommend? To sum it up, I need an ADSL modem/router that: is ADSL & ADSL2+ compatible; has built-in 802.11n 270/300 Mbps WiFi (if having this feature doesn't push the price up too much); supports at least 5 VPN connections using VPN passthrough. EDIT: Answer 2.10 in the following FAQ has me a bit worried - What is VPN/multiple VPN Pass-through?

    Read the article

  • deploy LAMP config to new boxes with low/no effort

    - by user1444233
    I'm spending a lot of time setting up new CentOS 6 instances. I use a VCS (Subversion) for most of the config files and all of the webapp source files (GitHub), but even with excellent package managers (like yum, npm, easy_install, etc.) it still takes time. I'd like to get to the point where I could try out a new potential web host by just signing up for an account, logging in and automatically sucking my standardised config onto the box. I know there are a set of tools that can help: Puppet, Chef, Vagrant; and a set of services that sell solutions: Jumpbox (http://www.jumpbox.com/) and BitNami Cloud (http://bitnami.org/cloud). I don't mind investing time in learning a new tool, but as a no-budget start-up, I'm keen to keep monthly costs down. My biggest concern is that time spent on the server config is time away from the codebase, and that's where I think my team and I should be investing our energy, at least until we get funded and scale up a bit. I'd be grateful for some recommendations on which way to jump on config: stick with SSH and manual deploys, at least until you get big; bite the bullet and learn [say] Puppet (you may only use it 8-10 times, but it pays to have such an easily tunable server bootstrap); or don't bother, and just pay the $100/month for a standard config service (it'll cost you $1,200/year, but you should focus on the code). Other questions in this domain: I use quite a complex stack (Drupal, Zend Server, MySQL, PHP, MongoDB, Python, django), but are there standard(ish) setups that include these or that I could build upon more quickly? Are the configs optimised for small, medium, large VPS (1GB, 4GB, 16GB)? How secure are they?
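
    For what it's worth, here is a sketch of the middle ground I'm weighing: a single bootstrap script kept in Subversion and run once on each new box. The repository URL and package list are illustrative placeholders, not my real config:

        #!/bin/bash
        # bootstrap a fresh CentOS 6 box from version-controlled config
        yum -y install httpd php php-mysql mysql-server subversion
        # pull the standardised config files out of the VCS
        svn export --force https://svn.example.com/config/httpd.conf /etc/httpd/conf/httpd.conf
        # start services now and at boot (CentOS 6 style)
        service httpd start && chkconfig httpd on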

    Read the article

  • Options for synchronizing Palm Desktop calendars

    - by Al Everett
    My wife and I each have Palm Centro phones and very full calendars. We've been using Palm products for years and are happy with them and are not looking to switch (and don't have the budget for it even if we wanted to). What is a viable way we can synchronize our calendars? Back when we had Palm Z22s we used AirSet. It worked great in synchronizing our desktop calendars. Unfortunately, their sync software does not support Palm Desktop v6 and there is nothing in the pipeline to support it. (The third party vendor is apparently not interested in updating for the newer Palm desktop.) I would love to be able to get back to having each other's appointments appear on the other's calendar. What can we do? Some limitations: A data plan is not an option (so this precludes over-the-air synchronization options) No Microsoft Outlook Edit: Syncing my Palm to my Palm Desktop is not the issue. Being able to sync, in some way, the Palm calendar databases of my wife's and my calendars is what's desired.

    Read the article

  • Limiting interface bandwidth with tc under Linux

    - by Matt
    I have a Linux router which has a 10GbE interface on the outside and bonded gigabit ethernet interfaces on the inside. We currently have budget for 2 Gbit/s. If we exceed that rate by more than a 5% average for a month then we'll be charged for the whole 10 Gbit/s capacity. Quite a step up in dollar terms. So, I want to limit this to 2 Gbit/s on the 10GbE interface. The TBF filter might be ideal, but this comment is of concern: "On all platforms except for Alpha, it is able to shape up to 1mbit/s of normal traffic with ideal minimal burstiness, sending out data exactly at the configured rates." Should I be using TBF or some other filter to apply this rate to the interface, and how would I do it? I don't understand the example given in the Traffic Control HOWTO, in particular "Example 9. Creating a 256kbit/s TBF":
        tc qdisc add dev eth0 handle 1:0 root dsmark indices 1 default_index 0
        tc qdisc add dev eth0 handle 2:0 parent 1:0 tbf burst 20480 limit 20480 mtu 1514 rate 32000bps
    How is the 256 kbit/s rate calculated? In this example, 32000bps means 32,000 bytes per second, since tc reads bps as bytes per second; multiplied by 8 bits per byte, that gives 256 kbit/s. I guess burst and limit come into play, but how would you go about choosing sensible numbers to reach the desired rate? This is not a mistake: I tested this and it gave a rate close to 256 kbit/s but not exactly that.
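
    A minimal sketch of the kind of invocation I have in mind, assuming eth0 is the 10GbE interface; the burst and latency figures are illustrative starting points rather than tuned values:

        # shape all egress on eth0 to 2 Gbit/s; tc accepts unit suffixes,
        # so no manual bytes-per-second arithmetic is needed here
        tc qdisc add dev eth0 root tbf rate 2gbit burst 1250kb latency 50ms

    My understanding is that the burst needs to be at least the rate divided by the kernel timer frequency (HZ), or the shaper cannot release tokens fast enough to sustain the configured rate.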

    Read the article

  • Long Gigabit Ethernet Run

    - by Timothy R. Butler
    I am trying to get a Gig-E network between two buildings that are approximately 260 ft. apart. While some TRENDnet switches failed to connect to each other over Cat 6 at that distance, two Netgear 5-port Gig-E switches do so just fine. However, it still fails after I put in place APC PNET1GB ethernet surge protectors at each end, before the line connects to the respective switches. So I find myself wondering if I simply need to find a better surge protector that doesn't degrade the signal as much (if so, what kind would you recommend?), or if I should give up on copper and use fiber between the buildings. If I opt to go the latter route, I could really use some pointers. It looks like LC connectors are the most common, but I keep running into some others as well. A media converter on each end seems like the simplest solution, but perhaps a Gig-E switch with an SFP port would make more sense? Given a very limited budget, sticking with my existing copper seems best, but if it is bound to be a headache, a 100-meter fiber cable is something I think I can swing cost-wise.

    Read the article

  • Linux: Managing users, groups and applications

    - by RN
    I am fairly new to Linux administration, so this may sound like quite a noob question. I have a VPS account with root access. I need to install Tomcat and Java on it, and later other open source applications as well. Installation for all of these is as simple as unzipping the .gz in a folder. My questions are: A) Where should I keep all these programs? In Windows, I typically have a folder called programs under c:\ where I unzip all applications. I plan to have something similar here as well. Currently, I have all these in an apps folder under /root, which I am guessing is a bad idea. B) To what group should Tom belong? I would need a user, say Tom, who can simply execute these programs. Do I need to create a new group, or just add Tom to some existing group? C) Finally, am I doing something really stupid by installing all these applications by simply unzipping them? I mean, an alternate way would be to use Yum or RPM or something like that to install these applications. Given my familiarity (and tight budget) that seems too much to me. I feel uncomfortable running commands which I don't understand too well.
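
    Based on what I've read so far, here is a sketch of the layout I have in mind, assuming the unzipped programs live under /opt and Tom only needs to run them; the group name and paths are illustrative:

        # unpack the applications under /opt instead of /root
        mkdir -p /opt/tomcat /opt/java
        # a dedicated group for users who run, but do not administer, the apps
        groupadd appusers
        useradd -m -g appusers tom
        # group members may read and execute, but not modify, the installs
        chown -R root:appusers /opt/tomcat /opt/java
        chmod -R g+rX,o-rwx /opt/tomcat /opt/java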

    Read the article

  • How does the LeftHand SAN perform in a Production environment?

    - by Keith Sirmons
    Howdy, I previously asked this ServerFault question: Does anyone have experience with LeftHand's VSA SAN? The general consensus looks like it does not perform well enough for a production SQL server, even at a light load. So the new question is: how does LeftHand's SAN perform on the HP or Dell dedicated hardware boxes? We are looking at the Starter SAN with 2 HP nodes in a 2-way replication, and 2 ESX servers hosting a total of 2 Active Directory servers, 1 MS SQL server, 1 file server, and 1 general purpose server for things like virus scanning (all Microsoft Server 2005 or 2008). The reason I am looking at LeftHand is the complete software package. I plan to have a DR site and like how the SAN can perform an async replication to the offsite location without having to go back to the vendor for more licenses. I also like the redundancy built into the Network RAID architecture. I have looked at other SANs and found different faults with them. For example, Dell's EqualLogic: I found that although the individual box is very redundant in hardware, the data, once spanned across multiple boxes, is not redundant; if a node goes down you have lost the only copy of the data sitting on that hardware (one thing is certain: all hardware fails. When? is the only question.). I have used a XioTech SAN as well; well worth the money, by the way, but I think it is overkill for the size of the office I am targeting. The cost to get the hardware redundancy in the XioTech makes it a little out of reach for the budget I am working in. Thank you, Keith

    Read the article

  • Samba4 DC "network location cannot be reached"

    - by mitchell babies peters
    To clear the air: CentOS 6.4 (maybe 6.3) as the server, running Samba 4.0.10, trying to add a Windows 7 client that has connectivity to the server. This is what Windows shouts at me as it mocks my dependence on network infrastructure: "The network location cannot be reached." I have access to the domain controller (DC). I'm using the DC as the domain name server (DNS) already, the name is correctly resolving, and it is correctly forwarding outbound traffic. I have nothing but self-taught experience with Active Directory (AD), so if I am missing something obvious, please shout it out, but keep the verbal abuse to a minimum. I searched for samba4 DC plus my error and found nothing relevant to my issue; if I missed something, please point me in that direction. The weekend is just starting as I write this, so I probably won't be back on to check this post for a day or three, but I might, because this mystery is killing me. I followed the samba4-as-a-DC guide here and I supplemented gaps with this. I have tested Kerberos and NTP, and set my DC as the clock to sync to in my Windows client; it appears to be a very small fraction of a second off, so that shouldn't be it. Also, firewall and SELinux are both off for testing. I have also tried disabling IPv6, and cleared the registry of IPv6 records (allegedly the default samba4 as a DC runs as Windows Server 2003, which allegedly does not support or tolerate the existence of IPv6; fair warning, I heard this on the internet so it is probably a lie). I have tried a few other things that I have forgotten, because I have been doing this for a day and a half now. Ideas welcome. Suggestions for alternatives are also welcome, as long as they are free. I was given a budget of $0 and told to implement Active Directory (with no prior knowledge of Active Directory at that point).
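
    For reference, these are the standard sanity checks from the Samba wiki that I believe should all pass on the DC; the realm example.com and hostname dc1 are illustrative placeholders for your own:

        # the AD service records should resolve through the DC's own DNS
        host -t SRV _ldap._tcp.example.com.
        host -t SRV _kerberos._udp.example.com.
        # Kerberos should issue a ticket for the admin account
        kinit administrator@EXAMPLE.COM
        # the DC's shares should answer over SMB
        smbclient -L dc1.example.com -U administrator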

    Read the article

  • EEE Box "dropbox" server 24/7

    - by microspino
    I'd like to create a mini dropbox and print server for a small SOHO network of 5 users (all of them use Windows XP desktops). The device needs to run 24/7, or at least 12/7 (I could accept just workday hours too, but the other two options would be better). Dropbox mini server: I mean I will have a 90 GB dropbox on every computer on my LAN syncing with it, and the one on it syncing to the web. Print server: I have a Samsung A4 small laser printer, an HP500 Designjet plotter, a Samsung multifunction machine (fax/print/scan/copy), a modern HP color A3 Deskjet printer and an HP LaserJet A4 color printer. All of them need to be connected to this mini server. Fax/scan server: since I have the above-mentioned fax/print/scan/copy machine, I would like to make people use it from/to their computers through the mini server. I thought of a recent EEE Box machine because I heard good things about Atom CPUs and because it seems that a recent BIOS version could switch it off and on autonomously. I'd like to hear some advice from you. Best of all would be: - if you have something similar running for a long time - if you disagree with this hardware choice and would suggest some other device - if you see any issues with my printing setup - anything else ;) My budget is from zero (using the right software to build something on top of an old PC) to 500€ max.

    Read the article

  • Temporary boot problem after thunderstorm - likely causes?

    - by alastairs
    The village where I live was sat under a thunder cloud for most of Friday, and we suffered a few power fluctuations (specifically, what seemed to be split-second outages). When I got back home from work, I found that my PCs had shut down during one of these outages. When I went to boot one of them back up, I couldn't get anything to display on screen, nor did the boot seem to complete correctly. I tried a number of things - unplugging different bits of hardware, swapping graphics adaptors, etc. - to no avail. I thought I was looking at a fried motherboard or CPU. Power seemed to be distributed correctly to the peripherals (the drives all appeared to be working) so I figured it couldn't be the PSU. Eventually I unplugged it from the mains and left it overnight (approx 12hrs unplugged). I tried it again this morning, and it booted up correctly. Woo-hoo! I have all my equipment protected by surge-protected power strips, so I don't think a spike caused these problems. Obviously it has something to do with the power fluctuations, and maybe the PSU in the problem machine got itself confused somehow. The questions are, for future reference and to help people with similar problems: What are the likely causes of the boot failure I experienced? Is a UPS a simple and cost-effective solution, or might other things help prevent this happening in future? What UPS can you recommend (my budget is limited)?

    Read the article

  • EEE PC dropbox server running 24/7

    - by microspino
    I'd like to create a mini dropbox and print server for a small SOHO network of 5 users (all of them use Windows XP desktops). The device needs to run 24/7, or at least 12/7 (I could accept just workday hours too, but the other two options would be better). Dropbox mini server: I mean I will have a 90 GB dropbox on every computer on my LAN syncing with it, and the one on it syncing to the web. Print server: I have a Samsung SCX-4521F (fax/print/scan/copy), a Samsung ML-2010, an HP LaserJet P1006, an HP Color LaserJet CP1215, an HP Officejet Pro K8600 and an HP Designjet 500. All of them are now connected using little print servers, and I want to get rid of those by hooking everything to this mini server. The fax/print/scan/copy machine needs to stay connected to a PC to let me use the software that comes with it; the mini server would save me on this too. Fax/scan server: since I have the above-mentioned fax/print/scan/copy machine, I would like to make people use it from/to their computers through the mini server. I thought of a recent EEE Box machine because I heard good things about Atom CPUs and because it seems that a recent BIOS version could switch it off and on autonomously. I'd like to hear some advice from you. Best of all would be: If you have something similar running for a long time If you disagree with this hardware choice and would suggest some other device If you see any issues with my printing setup Anything else ;) My budget is from zero (using the right software to build something on top of an old PC) to 500€ max.
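
    If the box ends up running Linux, here is a rough sketch of the print-server half using CUPS; the queue names and device URIs are purely illustrative, and each printer would still need its proper driver selected:

        # install CUPS and start it on boot (CentOS/RHEL style; adjust per distro)
        yum -y install cups
        service cups start && chkconfig cups on
        # register each printer as a queue; URIs here are placeholders
        lpadmin -p ml2010 -E -v usb://Samsung/ML-2010
        lpadmin -p designjet500 -E -v socket://192.168.1.50:9100
        # publish the queues so the XP desktops can print to them
        cupsctl --share-printers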

    Read the article

  • Moving Farm to co-location hosting - network settings requirements

    - by Saariko
    I am moving my farm (2 Dell R620s) to a co-location hosting service, and I am trying to figure out a secure way to set up my network. The requirements are: VM1 is the working host and includes ESXi 5.1, vSphere, and 4 clients (all W2008R2). VM2 has ESXi 5.1 installed, and a single machine with Veeam Backup and Copy 6.5, keeping a copy of the VM1 clients on VM2's internal storage (this solution is due to a very small budget; in case of failure on host 1, I can redirect IPs). Only 2 VM clients require a network address and access from the WWAN; the ISP provides an IP range for them (with gateway and DNS). I need a connection to the iDRACs from my office (an option is to create a VPN-SSL tunnel). I need a connection to the vSphere appliances. I want to be able to RDP to the VM clients. The current configuration is that each host has the dedicated iDRAC NIC connected, and another (NIC #1) connected with a static IP on 192.168.3.x. The iDRACs have a static IP from the same network range (192.168.3.x). It will look something like this: My thoughts: on NIC #2 of both hosts I will connect a crossover cable. I will give each VM client that needs internet access a secondary VM network with the assigned IP from the ISP, open only to the web - can not access from the… My questions: Should I give (external) IPs to the machines which do NOT require WWAN access? I can't see a way to RDP to them directly if not. Should I use the crossover cable, or just plug NIC #2 into the switch? Will this setup even work? What do I need to verify? What virtual NICs and/or switches should I create on the hosts?

    Read the article

  • Proper Network Infrastructure Setup: DMZ, VPN, Routing Hardware Question

    - by NickToyota
    Greetings, Server Fault universe. Here's a quick background: two weeks ago I started a new position as the systems administrator for an expanding health services company of just over 100 persons. The individual I was replacing left the company with little to no notice. Basically, I have inherited a network of one main HQ (where I am situated), which has existed for over 10 years, with five smaller offices (fewer than 20 persons each). I am trying to make sense of the current setup. The network at the HQ includes: a Linksys RV082 router providing internet access for employees and site-to-site VPN connecting the smaller offices (using an RV042 each); we have both cable and DSL lines connected to balance traffic (however, this does not work at all and is not my main concern right now). A Cisco IronPort appliance: this is the main gateway for our incoming and outgoing email, with both an external IP and an internal IP. Lotus Domino inbound and outbound email servers connected to the mentioned Cisco gateway; these also have an external IP and an internal IP. Two Windows 2003 and 2008 boxes running as domain controllers, with DNS of course; these also have both an external IP and an internal IP. Website and web mail servers, also on both external and internal IPs. I am still confused as to why there are so many servers connected directly to the internet. I am seriously looking to redesign this setup with proper security practices in mind (my highest concern) and am in need of a proper firewall setup for the external/internal servers, along with a VPN solution for about 50 employees. Budget is not a concern, as I have been given some flexibility to purchase necessary solutions. I have been told a Cisco ASA appliance may help. Does anyone out in the Server Fault universe have some recommendations? Thank you all in advance.

    Read the article

  • Correct way to set up office network - 8 workstations, a file server and a staging server

    - by naunu
    Our office had an old-school Windows 2003 domain setup; our server caught fire, and now we are looking to do it right from scratch. Here is what we need: 5 PC and 3 Mac workstations for web development, each with WAMP/MAMP set up on it, managed by its developer. We will have a file server for assets, and a LAMP server with an external IP for staging. Here is what we have to work with: 5 IP addresses, a brand new PC file server with Windows 2008 SE, a D-Link DSS-16+ 16-port switch, a Belkin 5-port wireless router, and a cable modem with 4 ports. How I have it set up now (this is a temporary, makeshift setup): cable modem = LAMP server, wireless router; wireless router = switch = all of the workstations and the file server (set up as a workgroup). We have noticed our internet is very slow with all of us plugged into the switch and the switch plugged into the router. I am not positive, but I think it is because our router does not have NAT. We are also having problems with the Macs' connection to the network drive - it keeps disconnecting. I want this done right, and we have a ~$600 budget to buy anything else we need. Does anybody have any advice for me? Should I set up a domain or a workgroup?

    Read the article

  • Adobe Reader Wants Sensitive Email Details

    - by KDM
    When I run Adobe Reader, it tells me: "Either there is no default mail client or the current mail client cannot fulfill the messaging request. Please run Microsoft Outlook and set it as the default mail client." I have a couple of issues with this: 1) It presupposes everyone has Microsoft Office installed. Not all home users have the budget or inclination for this. 2) It presupposes everyone wants Microsoft Outlook to be their default mail client. 3) I have Microsoft Office (incl. Outlook) installed and set as my default mail client. Even if I make it the default mail client from within the Adobe Reader preferences, that doesn't stop the dialog appearing. 4) I thought I'd give Adobe Reader a new email address in the preferences, just to get it to stop bugging me. I notice, though, that it wants the SMTP and POP addresses and the account password? They have got to be kidding. I just want to view PDF files. How do I get the message to go away without telling Adobe my life story, giving them my mother's maiden name, my favourite movie, my place of birth, the name of my first goldfish and emptying the contents of my wallet for them?

    Read the article

  • What do you upgrade to make games load faster? [on hold]

    - by Superbest
    Let's say you have a relatively modern game like Shogun 2. The loading screens take several minutes. This bothers you and you'd like to improve it. What is actually going on when loading screens are up? I'm guessing assets are being loaded into memory from disk, and possibly being decompressed first. However, what is actually causing the slowdown? The memory? Mainboard? CPU? HDD? If you had $100 to spend on upgrades and your only goal is to speed up loading screens without reducing other performance, which component of the computer does it make sense to upgrade for maximum benefit? If your answer is "it depends on the existing setup", what sort of benchmarks would you run to determine what is causing the bottleneck? What if you had $500 instead? I give the two budgets for context. I am not asking for actual recommendations about which component to buy (nor are the numbers supposed to be rigid limits), but about what features are important when shopping for components with small and large budgets (a large budget could allow buying multiple components which are not so good on their own, but work particularly well together). I mention Shogun 2 as an example, but I'm asking about reducing overall loading times, across all games, not just one game. Therefore, "put it on a solid state disk" probably won't be a good solution, because putting every game on your SSD would quickly fill it up.

    Read the article

  • Will adding an SSD cache device to my ZFS storage improve performance?

    - by Sysadminicus
    The server has 4GB of RAM and my zpool is made up of 15.5k SAS drives arranged like this:
        NAME         STATE     READ WRITE CKSUM
        tank         ONLINE       0     0     0
          raidz1-0   ONLINE       0     0     0
            c0t2d0   ONLINE       0     0     0
            c0t3d0   ONLINE       0     0     0
            c0t4d0   ONLINE       0     0     0
            c0t5d0   ONLINE       0     0     0
            c0t6d0   ONLINE       0     0     0
            c0t7d0   ONLINE       0     0     0
            c0t8d0   ONLINE       0     0     0
          raidz1-1   ONLINE       0     0     0
            c0t10d0  ONLINE       0     0     0
            c0t11d0  ONLINE       0     0     0
            c0t12d0  ONLINE       0     0     0
            c0t13d0  ONLINE       0     0     0
            c0t14d0  ONLINE       0     0     0
        spares
          c0t9d0     AVAIL
          c0t1d0     AVAIL
    The primary use is as an NFS store for a couple of VMware ESXi servers. I can't do any "true" benchmarks because this is a production system (no budget for test systems), but using dd and bonnie++ I can't get more than ~40-50MB/s writes and ~70-90MB/s reads. It seems I should be able to do much better, but I'm not sure where to optimize. Based on what I've read, I think dropping in an OCZ Vertex 2 Pro SSD as my L2ARC is going to be the best bang for the buck to improve throughput. Is there something else I should be looking into to help performance? If not: how do I know how big a cache device I need? Am I safe with only a single SSD as my cache device?
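
    For context, my understanding is that an L2ARC cache device can be added and later removed without risk to the pool, so this is roughly what I'd run; c0t15d0 is an illustrative name for the new SSD:

        # attach the SSD as a read cache (L2ARC); a cache device failure is
        # non-fatal, reads simply fall back to the pool
        zpool add tank cache c0t15d0
        # then watch whether the cache warms up and starts serving reads
        zpool iostat -v tank 5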

    Read the article

  • Small Business HP Virtualisation and iSCSI SAN Options

    - by Robin Day
    We are a small business that hosts our core product on a number of HP servers. Our core production setup is: 1x HP DL380, high-powered, for a SQL Server database; 1x HP DL360, mid-powered, for our core application server; 6x HP DL320, low-powered, for our front ends. We run our training/testing/support systems on a similar setup; the servers are just older and less powerful. Unfortunately this is now causing us issues, as the system has grown beyond the capabilities of these older servers. Upgrading these servers would be expensive, and we believe that virtualisation is probably the way to go for the future. Locally we run a number of test/dev environments on ESXi using direct storage on a couple of high-powered DL360s, and these are performing fairly well. We're thinking that instead of replacing all of our test servers, we can implement an iSCSI SAN and one or two high-powered hosts, hoping that when it comes time to replace our live servers as well, we can just expand the virtual environment to cope. So my question is: can anyone offer any advice on some suitable options? We have generally always been extremely happy with HP servers, all of our kit is currently HP, and therefore our preference would be to stick with HP; however, I'm always happy to hear about other options. I'm hoping that initially a budget of around 15-25k (GBP) would be suitable; this could potentially be increased if I had confidence that the system would pave the way for a cost-effective upgrade of our live systems in the future as well. I am new to SANs and my only real experience is playing with OpenFiler on some old desktops. I think iSCSI should be suitable, but I've not done any research into how SQL Server may perform. I've had a browse through HP's sites and see plenty of information about EVA, MSA, LeftHand, etc. However, from looking at all that, I don't see which options would be best and, more importantly, I don't know exactly what I would need to buy. Any help, links, opinions would be much appreciated. Thanks

    Read the article
