Search Results

Search found 21331 results on 854 pages for 'require once'.


  • How to sync Ubuntu/software/configurations between N computers with free software and/or without a cloud?

    - by skanatek
    Note: this question is not about syncing data in a Dropbox-like way (files, folders); it is more about syncing configurations. I would like to have exactly the same version of Ubuntu, with all the same software installed and configured, on my Desktop PC and on my Laptop PC (and maybe on my small netbook PC), without using Ubuntu Sync and with minimal maintenance effort (set up once, run for a long time). The use case is the following:
    1. I work on my Laptop PC and make some changes to software configuration, for example: configure vim to have a new plugin; update the Search Tracker / Recoll file search index; configure Thunderbird to have an additional IMAP account ('remember password'); add some new bookmarks in Firefox/Chrome; change the desktop background image; install new software with apt-get install; build and install new software with checkinstall; etc.
    2. I do some 'sync' operation.
    3. I switch to my Desktop PC and get all the changes from (1) working on the Desktop PC.
    4. I work on my Desktop PC and make some changes to software configuration, for example: add a new directory to the list of directories backed up by DejaDup; add a new spell-checking dictionary to LibreOffice Writer; configure the Terminator software to have colored fonts; install a new font into the Ubuntu system; configure Ekiga to make phone calls; etc.
    5. I do some 'sync' operation.
    6. I switch to my Laptop PC and get all the changes from (1) and (4) working on the Laptop PC.
    Question: What free/open-source software can I use to sync both machines' Ubuntu systems, installed software and configurations? Is it possible to do that without any cloud services?
    Complementary question: It is obvious that the Desktop PC and the Laptop PC have different hardware configurations. How does the 'sync software' in question deal with video drivers, wlan drivers and their configurations?
    Note: I do not need all the PCs to be synced at the same time, because I work with only one single machine at once.
    Note: I considered using Chef to solve the problem, but it seems that it might be really cumbersome to maintain such a setup.
    Note: I also considered using a bootable USB stick with Ubuntu installed (portable Linux), but I am not sure that the video drivers would work then.
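    A minimal sketch of one common cloud-free approach: keep package selections and selected dotfiles in a git repository that both machines pull from. The repo path, file list and remote below are assumptions for illustration, and this deliberately does not touch hardware-specific items such as video or wlan drivers, which are best left machine-local:

        #!/bin/bash
        # Hypothetical sync script, run on whichever machine was just changed.
        set -e
        cd ~/sync-repo                        # a git clone reachable from both PCs
        apt-mark showmanual > packages.list   # record manually installed packages
        cp ~/.vimrc ~/.bashrc .               # copy whichever dotfiles you track
        git add -A && git commit -m "sync from $(hostname)" && git push

        # On the other machine, to apply the changes:
        #   cd ~/sync-repo && git pull
        #   xargs -a packages.list sudo apt-get install -y
        #   cp .vimrc .bashrc ~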

    Read the article

  • How soon does nginx's token bucket replenish when limiting at requests per minute?

    - by Michael Gorsuch
    Hi all. We've decided that we want to experiment and limit requests per minute instead of requests per second on our sites. However, I am confused by the burst parameter in this context. I am under the impression that when you use the 'nodelay' flag, the rate limiting facility acts like a token bucket instead of a leaky bucket. That being the case, the bucket size is equal to the burst parameter, and every time that you violate the policy (say 1 req/s), you have to put a token in the bucket. Once the bucket is full (being equal to the burst setting), you are given a 503 error page. I am also under the impression that once a violator stops going against the policy, a token is removed from the bucket at a rate of 1 token/s allowing him to regain access to the site. Assuming that I have the above correct, my question is what happens when I start regulating access per minute? If we chose 60 requests per minute, at what rate does the token bucket replenish?
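    For reference, a per-minute limit is declared as in the sketch below (zone name and sizes are placeholders). nginx stores the rate internally at sub-second granularity, so rate=60r/m is treated as exactly 1r/s: capacity is replenished continuously at one request per second, not as 60 tokens arriving in a lump each minute.

        # Hedged nginx.conf sketch: 60 requests/minute per client IP.
        http {
            limit_req_zone $binary_remote_addr zone=perip:10m rate=60r/m;
            server {
                location / {
                    # burst tokens absorb spikes; nodelay serves them immediately
                    limit_req zone=perip burst=10 nodelay;
                }
            }
        }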

    Read the article

  • yum not able to install a package

    - by shadyabhi
    [root@mypc yum.repos.d]# yum search perl-Locale-gettext
    Loaded plugins: dellsysid, fastestmirror
    Repository tmz-puppet is listed more than once in the configuration
    Loading mirror speeds from cached hostfile
     * atomic: www6.atomicorp.com
     * base: mirror.trouble-free.net
     * epel: mirrors.tummy.com
     * extras: eq-centosrepo.hopto.org
     * rpmforge: mirror.hmc.edu
     * updates: mirror.team-cymru.org
    ==================== N/S Matched: perl-Locale-gettext ====================
    perl-Locale-gettext.x86_64 : Internationalization for Perl
    Name and summary matches only, use "search all" for everything.
    [root@mypc yum.repos.d]#

    And:

    [root@mypc yum.repos.d]# yum install perl-Locale-gettext
    Loaded plugins: dellsysid, fastestmirror
    Repository tmz-puppet is listed more than once in the configuration
    Loading mirror speeds from cached hostfile
     * atomic: mir01.syntis.net
     * base: mirrors.gigenet.com
     * epel: mirror.us.leaseweb.net
     * extras: centos.mirror.lstn.net
     * rpmforge: mirror.hmc.edu
     * updates: centos.mirror.choopa.net
    Setting up Install Process
    Nothing to do
    [root@mypc yum.repos.d]#

    What is going wrong here?
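    A few standard diagnostic steps (hedged; adjust names as needed) that usually reveal why search can see a package that install refuses to act on, for example because it is already installed or matches an exclude line:

        # Already installed, or provided by another package?
        rpm -q perl-Locale-gettext
        yum list installed | grep -i gettext

        # Excluded in yum.conf or a repo file?
        grep -n '^exclude' /etc/yum.conf /etc/yum.repos.d/*.repo

        # Retry with excludes disabled and the fully qualified name:
        yum --disableexcludes=all install perl-Locale-gettext.x86_64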

    Read the article

  • Lock System when certain hardware is removed

    - by er4z0r
    Hi all, I am working at a company where you are supposed to lock your screen whenever you leave your desk for a few minutes. Now I wondered if there is a nifty little tool that would lock my screen once a certain device is removed from the system. The ideal thing would of course be a short-range transmitter that causes the screen to be locked once it goes out of range, but for now I would also settle for removing a pen drive from my laptop. I am pretty sure this is feasible; I just want to know if there are any preexisting projects.
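    A minimal polling sketch of the pen-drive variant, assuming a GNOME desktop and a drive that mounts at a known path (both are assumptions; a udev rule triggering on device removal would be the more robust version of the same idea):

        #!/bin/bash
        # Lock the screen as soon as the watched drive disappears.
        WATCH=/media/KEYDRIVE        # assumed mount point of the pen drive
        while sleep 2; do
            if [ ! -d "$WATCH" ]; then
                gnome-screensaver-command --lock
                exit 0
            fi
        done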

    Read the article

  • Web Development Environment: How to distribute an edited hosts file over a bunch of Mac machines?

    - by Alex Reds
    I am doing some research to prepare a web development environment for our small (10 people and growing) new office. Use case: for each new web project we usually create a new alias on the Apache server, e.g. someproject.companywebsite. From my understanding, in order for the rest of our team (including managers and directors) to see this website locally, each of them needs to edit their hosts file (e.g. "192.168.1.10 someproject.companywebsite"), and so on for every new project (there can be 2-5 each week). Solution: I am looking for a way to edit this hosts file only once and distribute it over all the Mac machines in our network at once, or at least much more flawlessly than poking around with each machine every time, over and over again. Is that possible? Or is that a very wrong way of doing this? Perhaps we would be better off setting up our own local DNS server and pointing our router to it? Though our own DNS server concerns me a bit because of possible network interruptions and other lags, if you know what I mean. Or perhaps there are other workflows for this? What's the best way to handle such things? I'd be grateful to hear some advice from experienced admins. I couldn't find this info on the internet, so if you know where to read about it, point me in the right direction. Thank you in advance Alex
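    If you stay with per-machine hosts files rather than local DNS, the push can at least be scripted; a hedged sketch assuming Remote Login (SSH) is enabled on each Mac, with the hostnames and admin account as placeholders:

        #!/bin/bash
        # Push one master hosts file to every Mac on the list.
        MACS="mac01.local mac02.local mac03.local"
        for h in $MACS; do
            scp hosts "admin@$h:/tmp/hosts" &&
            ssh -t "admin@$h" 'sudo mv /tmp/hosts /etc/hosts'
        done

    A local DNS zone with a wildcard record (e.g. *.companywebsite pointing at 192.168.1.10) would remove the per-project edits entirely, at the cost of running one more service.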

    Read the article

  • Download a website that requires log-in with HTTrack Copier

    - by H.Moss
    Hi guys! I have been researching how to download the content of a site that requires a username and password. This is actually harder than I thought it would be. I tried to use HTTrack Copier and followed the instruction below, but it's not working!

    Q: I can not access several pages (access forbidden, or redirect to another location), but I can with my browser, what's going on?

    A: You may need cookies! Cookies are specific data (for example, your username or password) that are sent to your browser once you have logged in certain sites so that you only have to log-in once. For example, after having entered your username in a website, you can view pages and articles, and the next time you will go to this site, you will not have to re-enter your username/password. To "merge" your personal cookies to an HTTrack project, just copy the cookies.txt file from your Netscape folder (or the cookies located in the Temporary Internet Files folder for IE) into your project folder (or even the HTTrack folder)
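    For reference, cookies.txt uses the tab-separated Netscape cookie format; a sketch of what one line looks like (the domain, expiry and values below are made up):

        # Netscape HTTP Cookie File -- place as cookies.txt in the HTTrack project folder
        # fields: domain  include-subdomains  path  secure  expiry(epoch)  name  value
        .example.com	TRUE	/	FALSE	1999999999	session_id	abc123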

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part I

    - by dbayard
    Abstract:
    This blog post will show how we used Oracle R Enterprise to tackle a customer's big calculation problem across a big data set.

    Overview:
    Databases are great for managing large amounts of data in a central place with rigorous enterprise-level controls. R is great for doing advanced computations. Sometimes you need to do advanced computations on large amounts of data, subject to rigorous enterprise-level concerns. This blog post shows how Oracle R Enterprise (R plus the Oracle Database) enabled us to do some pretty sophisticated calculations across 1 million accounts (each with many detailed records) in minutes.

    The problem:
    A financial services customer of mine has a need to calculate the historical internal rate of return (IRR) for its customers' portfolios. This information is needed for customer statements and the online web application. In the past, they had solved this with a home-grown application that pulled trade and account data out of their data warehouse and ran the calculations. But this home-grown application was not able to do this fast enough, plus it was a challenge for them to write and maintain the code that did the IRR calculation.

    IRR – a problem that R is good at solving:
    Internal rate of return is an interesting calculation in that in most real-world scenarios it is impractical to calculate exactly. Rather, IRR is a calculation where approximation techniques need to be used. In this blog post, we will discuss calculating the "money weighted rate of return", but in the actual customer proof of concept we used R to calculate both money weighted and time weighted rates of return. You can learn more about the money weighted rate of return here: http://www.wikinvest.com/wiki/Money-weighted_return

    First Steps: Calculating IRR in R
    We will start with calculating the IRR in standalone/desktop R. In our second post, we will show how to take this desktop R function, deploy it to an Oracle Database, and make it work at real-world scale. The first step we did was to get some sample data. For a historical IRR calculation, you have balances and cash flows. In our case, the customer provided us with several accounts' worth of sample data in Microsoft Excel. (The original post includes a figure showing part of the spreadsheet of sample data.) The data provides balances and cash flows for a sample account (BMV = beginning market value, FLOW = cash flow in/out of the account, EMV = ending market value).

    Once we had the sample spreadsheet, the next step was to read the Excel data into R. This is something that R does well; R offers multiple ways to work with spreadsheet data. For instance, one could save the spreadsheet as a .csv file. In our case, the customer provided a spreadsheet file containing multiple sheets, where each sheet provided data for a different sample account. To handle this easily, we took advantage of the RODBC package, which allowed us to read the Excel data sheet-by-sheet without having to create individual .csv files. We wrote ourselves a little helper function called getsheet() around the RODBC package, then loaded all of the sample accounts into a data.frame called SimpleMWRRData.

    Writing the IRR function
    At this point, it was time to write the money weighted rate of return (MWRR) function itself. The definition of MWRR is easily found on the internet, or if you are old school you can look in an investment performance text book. In the customer proof, we based our calculations on the ones defined in The Handbook of Investment Performance: A User's Guide by David Spaulding, since this is the reference book used by the customer. (One of the nice things we found during the course of this proof of concept is that by using R to write our IRR functions we could easily incorporate the specific variations and business rules of the customer into the calculation.)

    The key thing with calculating IRR is the need to solve a complex equation with a numerical approximation technique. For IRR, you need to find the value of the rate of return (r) that sets the net present value of all the flows in and out of the account to zero. With R, we solve this by defining an NPV function (shown as a code figure in the original post), where bmv is the beginning market value, cf is a vector of cash flows, t is a vector of times (relative to the beginning), emv is the ending market value, and tend is the ending time.

    Since solving for r is a one-dimensional optimization problem, we decided to take advantage of R's optimize method (http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optimize.html). The optimize method can be used to find a minimum or maximum; to find the value of r where our NPV function is closest to zero, we wrapped our NPV function inside the abs function and asked optimize to find the minimum. The original post shows an example call to optimize, where low and high are scalars that indicate the range to search for an answer. To test this out, we set values for bmv, cf, t, emv, tend, low, and high, with low and high set to some reasonable defaults. For example, this account had a negative 2.2% money weighted rate of return.

    Enhancing and Packaging the IRR function
    With numerical approximation methods like optimize, sometimes you will not be able to find an answer with your initial set of inputs. To account for this, our approach was to first try to find an answer for r within a narrow range, then, if we did not find an answer, try calling optimize() again with a broader range. See the R help page on optimize() for more details about the search range and its algorithm.

    At this point, we can now write a simplified version of our MWRR function. (Our real-world version is more sophisticated in that it calculates rates of return for 5 different time periods [since inception, last quarter, year-to-date, last year, year before last year] in a single invocation. In our actual customer proof, we also defined time-weighted rate of return calculations. The beauty of R is that it was very easy to add these enhancements and additional calculations to our IRR package.)

    To simplify code deployment, we then created a new package of our IRR functions and sample data. For this blog post, we only need to include our SimpleMWRR function and our SimpleMWRRData sample data. We created the shell of the package (the call is shown as a figure in the original post). To turn this package skeleton into something usable, at a minimum you need to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the \man subdirectory; in those files, you need to at least provide a value for the "title" section. Once that is done, you can change directory to the IRR directory and build the package at the command line (the command is shown as a figure in the original post). The myIRR package for this blog post (which has both the SimpleMWRR source and the SimpleMWRRData sample data) is downloadable from the original post (myIRR package).

    Testing the myIRR package
    The original post shows an example of testing the IRR function once it was converted to an installable package.

    Calculating IRR for All the Accounts
    So far, we have shown how to calculate IRR for a single account. The real-world issue is how to calculate IRR for all of the accounts. This is the kind of situation where we can leverage the "Split-Apply-Combine" approach (see http://www.cscs.umich.edu/~crshalizi/weblog/815.html). Given that our sample data can fit in memory, one easy approach is to use R's "by" function. (Other approaches to Split-Apply-Combine, such as plyr, can also be used; see http://4dpiecharts.com/2011/12/16/a-quick-primer-on-split-apply-combine-problems/.) The original post shows an example using "by" to calculate the money weighted rate of return for each account in our sample data set.

    Recap and Next Steps
    At this point, you've seen the power of R being used to calculate IRR. There were several good things:
    - R could easily work with the spreadsheets of sample data we were given
    - R's optimize() function provided a nice way to solve for IRR; it was both fast and allowed us to avoid having to code our own iterative approximation algorithm
    - R was a convenient language to express the customer-specific variations, business rules, and exceptions that often occur in real-world calculations; these could be easily added to our IRR functions
    - The Split-Apply-Combine technique can be used to perform calculations of IRR for multiple accounts at once

    However, there are several challenges yet to be conquered at this point in our story:
    - The actual data that needs to be used lives in a database, not in a spreadsheet
    - The actual data is much, much bigger: too big to fit into the normal R memory space and too big to want to move across the network
    - The overall process needs to run fast, much faster than a single processor
    - The actual data needs to be kept secure: another reason not to move it from the database and across the network
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh process

    In our next blog post in this series, we will show you how Oracle R Enterprise solved these challenges.
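    Since the post's code appears only as screenshots in the original, here is a minimal stand-in sketch in plain R of the NPV-plus-optimize idea described above (the function names, argument defaults and example numbers are our assumptions, not the author's actual code):

        # bmv: beginning market value; cf: vector of cash flows; t: times of the
        # flows (in years); emv: ending market value; tend: ending time (years).
        npv <- function(r, bmv, cf, t, emv, tend) {
          bmv * (1 + r)^tend + sum(cf * (1 + r)^(tend - t)) - emv
        }

        # Money weighted rate of return: the r that drives |NPV| to zero.
        simpleMWRR <- function(bmv, cf, t, emv, tend, low = -0.99, high = 1) {
          optimize(function(r) abs(npv(r, bmv, cf, t, emv, tend)),
                   interval = c(low, high))$minimum
        }

        # Made-up example: start at 1000, deposit 100 mid-year, end at 1080.
        simpleMWRR(bmv = 1000, cf = 100, t = 0.5, emv = 1080, tend = 1)
        # approximately -0.019, i.e. a slightly negative return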

    Read the article

  • hardware addressing and configurable addressing scheme

    - by Zia ur Rahman
    Basically I want to ask a question about the configurable addressing scheme for LAN interface hardware. I have read about it in a book; some main points are given below. A configurable addressing scheme provides a mechanism that a customer can use to set a physical address. The mechanism can be manual (switches that must be set when the interface is first installed) or an electronic memory such as an EPROM that can be downloaded from the computer (what does this mean?). Most hardware needs to be configured only once; configuration is usually done when the hardware is first installed. Question: Suppose a network administrator configures the LAN interface hardware (assigns the address) when he installs it. If he later needs to change the physical address of the device, can he change it? Or in this addressing scheme can the hardware only be configured once, so that we cannot reconfigure it later on?
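    In practice the answer is usually yes: on most NICs the configured station address can be overridden later in software, whatever the EPROM holds (reflashing the EPROM itself depends on vendor tooling). A hedged Linux example, with the interface name as a placeholder:

        # Run as root. Temporarily override the MAC address of eth0
        # (reverts to the burned-in address on reboot).
        ip link set dev eth0 down
        ip link set dev eth0 address 02:01:02:03:04:05   # locally administered bit set
        ip link set dev eth0 up
        ip link show eth0    # verify the new address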

    Read the article

  • Are there known issues with Ubuntu 11 and socket 1155? acpi=off

    - by James
    Since building my new system I can't get Ubuntu (or any other flavor of Linux) to "run" properly. To boot the live CD I have to press F6 and turn off acpi, or the screen halts at a black screen with "boot stuff" written above the blinking cursor. When installed, I can only boot if I enter repair mode from the Grub menu; once in repair mode I choose Boot Normally and it boots to the desktop. Once on the desktop I can click (with the mouse) on only one thing, like Firefox or "desktop appearance", and then the mouse no longer clicks on anything else. It's like the computer freezes but the mouse still moves. I end up using the reset button to restart the computer. I was able to update when prompted to do so, but at the end of the update I could not click "finish" and had to use the manual reset button. I have run Ubuntu since v8. My system specs are: Intel i7 2600K (graphics disabled in BIOS), Asus P8Z68-V Pro, 16G Kingston HyperX, 2 EVGA GTX 570 in SLI. Mouse is a simple Logitech USB wireless. Ubuntu installed on secondary SATA drive.
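    To avoid pressing F6 on every boot, the flag can be made permanent in GRUB once a working desktop is reached; a hedged sketch using the standard Ubuntu paths (if your GRUB_CMDLINE_LINUX_DEFAULT line differs from the stock one, edit /etc/default/grub by hand instead):

        sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=off"/' /etc/default/grub
        sudo update-grub    # regenerates /boot/grub/grub.cfg with the new flag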

    Read the article

  • Drivers for Ubuntu 13.10 [on hold]

    - by Fernando De Souza Martins
    I just installed Ubuntu 13.10. My screen resolution is not fitting my screen, as the Ubuntu interface is stretching all over the screen, so I thought I might install NVIDIA's driver, which I know can let me adjust the exact resolution I need. So began a two-hour quest. I downloaded the driver hoping I would have a wizard to install it, but no. So I did a bit of research and found that feature (I think it's called Additional Drivers in English), but it won't show the NVIDIA drivers. I tried the terminal, but once I write the commands I found, it asks for a password and I can't type anything once the password is asked. So, my question, obviously: how do I install this driver? I am not sure if this is appropriate, but why doesn't Ubuntu have a wizard to install things? I feel like I'm working for the OS, when it should be the other way around, but I love the concept of Linux, so I'm pushing forward and trying to use it. Another thing: I had to install a bunch of drivers and applications for the drivers in Windows; do I need to install any other driver? I can't seem to change my mouse's sensitivity in the OS, so how do I do that? I'm sorry I'm asking all of this, but it seems necessary.
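    On 13.10 the usual route is the packaged driver rather than NVIDIA's downloaded .run installer; a hedged sketch (the exact package name can differ by GPU generation). Note that a terminal never echoes anything while you type a sudo password; the blank prompt is normal, just type it and press Enter:

        sudo apt-get update
        sudo apt-get install nvidia-current   # packaged NVIDIA driver
        sudo reboot
        # after reboot, fine-tune resolutions with: nvidia-settings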

    Read the article

  • How do I ensure my Apple keyboard connects on boot?

    - by Stacey Richards
    I am using Ubuntu 10.04 on a laptop and have an Apple wireless keyboard which pairs fine. Every time I turn my computer off and back on again my keyboard stops working, and I have to use the laptop's own keyboard to log in. Once I've logged in, in order to get the wireless keyboard to work, I need to disconnect and reconnect it by:

    1. Clicking on the Bluetooth icon, selecting Apple Wireless Keyboard from the drop down menu, then clicking on Disconnect.
    2. Clicking on the Bluetooth icon, selecting Apple Wireless Keyboard from the drop down menu, then clicking on Connect.

    Looking through syslog to see what's happening during boot, I find:

    Nov 25 10:29:21 sony kernel: [ 24.525372] apple 0005:05AC:0239.0002: parse failed
    Nov 25 10:29:21 sony kernel: [ 24.525379] apple: probe of 0005:05AC:0239.0002 failed with error -14

    and then later in syslog, once I've disconnected and then connected the keyboard, I find:

    Nov 25 10:30:14 sony bluetoothd[1247]: link_key_request (sba=00:21:4F:49:8A:DB, dba=E8:06:88:5A:E0:D4)
    Nov 25 10:30:14 sony kernel: [ 79.427277] input: Apple Wireless Keyboard as /devices/pci0000:00/0000:00:1d.2/usb8/8-1/8-1:1.0/bluetooth/hci0/hci0:12/input11
    Nov 25 10:30:14 sony kernel: [ 79.427611] apple 0005:05AC:0239.0003: input,hidraw1: BLUETOOTH HID v0.50 Keyboard [Apple Wireless Keyboard] on 00:21:4F:49:8A:DB

    I can't find anything helpful when Googling "apple: probe of failed with error -14".
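    As a stopgap while the boot-time parse failure is investigated, some people restart the Bluetooth stack late in boot so the keyboard re-pairs without the manual Disconnect/Connect dance; a hedged sketch for 10.04 (the delay is a guess):

        # Add above the final "exit 0" line of /etc/rc.local:
        (sleep 15 && /etc/init.d/bluetooth restart) &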

    Read the article

  • DotNetNuke + PayPal

    - by Nuri Halperin
    A DotNetNuke site I'm supporting has had a PayPal "buy now" button, and other variations with custom fields, for a while now. About 2 weeks ago (somewhere in March 2010) they all stopped working. The problem manifested such that once you clicked the "buy now" button, the PayPal site would throw a scary error page to the effect of:

    "Internal Server Error. The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, [email protected] and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log"

    Once I verified no cheeky content editor had changed the page, I went digging for answers. The main source of incompatibility with PayPal's simple HTML forms is that DNN includes a form on every page, and nested forms are not really supported. As blogged here and lamented here, the solution I came up with is simply to modify the form enctype to 'application/x-www-form-urlencoded', as illustrated below:

    <input type="image" border="0"
           src="https://www.paypal.com/en_US/i/btn/btn_buynowCC_LG.gif"
           name="submit"
           alt="PayPal - The safer, easier way to pay online!"
           onClick="this.form.action='https://www.paypal.com/cgi-bin/webscr'; this.form.enctype='application/x-www-form-urlencoded'; this.form.submit();" />

    One would think that PayPal would want the masses submitting HTML in all manners of "enctype", but I guess every company has its quirks. At least my favorite non-profit can now continue to accept payments. Sigh.

    Read the article

  • Going by the eBook

    - by Tony Davis
    The book and magazine publishing world is rapidly going digital, and the industry is faced with making drastic changes to its ways of doing business. The sudden take-up of digital readers by the book-buying public has surprised even the most technologically savvy in the industry. Printed books just aren't selling like they did. In contrast, eBooks are doing well.

    The ePub file format is the standard around which all publishers are converging. ePub is a standard for formatting book content so that it can be reflowed for various devices, with their widely differing screen sizes, and can be read offline. If you unzip an ePub file, you'll find familiar formats such as XML, XHTML and CSS. This is both a blessing and a curse. Whilst it is good to be able to use familiar technologies that have been developed to a level of considerable sophistication, it doesn't get us all the way to producing a viable publication. XHTML is a page-description language, not a book-description language, as we soon found out during our initial experiments when trying to specify headers, footers, indexes and chaptering. As a result, it is difficult to predict how any particular eBook application will decide to render a book. There isn't even a consensus as to how the cover image is specified.

    All of this is awkward for the publisher. Each book must be created and revised in a form from which a whole range of 'printed media' can be generated: print books, Mobi for Kindles, ePub for most tablets and smartphones, HTML for excerpted chapters on websites, and a plethora of other formats for other eBook readers, each with its own idiosyncrasies. In theory, if we can get our content into a clean, semantic XML form, such as DocBook, we can, from there, after every revision, perform a series of relatively simple XSLT transformations to output anything from an HTML article, to an ePub file for reading on an iPad, to an ICML file (an XML-based file format supported by the InDesign tool) ready for print publication. As always, however, the task looks bigger the closer you get to the detail.

    On the way to the utopian world of an XML-based book format that encompasses all the diverse requirements of the different publication media, ePub looks like a reasonable format to adopt. Its forthcoming support for HTML 5 and CSS 3, with ePub 3.0, means that features such as widow-and-orphan controls, multi-column flow and multimedia graphics can be incorporated into eBooks. This starts to make it possible to build an "app-like" experience into the eBook and to free publishers to think of putting content before container; to think of what content is required, be it graphical, textual or audio, from the point of view of the user, rather than what's possible in a given, traditional book "container".

    In the meantime, there is a gap between what publishers require and what current technology can provide, and of course building this app-like experience is far from plain sailing. Real portability between devices is still a big challenge, and achieving the sort of wizardry seen in the likes of Theodore Gray's "Elements" eBook will require some serious device-specific programming skills.

    Cheers, Tony.
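    Since an ePub is just a zip archive, its anatomy is easy to inspect first-hand; a sketch (file names follow the common OEBPS layout, which varies by publisher):

        # An .epub is a zip file; list its contents to see the familiar formats.
        unzip -l book.epub
        # Typical layout (varies by publisher):
        #   mimetype                 -- "application/epub+zip", stored uncompressed first
        #   META-INF/container.xml   -- points at the .opf package file
        #   OEBPS/content.opf        -- manifest, spine and metadata (XML)
        #   OEBPS/*.xhtml, *.css     -- the reflowable content itself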

    Read the article

  • Is there any way to disable certain keyboard shortcuts in Windows 7?

    - by matt b
    There is something going on with my Windows 7 machine where, when attempting to unlock the workstation, the Win key is "stuck" on (not quite sure what is causing this). The result is that when I am entering my password, which contains a "p", Windows thinks I am pressing Win + P, as if I want to change my display settings to enable a projector. This is highly annoying when attempting to unlock my workstation! The workaround I've found to "unstick" the Win key is to cycle through the projector/display settings once, land back on the original value, and then resume entering my password as normal. But is there any way to completely disable the shortcut for Win + P? In all likelihood I will never hook up a projector to this machine, and on the once-in-a-lifetime chance that I do, I am more than happy to go into the Display Settings myself.
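    There is a per-user registry value, reported to work on Windows 7, that blanks out individual Win-key combinations; a hedged .reg sketch (sign out and back in, or restart explorer.exe, for it to take effect):

        Windows Registry Editor Version 5.00

        ; Disable only Win+P; append more letters to the string to disable others.
        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced]
        "DisabledHotkeys"="P"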

    Read the article

  • Oracle MAA Part 1: When One Size Does Not Fit All

    - by JoeMeeks
    The good news is that Oracle Maximum Availability Architecture (MAA) best practices combined with Oracle Database 12c (see video) introduce first-in-the-industry database capabilities that truly make unplanned outages and planned maintenance transparent to users. The trouble with such good news is that Oracle's enthusiasm in evangelizing its latest innovations may leave some to wonder if we've lost sight of the fact that not all database applications are created equal. After all, many databases don't have the business requirements for high availability and data protection that require all of Oracle's 'stuff'. For many real-world applications, a controlled amount of downtime and/or data loss is OK if it saves money and effort.

    Well, not to worry. Oracle knows that enterprises need solutions that address the full continuum of requirements for data protection and availability. Oracle MAA accomplishes this by defining four HA service level tiers: BRONZE, SILVER, GOLD and PLATINUM. (The original post includes a figure showing the progression in service levels provided by each tier.) Each tier uses a different MAA reference architecture to deploy the optimal set of Oracle HA capabilities that reliably achieve a given service level (SLA) at the lowest cost. Each tier includes all of the capabilities of the previous tier and builds upon the architecture to handle an expanded fault domain.

    Bronze is appropriate for databases where simple restart or restore from backup is 'HA enough'. Bronze is based upon a single-instance Oracle Database with MAA best practices that use the many capabilities for data protection and HA included with every Oracle Enterprise Edition license. Oracle-optimized backups using Oracle Recovery Manager (RMAN) provide data protection and are used to restore availability should an outage prevent the database from being able to restart.

    Silver provides an additional level of HA for databases that require minimal or zero downtime in the event of database instance or server failure, as well as many types of planned maintenance. Silver adds clustering technology: either Oracle RAC or RAC One Node. RMAN provides database-optimized backups to protect data and restore availability should an outage prevent the cluster from being able to restart.

    Gold raises the game substantially for business-critical applications that can't accept vulnerability to single points of failure. Gold adds database-aware replication technologies, Active Data Guard and Oracle GoldenGate, which synchronize one or more replicas of the production database to provide real-time data protection and availability. Database-aware replication greatly increases HA and data protection beyond what is possible with storage replication technologies. It also reduces cost while improving return on investment by actively utilizing all replicas at all times.

    Platinum introduces all of the sexy new Oracle Database 12c capabilities that Oracle staff will gush over with great enthusiasm. These capabilities include Application Continuity for reliable replay of in-flight transactions that masks outages from users; Active Data Guard Far Sync for zero data loss protection at any distance; new Oracle GoldenGate enhancements for zero downtime upgrades and migrations; and Global Data Services for automated service management and workload balancing in replicated database environments. Each of these technologies requires additional effort to implement, but they deliver substantial value for your most critical applications where downtime and data loss are not an option.

    The MAA reference architectures are inherently designed to address conflicting realities. On one hand, not every application has the same objectives for availability and data protection (the "One Size Does Not Fit All" of this post's title). On the other hand, standard infrastructure is an operational requirement and a business necessity in order to reduce complexity and cost. MAA reference architectures address both realities by providing a standard infrastructure optimized for Oracle Database that enables you to dial in the level of HA appropriate for different service level requirements. This makes it simple to move a database from one HA tier to the next should business requirements change, or from one hardware platform to another, whether it's your favorite non-Oracle vendor or an Oracle Engineered System.

    Please stay tuned for additional blog posts in this series that dive into the details of each MAA reference architecture. Meanwhile, more information on Oracle HA solutions and the Maximum Availability Architecture can be found at:
    - Oracle Maximum Availability Architecture - Webcast
    - Maximize Availability with Oracle Database 12c - Technical White Paper

    Read the article

  • Windows Server Task Scheduler: Running scheduled executable fail-safe?

    - by Mikael Koskinen
    I have an executable which I've scheduled to run once every five minutes (using Windows' built-in Task Scheduler). It's crucial that this executable runs, because it updates a few time-critical files. But how can I react if the virtual server running the executable goes down? There should never be more than a 15-minute gap between runs. As I'm using Windows Server and its Task Scheduler, I wonder if it's possible to create some kind of a cluster which automatically handles the situation. The problem is that the server in question is running on Windows Azure, and I don't think I can create actual clusters using the virtual machines. If the problem can be solved using a 3rd-party tool, that's OK too. To generalize the question a little bit: how do I make sure that an executable is run once every 5 minutes, even if there might be server failures?
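    A hedged sketch of the two halves, with all names and paths as placeholders: the recurring task itself, plus a cheap watchdog scheduled on a second machine that alerts when the output goes stale, which covers the server-failure case without a real cluster:

        # Create the 5-minute task on the worker (schtasks, from cmd or PowerShell):
        schtasks /Create /SC MINUTE /MO 5 /TN "UpdateFiles" /TR "C:\jobs\update.exe"

        # Watchdog (PowerShell) scheduled on a SECOND machine:
        $f = Get-Item '\\worker\share\output.dat'
        if ((Get-Date) - $f.LastWriteTime -gt [TimeSpan]::FromMinutes(15)) {
            Send-MailMessage -To 'ops@example.com' -From 'watchdog@example.com' `
                -Subject 'output.dat is stale' -SmtpServer 'smtp.example.com'
        }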

    Read the article

  • Was API hooking done as needed for Stuxnet to work? I don't think so

    - by The Kaykay
    Caveat: I am a political science student and I have tried my level best to understand the technicalities; if I still sound naive, please overlook that. In the Symantec report on Stuxnet, the authors say that once the worm infects a 32-bit Windows computer which has a WinCC setup on it, Stuxnet does many things, and that it specifically hooks the function CreateFileA(). This function is the route which the worm uses to actually infect the .s7p project files that are used to program the PLCs; i.e., when the PLC programmer opens a file with .s7p, control transfers to the hooked function CreateFileA_hook() instead of CreateFileA(). Once Stuxnet gains control, it covertly inserts code blocks into the PLC without the programmer's knowledge and hides them from his view. However, it should be noted that there is also one more function, called CreateFileW(), which does the same task as CreateFileA(), but the two work on different character sets: CreateFileA works with the ASCII character set and CreateFileW works with wide characters, or the Unicode character set. Farsi (the language of the Iranians) is a language that needs the Unicode character set, not ASCII characters. I'm assuming that the developers of any famous commercial software (for example, WinCC) that will be sold in many countries will take 'localization' and/or 'internationalization' into consideration while it is being developed, in order to make the product fail-safe; i.e., the software developers would use Unicode while compiling their code, not just ASCII. Thus, I think that CreateFileW() would have been invoked on a WinCC system in Iran instead of CreateFileA(). Do you agree? My question is: if Stuxnet hooked only the function CreateFileA(), then based on the above assumption is there a significant chance that it did not work at all? I think my doubt will get clarified if my assumption is proved wrong, or the Symantec report is proved incorrect. Please help me clarify this doubt. Note: I had posted this question on the general Stack Exchange website and did not get the appropriate responses I was looking for, so I'm posting it here.

    Read the article

  • Need advice on which PCI SATA Controller Card to Purchase

    - by Matt1776
    I have a major issue with the build of a machine I am trying to get up and running. My goal is to create a file server that will service the needs of my software development, personal media storage and streaming/media server needs, as well as provide a strong platform for backing up all this data in a routine, cron-job-oriented, German-efficiency sort of way. The issue is a simple one: all my drives are SATA drives and my motherboard controller only has 4 ports. Solving the issue has proven to be an unmitigated nightmare. I would like advice on the purchase of a 4-port internal SATA / 2-port external eSATA PCI SATA controller card that has the following features and/or advantages:

    - It must function. If I plug it in and attach drives, I expect my system to still make it to the operating system login screen.
    - It must function on CentOS, and I mean it must function WELL and with MINIMAL hassle. If hassle is unavoidable, there shall be CLEAR-CUT and EASY-TO-FOLLOW instructions on how to install drivers and other supporting software.
    - I do not need nor want fakeRAID; I will be setting up any RAID configurations from within the operating system.

    Now, if I am able to find such a mythical device, I would be eternally grateful to whoever is able to point me in the right direction, a direction which I assume will be paved with yellow bricks. I am prepared to pay a considerable sum of money (as SATA controller cards go), so paying anywhere between 60 and 120 dollars will not be an issue whatsoever. Does such a magical device exist? The following link shows an "example" of the type of thing I am looking for; however, I have no way of verifying that once I plug this baby in my system will still continue to function once I've attached the drives, or that once I've made it to the OS I will be able to install whatever drivers or software programs I need to make it work with relative ease. It doesn't have to be dog-shit simple, but it cannot involve kernels or brain surgery. http://www.amazon.com/gp/product/B00552PLN4/ref=pd_lpo_k2_dp_sr_1?pf_rd_p=486539851&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=B003GSGMPU&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=1HJG60XTZFJ48Z173HKY So does anyone have a suggestion regarding the subject I am asking about? PCI SATA controller cards? It would help if you've had experience with the component before; that is, after all, why I am asking here: for those who have experience that I do not have. Bear in mind that this is for a home setup and that I do not have a company credit card. I have a budget with a 'relative' upper limit of about $150.00.

    Read the article

  • Tender vs. Requirements vs. Solution Design

    - by Tom Tom
    Conventionally, which of the above documents is deemed to hold the most weight when it comes to system acceptance? I recently had a conversation along these lines: It was argued that the initial requirements / tender documentation should be used to determine system acceptance. It was said that the solution design only serves to describe the way in which the system will solve the problem, not the problem it will solve. Furthermore, it was argued that if requirements are missed during solution design, the requirements should be referenced during system acceptance and that if any requirements were missed then the original tender should be referenced. Conversely, I suggested that - while requirements may be based on the original tender - they supersede it once agreed with the stakeholders. Furthermore, during solution design, analysis is performed to address and refine these initial requirements, translating them into a system capable of meeting the actual requirements. Once signed off by the relevant users, this solution design should absolutely represent the requirements (by virtue of the fact that it's designed upon them) but actually supersedes them as the basis for system acceptance. Is one of the above arguments more valid than the other?

    Read the article

  • Login on VMWare XP guest machine keeps locking user AD account

    - by mark
    Environment: Windows 2003 AD. Host: Windows 7 Pro. VMware guest: Windows XP. I use a normal user but I also have AD admin rights (via another user). I use my AD account to log in on the host as well as on the aforementioned guest system. They don't share their profiles, so no problems there. I had reason to change my user's (AD) password. When I did this, the guest was suspended but my user was logged in. A few days after my password change I resumed the guest. I was able to work but couldn't access mapped network drives. I logged out and tried to log in again. At this point I realized that I had initially been logged in with a session from before I changed my password. I logged in again with the new password, but then things went bad. I was able to successfully log in to my XP guest; however, once that was completed, my AD user account got locked. This now also affected my user on the host. I was able to unlock the account, but there is still this problem: I log in with my new password on the guest, and then my AD account gets locked. I'm successfully logged into the guest, but I can't access network shares from the AD server, and if I don't unlock my account on the AD server, I get further problems with my AD user. I tried multiple things, none of which worked: I removed the XP guest from AD, deleted all users (even my XP AD user profile on the guest), added the machine back to the AD and logged in (log in successful, account locked). I resumed an older state of my guest (sometimes from the last year even), but the problem still persists. I tried this with networking disabled when the old machine state is resumed, and so on, but no luck. It seems to me that although only my account is locked, this is somehow connected to the guest machine itself. I really want to avoid re-installation. This guest image was my old workstation, which I virtualized once I moved to W7 Pro, and thus it is still very valuable to me. I can work locally on the guest once logged in, but I can't access any network shares, which is a problem. Thanks

    Read the article

  • Finding bugs is difficult, right?

    - by Laila
    Something I hear developers tell us all the time is that they take pride in being a developer, and that bugs are a dent in that pride. Someone once told me, "I know I have found bugs years later, and it's the worst feeling in the world." So how can you avoid that sinking feeling when you find out a bug has been in production for months before someone lets you know about it? Besides, let's face it: hearing about a bug often means a world of pain, because it can take hours to track down where the problem is and more hours (if not days) to fix it. And during that time you're not working on something new, and that, my friends, is really frustrating! So to cheer you up, we've created a Bug Hunt game, where you battle against the clock to spot bugs. We've really enjoyed putting this together and hope you enjoy playing it too. Once you're done with the bug hunt, we explain how easy it can be to find and fix bugs in real life, using a neat mechanism that we call Automated Error Reporting. Play the game now.

    Read the article

  • How to Configure Microsoft Word 2013 to Connect to Geekswithblogs

    - by Enrique Lima
    The first step in this process is to open Word 2013. Once there, you will have the different templates available; select Blog Post. Once the template for Blog Post opens, a dialog will pop up with the option to Register a Blog Account; click on Register Now. The next part of the dialog will prompt you to provide the New Blog Account details, starting with the type of blog you have (SharePoint, WordPress, TypePad and others are listed). In our case, for GeeksWithBlogs, we will select Other. Now come the juicy details! Under the New Account dialog, set the API to MetaWebLog. Then provide the Blog Post URL; this needs to be http://geekswithblogs.net/<your-account>/services/metablogapi.aspx (remember to change the <your-account> part with your info). Then enter your User Name and Password, click OK, and you should be set (you will receive a dialog letting you know information will be transferred). Hope it works for you!

    Read the article

  • Identifying the Latest Family Packs for Oracle E-Business Suite

    - by Oracle_EBS
    Identifying the Latest Family Packs for Oracle E-Business Suite (reprinted from blogs.oracle.com/stevenchan, Mar 15, 2012). It's important to ensure that your E-Business Suite environment is kept current, but it can be tricky to identify the latest product-level Family Packs that have been released. You might need to identify the latest Family Pack updates available for an Oracle E-Business Suite R12 or R11i environment when performing one of the following tasks: researching the latest functionality available; implementing a new module; planning an upgrade to your Oracle E-Business Suite environment. Two useful links are available on My Oracle Support that provide a listing of packs for an Oracle E-Business Suite R12 or R11i environment. To navigate to a listing of packs, first log in to My Oracle Support. Once logged in, navigate to the Patches & Updates tab. Once on the Patches & Updates section, navigate to the Patching Quick Links region. This region contains links to Latest R12 Packs and Latest R11i Packs. After clicking one of the links for either R12 or R11i packs, you will be directed to a new screen that displays all available packs for the selected version. The original post shows an example of the screen displayed upon clicking the Latest R12 Packs link (naturally, the actual Family Pack references may change over time). Note that for R12, the listing displayed shows the latest packs available for the most current release of R12; currently, this is Release 12.1.3. For Release 11i, the listing displayed is for the most current release of R11i, 11.5.10. Related articles: Quarterly E-Business Suite Upgrade Recommendations: January 2012 Edition; Identifying Recommended Patches for E-Business Suite Environments; EBS Support Information Center + Patching & Maintenance Advisor Available on My Oracle Support; What's the Best Way to Patch an E-Business Suite Environment?

    Read the article

  • Ubuntu 12.04 installation on GPT + RAID going into grub rescue

    - by Proy
    I have two 2TB disks. I am installing Ubuntu 12.04 using the alternate version of the server CD. On the partitioning page I have done my partitioning as follows:

    /dev/sda1 - 32 MB - bios_grub
    /dev/sda2 - 50 GB - RAID device
    /dev/sda3 - 8 GB - RAID device
    /dev/sda4 - balance of disk - RAID device
    /dev/sdb1 - 32 MB - bios_grub
    /dev/sdb2 - 50 GB - RAID device
    /dev/sdb3 - 8 GB - RAID device
    /dev/sdb4 - balance of disk - RAID device

    After this I have set up the RAID devices:

    /dev/md0 (/dev/sda2 + /dev/sdb2) for /, ext4
    /dev/md1 (/dev/sda3 + /dev/sdb3) for swap
    /dev/md2 (/dev/sda4 + /dev/sdb4) for /home, ext4

    The installation finishes and shows that it is installing grub to /dev/sda and /dev/sdb, but once the system reboots it falls into grub rescue mode. On doing ls I can not see the md devices, only hd ones. I also tried booting into rescue mode with the install CD and doing grub-install /dev/sda and /dev/sdb. What am I doing wrong? Why is grub2 not detecting the RAID devices?

    UPDATE: I just did the same steps with Ubuntu 10.04 and it worked perfectly fine. I wiped out the RAID and partitions and everything and did it from scratch. I think the issue is with Ubuntu 12.04 and the way it partitions 2 TB disks.
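    A commonly cited repair sequence from the install CD's rescue shell, hedged (device names assume the layout above): assemble the arrays, chroot into the installed system, and reinstall GRUB to both disks so its mdraid modules get set up:

        mdadm --assemble --scan        # bring up /dev/md0..md2
        mount /dev/md0 /mnt
        mount --bind /dev  /mnt/dev
        mount --bind /proc /mnt/proc
        mount --bind /sys  /mnt/sys
        chroot /mnt
        grub-install /dev/sda && grub-install /dev/sdb
        update-grub
        exit    # then reboot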

    Read the article

  • Google ranking, page crawl

    - by Nawaf Mubarak
    Please don't mind me asking this newbie question about Google ranking. I know that in order to get ranked a page has to be crawled by Google bots. I have an example page from which I want to get a better understanding of how the system works with Google. I made a page on my website last month. It got indexed pretty quickly, and then I found it on Google's page 15 for my keyword as a start. The next day it made it to page 13, then after a week it was jumping back and forth between pages 17/18 and up to 20. Now a month has passed and it isn't listed in any position for that keyword; sometimes I find it on page 30, but later I won't find it anywhere, and it keeps happening this way these days. Even when it isn't listed on any page for my keyword, if I do a search for "site:thepageadress" it is listed, which means I'm not penalized and my page is there for Google to see, but it isn't in the search results for my keyword. But when I search "site:thepage_adress", hit the "search tools" option and click on "Past day" or "Past week", it isn't listed; it is only listed when I click on "Past month". I think this means that Google indexed the page, looked at it once when I published it, and never looked at it again. Is this a fair statement? So two questions come to mind here. 1. Should Google keep looking at a page even if I haven't changed any info on it? Is this an indication that my page is doing fine, or is it normal that Google sees it once and that's it? 2. Why does my page keep jumping back and forth in the ranking results for my keyword, and sometimes not get listed at all, and how do I fix that? What does that mean? Sorry for the long message. I hope to God that somebody can help me with this. Thank you!

    Read the article
