Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.

Page 314/1338 | < Previous Page | 310 311 312 313 314 315 316 317 318 319 320 321  | Next Page >

  • Old PCI Video Card and PCI-Express ATI Radeon HD 4200 Pro Living Together

    - by Adam Driscoll
    I have two video cards installed in a Dell Optiplex 745. The PCI card is from 2001, and I was hoping to install it alongside the Radeon so I can have 3 monitors; the Radeon is already running two itself. The problem is that when I boot the box with default BIOS settings, the PCI card drives its single display but the other two stay blank. In Device Manager I can see that both video cards are visible with drivers loaded correctly, but under Monitors I only have a single Generic PnP Monitor from the VGA connection to the PCI card. If I switch the BIOS setting to force PCI-E, the Radeon displays to the two other monitors, but when I check Device Manager in this configuration the PCI card shows an error saying the driver could not be started. Any ideas?

    Read the article

  • Updated Intel display driver causing errors when booting

    - by cdysthe
    I upgraded to Graphics Installer 1.0.6 on my Ubuntu 14.04 machine and installed the drivers using the Intel Graphics Installer. The laptop is Intel Ivy Bridge powered with Intel HD graphics. It is an Optimus machine, but I have disabled the Nvidia card in the BIOS. The Intel Graphics Installer installs the package i915-3.15-3.13-dkms.deb, which I assume is the updated driver. It causes a bunch of error messages when I boot. Here are the relevant errors from dmesg: [ 7.206151] drm: module verification failed: signature and/or required key missing - tainting kernel [ 7.208045] drm: module has bad taint, not creating trace events [ 7.336470] fb: conflicting fb hw usage inteldrmfb vs VESA VGA - removing generic driver [ 7.393854] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013) [ 7.393855] [drm] Driver supports precise vblank timestamp query. [ 7.393921] vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem [ 7.505798] [drm] GMBUS [i915 gmbus dpb] timed out, falling back to bit banging on pin 5 [ 7.507233] init: Failed to obtain startpar-bridge instance: Unknown parameter: INSTANCE [ 7.944183] [drm:cpt_serr_int_handler] ERROR PCH transcoder A FIFO underrun [ 8.368479] i915 0000:00:02.0: fb0: inteldrmfb frame buffer device [ 8.368480] i915 0000:00:02.0: registered panic notifier [ 8.818416] [drm] Enabling RC6 states: RC6 on, RC6p on, RC6pp off What could the problem be, and will it affect performance? I tried removing the package and the errors went away, but then I'm running an older driver, I assume?

    Read the article

  • Mac friendly file sharing from VirtualBox

    - by kitsched
    I have set up Ruby on Rails on Ubuntu in a VirtualBox instance on my PC, enabled Samba, and I'm connecting to it over the home network from my Mac. All is fine, except that I have some issues deleting files from inside applications: e.g. in Sublime Text 2, when I right-click a file in the browser and select delete, nothing happens (same in my Git client). To be able to delete files I have to navigate to the folder in Finder (which leaves those nasty .DS_Store files scattered all around) or issue the delete command from the terminal (inconvenient). If you're asking why I'm using VirtualBox for Rails instead of doing the development directly on the Mac, it's because of the ease of portability. So my question is: are there any network sharing options I could use to make the Linux instance play nicer with my Mac?
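
    One option in this direction is NFS instead of Samba; a minimal sketch, assuming the project lives in /home/user/rails on the guest, the guest's IP is 192.168.1.10 and the Mac sits on 192.168.1.0/24 (all of these are hypothetical values):

      # On the Ubuntu guest: install the NFS server and export the project directory
      sudo apt-get install nfs-kernel-server
      echo "/home/user/rails 192.168.1.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
      sudo exportfs -ra

      # On the Mac: mount it (macOS usually wants the resvport option)
      sudo mkdir -p /Volumes/rails
      sudo mount -t nfs -o resvport,rw 192.168.1.10:/home/user/rails /Volumes/rails

    NFS tends to handle POSIX-style deletes from editors better than SMB, though whether it fixes Sublime Text 2 and the Git client here is an assumption worth testing.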

    Read the article

  • SQL SERVER – Copy Database – SQL in Sixty Seconds #067

    - by Pinal Dave
    There are multiple reasons why a user may want to make a copy of a database. Sometimes a user wants to copy the database to the same server, and sometimes to a different server. The important point is that DBAs and developers may want copies of their databases for various purposes; I copy my database for backup purposes. When we hear "copying a database", the very first thought that comes to mind is Backup and Restore or Attach and Detach. Both of these processes have their own advantages and disadvantages, and as a matter of fact they are the more efficient and recommended methods. However, if you just want to copy your database as it is and do not want to go for an advanced feature, you can just use the Copy Database feature of SQL Server. Here are the settings which you can use to copy the database. SQL in Sixty Seconds Video: I have attempted to explain the same subject in simple words in the following video. Action Item: Here are the blog posts I have previously written on this subject. You can read them over here: Copy Database from Instance to Another Instance – Copy Paste in SQL Server; Copy Database With Data – Generate T-SQL For Inserting Data From One Table to Another Table; Copy Data from One Table to Another Table – SQL in Sixty Seconds #031 – Video; Generate Script for Schema and Data – SQL in Sixty Seconds #021 – Video. You can subscribe to my YouTube Channel for frequent updates. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • 5 Design Tricks Facebook Uses To Affect Your Privacy Decisions

    - by Jason Fitzpatrick
    If you feel like Facebook increasingly gives you fewer and fewer options to reject application and organization access to your private information, you're not imagining it. Here are five ways Facebook's design choices in the App Center have minimized your choices over time. Over at TechCrunch they have a guest post by Avi Charkham highlighting five ways recent changes to the Facebook App Center put privacy settings on the back burner. Regarding the comparison shown in the original article, for example, he writes: #1: The Single Button Trick – In the old design Facebook used two buttons – “Allow” and “Don’t Allow” – which automatically led you to make a decision. In the new App Center Facebook chose to use a single button. No confirmation, no decisions to make. One click and, boom, you're done! Your information was passed on to the app developers and you never even noticed it. Hit up the link below to check out the other four redesign choices that minimize the information about privacy and data usage you see and maximize the click-through and acceptance rate for apps.

    Read the article

  • How to explain pointers to a Java/VB programmer

    - by Skeith
    I am writing a game, and my friend has offered to help me, as it is an RPG and will take a long time to do the "scripting" part of the game. The problem is that, in my opinion, he's not that good a programmer :( (add flame war here). He has only programmed in Java and VB and keeps saying really stupid things to me like "Why don't you drag and drop an onClick event" to design my UI, when I'm using DirectX. I tried explaining pointers to him, but his response was: if it's just a variable that holds a memory address, why don't you just use an int? I create an instance of an attack class and give each creature a pointer to it, so if several creatures use the same attack there is only one instance of it. He keeps asking why I don't just put if statements in the creature class for every attack class and set true for the ones that are present. He has programmed mainly in VB and a little in Java, just enough to learn OOP. How can I explain advanced C++ concepts like pointers and memory management to him? He just doesn't understand that there are no super functions like form.show in C++.

    Read the article

  • How to get OpenSSH to use ksshaskpass under KDE?

    - by Guss
    When using a GNOME desktop on Ubuntu, if I use the OpenSSH client to connect to another computer (running from gnome-terminal), I get a single graphical popup asking for my private key's pass-phrase. After that I no longer need to enter my pass-phrase, as it is cached by the SSH agent. Under KDE it doesn't work like that: when I start ssh from Konsole, I get a text prompt for my pass-phrase every single time, even though ssh-agent is running. If I run ssh-add from the terminal, I can enter my pass-phrase there and it will be stored by ssh-agent, so I won't get any more pass-phrase prompts; if I run ssh-add from the KRunner graphical command line (the "Run" dialog), I get a graphical prompt with the same result. The problem is that I have to remember to run ssh-add every time I log in to the desktop. How can I get ssh under KDE to behave the same as it does on GNOME: the first time the pass-phrase is needed, pop up a graphical dialog and store the pass-phrase in the agent? I've installed ksshaskpass, but that didn't change anything.
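
    The usual workaround is to point SSH_ASKPASS at ksshaskpass and pre-load the key once per session from a KDE autostart script; a minimal sketch (the autostart path and the ksshaskpass location are assumptions and differ between KDE versions):

      #!/bin/sh
      # e.g. ~/.kde/Autostart/ssh-add.sh, marked executable
      export SSH_ASKPASS=/usr/bin/ksshaskpass
      # With no terminal on stdin, ssh-add falls back to the askpass program,
      # so the pass-phrase dialog appears once and the key is cached by ssh-agent.
      ssh-add < /dev/null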

    Read the article

  • Unity scaling instantiated GameObject at Start() doesn't "keep"

    - by Shivan Dragon
    I have a very simple scenario: a box-like prefab which is imported from Blender automatically (I have the .blend file in the Assets folder), and a script that has two public fields. In one I place the above prefab, and in the other I place a terrain object (which I've created in Unity's graphical view): public Collider terrain; public GameObject aStarCellHighlightPrefab; This script is attached to the camera. The idea is to have the Blender prefab instantiated, have the terrain set as its parent, and then scale the prefab instance up. I first did it like this, in the Start() method: void Start () { cursorPositionOnTerrain = new RaycastHit(); aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab, new Vector3(300,300,300), terrain.transform.rotation); aStarCellHighlight.name = "cellHighlight"; aStarCellHighlight.transform.parent = terrain.transform; aStarCellHighlight.transform.localScale = new Vector3(100,100,100); } At first I thought it didn't work. However, later I noticed that it did in fact work, in the sense that the scale was applied right at the start, but then the prefab instance immediately came back to its initial scale. Putting the scale code in the Update() method fixes it, in the sense that now it stays scaled all the time: void Update () { aStarCellHighlight.transform.localScale = new Vector3(100,100,100); //... } However, I've noticed that when I run this code, the object is first displayed without the scale being applied, and it takes about 5-10 seconds for the scale to happen. During this time everything works fine (input, logging, etc.). The scene is very simple; it's not like it has a lot of stuff to load (there's a ray cast from the camera onto the terrain, but that seems to happen without such delays). My (2-part) question is: Why doesn't it keep the scale transform when I apply it in the Start() method, forcing me to keep scaling it in the Update() method? And why does it take so long for the scale to apply/show up?

    Read the article

  • Unable to connect to sites using IIS7 Manager

    - by Phil.Wheeler
    I'm a developer who has been assigned the task of managing and configuring a new IIS7 instance on a remote server. My domain account has been added to the local Administrators group on the box, but IIS7 has been configured to accept connections only from accounts with Windows credentials. I've added my domain account to the IIS Manager permissions for one of my sites, but I'm still unable to connect to that site, the IIS instance, or the server in general from my local machine. There's obviously a missing element in the configuration of this setup, but I don't know where to start looking. The event logs on the IIS box show audit failures for my account when I try to connect remotely via the IIS7 Manager tool on my local machine. Suggestions gratefully received.

    Read the article

  • Manager/Container class vs static class methods

    - by Ben
    Suppose I have a Widget class that is part of a framework used independently by many applications. I create Widget instances in many situations and their lifetimes vary. In addition to Widget's instance methods, I would like to be able to perform the following class-wide operations: find a single Widget instance based on a unique id; iterate over the list of all Widgets; remove a widget from the set of all widgets. In order to support these operations, I have been considering two approaches: Container class – create some container or manager class, WidgetContainer, which holds a list of all Widget instances, supports iteration and provides methods for Widget addition, removal and lookup. For example, in C#: public class WidgetContainer : IEnumerable<Widget> { public void AddWidget(Widget widget); public Widget GetWidget(WidgetId id); public void RemoveWidget(WidgetId id); } Static class methods – add static class methods to Widget. For example: public class Widget { public Widget(WidgetId id); public static Widget GetWidget(WidgetId id); public static void RemoveWidget(WidgetId id); public static IEnumerable<Widget> AllWidgets(); } Using a container class has the added problem of how to access the container class. Make it a singleton? ..yuck! Create some World object that provides access to all such container classes? I have seen many frameworks that use the container class approach, so what is the general consensus?

    Read the article

  • Installing SQL Server 2005 Express on Windows 8 [closed]

    - by Angel
    We have an application that installs a custom instance of Microsoft SQL Server 2005 Express as part of the whole installation process. Microsoft states that SQL Server 2005 Express is not compatible with Windows 8, but in reality it seems to install and work perfectly fine. The only problem is that during the installation a dialog appears saying it's not compatible, and offers options to get help online, continue with the installation anyway, or cancel. If you choose to continue anyway at all these incompatibility prompts, the SQL Server instance is installed without any problem whatsoever. Does anyone know if there is a way to suppress these incompatibility messages during the SQL Server installation (or any installation, for that matter)?

    Read the article

  • Dell R910 with Integrated PERC H700 Adapter

    - by Alex
    I am in the process of designing an architecture based around a single Dell R910 server running Windows Server 2008 Enterprise. I would like the server to have 8 RAID1 pairs of spinning disks, so I intend to implement: Dell R910 Server; Integrated PERC H700 Adapter with 1 SAS expander on each SAS connector (so 8 expanders in total); 7 RAID1 pairs of 143GB 15K HDDs, each pair on one connector using an expander; 1 RAID1 pair of 600GB 10K HDDs, paired on the remaining connector using an expander. My main concern is not to introduce bottlenecks in this architecture, and I have the following questions. Will the PERC H700 Adapter act as a bottleneck for disk access? Will using SAS expanders for each RAID1 pair cause a bottleneck, or would this be as fast as pairing disks directly attached to the SAS connectors? Can I mix the disks, as long as the disks in each RAID1 pair are the same? I assume so. Can anyone recommend any single-to-double SAS expanders that are known to function well with the H700? Cheers, Alex

    Read the article

  • OSX Snow Leopard - Multiple httpd/apache instances for PHP 5.2 & 5.3 together

    - by iongion
    I need to run Apache with both PHP 5.2 and 5.3, without other web servers such as nginx, lighttpd, etc. – just Apache httpd. The easiest way to have both PHP 5.2 and PHP 5.3 with Apache on the same machine is to run them in different web servers (or at least different web server instances). I already do this on Windows, and it works flawlessly because it is easy to specify the conf file that a specific instance loads. But how can this be achieved on Mac OS X, without ditching the web server that OS X comes with built in? The basic idea is to create N IP addresses, one for each Apache instance to bind to, for example: 192.168.0.52 – this is for the Apache httpd with PHP 5.2; 192.168.0.53 – this is for the Apache httpd with PHP 5.3 (each Apache will bind to its own IP address). On OS X, I don't know how to configure httpd to start as multiple services/daemons, each with its own httpd.conf file!
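
    A sketch of the two-instance idea with the stock OS X Apache (paths, IPs and PHP module locations are assumptions): keep one httpd.conf per PHP version, each with its own Listen address and PidFile, and start each instance with -f.

      # httpd-php52.conf: Listen 192.168.0.52:80, LoadModule for PHP 5.2, PidFile /var/run/httpd-php52.pid
      # httpd-php53.conf: Listen 192.168.0.53:80, LoadModule for PHP 5.3, PidFile /var/run/httpd-php53.pid

      # Start one instance per config file; -f selects the config, -k controls start/stop
      sudo /usr/sbin/httpd -f /etc/apache2/httpd-php52.conf -k start
      sudo /usr/sbin/httpd -f /etc/apache2/httpd-php53.conf -k start

      # Stop an instance the same way
      sudo /usr/sbin/httpd -f /etc/apache2/httpd-php52.conf -k stop

    To have them come up at boot, each command could be wrapped in its own launchd job, but that part is left as an exercise.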

    Read the article

  • Help with IPTables - Masquerading + Forwarding, 1-to-1?

    - by Artiom Chilaru
    I've got a clean Ubuntu Server 10.10 with OpenSSH, OpenVPN and vsftpd installed. The server is running as a VM on the Hyper-V server (hypervisor), has two network interfaces mapped to physical adapters (eth0 and eth1), and a virtual interface with a direct connection to the hypervisor (eth2). The VPN creates a tun0 interface when a client connects. What I want is for the remote user, connecting over VPN, to be able to reach the hypervisor (all ports, ping, etc.). The initial idea was to make the VPN create a tap0 interface and bridge eth2 to tap0, but this didn't work, unfortunately, as it seems the adapters don't want to go into promiscuous mode (partially confirmed by MS). At the same time, both the hypervisor and the remote client over VPN can successfully ping/connect to the Ubuntu server with no problems. So my plan right now is to try doing some 1-to-1 masquerading, if possible. Basically, I want every request sent from the VPN client to the Ubuntu server to be redirected to the hypervisor instead (with IP translation, of course), and every request from the hypervisor to the Ubuntu machine sent to the VPN client (IP translated too). Only one client will be connected to the VPN at a time, so I can limit it to a single IP at all times if necessary. Is this the right way to go, and if so, how can this be achieved? It's almost like a special case of port forwarding, except every single port on tun0 is forwarded to a machine on eth2, and every port on the eth2 side forwards to an IP on tun0. I guess it could be done with iptables, but I'm rather new to Linux, so I can't do it myself... help? :(
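
    A hedged sketch of the 1-to-1 NAT idea with raw iptables (all addresses are placeholders: 10.8.0.6 for the VPN client, 10.8.0.1 for the server's tun0 address, 192.168.100.1/192.168.100.2 for the server and hypervisor on eth2; since Shorewall normally owns these tables, the equivalent rules would usually go into its configuration instead):

      # Let the box forward between tun0 and eth2
      sysctl -w net.ipv4.ip_forward=1

      # Anything the VPN client sends to the server's tun0 address is handed to the hypervisor instead
      iptables -t nat -A PREROUTING -i tun0 -s 10.8.0.6 -d 10.8.0.1 -j DNAT --to-destination 192.168.100.2
      # Rewrite the source so the hypervisor replies back through this box
      iptables -t nat -A POSTROUTING -o eth2 -d 192.168.100.2 -j MASQUERADE

      # Mirror of the trick for connections the hypervisor initiates toward this box
      iptables -t nat -A PREROUTING -i eth2 -s 192.168.100.2 -d 192.168.100.1 -j DNAT --to-destination 10.8.0.6
      iptables -t nat -A POSTROUTING -o tun0 -d 10.8.0.6 -j MASQUERADE

      # Allow the forwarded traffic through the FORWARD chain
      iptables -A FORWARD -i tun0 -o eth2 -j ACCEPT
      iptables -A FORWARD -i eth2 -o tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT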

    Read the article

  • SUDO YUM not found

    - by ThomasReggi
    I am running an Amazon EC2 instance on Amazon's Linux. Whenever I run anything with sudo yum it gives me this: sudo: yum: command not found ec2-user$ rpm -qf /usr/bin/yum yum-3.2.29-30.24.amzn1.noarch ec2-user$ which yum /usr/bin/yum Running which yum as root gives: root$ which yum /usr/bin/which: no yum in (/sbin:/bin:/usr/sbin:/usr/bn:/usr/local/bin:/opt/aws/bin) This is a new EC2 instance, two days old. When I first logged in I ran sudo yum update and everything went well. What changed?
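
    The root PATH in that output looks suspicious – it lists /usr/bn rather than /usr/bin, which is where yum lives – so a hedged first check is sudo's secure_path and root's PATH rather than yum itself:

      # See which PATH sudo actually passes to commands
      sudo env | grep '^PATH='
      sudo grep secure_path /etc/sudoers      # fix any typo with: sudo visudo

      # Workaround until the path entry is corrected: call yum by absolute path
      sudo /usr/bin/yum update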

    Read the article

  • Reducing latency for different geographic regions on Amazon Cloud

    - by Shoaibi
    I have got an application which has three components: application code – an Amazon EC2 US-EAST-1 instance; application images and other static data – Amazon S3 with CloudFront; application database – Amazon RDS. In short, I need something like CloudFront for EC2. At greater length: people using this application from a different region, say the Middle East, will get static content quickly thanks to CloudFront, but there will be a lot of latency when communicating with the EC2 instance. I want a budget-friendly way of improving this. Launching Amazon instances in every region on offer is certainly a choice, but it isn't really cheap, so I would try to avoid it unless it's a last resort. Also, if my clients need to communicate with the RDS database directly, is there some kind of solution which gives the functionality mentioned above, but for RDS?

    Read the article

  • Linux Router - Share bandwidth per IPs with current active connections

    - by SRoe
    We have a Linux machine running as a custom router, currently utilising Shorewall. This sits between our incoming internet connection and the internal LAN. What we would like to achieve is 'fair use' of the bandwidth on a per-IP basis. If only one person currently has an active connection then they get 100% utilisation of the line; however, if 20 people have active connections then they should each get 5% utilisation of the line. This should be irrespective of the number of connections held by each user. For example, say we have two users, Bill and Ted, who both have active connections. Bill has a single active connection while Ted has ten active connections. Bill should get 50% utilisation for his single connection, whilst Ted should get 5% utilisation for each of his ten connections, giving Ted a total utilisation of 50%.
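
    One hedged way to approximate per-IP (rather than per-connection) fairness is HTB plus SFQ with the flow classifier keyed on the destination address of the LAN-facing interface; a sketch, with the interface name, line rate and divisor as assumptions (Shorewall's traffic-shaping files can express something equivalent):

      # Cap the LAN-facing interface at roughly the line rate
      tc qdisc add dev eth1 root handle 1: htb default 10
      tc class add dev eth1 parent 1: classid 1:10 htb rate 20mbit ceil 20mbit

      # SFQ under that class, hashed per LAN IP instead of per flow,
      # so each user gets one queue regardless of how many connections they open
      tc qdisc add dev eth1 parent 1:10 handle 10: sfq perturb 10
      tc filter add dev eth1 parent 10: handle 1 protocol ip prio 1 \
          flow hash keys dst divisor 1024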

    Read the article

  • Record 8 separate Line IN Channels from M-Audio Delta 1010 Card

    - by Peter Hoffmann
    I want to record the 8 separate Line IN channels from my M-Audio Delta 1010 card. The card is recognised nicely and I can record a single channel via arecord -d 10 -f cd -t wav -D channel1 out2.wav. I've set up the different channels in ~/.asoundrc. Now if I want to record a second channel in parallel (arecord -d 10 -f cd -t wav -D channel2 out2.wav) I get the error arecord: main:564: audio open error: Device or resource busy As I understand it, the Delta 1010 is a single-access card, so only one application can access it at a time. Is this correct? The next step was to configure a dual-channel input in .asoundrc: # envy24 channel 1+2 only pcm.test { type plug ttable.0.0 1 ttable.0.1 1 slave.pcm ice1712 } This works OK when I do arecord -d 10 -f cd -t wav -D test -c 2 out.wav (BTW, can anyone point me to a tool to split a multi-channel wav into one file per channel?). But when I want to record the channels separately (with the -I option), arecord -d 10 -f cd -t wav -D test -c 2 -I channel1.wav channel2.wav, I get no recordings. Did I miss something in the configuration, or what are my options to record all 8 channels via arecord? I've no experience with jackd. Is it an option to install jackd and record the line-ins via jackd?
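
    On the side question of splitting a multi-channel WAV, a small sketch using sox (the capture command and channel count are assumptions; plughw lets ALSA adapt the stream if the raw hw device insists on its native channel count):

      # Capture all 8 inputs into one interleaved file
      arecord -d 10 -f S16_LE -r 44100 -c 8 -D plughw:0 all8.wav

      # sox "remix" extracts one channel per output file
      for ch in 1 2 3 4 5 6 7 8; do
          sox all8.wav channel${ch}.wav remix ${ch}
      done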

    Read the article

  • SQL SERVER – Load Generator – Free Tool From CodePlex

    - by pinaldave
    One of the most common questions I receive is whether there is any tool available to generate load on SQL Server. Absolutely – there is a fabulous free tool for generating load on SQL Server available on CodePlex. This tool was released in 2008, but it is still extremely relevant for generating load on SQL Server and works fabulously. CodePlex is a project initiated by Microsoft for hosting open source software. The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries against SQL Server using different login accounts and different application names. The interface of the tool is extremely easy to use and very intuitive. One thing which I felt needed improvement was the default configuration: every single time I added a query, the default settings showed up and I had to change them manually. However, when I went to Menu >> Tools >> Options I was really happy, as it has options to change every single default which is available. Here one can give a default username, password and database name, as well as various settings related to configuration. Additionally, application logging is also possible through the options. A couple of other important points I noticed were the button to reset counters and the status bar containing useful information on Total Threads, Completed Queries and Failed Queries. I use this frequently for my load testing. What tool do you use as a SQL Server load generator? Download SQL Load Generator Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • MongoDB ReplicaSet Elections when some nodes are down

    - by SecondThought
    I'm trying to get my head around the replica set concept, and I found something that seems odd in the MongoDB documentation: "For a node to be elected primary, it must receive a majority of votes. This is a majority of all votes in the set: if you have a 5-member set and 4 members are down, a majority of the set is still 3 members (floor(5/2)+1). Each member of the set receives a single vote and knows the total number of available votes. If no node can reach a majority, then no primary can be elected and no data can be written to that replica set (although reads to secondaries are still possible)." (taken from here) So, if I've got that right, in the 5-member case mentioned there, the one node that's still standing will NOT be chosen as primary and the whole set will not accept any writes – and that's even if this single node was the last primary before the election? If that's true, there can be many less radical cases which will end up with a degenerate set. How can we avoid this?

    Read the article

  • How to use private DNS to map private IP with "non registred" domain name

    - by PapelPincel
    I would like to use a private DNS (Route 53 in our case) in order to map hosts to EC2 instances' private IP addresses. The hosted zone we are using for testing is not declared with any registrar (company-test.com.). There are various servers (Nagios, Puppet, ActiveMQ ...), all hosted in EC2, which means their IPs can change over time (restart, new instance launch...). It would be great if I could use DNS instead of the clients' /etc/hosts for mapping private IPs to internal domain names... The ActiveMQ server URL is activemq.company-test.com and it maps (via an A record) to the private IP address of the AMQ server. This URL is only reachable by other EC2 instances owned by the same AWS account. My question is how to configure EC2 instances so they can reach the ActiveMQ server WITHOUT having to buy the company-test.com domain?
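
    For reference, creating or updating such an A record in a Route 53 hosted zone with the AWS CLI looks roughly like this (the zone ID, record name and IP are placeholders):

      cat > upsert-activemq.json <<'EOF'
      {
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "activemq.company-test.com.",
            "Type": "A",
            "TTL": 60,
            "ResourceRecords": [{ "Value": "10.0.1.25" }]
          }
        }]
      }
      EOF

      aws route53 change-resource-record-sets \
          --hosted-zone-id Z123EXAMPLE --change-batch file://upsert-activemq.json

    Whether the instances can actually resolve the zone without the domain being registered depends on which resolvers they are pointed at, so that part remains the open question.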

    Read the article

  • Economical DNS hosting separate from local registrar for country specific TLDs - email or web hosting not required

    - by Eric Nguyen
    Our company owns many country-specific top level domains (TLDs; .sg, .my), and we will purchase more for other countries in South East Asia. These domains are associated with our websites hosted on Amazon EC2. The DNS records are currently hosted on a dedicated server that will shut down tomorrow (the name servers are set to the ones of a web hosting company). Therefore, I will need to host the DNS records somewhere else. Hosting the DNS records with the local registrar costs SGD18 a year per domain, in addition to the domain price (which is already very expensive, but we have no choice). It would be convenient to host the DNS records for all the country-specific TLDs we have using a single service, separate from the local registrars from which we bought the domains. A few searches turned up examples like Amazon Route 53 and dnsmadeeasy.com and the like. However, since I'm only concerned about the country-specific TLDs, not .com: 1) Is it really economical to host the DNS records of all domains in one single place as described above? (Have the relevant countries and/or the local registrars done something to keep their monopoly and always charge ridiculous prices for their country-specific TLDs?) 2) I imagine I will need to tell the local registrars to update the name servers to those of the DNS hosting service provider, e.g. dnsmadeeasy.com here. Am I correct about how this works? 3) Will I be able to point the domains themselves to IP addresses I desire (the EC2 instances where my websites are), or will I only be able to do so with subdomains? 4) Are there any drawbacks that I should know about? Background about our needs: we need the websites associated with the country TLDs to be up and running at all times; we'll also need to be able to add/edit A and CNAME records; and we use Google Apps for Business for internal email, so I will need to be able to add/edit MX records and TXT records.

    Read the article

  • How to install Oracle Database 11g Express Edition on Ubuntu 12.10?

    - by Praneeth Pj
    I installed the Oracle database following the steps mentioned in this blog: Downloaded 11g Express Edition. Created a new user oracle under the group dba; the following steps were executed as this user. Unzipped oracle-xe-11.2.0-1.0.x86_64.rpm.zip and then converted the rpm to an Ubuntu package by running: sudo alien --scripts -d oracle-xe-11.2.0-1.0.x86_64.rpm Created the /sbin/chkconfig file and added the entries as specified there. Created /etc/sysctl.d/60-oracle.conf and added the entries as specified in the same link as above. Ran the commands: ln -s /usr/bin/awk /bin/awk mkdir /var/lock/subsys touch /var/lock/subsys/listener Installed the .deb generated in step 3: sudo dpkg --install oracle-xe_11.2.0-2_amd64.deb Left the default values as they are: sudo /etc/init.d/oracle-xe configure Set the following env variables in the ~/.bashrc file: export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe export ORACLE_SID=XE export NLS_LANG=`$ORACLE_HOME/bin/nls_lang.sh` export ORACLE_BASE=/u01/app/oracle export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH export PATH=$ORACLE_HOME/bin:$PATH Ran the commands: chown -R oracle:dba /var/tmp/.oracle chmod -R 755 /var/tmp/.oracle chown -R oracle:dba /tmp/.oracle chmod -R 755 /tmp/.oracle Started the Oracle Database 11g Express Edition instance: sudo service oracle-xe start Ran sqlplus / as sysdba and got the following: SQL*Plus: Release 11.2.0.2.0 Production on Thu Jan 3 09:41:58 2013 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to an idle instance. Now when executing any SQL statement in SQL*Plus, I end up with the following error: SQL> select * from dual; select * from dual * ERROR at line 1: ORA-01034: ORACLE not available Process ID: 0 Session ID: 0 Serial number: 0 I have increased the swap memory as specified here: $ free -m total used free shared buffers cached Mem: 3901 3428 473 0 182 1988 -/+ buffers/cache: 1258 2643 Swap: 5066 0 5066
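
    For what it's worth, ORA-01034 together with "Connected to an idle instance" usually means the database process itself was never started; a hedged sketch of bringing it up by hand (assuming the environment variables above are in effect):

      # Restart the XE service, then start the instance explicitly from SQL*Plus
      sudo service oracle-xe start
      sqlplus / as sysdba <<'EOF'
      startup
      select status from v$instance;
      EOF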

    Read the article

  • VirtualBox in production?

    - by MrG
    I'm planning to move a service which is currently powered by Debian into VirtualBox. That would allow us to easily move it, e.g. to a faster machine, if required. The setup would be: a Debian host running VirtualBox instance #1 (a Debian guest running Apache and the application) and VirtualBox instance #2 (a Debian guest containing the database). Do you have any experience with a production setup based on VirtualBox? Is it stable and fast enough? Many thanks!
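
    If the setup goes ahead, the guests would typically run headless on the Debian host; a minimal sketch with VBoxManage (the VM names are placeholders):

      # Start both guests without a GUI
      VBoxManage startvm "debian-web" --type headless
      VBoxManage startvm "debian-db" --type headless

      # Graceful ACPI shutdown, e.g. before host maintenance
      VBoxManage controlvm "debian-web" acpipowerbutton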

    Read the article

  • Extreme Optimization – Mathematical Constants and Basic Functions

    - by JoshReuben
    Machine constants: The MachineConstants class contains constants for floating-point arithmetic, because the corresponding fields on the CLS System.Single and System.Double floating-point types do not follow the standard numerical conventions and are of little use here (Double.Epsilon, for instance, is the smallest positive denormal value, not the machine precision). Machine constants for the Double type: machine precision – Epsilon, SqrtEpsilon, CubeRootEpsilon; largest possible value – MaxDouble, SqrtMaxDouble, LogMaxDouble; smallest double-precision floating point number that is greater than zero – MinDouble, SqrtMinDouble, LogMinDouble. A similar set of constants is available for the Single data type. Mathematical constants: The Constants class contains static fields for many mathematical constants and common expressions involving small integers – if you are doing thousands of iterations, you wouldn't want to recalculate OneOverSqrtTwoPi, Sqrt17 or Log17! Fundamental constants: E – the base of the natural logarithm, e (2.718...); EulersConstant – (0.577...); GoldenRatio – (1.618...); Pi – the ratio between the circumference and the diameter of a circle (3.1415...). Expressions involving fundamental constants: TwoPi, PiOverTwo, PiOverFour, LogTwoPi, PiSquared, SqrPi, SqrtTwoPi, OneOverSqrtPi, OneOverSqrtTwoPi. Square roots of small integers: Sqrt2, Sqrt3, Sqrt5, Sqrt7, Sqrt17. Logarithms of small integers: Log2, Log3, Log10, Log17, InvLog10. Elementary functions: The IterativeAlgorithm<T> class in the Extreme.Mathematics namespace defines many elementary functions that are missing from System.Math. Hyperbolic trig functions: Cosh, Coth, Csch, Sinh, Sech, Tanh. Inverse hyperbolic trig functions: Acosh, Acoth, Acsch, Asinh, Asech, Atanh. Exponential, logarithmic and miscellaneous functions: ExpMinus1 – the exponential function minus one, e^x - 1; Hypot – the hypotenuse of a right-angled triangle with specified sides; LambertW – Lambert's W function, the (real) solution W of x = W·e^W; Log1PlusX – the natural logarithm of 1+x; Pow – a number raised to an integer power.

    Read the article
