Search Results

Search found 34649 results on 1386 pages for 'direct access'.


  • Oracle Endeca "Getting Started" Partner Guide

    - by Grant Schofield
    For partners looking for a concise step-by-step guide to getting started with Oracle Endeca Information Discovery, here it is to help you get started as quickly as possible.
    Step 1: Join the Knowledge Zone as a company and as an individual - this will give you a) the right to resell Oracle Endeca ID, and b) notice of any free / subsidised training events in your region.
    Step 2: For a quick general overview and positioning, see the following article, in particular the Agile BI video series, which is useful to share with prospective clients. You will also find a link to the official OEID Data Sheet.
    Step 3: For a more detailed overview, there is a recorded OEID partner webcast with downloadable slides. In conjunction with this, your sales / presales team has free access to the official OEID Partner Playbook as well as the full Oracle price book.
    Step 4: Download and install the OEID software. Please be aware that you will need a 64-bit machine and a 64-bit operating system. A useful solution for partners that have a 32-bit operating system is to use Oracle's free VirtualBox software to quickly and easily create a Linux image and install on that.
    Step 5: Attend a free / subsidised training event in your region. Please join the Knowledge Zone as an individual (opt in) to be informed of these. We will also publish these via the blog.
    Things are moving fast, so please be aware that the team is working hard to produce more and more material, such as downloadable data sets (structured / unstructured), a downloadable image and access to demos, and over the next few weeks we will update this article as soon as new material becomes available!

    Read the article

  • Running a Screen instance of a program as non-root

    - by user288467
    I've got a dedicated server (Ubuntu 12.04, no GUI) set up to launch an instance of McMyAdmin and attach it to a screen session every time I reboot the hardware. I have the command saved in root's crontab as:
        @reboot cd /var/MC_SVR && screen -dmS McMyAdmin ./MCMA2_Linux_x86_64
    The problem is that I also have a user set up specifically for FTP access to the server files, so I don't always have to SSH into the machine. Since the server is being started as a root process, all the files it creates are, obviously, owned by root. So I chown'd all the files to ftpuser. Now I'm stuck trying to get the process itself to start as ftpuser. I've tried the following, to no avail:
        cd /var/MC_SVR && su ftpuser - -c 'screen -dmS McMyAdmin ./MCMA2_Linux_x86_64'
    I run this in a terminal and get no errors or any other output (in fact I never get anything unless it's a syntax error from su), but there's no screen session to attach to afterwards, so I assume the server never starts. So, what am I doing wrong? Or am I just not accessing the screen session correctly, since it's (supposedly) being launched by another user?
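    A minimal sketch of one way to approach this, using only the paths already given above: give ftpuser its own @reboot crontab entry instead of wrapping su in root's crontab, and remember that screen sessions are per-user, so a session started by ftpuser will never show up in root's screen -ls.
        # as root, edit ftpuser's own crontab
        sudo crontab -u ftpuser -e

        # add this line to ftpuser's crontab:
        @reboot cd /var/MC_SVR && screen -dmS McMyAdmin ./MCMA2_Linux_x86_64

        # list and attach to the session as ftpuser, not as root
        sudo -u ftpuser screen -ls
        sudo -u ftpuser screen -r McMyAdmin
        # (if attaching complains about /dev/pts permissions, run 'su - ftpuser' first, then 'screen -r')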

    Read the article

  • Adding files and folders to a Root Folder (inode/directory)

    - by xBaldwin
    Ok, so I'm fairly new to Ubuntu and wasn't even the one who put it on this computer (my friend did while I was storing it at his house, because I was in the middle of transitioning between houses), but it's on here, so I need to learn what I can so I can use it more effectively. My question at the moment is: would it be safe to add files/folders to a folder (inode/directory) that requires root access? The system keeps informing me that the directory I am using is running low on space, which I found odd, seeing how I should have a lot more room on this computer. That's when I started looking at the directories and found that there are two with a bunch of unused space on them. One says it has 46.9 GB of free space and the other has 24.9 GB of free space. It seems like a complete waste not to use that space, and yet they both say they require root access to add to them. I know that root folders and files are normally all system folders and files. I also know that changing or deleting them can mess up the computer, which right now I can't afford to do. I just don't know if it would mess anything up to add something to those folders. Thank you in advance to anyone who takes the time to reply and try to teach me about how all that works. I really do appreciate it and will do the same if by some crazy (completely unlikely) chance I have an answer to your question. :-)
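    As a rough sketch of how one might check where the free space actually lives and carve out a user-owned folder there without touching any system files (the mount point below is a placeholder, not taken from the question):
        # see which partitions are mounted where and how much free space each one has
        df -h

        # suppose the large, mostly empty partition turns out to be mounted at /data (placeholder);
        # create a folder there and hand ownership to your own user instead of root
        sudo mkdir -p /data/myfiles
        sudo chown $USER:$USER /data/myfiles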

    Read the article

  • Can't write to disk

    - by nofacts
    I have seen questions similar to this, but the answers are either beyond me or the situation doesn't quite match mine. I would appreciate some direction. I recently installed Ubuntu 12.04 LTS. The OS is on a disk formatted as ext4. I added another disk to the system and formatted it as W95 FAT32 (LBA) (0x0c). I did this because I am moving from Windows to Linux, will need to go back and forth with data for a while, and might need to move the disk to a Windows machine. There may have been a better format to use, but if so, I didn't know any better. I was able to transfer data from an external drive to this FAT32 drive with no problem. Now, though, when I try to create a new folder or write a file to the disk, I get a message that the disk is read-only. If I go to the disk's properties and permissions, Folder Access says 'create and delete files'. If I try to change File Access underneath to 'read and write', either nothing happens or I get a message that it can't be done. Thank you for any help.
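    FAT32 has no notion of Unix file ownership, so write access is decided entirely by the mount options rather than by the permissions dialog. A minimal sketch, assuming the data disk is /dev/sdb1 mounted at /media/data (both placeholders), would be to remount it read-write with your own uid/gid:
        # find the device and current mount point of the FAT32 disk
        mount | grep -i vfat

        # remount it read-write, owned by your user (device and mount point are assumptions)
        sudo umount /media/data
        sudo mount -t vfat -o rw,uid=$(id -u),gid=$(id -g),umask=022 /dev/sdb1 /media/data

        # a FAT volume that was not cleanly unmounted is often mounted read-only until checked:
        # sudo dosfsck -a /dev/sdb1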

    Read the article

  • Anemic Domain Model, Business Logic and DataMapper (PHP)

    - by sunwukung
    I've implemented a rudimentary ORM layer based on DataMapper (I don't want to use a full-blown ORM like Propel/Doctrine - for anything beyond simple fetch/save ops I prefer to access the data layer directly using a SQL abstraction layer). Following the DataMapper pattern, I've endeavoured to keep all persistence operations in the Mapper - including the location of related entities. My Entities have access to their Mapper, although I try not to call Mapper logic from the Entity interface (although this would be simple enough). The result is:
        // get a mapper and produce an entity
        $ProductMapper = $di->get('product_mapper');
        $Product = $ProductMapper->find('[email protected]', 'email');
        // .. mutate some values .. save
        $ProductMapper->save($Product);
        // uses __get to trigger relation acquisition
        $Manufacturer = $Product->manufacturer;
    I've read some articles regarding the concept of an Anemic Domain Model, i.e. a Model that does not contain any "business logic". When demonstrating the sort of business logic ideally suited to a Domain Model, however, acquiring related data items is a common example. Therefore I wanted to ask this question: is persistence logic appropriate in Domain Model objects?

    Read the article

  • How-To: Run CMSDK against a RAC cluster

    - by frank.closheim
    Using CMSDK in a production environment often requires a robust, reliable and failover-enabled repository. When using Oracle Real Application Clusters (RAC) with your CMSDK repository, you need to have a specific configuration in place to support such a setup. This post will explain the configuration steps required when running CMSDK 9.0.4.6 with Oracle WebLogic Server (WLS).
    In the previous CMSDK 9.0.4.2 version, a RAC-enabled connect string looked like this:
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = rac1)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = rac2)(PORT = 1521))
          (LOAD_BALANCE = NO)
          (FAILOVER = ON)
          (CONNECT_DATA =
            (SERVICE_NAME = rac)
            (failover_mode = (type=select)(method=basic))))
    CMSDK 9.0.4.6 makes use of data sources to connect to the underlying database. These data sources are configured inside your application server, such as Oracle WebLogic Server.
    In Oracle WebLogic Server 10.3.4, a single data source implementation has been introduced to support a RAC cluster. It responds to Fast Application Notification (FAN) events to provide Fast Connection Failover (FCF), Runtime Connection Load-Balancing (RCLB), and RAC instance graceful shutdown. XA affinity is supported at the global transaction Id level. The new feature is called WebLogic Active GridLink for RAC, which is implemented as the GridLink data source within WebLogic Server.
    This GridLink data source also works with Oracle Single Client Access Name (SCAN). SCAN is a feature used in RAC environments that provides a single name for clients to access any Oracle Database running in a cluster. You can think of SCAN as a cluster alias for databases in the cluster. The benefit is that the client's connect information does not need to change if you add or remove nodes or databases in the cluster.
    The CMSDK 9.0.4.6 documentation describes how to create a regular JDBC data source named jdbc/OracleDS. Please refer to the following document, which describes in detail how to create a GridLink data source in WLS.
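    For illustration only (the host and credentials below are invented, not from the post), the practical effect of SCAN is that the long per-node address list above collapses into a single name that stays valid as nodes are added or removed - for example, an EZConnect-style test from a client machine:
        # one SCAN name instead of listing rac1/rac2 explicitly (names are hypothetical)
        sqlplus scott/tiger@//rac-scan.example.com:1521/rac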

    Read the article

  • How to host an AP or a hotspot?

    - by user1048138
    I'm running Ubuntu 12.04 as a virtual machine on my Mac. Since I am unable to give the virtual machine full access to my Mac's WiFi card, I bought another USB WiFi card to use. This is my WiFi card. If you are unfamiliar with virtual machines: as far as I know, since the Ubuntu VM has its own card now, that shouldn't matter. I have followed these guides with no luck:
        https://help.ubuntu.com/community/WifiDocs/WirelessAccessPoint
        http://www.danbishop.org/2011/12/11/using-hostapd-to-add-wireless-access-point-capabilities-to-an-ubuntu-server/
    The problem is that the WiFi connection appears on all of the machines I have in my house - 2 iPhones, a Dell machine running Ubuntu and two MacBooks - but the connection times out on all of them. Questions: Could this be a driver issue, given that the same WiFi card can connect to other WiFi access points and use their internet? Could this be DHCP related? I would think not - it should at least get a 169.X.X.X address, no? Any solutions for me?
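    On the DHCP question: hostapd itself hands out no addresses, so if clients can see and associate with the AP but then time out, a missing DHCP server on the AP interface is a plausible cause. A minimal sketch (the interface name, subnet and upstream interface are assumptions, not from the post):
        # give the AP interface a static address (wlan1 assumed to be the USB card)
        sudo ip addr add 192.168.10.1/24 dev wlan1

        # hand out leases on that interface with dnsmasq
        sudo apt-get install dnsmasq
        printf 'interface=wlan1\ndhcp-range=192.168.10.50,192.168.10.150,12h\n' | sudo tee /etc/dnsmasq.d/ap.conf
        sudo service dnsmasq restart

        # NAT the wireless clients out through the upstream interface (eth0 assumed)
        sudo sysctl -w net.ipv4.ip_forward=1
        sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE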

    Read the article

  • I can't hear any sounds on ubuntu 11.10 on Dell inspiron N5010

    - by Ahmed
    I have a problem: I can't hear any sounds, and I don't know where to start. I did the following:
        lspci -v | grep -A7 -i "audio"
        00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06)
                Subsystem: Dell Device 0447
                Flags: bus master, fast devsel, latency 0, IRQ 48
                Memory at fbf00000 (64-bit, non-prefetchable) [size=16K]
                Capabilities: <access denied>
                Kernel driver in use: HDA Intel
                Kernel modules: snd-hda-intel
        --
        01:00.1 Audio device: ATI Technologies Inc Manhattan HDMI Audio [Mobility Radeon HD 5000 Series]
                Subsystem: Dell Device 0447
                Flags: bus master, fast devsel, latency 0, IRQ 49
                Memory at fbe40000 (64-bit, non-prefetchable) [size=16K]
                Capabilities: <access denied>
                Kernel driver in use: HDA Intel
                Kernel modules: snd-hda-intel
    It seems that I have 2 sound cards. Is that normal? I also did this:
        aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: Intel [HDA Intel], device 0: STAC92xx Analog [STAC92xx Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: Generic [HD-Audio Generic], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
    Also, in the sound settings GUI I have 2 hardware profiles for sound cards, but none of them works when I test the speakers. Where should I start searching?
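    Having two devices listed (the Intel HDA analog codec plus the Radeon's HDMI audio) is perfectly normal on this hardware. As a rough, hedged place to start, one can test the analog card directly and check for muted channels:
        # play a test tone straight at card 0 (the Intel/STAC92xx analog device)
        speaker-test -c 2 -D plughw:0,0 -t wav

        # open the mixer for card 0 and make sure Master / PCM / Speaker are not muted
        # (an "MM" label under a channel means muted; press M to toggle)
        alsamixer -c 0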

    Read the article

  • How to write code that communicates with an accelerator in the real address space (real mode)?

    - by ysap
    This is a preliminary question for an issue where I was asked to program a host-accelerator program on an embedded system we are building. The system is comprised of (among the standard peripherals) an ARM core and an accelerator processor. Both processors access the system bus via their bus interfaces and share the same 32-bit global physical memory space. Both share access to the system's DRAM through the system bus. (The computer concept is similar to a BeagleBoard/Raspberry Pi, but with a specialized accelerator added.) The accelerator has its own internal memory (SRAM) which is exposed to the system and occupies a portion of the global address space (as opposed to how a graphics card would talk to the CPU via a "small" aperture in the system memory space). On the ARM core (the host) we plan on running Ubuntu 12.04. The mode of operation for communicating between the processors should be that the host issues memory transactions on the system bus that are targeted at the accelerator's internal memory. As far as my understanding goes, if I write a program for the host that simply writes to the physical address of the accelerator, chances are that the program will crash due to a segmentation violation. So, I assume that I need some way of communicating with the device in real mode. What is the easiest way to achieve this mode of operation?
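    On Linux the usual route for this is not "real mode" but mapping the accelerator's physical window into the process through /dev/mem (or, longer term, a small UIO or character driver that exposes it). Purely as a hedged sketch - the base address below is invented, and CONFIG_STRICT_DEVMEM can block RAM ranges, though device windows are normally allowed:
        # peek at one 32-bit word of the accelerator SRAM window (0x40000000 is a made-up base address)
        sudo dd if=/dev/mem bs=4 count=1 skip=$((0x40000000 / 4)) 2>/dev/null | xxd

        # in a real program you would open("/dev/mem"), mmap() the window at that physical
        # offset, and then read/write the SRAM through the returned pointer instead of using dd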

    Read the article

  • Single-developer GIT workflow (moving from straightforward FTP)

    - by melat0nin
    I'm trying to decide whether moving to VCS is sensible for me. I am a single web developer in a small organisation (5 people). I'm thinking of VCS (Git) for these reasons: version control, offsite backup, centralised code repository (can access from home). At the moment I work on a live server generally. I FTP in, make my edits and save them, then reupload and refresh. The edits are usually to theme/plugin files for CMSes (e.g. concrete5 or Wordpress). This works well but provides no backup and no version control. I'm wondering how best to integrate VCS into this procedure. I would envisage setting up a Git server on the company's web server, but I'm not clear how to push changes out to client accounts (usually VPSes on the same server) - at the moment I simply log into SFTP with their details and make the changes directly. I'm also not sure what would sensibly represent a repository - would each client's website get their own one? Any insights or experience would be really helpful. I don't think I need the full power of Git by any means, but basic version control and de facto cloud access would be really useful.
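    A common single-developer shape for this, sketched here with made-up paths: one bare repository per client site on the company server, plus a post-receive hook that checks the pushed branch out into that client's web root, replacing the manual SFTP step.
        # on the server: one bare repo per client site (paths are placeholders)
        git init --bare /srv/git/clientA.git

        # deploy hook: check master out into the client's docroot on every push
        printf '#!/bin/sh\nGIT_WORK_TREE=/var/www/clientA git checkout -f master\n' > /srv/git/clientA.git/hooks/post-receive
        chmod +x /srv/git/clientA.git/hooks/post-receive

        # on your workstation (or from home): clone, edit locally, push to deploy
        git clone ssh://you@company-server/srv/git/clientA.git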

    Read the article

  • Pantheon Not Completely Installed?

    - by Simon
    I have just installed the Pantheon shell today, and I have not found any help with this yet. I am not just a random noob; I use a bunch of other shells and I also have a few development applications on my copy of Ubuntu. But ever since I've opened up Pantheon, I cannot find Settings or this thing called the launchpad. (If the launchpad is the app drawer up in the upper left corner, or if it's the dock, I have it; if that's not what launchpad is, then I can't access it.) I can only change my wallpaper by going back to Unity, GNOME, or KDE. There is a System Settings entry in the power menu (upper right), but it only has Language Support, Ubuntu One, Additional Drivers, and Printing. I can still access the full Ubuntu settings in GNOME or Unity, but that's it. I installed it from the terminal, uninstalled it, and reinstalled it using the Software Center. Please help me if any of you can! Thanks!

    Read the article

  • error: no such partition after 11.10 upgrade to 12.04

    - by Alan King
    I recently upgraded my 11.10 install to 12.04 LTS and got the above error message upon reboot, after a GNU GRUB version ubuntu3 display showing Ubuntu 3.2.0-23-generic pae and other kernels or memory tests to choose from. The upgrade had to be done by CD because the Update Manager did not show the 12.04 upgrade option. After selecting the default install option of upgrading 11.10 to 12.04, I was presented with a screen saying that I had not specified a swap partition. Upon selecting the 'back' key, I was taken to a partition page which listed the two current partitions (only Ubuntu 11.10 had been installed - no Windows): an ext4 partition plus a small 1.8 GB partition. I double-clicked the small partition and selected it as the swap partition, even though I wondered at the time why this even came up. I can see the two user folders under home from the file manager while running 12.04 from the CD, but if I try to access either one, an error message is displayed saying I do not have permission, and I get a loading message in the lower right corner of the window that does not go away. I have two questions: Can I access the user folders prior to recovery via the Terminal? If so, how? How do I fix the GRUB issue?
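    For both questions, a hedged sketch from the 12.04 live session, assuming the installed Ubuntu root filesystem turns out to be /dev/sda1 (check with fdisk first): the home folders can be browsed with root privileges, and GRUB can be reinstalled from a chroot.
        sudo fdisk -l                      # identify the ext4 root partition (assumed to be /dev/sda1 below)
        sudo mount /dev/sda1 /mnt
        sudo ls /mnt/home                  # the user folders are readable with sudo even when the GUI refuses

        # reinstall GRUB to the disk's MBR from inside the installed system
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt grub-install /dev/sda
        sudo chroot /mnt update-grub
        for d in /sys /proc /dev/pts /dev; do sudo umount /mnt$d; done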

    Read the article

  • Branding/Restricting a Software by License/Serial

    - by Sid
    I have made a POS system for a client of mine using an MS Access server-client approach. He asked me to brand his software so that it allows only a certain number of users (cashiers) to access the POS system, determined by the license his client will buy. For example: a 10-user license = 10 cashiers (not necessarily 10 users - it can be 30 users working in shifts) = 10 PCs will be installed with the client software I made. How and where do I put the logic that will determine whether it is licensed or not? What I have done: I have created a serial key generator using a name. The problem is that it can be duplicated - once you give out that name+serial combination, it will still work elsewhere. I have also tried counting the number of users logged in at a time. This could be problematic, as I am using MS Access and not MS SQL, so I have scrapped this idea. He also asked me if I could just use a serial+MAC address combination. That I could do, but he will have a hard time implementing and selling it if he needs the MAC address of every computer the POS will be installed on. I am at a loss as to what I can do. I would like to ask for tips and suggestions. Thank you.

    Read the article

  • Ubuntu Server 12.04, NAT, Router, DNS. It just doesnt work

    - by Bjørnar Kibsgaard
    I recently inherited some server hardware from work and decided that it could be my main router at home (among other things). The Ubuntu 12.04 server installation goes well hardware-wise and everything is found and working when I boot up. So I begin by setting up eth1 with DHCP. This works fine: it gets a public IP address from my modem and we have a working internet connection. Then I set up my other NIC (eth0) as static (192.168.0.1) and this also works fine; I can access it from other computers in the network. The problems come when I try to set up a DHCP server with isc-dhcp-server. It seems to be working and giving the computers IP addresses, but after one reboot it stops working. After the reboot, eth1 will get a public IP from the modem but it doesn't have internet access; I have to manually run dhcpcd eth1 to get it to work again. As far as I know I haven't made any changes to DNS. What am I doing wrong? I have never really had problems with this before. :)
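    For comparison, a minimal router configuration of this shape (addresses and lease range are illustrative, not taken from the post) usually needs three pieces that survive a reboot: interface definitions, IP forwarding plus NAT, and a dhcpd subnet declaration bound to the LAN interface.
        # /etc/network/interfaces (sketch)
        #   auto eth1
        #   iface eth1 inet dhcp        # WAN, address from the modem
        #   auto eth0
        #   iface eth0 inet static      # LAN
        #       address 192.168.0.1
        #       netmask 255.255.255.0

        # forwarding + NAT (persist net.ipv4.ip_forward=1 in /etc/sysctl.conf)
        sudo sysctl -w net.ipv4.ip_forward=1
        sudo iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

        # /etc/dhcp/dhcpd.conf (sketch) - and set INTERFACES="eth0" in /etc/default/isc-dhcp-server
        #   subnet 192.168.0.0 netmask 255.255.255.0 {
        #       range 192.168.0.100 192.168.0.200;
        #       option routers 192.168.0.1;
        #       option domain-name-servers 8.8.8.8;
        #   }
        sudo service isc-dhcp-server restart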

    Read the article

  • Learn More About the PO Approvals Analyzer

    - by LuciaC
    You may think that the PO Approvals Analyzer for Release 12 is only for diagnosing problems when you have a single Purchase Order or Requisition stuck in process, but it offers valuable information to keep your Procurement environment healthy. Consider this:
      - The analyzer will list all Procurement critical patches that have not been applied.
      - It will list invalid Procurement objects with their error messages and provide solutions.
      - It validates setup and database conditions, for example max extents and space issues.
    The analyzer can also be run on all Purchasing documents starting from a date you enter. This multiple-document check provides validations on:
      - Data corruption issues.
      - Workflow errors with generic messages, i.e. document manager errors.
      - Documents with workflows in error that cannot be progressed via the application.
    And, unlike other diagnostics, the analyzer provides known solutions to the problems indicated! So access the Analyzer today and run it on your instance! Access it now via Doc ID 1525670.1.

    Read the article

  • Looking to create website that can have custom GUI and database per user

    - by riley3131
    I have developed an MS Access database for a company to track data regarding the production of a certain commodity. It has many, many tables, forms, reports, etc. These were all done as the user requested, and they resemble the user's previously used system, mostly printed worksheets and Excel workbooks. This has created a central location for all information and has allowed the company to compare data in a new way. I am now looking to do this for other companies, but would like to switch it to a web application. Here is my question: what is the best way to create unique solutions for individual companies that can have around 100 users each? I would love to create one site that would serve all parties, but that would ruin the customizable nature of what I am developing. I love the ability to create reports, Excel sheets, PDFs, graphs, etc. with Access, but am tired of relying on my customers' software, servers, etc. I have some experience with WAMP, but I am far better at VBA. I was okay at PHP, and was getting a grasp on JavaScript a few years back. I am also trying to decide whether to go with WAMP or LAMP, if web is the best choice. Also, should I set up one site for all users that goes to company-specific pages after log-in, or individual sites for each company? Should I host it myself or use a service?

    Read the article

  • What You Said: How Do You Browse Securely Away From Home?

    - by Jason Fitzpatrick
    Responses to this week's Ask the Reader question show that just because you're away from home doesn't mean you have to give up the security and privacy that your home network provides. Earlier this week we asked you to share your browsing-away-from-home security tips and tricks, and you obliged. JC offered one of the more entertaining tales of away-from-home browsing: Recently a bunch of us stayed at a high-end resort down in Mexico. Internet was offered as a pay-per-device service at about $80/week/device. Considering we had about 12 WiFi devices there among us (a few geeks), I decided to plan ahead. I set up a WRT54G as a WiFi client with a VPN back to my house and NAT. I set up a second one as a basic wireless access point with a password and plugged it into the first. On site we set up the devices and connected to the resort's wireless with one paid account (tied to the MAC address). Everyone connected to the other device for wireless access, and it was all tunnelled through my home network with encryption.

    Read the article

  • Unable to detect windows hard drive while running Ubuntu 12.04 from USB

    - by eapen jacob
    I am completely new to Ubuntu. I experimented with Ubuntu 12.04 by running it from a USB drive, in order to recover files from my hard disc. History: my laptop is an IBM R60 running Windows 7. Suddenly it gave me an error stating "error 2100 - Hard drive initialization error". I have read all the forums, and most of them suggested that I remove and replace my HDD, but that did not work. One site suggested trying Ubuntu to recover the files. I booted my system from USB, and once Ubuntu came up, I chose "Try Ubuntu". It came up fine and I was able to surf and do other things, but I was unable to access my files, which are on the hard disc, and "Attached Devices" is grayed out. 1. Is there any way to gain access to my hard disc to recover the files? How do I navigate to search for my files? 2. Is it simply not possible if the hard disc itself is not working? Is that why I'm unable to find the drive? I know it's a very novice question, but I'm hoping someone will help me out. Thank you, Eapen
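    If the drive shows up at all under the live session, a cautious sketch (the device names below are guesses, not from the post) is to look for it with fdisk, mount the Windows partition read-only, and copy what you need; for a disk that is actively failing, imaging it first is the safer route.
        sudo fdisk -l                                   # does the internal disk (e.g. /dev/sda) appear at all?
        sudo mkdir -p /mnt/windisk
        sudo mount -o ro /dev/sda2 /mnt/windisk         # /dev/sda2 is an assumption for the Windows partition
        cp -r /mnt/windisk/Users/YourName /media/usbstick/

        # for a drive that is failing, image it first instead of copying individual files:
        # sudo apt-get install gddrescue
        # sudo ddrescue -d /dev/sda /media/usbstick/disk.img /media/usbstick/disk.log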

    Read the article

  • Exit stage right...

    - by Peter Korn
    I joined Sun Microsystems in December of 1996, not quite 17 years ago. Over the course of those years, it has been my great pleasure and honor to work with many talented folks on many incredible projects - first at Sun, and then at Oracle. In those nearly 17 years, we made quite a few platforms and products accessible - including Java, GNOME, Solaris, and Linux. We pioneered many of the accessibility techniques that are now used throughout the industry, including accessibility API techniques which first appeared in the Java and GNOME accessibility APIs, and screen access techniques like the API-based switch access of the GNOME Onscreen Keyboard. Our work was recognized as groundbreaking by many in the industry, both through awards for the innovations we had delivered (such as those we received from the American Foundation for the Blind), and awards of money to develop new innovations (the two European Commission accessibility grants we received). Our knowledge and expertise contributed to the first Section 508 accessibility standard, and contributed significantly to the upcoming refresh of that standard, to the European Mandate 376 accessibility standard, and to a number of web accessibility standards. After 17 years of helping Sun and Oracle accomplish great things, it is time to start a new chapter... Today is my last day at Oracle. It is not, however, my last day in the field of accessibility. Next week I will begin working with another group of great people, and I am very much looking forward to the great things I will help contribute to in the future. Starting tomorrow, please follow me on my new, still-under-construction, WordPress blog: http://peterkorn.wordpress.com/.

    Read the article

  • Who should have full visibility of all (non-data) requirements information?

    - by ebyrob
    I work at a smallish mid-size company where requirements are sometimes nothing more than an email or a brief meeting with a subject matter manager requesting some new feature. Should a programmer working on a feature reasonably expect to have access to such "request emails" and other requirements information? Or is it more appropriate for a "program manager" (PGM) to rewrite all requirements before sharing them with programmers? The company is not technology-centric and has between 50 and 250 employees (fewer than 10 programmers in total). Our project management "software" consists of a "TODO.txt" checked into source control in "/doc/". Note: this has nothing to do with "sensitive data access", unless a particular subject matter manager's style of email correspondence is top secret. Given the suggested duplicate, perhaps this could be a turf war, as the PGM would like to specify HOW, whereas WHY is absent and WHAT is muddled by the time it gets through to the programmer(s). Basically: should specification be transparent to programmers? Perhaps a history of requirements might exist. Shouldn't a programmer be able to see that history of reqs if/when they can tell something is hinky in the spec? This isn't a question about organizing requirements. It is a question about WHO should have full VISIBILITY of requirements. I'd propose it should be ALL STAKEHOLDERS. Please point out where I'm wrong here.

    Read the article

  • Recommended formats to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24bpp/32bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need to use graphics-card-compatible bitmap formats. My questions:
      - What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)?
      - How should bitmaps be stored for high-performance algorithms, such that read/write times are the fastest (fixed array? with/without padding? 24-bpp or 32-bpp?)
      - How should bitmaps be stored for applications handling a lot of bitmap data, to minimize memory usage (JPEG? or a faster [de]compression algorithm?)
    Some possible methods:
      - Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access; all pixels are allocated in one continuous memory chunk (could be 1-10 MB).
      - Use a form of "sparse" data storage so each line of the bitmap is allocated separately, reusing more memory and requiring smaller contiguous memory segments.
      - Store bitmaps in their compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used. Delete the unpacked data if it's not used for 10 seconds.

    Read the article

  • .htaccess causes 403 error

    - by erdomester
    I have a working website on a free shared server. I decided to hire a dedicated server and purchase a domain for my website. I started uploading the files, but things aren't working the way they should. First of all, .htaccess is not working, even though I set AllowOverride from None to All in /etc/apache2/sites-available/default:
        DocumentRoot /var/www
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
    I restarted the server, of course. I enabled mod_rewrite (a2enmod rewrite) and restarted the server. This change causes a 403 Forbidden error which I am unable to work out. If I change the All back to None, then .htaccess is ignored, so instead of loading the website the file hierarchy is loaded (the main page is index4.php, which should be opened by .htaccess). If I rename index4.php to index.php the website loads, just FYI. The permissions on the file are 600. If I change them to 444 I get a 500 Internal Server Error. I checked the logs and I see many errors like this:
        Permission denied: file permissions deny server access: /var/www/index.html
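    The log line quoted at the end points at file permissions rather than at .htaccess itself. As a hedged sketch, assuming the content under /var/www should simply be world-readable (ownership by your own user is fine as long as the Apache user can read it):
        # web content must be readable, and directories traversable, by the Apache user
        sudo find /var/www -type d -exec chmod 755 {} \;
        sudo find /var/www -type f -exec chmod 644 {} \;

        # files with mode 600 owned by another user are unreadable to Apache and produce
        # exactly this kind of 403; after fixing the permissions, reload Apache
        sudo service apache2 restart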

    Read the article

  • WiFi loses connection with AR9285 - have to disconnect / reconnect to regain internet?

    - by CodyLoco
    I have a G73JW laptop using Ubuntu 12.04. It has an Atheros AR9285 card. While WiFi seems to work fine most of the time, about every hour or two it will lose the connection to the internet. It will still appear to be connected to the wireless router, but internet access is gone. Disconnecting and reconnecting to the access point solves the problem, as does disabling and re-enabling the adapter via the hardware keyboard shortcut. How might this be solved so the connection is stable and doesn't drop? EDIT: It looks like the network dropping isn't related to DNS, as it fails either way:
        codyloco@CodyLoco-Ubuntu:~$ dig askubuntu.com @8.8.8.8
        ; <<>> DiG 9.8.1-P1 <<>> askubuntu.com @8.8.8.8
        ;; global options: +cmd
        ;; connection timed out; no servers could be reached
        codyloco@CodyLoco-Ubuntu:~$ dig askubuntu.com
        ; <<>> DiG 9.8.1-P1 <<>> askubuntu.com
        ;; global options: +cmd
        ;; connection timed out; no servers could be reached
    Again, cycling (disabling / enabling) the adapter corrects the issue.
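    Since a manual disable/enable already fixes it, the terminal equivalent is simply reloading the ath9k driver, which can at least be scripted while the underlying cause is tracked down; a commonly suggested (but by no means guaranteed) tweak for AR9285 drop-outs is passing nohwcrypt=1 to ath9k.
        # same effect as toggling the wireless hardware switch: reload the driver
        sudo modprobe -r ath9k && sudo modprobe ath9k

        # optional workaround often suggested for ath9k stability (an assumption, not a sure fix)
        echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k.conf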

    Read the article

  • What is a Relational Database Management System (RDBMS)?

    A Relational Database Management System (RDBMS) can also be called a traditional database: one that uses Structured Query Language (SQL) to provide access to stored data while ensuring the integrity of the data. The data is stored in a collection of tables defined by relationships between data items. In addition, data is permitted to be joined in new relationships. Traditional databases primarily process data through transactions, a methodology called transaction processing. Transaction processing is the methodology of grouping related business operations based on predefined business events. An example of this can be seen when a person attempts to purchase an item from an online e-tailer. The business must execute specific operations for the related business event. In this case, the business must store the following information: Customer Info, Order Info, Order Item Info, Customer Payment Data, Payment Results, and Current Order Status.
    Example: pseudo-SQL operations needed for processing an online e-tailer sale.
      - Insert Customer into Customers
      - Insert New Order into Orders
      - Insert Each New Order Item into OrderItems
      - Insert Customer Payment Info into PaymentInfo
      - Insert Payment Processing Result into PaymentDetails
      - Update Customer for Current Order Status
    Common Relational Database Management Systems:
      - Microsoft SQL Server
      - Microsoft Access
      - Oracle
      - MySQL
      - DB2
    It is important to note that no current RDBMS has fully implemented all of the relational principles.
    Common RDBMS traits:
      - Volatile Data
      - Supports Transaction Processing
      - Optimized for Updates and Simple Queries

    Read the article

  • Kubuntu not showing eth0

    - by Laurbert515
    I just installed Kubuntu 12.04.2 and do not have an internet connection. I also cannot access the desktop and only have the terminal. When I enter xclock I get Error: Can't open display:. I believe this is because I do not have the correct drivers and need to download them ... using the internet, which I can't access. So here's what's going on. ifconfig gives:
        lo        Link encap: Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:16 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:16 errors:0 dropped:0 overruns:0 frame:0
                  collisions:0 txqueuelen:0
                  RX bytes:1296 (1.2 KB)  TX bytes:1296 (1.2 KB)

        wlan0     Link encap: Ethernet  HWaddr 68:17:29:58:49:4a
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:16 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:16 errors:0 dropped:0 overruns:0 frame:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
    lspci -nnk gives:
        07:00.0 Network Controller [0280]: Intel Corporation Centrino Wireless-N 2230 [8086:0887] (rev c4)
                Subsystem: Intel Corporation Centrino Wireless-N 2230 BGN [8086:4062]
                Kernel driver in use: iwlwifi
                Kernel modules: iwlwifi
        0d:00.0 Ethernet Controller [0200]: Atheros Communications Inc. AR8161 Gigabit Ethernet [1969:1091] (rev 10)
                Subsystem: Toshiba America Info Systems Device [1179:fa77]
    I believe this means that it is using the ethernet connection but thinks it is a wireless connection. sudo lshw -class network gives:
        *-network
             description: Wireless interface
             product: Centrino Wireless-N 2230
             ...
        *-network UNCLAIMED
             description: Ethernet controller
             product: AR8161 Gigabit Ethernet
    I want to get the internet working so I can fix the GPU driver, etc., but I can't seem to get it working even though I have my ethernet cable plugged in (and I'm sure it is working).

    Read the article
