Search Results

Search found 37240 results on 1490 pages for 'ubuntu enterprise cloud'.

  • Ubuntu's Lucid Lynx: Ubuntu's Most Innovative

    Datamation: "Ubuntu's Lucid Lynx (Ubuntu 10.04) is still six weeks away from release. However, on the eve of the first beta release, the daily builds and news releases suggest that Lucid will be one of the most innovative versions of Ubuntu for several years."

    Read the article

  • Oracle Enterprise Manager Ops Center 12c Update 1 is available now

    - by Anand Akela
    Following the announcement of Oracle Enterprise Manager Ops Center 12c on April 4th, we are happy to announce the release of Oracle Enterprise Manager Ops Center 12c update 1, a bundled patch release for Oracle Enterprise Manager Ops Center. Here are the key features of Oracle Enterprise Manager Ops Center 12c update 1:

    - Oracle VM SPARC Server Pool HA Policy
    - Automatic upgrade from Ops Center 11g update 3 and Ops Center 12c
    - Oracle Linux 5.8 and 6.x support
    - Oracle VM SPARC IaaS (Virtual Datacenters)
    - WANBoot improvements with OBP handling enhancements
    - SPARC SuperCluster support
    - Stability fixes

    This new release contains significant enhancements in the update provisioning, bare-metal OS provisioning, shared storage management, cloud/virtual datacenter, and networking management sections of the product. With this update, customers can achieve better handling of ASR faults, add networks and storage to virtual guests more easily, understand IPMP and VLAN configurations better, get more robust LDAP integration and virtualization-aware firmware patching, and observe improved product performance across the board. Customers can now accelerate Oracle VM SPARC and T4 deployments into production.

    Oracle Enterprise Manager Ops Center 11g and Ops Center 12c customers will notice the availability of the new product update under the Administration tab within the Browser User Interface (BUI). The upgrade process is explained in detail in the Ops Center Administration Guide under "Chapter 10: Upgrading". Please be sure to read that chapter and the Release Notes before upgrading. During the week of July 9th, the full download of the product will be available from the Oracle Enterprise Manager Ops Center download website. Based on customer feedback, we have changed the updates to include the entire product: customers no longer need to install Ops Center 12c and then upgrade to the update 1 release. They can simply install Ops Center 12c update 1 directly.

    Here are some resources that can help you learn more about Oracle Enterprise Manager Ops Center and the new update 1:

    - Oracle Enterprise Manager Ops Center OTN site
    - Bi-monthly product demos
    - Oracle Enterprise Manager Ops Center Forum
    - Oracle Enterprise Manager Ops Center MOS Community

    Watch the recording of the Oracle Enterprise Manager 12c launch webcast. Stay connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • An Actionable Common Approach to Federal Enterprise Architecture

    - by TedMcLaughlan
    The recent “Common Approach to Federal Enterprise Architecture” (US Executive Office of the President, May 2 2012) is extremely timely and well-organized guidance for the Federal IT investment and deployment community, as useful for Federal Departments and Agencies as it is for their stakeholders and integration partners. The guidance not only helps IT Program Planners and Managers, but also informs and prepares constituents who may be the beneficiaries of, or otherwise impacted by, the investment.

    The FEA Common Approach extends from and builds on the rapidly-maturing Federal Enterprise Architecture Framework (FEAF) and its associated artifacts and standards, already included to a large degree in the annual Federal Portfolio and Investment Management processes – for example the OMB’s Exhibit 300 (i.e. the Business Case justification for IT investments). A very interesting element of this Approach is the very necessary guidance for actually using an Enterprise Architecture (EA) and/or its collateral – good guidance for any organization charged with maintaining a broad portfolio of IT investments.

    The associated FEA Reference Models (i.e. the BRM, DRM, TRM, etc.) are very helpful frameworks for organizing, understanding, communicating and standardizing across agencies with respect to vocabularies, architecture patterns and technology standards. Determining when, how and to what level of detail to include these reference models in the typically long-running Federal IT acquisition cycles wasn’t always clear, however, particularly during the first interactions of a Program’s technical and functional leadership with the Mission owners and investment planners. This typically occurs as an agency begins the process of describing its strategy and business case for allocation of new Federal funding, reacting to things like new legislation or policy, real or anticipated mission challenges, or straightforward ROI opportunities (for example the introduction of new technologies that deliver significant cost savings).

    The early artifacts (i.e. Resource Allocation Plans, Acquisition Plans, Exhibit 300s or other Business Case materials, etc.) of the intersection between Mission owners, IT and Program Managers are far easier to understand and discuss when the overlay of an evolved, actionable Enterprise Architecture (such as the FEA) is applied. “Actionable” is the key word – too many Public Service entity EAs (including the FEA) have for too long been used simply as a very highly-abstracted standards reference, duly maintained and nominally enforced by an Enterprise or System Architect’s office.

    Refreshing elements of this recent FEA Common Approach include one of the first Federally-documented acknowledgements of the “Solution Architect” (the “problem-solving” role). This role collaborates with the Enterprise, System and Business Architecture communities primarily on completing actual “EA Roadmap” documents. These are roadmaps grounded in real cost, technical and functional details that are fully aligned both with contextual expectations (for example the new “Digital Government Strategy” and its required roadmap deliverables) and with the rapidly increasing complexities of today’s more portable and transparent IT solutions.

    We also expect some very critical synergies to develop in early IT investment cycles between this new breed of “Federal Enterprise Solution Architect” and the first waves of the newly-formal “Federal IT Program Manager” roles operating under more standardized “critical competency” expectations (including EA), which are likely already seriously influencing the quality of annual CPIC (Capital Planning and Investment Control) processes. Our Oracle Enterprise Strategy Team (EST) and associated Oracle Enterprise Architecture (OEA) practices are already engaged in promoting and leveraging the visibility of Enterprise Architecture as a key contributor to early IT investment validation, and we look forward to seeing the real, citizen-centric benefits of this FEA Common Approach surface across the entire Public Service CPIC domain – Federal, State, Local, Tribal and otherwise. Read more Enterprise Architecture blog posts for additional EA insight!

    Read the article

  • Configure an Azure VM for Dynamic DNS for Cloud Services

    - by Adam
    I am trying to set up an Azure VM with proper DNS to allow multiple cloud services to communicate across cloud service boundaries. As I understand it, I need to provide my own DNS server. I do not have any on-premise infrastructure, so I am trying to configure an Azure VM to act as my DNS. This SO question (http://stackoverflow.com/questions/21858926/azure-how-to-connect-one-cloud-service-with-other-in-one-virtual-network) is very similar to my setup. This article (http://msdn.microsoft.com/en-us/library/windowsazure/jj156088.aspx) describes my particular case: name resolution between virtual machines and role instances located in the same virtual network, but different cloud services. Here is what I have done:

    1. Created an Azure Virtual Network and declared subnets for each of my cloud services.
    2. Created an Azure VM (Windows 2012 R2) with DNS enabled.
    3. RDP'd to the VM, enabled the DNS role and installed its features.
    4. Added the appropriate NetworkConfiguration XML section to each of my cloud services' .cscfg files (see the sketch after this question).
    5. Re-deployed my cloud services.

    I have verified that I set up the virtual network and network configuration properly, because my cloud service hosts are able to communicate with each other if I use the internal IPs. However, name resolution doesn't appear to be working, and it doesn't appear that my cloud service roles can communicate with my DNS server. How do I configure my VM so that my different cloud service roles register themselves with my DNS server?

    EDIT: I think I am one step closer to getting this to work. The cloud services that I was using are in an old affinity group, which is not supported by VMs, so I was unable to add my VM into my virtual network. I created a new VNET in a new affinity group with my VM added into it. However, I still don't know how to configure the Azure VM's DNS server so that the cloud services register themselves for name resolution. Also, an added bonus guaranteed to get a +1: explain whether it is possible to register a DNS entry for the VIP of an internal endpoint of my cloud services, so we can get load balancing. Thanks!
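
    For reference, the NetworkConfiguration section that step 4 refers to looks roughly like the sketch below in the classic .cscfg schema. All names here (MyService, WebRole1, MyVNet, FrontEnd, and the 10.4.0.4 address) are hypothetical placeholders; verify the element names against your SDK version:

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="MyService"
            xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
          <Role name="WebRole1">
            <Instances count="2" />
          </Role>
          <NetworkConfiguration>
            <Dns>
              <DnsServers>
                <!-- hypothetical internal IP of the DNS VM inside the VNET -->
                <DnsServer name="MyDns" IPAddress="10.4.0.4" />
              </DnsServers>
            </Dns>
            <VirtualNetworkSite name="MyVNet" />
            <AddressAssignments>
              <InstanceAddress roleName="WebRole1">
                <Subnets>
                  <Subnet name="FrontEnd" />
                </Subnets>
              </InstanceAddress>
            </AddressAssignments>
          </NetworkConfiguration>
        </ServiceConfiguration>

    With this in place, role instances use the listed DNS server for lookups; for registration, a common complement on the server side is enabling dynamic updates on the Windows DNS zone, or adding static A records for each role.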

    Read the article

  • Cannot install g++ on Ubuntu

    - by Erel Segal
    I don't have g++:

        erelsgl@ubuntu:/etc/apt$ which g++
        erelsgl@ubuntu:/etc/apt$ g++
        The program 'g++' can be found in the following packages:
         * g++
         * pentium-builder
        Try: sudo apt-get install <selected package>

    So I try to install it:

        erelsgl@ubuntu:~/srilm$ sudo apt-get install g++
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        g++ is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
        2 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Setting up g++ (4:4.4.3-1ubuntu1) ...
        update-alternatives: error: alternative path /usr/bin/g++ doesn't exist.
        dpkg: error processing g++ (--configure):
         subprocess installed post-installation script returned error exit status 2
        dpkg: dependency problems prevent configuration of build-essential:
         build-essential depends on g++ (>= 4:4.3.1); however:
          Package g++ is not configured yet.
        dpkg: error processing build-essential (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         g++
         build-essential
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I also tried to install build-essential and got the same results, and "sudo apt-get update" didn't help. This is my apt-cache:

        erelsgl@ubuntu:/etc/apt$ apt-cache policy g++ build-essential
        g++:
          Installed: 4:4.4.3-1ubuntu1
          Candidate: 4:4.4.3-1ubuntu1
          Version table:
         *** 4:4.4.3-1ubuntu1 0
                500 http://il.archive.ubuntu.com/ubuntu/ lucid/main Packages
                100 /var/lib/dpkg/status
        build-essential:
          Installed: 11.4build1
          Candidate: 11.4build1
          Version table:
         *** 11.4build1 0
                500 http://il.archive.ubuntu.com/ubuntu/ lucid/main Packages
                100 /var/lib/dpkg/status

    I also tried "sudo dpkg --configure -a" and got the same "alternative path /usr/bin/g++ doesn't exist" error while setting up g++, with the same dependency failures for build-essential.
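
    Since dpkg believes g++ is installed while /usr/bin/g++ is missing, one recovery path worth trying (a sketch, not a verified fix for this exact box) is to re-download and re-unpack the compiler packages so the missing files reappear, then let dpkg finish the pending configuration:

        sudo apt-get install --reinstall g++-4.4 g++   # restore /usr/bin/g++-4.4 and the g++ wrapper files
        sudo dpkg --configure -a                       # re-run the interrupted post-installation scripts
        sudo apt-get install -f                        # let apt repair build-essential afterwards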

    Read the article

  • Ubuntu Newbie Needs Assistance!!

    - by Steve Greene
    New Ubuntu user needs help! Version 9.10 does not communicate with my laptop's hardware. Hello folks, several days ago I installed Ubuntu 9.10 onto my Acer Aspire 3100 laptop, running it alongside Windows Vista as a dual-boot system. Creation of the Ubuntu boot CD went fine, and the installation onto my hard drive was flawless. Ubuntu opens and behaves as I would expect, except for one little problem: for reasons unknown to me, Ubuntu is not communicating with my laptop's networking hardware, and I have no internet connectivity, even when sitting directly under the wireless router at the local library (literally), which puts out a wickedly fast signal that my Windows Vista OS auto-detects and immediately connects to.

    Up in the right side of the Ubuntu desktop, I click on the network icon and it does not show a wireless connection at all, even though I am only a few feet from the router. At home, where I use a dialup modem, I also see no means of getting online. My modem is an HDAUDIO Soft Data Fax Modem with Smart CP, manufactured by CXT (Conexant Systems Inc., file version 4.0.13.0, driver version 7.58.0.0).

    I desperately wish to convert to Ubuntu. I used Mac for ten years, and then Windows for ten years. Now, after 20 years, I want to live out my days as an open-source Ubuntu fanatic. I am ready to give the old status quo the boot! I am an advanced computer user, but I am not a programmer. I seek a solution that is user-friendly for normal people, something equivalent to a driver that I can easily install or activate that will allow Ubuntu to see my hardware and get me connected. Can anyone help me over this hopefully-little glitch so that I can move on in total Ubuntu bliss?

    My processor is a Mobile AMD Sempron Processor 3500+ at 1.80 GHz, with 1.50 GB RAM and a 32-bit operating system. I am running Windows Vista Home Basic, Service Pack 2. My current email is [email protected] if you have a workable solution that does not require programmer status to implement. Surely this must be a simple fix that I am overlooking, but being the new guy on the block, I have yet to be enlightened. Thanks for your help in coming up to speed!

    Steve, wanna-be Ubuntu fanatic. "If you're not living on the edge, you're taking up too much space."
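
    A first diagnostic step for a question like this (a sketch, assuming a stock 9.10 install) is to identify the wireless chipset and see whether any driver claimed it, since chipsets common in laptops of this era often need a restricted driver enabled under System > Administration > Hardware Drivers:

        sudo lshw -C network      # lists each network device and the driver bound to it, if any
        lspci -nn | grep -i net   # the PCI vendor:device IDs identify the exact chipset
        iwconfig                  # shows whether a wireless interface exists at all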

    Read the article

  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
    Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments. In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE. In this segment, I will lay the groundwork for the modeling concepts. First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing.

    Oracle BI EE Query Cycle

    The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources. There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source or, most of the time, are spread across various types of sources.

    Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves. Most of them create queries without knowing it, by picking a dashboard page and some filters. Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations.

    The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business-focused measures and dimensional attributes that business people can use in their analyses and dashboards. After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts.

    The CEIM is a model that controls the processing of the BI Server. It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis. It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules. The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.

    Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL. The API simply uses ODBC or JDBC to pass the query to the BI Server. Presentation services writes the LSQL query in terms of the simplified objects presented to the users. The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources. For example, the LSQL below was rewritten into the physical SQL for an Oracle 11g database that follows it.
    Logical SQL:

        SELECT "D0 Time"."T02 Per Name Month" saw_0,
               "D4 Product"."P01  Product" saw_1,
               "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2
        FROM "Sample Sales"
        ORDER BY saw_0, saw_1

    Physical SQL:

        WITH SAWITH0 AS (
          select T986.Per_Name_Month as c1,
                 T879.Prod_Dsc as c2,
                 sum(T835.Units) as c3,
                 T879.Prod_Key as c4
          from Product T879 /* A05 Product */ ,
               Time_Mth T986 /* A08 Time Mth */ ,
               FactsRev T835 /* A11 Revenue (Billed Time Join) */
          where ( T835.Prod_Key = T879.Prod_Key
              and T835.Bill_Mth = T986.Row_Wid )
          group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
        )
        select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
        from SAWITH0
        order by c1, c2

    Probably everybody reading this blog can write SQL or MDX. However, the trick in designing the CEIM is that you are modeling a query-generation factory. Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests. This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.

    The Structure of the Common Enterprise Information Model (CEIM)

    The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result. The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical.

    Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request. When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries. When the results come back from the data source queries, the reverse relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.

    Business Model

    Think of the business model as the heart of the CEIM you are designing. This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole. It also provides the baseline business-friendly names and user-readable dictionary. For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models.

    Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value as well. This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.
    It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next. The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model.

    Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time. This is important for federation, where a given logical object can map to several different physical objects in different databases. It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products.

    Your requirements process with your user community will mostly affect the business model. This is where you will define most of the things they specifically ask for, such as metric definitions. For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer.

    Physical Model

    The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries. Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time.

    Each physical data source will have its own physical model, or "database" object, in the CEIM. The shape of each physical model matches the shape of its physical source. In other words, if the source is normalized relational, the physical model will mimic that normalized shape. If it is a hypercube, the physical model will have a hypercube shape. If it is a flat file, it will have a denormalized tabular shape.

    To aid in query optimization, the physical layer also tracks the specifics of the database brand and release. This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer. For most data sources, native APIs are used to further optimize performance and functionality.

    The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics. This encapsulation is another enabler of packaged BI applications and federation. It is also key to hiding the complex shapes and relationships in the physical sources from the end users. Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database. The only difference from the user's point of view is that the second query added a more detailed dimension level column - everything else was the same.

    Mappings

    Within the Business Model and Mapping layer, the mappings provide the binding from each logical column and join in the dimensional business model to each of the objects that can provide its data in the physical layer.
    When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used. These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources. These mappings are usually the most sophisticated part of the CEIM.

    Presentation

    You might think of the presentation layer as a set of very simple relational-like views into the business model. Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns. For business users, presentation services interprets these as subject areas, folders and columns, respectively. (Note that in 10g, subject areas were called presentation catalogs in the CEIM. In this blog, I will stick to 11g terminology.) Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio).

    The purpose of the presentation layer is to specialize the business model for different categories of users. Based on a user's role, they will be restricted to specific subject areas, tables and columns for security. The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users. Customized names and descriptions can be used to override the business model names for a specific audience. Variables in the object names can be used for localization.

    For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables. The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer. In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model.

    Other Model Objects

    There are some objects that apply to multiple layers. These include security-related objects, such as application roles, users, data filters, and query limits (governors). There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user-session basis. Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.

    The Query Factory

    At this point, you should have a grasp on the query factory concept. When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand. The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.

    Read the article

  • Help on developing enterprise-level software solutions

    - by wefwgeweg
    There is a specific niche which I would like to target by providing a complete enterprise-level software solution. The problem is, where do I begin? Meaning, I come from writing just desktop software in VB/ASP.NET/PHP/MySQL, and suddenly unfamiliar terms pop up like Oracle, SAP Business Information Warehouse, J2EE. Obviously, something is pointing towards Java. Is it common for software suites or solutions to be developed 100% on Java technology and standards? Are there any other platforms to build enterprise-level software on? I am still lacking an understanding of what exactly "enterprise level" is. What is the sufficient condition for a piece of software that sells for $199 to suddenly cost $19,999 in its "enterprise" package? I don't understand why there is such a huge discrepancy between "standard" and "enterprise" versions of software. Is it just attempting to bag large corporations on a spending spree? So why does one choose to develop so-called "enterprise" software? Is it because of the large inflated price tag you can justify with it? I would also like some entrepreneurial resources on starting your own enterprise software company in a niche. Thank you for reading; I am still trying to find the right questions.

    Read the article

  • How can I free up disk space in my Ubuntu Hardy Heron install?

    - by rvs
    I'd like to make some room on /dev/sda1 without necessarily having to remove a whole bunch of applications (I've already gone through and deleted all frivolous apps). This is the state of /dev/sda1 currently:

        Dir: /    Type: ext3    Total: 9.4GiB    Free: 488.6MiB    Available: 0 bytes    Used: 8.9GiB

    EDIT: du output added from the comments below:

        769068  /var/lib/mysql
        351208  /usr/lib
        297060  /usr/local/bin/eclipse/plugins
        184124  /usr/bin
        175924  /usr/lib/openoffice/program
        143940  /usr/local/bin/eclipsePHP/plugins
         92520  /boot
         81200  /opt/android-sdk-linux/add-ons/google_apis-6_r01/images
         79964  /opt

    That's funny, because the tables in /var/lib/mysql are the reason that I ran out in the first place. But I need them, and room for many more possibly large databases.
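
    The usual first sweep on a small root partition like this one (generic housekeeping, sketched here rather than tailored to this particular box) is to clear package caches and old kernels, then let du point at the next biggest offenders:

        sudo apt-get clean                             # empties /var/cache/apt/archives of downloaded .deb files
        sudo apt-get autoremove                        # removes dependencies that are no longer needed
        dpkg -l 'linux-image-*'                        # old kernels fill /boot; purge all but the running one
        sudo du -x --max-depth=1 / | sort -rn | head   # largest top-level directories on this filesystem only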

    Read the article

  • Why can't I get Apache2 mod_dumpio working under Lucid Lynx Ubuntu?

    - by bland328
    I'm trying to capture all of the traffic to and from an Apache2 web server for troubleshooting purposes, so I did the following to try to set mod_dumpio up properly:

    1. Used a2enmod to enable mod_dumpio.
    2. Changed LogLevel to "debug" in apache2.conf.
    3. Added "DumpIOInput On", "DumpIOOutput On" and "DumpIOLogLevel debug" to apache2.conf.
    4. Issued "/etc/init.d/apache2 restart" to restart Apache.
    5. Issued "apache2ctl -t -D DUMP_MODULES" to make sure mod_dumpio was loaded.

    I'm watching /var/log/apache2/error.log, but not seeing much there, and certainly not a dump of all input and output. Can anyone help?
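
    For comparison, the directives the steps above describe would sit in the Apache 2.2 server configuration on Lucid roughly as follows (a sketch; mod_dumpio only writes when the effective error-log level is debug):

        # /etc/apache2/apache2.conf, with mod_dumpio enabled via a2enmod
        LogLevel debug
        DumpIOInput On
        DumpIOOutput On
        DumpIOLogLevel debug

    One common gotcha worth checking: a <VirtualHost> block in /etc/apache2/sites-enabled/ with its own LogLevel line overrides the global debug setting for requests served by that vhost.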

    Read the article

  • Ubuntu 9.10: how do I troubleshoot a startup script that doesn't appear to run?

    - by TheDeeno
    I've created a bash script 'foo'. I've made that script executable with chmod +x and added it to the start-up by running:

        sudo update-rc.d foo defaults 80

    Despite that, it doesn't appear to be working at startup. Is there a way to have my script echo messages to a log? Or is there some log that would record events/errors for this? At the moment I feel like I'm flying blind and don't really know how to troubleshoot this.
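
    One low-tech way to get the visibility the question asks for (a sketch, assuming the script lives at /etc/init.d/foo) is to make the script log its own execution near the top:

        #!/bin/sh
        # /etc/init.d/foo
        # Send everything this script prints, stdout and stderr alike, to a log
        # file, so we can tell whether init ran it at all and where it failed.
        exec >> /var/log/foo.log 2>&1
        echo "foo invoked at $(date) with args: $*"
        set -x   # trace each subsequent command as it runs

        # ... rest of the original script ...

    If /var/log/foo.log never appears after a reboot, the script was never executed at all, which points at the rc symlinks (check with: ls -l /etc/rc2.d/ | grep foo) rather than at the script body.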

    Read the article

  • How to set up an FTP user on an Ubuntu 9 server using the vsftpd utility?

    - by Pavel
    Hi guys, I'm kinda new to this so bear with me. I've set up a server and now I need to create an FTP user for it. I'm doing this by typing:

        useradd pavel
        passwd pavel

    And then I'm running:

        iptables -I INPUT 1 -p tcp --dport 21 -j ACCEPT
        iptables-save > /etc/iptables.rules

    in order to open the FTP port. Lastly, I'm changing the user's shell with:

        usermod -s /bin/sh pavel

    So now tell me - what am I doing wrong here? I just want to connect using the FTP protocol. Please help...
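
    The piece the commands above don't touch is vsftpd's own configuration, which on a stock install typically has local logins disabled. A minimal sketch of the relevant /etc/vsftpd.conf lines (check each option against the vsftpd.conf man page for your version):

        listen=YES
        anonymous_enable=NO
        local_enable=YES          # allow system accounts such as 'pavel' to log in
        write_enable=YES          # allow uploads, not just downloads
        chroot_local_user=YES     # keep users inside their home directories

    followed by a restart:

        sudo /etc/init.d/vsftpd restart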

    Read the article

  • Where to put X11 drivers configuration in Ubuntu Lucid?

    - by vava
    Since HAL was removed from Lucid, where can I now put all those little configuration tweaks for the mouse and other input devices? In particular, I want to configure the ThinkPad trackpad to enable scrolling with the middle button. In HAL, it was done with:

        <match key="info.product" string="TPPS/2 IBM TrackPoint">
          <merge key="input.x11_options.EmulateWheel" type="string">true</merge>
          <merge key="input.x11_options.EmulateWheelButton" type="string">2</merge>
          <merge key="input.x11_options.ZAxisMapping" type="string">4 5</merge>
          <merge key="input.x11_options.XAxisMapping" type="string">6 7</merge>
          <merge key="input.x11_options.Emulate3Buttons" type="string">true</merge>
          <merge key="input.x11_options.EmulateWheelTimeout" type="string">200</merge>
        </match>
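
    On X servers 1.8 and later, the direct replacement for this kind of HAL snippet is an InputClass section. The translation below is a sketch: the directory scanned for it varies by release (/etc/X11/xorg.conf.d/ or /usr/lib/X11/xorg.conf.d/), and Lucid's X server 1.7 may instead need the same Option lines placed in xorg.conf:

        # hypothetical file name: /etc/X11/xorg.conf.d/20-trackpoint.conf
        Section "InputClass"
            Identifier   "TrackPoint wheel emulation"
            MatchProduct "TPPS/2 IBM TrackPoint"
            Option       "EmulateWheel"        "true"
            Option       "EmulateWheelButton"  "2"
            Option       "ZAxisMapping"        "4 5"
            Option       "XAxisMapping"        "6 7"
            Option       "Emulate3Buttons"     "true"
            Option       "EmulateWheelTimeout" "200"
        EndSection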

    Read the article

  • Is there a way to find what values come from what file in HAL under Ubuntu?

    - by vava
    I've been playing with multitouch on my ThinkPad and read a few tutorials on how to set it up. One of them mentioned /usr/share/hal/fdi/policy/20thirdparty/11-x11-synaptics.fdi; I edited it and enabled SHMConfig through it. Later I found out about the /etc/hal/policy/ directory and put some customization for my touchpad there as well, in a separate fdi file. But now it looks like the touchpad doesn't care about my customizations. I have gsynaptics installed and can configure it through the GUI, and I can configure it with synclient, but I can't set any values through fdi files. I even turned off SHMConfig, reverting the 11-x11-synaptics.fdi file to its original state, but SHMConfig still seems to be enabled; otherwise I wouldn't be able to configure properties at runtime. So, I was thinking, maybe there are additional HAL files I don't know about. How can I find them, particularly the ones responsible for turning SHMConfig on?
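
    Two commands can narrow down the "which file sets what" question, sketched here with stock HAL tools:

        # every fdi file mentioning SHMConfig, in both the packaged and admin-editable trees
        grep -rl SHMConfig /usr/share/hal/fdi/ /etc/hal/ 2>/dev/null

        # the final merged property set HAL computed for the touchpad, after all fdi files were applied
        lshal | grep -i -A 2 synaptics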

    Read the article

  • xkb layouts not working (in KDE?) after upgrade from Ubuntu 9.10 to 10.04

    - by Alan
    I customised my keyboard layout in 9.10 by editing the appropriate /usr/share/X11/xkb/symbols/ file. After upgrading to 10.04 I noticed it had overwritten all my modifications, so I recovered the layout and overwrote the symbol file's base entry. Sadly KDE (and, presumably, the entire OS) seems to ignore the files altogether. The help files don't mention anything about modifying layouts anyway (and the layout switcher seems to be using setxkbmap, which uses the above path according to its man page), so I'm at a bit of a loss. Do I need to compile this into some other format somehow or how do I get it to work?
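
    A frequently reported culprit for exactly this symptom, offered here as a guess worth testing rather than a confirmed diagnosis, is the compiled-keymap cache: the X server stores prebuilt .xkm files under /var/lib/xkb/ and can keep serving a stale one after the symbols files change. Clearing it is harmless, since the cache is rebuilt on demand:

        sudo rm -f /var/lib/xkb/*.xkm   # drop the cached compiled keymaps
        setxkbmap -layout us            # reload; substitute your own layout name, or log out and back in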

    Read the article

  • How to play from mic to speakers in Ubuntu Karmic?

    - by vava
    OK, I have a silly problem. I have a few Bluetooth headsets lying around and I want to make a DIY baby monitor out of them. All I need is for some application to listen to the mic and send it to the speakers. Loopback (even though it doesn't work) is not good enough; it sends sound from the mic to the speakers on the same device, but I need it to go across devices. So does anyone know some application that can do that? I'm looking for something small and easy to use, not jackd or similar.
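
    If the PulseAudio build on Karmic includes module-loopback (worth verifying with pactl list first), the whole job can be done without any extra application. A sketch:

        pactl list short sources    # find the headset microphone's source name
        pactl list short sinks      # find the output device's sink name
        # wire the mic of one device into the speakers of another;
        # <headset-source> and <speaker-sink> stand for the names printed above
        pactl load-module module-loopback source=<headset-source> sink=<speaker-sink>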

    Read the article

  • PostgreSQL 9.1 on Ubuntu Lucid fails to start - how to debug?

    - by Tom Fakes
    I'm using Vagrant with Chef Solo to set up a Lucid 64 box, and a Chef recipe to install PostgreSQL 9.1 from Martin Pitt's backports. The install goes OK until the point where the database is started with:

        /etc/init.d/postgresql start

    There's a long pause and the command fails. If I run pg_ctl manually, the database starts! The entire contents of my postgresql-9.1-main log file are:

        2012-05-07 11:01:18 PDT LOG:  database system was shut down at 2012-05-07 11:01:16 PDT
        2012-05-07 11:01:18 PDT LOG:  database system is ready to accept connections
        2012-05-07 11:01:18 PDT LOG:  autovacuum launcher started
        2012-05-07 11:01:18 PDT LOG:  incomplete startup packet
        2012-05-07 11:01:26 PDT LOG:  received fast shutdown request
        2012-05-07 11:01:26 PDT LOG:  aborting any active transactions
        2012-05-07 11:01:26 PDT LOG:  autovacuum launcher shutting down
        2012-05-07 11:01:26 PDT LOG:  shutting down
        2012-05-07 11:01:26 PDT LOG:  database system is shut down

    I've tried to change the postgresql config file to get more info into the logfile, but that hasn't worked at all. How do I debug this to find out what is failing, so I can fix it?
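
    On Debian-style packaging the init script is a thin wrapper over pg_ctlcluster, so tracing the layers separately usually localizes this kind of failure. A sketch:

        pg_lsclusters                            # what state postgresql-common thinks the 9.1/main cluster is in
        sudo pg_ctlcluster 9.1 main start        # the call the init script makes; errors surface directly here
        sudo sh -x /etc/init.d/postgresql start  # trace the script itself to see which check makes it bail out

    The log excerpt shows the server starting cleanly and then receiving a fast shutdown eight seconds later, which is consistent with a wrapper-level readiness check deciding the start "failed" and shutting the cluster back down.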

    Read the article

  • How to change resolution in Ubuntu 10.04, where xvinfo is showing no adapters present?

    - by YumYum
    I am trying to maximize my resolution; currently I have resolution 800x600 (4:3) at a refresh rate of 61 Hz. I tried the following, but it did not work:

        $ xvinfo
        X-Video Extension version 2.2
        screen #0
         no adaptors present

        $ cvt 1920 1080
        # 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
        Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync

        $ xrandr --newmode clever_name 173.00 1920 2048 2248 2576 1080 1083 1088 1120

        $ xrandr -q
        Screen 0: minimum 640 x 480, current 800 x 600, maximum 800 x 600
        default connected 800x600+0+0 0mm x 0mm
           800x600        61.0*
           640x480        60.0
          clever_name (0x11d)  173.0MHz
                h: width  1920 start 2048 end 2248 total 2576 skew 0 clock 67.2KHz
                v: height 1080 start 1083 end 1088 total 1120          clock 60.0Hz

        $ xrandr --addmode default clever_name
        $ xrandr --output default --mode clever_name
        xrandr: Configure crtc 0 failed
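
    A pattern worth noting in the output above (a diagnostic hint, not a confirmed diagnosis): an output literally named "default" with a 0mm x 0mm size and a hard 800x600 maximum usually means X is running without a native video driver, on a fallback such as VESA, so new modes cannot actually be programmed. Checking which driver X loaded is the first step:

        grep -iE 'vesa|fbdev|\(EE\)' /var/log/Xorg.0.log   # fallback drivers or startup errors
        lspci -nn | grep -i vga                            # identify the video chipset to find the proper driver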

    Read the article
