Search Results

Search found 5514 results on 221 pages for 'rpm repository'.

Page 12/221 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • Is it possible to search an apt repository via a web browser?

    - by Bryan
    I've installed the beta version of Ubuntu 10.04 server edition (x64), but the system doesn't have an internet connection. Is there a way I can find out what packages are in the apt repository with nothing more than a web browser? The reason I'm asking, is because I will have an internet connection available when the production system goes live, but it simply isn't possible to internet connect my development system.
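
    One hedged approach (not part of the original question): the Ubuntu archive is served over plain HTTP, so besides browsing packages.ubuntu.com, the raw package index files can be opened in a browser or fetched and searched offline. A minimal sketch, assuming the stock archive layout for 10.04 (lucid) on amd64:

        # fetch the index for the "main" component and list every package name in it
        wget http://archive.ubuntu.com/ubuntu/dists/lucid/main/binary-amd64/Packages.gz
        zgrep '^Package:' Packages.gz | less

    The same Packages.gz file exists for the restricted, universe and multiverse components, and any mirror named in sources.list follows the same directory layout.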

    Read the article

  • How to upgrade a 1.4.3 TortoiseSVN-created repository to 1.6.x?

    - by SiegeX
    A few years ago, TortoiseSVN 1.4.3 was deployed to our software development team and we are now looking at upgrading the client to the latest 1.6.x version. I had hoped this upgrade would be transparent, with the additional features and modifications being client-side. For the most part, this was true except for a very important feature -- merging. When I try to merge a feature branch back into trunk I get a show-stopping "Merge tracking not supported" error. Here are some facts worth noting: When the repo was first created (before I was on board), it was created via the TortoiseSVN client itself. We do not have an 'svn server daemon' per se; rather, the repository folders/database reside on a share folder that is accessible from our workstation machines via file:///. This was actually an eye opener for me, I had always thought there was some SVN server daemon we were talking to. We do not have any access to the underlying machine hosting the SVN share other than the ability to read/write to the share itself. I don't even know what OS the machine is running on. This share server was chosen because its drives are backed up nightly by our IT group. In all honesty, we really don't need the merge tracking feature, although it would be nice to have. For the time being it would be sufficient to be able to use a 1.6.x TortoiseSVN client on the 1.4.3 repository and have it merge (sans tracking) without error. So now the question becomes, how does one upgrade a client-created 1.4.3 repo to a 1.6.x compatible version without access to the underlying machine the repo resides on? I was hoping the TortoiseSVN client itself had the ability to do this, but that does not appear to be the case. Will I be forced to copy the entire repo over to my local drive, run some svn commands to upgrade the repo locally, then copy the repo back to the share point? If so, will doing this break any compatibility with the 1.4.3 clients in case we can't upgrade them all at the same time? Thanks for the help.
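
    A hedged sketch of the copy-upgrade-copy route raised above (paths are placeholders; svnadmin comes from a separate Subversion 1.5+/1.6 command-line install, since TortoiseSVN alone does not bundle it). Note that once the format is bumped, 1.4.3 clients can no longer open the repository over file:// access, so the clients would need to be upgraded along with it:

        cp -R /path/to/share/repo /tmp/repo    # work on a local copy, never the live share
        svnadmin verify /tmp/repo              # sanity-check the copy first
        svnadmin upgrade /tmp/repo             # bump the repository format in place
        # copy /tmp/repo back to the share under a new name, then swap the directories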

    Read the article

  • SQL Monitor’s data repository

    - by Chris Lambrou
    As one of the developers of SQL Monitor, I often get requests passed on by our support people from customers who are looking to dip into SQL Monitor’s own data repository, in order to pull out bits of information that they’re interested in. Since there’s clearly interest out there in playing around directly with the data repository, I thought I’d write some blog posts to start to describe how it all works. The hardest part for me is knowing where to begin, since the schema of the data repository is pretty big. Hmmm… I guess it’s tricky for anyone to write anything but the most trivial of queries against the data repository without understanding the hierarchy of monitored objects, so perhaps my first post should start there. I always imagine that whenever a customer fires up SSMS and starts to explore their SQL Monitor data repository database, they become immediately bewildered by the schema – that was certainly my experience when I did so for the first time. The following query shows the number of different object types in the data repository schema: SELECT type_desc, COUNT(*) AS [count] FROM sys.objects GROUP BY type_desc ORDER BY type_desc;  type_desccount 1DEFAULT_CONSTRAINT63 2FOREIGN_KEY_CONSTRAINT181 3INTERNAL_TABLE3 4PRIMARY_KEY_CONSTRAINT190 5SERVICE_QUEUE3 6SQL_INLINE_TABLE_VALUED_FUNCTION381 7SQL_SCALAR_FUNCTION2 8SQL_STORED_PROCEDURE100 9SYSTEM_TABLE41 10UNIQUE_CONSTRAINT54 11USER_TABLE193 12VIEW124 With 193 tables, 124 views, 100 stored procedures and 381 table valued functions, that’s quite a hefty schema, and when you browse through it using SSMS, it can be a bit daunting at first. So, where to begin? Well, let’s narrow things down a bit and only look at the tables belonging to the data schema. That’s where all of the collected monitoring data is stored by SQL Monitor. The following query gives us the names of those tables: SELECT sch.name + '.' + obj.name AS [name] FROM sys.objects obj JOIN sys.schemas sch ON sch.schema_id = obj.schema_id WHERE obj.type_desc = 'USER_TABLE' AND sch.name = 'data' ORDER BY sch.name, obj.name; This query still returns 110 tables. I won’t show them all here, but let’s have a look at the first few of them:  name 1data.Cluster_Keys 2data.Cluster_Machine_ClockSkew_UnstableSamples 3data.Cluster_Machine_Cluster_StableSamples 4data.Cluster_Machine_Keys 5data.Cluster_Machine_LogicalDisk_Capacity_StableSamples 6data.Cluster_Machine_LogicalDisk_Keys 7data.Cluster_Machine_LogicalDisk_Sightings 8data.Cluster_Machine_LogicalDisk_UnstableSamples 9data.Cluster_Machine_LogicalDisk_Volume_StableSamples 10data.Cluster_Machine_Memory_Capacity_StableSamples 11data.Cluster_Machine_Memory_UnstableSamples 12data.Cluster_Machine_Network_Capacity_StableSamples 13data.Cluster_Machine_Network_Keys 14data.Cluster_Machine_Network_Sightings 15data.Cluster_Machine_Network_UnstableSamples 16data.Cluster_Machine_OperatingSystem_StableSamples 17data.Cluster_Machine_Ping_UnstableSamples 18data.Cluster_Machine_Process_Instances 19data.Cluster_Machine_Process_Keys 20data.Cluster_Machine_Process_Owner_Instances 21data.Cluster_Machine_Process_Sightings 22data.Cluster_Machine_Process_UnstableSamples 23… There are two things I want to draw your attention to: The table names describe a hierarchy of the different types of object that are monitored by SQL Monitor (e.g. clusters, machines and disks). For each object type in the hierarchy, there are multiple tables, ending in the suffixes _Keys, _Sightings, _StableSamples and _UnstableSamples. 
Not every object type has a table for every suffix, but the _Keys suffix is especially important and a _Keys table does indeed exist for every object type. In fact, if we limit the query to return only those tables ending in _Keys, we reveal the full object hierarchy: SELECT sch.name + '.' + obj.name AS [name] FROM sys.objects obj JOIN sys.schemas sch ON sch.schema_id = obj.schema_id WHERE obj.type_desc = 'USER_TABLE' AND sch.name = 'data' AND obj.name LIKE '%_Keys' ORDER BY sch.name, obj.name;  name 1data.Cluster_Keys 2data.Cluster_Machine_Keys 3data.Cluster_Machine_LogicalDisk_Keys 4data.Cluster_Machine_Network_Keys 5data.Cluster_Machine_Process_Keys 6data.Cluster_Machine_Services_Keys 7data.Cluster_ResourceGroup_Keys 8data.Cluster_ResourceGroup_Resource_Keys 9data.Cluster_SqlServer_Agent_Job_History_Keys 10data.Cluster_SqlServer_Agent_Job_Keys 11data.Cluster_SqlServer_Database_BackupType_Backup_Keys 12data.Cluster_SqlServer_Database_BackupType_Keys 13data.Cluster_SqlServer_Database_CustomMetric_Keys 14data.Cluster_SqlServer_Database_File_Keys 15data.Cluster_SqlServer_Database_Keys 16data.Cluster_SqlServer_Database_Table_Index_Keys 17data.Cluster_SqlServer_Database_Table_Keys 18data.Cluster_SqlServer_Error_Keys 19data.Cluster_SqlServer_Keys 20data.Cluster_SqlServer_Services_Keys 21data.Cluster_SqlServer_SqlProcess_Keys 22data.Cluster_SqlServer_TopQueries_Keys 23data.Cluster_SqlServer_Trace_Keys 24data.Group_Keys The full object type hierarchy looks like this: Cluster Machine LogicalDisk Network Process Services ResourceGroup Resource SqlServer Agent Job History Database BackupType Backup CustomMetric File Table Index Error Services SqlProcess TopQueries Trace Group Okay, but what about the individual objects themselves represented at each level in this hierarchy? Well that’s what the _Keys tables are for. This is probably best illustrated by way of a simple example – how can I query my own data repository to find the databases on my own PC for which monitoring data has been collected? Like this: SELECT clstr._Name AS cluster_name, srvr._Name AS instance_name, db._Name AS database_name FROM data.Cluster_SqlServer_Database_Keys db JOIN data.Cluster_SqlServer_Keys srvr ON db.ParentId = srvr.Id -- Note here how the parent of a Database is a Server JOIN data.Cluster_Keys clstr ON srvr.ParentId = clstr.Id -- Note here how the parent of a Server is a Cluster WHERE clstr._Name = 'dev-chrisl2' -- This is the hostname of my own PC ORDER BY clstr._Name, srvr._Name, db._Name;  cluster_nameinstance_namedatabase_name 1dev-chrisl2SqlMonitorData 2dev-chrisl2master 3dev-chrisl2model 4dev-chrisl2msdb 5dev-chrisl2mssqlsystemresource 6dev-chrisl2tempdb 7dev-chrisl2sql2005SqlMonitorData 8dev-chrisl2sql2005TestDatabase 9dev-chrisl2sql2005master 10dev-chrisl2sql2005model 11dev-chrisl2sql2005msdb 12dev-chrisl2sql2005mssqlsystemresource 13dev-chrisl2sql2005tempdb 14dev-chrisl2sql2008SqlMonitorData 15dev-chrisl2sql2008master 16dev-chrisl2sql2008model 17dev-chrisl2sql2008msdb 18dev-chrisl2sql2008mssqlsystemresource 19dev-chrisl2sql2008tempdb These results show that I have three SQL Server instances on my machine (a default instance, one named sql2005 and one named sql2008), and each instance has the usual set of system databases, along with a database named SqlMonitorData. Basically, this is where I test SQL Monitor on different versions of SQL Server, when I’m developing. There are a few important things we can learn from this query: Each _Keys table has a column named Id. This is the primary key. 
Each _Keys table has a column named ParentId. A foreign key relationship is defined between each _Keys table and its parent _Keys table in the hierarchy. There are two exceptions to this, Cluster_Keys and Group_Keys, because clusters and groups live at the root level of the object hierarchy. Each _Keys table has a column named _Name. This is used to uniquely identify objects in the table within the scope of the same shared parent object. Actually, that last item isn’t always true. In some cases, the _Name column is actually called something else. For example, the data.Cluster_Machine_Services_Keys table has a column named _ServiceName instead of _Name (sorry for the inconsistency). In other cases, a name isn’t sufficient to uniquely identify an object. For example, right now my PC has multiple processes running, all sharing the same name, Chrome (one for each tab open in my web-browser). In such cases, multiple columns are used to uniquely identify an object within the scope of the same shared parent object. Well, that’s it for now. I’ve given you enough information for you to explore the _Keys tables to see how objects are stored in your own data repositories. In a future post, I’ll try to explain how monitoring data is stored for each object, using the _StableSamples and _UnstableSamples tables. If you have any questions about this post, or suggestions for future posts, just submit them in the comments section below.

    Read the article

  • How to import a svn repository underneath a git repository?

    - by Thiago Moreira
    Hi there, I have a svn repository that I migrated to git using the tool svn2git. Now I would like to push this git layout to a remote repository underneath an existing directory. But, I would like to keep the svn history (tags and branches). For instance: Git remote repository layout: git-repository/dirA git-repository/dirB git-repository/dirC/svn-repository-migrated-to-git Makes sense? Is it possible?? Thanks
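
    One hedged way to attempt this (remote name and paths are placeholders) is the subtree merge recipe, which grafts the converted repository under dirC/ while keeping its commit history; branches and tags created by svn2git would still need to be pushed separately:

        cd git-repository
        git remote add migrated /path/to/svn2git-converted-repo
        git fetch migrated
        git merge -s ours --no-commit migrated/master   # git 2.9+ also needs --allow-unrelated-histories
        git read-tree --prefix=dirC/svn-repository-migrated-to-git/ -u migrated/master
        git commit -m "Graft svn2git conversion under dirC/"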

    Read the article

  • OWSM Policy Repository in JDeveloper - Tips & Tricks - 11g

    - by Prakash Yamuna
    In this blog post I discussed the OWSM Policy Repository that is embedded in JDeveloper. However, sometimes people may run into issues with the embedded repository. Here is a screen snapshot that shows the error you may run into (click on the image for a larger image): If you run into "java.lang.IllegalArgumentException: WSM-04694 : An invalid directory was provided to connect to a file-base MDS repository.", this is caused by spaces in the folder name. A quick way to work around this issue is to run "Jdeveloper.exe - su". Hope people find this useful!

    Read the article

  • Delivering SOA Governance with EAMS and Oracle Enterprise Repository by Link Consulting Team

    - by JuergenKress
    In the last 12 years Link Consulting has been making its presence in specific areas such as Governance and Architecture, both in terms of practices and methodologies, products, know-how and technological expertise. The Enterprise Architecture Management System - Oracle Enterprise Edition (EAMS - OER Edition) is the result of this experience and combines the architecture management solution with OER in order to deliver a product specialized for SOA Governance that gathers the better of two worlds in solution that enables SOA Governance projects, initiatives and programs. Enterprise Architecture Management System Enterprise Architecture Management System (EAMS), is an automation based solution that enables the efficient management of Enterprise Architectures. The solution uses configured enterprise repositories and takes advantages of its features to provide automation capabilities to the users. EAMS provides capabilities to create/customize/analyze repository data, architectural blueprints, reports and analytic charts. Oracle Enterprise Repository Oracle Enterprise Repository (OER) is one of the major and central elements of the Oracle SOA Governance solution. Oracle Enterprise Repository provides the tools to manage and govern the metadata for any type of software asset, from business processes and services to patterns, frameworks, applications, components, and models. OER maps the relationships and inter-dependencies that connect those assets to improve impact analysis, promote and optimize their reuse, and measure their impact on the bottom line. It provides the visibility, feedback, controls, and analytics to keep your SOA on track to deliver business value. The intense focus on automation helps to overcome barriers to SOA adoption and streamline governance throughout the lifecycle. Core capabilities of the OER include: Asset Management Asset Lifecycle Management Usage Tracking Service Discovery Version Management Dependency Analysis Portfolio Management EAMS - OER Edition The solution takes the advantages and features from both products and combines them in a symbiotic tool that enhances the quality of SOA Governance Initiatives and Programs. EAMS is able to produce a vast number of outputs by combining its analytical engine, SOA-specific configurations and the assets in OER and other related tools, catalogs and repositories. The configurations encompass not only the extendable parametrization of the metadata but also fully configurable blueprints, PowerPoint reports, charts and queries. The SOA blueprints The solution comes with a set of predefined architectural representations that help the organization better perceive their SOA landscape. More blueprints can be easily created in order to accommodate the organizations needs in terms of detail, audience and metadata. Charts & Dashboards The solution encompasses a set of predefined charts and dashboards that promote a more agile way to control and explore the assets. Time Based Visualization All representations are time bound, and with EAMS - OER you can truly govern SOA with a complete view of the Past, Present and Future; The solution delivers Gap Analysis, a project oriented approach while taking into consideration the As-Was, As-Is an To-Be. Time based visualization differentiating factors: Extensive automation and maintenance of architectural representations Organization wide solution. Easy access and navigation to and between all architectural artifacts and representations. 
Flexible meta-model, customization and extensibility capabilities. Lifecycle management and enforcement of the time dimension over all the repository content. Profile based customization. Comprehensive visibility Architectural alignment Friendly and striking user interfaces For more information on EAMS visit us here. For more information on SOA visit us here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: Link Consulting,OER,OSR,SOA Governance,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Oracle R Distribution 3.1.1 Released

    - by Sherry LaMonica-Oracle
    Oracle R Distribution version 3.1.1 has been released to Oracle's public yum today. R-3.1.1 (code name "Sock it to Me") is an update to R-3.1.0 that consists mainly of bug fixes. It also includes enhancements related to accessing package help files, improved accuracy when importing data with large integers, and better integration with RStudio graphics. The full list of new features and bug fixes is listed in the NEWS file.To install Oracle R Distribution using yum, follow the instructions in the Oracle R Enterprise Installation and Administration Guide.Installing using yum will resolve any operating system dependencies automatically. As such, we recommend using yum to install Oracle R Distribution. However, if yum is not available, you can install Oracle R Distribution RPMs directly using RPM commands.For Oracle Linux 5, the Oracle R Distribution RPMs are available in the Enterprise Linux Add-Ons repository:  R-3.1.1-1.el5.x86_64.rpm   R-core-3.1.1-1.el5.x86_64.rpm  R-devel-3.1.1-1.el5.x86_64.rpm  libRmath-3.1.1-1.el5.x86_64.rpm  libRmath-devel-3.1.1-1.el5.x86_64.rpm  libRmath-static-3.1.1-1.el5.x86_64.rpm For Oracle Linux 6, the Oracle R Distribution RPMs are available in the Oracle Linux Add-Ons repository:  R-3.1.1-1.el6.x86_64.rpm  R-core-3.1.1-1.el6.x86_64.rpm  R-devel-3.1.1-1.el6.x86_64.rpm  libRmath-3.1.1-1.el6.x86_64.rpm  libRmath-devel-3.1.1-1.el6.x86_64.rpm  libRmath-static-3.1.1-1.el6.x86_64.rpmFor example, this command installs the R 3.1.1 RPM on Oracle Linux x86-64 version 6:  rpm -i R-3.1.1-1.el6.x86_64.rpm To complete the Oracle R Distribution 3.1.1 installation, repeat this command for each of the 6 RPMs, resolving dependencies as required. Oracle R Distribution 3.1.1 is not yet officially certified with Oracle R Enterprise. Refer to Table 1-2 in the Oracle R Enterprise Installation Guide for supported configurations of Oracle R Enterprise components, or check this blog for updates. The Oracle R Distribution 3.1.1 binaries for Windows, AIX, Solaris SPARC and Solaris x86 will be available on OSS, Oracle's Open Source Software portal, in the coming weeks.
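
    As a hedged illustration of the two install paths described above (the repository id is an assumption -- check the .repo files under /etc/yum.repos.d/ for the exact name of the Add-Ons channel on your system):

        # yum path: dependencies are resolved automatically
        yum --enablerepo=ol6_addons install R

        # direct rpm path: passing all six RPMs in a single transaction lets rpm
        # resolve the dependencies among them, instead of repeating rpm -i per file
        rpm -ivh R-3.1.1-1.el6.x86_64.rpm R-core-3.1.1-1.el6.x86_64.rpm R-devel-3.1.1-1.el6.x86_64.rpm \
            libRmath-3.1.1-1.el6.x86_64.rpm libRmath-devel-3.1.1-1.el6.x86_64.rpm libRmath-static-3.1.1-1.el6.x86_64.rpm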

    Read the article

  • PcLinuxOs demands I use only one repository at a time. Is it right?

    - by m33600
    I come to your presence with this question that is paralyzing my coding efforts. PCLinuxOS was my distro of choice for reliability, but it is jealous and does not permit me to add repos from, say, Debian. The wiki clearly advises using just one repo, and I end up not finding what I used to find on normal Debians. Multimon, the audio decoder, for example (my other question), is not there. When I try to install multimon with hammer and pliers, it returns errors of all kinds. Is there a way to safely and temporarily add a repository, do the install, and remove the repo, returning PCLinuxOS to its stable state?

    Read the article

  • How to Install Python 2.6 on Fedora 8?

    - by Apreche
    I don't want to use Fedora 8. I would be very happy to use the newest version, but there is no choice. My problem is that 8 comes with python 2.5. I am trying to upgrade it to 2.6, but with no luck. The only caveat is that I don't want to just install directly from source. I want to do it through the package manager using an rpm. I have tried building my own rpm from source using rpmbuild. I have tried using src rpms from newer versions of Fedora. I've tried these CentOS instructions. Nothing seems to actually result in an rpm file that installs successfully. I have also tried extensive Google searching, and have been unsurprisingly unable to find any rpms that work, or working instructions to build my own rpm.

    Read the article

  • download all RPMs from a metapackage

    - by Joey BagODonuts
    An example of what I'm trying to do: # yumdownloader net-snmp.x86_64 --source Enabling epel-source repository epel-source | 2.9 kB 00:00 No source RPM found for 1:net-snmp-5.3.2.2-17.el5_8.1.x86_64 No source RPM found for 1:net-snmp-5.3.2.2-17.el5.x86_64 Nothing to download How do you download all the RPMs inside a metapackage?
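
    Two hedged options, both provided by the yum-utils package (reusing the package name from the example above):

        yumdownloader --resolve net-snmp.x86_64   # the named package plus everything it depends on
        repotrack -p . net-snmp                   # the full recursive dependency closure

    Note that --source fetches .src.rpm files rather than installable packages; for pulling down binary RPMs and their dependencies, --resolve is the relevant switch.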

    Read the article

  • dovecot rhel 5 installation fails because of newer libraries

    - by kayhan yüksel
    To whom it may concern: we are trying to install dovecot (dovecot-2.2.10-1_14.el5.x86_64) on a RHEL 5.4 server and we get the error: [root@asgfkm /]# rpm -i dovecot-2.1.17-0_136.el5.x86_64.rpm uyarı: dovecot-2.1.17-0_136.el5.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 66534c2b: NOKEY hata: Failed dependencies: libcrypto.so.6()(64bit) is needed by dovecot-1:2.1.17-0_136.el5.x86_64 libldap-2.3.so.0()(64bit) is needed by dovecot-1:2.1.17-0_136.el5.x86_64 libmysqlclient.so.15()(64bit) is needed by dovecot-1:2.1.17-0_136.el5.x86_64 libmysqlclient.so.15(libmysqlclient_15)(64bit) is needed by dovecot-1:2.1.17-0_136.el5.x86_64 libssl.so.6()(64bit) is needed by dovecot-1:2.1.17-0_136.el5.x86_64 [root@asgfkm /]# But when we try to install the requested libraries, they conflict with the newer libraries: uyarı: openssl-0.9.8e-27.el5_10.1.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID e8562897: NOKEY openssl-1.0.0-20.el6.x86_64 paketi zaten yüklü (openssl-0.9.8e-27.el5_10.1.x86_64 sürümünden daha yeni) [the Turkish output says the openssl-1.0.0-20.el6.x86_64 package is already installed and is newer than openssl-0.9.8e-27.el5_10.1.x86_64]. This is happening with the other libraries also: libldap, libmysql, etc. Do you recommend the --force option to install it, or is there any other proper way around this? Thank you for your time.

    Read the article

  • nginx PPA does not work?

    - by Peter Smit
    I want to use the newest version of nginx, so I wanted to add the nginx/stable PPA: sudo add-apt-repository ppa:nginx/stable sudo apt-get update However, the upgrade command says that there are no upgrades available and nginx is still the old version. Did I do something wrong? I use Ubuntu Server 10.04 Lucid. add-apt-repository output: $ sudo apt-add-repository ppa:nginx/stable Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv 8B3981E7A6852F782CC4951600A6F0A3C300EE8C gpg: requesting key C300EE8C from hkp server keyserver.ubuntu.com gpg: key C300EE8C: "Launchpad Stable" not changed gpg: Total number processed: 1 gpg: unchanged: 1 apt-cache policy output: $ sudo apt-cache policy nginx nginx: Installed: 0.7.65-1ubuntu2 Candidate: 0.7.65-1ubuntu2 Version table: *** 0.7.65-1ubuntu2 0 500 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu/ lucid/universe Packages 100 /var/lib/dpkg/status

    Read the article

  • Architecting multi-model multi-DB ASP.NET MVC solution

    - by A. Murray
    I have an ASP.NET MVC 4 solution that I'm putting together, leveraging IoC and the repository pattern using Entity Framework 5. I have a new requirement to be able to pull data from a second database (from another internal application) which I don't have control over. There is no API available unfortunately for the second application and the general pattern at my place of work is to go direct to the database. I want to maintain a consistent approach to modeling the domain and use entity framework to pull the data out, so thus far I have used Entity Framework's database first approach to generate a domain model and database context over the top of this. However, I've become a little stuck on how to include the second domain model in the application. I have a generic repository which I've now moved out to a common DataAccess project, but short of creating two distinct wrappers for the generic repository (so each can identify with a specific database context), I'm struggling to see how I can elegantly include multiple models?

    Read the article

  • Instructions on Installing the FreeNX server on Ubuntu Karmic (9.10) are incorrect

    - by Bob Free
    How does one fix (or report an error) on an 'immutable' help.ubuntu.community wiki page? I am a registered member of the wiki. On http://help.ubuntu.com/community/FreeNX under the section: Installing the FreeNX server on Ubuntu Karmic (9.10) and higher, step #2, bjorn-nostvold erroneously changed on 2013-01-24 "add-apt-repository" to "apt-add-repository". The original "add-apt-repository" is correct - at least on my Karmic (9.10) version of Ubuntu. By the number of confused people I've found via google search, this seems to be throwing off a lot of people, myself included.

    Read the article

  • How to remove corrupted repositories ?

    - by istimsak abdulbasir
    I was in the process of updating 11.04 and came across an error message saying: Reading package lists... Error! E: Encountered a section with no Package: header E: Problem with MergeList /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_natty-security_main_i18n_Translation-en E: The package lists or status file could not be parsed or opened. I tried to remove the damaged repository by going to Ubuntu Software Center. There was no option for repository removal. Then I tried Synaptic; however, I got the same error message stated above. I cannot find Software Sources in 11.04. How do I remove the repository from the command line, since that seems to be my only option?
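
    A commonly cited recovery sketch for this particular MergeList error (it clears only apt's cached index files, not the sources.list configuration, so no repository has to be removed first):

        sudo rm -rf /var/lib/apt/lists/*
        sudo apt-get update

    Once the update completes, the corrupted Translation-en file is re-downloaded and the package lists can be parsed again.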

    Read the article

  • Is it possible to install all packages from an APT repository?

    - by Kristoffer Hagen
    Is it possible to install all packages from an APT repository? I know it is possible to do it manually, but then you would need to know all the package names, and I don't. Any suggestions? Thanks. Update: Well, you guys are going to kill me for this, but the reason for my madness is that I want to install all the packages from BackTrack into my Ubuntu installation. I really don't like the idea of having it in a VM, and having a separate partition for it is even more out of the question. I know that the folks at BackTrack don't like it when people leech their repositories, but that's what you get for releasing open source software. Stupid? Maybe. A valid reason? Probably not. Do I still want it? Yes. Another edit: I have now given up on this, as it seems impossible to get it to work even by manually installing packages.
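
    For reference, one hedged sketch (the list-file name is a placeholder -- pick the *_Packages file under /var/lib/apt/lists/ that matches the repository) is to read the package names straight out of the repository's index and feed them to apt-get:

        awk '/^Package:/ {print $2}' /var/lib/apt/lists/REPOSITORY_dists_lucid_main_binary-amd64_Packages \
            | sort -u | xargs sudo apt-get install -y

    Expect this to abort wholesale if any two packages in the repository conflict with each other, which may well be why the manual route failed too.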

    Read the article

  • How can I use apt-get to resolve package dependencies when there are multiple versions in the repository?

    - by user1165144
    I've packaged a-package.deb, which depends on b-package.deb in version 1.0. Everything works fine. But now a b-package in version 1.1 gets added to the repository. I'd expect apt-get to install a-package and version 1.0 of b-package. What really happens is that a-package won't get installed: # apt-get install a-package Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: a-package : Depends: b-package (= 1.0) but 1.1 is to be installed E: Unable to correct problems, you have held broken packages. Is there a workaround to fix this behavior? Is there other software to use that can handle the dependencies as defined?
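
    As a hedged client-side workaround, apt-get accepts an explicit version for the dependency in the same transaction, which satisfies the strict "= 1.0" relationship:

        sudo apt-get install a-package b-package=1.0

    The longer-term fix is in the packaging itself: relaxing a-package's Depends line from "b-package (= 1.0)" to "b-package (>= 1.0)" when it is rebuilt, so that newer b-package releases remain installable.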

    Read the article

  • Using ISO Image with a Local Repository for updating Exadata Compute nodes

    - by Rene Kundersma
    For systems that cannot connect directly to Oracle ULN to build a local repository an ISO image file is made available by Oracle. This ISO image can be mounted and used as a local repository. The ISO image contains a file system that contains only the latest (x86_64) ULN channel and cannot be used to update the database servers to any other release than release 11.2.3.1.1. ISO and instructions can be found here  Rene Kundersma
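
    As a rough, hedged illustration only (mount point, file name and repository id are placeholders; the linked instructions are authoritative for Exadata database servers), mounting the ISO and pointing a yum repo definition at it looks like this:

        mount -o loop /u01/stage/exadata_channel.iso /mnt/iso

        # /etc/yum.repos.d/local-iso.repo
        [local-iso]
        name=Local ISO channel
        baseurl=file:///mnt/iso
        gpgcheck=0
        enabled=1

        yum clean all && yum repolist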

    Read the article

  • Use the repository pattern when using PLINQO generated data?

    - by Chad
    I'm "upgrading" an MVC app. Previously, the DAL was a part of the Model, as a series of repositories (based on the entity name) using standard LINQ to SQL queries. Now, it's a separate project and is generated using PLINQO. Since PLINQO generates query extensions based on the properties of the entity, I started using them directly in my controller... and eliminated the repositories all together. It's working fine, this is more a question to draw upon your experience, should I continue down this path or should I rebuild the repositories (using PLINQO as the DAL within the repository files)? One benefit of just using the PLINQO generated data context is that when I need DB access, I just make one reference to the the data context. Under the repository pattern, I had to reference each repository when I needed data access, sometimes needing to reference multiple repositories on a single controller. The big benefit I saw on the repositories, were aptly named query methods (i.e. FindAllProductsByCategoryId(int id), etc...). With the PLINQO code, it's _db.Product.ByCatId(int id) - which isn't too bad either. I like both, but where it gets "harrier" is when the query uses predicates. I can roll that up into the repository query method. But on the PLINQO code, it would be something like _db.Product.Where(x = x.CatId == 1 && x.OrderId == 1); I'm not so sure I like having code like that in my controllers. Whats your take on this?

    Read the article

  • MVVM: where is the best place to integrate an id counter, in ViewModel or Repository or... ?

    - by msfanboy
    Hello, I am using http://loungerepo.codeplex.com/. This library needs a unique id when I persist my entities to the repository. I decided on an integer, not a Guid. The question now is where do I retrieve a new integer, and how do I do it? This is my current approach in SchoolclassAdministrationViewModel.cs: public SchoolclassAdministrationViewModel() { _schoolclassRepo = new SchoolclassRepository(); _pupilRepo = new PupilRepository(); _subjectRepo = new SubjectRepository(); _currentSchoolclass = new SchoolclassModel(); _currentPupil = new PupilModel(); _currentSubject = new SubjectModel(); ... } private void AddSchoolclass() { // get the last free id for a schoolclass entity _currentSchoolclass.SchoolclassID = _schoolclassRepo.LastID; // add the new schoolclass entity to the repository _schoolclassRepo.Add(SchoolclassModel.SchoolclassModelToSchoolclass(_currentSchoolclass)); // add the new schoolclass entity to the ObservableCollection bound to the View Schoolclasses.Add(_currentSchoolclass); // Create a new schoolclass entity and the bound UI controls content gets cleaned CurrentSchoolclass = new SchoolclassModel(); } public class SchoolclassRepository : IRepository<Schoolclass> { private int _lastID; public SchoolclassRepository() { _lastID = FetchLastId(); } public void Add(Schoolclass entity) { //repo.Store(entity); } private int FetchLastId() { return // repo.GetIDOfLastEntryAndDoInc++ } public int LastID { get { return _lastID; } } } Explanation: Every time the user switches to the SchoolclassAdministrationViewModel, which is datatemplated with a UserControl, the viewmodel constructor is called and the schoolclass repository is created, wherein FetchLastId() is called and I am up to date with the last ID, doing an inc++ on it to get the free one... Do you have any better ideas? What I do not like about my current approach: - Having a private method in the repository class, because a repository is there to fetch data only, not "entity logic" like the counter - Having to access a public property located in the repository from the ViewModel; it's not the ViewModel's concern to get an entity id and assign it. Actually the ViewModel should ask for a schoolclass POCO and get a SchoolclassModel to bind to the UI. But then I have to re-read the Schoolclass properties into the SchoolclassModel properties again, which is what I want to avoid.

    Read the article

  • Problem Installing older TestNG plugin on Eclipse 3.5

    - by Stefan
    I'm trying to install TestNG 5.11 on eclipse 3.5 and gettign the following. eclipse.buildId=unknown java.version=1.6.0_19 java.vendor=Sun Microsystems Inc. BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=no_NO Framework arguments: -product org.eclipse.epp.package.jee.product Command-line arguments: -os win32 -ws win32 -arch x86 -product org.eclipse.epp.package.jee.product Error Mon Jun 07 15:45:53 CEST 2010 Artifact not found: org.eclipse.update.feature,org.testng.eclipse,5.11.0.28. java.io.FileNotFoundException: "http://beust.com/eclipse/features/org.testng.eclipse_5.11.0.28.jar" at org.eclipse.equinox.internal.p2.repository.RepositoryStatusHelper.checkFileNotFound(RepositoryStatusHelper.java:289) at org.eclipse.equinox.internal.p2.repository.FileReader.checkException(FileReader.java:352) at org.eclipse.equinox.internal.p2.repository.FileReader.sendRetrieveRequest(FileReader.java:326) at org.eclipse.equinox.internal.p2.repository.FileReader.readInto(FileReader.java:263) at org.eclipse.equinox.internal.p2.repository.RepositoryTransport.download(RepositoryTransport.java:71) at org.eclipse.equinox.internal.p2.repository.RepositoryTransport.download(RepositoryTransport.java:127) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.downloadArtifact(SimpleArtifactRepository.java:468) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.downloadArtifact(SimpleArtifactRepository.java:451) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.getArtifact(SimpleArtifactRepository.java:518) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.getArtifact(MirrorRequest.java:200) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.transferSingle(MirrorRequest.java:175) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.transfer(MirrorRequest.java:159) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.perform(MirrorRequest.java:95) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.getArtifact(SimpleArtifactRepository.java:507) at org.eclipse.equinox.internal.p2.artifact.repository.simple.DownloadJob.run(DownloadJob.java:64) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55) Error Mon Jun 07 15:45:53 CEST 2010 Artifact not found: osgi.bundle,org.testng.eclipse,5.11.0.28. 
java.io.FileNotFoundException: "http://beust.com/eclipse/plugins/org.testng.eclipse_5.11.0.28.jar" at org.eclipse.equinox.internal.p2.repository.RepositoryStatusHelper.checkFileNotFound(RepositoryStatusHelper.java:289) at org.eclipse.equinox.internal.p2.repository.FileReader.checkException(FileReader.java:352) at org.eclipse.equinox.internal.p2.repository.FileReader.sendRetrieveRequest(FileReader.java:326) at org.eclipse.equinox.internal.p2.repository.FileReader.readInto(FileReader.java:263) at org.eclipse.equinox.internal.p2.repository.RepositoryTransport.download(RepositoryTransport.java:71) at org.eclipse.equinox.internal.p2.repository.RepositoryTransport.download(RepositoryTransport.java:127) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.downloadArtifact(SimpleArtifactRepository.java:468) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.downloadArtifact(SimpleArtifactRepository.java:451) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.getArtifact(SimpleArtifactRepository.java:518) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.getArtifact(MirrorRequest.java:200) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.transferSingle(MirrorRequest.java:175) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.transfer(MirrorRequest.java:159) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.perform(MirrorRequest.java:95) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.getArtifact(SimpleArtifactRepository.java:507) at org.eclipse.equinox.internal.p2.artifact.repository.simple.DownloadJob.run(DownloadJob.java:64) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55) Error Mon Jun 07 15:45:53 CEST 2010 session context was:(profile=epp.package.jee, phase=org.eclipse.equinox.internal.provisional.p2.engine.phases.Collect, operand=, action=). I'm kinda stuck so I would really like help. Thanks

    Read the article

  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is related to the storage management due the high volumes of the unstoppable growing of information. Even if you have storage appliances and a lot of terabytes, thinks like backup, compression, deduplication, storage relocation, encryption, availability could be a nightmare. One standard option that you have with the Oracle WebCenter Content is to store data to the database. And the Oracle Database allows you leverage features like compression, deduplication, encryption and seamless backup. But with a huge volume, the challenge is passed to the DBA to keep the WebCenter Content Database up and running. One solution is the use of DB partitions for your content repository, but what are the implications of this? Can I fit this with my business requirements? Well, yes. It’s up to you how you will manage that, you just need a good plan. During you “storage brainstorm plan” take in your mind what you need, such as storage petabytes of documents? You need everything on-line? There’s a way to logically separate the “good content” from the “legacy content”? The first thing that comes to my mind is to use the creation date of the document, but you need to remember that this document could receive a lot of revisions and maybe you can consider the revision creation date. Your plan can have also complex rules like per Document Type or per a custom metadata like department or an hybrid per date, per DocType and an specific virtual folder. Extrapolation the use, you can have your repository distributed in different servers, different disks, different disk types (Such as ssds, sas, sata, tape,…), separated accordingly your business requirements, separating the “hot” content from the legacy and easily matching your compliance requirements. If you think to use by revision, the simple way is to consider the dId, that is the sequential unique id for every content created using the WebCenter Content or the dLastModified that is the date field of the FileStorage table that contains the date of inclusion of the content to the DB Table using SecureFiles. Using the scenario of partitioned repository using an hierarchical separation by date, we will transform the FileStorage table in an partitioned table using  “Partition by Range” of the dLastModified column (You can use the dId or a join with other tables for other metadata such as dDocType, Security, etc…). The test scenario bellow covers: Previous existent data on the JDBC Storage to be migrated to the new partitioned JDBC Storage Partition by Date Automatically generation of new partitions based on a pre-defined interval (Available only with Oracle Database 11g+) Deduplication and Compression for legacy data Oracle WebCenter Content 11g PS5 (Could present some customizations that do not affect the test scenario) For the test case you need some data stored using JDBC Storage to be the “legacy” data. If you do not have done before, just create an Storage rule pointed to the JDBC Storage: Enable the metadata StorageRule in the UI and upload some documents using this rule. For this test case you can run using the schema owner or an dba user. We will use the schema owner TESTS_OCS. I can’t forgot to tell that this is just a test and you should do a proper backup of your environment. When you use the schema owner, you need some privileges, using the dba user grant the privileges needed: REM Grant privileges required for online redefinition. 
GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS; GRANT ALTER ANY TABLE TO TESTS_OCS; GRANT DROP ANY TABLE TO TESTS_OCS; GRANT LOCK ANY TABLE TO TESTS_OCS; GRANT CREATE ANY TABLE TO TESTS_OCS; GRANT SELECT ANY TABLE TO TESTS_OCS; REM Privileges required to perform cloning of dependent objects. GRANT CREATE ANY TRIGGER TO TESTS_OCS; GRANT CREATE ANY INDEX TO TESTS_OCS; In our test scenario we will separate the content as Legacy, Day1, Day2, Day3 and Future. This last one will partitioned automatically using 3 tablespaces in a round robin mode. In a real scenario the partition rule could be per month, per year or any rule that you choose. Table spaces for the test scenario: CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A 'tests_ocs_part_round_robin_a.dat' DATAFILE SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B 'tests_ocs_part_round_robin_b.dat' DATAFILE SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C 'tests_ocs_part_round_robin_c.dat' DATAFILE SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; Before start, gather optimizer statistics on the actual FileStorage table: EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE); Now check if is possible execute the redefinition process: EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage',DBMS_REDEFINITION.CONS_USE_PK); If no errors messages, you are good to go. Create a Partitioned Interim FileStorage table. 
You need to create a new table with the partition information to act as an interim table: CREATE TABLE FILESTORAGE_Part ( DID NUMBER(*,0) NOT NULL ENABLE, DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE, DLASTMODIFIED TIMESTAMP (6), DFILESIZE NUMBER(*,0), DISDELETED VARCHAR2(1 CHAR), BFILEDATA BLOB ) LOB (BFILEDATA) STORE AS SECUREFILE ( ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS ) PARTITION BY RANGE (DLASTMODIFIED) INTERVAL (NUMTODSINTERVAL(1,'DAY')) STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C) ( PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_LEGACY LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_LEGACY RETENTION NONE DEDUPLICATE COMPRESS HIGH ), PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY1 LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY1 RETENTION AUTO KEEP_DUPLICATES COMPRESS ), PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY2 LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY2 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS ), PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY3 LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY3 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS ) ); After the creation you should see your partitions defined. Note that only the fixed range partitions have been created, none of the interval partition have been created. Start the redefinition process: BEGIN DBMS_REDEFINITION.START_REDEF_TABLE( uname => 'TESTS_OCS' ,orig_table => 'FileStorage' ,int_table => 'FileStorage_PART' ,col_mapping => NULL ,options_flag => DBMS_REDEFINITION.CONS_USE_PK ); END; This operation can take some time to complete, depending how many contents that you have and on the size of the table. Using the DBA user you can check the progress with this command: SELECT * FROM v$sesstat WHERE sid = 1; Copy dependent objects: DECLARE redefinition_errors PLS_INTEGER := 0; BEGIN DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS( uname => 'TESTS_OCS' ,orig_table => 'FileStorage' ,int_table => 'FileStorage_PART' ,copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS ,copy_triggers => TRUE ,copy_constraints => TRUE ,copy_privileges => TRUE ,ignore_errors => TRUE ,num_errors => redefinition_errors ,copy_statistics => FALSE ,copy_mvlog => FALSE ); IF (redefinition_errors > 0) THEN DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: ' || TO_CHAR(redefinition_errors)); END IF; END; With the DBA user, verify that there's no errors: SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS; *Note that will show 2 lines related to the constrains, this is expected. 
Synchronize the interim table FileStorage_PART: BEGIN DBMS_REDEFINITION.SYNC_INTERIM_TABLE( uname => 'TESTS_OCS', orig_table => 'FileStorage', int_table => 'FileStorage_PART'); END; Gather statistics on the new table: EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE); Complete the redefinition: BEGIN DBMS_REDEFINITION.FINISH_REDEF_TABLE( uname => 'TESTS_OCS', orig_table => 'FileStorage', int_table => 'FileStorage_PART'); END; During the execution the FileStorage table is locked in exclusive mode until finish the operation. After the last command the FileStorage table is partitioned. If you have contents out of the range partition, you should see the new partitions created automatically, not generating an error if you “forgot” to create all the future ranges. You will see something like: You now can drop the FileStorage_PART table: DROP TABLE FileStorage_PART PURGE; To check the FileStorage table is valid and is partitioned, use the command: SELECT num_rows,partitioned FROM user_tables WHERE table_name = 'FILESTORAGE'; You can list the contents of the FileStorage table in a specific partition, per example: SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY) Some useful commands that you can use to check the partitions, note that you need to run using a DBA user: SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE'; SELECT * FROM DBA_TABLESPACES WHERE tablespace_name like 'TESTS_OCS%'; After the redefinition process complete you have a new FileStorage table storing all content that has the Storage rule pointed to the JDBC Storage and partitioned using the rule set during the creation of the temporary interim FileStorage_PART table. At this point you can test the WebCenter Content downloading the documents (Original and Renditions). Note that the content could be already in the cache area, take a look in the weblayout directory to see if a file with the same id is there, then click on the web rendition of your test file and see if have created the file and you can open, this means that is all working. The redefinition process can be repeated many times, this allow you test what the better layout, over and over again. Now some interesting maintenance actions related to the partitions: Make an tablespace read only. No issues viewing, the WebCenter Content do not alter the revisions When try to delete an content that is part of an read only tablespace, an error will occurs and the document will not be deleted The only way to prevent errors today is creating an custom component that checks the partitions and if you have an document in an “Read Only” repository, execute the deletion process of the metadata and mark the document to be deleted on the next db maintenance, like a new redefinition. Take an tablespace off-line for archiving purposes or any other reason.
When you try open an document that is included in this tablespace will receive an error that was unable to retrieve the content, but the others online tablespaces are not affected. Same behavior when deleting documents. Again, an custom component is the solution. If you have an document “out of range”, the component can show an message that the repository for that document is offline. This can be extended to a option to the user to request to put online again. Moving some legacy content to an offline repository (table) using the Exchange option to move the content from one partition to a empty nonpartitioned table like FileStorage_LEGACY. Note that this option will remove the registers from the FileStorage and will not be able to open the stored content. You always need to keep in mind the indexes and constrains. An redefinition separating the original content (vault) from the renditions and separate by date ate the same time. This could be an option for DAM environments that want to have an special place for the renditions and put the original files in a storage with less performance. The process will be the same, you just need to change the script of the interim table to use composite partitioning. Will be something like: CREATE TABLE FILESTORAGE_RenditionPart ( DID NUMBER(*,0) NOT NULL ENABLE, DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE, DLASTMODIFIED TIMESTAMP (6), DFILESIZE NUMBER(*,0), DISDELETED VARCHAR2(1 CHAR), BFILEDATA BLOB ) LOB (BFILEDATA) STORE AS SECUREFILE ( ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS ) PARTITION BY LIST (DRENDITIONID) SUBPARTITION BY RANGE (DLASTMODIFIED) ( PARTITION Vault VALUES ('primaryFile') ( SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE) ) ,PARTITION WebLayout VALUES ('webViewableFile') ( SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE) ) ,PARTITION Special VALUES ('Special') ( SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN 
(TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE) ) )ENABLE ROW MOVEMENT; The next post related to partitioned repository will come with an sample component to handle the possible exceptions when you need to take off line an tablespace/partition or move to another place. Also, we can include some integration to the Retention Management and Records Management. Another subject related to partitioning is the ability to create an FileStore Provider pointed to a different database, raising the level of the distributed storage vs. performance. Let us know if this is important to you or you have an use case not listed, leave a comment. Cross-posted on the blog.ContentrA.com

    Read the article

  • Asp.net Mvc 2: Repository, Paging, and Filtering how to?

    - by Dr. Zim
    It makes sense to pass a filter object to the repository so it can limit what records return: var myFilterObject = myFilterFactory.GetBlank(); myFilterObject.AddFilter( new Filter { "transmission", "eq", "Automatic"} ); var myCars = myRepository.GetCars(myfilterObject); Key question: how would you implement paging and where? Any links on how to return a LazyList from a Repository as it would apply here? Would this be part of the filter object? Something like: myFilterObject.AddFilter( new Filter { "StartAtRecord", "eq", "45"} ); myFilterObject.AddFilter( new Filter { "GetQuantity", "eq", "15"} ); var myCars = myRepository.GetCars(myfilterObject); I assume the repository must implement filtering, otherwise you would get all records.

    Read the article
