Search Results

Search found 14037 results on 562 pages for 'alter index'.


  • [Disallow: /index.php] seems to block /my-beautiful-sef-url-123

    - by Jaroslav Záruba
Hello, I have a robots.txt that looks like this: User-agent: * Disallow: /system/ Disallow: /admin/ Disallow: /index.php The obvious goal has been to prevent all the ugly URLs from being indexed, as they all begin with "/index.php". But for some reason all URLs like /my-beautiful-sef-url-123 are listed under Crawl errors in Google Webmaster Tools with "URL restricted by robots.txt". (When I test such a URL it yields Allowed for both Googlebot and Googlebot-Mobile.) Can anyone help please?
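
    One hedged workaround, assuming the crawler honors the non-standard Allow directive (Googlebot does), is to state the allowance explicitly so the prefix rules cannot be misread; the paths below are just the ones from the question:

        User-agent: *
        Disallow: /system/
        Disallow: /admin/
        Disallow: /index.php
        # explicit catch-all allowance; the longer, more specific rules such as
        # /index.php still win for the ugly URLs
        Allow: /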

    Read the article

  • MySQL Non Index Queries Analysis

    - by Markii
I'm using the log-queries-not-using-indexes option, but it also logs queries that do use indexes and are just more complex or use IFs. Is there a parser or a program out there that can analyze the log and give me a literal output saying "table.column should be an index"? Thanks
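
    For reference, the server option in question is log_queries_not_using_indexes; a minimal my.cnf sketch (option names assume MySQL 5.1+, and min_examined_row_limit is optional noise reduction):

        [mysqld]
        slow_query_log                = 1
        slow_query_log_file           = /var/log/mysql/mysql-slow.log
        long_query_time               = 1
        log_queries_not_using_indexes = 1
        # skip queries that examine fewer than this many rows, which keeps
        # small index-assisted lookups out of the log
        min_examined_row_limit        = 1000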

    Read the article

  • Parameter index is out of range

    - by czuroski
    Hello, I am getting the following error when trying to update an object using nhibernate. I am attempting to update a field which is a foreign key. Any thoughts why I might be getting this error? I can't figure it out from that error and my log4net log doesn't give any hints either. Thanks System.IndexOutOfRangeException was unhandled by user code Message="Parameter index is out of range." Source="MySql.Data" StackTrace: at MySql.Data.MySqlClient.MySqlParameterCollection.CheckIndex(Int32 index) at MySql.Data.MySqlClient.MySqlParameterCollection.GetParameter(Int32 index) at System.Data.Common.DbParameterCollection.System.Collections.IList.get_Item(Int32 index) at NHibernate.Type.Int32Type.Set(IDbCommand rs, Object value, Int32 index) at NHibernate.Type.NullableType.NullSafeSet(IDbCommand cmd, Object value, Int32 index) at NHibernate.Type.NullableType.NullSafeSet(IDbCommand st, Object value, Int32 index, ISessionImplementor session) at NHibernate.Persister.Entity.AbstractEntityPersister.Dehydrate(Object id, Object[] fields, Object rowId, Boolean[] includeProperty, Boolean[][] includeColumns, Int32 table, IDbCommand statement, ISessionImplementor session, Int32 index) at NHibernate.Persister.Entity.AbstractEntityPersister.Update(Object id, Object[] fields, Object[] oldFields, Object rowId, Boolean[] includeProperty, Int32 j, Object oldVersion, Object obj, SqlCommandInfo sql, ISessionImplementor session) at NHibernate.Persister.Entity.AbstractEntityPersister.UpdateOrInsert(Object id, Object[] fields, Object[] oldFields, Object rowId, Boolean[] includeProperty, Int32 j, Object oldVersion, Object obj, SqlCommandInfo sql, ISessionImplementor session) at NHibernate.Persister.Entity.AbstractEntityPersister.Update(Object id, Object[] fields, Int32[] dirtyFields, Boolean hasDirtyCollection, Object[] oldFields, Object oldVersion, Object obj, Object rowId, ISessionImplementor session) at NHibernate.Action.EntityUpdateAction.Execute() at NHibernate.Engine.ActionQueue.Execute(IExecutable executable) at NHibernate.Engine.ActionQueue.ExecuteActions(IList list) at NHibernate.Engine.ActionQueue.ExecuteActions() at NHibernate.Event.Default.AbstractFlushingEventListener.PerformExecutions(IEventSource session) at NHibernate.Event.Default.DefaultFlushEventListener.OnFlush(FlushEvent event) at NHibernate.Impl.SessionImpl.Flush() at NHibernate.Transaction.AdoTransaction.Commit() at DataAccessLayer.NHibernateDataProvider.UpdateItem_temp(items_temp item_temp) in C:\Documents and Settings\user\My Documents\Visual Studio 2008\Projects\mySolution\DataAccessLayer\NHibernateDataProvider.cs:line 225 at InventoryDataClean.Controllers.ImportController.Edit(Int32 id, FormCollection formValues) in C:\Documents and Settings\user\My Documents\Visual Studio 2008\Projects\mySolution\InventoryDataClean\Controllers\ImportController.cs:line 101 at lambda_method(ExecutionScope , ControllerBase , Object[] ) at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters) at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters) at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters) at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClassa.<InvokeActionMethodWithFilters>b__7() at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation) InnerException: Here 
is my item mapping - <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="DataTransfer" namespace="DataTransfer"> <class name="DataTransfer.items_temp, DataTransfer" table="items_temp"> <id name="id" unsaved-value="any" > <generator class="assigned"/> </id> <property name="assetid"/> <property name="description"/> <property name="caretaker"/> <property name="category"/> <property name="status" /> <property name="vendor" /> <many-to-one name="statusName" class="status" column="status" /> </class> </hibernate-mapping> Here is my status mapping - <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="DataTransfer" namespace="DataTransfer"> <class name="DataTransfer.status, DataTransfer" table="status"> <id name="id" unsaved-value="0"> <generator class="assigned"/> </id> <property name="name"/> <property name="def"/> </class> </hibernate-mapping> and here is my update function - public void UpdateItem_temp(items_temp item_temp) { ITransaction t = _session.BeginTransaction(); try { _session.SaveOrUpdate(item_temp); t.Commit(); } catch (Exception) { t.Rollback(); throw; } finally { t.Dispose(); } }
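
    One thing worth checking, as a hedged guess rather than a confirmed diagnosis: the status column is mapped twice in items_temp (once as a plain property and once through the many-to-one), which can make NHibernate bind one more parameter than the generated UPDATE statement declares. A sketch of a mapping that keeps both members but lets only the association write the column:

        <property name="status" insert="false" update="false" />
        <many-to-one name="statusName" class="status" column="status" />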

    Read the article

  • SQL SERVER – Update Statistics are Sampled By Default

    - by pinaldave
After reading my earlier post SQL SERVER – Create Primary Key with Specific Name when Creating Table on Statistics, I have received another question from a blog reader. The question is as follows: Question: Are the statistics sampled by default? Answer: Yes. The sampling rate can be specified by the user and it can be anywhere from a very low value up to 100%. Let us do a small experiment to verify whether auto update of statistics is left on, and whether the statistics created by default on a very large table are sampled or not. USE [AdventureWorks] GO -- Create Table CREATE TABLE [dbo].[StatsTest]( [ID] [int] IDENTITY(1,1) NOT NULL, [FirstName] [varchar](100) NULL, [LastName] [varchar](100) NULL, [City] [varchar](100) NULL, CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC) ) ON [PRIMARY] GO -- Insert 1 Million Rows INSERT INTO [dbo].[StatsTest] (FirstName,LastName,City) SELECT TOP 1000000 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Update the statistics UPDATE STATISTICS [dbo].[StatsTest] GO -- Show the statistics DBCC SHOW_STATISTICS ("StatsTest", PK_StatsTest) GO -- Clean up DROP TABLE [dbo].[StatsTest] GO Now let us observe the result of the DBCC SHOW_STATISTICS. The result shows that sampling does occur for a large dataset. The percentage of sampling is based on data distribution as well as the kind of data in the table. Before dropping the table, let us first check the size of the table. The size of the table is 35 MB. Now, let us run the above code with a smaller number of rows. USE [AdventureWorks] GO -- Create Table CREATE TABLE [dbo].[StatsTest]( [ID] [int] IDENTITY(1,1) NOT NULL, [FirstName] [varchar](100) NULL, [LastName] [varchar](100) NULL, [City] [varchar](100) NULL, CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC) ) ON [PRIMARY] GO -- Insert 1 Hundred Thousand Rows INSERT INTO [dbo].[StatsTest] (FirstName,LastName,City) SELECT TOP 100000 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Update the statistics UPDATE STATISTICS [dbo].[StatsTest] GO -- Show the statistics DBCC SHOW_STATISTICS ("StatsTest", PK_StatsTest) GO -- Clean up DROP TABLE [dbo].[StatsTest] GO You can see that Rows Sampled is the same as the number of rows in the table. In this case, the sample rate is 100%. Before dropping the table, let us also check its size. The size of the table is less than 4 MB. Let us compare the result sets for reference. Test 1: Total Rows: 1000000, Rows Sampled: 255420, Size of the Table: 35.516 MB Test 2: Total Rows: 100000, Rows Sampled: 100000, Size of the Table: 3.555 MB The reason behind the sampling in Test 1 is that the data space is larger than 8 MB, and therefore it uses more than 1024 data pages. If the data space is smaller than 8 MB and uses fewer than 1024 data pages, then the sampling does not happen.
Sampling helps reduce excessive data scans; however, sometimes it reduces the accuracy of the statistics as well. Please note that this is just a sample test and in no way can it be claimed to be a benchmark test. The results can differ on different machines. There is a lot of other information that could be included on this subject; I will write a detailed post covering it very soon. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics
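
    For completeness, the sampling rate mentioned above can be overridden per statistics object; a small illustration using standard UPDATE STATISTICS options (the table and index names reuse the example above):

        -- force a full scan instead of the default sample
        UPDATE STATISTICS [dbo].[StatsTest] PK_StatsTest WITH FULLSCAN
        GO
        -- or request an explicit sampling percentage
        UPDATE STATISTICS [dbo].[StatsTest] PK_StatsTest WITH SAMPLE 50 PERCENT
        GO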

    Read the article

  • Google Desktop Search problems - unable to disable indexing and no index status page

    - by Howiecamp
I've been running Google Desktop (I've used both the regular and Enterprise versions) on my Windows 7 64-bit PC for a long time with no issues. I just paved my PC and reinstalled Google Desktop (regular edition). Initially, I want to use it only for the CTRL-CTRL launcher capability - I don't want it to do any indexing. So I unchecked all the items in the configuration dialog and excluded the C: drive. Despite this, it is indexing all the content on my hard drive as well as Outlook email, etc. In my previous installations I had the same options set and they worked properly. The other problem I have is that when you right-click on the Google Desktop system tray icon, there used to be an option for "Index Status" which showed you what percentage complete the index creation process was, as well as an option to clear and rebuild the index. Those are no longer there. I uninstalled the regular edition and installed the Enterprise Edition, but I have the same problems. Any ideas?

    Read the article

  • Archive software for big files and fast index

    - by AkiRoss
I'm currently using tar for archiving some files. The problem is: the archives are pretty big, contain a lot of data, and tar is very slow when listing and extracting. I often need to extract single files or folders from the archive, but I don't currently have an external index of files. So, is there an alternative for Linux that allows me to build uncompressed archive files, preserving the file attributes AND having a fast access list table? I'm talking about archives of 10 to 100 GB, and it's pretty impractical to wait several minutes to access a single file. Anyway, any trick to solve this problem is welcome (but a single archive file is a requirement, so no rsync or similar). Thanks in advance! EDIT: I'm not compressing the archives, and I think tar is simply too slow. To be precise about "slow", I'd like that: listing the archive content should take time linear in the number of files inside the archive, but with a very small constant (e.g. if a list of all the files is included at the head of the archive, it could be very fast); extraction of a target file/directory should (filesystem permitting) take time linear in the target size (e.g. if I'm extracting a 2 MB PDF file in a 40 GB directory, I'd really like it to take less than a few minutes... if not seconds). Of course, this is just my idea and not a requirement. I guess such performance could be achievable if the archive contained an index of all the files with their respective offsets and that index were well organized (e.g. a tree structure).
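
    A minimal stopgap under the stated constraints (single uncompressed archive, external index) is to pay the listing cost exactly once and keep the result next to the archive; extraction of a single member with plain tar still scans sequentially, so this only fixes the "fast list" half:

        # build the archive once
        tar -cf archive.tar bigdir/
        # one full pass to record the table of contents (block offsets via GNU tar's -R)
        tar -Rvtf archive.tar > archive.toc
        # later: search the external listing instead of the archive itself
        grep 'path/inside/file.pdf' archive.toc
        # single-member extraction (still a sequential scan of the archive)
        tar -xf archive.tar path/inside/file.pdf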

    Read the article

  • Index a low-cost NAS on Windows 7

    - by JcMaco
Has anyone found a way to index the files stored on Network Attached Storage on Windows 7, so that the files can be available in Windows Search and Libraries? I am referring to cheap and widely available NAS devices, like the Western Digital My Book series, that use an embedded Linux server. Similar question: http://windows7forums.com/windows-7-networking/6700-indexing-nas-drive-libraries.html EDIT Windows Help proposes making the files stored on the NAS available offline. This is obviously not a good solution if the NAS has more data than the client can store. "If the folder is on a network device that is not part of your homegroup, it can be included as long as the content of the folder is indexed. If the folder is already indexed on the device where it is stored, you should be able to include it directly in the library. If the network folder is not indexed, an easy way to index it is to make the folder available offline. This will create offline versions of the files in the folder, and add these files to the index on your computer. Once you make a folder available offline, you can include it in a library. When you make a network folder available offline, copies of all the files in that folder will be stored on your computer's hard disk. Take this into consideration if the network folder contains a large number of files."

    Read the article

• Can't remove index.php from URL in CodeIgniter

    - by Ashiq
I am new to the CodeIgniter framework. I want to remove index.php from the URL and have tried many times, but it's not working. Here is my .htaccess file: RewriteEngine on RewriteBase /test/ RewriteCond $1 !^(index\.php|resources|robots\.txt) RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ test/index.php/$1 [L,QSA] I have also changed $config['index_page'] = ''; but when running this I get an error message... Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator at [email protected] to inform them of the time this error occurred, and the actions you performed just before this error. More information about this error may be available in the server error log. Here is my Apache error log: [Sat Jan 05 16:59:53.265625 2013] [core:error] [pid 3976:tid 1152] [client ] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace. Please help me solve this. Thanks
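
    A hedged sketch of the usual CodeIgniter rewrite for an app living under /test/ - the rule target is resolved relative to RewriteBase, so repeating the test/ prefix is what typically produces exactly the internal-redirect loop shown in the error log:

        RewriteEngine on
        RewriteBase /test/
        RewriteCond $1 !^(index\.php|resources|robots\.txt)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # relative to RewriteBase, i.e. it resolves to /test/index.php/...
        RewriteRule ^(.*)$ index.php/$1 [L,QSA]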

    Read the article

  • Windows 2003 Server Caching

    - by pablomedok
We're experiencing table index corruption almost every day on Windows Server 2003. We are running an old application which uses DBF/CDX tables. Everything was fine for ages, but 6 months after we installed Advantage Database Server (which gives our website access to some of the tables) we started to get index corruption problems, and we don't know what to blame. We've tried to exclude all possible causes of this corruption. Now all users work in terminal mode - so no network problems can cause it, and OpLocks also can't be a reason. We changed hardware, network cards and switches, reinstalled the server and even moved to a new dedicated server. The only thing we can't exclude is ADS - because it should be working. Is it possible that local read/write caching causes the problem? E.g. one user or process uses cached data, later another user/process changes it, and later the first user changes it again without knowing about the other change. Is it possible theoretically? Is it possible that this problem is caused by improper file server or caching settings? Is it possible that normal users use non-cached data while ADS uses cached data, or vice versa? Is it possible that each terminal user has its own cache? Or maybe the problem is RAID caching somehow interfering with Windows Server caching? Or maybe there are special Windows Server settings for working with DBF tables that are being written simultaneously by several terminal users? Maybe there is a way to turn off caching for certain files to check this? Sometimes we get an index crash twice a day, sometimes everything is fine for 5 days in a row. Today only one user was working with the database in the evening (usually there are 30-50 users working simultaneously during working hours), so there was almost zero load on the server. Synchronization with the website is performed every 5 minutes during work hours and every 15 minutes in the evening and on weekends. We've done file access auditing and it shows that during website synchronizations the ADS server opens the table and index files for ReadEA and WriteEA even though it performs only SELECT queries. ADS does run UPDATE/INSERT queries, but less frequently - not during regular synchronizations, only when an order is placed by a website visitor. Please help me. We have been struggling with this problem for almost a year and still can't find any pattern or clue. Here is my previous question about this issue on DBA: http://dba.stackexchange.com/questions/8646/foxpro-dbf-index-corruption

    Read the article

  • simple question about oracle indexes

    - by john
If I have an Oracle query like the one below: SELECT * FROM table_a WHERE a = '1' AND b = '2' AND c = '3' then for this query to pick up one of the indexes of table_a, does the index need to be on all 3 of these columns? What I am asking is: What if the index is on A, B, C, D? What if the index is on B, C? Will the index only be picked when it is on A, B, C?
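
    As a hedged illustration of the leading-column rule (the index names below are made up for the example): an index whose leading columns cover the equality predicates can be used even if it carries extra trailing columns, while an index that omits the first referenced column usually cannot, short of an index skip scan:

        -- usable for WHERE a = '1' AND b = '2' AND c = '3'
        CREATE INDEX ix_a_b_c_d ON table_a (a, b, c, d);
        -- normally not chosen for that predicate, since A is missing from the
        -- leading edge; the optimizer may still consider an INDEX SKIP SCAN
        CREATE INDEX ix_b_c ON table_a (b, c);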

    Read the article

• Oracle Database 12c: Information Lifecycle Management (ILM) Storage Enhancements

- by Liu Maclean
    Oracle Database 12c????Information Lifecycle Management ILM ?????????Storage Enhancements ???????? Lifecycle Management ILM ????????? Automatic Data Placement ??????, ??ADP? ?????? 12c???????Datafile??? Online Move Datafile, ????????????????datafile???????,??????????????? ????(12.1.0.1)Automatic Data Optimization?heat map????????: ????????? (CDB)?????Automatic Data Optimization?heat map Row-level policies for ADO are not supported for Temporal Validity. Partition-level ADO and compression are supported if partitioned on the end-time columns. Row-level policies for ADO are not supported for in-database archiving. Partition-level ADO and compression are supported if partitioned on the ORA_ARCHIVE_STATE column. Custom policies (user-defined functions) for ADO are not supported if the policies default at the tablespace level. ADO does not perform checks for storage space in a target tablespace when using storage tiering. ADO is not supported on tables with object types or materialized views. ADO concurrency (the number of simultaneous policy jobs for ADO) depends on the concurrency of the Oracle scheduler. If a policy job for ADO fails more than two times, then the job is marked disabled and the job must be manually enabled later. Policies for ADO are only run in the Oracle Scheduler maintenance windows. Outside of the maintenance windows all policies are stopped. The only exceptions are those jobs for rebuilding indexes in ADO offline mode. ADO has restrictions related to moving tables and table partitions. ??????row,segment???????????ADO??,?????create table?alter table?????? ????ADO??,??????????????,???????????????? storage tier , ?????????storage tier?????????, ??????????????ADO??????????? segment?row??group? ?CREATE TABLE?ALERT TABLE???ILM???,??????????????????ADO policy? ??ILM policy???????????????? ??????? ????ADO policy, ?????alter table  ???????,?????????????? CREATE TABLE sales_ado (PROD_ID NUMBER NOT NULL, CUST_ID NUMBER NOT NULL, TIME_ID DATE NOT NULL, CHANNEL_ID NUMBER NOT NULL, PROMO_ID NUMBER NOT NULL, QUANTITY_SOLD NUMBER(10,2) NOT NULL, AMOUNT_SOLD NUMBER(10,2) NOT NULL ) ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT AFTER 6 MONTHS OF NO ACCESS; SQL> SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled 2 FROM USER_ILMPOLICIES; POLICY_NAME POLICY_TYPE ENABLED -------------------- -------------------------- -------------- P41 DATA MOVEMENT YES ALTER TABLE sales MODIFY PARTITION sales_1995 ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT AFTER 6 MONTHS OF NO ACCESS; SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled FROM USER_ILMPOLICIES; POLICY_NAME POLICY_TYPE ENABLE ------------------------ ------------- ------ P1 DATA MOVEMENT YES P2 DATA MOVEMENT YES /* You can disable an ADO policy with the following */ ALTER TABLE sales_ado ILM DISABLE POLICY P1; /* You can delete an ADO policy with the following */ ALTER TABLE sales_ado ILM DELETE POLICY P1; /* You can disable all ADO policies with the following */ ALTER TABLE sales_ado ILM DISABLE_ALL; /* You can delete all ADO policies with the following */ ALTER TABLE sales_ado ILM DELETE_ALL; /* You can disable an ADO policy in a partition with the following */ ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DISABLE POLICY P2; /* You can delete an ADO policy in a partition with the following */ ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DELETE POLICY P2; ILM ???????: ?????ILM ADP????,???????: ?????? ???? 
activity tracking, ????2????????,???????????????????: SEGMENT-LEVEL???????????????????? ROW-LEVEL????????,??????? ????????: 1??????? SEGMENT-LEVEL activity tracking ALTER TABLE interval_sales ILM  ENABLE ACTIVITY TRACKING SEGMENT ACCESS ???????INTERVAL_SALES??segment level  activity tracking,?????????????????? 2? ??????????? ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (CREATE TIME , WRITE TIME); 3????????? ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING  (READ TIME); ?12.1.0.1.0?????? ??HEAT_MAP??????????, ?????system??session?????heap_map????????????? ?????????HEAT MAP??,? ALTER SYSTEM SET HEAT_MAP = ON; ?HEAT MAP??????,??????????????????????????  ??SYSTEM?SYSAUX????????????? ???????HEAT MAP??: ALTER SYSTEM SET HEAT_MAP = OFF; ????? HEAT_MAP????, ?HEAT_MAP??? ?????????????????????? ?HEAT_MAP?????????Automatic Data Optimization (ADO)??? ??ADO??,Heat Map ?????????? ????V$HEAT_MAP_SEGMENT ??????? HEAT MAP?? SQL> select * from V$heat_map_segment; no rows selected SQL> alter session set heat_map=on; Session altered. SQL> select * from scott.emp; EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO ---------- ---------- --------- ---------- --------- ---------- ---------- ---------- 7369 SMITH CLERK 7902 17-DEC-80 800 20 7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30 7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30 7566 JONES MANAGER 7839 02-APR-81 2975 20 7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30 7698 BLAKE MANAGER 7839 01-MAY-81 2850 30 7782 CLARK MANAGER 7839 09-JUN-81 2450 10 7788 SCOTT ANALYST 7566 19-APR-87 3000 20 7839 KING PRESIDENT 17-NOV-81 5000 10 7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30 7876 ADAMS CLERK 7788 23-MAY-87 1100 20 7900 JAMES CLERK 7698 03-DEC-81 950 30 7902 FORD ANALYST 7566 03-DEC-81 3000 20 7934 MILLER CLERK 7782 23-JAN-82 1300 10 14 rows selected. SQL> select * from v$heat_map_segment; OBJECT_NAME SUBOBJECT_NAME OBJ# DATAOBJ# TRACK_TIM SEG SEG FUL LOO CON_ID -------------------- -------------------- ---------- ---------- --------- --- --- --- --- ---------- EMP 92997 92997 23-JUL-13 NO NO YES NO 0 ??v$heat_map_segment???,?v$heat_map_segment??????????????X$HEATMAPSEGMENT V$HEAT_MAP_SEGMENT displays real-time segment access information. Column Datatype Description OBJECT_NAME VARCHAR2(128) Name of the object SUBOBJECT_NAME VARCHAR2(128) Name of the subobject OBJ# NUMBER Object number DATAOBJ# NUMBER Data object number TRACK_TIME DATE Timestamp of current activity tracking SEGMENT_WRITE VARCHAR2(3) Indicates whether the segment has write access: (YES or NO) SEGMENT_READ VARCHAR2(3) Indicates whether the segment has read access: (YES or NO) FULL_SCAN VARCHAR2(3) Indicates whether the segment has full table scan: (YES or NO) LOOKUP_SCAN VARCHAR2(3) Indicates whether the segment has lookup scan: (YES or NO) CON_ID NUMBER The ID of the container to which the data pertains. Possible values include:   0: This value is used for rows containing data that pertain to the entire CDB. This value is also used for rows in non-CDBs. 1: This value is used for rows containing data that pertain to only the root n: Where n is the applicable container ID for the rows containing data The Heat Map feature is not supported in CDBs in Oracle Database 12c, so the value in this column can be ignored. ??HEAP MAP??????????????????,????DBA_HEAT_MAP_SEGMENT???????? ???????HEAT_MAP_STAT$?????? ??Automatic Data Optimization??????: ????1: SQL> alter system set heat_map=on; ?????? ????????????? scott?? http://www.askmaclean.com/archives/scott-schema-script.html SQL> grant all on dbms_lock to scott; ????? 
SQL> grant dba to scott; ????? @ilm_setup_basic C:\APP\XIANGBLI\ORADATA\MACLEAN\ilm.dbf @tktgilm_demo_env_setup SQL> connect scott/tiger ; ???? SQL> select count(*) from scott.employee; COUNT(*) ---------- 3072 ??? 1 ?? SQL> set serveroutput on SQL> exec print_compression_stats('SCOTT','EMPLOYEE'); Compression Stats ------------------ Uncmpressed : 3072 Adv/basic compressed : 0 Others : 0 PL/SQL ???????? ???????3072?????? ????????? ????policy ???????????? alter table employee ilm add policy row store compress advanced row after 3 days of no modification / SQL> set serveroutput on SQL> execute list_ilm_policies; -------------------------------------------------- Policies defined for SCOTT -------------------------------------------------- Object Name------ : EMPLOYEE Subobject Name--- : Object Type------ : TABLE Inherited from--- : POLICY NOT INHERITED Policy Name------ : P1 Action Type------ : COMPRESSION Scope------------ : ROW Compression level : ADVANCED Tier Tablespace-- : Condition type--- : LAST MODIFICATION TIME Condition days--- : 3 Enabled---------- : YES -------------------------------------------------- PL/SQL ???????? SQL> select sysdate from dual; SYSDATE -------------- 29-7? -13 SQL> execute set_back_chktime(get_policy_name('EMPLOYEE',null,'COMPRESSION','ROW','ADVANCED',3,null,null),'EMPLOYEE',null,6); Object check time reset ... -------------------------------------- Object Name : EMPLOYEE Object Number : 93123 D.Object Numbr : 93123 Policy Number : 1 Object chktime : 23-7? -13 08.13.42.000000 ?? Distnt chktime : 0 -------------------------------------- PL/SQL ???????? ?policy?chktime???6??, ????set_back_chktime???????????????“????”?,?????????,???????? ?????? alter system flush buffer_cache; alter system flush buffer_cache; alter system flush shared_pool; alter system flush shared_pool; SQL> execute set_window('MONDAY_WINDOW','OPEN'); Set Maint. Window OPEN ----------------------------- Window Name : MONDAY_WINDOW Enabled? : TRUE Active? : TRUE ----------------------------- PL/SQL ???????? SQL> exec dbms_lock.sleep(60) ; PL/SQL ???????? SQL> exec print_compression_stats('SCOTT', 'EMPLOYEE'); Compression Stats ------------------ Uncmpressed : 338 Adv/basic compressed : 2734 Others : 0 PL/SQL ???????? ??????????????? Adv/basic compressed : 2734 ??????? SQL> col object_name for a20 SQL> select object_id,object_name from dba_objects where object_name='EMPLOYEE'; OBJECT_ID OBJECT_NAME ---------- -------------------- 93123 EMPLOYEE SQL> execute list_ilm_policy_executions ; -------------------------------------------------- Policies execution details for SCOTT -------------------------------------------------- Policy Name------ : P22 Job Name--------- : ILMJOB48 Start time------- : 29-7? -13 08.37.45.061000 ?? End time--------- : 29-7? -13 08.37.48.629000 ?? ----------------- Object Name------ : EMPLOYEE Sub_obj Name----- : Obj Type--------- : TABLE ----------------- Exec-state------- : SELECTED FOR EXECUTION Job state-------- : COMPLETED SUCCESSFULLY Exec comments---- : Results comments- : --- -------------------------------------------------- PL/SQL ???????? ILMJOB48?????policy?JOB,?12.1.0.1??J00x???? ?MMON_SLAVE???M00x???15????????? select sample_time,program,module,action from v$active_session_history where action ='KDILM background EXEcution' order by sample_time; 29-7? -13 08.16.38.369000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.17.38.388000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.17.39.390000000 ?? 
ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.23.38.681000000 ?? ORACLE.EXE (M002) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.32.38.968000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.33.39.993000000 ?? ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.33.40.993000000 ?? ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.36.40.066000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.37.42.258000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.37.43.258000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.37.44.258000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.38.42.386000000 ?? ORACLE.EXE (M001) MMON_SLAVE KDILM background EXEcution select distinct action from v$active_session_history where action like 'KDILM%' KDILM background CLeaNup KDILM background EXEcution SQL> execute set_window('MONDAY_WINDOW','CLOSE'); Set Maint. Window CLOSE ----------------------------- Window Name : MONDAY_WINDOW Enabled? : TRUE Active? : FALSE ----------------------------- PL/SQL ???????? SQL> drop table employee purge ; ????? ???? ????? spool ilm_usecase_1_cleanup.lst @ilm_demo_cleanup ; spool off

    Read the article

  • How to understand these lines in apache.log

    - by chefnelone
I just got 19000 lines like these in the apache.log file for my site example.com. My hosting provider shut down the hosting and notified me that I need to resolve this before my hosting can be activated again. I understand that I got a large number of visits, but I don't know how to prevent this. 88.190.47.233 - - [27/Jun/2013:09:51:34 +0200] "GET / HTTP/1.0" 403 389 "http://example.com/" "Opera/9.80 (Windows NT 6.1; U; ru) Presto/2.10.289 Version/12.02" 417 88.190.47.233 - - [27/Jun/2013:09:51:34 +0200] "GET / HTTP/1.0" 403 389 "http://example.com/" "Opera/9.80 (Windows NT 6.1; U; ru) Presto/2.10.289 Version/12.02" 417 175.44.28.155 - - [27/Jun/2013:09:51:44 +0200] "GET /en/user/register HTTP/1.1" 403 503 "http://example.com/en/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1;)" 248 175.44.29.140 - - [27/Jun/2013:09:53:19 +0200] "GET /en/node/1557?page=2 HTTP/1.0" 403 517 "http://example.com/en/node/1557?page=2" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.12 Safari/535.11" 491 These are the lines from apache-error.log. There are more than 35000 lines like this. [Thu Jun 27 09:50:58 2013] [error] [client 5.39.19.183] (13)Permission denied: access to /index.php denied, referer: http://example.com/ [Thu Jun 27 09:51:03 2013] [error] [client 125.112.29.105] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/ [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/node/1557?page=1#comment-701 [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/node/1557?page=1#comment-701 [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.html denied, referer: http://example.com/en/node/1557?page=1#comment-701 [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.htm denied, referer: http://example.com/en/node/1557?page=1#comment-701 [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.php denied, referer: http://example.com/ [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.html denied, referer: http://example.com/ [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.htm denied, referer: http://example.com/ [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.php denied, referer: http://example.com/ [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.html denied, referer: http://example.com/ [Thu Jun 27 09:51:34 2013] [error] [client 88.190.47.233] (13)Permission denied: access to /index.htm denied, referer: http://example.com/ [Thu Jun 27 09:51:44 2013] [error] [client 175.44.28.155] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/ [Thu Jun 27 09:53:19 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/node/1557?page=2 [Thu Jun 27 09:53:20 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/node/1557?page=2 [Thu Jun 27 09:53:20 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.html denied, referer: http://example.com/en/node/1557?page=2 [Thu Jun 27 09:53:20 2013]
[error] [client 175.44.29.140] (13)Permission denied: access to /index.htm denied, referer: http://example.com/en/node/1557?page=2 [Thu Jun 27 09:53:21 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.php denied, referer: http://example.com/ [Thu Jun 27 09:53:21 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.html denied, referer: http://example.com/ [Thu Jun 27 09:53:21 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.htm denied, referer: http://example.com/ [Thu Jun 27 09:53:22 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.php denied, referer: http://example.com/ [Thu Jun 27 09:53:22 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.html denied, referer: http://example.com/ [Thu Jun 27 09:53:22 2013] [error] [client 175.44.29.140] (13)Permission denied: access to /index.htm denied, referer: http://example.com/ [Thu Jun 27 09:56:53 2013] [error] [client 113.246.6.147] (13)Permission denied: access to /index.php denied, referer: http://example.com/en/ [Thu Jun 27 09:58:58 2013] [error] [client 108.62.71.180] (13)Permission denied: access to /index.php denied, referer: http://example.com/

    Read the article

  • Outlook 2010 PST not indexing

    - by kellyllek
    I've had this problem for months: Outlook is unable to search recent items in the inbox. I was running Outlook 2010 Beta. I've just moved to the Trial version, hoping to solve the issue. I have tons of PST files but one central one I'm mainly concerned with. As of now it seems none of it is indexing. I've been through all the sites and made all the changes; rebuilt the index, changed the name of the PST files, run scanpst, stopped and started the search services, made sure the Windows features under programs and features has the indexing option checked, etc... Status now says 'zero items left to index', and 150,000 have been indexed. I think I have a lot more files than that, and also nothing is showing up on any search. I'm not sure what else to do? Side question. I'm going to be moving out of Outlook. However I have 10Gigs+ of PST files over the years. I want to merge them and make them search-able in the easiest way possible. Any idea on how to do that? Could I even move over to Thunderbird right now and be able to index and search my PST files? Also, Google Desktop won't index Outlook 2010 email either...

    Read the article

  • Linq to XML - update/alter the nodes of an XML Document

    - by knox
Hello! I've got 2 questions: 1. I've started working with LINQ to XML and I'm wondering if it is possible to change an XML document via LINQ. I mean, is there something like XDocument xmlDoc = XDocument.Load("sample.xml"); update item in xmlDoc.Descendants("item") where (int)item.Attribute("id") == id ... 2. I already know how to create and add a new XElement by simply using xmlDoc.Element("items").Add(new XElement(......); but how can I remove a single entry? XML sample data: <items> <item id="1" name="sample1" info="sample1 info" web="" /> <item id="2" name="sample2" info="sample2 info" web="" /> </items>
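
    There is no "update" keyword in LINQ query syntax; the usual pattern is select-then-mutate, and removal is the Remove() extension method. A short self-contained sketch against the sample.xml shown above (the file name and attribute values are just the ones from the question):

        using System;
        using System.Linq;
        using System.Xml.Linq;

        class LinqToXmlSketch
        {
            static void Main()
            {
                XDocument xmlDoc = XDocument.Load("sample.xml");
                int id = 1;

                // 1. "update": locate the element, then change it in place
                XElement item = xmlDoc.Descendants("item")
                                      .FirstOrDefault(i => (int)i.Attribute("id") == id);
                if (item != null)
                {
                    item.SetAttributeValue("name", "renamed");  // adds or replaces the attribute
                }

                // 2. remove a single entry by its id
                xmlDoc.Descendants("item")
                      .Where(i => (int)i.Attribute("id") == 2)
                      .Remove();

                xmlDoc.Save("sample.xml");
            }
        }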

    Read the article

  • index help for a MySQL query using greater-than operator and ORDER BY

    - by Jaymon
    I have a table with at least a couple million rows and a schema of all integers that looks roughly like this: start stop first_user_id second_user_id The rows get pulled using the following queries: SELECT * FROM tbl_name WHERE stop >= M AND first_user_id=N AND second_user_id=N ORDER BY start ASC SELECT * FROM tbl_name WHERE stop >= M AND first_user_id=N ORDER BY start ASC I cannot figure out the best indexes to speed up these queries. The problem seems to be the ORDER BY because when I take that out the queries are fast. I've tried all different types of indexes using the standard index format: ALTER TABLE tbl_name ADD INDEX index_name (index_col_1,index_col_2,...) And none of them seem to speed up the queries. Does anyone have any idea what index would work? Also, should I be trying a different type of index? I can't guarantee the uniqueness of each row so I've avoided UNIQUE indexes. Any guidance/help would be appreciated. Thanks!
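
    A hedged suggestion based on the usual composite-index rule of thumb - equality columns first, then the ORDER BY column, leaving the range condition on stop to be filtered from the rows the index returns, so the sort can come straight from the index:

        -- serves: WHERE stop >= M AND first_user_id = N AND second_user_id = N ORDER BY start
        ALTER TABLE tbl_name ADD INDEX idx_users_start (first_user_id, second_user_id, start);
        -- serves the second query, which filters on first_user_id only
        ALTER TABLE tbl_name ADD INDEX idx_user_start (first_user_id, start);
        -- note: placing stop before start would satisfy the range predicate
        -- but reintroduce the filesort that makes these queries slow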

    Read the article

  • JQuery IE7 Z-Index Bug

    - by Thomas
I built a jQuery dropdown menu using this tutorial: http://noupe.indexsite.org/tutorial/drop-down-menu-jquery-css.html It works across browsers except for IE7 (shooooocking). There seems to be a z-index sorting problem and the drop-down menu shows up under all of my other jQuery elements. I'm not sure how to set the z-index so that it shows up on top. I have thoroughly googled the issue and it seems to be related to multiple 'position:relative' elements. I've messed with it for a few hours but I can't seem to sort it out. I have already tried defining z-index for all the different page elements but it doesn't seem to help the situation. You can check out the problem here: http://hardtopdepot.com/dev/index.html Any help would be really appreciated - thanks! Also, I know there are other IE7 issues, but I'm pretty confident that I can solve those as they are standard IE padding/margin nonsense.
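
    The IE7 quirk is that every position:relative ancestor opens its own stacking context, so a child's z-index is capped by its parent's. A widely-circulated brute-force workaround (assuming it is acceptable to touch every positioned container on the page) is to hand out descending z-index values in document order:

        // run on DOM ready; earlier (usually containing) elements get the higher stacks
        $(function () {
            var zIndexNumber = 1000;
            $('div').each(function () {
                $(this).css('zIndex', zIndexNumber);
                zIndexNumber -= 10;
            });
        });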

    Read the article

  • z-Index overlay in google maps version 3

    - by fredrik
Hi, I'm trying to get an overlay in the Google Maps API v3 to appear above all markers, but it seems that the Google API always gives my overlay the lowest z-index priority. The only solution I've found is to iterate up through the DOM tree and manually set the z-index to a higher value - a poor solution. I'm adding a new div to my overlay with: onclick : function (e) { var index = $(e.target).index(), lngLatXYposition = $.view.overlay.getProjection().fromLatLngToDivPixel(this.getPosition()), icon = this.getIcon(), x = lngLatXYposition.x - icon.anchor.x, y = lngLatXYposition.y - icon.anchor.y; $('<div>test</div>').css({ left: x, position: 'absolute', top: y + 'px', zIndex: 1000 }).appendTo('.overlay'); } I've tried every property I could think of while creating my overlay: zIndex, zPriority etc. I'm adding my overlay with: $.view.overlay = new GmapOverlay( { map: view.map.gmap }); And GmapOverlay inherits from new google.maps.OverlayView. Any ideas? ..fredrik
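
    One hedged alternative to patching z-index after the fact is to attach the overlay's root div to a pane that is already stacked above the marker layer - getPanes().floatPane - inside the OverlayView's onAdd hook (the GmapOverlay internals are not shown in the question, so this is only a sketch):

        GmapOverlay.prototype.onAdd = function () {
            this.div = document.createElement('div');
            this.div.className = 'overlay';
            // floatPane sits above markers and their shadows, so children
            // appended here need no manual z-index adjustments
            this.getPanes().floatPane.appendChild(this.div);
        };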

    Read the article

  • Nested attributes in the index view?

    - by user283179
I seem to be getting the error "uninitialized constant Style::Pic" when I'm trying to render a nested object in the index view; the show view is fine. class Style < ActiveRecord::Base #belongs_to :users has_many :style_images, :dependent => :destroy accepts_nested_attributes_for :style_images, :reject_if => proc { |a| a.all? { |k, v| v.blank?} } #found this here http://ryandaigle.com/articles/2009/2/1/what-s-new-in-edge-rails-nested-attributes has_one :cover, :class_name => "Pic", :order => "updated_at DESC" accepts_nested_attributes_for :cover end class StyleImage < ActiveRecord::Base belongs_to :style #belongs_to :style_as_cover, :class_name => "Style", :foreign_key => "style_id" has_attached_file :pic, :styles => { :small => "200x0>", :normal => "600x> " } validates_attachment_presence :pic #validates_attachment_size :pic, :less_than => 5.megabytes end <% for style_image in @style.style_images %> <li><%= style_image.caption %></li> <div id="show_photo"> <%= image_tag style_image.pic.url(:normal) %></div> <% end %> As you can see from the above, the main model Style has many style_images; all these style_images are displayed in the show view, but in the index view I wish to show one image, which has been named as the cover, for each style. In the index controller I have tried the following: class StylesController < ApplicationController layout "mini" def index @styles = Style.find(:all, :include => [:cover]).reverse respond_to do |format| format.html # index.html.erb format.xml { render :xml => @styles } end end and the index: <% @styles.each do |style| %> <%=image_tag style.cover.pic.url(:small) %> <% end %> class StyleImage < ActiveRecord::Base belongs_to :style #belongs_to :style_as_cover, :class_name => "Style", :foreign_key => "style_id" has_attached_file :pic, :styles => { :small => "200x0>", :normal => "600x> " } validates_attachment_presence :pic #validates_attachment_size :pic, :less_than => 5.megabytes end In the style_images table there is also a cover_id. From the above you can see that I have included the cover in the controller and the model. I have no idea where I'm going wrong here! If anyone can help, please do!

    Read the article

  • UITableView section index overlaps search bar

    - by Snej
Hi: I want to display a section index in a UITableView with a search bar in the table view header (not a section header). But the index strip is overlapping the search bar. Is there an elegant solution to avoid this and let the index start below the table header?

    Read the article

  • Vim: Show the index of tabs in the tabline

    - by bitmask
Let's say I opened file1.txt, file2.txt, file3a.txt and file3b.txt such that the tabline (the thing at the top) looks like this: file1.txt file2.txt 2 file3a.txt (Note how file3b.txt is missing, because it is shown in a split, in the same tab as file3a.txt.) To move more quickly between tabs (with <Number>gt), I would like each tab to display its index alongside the filename. Like so: 1:<file1.txt> 2:<file2.txt> 3:<2 file3a.txt> The formatting (the angle brackets in particular) is optional; I just want the index to appear there (the 1:, 2: and so on). No clues in :h tab-page-commands or Google whatsoever.
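
    In terminal Vim the labels come from the 'tabline' option, so a small sketch of a custom function (essentially the pattern from :help setting-tabline, trimmed down; window counts and modified flags are omitted) that prefixes each label with its number:

        function! MyTabLine()
          let s = ''
          for i in range(1, tabpagenr('$'))
            " highlight the current tab differently
            let s .= (i == tabpagenr() ? '%#TabLineSel#' : '%#TabLine#')
            let buflist = tabpagebuflist(i)
            let winnr   = tabpagewinnr(i)
            let name    = fnamemodify(bufname(buflist[winnr - 1]), ':t')
            let s .= ' ' . i . ':<' . (name == '' ? '[No Name]' : name) . '> '
          endfor
          return s . '%#TabLineFill#'
        endfunction
        set tabline=%!MyTabLine()
        " gVim draws its own GUI tabs from 'guitablabel' instead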

    Read the article
