Search Results

Search found 52807 results on 2113 pages for 'system tables'.


  • Oracle ASMM (Automatic Shared Memory Management)

    - by Liu Maclean
    Before 10g, Oracle DBAs had to size the SGA and PGA components by hand. 10g introduced Automatic Shared Memory Management (ASMM), which lets the instance tune the major SGA components itself. Whether to rely on it is still debated: for a stable, well-understood system many DBAs prefer to fix the component sizes themselves, while for databases that nobody watches closely ASMM's ability to grow and shrink components as the workload changes is genuinely useful. ASMM was not particularly stable in 10g Release 1 and the early 10.2 patch sets, and many DBAs formed their opinion of ASMM (and later of 11g AMM) in that period, so it is worth understanding how it actually works before deciding.

    Since 9i the SGA has consisted of the following components:

    Buffer Cache - caches data blocks read from disk
        Default pool (DB_CACHE_SIZE)
        Keep pool (DB_KEEP_CACHE_SIZE)
        Non-standard block size pools (DB_nK_CACHE_SIZE)
        Recycle pool (DB_RECYCLE_CACHE_SIZE)
    Shared Pool (SHARED_POOL_SIZE)
        Library cache - SQL and PL/SQL objects
        Row cache - the dictionary cache
    Java Pool (JAVA_POOL_SIZE)
    Large Pool (LARGE_POOL_SIZE)
    Fixed SGA - a small area sized by Oracle itself, not made up of granules

    With manual shared memory management (MSMM, the only choice before 10g) the DBA had to split memory statically between the shared pool, the default buffer pool and the other components; 9i added advisors to help with the sizing, but resizing remained a manual task, and a component could run out of memory (typically ORA-04031 in the shared pool) while another component had plenty to spare. ASMM, by contrast, sizes the auto-tuned components automatically, needs only the single SGA_TARGET parameter, and can move memory to where the workload needs it, which reduces the chance of hitting ORA-04031.

    ASMM is built from three pieces: 1. the SGA_TARGET parameter; 2. the tuning framework - the memory components, the memory broker and the memory transfer mechanism; 3. the memory advisors.

    The automatically sized components are shared_pool_size, db_cache_size, java_pool_size, large_pool_size and streams_pool_size. Components such as db_keep_cache_size, db_recycle_cache_size, db_nk_cache_size and log_buffer remain manually sized; their values, together with the fixed SGA, are deducted from SGA_TARGET and the remainder is distributed among the auto-tuned components (most of it usually going to the buffer cache).

    ASMM is enabled by setting SGA_TARGET to a non-zero value (on Solaris this also involves DISM, dynamic intimate shared memory). Values you still set explicitly for auto-tuned parameters such as db_cache_size or java_pool_size then act as lower bounds for those components. Memory moves between components in units of granules: 4MB when the SGA is smaller than 1GB, 16MB when it is larger. ASMM also requires STATISTICS_LEVEL to be TYPICAL or ALL, because it depends on the statistics gathered by MMON (Memory Monitor is a background process that gathers memory statistics (snapshots), stores this information in the AWR (automatic workload repository), and is also responsible for issuing alerts for metrics that exceed their thresholds). With ASMM enabled, STATISTICS_LEVEL cannot be lowered to BASIC:

    SQL> show parameter sga

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    lock_sga                             boolean     FALSE
    pre_page_sga                         boolean     FALSE
    sga_max_size                         big integer 2000M
    sga_target                           big integer 2000M

    SQL> alter system set statistics_level=BASIC;
    alter system set statistics_level=BASIC
    *
    ERROR at line 1:
    ORA-02097: parameter cannot be modified because specified value is invalid
    ORA-00830: cannot set statistics_level to BASIC with auto-tune SGA enabled

    When a server parameter file (spfile) is used, ASMM writes the component sizes it has arrived at into double-underscore parameters at shutdown, so that the next startup can begin from the tuned values instead of starting from scratch. Running strings against the spfile shows entries such as:

    G10R2.__db_cache_size=973078528
    G10R2.__java_pool_size=16777216
    G10R2.__large_pool_size=16777216
    G10R2.__shared_pool_size=1006632960
    G10R2.__streams_pool_size=67108864

    From ASMM's point of view SGA memory falls into three classes:

    Tunable memory: memory that can be released and re-acquired at an acceptable cost. The buffer cache is the main example - cached blocks can be aged out and re-read from disk at the price of extra I/O. Parts of the shared pool (library cache sub-heaps) are also tunable, although memory held by open cursors, pinned buffers and their metadata cannot be released immediately.

    Un-tunable memory: memory that is needed for as long as the operation that allocated it is running and cannot be shrunk on demand, for example most large pool allocations.

    Fixed size memory: the fixed SGA and similar structures whose size is decided at startup.

    Resizing is driven by memory resize requests, of which there are three kinds:

    Immediate requests: issued by a foreground process that cannot find a chunk of the required size and would otherwise fail with an out-of-memory error (ORA-04031). The receiving component is given a granule - a whole free granule if one exists, otherwise a partial granule - so that the allocation can succeed.

    Deferred requests: issued in the background on the basis of the deltas that MMON computes from the workload statistics; whole granules are moved between components over time.

    Manual requests: issued by an explicit alter system command. A manual grow can fail with ORA-4033, and a manual shrink with ORA-4034, if the required granules cannot be obtained or released.

    In ASMM the Memory Broker decides the deferred transfers: MMON periodically evaluates the statistics and advisor data of the auto-tunable components, records the Memory Broker's decisions, and places the resulting resize requests on a system queue; the MMAN background process (Memory Manager is a background process that manages the dynamic resizing of SGA memory areas as the workload increases or decreases) then carries out the transfers.

    In 10gR1 the shrinking of the shared pool was quite limited. A buffer cache granule contains, besides the buffers themselves, a granule header and metadata (buffer headers and, in RAC, Lock Elements), while shared pool memory is organised into durations of chunks within a granule, which made exchanging whole granules between the two pools difficult. In 10gR2 the buffer cache granule layout and the shared pool duration handling were changed so that granules can be exchanged between the buffer cache and the shared pool much more easily; 10gR2 also brought the streams pool into the scheme (the streams pool has a single duration). When a component donates a granule (it appears as the Donor in the trace), for example the buffer cache handing a granule to the shared pool:

    1. the granule keeps its granule header;
    2. chunks that still hold buffer headers or other metadata stay where they are;
    3. the remaining chunks are handed over to the receiving component;
    4. the result is a partial (shared) granule.

    When memory is moved from the buffer cache to the shared pool the sequence is roughly the following:

    MMAN selects a suitable granule of the buffer cache;
    MMAN moves the granule to the quiesce list ("Moving 1 granule from inuse to quiesce list of DEFAULT buffer cache for an immediate req");
    DBWR writes out any dirty buffers in the quiesced granule;
    MMAN invokes the shared pool's consume callback, and the free chunks of the granule are consumed by the shared pool; pinned buffers and buffer metadata (buffer headers, Lock Elements) stay with the buffer cache, so the granule becomes a shared granule;
    the granule is then linked into the shared pool's (partial) inuse list.

    Several hidden parameters influence ASMM behaviour:

    _enabled_shared_pool_duration: controls whether the shared pool duration feature introduced in 10g is used; it is false when sga_target is 0, and in 10.2.0.5 it is also false when cursor_space_for_time is set to true (cursor_space_for_time is deprecated from 10.2.0.5 onwards).

    _memory_broker_shrink_heaps: when this is 0 Oracle does not shrink the shared pool or the java pool; when it is greater than 0, shrink requests are retried after that many seconds.

    _memory_management_tracing: enables tracing of MMON/MMAN activity, the memory advisors and the Memory Broker. Level 36 is useful when diagnosing ORA-04031 (level 8 traces startup, 23 traces Memory Broker decisions, 32 traces cache resizes); the levels are a bit mask:

    Level Contents
    0x01 Enables statistics tracing
    0x02 Enables policy tracing
    0x04 Enables transfer of granules tracing
    0x08 Enables startup tracing
    0x10 Enables tuning tracing
    0x20 Enables cache tracing

    Combining _memory_management_tracing with the DUMP_TRANSFER_OPS dump makes it possible to watch granule transfers in detail; the relevant trace files are the mman trace and the transfer ops dump.

    SQL> alter system set "_memory_management_tracing"=63;
    System altered

    Operation make shared pool grow and buffer cache shrink!!!..............

    The transfer of a free granule, with a resize of the default buffer pool, looks like this (comments follow the lines they explain):

    AUTO SGA: Request 0xdc9c2628 after pre-processing, ret=0
    /* 0xdc9c2628 is the address of the resize request */
    AUTO SGA: IMMEDIATE, FG request 0xdc9c2628
    /* an immediate request issued by a foreground process */
    AUTO SGA: Receiver of memory is shared pool, size=16, state=3, flg=0
    /* the receiver is the shared pool, currently 16 granules, about to grow */
    AUTO SGA: Donor of memory is DEFAULT buffer cache, size=106, state=4, flg=0
    /* the donor is the default buffer cache, currently 106 granules, about to shrink */
    AUTO SGA: Memory requested=3896, remaining=3896
    /* the immediate request needs 3896 bytes */
    AUTO SGA: Memory received=0, minreq=3896, gransz=16777216
    /* nothing received yet; gransz is the granule size */
    AUTO SGA: Request 0xdc9c2628 status is INACTIVE
    AUTO SGA: Init bef rsz for request 0xdc9c2628
    AUTO SGA: Set rq dc9c2628 status to PENDING
    AUTO SGA: 0xca000000 rem=3896, rcvd=16777216, 105, 16777216, 17
    /* a free 16M granule at 0xca000000 is transferred */
    AUTO SGA: Returning 4 from kmgs_process for request dc9c2628
    AUTO SGA: Process req dc9c2628 ret 4, 1, a
    AUTO SGA: Resize done for pool DEFAULT, 8192
    /* the resize of the default pool is complete */
    AUTO SGA: Init aft rsz for request 0xdc9c2628
    AUTO SGA: Request 0xdc9c2628 after processing
    AUTO SGA: IMMEDIATE, FG request 0x7fff917964a0
    AUTO SGA: Receiver of memory is shared pool, size=17, state=0, flg=0
    AUTO SGA: Donor of memory is DEFAULT buffer cache, size=105, state=0, flg=0
    AUTO SGA: Memory requested=3896, remaining=0
    AUTO SGA: Memory received=16777216, minreq=3896, gransz=16777216
    AUTO SGA: Request 0x7fff917964a0 status is COMPLETE
    /* the shared pool has received one 16M granule */
    AUTO SGA: activated granule 0xca000000 of shared pool

    A transfer involving a partial granule produces a trace like the following:

    AUTO SGA: Request 0xdc9c2628 after pre-processing, ret=0
    AUTO SGA: IMMEDIATE, FG request 0xdc9c2628
    AUTO SGA: Receiver of memory is shared pool, size=82, state=3, flg=1
    AUTO SGA: Donor of memory is DEFAULT buffer cache, size=36, state=4, flg=1
    /* the receiver is again the shared pool, the donor the default buffer cache */
    AUTO SGA: Memory requested=4120, remaining=4120
    AUTO SGA: Memory received=0, minreq=4120, gransz=16777216
    AUTO SGA: Request 0xdc9c2628 status is INACTIVE
    AUTO SGA: Init bef rsz for request 0xdc9c2628
    AUTO SGA: Set rq dc9c2628 status to PENDING
    AUTO SGA: Moving granule 0x93000000 of DEFAULT buffer cache to activate list
    AUTO SGA: Moving 1 granule 0x8c000000 from inuse to quiesce list of DEFAULT buffer cache for an immediate req
    /* granule 0x8c000000 is taken off the buffer cache inuse list and put on its quiesce list */
    AUTO SGA: Returning 0 from kmgs_process for request dc9c2628
    AUTO SGA: Process req dc9c2628 ret 0, 1, 20a
    AUTO SGA: activated granule 0x93000000 of DEFAULT buffer cache
    AUTO SGA: NOT_FREE for imm req for gran 0x8c000000
    /* the granule still contains dirty buffers; DBWR has to write them out first */
    AUTO SGA: Returning 0 from kmgs_process for request dc9c2628
    AUTO SGA: Process req dc9c2628 ret 0, 1, 20a
    AUTO SGA: NOT_FREE for imm req for gran 0x8c000000
    .........................................
    AUTO SGA: Rcv shared pool consuming 8192 from 0x8c000000 in granule 0x8c000000; owner is DEFAULT buffer cache
    AUTO SGA: Rcv shared pool consuming 90112 from 0x8c002000 in granule 0x8c000000; owner is DEFAULT buffer cache
    AUTO SGA: Rcv shared pool consuming 24576 from 0x8c01a000 in granule 0x8c000000; owner is DEFAULT buffer cache
    AUTO SGA: Rcv shared pool consuming 65536 from 0x8c022000 in granule 0x8c000000; owner is DEFAULT buffer cache
    AUTO SGA: Rcv shared pool consuming 131072 from 0x8c034000 in granule 0x8c000000; owner is DEFAULT buffer cache
    AUTO SGA: Rcv shared pool consuming 286720 from 0x8c056000 in granule 0x8c000000; owner is DEFAULT buffer cache
    AUTO SGA: Rcv shared pool consuming 98304 from 0x8c09e000 in granule 0x8c000000; owner is DEFAULT buffer cache
    AUTO SGA: Rcv shared pool consuming 106496 from 0x8c0b8000 in granule 0x8c000000; owner is DEFAULT buffer cache
    .....................
    /* the shared pool consumes the free chunks of granule 0x8c000000; the owner of the granule is still the default buffer cache */
    AUTO SGA: Imm xfer 0x8c000000 from quiesce list of DEFAULT buffer cache to partial inuse list of shared pool
    /* the granule moves from the buffer cache quiesce list to the shared pool partial inuse list */
    AUTO SGA: Returning 4 from kmgs_process for request dc9c2628
    AUTO SGA: Process req dc9c2628 ret 4, 1, 20a
    AUTO SGA: Init aft rsz for request 0xdc9c2628
    AUTO SGA: Request 0xdc9c2628 after processing
    AUTO SGA: IMMEDIATE, FG request 0x7fffe9bcd0e0
    AUTO SGA: Receiver of memory is shared pool, size=83, state=0, flg=1
    AUTO SGA: Donor of memory is DEFAULT buffer cache, size=35, state=0, flg=1
    AUTO SGA: Memory requested=4120, remaining=0
    AUTO SGA: Memory received=14934016, minreq=4120, gransz=16777216
    AUTO SGA: Request 0x7fffe9bcd0e0 status is COMPLETE
    /* the partial transfer is complete */

    The layout of the partial granule 0x8c000000 can then be inspected with the DUMP_TRANSFER_OPS dump:

    SQL> oradebug setmypid;
    Statement processed.
    SQL> oradebug dump DUMP_TRANSFER_OPS 1;
    Statement processed.
    SQL> oradebug tracefile_name;
    /s01/admin/G10R2/udump/g10r2_ora_21482.trc

    =======================trace content==============================
    GRANULE SIZE is 16777216
    COMPONENT NAME : shared pool
    Number of granules in partially inuse list (listid 4) is 23
    Granule addr is 0x8c000000 Granule owner is DEFAULT buffer cache
    /* granule 0x8c000000 is on the shared pool's partially inuse list, but its owner is still the default buffer cache */
    Granule 0x8c000000 dump from owner perspective
    gptr = 0x8c000000, num buf hdrs = 1989, num buffers = 156, ghdr = 0x8cffe000
    /* the granule header is at 0x8cffe000; the granule still holds 156 buffer blocks and 1989 buffer headers, the rest has become shared pool chunks */
    BH:0x8cf76018 BA:(nil) st:11 flg:20000
    BH:0x8cf76128 BA:(nil) st:11 flg:20000
    BH:0x8cf76238 BA:(nil) st:11 flg:20000
    BH:0x8cf76348 BA:(nil) st:11 flg:20000
    BH:0x8cf76cd8 BA:0x8c018000 st:1 flg:622202
    ...............
    Address 0x8cf30000 to 0x8cf74000 not in cache
    Address 0x8cf74000 to 0x8d000000 in cache
    Granule 0x8c000000 dump from receivers perspective
    Dumping layout
    Address 0x8c000000 to 0x8c018000 in sga heap(1,3) (idx=1, dur=4)
    Address 0x8c018000 to 0x8c01a000 not in this pool
    Address 0x8c01a000 to 0x8c020000 in sga heap(1,3) (idx=1, dur=4)
    Address 0x8c020000 to 0x8c022000 not in this pool
    Address 0x8c022000 to 0x8c032000 in sga heap(1,3) (idx=1, dur=4)
    Address 0x8c032000 to 0x8c034000 not in this pool
    Address 0x8c034000 to 0x8c054000 in sga heap(1,3) (idx=1, dur=4)
    Address 0x8c054000 to 0x8c056000 not in this pool
    Address 0x8c056000 to 0x8c09c000 in sga heap(1,3) (idx=1, dur=4)
    Address 0x8c09c000 to 0x8c09e000 not in this pool
    Address 0x8c09e000 to 0x8c0b6000 in sga heap(1,3) (idx=1, dur=4)
    Address 0x8c0b6000 to 0x8c0b8000 not in this pool
    Address 0x8c0b8000 to 0x8c0d2000 in sga heap(1,3) (idx=1, dur=4)

    From the receiver's perspective the chunks of this shared granule that are no longer buffer blocks belong to sga heap(1,3): shared subpool 1, duration 4, the execution duration. Chunks of the execution duration are the easiest to free, which is why extents handed over through the quiesce list end up in that duration.

    The following views help with monitoring ASMM:

    V$SGAINFO - Displays summary information about the system global area (SGA).
    V$SGA - Displays size information about the SGA, including the sizes of different SGA components, the granule size, and free memory.
    V$SGASTAT - Displays detailed information about the SGA.
    V$SGA_DYNAMIC_COMPONENTS - Displays information about the dynamic SGA components. This view summarizes information based on all completed SGA resize operations since instance startup.
    V$SGA_DYNAMIC_FREE_MEMORY - Displays information about the amount of SGA memory available for future dynamic SGA resize operations.
    V$SGA_RESIZE_OPS - Displays information about the last 400 completed SGA resize operations.
    V$SGA_CURRENT_RESIZE_OPS - Displays information about SGA resize operations that are currently in progress. A resize operation is an enlargement or reduction of a dynamic SGA component.
    V$SGA_TARGET_ADVICE - Displays information that helps you tune SGA_TARGET.

    The shared pool duration feature itself deserves a separate discussion.
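
    A quick way to watch ASMM in action is to query the views listed above; a minimal SQL sketch (only documented columns are used):

        -- Current sizes of the dynamically tuned SGA components
        SELECT component, current_size, min_size, max_size, granule_size
          FROM v$sga_dynamic_components
         ORDER BY current_size DESC;

        -- Recent completed resize operations: which pool grew or shrank, and when
        SELECT component, oper_type, initial_size, final_size, status, end_time
          FROM v$sga_resize_ops
         ORDER BY end_time DESC;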

    Read the article

  • C# DataSet CheckBox Column With DevExpress DataGrid

    - by Goober
    Scenario I have a DevExpress DataGrid which is bound to a DataSet in C#. I want to populate each dataset row to contain a string in the first column and a checkbox in the second. My code below doesn't work quite how I want it to, and I'm not sure why..... The Code As you can see I've declared a dataset, but when I try and pass in a new checkbox object to the 2nd column it just displays the system name for the checkbox. DataSet prodTypeDS = new DataSet(); DataTable prodTypeDT = prodTypeDS.Tables.Add(); prodTypeDT.Columns.Add("MurexID", typeof(string)); prodTypeDT.Columns.Add("Haganise",typeof(CheckBox)); //WHY DOES THIS NOT WORK? //(Displays "System.Windows.Forms.CheckBox, CheckState: 0") //Instead of a checkbox. CheckBox c = new CheckBox(); prodTypeDS.Tables[0].Rows.Add("Test",c); //This doesn't work either.... prodTypeDS.Tables[0].Rows.Add("Test",c.CheckState); ......I hope this is just because it's a DevExpress datagrid....

    Read the article

  • Very strange iSeries Provider behavior

    - by AJ
    We've been given a "stored procedure" from our RPG folks that returns six data tables. Attempting to call it from .NET (C#, 3.5) using the iSeries Provider for .NET (tried using both V5R4 and V6R1), we are seeing different results based on how we call the stored proc. Here's the way we'd prefer to do it: using (var dbConnection = new iDB2Connection("connectionString")) { dbConnection.Open(); using(var cmd = dbConnection.CreateCommand()) { cmd.CommandType = CommandType.StoredProcedure; cmd.CommandText = "StoredProcName"; cmd.Parameters.Add(new iDB2Parameter("InParm1", iDB2DbType.Varchar)).Value = thing; var ds = new DataSet(); var da = new iDB2DataAdapter(cmd); da.Fill(ds); } } Doing it this way, we get FIVE tables back in the result set. However, if we do this: cmd.CommandType = CommandType.Text; cmd.CommandText = "CALL StoredProcName('" + thing + "')"; We get back the expected SIX tables. I realize that there aren't many of us sorry .NET-to-DB2 folks out here, but I'm hoping someone has seen this before. TIA.

    Read the article

  • Need help limiting a join in Transact-SQL

    - by MsLis
    I'm somewhat new to SQL and need help with query syntax. My issue involves 2 tables within a larger multi-table join under Transact-SQL (MS SQL Server 2000 Query Analyzer) I have ACCOUNTS and LOGINS, which are joined on 2 fields: Site & Subset. Both tables may have multiple rows for each Site/Subset combination. ACCOUNTS: | LOGINS: SITE SUBSET FIELD FIELD FIELD | SITE SUBSET USERID PASSWD alpha bravo blah blah blah | alpha bravo foo bar alpha charlie blah blah blah | alpha bravo bar foo alpha charlie bleh bleh blue | alpha charlie id ego delta bravo blah blah blah | delta bravo john welcome delta foxtrot blah blah blah | delta bravo jane welcome | delta bravo ken welcome | delta bravo barbara welcome I want to select all rows in ACCOUNTS which have LOGIN entries, but only 1 login per account. DESIRED RESULT: SITE SUBSET FIELD FIELD FIELD USERID PASSWD alpha bravo blah blah blah foo bar alpha charlie blah blah blah id ego alpha charlie bleh bleh blue id ego delta bravo blah blah blah jane welcome I don't really care which row from the login table I get, but the UserID and Password have to correspond. [Don't return invalid combinations like foo/foo or bar/bar] MS Access has a handy FIRST function, which can do this, but I haven't found an equivalent in TSQL. Also, if it makes a difference, other tables are joined to ACCOUNTS, but this is the only use of LOGINS in the structure. Thank you very much for any assistance.
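
    One way to express "one login per account" in SQL Server 2000 (where ROW_NUMBER() is not available) is a correlated subquery that picks an arbitrary single USERID per Site/Subset; a sketch against the tables from the question, with F1/F2/F3 standing in for the unnamed ACCOUNTS columns:

        SELECT a.SITE, a.SUBSET, a.F1, a.F2, a.F3,
               l.USERID, l.PASSWD
        FROM   ACCOUNTS a
               JOIN LOGINS l
                 ON  l.SITE   = a.SITE
                 AND l.SUBSET = a.SUBSET
        WHERE  l.USERID = (SELECT MIN(l2.USERID)   -- any single login will do
                           FROM   LOGINS l2
                           WHERE  l2.SITE   = a.SITE
                             AND  l2.SUBSET = a.SUBSET)

    Because the outer join condition also matches on USERID, the PASSWD returned always belongs to the chosen login, so invalid combinations are not produced.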

    Read the article

  • SQL Server database with clustered GUID PKs - switch clustered index or switch to sequential (comb) GUIDs

    - by Eyvind
    We have a database in which all the PKs are GUIDs, and most of the PKs are also the clustered index for the table. We know that this is bad (due to the random nature of GUIDs). So, it seems there are basically two options here (short of throwing out GUIDs as PKs altogether, which we cannot do (at least not at this time)). We could change the GUID generation algorithm to e.g. the one that NHibernate uses, as detailed in this post, or we could, for the tables that are under the heaviest use, change to a different clustered index, e.g. an IDENTITY column, and keep the "random" GUIDs as PKs. Is it possible to give any general recommendations in such a scenario? The application in question has 500+ tables, the largest one presently at about 1.5 million rows, a few tables around 500 000 rows, and the rest significantly lower (most of them well below 10K). Furthermore, the application is installed at several customer sites already, so we have to take any possible negative effects for existing customers into consideration. Thanks!
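
    For reference, both directions can be sketched in T-SQL. NHibernate's comb GUID is generated on the client; SQL Server 2005+ can also generate sequential GUIDs on the server with NEWSEQUENTIALID(). The table and column names below are invented for illustration:

        -- Option 1: sequential GUIDs generated by the server (keeps the GUID PK clustered)
        CREATE TABLE dbo.OrdersExample
        (
            OrderId   uniqueidentifier NOT NULL
                      CONSTRAINT DF_OrdersExample_OrderId DEFAULT NEWSEQUENTIALID()
                      CONSTRAINT PK_OrdersExample PRIMARY KEY CLUSTERED,
            OrderDate datetime NOT NULL
        );

        -- Option 2: keep the random GUID as a nonclustered PK, cluster on an IDENTITY column
        CREATE TABLE dbo.OrderLinesExample
        (
            OrderLineId uniqueidentifier NOT NULL
                        CONSTRAINT PK_OrderLinesExample PRIMARY KEY NONCLUSTERED,
            RowId       int IDENTITY(1,1) NOT NULL,
            Quantity    int NOT NULL
        );
        CREATE UNIQUE CLUSTERED INDEX IX_OrderLinesExample_RowId
            ON dbo.OrderLinesExample (RowId);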

    Read the article

  • OLAP Web Visualization and Reporting Recommendations

    - by Gok Demir
    I am preparing an offer for a customer. They provide weekly data to different organizations. There is a huge amount of data suited to OLAP that needs to be visualized with charts and pivot tables on the web, and custom reports will be built by non-IT persons (an easy GUI). They will enter a date range and location, choose which data columns to include, generate the report, and optionally export the data to Excel. They currently prepare reports with MS Excel and Pivot Tables, but they now need a better online tool to show data to their customers. Tables are huge and need drill-down functionality. My current knowledge: Spring, Flex, MySQL, Linux. I have some knowledge of PostgreSQL and MSSQL and Windows. What is the easiest way of doing this project? Do you think that SSRS (haven't tried it yet) and ASP.NET suit this kind of job better? Actually I prefer open source solutions. Flex has an OLAP Data Grid control which does aggregation on the client side. JasperServer seems promising, but it seems I need the enterprise version (multiple organizations and ad hoc queries). What about a Mondrian + Flex + PostgreSQL solution? Any previous experience will be appreciated. Yes, I am confused by the options.

    Read the article

  • How do I pass a lot of parameters to views in Django?

    - by Mark
    I'm very new to Django and I'm trying to build an application to present my data in tables and charts. Till now my learning process went very smoothly, but now I'm a bit stuck. My pageview retrieves large amounts of data from a database and puts it in the context. The template then generates different html-tables. So far so good. Now I want to add different charts to the template. I manage to do this by defining <img src=".../> tags. The Matplotlib chart is generated in my chartview and returned via: response=HttpResponse(content_type='image/png') canvas.print_png(response) return response Now I have different questions: the data is retrieved twice from the database. Once in the pageview to render the tables, and again in the chartview for making the charts. What is the best way to pass the data, which is already in the context of the page, to the chartview? I need a lot of charts, each with different datasets. I could make a chartview for each chart, but probably there is a better way. How do I pass the different dataset names to the chartview? Some charts have 20 datasets, so I don't think that passing these dataset parameters via the url (like: <img src="chart/dataset1/dataset2/.../dataset20/chart.png />) is the right way. Any advice?

    Read the article

  • Export the datagrid data to text in asp.net

    - by SRIRAM
    Problem: it complains that there is no assembly reference/namespace for Database. Database db = DatabaseFactory.CreateDatabase(); DBCommandWrapper selectCommandWrapper = db.GetStoredProcCommandWrapper("sp_GetLatestArticles"); DataSet ds = db.ExecuteDataSet(selectCommandWrapper); StringBuilder str = new StringBuilder(); for(int i=0;i<=ds.Tables[0].Rows.Count - 1; i++) { for(int j=0;j<=ds.Tables[0].Columns.Count - 1; j++) { str.Append(ds.Tables[0].Rows[i][j].ToString()); } str.Append("<BR>"); } Response.Clear(); Response.AddHeader("content-disposition", "attachment;filename=FileName.txt"); Response.Charset = ""; Response.Cache.SetCacheability(HttpCacheability.NoCache); Response.ContentType = "application/vnd.text"; System.IO.StringWriter stringWrite = new System.IO.StringWriter(); System.Web.UI.HtmlTextWriter htmlWrite = new HtmlTextWriter(stringWrite); Response.Write(str.ToString()); Response.End();

    Read the article

  • Can YAML have inheritance?

    - by Jason
    This question involves a lot of symfony but it should be easy enough for someone to follow who only knows YAML and not symfony. My symfony models come from a three-step process: First, I create the tables in MySQL. Second, I run a symfony command (symfony doctrine:build-schema) to convert my table structure into a YAML file. Third, I run another symfony command (symfony doctrine:build-model) to convert the YAML file into PHP code. Here's the problem: there are some tables in the database that I don't want to end up in my symfony code. For example, let's say I have two tables: one called my_table and another called wordpress. The YAML file I end up with might look like this: MyTable: connection: doctrine tableName: my_table Wordpress: connection: doctrine tableName: wordpress That's great except the wordpress table has nothing to do with my symfony models. The result is that every single time I make a change to my database and generate this YAML file, I have to manually remove wordpress. It's annoying! I'd like to be able to create a file called baseConfig.php or something that looks like this: $config = array( 'MyTable' => array( 'connection' => 'doctrine', 'tableName' => 'my_table', ), 'Wordpress' => array( 'connection' => 'doctrine', 'tableName' => 'wordpress', ), ); And then I could have a separate file called config.php or something where I could make modifications to the base config: unset($config['Wordpress']); So my question is: is there any way to convert YAML into executable PHP code (as opposed to load YAML INTO PHP code like what sfYaml::load() does) to achieve this sort of thing? Or is there maybe some other way to achieve YAML inheritance? Thanks, Jason

    Read the article

  • Increase Query Speed in PostgreSQL

    - by Anthoni Gardner
    Hello, First time posting here, but an avid reader. I am experiencing slow query times on my database (all tested locally thus far) and am not sure how to go about it. The database itself has 44 tables and some of those tables have over 1 million records (mainly the movies, actresses and actors tables). The tables are made via JMDB using the flat files from IMDB. Also, the SQL query that I am about to show is from that said program (which also experiences very slow search times). I have tried to include as much information as I can, such as the explain plan etc. "QUERY PLAN" "HashAggregate (cost=46492.52..46493.50 rows=98 width=46)" " Output: public.movies.title, public.movies.movieid, public.movies.year" " - Append (cost=39094.17..46491.79 rows=98 width=46)" " - HashAggregate (cost=39094.17..39094.87 rows=70 width=46)" " Output: public.movies.title, public.movies.movieid, public.movies.year" " - Seq Scan on movies (cost=0.00..39093.65 rows=70 width=46)" " Output: public.movies.title, public.movies.movieid, public.movies.year" " Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '""%}'::text))" " - Nested Loop (cost=0.00..7395.94 rows=28 width=46)" " Output: public.movies.title, public.movies.movieid, public.movies.year" " - Seq Scan on akatitles (cost=0.00..7159.24 rows=28 width=4)" " Output: akatitles.movieid, akatitles.language, akatitles.title, " Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '""%}'::text))" " - Index Scan using movies_pkey on movies (cost=0.00..8.44 rows=1 width=46)" " Output: public.movies.movieid, public.movies.title, public.movies.year, public.movies.imdbid" " Index Cond: (public.movies.movieid = akatitles.movieid)" SELECT * FROM ((SELECT DISTINCT title, movieid, year FROM movies WHERE title ILIKE '%Babe%' AND NOT (title ILIKE '"%}')) UNION (SELECT movies.title, movies.movieid, movies.year FROM movies INNER JOIN akatitles ON movies.movieid=akatitles.movieid WHERE akatitles.title ILIKE '%Babe%' AND NOT (akatitles.title ILIKE '"%}'))) AS union_tmp2; Returns 612 Rows in 9078ms. Database backup (plain text) is 1.61GB. It's a really complex query and I am not fully cognizant of it; like I said, it was spat out by JMDB. Do you have any suggestions on how I can increase the speed? Regards Anthoni
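
    For what it's worth, a leading-wildcard predicate such as ILIKE '%Babe%' cannot use an ordinary b-tree index, which is why both branches fall back to sequential scans; on newer PostgreSQL releases (9.1 and later) the pg_trgm extension can index this kind of search directly. A sketch, assuming that extension is available:

        -- Requires the pg_trgm contrib module (CREATE EXTENSION syntax is PostgreSQL 9.1+)
        CREATE EXTENSION IF NOT EXISTS pg_trgm;

        -- Trigram GIN indexes let the planner use an index for ILIKE '%Babe%'
        CREATE INDEX movies_title_trgm_idx    ON movies    USING gin (title gin_trgm_ops);
        CREATE INDEX akatitles_title_trgm_idx ON akatitles USING gin (title gin_trgm_ops);

        -- Then re-check the plan
        EXPLAIN ANALYZE
        SELECT DISTINCT title, movieid, year
        FROM movies
        WHERE title ILIKE '%Babe%';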

    Read the article

  • Displaying Many-To-Many Database relationship in VB.NET 2008 with DataGrid, MS SQL 2008

    - by user337501
    Computer bombed while posting this; couldn't find a duplicate question, but if there is one, forgive me. So, I've run into a wall. And rather than use a ladder to avoid it, I'd like to go through it. I'm setting up what I can best describe as a many-to-many relationship in a database. To exemplify, imagine I have three primary tables: Items, Categories, Sections (nevermind the potential redundancy). Then I have another table, Properties. Items, Categories, and Sections can be associated with many properties. A single property can be associated with one, all, or none of the other tables. The best way I can figure to do this is to have join tables make the relationship. i.e. tblItems----(Foreign Key)----tblItems_To_Properties----(Foreign Key)----tblProperties In this example, tblItems simply has an "ItemID" Primary Key. tblItems_To_Properties has its own Primary Key(tblItems_To_PropertiesID), a Foreign Key to the Item(ItemID) and a Foreign key to the Property(PropertyID). The Properties table simply has its primary key(PropertyID). I hope this example isn't too confusing...if I have to I can find a way to put a diagram up or something. My problem is, I want to display this in a DataGrid using the Master-Detail method (DevExpress GridControl). I use the tblItems as a test, and I can see the Items in the parent view, but in the child view I see (understandably) the join table and that is it. My goal is to make it so the Grid ignores the join table and shows the Properties table as the only child. Any help on this method or insight into another solution would be much appreciated.
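
    One common workaround, sketched below with the table names from the question, is to expose a view that joins the link table to Properties, so the grid's master-detail relation can run straight from Items to that view on ItemID (any extra descriptive columns on Properties would simply be added to the select list):

        -- Flatten the Items <-> Properties link so a grid can treat it as a
        -- simple parent/child relation keyed on ItemID.
        CREATE VIEW vwItemProperties
        AS
        SELECT ip.ItemID,
               p.PropertyID
        FROM   tblItems_To_Properties AS ip
               INNER JOIN tblProperties AS p
                       ON p.PropertyID = ip.PropertyID;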

    Read the article

  • Is CSS Inheritance in Internet Explorer 8 still buggy?

    - by rrrr
    I have a situation that I am looking at where certain CSS properties will not be inherited. This revolves around tables and IE8. Using the sample HTML below I cannot get the text within the table to inherit the green colour. This works in Firefox and Chrome but not in IE8, and from reading up this seems to have always been a problem in IE, though it was meant to be working in version 8 from what I read. I have tried to specify the inherit value everywhere possible, but to no avail, so the question is whether the CSS inheritance support in IE8 is buggy, or whether I am missing something. I don't want answers that change the inline CSS to classes, and I certainly don't want any comments on tables, as this all stems from building and designing HTML emails where inline CSS and tables are essential. <html> <head></head> <body> <table style="color: green;"> <tr> <td> <span>Span</span> <p>Paragraph</p> <div>Div</div> <table style="color:inherit;"> <tr> <td>Table</td> </tr> </table> </td> </tr> </table> </body> </html>

    Read the article

  • EAV Database Scheme

    - by GLO
    Hello Stack Overflow community! I believe my question concerns every db guru here! Do you know the EAV DB scheme ( http://en.wikipedia.org/wiki/Entity-attribute-value_model ) and what is said about the performance of this model? I wonder what the result would be if I broke this model into smaller tables. Let's talk about it. I have a db with more than 100K records: a lot of categories and many items (with different properties per category). Everything is stored in an EAV structure. If I break this scheme and create a separate table for each category, is that something I should avoid? Yes, I know that I'll probably have a lot of tables and I'll need to ALTER them if I want to add an extra field, BUT is this so wrong? I have also read that the more tables I have, the more files the db will be populated with, and this isn't good for any filesystem. Any suggestion? Thank you!
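
    To make the trade-off concrete, here is a rough sketch of the two shapes being compared (table and column names invented): in EAV every property is an extra row, while in a per-category table each property becomes a column and adding a property means an ALTER TABLE.

        -- EAV: one generic table, properties are rows
        CREATE TABLE item_attribute (
            item_id      INT          NOT NULL,
            attribute_id INT          NOT NULL,
            value        VARCHAR(255),
            PRIMARY KEY (item_id, attribute_id)
        );

        -- Per-category table: properties are columns
        CREATE TABLE item_laptop (
            item_id     INT PRIMARY KEY,
            cpu_model   VARCHAR(100),
            ram_gb      INT,
            screen_inch DECIMAL(4,1)
        );
        -- Adding a new property later:
        -- ALTER TABLE item_laptop ADD COLUMN battery_wh INT;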

    Read the article

  • How to modernize an enormous legacy database?

    - by smayers81
    I have a question, just looking for suggestions here. So, my application is 'modernizing' a desktop application by converting it to the web, with an ICEFaces UI and server side written in Java. However, they are keeping around the same Oracle database, which at current count has about 700-900 tables and probably a billion total records in the tables. Some individual tables have 250 million rows, many have over 25 million. Needless to say, the database is not scaling well. As a result, the performance of the application is looking to be abysmal. The architects / decision makers-that-be have all either refused or are unwilling to restructure the persistence. So, basically we are putting a fresh coat of paint on a functional desktop application that currently serves most user needs and does so with relative ease and quick performance. I am having trouble sleeping at night thinking of how poorly this application is going to perform and how difficult it is going to be for everyday users to do their job. So, my question is, what options do I have to mitigate this impending disaster? Is there some type of intermediate layer I can put in between the database and the Java code to speed up performance while at the same time keeping the database structure intact? Caching is obviously an option, but I don't see that as being a cure-all. Is it possible to layer a NoSQL DB in between or something?
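
    Since the schema itself is off-limits, one database-side option worth weighing is precomputing the heaviest aggregates off-peak; in Oracle that is typically a materialized view. A hedged sketch (the query, names and refresh schedule are all invented for illustration):

        -- Precompute an expensive aggregate nightly instead of on every request
        CREATE MATERIALIZED VIEW mv_order_totals
        BUILD IMMEDIATE
        REFRESH COMPLETE
        START WITH TRUNC(SYSDATE) + 1 + 2/24   -- first refresh at 02:00 tomorrow
        NEXT  TRUNC(SYSDATE) + 1 + 2/24        -- then nightly
        AS
        SELECT customer_id,
               TRUNC(order_date) AS order_day,
               COUNT(*)          AS order_count,
               SUM(order_amount) AS order_total
        FROM   orders
        GROUP BY customer_id, TRUNC(order_date);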

    Read the article

  • should this database table be normalized?

    - by oo
    I have taken over a database that stores fitness information and we were having a debate about a certain table and whether it should stay as one table or get broken up into three tables. Today, there is one table called: workouts that has the following fields id, exercise_id, reps, weight, date, person_id So if I did 2 sets of 3 different exercises on one day, I would have 6 records in that table for that day. for example: id, exercise_id, reps, weight, date, person_id 1, 1, 10, 100, 1/1/2010, 10 2, 1, 10, 100, 1/1/2010, 10 3, 1, 10, 100, 1/1/2010, 10 4, 2, 10, 100, 1/1/2010, 10 5, 2, 10, 100, 1/1/2010, 10 6, 2, 10, 100, 1/1/2010, 10 So the question is, given that there is some redundant data (date, personid, exercise_id) in multiple records, should this be normalized to three tables WorkoutSummary: - id - date - person_id WorkoutExercise: - id - workout_id (foreign key into WorkoutSummary) - exercise_id WorkoutSets: - id - workout_exercise_id (foreign key into WorkoutExercise) - reps - weight I would guess the downside is that the queries would be slower after this refactoring, as now we would need to join 3 tables to do the same query that had no joins before. The benefit of the refactoring is that it allows us in the future to add new fields at the workout summary level or the exercise level without adding in more duplication. Any feedback on this debate?
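
    A sketch of the three-table version from the question in plain SQL DDL (types are assumed), together with the three-way join the old no-join query would become:

        CREATE TABLE workout_summary (
            id           INT PRIMARY KEY,
            workout_date DATE NOT NULL,
            person_id    INT  NOT NULL
        );

        CREATE TABLE workout_exercise (
            id          INT PRIMARY KEY,
            workout_id  INT NOT NULL REFERENCES workout_summary (id),
            exercise_id INT NOT NULL
        );

        CREATE TABLE workout_set (
            id                  INT PRIMARY KEY,
            workout_exercise_id INT NOT NULL REFERENCES workout_exercise (id),
            reps                INT NOT NULL,
            weight              DECIMAL(6,2) NOT NULL
        );

        -- The old single-table query becomes a three-way join:
        SELECT ws.workout_date, ws.person_id, we.exercise_id, s.reps, s.weight
        FROM   workout_summary  ws
               JOIN workout_exercise we ON we.workout_id = ws.id
               JOIN workout_set      s  ON s.workout_exercise_id = we.id
        WHERE  ws.person_id = 10;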

    Read the article

  • Subselecting with MDX

    - by Vince
    Greetings stack overflow community. I've recently started building an OLAP cube in SSAS2008 and have gotten stuck. I would be grateful if someone could at least point me in the right direction. Situation: Two fact tables, same cube. FactCalls holds information about calls made by subscribers, FactTopups holds topup data. Both tables have numerous common dimensions, one of them being the Subscriber dimension. FactCalls             FactTopups SubscriberKey      SubscriberKey CallDuration         DateKey CallCost               Topup Value ... What I am trying to achieve is to be able to build FactCalls reports based on distinct subscribers that have topped up their accounts within the last 7 days. What I am basically looking for is an MDX equivalent of SQL's: select * from FactCalls where SubscriberKey in ( select distinct SubscriberKey from FactTopups where ... ); I've tried creating a degenerate dimension for both tables containing SubscriberKey and doing: Exist( [Calls Degenerate].[Subscriber Key].Children, [Topups Degenerate].[Subscriber Key].Children ) without success. Kind regards, Vince

    Read the article

  • search dataset from xml file

    - by Anelim
    Hi, I need to filter the results I obtain when I load my xml file. For example I need to search the xml data for items with keyword "Chemistry" for example. The below xml example is a summary of my xml file. The data is loaded in a gridview. Could you help? Thanks! Xml File (summary): <CONTRACTS> <CONTRACT> <CONTRACTID>779</CONTRACTID> <NAME>ContractName</NAME> <KEYWORDS>Chemistry, Engineering, Chemical</KEYWORDS> <CONTRACTSTARTDATE>1/8/2005</CONTRACTSTARTDATE> <CONTRACTENDDATE>31/7/2008</CONTRACTENDDATE> <COMMODITIES><COMMODITY><COMMODITYCODE>CHEM</COMMODITYCODE> <COMMODITYNAME>Chemicals</COMMODITYNAME></COMMODITY></COMMODITIES> </CONTRACT></CONTRACTS> My code behind code is: Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load Dim ds As DataSet = New DataSet() ds.ReadXml(AppDomain.CurrentDomain.BaseDirectory + "/testxml.xml") Dim dtContract As DataTable = ds.Tables(0) Dim dtJoinCommodities As DataTable = ds.Tables(1) Dim dtCommodity As DataTable = ds.Tables(2) dtContract.Columns.Add("COMMODITYCODE") dtContract.Columns.Add("COMMODITYNAME") Dim count As Integer = 0 Dim commodityCode As String = Nothing Dim commodityName As String = Nothing Dim dRowJoinCommodity As DataRow Dim trimChar As Char() = {","c, " "c} Dim textboxstring As String = "KEYWORDS like 'pencil'" For Each dRow As DataRow In dtContract.Select(textboxstring) commodityCode = "" commodityName = "" count = dtContract.Rows.IndexOf(dRow) dRowJoinCommodity = dtJoinCommodities.Rows(count) For Each dRowCommodities As DataRow In dtCommodity.Rows If dRowCommodities("COMMODITIES_Id").ToString() = dRowJoinCommodity("COMMODITIES_ID").ToString() Then commodityCode = commodityCode + dRowCommodities("COMMODITYCODE").ToString() + ", " commodityName = commodityName + dRowCommodities("COMMODITYNAME").ToString() + ", " End If Next commodityCode = commodityCode.TrimEnd(trimChar) commodityName = commodityName.TrimEnd(trimChar) dRow("COMMODITYCODE") = commodityCode dRow("COMMODITYNAME") = commodityName Next GridView1.DataSource = dtContract GridView1.DataBind() End Sub

    Read the article

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows). The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables. My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process. Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.
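
    One concrete middle ground between live views and a full batch redesign, at least on SQL Server, is an indexed view that materializes the aggregate so queries read precomputed rows; a hedged sketch with invented table and column names (indexed views carry restrictions such as SCHEMABINDING, COUNT_BIG and no outer joins, so the views described in the question might need to be split up first):

        CREATE VIEW dbo.vSalesDaily
        WITH SCHEMABINDING
        AS
        SELECT region_id,
               sale_date,
               COUNT_BIG(*) AS row_count,    -- required in an indexed view
               SUM(amount)  AS total_amount  -- assumes amount is NOT NULL
        FROM   dbo.Sales
        GROUP BY region_id, sale_date;
        GO
        -- Materialize it; matching aggregate queries can then be answered
        -- from the index instead of scanning the base table.
        CREATE UNIQUE CLUSTERED INDEX IX_vSalesDaily
            ON dbo.vSalesDaily (region_id, sale_date);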

    Read the article

  • Looking for a .Net ORM

    - by SLaks
    I'm looking for a .Net 3.5 ORM framework with a rather unusual set of requirements: I need to create and alter tables at runtime with schemas defined by my end-users. (Obviously, that wouldn't be strongly-typed; I'm looking for something like a DataTable there) I also want regular strongly-typed partial classes for rows in non-dynamic tables, with custom validation and other logic. (Like normal ORMs) I want to load the entire database (or some entire tables) once, and keep it in memory throughout the life of the (WinForms) GUI. (I have a shared SQL Server with a relatively slow connection) I also want regular LINQ support (like LINQ-to-SQL) for ASP.Net on the shared server (which has a fast connection to SQL Server) In addition to SQL Server, I also want to be able to use a single-file database that would support XCopy deployment (without installing SQL CE on the end-user's machine). (Probably Access or SQLite) Finally, it has to be free (unless it's OpenAccess) I'll probably have to write it myself, as I don't think there is an existing ORM that meets these requirements. However, I don't want to re-invent the wheel if there is one, hence this question. I'm using VS2010, but I don't know when my webhost (LFC) will upgrade to .Net 4.0

    Read the article

  • H2 (embedded mode) database files problem

    - by aeter
    There is an H2 database file in my src directory (Java, Eclipse): h2test.db The problem: starting the h2.jar from the command line (and thus the H2 browser interface on port 8082), I have created 2 tables, 'test1' and 'test2', in h2test.db and I have put some data in them; when trying to access them from Java code (JDBC), it throws a "table not found" exception. A "show tables" from the Java code shows a resultset with 0 rows. Also, when creating a new table ('newtest') from the Java code (CREATE TABLE ... etc), I cannot see it when starting the h2.jar browser interface afterwards; just the other two tables ('test1' and 'test2') are shown (but then the newly created table 'newtest' is accessible from the Java code). I'm inexperienced with embedded databases; I believe I'm doing something fundamentally wrong here. My assumption is that I'm accessing the same file: once from the Java app, and once from the H2 console browser interface. I cannot seem to understand it; what am I doing wrong here?

    Read the article

  • How to Upload a file from client to server using OFBIZ?

    - by SIVAKUMAR.J
    Hi all, Im new to ofbiz.So is my question is have any mistake forgive me for my mistakes.Im new to ofbiz so i did not know some terminologies in ofbiz.Sometimes my question is not clear because of lack of knowledge in ofbiz.So try to understand my question and give me a good solution with respect to my level.Because some solutions are in very high level cannot able to understand for me.So please give the solution with good examples. My problem is i created a project inside the ofbiz/hot-deploy folder namely "productionmgntSystem".Inside the folder "ofbiz\hot-deploy\productionmgntSystem\webapp\productionmgntSystem" i created a .ftl file namely "app_details_1.ftl" .The following are the coding of this file <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <title>Insert title here</title> <script TYPE="TEXT/JAVASCRIPT" language=""JAVASCRIPT"> function uploadFile() { //alert("Before calling upload.jsp"); window.location='<@ofbizUrl>testing_service1</@ofbizUrl>' } </script> </head> <!-- <form action="<@ofbizUrl>testing_service1</@ofbizUrl>" enctype="multipart/form-data" name="app_details_frm"> --> <form action="<@ofbizUrl>logout1</@ofbizUrl>" enctype="multipart/form-data" name="app_details_frm"> <center style="height: 299px; "> <table border="0" style="height: 177px; width: 788px"> <tr style="height: 115px; "> <td style="width: 103px; "> <td style="width: 413px; "><h1>APPLICATION DETAILS</h1> <td style="width: 55px; "> </tr> <tr> <td style="width: 125px; ">Application name : </td> <td> <input name="app_name_txt" id="txt_1" value=" " /> </td> </tr> <tr> <td style="width: 125px; ">Excell sheet &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;: </td> <td> <input type="file" name="filename"/> </td> </tr> <tr> <td> <!-- <input type="button" name="logout1_cmd" value="Logout" onclick="logout1()"/> --> <input type="submit" name="logout_cmd" value="logout"/> </td> <td> <!-- <input type="submit" name="upload_cmd" value="Submit" /> --> <input type="button" name="upload1_cmd" value="Upload" onclick="uploadFile()"/> </td> </tr> </table> </center> </form> </html> the following coding is present in the file "ofbiz\hot-deploy\productionmgntSystem\webapp\productionmgntSystem\WEB-INF\controller.xml" ...... ....... ........ <request-map uri="testing_service1"> <security https="true" auth="true"/> <event type="java" path="org.ofbiz.productionmgntSystem.web_app_req.WebServices1" invoke="testingService"/> <response name="ok" type="view" value="ok_view"/> <response name="exception" type="view" value="exception_view"/> </request-map> .......... ............ .......... <view-map name="ok_view" type="ftl" page="ok_view.ftl"/> <view-map name="exception_view" type="ftl" page="exception_view.ftl"/> ................ ............. ............. 
The following are the coding present in the file "ofbiz\hot-deploy\productionmgntSystem\src\org\ofbiz\productionmgntSystem\web_app_req\WebServices1.java" package org.ofbiz.productionmgntSystem.web_app_req; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import java.io.DataInputStream; import java.io.FileOutputStream; import java.io.IOException; public class WebServices1 { public static String testingService(HttpServletRequest request, HttpServletResponse response) { //int i=0; String result="ok"; System.out.println("\n\n\t*************************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response)- Start"); String contentType=request.getContentType(); System.out.println("\n\n\t*************************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response)- contentType : "+contentType); String str=new String(); // response.setContentType("text/html"); //PrintWriter writer; if ((contentType != null) && (contentType.indexOf("multipart/form-data") >= 0)) { System.out.println("\n\n\t**********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) after if (contentType != null)"); try { // writer=response.getWriter(); System.out.println("\n\n\t**********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - try Start"); DataInputStream in = new DataInputStream(request.getInputStream()); int formDataLength = request.getContentLength(); byte dataBytes[] = new byte[formDataLength]; int byteRead = 0; int totalBytesRead = 0; //this loop converting the uploaded file into byte code while (totalBytesRead < formDataLength) { byteRead = in.read(dataBytes, totalBytesRead,formDataLength); totalBytesRead += byteRead; } String file = new String(dataBytes); //for saving the file name String saveFile = file.substring(file.indexOf("filename=\"") + 10); saveFile = saveFile.substring(0, saveFile.indexOf("\n")); saveFile = saveFile.substring(saveFile.lastIndexOf("\\")+ 1,saveFile.indexOf("\"")); int lastIndex = contentType.lastIndexOf("="); String boundary = contentType.substring(lastIndex + 1,contentType.length()); int pos; //extracting the index of file pos = file.indexOf("filename=\""); pos = file.indexOf("\n", pos) + 1; pos = file.indexOf("\n", pos) + 1; pos = file.indexOf("\n", pos) + 1; int boundaryLocation = file.indexOf(boundary, pos) - 4; int startPos = ((file.substring(0, pos)).getBytes()).length; int endPos = ((file.substring(0, boundaryLocation)).getBytes()).length; //creating a new file with the same name and writing the content in new file FileOutputStream fileOut = new FileOutputStream("/"+saveFile); fileOut.write(dataBytes, startPos, (endPos - startPos)); fileOut.flush(); fileOut.close(); System.out.println("\n\n\t**********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - try End"); } catch(IOException ioe) { System.out.println("\n\n\t*********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - Catch IOException"); //ioe.printStackTrace(); return("exception"); } catch(Exception ex) { System.out.println("\n\n\t*********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - Catch Exception"); 
return("exception"); } } else { System.out.println("\n\n\t********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) else part"); result="exception"; } System.out.println("\n\n\t*************************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response)- End"); return(result); } } I want to upload a file to the server.The file is get from user "<input type="file"..> tag in the "app_details_1.ftl" file & it is updated into the server by using the method "testingService(HttpServletRequest request, HttpServletResponse response)" in the class "WebServices1".But the file is not uploaded. Give me a good solution for uploading a file to the server. Thanks & Regards, Sivakumar.J

    Read the article

  • Ruby on Rails ActiveRecord/Include/Associations can't get my query to work

    - by Cypher
    I just started learning Rails and I'm just trying to set up query via associations. All the queries I try to write seem to be doing bizzare things and end up trying to query two tables parsed together with an '_' as one table. I have no clue why this would ever happen My tables are as follows: schools: id name variables: id name type var_entries: id variable_id entry school_entries: id school_id var_entry_id my rails association tables are $local = { :adapter => "mysql", :host => "localhost", :port => "3306".to_i, :database => "spy_2", :username =>"root", :password => "vertrigo" } class School < ActiveRecord::Base establish_connection $local has_many :school_entries has_many :var_entries, :through => school_entries end class Variable < ActiveRecord::Base establish_connection $local has_many :var_entries has_many :school_entries, :through => :var_entries end class VarEntry < ActiveRecord::Base establish_connection $local has_many_and_belongs_to :school_entries belongs_to :variables end class SchoolEntry < ActiveRecord::Base establish_connection $local belongs_to :school has_many :var_entries end I want to do this sql query: SELECT school_id, variable_id,rank FROM school_entries, variables, var_entries, schools WHERE var_entries.variable_id = variables.id AND school_entries.var_entry_id = var_entries.id AND schools.id = school_entries.school_id AND variables.type = 'number'; and put it into Rails notation: here is one of my many failed attempts schools = VarEntry.all(:include => [:school_entries, :variables], :conditions => "variables.type = 'number'") the error: 'const_missing': uninitialized constant VarEntry::Variables (NameError) if i remove variables schools = VarEntry.all(:include => [:school_entries, :variables], :conditions => "type = 'number'") the error is: Mysql::Error: Unkown column 'type' in 'where clause': SELECT * FROM 'var_entries' WHERE (type=number) (ActiveRecord::StatementInvalid) Can anyone tell me where I'm going horribly wrong?

    Read the article

  • Using repaint() method.

    - by owca
    I'm still struggling to create this game : http://stackoverflow.com/questions/2844190/choosing-design-method-for-ladder-like-word-game .I've got it almost working but there is a problem though. When I'm inserting a word and it's correct, the whole window should reload, and JButtons containing letters should be repainted with different style. But somehow repaint() method for the game panel (in Main method) doesn't affect it at all. What am I doing wrong ? Here's my code: Main: import java.util.Scanner; import javax.swing.*; import java.awt.*; public class Main { public static void main(String[] args){ final JFrame f = new JFrame("Ladder Game"); Scanner sc = new Scanner(System.in); System.out.println("Creating game data..."); System.out.println("Height: "); //setting height of the grid while (!sc.hasNextInt()) { System.out.println("int, please!"); sc.next(); } final int height = sc.nextInt(); /* * I'm creating Grid[]game. Each row of game contains Grid of Element[]line. * Each row of line contains Elements, which are single letters in the game. */ Grid[]game = new Grid[height]; for(int L = 0; L < height; L++){ Grid row = null; int i = L+1; String s; do { System.out.println("Length "+i+", please!"); s = sc.next(); } while (s.length() != i); Element[] line = new Element[s.length()]; Element single = null; String[] temp = null; String[] temp2 = new String[s.length()]; temp = s.split(""); for( int j = temp2.length; j>0; j--){ temp2[j-1] = temp[j]; } for (int k = 0 ; k < temp2.length ; k++) { if( k == 0 ){ single = new Element(temp2[k], 2); } else{ single = new Element(temp2[k], 1); } line[k] = single; } row = new Grid(line); game[L] = row; } //############################################ //THE GAME STARTS HERE //############################################ //create new game panel with box layout JPanel panel = new JPanel(); panel.setLayout(new BoxLayout(panel, BoxLayout.Y_AXIS)); panel.setBackground(Color.ORANGE); panel.setBorder(BorderFactory.createEmptyBorder(10, 10, 10, 10)); //for each row of the game array add panel containing letters Single panel //is drawn with Grid's paint() method and then returned here to be added for(int i = 0; i < game.length; i++){ panel.add(game[i].paint()); } f.setContentPane(panel); f.pack(); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); f.setVisible(true); boolean end = false; boolean word = false; String text; /* * Game continues until solved() returns true. First check if given word matches the length, * and then the value of any row. If yes - change state of each letter from EMPTY * to OTHER_LETTER. Then repaint the window. 
*/ while( !end ){ while( !word ){ text = JOptionPane.showInputDialog("Input word: "); for(int i = 1; i< game.length; i++){ if(game[i].equalLength(text)){ if(game[i].equalValue(text)){ game[i].changeState(3); f.repaint(); //simple debug - I'm checking if letter, and //state values for each Element are proper for(int k=0; k<=i; k++){ System.out.print(game[k].e[k].letter()); } System.out.println(); for(int k=0; k<=i; k++){ System.out.print(game[k].e[k].getState()); } System.out.println(); //set word to true and ask for another word word = true; } } } } word = false; //check if the game has ended for(int i = 0; i < game.length; i++){ if(game[i].solved()){ end = true; } else { end = false; } } } } } Element: import javax.swing.*; import java.awt.*; public class Element { final int INVISIBLE = 0; final int EMPTY = 1; final int FIRST_LETTER = 2; final int OTHER_LETTER = 3; private int state; private String letter; public Element(){ } //empty block public Element(int state){ this("", 0); } //filled block public Element(String s, int state){ this.state = state; this.letter = s; } public JButton paint(){ JButton button = null; if( state == EMPTY ){ button = new JButton(" "); button.setBackground(Color.WHITE); } else if ( state == FIRST_LETTER ){ button = new JButton(letter); button.setBackground(Color.red); } else { button = new JButton(letter); button.setBackground(Color.yellow); } return button; } public void changeState(int s){ state = s; } public void setLetter(String s){ letter = s; } public String letter(){ return letter; } public int getState(){ return state; } } Grid: import javax.swing.*; import java.awt.*; public class Grid extends JPanel{ public Element[]e; private Grid[]g; public Grid(){} public Grid( Element[]elements ){ e = new Element[elements.length]; for(int i=0; i< e.length; i++){ e[i] = elements[i]; } } public Grid(Grid[]grid){ g = new Grid[grid.length]; for(int i=0; i<g.length; i++){ g[i] = grid[i]; } Dimension d = new Dimension(600, 600); setMinimumSize(d); setPreferredSize(new Dimension(d)); setMaximumSize(d); } //for Each element in line - change state to i public void changeState(int i){ for(int j=0; j< e.length; j++){ e[j].changeState(3); } } //create panel which will be single row of the game. Add elements to the panel. // return JPanel to be added to grid. public JPanel paint(){ JPanel panel = new JPanel(); panel.setLayout(new GridLayout(1, e.length)); panel.setBorder(BorderFactory.createEmptyBorder(2, 2, 2, 2)); for(int j = 0; j < e.length; j++){ panel.add(e[j].paint()); } return panel; } //check if the length of given string is equal to length of row public boolean equalLength(String s){ int len = s.length(); boolean equal = false; for(int j = 0; j < e.length; j++){ if(e.length == len){ equal = true; } } return equal; } //check if the value of given string is equal to values of elements in row public boolean equalValue(String s){ int len = s.length(); boolean equal = false; String[] temp = null; String[] temp2 = new String[len]; temp = s.split(""); for( int j = len; j>0; j--){ temp2[j-1] = temp[j]; } for(int j = 0; j < e.length; j++){ if( e[j].letter().equals(temp2[j]) ){ equal = true; } else { equal = false; } } if(equal){ for(int i = 0; i < e.length; i++){ e[i].changeState(3); } } return equal; } //check if the game has finished public boolean solved(){ boolean solved = false; for(int j = 0; j < e.length; j++){ if(e[j].getState() == 3){ solved = true; } else { solved = false; } } return solved; } }

    Read the article

  • Help! The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space

    - by michael.lukatchik
    We're running SQL Server 2000. In our database, we have an "Orders" table with approximately 750,000 rows. We can perform simple SELECT statements on this table. However, when we want to run a query like SELECT TOP 100 * FROM Orders ORDER BY Date_Ordered DESC, we receive the following message: Error: 9002, Severity: 17, State: 6 The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space. We have other tables in our database which are similar in size in terms of the number of records they contain (i.e. around 700,000 records). On those tables, we can run any queries we'd like and we never receive a message about 'tempdb being full'. To resolve this, we've backed up our database, shrunk the actual database and also shrunk the database and files in the tempdb system database, but this hasn't resolved the issue. The size of our log file is set to autogrow. We're not sure where to go next. Does anyone have any ideas why we might still be receiving this message? Error: 9002, Severity: 17, State: 6 The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space.
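
    For context, tempdb's transaction log cannot actually be backed up; on SQL Server 2000 the usual first steps are to check how full the logs really are and to give tempdb's log room to grow, since the large ORDER BY sort is what spills into tempdb. A sketch, with the sizes purely illustrative:

        -- How full is each database's log right now?
        DBCC SQLPERF(LOGSPACE);

        -- Inspect tempdb's files and growth settings
        USE tempdb;
        EXEC sp_helpfile;

        -- Give tempdb's log more room (templog is the default logical name)
        ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, SIZE = 500MB, FILEGROWTH = 50MB);

        -- Longer term, an index on the sort column avoids the big sort entirely
        CREATE INDEX IX_Orders_Date_Ordered ON Orders (Date_Ordered);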

    Read the article
