Search Results

Search found 17430 results on 698 pages for 'false positive'.


  • Deploying Sinatra app on Dreamhost/Passenger with custom gems

    - by darkism
    I've got a Sinatra app that I'm trying to run on Dreamhost that makes use of pony to send email. In order to get the application up and running at the very beginning (before adding pony), I had to gem unpack rack and gem unpack sinatra into the vendor/ directory, so this was my config.ru:

        require 'vendor/rack/lib/rack'
        require 'vendor/sinatra/lib/sinatra'

        set :run, false
        set :environment, :production
        set :views, "views"

        require 'public/myapp.rb'
        run Sinatra::Application

    I have already done gem install pony and gem unpack pony (into vendor/). Afterwards, I tried adding require 'vendor/sinatra/lib/pony' to config.ru, only to have Passenger complain that pony's dependencies (mime-types, tmail) can't be found either! There has to be a better way to use other gems and tone down those long, ugly, redundant requires. Any thoughts?
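
    One direction worth sketching (not from the original post; the glob pattern is an assumption about the vendor/ layout): add every vendored gem's lib/ directory to the load path once, so each gem and its vendored dependencies can be required by plain name:

        # config.ru sketch: put vendor/*/lib on the load path, then require normally
        Dir.glob(File.join(File.dirname(__FILE__), 'vendor', '*', 'lib')) do |dir|
          $LOAD_PATH.unshift(dir)
        end

        require 'rack'
        require 'sinatra'
        require 'pony'   # mime-types and tmail resolve too, if they are vendored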

    Read the article

  • display data from json file in datagrid

    - by kayn
    I want to display data from a JSON file in a data grid using dojo version 1.0.0. I am able to display the data when I declare it in my code, but when I store the same data in a JSON file so I can reference it in my script, I get an empty grid. This is my JSON file:

        { data: [
            ['10', 'myfile',  'Css',   'CS Degree', 'Dr. Bottoman', 'This is mine'],
            ['10', 'myfile2', 'CS716', 'CS Degree', 'Prof Frank',   'This is course'],
            ['10', 'myfile3', 'CS714', 'CS Degree', 'Dr. Ree',      'Welcome'],
            ['14', 'myfile4', 'CS772', 'CS Degree', 'Mr. Boss',     'This will display content'],
            ['18', 'myfile5', 'CS774', 'CS Degree', 'Ms. Kirk',     'This is networks.']
        ] }

    and below is my code:

        @import "../../../dojo/resources/dojo.css";
        @import "../_grid/Grid.css";

        body { font-size: 1.0em; }
        #grid { height: 400px; border: 1px solid silver; }
        .text-oneline { white-space: nowrap; overflow: hidden; text-overflow: ellipsis; }
        .text-scrolling { height: 4em; overflow: auto; }
        .text-scrolling { width: 21.5em; }

        dojo.require("dojox.grid.Grid");
        dojo.require("dojox.grid._data.model");
        dojo.require("dojo.parser");

        <script type="text/javascript">
        /* <span dojoType="dojo.data.ItemFileWriteStore" jsId="myStore" url="course.json"></span> */

        data = [
            ['10', 'myfile',  'Css',   'CS Degree', 'Dr. Bottoman', 'This is mine'],
            ['10', 'myfile2', 'CS716', 'CS Degree', 'Prof Frank',   'This is course'],
            ['10', 'myfile3', 'CS714', 'CS Degree', 'Dr. Ree',      'Welcome'],
            ['14', 'myfile4', 'CS772', 'CS Degree', 'Mr. Boss',     'This will display content'],
            ['18', 'myfile5', 'CS774', 'CS Degree', 'Ms. Kirk',     'This is networks.']
        ];

        getDetailData = function(inRowIndex) {
            var row = data[this.grid.dataRow % data.length];
            switch (this.index) {
                case 0: return row[5];
                case 1: return row[2];
                case 2: return row[0];
                case 3: return row[1];
                case 4: return row[3];
                case 5: return row[4];
                default: return row[this.index];
            }
        }

        getName = function(inRowIndex) {
            var row = data[inRowIndex % data.length];
            return row[1];
        }

        // Main grid structure
        var gridCells = [
            { type: 'dojox.GridRowView', width: '20px' },
            { onBeforeRow: function(inDataIndex, inSubRows) {
                  inSubRows[1].hidden = !detailRows[inDataIndex];
              },
              cells: [[
                  { name: 'Master', width: 3, get: getCheck, styles: 'text-align: center;' },
                  { name: 'Detail', get: getName, width: 60 },
              ], [
                  { name: '', get: getDetail, colSpan: 2, styles: 'padding: 0; margin: 0;' }
              ]]
            }
        ];

        // html for the +/- cell
        function getCheck(inRowIndex) {
            var image = (detailRows[inRowIndex] ? 'open.gif' : 'closed.gif');
            var show = (detailRows[inRowIndex] ? 'false' : 'true');
            return '';
        }

        // provide html for the Detail cell in the master grid
        function getDetail(inRowIndex) {
            var cell = this;
            // we can affect styles and content here, but we have to wait to access actual nodes
            setTimeout(function() { buildDetailgrid(inRowIndex, cell); }, 1);
            // look for a Detailgrid
            var Detailgrid = dijit.byId(makeDetailgridId(inRowIndex));
            var h = (Detailgrid ? Detailgrid.cacheHeight : "120") + "px";
            // insert a placeholder
            return '';
        }

        // the Detail cell contains a Detailgrid which we set up below
        var DetailgridCells = [{
            noscroll: true,
            cells: [[
                { name: "Brief Course Description", width: "auto" },
                { name: "Course Code" },
                { name: "Credits" },
                { name: "Subject" },
                { name: "Prerequisite" },
                { name: "Lecturer" }
            ], []]
        }];

        var DetailgridProps = {
            structure: DetailgridCells,
            rowCount: 1,
            autoHeight: true,
            autoRender: false,
            "get": getDetailData
        };

        // identify Detailgrids by their row indices
        function makeDetailgridId(inRowIndex) {
            return grid.widgetId + "Detailgrid" + inRowIndex;
        }

        // if a Detailgrid exists at inRowIndex, detach it from the DOM
        function detachDetailgrid(inRowIndex) {
            var Detailgrid = dijit.byId(makeDetailgridId(inRowIndex));
            if (Detailgrid) dojox.grid.removeNode(Detailgrid.domNode);
        }

        // render a Detailgrid into inCell at inRowIndex
        function buildDetailgrid(inRowIndex, inCell) {
            var n = inCell.getNode(inRowIndex).firstChild;
            var id = makeDetailgridId(inRowIndex);
            var Detailgrid = dijit.byId(id);
            if (Detailgrid) {
                n.appendChild(Detailgrid.domNode);
            } else {
                DetailgridProps.dataRow = inRowIndex;
                DetailgridProps.widgetId = id;
                Detailgrid = new dojox.VirtualGrid(DetailgridProps, n);
            }
            if (Detailgrid) {
                Detailgrid.render();
                Detailgrid.cacheHeight = Detailgrid.domNode.offsetHeight;
                inCell.grid.rowHeightChanged(inRowIndex);
            }
        }

        // destroy Detailgrid at inRowIndex
        function destroyDetailgrid(inRowIndex) {
            var Detailgrid = dijit.byId(makeDetailgridId(inRowIndex));
            if (Detailgrid) Detailgrid.destroy();
        }

        // when user clicks the +/-
        detailRows = [];
        function toggleDetail(inIndex, inShow) {
            if (!inShow) detachDetailgrid(inIndex);
            detailRows[inIndex] = inShow;
            grid.updateRow(inIndex);
        }

        dojo.addOnLoad(function() {
            window["grid"] = dijit.byId("grid");
            dojo.connect(grid, 'rowRemoved', destroyDetailgrid);
        });
        </script>

        Test grid
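
    For reference, a sketch of the shape dojo's ItemFileWriteStore conventionally expects: valid JSON (quoted keys, no stray commas) with one named-attribute object per item. The attribute names below are assumptions, not from the post; an array-of-arrays file like the one above is not valid JSON as written, which alone would explain the empty grid:

        {
            "identifier": "id",
            "items": [
                { "id": "10", "name": "myfile",  "code": "Css",
                  "degree": "CS Degree", "lecturer": "Dr. Bottoman", "text": "This is mine" },
                { "id": "14", "name": "myfile4", "code": "CS772",
                  "degree": "CS Degree", "lecturer": "Mr. Boss", "text": "This will display content" }
            ]
        }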

    Read the article

  • Embed ASP.NET server code in JavaScript

    - by hotcoder
    I've embedded the following server-side code within a <script> tag:

        <%
        Dim dataTable As DataTable = cTab.getTabs(Session("UserID"))
        If (dataTable.Rows.Count > 0) Then
            For i As Int16 = 0 To dataTable.Rows.Count%>
                {
                    contentEl: 'tab' + '<%dataTable.Rows(0)("TabID").ToString()%>',
                    title: '<%dataTable.Rows(0)("TabName").ToString()%>',
                    closable: false,
                    autoScroll: true
                },
        <%  Next
        End If %>

    But it is not returning the desired results due to syntax problems. How can I write it correctly?
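
    One plausible corrected form, as a sketch only (it assumes cTab.getTabs returns a populated DataTable, as in the question). Three changes: emit values with <%= ... %> instead of <% ... %>, index the current row i instead of Rows(0), and stop the loop at Rows.Count - 1:

        <%
        Dim dataTable As DataTable = cTab.getTabs(Session("UserID"))
        For i As Integer = 0 To dataTable.Rows.Count - 1
        %>
        {
            contentEl: 'tab<%= dataTable.Rows(i)("TabID").ToString() %>',
            title: '<%= dataTable.Rows(i)("TabName").ToString() %>',
            closable: false,
            autoScroll: true
        },
        <% Next %>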

    Read the article

  • UpdatePanel, Repeater, DataBinding Problem

    - by Gordon Carpenter-Thompson
    In a user control, I've got a Repeater inside an UpdatePanel (which is displayed inside a ModalPopupExtender). The Repeater is data-bound using an ArrayList of MyDTO objects. There are two buttons for each item in the list; upon binding, the ImageUrl and CommandArgument of each are set. This code works fine the first time around, but the CommandArgument is wrong thereafter. It seems like the display is updated correctly but the DTO isn't, and the CommandArgument sent is the one that has just been removed. Can anybody spot any problems with the code?

    ASCX:

        <asp:UpdatePanel ID="ViewDataDetail" runat="server" ChildrenAsTriggers="true">
            <Triggers>
                <asp:PostBackTrigger ControlID="ViewDataCloseButton" />
                <asp:AsyncPostBackTrigger ControlID="DataRepeater" />
            </Triggers>
            <ContentTemplate>
                <table width="100%" id="DataResults">
                    <asp:Repeater ID="DataRepeater" runat="server"
                            OnItemCommand="DataRepeater_ItemCommand"
                            OnItemDataBound="DataRepeater_ItemDataBound">
                        <HeaderTemplate>
                            <tr>
                                <th><b>Name</b></th>
                                <th><b>&nbsp;</b></th>
                            </tr>
                        </HeaderTemplate>
                        <ItemTemplate>
                            <tr>
                                <td><b><%#((MyDTO)Container.DataItem).Name%></b></td>
                                <td>
                                    <asp:ImageButton CausesValidation="false" ID="DeleteData" CommandName="Delete" runat="server" />
                                    <asp:ImageButton CausesValidation="false" ID="RunData" CommandName="Run" runat="server" />
                                </td>
                            </tr>
                            <tr>
                                <td colspan="2">
                                    <table>
                                        <tr>
                                            <td>Description : </td>
                                            <td><%#((MyDTO)Container.DataItem).Description%></td>
                                        </tr>
                                        <tr>
                                            <td>Search Text : </td>
                                            <td><%#((MyDTO)Container.DataItem).Text%></td>
                                        </tr>
                                    </table>
                                </td>
                            </tr>
                        </ItemTemplate>
                    </asp:Repeater>
                </table>
            </ContentTemplate>
        </asp:UpdatePanel>

    Code-behind:

        public DeleteData DeleteDataDelegate;
        public RetrieveData PopulateDataDelegate;
        public delegate ArrayList RetrieveData();
        public delegate void DeleteData(String sData);

        protected void Page_Load(object sender, EventArgs e)
        {
            // load the initial data..
            if (!Page.IsPostBack)
            {
                if (PopulateDataDelegate != null)
                {
                    this.DataRepeater.DataSource = this.PopulateDataDelegate();
                    this.DataRepeater.DataBind();
                }
            }
        }

        protected void DataRepeater_ItemCommand(object source, RepeaterCommandEventArgs e)
        {
            if (e.CommandName == "Delete")
            {
                if (DeleteDataDelegate != null)
                {
                    DeleteDataDelegate((String)e.CommandArgument);
                    BindDataToRepeater();
                }
            }
            else if (e.CommandName == "Run")
            {
                String sRunning = (String)e.CommandArgument;
                this.ViewDataModalPopupExtender.Hide();
            }
        }

        protected void DataRepeater_ItemDataBound(object source, RepeaterItemEventArgs e)
        {
            RepeaterItem item = e.Item;
            if (item != null && item.DataItem != null)
            {
                MyDTO oQuery = (MyDTO)item.DataItem;
                ImageButton oDeleteControl = (ImageButton)item.FindControl("DeleteData");
                ImageButton oRunControl = (ImageButton)item.FindControl("RunData");
                if (oDeleteControl != null && oRunControl != null)
                {
                    oRunControl.ImageUrl = "button_expand.gif";
                    oRunControl.CommandArgument = "MyID";
                    oDeleteControl.ImageUrl = "btn_remove.gif";
                    oDeleteControl.CommandArgument = "MyID";
                }
            }
        }

        public void BindDataToRepeater()
        {
            this.DataRepeater.DataSource = this.PopulateDataDelegate();
            this.DataRepeater.DataBind();
        }

        public void ShowModal(object sender, EventArgs e)
        {
            BindDataToRepeater();
            this.ViewDataModalPopupExtender.Show();
        }
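
    The literal "MyID" in ItemDataBound looks like a placeholder; if it is the real code, every button posts back the same argument regardless of the row. A sketch of binding the row's own identifier instead (MyDTO's ID property is hypothetical, not shown in the question):

        // hypothetical: bind the DTO's identifier as the command argument
        oRunControl.CommandArgument = oQuery.ID.ToString();
        oDeleteControl.CommandArgument = oQuery.ID.ToString();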

    Read the article

  • SQL Where question

    - by needshelp
    Hi all, I have a question about CASE statements and NULLs in a WHERE clause. I want to do the following:

        DECLARE @someVar int = null

        SELECT column1
        FROM TestTable t
        WHERE t.column1 = CASE WHEN @someVar IS NOT NULL THEN @someVar ELSE t.column1 END

    Here is the problem: let's say @someVar is null. Let's also say that column1 from TestTable t has NULL column values. Then my condition column1 = column1 in the CASE statement will never evaluate to true for those rows. I basically just want to be able to conditionally filter the column based on the value of @someVar if it's provided. Any help?
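
    A common pattern for this kind of optional filter, as a sketch (it guards the comparison explicitly, since NULL = NULL is never true in SQL):

        DECLARE @someVar int = null

        SELECT column1
        FROM TestTable AS t
        WHERE (@someVar IS NULL OR t.column1 = @someVar)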

    Read the article

  • Delphi and prevent event handling

    - by pKarelian
    How do you prevent new event handling from starting when an event handler is already running? I press Button1 and its handler starts, e.g., a slow printing job. There are several controls on the form (buttons, edits, combos) and I want a new event to be allowed only after the running handler has finished. I have used an fRunning variable to lock the shared event handler. Is there a more clever way to handle this?

        procedure TFormFoo.Button_Click(Sender: TObject);
        begin
          if not fRunning then
          try
            fRunning := true;
            if (Sender = Button1) then
              // Call something slow ...
            if (Sender = Button2) then
              // Call something ...
            if (Sender = Button3) then
              // Call something ...
          finally
            fRunning := false;
          end;
        end;
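
    One alternative worth sketching (an assumption about the setup, not a verified fix): disable the form for the duration of the handler, so clicks arriving meanwhile are discarded rather than merely ignored by a flag:

        procedure TFormFoo.Button_Click(Sender: TObject);
        begin
          Self.Enabled := False;  // greys out and blocks every control on the form
          try
            // Call something slow ...
          finally
            Self.Enabled := True;
          end;
        end;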

    Read the article

  • Results Delphi users who wish to use HID USB in windows

    - by Lex Dean
    For Delphi users who wish to use HID USB in Windows: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\USB contains a list of keys holding Vendor ID and Product ID numbers that coincide with the database on the USB website. These numbers, and the GUID held within the key, give important information needed to use HID.dll that is otherwise impossible to obtain. Control Panel/System/Hardware/Device Manager/Universal Serial Bus Controllers/Mass Storage Devices/Details simply lists the registry data. Programmer access has been documented through API32.dll, with a number of procedures that access the registry. But that is not the problem, even though it looks like the problem! The key issue is information about the registry and how to use it. These keys can be viewed in RegEdit.exe itself. Some parts of the registry, like the USB branch, have been given a Windows security-system type of protection with Authz.dll, giving the USB keys read and write protection, even through API32.dll. Only Microsoft gives out these details, and we all know Microsoft hates Delphi. C users have enjoyed this access for over 10 years now. Some will argue that you should never give out such information because some idiot may write a stupid virus (true), but the counter-argument is: should Delphi users be denied USB access for another ten years? What I do not have is skill in assembly code. I'm looking for someone who can trace how RegEdit.exe gets its access through Authz.dll to read the USB data. So I'm asking all who read this to petition any friend they have with this skill to get the Authz.dll information needed. I find that when communicating with USB.org, they reply to positive emails but do not bother when an email is even slightly negative. For all simple reasoning, all USB.org had to do was keep their secure key as they have done, and copy the same data into an unsecured key every time the data changes, for USB developers to access, instead of gating developer access behind Authz.dll. Authz.dll exposes these functions relevant to USB:

        AuthzFreeResourceManager
        AuthzFreeContext
        AuthzAccessCheck(Flags: DWORD; AuthzClientContext: AUTHZ_CLIENT_CONTEXT_HANDLE;
          pRequest: PAUTHZ_ACCESS_REQUEST; AuditInfo: AUTHZ_AUDIT_INFO_HANDLE;
          pSecurityDescriptor: PSECURITY_DESCRIPTOR;
          OptionalSecurityDescriptorArray: PSECURITY_DESCRIPTOR;
          OptionalSecurityDescriptorCount: DWORD; // OPTIONAL
          var pReply: AUTHZ_ACCESS_REPLY;
          pAuthzHandle: PAUTHZ_ACCESS_CHECK_RESULTS_HANDLE): BOOL;
        AuthzInitializeContextFromSid(Flags: DWORD; UserSid: PSID;
          AuthzResourceManager: AUTHZ_RESOURCE_MANAGER_HANDLE; pExpirationTime: int64;
          Identifier: LUID; DynamicGroupArgs: PVOID;
          pAuthzClientContext: PAUTHZ_CLIENT_CONTEXT_HANDLE): BOOL;
        AuthzInitializeResourceManager(flags: DWORD;
          pfnAccessCheck: PFN_AUTHZ_DYNAMIC_ACCESS_CHECK;
          pfnComputeDynamicGroups: PFN_AUTHZ_COMPUTE_DYNAMIC_GROUPS;
          pfnFreeDynamicGroups: PFN_AUTHZ_FREE_DYNAMIC_GROUPS;
          ResourceManagerName: PWideChar;
          pAuthzResourceManager: PAUTHZ_RESOURCE_MANAGER_HANDLE): BOOL;

    Further details are in Authz.h on kolers.com. J Lex Dean.

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven't taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you're too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak).

    1. My Switch Is Bored

    Allow me to present the rare and interesting execution plan operator, Switch. Books Online has a short description of Switch. Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I'm not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view:

        CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24));
        CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49));
        CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74));
        CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99));
        GO
        CREATE VIEW V1
        AS
        SELECT c1 FROM dbo.T1 UNION ALL
        SELECT c1 FROM dbo.T2 UNION ALL
        SELECT c1 FROM dbo.T3 UNION ALL
        SELECT c1 FROM dbo.T4;

    Not only that, but it needs an updatable local partitioned view. We'll need some primary keys to meet that requirement:

        ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);
        ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);
        ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);
        ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1);

    We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape):

        INSERT dbo.V1 (c1) VALUES (1);

    And now... the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2... and so on. In case you were wondering, here's a query and execution plan for a multi-row insert to the view:

        INSERT dbo.V1 (c1) VALUES (1), (2);

    Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn't need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it's hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage!
    2. Invisible Plan Operators

    The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven't tried that yet, make sure you're on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that's a good way to do it.

    The problem

    The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it:

        USE tempdb;
        GO
        CREATE TABLE dbo.Calendar
        (
            dt date NOT NULL,
            isWeekday bit NOT NULL,
            theYear smallint NOT NULL,
            CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt)
        );
        GO
        -- Monday is the first day of the week for me
        SET DATEFIRST 1;

        -- Add 100 years of data
        INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear)
        SELECT
            CA.dt,
            isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END,
            theYear = YEAR(CA.dt)
        FROM Sandpit.dbo.Numbers AS N
        CROSS APPLY (VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113)))) AS CA (dt)
        WHERE N.n BETWEEN 1 AND 36525;

    The following query counts the number of weekend days in 2013:

        SELECT Days = COUNT_BIG(*)
        FROM dbo.Calendar AS C
        WHERE theYear = 2013
        AND isWeekday = 0;

    It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query's WHERE clause. (Well, almost exactly; the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled. Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though. Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table:

        CREATE NONCLUSTERED INDEX Weekends
        ON dbo.Calendar(theYear)
        WHERE isWeekday = 0;

    The original query now produces a much more efficient plan. Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104). What's going on? The estimate was spot on before we added the index!

    Explanation

    You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially. The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row).
    All of these are correct, but that was before cost-based optimization got involved :)

    Cost-based optimization

    When cost-based optimization starts up, the logical tree above is copied into a structure (the 'memo') that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities, as trace flag 8608 output (initial memo contents) shows. During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select). The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison 'theYear = 2013' (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that.

    New Cardinality Estimates

    The new memo groups require two new cardinality estimates to be derived. First, LogOp_GetIdx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics:

        DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER;

    The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column 'theYear', producing an estimate of 365 rows (there are 365 days in 2013!):

        DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM;

    This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows:

        DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM;

    Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 21 and the selection on that index (LogOp_SelectIdx) in group 22 that it is deriving a cardinality estimate for. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity), we can see the optimizer's final output using trace flag 8607: this output shows the (incorrect, but understandable) 365-row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and 'copy out' from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so. To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate.
    This isn't intended to be a 'fix' of any sort; I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it:

        SELECT Days = COUNT_BIG(*) OVER ()
        FROM dbo.Calendar AS C
        WHERE theYear = 2013
        AND isWeekday = 0;

    The estimated execution plan shows the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate 'isWeekday = 0' as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don't see it in the execution plan), bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn't affect the optimizer's plan selection. I should also mention that we can work around the estimation issue by including the index's filtering columns in the index key:

        CREATE NONCLUSTERED INDEX Weekends
        ON dbo.Calendar(theYear, isWeekday)
        WHERE isWeekday = 0
        WITH (DROP_EXISTING = ON);

    There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;) With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek. That's all for today; remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators:

        Segment and Sequence Project
        Common Subexpression Spools
        Why Plan Operators Run Backwards
        Row Goals and the Top Operator
        Hash Match Flow Distinct
        Top N Sort
        Index Spools and Page Splits
        Singleton and Range Seeks
        Bitmaps
        Hash Join Performance
        Compute Scalar

    © 2013 Paul White – All Rights Reserved. Twitter: @SQL_Kiwi

    Read the article

  • HTTP Authentication with Web References

    - by Thor
    I have a web reference created from the WSDL, but I'm not allowed to call the function unless I pass in the username/password. The original code for the XML toolkit was:

        Set client = CreateObject("MSSOAP.SOAPClient30")
        URL = "http://" & host & "/_common/webservices/Trend?wsdl"
        client.mssoapinit (URL)
        client.ConnectorProperty("WinHTTPAuthScheme") = 1
        client.ConnectorProperty("AuthUser") = user
        client.ConnectorProperty("AuthPassword") = passwd
        On Error GoTo err
        Dim result1() As String
        result1 = client.getTrendData(expression, startDate, endDate, limitFromStart, maxRecords)

    How do I add the AuthUser/AuthPassword to my new code? New code:

        ALCServer.TrendClient tc = new WindowsFormsApplication1.ALCServer.TrendClient();
        foreach(string s in tc.getTrendData(textBox2.Text, "5/25/2009", "5/28/2009", false, 500))
            textBox1.Text += s;
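
    A sketch of the usual approaches (assumptions about the project setup, not from the post). If TrendClient is a WCF service reference (it follows the ClientBase naming pattern), credentials go on ClientCredentials and the binding must be configured for HTTP authentication; if it is a classic Web Reference proxy, they go on Credentials:

        // WCF service reference:
        var tc = new WindowsFormsApplication1.ALCServer.TrendClient();
        tc.ClientCredentials.UserName.UserName = user;   // 'user'/'passwd' assumed
        tc.ClientCredentials.UserName.Password = passwd;

        // Classic "Web Reference" proxy (SoapHttpClientProtocol) equivalent:
        // proxy.Credentials = new System.Net.NetworkCredential(user, passwd);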

    Read the article

  • Update data programmatically using EntityDataSource

    - by Vinay
    Hello guys, I want to update data programmatically in code-behind using an EntityDataSource. I have done the same thing using the Update method of a LinqDataSource. Sample code snippet:

        int id = Convert.ToInt32(e.CommandArgument);
        ListDictionary keyValues = new ListDictionary();
        ListDictionary newValues = new ListDictionary();
        ListDictionary oldValues = new ListDictionary();
        keyValues.Add("ReviewID", id);
        oldValues.Add("IsActive", "IsActive");
        newValues.Add("IsActive", "false");
        GridDataSource.Update(keyValues, newValues, oldValues);

    Can I achieve the same thing using EntityDataSource? Thanks, Vinay
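
    For comparison, a sketch that bypasses the data source control and updates through the Entity Framework context directly (the context and entity-set names, MyEntities and Reviews, are assumptions, not from the question):

        using (var ctx = new MyEntities())   // hypothetical ObjectContext
        {
            var review = ctx.Reviews.First(r => r.ReviewID == id);
            review.IsActive = false;
            ctx.SaveChanges();               // persists the change
        }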

    Read the article

  • javascript: how to make input selected

    - by Syom
    I have to use the following function to change the input's type:

        <input class="input" id="pas" type="text" name="password" style="color: #797272;"
               value="<?php if ($_POST[password] != '') {echo '';} else {echo '????????';}?>"
               onclick="if (this.value == '????????') { this.value=''; this.style.color='black'; change(); }" />

        <script type="text/javascript">
        function change()
        {
            var input = document.getElementById('pas');
            var input2 = input.cloneNode(false);
            input2.type = 'password';
            input.parentNode.replaceChild(input2, input);
        }
        </script>

    It works fine, but I have to re-click on the input, because it creates a new input. So what can I do if I want it to create a new input with the cursor in it? Thanks
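
    A sketch of one direction: after swapping in the cloned node, give it focus explicitly so the cursor lands in the new field (plain DOM API, no assumptions beyond the markup above):

        function change()
        {
            var input = document.getElementById('pas');
            var input2 = input.cloneNode(false);
            input2.type = 'password';
            input.parentNode.replaceChild(input2, input);
            input2.focus();   // put the cursor into the replacement input
        }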

    Read the article

  • iPhone Shell - is there any?

    - by alee
    While working on iPhone security architecture, I came to know that I can run applications from other applications on the iPhone, referring to the following URL: http://iphonedevelopertips.com/cocoa/launching-other-apps-within-an-iphone-application.html For example, I can put a link on a website with the hyperlink skype://, which will cause Skype to run and call a particular number. Now I have a few concerns here. Is there a shell running in the background on the iPhone that allows other applications to run basic app commands? If the above statement is true, then how can I enable or run commands directly in the iPhone shell? If the above statements are false, then could you please explain how these commands are being executed? Is this part of the iPhone SDK, or is this functionality part of iPhone OS?
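
    For context, a sketch of how this works without any shell: the OS dispatches registered URL schemes to the app that owns them. From the iPhone SDK, launching another app via its scheme looks like this (the phone number is illustrative):

        // Objective-C, UIKit: the OS resolves the scheme; no shell is involved
        NSURL *url = [NSURL URLWithString:@"skype://14085551234"];
        [[UIApplication sharedApplication] openURL:url];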

    Read the article

  • Why is my onItemSelectedListener not called in a ListView?

    - by Janusz
    I'm using a ListView that is set up like this:

        <ListView android:id="@android:id/list"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:longClickable="false"
            android:choiceMode="singleChoice">
        </ListView>

    In my code I add an OnItemSelectedListener to the ListView like this:

        getListView().setAdapter(adapter);
        getListView().setOnItemSelectedListener(this);

    My Activity implements the listener like this:

        @Override
        public void onItemSelected(AdapterView<?> parent, View view, int position, long id) {
            Log.d("LocateByCategory", "ListItemSelected: Parent: " + parent.toString()
                    + " View: " + view.toString()
                    + " Position: " + position
                    + " Id: " + id);
        }

    My hope was that I would see this debug output the moment I click on something in the list, but the debug output is never shown in LogCat.
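
    A sketch of the usual explanation: touch taps raise item *click* events, while onItemSelected fires mainly for trackball/D-pad selection changes. A click listener sees taps (standard Android API, no extra assumptions):

        getListView().setOnItemClickListener(new AdapterView.OnItemClickListener() {
            @Override
            public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                Log.d("LocateByCategory", "clicked position=" + position + " id=" + id);
            }
        });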

    Read the article

  • Get the currently saved object in a view in Django

    - by mridang
    Hi, I have a Django view which is accessed through an AJAX call. It's a pretty simple one: all it does is pass the request to a form object and save the data. Here's a snippet from my view:

        form = AddSiteForm(request.user, request.POST)
        if form.is_valid():
            obj = form.save(commit=False)
            obj.user = request.user
            obj.save()
            data['status'] = 'success'
            data['html'] = render_to_string('site.html', locals(),
                                            context_instance=RequestContext(request))
        return HttpResponse(simplejson.dumps(data), mimetype='application/json')

    How do I get the currently saved object (including the internally generated id column) and pass it to the template? Any help guys? Mridang
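
    A sketch of the usual answer: after save() returns, the instance already carries its database-generated primary key, so it can go straight into the template context (the 'site' context name is an assumption):

        obj.save()                      # obj.id / obj.pk is populated here
        data['html'] = render_to_string('site.html', {'site': obj},
                                        context_instance=RequestContext(request))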

    Read the article

  • Blacklight Expander problem while expanding programmatically

    - by David Brunelle
    Hi, I am doing a Silverlight 4 project and I included the Blacklight project in my project so that I could use their new controls, especially the dock panel and the auto-expander, which is currently causing me some small problems. What I would like to do is have several auto-expanders that expand or collapse when I click a button. In my case, more specifically, each auto-expander has a set of parameters to fill, which in turn fills the next expander; the current one should collapse and the just-filled one should expand. The idea is simple, yet when I use a button on one of my expanders, it only works once: it expands/collapses the first time, then after that, nothing. I traced the code and it seems to go through just fine, but the property value won't change. Here is my code:

        Private Sub BtnExpand_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs) Handles BtnExpand.Click
            ClientExpander.IsExpanded = False
            ProjetExpander.IsExpanded = True
        End Sub

    Could it be a known bug, or must I reset some flags to make it work? Thanks.

    Read the article

  • Flex3 / Air 2: NativeProcess doesn't accepts standard input data (Error #2044 & #3218)

    - by Edward
    Hi: I'm trying to open cmd.exe in a new process and pass some code to programmatically eject a device, but when trying to do this all I get is: "Error #2044: Unhandled IOErrorEvent:. text=Error #3218: Error while writing data to NativeProcess.standardInput." Here's my code:

        private var NP:NativeProcess = new NativeProcess();

        private function EjectDevice():void
        {
            var RunDLL:File = new File("C:\\Windows\\System32\\cmd.exe");
            var NPI:NativeProcessStartupInfo = new NativeProcessStartupInfo();
            NPI.executable = RunDLL;
            NP.start(NPI);
            NP.addEventListener(Event.STANDARD_OUTPUT_CLOSE, CatchOutput, false, 0, true);
            NP.standardInput.writeUTFBytes("start C:\\Windows\\System32\\rundll32.exe shell32.dll,Control_RunDLL hotplug.dll");
            NP.closeInput();
        }

    I also tried with writeUTF instead of writeUTFBytes, but I still get the error. Does anyone have an idea of what I'm doing wrong? Thanks for your time :) Edward.
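
    Two adjustments worth sketching (assumptions, not a verified fix): terminate the command with a line ending so cmd.exe actually executes it, and handle the standard-input IO error event so a write failure doesn't surface as an unhandled IOErrorEvent:

        // register before writing; AIR dispatches standardInputIoError on write failures
        NP.addEventListener(IOErrorEvent.STANDARD_INPUT_IO_ERROR, function(e:IOErrorEvent):void {
            trace("stdin error: " + e.text);
        });
        NP.standardInput.writeUTFBytes("start C:\\Windows\\System32\\rundll32.exe shell32.dll,Control_RunDLL hotplug.dll\r\n");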

    Read the article

  • Unable to get the Current User's Token information

    - by Ram
    Hi, I have been trying to get the currently logged-in user's token information using the following code:

        [DllImport("wtsapi32.dll", CharSet = CharSet.Auto, SetLastError = true)]
        static extern bool WTSQueryUserToken(int sessionId, out IntPtr tokenHandle);

        [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
        static extern uint WTSGetActiveConsoleSessionId();

        static void Main(string[] args)
        {
            try
            {
                int sessionID = (int)WTSGetActiveConsoleSessionId();
                if (sessionID != -1)
                {
                    System.IntPtr currentToken = IntPtr.Zero;
                    bool bRet = WTSQueryUserToken(sessionID, out currentToken);
                    Console.WriteLine("bRet : " + bRet.ToString());
                }
            }
            catch (Exception) { }
        }

    The problem is that bRet is always false and currentToken is always 0. I am getting the session id as 1. Could someone tell me what's going wrong here? I want to use this token information to pass it to the CreateProcessAsUser function from a Windows service. Thanks, Ram
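
    A diagnostic sketch (background knowledge, not from the question): WTSQueryUserToken requires the caller to hold the SE_TCB_NAME privilege, which ordinary user processes don't have; it normally succeeds only from a service running as LocalSystem. With SetLastError = true already declared, the Win32 error confirms the cause:

        bool bRet = WTSQueryUserToken(sessionID, out currentToken);
        if (!bRet)
        {
            // 1314 = ERROR_PRIVILEGE_NOT_HELD would indicate the missing SE_TCB_NAME
            int err = System.Runtime.InteropServices.Marshal.GetLastWin32Error();
            Console.WriteLine("WTSQueryUserToken failed, Win32 error: " + err);
        }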

    Read the article

  • Co-Authors Wordpress Plugin: coauthors_wp_list_authors function not working correctly

    - by rayne
    The Co-Authors Plus plugin for Wordpress has a very annoying bug. The custom function coauthors_wp_list_authors lists authors the same way the Wordpress function wp_list_authors does, but it does not include authors who have no post of their own: if they have only entries in which they are listed as co-author but not as author, they will not appear in the list. That, of course, misses a very important point. I've looked at the faulty SQL statement, but unfortunately my knowledge of advanced SQL, especially when it comes to JOINs, as well as my knowledge of the WP database structure, is too limited, and I remain clueless. There is a topic in the WP support forum, but the information there is very outdated and the fix is not applicable anymore. I couldn't find any other, more current solutions on the internet. I'd be glad if someone here could help fix the SQL statement so it also lists co-authors who don't have posts where they're the sole author, and displays the correct post count for all authors. Here's the entire function for reference purposes, with a comment marking the SQL statement:

        function coauthors_wp_list_authors($args = '') {
            global $wpdb, $coauthors_plus;
            $defaults = array(
                'optioncount' => false, 'exclude_admin' => true,
                'show_fullname' => false, 'hide_empty' => true,
                'feed' => '', 'feed_image' => '', 'feed_type' => '', 'echo' => true,
                'style' => 'list', 'html' => true
            );
            $r = wp_parse_args($args, $defaults);
            extract($r, EXTR_SKIP);
            $return = '';

            $authors = $wpdb->get_results("SELECT ID, user_nicename from $wpdb->users " .
                ($exclude_admin ? "WHERE user_login <> 'admin' " : '') . "ORDER BY display_name");

            $author_count = array();

            # this is the SQL statement which doesn't work correctly:
            $query  = "SELECT DISTINCT $wpdb->users.ID AS post_author, $wpdb->terms.name AS user_name, $wpdb->term_taxonomy.count AS count";
            $query .= " FROM $wpdb->posts";
            $query .= " INNER JOIN $wpdb->term_relationships ON ($wpdb->posts.ID = $wpdb->term_relationships.object_id)";
            $query .= " INNER JOIN $wpdb->term_taxonomy ON ($wpdb->term_relationships.term_taxonomy_id = $wpdb->term_taxonomy.term_taxonomy_id)";
            $query .= " INNER JOIN $wpdb->terms ON ($wpdb->term_taxonomy.term_id = $wpdb->terms.term_id)";
            $query .= " INNER JOIN $wpdb->users ON ($wpdb->terms.name = $wpdb->users.user_login)";
            $query .= " WHERE post_type = 'post' AND " . get_private_posts_cap_sql('post');
            $query .= " AND $wpdb->term_taxonomy.taxonomy = '$coauthors_plus->coauthor_taxonomy'";
            $query .= " GROUP BY post_author";

            foreach ((array) $wpdb->get_results($query) as $row) {
                $author_count[$row->post_author] = $row->count;
            }

            foreach ((array) $authors as $author) {
                $link = '';
                $author = get_userdata($author->ID);
                $posts = (isset($author_count[$author->ID])) ? $author_count[$author->ID] : 0;
                $name = $author->display_name;
                if ($show_fullname && ($author->first_name != '' && $author->last_name != ''))
                    $name = "$author->first_name $author->last_name";
                if (!$html) {
                    if ($posts == 0) {
                        if (!$hide_empty)
                            $return .= $name . ', ';
                    } else
                        $return .= $name . ', ';
                    continue;
                }
                if (!($posts == 0 && $hide_empty) && 'list' == $style)
                    $return .= '<li>';
                if ($posts == 0) {
                    if (!$hide_empty)
                        $link = $name;
                } else {
                    $link = '<a href="' . get_author_posts_url($author->ID, $author->user_nicename) . '" title="' . esc_attr(sprintf(__("Posts by %s", 'co-authors-plus'), $author->display_name)) . '">' . $name . '</a>';
                    if ((!empty($feed_image)) || (!empty($feed))) {
                        $link .= ' ';
                        if (empty($feed_image))
                            $link .= '(';
                        $link .= '<a href="' . get_author_feed_link($author->ID) . '"';
                        if (!empty($feed)) {
                            $title = ' title="' . esc_attr($feed) . '"';
                            $alt = ' alt="' . esc_attr($feed) . '"';
                            $name = $feed;
                            $link .= $title;
                        }
                        $link .= '>';
                        if (!empty($feed_image))
                            $link .= "<img src=\"" . esc_url($feed_image) . "\" style=\"border: none;\"$alt$title" . ' />';
                        else
                            $link .= $name;
                        $link .= '</a>';
                        if (empty($feed_image))
                            $link .= ')';
                    }
                    if ($optioncount)
                        $link .= ' (' . $posts . ')';
                }
                if (!($posts == 0 && $hide_empty) && 'list' == $style)
                    $return .= $link . '</li>';
                else if (!$hide_empty)
                    $return .= $link . ', ';
            }
            $return = trim($return, ', ');
            if (!$echo)
                return $return;
            echo $return;
        }
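
    One direction worth sketching (unverified; it assumes the installed plugin version stores authorship as an 'author' taxonomy term per user, with the term count maintained for co-authored posts too): derive the list and counts from the plugin's own taxonomy terms instead of joining outward from wp_posts:

        SELECT u.ID AS post_author, t.name AS user_name, tt.count AS count
        FROM wp_terms AS t
        INNER JOIN wp_term_taxonomy AS tt ON tt.term_id = t.term_id
        INNER JOIN wp_users AS u ON u.user_login = t.name
        WHERE tt.taxonomy = 'author';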

    Read the article

  • jni4net - how to set the absolute path to jni4net.j-0.7.1.0.jar

    - by w1z
    Guys, help me... I use jni4net in my WCF service. In the ctor of the service I try to create a BridgeSetup object:

        var bridgeSetup = new BridgeSetup(false);
        bridgeSetup.AddAllJarsClassPath(".");
        Bridge.CreateJVM(bridgeSetup);

    As I understand it, at this moment jni4net tries to generate jni4net.j-0.7.1.0.dll from jni4net.j-0.7.1.0.jar. It tries to find jni4net.j-0.7.1.0.jar near jni4net.n-0.7.1.0.dll and can't, so I get the following error:

        c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\fileprocessingservice\76f0fa69\5db44426\assembly\dl3\4fa263c6\f424b7fa_c8ccca01\jni4net.j-0.7.1.0.DLL

    Anybody know how to solve the problem? Thanks...
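
    A sketch of one workaround (assumptions: the jars are deployed with the service binaries, and AddAllJarsClassPath accepts a directory as the snippet above already uses it): the error path shows the DLL is resolved from the ASP.NET shadow-copy folder, where the jar is not copied, so point the class path at an absolute directory instead of the relative ".":

        // resolve relative to the deployed application, not the shadow-copy folder
        string baseDir = AppDomain.CurrentDomain.BaseDirectory;
        var bridgeSetup = new BridgeSetup(false);
        bridgeSetup.AddAllJarsClassPath(System.IO.Path.Combine(baseDir, "bin"));  // "bin" is an assumption
        Bridge.CreateJVM(bridgeSetup);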

    Read the article

  • Sinatra 1.0 fastcgi deployment

    - by TheMoonMaster
    I am trying to deploy my Sinatra app to my (shared) hosting and I keep getting this error:

        /usr/lib/ruby/gems/1.8/gems/rack-1.1.0/lib/rack/handler/fastcgi.rb:23:in `initialize': Address family not supported by protocol - socket(2) (Errno::EAFNOSUPPORT)
            from /usr/lib/ruby/gems/1.8/gems/rack-1.1.0/lib/rack/handler/fastcgi.rb:23:in `new'
            from /usr/lib/ruby/gems/1.8/gems/rack-1.1.0/lib/rack/handler/fastcgi.rb:23:in `run'
            from /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:946:in `run!'
            from /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/main.rb:25
            from dispatch.fcgi:17

    I have no idea what this means, and I have tried many different things to fix it, but nothing I tried seemed to work. My dispatch.fcgi is the following:

        #!/usr/bin/ruby
        require 'rubygems'
        require 'sinatra'

        fastcgi_log = File.open("fastcgi.log", "a")
        STDOUT.reopen fastcgi_log
        STDERR.reopen fastcgi_log
        STDOUT.sync = true

        set :logging, false
        set :server, "FastCGI"
        load 'simple.rb'

    And finally, my .htaccess (fcgid is how my host told me to set it up):

        RewriteEngine on
        AddHandler fcgid-script .fcgi
        Options +FollowSymLinks +ExecCGI
        RewriteRule ^(.*)$ dispatch.fcgi [QSA,L]
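
    A commonly suggested dispatch.fcgi shape, sketched under assumptions (not a verified fix for this host): the traceback shows Sinatra's at-exit run! asking Rack's FastCGI handler to open its own socket, which shared hosts often forbid; disabling the built-in server and handing the app to Rack::Handler::FastCGI directly lets it serve over the descriptor the web server provides:

        #!/usr/bin/ruby
        require 'rubygems'
        require 'rack'
        require 'sinatra'

        set :run, false            # keep Sinatra from launching its own server at exit
        set :environment, :production

        load 'simple.rb'
        Rack::Handler::FastCGI.run Sinatra::Application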

    Read the article

  • Problem using Winforms WebBrowser control as editor

    - by thecaptain0220
    I am currently working on a project where I am using a WebBrowser control as an editor. I have design mode turned on and it seems to be working. The issue I'm having is that when I try to save the document and load another, it pops up the "This document has been modified." message. What I am trying to do is as simple as this:

        if (frontPage)
        {
            frontPage = false;
            frontContent = webEditor.DocumentText;
            webEditor.DocumentText = backContent;
        }
        else
        {
            frontPage = true;
            backContent = webEditor.DocumentText;
            webEditor.DocumentText = frontContent;
        }

    Like I said, every time I enter some text and run this code, it just pops up a message saying the document has been modified and asks if I want to save. How can I get around this?
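
    One direction to sketch (an assumption, not a verified fix): assigning DocumentText triggers a navigation, which is what prompts the save dialog while design mode is on; writing the new markup through HtmlDocument.OpenNew/Write replaces the document content in place instead:

        // WinForms API: OpenNew(true) replaces the current document in history
        frontContent = webEditor.DocumentText;
        webEditor.Document.OpenNew(true).Write(backContent);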

    Read the article

  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is storage management, due to the unstoppable growth of information volumes. Even if you have storage appliances and a lot of terabytes, things like backup, compression, deduplication, storage relocation, encryption and availability can be a nightmare. One standard option you have with Oracle WebCenter Content is to store data in the database, and the Oracle Database lets you leverage features like compression, deduplication, encryption and seamless backup. But with a huge volume, the challenge is passed to the DBA to keep the WebCenter Content database up and running. One solution is the use of DB partitions for your content repository, but what are the implications of this? Can I fit this to my business requirements? Well, yes. It's up to you how you will manage that; you just need a good plan. During your "storage brainstorm", consider what you need: do you need to store petabytes of documents? Do you need everything online? Is there a way to logically separate the "good content" from the "legacy content"? The first thing that comes to mind is to use the creation date of the document, but remember that a document can receive a lot of revisions, so maybe you should consider the revision creation date. Your plan can also have complex rules, like per Document Type or per a custom metadata field like department, or a hybrid: per date, per DocType and a specific virtual folder. Extrapolating the use, you can have your repository distributed across different servers, different disks, different disk types (such as SSD, SAS, SATA, tape, ...), separated according to your business requirements, keeping the "hot" content apart from the legacy content and easily matching your compliance requirements. If you think of partitioning by revision, the simple way is to consider the dId, the sequential unique id for every content item created using WebCenter Content, or the dLastModified, the date field of the FileStorage table that records when the content was added to the DB table using SecureFiles. Using the scenario of a partitioned repository with a hierarchical separation by date, we will transform the FileStorage table into a partitioned table using "Partition by Range" on the dLastModified column (you could use the dId, or a join with other tables for other metadata such as dDocType, Security, etc.). The test scenario below covers:

    - Previously existing data in the JDBC Storage to be migrated to the new partitioned JDBC Storage
    - Partition by date
    - Automatic generation of new partitions based on a pre-defined interval (available only with Oracle Database 11g+)
    - Deduplication and compression for legacy data
    - Oracle WebCenter Content 11g PS5 (may include some customizations that do not affect the test scenario)

    For the test case you need some data stored using JDBC Storage to serve as the "legacy" data. If you have not done this before, just create a Storage rule pointing to the JDBC Storage. Enable the metadata StorageRule in the UI and upload some documents using this rule. For this test case you can run as the schema owner or as a DBA user; we will use the schema owner TESTS_OCS. I can't forget to say that this is just a test, and you should take a proper backup of your environment. When you use the schema owner, you need some privileges; using the DBA user, grant the privileges needed:

        REM Grant privileges required for online redefinition.
        GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS;
        GRANT ALTER ANY TABLE TO TESTS_OCS;
        GRANT DROP ANY TABLE TO TESTS_OCS;
        GRANT LOCK ANY TABLE TO TESTS_OCS;
        GRANT CREATE ANY TABLE TO TESTS_OCS;
        GRANT SELECT ANY TABLE TO TESTS_OCS;

        REM Privileges required to perform cloning of dependent objects.
        GRANT CREATE ANY TRIGGER TO TESTS_OCS;
        GRANT CREATE ANY INDEX TO TESTS_OCS;

    In our test scenario we will separate the content as Legacy, Day1, Day2, Day3 and Future. This last one will be partitioned automatically using 3 tablespaces in round-robin mode. In a real scenario the partition rule could be per month, per year or any rule that you choose. Tablespaces for the test scenario:

        CREATE TABLESPACE TESTS_OCS_PART_LEGACY
            DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_DAY1
            DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_DAY2
            DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_DAY3
            DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A
            DATAFILE 'tests_ocs_part_round_robin_a.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B
            DATAFILE 'tests_ocs_part_round_robin_b.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C
            DATAFILE 'tests_ocs_part_round_robin_c.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;

    Before starting, gather optimizer statistics on the actual FileStorage table:

        EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE);

    Now check whether it is possible to execute the redefinition process:

        EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage', DBMS_REDEFINITION.CONS_USE_PK);

    If there are no error messages, you are good to go. Create a partitioned interim FileStorage table.
    You need to create a new table with the partition information to act as an interim table:

        CREATE TABLE FILESTORAGE_Part (
            DID NUMBER(*,0) NOT NULL ENABLE,
            DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
            DLASTMODIFIED TIMESTAMP (6),
            DFILESIZE NUMBER(*,0),
            DISDELETED VARCHAR2(1 CHAR),
            BFILEDATA BLOB
        )
        LOB (BFILEDATA) STORE AS SECUREFILE (
            ENABLE STORAGE IN ROW
            NOCACHE LOGGING
            KEEP_DUPLICATES
            NOCOMPRESS
        )
        PARTITION BY RANGE (DLASTMODIFIED)
        INTERVAL (NUMTODSINTERVAL(1,'DAY'))
        STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C)
        (
            PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN
                (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
                TABLESPACE TESTS_OCS_PART_LEGACY
                LOB (BFILEDATA) STORE AS SECUREFILE (
                    TABLESPACE TESTS_OCS_PART_LEGACY
                    RETENTION NONE
                    DEDUPLICATE
                    COMPRESS HIGH
                ),
            PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN
                (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
                TABLESPACE TESTS_OCS_PART_DAY1
                LOB (BFILEDATA) STORE AS SECUREFILE (
                    TABLESPACE TESTS_OCS_PART_DAY1
                    RETENTION AUTO
                    KEEP_DUPLICATES
                    COMPRESS
                ),
            PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN
                (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
                TABLESPACE TESTS_OCS_PART_DAY2
                LOB (BFILEDATA) STORE AS SECUREFILE (
                    TABLESPACE TESTS_OCS_PART_DAY2
                    RETENTION AUTO
                    KEEP_DUPLICATES
                    NOCOMPRESS
                ),
            PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN
                (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
                TABLESPACE TESTS_OCS_PART_DAY3
                LOB (BFILEDATA) STORE AS SECUREFILE (
                    TABLESPACE TESTS_OCS_PART_DAY3
                    RETENTION AUTO
                    KEEP_DUPLICATES
                    NOCOMPRESS
                )
        );

    After the creation you should see your partitions defined. Note that only the fixed-range partitions have been created; none of the interval partitions have been created yet. Start the redefinition process:

        BEGIN
            DBMS_REDEFINITION.START_REDEF_TABLE(
                uname        => 'TESTS_OCS'
               ,orig_table   => 'FileStorage'
               ,int_table    => 'FileStorage_PART'
               ,col_mapping  => NULL
               ,options_flag => DBMS_REDEFINITION.CONS_USE_PK
            );
        END;

    This operation can take some time to complete, depending on how much content you have and on the size of the table. Using the DBA user you can check the progress with this command:

        SELECT * FROM v$sesstat WHERE sid = 1;

    Copy dependent objects:

        DECLARE
            redefinition_errors PLS_INTEGER := 0;
        BEGIN
            DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
                uname            => 'TESTS_OCS'
               ,orig_table       => 'FileStorage'
               ,int_table        => 'FileStorage_PART'
               ,copy_indexes     => DBMS_REDEFINITION.CONS_ORIG_PARAMS
               ,copy_triggers    => TRUE
               ,copy_constraints => TRUE
               ,copy_privileges  => TRUE
               ,ignore_errors    => TRUE
               ,num_errors       => redefinition_errors
               ,copy_statistics  => FALSE
               ,copy_mvlog       => FALSE
            );
            IF (redefinition_errors > 0) THEN
                DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: '
                    || TO_CHAR(redefinition_errors));
            END IF;
        END;

    With the DBA user, verify that there are no errors:

        SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS;

    Note that this will show 2 lines related to the constraints; this is expected. Synchronize the interim table FileStorage_PART:

        BEGIN
            DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
                uname      => 'TESTS_OCS',
                orig_table => 'FileStorage',
                int_table  => 'FileStorage_PART');
        END;

    Gather statistics on the new table:

        EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE);

    Complete the redefinition:

        BEGIN
            DBMS_REDEFINITION.FINISH_REDEF_TABLE(
                uname      => 'TESTS_OCS',
                orig_table => 'FileStorage',
                int_table  => 'FileStorage_PART');
        END;

    During the execution, the FileStorage table is locked in exclusive mode until the operation finishes. After the last command the FileStorage table is partitioned. If you have content outside the fixed partition ranges, you will see the new partitions created automatically, with no error even if you "forgot" to create all the future ranges. You can now drop the FileStorage_PART table:

        DROP TABLE FileStorage_PART PURGE;

    To check that the FileStorage table is valid and partitioned, use this command:

        SELECT num_rows, partitioned FROM user_tables WHERE table_name = 'FILESTORAGE';

    You can list the contents of the FileStorage table in a specific partition, for example:

        SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY);

    Some useful commands that you can use to check the partitions (note that you need to run them as a DBA user):

        SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE';
        SELECT * FROM DBA_TABLESPACES WHERE tablespace_name LIKE 'TESTS_OCS%';

    After the redefinition process completes, you have a new FileStorage table storing all content whose Storage rule points to the JDBC Storage, partitioned using the rules set during the creation of the temporary interim FileStorage_PART table. At this point you can test WebCenter Content by downloading the documents (original and renditions). Note that the content could already be in the cache area; take a look in the weblayout directory to see if a file with the same id is there, then click on the web rendition of your test file. If the file opens, everything is working. The redefinition process can be repeated many times, allowing you to test which layout is better, over and over again. Now some interesting maintenance actions related to the partitions:

    Make a tablespace read-only. There are no issues with viewing, and WebCenter Content does not alter existing revisions. When you try to delete a content item that is part of a read-only tablespace, however, an error occurs and the document is not deleted. The only way to prevent errors today is to create a custom component that checks the partitions and, if a document is in a "read only" repository, deletes the metadata and marks the document to be deleted during the next DB maintenance, like a new redefinition.

    Take a tablespace offline for archiving purposes or any other reason.
    When you try to open a document that is included in this tablespace, you will receive an error that the content could not be retrieved, but the other online tablespaces are not affected. The same behavior applies when deleting documents. Again, a custom component is the solution: if you have a document "out of range", the component can show a message that the repository for that document is offline. This can be extended with an option for the user to request that it be brought online again.

    Move some legacy content to an offline repository (table) using the Exchange option, moving the content from one partition to an empty non-partitioned table like FileStorage_LEGACY. Note that this option removes the rows from FileStorage and you will no longer be able to open the stored content. You always need to keep the indexes and constraints in mind.

    Run a redefinition separating the original content (vault) from the renditions and separating by date at the same time. This could be an option for DAM environments that want a special place for the renditions and put the original files in storage with less performance. The process will be the same; you just need to change the script of the interim table to use composite partitioning. It will be something like:

        CREATE TABLE FILESTORAGE_RenditionPart (
            DID NUMBER(*,0) NOT NULL ENABLE,
            DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
            DLASTMODIFIED TIMESTAMP (6),
            DFILESIZE NUMBER(*,0),
            DISDELETED VARCHAR2(1 CHAR),
            BFILEDATA BLOB
        )
        LOB (BFILEDATA) STORE AS SECUREFILE (
            ENABLE STORAGE IN ROW
            NOCACHE LOGGING
            KEEP_DUPLICATES
            NOCOMPRESS
        )
        PARTITION BY LIST (DRENDITIONID)
        SUBPARTITION BY RANGE (DLASTMODIFIED)
        (
            PARTITION Vault VALUES ('primaryFile')
            (
                SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN
                    (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
                    LOB (BFILEDATA) STORE AS SECUREFILE,
                SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN
                    (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
                    LOB (BFILEDATA) STORE AS SECUREFILE,
                SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN
                    (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
                    LOB (BFILEDATA) STORE AS SECUREFILE,
                SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN
                    (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
                    LOB (BFILEDATA) STORE AS SECUREFILE,
                SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE)
            ),
            PARTITION WebLayout VALUES ('webViewableFile')
            (
                SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN
                    (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
                    LOB (BFILEDATA) STORE AS SECUREFILE,
                SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN
                    (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
                    LOB (BFILEDATA) STORE AS SECUREFILE,
                SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN
                    (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
                    LOB (BFILEDATA) STORE AS SECUREFILE,
                SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN
                    (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
                    LOB (BFILEDATA) STORE AS SECUREFILE,
                SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE)
            ),
            PARTITION Special VALUES ('Special')
            (
                SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN
                    (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
                    LOB (BFILEDATA) STORE AS SECUREFILE,
                SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN
                    (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
                    LOB (BFILEDATA) STORE AS SECUREFILE,
                SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN
                    (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
                    LOB (BFILEDATA) STORE AS SECUREFILE,
                SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN
                    (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
                    LOB (BFILEDATA) STORE AS SECUREFILE,
                SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE)
            )
        ) ENABLE ROW MOVEMENT;

    The next post related to partitioned repositories will come with a sample component to handle the possible exceptions when you need to take a tablespace/partition offline or move it to another place. Also, we can include some integration with Retention Management and Records Management. Another subject related to partitioning is the ability to create a FileStore Provider pointed at a different database, raising the level of distributed storage vs. performance. Let us know if this is important to you or if you have a use case not listed; leave a comment. Cross-posted on the blog.ContentrA.com

    Read the article

  • Switching from LinqToXYZ to LinqToObjects

    - by spender
    In answering this question, it got me thinking... I often use this pattern:

        collectionofsomestuff                // here it's LINQ to Entities
            .Select(something => new { something.Name, something.SomeGuid })
            .ToArray()                       // from here on it's LINQ to Objects
            .Select(s => new SelectListItem()
            {
                Text = s.Name,
                Value = s.SomeGuid.ToString(),
                Selected = false
            })

    Perhaps I'd split it over a couple of lines, but essentially, at the ToArray point, I'm effectively enumerating my query and storing the resulting sequence so that I can further process it with all the goodness of a full CLR to hand. As I have no interest in any kind of manipulation of the intermediate list, I use ToArray over ToList as there's less overhead. I do this all the time, but I wonder if there is a better pattern for this kind of problem?
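
    One common refinement, sketched on the same shape (SelectListItem stands in for any CLR-side projection): AsEnumerable() marks the provider boundary without materializing an intermediate array, so rows stream straight from the query into the LINQ-to-Objects projection:

        collectionofsomestuff
            .Select(something => new { something.Name, something.SomeGuid })
            .AsEnumerable()                  // switch providers, no buffering
            .Select(s => new SelectListItem
            {
                Text = s.Name,
                Value = s.SomeGuid.ToString(),
                Selected = false
            });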

    Read the article

  • Problem passing json into jquery graph(flot)

    - by Adam McMahon
    I'm trying to retrieve some JSON to pass into a flot graph. I know the JSON is right because I hard-coded it to check, but I'm pretty sure I'm not passing it correctly, because it's not showing up. Here's the JavaScript:

        var total = $.ajax({
            type: "POST",
            async: false,
            url: "../api/?key=xxx&api=report&crud=return_months&format=json"
        }).responseText;

        //var total = $.evalJSON(total);
        var plot = $.plot($("#placeholder"), total);

    and here's the response:

        [
            { data: [[1,12], [2,43], [3,10], [4,17]], label: "E-File" },
            { data: [[1,25], [2,35], [3,3],  [4,5]],  label: "Bank Products" },
            { data: [[1,41], [2,87], [3,30], [4,29]], label: "All Returns" }
        ],
        {
            series: { lines: { show: true }, points: { show: true } },
            grid: { hoverable: true, clickable: true },
            yaxis: { min: 0, max: 100 },
            xaxis: { ticks: [[1,"January"],[2,"February"],[3,"March"],[4,"April"],[5,"May"],[6,"June"],[7,"July"],[8,"August"],[9,"September"],[10,"October"],[11,"November"],[12,"December"]] }
        }
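
    A sketch of one direction (it assumes the endpoint can be made to return just the series array as valid JSON, with the options object kept client-side): responseText is a string, while $.plot expects an actual array, so let jQuery parse the response before plotting:

        $.ajax({
            type: "POST",
            url: "../api/?key=xxx&api=report&crud=return_months&format=json",
            dataType: "json",                 // jQuery parses the body into an array
            success: function (series) {
                var options = {
                    series: { lines: { show: true }, points: { show: true } },
                    grid:   { hoverable: true, clickable: true },
                    yaxis:  { min: 0, max: 100 }
                };
                $.plot($("#placeholder"), series, options);
            }
        });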

    Read the article
