Search Results

Search found 15670 results on 627 pages for 'multi level'.

  • Blit Queue Optimization Algorithm

    - by martona
    I'm looking to implement a module that manages a blit queue. There's a single surface, and portions of this surface (bounded by rectangles) are copied to elsewhere within the surface:

        add_blt(rect src, point dst);

    Any number of operations can be posted, in order, to the queue. Eventually the user of the queue will stop posting blits and ask for an optimal set of operations to actually perform on the surface. The task of the module is to ensure that no pixel is copied unnecessarily. This gets tricky because of overlaps, of course: a blit could re-blit a previously copied pixel. Ideally, blit operations would be subdivided in the optimization phase in such a way that every block goes to its final place with a single operation.

    It's tricky but not impossible to put this together; I'm just trying not to reinvent the wheel. I looked around on the 'net, and the only thing I found was the SDL_BlitPool Library, which assumes that the source surface differs from the destination. It also does a lot of grunt work, seemingly unnecessarily: regions and similar building blocks are a given. I'm looking for something higher-level. Of course, I'm not going to look a gift horse in the mouth, and I also don't mind doing actual work... If someone can come forward with a basic idea that makes this problem seem less complex than it does right now, that'd be awesome too.

    EDIT: Thinking about aaronasterling's answer... could this work? Implement customized region-handler code that can maintain metadata for every rectangle it contains. When the region handler splits up a rectangle, it automatically associates the metadata of that rectangle with the resulting sub-rectangles. When the optimization run starts, create an empty region handled by the above customized code; call this the master region. Iterate through the blt queue, and for every entry:

    1. Let srcrect be the source rectangle for the blt being examined.
    2. Get the intersection of srcrect and the master region into a temp region.
    3. Remove the temp region from the master region, so the master region no longer covers the temp region.
    4. Promote srcrect to a region (srcrgn) and subtract the temp region from it.
    5. Offset the temp region and srcrgn by the vector of the current blt: their union will cover the destination area of the current blt.
    6. Add to the master region all rects in the temp region, retaining the original source metadata (step one of adding the current blt to the master region).
    7. Add to the master region all rects in srcrgn, adding the source information for the current blt (step two of adding the current blt to the master region).
    8. Optimize the master region by checking whether adjacent sub-rectangles that are merge candidates have the same metadata. Two sub-rectangles are merge candidates if (r1.x1 == r2.x1 && r1.x2 == r2.x2) || (r1.y1 == r2.y1 && r1.y2 == r2.y2). If yes, combine them.

    Finally, enumerate the master region's sub-rectangles. Every rectangle returned is an optimized blt operation destination; the associated metadata is the blt operation's source.
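
    The rectangle bookkeeping that steps 2-5 rely on can be sketched briefly. Below is a minimal, hedged illustration (not from the original post), written in Python, assuming axis-aligned rectangles stored as (x1, y1, x2, y2) tuples; all names are invented:

        # A "region" here would be a list of disjoint rects; metadata rides along
        # as (rect, metadata) pairs, with every split preserving the pairing.
        def intersect(a, b):
            """Intersection of two rects, or None if they don't overlap."""
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            return (x1, y1, x2, y2) if x1 < x2 and y1 < y2 else None

        def subtract(a, b):
            """Subtract rect b from rect a, returning up to 4 disjoint rects."""
            hit = intersect(a, b)
            if hit is None:
                return [a]
            out = []
            if a[1] < hit[1]: out.append((a[0], a[1], a[2], hit[1]))      # band above
            if hit[3] < a[3]: out.append((a[0], hit[3], a[2], a[3]))      # band below
            if a[0] < hit[0]: out.append((a[0], hit[1], hit[0], hit[3]))  # slice left
            if hit[2] < a[2]: out.append((hit[2], hit[1], a[2], hit[3]))  # slice right
            return out

        def offset(r, dx, dy):
            """Translate a rect by the blt vector (step 5)."""
            return (r[0] + dx, r[1] + dy, r[2] + dx, r[3] + dy)

    When subtract() splits a rectangle, each resulting piece would simply keep its parent's metadata, which is what steps 6 and 7 depend on.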

  • How to reduce virtual memory by optimising my PHP code?

    - by iCeR
    My current code (see below) uses 147MB of virtual memory! My provider has allocated 100MB by default and the process is killed once run, causing an internal error. The code is utilising curl multi and must be able to loop with more than 150 iterations whilst still minimizing the virtual memory. The code below is only set at 150 iterations and still causes the internal server error. At 90 iterations the issue does not occur. How can I adjust my code to lower the resource use / virtual memory? Thanks!

        <?php
        function udate($format, $utimestamp = null) {
            if ($utimestamp === null)
                $utimestamp = microtime(true);
            $timestamp = floor($utimestamp);
            $milliseconds = round(($utimestamp - $timestamp) * 1000);
            return date(preg_replace('`(?<!\\\\)u`', $milliseconds, $format), $timestamp);
        }

        $url = 'https://www.testdomain.com/';
        $curl_arr = array();
        $master = curl_multi_init();

        for ($i = 0; $i < 150; $i++) {
            $curl_arr[$i] = curl_init();
            curl_setopt($curl_arr[$i], CURLOPT_URL, $url);
            curl_setopt($curl_arr[$i], CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($curl_arr[$i], CURLOPT_SSL_VERIFYHOST, FALSE);
            curl_setopt($curl_arr[$i], CURLOPT_SSL_VERIFYPEER, FALSE);
            curl_multi_add_handle($master, $curl_arr[$i]);
        }

        do {
            curl_multi_exec($master, $running);
        } while ($running > 0);

        for ($i = 0; $i < 150; $i++) {
            $results = curl_multi_getcontent($curl_arr[$i]);
            $results = explode("<br>", $results);
            echo $results[0];
            echo "<br>";
            echo $results[1];
            echo "<br>";
            echo udate('H:i:s:u');
            echo "<br><br>";
            usleep(100000);
        }
        ?>

    Processor Information

        Total processors: 8 (all identical)
        Vendor: GenuineIntel
        Name:   Intel(R) Xeon(R) CPU E5405 @ 2.00GHz
        Speed:  1995.120 MHz
        Cache:  6144 KB

    Memory Information

        Memory for crash kernel (0x0 to 0x0) notwithin permissible range
        Memory: 8302344k/9175040k available (2176k kernel code, 80272k reserved, 901k data, 228k init, 7466304k highmem)

    System Information

        Linux server3.server.com 2.6.18-194.17.1.el5PAE #1 SMP Wed Sep 29 13:31:51 EDT 2010 i686 i686 i386 GNU/Linux

    Physical Disks

        SCSI device sda: 1952448512 512-byte hdwr sectors (999654 MB)
        sda: Write Protect is off
        sda: Mode Sense: 03 00 00 08
        SCSI device sda: drive cache: write back
        sd 0:1:0:0: Attached scsi disk sda
        sd 4:0:0:0: Attached scsi removable disk sdb
        sd 0:1:0:0: Attached scsi generic sg4 type 0
        sd 4:0:0:0: Attached scsi generic sg7 type 0

    Current Memory Usage

                     total      used      free    shared   buffers    cached
        Mem:       8306672   7847384    459288        0    487912   6444548
        -/+ buffers/cache:    914924   7391748
        Swap:      4095992       496   4095496
        Total:    12402664   7847880   4554784

    Current Disk Usage

        Filesystem                        Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00   898G  307G  546G  36% /
        /dev/sda1                          99M   19M   76M  20% /boot
        none                              4.0G     0  4.0G   0% /dev/shm
        /var/tmpMnt                       4.0G  1.8G  2.0G  48% /tmp
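
    The post doesn't include a fix, but the usual way to bound curl-multi memory is to run the requests in smaller batches and free each handle as soon as its batch finishes, rather than holding 150 response bodies at once. A rough sketch of that batching pattern, shown here in Python with pycurl (the URL is the post's placeholder; the batch size is an assumption to tune against the 100MB cap):

        import pycurl
        from io import BytesIO

        URL = "https://www.testdomain.com/"   # placeholder from the post
        BATCH = 30                            # assumption: tune to fit the memory cap

        def fetch_batch(n):
            multi, handles, bufs = pycurl.CurlMulti(), [], []
            for _ in range(n):
                buf = BytesIO()
                c = pycurl.Curl()
                c.setopt(pycurl.URL, URL)
                c.setopt(pycurl.WRITEDATA, buf)
                c.setopt(pycurl.SSL_VERIFYPEER, 0)
                c.setopt(pycurl.SSL_VERIFYHOST, 0)
                multi.add_handle(c)
                handles.append(c)
                bufs.append(buf)
            running = 1
            while running:
                ret, running = multi.perform()
                if running:
                    multi.select(1.0)          # wait for activity, don't busy-loop
            for c in handles:                  # free every handle before the next batch
                multi.remove_handle(c)
                c.close()
            return [b.getvalue() for b in bufs]

        for _ in range(150 // BATCH):
            for body in fetch_batch(BATCH):
                pass  # process/echo each response here

    The key point is the remove_handle/close per batch: peak memory is then proportional to BATCH rather than to all 150 transfers.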

  • Why oh why doesn't my asp.net treeview update?

    - by Brendan
    I'm using an ASP.NET TreeView on a page with a custom XmlDataSource. When the user clicks on a node of the tree, a DetailsView pops up and edits a bunch of things about the underlying object. All this works properly, and the underlying object gets updated in my background object-management classes. Yay!

    However, my treeview just isn't updating the display — either immediately (which I would like it to), or on full page re-load (which is the minimal useful level I need it to be at). Am I subclassing XmlDataSource poorly? I really don't know. Can anyone point me in a good direction? Thanks!

    The markup looks about like this (chaff removed):

        <data:DefinitionDataSource runat="server" ID="DefinitionTreeSource" RootDefinitionID="uri:1"></data:DefinitionDataSource>

        <asp:TreeView ID="TreeView" runat="server" DataSourceID="DefinitionTreeSource">
            <DataBindings>
                <asp:TreeNodeBinding DataMember="definition" TextField="name" ValueField="id" />
            </DataBindings>
        </asp:TreeView>

        <asp:DetailsView ID="DetailsView1" runat="server" AutoGenerateRows="False"
                DataKeyNames="Id" DataSourceID="DefinitionSource" DefaultMode="Edit">
            <Fields>
                <asp:BoundField DataField="Name" HeaderText="Name" HeaderStyle-Wrap="false" SortExpression="Name" />
                <asp:CommandField ShowCancelButton="False" ShowInsertButton="True" ShowEditButton="True" ButtonType="Button" />
            </Fields>
        </asp:DetailsView>

    And the DefinitionTreeSource code looks like this:

        public class DefinitionDataSource : XmlDataSource
        {
            public string RootDefinitionID
            {
                get
                {
                    if (ViewState["RootDefinitionID"] != null)
                        return ViewState["RootDefinitionID"] as String;
                    return null;
                }
                set
                {
                    if (!Object.Equals(ViewState["RootDefinitionID"], value))
                    {
                        ViewState["RootDefinitionID"] = value;
                        DataBind();
                    }
                }
            }

            public DefinitionDataSource() { }

            public override void DataBind()
            {
                base.DataBind();
                setData();
            }

            private void setData()
            {
                String defXML = "<?xml version=\"1.0\" ?>";
                Test.Management.TestManager.Definition root =
                    Test.Management.TestManager.Definition.GetDefinitionById(RootDefinitionID);
                if (root != null)
                    this.Data = defXML + root.ToXMLString();
                else
                    this.Data = defXML + "<definition id=\"null\" name=\"Set Root Node\" />";
            }
        }

  • Need help with regex parsing (in perl)

    - by Charlie
    Hi all, I need some help parsing an HTML file in Perl. I used the LWP module to retrieve a webpage into $_ with $/ undefined, so there are no newline issues. Then I'm trying to find all strings matching a pattern. How do I do that? I know how to find one instance of it, but how do I match all instances? And what data structure would the results go into — a multi-dimensional array? My text (excerpt) looks like the following:

        <TR>
        <TD BGCOLOR=EEEEEE><A HREF="/program.cgi?pid=1233"><FONT FACE="ARIAL,HELVETICA,SANS-SERIF" SIZE=2>Title 1</A></FONT></TD>
        <TD BGCOLOR=EEEEEE nowrap><FONT FACE="ARIAL,HELVETICA" SIZE=2>Jun 27 2010 3:00PM</FONT></TD>
        <TD BGCOLOR=EEEEEE>&nbsp;</TD>
        </TR>
        <TR><TD BGCOLOR=EEEEEE COLSPAN=3><IMG SRC="http://images.domain.com/images/spacer.gif" WIDTH=1 HEIGHT=2><BR></TD></TR>
        <TR><TD COLSPAN=3 BGCOLOR=999999><IMG SRC="http://images.domain.com/images/spacer.gif" HEIGHT=1 WIDTH=1></TD></TR>
        <TR><TD COLSPAN=3 ><IMG SRC="http://images.domain.com/images/spacer.gif" WIDTH=1 HEIGHT=2><BR></TD></TR>
        <TR>
        <TD><A HREF="/program.cgi?pid=1234"><FONT FACE="ARIAL,HELVETICA,SANS-SERIF" SIZE=2>Title 2</A></FONT></TD>
        <TD nowrap><FONT FACE="ARIAL,HELVETICA" SIZE=2>Jun 29 2010 7:00PM</FONT></TD>
        <TD>&nbsp;</TD>
        </TR>
        <TR><TD COLSPAN=3><IMG SRC="http://images.domain.com/images/spacer.gif" WIDTH=1 HEIGHT=2><BR></TD></TR>
        <TR><TD COLSPAN=3 BGCOLOR=999999><IMG SRC="http://images.domain.com/images/spacer.gif" HEIGHT=1 WIDTH=1></TD></TR>
        <TR><TD COLSPAN=3 BGCOLOR=EEEEEE><IMG SRC="http://images.domain.com/images/spacer.gif" WIDTH=1 HEIGHT=2><BR></TD></TR>
        <TR>
        <TD BGCOLOR=EEEEEE><A HREF="/program.cgi?pid=1235"><FONT FACE="ARIAL,HELVETICA,SANS-SERIF" SIZE=2>Title 3</A></FONT></TD>
        <TD BGCOLOR=EEEEEE nowrap><FONT FACE="ARIAL,HELVETICA" SIZE=2>Jul 3 2010 7:00PM</FONT></TD>
        <TD BGCOLOR=EEEEEE>&nbsp;</TD>
        </TR>

    I want to get the following into an array (or any structure):

        { ["/program.cgi?pid=1233", "Title 1"],
          ["/program.cgi?pid=1234", "Title 2"],
          ["/program.cgi?pid=1235", "Title 3"] }

    Thanks
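
    In Perl, matching all instances is a global match in list context — the /g modifier, collecting each pass's captures into an array of array references (roughly: push @pairs, [$1, $2] inside a while (m{...}g) loop). The same idea, sketched in Python for illustration; the pattern is an assumption fitted to the excerpt's markup:

        import re

        # Yields pairs like ("/program.cgi?pid=1233", "Title 1")
        pattern = re.compile(
            r'<A HREF="(/program\.cgi\?pid=\d+)">'   # capture the link target
            r'<FONT[^>]*>([^<]+)</A>',               # capture the title text
            re.IGNORECASE)

        pairs = pattern.findall(html)  # html holds the page fetched via LWP

    A list of (link, title) pairs is the natural result structure here; a full HTML parser is the sturdier choice if the markup ever varies.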

  • Security review of an authenticated Diffie Hellman variant

    - by mtraut
    EDIT: I'm still hoping for some advice on this; I've tried to clarify my intentions...

    When I came upon device pairing in my mobile communication framework, I studied a lot of papers on this topic and also got some input from previous questions here. But I didn't find a ready-to-implement protocol solution, so I invented a derivative — and as I'm no crypto geek, I'm not sure about the security caveats of the final solution. The main questions are:

    - Is SHA256 sufficient as a commit function?
    - Is the addition of the shared secret as authentication info in the commit string safe?
    - What is the overall security of the 1024-bit group DH? I assume at most a 2^-24 probability of a successful MITM attack (because of the 24-bit challenge). Is this plausible?
    - What may be the most promising attack (besides ripping the device out of my numb, cold hands)?

    This is the algorithm sketch. For first-time pairing, a solution proposed in "Key agreement in peer-to-peer wireless networks" (DH-SC) is implemented. I based it on a commitment derived from:

    - A fixed "UUID" for the communicating entity/role (128 bits, sent at protocol start, before commitment)
    - The public DH key (192-bit private key, based on the 1024-bit Oakley group)
    - A 24-bit random challenge

    The commit is computed using SHA256:

        c = sha256( UUID || DH pub || Chall )

    Both parties exchange this commitment, then open and transfer the plain content of the above values. The 24-bit random value is displayed to the user for manual authentication. The DH session key (128 bytes, see above) is computed. When the user opts for persistent pairing, the session key is stored with the remote UUID as a shared secret.

    The next time the devices connect, the commit is computed by additionally hashing the previous DH session key before the random challenge. For sure, it is not transferred when opening:

        c = sha256( UUID || DH pub || DH sess || Chall )

    Now the user is not bothered with authenticating when the local party can derive the same commitment using its own stored previous DH session key. After a successful connection, the new DH session key becomes the new shared secret.

    As this does not exactly fit the protocols I found so far (and as such their security proofs), I'd be very interested to get an opinion from some more crypto-enabled guys here. BTW, I did read about the "EKE" protocol, but I'm not sure what the extra security level is.
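
    For concreteness, here is a minimal sketch of the two commitment computations exactly as described above, written in Python for illustration only (sizes follow the post: 128-bit UUID, public key from the 1024-bit group, 24-bit challenge; all names are invented):

        import hashlib, os

        def commitment(uuid: bytes, dh_pub: bytes, chall: bytes,
                       prev_session_key: bytes = None) -> bytes:
            """First pairing:  c = SHA-256(UUID || DHpub || Chall)
               Re-pairing:     c = SHA-256(UUID || DHpub || DHsess || Chall)"""
            h = hashlib.sha256()
            h.update(uuid)
            h.update(dh_pub)
            if prev_session_key is not None:   # only on subsequent pairings
                h.update(prev_session_key)
            h.update(chall)
            return h.digest()

        chall = os.urandom(3)  # the 24-bit challenge shown to the user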

  • SQL Server - Get Inserted Record Identity Value when Using a View's Instead Of Trigger

    - by CuppM
    For several tables that have identity fields, we are implementing a row-level security scheme using views and INSTEAD OF triggers on those views. Here is a simplified example structure:

        -- Table
        CREATE TABLE tblItem (
            ItemId int identity(1,1) primary key,
            Name varchar(20)
        )
        go

        -- View
        CREATE VIEW vwItem
        AS
        SELECT *
        FROM tblItem
        -- RLS Filtering Condition
        go

        -- Instead Of Insert Trigger
        CREATE TRIGGER IO_vwItem_Insert ON vwItem
        INSTEAD OF INSERT
        AS BEGIN
            -- RLS Security Checks on inserted Table

            -- Insert Records Into Table
            INSERT INTO tblItem (Name)
            SELECT Name FROM inserted;
        END
        go

    If I want to insert a record and get its identity, before implementing the RLS INSTEAD OF trigger, I used:

        DECLARE @ItemId int;
        INSERT INTO tblItem (Name) VALUES ('MyName');
        SELECT @ItemId = SCOPE_IDENTITY();

    With the trigger, SCOPE_IDENTITY() no longer works — it returns NULL. I've seen suggestions for using the OUTPUT clause to get the identity back, but I can't seem to get it to work the way I need it to. If I put the OUTPUT clause on the view insert, nothing is ever entered into it:

        -- Nothing is added to @ItemIds
        DECLARE @ItemIds TABLE (ItemId int);
        INSERT INTO vwItem (Name)
        OUTPUT INSERTED.ItemId INTO @ItemIds
        VALUES ('MyName');

    If I put the OUTPUT clause in the trigger on the INSERT statement, the trigger returns the table (I can view it from SQL Management Studio). I can't seem to capture it in the calling code, either by using an OUTPUT clause on that call or by using a SELECT * FROM ():

        -- Modified Instead Of Insert Trigger w/ Output
        CREATE TRIGGER IO_vwItem_Insert ON vwItem
        INSTEAD OF INSERT
        AS BEGIN
            -- RLS Security Checks on inserted Table

            -- Insert Records Into Table
            INSERT INTO tblItem (Name)
            OUTPUT INSERTED.ItemId
            SELECT Name FROM inserted;
        END
        go

        -- Calling Code
        INSERT INTO vwItem (Name) VALUES ('MyName');

    The only thing I can think of is to use the IDENT_CURRENT() function. Since that doesn't operate in the current scope, there's an issue of concurrent users inserting at the same time and messing it up. If the entire operation is wrapped in a transaction, would that prevent the concurrency issue?

        BEGIN TRANSACTION
            DECLARE @ItemId int;
            INSERT INTO tblItem (Name) VALUES ('MyName');
            SELECT @ItemId = IDENT_CURRENT('tblItem');
        COMMIT TRANSACTION

    Does anyone have any suggestions on how to do this better? I know people out there who will read this and say "Triggers are EVIL, don't use them!" While I appreciate your convictions, please don't offer that "suggestion".

  • I need help converting a C# string from one character encoding to another?

    - by Handleman
    According to Spolsky I can't call myself a developer, so there is a lot of shame behind this question...

    Scenario: from a C# application, I would like to take a string value from a SQL DB and use it as the name of a directory. I have a secure (SSL) FTP server on which I want to set the current directory using the string value from the DB.

    Problem: everything is working fine until I hit a string value with a "special" character — I seem unable to encode the directory name correctly to satisfy the FTP server.

    The code example below:

    - uses "special" character é as an example
    - uses WinSCP as an external application for the FTPS comms
    - does not show all the code required to set up the Process "_winscp"
    - sends commands to the WinSCP exe by writing to the process standard input
    - for simplicity, does not get the info from the DB, but instead simply declares a string (but I did do a .Equals to confirm that the value from the DB is the same as the declared string)
    - makes three attempts to set the current directory on the FTP server using different string encodings, all of which fail
    - makes an attempt to set the directory using a string that was created from a hand-crafted byte array, which works

        Process _winscp = new Process();
        byte[] buffer;
        string nameFromString = "Sinéad O'Connor";

        _winscp.StandardInput.WriteLine("cd \"" + nameFromString + "\"");

        buffer = Encoding.UTF8.GetBytes(nameFromString);
        _winscp.StandardInput.WriteLine("cd \"" + Encoding.UTF8.GetString(buffer) + "\"");

        buffer = Encoding.ASCII.GetBytes(nameFromString);
        _winscp.StandardInput.WriteLine("cd \"" + Encoding.ASCII.GetString(buffer) + "\"");

        byte[] nameFromBytes = new byte[] { 83, 105, 110, 130, 97, 100, 32, 79, 39, 67, 111, 110, 110, 111, 114 };
        _winscp.StandardInput.WriteLine("cd \"" + Encoding.Default.GetString(nameFromBytes) + "\"");

    The UTF8 encoding changes é to 101 (decimal) but the FTP server doesn't like it. The ASCII encoding changes é to 63 (decimal) but the FTP server doesn't like it. When I represent é as value 130 (decimal) the FTP server is happy, except I can't find a method that will do this for me (I had to manually construct the string from explicit bytes).

    Anyone know what I should do to my string to encode the é as 130, make the FTP server happy, and finally elevate me to level 1 developer by explaining the only single thing a developer should understand?
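
    A hint at what that hand-built byte array encodes (my observation, not from the post): byte 130 (0x82) is 'é' in the IBM OEM code pages (437 and 850) that Windows console pipes traditionally use, whereas UTF-8 needs two bytes for é and ASCII cannot represent it at all. A quick check, written in Python only for brevity:

        # 130 (0x82) round-trips to 'é' under OEM code page 437/850
        assert "é".encode("cp437") == bytes([130])
        assert bytes([130]).decode("cp437") == "é"
        assert "é".encode("utf-8") == b"\xc3\xa9"   # two bytes, not one

    The C# analogue would presumably be Encoding.GetEncoding(437) (or 850, depending on the console's code page) applied when writing to the process's standard input — an assumption worth verifying against the actual WinSCP session.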

  • Advice on "Invalid Pointer Operation" when using complex records

    - by Xaz
    Env: Delphi 2007

    Justification: I tend to use complex records quite frequently as they offer almost all of the advantages of classes but with much simpler handling.

    Anyhoo, one particularly complex record I have just implemented is trashing memory (later leading to an "Invalid Pointer Operation" error). This is an example of the memory-trashing code:

        sSignature := gProfiles.Profile[_stPrimary].Signature.Formatted(True);

    On the second time I call it I get "Invalid Pointer Operation". It works OK if I call it like this:

        AProfile := gProfiles.Profile[_stPrimary];
        ASignature := AProfile.Signature;
        sSignature := ASignature.Formatted(True);

    Background code:

        gProfiles: TProfiles;

        TProfiles = Record
        private
          FPrimaryProfileID: Integer;
          FCachedProfile: TProfile;
          ...
        public
          < much code removed >
          property Profile[ProfileType: TProfileType]: TProfile Read GetProfile;
        end;

        function TProfiles.GetProfile(ProfileType: TProfileType): TProfile;
        begin
          case ProfileType of
            _stPrimary : Result := ProfileByID(FPrimaryProfileID);
            ...
          end;
        end;

        function TProfiles.ProfileByID(iID: Integer): TProfile;
        begin
          <snip>
          if LoadProfileOfID(iID, FCachedProfile) then
          begin
            Result := FCachedProfile;
          end
          else
            ...
        end;

        TProfile = Record
        private
          ...
        public
          ...
          Signature: TSignature;
          ...
        end;

        TSignature = Record
        private
        public
          PlainTextFormat : string;
          HTMLFormat : string;
          // The text to insert into a message when using this profile
          function Formatted(bHTML: boolean): string;
        end;

        function TSignature.Formatted(bHTML: boolean): string;
        begin
          if bHTML then
            result := HTMLFormat
          else
            result := PlainTextFormat;
          < SNIP MUCH CODE >
        end;

    OK, so I have a record within a record within a record, which is approaching Inception-level confusion, and I'm the first to admit it is not really a good model. Clearly I am going to have to restructure it. What I would like from you gurus is a better understanding of why it is trashing the memory (something to do with the string object that is created then freed...) so that I can avoid making these kinds of errors in future. Thanks

  • WPF: Binding to ListBoxItem.IsSelected doesn't work for off-screen items

    - by Qwertie
    In my program I have a set of view-model objects to represent items in a ListBox (multi-select is allowed). The viewmodel has an IsSelected property that I would like to bind to the ListBox so that selection state is managed in the viewmodel rather than in the listbox itself. However, apparently the ListBox doesn't maintain bindings for most of the off-screen items, so in general the IsSelected property is not synchronized correctly. Here is some code that demonstrates the problem. First the XAML:

        <StackPanel>
            <StackPanel Orientation="Horizontal">
                <TextBlock>Number of selected items: </TextBlock>
                <TextBlock Text="{Binding NumItemsSelected}"/>
            </StackPanel>
            <ListBox ItemsSource="{Binding Items}" Height="200" SelectionMode="Extended">
                <ListBox.ItemContainerStyle>
                    <Style TargetType="{x:Type ListBoxItem}">
                        <Setter Property="IsSelected" Value="{Binding IsSelected}"/>
                    </Style>
                </ListBox.ItemContainerStyle>
            </ListBox>
            <Button Name="TestSelectAll" Click="TestSelectAll_Click">Select all</Button>
        </StackPanel>

    C# Select All handler:

        private void TestSelectAll_Click(object sender, RoutedEventArgs e)
        {
            foreach (var item in _dataContext.Items)
                item.IsSelected = true;
        }

    C# viewmodel:

        public class TestItem : NPCHelper
        {
            TestDataContext _c;
            string _text;
            public TestItem(TestDataContext c, string text) { _c = c; _text = text; }
            public override string ToString() { return _text; }

            bool _isSelected;
            public bool IsSelected
            {
                get { return _isSelected; }
                set
                {
                    _isSelected = value;
                    FirePropertyChanged("IsSelected");
                    _c.FirePropertyChanged("NumItemsSelected");
                }
            }
        }

        public class TestDataContext : NPCHelper
        {
            public TestDataContext()
            {
                for (int i = 0; i < 200; i++)
                    _items.Add(new TestItem(this, i.ToString()));
            }

            ObservableCollection<TestItem> _items = new ObservableCollection<TestItem>();
            public ObservableCollection<TestItem> Items { get { return _items; } }

            public int NumItemsSelected
            {
                get { return _items.Where(it => it.IsSelected).Count(); }
            }
        }

        public class NPCHelper : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;
            public void FirePropertyChanged(string prop)
            {
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs(prop));
            }
        }

    Two separate problems can be observed:

    1. If you click the first item and then press Shift+End, all 200 items should be selected; however, the heading reports that only 21 items are selected.
    2. If you click "Select all" then all items are indeed selected. If you then click an item in the ListBox you would expect the other 199 items to be deselected, but this does not happen. Instead, only the items that are on the screen (and a few others) are deselected. All 199 items will not be deselected unless you first scroll through the list from beginning to end (and even then, oddly enough, it doesn't work if you perform the scrolling with the little scroll box).

    My questions are: Can someone explain precisely why this occurs? Can I avoid or work around the problem?

  • How do I tie a cmbBox that selects all drives (local and network) into a treeNode VB

    - by jpavlov
    How do I tie in a selected item from a combo box with a TreeView? I am looking to just obtain the value of the one selected drive. Thanks.

        Imports System
        Imports System.IO
        Imports System.IO.File
        Imports System.Windows.Forms

        Public Class F_Treeview_Demo

            Private Sub F_Treeview_Demo_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
                ' Initialize the local directory treeview
                Dim nodeText As String = ""
                Dim sb As New C_StringBuilder

                With My.Computer.FileSystem
                    ' Read in the number of drives
                    For i As Integer = 0 To .Drives.Count - 1
                        '** Build the drive's node text
                        sb.ClearText()
                        sb.AppendText(.Drives(i).Name)
                        cmbDrives.Items.Add(sb.FullText)
                    Next
                End With

                ListRootNodes()
            End Sub

            Private Sub btnExit_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnExit.Click
                Application.Exit()
            End Sub

            Private Sub tvwLocalFolders_AfterSelect(ByVal sender As Object, ByVal e As System.Windows.Forms.TreeViewEventArgs) _
                    Handles tvwLocalFolders.AfterSelect
                ' Display the path for the selected node
                Dim folder As String = tvwLocalFolders.SelectedNode.Tag
                lblLocalPath.Text = folder
                ListView1.Items.Clear()
                Dim childNode As TreeNode = e.Node.FirstNode
                Dim parentPath As String = AddChar(e.Node.Tag)
            End Sub

            Private Sub AddToList(ByVal nodes As TreeNodeCollection)
                For Each node As TreeNode In nodes
                    If node.Checked Then
                        ListView1.Items.Add(node.Text)
                        ListView1.Items.Add(Chr(13))
                        AddToList(node.Nodes)
                    End If
                Next
            End Sub

            Private Sub tvwLocalFolders_BeforeExpand(ByVal sender As Object, ByVal e As System.Windows.Forms.TreeViewCancelEventArgs) _
                    Handles tvwLocalFolders.BeforeExpand
                ' Display the path for the selected node
                lblLocalPath.Text = e.Node.Tag

                ' Populate all child nodes below the selected node
                Dim parentPath As String = AddChar(e.Node.Tag)
                tvwLocalFolders.BeginUpdate()
                Dim childNode As TreeNode = e.Node.FirstNode
                ' this I added
                Dim smallNode As TreeNode = e.Node.FirstNode
                Do While childNode IsNot Nothing
                    ListLocalSubFolders(childNode, parentPath & childNode.Text)
                    childNode = childNode.NextNode
                    ' this I added
                    ListLocalFiles(smallNode, parentPath & smallNode.Text)
                Loop
                tvwLocalFolders.EndUpdate()
                tvwLocalFolders.Refresh()

                ' Select the node being expanded
                tvwLocalFolders.SelectedNode = e.Node
                ListView1.Items.Clear()
                AddToList(tvwLocalFolders.Nodes)
                ListView1.Items.Add(Environment.NewLine)
            End Sub

            Private Sub ListRootNodes()
                ' Add all local drives to the Local treeview
                Dim nodeText As String = ""
                Dim sb As New C_StringBuilder

                With My.Computer.FileSystem
                    For i As Integer = 0 To .Drives.Count - 1
                        '** Build the drive's node text
                        sb.ClearText()
                        sb.AppendText(.Drives(i).Name)
                        nodeText = sb.FullText
                        nodeText = Me.cmbDrives.SelectedItem

                        '** Add the drive to the treeview
                        Dim driveNode As TreeNode
                        driveNode = tvwLocalFolders.Nodes.Add(nodeText)
                        'driveNode.Tag = .Drives(i).Name

                        '** Add the next level of subfolders
                        'ListLocalSubFolders(driveNode, .Drives(i).Name)
                        ListLocalSubFolders(driveNode, nodeText)
                        'driveNode = Nothing
                    Next
                End With
            End Sub

            Private Sub ListLocalFiles(ByVal ParentNode As TreeNode, ByVal PParentPath As String)
                Dim FileNode As String = ""
                Try
                    For Each FileNode In Directory.GetFiles(PParentPath)
                        Dim smallNode As TreeNode
                        smallNode = ParentNode.Nodes.Add(FilenameFromPath(FileNode))
                        With smallNode
                            .ImageIndex = 0
                            .SelectedImageIndex = 1
                            .Tag = FileNode
                        End With
                        smallNode = Nothing
                    Next
                Catch ex As Exception
                End Try
            End Sub

            Private Sub ListLocalSubFolders(ByVal ParentNode As TreeNode, ByVal ParentPath As String)
                ' Add all local subfolders below the passed Local treeview node
                Dim FolderNode As String = ""
                Try
                    For Each FolderNode In Directory.GetDirectories(ParentPath)
                        Dim childNode As TreeNode
                        childNode = ParentNode.Nodes.Add(FilenameFromPath(FolderNode))
                        With childNode
                            .ImageIndex = 0
                            .SelectedImageIndex = 1
                            .Tag = FolderNode
                        End With
                        childNode = Nothing
                    Next
                Catch ex As Exception
                End Try
            End Sub

            Private Sub ComboBox1_SelectedIndexChanged(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmbDrives.SelectedIndexChanged
            End Sub

            Private Sub lblLocalPath_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles lblLocalPath.Click
            End Sub

            Private Sub grpLocalFileSystem_Enter(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles grpLocalFileSystem.Enter
            End Sub

            Private Sub btn1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btn1.Click
                ' lbl1.Text =
            End Sub

            Private Sub ListView1_SelectedIndexChanged(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles ListView1.SelectedIndexChanged
            End Sub

        End Class

  • jQuery action being called when selector isn't met?

    - by dougoftheabaci
    I've been working on a prototype for a client's web site and I've run into a rather significant snag. You can view the prototype here. As you can see, the way it works is you can scroll a set of slides horizontally and, by clicking one, open a stack containing yet more slides. If you then click again on an image in that stack it opens up a lightbox. Clicking on another stack or the close button will close that stack (and open another, as case may be). That all works great. However, you get some weird behavior if you do the following:

    1. Click to open any stack.
    2. Click to open an image's lightbox (this works best if you click on the image that's level with the main list).
    3. Close the lightbox and the stack, either by clicking the close button or by clicking on another stack.
    4. Click back to the first stack.

    Instead of reopening the stack, you get the lightbox. This confuses me, as the lightbox should only ever be called if there is a class on the containing UL, and that class is removed when the lightbox is closed. I've checked and double-checked this; it's definitely missing. Here are the respective functions:

        $("ul.hide a.lightbox").live("click",function(){
            $("ul.show").removeClass("show").addClass("hide");
            $(this).parent().parent().removeClass("hide").addClass("show");
            $("ul.hide").animate({opacity: 0.2});
            $("ul.show").animate({opacity: 1});
            $("#next").animate({opacity: 0.2});
            $("#prev").animate({opacity: 0.2});
            return false;
        });

        $("ul.show a.lightbox").live("click",function(){
            $(this).fancybox().trigger("click");
            return false;
        });

    As you can see, in order for the lightbox to be called the containing UL has to have the class of show. However, if you check it with Firebug it won't. For those who are curious, the added .trigger("click") is there because the lightbox would otherwise require a double-click to launch. Any idea how I can fix this?

  • Minutia on Objective-C Categories and Extensions.

    - by Matt Wilding
    I learned something new while trying to figure out why my readwrite property declared in a private category wasn't generating a setter. It was because my category was named:

        // .m
        @interface MyClass (private)
        @property (readwrite, copy) NSArray* myProperty;
        @end

    Changing it to:

        // .m
        @interface MyClass ()
        @property (readwrite, copy) NSArray* myProperty;
        @end

    and my setter is synthesized. I now know that a class extension is not just another name for an anonymous category. Leaving a category unnamed causes it to morph into a different beast: one that now gives compile-time method-implementation enforcement and allows you to add ivars. I now understand the general philosophies underlying each of these: categories are generally used to add methods to any class at runtime, and class extensions are generally used to enforce private API implementation and add ivars. I accept this. But there are trifles that confuse me.

    First, at a high level: why differentiate like this? These concepts seem like similar ideas that can't decide if they are the same, or different concepts. If they are the same, I would expect the exact same things to be possible using a category with no name as with a named category (which they are not). If they are different (which they are), I would expect a greater syntactical disparity between the two. It seems odd to say, "Oh, by the way, to implement a class extension, just write a category, but leave out the name. It magically changes."

    Second, on the topic of compile-time enforcement: if you can't add properties in a named category, why does doing so convince the compiler that you did just that? To clarify, I'll illustrate with my example. I can declare a readonly property in the header file:

        // .h
        @interface MyClass : NSObject
        @property (readonly, copy) NSString* myString;
        @end

    Now, I want to head over to the implementation file and give myself private readwrite access to the property. If I do it correctly:

        // .m
        @interface MyClass ()
        @property (readwrite, copy) NSString* myString;
        @end

    I get a warning when I don't synthesize, and when I do, I can set the property and everything is peachy. But, frustratingly, if I happen to be slightly misguided about the difference between category and class extension and I try:

        // .m
        @interface MyClass (private)
        @property (readwrite, copy) NSString* myString;
        @end

    the compiler is completely pacified into thinking that the property is readwrite. I get no warning, and not even the nice compile error "Object cannot be set - either readonly property or no setter found" upon setting myString that I would have gotten had I not declared the readwrite property in the category. I just get the "Does not respond to selector" exception at runtime. If adding ivars and properties is not supported by (named) categories, is it too much to ask that the compiler play by the same rules? Am I missing some grand design philosophy?

  • What web platform is right for me?

    - by egervari
    I've been looking at web frameworks like Rails, Grails, etc. I'm used to doing applications in Spring Framework with Hibernate... and I want something more productive. One of the things I realized is that while some of the things in Grails are sexy, there are some serious problems with it. Grails' controllers:

    1) are implemented awfully. They don't seem to be able to extend from super classes at runtime. I tried this to add base actions and helper methods, and this seems to cause Grails to blow up.
    2) are based on an obsolete request-parameters model (rather than form-backing objects, which are much nicer).
    3) are hard to test. Command objects are treated totally differently... and it's actually MUCH harder to write the test than it is to write the controller code.
    4) Command objects operate totally differently. They are pre-validated and bound, which causes a lot of inconsistencies with the basic parameter model.
    5) Command objects are not reusable, and it's a pain in the rear to reuse most of the stuff from the domain classes, like constraints and fields. This is TRIVIAL to do in basic Spring. Why the hell was it not trivial to do in Grails?
    6) The scaffolding that is generated is pure crap. It doesn't generalize inserts and updates... and it actually copy/pastes a pile of code into two views: create.gsp and edit.gsp. The views themselves are gargantuan piles of doggie do-do. This is further compounded by the fact that it uses low-level parameters and not objects.

    Integration tests are 30x slower than a Spring integration test. It is disgusting. Some mocking tests are so hard to write, and aren't guaranteed to work when deployed, that I think it discourages fast TDD cycles. Most things seem to screw up Grails while it's running, like adding a taglib, or anything really. The server-restart problem wasn't solved at all.

    I'm starting to think going with Spring/Hibernate/Java is the only way to go. While there is a pretty big cost at startup, I know it'll eventually smooth out. It sucks that I can't use a language like Scala... because idiomatically, it is so incompatible with Hibernate.

    This app is also not a run-of-the-mill UI over a database. It's got some of that, but it's not going to be a slouch. I am deathly scared of Grails now because of how crap it is in the controller layer. Suggestions on what I can do?

  • Hibernate - EhCache - Which region to Cache associations/sets/collections ??

    - by lifeisnotfair
    Hi all, I am a newcomer to Hibernate. It would be great if someone could comment on the following query that I have. Say I have a parent class and each parent has multiple children, so the mapping file of the parent class would be something like this:

    parent.hbm.xml

        <hibernate-mapping>
            <class name="org.demo.parent" table="parent" lazy="true">
                <cache usage="read-write" region="org.demo.parent"/>
                <id name="id" column="id" type="integer" length="10">
                    <generator class="native"/>
                </id>
                <property name="name" column="name" type="string" length="50"/>
                <set name="children" lazy="true">
                    <cache usage="read-write" region="org.demo.parent.children"/>
                    <key column="parent_id"/>
                    <one-to-many class="org.demo.children"/>
                </set>
            </class>
        </hibernate-mapping>

    children.hbm.xml

        <hibernate-mapping>
            <class name="org.demo.children" table="children" lazy="true">
                <cache usage="read-write" region="org.demo.children"/>
                <id name="id" column="id" type="integer" length="10">
                    <generator class="native"/>
                </id>
                <property name="name" column="name" type="string" length="50"/>
                <many-to-one name="parent_id" column="parent_id" type="integer" length="10" not-null="true"/>
            </class>
        </hibernate-mapping>

    So for the set children, should we specify the region org.demo.parent.children where the association should be cached, or should we use the cache region org.demo.children where the children themselves would be cached? I am using EhCache as the second-level cache provider. I tried to search for the answer to this question but couldn't find any answer in this direction. It makes more sense to use org.demo.children, but I don't know in which scenarios one should use a separate cache region for associations/sets/collections, as in the above case. Kindly provide your inputs; also let me know if I am not clear in my question. Thanks all.

  • emacs: how do I use edebug on code that is defined in a macro?

    - by Cheeso
    I don't even know the proper terminology for this Lisp syntax, so I don't know if the words I'm using to ask the question make sense. But the question makes sense, I'm sure. So let me just show you.

    cc-mode (cc-fonts.el) has things called "matchers", which are bits of code that run to decide how to fontify a region of code. That sounds simple enough, but the matcher code is in a form I don't completely understand, with backticks and comma-at-sign and just comma and so on, and furthermore it is embedded in a c-lang-defconst, which itself is a macro. And I want to run edebug on that code. Look:

        (c-lang-defconst c-basic-matchers-after
          "Font lock matchers for various things that should be fontified after
        generic casts and declarations are fontified.  Used on level 2 and higher."
          t `(;; Fontify the identifiers inside enum lists.  (The enum type
              ;; name is handled by `c-simple-decl-matchers' or
              ;; `c-complex-decl-matchers' below.
              ,@(when (c-lang-const c-brace-id-list-kwds)
                  `((,(c-make-font-lock-search-function
                       (concat
                        "\\<\\("
                        (c-make-keywords-re nil (c-lang-const c-brace-id-list-kwds))
                        "\\)\\>"
                        ;; Disallow various common punctuation chars that can't come
                        ;; before the '{' of the enum list, to avoid searching too far.
                        "[^\]\[{}();,/#=]*"
                        "{")
                       '((c-font-lock-declarators limit t nil)
                         (save-match-data
                           (goto-char (match-end 0))
                           (c-put-char-property (1- (point)) 'c-type 'c-decl-id-start)
                           (c-forward-syntactic-ws))
                         (goto-char (match-end 0)))))))))

    I am reading up on Lisp syntax to figure out what those things are and what to call them, but aside from that, how can I run edebug on the code that follows the comment that reads ";; Fontify the identifiers inside enum lists"? I know how to run edebug on a defun — just invoke edebug-defun within the function's definition, and off I go. Is there a corresponding thing I need to do to edebug the cc-mode matcher code forms?

  • Why do I get an extra newline in the middle of a UTF-8 character with XML::Parser?

    - by René Nyffenegger
    I encountered a problem dealing with UTF-8, XML and Perl. The following is the smallest piece of code and data needed to reproduce the problem. Here's an XML file that needs to be parsed:

        <?xml version="1.0" encoding="utf-8"?>
        <test>
          <words>???????????? ??????? ????????? ???? ???????????? ??????</words>
          <words>???????????? ??????? ????????? ???? ???????????? ??????</words>
          <words>???????????? ??????? ????????? ???? ???????????? ??????</words>
          [<words> .... </words> repeated 148 times]
          <words>???????????? ??????? ????????? ???? ???????????? ??????</words>
          <words>???????????? ??????? ????????? ???? ???????????? ??????</words>
        </test>

    The parsing is done with this Perl script:

        use warnings;
        use strict;

        use XML::Parser;
        use Data::Dump;

        my $in_words = 0;

        my $xml_parser = new XML::Parser(Style => 'Stream');
        $xml_parser->setHandlers(
            Start   => \&start_element,
            End     => \&end_element,
            Char    => \&character_data,
            Default => \&default);

        open OUT, '>out.txt';
        binmode(OUT, ":utf8");

        open XML, 'xml_test.xml' or die;
        $xml_parser->parse(*XML);
        close XML;
        close OUT;

        sub start_element {
            my ($parseinst, $element, %attributes) = @_;
            if ($element eq 'words') {
                $in_words = 1;
            } else {
                $in_words = 0;
            }
        }

        sub end_element {
            my ($parseinst, $element, %attributes) = @_;
            if ($element eq 'words') {
                $in_words = 0;
            }
        }

        sub default {
            # nothing to see here;
        }

        sub character_data {
            my ($parseinst, $data) = @_;
            if ($in_words) {
                if ($in_words) {
                    print OUT "$data\n";
                }
            }
        }

    When the script is run, it produces the out.txt file. The problem is in this file on line 147. The 22nd character (which in UTF-8 consists of \xd6 \xb8) is split between the d6 and b8 with a newline. This should not happen. Now, I am interested in whether someone else has this problem or can reproduce it, and in why I am getting this problem. I am running this script on Windows:

        C:\temp>perl -v

        This is perl, v5.10.0 built for MSWin32-x86-multi-thread
        (with 5 registered patches, see perl -V for more detail)

        Copyright 1987-2007, Larry Wall

        Binary build 1003 [285500] provided by ActiveState http://www.ActiveState.com
        Built May 13 2008 16:52:49
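
    One plausible reading of the symptom (an inference on my part, not stated in the post): the parser delivers character data to the Char handler in buffer-sized chunks, and a chunk boundary can land between the two bytes of a multi-byte UTF-8 character when the data is handled as raw bytes; printing each chunk with a trailing newline then splits the character. The same failure mode, illustrated in Python with the exact byte pair from the post:

        ch = "\u05b8"                        # the split character: UTF-8 bytes d6 b8
        data = ("x" + ch).encode("utf-8")    # b'x\xd6\xb8'
        chunk1, chunk2 = data[:2], data[2:]  # a buffer boundary lands mid-character
        broken = chunk1 + b"\n" + chunk2     # "print chunk + newline" per callback
        # broken == b'x\xd6\n\xb8' -- decoding this fails; the character is split

    If that reading is right, accumulating $data across Char callbacks and printing once per element, rather than once per chunk, would avoid the mid-character newline.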

  • PHP MySQL Zend-ACL - Find all inherited items (Children / Parents)

    - by Scoobler
    I have one MySQL DB table like the following, the resources table:

        id | name      | type
        ---+-----------+-----------
        1  | guest     | user
        2  | member    | user
        3  | moderator | user
        4  | owner     | user
        5  | admin     | user
        6  | index     | controller

    On to the next table, the rules table:

        id | user_id | rule | resource_id | extras
        ---+---------+------+-------------+---------------------
        1  | 2       | 3    | 1           | null
        2  | 3       | 3    | 2           | null
        3  | 4       | 3    | 3           | null
        4  | 5       | 3    | 4           | null
        5  | 6       | 1    | 1           | index,login,register
        6  | 6       | 2    | 2           | login,register
        7  | 6       | 1    | 2           | logout

    OK, sorry for the length, but I am trying to give a full picture of what I am trying to do. So the way it works: a role (aka user) can be granted (rule: 1) access to a controller, a role can inherit (rule: 3) access from another role, and a role can be denied (rule: 2) access to a controller. (A user is a resource and a controller is a resource.) Access to actions is granted/denied using the extras column. This all works; it's not a problem with setting up the ACL within Zend.

    What I am now trying to do is show the relationships; to do that I need to find the lowest level at which a role is granted access to a controller, stopping if access has been explicitly removed. I plan on listing the roles. When I click a role, I want it to show all the controllers that role has access to. Then clicking on a controller shows the actions the role is allowed to do.

    So in the example above, a guest is allowed to view the index action of the index controller, along with the login action. A member inherits the same access, but is then denied access to the login action and register action. A moderator inherits the rules of a member. So if I were to select the role moderator, I want to see the controller index listed. If I click on the controller, it should show the allowed actions as being action: index (which was originally granted to the guest and hasn't since been disallowed).

    Are there any examples of doing this? I am obviously working with the Zend MVC (PHP) and MySQL. Even just a pseudo-code example would be a helpful starting point — this is one of the last parts of the jigsaw I am putting together.

    P.S. Obviously I have the ACL object — is it going to be easier to interrogate that, or is it better to do it myself via PHP/MySQL? The aim is to show what a role can access, which will then allow me to add or edit a role, controller and action in a GUI style (that is somewhat the easy bit) — currently I am updating the DB manually as I have been building the site.
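
    A pseudo-code starting point of the kind asked for, written here as Python: walk each role's inheritance chain, letting an explicit allow/deny override whatever was inherited. The rule codes (1 = allow, 2 = deny, 3 = inherit) and the sample data follow the tables above; the function and data-structure names are invented:

        # Per role: a list of (rule, target, actions) taken from the rules table.
        RULES = {
            "guest":     [(1, "index", {"index", "login", "register"})],
            "member":    [(3, "guest", None),
                          (2, "index", {"login", "register"}),
                          (1, "index", {"logout"})],
            "moderator": [(3, "member", None)],
        }

        def allowed_actions(role, controller):
            allowed = set()
            for rule, target, actions in RULES.get(role, []):
                if rule == 3:                                 # inherit from parent role
                    allowed |= allowed_actions(target, controller)
                elif target == controller and rule == 1:      # explicit allow
                    allowed |= actions
                elif target == controller and rule == 2:      # explicit deny wins
                    allowed -= actions
            return allowed

        print(allowed_actions("moderator", "index"))
        # -> {'index', 'logout'} under the sample tables above

    The same walk, done per controller, gives the "which controllers can this role reach" listing; whether to drive it from the Zend_Acl object or straight from the two tables is then just a question of where the rules are easier to enumerate.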

  • NSNotifications vs delegate for multiple instances of same protocol

    - by Brent Traut
    I could use some architectural advice. I've run into the following problem a few times now and I've never found a truly elegant way to solve it. The issue, described at the highest level possible: I have a parent class that would like to act as the delegate for multiple children (all using the same protocol), but when the children call methods on the parent, the parent no longer knows which child is making the call.

    I would like to use loose coupling (delegates/protocols or notifications) rather than direct calls. I don't need multiple handlers, so notifications seem like they might be overkill.

    To illustrate the problem, let me try a super-simplified example: I start with a parent view controller (and corresponding view). I create three child views and insert each of them into the parent view. I would like the parent view controller to be notified whenever the user touches one of the children. There are a few options to notify the parent:

    1. Define a protocol. The parent implements the protocol and sets itself as the delegate to each of the children. When the user touches a child view, its view controller calls its delegate (the parent). In this case, the parent is notified that a view was touched, but it doesn't know which one. Not good enough.
    2. Same as #1, but define the methods in the protocol to also pass some sort of identifier. When the child tells its delegate that it was touched, it also passes a pointer to itself. This way, the parent knows exactly which view was touched. It just seems really strange for an object to pass a reference to itself.
    3. Use NSNotifications. The parent defines a separate method for each of the three children and then subscribes to the "viewWasTouched" notification for each of the three children as the notification sender. The children don't need to attach themselves to the user dictionary, but they do need to send the notification with a pointer to themselves as the scope.
    4. Same as #3, but rather than using separate methods, the parent could just use one, with a switch case or other branching along with the notification's sender to determine which path to take.
    5. Create multiple man-in-the-middle classes that act as the delegates to the child views and then call methods on the parent either with a pointer to the child or with some other differentiating factor. This approach doesn't seem scalable.

    Are any of these approaches considered best practice? I can't say for sure, but it feels like I'm missing something more obvious/elegant.

  • Loosely coupled implicit conversion

    - by ltjax
    Implicit conversion can be really useful when types are semantically equivalent. For example, imagine two libraries that implement a type identically, but in different namespaces. Or just a type that is mostly identical, except for some semantic sugar here and there. Now you cannot pass one type into a function (in one of those libraries) that was designed to use the other, unless that function is a template. If it's not, you have to somehow convert one type into the other. This should be trivial (or otherwise the types are not so identical after all!), but calling the conversion explicitly bloats your code with mostly meaningless function calls. While such conversion functions might actually copy some values around, they essentially do nothing from a high-level "programmer's" point of view.

    Implicit conversion constructors and operators could obviously help, but they introduce coupling, so that one of those types has to know about the other. Usually, at least when dealing with libraries, that is not the case, because the presence of one of those types makes the other one redundant. Also, you cannot always change libraries.

    Now I see two options on how to make implicit conversion work in user code.

    The first would be to provide a proxy type that implements conversion operators and conversion constructors (and assignments) for all the involved types, and always use that.

    The second requires a minimal change to the libraries, but allows great flexibility: add a conversion constructor to each involved type that can be externally, optionally enabled. For example, for a type A add a constructor:

        template <class T>
        A( const T& src,
           typename boost::enable_if<conversion_enabled<T,A>>::type* ignore=0 )
        {
            *this = convert(src);
        }

    and a template

        template <class X, class Y>
        struct conversion_enabled : public boost::mpl::false_ {};

    that disables the implicit conversion by default. Then, to enable conversion between two types, specialize the template:

        template <>
        struct conversion_enabled<OtherA, A> : public boost::mpl::true_ {};

    and implement a convert function that can be found through ADL.

    I would personally prefer to use the second variant, unless there are strong arguments against it. Now to the actual question(s): What's the preferred way to associate types for implicit conversion? Are my suggestions good ideas? Are there any downsides to either approach? Is allowing conversions like that dangerous? Should library implementers in general supply the second method when it's likely that their type will be replicated in software they are most likely being used with? (I'm thinking of 3D-rendering middleware here, where most of those packages implement a 3D vector.)

  • Data Warehouse ETL slow - change primary key in dimension?

    - by Jubbles
    I have a working MySQL data warehouse that is organized as a star schema, and I am using Talend Open Studio for Data Integration 5.1 to create the ETL process. I would like this process to run once per day. I have estimated that one of the dimension tables (dimUser) will have approximately 2 million records and 23 columns.

    I created a small test ETL process in Talend that worked, but given the amount of data that may need to be updated daily, the current performance will not cut it. It takes the ETL process four minutes to UPDATE or INSERT 100 records to dimUser. If I assumed a linear relationship between the count of records and the amount of time to UPDATE or INSERT, then there is no way the ETL can finish in 3-4 hours (my hope), let alone one day.

    Since I'm unfamiliar with Java, I wrote the ETL as a Python script and ran into the same problem. Although, I did discover that if I did only INSERTs, the process went much faster. I am pretty sure that the bottleneck is caused by the UPDATE statements.

    The primary key in dimUser is an auto-increment integer. My friend suggested that I scrap this primary key and replace it with a multi-field primary key (in my case, 2-3 fields). Before I rip the test data out of my warehouse and change the schema, can anyone provide suggestions or guidelines related to:

    - the design of the data warehouse
    - the ETL process
    - how realistic it is to have an ETL process INSERT or UPDATE a few million records each day
    - whether my friend's suggestion will significantly help

    If you need any further information, just let me know and I'll post it.

    UPDATE - additional information:

        mysql> describe dimUser;

        Field        Type                 Null  Key  Default              Extra
        user_key     int(10) unsigned     NO    PRI  NULL                 auto_increment
        id_A         int(10) unsigned     NO         NULL
        id_B         int(10) unsigned     NO         NULL
        field_4      tinyint(4) unsigned  NO         0
        field_5      varchar(50)          YES        NULL
        city         varchar(50)          YES        NULL
        state        varchar(2)           YES        NULL
        country      varchar(50)          YES        NULL
        zip_code     varchar(10)          NO         99999
        field_10     tinyint(1)           NO         0
        field_11     tinyint(1)           NO         0
        field_12     tinyint(1)           NO         0
        field_13     tinyint(1)           NO         1
        field_14     tinyint(1)           NO         0
        field_15     tinyint(1)           NO         0
        field_16     tinyint(1)           NO         0
        field_17     tinyint(1)           NO         1
        field_18     tinyint(1)           NO         0
        field_19     tinyint(1)           NO         0
        field_20     tinyint(1)           NO         0
        create_date  datetime             NO         2012-01-01 00:00:00
        last_update  datetime             NO         2012-01-01 00:00:00
        run_id       int(10) unsigned     NO         999

    I used a surrogate key because I had read that it was good practice. Since, from a business perspective, I want to keep aware of potential fraudulent activity (say, for 200 days a user is associated with state X and then the next day they are associated with state Y — they could have moved or their account could have been compromised), geographic data is kept. The field id_B may have a few distinct values of id_A associated with it, but I am interested in knowing distinct (id_A, id_B) tuples. In the context of this information, my friend suggested that something like (id_A, id_B, zip_code) be the primary key.

    For the large majority of daily ETL processes (80%), I only expect the following fields to be updated for existing records: field_10 - field_14, last_update, and run_id (this field is a foreign key to my etlLog table and is used for ETL auditing purposes).
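
    The UPDATE-heavy trickle described above is commonly attacked by batching an upsert, so each round trip carries many rows and the unique key does the insert-or-update branching. A hedged sketch in Python DB-API style — connection details, the staging data, and the column subset are placeholders; INSERT ... ON DUPLICATE KEY UPDATE is real MySQL syntax, but it needs a unique key on (id_A, id_B) to target, which dovetails with the friend's multi-field key suggestion:

        import MySQLdb  # assumption: any MySQL DB-API driver behaves the same way

        conn = MySQLdb.connect(host="localhost", db="warehouse",
                               user="etl", passwd="...")
        cur = conn.cursor()

        SQL = """INSERT INTO dimUser (id_A, id_B, zip_code, field_10, last_update, run_id)
                 VALUES (%s, %s, %s, %s, NOW(), %s)
                 ON DUPLICATE KEY UPDATE
                     field_10 = VALUES(field_10),
                     last_update = NOW(),
                     run_id = VALUES(run_id)"""

        # (id_A, id_B, zip_code, field_10, run_id) rows staged by the extract step
        rows = [(1, 2, "10001", 1, 99)]

        BATCH = 5000
        for start in range(0, len(rows), BATCH):
            cur.executemany(SQL, rows[start:start + BATCH])
            conn.commit()  # one commit per batch, not per row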

  • Perl kill(0, $pid) in Windows always returning 1

    - by banshee_walk_sly
    I'm trying to make a Perl script that will run a set of other programs in Windows. I need to be able to capture the stdout, stderr, and exit code of the process, and I need to be able to see if a process exceeds its allotted execution time. Right now, the pertinent part of my code looks like:

        ...
        $pid = open3($wtr, $stdout, $stderr, $command);

        if ($time < 0) {
            waitpid($pid, 0);
            $return    = $? >> 8;
            $death_sig = $? & 127;
            $core_dump = $? & 128;
        }
        else {
            # Do timeout stuff, currently not working as planned
            print "pid: $pid\n";
            my $elapsed = 0;

            # THIS LOOP ONLY TERMINATES WHEN $time > $elapsed ...?
            while (kill 0, $pid and $time > $elapsed) {
                Time::HiRes::usleep(1000);    # sleep for milliseconds
                $elapsed += 1;
                $return    = $? >> 8;
                $death_sig = $? & 127;
                $core_dump = $? & 128;
            }
            if ($elapsed >= $time) {
                $status = "FAIL";
                print $log "TIME LIMIT EXCEEDED\n";
            }
        }

        # these lines are needed to grab the stdout and stderr in arrays so
        # I may reuse them in multiple logs
        if (fileno $stdout) {
            @stdout = <$stdout>;
        }
        if (fileno $stderr) {
            @stderr = <$stderr>;
        }
        ...

    Everything is working correctly if $time = -1 (no timeout is needed), but the system thinks that kill 0, $pid is always 1. This makes my loop run for the entirety of the time allowed.

    Some extra details just for clarity: This is being run on Windows. I know my process does terminate because I got all the expected output. Perl version:

        This is perl, v5.10.1 built for MSWin32-x86-multi-thread
        (with 2 registered patches, see perl -V for more detail)

        Copyright 1987-2009, Larry Wall

        Binary build 1007 [291969] provided by ActiveState http://www.ActiveState.com
        Built Jan 26 2010 23:15:11

    I appreciate your help :D

    For that future person who may have a similar issue, I got the code to work. Here are the modified code sections:

        $pid = open3($wtr, $stdout, $stderr, $command);
        close($wtr);

        if ($time < 0) {
            waitpid($pid, 0);
        }
        else {
            print "pid: $pid\n";
            my $elapsed = 0;

            while (waitpid($pid, WNOHANG) <= 0 and $time > $elapsed) {
                Time::HiRes::usleep(1000);    # sleep for milliseconds
                $elapsed += 1;
            }
            if ($elapsed >= $time) {
                $status = "FAIL";
                print $log "TIME LIMIT EXCEEDED\n";
            }
        }

        $return    = $? >> 8;
        $death_sig = $? & 127;
        $core_dump = $? & 128;

        if (fileno $stdout) {
            @stdout = <$stdout>;
        }
        if (fileno $stderr) {
            @stderr = <$stderr>;
        }

        close($stdout);
        close($stderr);

  • Play! Framework - Can my view template be localised when rendering it as an AsyncResult?

    - by avik
    I've recently started using the Play! framework (v2.0.4) for writing a Java web application. In the majority of my controllers I'm following the paradigm of suspending the HTTP request until the promise of a web service response has been fulfilled. Once the promise has been fulfilled, I return an AsyncResult. This is what most of my actions look like (with a bunch of code omitted):

        public static Result myActionMethod() {
            Promise<MyWSResponse> wsResponse;
            // Perform a web service call that will return the promise of a MyWSResponse...

            return async(wsResponse.map(new Function<MyWSResponse, Result>() {
                @Override
                public Result apply(MyWSResponse response) {
                    // Validate response...

                    return ok(myScalaViewTemplate.render(response.data()));
                }
            }));
        }

    I'm now trying to internationalise my app, but I hit the following error when I try to render a template from an async method:

        [error] play - Waiting for a promise, but got an error: There is no HTTP Context available from here.
        java.lang.RuntimeException: There is no HTTP Context available from here.
            at play.mvc.Http$Context.current(Http.java:27) ~[play_2.9.1.jar:2.0.4]
            at play.mvc.Http$Context$Implicit.lang(Http.java:124) ~[play_2.9.1.jar:2.0.4]
            at play.i18n.Messages.get(Messages.java:38) ~[play_2.9.1.jar:2.0.4]
            at views.html.myScalaViewTemplate$.apply(myScalaViewTemplate.template.scala:40) ~[classes/:na]
            at views.html.myScalaViewTemplate$.render(myScalaViewTemplate.template.scala:87) ~[classes/:na]
            at views.html.myScalaViewTemplate.render(myScalaViewTemplate.template.scala) ~[classes/:na]

    In short, where I've got a message-bundle lookup in my view template, some Play! code is attempting to access the original HTTP request and retrieve the accept-languages header, in order to know which message bundle to use. But it seems that the HTTP request is inaccessible from the async method.

    I can see a couple of (unsatisfactory) ways to work around this:

    1. Go back to the 'one thread per request' paradigm and have threads block waiting for responses.
    2. Figure out which language to use at controller level, and feed that choice into my template.

    I also suspect this might not be an issue on trunk. I know there is a similar issue in 2.0.4 with regards to not being able to access or modify the Session object, which has recently been fixed. However, I'm stuck on 2.0.4 for the time being, so is there a better way that I can resolve this problem?


  • I need to copy only selected files and folders in PHP

    - by OM The Eternity
    I am using the following code, in which I first take the difference of two folder structures and then copy the resulting paths to the other folder. Here is the code:

        $source = '/var/www/html/copy1';
        $mirror = '/var/www/html/copy2';

        function scan_dir_recursive($dir, $rel = null) {
            $all_paths = array();
            $new_paths = scandir($dir);
            foreach ($new_paths as $path) {
                if ($path == '.' || $path == '..') {
                    continue;
                }
                if ($rel === null) {
                    $path_with_rel = $path;
                } else {
                    $path_with_rel = $rel . DIRECTORY_SEPARATOR . $path;
                }
                $full_path = $dir . DIRECTORY_SEPARATOR . $path;
                $all_paths[] = $path_with_rel;
                if (is_dir($full_path)) {
                    $all_paths = array_merge(
                        $all_paths,
                        scan_dir_recursive($full_path, $path_with_rel)
                    );
                }
            }
            return $all_paths;
        }

        $diff_paths = array_diff(
            scan_dir_recursive($mirror),
            scan_dir_recursive($source)
        );
        echo "<pre>Difference ";
        print_r($diff_paths);

        foreach ($diff_paths as $path) {
            echo $source1 = "var/www/html/copy2/" . $path;   // note: leading slash missing
            echo "<br>";
            $des = "var/www/html/copy1/" . $path;            // note: leading slash missing
            copy_recursive_dirs($source1, $des);
        }

        function copy_recursive_dirs($dirsource, $dirdest) {
            $dir_handle = opendir($dirsource);
            mkdir($dirdest, 0777);
            while (false !== ($file = readdir($dir_handle))) {
                if ($file != "." && $file != "..") {
                    if (is_dir($dirsource . DIRECTORY_SEPARATOR . $file)) {
                        // Recurse into the subdirectory at the same level in the destination folder
                        copy_recursive_dirs($dirsource . DIRECTORY_SEPARATOR . $file,
                                            $dirdest . DIRECTORY_SEPARATOR . $file);
                    } else {
                        // Copy the file at the same level in the destination folder
                        copy($dirsource . DIRECTORY_SEPARATOR . $file,
                             $dirdest . DIRECTORY_SEPARATOR . $file);
                    }
                }
            }
            closedir($dir_handle);
            return true;
        }

    Whenever I execute the script I get the difference output, but nothing is copied into the second folder. Please help me rectify this.
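    Two things stand out above: the hand-built paths drop the leading slash, so they resolve relative to the current working directory rather than /var/www, and every diff entry, file or directory alike, is handed to copy_recursive_dirs(), which can only open directories. A sketch of a driver loop that avoids both problems, reusing the $source and $mirror variables already defined (permissions are illustrative):

        foreach ($diff_paths as $path) {
            $from = $mirror . DIRECTORY_SEPARATOR . $path;   // absolute path, no hand-typed prefix
            $to   = $source . DIRECTORY_SEPARATOR . $path;

            if (is_dir($from)) {
                // A directory from the diff just needs to exist on the other side;
                // its contents appear as their own entries in $diff_paths.
                if (!is_dir($to)) {
                    mkdir($to, 0777, true);   // third argument makes mkdir recursive
                }
            } else {
                $parent = dirname($to);
                if (!is_dir($parent)) {
                    mkdir($parent, 0777, true);
                }
                copy($from, $to);
            }
        }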


  • Using Selenium-IDE with a rich Javascript application?

    - by Darien
    Problem

    At my workplace, we're trying to find the best way to create automated-tests for an almost wholly javascript-driven intranet application. Right now we're stuck trying to find a good tradeoff between:

    - Application code in reusable and nest-able GUI components
    - Tests which are easily created by the testing team
    - Tests which can be recorded once and then automated
    - Tests which do not break after small cosmetic changes to the site

    XPath expressions (or other possible expressions, like jQuery selectors) naively generated from Selenium-IDE are often non-repeatable and very fragile. Conversely, having the JS code generate special unique ID values for every important DOM-element on the page... well, that is its own headache, complicated by re-usable GUI components and IDs needing to be consistent when the test is re-run.

    What successes have other people had with this kind of thing? How do you do automated application-level testing of a rich JS interface?

    Limitations

    - We are using JavascriptMVC 2.0, hopefully 3.0 soon so that we can upgrade to jQuery 1.4.x.
    - The test-making folks are mostly trained to use Selenium IDE to directly record things.
    - The test leads would prefer a page-unique HTML ID on each clickable element on the page...
    - Training the testers to write or alter special expressions (such as telling them which HTML class-names are important branching points) is a no-go.
    - We try to make re-usable javascript components, but this means very few GUI components can treat themselves (or what they contain) as unique.
    - Some of our components already use HTML ID values in their operation. I'd like to avoid doing this anyway, but it complicates the idea of ID-based testing.
    - It may be possible to add custom facilities (like a locator-builder or new locator method) to the Selenium-IDE installation testers use.
    - Almost everything that goes on occurs within a single "page load" from a conventional browser perspective, even when items are saved.

    Current thoughts

    I'm considering a system where a custom locator-builder (javascript code) for Selenium-IDE will talk with our application code as the tester is recording. In this way, our application becomes partially responsible for generating a mostly-flexible expression (XPath or jQuery) for any given DOM element. While this can avoid requiring more training for testers, I worry it may be over-thinking things.
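    For concreteness, the kind of locator-builder being considered would look roughly like the following in a Selenium-IDE user-extensions.js file. This is a sketch only: it assumes the LocatorBuilders.add() extension point of Selenium IDE 1.x, and a hypothetical data-test-id attribute that the application's components would stamp onto their root elements.

        // user-extensions.js: register a custom locator builder with Selenium IDE.
        LocatorBuilders.add('dataTestId', function (element) {
            var testId = element.getAttribute && element.getAttribute('data-test-id');
            if (testId) {
                // A CSS locator keyed to the test attribute survives cosmetic
                // restyling and reshuffling of the surrounding markup.
                return "css=[data-test-id='" + testId + "']";
            }
            return null; // fall through to the next registered builder
        });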


  • How to write logs to two different files

    - by Sun
    My application runs on a customized client framework, and the client framework uses log4net to write its own log files. Our application has to use the same log4net to write our log files to our own path (say, a customized path). Currently our log file is created, but our log statements are not written to it; they are written to the client framework's log file instead.

    I searched a lot of sites, and the link http://stackoverflow.com/questions/308436/log4net-programmatcially-specify-multiple-loggers-with-multiple-file-appenders helped me configure log4net programmatically, but my log statements are still not written to my log file. The code is as below:

        public class TraceLog
        {
            private string message = string.Empty;
            private static ILog ILogger = null;
            private static TraceLog instance = new TraceLog();

            private TraceLog()
            {
                SetLevel("Log4net.MainForm", "ALL");
                AddAppender("Log4net.MainForm", CreateFileAppender("FileAppender", "C:\\mylog.log"));
            }

            public static TraceLog Instance
            {
                get { return instance; }
            }

            public void Debug(string logMessage)
            {
                message = PrepareLog(logMessage);
                ILogger.Debug(message);
            }

            protected string PrepareLog(string logMessage)
            {
                string message = GetFileMethodLineNumberInfo();
                message += logMessage;
                return message;
            }

            protected string GetFileMethodLineNumberInfo()
            {
                StackTrace stackTrace = new StackTrace(true);
                // The position 3 is relative to the index of the specified method
                StackFrame stackFrame = stackTrace.GetFrame(3);
                return (stackFrame.GetMethod().DeclaringType.Name + "/" +
                        stackFrame.GetMethod().Name + "/" +
                        stackFrame.GetFileLineNumber() + ":");
            }

            private static void SetLevel(string loggerName, string levelName)
            {
                ILogger = LogManager.GetLogger(loggerName);
                log4net.Repository.Hierarchy.Logger l = (log4net.Repository.Hierarchy.Logger)ILogger.Logger;
                l.Level = l.Hierarchy.LevelMap[levelName];
            }

            private static void AddAppender(string loggerName, IAppender appender)
            {
                ILogger = LogManager.GetLogger(loggerName);
                log4net.Repository.Hierarchy.Logger l = (log4net.Repository.Hierarchy.Logger)ILogger.Logger;
                l.AddAppender(appender);
            }

            private static IAppender CreateFileAppender(string name, string fileName)
            {
                FileAppender appender = new FileAppender();
                appender.Name = name;
                appender.File = fileName;
                appender.AppendToFile = true;
                //PatternLayout layout = new PatternLayout();
                //layout.ConversionPattern = "%d [%t] %-5p %c [%x] - %m%n";
                //layout.ActivateOptions();
                //appender.Layout = layout;
                appender.ActivateOptions();
                return appender;
            }
        }

    Can anyone please help me solve this?
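    Two details are worth checking, sketched below against the same log4net types the code already uses (the pattern string is illustrative, and the Additivity / Hierarchy.Configured lines are assumptions about what this setup needs rather than a confirmed fix): a FileAppender whose Layout is left null cannot render events at all, and a logger wired up in code usually needs additivity switched off so its events stop propagating to the client framework's appenders.

        private static IAppender CreateFileAppender(string name, string fileName)
        {
            FileAppender appender = new FileAppender();
            appender.Name = name;
            appender.File = fileName;
            appender.AppendToFile = true;

            // A layout is required: with Layout == null the appender has no way
            // to format events, so nothing reaches the file.
            PatternLayout layout = new PatternLayout();
            layout.ConversionPattern = "%d [%t] %-5p %c [%x] - %m%n";
            layout.ActivateOptions();
            appender.Layout = layout;

            appender.ActivateOptions();
            return appender;
        }

        private static void AddAppender(string loggerName, IAppender appender)
        {
            ILog log = LogManager.GetLogger(loggerName);
            log4net.Repository.Hierarchy.Logger l = (log4net.Repository.Hierarchy.Logger)log.Logger;
            l.AddAppender(appender);
            l.Additivity = false;            // keep these events out of the framework's appenders
            l.Hierarchy.Configured = true;   // mark the repository configured if no XML config ran
        }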

