Search Results

Search found 6012 results on 241 pages for 'jeremy quick'.


  • Problem upgrading kernel on Debian 3.1

    - by exhuma
    Hi, I have a quite old box in a remote server farm. So I have no direct access. Only remote SSH (and via SSH to a serial console). I haven't updated this box in ages. Now, whenever I want to install a new package, a dependency on glibc appears. Unfortunately, the install of glibc depends on a 2.6 kernel and I am running a venerable 2.4 kernel (one more reason to upgrade). The problem is that the install of a new kernel has an indirect (via locales) dependency on glibc. So, to install glibc, I need a new kernel. For a new kernel, I need to upgrade glibc. Essentially I am blocked. What's the best way to proceed considering I have no "hardware" access? Here's a quick transcript of the upgrade process: [green:~]% sudo aptitude install linux-image-686 Reading Package Lists... Done Building Dependency Tree Reading extended state information Initializing package states... Done Reading task descriptions... Done The following packages are unused and will be REMOVED: gcc-4.3-base The following NEW packages will be automatically installed: dash libc6-i686 libparse-recdescent-perl linux-image-2.6-686 linux-image-2.6.18-6-686 module-init-tools yaird The following packages have been kept back: adduser apache2 apache2-mpm-prefork apache2-utils apache2.2-common apt apt-utils aptitude autoconf autotools-dev awstats base-files base-passwd [...snip...] util-linux vacation vim vim-common wamerican wbritish wget whiptail whois wwwconfig-common zlib1g The following NEW packages will be installed: dash libc6-i686 libparse-recdescent-perl linux-image-2.6-686 linux-image-2.6.18-6-686 linux-image-686 module-init-tools yaird The following packages will be upgraded: hotplug libc6 2 packages upgraded, 8 newly installed, 1 to remove and 277 not upgraded. Need to get 0B/22.7MB of archives. After unpacking 52.1MB will be used. Do you want to continue? [Y/n/?] Writing extended state information... Done Preconfiguring packages ... (Reading database ... 34065 files and directories currently installed.) Preparing to replace libc6 2.3.6.ds1-13 (using .../libc6_2.7-18lenny2_i386.deb) ... Checking for services that may need to be restarted... Checking init scripts... WARNING: init script for postgresql not found. [ --- libc6 config screen appears here --- ] WARNING: POSIX threads library NPTL requires kernel version 2.6.8 or later. If you use a kernel 2.4, please upgrade it before installing glibc. The installation of a 2.6 kernel _could_ ask you to install a new libc first, this is NOT a bug, and should *NOT* be reported. In that case, please add etch sources to your /etc/apt/sources.list and run: apt-get install -t etch linux-image-2.6 Then reboot into this new kernel, and proceed with your upgrade dpkg: error processing /var/cache/apt/archives/libc6_2.7-18lenny2_i386.deb (--unpack): subprocess pre-installation script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/libc6_2.7-18lenny2_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Ack! Something bad happened while installing packages. Trying to recover: dpkg: dependency problems prevent configuration of locales: locales depends on glibc-2.7-1; however: Package glibc-2.7-1 is not installed. dpkg: error processing locales (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: locales Reading Package Lists... Done Building Dependency Tree Reading extended state information Initializing package states... Done Reading task descriptions... 
Done Now, if I follow the instructions as prompted I get the following. Note that I am using aptitude instead of apt-get to benefit from the better dependency tracking. I did try with apt-get first. But that led me to the same problem. [green:~]% sudo aptitude install -t etch linux-image-2.6.26-2-686 Reading Package Lists... Done Building Dependency Tree Reading extended state information Initializing package states... Done Reading task descriptions... Done E: Unable to correct problems, you have held broken packages. E: Unable to correct dependencies, some packages cannot be installed E: Unable to resolve some dependencies! Some packages had unmet dependencies. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following packages have unmet dependencies: linux-image-2.6.26-2-686: Depends: initramfs-tools (>= 0.55) but it is not installable or yaird (>= 0.0.13) but it is not installable or linux-initramfs-tool which is a virtual package. Any ideas?
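    One commonly suggested way out of this chicken-and-egg situation is to bypass apt's resolver entirely: fetch the etch kernel image by hand (plus yaird or initramfs-tools and module-init-tools, which its installer needs), install them with dpkg, reboot into 2.6, and only then resume the libc6 upgrade. A sketch, with the caveat that the mirror path and package filenames below are illustrative assumptions and must be verified against the archive:

      # Filenames/versions are illustrative; check the actual archive listing first.
      wget http://archive.debian.org/debian/pool/main/m/module-init-tools/module-init-tools_VERSION_i386.deb
      wget http://archive.debian.org/debian/pool/main/y/yaird/yaird_VERSION_i386.deb
      wget http://archive.debian.org/debian/pool/main/l/linux-2.6/linux-image-2.6.18-6-686_VERSION_i386.deb
      # dpkg -i skips apt's dependency resolution, which breaks the cycle:
      sudo dpkg -i module-init-tools_*.deb yaird_*.deb linux-image-2.6.18-6-686_*.deb
      # Reboot into the 2.6 kernel, then:  sudo aptitude install libc6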


  • Paging, sorting and filtering in a stored procedure (SQL Server)

    - by Fruitbat
    I was looking at different ways of writing a stored procedure to return a "page" of data. This was for use with the ASP.NET ObjectDataSource, but it could be considered a more general problem. The requirement is to return a subset of the data based on the usual paging parameters, startPageIndex and maximumRows, but also a sortBy parameter to allow the data to be sorted. Also there are some parameters passed in to filter the data on various conditions. One common way to do this seems to be something like this: [Method 1] ;WITH stuff AS ( SELECT CASE WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name) WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC) WHEN @SortBy = ... ELSE ROW_NUMBER() OVER (ORDER BY whatever) END AS Row, ., ., ., FROM Table1 INNER JOIN Table2 ... LEFT JOIN Table3 ... WHERE ... (lots of things to check) ) SELECT * FROM stuff WHERE (Row > @startRowIndex) AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0) ORDER BY Row One problem with this is that it doesn't give the total count and generally we need another stored procedure for that. This second stored procedure has to replicate the parameter list and the complex WHERE clause. Not nice. One solution is to append an extra column to the final select list, (SELECT COUNT(*) FROM stuff) AS TotalRows. This gives us the total but repeats it for every row in the result set, which is not ideal. [Method 2] An interesting alternative is given here (http://www.4guysfromrolla.com/articles/032206-1.aspx) using dynamic SQL. He reckons that the performance is better because the CASE statement in the first solution drags things down. Fair enough, and this solution makes it easy to get the totalRows and slap it into an output parameter. But I hate coding dynamic SQL. All that 'bit of SQL ' + STR(@parm1) +' bit more SQL' gubbins. [Method 3] The only way I can find to get what I want, without repeating code which would have to be synchronised, and keeping things reasonably readable is to go back to the "old way" of using a table variable: DECLARE @stuff TABLE (Row INT, ...) INSERT INTO @stuff SELECT CASE WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name) WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC) WHEN @SortBy = ... ELSE ROW_NUMBER() OVER (ORDER BY whatever) END AS Row, ., ., ., FROM Table1 INNER JOIN Table2 ... LEFT JOIN Table3 ... WHERE ... (lots of things to check) SELECT * FROM @stuff WHERE (Row > @startRowIndex) AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0) ORDER BY Row (Or a similar method using an IDENTITY column on the table variable). Here I can just add a SELECT COUNT on the table variable to get the totalRows and put it into an output parameter. I did some tests and with a fairly simple version of the query (no sortBy and no filter), method 1 seems to come out on top (almost twice as quick as the other 2). Then I decided to test with roughly the complexity I actually need, and with the SQL in stored procedures. With this I get method 1 taking nearly twice as long as the other 2 methods. Which seems strange. Is there any good reason why I shouldn't spurn CTEs and stick with method 3? UPDATE - 15 March 2012 I tried adapting Method 1 to dump the page from the CTE into a temporary table so that I could extract the TotalRows and then select just the relevant columns for the resultset. This seemed to add significantly to the time (more than I expected). 
I should add that I'm running this on a laptop with SQL Server Express 2008 (all I have available), but the comparison should still be valid. I looked again at the dynamic SQL method. It turns out I wasn't really doing it properly (just concatenating strings together). I set it up as in the documentation for sp_executesql (with a parameter description string and parameter list) and it's much more readable. Also this method runs fastest in my environment. Why that should be still baffles me, but I guess the answer is hinted at in Hogan's comment.
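    For reference, a minimal sketch of the sp_executesql shape described above (method 2 with proper parameters rather than string concatenation); the table name, filter column, and sort whitelist are placeholders:

      DECLARE @sql nvarchar(max), @orderBy nvarchar(100);
      -- Whitelist the sort expression so only known column names get concatenated:
      SET @orderBy = CASE @SortBy WHEN 'Name' THEN N'Name'
                                  WHEN 'Name DESC' THEN N'Name DESC'
                                  ELSE N'ID' END;
      SET @sql = N'SELECT @TotalRows = COUNT(*) FROM Table1 WHERE Col1 = @filter;
                   WITH stuff AS (
                       SELECT ROW_NUMBER() OVER (ORDER BY ' + @orderBy + N') AS Row, *
                       FROM Table1 WHERE Col1 = @filter)
                   SELECT * FROM stuff
                   WHERE Row > @startRowIndex
                     AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
                   ORDER BY Row;';
      EXEC sp_executesql @sql,
           N'@filter int, @startRowIndex int, @maximumRows int, @TotalRows int OUTPUT',
           @filter, @startRowIndex, @maximumRows, @TotalRows OUTPUT;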


  • Forcing a transaction to rollback on validation errors in Seam

    - by Chris Williams
    Quick version: We're looking for a way to force a transaction to rollback when specific situations occur during the execution of a method on a backing bean, but we'd like the rollback to happen without having to show the user a generic 500 error page. Instead, we'd like the user to see the form she just submitted and a FacesMessage that indicates what the problem was. Long version: We've got a few backing beans that use components to perform a few related operations in the database (using JPA/Hibernate). During the process, an error can occur after some of the database operations have happened. This could be for a few different reasons but for this question, let's assume there's been a validation error that is detected after some DB writes have happened that weren't detectable before the writes occurred. When this happens, we'd like to make sure all of the db changes up to this point will be rolled back. Seam can deal with this because if you throw a RuntimeException out of the current FacesRequest, Seam will rollback the current transaction. The problem with this is that the user is shown a generic error page. In our case, we'd actually like the user to be shown the page she was on with a descriptive message about what went wrong, and have the opportunity to correct the bad input that caused the problem. The solution we've come up with is to throw an Exception from the component that discovers the validation problem with the annotation: @ApplicationException( rollback = true ) Then our backing bean can catch this exception, assume the component that threw it has published the appropriate FacesMessage, and simply return null to take the user back to the input page with the error displayed. The ApplicationException annotation tells Seam to rollback the transaction and we're not showing the user a generic error page. This worked well in the first place we used it, which happened to only be doing inserts. The second place we tried to use it, we have to delete something during the process. In this second case, everything works if there's no validation error. If a validation error does happen, the rollback Exception is thrown and the transaction is marked for rollback. Even if no database modifications have happened to be rolled back, when the user fixes the bad data and re-submits the page, we're getting: java.lang.IllegalArgumentException: Removing a detached instance The detached instance is lazily loaded from another object (there's a many-to-one relationship). That parent object is loaded when the backing bean is instantiated. Because the transaction was rolled back after the validation error, the object is now detached. Our next step was to change this page from conversation scope to page scope. When we did this, Seam can't even render the page after the validation error because our page has to hit the DB to render and the transaction has been marked for rollback. So my question is: how are other people dealing with handling errors cleanly and properly managing transactions at the same time? Better yet, I'd love to be able to use everything we have now if someone can spot something I'm doing wrong that would be relatively easy to fix. I've read the Seam Framework article on Unified error page and exception handling but this is geared more towards more generic errors your application might encounter.
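    For illustration, a minimal sketch of the pattern described above (class and method names are made up; the annotation is javax.ejb.ApplicationException, which Seam honours for rollback behaviour):

      import javax.ejb.ApplicationException;

      // rollback = true marks the transaction rollback-only, but because the
      // backing bean catches the exception, Seam never routes to the error page.
      @ApplicationException(rollback = true)
      public class ValidationRollbackException extends RuntimeException {
          public ValidationRollbackException(String message) { super(message); }
      }

      // In the backing bean (sketch):
      public String save() {
          try {
              worker.doDatabaseWork();   // may throw after some writes succeed
              return "success";
          } catch (ValidationRollbackException e) {
              // component already published its FacesMessage; redisplay the form
              return null;
          }
      }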


  • Performing Aggregate Functions on Multi-Million Row Tables

    - by Daniel Short
    I'm having some serious performance issues with a multi-million row table that I feel I should be able to get results from fairly quickly. Here's a rundown of what I have, how I'm querying it, and how long it's taking: I'm running SQL Server 2008 Standard, so Partitioning isn't currently an option. I'm attempting to aggregate all views for all inventory for a specific account over the last 30 days. All views are stored in the following table: CREATE TABLE [dbo].[LogInvSearches_Daily]( [ID] [bigint] IDENTITY(1,1) NOT NULL, [Inv_ID] [int] NOT NULL, [Site_ID] [int] NOT NULL, [LogCount] [int] NOT NULL, [LogDay] [smalldatetime] NOT NULL, CONSTRAINT [PK_LogInvSearches_Daily] PRIMARY KEY CLUSTERED ( [ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY] ) ON [PRIMARY] This table has 132,000,000 records, and is over 4 gigs. A sample of 10 rows from the table: ID Inv_ID Site_ID LogCount LogDay -------------------- ----------- ----------- ----------- ----------------------- 1 486752 48 14 2009-07-21 00:00:00 2 119314 51 16 2009-07-21 00:00:00 3 313678 48 25 2009-07-21 00:00:00 4 298863 0 1 2009-07-21 00:00:00 5 119996 0 2 2009-07-21 00:00:00 6 463777 534 7 2009-07-21 00:00:00 7 339976 503 2 2009-07-21 00:00:00 8 333501 570 4 2009-07-21 00:00:00 9 453955 0 12 2009-07-21 00:00:00 10 443291 0 4 2009-07-21 00:00:00 (10 row(s) affected) I have the following index on LogInvSearches_Daily: /****** Object: Index [IX_LogInvSearches_Daily_LogDay] Script Date: 05/12/2010 11:08:22 ******/ CREATE NONCLUSTERED INDEX [IX_LogInvSearches_Daily_LogDay] ON [dbo].[LogInvSearches_Daily] ( [LogDay] ASC ) INCLUDE ( [Inv_ID], [LogCount]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] I need to pull only the inventory belonging to a specific account id from the Inventory table. I have an index on the Inventory as well. I'm using the following query to aggregate the data and give me the top 5 records. 
This query is currently taking 24 seconds to return the 5 rows: StmtText ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- SELECT TOP 5 Sum(LogCount) AS Views , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank , Inv_ID FROM LogInvSearches_Daily D (NOLOCK) WHERE LogDay > DateAdd(d, -30, getdate()) AND EXISTS( SELECT NULL FROM propertyControlCenter.dbo.Inventory (NOLOCK) WHERE Acct_ID = 18731 AND Inv_ID = D.Inv_ID ) GROUP BY Inv_ID (1 row(s) affected) StmtText ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |--Top(TOP EXPRESSION:((5))) |--Sequence Project(DEFINE:([Expr1007]=dense_rank)) |--Segment |--Segment |--Sort(ORDER BY:([Expr1006] DESC, [D].[Inv_ID] DESC)) |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1006]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount]))) |--Sort(ORDER BY:([D].[Inv_ID] ASC)) |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID])) |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1011], [Expr1012], [Expr1010])) | |--Compute Scalar(DEFINE:(([Expr1011],[Expr1012],[Expr1010])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6)))) | | |--Constant Scan | |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1011] AND [D].[LogDay] < [Expr1012]) ORDERED FORWARD) |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID]), SEEK:([propertyControlCenter].[dbo].[Inventory].[Acct_ID]=(18731) AND [propertyControlCenter].[dbo].[Inventory].[Inv_ID]=[LOA (13 row(s) affected) I tried using a CTE to pick up the rows first and aggregate them, but that didn't run any faster, and gives me essentially the same execution plan. 
(1 row(s) affected) StmtText ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --SET SHOWPLAN_TEXT ON; WITH getSearches AS ( SELECT LogCount -- , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank , D.Inv_ID FROM LogInvSearches_Daily D (NOLOCK) INNER JOIN propertyControlCenter.dbo.Inventory I (NOLOCK) ON Acct_ID = 18731 AND I.Inv_ID = D.Inv_ID WHERE LogDay > DateAdd(d, -30, getdate()) -- GROUP BY Inv_ID ) SELECT Sum(LogCount) AS Views, Inv_ID FROM getSearches GROUP BY Inv_ID (1 row(s) affected) StmtText ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1004]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount]))) |--Sort(ORDER BY:([D].[Inv_ID] ASC)) |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID])) |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1008], [Expr1009], [Expr1007])) | |--Compute Scalar(DEFINE:(([Expr1008],[Expr1009],[Expr1007])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6)))) | | |--Constant Scan | |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1008] AND [D].[LogDay] < [Expr1009]) ORDERED FORWARD) |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID] AS [I]), SEEK:([I].[Acct_ID]=(18731) AND [I].[Inv_ID]=[LOALogs].[dbo].[LogInvSearches_Daily].[Inv_ID] as [D].[Inv_ID]) ORDERED FORWARD) (8 row(s) affected) (1 row(s) affected) So given that I'm getting good Index Seeks in my execution plan, what can I do to get this running faster? Thanks, Dan
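    One thing that may be worth trying (a sketch only, not tested against this schema): materialize the account's inventory ids first, so the join is driven from the small side rather than seeking into Inventory once per qualifying log row. Whether this actually beats the existing plan has to be confirmed against the real data:

      SELECT Inv_ID
      INTO   #AcctInv
      FROM   propertyControlCenter.dbo.Inventory (NOLOCK)
      WHERE  Acct_ID = 18731;

      SELECT TOP 5
             SUM(D.LogCount) AS Views,
             DENSE_RANK() OVER (ORDER BY SUM(D.LogCount) DESC, D.Inv_ID DESC) AS Rank,
             D.Inv_ID
      FROM   dbo.LogInvSearches_Daily D (NOLOCK)
      JOIN   #AcctInv A ON A.Inv_ID = D.Inv_ID
      WHERE  D.LogDay > DATEADD(d, -30, GETDATE())
      GROUP  BY D.Inv_ID;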


  • Difference between SQL 2005 and SQL 2008 for inserting multiple rows with XML

    - by Sam Dahan
    I am using the following SQL code for inserting multiple rows of data in a table. The data is passed to the stored procedure using an XML variable: INSERT INTO MyTable SELECT SampleTime = T.Item.value('SampleTime[1]', 'datetime'), Volume1 = T.Item.value('Volume1[1]', 'float'), Volume2 = T.Item.value('Volume2[1]', 'float') FROM @xml.nodes('//Root/MyRecord') T(item) I have a whole bunch of unit tests to verify that I am inserting the right information, the right number of records, etc., when I call the stored procedure. All fine and dandy - that is, until we began to monkey around with the compatibility level of the database. The code above worked beautifully as long as we kept the compatibility level of the DB at 90 (SQL 2005). When we set the compatibility level to 100 (SQL 2008), the unit tests failed, because the stored procedure using the code above times out. The unit tests are dropping the database, re-creating it from scripts, and running the tests on the brand new DB, so it's not - I think - a question of the 'old compatibility level' sticking around. Using SQL Server Management Studio, I made up a quick test SQL script. Using the same XML chunk, I alter the DB compat level, truncate the table, then use the code above to insert 650 rows. When the level is 90 (SQL 2005), it runs in milliseconds. When the level is 100 (SQL 2008) it sometimes takes over a minute, sometimes runs in milliseconds. I'd appreciate any insight anyone might have into that. EDIT The script takes over a minute to run with my actual data, which has more rows than I show here, is a real table, and has an index. With the following example code, the difference is between milliseconds and around 5 seconds. --use [master] --ALTER DATABASE MyDB SET compatibility_level =100 use [MyDB] declare @xml xml set @xml = '<?xml version="1.0"?> <Root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Record> <SampleTime>2009-01-24T00:00:00</SampleTime> <Volume1>0</Volume1> <Volume2>0</Volume2> </Record> ..... 653 records, sample time spaced out 4 hours ........ </Root>' DECLARE @myTable TABLE( ID int IDENTITY(1,1) NOT NULL, [SampleTime] [datetime] NOT NULL, [Volume1] [float] NULL, [Volume2] [float] NULL) INSERT INTO @myTable select T.Item.value('SampleTime[1]', 'datetime') as SampleTime, Volume1 = T.Item.value('Volume1[1]', 'float'), Volume2 = T.Item.value('Volume2[1]', 'float') FROM @xml.nodes('//Root/Record') T(item) I uncomment the 2 lines at the top, select them and run just that (the ALTER DATABASE statement), then comment the 2 lines, deselect any text and run the whole thing. When I change from 90 to 100, it runs in 5 seconds all the time (I change the level once, but I run the series several times to see if I have consistent results). When I change from 100 to 90, it runs in milliseconds all the time. Just so you can play with it too. I am using SQL Server 2008 R2 standard edition.
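    One change that is often suggested for exactly this kind of XML-shredding slowdown under compatibility level 100 (hedged: it may or may not apply here) is to avoid the '//' descendant axis and to read values through text(), which gives the optimizer a much simpler path to reason about:

      -- Sketch: exact path plus singleton text() accessors instead of '//Root/Record'.
      INSERT INTO @myTable (SampleTime, Volume1, Volume2)
      SELECT T.Item.value('(SampleTime/text())[1]', 'datetime'),
             T.Item.value('(Volume1/text())[1]', 'float'),
             T.Item.value('(Volume2/text())[1]', 'float')
      FROM   @xml.nodes('/Root/Record') T(Item);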


  • jQuery .live firing multiple times!

    - by cstrzelc
    Greetings Gurus, This is a little hard to explain, but I'll give it a shot. I have a quick question regarding the .live() function in jQuery. I'm going to simplify the example here. I have a page "index.php" that has a container "#display_files_container" which is populated with anchor links that are generated dynamically by a different page "process.php". The links are loaded into that same <div> when those links are selected based on the attributes of that link. See Examples: INDEX.PHP <html> <head><title>index.php</title> <!-- this function below loads process.php and passes it the dirid variable via post. I then use this post variable inside of process.php to pull other links from the database --> <script type="text/javascript"> $('.directory').live("click", function() { $('#display_files_container').load('plugins/project_files/process.php', {dirid: $(this).attr('dirid')}); }); </script> </head> <?php /*This code initially populates the link array so we have the first links populated before the user clicks for the first time*/ some code to fetch the $current_directory_list array from the database initially.... ?> <body> <div id='display_files_container'> <?php /*Cycle through the array and echo out all the links which have been pulled from DB*/ for($i=0;$i<$current_directory_count;$i++) { echo "<a href='#' class='directory' dirid='".$current_directory_list[$i]['id']." '>".$current_directory_list[$i]['directory_name']. "</a> "; } ?> </div> </body> </html> PROCESS.PHP this file includes code to populate the $current_directory_list[] array from the database based on the post variable "$_POST['dirid']" that was sent from the .click() method in index.php. It then echoes the results out and we display them in the #display_files_container container. When you click on those links the process repeats. This works..... you can click through the directory tree and it loads the new links every time. However, it seems to want to .load() the process.php file many times over for one click. The number of times process.php is loaded seems to increase the more the links are clicked. So for example you can click on a link and Firebug reports that process.php was loaded 23 times..... Eventually I would imagine I would record a stack overflow. Please let me know if you have any ideas. Are there any ways that I can assure that .live() loads the process.php file only once? Thanks, -cs
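    A hedged guess at the cause: if that <script> block is ever served again inside the loaded content (or the handler registration is otherwise re-executed), each execution registers one more live handler, and a single click then fires once per registration, which would match the growing request count. A minimal sketch of a guard using jQuery's pre-1.7 .die() to clear earlier bindings before re-binding:

      // Remove any previously registered live 'click' handlers first, so
      // re-running this script cannot stack duplicate handlers:
      $('.directory').die('click').live('click', function() {
          $('#display_files_container').load('plugins/project_files/process.php',
                                             { dirid: $(this).attr('dirid') });
          return false; // also stop the '#' href from being followed
      });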


  • Rendering a Long Document on iPad

    - by benjismith
    I'm implementing a document viewer with highlighting/annotation capabilities for a custom document format on iPad. The documents are kind of long (100 to 200 pages, if printed on paper) and I've had a hard time finding the right approach. Here are the requirements: 1) Basic rich-text styling: control of left/right margins. Control of font name, size, foreground/background color, and line spacing. Bold, italics, underline, etc. 2) Selection and highlighting of arbitrary text regions (not limited to paragraph boundaries, like in Safari/UIWebView). 3) Customization of the Cut/Copy/Paste popup (what is that thing called anyhow? UIActionBar?) This is one of the essential requirements of the app. My first implementation was based on UIWebView. I just rendered the document as HTML with CSS for text styling. But I couldn't get the kind of text selection behavior I wanted (across paragraph boundaries) and the UIActionBar can't be customized from within UIWebView. So I started working on a javascript approach, faking the device text-selection behavior using jQuery to trap touch events and dynamically modifying the DOM to change the background color of selected regions of text. I built a fake UIActionBar control as a hidden DIV, positioning it and unhiding it whenever there was an active selection region. Not too shabby. The main problem is that it's SLOOOOOOOW. Scrolling through the document is nice and quick, but dynamically changing the DOM is not very snappy. Plus, I couldn't figure out how to recreate the magnifier loupe, so my fake text-selection GUI doesn't look quite the same as the native implementation. Also, I haven't yet implemented the communication bridge between the javascript layer and the objective-c layer (where the rest of the app lives), but it was shaping up to be a huge hassle. So I've been looking at CoreText, but there are precious few examples on the web. I spent a little time with this simple little demo: http://github.com/jonasschnelli/I7CoreTextExample/ It shows how to use CoreText to draw an NSAttributedString into a UIView. But it has its own problems: It doesn't implement text-selection behavior, and it doesn't present a UIActionBar, so I don't have any idea how to make that happen. And, more importantly, it tries to draw the entire document all at once, with significant performance degradation for long documents. My documents can have thousands of paragraphs, and less than 1% of the document is ever on screen at a time. On the plus side, these documents already contain precise formatting information. I know the exact page-position of every line of text, so I don't need a layout engine. Does anyone know how to implement this sort of view using CoreText? I understand that a full-fledged implementation is overkill for a question like this, but I'm looking for a good CoreText example with a few basic requirements: 1) Precise layout & formatting control (using the formatting metrics and text styles I've already calculated). 2) Arbitrary selection of text. 3) Customization of the UIActionBar. 4) Efficient recycling of resources for off-screen objects. I'd be happy to implement my own recycling when text elements scroll off-screen, but wouldn't that require re-implementing UIScrollView? 
I'm brand-new to iPhone development, and still getting used to Objective-C, but I've been working in other languages (Java, C#, flex/actionscript, etc) for more than ten years, so I feel confident in my ability to get the work done, if only I had a better feel for the iPhone SDK and the common coding patterns for stuff like this. Is it just me, or does the SDK documentation really suck? Anyhow, thanks for your help!
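    For the rendering part only, a minimal CoreText sketch of drawing one pre-laid-out page inside a UIView's drawRect: (it assumes the owning controller supplies 'framesetter' and 'pageRange' for this page; selection and the edit menu are separate problems, and page views would be recycled by the containing scroll view rather than drawn all at once):

      - (void)drawRect:(CGRect)rect {
          CGContextRef ctx = UIGraphicsGetCurrentContext();
          // CoreText draws in a flipped coordinate system relative to UIKit:
          CGContextSetTextMatrix(ctx, CGAffineTransformIdentity);
          CGContextTranslateCTM(ctx, 0, self.bounds.size.height);
          CGContextScaleCTM(ctx, 1.0, -1.0);

          CGMutablePathRef path = CGPathCreateMutable();
          CGPathAddRect(path, NULL, self.bounds);
          // framesetter: a CTFramesetterRef built once from the attributed string;
          // pageRange: the CFRange of this page's text within that string.
          CTFrameRef frame = CTFramesetterCreateFrame(framesetter, pageRange, path, NULL);
          CTFrameDraw(frame, ctx);
          CFRelease(frame);
          CGPathRelease(path);
      }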


  • Why is my mdadm raid-1 recovery so slow?

    - by dimmer
    The system is running Ubuntu 10.04. My raid-1 restore started out fast but quickly became ridiculously slow (at this rate the restore will take 150 days!): dimmer@paimon:~$ cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid1 sdc1[2] sdb1[1] 1953513408 blocks [2/1] [_U] [====>................] recovery = 24.4% (477497344/1953513408) finish=217368.0min speed=113K/sec unused devices: <none> Even though I have set the kernel variables to reasonably quick values: dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_min 1000000 dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_max 100000000 I am using 2 2.0TB Western Digital Hard Disks, WDC WD20EARS-00M and WDC WD20EARS-00J. I believe they have been partitioned such that their sectors are aligned. dimmer@paimon:/sys$ sudo parted /dev/sdb GNU Parted 2.2 Using /dev/sdb Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) p Model: ATA WDC WD20EARS-00M (scsi) Disk /dev/sdb: 2000GB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 2000GB 2000GB ext4 (parted) unit s (parted) p Number Start End Size File system Name Flags 1 2048s 3907028991s 3907026944s ext4 (parted) q dimmer@paimon:/sys$ sudo parted /dev/sdc GNU Parted 2.2 Using /dev/sdc Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) p Model: ATA WDC WD20EARS-00J (scsi) Disk /dev/sdc: 2000GB Sector size (logical/physical): 512B/4096B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 2000GB 2000GB ext4 I am beginning to think that I have a hardware problem, otherwise I can't imagine why the mdadm restore should be so slow. I have done a benchmark on /dev/sdc using Ubuntu's disk utility GUI app, and the results looked normal so I know that sdc has the capability to write faster than this. I also had the same problem on a similar WD drive that I RMAd because of bad sectors. I suppose it's possible they sent me a replacement with bad sectors too, although there are no SMART values showing them yet. Any ideas? Thanks. As requested, output of top sorted by cpu usage (notice there is ~0 cpu usage). iowait is also zero which seems strange: top - 11:35:13 up 2 days, 9:40, 3 users, load average: 2.87, 2.58, 2.30 Tasks: 142 total, 1 running, 141 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 3096304k total, 1482164k used, 1614140k free, 617672k buffers Swap: 1526132k total, 0k used, 1526132k free, 535416k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 45 root 20 0 0 0 0 S 0 0.0 2:17.02 scsi_eh_0 1 root 20 0 2808 1752 1204 S 0 0.1 0:00.46 init 2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd 3 root RT 0 0 0 0 S 0 0.0 0:00.02 migration/0 4 root 20 0 0 0 0 S 0 0.0 0:00.17 ksoftirqd/0 5 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0 6 root RT 0 0 0 0 S 0 0.0 0:00.02 migration/1 ... 
dmesg errors, definitely looking like hardware: [202884.000157] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [202884.007015] ata5.00: failed command: FLUSH CACHE EXT [202884.013728] ata5.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 [202884.013730] res 40/00:00:ff:59:2e/00:00:35:00:00/e0 Emask 0x4 (timeout) [202884.033667] ata5.00: status: { DRDY } [202884.040329] ata5: hard resetting link [202889.400050] ata5: link is slow to respond, please be patient (ready=0) [202894.048087] ata5: COMRESET failed (errno=-16) [202894.054663] ata5: hard resetting link [202899.412049] ata5: link is slow to respond, please be patient (ready=0) [202904.060107] ata5: COMRESET failed (errno=-16) [202904.066646] ata5: hard resetting link [202905.840056] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [202905.849178] ata5.00: configured for UDMA/133 [202905.849188] ata5: EH complete [203899.000292] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [203899.007096] ata5.00: failed command: IDENTIFY DEVICE [203899.013841] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in [203899.013843] res 40/00:00:ff:f9:f6/00:00:38:00:00/e0 Emask 0x4 (timeout) [203899.041232] ata5.00: status: { DRDY } [203899.048133] ata5: hard resetting link [203899.816134] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [203899.826062] ata5.00: configured for UDMA/133 [203899.826079] ata5: EH complete [204375.000200] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [204375.007421] ata5.00: failed command: IDENTIFY DEVICE [204375.014799] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in [204375.014800] res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout) [204375.044374] ata5.00: status: { DRDY } [204375.051842] ata5: hard resetting link [204380.408049] ata5: link is slow to respond, please be patient (ready=0) [204384.440076] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [204384.449938] ata5.00: configured for UDMA/133 [204384.449955] ata5: EH complete [204395.988135] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [204395.988140] ata5.00: failed command: IDENTIFY DEVICE [204395.988147] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in [204395.988149] res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout) [204395.988151] ata5.00: status: { DRDY } [204395.988156] ata5: hard resetting link [204399.320075] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [204399.330487] ata5.00: configured for UDMA/133 [204399.330503] ata5: EH complete
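    Given those dmesg timeouts, a few hedged diagnostics before blaming mdadm itself (device name taken from above; smartctl assumes smartmontools is installed):

      # Check the drive's own error log and pending/reallocated sector counts:
      sudo smartctl -a /dev/sdc
      sudo smartctl -l error /dev/sdc
      # Some WD20EARS firmwares are reported to misbehave under NCQ; setting the
      # queue depth to 1 effectively disables NCQ and is a cheap experiment:
      echo 1 | sudo tee /sys/block/sdc/device/queue_depth
      # Then watch whether the resync speed recovers:
      watch cat /proc/mdstat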


  • JUnit testing, exception in thread main

    - by Crystal
    I am new to JUnit and am trying to follow my prof's example. I have a Person class and a PersonTest class. When I try to compile PersonTest.java, I get the following error: Exception in thread "main" java.lang.NoSuchMethodError: main I am not really sure why since I followed his example. Person.java public class Person implements Comparable { String firstName; String lastName; String telephone; String email; public Person() { firstName = ""; lastName = ""; telephone = ""; email = ""; } public Person(String firstName) { this.firstName = firstName; } public Person(String firstName, String lastName, String telephone, String email) { this.firstName = firstName; this.lastName = lastName; this.telephone = telephone; this.email = email; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getTelephone() { return telephone; } public void setTelephone(String telephone) { this.telephone = telephone; } public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } public int compareTo(Object o) { String s1 = this.lastName + this.firstName; String s2 = ((Person) o).lastName + ((Person) o).firstName; return s1.compareTo(s2); } public boolean equals(Object otherObject) { // a quick test to see if the objects are identical if (this == otherObject) { return true; } // must return false if the explicit parameter is null if (otherObject == null) { return false; } if (!(otherObject instanceof Person)) { return false; } Person other = (Person) otherObject; return firstName.equals(other.firstName) && lastName.equals(other.lastName) && telephone.equals(other.telephone) && email.equals(other.email); } public int hashCode() { return this.email.toLowerCase().hashCode(); } public String toString() { return getClass().getName() + "[firstName = " + firstName + '\n' + "lastName = " + lastName + '\n' + "telephone = " + telephone + '\n' + "email = " + email + "]"; } } PersonTest.java import org.junit.Test; // JDK 5.0 annotation support import static org.junit.Assert.assertTrue; // Using JDK 5.0 static imports import static org.junit.Assert.assertFalse; // Using JDK 5.0 static imports import junit.framework.JUnit4TestAdapter; // Need this to be compatible with old test driver public class PersonTest { /** A test to verify equals method. */ @Test public void checkEquals() { Person p1 = new Person("jj", "aa", "[email protected]", "1112223333"); assertTrue(p1.equals(p1)); // first check in equals method assertFalse(p1.equals(null)); // second check in equals method assertFalse(p1.equals(new Object())); // third chk in equals method Person p2 = new Person("jj", "aa", "[email protected]", "1112223333"); assertTrue(p1.equals(p2)); // check for deep comparison p1 = new Person("jj", "aa", "[email protected]", "1112223333"); p2 = new Person("kk", "aa", "[email protected]", "1112223333"); assertFalse(p1.equals(p2)); // check for deep comkparison } }
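    A hedged reading of that error: it usually means the class was launched with a plain "java PersonTest", which looks for a main method that a JUnit 4 test class does not have; compilation is fine, it is the launch that fails. Tests are normally run through a JUnit runner instead, for example (jar name illustrative; use ':' instead of ';' as the classpath separator on Unix):

      java -cp .;junit-4.8.2.jar org.junit.runner.JUnitCore PersonTest

    or, equivalently, by giving the test class a small entry point that delegates to JUnit:

      // Added to PersonTest (sketch):
      public static void main(String[] args) {
          org.junit.runner.JUnitCore.main("PersonTest");
      }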


  • How can I get SQL Server transactions to use record-level locks?

    - by Joe White
    We have an application that was originally written as a desktop app, lo these many years ago. It starts a transaction whenever you open an edit screen, and commits if you click OK, or rolls back if you click Cancel. This worked okay for a desktop app, but now we're trying to move to ADO.NET and SQL Server, and the long-running transactions are problematic. I found that we'll have a problem when multiple users are all trying to edit (different subsets of) the same table at the same time. In our old database, each user's transaction would acquire record-level locks to every record they modified during their transaction; since different users were editing different records, everyone gets their own locks and everything works. But in SQL Server, as soon as one user edits a record inside a transaction, SQL Server appears to get a lock on the entire table. When a second user tries to edit a different record in the same table, the second user's app simply locks up, because the SqlConnection blocks until the first user either commits or rolls back. I'm aware that long-running transactions are bad, and I know that the best solution would be to change these screens so that they no longer keep transactions open for a long time. But since that would mean some invasive and risky changes, I also want to research whether there's a way to get this code up and running as-is, just so I know what my options are. How can I get two different users' transactions in SQL Server to lock individual records instead of the entire table? Here's a quick-and-dirty console app that illustrates the issue. I've created a database called "test1", with one table called "Values" that just has ID (int) and Value (nvarchar) columns. If you run the app, it asks for an ID to modify, starts a transaction, modifies that record, and then leaves the transaction open until you press ENTER. I want to be able to start the program and tell it to update ID 1; let it get its transaction and modify the record; start a second copy of the program and tell it to update ID 2; have it able to update (and commit) while the first app's transaction is still open. Currently it freezes at step 4, until I go back to the first copy of the app and close it or press ENTER so it commits. The call to command.ExecuteNonQuery blocks until the first connection is closed. public static void Main() { Console.Write("ID to update: "); var id = int.Parse(Console.ReadLine()); Console.WriteLine("Starting transaction"); using (var scope = new TransactionScope()) using (var connection = new SqlConnection(@"Data Source=localhost\sqlexpress;Initial Catalog=test1;Integrated Security=True")) { connection.Open(); var command = connection.CreateCommand(); command.CommandText = "UPDATE [Values] SET Value = 'Value' WHERE ID = " + id; Console.WriteLine("Updating record"); command.ExecuteNonQuery(); Console.Write("Press ENTER to end transaction: "); Console.ReadLine(); scope.Complete(); } } Here are some things I've already tried, with no change in behavior: Changing the transaction isolation level to "read uncommitted" Specifying a "WITH (ROWLOCK)" on the UPDATE statement
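    Two hedged observations about the repro as posted. First, the [Values] table as described has no index on ID, so each UPDATE has to scan the whole table and take update locks on every row it examines along the way; to the second session that is indistinguishable from a table lock even though no escalation happened. Second, row versioning changes how readers behave but not writer-versus-writer blocking. A sketch:

      -- With no index, 'WHERE ID = @id' scans and locks every row it reads.
      -- A unique index lets each transaction lock only its own row:
      ALTER TABLE [Values] ADD CONSTRAINT PK_Values PRIMARY KEY (ID);

      -- Optional: versioned reads so readers do not block behind writers
      -- (this does not change update-vs-update conflicts):
      ALTER DATABASE test1 SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;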


  • Apache/PHP serving file multiple times

    - by easement
    I have a system with a download.php page. The page takes an id, loads a file based on the DB record, and then serves it up. I've noticed a couple of instances where files are requested multiple times in short time spans (20ms). Times that are too quick for human input. There are plenty of instances where the downloader functions fine. However, in taking a closer look at the downloader’s usage, I did see some interesting behavior. For instance, the IP address xxx.xxx.xxx.xxx (which is one in a range owned by xxxxxx.de in Germany) came to the site through Google. They browsed around and then came to the page http://site.com/xxxx/press+125.php There they issued a request for /download.php?id=/ZZ/n+aH55Y= (a PDF) at 9:04:23AM. That alone is not a big deal. However, what is interesting is that the server seems to have been quite preoccupied with serving that request. In the logs the request first completes between 9:09:48 and 9:10:00. It looks like the user must have gotten tired of waiting during that time and requested the document two more times. Between 09:14:47 and 09:15:00 the same request appears again, except it is from 9:04:43AM, 20 seconds later than the first request. Then it pops up a third time, with a request that started at 09:05:06 completing between 09:19:55 and 09:19:58! I’m suspicious of that document. In looking through the logs I see other instances where it takes the server a little while to handle that specific file. Check out this list of requests from zzz.zzz.zzz.zzz [different than above] for the file /download.php?id=/ZZ/n+aH55Y= (the same document as before): Request time Complete Time 04:32:43 04:33:36 04:32:50 04:33:36 04:32:51 04:33:38 04:33:05 04:33:38 04:33:34 04:33:42 04:33:05 04:33:42 So something is definitely going on. Whether it has to do with this specific document tripping up the server, the download.php page’s code, or if we’re just seeing the evidence of some server level overload as it plays out in real time, I’m not yet sure. In fairness, there are other instances of people downloading /download.php?id=/ZZ/n+aH55Y= (the same PDF) without error. However, it is interesting that the multiple processes only seem to happen with this one file, and then only when it is accessed through the page http://site.com/press+125.php . It bears further investigation whether there’s something amiss inside the code that causes the system to fire off multiple download requests that occupy the server. I don't know if this press+125.php is a rabbit hole, but there is a weird coincidence. Any ideas? I'm totally out of ideas. Apache maxed out? Things like that. ///DOWNLOAD.php $file = new files(); $file->comparison_filter("id", "=", $id); //sql to load if ($file->load()) { $file->serve(); } //FILES function serve() { if ($this->is_loaded) { if (file_exists($this->get_value("filename"))) { if ($this->get_value("content_type") != "") { header("Content-Type: " . $this->get_value("content_type")); } header("Content-Length: " . filesize($this->get_value("filename"))); if ($this->get_value("flag_image") == 0 || $this->get_value("flag_image") == false) { header("Cache-Control: private"); header("Content-Disposition: attachment; filename=" . urlencode($this->get_value("original_filename"))); } set_time_limit(0); @readfile($this->get_value("filename")); exit; } } }
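    One hedged possibility for the repeated, long-running requests: PDFs opened in a browser plugin are commonly fetched with several partial (byte-range) requests, and a slow or aborted response can make the client retry, which would look exactly like one human click turning into many server processes. Logging the relevant headers and refusing ranges would confirm or rule that out; a sketch inside serve(), before the file is streamed:

      // Log enough to correlate the repeats (sketch; log format is illustrative):
      error_log(sprintf('download id=%s range=%s agent=%s',
          isset($_GET['id']) ? $_GET['id'] : '',
          isset($_SERVER['HTTP_RANGE']) ? $_SERVER['HTTP_RANGE'] : 'none',
          isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : ''));
      // Discourage byte-range fetching so the document is pulled in one request:
      header("Accept-Ranges: none");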


  • Why is the this-pointer needed to access inherited attributes?

    - by Shadow
    Hi, assume the following class is given: class Base{ public: Base() {} Base( const Base& b) : base_attr(b.base_attr) {} void someBaseFunction() { .... } protected: SomeType base_attr; }; When I want a class to inherit from this one and include a new attribute for the derived class, I would write: class Derived: public Base { public: Derived() {} Derived( const Derived& d ) : derived_attr(d.derived_attr) { this->base_attr = d.base_attr; } void SomeDerivedFunction() { .... } private: SomeOtherType derived_attr; }; This works for me (please ignore any missing semicolons and the like). However, when I remove the "this->" in the copy constructor of the derived class, the compiler complains that "'base_attr' was not declared in this scope". I thought that, when inheriting from a class, the protected attributes would then also be accessible directly. I did not know that the "this->" pointer was needed. I am now confused whether what I am doing there is actually correct, especially the copy-constructor of the Derived-class. Because each Derived object is supposed to have a base_attr and a derived_attr and they obviously need to be initialized/set correctly. And because Derived is inheriting from Base, I don't want to explicitly include an attribute named "base_attr" in the Derived-class. IMHO doing so would generally destroy the idea behind inheritance, as everything would have to be defined again. EDIT Thank you all for the quick answers. I completely forgot the fact that the classes actually are templates. Please, see the new examples below, which actually compile when including "this->" and fail when omitting "this->" in the copy-constructor of the Derived-class: Base-class: #include <iostream> template<class T> class Base{ public: Base() : base_attr(0) {} Base( const Base& b) : base_attr(b.base_attr) {} void baseIncrement() { ++base_attr; } void printAttr() { std::cout << "Base Attribute: " << base_attr << std::endl; } protected: T base_attr; }; Derived-class: #include "base.hpp" template< class T > class Derived: public Base<T>{ public: Derived() : derived_attr(1) {} Derived( const Derived& d) : derived_attr(d.derived_attr) { this->base_attr = d.base_attr; } void derivedIncrement() { ++derived_attr; } protected: T derived_attr; }; and for completeness also the main function: #include "derived.hpp" int main() { Derived<int> d; d.printAttr(); d.baseIncrement(); d.printAttr(); Derived<int> d2(d); d2.printAttr(); return 0; }; I am using g++-4.3.4. Although I understood now that it seems to come from the fact that I use template-class definitions, I did not quite understand what is causing the problem and why it works when not using templates. Could someone please further clarify this?
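    For the record, this is standard two-phase name lookup: base_attr lives in the dependent base class Base<T>, and names from a dependent base are not considered during unqualified lookup inside the template. Besides this->, a using-declaration or an explicitly qualified name also works; a minimal sketch:

      template<class T>
      class Derived : public Base<T> {
      public:
          using Base<T>::base_attr;            // make the dependent name visible

          Derived( const Derived& d )
              : Base<T>(d),                    // or copy via the base, which avoids
                derived_attr(d.derived_attr)   // assigning base_attr by hand
          {}
          // Alternative inside a member: Base<T>::base_attr = d.base_attr;
      protected:
          T derived_attr;
      };

    In the non-template version there is no dependent base, lookup finds base_attr directly, and this-> is optional, which is why the plain-class example compiles either way.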


  • C++ Compile problem when using Windows - CodeGear

    - by Carlos
    This is a follow-up question to one I made earlier. Btw thanks Neil Butterworth for your help http://stackoverflow.com/questions/2461977/problem-compiling-c-in-codegear A quick recap. I'm currently developing a C++ program for university. I used NetBeans 6.8 on my personal computer (Mac) and all works perfectly. When I tried it on my Windows partition or at the university PC's using CodeGear RAD Studio 2009 & 2010, I was getting a few compile errors which were solved by adding the following header file: #include <string> However now the program does compile but it doesn't run, just a blank console. And I am getting the following in the CodeGear event log: Thread Start: Thread ID: 2024. Process Project1.exe (3280) Process Start: C:\Users\Carlos\Documents\RAD Studio\Projects\Debug\Project1.exe. Base Address: $00400000. Process Project1.exe (3280) Module Load: Project1.exe. Has Debug Info. Base Address: $00400000. Process Project1.exe (3280) Module Load: ntdll.dll. No Debug Info. Base Address: $77E80000. Process Project1.exe (3280) Module Load: KERNEL32.dll. No Debug Info. Base Address: $771C0000. Process Project1.exe (3280) Module Load: KERNELBASE.dll. No Debug Info. Base Address: $75FE0000. Process Project1.exe (3280) Module Load: cc32100.dll. No Debug Info. Base Address: $32A00000. Process Project1.exe (3280) Module Load: USER32.dll. No Debug Info. Base Address: $77980000. Process Project1.exe (3280) Module Load: GDI32.dll. No Debug Info. Base Address: $75F50000. Process Project1.exe (3280) Module Load: LPK.dll. No Debug Info. Base Address: $75AB0000. Process Project1.exe (3280) Module Load: USP10.dll. No Debug Info. Base Address: $76030000. Process Project1.exe (3280) Module Load: msvcrt.dll. No Debug Info. Base Address: $776A0000. Process Project1.exe (3280) Module Load: ADVAPI32.dll. No Debug Info. Base Address: $777D0000. Process Project1.exe (3280) Module Load: SECHOST.dll. No Debug Info. Base Address: $77960000. Process Project1.exe (3280) Module Load: RPCRT4.dll. No Debug Info. Base Address: $762F0000. Process Project1.exe (3280) Module Load: SspiCli.dll. No Debug Info. Base Address: $759F0000. Process Project1.exe (3280) Module Load: CRYPTBASE.dll. No Debug Info. Base Address: $759E0000. Process Project1.exe (3280) Module Load: IMM32.dll. No Debug Info. Base Address: $763F0000. Process Project1.exe (3280) Module Load: MSCTF.dll. No Debug Info. Base Address: $75AD0000. Process Project1.exe (3280) I would really appreciate any help or ideas on how to solve this problem. P.S.: In case anyone wonders why I am sticking with CodeGear, it's because it is the IDE professors use to evaluate our assignments.


  • Matrix Multiplication with Threads: Why is it not faster?

    - by prelic
    Hey all, so I've been playing around with pthreads, specifically trying to calculate the product of two matrices. My code is extremely messy because it was just supposed to be a quick little fun project for myself, but the thread theory I used was very similar to: #include <pthread.h> #include <stdio.h> #include <stdlib.h> #define M 3 #define K 2 #define N 3 #define NUM_THREADS 10 int A [M][K] = { {1,4}, {2,5}, {3,6} }; int B [K][N] = { {8,7,6}, {5,4,3} }; int C [M][N]; struct v { int i; /* row */ int j; /* column */ }; void *runner(void *param); /* the thread */ int main(int argc, char *argv[]) { int i,j, count = 0; for(i = 0; i < M; i++) { for(j = 0; j < N; j++) { //Assign a row and column for each thread struct v *data = (struct v *) malloc(sizeof(struct v)); data->i = i; data->j = j; /* Now create the thread passing it data as a parameter */ pthread_t tid; //Thread ID pthread_attr_t attr; //Set of thread attributes //Get the default attributes pthread_attr_init(&attr); //Create the thread pthread_create(&tid,&attr,runner,data); //Make sure the parent waits for all threads to complete pthread_join(tid, NULL); count++; } } //Print out the resulting matrix for(i = 0; i < M; i++) { for(j = 0; j < N; j++) { printf("%d ", C[i][j]); } printf("\n"); } } //The thread will begin control in this function void *runner(void *param) { struct v *data = param; // the structure that holds our data int n, sum = 0; //the counter and sum //Row multiplied by column for(n = 0; n< K; n++){ sum += A[data->i][n] * B[n][data->j]; } //assign the sum to its coordinate C[data->i][data->j] = sum; //Exit the thread pthread_exit(0); } source: http://macboypro.com/blog/2009/06/29/matrix-multiplication-in-c-using-pthreads-on-linux/ For the non-threaded version, I used the same setup (3 2-d matrices, dynamically allocated structs to hold r/c), and added a timer. First trials indicated that the non-threaded version was faster. My first thought was that the dimensions were too small to notice a difference, and it was taking longer to create the threads. So I upped the dimensions to about 50x50, randomly filled, and ran it, and I'm still not seeing any performance upgrade with the threaded version. What am I missing here?
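    One hedged observation about the loop as posted: pthread_join is called inside the creation loop, so each thread must finish before the next one is even created; the "parallel" version is therefore all of the serial work plus thread-creation overhead, which would match the timings. A sketch that creates everything first and joins afterwards (one thread per cell is still a lot of threads for 50x50; giving each thread a batch of rows would be the natural next step):

      /* Create all workers first, then join them, so computation can overlap. */
      pthread_t tids[M][N];
      for (int i = 0; i < M; i++) {
          for (int j = 0; j < N; j++) {
              struct v *data = malloc(sizeof *data);
              data->i = i;
              data->j = j;
              pthread_create(&tids[i][j], NULL, runner, data);
          }
      }
      for (int i = 0; i < M; i++)
          for (int j = 0; j < N; j++)
              pthread_join(tids[i][j], NULL);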


  • Passing a list of files to javac

    - by Robert Menteer
    How can I get the javac task to use an existing fileset? In my build.xml I have created several filesets to be used in multiple places throughout build file. Here is how they have been defined: <fileset dir = "${src}" id = "java.source.all"> <include name = "**/*.java" /> </fileset> <fileset dir = "${src}" id = "java.source.examples"> <include name = "**/Examples/**/*.java" /> </fileset> <fileset dir = "${src}" id = "java.source.tests"> <include name = "**/Tests/*.java" /> </fileset> <fileset dir = "${src}" id = "java.source.project"> <include name = "**/*.java" /> <exclude name = "**/Examples/**/*.java" /> <exclude name = "**/Tests/**/*.java" /> </fileset> I have also used macrodef to compile the java files so the javac task does not need to be repeated multiple times. The macro looks like this: <macrodef name="compile"> <attribute name="sourceref"/> <sequential> <javac srcdir = "${src}" destdir = "${build}" classpathref = "classpath" includeantruntime = "no" debug = "${debug}"> <filelist dir="." files="@{sourceref}" /> <-- email is about this </javac> </sequential> What I'm trying to do is compile only the classes that are needed for specific targets, not all the targets in the source tree. And do so without having to specify the files every time. Here are how the targets are defined: <target name = "compile-examples" depends = "init"> <compile sourceref = "${toString:java.source.examples}" /> </target> <target name = "compile-project" depends = "init"> <compile sourceref = "${toString:java.source.project}" /> </target> <target name = "compile-tests" depends = "init"> <compile sourceref = "${toString:java.source.tests}" /> </target> As you can see, each target specifies the java files to be compiled as a semicolon-separated list of absolute file names. The only problem with this is that javac does not support filelist. It also does not support fileset, path or pathset. I've tried using but it treats the list as a single file name. Another thing I tried is sending the reference directly (not using toString) and using but include does not have a ref attribute. SO THE QUESTION IS: How do you get the javac task to use a reference to a fileset that was defined in another part of the build file? I'm not interested in solutions that cause me to have multiple javac tasks. Completely re-writing the macro is acceptable. Changes to the targets are also acceptable provided redundant code between targets is kept to a minimum. P.S. Another problem is that fileset wants a comma-separated list. I've only done a brief search for a way to convert semicolons to commas and haven't found a way to do that. P.P.S. Sorry for the yelling but some people are too quick to post responses that don't address the subject.
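    One approach that stays inside javac's implicit fileset (hedged: this relies on javac extending Ant's MatchingTask, which accepts nested <patternset> elements): define the reusable pieces as <patternset>s rather than <fileset>s, and hand the macro a reference id; no filelist or separator conversion is needed:

      <patternset id = "java.source.examples">
          <include name = "**/Examples/**/*.java" />
      </patternset>

      <macrodef name="compile">
          <attribute name="patternsref"/>
          <sequential>
              <javac srcdir = "${src}"
                     destdir = "${build}"
                     classpathref = "classpath"
                     includeantruntime = "no"
                     debug = "${debug}">
                  <patternset refid = "@{patternsref}" />
              </javac>
          </sequential>
      </macrodef>

      <target name = "compile-examples" depends = "init">
          <compile patternsref = "java.source.examples" />
      </target>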


  • Best way to add an extra (nested) form in the middle of a tabbed form

    - by Scharrels
    I've got a web application, consisting mainly of a big form with information. The form is split into multiple tabs, to make it more readable for the user: <form> <div id="tabs"> <ul> <li><a href="#tab1">Tab1</a></li> <li><a href="#tab2">Tab2</a></li> </ul> <div id="tab1">A big table with a lot of input rows</div> <div id="tab2">A big table with a lot of input rows</div> </div> </form> The form is dynamically extended (extra rows are added to the tables). Every 10 seconds the form is serialized and synchronized with the server. I now want to add an interactive form on one of the tabs: when a user enters a name in a field, this information is sent to the server and an id associated with that name is returned. This id is used as an identifier for some dynamically added form fields. A quick sketch of such a page would look like this: <form action="bigform.php"> <div id="tabs"> <ul> <li><a href="#tab1">Tab1</a></li> <li><a href="#tab2">Tab2</a></li> </ul> <div id="tab1">A big table with a lot of input rows</div> <div id="tab2"> <div class="associatedinfo"> <p>Information for Joe</p> <ul> <li><input name="associated[26][]" /></li> <li><input name="associated[26][]" /></li> </ul> </div> <div class="associatedinfo"> <p>Information for Jill</p> <ul> <li><input name="associated[12][]" /></li> <li><input name="associated[12][]" /></li> </ul> </div> <div id="newperson"> <form action="newform.php"> <p>Add another person:</p> <input name="extra" /><input type="submit" value="Add" /> </form> </div> </div> </div> </form> The above will not work: nested forms are not allowed in HTML. However, I really need to display the form on that tab: it's part of the functionality of that page. I also want the behaviour of a separate form: when the user hits return in the form field, the "Add" submit button is pressed and a submit action is triggered on the partial form. What is the best way to solve this problem?
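    A common way out (sketched with jQuery for brevity; the URL and ids are taken from the example markup): keep the inner widget as a plain <div> instead of a <form>, submit it with ajax, and trap the Enter key so it still behaves like a standalone form:

      // The 'Add another person' widget is a plain <div id="newperson"> holding
      // the input and a type="button" control, so no form is nested.
      function addPerson() {
          $.post('newform.php',
                 { extra: $('#newperson input[name=extra]').val() },
                 function(id) {
                     // use the returned id to append new associated[id][] inputs
                 });
      }
      $('#newperson input[name=extra]').keypress(function(e) {
          if (e.which === 13) {     // Enter inside the widget...
              e.preventDefault();   // ...must not submit the outer form
              addPerson();
          }
      });
      $('#newperson input[type=button]').click(addPerson);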


  • Using GA in GUI

    - by AlexT
    Sorry if this isn't clear as I'm writing this on a mobile device and I'm trying to make it quick. I've written a basic Genetic Algorithm with a binary encoding (genes) that builds a fitness value and evolves through several iterations using tournament selection, mutation and crossover. As a basic command-line example it seems to work. The problem I've got is with applying a genetic algorithm within a GUI as I am writing a maze-solving program that uses the GA to find a method through a maze. How do I turn my random binary encoded genes and fitness function (add all the binary values together) into a method to control a bot around a maze? I have built a basic GUI in Java consisting of a maze of labels (like a grid) with the available routes being in blue and the walls being in black. To reiterate my GA performs well and contains what any typical GA would (fitness method, get and set population, selection, crossover, etc) but now I need to plug it into a GUI to get my maze running. What needs to go where in order to get a bot that can move in different directions depending on what the GA says? Rough pseudocode would be great if possible As requested, an Individual is built using a separate class (Indiv), with all the main work being done in a Pop class. When a new individual is instantiated an array of ints represents the genes of said individual, with the genes being picked at random from a number between 0 and 1. The fitness function merely adds together the value of these genes and in the Pop class handles selection, mutation and crossover of two selected individuals. There's not much else to it, the command line program just shows evolution over n generations with the total fitness improving over each iteration. EDIT: It's starting to make a bit more sense now, although there are a few things that are bugging me... As Adamski has suggested I want to create an "Agent" with the options shown below. The problem I have is where the random bit string comes into play here. The agent knows where the walls are and has it laid out in a 4 bit string (i.e. 0111), but how does this affect the random 32 bit string? (i.e. 10001011011001001010011011010101) If I have the following maze (x is the start place, 2 is the goal, 1 is the wall): x 1 1 1 1 0 0 1 0 0 1 0 0 0 2 If I turn left I'm facing the wrong way and the agent will move completely off the maze if it moves forward. I assume that the first generation of the string will be completely random and it will evolve as the fitness grows but I don't get how the string will work within a maze. So, to get this straight... The fitness is the result of when the agent is able to move and is by a wall. The genes are a string of 32 bits, split into 16 sets of 2 bits to show the available actions and for the robot to move the two bits need to be passed with four bits from the agent showing its position near the walls. If the move is to go past a wall the move isn't made and it is deemed invalid and if the move is made and if a new wall is found then the fitness goes up. Is that right?
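    To make the encoding concrete, a sketch of a decode step consistent with the description above (senseWalls, moveForward, etc. are hypothetical helpers): the agent's 4-bit wall pattern selects one of the 16 two-bit pairs in the 32-bit genome, and that pair is decoded into an action; the fitness function then rewards moves that are actually legal and, ultimately, reaching the goal cell:

      // genome: the evolved 32-bit string; wallPattern: 0..15 from the sensors.
      int wallPattern = senseWalls();                     // e.g. 0b0111
      int action = (genome >>> (wallPattern * 2)) & 0b11; // the matching 2-bit gene
      switch (action) {
          case 0: moveForward(); break;
          case 1: turnLeft();    break;
          case 2: turnRight();   break;
          case 3: /* stand still */ break;
      }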

    Read the article

  • Drawing outlines around organic shapes

    - by ThunderChunky_SF
    One thing that seems particularly easy to do in the Flash IDE but difficult to do with code is outlining an organic shape. In the IDE you can just use the ink bottle tool to draw a stroke around something. Using nothing but code it seems much trickier. One method I've seen is to add a glow filter to the shape in question and just mess with the strength. But what if I want to show only the outline? What I'd like to do is collect all of the points that make up the edge of the shape and then just connect the dots. I've actually gotten as far as collecting all of the points with a quick and dirty edge detection script that I wrote. So now I have a Vector of all the points that make up my shape. How do I connect them in the proper sequence so it actually looks like the original object? For anyone who is interested, here is my edge detection script:

      // Create a new sprite which we'll use for our outline
      var sp:Sprite = new Sprite();
      var radius:int = 50;
      sp.graphics.beginFill(0x00FF00, 1);
      sp.graphics.drawCircle(0, 0, radius);
      sp.graphics.endFill();
      sp.x = stage.stageWidth / 2;
      sp.y = stage.stageHeight / 2;

      // Create a bitmap data object to draw our vector data
      var bmd:BitmapData = new BitmapData(sp.width, sp.height, true, 0);

      // Use a transform matrix to translate the drawn clip so that none of its
      // pixels reside in negative space. The draw method will only draw starting
      // at 0,0
      var mat:Matrix = new Matrix(1, 0, 0, 1, radius, radius);
      bmd.draw(sp, mat);

      // Pass the bitmap data to an actual bitmap
      var bmp:Bitmap = new Bitmap(bmd);

      // Add the bitmap to the stage
      addChild(bmp);

      // Grab all of the pixel data from the bitmap data object
      var pixels:Vector.<uint> = bmd.getVector(bmd.rect);

      // Setup a vector to hold our stroke points
      var points:Vector.<Point> = new Vector.<Point>;

      // Loop through all of the pixels of the bitmap data object and
      // create a point instance for each pixel location that isn't
      // transparent.
      var l:int = pixels.length;
      for (var i:int = 0; i < l; ++i)
      {
          // Check to see if the pixel is transparent
          if (pixels[i] != 0)
          {
              var pt:Point;

              // Check to see if the pixel is on the first or last row.
              // We'll grab everything from these rows to close the outline
              if (i <= bmp.width || i >= (bmp.width * bmp.height) - bmp.width)
              {
                  pt = new Point();
                  pt.x = int(i % bmp.width);
                  pt.y = int(i / bmp.width);
                  points.push(pt);
                  continue;
              }

              // Check to see if the current pixel is on either extreme edge
              if (int(i % bmp.width) == 0 || int(i % bmp.width) == bmp.width - 1)
              {
                  pt = new Point();
                  pt.x = int(i % bmp.width);
                  pt.y = int(i / bmp.width);
                  points.push(pt);
                  continue;
              }

              // Check to see if the previous or next pixel are transparent;
              // if so, save the current one.
              if (i > 0 && i < bmp.width * bmp.height)
              {
                  if (pixels[i - 1] == 0 || pixels[i + 1] == 0)
                  {
                      pt = new Point();
                      pt.x = int(i % bmp.width);
                      pt.y = int(i / bmp.width);
                      points.push(pt);
                  }
              }
          }
      }
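    The usual answer to "connect the dots in the proper sequence" is a contour-following pass (e.g. Moore-neighbour tracing) over the bitmap rather than sorting the point cloud afterwards. For a single closed blob, though, a greedy nearest-neighbour chain over the collected points is often good enough. A rough sketch of that idea, written in Java rather than the asker's AS3 (names hypothetical):

      import java.awt.Point;
      import java.util.ArrayList;
      import java.util.List;

      class OutlineOrder {
          // Orders a cloud of edge points by repeatedly hopping to the nearest
          // unvisited point; drawing lines in the returned order (and closing
          // back to the first point) approximates the outline. Assumes pts is
          // non-empty. O(n^2), fine for a few thousand points.
          static List<Point> chain(List<Point> pts) {
              List<Point> rest = new ArrayList<Point>(pts);
              List<Point> path = new ArrayList<Point>();
              Point cur = rest.remove(0);
              path.add(cur);
              while (!rest.isEmpty()) {
                  int best = 0;
                  long bestD = Long.MAX_VALUE;
                  for (int i = 0; i < rest.size(); i++) {
                      long dx = rest.get(i).x - cur.x;
                      long dy = rest.get(i).y - cur.y;
                      long d = dx * dx + dy * dy;
                      if (d < bestD) { bestD = d; best = i; }
                  }
                  cur = rest.remove(best);
                  path.add(cur);
              }
              return path;
          }
      }

    Greedy chaining can jump across thin gaps or between disjoint blobs, which is why a true boundary tracer is the more robust choice for arbitrary organic shapes.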

    Read the article

  • Running code when all threads are finished processing.

    - by rich97
    Quick note: Java and Android noob here; I'm open to you telling me I'm stupid (as long as you tell me why). I have an Android application which requires me to start multiple threads originating from various classes and only advance to the next activity once all threads have done their job. I also want to add a "failsafe" timeout in case one of the threads takes too long (an HTTP request taking too long or something). I searched Stack Overflow and found a post saying that I should create a class to keep a running total of open threads and then use a timer to poll for when all the threads are completed. I think I've created a working class to do this for me; it's untested as of yet but has no errors showing in Eclipse. Is this a correct implementation? Are there any APIs I should be made aware of (such as classes in the Java or Android APIs that could be used in place of the abstract classes at the bottom of the class)?

      package com.dmp.geofix.libs;

      import java.util.ArrayList;
      import java.util.Iterator;
      import java.util.Timer;
      import java.util.TimerTask;

      public class ThreadMonitor {
          private Timer timer = null;
          private TimerTask timerTask = null;
          private OnSuccess onSuccess = null;
          private OnError onError = null;
          private static ArrayList<Thread> threads;

          private final int POLL_OPEN_THREADS = 100;
          private final int TIMEOUT = 10000;

          public ThreadMonitor() {
              timerTask = new PollThreadsTask();
          }

          public ThreadMonitor(OnSuccess s) {
              timerTask = new PollThreadsTask();
              onSuccess = s;
          }

          public ThreadMonitor(OnError e) {
              timerTask = new PollThreadsTask();
              onError = e;
          }

          public ThreadMonitor(OnSuccess s, OnError e) {
              timerTask = new PollThreadsTask();
              onSuccess = s;
              onError = e;
          }

          public void start() {
              Iterator<Thread> i = threads.iterator();
              while (i.hasNext()) {
                  i.next().start();
              }
              timer = new Timer();
              timer.schedule(timerTask, 0, POLL_OPEN_THREADS);
          }

          public void finish() {
              Iterator<Thread> i = threads.iterator();
              while (i.hasNext()) {
                  i.next().interrupt();
              }
              threads.clear();
              timer.cancel();
          }

          public void addThread(Thread t) {
              threads.add(t);
          }

          public void removeThread(Thread t) {
              threads.remove(t);
              t.interrupt();
          }

          class PollThreadsTask extends TimerTask {
              private int timeElapsed = 0;

              @Override
              public void run() {
                  timeElapsed += POLL_OPEN_THREADS;
                  if (timeElapsed <= TIMEOUT) {
                      if (threads.isEmpty() == false) {
                          if (onSuccess != null) {
                              onSuccess.run();
                          }
                      }
                  } else {
                      if (onError != null) {
                          onError.run();
                      }
                      finish();
                  }
              }
          }

          public abstract class OnSuccess {
              public abstract void run();
          }

          public abstract class OnError {
              public abstract void run();
          }
      }
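    For the "are there any APIs" part: java.util.concurrent already covers this pattern, with no polling timer needed. A minimal sketch (hypothetical names, not the asker's class) that blocks until every worker finishes or a timeout elapses:

      import java.util.List;
      import java.util.concurrent.CountDownLatch;
      import java.util.concurrent.TimeUnit;

      class LatchMonitor {
          // Starts one thread per job and waits for all of them.
          // Returns true if everything finished, false on timeout.
          static boolean runAll(List<Runnable> jobs, long timeoutMs) throws InterruptedException {
              final CountDownLatch done = new CountDownLatch(jobs.size());
              for (final Runnable job : jobs) {
                  new Thread(new Runnable() {
                      public void run() {
                          try {
                              job.run();
                          } finally {
                              done.countDown(); // count down even if the job throws
                          }
                      }
                  }).start();
              }
              return done.await(timeoutMs, TimeUnit.MILLISECONDS);
          }
      }

    On Android the await call would live off the UI thread (e.g. in an AsyncTask), with the success or error handling posted back before launching the next activity.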

    Read the article

  • GetAcceptExSockaddrs returns garbage! Does anyone know why?

    - by David
    Hello, I'm trying to write a quick and dirty echo server in Delphi, but I notice that GetAcceptExSockaddrs seems to be writing to only the first 4 bytes of the structure I pass it.

      USES
        SysUtils;

      TYPE
        BOOL = LongBool;
        DWORD = Cardinal;
        LPDWORD = ^DWORD;
        short = SmallInt;
        ushort = Word;
        uint16 = Word;
        uint = Cardinal;
        ulong = Cardinal;
        SOCKET = uint;
        PVOID = Pointer;
        _HANDLE = DWORD;

        _in_addr = packed record
          s_addr : ulong;
        end;

        _sockaddr_in = packed record
          sin_family : short;
          sin_port   : uint16;
          sin_addr   : _in_addr;
          sin_zero   : array[0..7] of Char;
        end;
        P_sockaddr_in = ^_sockaddr_in;

        _Overlapped = packed record
          Internal : Int64;
          Offset   : Int64;
          hEvent   : _HANDLE;
        end;
        LP_Overlapped = ^_Overlapped;

      IMPORTS
        function _AcceptEx(sListenSocket, sAcceptSocket : SOCKET;
          lpOutputBuffer : PVOID;
          dwReceiveDataLength, dwLocalAddressLength, dwRemoteAddressLength : DWORD;
          lpdwBytesReceived : LPDWORD;
          lpOverlapped : LP_Overlapped) : BOOL; stdcall;
          external MSWinsock name 'AcceptEx';

        procedure _GetAcceptExSockaddrs(lpOutputBuffer : PVOID;
          dwReceiveDataLength, dwLocalAddressLength, dwRemoteAddressLength : DWORD;
          LocalSockaddr : P_sockaddr_in; LocalSockaddrLength : LPINT;
          RemoteSockaddr : P_sockaddr_in; RemoteSockaddrLength : LPINT); stdcall;
          external MSWinsock name 'GetAcceptExSockaddrs';

      CONST
        BufDataSize = 8192;
        BufAddrSize = SizeOf(_sockaddr_in) + 16;

      VAR
        ListenSock, AcceptSock : SOCKET;
        Addr, LocalAddr, RemoteAddr : _sockaddr_in;
        LocalAddrSize, RemoteAddrSize : INT;
        Buf : array[1..BufDataSize + BufAddrSize * 2] of Byte;
        BytesReceived : DWORD;
        Ov : _Overlapped;

      BEGIN
        //WSAStartup, create listen socket, bind to port 1066 on any interface, listen
        //Create event for overlapped (autoreset, initially not signalled)
        //Create accept socket
        if _AcceptEx(ListenSock, AcceptSock, @Buf, BufDataSize, BufAddrSize, BufAddrSize,
                     @BytesReceived, @Ov) then
          WinCheck('SetEvent', _SetEvent(Ov.hEvent))
        else if GetLastError <> ERROR_IO_PENDING then
          WinCheck('AcceptEx', GetLastError);

        {do WaitForMultipleObjects}

        _GetAcceptExSockaddrs(@Buf, BufDataSize, BufAddrSize, BufAddrSize,
                              @LocalAddr, @LocalAddrSize, @RemoteAddr, @RemoteAddrSize);

    So if I run this, connect to it with Telnet (on the same computer, connecting to localhost) and then type a key, WaitForMultipleObjects will unblock and GetAcceptExSockaddrs will run. But the result is garbage:

      RemoteAddr.sin_family = -13894
      RemoteAddr.sin_port   = 64

    and the rest is zeroes. What gives? Thanks in advance!

    Read the article

  • System.Threading.Timer Doesn't Trigger my TimerCallBack Delegate

    - by Tom Kong
    Hi, I am writing my first Windows Service using C# and I am having some trouble with my Timer class. When the service is started, it runs as expected, but the code will not execute again (I want it to run every minute). Please take a quick look at the attached source and let me know if you see any obvious mistakes! TIA

      using System;
      using System.Collections.Generic;
      using System.ComponentModel;
      using System.Data;
      using System.Diagnostics;
      using System.Linq;
      using System.ServiceProcess;
      using System.Text;
      using System.Threading;
      using System.IO;

      namespace CXO001
      {
          public partial class Service1 : ServiceBase
          {
              public Service1()
              {
                  InitializeComponent();
              }

              /*
               * Aim: To calculate and update the Occupancy values for the different Sites
               *
               * Method: Retrieve data every minute, updating a public value which can be polled
               */
              protected override void OnStart(string[] args)
              {
                  Daemon();
              }

              public void Daemon()
              {
                  TimerCallback tcb = new TimerCallback(On_Tick);
                  TimeSpan duetime = new TimeSpan(0, 0, 1);
                  TimeSpan interval = new TimeSpan(0, 1, 0);
                  Timer querytimer = new Timer(tcb, null, duetime, interval);
              }

              protected override void OnStop()
              {
              }

              static int[] floorplanids = new int[] { 115, 114, 107, 108 };
              public static List<Record> Records = new List<Record>();
              static bool firstrun = true;

              public static void On_Tick(object timercallback)
              {
                  //Update occupancy data for the last minute
                  //Save a copy of the public values to HDD with a timestamp
                  string starttime;
                  if (Records.Count > 0)
                  {
                      starttime = Records.Last().TS;
                      firstrun = false;
                  }
                  else
                  {
                      starttime = DateTime.Today.AddHours(7).ToString();
                      firstrun = true;
                  }
                  DateTime endtime = DateTime.Now;
                  GetData(starttime, endtime);
              }

              public static void GetData(string starttime, DateTime endtime)
              {
                  string connstr = "Data Source = 192.168.1.123; Initial Catalog = Brickstream_OPS; User Id = Brickstream; Password = bstas;";
                  DataSet resultds = new DataSet();

                  //Get the occupancy for each Zone
                  foreach (int zone in floorplanids)
                  {
                      SQL s = new SQL();
                      string querystr = "SELECT SUM(DIRECTIONAL_METRIC.NUM_TO_ENTER - DIRECTIONAL_METRIC.NUM_TO_EXIT) AS 'Occupancy' FROM REPORT_OBJECT INNER JOIN REPORT_OBJ_METRIC ON REPORT_OBJECT.REPORT_OBJ_ID = REPORT_OBJ_METRIC.REPORT_OBJECT_ID INNER JOIN DIRECTIONAL_METRIC ON REPORT_OBJ_METRIC.REP_OBJ_METRIC_ID = DIRECTIONAL_METRIC.REP_OBJ_METRIC_ID WHERE (REPORT_OBJ_METRIC.M_START_TIME BETWEEN '" + starttime + "' AND '" + endtime.ToString() + "') AND (REPORT_OBJECT.FLOORPLAN_ID = '" + zone + "');";
                      resultds = s.Go(querystr, connstr, zone.ToString(), resultds);
                  }

                  List<Record> result = new List<Record>();
                  int c = 0;
                  foreach (DataTable dt in resultds.Tables)
                  {
                      Record r = new Record();
                      r.TS = DateTime.Now.ToString();
                      r.Zone = dt.TableName;
                      if (!firstrun)
                      {
                          r.Occupancy = (dt.Rows[0].Field<int>("Occupancy")) + (Records[c].Occupancy);
                      }
                      else
                      {
                          r.Occupancy = dt.Rows[0].Field<int>("Occupancy");
                      }
                      result.Add(r);
                      c++;
                  }
                  Records = result;
                  MrWriter();
              }

              public static void MrWriter()
              {
                  StringBuilder output = new StringBuilder("Time,Zone,Occupancy\n");
                  foreach (Record r in Records)
                  {
                      output.Append(r.TS);
                      output.Append(",");
                      output.Append(r.Zone);
                      output.Append(",");
                      output.Append(r.Occupancy.ToString());
                      output.Append("\n");
                  }
                  output.Append(firstrun.ToString());
                  output.Append(DateTime.Now.ToFileTime());
                  string filePath = @"C:\temp\CXO.csv";
                  File.WriteAllText(filePath, output.ToString());
              }
          }
      }

    Read the article

  • GUI Freezes and Output to JTextField from user input

    - by user2929005
    I hope I am only having two issues at the moment. I had help on a previous question I asked, but now I am running into different issues. My current issue is that I need to print a sorted array into a JTextField. I have no questions about how to sort the array; I would just like help getting the array to print to the JTextField when the Bubble Sort button is pressed. Currently when I press the button the GUI freezes. It freezes at this line:

      list.add(Integer.valueOf(input.nextInt()));

    Please help. Thank you.

      import java.awt.EventQueue;
      import javax.swing.*;
      import java.awt.event.*;
      import java.util.*;

      public class Sorting {
          private JFrame frame;
          private JTextArea inputArea;
          private JTextField outputArea;
          ArrayList<Integer> list = new ArrayList<Integer>();
          Scanner input = new Scanner(System.in);

          /**
           * Launch the application.
           */
          public static void main(String[] args) {
              EventQueue.invokeLater(new Runnable() {
                  public void run() {
                      try {
                          Sorting window = new Sorting();
                          window.frame.setVisible(true);
                      } catch (Exception e) {
                          e.printStackTrace();
                      }
                  }
              });
          }

          /**
           * Create the application.
           */
          public Sorting() {
              initialize();
          }

          /**
           * Initialize the contents of the frame.
           */
          private void initialize() {
              frame = new JFrame();
              frame.setBounds(100, 100, 450, 300);
              frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
              frame.setTitle("Sorting");
              frame.getContentPane().setLayout(null);

              JButton bubbleButton = new JButton("Bubble Sort");
              bubbleButton.addActionListener(new ActionListener() {
                  public void actionPerformed(ActionEvent arg0) {
                      list.add(Integer.valueOf(input.nextInt()));
                      // for(int i = 0; i < list.size(); i++){
                      //
                      // }
                  }
              });
              bubbleButton.setBounds(10, 211, 114, 23);
              frame.getContentPane().add(bubbleButton);

              JButton mergeButton = new JButton("Merge Sort");
              mergeButton.addActionListener(new ActionListener() {
                  public void actionPerformed(ActionEvent e) {
                  }
              });
              mergeButton.setBounds(305, 211, 114, 23);
              frame.getContentPane().add(mergeButton);

              JButton quickButton = new JButton("Quick Sort");
              quickButton.addActionListener(new ActionListener() {
                  public void actionPerformed(ActionEvent e) {
                  }
              });
              quickButton.setBounds(163, 211, 114, 23);
              frame.getContentPane().add(quickButton);

              inputArea = new JTextArea();
              inputArea.setBounds(10, 36, 414, 51);
              frame.getContentPane().add(inputArea);

              outputArea = new JTextField();
              outputArea.setEditable(false);
              outputArea.setBounds(10, 98, 414, 59);
              frame.getContentPane().add(outputArea);
              outputArea.setColumns(10);

              JLabel label = new JLabel("Please Enter 5 Numbers");
              label.setHorizontalAlignment(SwingConstants.CENTER);
              label.setBounds(10, 11, 414, 14);
              frame.getContentPane().add(label);
          }
      }
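    The freeze is consistent with Scanner(System.in).nextInt() blocking the Swing event dispatch thread while it waits for console input that never arrives. A sketch of the assumed fix, using the asker's own inputArea and outputArea fields: parse the numbers from the text area inside the button's listener, then write the result into the text field (Collections.sort stands in for the bubble sort here):

      bubbleButton.addActionListener(new ActionListener() {
          public void actionPerformed(ActionEvent arg0) {
              list.clear();
              // Read the five numbers from the GUI instead of System.in
              for (String tok : inputArea.getText().trim().split("\\s+")) {
                  list.add(Integer.valueOf(tok)); // throws NumberFormatException on bad input
              }
              Collections.sort(list); // replace with the bubble sort implementation
              outputArea.setText(list.toString());
          }
      });

    Because the listener runs on the event dispatch thread and never blocks, the GUI stays responsive.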

    Read the article

  • Loading multiple copies of a group of DLLs in the same process

    - by george
    Background

    I'm maintaining a plugin for an application, using Visual C++ 2003. The plugin is composed of several DLLs: there's the main DLL, the one that the application loads using LoadLibrary, and there are several utility DLLs that are used by the main DLL and by each other. Dependencies generally look like this:

      plugin.dll - utilA.dll, utilB.dll
      utilA.dll  - utilB.dll
      utilB.dll  - utilA.dll, utilC.dll

    You get the picture. Some of the dependencies between the DLLs are load-time and some run-time. All the DLL files are stored in the executable's directory (not a requirement, just how it works now).

    The problem

    There's a new requirement: running multiple instances of the plugin within the application. The application runs each instance of the plugin in its own thread, i.e. each thread calls into the plugin. The plugin's code, however, is anything but thread-safe - lots of global variables etc. Unfortunately, fixing the whole thing isn't currently an option, so I need a way to load multiple (at most 3) copies of the plugin's DLLs in the same process.

    Option 1: The distinct names approach

    Create 3 copies of each DLL file, so that each file has a distinct name, e.g. plugin1.dll, plugin2.dll, plugin3.dll, utilA1.dll, utilA2.dll, utilA3.dll, utilB1.dll, etc. The application will load plugin1.dll, plugin2.dll and plugin3.dll. The files will be in the executable's directory. For each group of DLLs to know each other by name (so the inter-dependencies work), the names need to be known at compilation time, meaning the DLLs need to be compiled multiple times, each time with different output file names. Not very complicated, but I'd hate having 3 copies of the VS project files, and I don't like compiling the same files over and over.

    Option 2: The side-by-side assemblies approach

    Create 3 copies of the DLL files, each group in its own directory, and define each group as an assembly by putting an assembly manifest file in the directory, listing the plugin's DLLs. Each DLL gets an application manifest pointing to the assembly, so that the loader finds the copies of the utility DLLs residing in the same directory. The manifest needs to be embedded for it to be found when a DLL is loaded using LoadLibrary. I'll use mt.exe from a later VS version for the job, since VS2003 has no built-in manifest embedding support. I've tried this approach with partial success: dependencies are found at load time of the DLLs, but not when a DLL function is called that loads another DLL. This seems to be the expected behavior according to this article - a DLL's activation context is only used at the DLL's load time, and afterwards it's deactivated and the process's activation context is used. I haven't yet tried working around this using ISOLATION_AWARE_ENABLED.

    Questions

    1. Got any other options? Any quick & dirty solution will do. :-)
    2. Will ISOLATION_AWARE_ENABLED even work with VS2003?

    Comments will be greatly appreciated. Thanks!

    Read the article

  • How to open a server port outside of an OpenVPN tunnel with a pf firewall on OSX (BSD)

    - by Timbo
    I have a Mac mini that I use as a media server. It runs XBMC and serves media from my NAS to my stereo and TV (which has been color calibrated with a Spyder3Express, happy). The Mac runs OSX 10.8.2, and the internet connection is tunneled for general privacy over OpenVPN through Tunnelblick. I believe my anonymous VPN provider pushes "redirect_gateway" to OpenVPN/Tunnelblick, because when it is up it effectively tunnels all non-LAN traffic in- and outbound. As an unwanted side effect, that also exposes the box's server ports to the outside world and bypasses my firewall-router (Netgear SRX5308). I have run nmap from outside the LAN against the VPN IP, and the server ports on the mini are clearly visible and connectable. The mini has the following ports open: ssh/22, ARD/5900, and 8080+9090 for the XBMC iOS client Constellation.

    I also have a Synology NAS which, apart from LAN file serving over AFP and WebDAV, only serves up an OpenVPN/1194 and a PPTP/1723 server. When outside of the LAN I connect to this from my laptop over OpenVPN, and over PPTP from my iPhone. I only want to connect over AFP/548 from the mini to the NAS.

    The border firewall (SRX5308) just works excellently, stable and with a very high throughput when streaming from various VOD services. My connection is a 100/10 with close to the theoretical max throughput. The ruleset is as follows.

    Inbound:

      PPTP/1723    Allow always to 10.0.0.40 (NAS/VPN server) from a restricted IP range corresponding to the possible cell provider range
      OpenVPN/1194 Allow always to 10.0.0.40 (NAS/VPN server) from any

    Outbound:

      Default outbound policy: Allow Always
      OpenVPN/1194 TCP Allow always from 10.0.0.40 (NAS) to a.b.8.1-a.b.8.254 (VPN provider)
      OpenVPN/1194 UDP Allow always from 10.0.0.40 (NAS) to a.b.8.1-a.b.8.254 (VPN provider)
      Block always from NAS to any

    On the mini I have disabled the OSX application-level firewall, because it throws popups which don't remember my choices from one time to another, and that's annoying on a media server. Instead I run Little Snitch, which controls outgoing connections nicely on an application level. I have configured the excellent OSX built-in firewall pf (from BSD) as follows (Apple App Firewall tie-ins removed):

      ### macro name for external interface.
      eth_if = "en0"
      vpn_if = "tap0"
      ### wifi_if = "en1"
      ### usb_if = "en3"
      ext_if = $eth_if
      LAN = "{10.0.0.0/24}"

      ### General housekeeping rules ###

      ### Drop all blocked packets silently
      set block-policy drop

      ### all incoming traffic on external interface is normalized and fragmented
      ### packets are reassembled.
      scrub in on $ext_if all fragment reassemble
      scrub in on $vpn_if all fragment reassemble
      scrub out all

      ### exercise antispoofing on the external interface, but add the local
      ### loopback interface as an exception, to prevent services utilizing the
      ### local loop from being blocked accidentally.
      set skip on lo0
      antispoof for $ext_if inet
      antispoof for $vpn_if inet

      ### spoofing protection for all interfaces
      block in quick from urpf-failed

      #############################
      block all

      ### Access to the mini server over ssh/22 and remote desktop/5900 from LAN/en0 only
      pass in on $eth_if proto tcp from $LAN to any port {22, 5900, 8080, 9090}

      ### Allow all udp and icmp also, necessary for Constellation. Could be tightened.
      pass on $eth_if proto {udp, icmp} from $LAN to any

      ### Allow AFP to 10.0.0.40 (NAS)
      pass out on $eth_if proto tcp from any to 10.0.0.40 port 548

      ### Allow OpenVPN tunnel setup over unprotected link (en0) only to VPN provider IPs
      ### and port ranges
      pass on $eth_if proto tcp from any to a.b.8.0/24 port 1194:1201

      ### OpenVPN tunnel rules. All traffic allowed out, only in to ports 4100-4110.
      ### Outgoing pings ok
      pass in on $vpn_if proto {tcp, udp} from any to any port 4100:4110
      pass out on $vpn_if proto {tcp, udp, icmp} from any to any

    So what are my goals, and what does the above setup achieve (until you tell me otherwise :)?

    1. Full LAN access to the above ports on the mini/media server (including through my own VPN server).
    2. All internet traffic from the mini/media server is anonymized and tunneled over VPN.
    3. If OpenVPN/Tunnelblick on the mini drops the connection, nothing is leaked, both because of pf and the router's outgoing ruleset. It can't even do a DNS lookup through the router.

    So what do I have to hide with all this? Nothing much really, I just got carried away trying to stop port scans through the VPN tunnel :) In any case this setup works perfectly and is very stable.

    The problem at last! I want to run a Minecraft server, and I installed it under a separate user account on the mini (user=mc) to keep things partitioned. I don't want this server accessible through the anonymized VPN tunnel, because there are lots more port scans and hacking attempts through that than over my regular IP, and I don't trust Java in general. So I added the following pf rules on the mini:

      ### Allow Minecraft public through user mc
      pass in on $eth_if proto {tcp, udp} from any to any port 24983 user mc
      pass out on $eth_if proto {tcp, udp} from any to any user mc

    And these additions on the border firewall:

      Inbound:  Allow always TCP/UDP from any to 10.0.0.40 (NAS)
      Outbound: Allow always TCP port 80 from 10.0.0.40 to any (needed for online account checkups)

    This works fine, but only when the OpenVPN/Tunnelblick tunnel is down. When it is up, no connection is possible to the Minecraft server from outside the LAN; inside the LAN it is always OK. Everything else functions as intended. I believe the redirect_gateway push is close to the root of the problem, but I want to keep this specific VPN provider because of the fantastic throughput, price and service.

    The solution? How can I open up the Minecraft server port outside of the tunnel so it's only available over en0, not the VPN tunnel? Should I add a static route? But I don't know which IPs will be connecting...stumbles. How secure would you estimate this setup to be, and do you have other improvements to share? I've searched extensively in the last few days to no avail... If you've read this far I bet you know the answer :)

    Read the article

  • Doing XML extracts with XSLT without having to read the whole DOM tree into memory?

    - by Thorbjørn Ravn Andersen
    I have a situation where I want to extract some information from some very large but regular XML files (I just had to do it with a 500 MB file), and where XSLT would be perfect. Unfortunately, the XSLT implementations I am aware of (except the most expensive version of Saxon) do not support reading in only the necessary part of the DOM; they read in the whole tree. This causes the computer to swap to death. The XPath in question is

      //m/e[contains(.,'foobar')]

    so it is essentially just a grep. Is there an XSLT implementation which can do this? Or an XSLT implementation which, given suitable "advice", can do this trick of pruning away the parts in memory which will not be needed again? I'd prefer a Java implementation, but both Windows and Linux are viable native platforms.

    EDIT: The input XML looks like:

      <log>
        <!-- Fri Jun 26 12:09:27 CEST 2009 -->
        <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'>
          <m>Registering Catalina:type=Manager,path=/axsWHSweb-20090626,host=localhost</m></e>
        <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'>
          <m>Force random number initialization starting</m></e>
        <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'>
          <m>Getting message digest component for algorithm MD5</m></e>
        <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'>
          <m>Completed getting message digest component</m></e>
        <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'>
          <m>getDigest() 0</m></e>
        ......
      </log>

    Essentially I want to select some m-nodes (and I know the XPath is wrong for that, it was just a quick hack), but maintain the XML layout.

    EDIT: It appears that STX may be what I am looking for (I can live with another transformation language), and that Joost is an implementation of it. Any experiences?

    EDIT: I found that Saxon 6.5.4 with -Xmx1500m could load my XML, so this allowed me to use my XPaths right now. This was just a lucky stroke, so I'd still like to solve this generically - that means scriptable, which in turn means no handcrafted Java filtering first.

    EDIT: Oh, by the way, this is a log file very similar to what is generated by the log4j XMLLayout. The reason for XML is to be able to do exactly this, namely run queries on the log. This is the initial attempt, hence the simple question. Later I'd like to be able to ask more complex questions - therefore I'd like the query language to be able to handle the input file.
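    Since the asker is open to Java and the query is essentially a streaming grep, a StAX pass does this in constant memory. A minimal sketch, assuming the <log>/<e>/<m> shape shown above (the class name and use of args[0] are illustrative, not from the question):

      import java.io.FileInputStream;
      import java.util.ArrayList;
      import java.util.List;
      import javax.xml.stream.XMLEventReader;
      import javax.xml.stream.XMLEventWriter;
      import javax.xml.stream.XMLInputFactory;
      import javax.xml.stream.XMLOutputFactory;
      import javax.xml.stream.events.XMLEvent;

      public class LogGrep {
          public static void main(String[] args) throws Exception {
              XMLEventReader in = XMLInputFactory.newInstance()
                      .createXMLEventReader(new FileInputStream(args[0]));
              XMLEventWriter out = XMLOutputFactory.newInstance()
                      .createXMLEventWriter(System.out);
              List<XMLEvent> record = new ArrayList<XMLEvent>(); // buffers one <e> at a time
              StringBuilder text = new StringBuilder();
              boolean inRecord = false;
              while (in.hasNext()) {
                  XMLEvent ev = in.nextEvent();
                  if (ev.isStartElement()
                          && "e".equals(ev.asStartElement().getName().getLocalPart())) {
                      inRecord = true;
                      record.clear();
                      text.setLength(0);
                  }
                  if (inRecord) {
                      record.add(ev);
                      if (ev.isCharacters()) {
                          text.append(ev.asCharacters().getData());
                      }
                      if (ev.isEndElement()
                              && "e".equals(ev.asEndElement().getName().getLocalPart())) {
                          inRecord = false;
                          if (text.indexOf("foobar") >= 0) {
                              for (XMLEvent buffered : record) {
                                  out.add(buffered); // emit the matching record verbatim
                              }
                          }
                      }
                  }
              }
              out.flush();
          }
      }

    Only one <e> record is buffered at a time, so a 500 MB file streams through in near-constant memory; wrapping the emitted records in a root element to keep the output well-formed is omitted for brevity.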

    Read the article

< Previous Page | 223 224 225 226 227 228 229 230 231 232 233 234  | Next Page >