Search Results

Search found 5942 results on 238 pages for 'total starnger'.


  • Macports irssi & perl5 installation issues

    - by Dmitri DB
    Long time reader, first time poster. Big, appreciative thanks for everyone's collective questioning and answering here and at stackoverflow, it's helped me quite a lot over the time I've been learning answers through these sites! Apologies in advance if I didn't search hard enough on posts already up on this site to find out what I could do about this issue, but I thought I'd just reach out for the sake of trying at least once. I've experienced this issue while starting up my macports-installed version of irssi: 13:25 -!- Irssi: Error in script dispatch: 13:25 Can't locate lib.pm in @INC (@INC contains: /opt/local/lib/perl5/site_perl/5.12.4/darwin-multi-2level /opt/local/lib/perl5/site_perl/5.12.4 /opt/local/lib/perl5/vendor_perl/5.12.4/darwin-multi-2level /opt/local/lib/perl5/vendor_perl/5.12.4 /opt/local/lib/perl5/5.12.4/darwin-multi-2level /opt/local/lib/perl5/5.12.4 /opt/local/lib/perl5/site_perl/5.12.3/darwin-multi-2level /opt/local/lib/perl5/site_perl/5.12.3 /opt/local/lib/perl5/site_perl /opt/local/lib/perl5/vendor_perl .) at (eval 18) line 1. 13:25 BEGIN failed--compilation aborted at (eval 18) line 1. 13:25 Huh, strange. I looked into it a bit: [email protected] /opt/local/lib/perl5 ?- find . -name "lib.pm" -ls 14673887 16 -r--r--r-- 1 root admin 6853 25 Jun 23:39 ./5.12.4/darwin-thread-multi- 2level/lib.pm [email protected] /opt/local/lib/perl5 ?- l 5.12.4/darwin-thread-multi-2level total 1864 drwxr-xr-x 55 root admin 1870 28 Jun 19:28 . drwxr-xr-x 158 root admin 5372 28 Jun 19:28 .. -rw-r--r-- 1 root admin 177814 25 Jun 23:39 .packlist drwxr-xr-x 6 root admin 204 28 Jun 19:28 B -r--r--r-- 1 root admin 25714 25 Jun 23:39 B.pm drwxr-xr-x 64 root admin 2176 28 Jun 19:28 CORE drwxr-xr-x 3 root admin 102 28 Jun 19:28 Compress -r--r--r-- 1 root admin 3000 25 Jun 23:39 Config.pm -r--r--r-- 1 root admin 228094 25 Jun 23:39 Config.pod -r--r--r-- 1 root admin 409 25 Jun 23:39 Config_git.pl -r--r--r-- 1 root admin 38759 25 Jun 23:39 Config_heavy.pl -r--r--r-- 1 root admin 21174 25 Jun 23:39 Cwd.pm -r--r--r-- 1 root admin 63535 25 Jun 23:39 DB_File.pm drwxr-xr-x 3 root admin 102 28 Jun 19:28 Data drwxr-xr-x 5 root admin 170 28 Jun 19:28 Devel drwxr-xr-x 4 root admin 136 28 Jun 19:28 Digest -r--r--r-- 1 root admin 25185 25 Jun 23:39 DynaLoader.pm drwxr-xr-x 22 root admin 748 28 Jun 19:28 Encode -r--r--r-- 1 root admin 29731 25 Jun 23:39 Encode.pm -r--r--r-- 1 root admin 6736 25 Jun 23:39 Errno.pm -r--r--r-- 1 root admin 5445 25 Jun 23:39 Fcntl.pm drwxr-xr-x 5 root admin 170 28 Jun 19:28 File drwxr-xr-x 3 root admin 102 28 Jun 19:28 Filter -r--r--r-- 1 root admin 1819 25 Jun 23:39 GDBM_File.pm drwxr-xr-x 4 root admin 136 28 Jun 19:28 Hash drwxr-xr-x 3 root admin 102 28 Jun 19:28 I18N drwxr-xr-x 11 root admin 374 28 Jun 19:28 IO -r--r--r-- 1 root admin 1404 25 Jun 23:39 IO.pm drwxr-xr-x 6 root admin 204 28 Jun 19:28 IPC drwxr-xr-x 4 root admin 136 28 Jun 19:28 List drwxr-xr-x 4 root admin 136 28 Jun 19:28 MIME drwxr-xr-x 3 root admin 102 28 Jun 19:28 Math -r--r--r-- 1 root admin 2519 25 Jun 23:39 NDBM_File.pm -r--r--r-- 1 root admin 4208 25 Jun 23:39 O.pm -r--r--r-- 1 root admin 15563 25 Jun 23:39 Opcode.pm -r--r--r-- 1 root admin 21011 25 Jun 23:39 POSIX.pm -r--r--r-- 1 root admin 58962 25 Jun 23:39 POSIX.pod drwxr-xr-x 5 root admin 170 28 Jun 19:28 PerlIO -r--r--r-- 1 root admin 2515 25 Jun 23:39 SDBM_File.pm drwxr-xr-x 4 root admin 136 28 Jun 19:28 Scalar -r--r--r-- 1 root admin 10837 25 Jun 23:39 Socket.pm -r--r--r-- 1 root admin 41003 25 Jun 23:39 Storable.pm drwxr-xr-x 4 root admin 
136 28 Jun 19:28 Sys drwxr-xr-x 3 root admin 102 28 Jun 19:28 Text drwxr-xr-x 5 root admin 170 28 Jun 19:28 Time drwxr-xr-x 3 root admin 102 28 Jun 19:28 Unicode -r--r--r-- 1 root admin 14462 25 Jun 23:39 attributes.pm drwxr-xr-x 38 root admin 1292 28 Jun 19:28 auto -r--r--r-- 1 root admin 19892 25 Jun 23:39 encoding.pm -r--r--r-- 1 root admin 6853 25 Jun 23:39 lib.pm -r--r--r-- 1 root admin 11044 25 Jun 23:39 mro.pm -r--r--r-- 1 root admin 997 25 Jun 23:39 ops.pm -r--r--r-- 1 root admin 13945 25 Jun 23:39 re.pm drwxr-xr-x 3 root admin 102 28 Jun 19:28 threads -r--r--r-- 1 root admin 33283 25 Jun 23:39 threads.pm So, it sort of seems to me that the permissions which perl5 got installed with for these modules has gotten mixed up somehow? I'm not really a perl user beyond enjoying it for massive directory-recursive find/replace operations within text files, so I haven't much of an idea what the permissions here are supposed to look like, and I'm not really sure how to go about determining how macports has gone and installed perl this way when it's otherwise worked without failure for years now. Does anyone have any recommendations for the sanest path towards rectifying this issue? Also, is there any interesting reason as to why the macports default for the perl5 port installs 5.12.4, and not 5.16.0, which has to be explicitly installed via the perl5.16 port? Thanks again!
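
    A few commands that may help narrow this down. The @INC in the error lists darwin-multi-2level directories, while find located lib.pm under darwin-thread-multi-2level, so irssi's embedded perl and the installed module tree seem to disagree about the architecture directory. This is only a sketch for confirming that and working around it; the paths are the ones from the listing above:

        # What archname and @INC does the MacPorts perl itself report?
        /opt/local/bin/perl -V

        # Is lib.pm loadable from the directory find reported?
        /opt/local/bin/perl -I/opt/local/lib/perl5/5.12.4/darwin-thread-multi-2level \
            -e 'use lib; print "lib.pm loaded\n"'

        # If so, pointing irssi's embedded perl at that directory is a quick workaround
        PERL5LIB=/opt/local/lib/perl5/5.12.4/darwin-thread-multi-2level irssi

    The permissions in the listing (-r--r--r--) are world-readable, so a plain permissions problem looks less likely than a path mismatch.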


  • Skipping scheduled self-tests and predicting drive EOL

    - by Steve Madsen
    For a few weeks now, smartd has been reporting that it is skipping some of its scheduled self-tests on the weekends: Apr 24 18:29:32 calvin smartd[4758]: Device: /dev/sda, skip scheduled Offline Immediate Test; 40% remaining of current Self-Test. Apr 24 18:29:33 calvin smartd[4758]: Device: /dev/sdb, skip scheduled Offline Immediate Test; 50% remaining of current Self-Test. The drives in this RAID-1 array are set to run an offline test four times a day, a short self-test at 2am every day, and a long self-test on Saturdays at 2am. For some reason, it looks like the long self-test is taking longer, causing the other scheduled tests to be skipped. First question: is this a sign of likely drive failure? Then today, smartd reported that a self-test failed. Here is the output of smartctl -a /dev/sdb: smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF INFORMATION SECTION === Model Family: Seagate Barracuda 7200.8 family Device Model: ST3250823AS Serial Number: 3ND1GNBC Firmware Version: 3.03 User Capacity: 250,059,350,016 bytes Device is: In smartctl database [for details use: -P show] ATA Version is: 7 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Sun Apr 25 13:15:34 2010 EDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 430) seconds. Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 84) minutes. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 047 039 006 Pre-fail Always - 168450357 3 Spin_Up_Time 0x0003 098 098 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 33 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 9 7 Seek_Error_Rate 0x000f 087 060 030 Pre-fail Always - 654745480 9 Power_On_Hours 0x0032 055 055 000 Old_age Always - 40141 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 51 194 Temperature_Celsius 0x0022 037 062 000 Old_age Always - 37 (0 17 0 0) 195 Hardware_ECC_Recovered 0x001a 047 039 000 Old_age Always - 168450357 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0000 100 253 000 Old_age Offline - 0 202 TA_Increase_Count 0x0032 100 253 000 Old_age Always - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed without error 00% 40131 - # 2 Extended offline Completed: read failure 30% 40129 379795511 # 3 Short offline Completed without error 00% 40084 - # 4 Short offline Completed without error 00% 40060 - # 5 Short offline Completed without error 00% 40036 - # 6 Short offline Completed without error 00% 40013 - # 7 Short offline Completed without error 00% 39990 - # 8 Extended offline Completed without error 00% 39977 - # 9 Short offline Completed without error 00% 39919 - #10 Short offline Completed without error 00% 39895 - #11 Short offline Completed without error 00% 39872 - #12 Short offline Completed without error 00% 39848 - #13 Short offline Completed without error 00% 39824 - #14 Short offline Completed without error 00% 39801 - #15 Extended offline Completed without error 00% 39789 - #16 Short offline Completed without error 00% 39754 - #17 Short offline Completed without error 00% 39732 - #18 Short offline Completed without error 00% 39707 - #19 Short offline Completed without error 00% 39683 - #20 Short offline Completed without error 00% 39660 - #21 Short offline Completed without error 00% 39636 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. Given that this drive is about 4.5 years old, I am probably tempting fate by keeping it in service. SMART doesn't seem to get much respect as a reliable way to predict drive failure. What else can I use to get an early indication of drive failure?
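
    A few smartctl invocations that bear directly on both questions above; a sketch, using the same /dev/sdb as in the output:

        # Re-run the long test and see whether it fails again, and at which LBA
        smartctl -t long /dev/sdb
        smartctl -l selftest /dev/sdb

        # The attributes most commonly watched as early failure indicators
        smartctl -A /dev/sdb | egrep 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'

    In the log shown, the extended test at 40129 hours failed with a read error at LBA 379795511, while Current_Pending_Sector and Offline_Uncorrectable are still 0 and Reallocated_Sector_Ct is 9, so the short tests passing afterwards is not by itself reassuring; repeating the long test is the usual next step.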


  • How do I connect to DB2 with DBI and mod_perl?

    - by Matthew
    I'm having issues with getting DBI's IBM DB2 driver to work with mod_perl. My test script is: #!/usr/bin/perl use strict; use CGI; use Data::Dumper; use DBI; { my $q; my $dsn; my $username; my $password; my $sth; my $dbc; my $row; $q = CGI->new; print $q->header; print $q->start_html(); $dsn = "DBI:DB2:SAMPLE"; $username = "username"; $password = "password"; print "<pre>".$q->escapeHTML(Dumper(\%ENV))."</pre>"; $dbc = DBI->connect($dsn, $username, $password); $sth = $dbc->prepare("SELECT * FROM SOME_TABLE WHERE FIELD='SOMETHING'"); $sth->execute(); $row = $sth->fetchrow_hashref(); print "<pre>".$q->escapeHTML(Dumper($row))."</pre>"; print $q->end_html; } This script works as CGI but not under mod_perl. I get this error in apache's error log: DBD::DB2::dr connect warning: [unixODBC][Driver Manager]Data source name not found, and no default driver specified at /usr/lib/perl5/site_perl/5.8.8/Apache/DBI.pm line 190. DBI connect('SAMPLE','username',...) failed: [unixODBC][Driver Manager]Data source name not found, and no default driver specified at /data/www/perl/test.pl line 15 First of all, why is it using ODBC? The native DB2 driver is installed (hence it works as CGI). Running Apache 2.2.3, mod_perl 2.0.4 under RHEL5. This guy had the same problem as me: http://www.mail-archive.com/[email protected]/msg22909.html But I have no idea how he fixed it. What does mod_php4 have to do with mod_perl? Any help would be greatly appreciated, I'm having no luck with google. Update: As james2vegas pointed out, the problem has something to do with PHP: I disable PHP all together I get the a different error: Total Environment allocation failure! Did you set up your DB2 client environment? I believe this error is to do with environment variables not being set up correctly, namely DB2INSTANCE. However, I'm not able to turn off PHP to resolve this problem (I need it for some legacy applications). So I now have 2 questions: How can I fix the original issue without disabling PHP all together? How can I fix the environment issue? I've set DB2INSTANCE, DB2_PATH and SQLLIB variables correctly using SetEnv and PerlSetEnv in httpd.conf, but with no luck. Note: I've edited the code to determine if the problem was to do with Global Variable Persistence.
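
    One ordering gotcha worth ruling out: PerlSetEnv values only appear in %ENV at request time, so anything that happens while modules are loaded at server startup will not see them. A minimal sketch of putting the DB2 environment in place before DBD::DB2 is compiled; the instance name and paths here are placeholders, not values from the question:

        # httpd.conf (sketch)
        PerlSetEnv DB2INSTANCE db2inst1
        PerlRequire /etc/httpd/conf/startup.pl

        # startup.pl -- environment first, then preload DBI and the native driver
        BEGIN { $ENV{DB2INSTANCE} ||= 'db2inst1'; }
        use DBI ();
        use DBD::DB2 ();
        1;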


  • Recommended ASP.NET Shared Hosting (USA)

    - by coffeeaddict
    Ok, I have to admit I'm getting fed up with www.discountasp.net's pricing model and this annoyance has built up over the past 8 years or so. I've been with them for years and absolutely love them on the technical side, however it's getting ridiculously expensive for so little that you get. I mean here's my scenario: 1) I am running 2 SQL Server databases which costs me $10/ea per month so that's $20/month for 2 and I only get 500 mb disk space which is horrible 2) I am paying $10/mo just for the hosting itself which I only get 1 gig of disk space! I mean common! 3) I am simply running 2 small apps (Screwturn Wiki & Subtext Blog)...so I don't really care if it's up 99% or not, it's not worth paying a total of $300 just to keep these 2 apps running over discountasp.net Anyone else feel the same? Yes, I know they have great support, probably have great servers running behind this but in the end I really don't care as long as my site is up 95% or better. Yes, the hosting toolset rocks. But you know I bet you I can find a similar set somewhere else. I like how I can totally control IIS 7 at discountasp and I can control my own app pool etc. That's very powerful and essential. But anyone have any good alternatives to discountasp that gives me close to the same at a much more reasonable cost point? I mean http://www.m6.net/prices.aspx gives you 10 SQL Databases for $7 and 200 gigs disk space! I don't know about their tools or support but just looking at those numbers and some other hosts I've seen, I feel that discountasp.net is way out of line. They don't even offer any purchasing discounts such as it would be nice if my 2nd SQL Server is only $5/month not $10...stuff like this, to make it much more realistic and fair. Opinions (people who do have discountasp.net, people who have left them, or people who have another host they like)??? But geez $300 just to host a couple DBs and lightweight open source apps? Not worth the price they are charging. I'm almost at a price point that enables me to get a decent dedicated server! I really don't care about beta ASP.NET frameworks support. Not a big deal to me. If you have alternative suggestions rather than your experience with discountasp, I'd like to know how their toolset is. Do you have complete control over your DB in terms of adding users, and same goes for the web app pool, etc.? Discountasp.net's control panel rocks. I don't want to loose the ability to at least control and add virtual directories, recycle my dedicated app pool myself, backup my sql database myself, through tools which is what discountasp does give you. I'd also want to know that the hoster at least gets the latest and greatest in terms of non-beta ASP.NET related frameworks available to its shared hosters.


  • WPF ListBox/View Data Binding weird result

    - by Aviatrix
    I have this problem when i try to synchronize a observable list with listbox/view it displays the first item X times (x total amount of records in the list) but it doesn't change the variable's here is the XAML <ListBox x:Name="PostListView" BorderThickness="0" MinHeight="300" Background="{x:Null}" BorderBrush="{x:Null}" Foreground="{x:Null}" VerticalContentAlignment="Top" ScrollViewer.HorizontalScrollBarVisibility="Disabled" ScrollViewer.VerticalScrollBarVisibility="Disabled" DataContext="{Binding Source={StaticResource PostListData}}" ItemsSource="{Binding Mode=OneWay}" IsSynchronizedWithCurrentItem="True" MinWidth="332" SelectedIndex="0" SelectionMode="Extended" AlternationCount="1"> <ListBox.ItemTemplate> <DataTemplate> <DockPanel x:Name="SinglePost" VerticalAlignment="Top" ScrollViewer.CanContentScroll="True" ClipToBounds="True" Width="333" Height="70" d:LayoutOverrides="VerticalAlignment" d:IsEffectDisabled="True"> <DockPanel.DataContext> <local:PostList/> </DockPanel.DataContext> <StackPanel x:Name="AvatarNickHolder" Width="60"> <Label x:Name="Nick" HorizontalAlignment="Center" Margin="5,0" VerticalAlignment="Top" Height="15" Content="{Binding Path=pUsername, FallbackValue=pUsername}" FontFamily="Arial" FontSize="10.667" Padding="5,0"/> <Image x:Name="Avatar" HorizontalAlignment="Center" Margin="5,0,5,5" VerticalAlignment="Top" Width="50" Height="50" IsHitTestVisible="False" Source="1045443356IMG_0972.jpg" Stretch="UniformToFill"/> </StackPanel> <TextBlock x:Name="userPostText" Margin="0,0,5,0" VerticalAlignment="Center" FontSize="10.667" Text="{Binding Path=pMsg, FallbackValue=pMsg}" TextWrapping="Wrap"/> </DockPanel> </DataTemplate> </ListBox.ItemTemplate> </ListBox> and here is the ovservable list class public class PostList : ObservableCollection<PostData> { public PostList() : base() { Add(new PostData("this is test msg", "Cather", "1045443356IMG_0972.jpg")); Add(new PostData("this is test msg1", "t1", "1045443356IMG_0972.jpg")); Add(new PostData("this is test msg2", "t2", "1045443356IMG_0972.jpg")); Add(new PostData("this is test msg3", "t3", "1045443356IMG_0972.jpg")); Add(new PostData("this is test msg4", "t4", "1045443356IMG_0972.jpg")); Add(new PostData("this is test msg5", "t5", "1045443356IMG_0972.jpg")); // Add(new PostData("Isak", "Dinesen")); // Add(new PostData("Victor", "Hugo")); // Add(new PostData("Jules", "Verne")); } } public class PostData { private string Username; private string Msg; private string Avatar; private string LinkAttached; private string PicAttached; private string VideoAttached; public PostData(string msg ,string username, string avatar=null, string link=null,string pic=null ,string video=null) { this.Username = username; this.Msg = msg; this.Avatar = avatar; this.LinkAttached = link; this.PicAttached = pic; this.VideoAttached = video; } public string pMsg { get { return Msg; } set { Msg = value; } } public string pUsername { get { return Username; } set { Username = value; } } public string pAvatar { get { return Avatar; } set { Avatar = value; } } public string pLink { get { return LinkAttached; } set { LinkAttached = value; } } public string pPic { get { return PicAttached; } set { PicAttached = value; } } public string pVideo { get { return VideoAttached; } set { VideoAttached = value; } } } Any ideas ?
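
    The first item repeating is consistent with the <DockPanel.DataContext><local:PostList/></DockPanel.DataContext> element inside the template: it creates a fresh PostList for every row, and the bindings then resolve against that list's current (first) item instead of the row's own PostData supplied by ItemsSource. A sketch of the template with that override removed, showing only the binding-relevant parts:

        <ListBox x:Name="PostListView"
                 DataContext="{Binding Source={StaticResource PostListData}}"
                 ItemsSource="{Binding Mode=OneWay}">
          <ListBox.ItemTemplate>
            <DataTemplate>
              <!-- no DataContext set here: each row binds to its own PostData item -->
              <DockPanel Width="333" Height="70">
                <StackPanel Width="60">
                  <Label Content="{Binding Path=pUsername, FallbackValue=pUsername}"/>
                </StackPanel>
                <TextBlock Text="{Binding Path=pMsg, FallbackValue=pMsg}" TextWrapping="Wrap"/>
              </DockPanel>
            </DataTemplate>
          </ListBox.ItemTemplate>
        </ListBox>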


  • Maven changelog plugin with Mercurial problem

    - by doom2.wad
    I have configured my Maven2 project to generate a changelog report from a Mercurial repository (accessible via file:// protocol) but the goal execution fails with the following message: + Error stacktraces are turned on. [INFO] Scanning for projects... [INFO] Searching repository for plugin with prefix: 'changelog'. [INFO] ------------------------------------------------------------------------ [INFO] Building Phobos3 Prototype [INFO] task-segment: [changelog:changelog] [INFO] ------------------------------------------------------------------------ [INFO] [changelog:changelog {execution: default-cli}] [INFO] Generating changed sets xml to: D:\Documents and Settings\501845922\Workspace\phobos3.prototype\target\changelog.xml [INFO] EXECUTING: hg log --verbose [WARNING] Could not figure out: abort: Invalid argument [ERROR] EXECUTION FAILED Execution of cmd : log failed with exit code: -1. Working directory was: D:\Documents and Settings\501845922\Workspace\phobos3.prototype Your Hg installation seems to be valid and complete. Hg version: 1.4.3+20100201 (OK) [ERROR] Provider message: [ERROR] EXECUTION FAILED Execution of cmd : log failed with exit code: -1. Working directory was: D:\Documents and Settings\501845922\Workspace\phobos3.prototype Your Hg installation seems to be valid and complete. Hg version: 1.4.3+20100201 (OK) [ERROR] Command output: [ERROR] [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] An error has occurred in Change Log report generation. Embedded error: An error has occurred during changelog command : Command failed. [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.lifecycle.LifecycleExecutionException: An error has occurred in Change Log report generation. at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:719) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.plugin.MojoExecutionException: An error has occurred in Change Log report generation. 
at org.apache.maven.reporting.AbstractMavenReport.execute(AbstractMavenReport.java:79) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) ... 17 more Caused by: org.apache.maven.reporting.MavenReportException: An error has occurred during changelog command : at org.apache.maven.plugin.changelog.ChangeLogReport.generateChangeSetsFromSCM(ChangeLogReport.java:555) at org.apache.maven.plugin.changelog.ChangeLogReport.getChangedSets(ChangeLogReport.java:393) at org.apache.maven.plugin.changelog.ChangeLogReport.executeReport(ChangeLogReport.java:340) at org.apache.maven.reporting.AbstractMavenReport.generate(AbstractMavenReport.java:98) at org.apache.maven.reporting.AbstractMavenReport.execute(AbstractMavenReport.java:73) ... 19 more Caused by: org.apache.maven.plugin.MojoExecutionException: Command failed. at org.apache.maven.plugin.changelog.ChangeLogReport.checkResult(ChangeLogReport.java:705) at org.apache.maven.plugin.changelog.ChangeLogReport.generateChangeSetsFromSCM(ChangeLogReport.java:467) ... 23 more [INFO] ------------------------------------------------------------------------ [INFO] Total time: 3 seconds [INFO] Finished at: Thu Apr 29 17:10:06 CEST 2010 [INFO] Final Memory: 5M/10M [INFO] ------------------------------------------------------------------------ What did I miss in a configuration? (I hope it is a configuration problem not a Maven plugins related bug!:) My repository URL seems to be ok (the plugin has been complaining before, I fixed that up), I also set a date format for parsing (also been complaining, also fixed). target/changelog.xml being promised was not generated at all. Maven 2.2.1 Mercurial 1.4.3 Windows XP SP3 mvn scm:changelog command provides an expected output. Thanks for any suggestions, I haven't googled up anything (nor binged up;).
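
    For reference, a sketch of the pom.xml pieces the report depends on, reusing the working directory from the log as the SCM URL; whether the spaces in "Documents and Settings" upset the hg invocation is worth ruling out by trying a checkout under a space-free path:

        <scm>
          <connection>scm:hg:file:///D:/Documents and Settings/501845922/Workspace/phobos3.prototype</connection>
        </scm>
        <reporting>
          <plugins>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-changelog-plugin</artifactId>
              <configuration>
                <!-- hg log prints dates like "Thu Apr 29 17:10:06 2010 +0200" -->
                <dateFormat>EEE MMM dd HH:mm:ss yyyy Z</dateFormat>
              </configuration>
            </plugin>
          </plugins>
        </reporting>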


  • How can I get the output of a command terminated by an alarm() call in Perl?

    - by rockyurock
    Case 1 If I run below command i.e iperf in UL only, then i am able to capture the o/p in txt file @output = readpipe("iperf.exe -u -c 127.0.0.1 -p 5001 -b 3600k -t 10 -i 1"); open FILE, ">Misplay_DL.txt" or die $!; print FILE @output; close FILE; Case 2 When I run iperf in DL mode , as we know server will start listening in cont. mode like below even after getting data from client (Here i am using server and client on LAN) @output = system("iperf.exe -u -s -p 5001 -i 1"); on server side: D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTO MATION_APP_\AUTOMATION_UTILITYiperf.exe -u -s -p 5001 ------------------------------------------------------------ Server listening on UDP port 5001 Receiving 1470 byte datagrams UDP buffer size: 8.00 KByte (default) ------------------------------------------------------------ [1896] local 192.168.5.101 port 5001 connected with 192.168.5.101 port 4878 [ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams [1896] 0.0- 2.0 sec 881 KBytes 3.58 Mbits/sec 0.000 ms 0/ 614 (0%) command prompt does not appear , process is contd... on client side: D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTO MATION_APP_\AUTOMATION_UTILITYiperf.exe -u -c 192.168.5.101 -p 5001 -b 3600k -t 2 -i 1 ------------------------------------------------------------ Client connecting to 192.168.5.101, UDP port 5001 Sending 1470 byte datagrams UDP buffer size: 8.00 KByte (default) ------------------------------------------------------------ [1880] local 192.168.5.101 port 4878 connected with 192.168.5.101 port 5001 [ ID] Interval Transfer Bandwidth [1880] 0.0- 1.0 sec 441 KBytes 3.61 Mbits/sec [1880] 1.0- 2.0 sec 439 KBytes 3.60 Mbits/sec [1880] 0.0- 2.0 sec 881 KBytes 3.58 Mbits/sec [1880] Server Report: [1880] 0.0- 2.0 sec 881 KBytes 3.58 Mbits/sec 0.000 ms 0/ 614 (0%) [1880] Sent 614 datagrams D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTO MATION_APP_\AUTOMATION_UTILITY so with this as server is cont. listening and never terminates so can't take output of server side to a txt file as it is going to the next command itself to create a txt file so i adopted the alarm() function to terminate the server side (iperf.exe -u -s -p 5001) commands after it received all data from the client. could anybody suggest me the way.. Here is my code: #! /usr/bin/perl -w my $command = "iperf.exe -u -s -p 5001"; my @output; eval { local $SIG{ALRM} = sub { die "Timeout\n" }; alarm 20; #@output = `$command`; #my @output = readpipe("iperf.exe -u -s -p 5001"); #my @output = exec("iperf.exe -u -s -p 5001"); my @output = system("iperf.exe -u -s -p 5001"); alarm 0; }; if ($@) { warn "$command timed out.\n"; } else { print "$command successful. Output was:\n", @output; } open FILE, ">display.txt" or die $!; print FILE @output_1; close FILE; i know that with system command i cannot capture the o/p to a txt file but i tried with readpipe() and exec() calls also but in vain... could some one please take a look and let me know why the iperf.exe -u -s -p 5001 is not terminating even after the alarm call and to take the out put to a txt file
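
    A sketch of one way to keep whatever the server printed before the alarm fires: read the command through a piped open, collect lines one at a time, and kill the child yourself in the timeout path. system() returns only an exit status, so there is nothing to capture from it; also note that iperf may buffer its output when writing to a pipe, and signal/pipe semantics differ on Windows perls, so treat this as Unix-flavoured:

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $command = 'iperf.exe -u -s -p 5001';
        my @output;

        my $pid = open(my $fh, '-|', $command) or die "cannot start $command: $!";
        eval {
            local $SIG{ALRM} = sub { die "Timeout\n" };
            alarm 20;
            while (my $line = <$fh>) {      # line by line, so earlier lines survive the timeout
                push @output, $line;
            }
            alarm 0;
        };
        if ($@) {
            kill 'KILL', $pid;              # the server never exits on its own, so stop it here
        }
        close $fh;

        open my $out, '>', 'display.txt' or die $!;
        print {$out} @output;
        close $out;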


  • Problem with socket communication between C# and Flex

    - by Chris Lee
    Hi all, I am implementing a simulated b/s stock data system. I am using flex and c# for client and server sides. I found flash has a security policy and I handled the policy-file-request in my server code. But seems it doesn't work, because the code jumped out at "socket.Receive(b)" after connection. I've tried sending message on client in the connection handler, in that case the server can receive correct message. But the auto-generated "policy-file-request" can never be received, and the client can get no data sending from server. Here I put my code snippet. my ActionScript code: public class StockClient extends Sprite { private var hostName:String = "192.168.84.103"; private var port:uint = 55555; private var socket:XMLSocket; public function StockClient() { socket = new XMLSocket(); configureListeners(socket); socket.connect(hostName, port); } public function send(data:Object) : void{ socket.send(data); } private function configureListeners(dispatcher:IEventDispatcher):void { dispatcher.addEventListener(Event.CLOSE, closeHandler); dispatcher.addEventListener(Event.CONNECT, connectHandler); dispatcher.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler); dispatcher.addEventListener(ProgressEvent.PROGRESS, progressHandler); dispatcher.addEventListener(SecurityErrorEvent.SECURITY_ERROR, securityErrorHandler); dispatcher.addEventListener(ProgressEvent.SOCKET_DATA, dataHandler); } private function closeHandler(event:Event):void { trace("closeHandler: " + event); } private function connectHandler(event:Event):void { trace("connectHandler: " + event); //following testing message can be received, but client can't invoke data handler //send("<policy-file-request/>"); } private function dataHandler(event:ProgressEvent):void { //never fired trace("dataHandler: " + event); } private function ioErrorHandler(event:IOErrorEvent):void { trace("ioErrorHandler: " + event); } private function progressHandler(event:ProgressEvent):void { trace("progressHandler loaded:" + event.bytesLoaded + " total: " + event.bytesTotal); } private function securityErrorHandler(event:SecurityErrorEvent):void { trace("securityErrorHandler: " + event); } } my C# code: const int PORT_NUMBER = 55555; const String BEGIN_REQUEST = "begin"; const String END_REQUEST = "end"; const String POLICY_REQUEST = "<policy-file-request/>\u0000"; const String POLICY_FILE = "<?xml version=\"1.0\"?>\n" + "<!DOCTYPE cross-domain-policy SYSTEM \"http://www.adobe.com/xml/dtds/cross-domain-policy.dtd\">\n" + "<cross-domain-policy> \n" + " <allow-access-from domain=\"*\" to-ports=\"55555\"/> \n" + "</cross-domain-policy>\u0000"; ................ 
private void startListening() { provider = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); provider.Bind(new IPEndPoint(IPAddress.Parse("192.168.84.103"), PORT_NUMBER)); provider.Listen(10); isListened = true; while (isListened) { Socket socket = provider.Accept(); Console.WriteLine("connect!"); byte[] b = new byte[1024]; int receiveLength = 0; try { // code jump out at this statement receiveLength = socket.Receive(b); } catch (Exception e) { Debug.WriteLine(e.ToString()); } String request = System.Text.Encoding.UTF8.GetString(b, 0, receiveLength); Console.WriteLine("request:"+request); if (request == POLICY_REQUEST) { socket.Send(Encoding.UTF8.GetBytes(POLICY_FILE)); Console.WriteLine("response:" + POLICY_FILE); } else if (request == END_REQUEST) { Dispose(socket); } else { StartSocket(socket); break; } } } Sorry for the long code, please someone help with it, thanks a million
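
    One common arrangement is to serve the policy from a small dedicated listener on port 843, which is where Flash Player looks for the master policy before (or instead of) asking the data port; Security.loadPolicyFile("xmlsocket://host:55555") on the Flex side is the alternative if everything must stay on 55555. A C# sketch of such a listener, which simply answers every connection with the policy and a trailing null byte:

        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        class PolicyServer
        {
            const string Policy =
                "<?xml version=\"1.0\"?>" +
                "<cross-domain-policy>" +
                "<allow-access-from domain=\"*\" to-ports=\"55555\"/>" +
                "</cross-domain-policy>\0";

            static void Main()
            {
                var listener = new TcpListener(IPAddress.Any, 843);
                listener.Start();
                while (true)
                {
                    using (var client = listener.AcceptTcpClient())
                    {
                        var stream = client.GetStream();
                        var buffer = new byte[1024];
                        stream.Read(buffer, 0, buffer.Length);   // "<policy-file-request/>\0"
                        var reply = Encoding.UTF8.GetBytes(Policy);
                        stream.Write(reply, 0, reply.Length);
                    }
                }
            }
        }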


  • Cocoa Basic HTTP Authentication: Advice Needed

    - by Kristiaan
    Hello all, im looking to read the contents of a webpage that is secured with a user name and password. this is a mac OS X application NOT an iphone app so most of the things i have read on here or been suggested to read do not seem to work. Also i am a total beginner with Xcode and Obj C i was told to have a look at a website that provided sample code to http auth however so far i have had little luck in getting this working. below is the main code for the button press in my application, there is also another unit called Base64 below that has some code in i had to change to even get it compiling (no idea if what i changed is correct mind you). NSURL *url = [NSURL URLWithString:@"my URL"]; NSString *userName = @"UN"; NSString *password = @"PW"; NSError *myError = nil; // create a plaintext string in the format username:password NSMutableString *loginString = (NSMutableString*)[@"" stringByAppendingFormat:@"%@:%@", userName, password]; // employ the Base64 encoding above to encode the authentication tokens char *encodedLoginData = [base64 encode:[loginString dataUsingEncoding:NSUTF8StringEncoding]]; // create the contents of the header NSString *authHeader = [@"Basic " stringByAppendingFormat:@"%@", [NSString stringWithCString:encodedLoginData length:strlen(encodedLoginData)]]; //NSString *authHeader = [@"Basic " stringByAppendingFormat:@"%@", loginString];//[NSString stringWithString:loginString length:strlen(loginString)]]; NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL: url cachePolicy: NSURLRequestReloadIgnoringCacheData timeoutInterval: 3]; // add the header to the request. Here's the $$$!!! [request addValue:authHeader forHTTPHeaderField:@"Authorization"]; // perform the reqeust NSURLResponse *response; NSData *data = [NSURLConnection sendSynchronousRequest: request returningResponse: &response error: &myError]; //*error = myError; // POW, here's the content of the webserver's response. NSString *result = [NSString stringWithCString:[data bytes] length:[data length]]; [myTextView setString:result]; code from the BASE64 file #import "base64.h" static char *alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+-"; @implementation Base64 +(char *)encode:(NSData *)plainText { // create an adequately sized buffer for the output. every 3 bytes // become four basically with padding to the next largest integer // divisible by four. char * encodedText = malloc((((([plainText length] % 3) + [plainText length]) / 3) * 4) + 1); char* inputBuffer = malloc([plainText length]); inputBuffer = (char *)[plainText bytes]; int i; int j = 0; // encode, this expands every 3 bytes to 4 for(i = 0; i < [plainText length]; i += 3) { encodedText[j++] = alphabet[(inputBuffer[i] & 0xFC) >> 2]; encodedText[j++] = alphabet[((inputBuffer[i] & 0x03) << 4) | ((inputBuffer[i + 1] & 0xF0) >> 4)]; if(i + 1 >= [plainText length]) // padding encodedText[j++] = '='; else encodedText[j++] = alphabet[((inputBuffer[i + 1] & 0x0F) << 2) | ((inputBuffer[i + 2] & 0xC0) >> 6)]; if(i + 2 >= [plainText length]) // padding encodedText[j++] = '='; else encodedText[j++] = alphabet[inputBuffer[i + 2] & 0x3F]; } // terminate the string encodedText[j] = 0; return encodedText;//outputBuffer; } @end when executing the code it stops on the following line with a EXC_BAD_ACCESS ?!?!? 
NSString *authHeader = [@"Basic " stringByAppendingFormat:@"%@", [NSString stringWithCString:encodedLoginData length:strlen(encodedLoginData)]]; Any help would be appreciated, as I am a little clueless on this problem; not being very literate with Cocoa, Objective-C, or Xcode is only adding fuel to the fire for me.
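
    The bad access most likely originates in the hand-rolled Base64 path (the C string coming back from encode: and the stringWithCString:length: bridging), so one way to sidestep it entirely is to let Foundation do the encoding. A sketch using the userName, password and request variables from the code above, assuming OS X 10.9 or later where NSData exposes base64EncodedStringWithOptions:; on older systems a vetted third-party Base64 category fills the same role:

        NSString *loginString = [NSString stringWithFormat:@"%@:%@", userName, password];
        NSData *loginData = [loginString dataUsingEncoding:NSUTF8StringEncoding];
        NSString *authHeader = [@"Basic " stringByAppendingString:
                                   [loginData base64EncodedStringWithOptions:0]];
        [request addValue:authHeader forHTTPHeaderField:@"Authorization"];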


  • Optimal storage of data structure for fast lookup and persistence

    - by Mikael Svenson
    Scenario I have the following methods: public void AddItemSecurity(int itemId, int[] userIds) public int[] GetValidItemIds(int userId) Initially I'm thinking storage on the form: itemId -> userId, userId, userId and userId -> itemId, itemId, itemId AddItemSecurity is based on how I get data from a third party API, GetValidItemIds is how I want to use it at runtime. There are potentially 2000 users and 10 million items. Item id's are on the form: 2007123456, 2010001234 (10 digits where first four represent the year). AddItemSecurity does not have to perform super fast, but GetValidIds needs to be subsecond. Also, if there is an update on an existing itemId I need to remove that itemId for users no longer in the list. I'm trying to think about how I should store this in an optimal fashion. Preferably on disk (with caching), but I want the code maintainable and clean. If the item id's had started at 0, I thought about creating a byte array the length of MaxItemId / 8 for each user, and set a true/false bit if the item was present or not. That would limit the array length to little over 1mb per user and give fast lookups as well as an easy way to update the list per user. By persisting this as Memory Mapped Files with the .Net 4 framework I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the id, stripping out the year, and store an array per year could be a solution. The ItemId - UserId[] list can be serialized directly to disk and read/write with a normal FileStream in order to persist the list and diff it when there are changes. Each time a new user is added all the lists have to updated as well, but this can be done nightly. Question Should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL server will not perform fast enough, and it would give an overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thought or insights on the matter is appreciated. And I want to try to solve it without adding too much hardware :) [Update 2010-03-31] I have now tested with SQL server 2008 under the following conditions. Table with two columns (userid,itemid) both are Int Clustered index on the two columns Added ~800.000 items for 180 users - Total of 144 million rows Allocated 4gb ram for SQL server Dual Core 2.66ghz laptop SSD disk Use a SqlDataReader to read all itemid's into a List Loop over all users If I run one thread it averages on 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still ok. From there on the results are decreasing. Adding a third thread brings alot of the queries up to 2 seonds. A forth thread, up to 4 seconds, a fifth spikes some of the queries up to 50 seconds. The CPU is roofing while this is going on, even on one thread. My test app takes some due to the speedy loop, and sql the rest. Which leads me to the conclusion that it won't scale very well. At least not on my tested hardware. Are there ways to optimize the database, say storing an array of int's per user instead of one record per item. But this makes it harder to remove items.
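
    A sketch of the in-memory shape of the bitmap idea described above, assuming item ids have first been remapped to a dense 0..maxItemId range (the year-stripping mentioned in the question would be one way to get there); persistence via memory-mapped files would sit underneath this:

        using System.Collections;
        using System.Collections.Generic;

        class ItemSecurityIndex
        {
            private readonly Dictionary<int, BitArray> _byUser = new Dictionary<int, BitArray>();
            private readonly int _maxItemId;

            public ItemSecurityIndex(int maxItemId) { _maxItemId = maxItemId; }

            public void AddItemSecurity(int itemId, int[] userIds)
            {
                foreach (int userId in userIds)
                {
                    BitArray bits;
                    if (!_byUser.TryGetValue(userId, out bits))
                        _byUser[userId] = bits = new BitArray(_maxItemId + 1);
                    bits[itemId] = true;   // one bit per item: visible or not
                }
            }

            public int[] GetValidItemIds(int userId)
            {
                BitArray bits;
                if (!_byUser.TryGetValue(userId, out bits)) return new int[0];
                var result = new List<int>();
                for (int i = 0; i <= _maxItemId; i++)
                    if (bits[i]) result.Add(i);
                return result.ToArray();
            }
        }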


  • php crashes with no core file and this message: apc_mmap failed

    - by greg0ire
    Description of the problem Regularly, cron php processes crash on our production server, which result in mails with the following body : PHP Fatal error: PHP Startup: apc_mmap: mmap failed: in Unknown on line 0 Segmentation fault (core dumped) I think the Segmentation fault (core dumped) should result in core files being handled by apport and then written in /var/crashes, but the files I can see there are there since yesterday, although the last crash occured today : -rw-r----- 1 root whoopsie 1138528 mai 22 04:09 _usr_bin_php5.0.crash -rw-r----- 1 frontoffice whoopsie 1166373 mai 20 18:00 _usr_bin_php5.1005.crash -rw-r----- 1 frontoffice whoopsie 81622658 mai 22 00:05 _usr_sbin_php5-fpm.1005.crash I tried to download the last one anyway, and ran gdb /usr/sbin/php5-fpm /tmp/_usr_sbin_php5-fpm.1005.crash, only to be told that the file is not a core file (its format was not recognized). Here is the server's apc configuration : cat /etc/php5/cli/conf.d/20-apc.ini extension=apc.so apc.shm_size=512M apc.ttl=3600 apc.user_ttl=3600 apc.enable_cli=1 I'm mostly worried about the apc.shm_size… isn't it too high or too low ? I understand it has to do with the size of memory segments. Question(s) What could be the problem ? How can I troubleshoot it (how can I get a valid core file ?) ? System information free total used free shared buffers cached Mem: 5081296 4354684 726612 0 374744 959968 -/+ buffers/cache: 3019972 2061324 Swap: 522236 516888 5348 cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS" php -v PHP 5.4.17-1~precise+1 (cli) (built: Jul 17 2013 18:14:06) Copyright (c) 1997-2013 The PHP Group Zend Engine v2.4.0, Copyright (c) 1998-2013 Zend Technologies php -i excerpt : Configuration apc APC Support => enabled Version => 3.1.13 APC Debugging => Disabled MMAP Support => Enabled MMAP File Mask => Locking type => pthread mutex Locks Serialization Support => php Revision => $Revision: 327136 $ Build Date => Nov 20 2012 18:41:36 Directive => Local Value => Master Value apc.cache_by_default => On => On apc.canonicalize => On => On apc.coredump_unmap => Off => Off apc.enable_cli => On => On apc.enabled => On => On apc.file_md5 => Off => Off apc.file_update_protection => 2 => 2 apc.filters => no value => no value apc.gc_ttl => 3600 => 3600 apc.include_once_override => Off => Off apc.lazy_classes => Off => Off apc.lazy_functions => Off => Off apc.max_file_size => 1M => 1M apc.mmap_file_mask => no value => no value apc.num_files_hint => 1000 => 1000 apc.preload_path => no value => no value apc.report_autofilter => Off => Off apc.rfc1867 => Off => Off apc.rfc1867_freq => 0 => 0 apc.rfc1867_name => APC_UPLOAD_PROGRESS => APC_UPLOAD_PROGRESS apc.rfc1867_prefix => upload_ => upload_ apc.rfc1867_ttl => 3600 => 3600 apc.serializer => default => default apc.shm_segments => 1 => 1 apc.shm_size => 512M => 512M apc.shm_strings_buffer => 4M => 4M apc.slam_defense => On => On apc.stat => On => On apc.stat_ctime => Off => Off apc.ttl => 3600 => 3600 apc.use_request_time => On => On apc.user_entries_hint => 4096 => 4096 apc.user_ttl => 3600 => 3600 apc.write_lock => On => On php -m [PHP Modules] apc bcmath bz2 calendar Core ctype curl date dba dom ereg exif fileinfo filter ftp gd gettext hash iconv imagick intl json ldap libxml mbstring memcache memcached mhash mysql mysqli openssl pcntl pcre PDO pdo_mysql pdo_pgsql pdo_sqlite pgsql Phar posix Reflection session shmop SimpleXML soap sockets SPL sqlite3 standard sysvmsg sysvsem sysvshm 
tidy tokenizer wddx xml xmlreader xmlwriter zip zlib [Zend Modules] ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 39531 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 39531 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
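
    To get a usable core file from the cron runs, core dumps have to be enabled for that user and written somewhere predictable; the ulimit output above shows "core file size 0", and with a plain (non-pipe) core_pattern that means the kernel never writes one. A sketch, run as (or in the crontab environment of) the affected user:

        # allow cores for this shell / cron job
        ulimit -c unlimited

        # write cores to a fixed location instead of handing them to apport
        echo '/var/crashes/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern

    Separately, with apc.enable_cli=1 every CLI invocation maps its own apc.shm_size segment, and with 512M configured on a box showing roughly 700 MB free and swap nearly full, the mmap itself failing is plausible; lowering apc.shm_size in the CLI ini (or disabling apc.enable_cli) is worth testing.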


  • How should I clean up hung grandchild processes when an alarm trips in Perl?

    - by brian d foy
    I have a parallelized automation script which needs to call many other scripts, some of which hang because they (incorrectly) wait for standard input. That's not a big deal because I catch those with alarm. The trick is to shut down those hung grandchild processes when the child shuts down. I thought various incantations of SIGCHLD, waiting, and process groups could do the trick, but they all block and the grandchildren aren't reaped. My solution, which works, just doesn't seem like it is the right solution. I'm not especially interested in the Windows solution just yet, but I'll eventually need that too. Mine only works for Unix, which is fine for now. I wrote a small script that takes the number of simultaneous parallel children to run and the total number of forks: $ fork_bomb <parallel jobs> <number of forks> $ fork_bomb 8 500 This will probably hit the per-user process limit within a couple of minutes. Many solutions I've found just tell you to increase the per-user process limit, but I need this to run about 300,000 times, so that isn't going to work. Similarly, suggestions to re-exec and so on to clear the process table aren't what I need. I'd like to actually fix the problem instead of slapping duct tape over it. I crawl the process table looking for the child processes and shut down the hung processes individually in the SIGALRM handler, which needs to die because the rest of real code has no hope of success after that. The kludgey crawl through the process table doesn't bother me from a performance perspective, but I wouldn't mind not doing it: use Parallel::ForkManager; use Proc::ProcessTable; my $pm = Parallel::ForkManager->new( $ARGV[0] ); my $alarm_sub = sub { kill 9, map { $_->{pid} } grep { $_->{ppid} == $$ } @{ Proc::ProcessTable->new->table }; die "Alarm rang for $$!\n"; }; foreach ( 0 .. $ARGV[1] ) { print "."; print "\n" unless $count++ % 50; my $pid = $pm->start and next; local $SIG{ALRM} = $alarm_sub; eval { alarm( 2 ); system "$^X -le '<STDIN>'"; # this will hang alarm( 0 ); }; $pm->finish; } If you want to run out of processes, take out the kill. I thought that setting a process group would work so I could kill everything together, but that blocks: my $alarm_sub = sub { kill 9, -$$; # blocks here die "Alarm rang for $$!\n"; }; foreach ( 0 .. $ARGV[1] ) { print "."; print "\n" unless $count++ % 50; my $pid = $pm->start and next; setpgrp(0, 0); local $SIG{ALRM} = $alarm_sub; eval { alarm( 2 ); system "$^X -le '<STDIN>'"; # this will hang alarm( 0 ); }; $pm->finish; } The same thing with POSIX's setsid didn't work either, and I think that actually broke things in a different way since I'm not really daemonizing this. Curiously, Parallel::ForkManager's run_on_finish happens too late for the same clean-up code: the grandchildren are apparently already disassociated from the child processes at that point.
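
    A sketch of one alternative to crawling the process table, for the case where the hung command is launched directly by your code (as in the fork_bomb test): start it with fork/exec instead of system so its pid is known, then the timeout path can kill exactly that process and reap it. The Parallel::ForkManager wrapper is dropped for brevity; this slots inside the $pm->start/$pm->finish block. It will not reach children that the called scripts spawn on their own, which is the harder half of the original problem:

        my $kid = fork();
        die "fork failed: $!" unless defined $kid;
        if ($kid == 0) {
            exec $^X, '-le', '<STDIN>';   # the command that would otherwise hang
            exit 1;
        }
        eval {
            local $SIG{ALRM} = sub { die "Timeout\n" };
            alarm 2;
            waitpid $kid, 0;
            alarm 0;
        };
        if ($@) {
            kill 'KILL', $kid;            # we know exactly which pid to stop
            waitpid $kid, 0;              # and reap it so it doesn't linger
        }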


  • Help me alter this query to get the desired results - New*

    - by sandeepan
    Please dump these data first CREATE TABLE IF NOT EXISTS `all_tag_relations` ( `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT, `id_tag` int(10) unsigned NOT NULL DEFAULT '0', `id_tutor` int(10) DEFAULT NULL, `id_wc` int(10) unsigned DEFAULT NULL, PRIMARY KEY (`id_tag_rel`), KEY `All_Tag_Relations_FKIndex1` (`id_tag`), KEY `id_wc` (`id_wc`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=19 ; INSERT INTO `all_tag_relations` (`id_tag_rel`, `id_tag`, `id_tutor`, `id_wc`) VALUES (1, 1, 1, NULL), (2, 2, 1, NULL), (3, 6, 2, NULL), (4, 7, 2, NULL), (8, 3, 1, 1), (9, 4, 1, 1), (10, 5, 2, 2), (11, 4, 2, 2), (15, 8, 1, 3), (16, 9, 1, 3), (17, 10, 1, 4), (18, 4, 1, 4), (19, 1, 2, 5), (20, 4, 2, 5); CREATE TABLE IF NOT EXISTS `tags` ( `id_tag` int(10) unsigned NOT NULL AUTO_INCREMENT, `tag` varchar(255) DEFAULT NULL, PRIMARY KEY (`id_tag`), UNIQUE KEY `tag` (`tag`), KEY `id_tag` (`id_tag`), KEY `tag_2` (`tag`), KEY `tag_3` (`tag`), KEY `tag_4` (`tag`), FULLTEXT KEY `tag_5` (`tag`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=11 ; INSERT INTO `tags` (`id_tag`, `tag`) VALUES (1, 'Sandeepan'), (2, 'Nath'), (3, 'first'), (4, 'class'), (5, 'new'), (6, 'Bob'), (7, 'Cratchit'), (8, 'more'), (9, 'fresh'), (10, 'second'); CREATE TABLE IF NOT EXISTS `webclasses` ( `id_wc` int(10) NOT NULL AUTO_INCREMENT, `id_author` int(10) NOT NULL, `name` varchar(50) DEFAULT NULL, PRIMARY KEY (`id_wc`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ; INSERT INTO `webclasses` (`id_wc`, `id_author`, `name`) VALUES (1, 1, 'first class'), (2, 2, 'new class'), (3, 1, 'more fresh'), (4, 1, 'second class'), (5, 2, 'sandeepan class'); About the system - The system consists of tutors and classes. - The data in the table All_Tag_Relations stores tag relations for each tutor registered and each class created by a tutor. The tag relations are used for searching classes. The current data dump corresponds to tutor "Sandeepan Nath" who has created classes named "first class", "more fresh", "second class" and tutor "Bob Cratchit" who has created classes "new class" and "Sandeepan class". I am trying for a search query performs AND logic on the search keywords and returns wvery such class for which the search terms are present in the class name or its tutor name To make it easy, following is the list of search terms and desired results:- Search term result classes (check the id_wc in the results) first class 1 Sandeepan Nath class 1 Sandeepan Nath 1,3 Bob Cratchit 2 Sandeepan Nath bob none Sandeepan Class 1,4,5 I have so far reached upto this query -- Two keywords search SET @tag1 = 4, @tag2 = 1; -- Setting some user variables to see where the ids go. SELECT wc.id_wc, sum( DISTINCT ( wtagrels.id_tag = @tag1 ) ) AS key_1_class_matches, sum( DISTINCT ( wtagrels.id_tag = @tag2 ) ) AS key_2_class_matches, sum( DISTINCT ( ttagrels.id_tag = @tag1 ) ) AS key_1_tutor_matches, sum( DISTINCT ( ttagrels.id_tag = @tag2 ) ) AS key_2_tutor_matches, sum( DISTINCT ( ttagrels.id_tag = wtagrels.id_tag ) ) AS key_class_tutor_matches FROM WebClasses as wc join all_tag_relations AS wtagrels on wc.id_wc = wtagrels.id_wc join all_tag_relations as ttagrels on (wc.id_author = ttagrels.id_tutor) WHERE ( wtagrels.id_tag = @tag1 OR wtagrels.id_tag = @tag2 OR ttagrels.id_tag = @tag1 OR ttagrels.id_tag = @tag2 ) GROUP BY wtagrels.id_wc LIMIT 0 , 20 For search with 1 or 3 terms, remove/add the variable part in this query. 
Tabulating my observation of the values of key_1_class_matches, key_2_class_matches,key_1_tutor_matches (say, class keys),key_2_tutor_matches for various cases (say, tutor keys). Search term expected result Observation first class 1 for class 1, all class keys+all tutor keys =1 Sandeepan Nath class 1 for class 1, one class key+ all tutor keys = 1 Sandeepan Nath 1,3 both tutor keys =1 for these classes Bob Cratchit 2 both tutor keys = 1 Sandeepan Nath bob none no complete tutor matches for any class I found a pattern that, for any case, the class(es) which should appear in the result have the highest number of matches (all class keys and tutor keys). E.g. searching "first class", only for class =1, total of key matches = 4(1+1+1+1) searching "Sandeepan Nath", for classes 1, 3,4(all classes by Sandeepan Nath) have all the tutor keys matching. But no pattern in the search for "Sandeepan Class" - classes 1,4,5 should match. Now, how do I put a condition into the query, based on that pattern so that only those classes are returned. Do I need to use full text search here because it gives a scoring/rank value indicating the strength of the match? Any sample query would help. Please note - I have already found solution for showing classes when any/all of the search terms match with the class name. http://stackoverflow.com/questions/3030022/mysql-help-me-alter-this-search-query-to-get-desired-results But if all the search terms are in tutor name, it does not work. So, I am modifying the query and experimenting.
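
    One way to express "every search term must match either the class name or its tutor's name" without inspecting the individual key columns is to count, per class, how many distinct search tags were matched anywhere and require that count to equal the number of terms. A sketch against the tables above for the two-keyword case (@tag1 = 4 "class", @tag2 = 1 "Sandeepan"); it assumes, as in the dump, that tutor-name tags are the rows where id_wc is NULL, and with three terms the IN list grows and the HAVING constant becomes 3:

        SET @tag1 = 4, @tag2 = 1;

        SELECT wc.id_wc
        FROM webclasses wc
        JOIN all_tag_relations rel
          ON rel.id_wc = wc.id_wc                                   -- tag attached to the class itself
          OR (rel.id_tutor = wc.id_author AND rel.id_wc IS NULL)    -- or to the class's tutor name
        WHERE rel.id_tag IN (@tag1, @tag2)
        GROUP BY wc.id_wc
        HAVING COUNT(DISTINCT rel.id_tag) = 2;                      -- every term matched somewhere

    On the dump above this returns classes 1, 4 and 5 for "Sandeepan class"; note that it treats a term found only in the tutor name as satisfied, so "Sandeepan Nath" returns every class by that tutor.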


  • image processing algorithm in MATLAB

    - by user261002
    I am trying to reconstruct an algorithm belong to this paper: Decomposition of biospeckle images in temporary spectral bands Here is an explanation of the algorithm: We recorded a sequence of N successive speckle images with a sampling frequency fs. In this way it was possible to observe how a pixel evolves through the N images. That evolution can be treated as a time series and can be processed in the following way: Each signal corresponding to the evolution of every pixel was used as input to a bank of filters. The intensity values were previously divided by their temporal mean value to minimize local differences in reflectivity or illumination of the object. The maximum frequency that can be adequately analyzed is determined by the sampling theorem and s half of sampling frequency fs. The latter is set by the CCD camera, the size of the image, and the frame grabber. The bank of filters is outlined in Fig. 1. In our case, ten 5° order Butterworth11 filters were used, but this number can be varied according to the required discrimination. The bank was implemented in a computer using MATLAB software. We chose the Butter-worth filter because, in addition to its simplicity, it is maximally flat. Other filters, an infinite impulse response, or a finite impulse response could be used. By means of this bank of filters, ten corresponding signals of each filter of each temporary pixel evolution were obtained as output. Average energy Eb in each signal was then calculated: where pb(n) is the intensity of the filtered pixel in the nth image for filter b divided by its mean value and N is the total number of images. In this way, en values of energy for each pixel were obtained, each of hem belonging to one of the frequency bands in Fig. 1. With these values it is possible to build ten images of the active object, each one of which shows how much energy of time-varying speckle there is in a certain frequency band. False color assignment to the gray levels in the results would help in discrimination. and here is my MATLAB code base on that : clear all for i=0:39 str = num2str(i); str1 = strcat(str,'.mat'); load(str1); D{i+1}=A; end new_max = max(max(A)); new_min = min(min(A)); for i=20:180 for j=20:140 ts = []; for k=1:40 ts = [ts D{k}(i,j)]; %%% kth image pixel i,j --- ts is time series end ts = double(ts); temp = mean(ts); ts = ts-temp; ts = ts/temp; N = 5; % filter order W = [0.00001 0.05;0.05 0.1;0.1 0.15;0.15 0.20;0.20 0.25;0.25 0.30;0.30 0.35;0.35 0.40;0.40 0.45;0.45 0.50]; N1 = 5; for ind = 1:10 Wn = W(ind,:); [B,A] = butter(N1,Wn); ts_f(ind,:) = filter(B,A,ts); end for ind=1:10 imag_test1{ind}(i,j) =sum((ts_f(ind,:)./mean(ts_f(ind,:))).^2); end end end for i=1:10 temp_imag = imag_test1{i}(:,:); x=isnan(temp_imag); temp_imag(x)=0; temp_imag=medfilt2(temp_imag); t_max = max(max(temp_imag)); t_min = min(min(temp_imag)); temp_imag = (temp_imag-t_min).*(double(new_max-new_min)/double(t_max-t_min))+double(new_min); imag_test2{i}(:,:) = temp_imag; end for i=1:10 A=imag_test2{i}(:,:); B=A/max(max(A)); B=histeq(B); figure,imshow(B) colorbar end but I am not getting the same result as paper. has anybody has aby idea why? or where I have gone wrong? Refrence Link to the paper
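
    One detail worth checking against the paper: the time series is already divided by its temporal mean before the filter bank, and after band-pass filtering each row of ts_f has a mean close to zero, so dividing by mean(ts_f(ind,:)) again in the energy step can produce huge, unstable values. Under one reading of the paper's description (average energy Eb = (1/N) * sum over n of pb(n)^2, with N the number of images), the energy step would look like this sketch:

        Nframes = numel(ts);                 % number of images in this pixel's series (40 here)
        for ind = 1:10
            imag_test1{ind}(i,j) = sum(ts_f(ind,:).^2) / Nframes;
        end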


  • Ajax - How to refresh a <DIV> after submit

    - by user107712
    Hi, How refresh part of page ("DIV") after my application release a submit? I'm use JQuery with plugin ajaxForm. I set my target with "divResult", but the page repeat your content inside the "divResult". Sources: $(document).ready(function() { $("#formSearch").submit(function() { var options = { target:"#divResult", url: "http://localhost:8081/sniper/estabelecimento/pesquisar.action" } $(this).ajaxSubmit(options); return false; }); }) Page ... ... <div id="divResult" class="quadro_conteudo" > <table id="tableResult" class="tablesorter"> <thead> <tr> <th style="text-align:center;"> <input id="checkTodos" type="checkbox" title="Marca/Desmarcar todos" /> </th> <th scope="col">Name</th> <th scope="col">Phone</th> </tr> </thead> <tbody> <s:iterator value="entityList"> <s:url id="urlEditar" action="editar"><s:param name="id" value="%{id}"/></s:url> <tr> <td style="text-align:center;"><s:checkbox id="checkSelecionado" name="selecionados" theme="simple" fieldValue="%{id}"></s:checkbox></td> <td> <s:a href="%{urlEditar}"><s:property value="name"/></s:a></td> <td> <s:a href="%{urlEditar}"><s:property value="phone"/></s:a></td> </tr> </s:iterator> </tbody> </table> <div id="pager" class="pager"> <form> <img src="<%=request.getContextPath()%>/plugins/jquery/tablesorter/addons/pager/icons/first.png" class="first"/> <img src="<%=request.getContextPath()%>/plugins/jquery/tablesorter/addons/pager/icons/prev.png" class="prev"/> <input type="text" class="pagedisplay"/> <img src="<%=request.getContextPath()%>/plugins/jquery/tablesorter/addons/pager/icons/next.png" class="next"/> <img src="<%=request.getContextPath()%>/plugins/jquery/tablesorter/addons/pager/icons/last.png" class="last"/> <select class="pagesize"> <option selected="selected" value="10">10</option> <option value="20">20</option> <option value="30">30</option> <option value="40">40</option> <option value="<s:property value="totalRegistros"/>">todos</option> </select> <s:label>Total de registros: <s:property value="totalRegistros"/></s:label> </form> </div> <br/> </div> Thanks!!!
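
    If the Struts action can only render the full page, one workaround on the client is to parse the response and copy just its #divResult fragment into the existing div, instead of handing the whole document to the target. A sketch using plain $.post rather than the ajaxForm plugin, to keep it self-contained:

        $(document).ready(function() {
          $("#formSearch").submit(function() {
            $.post("http://localhost:8081/sniper/estabelecimento/pesquisar.action",
                   $(this).serialize(),
                   function(response) {
                     // keep only the #divResult part of the returned page
                     var fragment = $("<div/>").append(response).find("#divResult").html();
                     $("#divResult").html(fragment);
                   });
            return false;
          });
        });

    The cleaner fix is usually on the server: have pesquisar.action return only the fragment (a result that renders just the table), and then the existing ajaxForm target works as-is.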


  • Creating Persistent Drive Labels With UDEV Using /dev/disk/by-path

    - by Matt
    I have a new BackBlaze Pod (BackBlaze Pod 2.0). It has 45 3TB drives and they when I first set it up they were labeled /dev/sda through /dev/sdz and /dev/sdaa through /dev/sdas. I used mdadm to setup three really big 15 drive RAID6 arrays. However, since first setup a few weeks ago I had a couple of the hard drives fail on me. I've replaced them but now the arrays are complaining because they can't find the missing drives. When I list the the disks... ls -l /dev/sd* I see that /dev/sda /dev/sdf /dev/sdk /dev/sdp no longer appear and now there are 4 new ones... /dev/sdau /dev/sdav /dev/sdaw /dev/sdax I also just found that I can do this... ls -l /dev/disk/by-path/ total 0 lrwxrwxrwx 1 root root 10 Sep 19 18:08 pci-0000:02:04.0-scsi-0:0:0:0 -> ../../sdau lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-0:1:0:0 -> ../../sdb lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-0:2:0:0 -> ../../sdc lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-0:3:0:0 -> ../../sdd lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-0:4:0:0 -> ../../sde lrwxrwxrwx 1 root root 10 Sep 19 18:08 pci-0000:02:04.0-scsi-2:0:0:0 -> ../../sdae lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-2:1:0:0 -> ../../sdg lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-2:2:0:0 -> ../../sdh lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-2:3:0:0 -> ../../sdi lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-2:4:0:0 -> ../../sdj lrwxrwxrwx 1 root root 10 Sep 19 18:08 pci-0000:02:04.0-scsi-3:0:0:0 -> ../../sdav lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-3:1:0:0 -> ../../sdl lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-3:2:0:0 -> ../../sdm lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-3:3:0:0 -> ../../sdn lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-3:4:0:0 -> ../../sdo lrwxrwxrwx 1 root root 10 Sep 19 18:08 pci-0000:04:04.0-scsi-0:0:0:0 -> ../../sdax lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-0:1:0:0 -> ../../sdq lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-0:2:0:0 -> ../../sdr lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-0:3:0:0 -> ../../sds lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-0:4:0:0 -> ../../sdt lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:0:0:0 -> ../../sdu lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:1:0:0 -> ../../sdv lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:2:0:0 -> ../../sdw lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:3:0:0 -> ../../sdx lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:4:0:0 -> ../../sdy lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-3:0:0:0 -> ../../sdz I didn't list them all....you can see the problem above. They're sorted by scsi id here but sda is missing...replaced by sdau...etc... So obviously the arrays are complaining. Is it possible to get Linux to reread the drive labels in the correct order or am I screwed? My initial design with 15 drive arrays is not ideal. With 3TB drives the rebuild times were taking 3 or 4 days....maybe more. I'm scrapping the whole design and I think I am going to go with 6 x 7 RAID5 disk arrays and 3 hot spares to make the arrays a bit easier to manage and shorten the rebuild times. But I'd like to clean up the drive labels so they aren't out of order. I haven't figured out how to do this yet. Does anyone know how to get this straightened out? Thanks, Matt
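
    mdadm itself identifies array members by the UUID in their superblocks, so the shifting sdX names do not endanger the arrays; they just make them painful to administer. For stable names, one option is a udev rule per physical slot keyed on the same by-path identifiers listed above; a sketch with two slots shown (the rest follow the same pattern), written to a file that sorts after the stock persistent-storage rules so ID_PATH is already populated:

        # /etc/udev/rules.d/99-pod-slots.rules
        KERNEL=="sd*[!0-9]", ENV{ID_PATH}=="pci-0000:02:04.0-scsi-0:0:0:0", SYMLINK+="pod/slot00"
        KERNEL=="sd*[!0-9]", ENV{ID_PATH}=="pci-0000:02:04.0-scsi-0:1:0:0", SYMLINK+="pod/slot01"

    After adding the rules, "udevadm trigger" (or a reboot) creates /dev/pod/slot* symlinks that always point at whatever disk sits in that bay, and those names can be used with mdadm when adding replacements.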

    Read the article

  • Help me alter this query to get the desired results

    - by sandeepan
    Please import this data first: CREATE TABLE IF NOT EXISTS `all_tag_relations` ( `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT, `id_tag` int(10) unsigned NOT NULL DEFAULT '0', `id_tutor` int(10) DEFAULT NULL, `id_wc` int(10) unsigned DEFAULT NULL, PRIMARY KEY (`id_tag_rel`), KEY `All_Tag_Relations_FKIndex1` (`id_tag`), KEY `id_wc` (`id_wc`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=19 ; INSERT INTO `all_tag_relations` (`id_tag_rel`, `id_tag`, `id_tutor`, `id_wc`) VALUES (1, 1, 1, NULL), (2, 2, 1, NULL), (3, 6, 2, NULL), (4, 7, 2, NULL), (8, 3, 1, 1), (9, 4, 1, 1), (10, 5, 2, 2), (11, 4, 2, 2), (15, 8, 1, 3), (16, 9, 1, 3), (17, 10, 1, 4), (18, 4, 1, 4), (19, 1, 2, 5), (20, 4, 2, 5); CREATE TABLE IF NOT EXISTS `tags` ( `id_tag` int(10) unsigned NOT NULL AUTO_INCREMENT, `tag` varchar(255) DEFAULT NULL, PRIMARY KEY (`id_tag`), UNIQUE KEY `tag` (`tag`), KEY `id_tag` (`id_tag`), KEY `tag_2` (`tag`), KEY `tag_3` (`tag`), KEY `tag_4` (`tag`), FULLTEXT KEY `tag_5` (`tag`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=11 ; INSERT INTO `tags` (`id_tag`, `tag`) VALUES (1, 'Sandeepan'), (2, 'Nath'), (3, 'first'), (4, 'class'), (5, 'new'), (6, 'Bob'), (7, 'Cratchit'), (8, 'more'), (9, 'fresh'), (10, 'second'); CREATE TABLE IF NOT EXISTS `webclasses` ( `id_wc` int(10) NOT NULL AUTO_INCREMENT, `id_author` int(10) NOT NULL, `name` varchar(50) DEFAULT NULL, PRIMARY KEY (`id_wc`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ; INSERT INTO `webclasses` (`id_wc`, `id_author`, `name`) VALUES (1, 1, 'first class'), (2, 2, 'new class'), (3, 1, 'more fresh'), (4, 1, 'second class'), (5, 2, 'sandeepan class'); About the system - The system consists of tutors and classes. - The data in the table All_Tag_Relations stores tag relations for each tutor registered and each class created by a tutor. The tag relations are used for searching classes. The current data dump corresponds to tutor "Sandeepan Nath", who has created classes named "first class", "more fresh", "second class", and tutor "Bob Cratchit", who has created classes "new class" and "Sandeepan class". I am trying to write a search query that performs AND logic on the search keywords and returns every such class for which the search terms are present in the class name or its tutor name. To make it easy, the following is the list of search terms and desired results: Search term result classes (check the id_wc in the results) first class 1 Sandeepan Nath class 1 Sandeepan Nath 1,3 Bob Cratchit 2 Sandeepan Nath bob none Sandeepan Class 1,4,5 I have so far reached up to this query -- Two keywords search SET @tag1 = 4, @tag2 = 1; -- Setting some user variables to see where the ids go. SELECT wc.id_wc, sum( DISTINCT ( wtagrels.id_tag = @tag1 ) ) AS key_1_class_matches, sum( DISTINCT ( wtagrels.id_tag = @tag2 ) ) AS key_2_class_matches, sum( DISTINCT ( ttagrels.id_tag = @tag1 ) ) AS key_1_tutor_matches, sum( DISTINCT ( ttagrels.id_tag = @tag2 ) ) AS key_2_tutor_matches, sum( DISTINCT ( ttagrels.id_tag = wtagrels.id_tag ) ) AS key_class_tutor_matches FROM WebClasses as wc join all_tag_relations AS wtagrels on wc.id_wc = wtagrels.id_wc join all_tag_relations as ttagrels on (wc.id_author = ttagrels.id_tutor) WHERE ( wtagrels.id_tag = @tag1 OR wtagrels.id_tag = @tag2 OR ttagrels.id_tag = @tag1 OR ttagrels.id_tag = @tag2 ) GROUP BY wtagrels.id_wc LIMIT 0 , 20 For search with 1 or 3 terms, remove/add the variable part in this query.
Tabulating my observation of the values of key_1_class_matches and key_2_class_matches (say, the class keys) and key_1_tutor_matches and key_2_tutor_matches (say, the tutor keys) for various cases: Search term expected result Observation first class 1 for class 1, all class keys + all tutor keys = 1 Sandeepan Nath class 1 for class 1, one class key + all tutor keys = 1 Sandeepan Nath 1,3 both tutor keys = 1 for these classes Bob Cratchit 2 both tutor keys = 1 Sandeepan Nath bob none no complete tutor matches for any class I found a pattern: in every case, the class(es) which should appear in the result have the highest number of matches (all class keys and tutor keys). E.g. searching "first class", only class 1 has a total of key matches = 4 (1+1+1+1); searching "Sandeepan Nath", classes 1, 3, 4 (all classes by Sandeepan Nath) have all the tutor keys matching. But there is no such pattern in the search for "Sandeepan Class" - classes 1,4,5 should match. Now, how do I put a condition into the query, based on that pattern, so that only those classes are returned? Do I need to use full-text search here, because it gives a scoring/rank value indicating the strength of the match? Any sample query would help. Please note - I have already found a solution for showing classes when any/all of the search terms match the class name: http://stackoverflow.com/questions/3030022/mysql-help-me-alter-this-search-query-to-get-desired-results But if all the search terms are in the tutor name, it does not work. So, I am modifying the query and experimenting.
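
    For reference, one common way to get the AND behaviour described above is to count, per class, how many distinct search tags match either the class's own tag rows or its tutor's profile tag rows (the rows with id_wc NULL in the dump), and keep only classes where that count equals the number of search terms. A rough sketch of the idea follows, assuming mysql-connector-python and placeholder connection details; the exact ranking behaviour the question asks about may still need full-text scoring on top of this.

        import mysql.connector  # assumes the mysql-connector-python package

        search_tag_ids = [1, 4]   # e.g. the ids of "Sandeepan" and "class"
        placeholders = ", ".join(["%s"] * len(search_tag_ids))

        # A class matches a tag if the tag sits on the class itself (rel.id_wc = wc.id_wc)
        # or on its tutor's profile (rel.id_tutor = wc.id_author AND rel.id_wc IS NULL).
        query = (
            "SELECT wc.id_wc "
            "FROM webclasses wc "
            "JOIN all_tag_relations rel "
            "  ON (rel.id_wc = wc.id_wc "
            "      OR (rel.id_tutor = wc.id_author AND rel.id_wc IS NULL)) "
            f"WHERE rel.id_tag IN ({placeholders}) "
            "GROUP BY wc.id_wc "
            "HAVING COUNT(DISTINCT rel.id_tag) = %s"   # AND logic: every term matched
        )

        conn = mysql.connector.connect(user="root", password="", database="test")  # hypothetical credentials
        cur = conn.cursor()
        cur.execute(query, search_tag_ids + [len(search_tag_ids)])
        print([row[0] for row in cur.fetchall()])   # classes 1, 4 and 5 for "Sandeepan class" on the dump above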

    Read the article

  • java - BigDecimal

    - by Mk12
    I was trying to make my own class for currencies using longs, but apparently I should use BigDecimal (and then whenever I print it just add the $ sign before it). Could someone please get me started? What would be the best way to use BigDecimals for dollar currencies, like making it show at least, but no more than, 2 decimal places for the cents, etc.? The API for BigDecimal is huge, and I don't know which methods to use. Also, BigDecimal has better precision, but isn't that all lost if it passes through a double? If I do new BigDecimal(24.99), how will it be different from using a double? Or should I use the constructor that uses a String instead? EDIT: I decided to use BigDecimals, and then use: private static final java.text.NumberFormat money = java.text.NumberFormat.getCurrencyInstance(); { money.setRoundingMode(RoundingMode.HALF_EVEN); } and then whenever I display the BigDecimals, to use money.format(theBigDecimal). Is this alright? Should I have the BigDecimal rounding it too? Because then it doesn't get rounded before it does another operation... If so, could you show me how? And how should I create the BigDecimals? new BigDecimal("24.99")? Well, after many comments to Vineet Reynolds (thanks for coming back and answering again and again), this is what I have decided. I use BigDecimals and a NumberFormat. Here is where I create the NumberFormat instance. private static final NumberFormat money; static { money = NumberFormat.getCurrencyInstance(Locale.CANADA); money.setRoundingMode(RoundingMode.HALF_EVEN); } Here is my BigDecimal: private final BigDecimal price; Whenever I want to display the price, or another BigDecimal that I got through calculations of price, I use: money.format(price) to get the String. Whenever I want to store the price, or a calculation from price, in a database or in a field or anywhere, I use (for a field): myCalculatedResult = price.add(new BigDecimal("34.58")).setScale(2, RoundingMode.HALF_EVEN); ...but I'm thinking now maybe I should not have the NumberFormat round, but when I want to display do this: System.out.print(money.format(price.setScale(2, RoundingMode.HALF_EVEN))); That way I ensure the model and things displayed in the view are the same. I don't do: price = price.setScale(2, RoundingMode.HALF_EVEN); Because then it would always round to 2 decimal places and wouldn't be as precise in calculations. So it's all solved now, I guess. But is there any shortcut to typing calculatedResult.setScale(2, RoundingMode.HALF_EVEN) all the time? All I can think of is static importing HALF_EVEN... EDIT: I've changed my mind a bit, I think if I store a value, I won't round it unless I have no more operations to do with it, e.g. if it was the final total that will be charged to someone. I will only round things at the end, or whenever necessary, and I will still use NumberFormat for the currency formatting, but since I always want rounding for display, I made a static method for display: public static String moneyFormat(BigDecimal price) { return money.format(price.setScale(2, RoundingMode.HALF_EVEN)); } So values stored in variables won't be rounded, and I'll use that method to display prices.
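
    The float-versus-string constructor question can be illustrated outside Java as well; Python's decimal module behaves the same way BigDecimal does here, so a short sketch (the values are just examples) shows why new BigDecimal("24.99") is preferable to new BigDecimal(24.99), and why rounding is best deferred to display time.

        from decimal import Decimal, ROUND_HALF_EVEN

        # A float constructor captures the binary approximation of 24.99,
        # just as new BigDecimal(24.99) does in Java...
        print(Decimal(24.99))        # a long tail of digits, not exactly 24.99
        # ...while a string keeps the exact decimal value, like new BigDecimal("24.99").
        price = Decimal("24.99")

        # Keep full precision while calculating; round to cents only for display.
        total = price + Decimal("34.58")
        print(total.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN))   # 59.57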

    Read the article

  • New project created with Flex Mojo's archetype throws Cannot Find Parent Project-Maven Exception

    - by ignorant
    This is probably a silly question but I just cant seem to figure out. I'm completely new to flex and maven. Maven 2.2.1: Maven 2.2.1 unzipped,M2_HOME set and repository altered to point to different drive location in settings.xml Flex 4.0: Installed Created a multi-modular webapp project using flexmojo: mvn archetype:generate -DarchetypeRepository=http://repository.sonatype.org/content/groups/flexgroup -DarchetypeGroupId=org.sonatype.flexmojos -DarchetypeArtifactId=flexmojos-archetypes-modular-webapp -DarchetypeVersion=RELEASE with following options groupId=com.test artifactId=test version=1.0-snapshot package=com.tests * Creates * test |-- pom.xml |--swc -pom.xml |--swf -pom.xml `--war -pom.xml Parent pom has swc, swf, war as modules. Dependency is war-swf-swc. With parent artifactId of swf, swc, war set to swf, swc, test respectively. On executing mvn on test folder(for that matter clean or anything) I get this following error. G:\Projects\testmvn -e + Error stacktraces are turned on. [INFO] Scanning for projects... Downloading: http://repo1.maven.org/maven2/com/test/swc/1.0-snapshot/swc-1.0-snapshot.pom [INFO] Unable to find resource 'com.test:swc:pom:1.0-snapshot' in repository central (http://repo1.maven.org/maven2) [INFO] ------------------------------------------------------------------------ [ERROR] FATAL ERROR [INFO] ------------------------------------------------------------------------ [INFO] Failed to resolve artifact. GroupId: com.test ArtifactId: swc Version: 1.0-snapshot Reason: Unable to download the artifact from any repository com.test:swc:pom:1.0-snapshot from the specified remote repositories: central (http://repo1.maven.org/maven2) [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.reactor.MavenExecutionException: Cannot find parent: com.test:swc for project: com.test:swc-swc:swc:1.0-snapshot for project com.test:swc-swc:swc:1.0-snapshot at org.apache.maven.DefaultMaven.getProjects(DefaultMaven.java:404) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:272) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.project.ProjectBuildingException: Cannot find parent: com.test:swc for project: com.test:swc-swc:swc:1.0-snapshot for project com.test:swc-swc:swc:1.0-snapshot at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1396) at org.apache.maven.project.DefaultMavenProjectBuilder.buildInternal(DefaultMavenProjectBuilder.java:823) at org.apache.maven.project.DefaultMavenProjectBuilder.buildFromSourceFileInternal(DefaultMavenProjectBuilder.java:508) at org.apache.maven.project.DefaultMavenProjectBuilder.build(DefaultMavenProjectBuilder.java:200) at org.apache.maven.DefaultMaven.getProject(DefaultMaven.java:604) at 
org.apache.maven.DefaultMaven.collectProjects(DefaultMaven.java:487) at org.apache.maven.DefaultMaven.collectProjects(DefaultMaven.java:560) at org.apache.maven.DefaultMaven.getProjects(DefaultMaven.java:391) ... 12 more Caused by: org.apache.maven.project.ProjectBuildingException: POM 'com.test:swc' not found in repository: Unable to download the artifact from any repository com.test:swc:pom:1.0-snapshot from the specified remote repositories: central (http://repo1.maven.org/maven2) for project com.test:swc at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:605) at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1392) ... 19 more Caused by: org.apache.maven.artifact.resolver.ArtifactNotFoundException: Unable to download the artifact from any repository com.test:swc:pom:1.0-snapshot from the specified remote repositories: central (http://repo1.maven.org/maven2) at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:228) at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:90) at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:558) ... 20 more Caused by: org.apache.maven.wagon.ResourceDoesNotExistException: Unable to download the artifact from any repository at org.apache.maven.artifact.manager.DefaultWagonManager.getArtifact(DefaultWagonManager.java:404) at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:216) ... 22 more [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1 second [INFO] Finished at: Tue Jun 15 19:22:15 GMT+02:00 2010 [INFO] Final Memory: 1M/2M [INFO] ------------------------------------------------------------------------ Looks like its trying to download the project from maven's central repository instead of building it. What am I missing?

    Read the article

  • How can I handle parameterized queries in Drupal?

    - by Anthony Gatlin
    We have a client who is currently using Lotus Notes/Domino as their content management system and web server. For many reasons, we are recommending they sunset their Notes/Domino implementation and transition onto a more modern platform--such as Drupal. The client has several web applications which would be a natural fit for Drupal. However, I am unsure of the best way to implement one of the web applications in Drupal. I am running into a knowledge barrier and wondered if any of you could fill in the gaps. Situation The client has a Lotus Domino application which serves as a front-end for querying a large DB2 data store and returning a result set (generally in table form) to a user via the web. The web application provides access to approximately 100 pre-defined queries--50 of which are public and 50 of which are secured. Most of the queries accept some set of user selected parameters as input. The output of the queries is typically returned to users in a list (table) format. A limited number of result sets allow drill-down through the HTML table into detail records. The query parameters often involve database queries themselves. For example, a single query may pull a list of company divisions into a drop-down. Once a division is selected, second drop-down with the departments from that division is populated--but perhaps only departments which meet some special criteria--such as those having taken a loss within a specific time frame. Most queries have 2-4 parameters with the average probably being 3. The application involves no data entry. None of the back-end data is ever modified by the web application. All access is purely based around querying data and viewing results. The queries change relatively infrequently, and the current system has been in place for approximately 10 years. There may be 10-20 query additions, modifications, or other changes in a given year. The client simply desires to change the presentation platform but absolutely does not want to re-do the 100 database queries. Once the project is implemented, the client wants their staff to take over and manage future changes. The client's staff have no background in Drupal or PHP but are somewhat willing to learn as necessary. How would you transition this into Drupal? My major knowledge void relates to how we would manage the query parameters and access the queries themselves. Here are a few specific questions but feel free to chime in on any issue related to this implementation. Would we have to build 100 forms by hand--with each form containing the parameters for a given query? If so, how would we do this? Approximately how long would it take to build/configure each of these forms? Is there a better way than manually building 100 forms? (I understand using CCK to enter data into custom content types but since we aren't adding any nodes, I am a little stuck as to how this might work.) Would it be possible for the internal staff to learn to create these query parameter forms--even if they are unfamiliar with Drupal today? Would they be required to do any PHP programming? How would we take the query parameters from a form and execute a query against DB2? Would this require a custom module? If so, would it require one module total or one module per query? (Note: There is apparently a DB2 driver available for Drupal. See http://groups.drupal.org/node/5511.) 
Note: I am not looking for CMS recommendations other than Drupal as Drupal nicely fits all of the client's other requirements, and I hope to help them standardize on a single platform. Any assistance you can provide would be helpful. Thank you in advance for your help!
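
    Whichever Drupal pieces end up doing the work, the usual pattern for this kind of requirement is to describe each query declaratively, as SQL plus a list of parameter definitions, and generate both the input form and the parameterized query from that one description rather than hand-building 100 forms. Below is a language-agnostic sketch of the idea, shown in Python with made-up table and field names; in Drupal the same kind of registry would typically feed the Form API and db_query.

        # Hypothetical registry: every report names its SQL and the parameters a form must collect.
        QUERIES = {
            "divisions_at_loss": {
                "sql": ("SELECT division, SUM(amount) AS total FROM ledger "
                        "WHERE period BETWEEN ? AND ? GROUP BY division"),
                "params": [
                    {"name": "start", "label": "Start period", "type": "date"},
                    {"name": "end",   "label": "End period",   "type": "date"},
                ],
                "public": True,
            },
            # ... the other ~99 queries, added or edited without touching the form code
        }

        def form_fields(query_id):
            """A single generic form builder walks the parameter list instead of hard-coding fields."""
            return [(p["name"], p["label"], p["type"]) for p in QUERIES[query_id]["params"]]

        def run_query(connection, query_id, values):
            """Bind the collected values as query parameters; never interpolate them into the SQL."""
            spec = QUERIES[query_id]
            cur = connection.cursor()
            cur.execute(spec["sql"], [values[p["name"]] for p in spec["params"]])
            return cur.fetchall()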

    Read the article

  • WCF REST Question, Binding, Configuration

    - by Ethan McGee
    I am working on a WCF rest interface using json. I have wrapped the service in a windows service to host the service but I am now having trouble getting the service to be callable. I am not sure exactly what is wrong. The basic idea is that I want to host the service on a remote server so I want the service mapped to port localhost:7600 so that it can be invoked by posting data to [server_ip]:7600. The problem is most likely in the configuration file, since I am new to WCF and Rest I wasn't really sure what to type for the configuration so sorry if it's a total mess. I removed several chunks of code and comments to make it a little easier to read. These functions should have no bearing on the service since they call only C# functions. WCF Service Code using System; using System.Collections.Generic; using System.Linq; using System.ServiceModel; using System.ServiceModel.Activation; using System.ServiceModel.Web; using System.Text; namespace PCMiler_Connect { public class ZIP_List_Container { public string[] ZIP_List { get; set; } public string Optimized { get; set; } public string Calc_Type { get; set; } public string Cross_International_Borders { get; set; } public string Use_Kilometers { get; set; } public string Hazard_Level { get; set; } public string OK_To_Change_Destination { get; set; } } [ServiceContract] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)] public class PCMiler_Webservice { [WebInvoke(Method = "POST", UriTemplate = "", ResponseFormat = WebMessageFormat.Json, RequestFormat = WebMessageFormat.Json), OperationContract] public List<string> Calculate_Distance(ZIP_List_Container container) { return new List<string>(){ distance.ToString(), time.ToString() }; } } } XML Config File <?xml version="1.0" encoding="utf-8"?> <configuration> <system.serviceModel> <services> <service name="PCMiler_Connect.PCMiler_Webservice"> <endpoint address="" behaviorConfiguration="jsonBehavior" binding="webHttpBinding" bindingConfiguration="" contract="PCMiler_Connect.PCMiler_Webservice" /> <host> <baseAddresses> <add baseAddress="http://localhost:7600/" /> </baseAddresses> </host> </service> </services> <behaviors> <endpointBehaviors> <behavior name="jsonBehavior"> <enableWebScript/> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> </configuration> Service Wrapper using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Diagnostics; using System.Linq; using System.ServiceProcess; using System.ServiceModel; using System.Text; using System.Threading; namespace PCMiler_WIN_Service { public partial class Service1 : ServiceBase { ServiceHost host; public Service1() { InitializeComponent(); } protected override void OnStart(string[] args) { host = new ServiceHost(typeof(PCMiler_Connect.PCMiler_Webservice)); Thread thread = new Thread(new ThreadStart(host.Open)); } protected override void OnStop() { if (host != null) { host.Close(); host = null; } } } }
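
    One thing visible in the wrapper above: OnStart creates a Thread around host.Open but never starts it (and never calls host.Open directly), so nothing would be listening regardless of the binding. Once the host is actually opened, a quick way to check that the service answers on port 7600 is to post a sample JSON body from another machine. Below is a minimal sketch using Python's standard library; the host address and payload values are made up, the field names follow the ZIP_List_Container contract above, and depending on how the endpoint behavior resolves the URI, the operation may be exposed at the base address or at /Calculate_Distance.

        import json
        import urllib.request

        url = "http://192.168.1.50:7600/"   # hypothetical server address

        payload = {
            "ZIP_List": ["30301", "60601", "98101"],
            "Optimized": "true",
            "Calc_Type": "Practical",
            "Cross_International_Borders": "false",
            "Use_Kilometers": "false",
            "Hazard_Level": "0",
            "OK_To_Change_Destination": "false",
        }

        req = urllib.request.Request(
            url,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            print(resp.status, resp.read().decode("utf-8"))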

    Read the article

  • Internet Explorer ajax request not returning anything

    - by Ryan Giglio
    At the end of my registration process you get to a payment screen where you can enter a coupon code, and there is an AJAX call which fetches the coupon from the database and returns it to the page so it can be applied to your total before it is submitted to paypal. It works great in Firefox, Chrome, and Safari, but in Internet Explorer, nothing happens. The (data) being returned to the jQuery function appears to be null. jQuery Post function applyPromo() { var enteredCode = $("#promoCode").val(); $(".promoDiscountContainer").css("display", "block"); $(".promoDiscount").html("<img src='/images/loading.gif' alt='Loading...' title='Loading...' height='18' width='18' />"); $.post("/ajax/lookup-promo.php", { promoCode : enteredCode }, function(data){ if ( data != "error" ) { var promoType = data.getElementsByTagName('promoType').item(0).childNodes.item(0).data; var promoAmount = data.getElementsByTagName('promoAmount').item(0).childNodes.item(0).data; $(".promoDiscountContainer").css("display", "block"); $(".totalWithPromoContainer").css("display", "block"); if (promoType == "percent") { $("#promoDiscount").html("-" + promoAmount + "%"); var newPrice = (originalPrice - (originalPrice * (promoAmount / 100))); $("#totalWithPromo").html(" $" + newPrice); if ( promoAmount == 100 ) { skipPayment(); } } else { $("#promoDiscount").html("-$" + promoAmount); var newPrice = originalPrice - promoAmount; $("#totalWithPromo").html(" $" + newPrice); } $("#paypalPrice").val(newPrice + ".00"); $("#promoConfirm").css("display", "none"); $("#promoConfirm").html("Promotion Found"); finalPrice = newPrice; } else { $(".promoDiscountContainer").css("display", "none"); $(".totalWithPromoContainer").css("display", "none"); $("#promoDiscount").html(""); $("#totalWithPromo").html(""); $("#paypalPrice").val(originalPrice + ".00"); $("#promoConfirm").css("display", "block"); $("#promoConfirm").html("Promotion Not Found"); finalPrice = originalPrice; } }, "xml"); } Corresponding PHP Page include '../includes/dbConn.php'; $enteredCode = $_POST['promoCode']; $result = mysql_query( "SELECT * FROM promotion WHERE promo_code = '" . $enteredCode . "' LIMIT 1"); $currPromo = mysql_fetch_array( $result ); if ( $currPromo ) { if ( $currPromo['percent_off'] != "" ) { header("content-type:application/xml;charset=utf-8"); echo "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?>"; echo "<promo>"; echo "<promoType>percent</promoType>"; echo "<promoAmount>" . $currPromo['percent_off'] . "</promoAmount>"; echo "</promo>"; } else if ( $currPromo['fixed_off'] != "") { header("content-type:application/xml;charset=utf-8"); echo "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?>"; echo "<promo>"; echo "<promoType>fixed</promoType>"; echo "<promoAmount>" . $currPromo['fixed_off'] . "</promoAmount>"; echo "</promo>"; } } else { echo "error"; } When I run the code in IE, I get a javascript error on the Javascript line that says var promoType = data.getElementsByTagName('promoType').item(0).childNodes.item(0).data; Here's a screenshot of the IE debugger

    Read the article

  • Crop circular or elliptical image from original UIImage

    - by vikas ojha
    I am working with OpenCV to detect faces. I want the face to be cropped once it is detected. So far I have detected the face and have marked the rect/ellipse around it on iPhone. Please help me out in cropping the face in a circular/elliptical pattern. - (UIImage *)opencvFaceDetect:(UIImage *)originalImage { cvSetErrMode(CV_ErrModeParent); IplImage *image = [self CreateIplImageFromUIImage:originalImage]; // Scaling down /* Creates IPL image (header and data) ----------------cvCreateImage CVAPI(IplImage*) cvCreateImage( CvSize size, int depth, int channels ); */ IplImage *small_image = cvCreateImage(cvSize(image->width/2,image->height/2), IPL_DEPTH_8U, 3); /*SMOOTHES DOWN THE GAUSSIAN SURFACE--------:cvPyrDown*/ cvPyrDown(image, small_image, CV_GAUSSIAN_5x5); int scale = 2; // Load XML NSString *path = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_default" ofType:@"xml"]; CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*)cvLoad([path cStringUsingEncoding:NSASCIIStringEncoding], NULL, NULL, NULL); // Check whether the cascade has loaded successfully. Else report an error and quit if( !cascade ) { NSLog(@"ERROR: Could not load classifier cascade\n"); //return; } //Allocate the Memory storage CvMemStorage* storage = cvCreateMemStorage(0); // Clear the memory storage which was used before cvClearMemStorage( storage ); CGColorSpaceRef colorSpace; CGContextRef contextRef; CGRect face_rect; // Find whether the cascade is loaded, to find the faces. If yes, then: if( cascade ) { CvSeq* faces = cvHaarDetectObjects(small_image, cascade, storage, 1.1f, 3, 0, cvSize(20, 20)); cvReleaseImage(&small_image); // Create canvas to show the results CGImageRef imageRef = originalImage.CGImage; colorSpace = CGColorSpaceCreateDeviceRGB(); contextRef = CGBitmapContextCreate(NULL, originalImage.size.width, originalImage.size.height, 8, originalImage.size.width * 4, colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault); //VIKAS CGContextDrawImage(contextRef, CGRectMake(0, 0, originalImage.size.width, originalImage.size.height), imageRef); CGContextSetLineWidth(contextRef, 4); CGContextSetRGBStrokeColor(contextRef, 1.0, 1.0, 1.0, 0.5); // Draw results on the image: draw all components of face in the form of small rectangles // Loop the number of faces found. 
for(int i = 0; i < faces->total; i++) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; // Calc the rect of faces // Create a new rectangle for drawing the face CvRect cvrect = *(CvRect*)cvGetSeqElem(faces, i); // CGRect face_rect = CGContextConvertRectToDeviceSpace(contextRef, // CGRectMake(cvrect.x * scale, cvrect.y * scale, cvrect.width * scale, cvrect.height * scale)); face_rect = CGContextConvertRectToDeviceSpace(contextRef, CGRectMake(cvrect.x*scale, cvrect.y , cvrect.width*scale , cvrect.height*scale*1.25 )); facedetectapp=(FaceDetectAppDelegate *)[[UIApplication sharedApplication]delegate]; facedetectapp.grabcropcoordrect=face_rect; NSLog(@" FACE off %f %f %f %f",facedetectapp.grabcropcoordrect.origin.x,facedetectapp.grabcropcoordrect.origin.y,facedetectapp.grabcropcoordrect.size.width,facedetectapp.grabcropcoordrect.size.height); CGContextStrokeRect(contextRef, face_rect); //CGContextFillEllipseInRect(contextRef,face_rect); CGContextStrokeEllipseInRect(contextRef,face_rect); [pool release]; } } CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage],face_rect); UIImage *returnImage = [UIImage imageWithCGImage:imageRef]; CGImageRelease(imageRef); CGContextRelease(contextRef); CGColorSpaceRelease(colorSpace); cvReleaseMemStorage(&storage); cvReleaseHaarClassifierCascade(&cascade); return returnImage; } } Thanks Vikas
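
    The detection code above uses the C API on iPhone, but the elliptical crop itself is easiest to see in a short Python/OpenCV sketch: draw the detected ellipse into a mask, AND it with the image, and crop to the ellipse's bounding rectangle. The file name and rectangle numbers below are placeholders, and cv2/numpy are assumed to be available.

        import cv2
        import numpy as np

        image = cv2.imread("face.jpg")      # hypothetical input image
        x, y, w, h = 120, 80, 200, 260      # hypothetical detected face rect

        # Build a single-channel mask with a filled ellipse inscribed in the rect.
        mask = np.zeros(image.shape[:2], dtype=np.uint8)
        center = (x + w // 2, y + h // 2)
        axes = (w // 2, h // 2)
        cv2.ellipse(mask, center, axes, 0, 0, 360, 255, thickness=-1)

        # Keep only the pixels inside the ellipse, then crop to its bounding box.
        masked = cv2.bitwise_and(image, image, mask=mask)
        face = masked[y:y + h, x:x + w]
        cv2.imwrite("face_ellipse.png", face)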

    Read the article

  • Minimum-Waste Print Job Grouping Algorithm?

    - by Matt Mc
    I work at a publishing house and I am setting up one of our presses for "ganging", in other words, printing multiple jobs simultaneously. Given that different print jobs can have different quantities, and anywhere from 1 to 20 jobs might need to be considered at a time, the problem would be to determine which jobs to group together to minimize waste (waste coming from over-printing on smaller-quantity jobs in a given set, that is). Given the following stable data: All jobs are equal in terms of spatial size--placement on paper doesn't come into consideration. There are three "lanes", meaning that three jobs can be printed simultaneously. Ideally, each lane has one job. Part of the problem is minimizing how many lanes each job is run on. If necessary, one job could be run on two lanes, with a second job on the third lane. The "grouping" waste from a given set of jobs (let's say the quantities of them are x, y and z) would be the highest number minus the two lower numbers. So if x is the higher number, the grouping waste would be (x - y) + (x - z). Otherwise stated, waste is produced by printing job Y and Z (in excess of their quantities) up to the quantity of X. The grouping waste would be a qualifier for the given set, meaning it could not exceed a certain quantity or the job would simply be printed alone. So the question is stated: how to determine which sets of jobs are grouped together, out of any given number of jobs, based on the qualifiers of 1) Three similar quantities OR 2) Two quantities where one is approximately double the other, AND with the aim of minimal total grouping waste across the various sets. (Edit) Quantity Information: Typical job quantities can be from 150 to 350 on foreign languages, or 500 to 1000 on English print runs. This data can be used to set up some scenarios for an algorithm. For example, let's say you had 5 jobs: 1000, 500, 500, 450, 250 By looking at it, I can see a couple of answers. Obviously (1000/500/500) is not efficient as you'll have a grouping waste of 1000. (500/500/450) is better as you'll have a waste of 50, but then you run (1000) and (250) alone. But you could also run (1000/500) with 1000 on two lanes, (500/250) with 500 on two lanes and then (450) alone. In terms of trade-offs for lane minimization vs. wastage, we could say that any grouping waste over 200 is excessive. (End Edit) ...Needless to say, quite a problem. (For me.) I am a moderately skilled programmer but I do not have much familiarity with algorithms and I am not well versed in the mathematics of the area. I'm in the process of writing a sort of brute-force program that simply tries all options, neglecting any option tree that seems to have excessive grouping waste. However, I can't help but hope there's an easier and more efficient method. I've looked at various websites trying to find out more about algorithms in general and have been slogging my way through the symbology, but it's slow going. Unfortunately, Wikipedia's articles on the subject are very cross-dependent and it's difficult to find an "in". The only thing I've been able to really find would seem to be a definition of the rough type of algorithm I need: "Exclusive Distance Clustering", one-dimensionally speaking. I did look at what seems to be the popularly referred-to algorithm on this site, the Bin Packing one, but I was unable to see exactly how it would work with my problem. Any help is appreciated. :)
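
    With at most 20 jobs, the brute-force direction described above is workable. The sketch below (Python) scores every candidate pair and triple by the question's own waste definition, treating the larger job of a pair as split across two lanes, and greedily takes the cheapest group under the 200-unit cutoff from the question. A greedy pass is not guaranteed optimal; an exhaustive search over partitions (or a bin-packing style search) would be, at higher cost.

        from itertools import combinations
        from math import ceil

        MAX_WASTE = 200   # grouping waste above this means the job runs alone

        def waste(group):
            """Overprint for one press run on the 3-lane press, per the question's model."""
            if len(group) == 3:
                # one job per lane: the two smaller jobs are printed up to the largest quantity
                return sum(max(group) - q for q in group)
            big, small = max(group), min(group)
            run = max(ceil(big / 2), small)   # the larger job is split across two lanes
            return 3 * run - (big + small)

        def plan(quantities):
            remaining = sorted(quantities, reverse=True)
            groups = []
            while remaining:
                candidates = [g for size in (3, 2)
                                for g in combinations(remaining, size)
                                if waste(g) <= MAX_WASTE]
                if not candidates:
                    groups.append((remaining.pop(0),))   # nothing combines cheaply; run it alone
                    continue
                best = min(candidates, key=waste)
                groups.append(best)
                for q in best:
                    remaining.remove(q)
            return groups

        print(plan([1000, 500, 500, 450, 250]))
        # -> [(1000, 500), (500, 250), (450,)], the two-lane split plan described above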

    Read the article

  • Windows installation repair option not showing up

    - by Carl
    I'm trying to repair an existing Windows XP installation. Following the instructions from http://www.microsoft.com/windowsxp/using/helpandsupport/learnmore/tips/doug92.mspx this should work: When the Press any key to boot from CD message is displayed on your screen, press a key to start your computer from the Windows XP CD. Press ENTER when you see the message To setup Windows XP now, and then press ENTER displayed on the Welcome to Setup screen. Do not choose the option to press R to use the Recovery Console. In the Windows XP Licensing Agreement, press F8 to agree to the license agreement. Make sure that your current installation of Windows XP is selected in the box, and then press R to repair Windows XP. Follow the instructions on the screen to complete Setup. On step 5 pressing R does nothing and there is nothing on the screen saying it would. When I just select to install I get a message that a previous installation is there and proceeding will destroy it and installed applications, I can optionally select a directory other than c:\windows, and I can optionally format before continuing. I had tried to go from SP2-SP3. It failed, and then I couldn't get to Safe Mode. I put the SP1 disk back in to do a repair, and I don't see that option. (I don't have an SP2 boot/install disk, I just have the non-boot upgrade package.) UPDATE: Upon loading the Recovery Console, I get a message saying The system registry does not appear to have an active ControlSet key. The system registry may be damaged. You can try restarting it with the Last Known Good configuration or you can try repairing the installation of Windows using the setup program's repair and recovery options. I then did bootcfg /scan - "successful" ... Total installs: 1 ... [1] c:\windows - with the c:\windows command prompt below it. bootcfg /list gives [1] Windows XP Pro; OS Load Options /noexecute=optin /fastdetect; OS Location: c:\windows I followed the instructions at http://michaelstevenstech.com/XPrepairinstall.htm - "Warning 2" link copy E:\i386\ntldr C:\ copy E:\i386\ntdetect.com C:\ attrib -h -r -s C:\boot.ini del C:\boot.ini BootCfg /Rebuild I added /fastdetect when it asked for options. I re-ran Windows setup - no change - no repair option. UPDATE: I followed the procedure at http://support.microsoft.com/default.aspx?scid=kb;en-us;307545 I rebooted. I now get a quick message on bootup to select the boot - 1: [blank] ; Windows XP Professional ; Windows Recover Console. The "1: " is new. The rest is the way it was when all was okay. Selecting 1: and the next one gives the same result - I get to a login icon, and then it asks for a password, with the blinking cursor, but I can't type anything. I reboot with the Windows CD. Now I see a repair option for installation "1: " I selected R on that, and it did "Setup is copying files..." and rebooted when it was done. Then it booted, and I got a window saying "Setup will complete in approximately 39 minutes." That's where I am now. I wasn't expecting this last part - I did a repair several months ago and I don't recall that. UPDATE: Booted up. Asked if I wanted to register Windows online. All my icons are there, and the old desktop documents. Good. All the applications I tried from the Start Menu work (tested a few), except Corel Photopaint - I get registry entry not found errors. Windows ran for a while, then froze. The mouse and keyboard don't work. Pressing the power button got Windows to shut down. 
I probably need to put SP2 on it, and then all the updates for my laptop for XP Pro SP2 (drivers); there are a bunch. The mouse and keyboard quit working again. That wasn't a problem when I first set up this laptop. I've run it 4 times now. Two mouse/keyboard hangs came from pressing Ctrl-C (to copy text from a notepad document), and two from selecting Start-Run (I wasn't able to type anything in the box).

    Read the article

< Previous Page | 219 220 221 222 223 224 225 226 227 228 229 230  | Next Page >