Search Results

Search found 8570 results on 343 pages for 'power saving'.


  • Paying great programmers more than average programmers

    - by Kelly French
    It's fairly well recognized that some programmers are up to 10 times more productive than others. Joel mentions this topic on his blog. There is a whole blog devoted to the idea of the "10x productive programmer". In the years since the original study, the general finding that "There are order-of-magnitude differences among programmers" has been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al. 2000).

    Fred Brooks mentions the wide range in the quality of designers in his "No Silver Bullet" article: "The differences are not minor--they are rather like the differences between Salieri and Mozart. Study after study shows that the very best designers produce structures that are faster, smaller, simpler, cleaner, and produced with less effort. The differences between the great and the average approach an order of magnitude." The study that Brooks cites is: H. Sackman, W.J. Erikson, and E.E. Grant, "Exploratory Experimental Studies Comparing Online and Offline Programming Performance," Communications of the ACM, Vol. 11, No. 1 (January 1968), pp. 3-11.

    The way programmers are paid by employers these days makes it almost impossible to pay the great programmers a large multiple of the entry-level salary. When the starting salary for a just-graduated entry-level programmer, we'll call him Asok (from Dilbert), is $40K, even if the top programmer, we'll call him Linus, makes $120K, that is only a multiple of 3. I'd be willing to bet that Linus does much more than 3 times what Asok does, so why wouldn't we expect him to get paid more as well?

    Here is a quote from Stroustrup: "The companies are complaining because they are hurting. They can't produce quality products as cheaply, as reliably, and as quickly as they would like. They correctly see a shortage of good developers as a part of the problem. What they generally don't see is that inserting a good developer into a culture designed to constrain semi-skilled programmers from doing harm is pointless because the rules/culture will constrain the new developer from doing anything significantly new and better."

    This leads to two questions. I'm excluding self-employed programmers and contractors. If you disagree that's fine, but please include your rationale. It might be that the self-employed or contract programmers are where you find the top-10 earners, but please provide an explanation/story/rationale along with any anecdotes.

    [EDIT] I thought up some other areas in which talent/ability affects pay: financial traders (commodities, stocks, derivatives, etc.), designers (fashion, interior decorators, architects, etc.), professionals (doctors, lawyers, accountants, etc.), and sales.

    Questions: Why aren't the top 1% of programmers paid like A-list movie stars? What would the industry be like if we did pay the "Smart and gets things done" programmers 6, 8, or 10 times what an intern makes?

    [Footnote: I posted this question after submitting it to the Stackoverflow podcast. It was included in episode 77 and I've written more about it as a Codewright's Tale post, 'Of Rockstars and Bricklayers'.]

    Epilogue: It's probably unfair to exclude contractors and the self-employed. One aspect of the highest earners in other fields is that they are free agents. The competition for their skills is what drives up their earning power. This means they cannot be interchangeable or otherwise treated as a plug-and-play resource. I liked the example in one answer of a major league baseball team trying to field two first basemen. Also, something that Joel mentioned in the Stackoverflow podcast (#77): there are natural dynamics that shrink any extreme performance/pay ranges between the highs and lows. One is the peer pressure on organizations to pay within a given range; another is the likelihood that the high performer will realize their undercompensation and seek greener pastures.

    Read the article

  • Un-failing over a Cisco PIX 515e

    - by ABrown
    We had a power outage at our data center last week and when our dual PIX 515E running IOS 7.0(8) (configured with a failover cable) came back, they were in a failed over state where the Secondary unit is active and the Primary unit is standby I have tried 'failover reset', 'failover active', and 'failover reload-standby' as well as executing reloads on both units in a variety of orders, and they don't come back Primary/Active Secondary/Standby. The only thing in my arsenal that I haven't tried is driving to the data center and performing a hard reboot, which I hate to do. I have read How Failover Works on the Cisco Secure Firewall and it seems like this should be wicked straight forward. output of show failover on Primary: Failover On Cable status: Normal Failover unit Primary Failover LAN Interface: N/A - Serial-based failover enabled Unit Poll frequency 15 seconds, holdtime 45 seconds Interface Poll frequency 15 seconds Interface Policy 1 Monitored Interfaces 2 of 250 maximum Version: Ours 7.0(8), Mate 7.0(8) Last Failover at: 02:52:05 UTC Mar 10 2010 This host: Primary - Standby Ready Active time: 0 (sec) Interface outside (x.x.x.165): Normal Interface inside (y.y.y.3): Normal Other host: Secondary - Active Active time: 897045 (sec) Interface outside (x.x.x.164): Normal Interface inside (y.y.y.4): Normal Stateful Failover Logical Update Statistics Link : Unconfigured. output of show failover on Secondary: Failover On Cable status: Normal Failover unit Secondary Failover LAN Interface: N/A - Serial-based failover enabled Unit Poll frequency 15 seconds, holdtime 45 seconds Interface Poll frequency 15 seconds Interface Policy 1 Monitored Interfaces 2 of 250 maximum Version: Ours 7.0(8), Mate 7.0(8) Last Failover at: 02:03:04 UTC Feb 28 2010 This host: Secondary - Active Active time: 896925 (sec) Interface outside (x.x.x.164): Normal Interface inside (y.y.y.4): Normal Other host: Primary - Standby Ready Active time: 0 (sec) Interface outside (x.x.x.165): Normal Interface inside (y.y.y.3): Normal Stateful Failover Logical Update Statistics Link : Unconfigured. I'm seeing the following in my syslog: Mar 10 03:05:00 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reset' command. Mar 10 03:05:09 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reload-standby' command. Mar 10 03:05:12 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=20,my=Active,peer=Failed. Mar 10 03:05:12 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Failed. Mar 10 03:06:09 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=0,my=Active,peer=Failed. Mar 10 03:06:09 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is down. Mar 10 03:06:09 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=1,my=Active,peer=Failed. Mar 10 03:06:10 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is up. Mar 10 03:06:10 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=411,op=2,my=Active,peer=Failed. Mar 10 03:06:23 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=80,my=Active,peer=Standby Ready. Mar 10 03:06:23 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Standby Ready. Mar 10 03:06:24 fw2 %PIX-6-720027: (VPN-Primary) HA status callback: My state Standby Ready. Mar 10 03:07:05 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reset' command. 
Mar 10 03:07:31 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover active' command. Mar 10 03:08:04 fw1 %PIX-5-611103: User logged out: Uname: enable_1 Mar 10 03:08:04 fw1 %PIX-6-315011: SSH session from admin1_int on interface inside for user "pix" terminated normally Mar 10 03:08:39 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=20,my=Active,peer=Failed. Mar 10 03:08:39 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Failed. Mar 10 03:09:10 fw1 %PIX-6-605005: Login permitted from admin1_int/36891 to inside:192.168.4.4/ssh for user "pix" Mar 10 03:09:23 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reset' command. Mar 10 03:09:38 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=0,my=Active,peer=Failed. Mar 10 03:09:39 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is down. Mar 10 03:09:39 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=1,my=Active,peer=Failed. Mar 10 03:09:39 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is up. Mar 10 03:09:39 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=411,op=2,my=Active,peer=Failed. Mar 10 03:09:52 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=80,my=Active,peer=Standby Ready. Mar 10 03:09:52 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Standby Ready. Mar 10 03:09:53 fw2 %PIX-6-720027: (VPN-Primary) HA status callback: My state Standby Ready. I'm not exactly sure how to interpret that syslog data. Primary doesn't seem to even try to become Active. When I reload the individual units separately, my connections are retained, so it doesn't seem like I have a real hardware failure. Is there something I can query (IOS or SNMP) to check for hardware issues? Any thoughts? My IOS-fu is weak. Thanks for any help you might provide, Aaron
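
    For what it's worth, the syslog above suggests the failover commands were all issued on fw1, which is the Secondary (currently Active) unit; 'failover active' on a unit that is already active is a no-op. Before a trip to the data center, one thing worth trying is forcing the role swap explicitly from the right side. A minimal sketch, using standard PIX 7.x failover commands from privileged EXEC; hostnames are illustrative:

        ! On the Primary unit (fw2 in the logs), make it take the active role:
        pix-primary# failover active

        ! Or, equivalently, on the Secondary unit that is currently active (fw1):
        pix-secondary# no failover active

    If the Primary drops straight back to standby afterwards, the unit still considers itself failed, which points more at the serial failover cable or hardware than at the configuration.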

    Read the article

  • In WPF, how do I get a command in a Control Template to bind to a property in a parent?

    - by Keith
    I am relatively new to WPF and sometimes it makes my head explode. However, I do like the power behind it, especially when used with the MVVM model. I have a control template that contains a button. I use that control template inside of a custom control. I want to add a property on the custom control that will bind to the command property of the button inside the control template. Basically, it is a combo box with a button to the right of it to allow a user to pop up a search dialog. Since this control could appear on a usercontrol multiple times, I need to be able to potentially bind each control to a different command (Searh products, search customers, etc). However, I have been unable to figure out how to do this Here is some sample XAML <Style TargetType="{x:Type m:SelectionFieldControl}"> <Setter Property="LookupTemplate" Value="{StaticResource LookupTemplate}" /> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type m:SelectionFieldControl}"> <Border BorderThickness="{TemplateBinding Border.BorderThickness}" Padding="{TemplateBinding Control.Padding}" BorderBrush="{TemplateBinding Border.BorderBrush}" Background="{TemplateBinding Panel.Background}" SnapsToDevicePixels="True" Focusable="False"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" MinWidth="0" SharedSizeGroup="{Binding LabelShareSizeGroupName, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type m:BaseFieldControl}}}" /> <ColumnDefinition Width="1*" /> <ColumnDefinition Width="Auto" SharedSizeGroup="{Binding WidgetsShareSizeGroupName, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type m:BaseFieldControl}}}" /> </Grid.ColumnDefinitions> <!-- Customized Value Part --> <ComboBox x:Name="PART_Value" Grid.Column="1" Margin="4,2,0,1" SelectedValue="{Binding Path=SelectionField.Value, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type m:SelectionFieldControl}}}" IsEnabled="{Binding Field.IsNotReadOnly, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type m:SelectionFieldControl}}}" Visibility="{Binding Field.IsInEditMode, Converter={StaticResource TrueToVisible}, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type m:SelectionFieldControl}}}" FontFamily="{StaticResource FontFamily_Default}" FontSize="11px"> <ComboBox.ItemsPanel> <ItemsPanelTemplate> <VirtualizingStackPanel IsVirtualizing="True" VirtualizationMode="Recycling"/> </ItemsPanelTemplate> </ComboBox.ItemsPanel> </ComboBox> <StackPanel Grid.Column="2" Orientation="Horizontal" Name="PART_Extra" Focusable="False"> <ContentControl Name="PART_LookupContent" Template="{Binding LookupTemplate, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type m:SelectionFieldControl}}}" Focusable="False"/> </StackPanel> </Grid> </Border> </ControlTemplate> </Setter.Value> </Setter> </Style> I thought I could get it to work by doing something like this <Button Command="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type SelectionFieldControl}}, Path=ShowSearchCommand}" Margin="2" /> but it does not work. Any help would be greatly appreciated.
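
    A sketch of one way this is commonly handled, under the assumption that SelectionFieldControl is your own class and can expose the command as a dependency property; the base class and namespaces here are illustrative, and only the ShowSearchCommand name comes from the question:

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Input;

        // Hypothetical sketch: only the command property is shown, not the rest of the control.
        public class SelectionFieldControl : Control
        {
            public static readonly DependencyProperty ShowSearchCommandProperty =
                DependencyProperty.Register(
                    "ShowSearchCommand",
                    typeof(ICommand),
                    typeof(SelectionFieldControl),
                    new PropertyMetadata(null));

            public ICommand ShowSearchCommand
            {
                get { return (ICommand)GetValue(ShowSearchCommandProperty); }
                set { SetValue(ShowSearchCommandProperty, value); }
            }
        }

    With the property in place, the button binding you tried should work once the ancestor type carries the same m: namespace prefix used elsewhere in the template:

        <Button Margin="2"
                Command="{Binding Path=ShowSearchCommand,
                          RelativeSource={RelativeSource FindAncestor,
                                          AncestorType={x:Type m:SelectionFieldControl}}}" />

    Each instance on the user control can then be pointed at a different view-model command, e.g. ShowSearchCommand="{Binding SearchProductsCommand}".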

    Read the article

  • Problem with RAID5 (mdadm) - disk detached

    - by poscaman
    Having these lines in /var/log/syslog Apr 18 16:53:05 Server kernel: [4487878.816036] ata4: EH in SWNCQ mode,QC:qc_active 0x1 sactive 0x1 Apr 18 16:53:05 Server kernel: [4487878.816058] ata4: SWNCQ:qc_active 0x1 defer_bits 0x0 last_issue_tag 0x0 Apr 18 16:53:05 Server kernel: [4487878.816059] dhfis 0x1 dmafis 0x1 sdbfis 0x0 Apr 18 16:53:05 Server kernel: [4487878.816093] ata4: ATA_REG 0x40 ERR_REG 0x0 Apr 18 16:53:05 Server kernel: [4487878.816108] ata4: tag : dhfis dmafis sdbfis sacitve Apr 18 16:53:05 Server kernel: [4487878.816125] ata4: tag 0x0: 1 1 0 1 Apr 18 16:53:05 Server kernel: [4487878.816150] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x6 frozen Apr 18 16:53:05 Server kernel: [4487878.816178] ata4.00: failed command: WRITE FPDMA QUEUED Apr 18 16:53:05 Server kernel: [4487878.816199] ata4.00: cmd 61/08:00:00:88:e0/00:00:e8:00:00/40 tag 0 ncq 4096 out Apr 18 16:53:05 Server kernel: [4487878.816200] res 40/00:00:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout) Apr 18 16:53:05 Server kernel: [4487878.816253] ata4.00: status: { DRDY } Apr 18 16:53:05 Server kernel: [4487878.816272] ata4: hard resetting link Apr 18 16:53:05 Server kernel: [4487878.816274] ata4: nv: skipping hardreset on occupied port Apr 18 16:53:06 Server kernel: [4487879.676029] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 18 16:53:07 Server kernel: [4487880.416749] ata4.00: n_sectors mismatch 3907029168 != 268435455 Apr 18 16:53:07 Server kernel: [4487880.416752] ata4.00: revalidation failed (errno=-19) Apr 18 16:53:07 Server kernel: [4487880.416773] ata4.00: limiting speed to UDMA/133:PIO2 Apr 18 16:53:11 Server kernel: [4487884.676024] ata4: hard resetting link Apr 18 16:53:11 Server kernel: [4487884.676027] ata4: nv: skipping hardreset on occupied port Apr 18 16:53:12 Server kernel: [4487885.144032] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 18 16:53:12 Server kernel: [4487885.240185] ata4.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80) Apr 18 16:53:12 Server kernel: [4487885.240190] ata4.00: revalidation failed (errno=-5) Apr 18 16:53:12 Server kernel: [4487885.240210] ata4.00: disabled Apr 18 16:53:17 Server kernel: [4487890.144023] ata4: hard resetting link Apr 18 16:53:17 Server kernel: [4487891.024033] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 18 16:53:17 Server kernel: [4487891.033357] ata4.00: ATA-8: WDC WD20EARS-00S8B1, 80.00A80, max UDMA/133 Apr 18 16:53:17 Server kernel: [4487891.033360] ata4.00: 3907029168 sectors, multi 1: LBA48 NCQ (depth 31/32) Apr 18 16:53:17 Server kernel: [4487891.048347] ata4.00: configured for UDMA/133 Apr 18 16:53:17 Server kernel: [4487891.048361] sd 3:0:0:0: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Apr 18 16:53:17 Server kernel: [4487891.048365] sd 3:0:0:0: [sdc] Sense Key : Aborted Command [current] [descriptor] Apr 18 16:53:17 Server kernel: [4487891.048369] Descriptor sense data with sense descriptors (in hex): Apr 18 16:53:17 Server kernel: [4487891.048371] 72 0b 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00 Apr 18 16:53:17 Server kernel: [4487891.048378] 00 00 00 00 Apr 18 16:53:17 Server kernel: [4487891.048382] sd 3:0:0:0: [sdc] Add. 
Sense: No additional sense information Apr 18 16:53:17 Server kernel: [4487891.048385] sd 3:0:0:0: [sdc] CDB: Write(10): 2a 00 e8 e0 88 00 00 00 08 00 Apr 18 16:53:17 Server kernel: [4487891.048393] end_request: I/O error, dev sdc, sector 3907028992 Apr 18 16:53:17 Server kernel: [4487891.048420] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048440] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048458] end_request: I/O error, dev sdc, sector 3907028992 Apr 18 16:53:17 Server kernel: [4487891.048477] md: super_written gets error=-5, uptodate=0 Apr 18 16:53:17 Server kernel: [4487891.048482] raid5: Disk failure on sdc, disabling device. Apr 18 16:53:17 Server kernel: [4487891.048483] raid5: Operation continuing on 3 devices. Apr 18 16:53:17 Server kernel: [4487891.048525] ata4: EH complete Apr 18 16:53:17 Server kernel: [4487891.048554] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048576] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048596] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048615] sd 3:0:0:0: [sdc] READ CAPACITY(16) failed Apr 18 16:53:17 Server kernel: [4487891.048617] sd 3:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK Apr 18 16:53:17 Server kernel: [4487891.048620] sd 3:0:0:0: [sdc] Sense not available. Apr 18 16:53:17 Server kernel: [4487891.048624] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048643] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048663] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048681] sd 3:0:0:0: [sdc] READ CAPACITY failed Apr 18 16:53:17 Server kernel: [4487891.048683] sd 3:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK Apr 18 16:53:17 Server kernel: [4487891.048685] sd 3:0:0:0: [sdc] Sense not available. 
Apr 18 16:53:17 Server kernel: [4487891.048689] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048709] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048800] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048860] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.049028] sd 3:0:0:0: [sdc] Asking for cache data failed Apr 18 16:53:17 Server kernel: [4487891.049048] sd 3:0:0:0: [sdc] Assuming drive cache: write through Apr 18 16:53:17 Server kernel: [4487891.049071] sdc: detected capacity change from 2000398934016 to 0 Apr 18 16:53:17 Server kernel: [4487891.049080] ata4.00: detaching (SCSI 3:0:0:0) Apr 18 16:53:18 Server kernel: [4487891.061149] sd 3:0:0:0: [sdc] Stopping disk Apr 18 16:53:18 Server kernel: [4487891.485492] RAID5 conf printout: Apr 18 16:53:18 Server kernel: [4487891.485496] --- rd:4 wd:3 Apr 18 16:53:18 Server kernel: [4487891.485500] disk 0, o:1, dev:sdb Apr 18 16:53:18 Server kernel: [4487891.485502] disk 1, o:0, dev:sdc Apr 18 16:53:18 Server kernel: [4487891.485504] disk 2, o:1, dev:sdd Apr 18 16:53:18 Server kernel: [4487891.485506] disk 3, o:1, dev:sde Apr 18 16:53:18 Server kernel: [4487891.497014] RAID5 conf printout: Apr 18 16:53:18 Server kernel: [4487891.497016] --- rd:4 wd:3 Apr 18 16:53:18 Server kernel: [4487891.497018] disk 0, o:1, dev:sdb Apr 18 16:53:18 Server kernel: [4487891.497019] disk 2, o:1, dev:sdd Apr 18 16:53:18 Server kernel: [4487891.497021] disk 3, o:1, dev:sde Apr 18 16:53:18 Server kernel: [4487891.838719] scsi 3:0:0:0: Direct-Access ATA WDC WD20EARS-00S 80.0 PQ: 0 ANSI: 5 Apr 18 16:53:18 Server kernel: [4487891.838886] sd 3:0:0:0: Attached scsi generic sg3 type 0 Apr 18 16:53:18 Server kernel: [4487891.838911] sd 3:0:0:0: [sdf] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB) Apr 18 16:53:18 Server kernel: [4487891.838964] sd 3:0:0:0: [sdf] Write Protect is off Apr 18 16:53:18 Server kernel: [4487891.838967] sd 3:0:0:0: [sdf] Mode Sense: 00 3a 00 00 Apr 18 16:53:18 Server kernel: [4487891.838988] sd 3:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 18 16:53:20 Server kernel: [4487891.839147] sdf: unknown partition table Apr 18 16:53:20 Server kernel: [4487893.130026] sd 3:0:0:0: [sdf] Attached SCSI disk Right now, i'm unable to do anything on /dev/sdc. Is there any way to try to re-attach it? I don't want to power-down the server unless absolutely necessary System: Debian Stable 2.6.32-5-amd64 mdadm version 3.1.4-1+8efb9d1 cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] md0 : active raid5 sdb[0] sdc[4](F) sde[3] sdd[2] 5860543488 blocks level 5, 64k chunk, algorithm 2 [4/3] [U_UU] unused devices: <none> mdadm --examine --scan ARRAY /dev/md0 UUID=1a7744b5:912ec7af:f82a9565:e3b453b4
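
    A sketch of how a detached member is often re-added without a reboot, assuming the drive that re-appeared as /dev/sdf is the same physical disk and is actually healthy (the n_sectors mismatch in the log is worth taking seriously, so sanity-check the drive first). The 'detached' keyword is standard mdadm usage, but verify against the mdadm 3.1.4 man page before running anything:

        # Confirm the re-detected disk still carries the array's superblock/UUID
        mdadm --examine /dev/sdf
        smartctl -a /dev/sdf          # sanity-check the drive itself

        # Drop the vanished member (its /dev/sdc node is gone, so use the
        # 'detached' keyword rather than the old device name)
        mdadm /dev/md0 --remove detached

        # Add the disk back under its new name and let the RAID5 resync onto it
        mdadm /dev/md0 --add /dev/sdf
        cat /proc/mdstat

    If --examine shows no superblock or SMART looks bad, treat it as a failed drive instead: leave the array degraded, replace the disk, and --add the replacement the same way.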

    Read the article

  • 'undefined method init for Mysql:Class'

    - by sscirrus
    I've been having problems with a MySQL Server installation that got messed up after a power outage. Configuration Intel i5 Mac running OS X 10.6.5 Ruby 1.9.2 installed Rails 3.0.1 installed MySQL Server (finally) installed and running I completely reinstalled MySQL, which deleted the local development/test/production databases. So, I have run create database development; in MySQL to get the dev database ready for a migration. Current Goal Run rake db:migrate to get my databases back again. (I cannot currently access my databases or Mysql at all from Rails.) Error Using the gem 'mysql', '2.8.1' and run rake db:migrate, I get the error: rake aborted! undefined method 'init' for Mysql:Class Stack Trace: /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/mysql_adapter.rb:30:in 'mysql_connection' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:230:in 'new_connection' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:238:in 'checkout_new_connection' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:194:in 'block (2 levels) in checkout' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:190:in 'loop' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:190:in 'block in checkout' /Users/sscirrus/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/monitor.rb:201:in 'mon_synchronize' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:189:in 'checkout' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:96:in 'connection' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:318:in 'retrieve_connection' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_specification.rb:97:in 'retrieve_connection' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_specification.rb:89:in 'connection' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/migration.rb:486:in 'initialize' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/migration.rb:433:in 'new' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/migration.rb:433:in 'up' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/migration.rb:415:in 'migrate' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/railties/databases.rake:142:in 'block (2 levels) in <top (required)>' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:636:in 'call' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:636:in 'block in execute' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:631:in 'each' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:631:in 'execute' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:597:in 'block in 
invoke_with_call_chain' /Users/sscirrus/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/monitor.rb:201:in 'mon_synchronize' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:590:in 'invoke_with_call_chain' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:583:in 'invoke' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2051:in 'invoke_task' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2029:in 'block (2 levels) in top_level' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2029:in 'each' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2029:in 'block in top_level' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2068:in 'standard_exception_handling' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2023:in 'top_level' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2001:in 'block in run' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2068:in 'standard_exception_handling' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:1998:in 'run' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/bin/rake:31:in '<top (required)>' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/bin/rake:19:in 'load' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/bin/rake:19:in '<main>'
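
    For context, this is usually not a Rails problem: "undefined method 'init' for Mysql:Class" typically means the mysql gem's compiled extension no longer matches the client library that the MySQL reinstall put down. A sketch of the usual remedies, assuming a standard /usr/local/mysql install path (adjust to wherever mysql_config actually lives on your machine):

        # Rebuild the gem's native extension against the new client library
        gem uninstall mysql
        gem install mysql -v 2.8.1 -- --with-mysql-config=/usr/local/mysql/bin/mysql_config

        # On 64-bit OS X the architecture sometimes has to be forced as well:
        # sudo env ARCHFLAGS="-arch x86_64" gem install mysql -v 2.8.1 -- \
        #     --with-mysql-config=/usr/local/mysql/bin/mysql_config

        # Or sidestep it entirely: switch the app to the mysql2 adapter
        # (Gemfile: gem 'mysql2'  /  database.yml: adapter: mysql2), then:
        bundle install
        rake db:migrate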

    Read the article

  • Why do I get this error when I try to push my SQLite3 to Postgresql (via Taps) on Cedar Stack?

    - by rhodee
    I've done quite a bit of research on the Heroku Dev Center and I am now looking to the community for help. Here is my problem: I cannot push my db to the Heroku Cedar stack. I am trying to migrate a SQLite database to PostgreSQL via the Taps gem. When I am ready to deploy I run: bundle install --without production heroku run db:push I get the following result: Running db:seed attached to terminal... up, run.17 sh: db:seed: not found heroku run rake db:migrate And when I run the migration (heroku run rake db:migrate) I get the following: Running rake db:migrate attached to terminal... up, run.18 rake aborted! No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb) /usr/local/lib/ruby/1.9.1/rake.rb:2367:in `raw_load_rakefile' /usr/local/lib/ruby/1.9.1/rake.rb:2007:in `block in load_rakefile' /usr/local/lib/ruby/1.9.1/rake.rb:2058:in `standard_exception_handling' /usr/local/lib/ruby/1.9.1/rake.rb:2006:in `load_rakefile' /usr/local/lib/ruby/1.9.1/rake.rb:1991:in `run' /usr/local/bin/rake:31:in `<main>'

    Every time I push to Heroku (git push heroku master) it fails because my Gemfile is attempting to install the sqlite3 gem, even though it's inside the development and test groups in my Gemfile. My database.yml production environment still points to the sqlite adapter even after I have run the following command successfully: heroku config:add BUNDLE_WITHOUT="test development" --app app_name_on_heroku. Out of ideas. Please help. If it's useful I can post the results of my Gemfile, heroku ps and logs. Cheers

    UPDATE: After following @John's direction I now receive the following terminal message: Sending schema Schema: 100% |==========================================| Time: 00:00:07 Sending indexes schema_migrat: 100% |==========================================| Time: 00:00:00 Sending data 4 tables, 6 records schema_migrat: 0% | | ETA: --:--:-- Saving session to push_201111070749.dat.. !!! 
Caught Server Exception HTTP CODE: 500 Taps Server Error: LoadError: no such file to load -- sequel/adapters/ And the following warnings: ["/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:in require'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:inblock in tsk_require'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:72:in block in check_requiring_thread'", "<internal:prelude>:10:insynchronize'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:69:in check_requiring_thread'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:intsk_require'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/database/connecting.rb:25:in adapter_class'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/database/connecting.rb:54:inconnect'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:119:in connect'", "/app/lib/taps/db_session.rb:14:inconn'", "/app/lib/taps/server.rb:91:in block in <class:Server>'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:865:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:865:in block in route'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:521:ininstance_eval'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:521:in route_eval'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:500:inblock (2 levels) in route!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:497:in catch'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:497:inblock in route!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:476:in each'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:476:inroute!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:601:in dispatch!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:411:inblock in call!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:in instance_eval'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:inblock in invoke'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:in catch'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:ininvoke'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:411:in call!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:399:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/auth/basic.rb:25:in call'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:979:inblock in call'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:1005:in synchronize'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:979:incall'", "/home/heroku_rack/lib/static_assets.rb:9:in call'", "/home/heroku_rack/lib/last_access.rb:15:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/urlmap.rb:47:in block in call'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/urlmap.rb:41:ineach'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/urlmap.rb:41:in call'", "/home/heroku_rack/lib/date_header.rb:14:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/builder.rb:77:in call'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:76:inblock in pre_process'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:74:in catch'", 
"/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:74:inpre_process'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:57:in process'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:42:inreceive_data'", "/app/.bundle/gems/ruby/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in run_machine'", "/app/.bundle/gems/ruby/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:inrun'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/backends/base.rb:57:in start'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/server.rb:156:instart'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/controllers/controller.rb:80:in start'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/runner.rb:177:inrun_command'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/runner.rb:143:in run!'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/bin/thin:6:in'", "/usr/ruby1.9.2/bin/thin:19:in load'", "/usr/ruby1.9.2/bin/thin:19:in'"]

    Read the article

  • Java update/install via group policy

    - by Maximus
    I trying to deploy the latest Java RE version via GP, Java 7 update 9. I want to update computers that are currently running an older version of Java, a mixture of 7.6 and 7.7, some computers are running versions as old as 6.31. Some are running a mixture of both. I would also like this GP to install Java if it's not installed. Previously I used push out Java updates to users machines as Java didn't remove the old version. So when it was done the user would restart their browser or pc to start using the latest version. Not the best way to manage it as it leaves the old version installed but it worked. I've created group policies before for printer deployment, log on drive mapping scripts, but never software deployment. I've extracted the Java MSI and created a transform file to suppress reboot etc using orca. As described on this site http://ivan.dretvic.com/2011/06/how-to-package-and-deploy-java-jre-1-6-0_26-via-group-policy/. I have also tried saving the edited MSI directly and that didn't work either. But it just won't deploy. I have tried to enable logging as suggested on this site http://openofficetechnology.com/node/32, GPO logging via UserEnvDebugLevel, Software deployment logging via AppmgmtDebugLevel and MSI logging, but there is no log C:\Windows\Debug\UserMode\userenv.log being created. The windows event viewer has the following errors: Error 24/10/2012 11:44:04 AM - "Failed to apply changes to software installation settings. Software changes could not be applied. A previous log entry with details should exist. The error was : %%1612" Information 24/10/2012 11:44:04 AM - "The removal of the assignment of application Java 7 Update 9 - FB Java Transform from policy JavaDeploy succeeded." Error 24/10/2012 11:44:04 AM - "The install of application Java 7 Update 9 - FB Java Transform from policy JavaDeploy failed. The error was : %%1612" There is a log created for MSI logging and it's as below. It says the source is invalid but it exists on the share and the PC that I'm testing has permissions and I've included the recommendation here Group Policy installation failed error 1274 to enable "Always wait for the network at computer startup and logon" === Verbose logging started: 24/10/2012 11:43:59 Build type: SHIP UNICODE 5.00.7601.00 Calling process: C:\Windows\system32\svchost.exe === MSI (c) (9C:EC) [11:43:59:898]: Resetting cached policy values MSI (c) (9C:EC) [11:43:59:898]: Machine policy value 'Debug' is 3 MSI (c) (9C:EC) [11:43:59:898]: ******* RunEngine: ******* Product: {26a24ae4-039d-4ca4-87b4-2f83217009ff} ******* Action: ******* CommandLine: ********** MSI (c) (9C:EC) [11:43:59:898]: Client-side and UI is none or basic: Running entire install on the server. MSI (c) (9C:EC) [11:43:59:898]: Grabbed execution mutex. MSI (c) (9C:EC) [11:44:03:431]: Cloaking enabled. MSI (c) (9C:EC) [11:44:03:431]: Attempting to enable all disabled privileges before calling Install on Server MSI (c) (9C:EC) [11:44:03:439]: Incrementing counter to disable shutdown. Counter after increment: 0 MSI (s) (2C:70) [11:44:03:574]: Running installation inside multi-package transaction {26a24ae4-039d-4ca4-87b4-2f83217009ff} MSI (s) (2C:70) [11:44:03:574]: Grabbed execution mutex. 
MSI (s) (2C:7C) [11:44:03:607]: Resetting cached policy values MSI (s) (2C:7C) [11:44:03:607]: Machine policy value 'Debug' is 3 MSI (s) (2C:7C) [11:44:03:607]: ******* RunEngine: ******* Product: {26a24ae4-039d-4ca4-87b4-2f83217009ff} ******* Action: ******* CommandLine: ********** MSI (s) (2C:7C) [11:44:03:607]: Machine policy value 'DisableUserInstalls' is 0 MSI (s) (2C:7C) [11:44:03:623]: User policy value 'SearchOrder' is 'nmu' MSI (s) (2C:7C) [11:44:03:624]: User policy value 'DisableMedia' is 0 MSI (s) (2C:7C) [11:44:03:624]: Machine policy value 'AllowLockdownMedia' is 0 MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Media enabled only if package is safe. MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Looking for sourcelist for product {26a24ae4-039d-4ca4-87b4-2f83217009ff} MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Adding {26a24ae4-039d-4ca4-87b4-2f83217009ff}; to potential sourcelist list (pcode;disk;relpath). MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Now checking product {26a24ae4-039d-4ca4-87b4-2f83217009ff} MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Media is enabled for product. MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Attempting to use LastUsedSource from source list. MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Processing net source list. MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Trying source \\server\share\deployment\Java\stable\x32\. MSI (s) (2C:7C) [11:44:03:650]: Note: 1: 2303 2: 5 3: \\server\share\ MSI (s) (2C:7C) [11:44:03:650]: Note: 1: 1325 2: deployment MSI (s) (2C:7C) [11:44:03:650]: ConnectToSource: CreatePath/CreateFilePath failed with: -2147483648 1325 -2147483648 MSI (s) (2C:7C) [11:44:03:650]: ConnectToSource (con't): CreatePath/CreateFilePath failed with: -2147483648 -2147483648 MSI (s) (2C:7C) [11:44:03:650]: SOURCEMGMT: net source '\\server\share\deployment\Java\stable\x32\' is invalid. MSI (s) (2C:7C) [11:44:03:650]: Note: 1: 1706 2: -2147483647 3: jre1.7.0_09.msi MSI (s) (2C:7C) [11:44:03:650]: SOURCEMGMT: Processing media source list. MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 2203 2: 3: -2147287037 MSI (s) (2C:7C) [11:44:04:668]: SOURCEMGMT: Source is invalid due to missing/inaccessible package. MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 1706 2: -2147483647 3: jre1.7.0_09.msi MSI (s) (2C:7C) [11:44:04:668]: SOURCEMGMT: Processing URL source list. MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 1402 2: UNKNOWN\URL 3: 2 MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 1706 2: -2147483647 3: jre1.7.0_09.msi MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 1706 2: 3: jre1.7.0_09.msi MSI (s) (2C:7C) [11:44:04:668]: SOURCEMGMT: Failed to resolve source MSI (s) (2C:7C) [11:44:04:668]: MainEngineThread is returning 1612 MSI (s) (2C:70) [11:44:04:670]: User policy value 'DisableRollback' is 0 MSI (s) (2C:70) [11:44:04:670]: Machine policy value 'DisableRollback' is 0 MSI (s) (2C:70) [11:44:04:670]: Incrementing counter to disable shutdown. Counter after increment: 0 MSI (s) (2C:70) [11:44:04:670]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2 MSI (s) (2C:70) [11:44:04:671]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2 MSI (s) (2C:70) [11:44:04:671]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\InProgress 3: 2 MSI (s) (2C:70) [11:44:04:671]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\InProgress 3: 2 MSI (s) (2C:70) [11:44:04:671]: Decrementing counter to disable shutdown. 
If counter >= 0, shutdown will be denied. Counter after decrement: -1 MSI (s) (2C:70) [11:44:04:671]: Restoring environment variables MSI (c) (9C:EC) [11:44:04:675]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1 MSI (c) (9C:EC) [11:44:04:675]: MainEngineThread is returning 1612 === Verbose logging stopped: 24/10/2012 11:44:04 === I'm not sure what my next approach should be. Any help would be much appreciated. Thanks.
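
    One reading of that log, offered as a sketch: return code 1612 with "net source ... is invalid" during machine-assigned software installation usually means the computer account, not any user, cannot read the package share (GPO software deployment runs as the machine before anyone logs on). The usual checks are that both the share permissions and the NTFS ACLs on the folder behind \\server\share grant read access to the machine accounts (e.g. "Domain Computers" or "Authenticated Users"), and that the GPO references the UNC path rather than a mapped drive. The commands below are illustrative; the local path and group name will differ in your environment:

        :: NTFS read access for machine accounts on the folder behind the share
        icacls "D:\shares\deployment" /grant "Domain Computers":(OI)(CI)RX /T

        :: Verify the MSI is reachable as SYSTEM from a test client
        :: (psexec is part of Sysinternals):
        psexec -s cmd /c dir \\server\share\deployment\Java\stable\x32\jre1.7.0_09.msi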

    Read the article

  • jQuery hover not working properly in browsers other than IE6

    - by Kranthi
    Hi All, We developed navigation bar using jQuery 1.4.2. Functionality is to show submneus for different menu items when user hovers on it. It is working perfectly in IE6 but we see some weird problems in other browsers. In Firefox, when the page gets loaded, it works fine but when we hit f5, the submenu wont appear on hover. To get submenu we need to click on any other menu item. In Chrome, its the same to add on, some time even we click on any menu item, and hover submenu wont show up. In Safari, nothing shows up most of the times, but on clicking 5-6 menu items, submenu is shown.When we see loading text in safari it shows the submenu. but on every click the loading text wont appear.. We are very much confused..is it the browser behavior/code/jquery?? Below is the snippet: Html: <ul> <li><a class="mainLinks" href="google.com">Support</a> <ul><li>Sublink1</li></ul> </ul> Html code is absolutely fine. Jquery: var timeout = null; var ie = (document.all) ? true : false; $(document).ready(function(){ var $mainLink = null; var $subLink = null; $(".mainLinks").each(function(){ if ($(this).hasClass("current")) { $(this).mouseout(function() { var $this = $(this); timeout = setTimeout(function() { $(".popUpNav", $this.parent()).css({ visibility : 'hidden' }); $('.popUpArrow').hide(); ieCompat('show'); }, 200); }); } else { $(this).hover(function() { reset(); ieCompat('hide'); // Saving this for later use in the popUpNav hover event $mainLink = $(this); $popUpNav = $(".popUpNav", $mainLink.parent()); // Default width is width of one column var popupWidth = $('.popUpNavSection').width() + 20; // Calculate popup width depending on the number of columns var numColumns = $popUpNav.find('.popUpNavSection').length; if (numColumns != 0) { popupWidth *= numColumns; } var elPos = $mainLink.position(); var leftOffset = 0; if (elPos.left + popupWidth > 950) { leftOffset = elPos.left + popupWidth - 948; } $popUpNav.css({ top : elPos.top + 31 + 'px', left : elPos.left - leftOffset + 'px', visibility : 'visible', width : popupWidth + 'px' }); $('.popUpArrow').css({ left : elPos.left + Math.round(($mainLink.width() / 2)) + 20 + 'px', top : '27px' }).show(); }, function() { var $this = $(this); timeout = setTimeout(function() { $(".popUpNav", $this.parent()).css({ visibility : 'hidden' }); $('.popUpArrow').hide() ieCompat('show'); }, 200); } ); } }); $(".subLinks").hover( function(e) { $subLink = $(this); var elPos = $subLink.position(); var popupWidth = $(".popUpNavLv2",$subLink.parent()).width(); var leftOffset = 0; ieCompat('hide'); $(".popUpNavLv2",$subLink.parent()).css({ top : elPos.top + 32 + 'px', left : elPos.left - leftOffset + 'px', visibility : 'visible' }); }, function() { var $this = $(this); timeout = setTimeout(function() { $(".popUpNavLv2", $this.parent()).css({ visibility : 'hidden' }); }, 200); ieCompat('show'); } ); $('.popUpNav').hover( function() { clearTimeout(timeout); $mainLink.addClass('current'); $(this).css('visibility', 'visible'); $('.popUpArrow').show(); }, function() { $mainLink.removeClass('current'); $(this).css('visibility', 'hidden'); $('.popUpArrow').hide(); ieCompat('show'); } ); $('.popUpNavLv2').hover( function() { clearTimeout(timeout); $(this).css('visibility', 'visible'); ieCompat('hide'); }, function() { ieCompat('show'); $(this).css('visibility', 'hidden'); } ); // If on mac, reduce left padding on the tabs if (/mac os x/.test(navigator.userAgent.toLowerCase())) { $('.mainLinks, .mainLinksHome').css('padding-left', '23px'); } }); Thanks a lot in advance for looking 
into it. Thanks | Kranthi
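
    Hard to diagnose without the page itself, but the pattern (fine on first load, broken after F5 or until something is clicked) usually means the hover handlers are bound before the menu markup exists, or are lost when part of the page is re-rendered, rather than a genuine browser bug. A minimal defensive sketch under that assumption, with one shared hide timer and handlers bound to both the link and its submenu; the selectors are illustrative and would need to match your real markup:

        $(document).ready(function () {
            var hideTimer = null;

            $('.mainLinks').each(function () {
                var $link  = $(this);
                var $popup = $link.siblings('ul');   // adjust to the real submenu selector

                $link.add($popup).hover(
                    function () {                    // mouse enters the link or the submenu
                        clearTimeout(hideTimer);
                        $popup.css('visibility', 'visible');
                    },
                    function () {                    // mouse leaves: hide after a grace period
                        hideTimer = setTimeout(function () {
                            $popup.css('visibility', 'hidden');
                        }, 200);
                    }
                );
            });
        });

    If any of the menu HTML arrives or is replaced via AJAX, the binding has to happen after that content exists (or use delegated events), which would also explain why Safari only behaves once its "loading" phase has finished.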

    Read the article

  • Oracle performance problem

    - by jreid42
    We are using an Oracle 11g machine that is very powerful and has redundant storage, etc. It's a beast, from what I have been told. We just got this DB for a tool that had about 20 people using it when I first came on as a co-op; now it's upwards of 150 people, and I am the only one working on it :(

    We currently have a system in place that distributes Perl scripts across our entire data center, essentially giving us a sort of "grid" computing power. The Perl scripts run a sort of simulation and report the results back to the database. They do selects / inserts. The load is not very high for each script, but it could be happening across 20-50 systems at the same time. We then have multiple data centers and users all hitting the same database with this same approach.

    Our main problem is that the database is getting overloaded with connections and having to drop some. We sometimes have upwards of 500 connections. These are old Perl scripts and they do not handle this well. Essentially they fail and the results are lost. I would rather avoid having to rewrite a lot of them, as they are poorly written and a headache to even look at. The database itself is not overloaded; the connection overhead is just too high. We open a connection, make a quick query and then drop the connection. Very short connections, but many of them. The database team has basically said we need to lower the number of connections or they are going to ignore us.

    Because this is distributed across our farm we can't implement persistent connections. I do this with our webserver, but it's on a fixed system. The others are Perl scripts that get opened and closed by the distribution tool and thus aren't always running. What would be my best approach to resolving this issue? The scripts themselves can wait for a connection to be open; they do not need to act immediately. Some sort of queuing system?

    It has been suggested that I set up a few instances of a tool called "SQL Relay", maybe one in each data center. How reliable is this tool? How good is this approach? Would it work for what we need? We could have one for each data center and relay requests through it to our main database, keeping a pipeline of open persistent connections? Does this make sense? Are there any other suggestions you can make? Any ideas? Any help would be greatly appreciated.

    Sadly I am just a co-op student working for a very big company, and somehow all of this has landed on my shoulders (there is literally nobody to ask for help; it's a hardware company, everybody is a hardware engineer, and the database team is useless and in India), and I am quite lost as to what the best approach would be. I am extremely overworked and this problem is interfering with ongoing progress and basically needs to be resolved as quickly as possible; preferably without rewriting the whole system, purchasing hardware (not gonna happen), or shooting myself in the foot. HELP LOL!
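
    Since the scripts can afford to wait, the cheapest incremental change is usually to stop treating a refused connection as fatal: retry with backoff and jitter so hundreds of jobs don't hammer the listener at once, and keep each connection as short as it already is. That buys time while the heavier options (SQL Relay or another external connection pool per data center, or Oracle-side shared servers) are evaluated. A minimal Perl sketch under that assumption; the DSN, credentials and retry limits are placeholders:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use DBI;

        # Placeholder connection details
        my $dsn = 'dbi:Oracle:host=dbhost;sid=TOOLDB';
        my ($user, $pass) = ('tool_user', 'secret');

        sub connect_with_retry {
            my ($max_tries) = @_;
            for my $try (1 .. $max_tries) {
                my $dbh = DBI->connect($dsn, $user, $pass,
                    { RaiseError => 0, PrintError => 0, AutoCommit => 1 });
                return $dbh if $dbh;
                # Exponential backoff plus jitter so the farm doesn't retry in lockstep
                my $wait = (2 ** $try) + int(rand(5));
                warn "connect attempt $try failed (" . ($DBI::errstr || 'unknown') . "), waiting ${wait}s\n";
                sleep $wait;
            }
            die "could not connect after $max_tries attempts: " . ($DBI::errstr || 'unknown');
        }

        my $dbh = connect_with_retry(6);
        # ... the existing short select/insert goes here ...
        $dbh->disconnect;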

    Read the article

  • Rails User-Profile model challenges

    - by Craig
    I am attempting to create an enrollment process similar to SO's: route to an OpenID provider provider returns the user's information to the UsersController (a guess) UsersController creates user, then routes to the ProfilesController's new or edit action. For now, I'm simply trying to create the user, then route to the ProfilesController's new or edit action (not sure which I should be using). Here's what I have thus far: Models: class User < ActiveRecord::Base has_one :profile end class Profile < ActiveRecord::Base belongs_to :user end Routes: map.resources :users do |user| user.resource :profile end new_user_profile GET /users/:user_id/profile/new(.:format) {:controller=>"profiles", :action=>"new"} edit_user_profile GET /users/:user_id/profile/edit(.:format) {:controller=>"profiles", :action=>"edit"} user_profile GET /users/:user_id/profile(.:format) {:controller=>"profiles", :action=>"show"} PUT /users/:user_id/profile(.:format) {:controller=>"profiles", :action=>"update"} DELETE /users/:user_id/profile(.:format) {:controller=>"profiles", :action=>"destroy"} POST /users/:user_id/profile(.:format) {:controller=>"profiles", :action=>"create"} users GET /users(.:format) {:controller=>"users", :action=>"index"} POST /users(.:format) {:controller=>"users", :action=>"create"} new_user GET /users/new(.:format) {:controller=>"users", :action=>"new"} edit_user GET /users/:id/edit(.:format) {:controller=>"users", :action=>"edit"} user GET /users/:id(.:format) {:controller=>"users", :action=>"show"} PUT /users/:id(.:format) {:controller=>"users", :action=>"update"} DELETE /users/:id(.:format) {:controller=>"users", :action=>"destroy"} Controllers: class UsersController < ApplicationController # generate new-user form def new @user = User.new end # process new-user-form post def create @user = User.new(params[:user]) if @user.save redirect_to new_user_profile_path(@user) ... end end # generate edit-user form def edit @user = User.find(params[:id]) end # process edit-user-form post def update @user = User.find(params[:id]) respond_to do |format| if @user.update_attributes(params[:user]) flash[:notice] = 'User was successfully updated.' format.html { redirect_to(users_path) } format.xml { head :ok } ... end end end class ProfilesController < ApplicationController before_filter :get_user def get_user @user = User.find(params[:user_id]) end # generate new-profile form def new @user.profile = Profile.new @profile = @user.profile end # process new-profile-form post def create @user.profile = Profile.new(params[:profile]) @profile = @user.profile respond_to do |format| if @profile.save flash[:notice] = 'Profile was successfully created.' format.html { redirect_to(@profile) } format.xml { render :xml => @profile, :status => :created, :location => @profile } ... end end end # generate edit-profile form def edit @profile = @user.profile end # generate edit-profile-form post def update @profile = @user.profile respond_to do |format| if @profile.update_attributes(params[:profile]) flash[:notice] = 'Profile was successfully updated.' # format.html { redirect_to(@profile) } format.html { redirect_to(user_profile(@user)) } format.xml { head :ok } else format.html { render :action => "edit" } format.xml { render :xml => @profile.errors, :status => :unprocessable_entity } end end end Edit-User View: ... <% form_for(@user) do |f| %> ... New-Profile View: ... <% form_for([@user,@profile]) do |f| %> .. 
    I'm having two problems:

    1. When saving an edit to the User model, the UsersController attempts to route to http://localhost:3000/users/1/profile.%23%3Cprofile:0x10438e3e8%3E instead of http://localhost:3000/users/1/profile.
    2. When the new-profile form is being rendered, it throws an error that reads: undefined method `user_profiles_path' for #

    Is it better to create a blank profile when the user is created (in the UsersController) and then edit it, or to follow the RESTful convention of creating the profile in the ProfilesController (as I have done)? What am I missing? I did review Associating Two Models in Rails (user and profile), but it didn't address my needs. Thanks for your time.
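
    Both symptoms look consistent with the singular nested resource, offered here as a sketch rather than a definitive diagnosis: user_profile(@user) is not a path helper (so the profile object gets interpolated straight into the URL), and form_for [@user, @profile] guesses the plural user_profiles_path unless the URL for the singular resource is given explicitly.

        # users_controller.rb -- use the _path helper when redirecting
        format.html { redirect_to(user_profile_path(@user)) }

        # new.html.erb / edit.html.erb -- spell out the URL for the singular
        # nested resource so form_for stops looking for user_profiles_path
        <% form_for [@user, @profile],
                    :url  => user_profile_path(@user),
                    :html => { :method => (@profile.new_record? ? :post : :put) } do |f| %>
          ...
        <% end %>

    As for where to build the record, either approach works; creating an empty profile in UsersController#create is mostly a convenience so edit/update can assume it exists, while the current ProfilesController#new/create flow is closer to the RESTful convention already expressed in the routes.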

    Read the article

  • Can't Remove Logical Drive/Array from HP P400

    - by Myles
    This is my first post here. Thank you in advance for any assistance with this matter. I'm trying to remove a logical drive (logical drive 2) and an array (array "B") from my Smart Array P400. The host is a DL580 G5 running 64-bit Red Hat Enterprise Linux Server release 5.7 (Tikanga). I am unable to remove the array using either hpacucli or cpqacuxe. I believe it is because of "OS Status: LOCKED". The file system that lives on this array has been unmounted. I do not want to reboot the host. Is there some way to "release" this logical drive so I can remove the array? Note that I do not need to preserve the data on logical drive 2. I intend to physically remove the drives from the machine and replace them with larger drives. I'm using the cciss kernel module that ships with Red Hat 5.7. Here is some information pertaining to the host and the P400 configuration: [root@gort ~]# cat /etc/redhat-release Red Hat Enterprise Linux Server release 5.7 (Tikanga) [root@gort ~]# uname -a Linux gort 2.6.18-274.el5 #1 SMP Fri Jul 8 17:36:59 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux [root@gort ~]# rpm -qa | egrep '^(hp|cpq)' cpqacuxe-9.30-15.0 hp-health-9.25-1551.7.rhel5 hpsmh-7.1.2-3 hpdiags-9.3.0-466 hponcfg-3.1.0-0 hp-snmp-agents-9.25-2384.8.rhel5 hpacucli-9.30-15.0 [root@gort ~]# hpacucli HP Array Configuration Utility CLI 9.30.15.0 Detecting Controllers...Done. Type "help" for a list of supported commands. Type "exit" to close the console. => ctrl all show config detail Smart Array P400 in Slot 0 (Embedded) Bus Interface: PCI Slot: 0 Cache Serial Number: PA82C0J9SVW34U RAID 6 (ADG) Status: Enabled Controller Status: OK Hardware Revision: D Firmware Version: 7.22 Rebuild Priority: Medium Expand Priority: Medium Surface Scan Delay: 15 secs Surface Scan Mode: Idle Wait for Cache Room: Disabled Surface Analysis Inconsistency Notification: Disabled Post Prompt Timeout: 0 secs Cache Board Present: True Cache Status: OK Cache Ratio: 25% Read / 75% Write Drive Write Cache: Disabled Total Cache Size: 256 MB Total Cache Memory Available: 208 MB No-Battery Write Cache: Disabled Cache Backup Power Source: Batteries Battery/Capacitor Count: 1 Battery/Capacitor Status: OK SATA NCQ Supported: True Logical Drive: 1 Size: 136.7 GB Fault Tolerance: RAID 1 Heads: 255 Sectors Per Track: 32 Cylinders: 35132 Strip Size: 128 KB Full Stripe Size: 128 KB Status: OK Caching: Enabled Unique Identifier: 600508B100184A395356573334550002 Disk Name: /dev/cciss/c0d0 Mount Points: /boot 101 MB, /tmp 7.8 GB, /usr 3.9 GB, /usr/local 2.0 GB, /var 3.9 GB, / 2.0 GB, /local 113.2 GB OS Status: LOCKED Logical Drive Label: A0027AA78DEE Mirror Group 0: physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK) Mirror Group 1: physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK) Drive Type: Data Array: A Interface Type: SAS Unused Space: 0 MB Status: OK Array Type: Data physicaldrive 1I:1:1 Port: 1I Box: 1 Bay: 1 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM57RF40000983878FX Model: HP DG146BB976 Current Temperature (C): 29 Maximum Temperature (C): 35 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown physicaldrive 1I:1:2 Port: 1I Box: 1 Bay: 2 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM55VQC000098388524 Model: HP DG146BB976 Current Temperature (C): 29 Maximum Temperature (C): 36 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown Logical Drive: 2 Size: 546.8 GB 
Fault Tolerance: RAID 5 Heads: 255 Sectors Per Track: 32 Cylinders: 65535 Strip Size: 64 KB Full Stripe Size: 256 KB Status: OK Caching: Enabled Parity Initialization Status: Initialization Completed Unique Identifier: 600508B100184A395356573334550003 Disk Name: /dev/cciss/c0d1 Mount Points: None OS Status: LOCKED Logical Drive Label: A5C9C6F81504 Drive Type: Data Array: B Interface Type: SAS Unused Space: 0 MB Status: OK Array Type: Data physicaldrive 1I:1:3 Port: 1I Box: 1 Bay: 3 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM2H5PE00009802NK19 Model: HP DG146ABAB4 Current Temperature (C): 30 Maximum Temperature (C): 37 PHY Count: 1 PHY Transfer Rate: Unknown physicaldrive 1I:1:4 Port: 1I Box: 1 Bay: 4 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM28YY400009750MKPJ Model: HP DG146ABAB4 Current Temperature (C): 31 Maximum Temperature (C): 36 PHY Count: 1 PHY Transfer Rate: 3.0Gbps physicaldrive 2I:1:5 Port: 2I Box: 1 Bay: 5 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM2FGYV00009802N3GN Model: HP DG146ABAB4 Current Temperature (C): 30 Maximum Temperature (C): 38 PHY Count: 1 PHY Transfer Rate: Unknown physicaldrive 2I:1:6 Port: 2I Box: 1 Bay: 6 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM8AFAK00009920MMV1 Model: HP DG146BB976 Current Temperature (C): 31 Maximum Temperature (C): 41 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown physicaldrive 2I:1:7 Port: 2I Box: 1 Bay: 7 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM2FJQD00009801MSHQ Model: HP DG146ABAB4 Current Temperature (C): 29 Maximum Temperature (C): 39 PHY Count: 1 PHY Transfer Rate: Unknown
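
    A sketch of the usual sequence, under the assumption that "OS Status: LOCKED" simply means the kernel still has /dev/cciss/c0d1 claimed (a stale mount reference, LVM volume group, device-mapper/multipath map, or swap area), so the controller refuses to delete the logical drive while it is open. Freeing whatever still holds it and then deleting LD 2 should also release array B, since it has no other logical drives; verify each step against the hpacucli documentation before deleting anything:

        # Find what still holds the block device open
        lsof /dev/cciss/c0d1* 2>/dev/null
        dmsetup table | grep -i c0d1      # device-mapper / LVM / multipath maps
        pvs 2>/dev/null                   # LVM physical volumes
        cat /proc/swaps                   # stray swap areas

        # Release anything found above, e.g.
        #   vgchange -an <vgname>   /   dmsetup remove <map>   /   swapoff <device>

        # Then remove the logical drive on the controller
        hpacucli ctrl slot=0 ld 2 delete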

    Read the article

  • ASPX AJAX form post help

    - by StealthRT
    Hey all, i have this peice of code that allows a user to select a jpg image, resize it and uploads it to the server driectory. The problem being is that it reloads the aspx page when it saves the image. My question is-is there any way to do this same thing but with ajax so that it doesn't leave the page after submitting it? I've done this pleanty of times with classic asp pages but never with a aspx page. Here is the code for the ASPX page: <%@ Page Trace="False" Language="vb" aspcompat="false" debug="true" validateRequest="false"%> <%@ Import Namespace=System.Drawing %> <%@ Import Namespace=System.Drawing.Imaging %> <%@ Import Namespace=System.Drawing.Text %> <%@ Import Namespace=System %> <%@ Import Namespace=System.IO %> <%@ Import Namespace=System.Web %> <%@ Import Namespace=System.ServiceProcess %> <%@ Import Namespace=Microsoft.Data.Odbc %> <%@ Import Namespace=System.Data.Odbc %> <%@ Import Namespace=MySql.Data.MySqlClient %> <%@ Import Namespace=MySql.Data %> <%@ Import Namespace=System.Drawing.Drawing2D %> <%@ Import Namespace="System.Data" %> <%@ Import Namespace="System.Data.ADO" %> <%@ Import Namespace=ADODB %> <SCRIPT LANGUAGE="VBScript" runat="server"> const Lx = 200 const Ly = 60 const upload_dir = "/img/avatar/" const upload_original = "tmpAvatar" const upload_thumb = "thumb" const upload_max_size = 256 dim fileExt dim newWidth, newHeight as integer dim l2 dim fileFld as HTTPPostedFile Dim originalimg As System.Drawing.Image dim msg dim upload_ok as boolean </script> <% Dim theID, theEmail, maleOrFemale theID = Request.QueryString("ID") theEmail = Request.QueryString("eMail") maleOrFemale = Request.QueryString("MF") randomize() upload_ok = false if lcase(Request.ServerVariables("REQUEST_METHOD"))="post" then fileFld = request.files(0) if fileFld.ContentLength > upload_max_size * 1024 then msg = "Sorry, the image must be less than " & upload_max_size & "Kb" else try fileExt = System.IO.Path.GetExtension(fileFld.FileName).ToLower() if fileExt = ".jpg" then originalImg = System.Drawing.Image.FromStream(fileFld.InputStream) if originalImg.Height > Ly then newWidth = Ly * (originalImg.Width / originalImg.Height) newHeight = Ly end if Dim thumb As New Bitmap(newWidth, newHeight) Dim gr_dest As Graphics = Graphics.FromImage(thumb) dim sb = new SolidBrush(System.Drawing.Color.White) gr_dest.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality gr_dest.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighQuality gr_dest.FillRectangle(sb, 0, 0, thumb.Width, thumb.Height) gr_dest.DrawImage(originalImg, 0, 0, thumb.Width, thumb.Height) try originalImg.save(Server.MapPath(upload_dir & upload_original & fileExt), originalImg.rawformat) thumb.save(Server.MapPath(upload_dir & theID & fileExt), originalImg.rawformat) msg = "Uploaded " & fileFld.FileName & " to " & Server.MapPath(upload_dir & upload_original & fileExt) upload_ok = true File.Delete(Server.MapPath(upload_dir & upload_original & fileExt)) catch msg = "Sorry, there was a problem saving your avatar. Please try again." end try if not thumb is nothing then thumb.Dispose() thumb = nothing end if else msg = "That image does not seem to be a JPG. Upload only JPG images." end if catch msg = "That image does not seem to be a JPG." 
end try end if if not originalImg is nothing then originalImg.Dispose() originalImg = nothing end if end if %><head> <meta http-equiv="pragma" content="no-cache" /> </head> <html> <script type="text/javascript" src="js/jquery-1.3.min.js"></script> <form enctype="multipart/form-data" method="post" runat="server" id="sendImg"> <input type="file" name="upload_file" id="upload_file" style="-moz-opacity: 0; opacity:0; filter: alpha(opacity=0); margin-top: 5px; float:left; cursor:pointer;" onChange="$('#sendImg').submit();" > <input type="submit" value="Upload" style="visibility:hidden; display:none;"> </form> </body> </html> Any help would be great! :o) David
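    One library-free approach from that era that may do what is wanted here: give the form a target pointing at a hidden iframe, so the POST happens inside the iframe and the visible page never reloads. A sketch (uploadTarget is an arbitrary name, and the action would be this same .aspx page):

        <iframe name="uploadTarget" id="uploadTarget" style="display:none;"></iframe>

        <form enctype="multipart/form-data" method="post" runat="server"
              id="sendImg" target="uploadTarget">
            <input type="file" name="upload_file" id="upload_file"
                   onchange="$('#sendImg').submit();" />
        </form>

        <script type="text/javascript">
            // fires when the iframe finishes loading the server's response,
            // i.e. once the upload has been processed (guard against the
            // initial empty load if needed)
            document.getElementById('uploadTarget').onload = function () {
                // refresh the avatar <img> here instead of alerting
                alert('Avatar uploaded');
            };
        </script>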

    Read the article

  • How can I make a jQuery UI 'draggable()' div draggable for touchscreen?

    - by artlung
    I have a jQuery UI draggable() that works in Firefox and Chrome. The user interface concept is basically click to create a "post-it" type item. Basically, I click or tap on div#everything (100% high and wide) that listens for clicks, and an input textarea displays. You add text, and then when you're done it saves it. You can drag this element around. That is working on normal browsers, but on an iPad I can test with I can't drag the items around. If I touch to select (it then dims slightly), I can't then drag it. It won't drag left or right at all. I can drag up or down, but I'm not dragging the individual div, I'm dragging the whole webpage. So here's the code I use to capture clicks: $('#everything').bind('click', function(e){ var elem = document.createElement('DIV'); STATE.top = e.pageY; STATE.left = e.pageX; var e = $(elem).css({ top: STATE.top, left: STATE.left }).html('<textarea></textarea>') .addClass('instance') .bind('click', function(event){ return false; }); $(this).append(e); }); And here's the code I use to "save" the note and turn the input div into just a display div: $('textarea').live('mouseleave', function(){ var val = jQuery.trim($(this).val()); STATE.content = val; if (val == '') { $(this).parent().remove(); } else { var div = $(this).parent(); div.text(val).css({ height: '30px' }); STATE.height = 30; if ( div.width() !== div[0].clientWidth || div.height () !== div[0].clientHeight ) { while (div.width() !== div[0].clientWidth || div.height () !== div[0].clientHeight) { var h = div.height() + 10; STATE.height = h; div.css({ height: (h) + 'px' }); // element just got scrollbars } } STATE.guid = uniqueID() div.addClass('savedNote').attr('id', STATE.guid).draggable({ stop: function() { var offset = $(this).offset(); STATE.guid = $(this).attr('id'); STATE.top = offset.top; STATE.left = offset.left; STATE.content = $(this).text(); STATE.height = $(this).height(); STATE.save(); } }); STATE.save(); $(this).remove(); } }); And I have this code when I load the page for saved notes: $('.savedNote').draggable({ stop: function() { STATE.guid = $(this).attr('id'); var offset = $(this).offset(); STATE.top = offset.top; STATE.left = offset.left; STATE.content = $(this).text(); STATE.height = $(this).height(); STATE.save(); } }); My STATE object handles saving the notes. Onload, this is the whole html body: <body> <div id="everything"></div> <div class="instance savedNote" id="iddd1b0969-c634-8876-75a9-b274ff87186b" style="top:134px;left:715px;height:30px;">Whatever dude</div> <div class="instance savedNote" id="id8a129f06-7d0c-3cb3-9212-0f38a8445700" style="top:131px;left:347px;height:30px;">Appointment 11:45am</div> <div class="instance savedNote" id="ide92e3d13-afe8-79d7-bc03-818d4c7a471f" style="top:144px;left:65px;height:80px;">What do you think of a board where you can add writing as much as possible?</div> <div class="instance savedNote" id="idef7fe420-4c19-cfec-36b6-272f1e9b5df5" style="top:301px;left:534px;height:30px;">This was submitted</div> <div class="instance savedNote" id="id93b3b56f-5e23-1bd1-ddc1-9be41f1efb44" style="top:390px;left:217px;height:30px;">Hello world from iPad.</div> </body> So, my question is really: how can I make this work better on iPad? I'm not set on jQuery UI, I'm wondering if this is something I'm doing wrong with jQuery UI, or jQuery, or whether there may be better frameworks for doing cross-platform/backward compatible draggable() elements that will work for touchscreen UIs. 
More general comments about how to write UI components like this would be welcome as well. Thanks!
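    For what it is worth, jQuery UI 1.x only listens for mouse events, which is why draggable() does nothing for touches on the iPad. The usual workaround is a small shim that re-dispatches touch events as the equivalent mouse events (the jQuery UI Touch Punch plugin packages exactly this and may be the easier drop-in). A sketch of the idea:

        // translate touch events into the mouse events jQuery UI's draggable() expects
        function touchHandler(event) {
            var map = { touchstart: "mousedown", touchmove: "mousemove", touchend: "mouseup" };
            var type = map[event.type];
            if (!type) { return; }
            var touch = event.changedTouches[0];
            var simulated = document.createEvent("MouseEvents");
            simulated.initMouseEvent(type, true, true, window, 1,
                touch.screenX, touch.screenY, touch.clientX, touch.clientY,
                false, false, false, false, 0, null);
            touch.target.dispatchEvent(simulated);
            event.preventDefault();   // keeps the whole page from scrolling mid-drag
        }
        document.addEventListener("touchstart", touchHandler, true);
        document.addEventListener("touchmove", touchHandler, true);
        document.addEventListener("touchend", touchHandler, true);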

    Read the article

  • Active Directory Password Policy Problem

    - by Will
    To Clarify: my question is why isn't my password policy applying to people in the domain. Hey guys, having trouble with our password policy in Active Directory. Sometimes it just helps me to type out what I’m seeing It appears to not be applying properly across the board. I am new to this environment and AD in general but I think I have a general grasp of what should be going on. It’s a pretty simple AD setup without too many Group Policies being applied. It looks something like this DOMAIN Default Domain Policy (link enabled) Password Policy (link enabled and enforce) Personal OU Force Password Change (completely empty nothing in this GPO) IT OU Lockout Policy (link enabled and enforced) CS OU Lockout Policy Accouting OU Lockout Policy The password policy and default domain policy both define the same things under Computer ConfigWindows seetings sec settings Account Policies / Password Policy Enforce password History : 24 passwords remembered Maximum Password age : 180 days Min password age: 14 days Minimum Password Length: 6 characters Password must meet complexity requirements: Enabled Store Passwords using reversible encryption: Disabled Account Policies / Account Lockout Policy Account Lockout Duration 10080 Minutes Account Lockout Threshold: 5 invalid login attempts Reset Account Lockout Counter after : 30 minutes IT lockout This just sets the screen saver settings to lock computers when the user is Idle. After running Group Policy modeling it seems like the password policy and default domain policy is getting applied to everyone. Here is the results of group policy modeling on MO-BLANCKM using the mblanck account, as you can see the policies are both being applied , with nothing important being denied Group Policy Results NCLGS\mblanck on NCLGS\MO-BLANCKM Data collected on: 12/29/2010 11:29:44 AM Summary Computer Configuration Summary General Computer name NCLGS\MO-BLANCKM Domain NCLGS.local Site Default-First-Site-Name Last time Group Policy was processed 12/29/2010 10:17:58 AM Group Policy Objects Applied GPOs Name Link Location Revision Default Domain Policy NCLGS.local AD (15), Sysvol (15) WSUS-52010 NCLGS.local/WSUS/Clients AD (54), Sysvol (54) Password Policy NCLGS.local AD (58), Sysvol (58) Denied GPOs Name Link Location Reason Denied Local Group Policy Local Empty Security Group Membership when Group Policy was applied BUILTIN\Administrators Everyone S-1-5-21-507921405-1326574676-682003330-1003 BUILTIN\Users NT AUTHORITY\NETWORK NT AUTHORITY\Authenticated Users NCLGS\MO-BLANCKM$ NCLGS\Admin-ComputerAccounts-GP NCLGS\Domain Computers WMI Filters Name Value Reference GPO(s) None Component Status Component Name Status Last Process Time Group Policy Infrastructure Success 12/29/2010 10:17:59 AM EFS recovery Success (no data) 10/28/2010 9:10:34 AM Registry Success 10/28/2010 9:10:32 AM Security Success 10/28/2010 9:10:34 AM User Configuration Summary General User name NCLGS\mblanck Domain NCLGS.local Last time Group Policy was processed 12/29/2010 11:28:56 AM Group Policy Objects Applied GPOs Name Link Location Revision Default Domain Policy NCLGS.local AD (7), Sysvol (7) IT-Lockout NCLGS.local/Personal/CS AD (11), Sysvol (11) Password Policy NCLGS.local AD (5), Sysvol (5) Denied GPOs Name Link Location Reason Denied Local Group Policy Local Empty Force Password Change NCLGS.local/Personal Empty Security Group Membership when Group Policy was applied NCLGS\Domain Users Everyone BUILTIN\Administrators BUILTIN\Users NT AUTHORITY\INTERACTIVE NT AUTHORITY\Authenticated Users 
LOCAL NCLGS\MissingSkidEmail NCLGS\Customer_Service NCLGS\Email_Archive NCLGS\Job Ticket Users NCLGS\Office Staff NCLGS\CUSTOMER SERVI-1 NCLGS\Prestige_Jobs_Email NCLGS\Telecommuters NCLGS\Everyone - NCL WMI Filters Name Value Reference GPO(s) None Component Status Component Name Status Last Process Time Group Policy Infrastructure Success 12/29/2010 11:28:56 AM Registry Success 12/20/2010 12:05:51 PM Scripts Success 10/13/2010 10:38:40 AM Computer Configuration Windows Settings Security Settings Account Policies/Password Policy Policy Setting Winning GPO Enforce password history 24 passwords remembered Password Policy Maximum password age 180 days Password Policy Minimum password age 14 days Password Policy Minimum password length 6 characters Password Policy Password must meet complexity requirements Enabled Password Policy Store passwords using reversible encryption Disabled Password Policy Account Policies/Account Lockout Policy Policy Setting Winning GPO Account lockout duration 10080 minutes Password Policy Account lockout threshold 5 invalid logon attempts Password Policy Reset account lockout counter after 30 minutes Password Policy Local Policies/Security Options Network Security Policy Setting Winning GPO Network security: Force logoff when logon hours expire Enabled Default Domain Policy Public Key Policies/Autoenrollment Settings Policy Setting Winning GPO Enroll certificates automatically Enabled [Default setting] Renew expired certificates, update pending certificates, and remove revoked certificates Disabled Update certificates that use certificate templates Disabled Public Key Policies/Encrypting File System Properties Winning GPO [Default setting] Policy Setting Allow users to encrypt files using Encrypting File System (EFS) Enabled Certificates Issued To Issued By Expiration Date Intended Purposes Winning GPO SBurns SBurns 12/13/2007 5:24:30 PM File Recovery Default Domain Policy For additional information about individual settings, launch Group Policy Object Editor. Public Key Policies/Trusted Root Certification Authorities Properties Winning GPO [Default setting] Policy Setting Allow users to select new root certification authorities (CAs) to trust Enabled Client computers can trust the following certificate stores Third-Party Root Certification Authorities and Enterprise Root Certification Authorities To perform certificate-based authentication of users and computers, CAs must meet the following criteria Registered in Active Directory only Administrative Templates Windows Components/Windows Update Policy Setting Winning GPO Allow Automatic Updates immediate installation Enabled WSUS-52010 Allow non-administrators to receive update notifications Enabled WSUS-52010 Automatic Updates detection frequency Enabled WSUS-52010 Check for updates at the following interval (hours): 1 Policy Setting Winning GPO Configure Automatic Updates Enabled WSUS-52010 Configure automatic updating: 4 - Auto download and schedule the install The following settings are only required and applicable if 4 is selected. 
Scheduled install day: 0 - Every day Scheduled install time: 03:00 Policy Setting Winning GPO No auto-restart with logged on users for scheduled automatic updates installations Disabled WSUS-52010 Re-prompt for restart with scheduled installations Enabled WSUS-52010 Wait the following period before prompting again with a scheduled restart (minutes): 30 Policy Setting Winning GPO Reschedule Automatic Updates scheduled installations Enabled WSUS-52010 Wait after system startup (minutes): 1 Policy Setting Winning GPO Specify intranet Microsoft update service location Enabled WSUS-52010 Set the intranet update service for detecting updates: http://lavender Set the intranet statistics server: http://lavender (example: http://IntranetUpd01) User Configuration Administrative Templates Control Panel/Display Policy Setting Winning GPO Hide Screen Saver tab Enabled IT-Lockout Password protect the screen saver Enabled IT-Lockout Screen Saver Enabled IT-Lockout Screen Saver executable name Enabled IT-Lockout Screen Saver executable name sstext3d.scr Policy Setting Winning GPO Screen Saver timeout Enabled IT-Lockout Number of seconds to wait to enable the Screen Saver Seconds: 1800 System/Power Management Policy Setting Winning GPO Prompt for password on resume from hibernate / suspend Enabled IT-Lockout
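    One detail that often explains this: in a domain without fine-grained password policies, the password and lockout settings for domain accounts are only taken from GPOs linked at the domain root, as processed by the domain controllers; a password policy linked to an OU only affects local accounts on the machines in that OU. A quick way to see which values the domain is actually enforcing, run from any domain member or DC:

        rem show the effective domain password/lockout policy
        net accounts /domain

        rem refresh policy and re-check what applied and from where
        gpupdate /force
        gpresult /r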

    Read the article

  • Need help modifying my Custom Replace code based on string passed to it

    - by fraXis
    Hello, I have a C# program that will open a text file, parse it for certain criteria using a RegEx statement, and then write a new text file with the changed criteria. For example: I have a text file with a bunch of machine codes in it such as: X0.109Y0Z1.G0H2E1 My C# program will take this and turn it into: X0.109Y0G54G0T3 G43Z1.H2M08 (Note: the T3 value is really the H value (H2 in this case) + 1). T = H + 1 It works great, because the line usually always starts with X so the RegEx statement always matches. My RegEx that works with my first example is as follows: //Regex pattern for: //- X(value)Y(value)Z(value)G(value)H(value)E(value) //- X(value)Y(value)Z(value)G(value)H(value)E(value)M(value) //- X(value)Y(value)Z(value)G(value)H(value)E(value)A(value) //- X(value)Y(value)Z(value)G(value)H(value)E(value)M(value)A(value) //value can be positive or negative, integer or floating point number with multiple decimal places or without any private Regex regReal = new Regex("^(X([-]|[.]|[-.]|[0-9])[0-9]*[.]*[0-9]*){1}(Y([-]|[.]|[-.]|[0-9])[0-9]*[.]*[0-9]*){1}(Z([-]|[.]|[-.]|[0-9])[0-9]*[.]*[0-9]*){1}(G([-]|[.]|[-.]|[0-9])[0-9]*[.]*[0-9]*){1}(H([-]|[.]|[-.]|[0-9])[0-9]*[.]*[0-9]*){1}(E([-]|[.]|[-.]|[0-9])[0-9]*[.]*[0-9]*){1}(M([-]|[.]|[-.]|[0-9])[0-9]*[.]*[0-9]*)?(A([-]|[.]|[-.]|[0-9])[0-9]*[.]*[0-9]*)?$"); This RegEx works great because sometimes the line of code could also have an M or A at the end such as: X0.109Y0Z1.G0H2E1A2 My problem is now I have run into some lines of code that have this: G90G0X1.5Y-0.036E1Z3.H1 and I need to turn it into this: G90G0X1.5Y-0.036G54T2 G43Z3.H1M08 Can someone please modify my RegEx and code to turn this: G90G0X1.5Y-0.036E1Z3.H1 into: G90G0X1.5Y-0.036G54T2 G43Z3.H1M08 But sometimes the values could be a little different such as: G(value)G(value)X(value)Y(value)E(value)Z(value)H(value) G(value)G(value)X(value)Y(value)E(value)Z(value)H(value)A(value) G(value)G(value)X(value)Y(value)E(value)Z(value)H(value)A(value)(M)value G(value)G(value)X(value)Y(value)E(value)Z(value)H(value)M(value)(A)value But also (this is where Z is moved to a different spot) G(value)G(value)X(value)Y(value)Z(value)E(value)H(value) G(value)G(value)X(value)Y(value)Z(value)E(value)H(value)A(value) G(value)G(value)X(value)Y(value)Z(value)E(value)H(value)A(value)(M)value G(value)G(value)X(value)Y(value)Z(value)E(value)H(value)M(value)(A)value Here is my code that needs to be changed (I did not include the open and saving of the text file since that is pretty standard stuff). 
//Regex pattern for: //- X(value)Y(value)Z(value)G(value)H(value)E(value) //- X(value)Y(value)Z(value)G(value)H(value)E(value)M(value) //- X(value)Y(value)Z(value)G(value)H(value)E(value)A(value) //- X(value)Y(value)Z(value)G(value)H(value)E(value)M(value)A(value) //value can be pozitive or negative, integer or floating point number with multiple decimal places or without any private Regex regReal = new Regex("^(X([-]|[.]|[-.]|[0-9])[0-9]*[.]*[0-9]*){1}(Y([-]|[.]|[-.]|[0-9])[0-9]*[.]*[0-9]*){1}(Z([-]|[.]|[-.]|[0-9])[0-9]*[.]*[0-9]*){1}(G([-]|[.]|[-.]|[0-9])[0-9]*[.]*[0-9]*){1}(H([-]|[.]|[-.]|[0-9])[0-9]*[.]*[0-9]*){1}(E([-]|[.]|[-.]|[0-9])[0-9]*[.]*[0-9]*){1}(M([-]|[.]|[-.]|[0-9])[0-9]*[.]*[0-9]*)?(A([-]|[.]|[-.]|[0-9])[0-9]*[.]*[0-9]*)?$"); private string CheckAndModifyLine(string line) { if (regReal.IsMatch(line)) //Check the first Regex with line string { return CustomReplace(line); } else { return line; } } private string CustomReplace(string input) { string returnValue = String.Empty; int zPos = input.IndexOf("Z"); int gPos = input.IndexOf("G"); int hPos = input.IndexOf("H"); int ePos = input.IndexOf("E"); int aPos = input.IndexOf("A"); int hValue = Int32.Parse(input.Substring(hPos + 1, ePos - hPos - 1)) + 1; //get H number //remove A value returnValue = ((aPos == -1) ? input : input.Substring(0, aPos)); //replace Z value returnValue = Regex.Replace(returnValue, "Z[-]?\\d*\\.*\\d*", "G54"); //replace H value returnValue = Regex.Replace(returnValue, "H\\d*\\.*\\d*", "T" + hValue.ToString() + ((aPos == -1) ? String.Empty : input.Substring(aPos, input.Length - aPos))); //replace E, or E and M value returnValue = Regex.Replace(returnValue, "E\\d*\\.*\\d(M\\d*\\.*\\d)?", Environment.NewLine + "G43" + input.Substring(zPos, gPos - zPos) + input.Substring(hPos, ePos - hPos) + "M08"); return returnValue; } I tried to modify the above code to match the new line of text I am encountering (and split into two lines like my first example) but I am failing miserably. Thanks so much.
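    Not a drop-in answer, but a sketch of one way to extend this: add a second pattern for the lines that start with the two G words, allow E and Z in either order, and build the two output lines from named groups. T is still H + 1. Trailing A/M tokens end up in the "rest" group and are simply dropped here, so they would need the same treatment the original CustomReplace gives them.

        // matches G(v)G(v)X(v)Y(v) followed by E(v)Z(v) or Z(v)E(v), then H(v) and
        // optionally A/M tokens (captured as "rest")
        private Regex regAlt = new Regex(
            @"^(?<head>G-?\d*\.?\d*G-?\d*\.?\d*X-?\d*\.?\d*Y-?\d*\.?\d*)" +
            @"(?:(?<e>E-?\d*\.?\d*)(?<z>Z-?\d*\.?\d*)|(?<z2>Z-?\d*\.?\d*)(?<e2>E-?\d*\.?\d*))" +
            @"(?<h>H\d+)(?<rest>.*)$");

        private string CustomReplaceAlt(string input)
        {
            Match m = regAlt.Match(input);
            if (!m.Success) return input;   // not this format, fall back to the existing path

            string z = m.Groups["z"].Success ? m.Groups["z"].Value : m.Groups["z2"].Value;
            string h = m.Groups["h"].Value;              // e.g. "H1" (assumes an integer H, like the original)
            int t = Int32.Parse(h.Substring(1)) + 1;     // T = H + 1

            return m.Groups["head"].Value + "G54T" + t + Environment.NewLine +
                   "G43" + z + h + "M08";
        }

    With the sample input G90G0X1.5Y-0.036E1Z3.H1 this produces G90G0X1.5Y-0.036G54T2 on the first line and G43Z3.H1M08 on the second, which matches the expected output.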

    Read the article

  • How does one get rid of fishy behavior in Windows?

    - by Tom Wijsman
    After I had boot my computer this morning there suddenly flooded water from the top of the screen, after which some fishes dropped into it. Now I can barely see what I am doing because the water distorts the view. Sometimes the fish follow the cursor so I need to move it away or wait for the fish to mind their own business. This makes it very annoying to use my system. What have I tried? Reboot the system. This caused the water to deplete from the desktop. Upon reboot, the screen was refilled with water and fishes. Attach another monitor. Same problem, fills that monitor as well and gives me extra fish. Clicking the fish. Makes them turn direction. Right clicking the fish. Changes color of the fish, not really useful. I'm locked out of changing the background or screen saver settings. Hence, I had to post the lady below... Safe mode doesn't save me from the fishes. It does give me another background there, but I can't screenshot easily. Other user accounts experience this as well. The Guest account seems to experience more fish than the other accounts. Using HijackThis, OTL Timekeeper List, Syninternal Autoruns, RootKitRevealer, ShellExView and similar tools I can't seem to find any entries that could be it, the Sysinternals tools show everything as verified. I'm suspecting this to be a driver problem. Randomly removing drivers doesn't seem to alleviate the problem. When removing the Graphics Drivers, it makes my screen black. While that could be considered the solution, it's not what I want. Changing the time / date settings does also not seem to affect the fishes. Changing the time a few years in the future, I would have expected the fishes to be dead. But, the same fishes are still there... They simply won't die! Tried to get used to them. They are really bothering me, looks like they require food. I don't know how to give them food, but apparently they get it elsewhere during reboot... Tried to disable my mouse pointer and use the keyboard. This works, they now swim around more randomly. They do put their attention to huge changes on the screen, so I need to type slow. Or otherwise I can't see what I'm tying exactly. Hold my laptop upside down. This seems to affect the water and fishes, but the water stays in the screen. They seem super resistant against water sickness and confusion though... What does the problem look like? What do I need? A way to get rid of these fishes on my screen forever, they are really annoying me a lot and I'm about to crack the screen to see if that makes them escape. Do you have any idea why this problem is occurring? What are my considerations? Buying an USB fish tank could make the fish leave the screen, I am uncertain though whether the fish could leave the screen through the USB cable. Using the FISh (programming language) which seems to provide EXPRESSIVE POWER and EFFICIENT EXECUTION, I can however not find any examples on how to remove fish. What are my Specifications? I'm using a Sony Vaio Fishy laptop. Sony VAIO VGN-Fishy, VAIO. Processor: 1337 MHz, Intel Core 2 Duo, T5432, 1 MB, Intel PM965 Express, 667 MHz. Memory: 1024 MB, DDR2-SDRAM, 667 MHz, 2 x 1024 MB, 4 GB. Disk Drive: 50 GB, Serial ATA, 5400 RPM. Storage Media: Memory Stick™, Memory Stick PRO™. Display: 15.4 ", 1280 x 800 pixels, LCD. Video: GeForce 8400M GT, 128 MB. Optical Drive: DVD±R/RW DL, 24 x, 24 x, 24 x, 6 x, 4 x, 6 x, 4 x, 5 x, 5 x, 8 x, 8 x, 8 x, 8 x, 6 x, 6 x, 24 x, 24 x, 24 x, 16 x. Camera: 1.3 MP, 30 fps. Networking: 2.0+EDR. Keyboard: Touchpad, AZERTY. 
Operating System/Software: Windows Vista Home Premium. Security: Kensington. Weight & Dimensions: 98.8 oz (2800 g), 14 " (355.8 mm), 10 " (254.4 mm), 0.98 " (24.9 mm). Other features: 100 BASE-TX/10 BASE-T, 802.11a/b/g/n/Draft n, V92/V.90, fishes. Plz! Help me...

    Read the article

  • JSF2 and Richfaces 3.3.3 application on tomcat 6.0 crashes with a StackOverflowError

    - by Vivek Madapura V
    Hi, I am using JSF 2 and richfaces 3.3.3 for an application hosted on tomcat 6.0.20. The application crashes as soon as a request is made via the browser (Mozilla and IE). My web.xml looks like this: <?xml version="1.0" encoding="UTF-8"?> <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" id="WebApp_ID" version="2.5"> <display-name>TestJSF</display-name> <welcome-file-list> <welcome-file>pages/login.xhtml</welcome-file> </welcome-file-list> <servlet> <servlet-name>Faces Servlet</servlet-name> <servlet-class>javax.faces.webapp.FacesServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>Faces Servlet</servlet-name> <url-pattern>/faces/*</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>Faces Servlet</servlet-name> <url-pattern>*.xhtml</url-pattern> </servlet-mapping> <context-param> <description>State saving method: 'client' or 'server' (=default). See JSF Specification 2.5.2</description> <param-name>javax.faces.STATE_SAVING_METHOD</param-name> <param-value>server</param-value> </context-param> <context-param> <param-name>javax.faces.DISABLE_FACELET_JSF_VIEWHANDLER</param-name> <param-value>true</param-value> </context-param> <context-param> <param-name>org.richfaces.SKIN</param-name> <param-value>blueSky</param-value> </context-param> <context-param> <param-name>org.richfaces.CONTROL_SKINNING</param-name> <param-value>enable</param-value> </context-param> <context-param> <param-name>javax.faces.DEFAULT_SUFFIX</param-name> <param-value>.xhtml</param-value> </context-param> <context-param> <param-name>javax.faces.FACELETS_SKIP_COMMENTS</param-name> <param-value>true</param-value> </context-param> <listener> <listener-class>com.sun.faces.config.ConfigureListener</listener-class> </listener> <filter> <display-name>RichFaces Filter</display-name> <filter-name>richfaces</filter-name> <filter-class>org.ajax4jsf.Filter</filter-class> </filter> <filter-mapping> <filter-name>richfaces</filter-name> <servlet-name>Faces Servlet</servlet-name> <dispatcher>REQUEST</dispatcher> <dispatcher>FORWARD</dispatcher> <dispatcher>INCLUDE</dispatcher> </filter-mapping> </web-app> The exception is javax.servlet.ServletException: Servlet execution threw an exception org.ajax4jsf.webapp.BaseFilter.doFilter(BaseFilter.java:530) com.sun.faces.context.ExternalContextImpl.dispatch(ExternalContextImpl.java:542) com.sun.faces.application.view.JspViewHandlingStrategy.executePageToBuildView(JspViewHandlingStrategy.java:359) com.sun.faces.application.view.JspViewHandlingStrategy.buildView(JspViewHandlingStrategy.java:150) com.sun.faces.application.view.JspViewHandlingStrategy.renderView(JspViewHandlingStrategy.java:190) com.sun.faces.application.view.MultiViewHandler.renderView(MultiViewHandler.java:127) org.ajax4jsf.application.ViewHandlerWrapper.renderView(ViewHandlerWrapper.java:100) org.ajax4jsf.application.AjaxViewHandler.renderView(AjaxViewHandler.java:176) com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:117) com.sun.faces.lifecycle.Phase.doPhase(Phase.java:97) com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:135) javax.faces.webapp.FacesServlet.service(FacesServlet.java:309) The stack trace is recursively logged with this until the StackOverflowError occurrs. 
If I remove all of the RichFaces-related configuration, the application works like a charm. Any advice is much appreciated.
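    For context, RichFaces 3.3.x was built against Facelets 1.x rather than the Facelets bundled with JSF 2, and the endless renderView recursion in the trace is typical of the ajax4jsf view handler and the JSF 2 default view handler delegating to each other. One commonly suggested workaround (a sketch, assuming the Facelets 1.1.x jar is added alongside RichFaces 3.3.3) is to tell ajax4jsf explicitly which view handler to wrap; upgrading to RichFaces 4, which supports JSF 2 natively, is the cleaner long-term fix.

        <!-- web.xml: point ajax4jsf at the Facelets 1.x view handler -->
        <context-param>
            <param-name>org.ajax4jsf.VIEW_HANDLERS</param-name>
            <param-value>com.sun.facelets.FaceletViewHandler</param-value>
        </context-param>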

    Read the article

  • Matlab: Optimization by perturbing variable

    - by S_H
    My main script contains following code: %# Grid and model parameters nModel=50; nModel_want=1; nI_grid1=5; Nth=1; nRow.Scale1=5; nCol.Scale1=5; nRow.Scale2=5^2; nCol.Scale2=5^2; theta = 90; % degrees a_minor = 2; % range along minor direction a_major = 5; % range along major direction sill = var(reshape(Deff_matrix_NthModel,nCell.Scale1,1)); % variance of the coarse data matrix of size nRow.Scale1 X nCol.Scale1 %# Covariance computation % Scale 1 for ihRow = 1:nRow.Scale1 for ihCol = 1:nCol.Scale1 [cov.Scale1(ihRow,ihCol),heff.Scale1(ihRow,ihCol)] = general_CovModel(theta, ihCol, ihRow, a_minor, a_major, sill, 'Exp'); end end % Scale 2 for ihRow = 1:nRow.Scale2 for ihCol = 1:nCol.Scale2 [cov.Scale2(ihRow,ihCol),heff.Scale2(ihRow,ihCol)] = general_CovModel(theta, ihCol/(nCol.Scale2/nCol.Scale1), ihRow/(nRow.Scale2/nRow.Scale1), a_minor, a_major, sill/(nRow.Scale2*nCol.Scale2), 'Exp'); end end %# Scale-up of fine scale values by averaging [covAvg.Scale2,var_covAvg.Scale2,varNorm_covAvg.Scale2] = general_AverageProperty(nRow.Scale2/nRow.Scale1,nCol.Scale2/nCol.Scale1,1,nRow.Scale1,nCol.Scale1,1,cov.Scale2,1); I am using two functions, general_CovModel() and general_AverageProperty(), in my main script which are given as following: function [cov,h_eff] = general_CovModel(theta, hx, hy, a_minor, a_major, sill, mod_type) % mod_type should be in strings angle_rad = theta*(pi/180); % theta in degrees, angle_rad in radians R_theta = [sin(angle_rad) cos(angle_rad); -cos(angle_rad) sin(angle_rad)]; h = [hx; hy]; lambda = a_minor/a_major; D_lambda = [lambda 0; 0 1]; h_2prime = D_lambda*R_theta*h; h_eff = sqrt((h_2prime(1)^2)+(h_2prime(2)^2)); if strcmp(mod_type,'Sph')==1 || strcmp(mod_type,'sph') ==1 if h_eff<=a cov = sill - sill.*(1.5*(h_eff/a_minor)-0.5*((h_eff/a_minor)^3)); else cov = sill; end elseif strcmp(mod_type,'Exp')==1 || strcmp(mod_type,'exp') ==1 cov = sill-(sill.*(1-exp(-(3*h_eff)/a_minor))); elseif strcmp(mod_type,'Gauss')==1 || strcmp(mod_type,'gauss') ==1 cov = sill-(sill.*(1-exp(-((3*h_eff)^2/(a_minor^2))))); end and function [PropertyAvg,variance_PropertyAvg,NormVariance_PropertyAvg]=... general_AverageProperty(blocksize_row,blocksize_col,blocksize_t,... nUpscaledRow,nUpscaledCol,nUpscaledT,PropertyArray,omega) % This function computes average of a property and variance of that averaged % property using power averaging PropertyAvg=zeros(nUpscaledRow,nUpscaledCol,nUpscaledT); %# Average of property for k=1:nUpscaledT, for j=1:nUpscaledCol, for i=1:nUpscaledRow, sum=0; for a=1:blocksize_row, for b=1:blocksize_col, for c=1:blocksize_t, sum=sum+(PropertyArray((i-1)*blocksize_row+a,(j-1)*blocksize_col+b,(k-1)*blocksize_t+c).^omega); % add all the property values in 'blocksize_x','blocksize_y','blocksize_t' to one variable end end end PropertyAvg(i,j,k)=(sum/(blocksize_row*blocksize_col*blocksize_t)).^(1/omega); % take average of the summed property end end end %# Variance of averageed property variance_PropertyAvg=var(reshape(PropertyAvg,... nUpscaledRow*nUpscaledCol*nUpscaledT,1),1,1); %# Normalized variance of averageed property NormVariance_PropertyAvg=variance_PropertyAvg./(var(reshape(... PropertyArray,numel(PropertyArray),1),1,1)); Question: Using Matlab, I would like to optimize covAvg.Scale2 such that it matches closely with cov.Scale1 by perturbing/varying any (or all) of the following variables 1) a_minor 2) a_major 3) theta Thanks.
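    Since this boils down to three unknowns and a scalar misfit, one straightforward option is to wrap the computation above in a helper and hand it to fminsearch. A rough sketch, where compute_covAvg(a_minor, a_major, theta) is a hypothetical function you would write around the two covariance loops and general_AverageProperty so that it returns covAvg.Scale2 for the given parameters:

        % minimise the squared mismatch between the upscaled covariance and cov.Scale1
        objective = @(p) sum(sum((compute_covAvg(p(1), p(2), p(3)) - cov.Scale1).^2));
        p0 = [a_minor, a_major, theta];                % starting guess from the script above
        [p_opt, misfit] = fminsearch(objective, p0);   % Nelder-Mead, no derivatives needed

    A bounded or constrained solver (for example fmincon from the Optimization Toolbox) may behave better if the ranges must stay positive or theta must stay within 0 to 180 degrees.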

    Read the article

  • Help to solve "Robbery Problem"

    - by peiska
    Hello, Can anybody help me with this problem in C or Java? The problem is taken from here: http://acm.pku.edu.cn/JudgeOnline/problem?id=1104 Inspector Robstop is very angry. Last night, a bank has been robbed and the robber has not been caught. And this happened already for the third time this year, even though he did everything in his power to stop the robber: as quickly as possible, all roads leading out of the city were blocked, making it impossible for the robber to escape. Then, the inspector asked all the people in the city to watch out for the robber, but the only messages he got were of the form "We don't see him." But this time, he has had enough! Inspector Robstop decides to analyze how the robber could have escaped. To do that, he asks you to write a program which takes all the information the inspector could get about the robber in order to find out where the robber has been at which time. Coincidentally, the city in which the bank was robbed has a rectangular shape. The roads leaving the city are blocked for a certain period of time t, and during that time, several observations of the form "The robber isn't in the rectangle Ri at time ti" are reported. Assuming that the robber can move at most one unit per time step, your program must try to find the exact position of the robber at each time step. Input The input contains the description of several robberies. The first line of each description consists of three numbers W, H, t (1 <= W,H,t <= 100) where W is the width, H the height of the city and t is the time during which the city is locked. The next contains a single integer n (0 <= n <= 100), the number of messages the inspector received. The next n lines (one for each of the messages) consist of five integers ti, Li, Ti, Ri, Bi each. The integer ti is the time at which the observation has been made (1 <= ti <= t), and Li, Ti, Ri, Bi are the left, top, right and bottom respectively of the (rectangular) area which has been observed. (1 <= Li <= Ri <= W, 1 <= Ti <= Bi <= H; the point (1, 1) is the upper left hand corner, and (W, H) is the lower right hand corner of the city.) The messages mean that the robber was not in the given rectangle at time ti. The input is terminated by a test case starting with W = H = t = 0. This case should not be processed. Output For each robbery, first output the line "Robbery #k:", where k is the number of the robbery. Then, there are three possibilities: If it is impossible that the robber is still in the city considering the messages, output the line "The robber has escaped." In all other cases, assume that the robber really is in the city. Output one line of the form "Time step : The robber has been at x,y." for each time step, in which the exact location can be deduced. (x and y are the column resp. row of the robber in time step .) Output these lines ordered by time . If nothing can be deduced, output the line "Nothing known." and hope that the inspector will not get even more angry. Output a blank line after each processed case.
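    One standard way to attack this problem: for every time step keep a boolean grid of cells the robber could occupy, sweep it forward in time (he moves at most one unit per step) while applying the "not in rectangle Ri at time ti" observations, then sweep backward so that every surviving cell can also reach some feasible cell in the next step. A step with no surviving cell means the robber has escaped; exactly one surviving cell pins his position; anything else is "Nothing known." for that step. A Java sketch of the two sweeps (input parsing and output formatting omitted; excluded[t][x][y] is assumed to be precomputed from the n observations):

        static void deducePositions(int W, int H, int T, boolean[][][] excluded) {
            // possible[t][x][y]: can the robber be at (x, y) at time t?
            boolean[][][] possible = new boolean[T + 1][W + 1][H + 1];
            int[] dx = {0, 1, -1, 0, 0};
            int[] dy = {0, 0, 0, 1, -1};

            for (int x = 1; x <= W; x++)
                for (int y = 1; y <= H; y++)
                    possible[1][x][y] = !excluded[1][x][y];

            for (int t = 2; t <= T; t++)               // forward sweep
                for (int x = 1; x <= W; x++)
                    for (int y = 1; y <= H; y++)
                        if (!excluded[t][x][y])
                            for (int d = 0; d < 5; d++) {
                                int px = x + dx[d], py = y + dy[d];
                                if (px >= 1 && px <= W && py >= 1 && py <= H
                                        && possible[t - 1][px][py]) {
                                    possible[t][x][y] = true;
                                    break;
                                }
                            }

            for (int t = T - 1; t >= 1; t--)           // backward sweep
                for (int x = 1; x <= W; x++)
                    for (int y = 1; y <= H; y++)
                        if (possible[t][x][y]) {
                            boolean reachable = false;
                            for (int d = 0; d < 5 && !reachable; d++) {
                                int nx = x + dx[d], ny = y + dy[d];
                                reachable = nx >= 1 && nx <= W && ny >= 1 && ny <= H
                                        && possible[t + 1][nx][ny];
                            }
                            possible[t][x][y] = reachable;
                        }
            // reporting (escaped / unique position per step / nothing known) goes here
        }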

    Read the article

  • KVM Slow performance on XP Guest

    - by Gregg Leventhal
    The system is very slow to do anything, even browse a local folder, and CPU sits at 100% frequently. Guest is XP 32 bit. Host is Scientific Linux 6.2, Libvirt 0.10, Guest XP OS shows ACPI Multiprocessor HAL and a virtIO driver for NIC and SCSI. Installed. CPUInfo on host: processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 42 model name : Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz stepping : 7 cpu MHz : 3200.000 cache size : 8192 KB physical id : 0 siblings : 8 core id : 0 cpu cores : 4 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid bogomips : 6784.93 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: <memory unit='KiB'>4194304</memory> <currentMemory unit='KiB'>4194304</currentMemory> <vcpu placement='static' cpuset='0'>1</vcpu> <os> <type arch='x86_64' machine='rhel6.3.0'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <cpu mode='custom' match='exact'> <model fallback='allow'>SandyBridge</model> <vendor>Intel</vendor> <feature policy='require' name='vme'/> <feature policy='require' name='tm2'/> <feature policy='require' name='est'/> <feature policy='require' name='vmx'/> <feature policy='require' name='osxsave'/> <feature policy='require' name='smx'/> <feature policy='require' name='ss'/> <feature policy='require' name='ds'/> <feature policy='require' name='tsc-deadline'/> <feature policy='require' name='dtes64'/> <feature policy='require' name='ht'/> <feature policy='require' name='pbe'/> <feature policy='require' name='tm'/> <feature policy='require' name='pdcm'/> <feature policy='require' name='ds_cpl'/> <feature policy='require' name='xtpr'/> <feature policy='require' name='acpi'/> <feature policy='require' name='monitor'/> <feature policy='force' name='sse'/> <feature policy='force' name='sse2'/> <feature policy='force' name='sse4.1'/> <feature policy='force' name='sse4.2'/> <feature policy='force' name='ssse3'/> <feature policy='force' name='x2apic'/> </cpu> <clock offset='localtime'> <timer name='rtc' tickpolicy='catchup'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='qcow2' cache='none'/> <source file='/var/lib/libvirt/images/Server-10-9-13.qcow2'/> <target dev='vda' bus='virtio'/> <alias name='virtio-disk0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> </disk>
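    Hard to pin down from the configuration alone, but a few host-side checks usually narrow it down before changing anything, for example whether hardware virtualization is really in use and whether the CPU time is being burned by the qemu process itself. A sketch (single guest assumed, <guest-name> is a placeholder):

        lsmod | grep kvm                          # kvm and kvm_intel should both be loaded
        virsh dominfo <guest-name>                # confirm the vCPUs/memory the domain really has
        top -p $(pgrep -f qemu-kvm | head -n1)    # host-side CPU usage of the guest process

    Beyond that, common suggestions for XP guests on qcow2 include trying cache='writeback' on the virtio disk and double-checking inside the guest that the virtio storage and NIC drivers are actually bound, though neither is guaranteed to be the culprit here.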

    Read the article

  • Inflector::humanize($key) converts Date of joining TO Date Of Joining

    - by Aruna
    Hi, I have a Form and i am submitting them like using function submit($formid = null,$fillerid=null) { $this->data['Result']['form_id']=$formid; $this->data['Result']['submitter_id']=$fillerid; $this->data['Result']['submitter']=$this->Session->read('filler'); echo "submitter: ".$this->Session->read('filler'); $results=$this->Form->hasResults($this->data); //echo http_build_query($_POST); if(empty($results)){ foreach ($_POST as $key => $value): if(is_array($value)){ $value = implode('', $_POST[$key]); $this->data['Result']['value']=$value; } else{ $this->data['Result']['value']=$value; } $this->data['Result']['form_id']=$formid; $this->data['Result']['submitter_id']=$fillerid; $this->data['Result']['label']=Inflector::humanize($key); $this->data['Result']['submitter']=$this->Session->read('filler'); $this->Form->submitForm($this->data); endforeach; $this->Session->setFlash('Your entry has been submitted.'); } I am having A fORM LIKE <form method="post" action="/FormBuilder/index.php/forms/submit/1/4" id="ResultSubmit"> <div class="input text"><label for="1">Firstname</label><input type="text" value="" style="width: 300px;" id="1" name="Firstname"/></div> <br/> <div class="input text"><label for="2">Last Name</label><input type="text" value="" style="width: 300px;" id="2" name="Last Name"/></div> <br/> <div class="input text"><label for="3">Age</label><input type="text" value="" style="width: 200px;" id="3" name="Age"/></div> <br/> <center> <span id="errmsg3"/> </center> <div class="input textarea"><label for="4">Address</label><textarea style="height: 300px;" id="4" rows="6" cols="30" name="Address"/></div> <br/> <div class="input text"><label for="5">Date Of Joining</label><input type="text" value="" style="width: 300px;" id="5" name="Date of joining"/></div><br/> <div class="input text"><label for="6">Email - Id</label><input type="text" value="" style="width: 300px;" id="6" name="Email - id"/></div> <br/> <div class="input text"> <label for="7">Personal Number</label><input type="text" value="" maxlength="3" style="width: 30px;" id="7" name="Personal Number[]"/><input type="text" value="" style="width: 30px;" maxlength="3" id="7-1" name="Personal Number[]"/><input type="text" value="" style="width: 70px;" maxlength="4" id="7-2" name="Personal Number[]"/></div> <span id="errmsg7"/> <br/> <div class="input select"><label for="8">Gender</label><select id="8" name="Gender"> MaleFemale <div class="input text"><label for="9">Official Number</label><input type="text" value="" style="width: 200px;" id="9" name="Official Number"/></div><br/> <div class="input select"><label for="10">Experience</label><select id="10" name="Experience"> <option value="Fresher">Fresher</option><option yrs="" 5="" value="Below">Below 5 Yrs</option><option yrs="" 10="" value="Above">Above 10 yrs</option></select></div><br/> actually My input has the names as Firstname Last Name Age Address Date of joining Email - id Personal Number Gender Official Number But when i use Inflector::humanize($key) for saving the names which has white space characters they have converted into like Date Of Joining i.e.., O and J becomes Capital letters... But i need to save them as such as Date of joining.. How to do so???
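    For what it is worth, Inflector::humanize() is doing exactly what it is documented to do: it capitalizes every word. PHP also converts the spaces in those input names to underscores in $_POST, so $key arrives as "Date_of_joining". If the goal is to keep the original casing, one option is to skip the inflection and just undo the underscores, a sketch:

        // keep the submitted label's own casing ("Date of joining")
        $this->data['Result']['label'] = str_replace('_', ' ', $key);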

    Read the article

  • Fluent NHibernate and Polymorphism and a Newbie!

    - by Andy Baker
    I'm a fluent nhibernate newbie and I'm struggling mapping a hierarchy of polymorhophic objects. I've produced the following Model that recreates the essence of what I'm doing in my real application. I have a ProductList and several specialised type of products; public class MyProductList { public virtual int Id { get; set; } public virtual string Name {get;set;} public virtual IList<Product> Products { get; set; } public MyProductList() { Products = new List<Product>(); } } public class Product { public virtual int Id { get; set; } public virtual string ProductDescription {get;set;} } public class SizedProduct : Product { public virtual decimal Size {get;set;} } public class BundleProduct : Product { public virtual Product BundleItem1 {get;set;} public virtual Product BundleItem2 {get;set;} } Note that I have a specialised type of Product called BundleProduct that has two products attached. I can add any of the specialised types of product to MyProductList and a bundle Product can be made up of any of the specialised types of product too. Here is the fluent nhibernate mapping that I'm using; public class MyListMap : ClassMap<MyList> { public MyListMap() { Id(ml => ml.Id); Map(ml => ml.Name); HasManyToMany(ml => ml.Products).Cascade.All(); } } public class ProductMap : ClassMap<Product> { public ProductMap() { Id(prod => prod.Id); Map(prod => prod.ProductDescription); } } public class SizedProductMap : SubclassMap<SizedProduct> { public SizedProductMap() { Map(sp => sp.Size); } } public class BundleProductMap : SubclassMap<BundleProduct> { public BundleProductMap() { References(bp => bp.BundleItem1).Cascade.All(); References(bp => bp.BundleItem2).Cascade.All(); } } I haven't configured have any reverse mappings, so a product doesn't know which Lists it belongs to or which bundles it is part of. Next I add some products to my list; MyList ml = new MyList() { Name = "Example" }; ml.Products.Add(new Product() { ProductDescription = "PSU" }); ml.Products.Add(new SizedProduct() { ProductDescription = "Extension Cable", Size = 2.0M }); ml.Products.Add(new BundleProduct() { ProductDescription = "Fan & Cable", BundleItem1 = new Product() { ProductDescription = "Fan Power Cable" }, BundleItem2 = new SizedProduct() { ProductDescription = "80mm Fan", Size = 80M } }); When I persist my list to the database and reload it, the list itself contains the items I expect ie MyList[0] has a type of Product, MyList[1] has a type of SizedProduct, and MyList[2] has a type of BundleProduct - great! If I navigate to the BundleProduct, I'm not able to see the types of Product attached to the BundleItem1 or BundleItem2 instead they are always proxies to the Product - in this example BundleItem2 should be a SizedProduct. Is there anything I can do to resove this either in my model or the mapping? Thanks in advance for your help.
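    A likely explanation: BundleItem1 and BundleItem2 are mapped as lazy many-to-one references, so NHibernate hands back runtime proxies of the Product base class, and a cast or type check against SizedProduct fails even though the row really is a SizedProduct. One option (a sketch, trading laziness for concrete types) is to map those references as not lazy:

        public class BundleProductMap : SubclassMap<BundleProduct>
        {
            public BundleProductMap()
            {
                References(bp => bp.BundleItem1).Not.LazyLoad().Cascade.All();
                References(bp => bp.BundleItem2).Not.LazyLoad().Cascade.All();
            }
        }

    Alternatives include unwrapping the proxy at the point of use or replacing the type checks with a polymorphic method on Product, depending on how large the bundles get.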

    Read the article

  • Why do we need different CPU architecture for server & mini/mainframe & mixed-core?

    - by claws
    Hello, I was just wondering what CPU architectures are available other than Intel & AMD, and found the List of CPU architectures on Wikipedia. It groups notable CPU architectures into the following categories: Embedded CPU architectures, Microcomputer CPU architectures, Workstation/Server CPU architectures, Mini/Mainframe CPU architectures, and Mixed core CPU architectures. I was analyzing their purposes and have a few doubts, taking the microcomputer (PC) CPU architecture as the reference and comparing the others to it.
    Embedded CPU architectures: these are a completely new world. Embedded systems are small, do very specific tasks (mostly real time), and must consume little power, so we do not need registers as numerous or as wide as a microcomputer CPU (a typical PC) provides. In other words, we need a new, small, tiny architecture, hence a new architecture and a new instruction set (RISC). The same point also explains why a separate operating system (an RTOS) is needed.
    Workstation/Server CPU architectures: I don't know what a workstation is, so could someone clarify that? As for a server, it is dedicated to running specific software (server software like httpd, mysql, etc.). Even if other processes run, the server process must be given priority, so a different scheduling scheme is needed and therefore an operating system different from a general-purpose one. If you have more reasons why a server OS is needed, please mention them. But I don't get why we need a new CPU architecture for this. Why can't the microcomputer CPU architecture do the job? Can someone please clarify?
    Mini/Mainframe CPU architectures: again, I don't know what these are or what minicomputers and mainframes are used for. I only know they are very big and occupy a complete floor, and I have never read about the real-world problems they are meant to solve. If anyone works on one of these, please share your knowledge. Can someone clarify their purpose, and why the microcomputer CPU architecture is not suitable for them? Is there a new kind of operating system for these too? Why?
    Mixed core CPU architectures: I have never heard of these.
    If possible, please keep your answer in this format: XYZ CPU architectures; purpose of XYZ; need for a new architecture (why can't the current microcomputer CPU architecture do the work? It goes up to 3 GHz and has up to 8 cores); need for a new operating system (why do we need a new kind of operating system for this kind of architecture?).

    Read the article

  • Recover RAID 5 data after creating a new array instead of re-using

    - by Brigadieren
    Folks please help - I am a newb with a major headache at hand (perfect storm situation). I have a 3 1tb hdd on my ubuntu 11.04 configured as software raid 5. The data had been copied weekly onto another separate off the computer hard drive until that completely failed and was thrown away. A few days back we had a power outage and after rebooting my box wouldn't mount the raid. In my infinite wisdom I entered mdadm --create -f... command instead of mdadm --assemble and didn't notice the travesty that I had done until after. It started the array degraded and proceeded with building and syncing it which took ~10 hours. After I was back I saw that that the array is successfully up and running but the raid is not I mean the individual drives are partitioned (partition type f8 ) but the md0 device is not. Realizing in horror what I have done I am trying to find some solutions. I just pray that --create didn't overwrite entire content of the hard driver. Could someone PLEASE help me out with this - the data that's on the drive is very important and unique ~10 years of photos, docs, etc. Is it possible that by specifying the participating hard drives in wrong order can make mdadm overwrite them? when I do mdadm --examine --scan I get something like ARRAY /dev/md/0 metadata=1.2 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b name=<hostname>:0 Interestingly enough name used to be 'raid' and not the host hame with :0 appended. Here is the 'sanitized' config entries: DEVICE /dev/sdf1 /dev/sde1 /dev/sdd1 CREATE owner=root group=disk mode=0660 auto=yes HOMEHOST <system> MAILADDR root ARRAY /dev/md0 metadata=1.2 name=tanserv:0 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b Here is the output from mdstat cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sdd1[0] sdf1[3] sde1[1] 1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU] unused devices: <none> fdisk shows the following: fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000bf62e Device Boot Start End Blocks Id System /dev/sda1 * 1 9443 75846656 83 Linux /dev/sda2 9443 9730 2301953 5 Extended /dev/sda5 9443 9730 2301952 82 Linux swap / Solaris Disk /dev/sdb: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000de8dd Device Boot Start End Blocks Id System /dev/sdb1 1 91201 732572001 8e Linux LVM Disk /dev/sdc: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00056a17 Device Boot Start End Blocks Id System /dev/sdc1 1 60801 488384001 8e Linux LVM Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000ca948 Device Boot Start End Blocks Id System /dev/sdd1 1 121601 976760001 fd Linux raid autodetect Disk /dev/dm-0: 1250.3 GB, 1250254913536 bytes 255 heads, 63 
sectors/track, 152001 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/dm-0 doesn't contain a valid partition table Disk /dev/sde: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x93a66687 Device Boot Start End Blocks Id System /dev/sde1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xe6edc059 Device Boot Start End Blocks Id System /dev/sdf1 1 121601 976760001 fd Linux raid autodetect Disk /dev/md0: 2000.4 GB, 2000401989632 bytes 2 heads, 4 sectors/track, 488379392 cylinders Units = cylinders of 8 * 512 = 4096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 524288 bytes / 1048576 bytes Disk identifier: 0x00000000 Disk /dev/md0 doesn't contain a valid partition table Per suggestions I did clean up the superblocks and re-created the array with --assume-clean option but with no luck at all. Is there any tool that will help me to revive at least some of the data? Can someone tell me what and how the mdadm --create does when syncs to destroy the data so I can write a tool to un-do whatever was done? After the re-creating of the raid I run fsck.ext4 /dev/md0 and here is the output root@tanserv:/etc/mdadm# fsck.ext4 /dev/md0 e2fsck 1.41.14 (22-Dec-2010) fsck.ext4: Superblock invalid, trying backup blocks... fsck.ext4: Bad magic number in super-block while trying to open /dev/md0 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 Per Shanes' suggestion I tried root@tanserv:/home/mushegh# mkfs.ext4 -n /dev/md0 mke2fs 1.41.14 (22-Dec-2010) Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) Stride=128 blocks, Stripe width=256 blocks 122101760 inodes, 488379392 blocks 24418969 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=0 14905 block groups 32768 blocks per group, 32768 fragments per group 8192 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848 and run fsck.ext4 with every backup block but all returned the following: root@tanserv:/home/mushegh# fsck.ext4 -b 214990848 /dev/md0 e2fsck 1.41.14 (22-Dec-2010) fsck.ext4: Invalid argument while trying to open /dev/md0 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> Any suggestions? Regards!
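    Before experimenting any further, it is worth capturing some read-only state and doing all subsequent attempts against copies or device-mapper overlays of the members rather than the raw disks. A sketch (diagnostics only, not a recovery recipe):

        mdadm --examine /dev/sdd1 /dev/sde1 /dev/sdf1   # per-member data offset, chunk size, role/order
        mdadm --detail /dev/md0                         # what the re-created array looks like now

    The reason these matter: a newer mdadm can re-create an array with a different data offset, chunk size, or device order than the original had, and any mismatch there is enough to make the filesystem unrecognisable even when most of the underlying data is still intact.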

    Read the article

  • PyML 0.7.2 - How to prevent accuracy from dropping after storing/loading a classifier?

    - by Michael Aaron Safyan
    This is a followup from "Save PyML.classifiers.multi.OneAgainstRest(SVM()) object?". The solution to that question was close, but not quite right, (the SparseDataSet is broken, so attempting to save/load with that dataset container type will fail, no matter what. Also, PyML is inconsistent in terms of whether labels should be numbers or strings... it turns out that the oneAgainstRest function is actually not good enough, because the labels need to be strings and simultaneously convertible to floats, because there are places where it is assumed to be a string and elsewhere converted to float) and so after a great deal of hacking and such I was finally able to figure out a way to save and load my multi-class classifier without it blowing up with an error.... however, although it is no longer giving me an error message, it is still not quite right as the accuracy of the classifier drops significantly when it is saved and then reloaded (so I'm still missing a piece of the puzzle). I am currently using the following custom mutli-class classifier for training, saving, and loading: class SVM(object): def __init__(self,features_or_filename,labels=None,kernel=None): if isinstance(features_or_filename,str): filename=features_or_filename; if labels!=None: raise ValueError,"Labels must be None if loading from a file."; with open(os.path.join(filename,"uniquelabels.list"),"rb") as uniquelabelsfile: self.uniquelabels=sorted(list(set(pickle.load(uniquelabelsfile)))); self.labeltoindex={}; for idx,label in enumerate(self.uniquelabels): self.labeltoindex[label]=idx; self.classifiers=[]; for classidx, classname in enumerate(self.uniquelabels): self.classifiers.append(PyML.classifiers.svm.loadSVM(os.path.join(filename,str(classname)+".pyml.svm"),datasetClass = PyML.VectorDataSet)); else: features=features_or_filename; if labels==None: raise ValueError,"Labels must not be None when training."; self.uniquelabels=sorted(list(set(labels))); self.labeltoindex={}; for idx,label in enumerate(self.uniquelabels): self.labeltoindex[label]=idx; points = [[float(xij) for xij in xi] for xi in features]; self.classifiers=[PyML.SVM(kernel) for label in self.uniquelabels]; for i in xrange(len(self.uniquelabels)): currentlabel=self.uniquelabels[i]; currentlabels=['+1' if k==currentlabel else '-1' for k in labels]; currentdataset=PyML.VectorDataSet(points,L=currentlabels,positiveClass='+1'); self.classifiers[i].train(currentdataset,saveSpace=False); def accuracy(self,pts,labels): logger=logging.getLogger("ml"); correct=0; total=0; classindexes=[self.labeltoindex[label] for label in labels]; h=self.hypotheses(pts); for idx in xrange(len(pts)): if h[idx]==classindexes[idx]: logger.info("RIGHT: Actual \"%s\" == Predicted \"%s\"" %(self.uniquelabels[ classindexes[idx] ], self.uniquelabels[ h[idx] ])); correct+=1; else: logger.info("WRONG: Actual \"%s\" != Predicted \"%s\"" %(self.uniquelabels[ classindexes[idx] ], self.uniquelabels[ h[idx] ])) total+=1; return float(correct)/float(total); def prediction(self,pt): h=self.hypothesis(pt); if h!=None: return self.uniquelabels[h]; return h; def predictions(self,pts): h=self.hypotheses(self,pts); return [self.uniquelabels[x] if x!=None else None for x in h]; def hypothesis(self,pt): bestvalue=None; bestclass=None; dataset=PyML.VectorDataSet([pt]); for classidx, classifier in enumerate(self.classifiers): val=classifier.decisionFunc(dataset,0); if (bestvalue==None) or (val>bestvalue): bestvalue=val; bestclass=classidx; return bestclass; def hypotheses(self,pts): bestvalues=[None for 
pt in pts]; bestclasses=[None for pt in pts]; dataset=PyML.VectorDataSet(pts); for classidx, classifier in enumerate(self.classifiers): for ptidx in xrange(len(pts)): val=classifier.decisionFunc(dataset,ptidx); if (bestvalues[ptidx]==None) or (val>bestvalues[ptidx]): bestvalues[ptidx]=val; bestclasses[ptidx]=classidx; return bestclasses; def save(self,filename): if not os.path.exists(filename): os.makedirs(filename); with open(os.path.join(filename,"uniquelabels.list"),"wb") as uniquelabelsfile: pickle.dump(self.uniquelabels,uniquelabelsfile,pickle.HIGHEST_PROTOCOL); for classidx, classname in enumerate(self.uniquelabels): self.classifiers[classidx].save(os.path.join(filename,str(classname)+".pyml.svm")); I am using the latest version of PyML (0.7.2, although PyML.__version__ is 0.7.0). When I construct the classifier with a training dataset, the reported accuracy is ~0.87. When I then save it and reload it, the accuracy is less than 0.001. So, there is something here that I am clearly not persisting correctly, although what that may be is completely non-obvious to me. Would you happen to know what that is?
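    One way to narrow this down with the class as written: save and reload the model in the same process and compare the decision values on a handful of the original training points. If the two disagree, the problem is in how the per-class SVMs are persisted and reconstructed, rather than in the labels or the evaluation code. A sketch, reusing the SVM wrapper defined above and assuming features, labels and kernel are still in scope from training:

        trained = SVM(features, labels, kernel)      # train in memory
        trained.save("/tmp/pyml_model")              # directory created by save()
        reloaded = SVM("/tmp/pyml_model")            # load through the filename constructor
        for pt in features[:5]:
            print trained.hypothesis(pt), reloaded.hypothesis(pt)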

    Read the article

< Previous Page | 327 328 329 330 331 332 333 334 335 336 337 338  | Next Page >