Search Results

Search found 13178 results on 528 pages for 'subsonic active record'.

Page 501/528

  • Fluent NHibernate Many to one mapping

    - by Jit
    I am new to the Hibernate world. It may be a silly question, but I am not able to solve it. I am testing a many-to-one relationship between tables and trying to insert a record. I have a Department table and an Employee table. Employee and Dept have a many-to-one relationship here. I am using Fluent NHibernate to add records. All code is below. Please help - SQL Code create table Dept ( Id int primary key identity, DeptName varchar(20), DeptLocation varchar(20)) create table Employee ( Id int primary key identity, EmpName varchar(20), EmpAge int, DeptId int references Dept(Id)) Class Files public partial class Dept { public virtual System.String DeptLocation { get; set; } public virtual System.String DeptName { get; set; } public virtual System.Int32 Id { get; private set; } public virtual IList<Employee> Employees { get; set; } } public partial class Employee { public virtual System.Int32 DeptId { get; set; } public virtual System.Int32 EmpAge { get; set; } public virtual System.String EmpName { get; set; } public virtual System.Int32 Id { get; private set; } public virtual Project.Model.Dept Dept { get; set; } } Mapping Files public class DeptMapping : ClassMap<Dept> { public DeptMapping() { Id(x => x.Id); Map(x => x.DeptName); Map(x => x.DeptLocation); HasMany(x => x.Employees) .Inverse() .Cascade.All(); } } public class EmployeeMapping : ClassMap<Employee> { public EmployeeMapping() { Id(x => x.Id); Map(x => x.EmpName); Map(x => x.EmpAge); Map(x => x.DeptId); References(x => x.Dept) .Cascade.None(); } } My Code to add try { Dept dept = new Dept(); dept.DeptLocation = "Austin"; dept.DeptName = "Store"; Employee emp = new Employee(); emp.EmpName = "Ron"; emp.EmpAge = 30; IList<Employee> empList = new List<Employee>(); empList.Add(emp); dept.Employees = empList; emp.Dept = dept; IRepository<Dept> rDept = new Repository<Dept>(); rDept.SaveOrUpdate(dept); } catch (Exception ex) { Console.WriteLine(ex.Message); } Here I am getting the error: InnerException = {"Invalid column name 'Dept_id'."} Message = "could not insert: [Project.Model.Employee][SQL: INSERT INTO [Employee] (EmpName, EmpAge, DeptId, Dept_id) VALUES (?, ?, ?, ?); select SCOPE_IDENTITY()]"
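
    A minimal sketch of the usual fix, assuming the rest of the setup stays as posted: NHibernate invents the default foreign-key column "Dept_id" because neither References nor HasMany names a column, while the DeptId property is also mapped on its own, so the generated INSERT lists both DeptId and a non-existent Dept_id. Pointing both sides of the association at the real column and dropping the duplicate property mapping normally resolves it:

      public class DeptMapping : ClassMap<Dept>
      {
          public DeptMapping()
          {
              Id(x => x.Id);
              Map(x => x.DeptName);
              Map(x => x.DeptLocation);
              HasMany(x => x.Employees)
                  .KeyColumn("DeptId")   // use the existing FK column, not the generated "Dept_id"
                  .Inverse()
                  .Cascade.All();
          }
      }

      public class EmployeeMapping : ClassMap<Employee>
      {
          public EmployeeMapping()
          {
              Id(x => x.Id);
              Map(x => x.EmpName);
              Map(x => x.EmpAge);
              // No separate Map(x => x.DeptId); let the reference own the column.
              References(x => x.Dept)
                  .Column("DeptId")
                  .Cascade.None();
          }
      }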

    Read the article

  • Cocoa nextEventMatchingMask not receiving NSMouseMoved event

    - by Jonny
    Hello, I created a local event loop and showed up a borderless window (derived from NSPanel), I found in the event loop there's no NSMouseMoved event received, although I can receive Mouse button down/up events. What should I do to get the NSMouseMoved events? I found making the float window as key window can receive the NSMouseMoved events, but I don't want to change key window. And it appears this is possible, because I found after clicking the test App Icon in System Dock Bar, I can receive the mousemoved events, and the key window/mainwindow are unchanged. Here's the my test code: (Create a Cocoa App project names FloatWindowTest, and put a button to link it with the onClick: IBAction). Thanks in advance! -Jonny #import "FloatWindowTestAppDelegate.h" @interface FloatWindow : NSPanel @end @interface FloatWindowContentView : NSView @end @implementation FloatWindowTestAppDelegate @synthesize window; - (void)delayedAction:(id)sender { // What does this function do? // 1. create a float window // 2. create a local event loop // 3. print out the events got from nextEventMatchingMask. // 4. send it to float window. // What is the problem? // In local event loop, althrough the event mask has set NSMouseMovedMask // there's no mouse moved messages received. // FloatWindow* floatWindow = [[FloatWindow alloc] init]; NSEvent* event = [NSApp currentEvent]; NSPoint screenOrigin = [[self window] convertBaseToScreen:[event locationInWindow]]; [floatWindow setFrameTopLeftPoint:screenOrigin]; [floatWindow orderFront:nil]; //Making the float window as Key window will work, however //change active window is not anticipated. //[floatWindow makeKeyAndOrderFront:nil]; BOOL done = NO; while (!done) { NSAutoreleasePool* pool = [NSAutoreleasePool new]; NSUInteger eventMask = NSLeftMouseDownMask| NSLeftMouseUpMask| NSMouseMovedMask| NSMouseEnteredMask| NSMouseExitedMask| NSLeftMouseDraggedMask; NSEvent* event = [NSApp nextEventMatchingMask:eventMask untilDate:[NSDate distantFuture] inMode:NSDefaultRunLoopMode dequeue:YES]; //why I cannot get NSMouseMoved event?? NSLog(@"new event %@", [event description]); [floatWindow sendEvent:event]; [pool drain]; } [floatWindow release]; return; } -(IBAction)onClick:(id)sender { //Tried to postpone the local event loop //after return from button's mouse tracking loop. //but not fixes this problem. 
[[NSRunLoop currentRunLoop] performSelector:@selector(delayedAction:) target:self argument:nil order:0 modes:[NSArray arrayWithObject:NSDefaultRunLoopMode]]; } @end @implementation FloatWindow - (id)init { NSRect contentRect = NSMakeRect(200,300,200,300); self = [super initWithContentRect:contentRect styleMask:NSTitledWindowMask backing:NSBackingStoreBuffered defer:YES]; if (self) { [self setLevel:NSFloatingWindowLevel]; NSRect frameRect = [self frameRectForContentRect:contentRect]; NSView* view = [[[FloatWindowContentView alloc] initWithFrame:frameRect] autorelease]; [self setContentView:view]; [self setAcceptsMouseMovedEvents:YES]; [self setIgnoresMouseEvents:NO]; } return self; } - (BOOL)becomesKeyOnlyIfNeeded { return YES; } - (void)becomeMainWindow { NSLog(@"becomeMainWindow"); [super becomeMainWindow]; } - (void)becomeKeyWindow { NSLog(@"becomeKeyWindow"); [super becomeKeyWindow]; } @end @implementation FloatWindowContentView - (BOOL)acceptsFirstMouse:(NSEvent *)theEvent { return YES; } - (BOOL)acceptsFirstResponder { return YES; } - (id)initWithFrame:(NSRect)frameRect { self = [super initWithFrame:frameRect]; if (self) { NSTrackingArea* area; area = [[NSTrackingArea alloc] initWithRect:frameRect options:NSTrackingActiveAlways| NSTrackingMouseMoved| NSTrackingMouseEnteredAndExited owner:self userInfo:nil]; [self addTrackingArea:area]; [area release]; } return self; } - (void)drawRect:(NSRect)rect { [[NSColor redColor] set]; NSRectFill([self bounds]); } - (BOOL)becomeFirstResponder { NSLog(@"becomeFirstResponder"); return [super becomeFirstResponder]; } @end
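
    AppKit normally delivers NSMouseMoved events only to the key window (and to views that have requested them through tracking areas), which matches the behaviour described above. If targeting 10.6 or later is acceptable, one alternative to hand-rolling the event loop is a local event monitor, which observes events without requiring the panel to become key -- a sketch, not tested against this exact setup:

      // Observe mouse-moved events while the float window is up (10.6+).
      id monitor = [NSEvent addLocalMonitorForEventsMatchingMask:NSMouseMovedMask
                                                          handler:^NSEvent *(NSEvent *event) {
          [floatWindow sendEvent:event];   // forward to the borderless panel
          return event;                    // pass the event on unchanged
      }];
      // ... when the panel is dismissed:
      [NSEvent removeMonitor:monitor];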

    Read the article

  • Python: Improving long cumulative sum

    - by Bo102010
    I have a program that operates on a large set of experimental data. The data is stored as a list of objects that are instances of a class with the following attributes: time_point - the time of the sample cluster - the name of the cluster of nodes from which the sample was taken code - the name of the node from which the sample was taken qty1 = the value of the sample for the first quantity qty2 = the value of the sample for the second quantity I need to derive some values from the data set, grouped in three ways - once for the sample as a whole, once for each cluster of nodes, and once for each node. The values I need to derive depend on the (time sorted) cumulative sums of qty1 and qty2: the maximum value of the element-wise sum of the cumulative sums of qty1 and qty2, the time point at which that maximum value occurred, and the values of qty1 and qty2 at that time point. I came up with the following solution: dataset.sort(key=operator.attrgetter('time_point')) # For the whole set sys_qty1 = 0 sys_qty2 = 0 sys_combo = 0 sys_max = 0 # For the cluster grouping cluster_qty1 = defaultdict(int) cluster_qty2 = defaultdict(int) cluster_combo = defaultdict(int) cluster_max = defaultdict(int) cluster_peak = defaultdict(int) # For the node grouping node_qty1 = defaultdict(int) node_qty2 = defaultdict(int) node_combo = defaultdict(int) node_max = defaultdict(int) node_peak = defaultdict(int) for t in dataset: # For the whole system ###################################################### sys_qty1 += t.qty1 sys_qty2 += t.qty2 sys_combo = sys_qty1 + sys_qty2 if sys_combo > sys_max: sys_max = sys_combo # The Peak class is to record the time point and the cumulative quantities system_peak = Peak(time_point=t.time_point, qty1=sys_qty1, qty2=sys_qty2) # For the cluster grouping ################################################## cluster_qty1[t.cluster] += t.qty1 cluster_qty2[t.cluster] += t.qty2 cluster_combo[t.cluster] = cluster_qty1[t.cluster] + cluster_qty2[t.cluster] if cluster_combo[t.cluster] > cluster_max[t.cluster]: cluster_max[t.cluster] = cluster_combo[t.cluster] cluster_peak[t.cluster] = Peak(time_point=t.time_point, qty1=cluster_qty1[t.cluster], qty2=cluster_qty2[t.cluster]) # For the node grouping ##################################################### node_qty1[t.node] += t.qty1 node_qty2[t.node] += t.qty2 node_combo[t.node] = node_qty1[t.node] + node_qty2[t.node] if node_combo[t.node] > node_max[t.node]: node_max[t.node] = node_combo[t.node] node_peak[t.node] = Peak(time_point=t.time_point, qty1=node_qty1[t.node], qty2=node_qty2[t.node]) This produces the correct output, but I'm wondering if it can be made more readable/Pythonic, and/or faster/more scalable. The above is attractive in that it only loops through the (large) dataset once, but unattractive in that I've essentially copied/pasted three copies of the same algorithm. 
To avoid the copy/paste issues of the above, I tried this also: def find_peaks(level, dataset): def grouping(object, attr_name): if attr_name == 'system': return attr_name else: return object.__dict__[attrname] cuml_qty1 = defaultdict(int) cuml_qty2 = defaultdict(int) cuml_combo = defaultdict(int) level_max = defaultdict(int) level_peak = defaultdict(int) for t in dataset: cuml_qty1[grouping(t, level)] += t.qty1 cuml_qty2[grouping(t, level)] += t.qty2 cuml_combo[grouping(t, level)] = (cuml_qty1[grouping(t, level)] + cuml_qty2[grouping(t, level)]) if cuml_combo[grouping(t, level)] > level_max[grouping(t, level)]: level_max[grouping(t, level)] = cuml_combo[grouping(t, level)] level_peak[grouping(t, level)] = Peak(time_point=t.time_point, qty1=node_qty1[grouping(t, level)], qty2=node_qty2[grouping(t, level)]) return level_peak system_peak = find_peaks('system', dataset) cluster_peak = find_peaks('cluster', dataset) node_peak = find_peaks('node', dataset) For the (non-grouped) system-level calculations, I also came up with this, which is pretty: dataset.sort(key=operator.attrgetter('time_point')) def cuml_sum(seq): rseq = [] t = 0 for i in seq: t += i rseq.append(t) return rseq time_get = operator.attrgetter('time_point') q1_get = operator.attrgetter('qty1') q2_get = operator.attrgetter('qty2') timeline = [time_get(t) for t in dataset] cuml_qty1 = cuml_sum([q1_get(t) for t in dataset]) cuml_qty2 = cuml_sum([q2_get(t) for t in dataset]) cuml_combo = [q1 + q2 for q1, q2 in zip(cuml_qty1, cuml_qty2)] combo_max = max(cuml_combo) time_max = timeline.index(combo_max) q1_at_max = cuml_qty1.index(time_max) q2_at_max = cuml_qty2.index(time_max) However, despite this version's cool use of list comprehensions and zip(), it loops through the dataset three times just for the system-level calculations, and I can't think of a good way to do the cluster-level and node-level calaculations without doing something slow like: timeline = defaultdict(int) cuml_qty1 = defaultdict(int) #...etc. for c in cluster_list: timeline[c] = [time_get(t) for t in dataset if t.cluster == c] cuml_qty1[c] = [q1_get(t) for t in dataset if t.cluster == c] #...etc. Does anyone here at Stack Overflow have suggestions for improvements? The first snippet above runs well for my initial dataset (on the order of a million records), but later datasets will have more records and clusters/nodes, so scalability is a concern. This is my first non-trivial use of Python, and I want to make sure I'm taking proper advantage of the language (this is replacing a very convoluted set of SQL queries, and earlier versions of the Python version were essentially very ineffecient straight transalations of what that did). I don't normally do much programming, so I may be missing something elementary. Many thanks!
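
    One way to keep a single copy of the algorithm is to parameterise the grouping with a key function and reuse the same single-pass loop for every level -- a sketch using the names from the question (Peak, dataset), with the system level expressed as a constant key:

      from collections import defaultdict
      from operator import attrgetter

      def find_peaks(dataset, key_func):
          """Map each group key to the Peak at which cumulative qty1+qty2 is largest."""
          qty1 = defaultdict(int)
          qty2 = defaultdict(int)
          best = defaultdict(int)
          peak = {}
          for t in dataset:                      # dataset must already be sorted by time_point
              k = key_func(t)
              qty1[k] += t.qty1
              qty2[k] += t.qty2
              combo = qty1[k] + qty2[k]
              if combo > best[k]:
                  best[k] = combo
                  peak[k] = Peak(time_point=t.time_point, qty1=qty1[k], qty2=qty2[k])
          return peak

      dataset.sort(key=attrgetter('time_point'))
      system_peak = find_peaks(dataset, lambda t: 'system')    # single group
      cluster_peak = find_peaks(dataset, attrgetter('cluster'))
      node_peak = find_peaks(dataset, attrgetter('node'))

    This still walks the dataset three times; if that matters, the three key functions can be folded into one loop by keeping a (key_func, state) pair per grouping, at the cost of a little more bookkeeping.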

    Read the article

  • Objective-C woes: cellForRowAtIndexPath crashes.

    - by Mr. McPepperNuts
    I want to the user to be able to search for a record in a DB. The fetch and the results returned work perfectly. I am having a hard time setting the UItableview to display the result tho. The application continually crashes at cellForRowAtIndexPath. Please, someone help before I have a heart attack over here. Thank you. @implementation SearchViewController @synthesize mySearchBar; @synthesize textToSearchFor; @synthesize myGlobalSearchObject; @synthesize results; @synthesize tableView; @synthesize tempString; #pragma mark - #pragma mark View lifecycle - (void)viewDidLoad { [super viewDidLoad]; } #pragma mark - #pragma mark Table View - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { //handle selection; push view } - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView{ /* if(nullResulSearch == TRUE){ return 1; }else { return[results count]; } */ return[results count]; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return 1; // Test hack to display multiple rows. } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Search Cell Identifier"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if(cell == nil){ cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleValue2 reuseIdentifier:CellIdentifier] autorelease]; } NSLog(@"TEMPSTRING %@", tempString); cell.textLabel.text = tempString; return cell; } #pragma mark - #pragma mark Memory management - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; } - (void)viewDidUnload { self.tableView = nil; } - (void)dealloc { [results release]; [mySearchBar release]; [textToSearchFor release]; [myGlobalSearchObject release]; [super dealloc]; } #pragma mark - #pragma mark Search Function & Fetch Controller - (NSManagedObject *)SearchDatabaseForText:(NSString *)passdTextToSearchFor{ NSManagedObject *searchObj; UndergroundBaseballAppDelegate *appDelegate = [[UIApplication sharedApplication] delegate]; NSManagedObjectContext *managedObjectContext = appDelegate.managedObjectContext; NSFetchRequest *request = [[NSFetchRequest alloc] init]; NSPredicate *predicate = [NSPredicate predicateWithFormat:@"name == [c]%@", passdTextToSearchFor]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Entry" inManagedObjectContext:managedObjectContext]; NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:NO]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [request setSortDescriptors:sortDescriptors]; [request setEntity: entity]; [request setPredicate: predicate]; NSError *error; results = [managedObjectContext executeFetchRequest:request error:&error]; if([results count] == 0){ NSLog(@"No results found"); searchObj = nil; nullResulSearch == TRUE; }else{ if ([[[results objectAtIndex:0] name] caseInsensitiveCompare:passdTextToSearchFor] == 0) { NSLog(@"results %@", [[results objectAtIndex:0] name]); searchObj = [results objectAtIndex:0]; nullResulSearch == FALSE; }else{ NSLog(@"No results found"); searchObj = nil; nullResulSearch == TRUE; } } [tableView reloadData]; [request release]; [sortDescriptors release]; return searchObj; } - (void)searchBarSearchButtonClicked:(UISearchBar *)searchBar{ textToSearchFor = mySearchBar.text; NSLog(@"textToSearchFor: %@", textToSearchFor); 
myGlobalSearchObject = [self SearchDatabaseForText:textToSearchFor]; NSLog(@"myGlobalSearchObject: %@", myGlobalSearchObject); tempString = [myGlobalSearchObject valueForKey:@"name"]; NSLog(@"tempString: %@", tempString); } @end *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[UILongPressGestureRecognizer isEqualToString:]: unrecognized selector sent to instance 0x3d46c20'
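
    The exception points at a string whose memory has been deallocated and reused (here by a gesture recognizer): under manual reference counting, both results (the autoreleased array from executeFetchRequest:) and tempString are assigned straight to the instance variables, so nothing retains them past the current autorelease pool. A sketch of the usual fix, assuming the properties are declared retain (or copy for the string):

      // In SearchDatabaseForText: go through the property so the fetch results are retained.
      self.results = [managedObjectContext executeFetchRequest:request error:&error];

      // In searchBarSearchButtonClicked: same for the string shown in the cell.
      self.tempString = [myGlobalSearchObject valueForKey:@"name"];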

    Read the article

  • Confirm disk is broken when it passes all diagnostics

    - by Halfgaar
    I have a system with a potentially broken disk, but the disk passes all manner of diagnostics. I have been unable to confirm that the disk is broken. What are my options? I could just replace the disk, but because this situation is very similar to another more severe situation I have (long story), I'd like to actually make a proper diagnosis as opposed to randomly binning hardware. The issue and history is this: I had a Debian Linux PC (500 MHz P3) acting as router, nagios and munin. It crashed every couple of weeks. No logs or dmesg could be obtained (because it's an old Compaq that only boots when you configure it as keyboardless, making connecting a keyboard later, once it's booted, impossible). At the time, I just replaced the computer with another Compaq (P4 2.4 GHz) because I thought the hardware was faulty. However, it still crashed every couple of weeks. the difference is that on this computer, I can still SSH into it. It gives all kinds of errors on hda. I'd like to confirm that the disk is broken, but nothing I do confirms this: SMART error logs shows no errors. Normally when a disk starts acting up, SMART my pass, but it still records a read-error in the error log. SMART self-test (smartctl -t long /dev/sda) completes without errors. re-allocated sector count (a tell-tale parameter) has been 31 all its life, even when the disk was still in use in my desktop PC years ago, and it still is. The figure never changed. dd if=/dev/sda of=/dev/null bs=4096 passes with flying colors. What else can I do to assess the health of the drive? Again, this is not about making this router fully functional again, this is a disk forensic question, because it just so happens that I have another server that potentially has the same problem, and knowing the answer to this will possibly help me greatly. For the record, below are logs and such. This is the smartctl -a output: smartctl 5.40 2010-07-12 r3124 [i686-pc-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Seagate Barracuda 7200.7 and 7200.7 Plus family Device Model: ST3120026A Serial Number: 5JT1CLQM Firmware Version: 3.06 User Capacity: 120,034,123,776 bytes Device is: In smartctl database [for details use: -P show] ATA Version is: 6 ATA Standard is: ATA/ATAPI-6 T13 1410D revision 2 Local Time is: Mon Jul 1 21:18:33 2013 CEST SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 24) The self-test routine was aborted by the host. Total time to complete Offline data collection: ( 430) seconds. Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. No General Purpose Logging support. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 85) minutes. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 050 046 006 Pre-fail Always - 47766662 3 Spin_Up_Time 0x0003 097 096 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 10 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 31 7 Seek_Error_Rate 0x000f 084 060 030 Pre-fail Always - 820305 9 Power_On_Hours 0x0032 048 048 000 Old_age Always - 46373 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 605 194 Temperature_Celsius 0x0022 036 065 000 Old_age Always - 36 195 Hardware_ECC_Recovered 0x001a 050 046 000 Old_age Always - 47766662 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 196 000 Old_age Always - 6 200 Multi_Zone_Error_Rate 0x0000 100 253 000 Old_age Offline - 0 202 Data_Address_Mark_Errs 0x0032 100 253 000 Old_age Always - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Aborted by host 80% 46361 - # 2 Extended offline Completed without error 00% 46358 - # 3 Short offline Completed without error 00% 12046 - # 4 Extended offline Completed without error 00% 10472 - # 5 Short offline Completed without error 00% 10471 - # 6 Short offline Completed without error 00% 10471 - # 7 Short offline Completed without error 00% 6770 - # 8 Extended offline Aborted by host 90% 5958 - # 9 Extended offline Aborted by host 90% 5951 - #10 Short offline Completed without error 00% 5024 - #11 Extended offline Aborted by host 80% 5024 - #12 Short offline Completed without error 00% 3697 - #13 Short offline Completed without error 00% 237 - #14 Short offline Completed without error 00% 145 - #15 Short offline Completed without error 00% 69 - #16 Extended offline Completed without error 00% 68 - #17 Short offline Completed without error 00% 66 - #18 Short offline Completed without error 00% 49 - #19 Short offline Completed without error 00% 29 - #20 Short offline Completed without error 00% 29 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. And this is the dmesg error when it has crashed (which repeats for a bunch of different sectors): [1755091.211136] sd 0:0:0:0: [sda] Unhandled error code [1755091.211144] sd 0:0:0:0: [sda] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [1755091.211151] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 08 fe ad 38 00 00 08 00 [1755091.211166] end_request: I/O error, dev sda, sector 150908216
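
    A few additional read-oriented checks that can be run before binning the disk -- a sketch, with the device name assumed to be /dev/sda as in the logs:

      smartctl -l selftest /dev/sda                     # confirm the long self-test really completed
      badblocks -sv /dev/sda                            # non-destructive, read-only surface scan
      dd if=/dev/sda of=/dev/null bs=4096 iflag=direct  # re-read the disk bypassing the page cache

    It may also be worth noting that hostbyte=DID_BAD_TARGET in the crash log is reported by the controller/transport layer rather than by the drive itself, which is at least consistent with a cable, controller or driver problem instead of failing media -- and would explain why every drive-level diagnostic passes.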

    Read the article

  • Exposing the AnyConnect HTTPS service to outside network

    - by Maciej Swic
    We have a Cisco ASA 5505 with firmware ASA9.0(1) and ASDM 7.0(2). It is configured with a public ip address, and when trying to reach it from the outside by HTTPS for AnyConnect VPN, we get the following log output: 6 Nov 12 2012 07:01:40 <client-ip> 51000 <asa-ip> 443 Built inbound TCP connection 2889 for outside:<client-ip>/51000 (<client-ip>/51000) to identity:<asa-ip>/443 (<asa-ip>/443) 6 Nov 12 2012 07:01:40 <client-ip> 50999 <asa-ip> 443 Built inbound TCP connection 2890 for outside:<client-ip>/50999 (<client-ip>/50999) to identity:<asa-ip>/443 (<asa-ip>/443) 6 Nov 12 2012 07:01:40 <client-ip> 51000 <asa-ip> 443 Teardown TCP connection 2889 for outside:<client-ip>/51000 to identity:<asa-ip>/443 duration 0:00:00 bytes 0 No valid adjacency 6 Nov 12 2012 07:01:40 <client-ip> 50999 <asa-ip> 443 Teardown TCP connection 2890 for outside:<client-ip>/50999 to identity:<asa-ip>/443 duration 0:00:00 bytes 0 No valid adjacency We finished the startup wizard and the anyconnect vpn wizard and here is the resulting configuration: Cryptochecksum: 12262d68 23b0d136 bb55644a 9c08f86b : Saved : Written by enable_15 at 07:08:30.519 UTC Mon Nov 12 2012 ! ASA Version 9.0(1) ! hostname vpn domain-name office.<redacted>.com enable password <redacted> encrypted passwd <redacted> encrypted names ip local pool vpn-pool 192.168.67.2-192.168.67.253 mask 255.255.255.0 ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! interface Vlan1 nameif inside security-level 100 ip address 192.168.68.250 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address <redacted> 255.255.255.248 ! ftp mode passive dns server-group DefaultDNS domain-name office.<redacted>.com object network obj_any subnet 0.0.0.0 0.0.0.0 pager lines 24 logging enable logging asdm informational mtu outside 1500 mtu inside 1500 icmp unreachable rate-limit 1 burst-size 1 no asdm history enable arp timeout 14400 no arp permit-nonconnected ! 
object network obj_any nat (inside,outside) dynamic interface timeout xlate 3:00:00 timeout pat-xlate 0:00:30 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 timeout floating-conn 0:00:00 dynamic-access-policy-record DfltAccessPolicy user-identity default-domain LOCAL http server enable http 192.168.68.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart warmstart crypto ipsec ikev2 ipsec-proposal DES protocol esp encryption des protocol esp integrity sha-1 md5 crypto ipsec ikev2 ipsec-proposal 3DES protocol esp encryption 3des protocol esp integrity sha-1 md5 crypto ipsec ikev2 ipsec-proposal AES protocol esp encryption aes protocol esp integrity sha-1 md5 crypto ipsec ikev2 ipsec-proposal AES192 protocol esp encryption aes-192 protocol esp integrity sha-1 md5 crypto ipsec ikev2 ipsec-proposal AES256 protocol esp encryption aes-256 protocol esp integrity sha-1 md5 crypto ipsec security-association pmtu-aging infinite crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set ikev2 ipsec-proposal AES256 AES192 AES 3DES DES crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP crypto map outside_map interface outside crypto map inside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP crypto map inside_map interface inside crypto ca trustpoint _SmartCallHome_ServerCA crl configure crypto ca trustpoint ASDM_TrustPoint0 enrollment self subject-name CN=vpn proxy-ldc-issuer crl configure crypto ca trustpool policy crypto ca certificate chain _SmartCallHome_ServerCA certificate ca 6ecc7aa5a7032009b8cebcf4e952d491 <redacted> quit crypto ca certificate chain ASDM_TrustPoint0 certificate f678a050 <redacted> quit crypto ikev2 policy 1 encryption aes-256 integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 policy 10 encryption aes-192 integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 policy 20 encryption aes integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 policy 30 encryption 3des integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 policy 40 encryption des integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 enable outside client-services port 443 crypto ikev2 remote-access trustpoint ASDM_TrustPoint0 telnet timeout 5 ssh 192.168.68.0 255.255.255.0 inside ssh timeout 5 console timeout 0 vpn-addr-assign local reuse-delay 60 dhcpd auto_config outside ! dhcpd address 192.168.68.254-192.168.68.254 inside ! 
threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept ssl trust-point ASDM_TrustPoint0 inside ssl trust-point ASDM_TrustPoint0 outside webvpn enable outside enable inside anyconnect image disk0:/anyconnect-win-3.1.01065-k9.pkg 1 anyconnect image disk0:/anyconnect-linux-3.1.01065-k9.pkg 2 anyconnect image disk0:/anyconnect-macosx-i386-3.1.01065-k9.pkg 3 anyconnect profiles GM-AnyConnect_client_profile disk0:/GM-AnyConnect_client_profile.xml anyconnect enable tunnel-group-list enable group-policy GroupPolicy_GM-AnyConnect internal group-policy GroupPolicy_GM-AnyConnect attributes wins-server none dns-server value 192.168.68.254 vpn-tunnel-protocol ikev2 ssl-client default-domain value office.<redacted>.com webvpn anyconnect profiles value GM-AnyConnect_client_profile type user username <redacted> password <redacted> encrypted tunnel-group GM-AnyConnect type remote-access tunnel-group GM-AnyConnect general-attributes address-pool vpn-pool default-group-policy GroupPolicy_GM-AnyConnect tunnel-group GM-AnyConnect webvpn-attributes group-alias GM-AnyConnect enable ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum client auto message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect rsh inspect rtsp inspect esmtp inspect sqlnet inspect skinny inspect sunrpc inspect xdmcp inspect sip inspect netbios inspect tftp inspect ip-options ! service-policy global_policy global prompt hostname context call-home reporting anonymous Cryptochecksum:12262d6823b0d136bb55644a9c08f86b : end Clearly we are missing something, but the question is, what?
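
    One thing that stands out in the configuration as posted is that there is no route statement at all, so the ASA has no default route back towards the Internet client (compare the other ASA configuration later on this page, which does include a "route outside 0.0.0.0 0.0.0.0 ..." line), and "No valid adjacency" teardowns are a common symptom of exactly that. A sketch, with the upstream gateway address assumed:

      route outside 0.0.0.0 0.0.0.0 <upstream-gateway-ip> 1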

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive which is now in use only to access music files (sibelius, audio, midi, live, logic etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list: - make complete filesystem clone with antonio diaz's ddrescue - run Disk Warrior on copy, repair whatever errors occurred - wipe out all ACLs on entire drive - set all permissions to the same value - wide open 777 - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive - transfer data to newly formatted drive folder by folder, resetting the spotlight index in between adding each to observe for issues (interesting here is that no issues occurred except for in Documents folder - when I transferred only the Documents folder to a newly formatted drive on its own - no trouble. It appears almost as thought it may not be the content but the quantity or specific combination of data that results in problems) - use DataRescue to transfer the data to yet another newly formatted drive to expose any missed hidden files Between each of the above steps I stopped Spotlight (search for anything beginning with md in Activity Monitor - All Processes and quitting it), deleted the .Spotlight-V100 directory from the affected drive. Restart Splotlight indexing by adding drive to Spotlight privacy list and removing it. In each case the same issue occurs - Spotlight begins indexing normally (or so it seems), then the index estimated time increases, usually to 4 hours remaining. This is where it gets stuck and continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4 hours remaining situation - if I reattach it, Spotlight forever estimates remaining time and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue and not tied to any particular piece of hardware or protocol (used USB and FW drives). I have tried this on several machines (3 to be precise) and in 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas? ---update 2-6-11 Since I have not received any responses except the one below which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID where PID is the mdworker process ID to try and determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: After indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes" and the Terminal window with the opensnoop result stops moving. 
I then proceeded to execute the same command on mds to see what it was doing and here's what I get, repeatedly: 501 57 mds 21 / 501 57 mds 21 /Volumes/Sno Leppard 501 57 mds 21 /Volumes/Tiger 501 57 mds 21 /Volumes/Leppard 501 57 mds 21 /Volumes/Disk Warrior 501 57 mds 21 /Volumes/ONM Data These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are excluded from SPotlight indexing at the moment. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions - what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M __final edit__ I finally resolved the issue and here is how I did it. I used the terminal command "sudo opensnoop -p PID" where the PID is the process id of the processes I was monitoring. I was looking at all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support - they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated - mds and mdworker(s) would start indexing and eventually the spotlight icon would say 6 hours remaining and all mdworkers were gone, mds at 90% or so of CPU. But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from spotlight search and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again. I have no clue what is causing this behavior, still, but I did find a functional solution to the problem. Anyone with a similar problem - run opensnoop on all instances of mds and mdworker and wait patiently for wdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find an actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio related files - performer, live, SD2 - things like that), I will edit again to let you all know. Thanks for ay attempts to help, M
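
    For anyone repeating this kind of isolation work, the per-volume index can also be driven from the command line instead of the Privacy pane -- a sketch, with the volume name used above:

      sudo mdutil -s /Volumes/ONM\ Data      # show current indexing status
      sudo mdutil -i off /Volumes/ONM\ Data  # stop indexing while moving data around
      sudo mdutil -E /Volumes/ONM\ Data      # erase and rebuild the index from scratch
      sudo opensnoop -n mdworker             # watch every mdworker without chasing PIDs

    The last line assumes opensnoop's -n (filter by process name) option, which avoids re-running the command each time a new mdworker is spawned.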

    Read the article

  • searchdisplaycontroller: change the text of the searchbar

    - by kudorgyozo
    Hello, SHORT DESCRIPTION OF PROBLEM: I want to set the text of a search bar without automatically triggering the search display controller that is bound to it. LONG DESCRIPTION OF PROBLEM: I have an iPhone application with a search bar and a search display controller. The search display controller is used for autocomplete. For autocomplete I use an SQLite database. The user enters the first few letters of a keyword and the results are shown in the table of the search display controller. An SQL SELECT query is executed for every character typed. This part works OK: the letters have been entered and the results are visible. The problem is the following: if the user selects a row from the table, I want to change the text of the search bar to the text that was selected in the autocomplete results table. I also want to hide the search display controller. This is not working. After the search display controller disappears, the textbox in the search bar is empty. I have no idea what's wrong. I didn't think something as simple as changing the text of a textbox could get so complicated. I have tried to change the text in two methods: first in the didSelectRowAtIndexPath method (for the search results table of the search display controller), but that didn't help. The text was there while the search display controller was active (animating away) but after that the textbox was empty. - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { NSLog(@"didSelectRowAtIndexPath"); if (allKeywords.count > 0) { didSelectKeyword = TRUE; self.selectedKeyword = (NSString *)[allKeywords objectAtIndex: indexPath.row]; NSLog(@"keyword didselectrow at idxp: %@", self.selectedKeyword); shouldReloadResults = FALSE; [[self.mainViewController keywordSearchBar] setText: selectedKeyword]; //shouldReloadResults = TRUE; [[mainViewController searchDisplayController] setActive:NO animated: YES]; } } I also tried to change it in the searchDisplayControllerDidEndSearch method but that didn't help either. The textbox was... again... empty. Edit: actually it wasn't empty; the text was there, but after the disappearing animation the results of the autocomplete table are still there. So it ends up getting worse. - (void)searchDisplayControllerDidEndSearch:(UISearchDisplayController *)controller { NSLog(@"searchDisplayControllerDidEndSearch"); if (didSelectKeyword) { shouldReloadResults = FALSE; NSLog(@"keyword sdc didendsearch: %@", selectedKeyword); [[mainViewController keywordSearchBar] setText: selectedKeyword]; // In this method I call another method which selects the data from SQLite. This part is working. - (BOOL)searchDisplayController:(UISearchDisplayController *)controller shouldReloadTableForSearchString:(NSString *)searchString { NSLog(@"shouldReloadResults : %i", shouldReloadResults); if (shouldReloadResults) { NSLog(@"shouldReloadTableForSearchString: %@", searchString); [self GetKeywords: searchString : @"GB"]; NSLog(@"shouldReloadTableForSearchString vege"); return YES; } return NO; } Please ask me if it's not clear what my problem is. I need your help. Thank you
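
    A sketch of the ordering that is often suggested for this situation: deactivate the search display controller first, without animation, and only then write the text into the search bar, so the controller's teardown cannot clear it again (names taken from the code above):

      - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
          if (allKeywords.count > 0) {
              NSString *keyword = [allKeywords objectAtIndex:indexPath.row];
              shouldReloadResults = FALSE;
              [[mainViewController searchDisplayController] setActive:NO animated:NO];
              [[mainViewController keywordSearchBar] setText:keyword];
          }
      }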

    Read the article

  • how can i convert a .tpl file to a .php file? [closed]

    - by kim
    What do I do? I am building a site and there is a categories.tpl that I want to go where sitemap.php is. Sorry, I am brand new to all this. Let me try to be more clear; I'd show you a picture but it is marking it as spam. I have a menu at the top of my site like with any retail site: [About Cart Account and Products]. When you click Products it takes you to the sitemap.php file. However, I need the content from categories.tpl to appear instead. (Categories in PrestaShop is another way of saying products.) Here is the categories.tpl code: {include file=$tpl_dir./breadcrumb.tpl} {include file=$tpl_dir./errors.tpl} {if $category->id AND $category->active} {$category->name|escape:'htmlall':'UTF-8'} {$nb_products|intval} {if $nb_products > 1}{l s='products'}{else}{l s='product'}{/if} {if $scenes} <!-- Scenes --> {include file=$tpl_dir./scenes.tpl scenes=$scenes} {else} <!-- Category image --> {if $category->id_image} <img src="{$link->getCatImageLink($category->link_rewrite, $category->id_image, 'category')}" alt="{$category->name|escape:'htmlall':'UTF-8'}" title="{$category->name|escape:'htmlall':'UTF-8'}" id="categoryImage" /> {/if} {/if} {if $category->description} <div class="cat_desc">{$category->description}</div> {/if} {if isset($subcategories)} <!-- Subcategories --> <div id="subcategories"> <h3>{l s='Subcategories'}</h3> <ul class="inline_list"> {foreach from=$subcategories item=subcategory} <li> <a href="{$link->getCategoryLink($subcategory.id_category, $subcategory.link_rewrite)|escape:'htmlall':'UTF-8'}" title="{$subcategory.name|escape:'htmlall':'UTF-8'}"> {if $subcategory.id_image} <img src="{$link->getCatImageLink($subcategory.link_rewrite, $subcategory.id_image, 'medium')}" alt="" /> {else} <img src="{$img_cat_dir}default-medium.jpg" alt="" /> {/if} </a> <br /> <a href="{$link->getCategoryLink($subcategory.id_category, $subcategory.link_rewrite)|escape:'htmlall':'UTF-8'}">{$subcategory.name|escape:'htmlall':'UTF-8'}</a> </li> {/foreach} </ul> <br class="clear"/> </div> {/if} {if $products} {include file=$tpl_dir./product-sort.tpl} {include file=$tpl_dir./product-list.tpl products=$products} {include file=$tpl_dir./pagination.tpl} {elseif !isset($subcategories)} <p class="warning">{l s='There is no product in this category.'}</p> {/if} {elseif $category->id} {l s='This category is currently unavailable.'} {/if} and here is the sitemap.php include(dirname(__FILE__).'/config/config.inc.php'); include(dirname(__FILE__).'/header.php'); include(dirname(__FILE__).'/product-sort.php'); $nbProducts = intval(Product::getNewProducts(intval($cookie->id_lang), isset($p) ? intval($p) - 1 : NULL, isset($n) ? intval($n) : NULL, true)); include(dirname(__FILE__).'/pagination.php'); $smarty->assign(array( 'products' => Product::getNewProducts(intval($cookie->id_lang), intval($p) - 1, intval($n), false, $orderBy, $orderWay), 'nbProducts' => intval($nbProducts))); $smarty->display(_PS_THEME_DIR_.'new-products.tpl'); include(dirname(__FILE__).'/footer.php'); ?>

    Read the article

  • calling delphi dll from c#

    - by Wouter Roux
    Hi, I have a Delphi dll defined like this: TMPData = record Lastname, Firstname: array[0..40] of char; Birthday: TDateTime; Pid: array[0..16] of char; Title: array[0..20] of char; Female: Boolean; Street: array[0..40] of char; ZipCode: array[0..10] of char; City: array[0..40] of char; Phone, Fax, Department, Company: array[0..20] of char; Pn: array[0..40] of char; In: array[0..16] of char; Hi: array[0..8] of char; Account: array[0..20] of char; Valid, Status: array[0..10] of char; Country, NameAffix: array[0..20] of char; W, H: single; Bp: array[0..10] of char; SocialSecurityNumber: array[0..9] of char; State: array[0..2] of char; end; function Init(const tmpData: TMPData; var ErrorCode: integer; ResetFatalError: boolean = false): boolean; procedure GetData(out tmpData: TMPData); My current c# signatures looks like this: [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)] public struct TMPData { [MarshalAs(UnmanagedType.LPStr, SizeConst = 40)] public string Lastname; [MarshalAs(UnmanagedType.LPStr, SizeConst = 40)] public string Firstname; [MarshalAs(UnmanagedType.R8)] public double Birthday; [MarshalAs(UnmanagedType.LPStr, SizeConst = 16)] public string Pid; [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Title; [MarshalAs(UnmanagedType.Bool)] public bool Female; [MarshalAs(UnmanagedType.LPStr, SizeConst = 40)] public string Street; [MarshalAs(UnmanagedType.LPStr, SizeConst = 10)] public string ZipCode; [MarshalAs(UnmanagedType.LPStr, SizeConst = 40)] public string City; [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Phone; [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Fax; [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Department; [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Company; [MarshalAs(UnmanagedType.LPStr, SizeConst = 40)] public string Pn; [MarshalAs(UnmanagedType.LPStr, SizeConst = 16)] public string In; [MarshalAs(UnmanagedType.LPStr, SizeConst = 8)] public string Hi; [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Account; [MarshalAs(UnmanagedType.LPStr, SizeConst = 10)] public string Valid; [MarshalAs(UnmanagedType.LPStr, SizeConst = 10)] public string Status; [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Country; [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string NameAffix; [MarshalAs(UnmanagedType.I4)] public int W; [MarshalAs(UnmanagedType.I4)] public int H; [MarshalAs(UnmanagedType.LPStr, SizeConst = 10)] public string Bp; [MarshalAs(UnmanagedType.LPStr, SizeConst = 9)] public string SocialSecurityNumber; [MarshalAs(UnmanagedType.LPStr, SizeConst = 2)] public string State; } [DllImport("MyDll.dll")] [return: MarshalAs(UnmanagedType.Bool)] public static extern bool Init(TMPData tmpData, int ErrorCode, bool ResetFatalError); [DllImport("MyDll.dll")] [return: MarshalAs(UnmanagedType.Bool)] public static extern bool GetData(out TMPData tmpData); I first call Init setting the BirthDay, LastName and FirstName. I then call GetData but the TMPData structure I get back is incorrect. The FirstName, LastName and Birthday fields are populated but the data is incorrect. Is the mapping correct? ( "array[0..40] of char" equal to "[MarshalAs(UnmanagedType.LPStr, SizeConst = 40)]" )?
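
    The mapping is the likely problem: in a Delphi record, array[0..40] of char is an inline fixed-size buffer of 41 characters, not a pointer, so UnmanagedType.LPStr does not match the memory layout -- UnmanagedType.ByValTStr with SizeConst equal to the array length is the usual marshalling. W and H are declared as single (a 4-byte float), not an int, Init takes the record and ErrorCode by reference, and GetData is a procedure (void), not a bool. A partial sketch, assuming the DLL exports use stdcall and AnsiChar:

      [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
      public struct TMPData
      {
          [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 41)] public string Lastname;   // array[0..40] = 41 chars inline
          [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 41)] public string Firstname;
          public double Birthday;                                                        // TDateTime is a double
          [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 17)] public string Pid;
          // ... remaining strings follow the same ByValTStr pattern (SizeConst = upper bound + 1),
          // with Female as [MarshalAs(UnmanagedType.U1)] since a Delphi Boolean is one byte ...
          public float W;                                                                // Delphi 'single' = 4-byte float
          public float H;
          [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 11)] public string Bp;
          [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 10)] public string SocialSecurityNumber;
          [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 3)]  public string State;
      }

      [DllImport("MyDll.dll", CallingConvention = CallingConvention.StdCall)]
      [return: MarshalAs(UnmanagedType.U1)]   // Delphi Boolean result is a single byte
      public static extern bool Init(ref TMPData tmpData, ref int ErrorCode, bool ResetFatalError);

      [DllImport("MyDll.dll", CallingConvention = CallingConvention.StdCall)]
      public static extern void GetData(out TMPData tmpData);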

    Read the article

  • Cisco ASA - Enable communication between same security level

    - by Conor
    I have recently inherited a network with a Cisco ASA (running version 8.2). I am trying to configure it to allow communication between two interfaces configured with the same security level (DMZ-DMZ) "same-security-traffic permit inter-interface" has been set, but hosts are unable to communicate between the interfaces. I am assuming that some NAT settings are causing my issue. Below is my running config: ASA Version 8.2(3) ! hostname asa enable password XXXXXXXX encrypted passwd XXXXXXXX encrypted names ! interface Ethernet0/0 switchport access vlan 400 ! interface Ethernet0/1 switchport access vlan 400 ! interface Ethernet0/2 switchport access vlan 420 ! interface Ethernet0/3 switchport access vlan 420 ! interface Ethernet0/4 switchport access vlan 450 ! interface Ethernet0/5 switchport access vlan 450 ! interface Ethernet0/6 switchport access vlan 500 ! interface Ethernet0/7 switchport access vlan 500 ! interface Vlan400 nameif outside security-level 0 ip address XX.XX.XX.10 255.255.255.248 ! interface Vlan420 nameif public security-level 20 ip address 192.168.20.1 255.255.255.0 ! interface Vlan450 nameif dmz security-level 50 ip address 192.168.10.1 255.255.255.0 ! interface Vlan500 nameif inside security-level 100 ip address 192.168.0.1 255.255.255.0 ! ftp mode passive clock timezone JST 9 same-security-traffic permit inter-interface same-security-traffic permit intra-interface object-group network DM_INLINE_NETWORK_1 network-object host XX.XX.XX.11 network-object host XX.XX.XX.13 object-group service ssh_2220 tcp port-object eq 2220 object-group service ssh_2251 tcp port-object eq 2251 object-group service ssh_2229 tcp port-object eq 2229 object-group service ssh_2210 tcp port-object eq 2210 object-group service DM_INLINE_TCP_1 tcp group-object ssh_2210 group-object ssh_2220 object-group service zabbix tcp port-object range 10050 10051 object-group service DM_INLINE_TCP_2 tcp port-object eq www group-object zabbix object-group protocol TCPUDP protocol-object udp protocol-object tcp object-group service http_8029 tcp port-object eq 8029 object-group network DM_INLINE_NETWORK_2 network-object host 192.168.20.10 network-object host 192.168.20.30 network-object host 192.168.20.60 object-group service imaps_993 tcp description Secure IMAP port-object eq 993 object-group service public_wifi_group description Service allowed on the Public Wifi Group. Allows Web and Email. 
service-object tcp-udp eq domain service-object tcp-udp eq www service-object tcp eq https service-object tcp-udp eq 993 service-object tcp eq imap4 service-object tcp eq 587 service-object tcp eq pop3 service-object tcp eq smtp access-list outside_access_in remark http traffic from outside access-list outside_access_in extended permit tcp any object-group DM_INLINE_NETWORK_1 eq www access-list outside_access_in remark ssh from outside to web1 access-list outside_access_in extended permit tcp any host XX.XX.XX.11 object-group ssh_2251 access-list outside_access_in remark ssh from outside to penguin access-list outside_access_in extended permit tcp any host XX.XX.XX.10 object-group ssh_2229 access-list outside_access_in remark http from outside to penguin access-list outside_access_in extended permit tcp any host XX.XX.XX.10 object-group http_8029 access-list outside_access_in remark ssh from outside to internal hosts access-list outside_access_in extended permit tcp any host XX.XX.XX.13 object-group DM_INLINE_TCP_1 access-list outside_access_in remark dns service to internal host access-list outside_access_in extended permit object-group TCPUDP any host XX.XX.XX.13 eq domain access-list dmz_access_in extended permit ip 192.168.10.0 255.255.255.0 any access-list dmz_access_in extended permit tcp any host 192.168.10.29 object-group DM_INLINE_TCP_2 access-list public_access_in remark Web access to DMZ websites access-list public_access_in extended permit object-group TCPUDP any object-group DM_INLINE_NETWORK_2 eq www access-list public_access_in remark General web access. (HTTP, DNS & ICMP and Email) access-list public_access_in extended permit object-group public_wifi_group any any pager lines 24 logging enable logging asdm informational mtu outside 1500 mtu public 1500 mtu dmz 1500 mtu inside 1500 no failover icmp unreachable rate-limit 1 burst-size 1 no asdm history enable arp timeout 60 global (outside) 1 interface global (dmz) 2 interface nat (public) 1 0.0.0.0 0.0.0.0 nat (dmz) 1 0.0.0.0 0.0.0.0 nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) tcp interface 2229 192.168.0.29 2229 netmask 255.255.255.255 static (inside,outside) tcp interface 8029 192.168.0.29 www netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.13 192.168.10.10 netmask 255.255.255.255 dns static (dmz,outside) XX.XX.XX.11 192.168.10.30 netmask 255.255.255.255 dns static (dmz,inside) 192.168.0.29 192.168.10.29 netmask 255.255.255.255 static (dmz,public) 192.168.20.30 192.168.10.30 netmask 255.255.255.255 dns static (dmz,public) 192.168.20.10 192.168.10.10 netmask 255.255.255.255 dns static (inside,dmz) 192.168.10.0 192.168.0.0 netmask 255.255.255.0 dns access-group outside_access_in in interface outside access-group public_access_in in interface public access-group dmz_access_in in interface dmz route outside 0.0.0.0 0.0.0.0 XX.XX.XX.9 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy http server enable http 192.168.0.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 
telnet timeout 5 ssh 192.168.0.0 255.255.255.0 inside ssh timeout 20 console timeout 0 dhcpd dns 61.122.112.97 61.122.112.1 dhcpd auto_config outside ! dhcpd address 192.168.20.200-192.168.20.254 public dhcpd enable public ! dhcpd address 192.168.0.200-192.168.0.254 inside dhcpd enable inside ! threat-detection basic-threat threat-detection statistics host threat-detection statistics access-list no threat-detection statistics tcp-intercept ntp server 130.54.208.201 source public webvpn ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum client auto message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect ip-options inspect netbios inspect rsh inspect rtsp inspect skinny inspect esmtp inspect sqlnet inspect sunrpc inspect tftp inspect sip inspect xdmcp !

    Read the article

  • image processing algorithm in MATLAB

    - by user261002
    I am trying to reconstruct an algorithm belonging to this paper: Decomposition of biospeckle images in temporary spectral bands. Here is an explanation of the algorithm: We recorded a sequence of N successive speckle images with a sampling frequency fs. In this way it was possible to observe how a pixel evolves through the N images. That evolution can be treated as a time series and can be processed in the following way: Each signal corresponding to the evolution of every pixel was used as input to a bank of filters. The intensity values were previously divided by their temporal mean value to minimize local differences in reflectivity or illumination of the object. The maximum frequency that can be adequately analyzed is determined by the sampling theorem and is half of the sampling frequency fs. The latter is set by the CCD camera, the size of the image, and the frame grabber. The bank of filters is outlined in Fig. 1. In our case, ten 5th-order Butterworth [11] filters were used, but this number can be varied according to the required discrimination. The bank was implemented in a computer using MATLAB software. We chose the Butterworth filter because, in addition to its simplicity, it is maximally flat. Other filters, an infinite impulse response, or a finite impulse response could be used. By means of this bank of filters, ten corresponding signals of each filter of each temporary pixel evolution were obtained as output. Average energy Eb in each signal was then calculated (the formula is reconstructed below), where pb(n) is the intensity of the filtered pixel in the nth image for filter b divided by its mean value and N is the total number of images. In this way, ten values of energy for each pixel were obtained, each of them belonging to one of the frequency bands in Fig. 1. With these values it is possible to build ten images of the active object, each one of which shows how much energy of time-varying speckle there is in a certain frequency band. False color assignment to the gray levels in the results would help in discrimination. And here is my MATLAB code based on that: clear all for i=0:39 str = num2str(i); str1 = strcat(str,'.mat'); load(str1); D{i+1}=A; end new_max = max(max(A)); new_min = min(min(A)); for i=20:180 for j=20:140 ts = []; for k=1:40 ts = [ts D{k}(i,j)]; %%% kth image pixel i,j --- ts is time series end ts = double(ts); temp = mean(ts); ts = ts-temp; ts = ts/temp; N = 5; % filter order W = [0.00001 0.05;0.05 0.1;0.1 0.15;0.15 0.20;0.20 0.25;0.25 0.30;0.30 0.35;0.35 0.40;0.40 0.45;0.45 0.50]; N1 = 5; for ind = 1:10 Wn = W(ind,:); [B,A] = butter(N1,Wn); ts_f(ind,:) = filter(B,A,ts); end for ind=1:10 imag_test1{ind}(i,j) =sum((ts_f(ind,:)./mean(ts_f(ind,:))).^2); end end end for i=1:10 temp_imag = imag_test1{i}(:,:); x=isnan(temp_imag); temp_imag(x)=0; temp_imag=medfilt2(temp_imag); t_max = max(max(temp_imag)); t_min = min(min(temp_imag)); temp_imag = (temp_imag-t_min).*(double(new_max-new_min)/double(t_max-t_min))+double(new_min); imag_test2{i}(:,:) = temp_imag; end for i=1:10 A=imag_test2{i}(:,:); B=A/max(max(A)); B=histeq(B); figure,imshow(B) colorbar end but I am not getting the same result as the paper. Does anybody have any idea why, or where I have gone wrong? Reference: Link to the paper
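
    The average-energy formula the quoted description refers to (the equation itself is missing from the pasted text) is, reconstructed from the surrounding definitions, presumably the mean of the squared filtered signal:

      E_b = \frac{1}{N} \sum_{n=1}^{N} p_b^2(n)

    One detail worth checking against that definition: the MATLAB loop above divides the band-pass filtered series by its own mean again inside the sum (ts_f(ind,:)./mean(ts_f(ind,:))), and a band-pass output has a mean close to zero, so that division can blow the energies up and may be one source of the mismatch with the paper. The normalisation by the temporal mean is already done once, before filtering, and the 1/N factor is also absent from the code.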

    Read the article

  • Optimal storage of data structure for fast lookup and persistence

    - by Mikael Svenson
    Scenario I have the following methods: public void AddItemSecurity(int itemId, int[] userIds) public int[] GetValidItemIds(int userId) Initially I'm thinking storage on the form: itemId -> userId, userId, userId and userId -> itemId, itemId, itemId AddItemSecurity is based on how I get data from a third party API, GetValidItemIds is how I want to use it at runtime. There are potentially 2000 users and 10 million items. Item id's are on the form: 2007123456, 2010001234 (10 digits where first four represent the year). AddItemSecurity does not have to perform super fast, but GetValidIds needs to be subsecond. Also, if there is an update on an existing itemId I need to remove that itemId for users no longer in the list. I'm trying to think about how I should store this in an optimal fashion. Preferably on disk (with caching), but I want the code maintainable and clean. If the item id's had started at 0, I thought about creating a byte array the length of MaxItemId / 8 for each user, and set a true/false bit if the item was present or not. That would limit the array length to little over 1mb per user and give fast lookups as well as an easy way to update the list per user. By persisting this as Memory Mapped Files with the .Net 4 framework I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the id, stripping out the year, and store an array per year could be a solution. The ItemId - UserId[] list can be serialized directly to disk and read/write with a normal FileStream in order to persist the list and diff it when there are changes. Each time a new user is added all the lists have to updated as well, but this can be done nightly. Question Should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL server will not perform fast enough, and it would give an overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thought or insights on the matter is appreciated. And I want to try to solve it without adding too much hardware :) [Update 2010-03-31] I have now tested with SQL server 2008 under the following conditions. Table with two columns (userid,itemid) both are Int Clustered index on the two columns Added ~800.000 items for 180 users - Total of 144 million rows Allocated 4gb ram for SQL server Dual Core 2.66ghz laptop SSD disk Use a SqlDataReader to read all itemid's into a List Loop over all users If I run one thread it averages on 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still ok. From there on the results are decreasing. Adding a third thread brings alot of the queries up to 2 seonds. A forth thread, up to 4 seconds, a fifth spikes some of the queries up to 50 seconds. The CPU is roofing while this is going on, even on one thread. My test app takes some due to the speedy loop, and sql the rest. Which leads me to the conclusion that it won't scale very well. At least not on my tested hardware. Are there ways to optimize the database, say storing an array of int's per user instead of one record per item. But this makes it harder to remove items.
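
    A sketch of the bit-per-item idea described above, assuming item ids are first remapped to a dense range (for instance by stripping the year and keeping one array per user per year, as suggested); the buffer could equally be backed by a .NET 4 memory-mapped file view instead of a plain byte array:

      public sealed class UserItemBitmap
      {
          private readonly byte[] bits;

          public UserItemBitmap(int maxItemId)
          {
              bits = new byte[maxItemId / 8 + 1];   // one bit per possible item id
          }

          public void Set(int itemId, bool present)
          {
              int index = itemId >> 3;
              byte mask = (byte)(1 << (itemId & 7));
              if (present) bits[index] |= mask;
              else         bits[index] &= (byte)~mask;
          }

          public bool Contains(int itemId)
          {
              return (bits[itemId >> 3] & (1 << (itemId & 7))) != 0;
          }
      }

    With roughly a million ids per year this is about 125 KB per user per year, so both GetValidItemIds and the "remove items no longer in the list" update become cheap bit operations rather than row scans.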

    Read the article

  • Using UUIDs for cheap equals() and hashCode()

    - by Tom McIntyre
    I have an immutable class, TokenList, which consists of a list of Token objects, which are also immutable: @Immutable public final class TokenList { private final List<Token> tokens; public TokenList(List<Token> tokens) { this.tokens = Collections.unmodifiableList(new ArrayList(tokens)); } public List<Token> getTokens() { return tokens; } } I do several operations on these TokenLists that take multiple TokenLists as inputs and return a single TokenList as the output. There can be arbitrarily many TokenLists going in, and each can have arbitrarily many Tokens. These operations are expensive, and there is a good chance that the same operation (ie the same inputs) will be performed multiple times, so I would like to cache the outputs. However, performance is critical, and I am worried about the expense of performing hashCode() and equals() on these objects that may contain arbitrarily many elements (as they are immutable then hashCode could be cached, but equals will still be expensive). This led me to wondering whether I could use a UUID to provide equals() and hashCode() simply and cheaply by making the following updates to TokenList: @Immutable public final class TokenList { private final List<Token> tokens; private final UUID uuid; public TokenList(List<Token> tokens) { this.tokens = Collections.unmodifiableList(new ArrayList(tokens)); this.uuid = UUID.randomUUID(); } public List<Token> getTokens() { return tokens; } public UUID getUuid() { return uuid; } } And something like this to act as a cache key: @Immutable public final class TopicListCacheKey { private final UUID[] uuids; public TopicListCacheKey(TopicList... topicLists) { uuids = new UUID[topicLists.length]; for (int i = 0; i < uuids.length; i++) { uuids[i] = topicLists[i].getUuid(); } } @Override public int hashCode() { return Arrays.hashCode(uuids); } @Override public boolean equals(Object other) { if (other == this) return true; if (other instanceof TopicListCacheKey) return Arrays.equals(uuids, ((TopicListCacheKey) other).uuids); return false; } } I figure that there are 2^128 different UUIDs and I will probably have at most around 1,000,000 TokenList objects active in the application at any time. Given this, and the fact that the UUIDs are used combinatorially in cache keys, it seems that the chances of this producing the wrong result are vanishingly small. Nevertheless, I feel uneasy about going ahead with it as it just feels 'dirty'. Are there any reasons I should not use this system? Will the performance costs of the SecureRandom used by UUID.randomUUID() outweigh the gains (especially since I expect multiple threads to be doing this at the same time)? Are collisions going to be more likely than I think? Basically, is there anything wrong with doing it this way?? Thanks.
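    Two notes that may help frame the decision. On the collision question, a rough birthday-bound estimate for version-4 UUIDs (which carry 122 random bits) with n live objects is p \approx n^2 / 2^{123}; for n = 10^6 that is on the order of 10^{-25}, which is negligible for a cache key. On the cost question, since TokenList is immutable, a cheaper middle ground is to keep content-based equality but precompute the hash once and use it as a fast reject in equals(). A sketch of that alternative (not the asker's code; Token is assumed to implement equals() and hashCode() correctly):
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Sketch: immutable list with a cached hash used as a cheap pre-check in equals().
    public final class TokenList {

        private final List<Token> tokens;
        private final int cachedHash; // safe to compute eagerly because the list never changes

        public TokenList(List<Token> tokens) {
            this.tokens = Collections.unmodifiableList(new ArrayList<>(tokens));
            this.cachedHash = this.tokens.hashCode();
        }

        public List<Token> getTokens() {
            return tokens;
        }

        @Override
        public int hashCode() {
            return cachedHash;
        }

        @Override
        public boolean equals(Object other) {
            if (other == this) return true;
            if (!(other instanceof TokenList)) return false;
            TokenList that = (TokenList) other;
            if (this.cachedHash != that.cachedHash) return false; // cheap reject covers almost all misses
            return this.tokens.equals(that.tokens);               // full walk only when hashes match
        }
    }
    With that in place, the expensive element-by-element comparison only runs when two lists genuinely share a hash, which in a hash-keyed cache is exactly the case that has to be verified anyway. The UUID approach avoids even that cost, but at the price of treating structurally equal lists built twice as different cache entries.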

    Read the article

  • Can't configure PAM + LDAP on Debian Lenny - Getting error=49 on server logs

    - by Jorge Suárez de Lis
    I've been migrating some servers and desktops using Ubuntu 10.04 from getting the users from an old OpenLDAP implementation to a newer Centos Active Directory. I haven't had any problems so far, until I reached a Debian Lenny server. I've set up the server as the others, setting /etc/ldap.conf and /etc/ldap/ldap.conf. However, when I issue "getent passwd", I get nothing from the LDAP server. Reading the pam_ldap manpage, I realized that /etc/ldap.conf was not an accepted file by pam_ldap -it worked with Ubuntu though-, so I renamed it to /etc/pam_ldap.conf. Same result. However, once I've changed the name of this file, when I login using SSH I get this on the LDAP server logs: [20/Jul/2012:11:19:40 +0200] conn=16501 fd=155 slot=155 connection from x.x.x.50 to 10.1.176.237 [20/Jul/2012:11:19:40 +0200] conn=16501 op=0 BIND dn="uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3 [20/Jul/2012:11:19:40 +0200] conn=16501 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=ubuntu,ou=applications,ou=citius,dc=inv,dc=usc,dc=es" [20/Jul/2012:11:19:40 +0200] conn=16501 op=1 SRCH base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" scope=2 filter="(uid=jorge.suarez)" attrs=ALL [20/Jul/2012:11:19:40 +0200] conn=16501 op=1 RESULT err=0 tag=101 nentries=1 etime=0 notes=U [20/Jul/2012:11:19:40 +0200] conn=16501 op=2 BIND dn="uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3 [20/Jul/2012:11:19:40 +0200] conn=16501 op=2 RESULT err=49 tag=97 nentries=0 etime=0 The password isn't working. I don't know that could be wrong, anything else seems to be OK. That user/password is working from another clients: [20/Jul/2012:11:29:39 +0200] conn=16528 fd=188 slot=188 connection from x.x.x.224 to 10.1.176.237 [20/Jul/2012:11:29:39 +0200] conn=16528 op=0 BIND dn="uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3 [20/Jul/2012:11:29:39 +0200] conn=16528 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=ubuntu,ou=applications,ou=citius,dc=inv,dc=usc,dc=es" [20/Jul/2012:11:29:39 +0200] conn=16528 op=1 SRCH base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" scope=2 filter="(uid=jorge.suarez)" attrs=ALL [20/Jul/2012:11:29:39 +0200] conn=16528 op=1 RESULT err=0 tag=101 nentries=1 etime=0 notes=U [20/Jul/2012:11:29:39 +0200] conn=16528 op=2 BIND dn="uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3 [20/Jul/2012:11:29:39 +0200] conn=16528 op=2 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=jorge.suarez,ou=people,ou=citius,dc=inv,dc=usc,dc=es" I'm using SSHA for storing passwords on the LDAP server. Maybe this is not supported by Debian Lenny? On pam_ldap.conf, I've set up this, as in all the other servers: # Do not hash the password at all; presume # the directory server will do it, if # necessary. This is the default. pam_password md5 Also tried clear, but it didn't work. Anyways, it's weird that issuing getent passwd still gets me no users. However, if I use pamtest from the package libpam-dotfile to test login, it works. # pamtest ssh jorge.suarez Trying to authenticate <jorge.suarez> for service <ssh>. Password: Authentication successful. # pamtest foo jorge.suarez Trying to authenticate <jorge.suarez> for service <foo>. Password: Authentication successful. But "su" won't work also: # su jorge.suarez Id. 
descoñecido: jorge.suarez Just the output from getent passwd : # getent passwd root:x:0:0:root:/root:/bin/bash daemon:x:1:1:daemon:/usr/sbin:/bin/sh bin:x:2:2:bin:/bin:/bin/sh sys:x:3:3:sys:/dev:/bin/sh sync:x:4:65534:sync:/bin:/bin/sync games:x:5:60:games:/usr/games:/bin/sh man:x:6:12:man:/var/cache/man:/bin/sh lp:x:7:7:lp:/var/spool/lpd:/bin/sh mail:x:8:8:mail:/var/mail:/bin/sh news:x:9:9:news:/var/spool/news:/bin/sh uucp:x:10:10:uucp:/var/spool/uucp:/bin/sh proxy:x:13:13:proxy:/bin:/bin/sh www-data:x:33:33:www-data:/var/www:/bin/sh backup:x:34:34:backup:/var/backups:/bin/sh list:x:38:38:Mailing List Manager:/var/list:/bin/sh irc:x:39:39:ircd:/var/run/ircd:/bin/sh gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/bin/sh nobody:x:65534:65534:nobody:/nonexistent:/bin/sh libuuid:x:100:101::/var/lib/libuuid:/bin/sh Debian-exim:x:101:103::/var/spool/exim4:/bin/false statd:x:102:65534::/var/lib/nfs:/bin/false sshd:x:104:65534::/var/run/sshd:/usr/sbin/nologin luser:x:1000:1000:Usuario local de Burdeos,,,:/home/luser:/bin/bash messagebus:x:105:107::/var/run/dbus:/bin/false sge-admin:x:1001:1001:Administrador do SGE,,,:/home/cluster/sge-admin:/bin/bash ntp:x:107:110::/home/ntp:/bin/false haldaemon:x:108:111:Hardware abstraction layer,,,:/var/run/hald:/bin/false vde2-net:x:109:114::/var/run/vde2:/bin/false uml-net:x:110:115::/home/uml-net:/bin/false polkituser:x:111:116:PolicyKit,,,:/var/run/PolicyKit:/bin/false Debian-pxe:x:113:65534:Dummy user for Debian pxe package,,,:/home/Debian-pxe:/bin/false Nscd was stopped from the beginning.

    Read the article

  • Win32: IProgressDialog will not disappear until you mouse over it.

    - by Ian Boyd
    i'm using the Win32 progress dialog. The damnest thing is that when i call: progressDialog.StopProgressDialog(); it doesn't disappear. It stays on screen until the user moves her mouse over it - then it suddenly disappers. The call to StopProgressDialog returns right away (i.e. it's not a synchronous call). i can prove this by doing things after the call has returned: private void button1_Click(object sender, EventArgs e) { //Force red background to prove we've started this.BackColor = Color.Red; this.Refresh(); //Start a progress dialog IProgressDialog pd = (IProgressDialog)new ProgressDialog(); pd.StartProgressDialog(this.Handle, null, PROGDLG.Normal, IntPtr.Zero); //The long running operation System.Threading.Thread.Sleep(10000); //Stop the progress dialog pd.SetLine(1, "Stopping Progress Dialog", false, IntPtr.Zero); pd.StopProgressDialog(); pd = null; //Return form to normal color to prove we've stopped. this.BackColor = SystemColors.Control; this.Refresh(); } The form: starts gray turns red to show we've stared turns back to gray color to show we've called stop So the call to StopProgressDialog has returned, except the progress dialog is still sitting there, mocking me, showing the message: Stopping Progress Dialog Doesn't Appear for 10 seconds Additionally, the progress dialog does not appear on screen until the System.Threading.Thread.Sleep(10000); ten second sleep is over. Not limited to .NET WinForms The same code also fails in Delphi, which is also an object wrapper around Window's windows: procedure TForm1.Button1Click(Sender: TObject); var pd: IProgressDialog; begin Self.Color := clRed; Self.Repaint; pd := CoProgressDialog.Create; pd.StartProgressDialog(Self.Handle, nil, PROGDLG_NORMAL, nil); Sleep(10000); pd.SetLine(1, StringToOleStr('Stopping Progress Dialog'), False, nil); pd.StopProgressDialog; pd := nil; Self.Color := clBtnFace; Self.Repaint; end; PreserveSig An exception would be thrown if StopProgressDialog was failing. Most of the methods in IProgressDialog, when translated into C# (or into Delphi), use the compiler's automatic mechanism of converting failed COM HRESULTS into a native language exception. In other words the following two signatures will throw an exception if the COM call returned an error HRESULT (i.e. a value less than zero): //C# void StopProgressDialog(); //Delphi procedure StopProgressDialog; safecall; Whereas the following lets you see the HRESULT's and react yourself: //C# [PreserveSig] int StopProgressDialog(); //Delphi function StopProgressDialog: HRESULT; stdcall; HRESULT is a 32-bit value. If the high-bit is set (or the value is negative) it is an error. i am using the former syntax. So if StopProgressDialog is returning an error it will be automatically converted to a language exception. Note: Just for SaG i used the [PreserveSig] syntax, the returned HRESULT is zero; MsgWait? The symptom is similar to what Raymond Chen described once, which has to do with the incorrect use of PeekMessage followed by MsgWaitForMultipleObjects: "Sometimes my program gets stuck and reports one fewer record than it should. I have to jiggle the mouse to get the value to update. After a while longer, it falls two behind, then three..." But that would mean that the failure is in IProgressDialog, since it fails equally well on CLR .NET WinForms and native Win32 code.

    Read the article

  • php soapclient responds as 'null' but xml response is ok using __getLastResponse

    - by Roger S
    I'm looking for help with a php soapclient call: The call is to a remote panel and I want to pull off the log data. There are various services published via the WSDL and I am able to use several with no issues. Here's the list: array(12) { [0]=> string(77) "GetGPTimerChannelsResponse GetGPTimerChannels(GetGPTimerChannels $parameters)" [1]=> string(74) "GetGPTimerChannelResponse GetGPTimerChannel(GetGPTimerChannel $parameters)" [2]=> string(74) "SetGPTimerChannelResponse SetGPTimerChannel(SetGPTimerChannel $parameters)" [3]=> string(47) "GetSlaveResponse GetSlave(GetSlave $parameters)" [4]=> string(71) "GetLogDataInlineResponse GetLogDataInline(GetLogDataInline $parameters)" [5]=> string(71) "GetLogItemInlineResponse GetLogItemInline(GetLogItemInline $parameters)" [6]=> string(59) "GetAlarmListResponse GetAlarmList(GetAlarmList $parameters)" [7]=> string(50) "GetSyslogResponse GetSyslog(GetSyslog $parameters)" [8]=> string(47) "SetSlaveResponse SetSlave(SetSlave $parameters)" [9]=> string(53) "GetVersionResponse GetVersion(GetVersion $parameters)" [10]=> string(53) "GetTDBInfoResponse GetTDBInfo(GetTDBInfo $parameters)" [11]=> string(53) "GetLogItemResponse GetLogItem(GetLogItem $parameters)" } I am trying to use the GetLogDataInline service which requires four parameters - these I'm passing as an array and all seems OK when doing the connection (the request/response takes around 30 seconds to process). When I look at a var_dump / print_r of the result, it returns NULL, whereas when I do a call to the __getLastResponse all of the data I need to populate my local database is there. Below is the code for the soapclient call: $client = new soapclient ("http://192.168.1.126/cgi-bin/cgi.cgi?WSDL", array("trace" => 1, "exceptions" => true, "cache_wsdl" => WSDL_CACHE_NONE, "features" => SOAP_SINGLE_ELEMENT_ARRAYS )); $params = array ( "ResponseType" => "Xml", "Step" => 15, "Start" => "2012-06-14T12:00:00+01:00", "End" => "2012-06-14T23:59:45+01:00" ); $result = $client -> GetLogDataInline ($params); An extract from the var_dump / __getLastResponse is: var_dump($client) object(SoapClient)#1 (4) { ["trace"]= int(1) ["_features"]= int(1) ["_soap_version"]= int(1) ["sdl"]= resource(4) of type (Unknown) } var_dump($result) NULL __getLastResponse 15 2012-06-14T12:00:00+01:00 2012-06-14T23:59:45+01:00 External Temperature Workshop Lux Level Electricity Pulse 100wh Workshop PIR Active Boiler Flow Temp Outside Lux Workshop_Light_Override_Hours Water Consumption (Litres) Electricity kWh Consumption Water Consumption M3 Gas kWh Consumption Outside Lighting Heating Effective Setpoint Router Reset Workshop Lighting State Heating Run Hours Heating On Receptin Percent RH Reception Temp Reception RH Reception Temp 2012-06-14T12:00:00+01:00 -85.6 2.1 -1.0 Off -91.5 13.8 0.0 -1.0 0.0 0.0 -0.1 Off 21.0 Off Off 0.0 Off 53.0 22.0 53.0 22.0 2012-06-14T12:00:15+01:00 -85.4 2.1 -1.0 Off -91.8 13.8 0.0 -1.0 0.0 0.0 -0.1 Off 21.0 Off Off 0.0 Off 53.0 22.0 53.0 22.0 2012-06-14T12:00:30+01:00 -85.4 2.1 -1.0 Off -91.8 13.8 0.0 -1.0 0.0 0.0 -0.1 Off 21.0 Off Off 0.0 Off 53.0 22.0 53.0 22.0 2012-06-14T12:00:45+01:00 -85.4 2.1 -1.0 Off -91.5 13.7 0.0 -1.0 0.0 0.0 -0.1 Off 21.0 Off Off 0.0 Off 54.0 22.0 54.0 22.0 (this should be xml but doesn't appear to have copied correctly) If you've got this far, thanks for taking the time. Any help would be much appreciated.

    Read the article

  • Core Plot never stops asking for data, hangs device

    - by Ben Collins
    I'm trying to set up a core plot that looks somewhat like the AAPLot example on the core-plot wiki. I have set up my plot like this: - (void)initGraph:(CPXYGraph*)graph forDays:(NSUInteger)numDays { self.cplhView.hostedLayer = graph; graph.paddingLeft = 30.0; graph.paddingTop = 20.0; graph.paddingRight = 30.0; graph.paddingBottom = 20.0; CPXYPlotSpace *plotSpace = (CPXYPlotSpace*)graph.defaultPlotSpace; plotSpace.xRange = [CPPlotRange plotRangeWithLocation:CPDecimalFromFloat(0) length:CPDecimalFromFloat(numDays)]; plotSpace.yRange = [CPPlotRange plotRangeWithLocation:CPDecimalFromFloat(0) length:CPDecimalFromFloat(1)]; CPLineStyle *lineStyle = [CPLineStyle lineStyle]; lineStyle.lineColor = [CPColor blackColor]; lineStyle.lineWidth = 2.0f; // Axes NSLog(@"Setting up axes"); CPXYAxisSet *xyAxisSet = (id)graph.axisSet; CPXYAxis *xAxis = xyAxisSet.xAxis; // xAxis.majorIntervalLength = CPDecimalFromFloat(7); // xAxis.minorTicksPerInterval = 7; CPXYAxis *yAxis = xyAxisSet.yAxis; // yAxis.majorIntervalLength = CPDecimalFromFloat(0.1); // Line plot with gradient fill NSLog(@"Setting up line plot"); CPScatterPlot *dataSourceLinePlot = [[[CPScatterPlot alloc] initWithFrame:graph.bounds] autorelease]; dataSourceLinePlot.identifier = @"Data Source Plot"; dataSourceLinePlot.dataLineStyle = nil; dataSourceLinePlot.dataSource = self; [graph addPlot:dataSourceLinePlot]; CPColor *areaColor = [CPColor colorWithComponentRed:1.0 green:1.0 blue:1.0 alpha:0.6]; CPGradient *areaGradient = [CPGradient gradientWithBeginningColor:areaColor endingColor:[CPColor clearColor]]; areaGradient.angle = -90.0f; CPFill *areaGradientFill = [CPFill fillWithGradient:areaGradient]; dataSourceLinePlot.areaFill = areaGradientFill; dataSourceLinePlot.areaBaseValue = CPDecimalFromString(@"320.0"); // OHLC plot NSLog(@"OHLC Plot"); CPLineStyle *whiteLineStyle = [CPLineStyle lineStyle]; whiteLineStyle.lineColor = [CPColor whiteColor]; whiteLineStyle.lineWidth = 1.0f; CPTradingRangePlot *ohlcPlot = [[[CPTradingRangePlot alloc] initWithFrame:graph.bounds] autorelease]; ohlcPlot.identifier = @"OHLC"; ohlcPlot.lineStyle = whiteLineStyle; ohlcPlot.stickLength = 2.0f; ohlcPlot.plotStyle = CPTradingRangePlotStyleOHLC; ohlcPlot.dataSource = self; NSLog(@"Data source set, adding plot"); [graph addPlot:ohlcPlot]; } And my delegate methods like this: #pragma mark - #pragma mark CPPlotdataSource Methods - (NSUInteger)numberOfRecordsForPlot:(CPPlot *)plot { NSUInteger maxCount = 0; NSLog(@"Getting number of records."); if (self.data1 && [self.data1 count] > maxCount) { maxCount = [self.data1 count]; } if (self.data2 && [self.data2 count] > maxCount) { maxCount = [self.data2 count]; } if (self.data3 && [self.data3 count] > maxCount) { maxCount = [self.data3 count]; } NSLog(@"%u records", maxCount); return maxCount; } - (NSNumber *)numberForPlot:(CPPlot *)plot field:(NSUInteger)fieldEnum recordIndex:(NSUInteger)index { NSLog(@"Getting record @ idx %u", index); return [NSNumber numberWithInt:index]; } All the code above is in the view controller for the view hosting the plot, and when initGraph is called, numDays is 30. I realize of course that this plot, if it even worked, would look nothing like the AAPLot example. The problem I'm having is that the view is never shown. It finished loading because viewDidLoad is the method that calls initGraph above, and the NSLog statements indicate that initGraph finishes. What's strange is that I return a value of 54 from numberOfRecordsForPlot, but the plot asks for more than 54 data points. 
In fact, it never stops asking. The NSLog statement in numberForPlot:field:recordIndex prints forever, going from 0 to 54 and then looping back around and continuing. What's going on? Why won't the plot stop asking for data and draw itself?

    Read the article

  • Recover RAID 5 data after created new array instead of re-using

    - by Brigadieren
    Folks please help - I am a newb with a major headache at hand (perfect storm situation). I have a 3 1tb hdd on my ubuntu 11.04 configured as software raid 5. The data had been copied weekly onto another separate off the computer hard drive until that completely failed and was thrown away. A few days back we had a power outage and after rebooting my box wouldn't mount the raid. In my infinite wisdom I entered mdadm --create -f... command instead of mdadm --assemble and didn't notice the travesty that I had done until after. It started the array degraded and proceeded with building and syncing it which took ~10 hours. After I was back I saw that that the array is successfully up and running but the raid is not I mean the individual drives are partitioned (partition type f8 ) but the md0 device is not. Realizing in horror what I have done I am trying to find some solutions. I just pray that --create didn't overwrite entire content of the hard driver. Could someone PLEASE help me out with this - the data that's on the drive is very important and unique ~10 years of photos, docs, etc. Is it possible that by specifying the participating hard drives in wrong order can make mdadm overwrite them? when I do mdadm --examine --scan I get something like ARRAY /dev/md/0 metadata=1.2 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b name=<hostname>:0 Interestingly enough name used to be 'raid' and not the host hame with :0 appended. Here is the 'sanitized' config entries: DEVICE /dev/sdf1 /dev/sde1 /dev/sdd1 CREATE owner=root group=disk mode=0660 auto=yes HOMEHOST <system> MAILADDR root ARRAY /dev/md0 metadata=1.2 name=tanserv:0 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b Here is the output from mdstat cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sdd1[0] sdf1[3] sde1[1] 1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU] unused devices: <none> fdisk shows the following: fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000bf62e Device Boot Start End Blocks Id System /dev/sda1 * 1 9443 75846656 83 Linux /dev/sda2 9443 9730 2301953 5 Extended /dev/sda5 9443 9730 2301952 82 Linux swap / Solaris Disk /dev/sdb: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000de8dd Device Boot Start End Blocks Id System /dev/sdb1 1 91201 732572001 8e Linux LVM Disk /dev/sdc: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00056a17 Device Boot Start End Blocks Id System /dev/sdc1 1 60801 488384001 8e Linux LVM Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000ca948 Device Boot Start End Blocks Id System /dev/sdd1 1 121601 976760001 fd Linux raid autodetect Disk /dev/dm-0: 1250.3 GB, 1250254913536 bytes 255 heads, 63 
sectors/track, 152001 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/dm-0 doesn't contain a valid partition table Disk /dev/sde: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x93a66687 Device Boot Start End Blocks Id System /dev/sde1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xe6edc059 Device Boot Start End Blocks Id System /dev/sdf1 1 121601 976760001 fd Linux raid autodetect Disk /dev/md0: 2000.4 GB, 2000401989632 bytes 2 heads, 4 sectors/track, 488379392 cylinders Units = cylinders of 8 * 512 = 4096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 524288 bytes / 1048576 bytes Disk identifier: 0x00000000 Disk /dev/md0 doesn't contain a valid partition table Per suggestions I did clean up the superblocks and re-created the array with --assume-clean option but with no luck at all. Is there any tool that will help me to revive at least some of the data? Can someone tell me what and how the mdadm --create does when syncs to destroy the data so I can write a tool to un-do whatever was done? After the re-creating of the raid I run fsck.ext4 /dev/md0 and here is the output root@tanserv:/etc/mdadm# fsck.ext4 /dev/md0 e2fsck 1.41.14 (22-Dec-2010) fsck.ext4: Superblock invalid, trying backup blocks... fsck.ext4: Bad magic number in super-block while trying to open /dev/md0 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 Per Shanes' suggestion I tried root@tanserv:/home/mushegh# mkfs.ext4 -n /dev/md0 mke2fs 1.41.14 (22-Dec-2010) Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) Stride=128 blocks, Stripe width=256 blocks 122101760 inodes, 488379392 blocks 24418969 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=0 14905 block groups 32768 blocks per group, 32768 fragments per group 8192 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848 and run fsck.ext4 with every backup block but all returned the following: root@tanserv:/home/mushegh# fsck.ext4 -b 214990848 /dev/md0 e2fsck 1.41.14 (22-Dec-2010) fsck.ext4: Invalid argument while trying to open /dev/md0 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> Any suggestions? Regards!

    Read the article

  • "Invalid Handle Object" when plotting 2 figures Matlab

    - by pinnacler
    I'm having a difficult time understanding the paradigm of Matlab classes vs compared to c++. I wrote code the other day, and I thought it should work. It did not... until I added <handle after the classdef. So I have two classes, landmarks and robot, both are called from within the simulation class. This is the main loop of obj.simulation.animate() and it works, until I try to plot two things at once. DATA.path is a record of all the places a robot has been on the map, and it's updated every time the position is updated. When I try to plot it, by uncommenting the two marked lines below, I get this error: ??? Error using == set Invalid handle object. Error in == simulationsimulation.animate at 45 set(l.lm,'XData',obj.landmarks.apparentPositions(:,1),'YData',obj.landmarks.apparentPositions(:,2)); %INITIALIZE GLOBALS global DATA XX XX = [obj.robot.x ; obj.robot.y]; DATA.i=1; DATA.path = XX; %Setup Plots fig=figure; xlabel('meters'), ylabel('meters') set(fig, 'name', 'Phil''s AWESOME 80''s Robot Simulator') xymax = obj.landmarks.mapSize*3; xymin = -(obj.landmarks.mapSize*3); l.lm=scatter([0],[0],'b+'); %"UNCOMMENT ME"l.pth= plot(0,0,'k.','markersize',2,'erasemode','background'); % vehicle path axis([xymin xymax xymin xymax]); %Simulation Loop for n = 1:720, %Calculate and Set Heading/Location XX = [obj.robot.x;obj.robot.y]; store_data(XX); if n == 120, DATA.path end %Update Position headingChange = navigate(n); obj.robot.updatePosition(headingChange); obj.landmarks.updatePerspective(obj.robot.heading, obj.robot.x, obj.robot.y); %Animate %"UNCOMMENT ME" set(l.pth, 'xdata', DATA.path(1,1:DATA.i), 'ydata', DATA.path(2,1:DATA.i)); set(l.lm,'XData',obj.landmarks.apparentPositions(:,1),'YData',obj.landmarks.apparentPositions(:,2)); rectangle('Position',[-2,-2,4,4]); drawnow This is the classdef for landmarks classdef landmarks <handle properties fixedPositions; %# positions in a fixed coordinate system. [ x, y ] mapSize; %Map Size. Value is side of square x; y; heading; headingChange; end properties (Dependent) apparentPositions end methods function obj = landmarks(mapSize, numberOfTrees) obj.mapSize = mapSize; obj.fixedPositions = obj.mapSize * rand([numberOfTrees, 2]) .* sign(rand([numberOfTrees, 2]) - 0.5); end function apparent = get.apparentPositions(obj) currentPosition = [obj.x ; obj.y]; apparent = bsxfun(@minus,(obj.fixedPositions)',currentPosition)'; apparent = ([cosd(obj.heading) -sind(obj.heading) ; sind(obj.heading) cosd(obj.heading)] * (apparent)')'; end function updatePerspective(obj,tempHeading,tempX,tempY) obj.heading = tempHeading; obj.x = tempX; obj.y = tempY; end end end To me, this is how I understand things. I created a figure l.lm that has about 100 xy points. I can rotate this figure by using set(l.lm,'XData',obj.landmarks.apparentPositions(:,1),'YData',obj.landmarks.apparentPositions(:,2)); When I do that, things work. When I try to plot a second group of XY points, stored in DATA.path, it craps out and I can't figure out why.

    Read the article

  • Does anyone know how to appropriately deal with user timezones in rails 2.3?

    - by Amazing Jay
    We're building a rails app that needs to display dates (and more importantly, calculate them) in multiple timezones. Can anyone point me towards how to work with user timezones in rails 2.3(.5 or .8) The most inclusive article I've seen detailing how user time zones are supposed to work is here: http://wiki.rubyonrails.org/howtos/time-zones... although it is unclear when this was written or for what version of rails. Specifically it states that: "Time.zone - The time zone that is actually used for display purposes. This may be set manually to override config.time_zone on a per-request basis." Keys terms being "display purposes" and "per-request basis". Locally on my machine, this is true. However on production, neither are true. Setting Time.zone persists past the end of the request (to all subsequent requests) and also affects the way AR saves to the DB (basically treating any date as if it were already in UTC even when its not), thus saving completely inappropriate values. We run Ruby Enterprise Edition on production with passenger. If this is my problem, do we need to switch to JRuby or something else? To illustrate the problem I put the following actions in my ApplicationController right now: def test p_time = Time.now.utc s_time = Time.utc(p_time.year, p_time.month, p_time.day, p_time.hour) logger.error "TIME.ZONE" + Time.zone.inspect logger.error ENV['TZ'].inspect logger.error p_time.inspect logger.error s_time.inspect jl = JunkLead.create! jl.date_at = s_time logger.error s_time.inspect logger.error jl.date_at.inspect jl.save! logger.error s_time.inspect logger.error jl.date_at.inspect render :nothing => true, :status => 200 end def test2 Time.zone = 'Mountain Time (US & Canada)' logger.error "TIME.ZONE" + Time.zone.inspect logger.error ENV['TZ'].inspect render :nothing => true, :status => 200 end def test3 Time.zone = 'UTC' logger.error "TIME.ZONE" + Time.zone.inspect logger.error ENV['TZ'].inspect render :nothing => true, :status => 200 end and they yield the following: Processing ApplicationController#test (for 98.202.196.203 at 2010-12-24 22:15:50) [GET] TIME.ZONE#<ActiveSupport::TimeZone:0x2c57a68 @tzinfo=#<TZInfo::DataTimezone: Etc/UTC>, @name="UTC", @utc_offset=0> nil Fri Dec 24 22:15:50 UTC 2010 Fri Dec 24 22:00:00 UTC 2010 Fri Dec 24 22:00:00 UTC 2010 Fri, 24 Dec 2010 22:00:00 UTC +00:00 Fri Dec 24 22:00:00 UTC 2010 Fri, 24 Dec 2010 22:00:00 UTC +00:00 Completed in 21ms (View: 0, DB: 4) | 200 OK [http://www.dealsthatmatter.com/test] Processing ApplicationController#test2 (for 98.202.196.203 at 2010-12-24 22:15:53) [GET] TIME.ZONE#<ActiveSupport::TimeZone:0x2c580a8 @tzinfo=#<TZInfo::DataTimezone: America/Denver>, @name="Mountain Time (US & Canada)", @utc_offset=-25200> nil Completed in 143ms (View: 1, DB: 3) | 200 OK [http://www.dealsthatmatter.com/test2] Processing ApplicationController#test (for 98.202.196.203 at 2010-12-24 22:15:59) [GET] TIME.ZONE#<ActiveSupport::TimeZone:0x2c580a8 @tzinfo=#<TZInfo::DataTimezone: America/Denver>, @name="Mountain Time (US & Canada)", @utc_offset=-25200> nil Fri Dec 24 22:15:59 UTC 2010 Fri Dec 24 22:00:00 UTC 2010 Fri Dec 24 22:00:00 UTC 2010 Fri, 24 Dec 2010 15:00:00 MST -07:00 Fri Dec 24 22:00:00 UTC 2010 Fri, 24 Dec 2010 15:00:00 MST -07:00 Completed in 20ms (View: 0, DB: 4) | 200 OK [http://www.dealsthatmatter.com/test] Processing ApplicationController#test3 (for 98.202.196.203 at 2010-12-24 22:16:03) [GET] TIME.ZONE#<ActiveSupport::TimeZone:0x2c57a68 @tzinfo=#<TZInfo::DataTimezone: Etc/UTC>, @name="UTC", @utc_offset=0> nil 
Completed in 17ms (View: 0, DB: 2) | 200 OK [http://www.dealsthatmatter.com/test3] Processing ApplicationController#test (for 98.202.196.203 at 2010-12-24 22:16:04) [GET] TIME.ZONE#<ActiveSupport::TimeZone:0x2c57a68 @tzinfo=#<TZInfo::DataTimezone: Etc/UTC>, @name="UTC", @utc_offset=0> nil Fri Dec 24 22:16:05 UTC 2010 Fri Dec 24 22:00:00 UTC 2010 Fri Dec 24 22:00:00 UTC 2010 Fri, 24 Dec 2010 22:00:00 UTC +00:00 Fri Dec 24 22:00:00 UTC 2010 Fri, 24 Dec 2010 22:00:00 UTC +00:00 Completed in 151ms (View: 0, DB: 4) | 200 OK [http://www.dealsthatmatter.com/test] It should be clear above that the 2nd call to /test shows Time.zone set to Mountain, even though it shouldn't. Additionally, checking the database reveals that the test action when run after test2 saved a JunkLead record with a date of 2010-12-22 15:00:00, which is clearly wrong.

    Read the article

  • Generate JFreeChart in servlet

    - by San4o
    I'm trying to generate graphs using JFreeChart. I added a record in web.xml and installed the jfreechart library, then compiled the servlet. The servlet code is shown below: import java.io.IOException; import java.io.OutputStream; import java.io.File; import javax.servlet.*; import javax.servlet.http.*; import java.awt.Color; import org.jfree.chart.ChartFactory; import org.jfree.chart.ChartUtilities; import org.jfree.chart.JFreeChart; import org.jfree.data.general.DefaultPieDataset; public class diagram extends HttpServlet { public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { doTestPieChart(request,response); } protected void doTestPieChart(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException { OutputStream out = response.getOutputStream(); try { DefaultPieDataset dataset = new DefaultPieDataset(); dataset.setValue("Graphic Novels", 192); dataset.setValue("History", 125); dataset.setValue("Military Fiction", 236); dataset.setValue("Mystery", 547); dataset.setValue("Performing Arts", 210); dataset.setValue("Science, Non-Fiction", 70); dataset.setValue("Science Fiction", 989); JFreeChart chart = ChartFactory.createPieChart( "Books by Type", dataset, true, false, false ); chart.setBackgroundPaint(Color.white); response.setContentType("image/png"); ChartUtilities.writeChartAsPNG(out, chart, 400, 300); /* ServletContext sc = getServletContext(); String filename = sc.getRealPath("pie.png"); File file = new File(filename); ChartUtilities.saveChartAsPNG(file,chart,400,300); */ } catch (Exception e) { System.out.println(e.toString()); } finally { out.close(); } } } When I launch the servlet, the following error is shown: HTTP Status 500 - type Exception report message description The server encountered an internal error () that prevented it from fulfilling this request.
exception javax.servlet.ServletException: Error instantiating servlet class diagram org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) java.lang.Thread.run(Thread.java:619) root cause java.lang.NoClassDefFoundError: org/jfree/data/general/PieDataset java.lang.Class.getDeclaredConstructors0(Native Method) java.lang.Class.privateGetDeclaredConstructors(Class.java:2389) java.lang.Class.getConstructor0(Class.java:2699) java.lang.Class.newInstance0(Class.java:326) java.lang.Class.newInstance(Class.java:308) org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) java.lang.Thread.run(Thread.java:619) root cause java.lang.ClassNotFoundException: org.jfree.data.general.PieDataset org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1484) org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1329) java.lang.ClassLoader.loadClassInternal(ClassLoader.java:316) java.lang.Class.getDeclaredConstructors0(Native Method) java.lang.Class.privateGetDeclaredConstructors(Class.java:2389) java.lang.Class.getConstructor0(Class.java:2699) java.lang.Class.newInstance0(Class.java:326) java.lang.Class.newInstance(Class.java:308) org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) java.lang.Thread.run(Thread.java:619) note The full stack trace of the root cause is available in the Apache Tomcat/6.0.24 logs. Please help me solve this problem. Where is the problem?
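    The root-cause frames show the webapp class loader failing to find org.jfree.data.general.PieDataset while instantiating the servlet, which normally means the JFreeChart jar (and its jcommon dependency) is on the compile classpath but not deployed under the application's WEB-INF/lib. A small diagnostic that could be added to the servlet above to fail fast with a clearer message (a sketch, not part of the original code):
    @Override
    public void init() throws ServletException {
        // Fail fast if the chart classes are not visible to this web application's class loader.
        try {
            Class.forName("org.jfree.data.general.PieDataset");
        } catch (ClassNotFoundException e) {
            throw new ServletException(
                "JFreeChart classes not found on the webapp classpath; "
                + "check that the jfreechart and jcommon jars are in WEB-INF/lib", e);
        }
    }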

    Read the article

  • Mysql query help - Alter this mysql query to get these results?

    - by sandeepan-nath
    Please execute the following queries first to set up so that you can help me:- CREATE TABLE IF NOT EXISTS `Tutor_Details` ( `id_tutor` int(10) NOT NULL auto_increment, `firstname` varchar(100) NOT NULL default '', `surname` varchar(155) NOT NULL default '', PRIMARY KEY (`id_tutor`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=41 ; INSERT INTO `Tutor_Details` (`id_tutor`,`firstname`, `surname`) VALUES (1, 'Sandeepan', 'Nath'), (2, 'Bob', 'Cratchit'); CREATE TABLE IF NOT EXISTS `Classes` ( `id_class` int(10) unsigned NOT NULL auto_increment, `id_tutor` int(10) unsigned NOT NULL default '0', `class_name` varchar(255) default NULL, PRIMARY KEY (`id_class`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=229 ; INSERT INTO `Classes` (`id_class`,`class_name`, `id_tutor`) VALUES (1, 'My Class', 1), (2, 'Sandeepan Class', 2); CREATE TABLE IF NOT EXISTS `Tags` ( `id_tag` int(10) unsigned NOT NULL auto_increment, `tag` varchar(255) default NULL, PRIMARY KEY (`id_tag`), UNIQUE KEY `tag` (`tag`), KEY `id_tag` (`id_tag`), KEY `tag_2` (`tag`), KEY `tag_3` (`tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=18 ; INSERT INTO `Tags` (`id_tag`, `tag`) VALUES (1, 'Bob'), (6, 'Class'), (2, 'Cratchit'), (4, 'Nath'), (3, 'Sandeepan'), (5, 'My'); CREATE TABLE IF NOT EXISTS `Tutors_Tag_Relations` ( `id_tag` int(10) unsigned NOT NULL default '0', `id_tutor` int(10) default NULL, KEY `Tutors_Tag_Relations` (`id_tag`), KEY `id_tutor` (`id_tutor`), KEY `id_tag` (`id_tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `Tutors_Tag_Relations` (`id_tag`, `id_tutor`) VALUES (3, 1), (4, 1), (1, 2), (2, 2); CREATE TABLE IF NOT EXISTS `Class_Tag_Relations` ( `id_tag` int(10) unsigned NOT NULL default '0', `id_class` int(10) default NULL, `id_tutor` int(10) NOT NULL, KEY `Class_Tag_Relations` (`id_tag`), KEY `id_class` (`id_class`), KEY `id_tag` (`id_tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `Class_Tag_Relations` (`id_tag`, `id_class`, `id_tutor`) VALUES (5, 1, 1), (6, 1, 1), (3, 2, 2), (6, 2, 2); In the present system data which I have given , tutor named "Sandeepan Nath" has has created class named "My Class" and tutor named "Bob Cratchit" has created class named "Sandeepan Class". Requirement - To execute a single query with limit on the results to show search results as per AND logic on the search keywords like this:- If "Sandeepan Class" is searched , Tutor Sandeepan Nath's record from Tutor Details table is returned(because "Sandeepan" is the firstname of Sandeepan Nath and Class is present in class name of Sandeepan's class) If "Class" is searched Both the tutors from the Tutor_details table are fetched because Class is present in the name of the class created by both the tutors. 
Following is what I have so far achieved (PHP Mysql):- <?php $searchTerm1 = "Sandeepan"; $searchTerm2 = "Class"; mysql_select_db("test"); $sql = "SELECT td.* FROM Tutor_Details AS td LEFT JOIN Tutors_Tag_Relations AS ttagrels ON td.id_tutor = ttagrels.id_tutor LEFT JOIN Classes AS wc ON td.id_tutor = wc.id_tutor LEFT JOIN Class_Tag_Relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor LEFT JOIN Tags as t1 on ((t1.id_tag = ttagrels.id_tag) OR (t1.id_tag = wtagrels.id_tag)) LEFT JOIN Tags as t2 on ((t2.id_tag = ttagrels.id_tag) OR (t2.id_tag = wtagrels.id_tag)) where t1.tag LIKE '%".$searchTerm1."%' AND t2.tag LIKE '%".$searchTerm2."%' GROUP BY td.id_tutor LIMIT 10 "; $result = mysql_query($sql); echo $sql; if($result) { while($rec = mysql_fetch_object($result)) $recs[] = $rec; //$rec = mysql_fetch_object($result); echo "<br><br>"; if(is_array($recs)) { foreach($recs as $each) { print_r($each); echo "<br>"; } } } ?> But the results are :- If "Sandeepan Nath" is searched, it does not return any tutor (instead of only Sandeepan's row) If "Sandeepan Class" is searched, it returns Sandeepan's row (instead of Both tutors ) If "Bob Class" is searched, it correctly returns Bob's row If "Bob Cratchit" is searched, it does not return any tutor (instead of only

    Read the article

  • copying link id to an input field

    - by zurna
    The code works fine when links are images. But I have a problem when there are flash movies. Another page opens with undefined in the address bar when it needs to copy the link id to imagesID input box. <script type="text/javascript"> var $input = $("#imagesID"); // <-- your input field $('a.thumb').click(function() { var value = $input.val(); var id = $(this).attr('id'); if (value.match(id)) { value = value.replace(id + ';', ''); } else { value += id + ';'; } $input.val(value); }); </script> <ul class="thumbs"> <li> <a class="thumb" id="62"> <img src="/FLPM/media/images/2A9L1V2X_sm.jpg" alt="Dock" id="62" class="floatLeft" /> </a> <br /> <a href="?Process=&IMAGEID=62" class="thumb"><span class="floatLeft">DELETE</span></a> </li> <li> <a class="thumb" id="61"> <img src="/FLPM/media/images/0E7Q9Z0C_sm.jpg" alt="Desert Landscape" id="61" class="floatLeft" /> </a> <br /> <a href="?Process=&IMAGEID=61" class="thumb"><span class="floatLeft">DELETE</span></a> </li> <li> <a class="thumb" id="60"> <img src="/FLPM/media/images/8R5D7M8O_sm.jpg" alt="Creek" id="60" class="floatLeft" /> </a> <br /> <a href="?Process=&IMAGEID=60" class="thumb"><span class="floatLeft">DELETE</span></a> </li> <li> <a class="thumb" id="59"> <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" codebase="http://active.macromedia.com/flash4/cabs/swflash.cab#version=4,0,0,0" id="59" width="150" height="100" style="float:left; border:5px solid #CCCCCC; margin:5px 10px 10px 0;"> <param name="scale" value="exactfit"> <param name="AllowScriptAccess" value="always" /> <embed name="name" src="http://www.refinethetaste.com/FLPM/media/flashes/7P4A6K7M.swf" quality="high" scale="exactfit" width="150" height="100" type="application/x-shockwave-flash" AllowScriptAccess="always" pluginspage="http://www.macromedia.com/shockwave/download/index.cgi? P1_Prod_Version=ShockwaveFlash"> </embed> </object> </a> <br /> <a href="?Process=&IMAGEID=59" class="thumb"><span class="floatLeft">DELETE</span></a> </li> </ul>

    Read the article

  • Windows installation repair option not showing up

    - by Carl
    I'm trying to repair an existing Windows XP installation. Following the instructions from http://www.microsoft.com/windowsxp/using/helpandsupport/learnmore/tips/doug92.mspx this should work: When the Press any key to boot from CD message is displayed on your screen, press a key to start your computer from the Windows XP CD. Press ENTER when you see the message To setup Windows XP now, and then press ENTER displayed on the Welcome to Setup screen. Do not choose the option to press R to use the Recovery Console. In the Windows XP Licensing Agreement, press F8 to agree to the license agreement. Make sure that your current installation of Windows XP is selected in the box, and then press R to repair Windows XP. Follow the instructions on the screen to complete Setup. On step 5 pressing R does nothing and there is nothing on the screen saying it would. When I just select to install I get a message that a previous installation is there and proceeding will destroy it and installed applications, I can optionally select a directory other than c:\windows, and I can optionally format before continuing. I had tried to go from SP2-SP3. It failed, and then I couldn't get to Safe Mode. I put the SP1 disk back in to do a repair, and I don't see that option. (I don't have an SP2 boot/install disk, I just have the non-boot upgrade package.) UPDATE: Upon loading the Recovery Console, I get a message saying The system registry does not appear to have an active ControlSet key. The system registry may be damaged. You can try restarting it with the Last Known Good configuration or you can try repairing the installation of Windows using the setup program's repair and recovery options. I then did bootcfg /scan - "successful" ... Total installs: 1 ... [1] c:\windows - with the c:\windows command prompt below it. bootcfg /list gives [1] Windows XP Pro; OS Load Options /noexecute=optin /fastdetect; OS Location: c:\windows I followed the instructions at http://michaelstevenstech.com/XPrepairinstall.htm - "Warning 2" link copy E:\i386\ntldr C:\ copy E:\i386\ntdetect.com C:\ attrib -h -r -s C:\boot.ini del C:\boot.ini BootCfg /Rebuild I added /fastdetect when it asked for options. I re-ran Windows setup - no change - no repair option. UPDATE: I followed the procedure at http://support.microsoft.com/default.aspx?scid=kb;en-us;307545 I rebooted. I now get a quick message on bootup to select the boot - 1: [blank] ; Windows XP Professional ; Windows Recover Console. The "1: " is new. The rest is the way it was when all was okay. Selecting 1: and the next one gives the same result - I get to a login icon, and then it asks for a password, with the blinking cursor, but I can't type anything. I reboot with the Windows CD. Now I see a repair option for installation "1: " I selected R on that, and it did "Setup is copying files..." and rebooted when it was done. Then it booted, and I got a window saying "Setup will complete in approximately 39 minutes." That's where I am now. I wasn't expecting this last part - I did a repair several months ago and I don't recall that. UPDATE: Booted up. Asked if I wanted to register Windows online. All my icons are there, and the old desktop documents. Good. All the applications I tried from the Start Menu work (tested a few), except Corel Photopaint - I get registry entry not found errors. Windows ran for a while, then froze. The mouse and keyboard don't work. Pressing the power button got Windows to shut down. 
I probably need to put SP2 on it, and then all the updates for my laptop for XP Pro SP2 (drivers); there's a bunch. The mouse and keyboard quit working again. That wasn't a problem when I first set up this laptop. I've run it 4 times now. Two mouse/keyboard hangs happened when pressing Ctrl-C (to copy text from a Notepad document), and two when selecting Start-Run (I wasn't able to type anything in the box).

    Read the article

< Previous Page | 497 498 499 500 501 502 503 504 505 506 507 508  | Next Page >