Search Results

Search found 7078 results on 284 pages for 'extended range'.


  • Regarding partitions for dual-booting Ubuntu with pre-existing Windows 7

    - by Shasteriskt
    I have zero actual experience with configuring disk partitions, and the material I have read over the past few hours has been confusing me a bit, so please bear with me. First of all, I'd like to explain what I'm setting out to achieve:

    Windows 7 with:
      C:\ Windows 7 (pre-existing installation)
      D:\ Data (already exists and already has files on it)
    Ubuntu 11 (does not exist yet, but I already have a LiveCD in hand) with:
      \root directory for Ubuntu
      \home on its own partition
      \swap on its own partition, planned at around 8 GB

    Here is the current situation: I have a single 500 GB hard disk with Windows 7 x64 installed, and the current partition scheme is as follows:
      System Reserved: 100 MB (Primary, Active)
      C: 100 GB - where Windows 7 is installed (Primary)
      D: 365 GB - where my files are located, LOTS of free space (Primary)

    Now, I would like to shrink my D: drive and create around 40 GB of unallocated disk space for the Ubuntu installation, but here's what's confusing me a bit: I'm thinking I would create an extended partition and subdivide it into 3 logical partitions for the Ubuntu setup I had in mind. (If you think my setup is a bad idea, please let me know, and why. I also hope you can suggest a better one.) I am aware that I can only have up to 4 primary partitions, or 3 primary partitions plus 1 extended partition at most. Now, does the System Reserved partition count as one primary partition? I'm really new to these things and it is totally unclear to me. In shrinking my D: drive using Windows 7's Disk Management tool, I would get unallocated free space which I don't know how to make an extended partition from. It seems like I can only create a primary partition from it, not an extended one. How do I go about it? (I'd also like to note, if it is of any importance, that I am trying to avoid the option to install Ubuntu alongside Windows, and would much rather use the custom install where I can specify which drives I wish to use. Somehow I feel it's safer that way.)

    Read the article

  • SQL SERVER – How to Roll Back SQL Server Database Changes

    - by Pinal Dave
    In a perfect scenario, no unexpected and unplanned changes occur. There are no unpleasant surprises, no inadvertent changes. However, even with all precautions and testing, there is sometimes a need to revert a structure or data change. One of the methods that can be used in this situation is to use an older database backup that has the records or database object structure you want to revert to. For this method, you have to have an adequate full database backup, and a tool that helps with comparison and synchronization is preferred.

    In this article, we will focus on another method: rolling back the changes. This can be done by using an option in SQL Server Management Studio, T-SQL, or ApexSQL Log. The first two solutions have been described in this article. The disadvantage of these methods is that you have to know exactly when the change you want to revert happened, and that all transactions executed on the database in a specific time range are rolled back - the ones you want to undo and the ones you don't.

    How to easily roll back SQL Server database changes using ApexSQL Log? The biggest challenge is to roll back just specific changes, not all changes that happened in a specific time range. While the SQL Server Management Studio option and T-SQL read and roll forward all transactions in the transaction log files, I will show you a solution that finds and scripts only the specific changes that match your criteria. Therefore, you don't need to worry about all the other database changes that you don't want to roll back. ApexSQL Log is a SQL Server disaster recovery tool that reads transaction logs and provides a wide range of filters that enable you to easily roll back only specific data changes.

    First, connect to the online database where you want to roll back the changes. Once you select the database, ApexSQL Log will show its recovery model. Note that changes can be rolled back even for a database in the Simple recovery model, when no database and transaction log backups are available. However, ApexSQL Log achieves best results when the database is in the Full recovery model and you have a chain of subsequent transaction log backups going back to the moment when the change occurred. In this example, we will use only the online transaction log.

    In the next step, use filters to read only the transactions that happened in a specific time range. To remove noise, it's recommended to use as many filters as possible. Besides filtering by the time of the transaction, ApexSQL Log can filter by operation type, table name, transaction state (committed, aborted, running, and unknown), the name of the user who committed the change, specific field values, server process IDs, and transaction description. You can select only the tables affected by the changes you want to roll back. However, if you're not certain which tables were affected, you can leave them all selected and, once the results are shown in the main grid, analyze them to find the ones you want to roll back.

    When you set the filters, you can select how to present the results. ApexSQL Log can automatically create undo or redo scripts, export the transactions into an XML, HTML, CSV, SQL, or SQL Bulk file, and create a batch file that you can use for unattended transaction log reading. In this example, I will open the results in the grid, as I want to analyze them before rolling back the transactions. The results contain information about the transaction, as well as who made it and when.

    For UPDATEs, ApexSQL Log shows both the old and new values, so you can easily see what has happened. To create an UNDO script that rolls back the changes, select the transactions you want to roll back and click Create undo script in the menu. For the DELETE statement selected in the screenshot above, the undo script is:

        INSERT INTO [Sales].[PersonCreditCard] ([BusinessEntityID], [CreditCardID], [ModifiedDate])
        VALUES (297, 8010, '20050901 00:00:00.000')

    When it comes to rolling back database changes, ApexSQL Log has a big advantage, as it rolls back only specific transactions while leaving all other transactions that occurred in the same time range intact. That makes ApexSQL Log a good solution for rolling back inadvertent data and schema changes on your SQL Server databases.

    Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: ApexSQL

    Read the article

  • Exploring TCP throughput with DTrace

    - by user12820842
    One key measure to use when assessing TCP throughput is the amount of unacknowledged data in the pipe. This is sometimes termed the Bandwidth Delay Product (BDP) (note that BDP is often used more generally as the product of the link capacity and the end-to-end delay). In DTrace terms, the amount of unacknowledged data in bytes for the connection is the difference between the next sequence number to send and the lowest unacknowledged sequence number (tcps_snxt - tcps_suna).

    According to the theory, when the number of unacknowledged bytes for the connection is less than the receive window of the peer, the path bandwidth is the limiting factor for throughput. In other words, if we can fill the pipe without the peer TCP complaining (by virtue of its window size reaching 0), we are purely bandwidth-limited. If the peer's receive window is too small, however, the sending TCP has to wait for acknowledgements before it can send more data. In this case the round-trip time (RTT) limits throughput. In such cases the effective throughput limit is the window size divided by the RTT; e.g. if the window size is 64K and the RTT is 0.5 sec, the throughput is 128K/s.

    So a neat way to visually determine if the receive window of clients may be too small would be to compare the distribution of BDP values for the server versus the client's advertised receive window. If the BDP distribution overlaps the send window distribution such that it is to the right (or lower down in DTrace, since quantizations are displayed vertically), it indicates that the amount of unacknowledged data regularly exceeds the client's receive window, so it is possible that the sender has more data to send but is blocked by a zero window on the client side.

    In the following example, we compare the distribution of BDP values to the receive window advertised by the receiver (10.175.96.92) for a large file download via http.

        # dtrace -s tcp_tput.d
        ^C
          BDP(bytes)                            10.175.96.92  80
                 value  ------------- Distribution ------------- count
                    -1 |                                         0
                     0 |                                         6
                     1 |                                         0
                     2 |                                         0
                     4 |                                         0
                     8 |                                         0
                    16 |                                         0
                    32 |                                         0
                    64 |                                         0
                   128 |                                         0
                   256 |                                         3
                   512 |                                         0
                  1024 |                                         0
                  2048 |                                         9
                  4096 |                                         14
                  8192 |                                         27
                 16384 |                                         67
                 32768 |@@                                       1464
                 65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@   32396
                131072 |                                         0

          SWND(bytes)                           10.175.96.92  80
                 value  ------------- Distribution ------------- count
                 16384 |                                         0
                 32768 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 17067
                 65536 |                                         0

    Here we have a puzzle. We can see that the receiver's advertised window is in the 32768-65535 range, while the amount of unacknowledged data in the pipe is largely in the 65536-131071 range. What's going on here? Surely in a case like this we should see zero-window events, since the amount of data in the pipe regularly exceeds the window size of the receiver. Yet we don't see any zero-window events, since the SWND distribution displays no 0 values - it stays within the 32768-65535 range.

    The explanation is straightforward enough. TCP window scaling is in operation for this connection - the Window Scale TCP option is used at connection setup to allow a connection to advertise (and have advertised to it) a window greater than 65536 bytes. In this case the scaling shift is 1, which explains why the SWND values are clustered in the 32768-65535 range rather than the 65536-131071 range - the SWND value needs to be multiplied by two, since the receiver is also scaling its window by a shift factor of 1. Here's the simple script that compares BDP and SWND distributions, fixed to take account of window scaling.
        #!/usr/sbin/dtrace -s

        #pragma D option quiet

        tcp:::send
        / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
        {
                @bdp["BDP(bytes)", args[2]->ip_daddr, args[4]->tcp_sport] =
                    quantize(args[3]->tcps_snxt - args[3]->tcps_suna);
        }

        tcp:::receive
        / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
        {
                @swnd["SWND(bytes)", args[2]->ip_saddr, args[4]->tcp_dport] =
                    quantize((args[4]->tcp_window) * (1 << args[3]->tcps_snd_ws));
        }

    And here's the fixed output.

        # dtrace -s tcp_tput_scaled.d
        ^C
          BDP(bytes)                            10.175.96.92  80
                 value  ------------- Distribution ------------- count
                    -1 |                                         0
                     0 |                                         39
                     1 |                                         0
                     2 |                                         0
                     4 |                                         0
                     8 |                                         0
                    16 |                                         0
                    32 |                                         0
                    64 |                                         0
                   128 |                                         0
                   256 |                                         3
                   512 |                                         0
                  1024 |                                         0
                  2048 |                                         4
                  4096 |                                         9
                  8192 |                                         22
                 16384 |                                         37
                 32768 |@                                        99
                 65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@   3858
                131072 |                                         0

          SWND(bytes)                           10.175.96.92  80
                 value  ------------- Distribution ------------- count
                   512 |                                         0
                  1024 |                                         1
                  2048 |                                         0
                  4096 |                                         2
                  8192 |                                         4
                 16384 |                                         7
                 32768 |                                         14
                 65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  1956
                131072 |                                         0
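    To make the window/RTT arithmetic above concrete, here is a small Python sketch (illustrative only, not part of the original DTrace post) that reproduces the two calculations: the RTT-limited throughput ceiling, and the effective window once the Window Scale shift is applied. The 64K/0.5 sec figures come from the text; everything else is an assumption for the example.

        # Illustrative only: the window/RTT arithmetic described in the post.

        def rtt_limited_throughput(window_bytes, rtt_seconds):
            """Throughput ceiling when the peer's receive window is the bottleneck."""
            return window_bytes / rtt_seconds

        def effective_window(advertised_window, scale_shift):
            """Window actually usable once the TCP Window Scale option is applied."""
            return advertised_window << scale_shift

        # 64K window and 0.5 sec RTT -> 128K/s, as in the text.
        print(rtt_limited_throughput(64 * 1024, 0.5) / 1024, "KB/s")

        # An advertised SWND in the 32768-65535 bucket with a scale shift of 1
        # lands in the 65536-131071 bucket once scaled, matching the BDP values.
        print(effective_window(32768, 1), effective_window(65535, 1))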

    Read the article

  • Creating static NAT blocks outbound traffic Cisco ASA

    - by natediggs
    Hi Everyone, I have two web servers sitting behind a Cisco ASA 5505, which I don't have much experience with. I'm trying to create two static NATs. One static NAT that goes to xx.xx.xx.150 and another that goes to xx.xx.xx.151. I've created the static NAT for the .150 web server and it works FINE. Incoming and outgoing traffic work great. This is the staging web server. I now need to duplicate the setup for the production web server. So, I connect the webserver to the firewall, change the public IP address on one of the NICs reboot the server and I have outbound internet access. Then I run the command: static (inside,outside) xx.xx.xx.150 192.168.1.x which is successful. I then run the command: access-list acl-outside permit tcp any host xx.xx.xx.150 eq 80 Which is successful. I then try to browse the internet and I get nothing. I try to telnet in through port 80 and I get nothing (though I'm guessing because the response to the telnet request is being blocked). I've tried this with the production web server and then I tried it with another web server that is for internal testing and have the exact same problem. Both work fine until I run the static NAT rule and then no outbound internet access. I have a feeling that it's something simple that I'm missing, but my limited experience with this device is killing me. Below I've pasted the current configuration. I'm currently trying to get this to work on the .153 server which is the internal testing server. Once I can verify that works, I'll try it with production. : Saved : ASA Version 8.2(4) ! hostname QG domain-name XX.com enable password passwd names ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! interface Vlan1 nameif inside security-level 100 ip address 192.168.1.1 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address XX.XX.XX.148 255.255.255.0 ! interface Vlan3 shutdown no forward interface Vlan1 nameif dmz security-level 50 ip address dhcp ! 
boot system disk0:/asa824.bin ftp mode passive clock timezone EST -5 clock summer-time EDT recurring dns server-group DefaultDNS domain-name fw.XXgroup.com same-security-traffic permit inter-interface access-list acl-outside extended permit tcp any host XX.XX.XX.150 eq www access-list acl-outside extended permit tcp any host XX.XX.XX.150 eq https access-list acl-outside extended permit tcp any host XX.XX.XX.151 eq www access-list acl-outside extended permit tcp any host XX.XX.XX.151 eq https access-list acl-outside extended permit tcp any host XX.XX.XX.153 eq www access-list inside_access_in extended permit ip 192.168.1.0 255.255.255.0 any access-list inside_nat0_outbound extended permit ip any 192.168.1.32 255.255.255.240 pager lines 24 logging enable logging asdm informational mtu inside 1500 mtu outside 1500 mtu dmz 1500 ip local pool VPNIPs 192.168.1.35-192.168.1.44 mask 255.255.255.0 icmp unreachable rate-limit 1 burst-size 1 asdm image disk0:/asdm-635.bin no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_nat0_outbound nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) XX.XX.XX150 192.168.1.100 netmask 255.255.255.255 static (inside,outside) XX.XX.XX153 192.168.1.102 netmask 255.255.255.255 access-group acl-outside in interface outside route outside 0.0.0.0 0.0.0.0 XX.XX.XX129 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy aaa authorization command LOCAL http server enable http 192.168.1.0 255.255.255.0 inside http 0.0.0.0 0.0.0.0 outside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto dynamic-map outside_dyn_map 20 set pfs group1 crypto dynamic-map outside_dyn_map 20 set transform-set ESP-3DES-SHA crypto map outside_map 65535 ipsec-isakmp dynamic outside_dyn_map crypto map outside_map interface outside crypto isakmp enable outside crypto isakmp policy 10 authentication crack encryption 3des hash sha group 2 lifetime 86400 no crypto isakmp nat-traversal client-update enable telnet timeout 5 ssh timeout 5 console timeout 0 dhcpd auto_config outside ! dhcpd address 192.168.1.2-192.168.1.33 inside dhcpd dns 208.77.88.4 interface inside dhcpd enable inside ! 
threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept webvpn enable outside svc image disk0:/sslclient-win-1.1.0.154.pkg 1 svc image disk0:/anyconnect-win-2.5.2019-k9.pkg 2 svc enable group-policy ATSAdmin internal group-policy ATSAdmin attributes dns-server value 208.77.88.4 208.85.174.9 vpn-tunnel-protocol IPSec svc webvpn webvpn url-list none svc keep-installer installed svc rekey method ssl svc ask enable username qgadmin password /oHfeGQ/R.bd3KPR encrypted privilege 15 username benl password 0HNIGQNI0uruJvhW encrypted privilege 0 username benl attributes vpn-group-policy ATSAdmin username kuzma password rH7MM7laoynyvf9U encrypted privilege 0 username kuzma attributes vpn-group-policy ATSAdmin username nate password BXHOURyT37e4O5mt encrypted privilege 0 username nate attributes vpn-group-policy ATSAdmin tunnel-group ATSAdmin type remote-access tunnel-group ATSAdmin general-attributes address-pool VPNIPs default-group-policy ATSAdmin tunnel-group SSLVPN type remote-access tunnel-group SSLVPN general-attributes address-pool VPNIPs default-group-policy ATSAdmin ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect rsh inspect rtsp inspect esmtp inspect sqlnet inspect skinny inspect sunrpc inspect xdmcp inspect sip inspect netbios inspect tftp inspect ip-options ! service-policy global_policy global privilege cmd level 3 mode exec command perfmon privilege cmd level 3 mode exec command ping privilege cmd level 3 mode exec command who privilege cmd level 3 mode exec command logging privilege cmd level 3 mode exec command failover privilege show level 5 mode exec command running-config privilege show level 3 mode exec command reload privilege show level 3 mode exec command mode privilege show level 3 mode exec command firewall privilege show level 3 mode exec command interface privilege show level 3 mode exec command clock privilege show level 3 mode exec command dns-hosts privilege show level 3 mode exec command access-list privilege show level 3 mode exec command logging privilege show level 3 mode exec command ip privilege show level 3 mode exec command failover privilege show level 3 mode exec command asdm privilege show level 3 mode exec command arp privilege show level 3 mode exec command route privilege show level 3 mode exec command ospf privilege show level 3 mode exec command aaa-server privilege show level 3 mode exec command aaa privilege show level 3 mode exec command crypto privilege show level 3 mode exec command vpn-sessiondb privilege show level 3 mode exec command ssh privilege show level 3 mode exec command dhcpd privilege show level 3 mode exec command vpn privilege show level 3 mode exec command blocks privilege show level 3 mode exec command uauth privilege show level 3 mode configure command interface privilege show level 3 mode configure command clock privilege show level 3 mode configure command access-list privilege show level 3 mode configure command logging privilege show level 3 mode configure command ip privilege show level 3 mode configure command failover privilege show level 5 mode configure command asdm privilege show level 3 mode configure command arp privilege show level 3 mode configure command route privilege show level 3 mode configure command aaa-server privilege show 
level 3 mode configure command aaa privilege show level 3 mode configure command crypto privilege show level 3 mode configure command ssh privilege show level 3 mode configure command dhcpd privilege show level 5 mode configure command privilege privilege clear level 3 mode exec command dns-hosts privilege clear level 3 mode exec command logging privilege clear level 3 mode exec command arp privilege clear level 3 mode exec command aaa-server privilege clear level 3 mode exec command crypto privilege cmd level 3 mode configure command failover privilege clear level 3 mode configure command logging privilege clear level 3 mode configure command arp privilege clear level 3 mode configure command crypto privilege clear level 3 mode configure command aaa-server prompt hostname context call-home profile CiscoTAC-1 no active destination address http https://tools.cisco.com/its/service/oddce/services/DDCEService destination address email [email protected] destination transport-method http subscribe-to-alert-group diagnostic subscribe-to-alert-group environment subscribe-to-alert-group inventory periodic monthly subscribe-to-alert-group configuration periodic monthly subscribe-to-alert-group telemetry periodic daily Cryptochecksum:0ed0580e151af288d865f4f3603d792a : end asdm image disk0:/asdm-635.bin no asdm history enable

    Read the article

  • Optimized Image Loading in a UIScrollView

    - by Michael Gaylord
    I have a UIScrollView that has a set of images loaded side-by-side inside it. You can see an example of my app here: http://www.42restaurants.com. My problem comes in with memory usage. I want to lazy load the images as they are about to appear on the screen and unload images that aren't on screen. As you can see in the code I work out at a minimum which image I need to load and then assign the loading portion to an NSOperation and place it on an NSOperationQueue. Everything works great apart from a jerky scrolling experience. I don't know if anyone has any ideas as to how I can make this even more optimized, so that the loading time of each image is minimized or so that the scrolling is less jerky. - (void)scrollViewDidScroll:(UIScrollView *)scrollView{ [self manageThumbs]; } - (void) manageThumbs{ int centerIndex = [self centerThumbIndex]; if(lastCenterIndex == centerIndex){ return; } if(centerIndex >= totalThumbs){ return; } NSRange unloadRange; NSRange loadRange; int totalChange = lastCenterIndex - centerIndex; if(totalChange > 0){ //scrolling backwards loadRange.length = fabsf(totalChange); loadRange.location = centerIndex - 5; unloadRange.length = fabsf(totalChange); unloadRange.location = centerIndex + 6; }else if(totalChange < 0){ //scrolling forwards unloadRange.length = fabsf(totalChange); unloadRange.location = centerIndex - 6; loadRange.length = fabsf(totalChange); loadRange.location = centerIndex + 5; } [self unloadImages:unloadRange]; [self loadImages:loadRange]; lastCenterIndex = centerIndex; return; } - (void) unloadImages:(NSRange)range{ UIScrollView *scrollView = (UIScrollView *)[[self.view subviews] objectAtIndex:0]; for(int i = 0; i < range.length && range.location + i < [scrollView.subviews count]; i++){ UIView *subview = [scrollView.subviews objectAtIndex:(range.location + i)]; if(subview != nil && [subview isKindOfClass:[ThumbnailView class]]){ ThumbnailView *thumbView = (ThumbnailView *)subview; if(thumbView.loaded){ UnloadImageOperation *unloadOperation = [[UnloadImageOperation alloc] initWithOperableImage:thumbView]; [queue addOperation:unloadOperation]; [unloadOperation release]; } } } } - (void) loadImages:(NSRange)range{ UIScrollView *scrollView = (UIScrollView *)[[self.view subviews] objectAtIndex:0]; for(int i = 0; i < range.length && range.location + i < [scrollView.subviews count]; i++){ UIView *subview = [scrollView.subviews objectAtIndex:(range.location + i)]; if(subview != nil && [subview isKindOfClass:[ThumbnailView class]]){ ThumbnailView *thumbView = (ThumbnailView *)subview; if(!thumbView.loaded){ LoadImageOperation *loadOperation = [[LoadImageOperation alloc] initWithOperableImage:thumbView]; [queue addOperation:loadOperation]; [loadOperation release]; } } } } EDIT: Thanks for the really great responses. Here is my NSOperation code and ThumbnailView code. I tried a couple of things over the weekend but I only managed to improve performance by suspending the operation queue during scrolling and resuming it when scrolling is finished. 
Here are my code snippets: //In the init method queue = [[NSOperationQueue alloc] init]; [queue setMaxConcurrentOperationCount:4]; //In the thumbnail view the loadImage and unloadImage methods - (void) loadImage{ if(!loaded){ NSString *filename = [NSString stringWithFormat:@"%03d-cover-front", recipe.identifier, recipe.identifier]; NSString *directory = [NSString stringWithFormat:@"RestaurantContent/%03d", recipe.identifier]; NSString *path = [[NSBundle mainBundle] pathForResource:filename ofType:@"png" inDirectory:directory]; UIImage *image = [UIImage imageWithContentsOfFile:path]; imageView = [[ImageView alloc] initWithImage:image andFrame:CGRectMake(0.0f, 0.0f, 176.0f, 262.0f)]; [self addSubview:imageView]; [self sendSubviewToBack:imageView]; [imageView release]; loaded = YES; } } - (void) unloadImage{ if(loaded){ [imageView removeFromSuperview]; imageView = nil; loaded = NO; } } Then my load and unload operations: - (id) initWithOperableImage:(id<OperableImage>) anOperableImage{ self = [super init]; if (self != nil) { self.image = anOperableImage; } return self; } //This is the main method in the load image operation - (void)main { [image loadImage]; } //This is the main method in the unload image operation - (void)main { [image unloadImage]; }

    Read the article

  • MultiWidget in MultiWidget how to compress the first one?

    - by sacabuche
    I have two MultiWidgets, one inside the other, but the problem is that the contained MultiWidget doesn't return a compressed value. How do I get the right value from the inner widget, in this case from SplitTimeWidget?

        class SplitTimeWidget(forms.MultiWidget):
            """
            Widget written to split widget into hours and minutes.
            """
            def __init__(self, attrs=None):
                widgets = (
                    forms.Select(attrs=attrs, choices=([(hour, hour) for hour in range(0, 24)])),
                    forms.Select(attrs=attrs, choices=([(minute, str(minute).zfill(2)) for minute in range(0, 60)])),
                )
                super(SplitTimeWidget, self).__init__(widgets, attrs)

            def decompress(self, value):
                if value:
                    return [value.hour, value.minute]
                return [None, None]


        class DateTimeSelectWidget(forms.MultiWidget):
            """
            A widget that splits date into Date and Hours, minutes, seconds with selects
            """
            date_format = DateInput.format

            def __init__(self, attrs=None, date_format=None):
                if date_format:
                    self.date_format = date_format
                #if time_format:
                #    self.time_format = time_format
                hours = [(hour, str(hour) + ' h') for hour in range(0, 24)]
                minutes = [(minute, minute) for minute in range(0, 60)]
                seconds = minutes  # not used, always in 0s
                widgets = (
                    DateInput(attrs=attrs, format=self.date_format),
                    SplitTimeWidget(attrs=attrs),
                )
                super(DateTimeSelectWidget, self).__init__(widgets, attrs)

            def decompress(self, value):
                if value:
                    return [value.date(), value.time()]
                else:
                    [None, None, None]
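    For what it's worth, in Django the compress() step belongs to the form field (a MultiValueField), not to the widget - widgets only decompress() for display. Below is a hedged sketch of how the SplitTimeWidget above might be paired with such a field; SplitTimeField is a hypothetical name and the min/max bounds are assumptions, not something from the question.

        import datetime

        from django import forms


        class SplitTimeField(forms.MultiValueField):
            # Assumes the SplitTimeWidget class defined in the question.
            widget = SplitTimeWidget

            def __init__(self, *args, **kwargs):
                fields = (
                    forms.IntegerField(min_value=0, max_value=23),   # hours select
                    forms.IntegerField(min_value=0, max_value=59),   # minutes select
                )
                super(SplitTimeField, self).__init__(fields, *args, **kwargs)

            def compress(self, data_list):
                # data_list holds the cleaned values from the two selects.
                if data_list:
                    return datetime.time(int(data_list[0]), int(data_list[1]))
                return None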

    Read the article

  • Creating a function in ruby

    - by Micke
    Hello, I have a problem with creating a function for my Rails app. I want it to work like this:

        str = "string here"
        if str.within_range?(3..30)
          puts "It's within the range!"
        end

    To do that I added this into my application helper:

        def within_range?(range)
          if self.is_a?(String)
            (range).include?(self.size)
          elsif self.is_a?(Integer)
            (range).include?(self)
          end
        end

    But I get this error:

        undefined method `within_range?' for "":String

    Do you know what the problem is, and could you please help me? If there is an easier way, please say so. Thanks in advance, Micke.

    Read the article

  • How can I show figures separately in matplotlib?

    - by Federico Ramponi
    Say that I have two figures in matplotlib, with one plot per figure:

        import matplotlib.pyplot as plt

        f1 = plt.figure()
        plt.plot(range(0,10))
        f2 = plt.figure()
        plt.plot(range(10,20))

    Then I show both in one shot:

        plt.show()

    Is there a way to show them separately, i.e. to show just f1? Or better: how can I manage the figures separately, like in the following 'wishful' code (that doesn't work):

        f1 = plt.figure()
        f1.plot(range(0,10))
        f1.show()
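    A hedged sketch of one way to get per-figure control (illustrative, and backend-dependent): keep a handle on each figure's axes and use Figure.show(), which only works under an interactive GUI backend - in a plain script, plt.show() at the end is still the usual route.

        import matplotlib.pyplot as plt

        # Object-oriented variant: each figure owns its axes, so the plots no
        # longer depend on which figure happens to be "current".
        f1, ax1 = plt.subplots()
        ax1.plot(range(0, 10))

        f2, ax2 = plt.subplots()
        ax2.plot(range(10, 20))

        f1.show()   # pops up just the first figure (interactive backends only)
        # ... later ...
        f2.show()   # and the second one when it is wanted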

    Read the article

  • Problem with closing excel by c#

    - by phenevo
    Hi, I've got unit test with this code: Excel.Application objExcel = new Excel.Application(); Excel.Workbook objWorkbook = (Excel.Workbook)(objExcel.Workbooks._Open(@"D:\Selenium\wszystkieSeba2.xls", true, false, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value)); Excel.Worksheet ws = (Excel.Worksheet)objWorkbook.Sheets[1]; Excel.Range r = ws.get_Range("A1", "I2575"); DateTime dt = DateTime.Now; Excel.Range cellData = null; Excel.Range cellKwota = null; string cellValueData = null; string cellValueKwota = null; double dataTransakcji = 0; string dzien = null; string miesiac = null; int nrOperacji = 1; int wierszPoczatkowy = 11; int pozostalo = 526; cellData = r.Cells[wierszPoczatkowy, 1] as Excel.Range; cellKwota = r.Cells[wierszPoczatkowy, 6] as Excel.Range; if ((cellData != null) && (cellKwota != null)) { object valData = cellData.Value2; object valKwota = cellKwota.Value2; if ((valData != null) && (valKwota != null)) { cellValueData = valData.ToString(); dataTransakcji = Convert.ToDouble(cellValueData); Console.WriteLine("data transakcji to: " + dataTransakcji); dt = DateTime.FromOADate((double)dataTransakcji); dzien = dt.Day.ToString(); miesiac = dt.Month.ToString(); cellValueKwota = valKwota.ToString(); } } r.Cells[wierszPoczatkowy, 8] = "ok"; objWorkbook.Save(); objWorkbook.Close(true, @"C:\Documents and Settings\Administrator\Pulpit\Selenium\wszystkieSeba2.xls", true); objExcel.Quit(); Why after finish test I'm still having excel in process (it does'nt close) And : is there something I can improve to better perfomance ??

    Read the article

  • Encoding issue - 2nd band of ISO-8859-1 values do not get encoded?

    - by bstack
    Hello, I want to send the pound sign character, i.e. '£', encoded as ISO-8859-1 across the wire. I perform this by doing the following:

        var _encoding = Encoding.GetEncoding("iso-8859-1");
        var _requestContent = _encoding.GetBytes(requestContent);
        var _request = (HttpWebRequest)WebRequest.Create(target);
        _request.Headers[HttpRequestHeader.ContentEncoding] = _encoding.WebName;
        _request.Method = "POST";
        _request.ContentType = "application/x-www-form-urlencoded; charset=iso-8859-1";
        _request.ContentLength = _requestContent.Length;
        _requestStream = _request.GetRequestStream();
        _requestStream.Write(_requestContent, 0, _requestContent.Length);
        _requestStream.Flush();
        _requestStream.Close();

    When I put a breakpoint at the target, I expect to receive '%a3'; however, I receive '%u00a3' instead. ISO-8859-1 is divided into 2 groups of characters (ref: http://en.wikipedia.org/wiki/ISO_8859-1):

    The lower range 20 to 7E - where all characters seem to be encoded correctly.
    The higher range A0 to FF - where all characters seem to encode to their Unicode equivalent value.

    As '£' is in the higher range A0 to FF, it gets encoded to %u00a3. In fact, when I use the first few characters of the higher range A0 to FF, i.e. '¡¢£¤¥¦§¨©ª«¬®', I get '%u00a1%u00a2%u00a3%u00a4%u00a5%u00a6%u00a7%u00a8%u00a9%u00aa%u00ab%u00ac%u00ae'. This behaviour is consistent. The question I have is: why do characters in the higher range A0 to FF get encoded to their Unicode value and not to their equivalent ISO-8859-1 value? Help would be greatly appreciated... Billy
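    The '%u00a3' form suggests the value is being URL-encoded as a Unicode escape somewhere before (or after) the byte encoding, rather than percent-encoded from the ISO-8859-1 bytes. As a point of reference only (Python here, not the asker's C# stack), a correctly ISO-8859-1 form-encoded '£' is the single byte 0xA3 and comes out as '%A3':

        # Reference check: what '£' should look like under ISO-8859-1.
        from urllib.parse import quote, urlencode

        print('£'.encode('iso-8859-1'))                              # b'\xa3' -> one byte
        print(quote('£', encoding='iso-8859-1'))                     # '%A3'  -> expected on the wire
        print(urlencode({'amount': '£9.99'}, encoding='iso-8859-1')) # amount=%A39.99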

    Read the article

  • algorithm for python itertools.permutations

    - by zaharpopov
    Can someone please explain the algorithm for the itertools.permutations routine in the Python 2.6 standard library? I see its code in the documentation but don't understand why it works. Thanks. The code is:

        def permutations(iterable, r=None):
            # permutations('ABCD', 2) --> AB AC AD BA BC BD CA CB CD DA DB DC
            # permutations(range(3)) --> 012 021 102 120 201 210
            pool = tuple(iterable)
            n = len(pool)
            r = n if r is None else r
            if r > n:
                return
            indices = range(n)
            cycles = range(n, n-r, -1)
            yield tuple(pool[i] for i in indices[:r])
            while n:
                for i in reversed(range(r)):
                    cycles[i] -= 1
                    if cycles[i] == 0:
                        indices[i:] = indices[i+1:] + indices[i:i+1]
                        cycles[i] = n - i
                    else:
                        j = cycles[i]
                        indices[i], indices[-j] = indices[-j], indices[i]
                        yield tuple(pool[i] for i in indices[:r])
                        break
                else:
                    return
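    As a quick way to poke at it (run under the Python 2.6 the question targets, where the pure-Python version above works as written): the generator produces exactly what the C implementation produces, and cycles[i] is essentially a countdown of how many elements remain to be rotated into position i before that position resets and the position to its left advances.

        # Sanity check against the C implementation, plus a peek at the output order.
        # Assumes the permutations() generator quoted above is already defined.
        import itertools

        assert list(permutations('ABCD', 2)) == list(itertools.permutations('ABCD', 2))
        assert list(permutations(range(3))) == list(itertools.permutations(range(3)))

        print(list(permutations('ABC', 2)))
        # [('A', 'B'), ('A', 'C'), ('B', 'A'), ('B', 'C'), ('C', 'A'), ('C', 'B')]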

    Read the article

  • integer Max value constants in SQL Server T-SQL?

    - by AaronLS
    Are there any constants in T-SQL, like there are in some other languages, that provide the max and min values of data types such as int? I have a code table where each row has an upper and lower range column, and I need an entry that represents a range where the upper bound is the maximum value an int can hold (sort of like a hackish infinity). I would prefer not to hard-code it and instead use something like:

        SET UpperRange = int.Max

    Read the article

  • Common Pitfalls in Python

    - by Anurag Uniyal
    Today I was bitten again by "mutable default arguments" after many years. I usually don't use mutable default arguments unless needed, but I think with time I forgot about that, and today in the application I added tocElements=[] to a PDF generation function's argument list - and now 'Table of Contents' gets longer and longer after each invocation of "generate pdf" :)

    My question is: what other things should I add to my list of things I MUST avoid?

    Mutable default arguments.

    Importing modules always the same way, e.g. from y import x and import x are different things; they are treated as different modules.

    Do not use range in place of lists, because range() will become an iterator anyway and the following will fail:

        myIndexList = [0,1,3]
        isListSorted = myIndexList == range(3)        # will fail in 3.0
        isListSorted = myIndexList == list(range(3))  # will not

    The same thing can be mistakenly done with xrange: myIndexList == xrange(3).

    Catching multiple exceptions:

        try:
            raise KeyError("hmm bug")
        except KeyError, TypeError:
            print TypeError

    It prints "hmm bug", though it is not a bug; it looks like we are catching exceptions of type KeyError and TypeError, but instead we are catching KeyError only, bound to the variable TypeError. Use this instead:

        try:
            raise KeyError("hmm bug")
        except (KeyError, TypeError):
            print TypeError
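    The first item on the list is worth seeing in isolation; here is a minimal illustration (hypothetical function name, not the asker's PDF code) of why a tocElements=[] style default keeps growing, together with the usual None-default fix:

        # The mutable-default pitfall: the list is created once, when the def
        # statement runs, and is then shared by every call that uses the default.
        def add_entry_bad(entry, toc=[]):
            toc.append(entry)
            return toc

        print(add_entry_bad('Chapter 1'))   # ['Chapter 1']
        print(add_entry_bad('Chapter 1'))   # ['Chapter 1', 'Chapter 1']  <- keeps growing

        # The usual fix: default to None and build a fresh list per call.
        def add_entry_good(entry, toc=None):
            if toc is None:
                toc = []
            toc.append(entry)
            return toc

        print(add_entry_good('Chapter 1'))  # ['Chapter 1']
        print(add_entry_good('Chapter 1'))  # ['Chapter 1']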

    Read the article

  • Excel VBA Select Case Loop Sub

    - by Zack
    In my Excel file, I have a table set up with formulas, with cells from Range("B2:B12"), Range("D2:D12"), etc. - every other row containing the answers to these formulas. For these cells (with the formula answers), I need to apply conditional formatting, but I have 7 conditions, so I've been using "Select Case" in VBA to change their interior background based on their number. I have the Select Case function currently set up within the sheet code, as opposed to its own macro:

        Private Sub Worksheet_Change(ByVal Target As Range)
            Dim iColor As Integer
            If Not Intersect(Target, Range("B2:L12")) Is Nothing Then
                Select Case Target
                    Case 0
                        iColor = 2
                    Case 0.01 To 0.49
                        iColor = 36
                    Case 0.5 To 0.99
                        iColor = 6
                    Case 1 To 1.99
                        iColor = 44
                    Case 2 To 2.49
                        iColor = 45
                    Case 2.5 To 2.99
                        iColor = 46
                    Case 3 To 5
                        iColor = 3
                End Select
                Target.Interior.ColorIndex = iColor
            End If
        End Sub

    But using this method, you must actually be entering the value into the cell for the formatting to work, which is why I want to write a subroutine to do this as a macro. I can input my data, let the formulas work, and when everything is ready, I can run the macro and format those specific cells. I want an easy way to do this; obviously I could waste a load of time typing out all the cases for every cell, but I figured it'd be easier with a loop. How would I go about writing a Select Case loop to change the formatting on a specific range of cells every other row? Thank you in advance.

    Read the article

  • implementing a download manager that supports resuming

    - by Idan K
    hi, I intend on writing a small download manager in C++ that supports resuming (and multiple connections per download). From the info I gathered so far, when sending the http request I need to add a header field with a key of "Range" and the value "bytes=startoff-endoff". Then the server returns a http response with the data between those offsets. So roughly what I have in mind is to split the file to the number of allowed connections per file and send a http request per splitted part with the appropriate "Range". So if I have a 4mb file and 4 allowed connections, I'd split the file to 4 and have 4 http requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets are already downloaded and simply not request those. Is this the right way to do this? What if the web server doesn't support resuming? (my guess is it will ignore the "Range" and just send the entire file) When sending the http requests, should I specify in the range the entire splitted size? Or maybe ask smaller pieces, say 1024k per request? When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks. Should I use a memory mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it memory wise? What if I have several downloads simultaneously? If I'm not using a memory mapped file, should I open the file per allowed connection? Or when needing to write to the file simply seek? (if I did use a memory mapped file this would be really easy, since I could simply have several pointers). Note: I'll probably be using Qt, but this is a general question so I left code out of it.
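    Since the question is about the protocol rather than any particular library, here is a rough Python prototype (illustrative only - not the asker's eventual C++/Qt code; the URL, chunk size, and file handling are assumptions) of one ranged segment, including the fallback for servers that ignore Range and answer 200 with the whole body:

        # One segment of a ranged download. A 206 means the server honoured the
        # Range header; a 200 means it ignored it and sent the entire file.
        import requests

        def fetch_segment(url, start, end, path):
            headers = {'Range': 'bytes=%d-%d' % (start, end)}
            resp = requests.get(url, headers=headers, stream=True, timeout=30)
            if resp.status_code == 206:        # partial content: write at the offset
                mode, offset = 'r+b', start    # assumes the file was pre-created
            elif resp.status_code == 200:      # range ignored: whole file came back
                mode, offset = 'wb', 0
            else:
                resp.raise_for_status()
                return
            with open(path, mode) as f:
                f.seek(offset)
                for chunk in resp.iter_content(chunk_size=64 * 1024):
                    f.write(chunk)

        # fetch_segment('http://example.com/big.bin', 0, 1024 * 1024 - 1, 'big.bin')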

    Read the article

  • NHibernate criteria construction

    - by brianberns
    I am trying to recreate something like the following SQL using NHibernate criteria: select Range, count(*) from ( select case when ent.ID between 'A' and 'N' then 'A-M' else 'Other' end as Range from Subject ) tbl group by tbl.Range I am able to create the inner select as follows: session.CreateCriteria<Subject>() .SetProjection( Projections.Conditional( Expression.Between("Name", "A", "N"), Projections.Constant("A-N"), Projections.Constant("Other"))) .List(); However, I can't figure out how to pipe those results into a grouping by row count. Any suggestions? Thanks. -- Brian

    Read the article

  • Excel Data List Validation in Powershell

    - by idazuwaika
    Hi, I have a range named "STATE". I want to set data validation on Range("A1") to only take values within this range, using PowerShell. Below is what I have tried; it does not work. I don't know what to put as the 4th and 5th parameters. The first 3 are the Excel constants equivalent to xlValidateList, xlValidAlertStop and xlBetween respectively.

        $ws.Range("A1").Validation.Add(3, 1, 1, "=STATE", 0)

    Please help. Thanks.

    Read the article

  • I am trying to use user-defined functions to print out an inputted letter out of stars, but i need h

    - by lm
        def horizline(col):
            for col in range (col):
                print("*", end='')
            print()

        def vertline(rows, col):
            for rows in range (rows-2):
                print ("*", end='')
                for col in range (col-2):
                    print(' ', end='')
                print("*")

        def functionA(width):
            horizline(width)
            vereline(width)
            horizline(width)
            vertline(width)
            print()

        #def funtionB(width):
        #def functionC(width):
        #def functionE(width):

        def main():
            width=int(input("Please enter a width for the letter: "))
            lenght=int(input("Please enter a lenght for the letter: "))
            letter=input("Enter one of the capital letters: A,B,C,E ")
            if(width>=5 and width<=20):
                functionA
                functionB(width,length)
                functionC(width,length)
                functionE(width,length)
            else:
                print("You have entered an incorrect value")

        main()
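    A few things jump out from the code as posted: vereline is a typo for vertline, vertline needs both its row and column arguments, functionA is named but never actually called (no parentheses), lenght is read while length is used, and functionB/C/E don't exist yet. Here is a hedged sketch of just the skeleton with those mechanical problems fixed - the real letter shapes for A, B, C and E are still left to write, as in the original:

        # Corrected skeleton only: draws a hollow width x length box of stars.
        # The per-letter drawing logic (functionB/C/E, and a real 'A') is assumed
        # to be filled in later, as in the question.
        def horizline(width):
            print("*" * width)

        def vertline(length, width):
            for _ in range(length - 2):
                print("*" + " " * (width - 2) + "*")

        def functionA(width, length):
            horizline(width)            # top edge
            vertline(length, width)     # hollow middle
            horizline(width)            # bottom edge

        def main():
            width = int(input("Please enter a width for the letter: "))
            length = int(input("Please enter a length for the letter: "))
            letter = input("Enter one of the capital letters: A,B,C,E ")
            if 5 <= width <= 20 and letter == "A":
                functionA(width, length)
            else:
                print("You have entered an incorrect value")

        main()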

    Read the article

  • Calling an Excel Add-In method from C# application or vice versa

    - by Jude
    I have an Excel VBA add-in with a public method in a bas file. This method currently creates a VB6 COM object, which exists in a running VB6 exe/vbp. The VB6 app loads in data and then the Excel add-in method can call methods on the VB6 COM object to load the data into an existing Excel xls. This is all currently working. We have since converted our VB6 app to C#. My question is: What is the best/easiest way to mimic this behavior with the C#/.NET app? I'm thinking I may not be able to pull the data from the .NET app into Excel from the add-in method since the .Net app needs to be running with data loaded (so no using a stand-alone C# class library). Maybe we can, instead, push the data from .NET to Excel by accessing the VBA add-in method from the C# code? The following is the existing VBA method accessing the VB6 app: Public Sub UpdateInDataFromApp() Dim wkbInData As Workbook Dim oFPW As Object Dim nMaxCols As Integer Dim nMaxRows As Integer Dim j As Integer Dim sName As String Dim nCol As Integer Dim nRow As Integer Dim sheetCnt As Integer Dim nDepth As Integer Dim sPath As String Dim vData As Variant Dim SheetRange As Range Set wkbInData = wkbOpen("InData.xls") sPath = g_sPathXLSfiles & "\" 'Note: the following will bring up fpw app if not already running Set oFPW = CreateObject("FPW.CProfilesData") If oFPW Is Nothing Then MsgBox "Unable to reference " & sApp Else . . . sheetCnt = wkbInData.Sheets.Count 'get number of sheets in indata workbook For j = 2 To sheetCnt 'set counter to loop over all sheets except the first one which is not input data fields With wkbInData.Worksheets(j) Set SheetRange = .UsedRange End With With SheetRange nMaxRows = .Rows.Count 'get range of sheet(j) nMaxCols = .Columns.Count 'get range of sheet(j) Range(.Cells(2, 2), .Cells(nMaxRows, nMaxCols)).ClearContents 'Clears data from data range (51 Columns) Range(.Cells(2, 2), .Cells(nMaxRows, nMaxCols)).ClearComments End With With oFPW 'vb6 object For nRow = 2 To nMaxRows ' loop through rows sName = SheetRange.Cells(nRow, 1) 'Field name vData = .vntGetSymbol(sName, 0) 'Check if vb6 app identifies the name nDepth = .GetInputTableDepth(sName) 'Get number of data items for this field name from vb6 app nMaxCols = nDepth + 2 'nDepth=0, is single data item For nCol = 2 To nMaxCols 'loop over deep screen fields nDepth = nCol - 2 'current depth vData = .vntGetSymbol(sName, nDepth) 'Get Data from vb6 app If LenB(vData) > 0 And IsNumeric(vData) Then 'Check if data returned SheetRange.Cells(nRow, nCol) = vData 'Poke the data in Else SheetRange.Cells(nRow, nCol) = vData 'Poke a zero in End If Next 'nCol Next 'nRow End With Set SheetRange = Nothing Next 'j End If Set wkbInData = Nothing Set oFPW = Nothing Exit Sub . . . End Sub Any help would be appreciated.

    Read the article

  • Comparing Ranges in Excel

    - by mcoolbeth
    In the Excel Interop libraries, is there functionality to determine whether a given Range object is contained within another Range object? It would be simple enough for me to compare the row and column indices of each Range, but things become more complicated when you want to compare two ranges that may be on different worksheets.

    Read the article

  • python- scipy optimization

    - by pear
    In scipy fmin_slsqp (Sequential Least Squares Quadratic Programming), I tried reading the code 'slsqp.py' provided with the scipy package, to find what are the criteria to get the exit_modes 0? I cannot find which statements in the code produce this exit mode? Please help me 'slsqp.py' code as follows, exit_modes = { -1 : "Gradient evaluation required (g & a)", 0 : "Optimization terminated successfully.", 1 : "Function evaluation required (f & c)", 2 : "More equality constraints than independent variables", 3 : "More than 3*n iterations in LSQ subproblem", 4 : "Inequality constraints incompatible", 5 : "Singular matrix E in LSQ subproblem", 6 : "Singular matrix C in LSQ subproblem", 7 : "Rank-deficient equality constraint subproblem HFTI", 8 : "Positive directional derivative for linesearch", 9 : "Iteration limit exceeded" } def fmin_slsqp( func, x0 , eqcons=[], f_eqcons=None, ieqcons=[], f_ieqcons=None, bounds = [], fprime = None, fprime_eqcons=None, fprime_ieqcons=None, args = (), iter = 100, acc = 1.0E-6, iprint = 1, full_output = 0, epsilon = _epsilon ): # Now do a lot of function wrapping # Wrap func feval, func = wrap_function(func, args) # Wrap fprime, if provided, or approx_fprime if not if fprime: geval, fprime = wrap_function(fprime,args) else: geval, fprime = wrap_function(approx_fprime,(func,epsilon)) if f_eqcons: # Equality constraints provided via f_eqcons ceval, f_eqcons = wrap_function(f_eqcons,args) if fprime_eqcons: # Wrap fprime_eqcons geval, fprime_eqcons = wrap_function(fprime_eqcons,args) else: # Wrap approx_jacobian geval, fprime_eqcons = wrap_function(approx_jacobian, (f_eqcons,epsilon)) else: # Equality constraints provided via eqcons[] eqcons_prime = [] for i in range(len(eqcons)): eqcons_prime.append(None) if eqcons[i]: # Wrap eqcons and eqcons_prime ceval, eqcons[i] = wrap_function(eqcons[i],args) geval, eqcons_prime[i] = wrap_function(approx_fprime, (eqcons[i],epsilon)) if f_ieqcons: # Inequality constraints provided via f_ieqcons ceval, f_ieqcons = wrap_function(f_ieqcons,args) if fprime_ieqcons: # Wrap fprime_ieqcons geval, fprime_ieqcons = wrap_function(fprime_ieqcons,args) else: # Wrap approx_jacobian geval, fprime_ieqcons = wrap_function(approx_jacobian, (f_ieqcons,epsilon)) else: # Inequality constraints provided via ieqcons[] ieqcons_prime = [] for i in range(len(ieqcons)): ieqcons_prime.append(None) if ieqcons[i]: # Wrap ieqcons and ieqcons_prime ceval, ieqcons[i] = wrap_function(ieqcons[i],args) geval, ieqcons_prime[i] = wrap_function(approx_fprime, (ieqcons[i],epsilon)) # Transform x0 into an array. 
x = asfarray(x0).flatten() # Set the parameters that SLSQP will need # meq = The number of equality constraints if f_eqcons: meq = len(f_eqcons(x)) else: meq = len(eqcons) if f_ieqcons: mieq = len(f_ieqcons(x)) else: mieq = len(ieqcons) # m = The total number of constraints m = meq + mieq # la = The number of constraints, or 1 if there are no constraints la = array([1,m]).max() # n = The number of independent variables n = len(x) # Define the workspaces for SLSQP n1 = n+1 mineq = m - meq + n1 + n1 len_w = (3*n1+m)*(n1+1)+(n1-meq+1)*(mineq+2) + 2*mineq+(n1+mineq)*(n1-meq) \ + 2*meq + n1 +(n+1)*n/2 + 2*m + 3*n + 3*n1 + 1 len_jw = mineq w = zeros(len_w) jw = zeros(len_jw) # Decompose bounds into xl and xu if len(bounds) == 0: bounds = [(-1.0E12, 1.0E12) for i in range(n)] elif len(bounds) != n: raise IndexError, \ 'SLSQP Error: If bounds is specified, len(bounds) == len(x0)' else: for i in range(len(bounds)): if bounds[i][0] > bounds[i][1]: raise ValueError, \ 'SLSQP Error: lb > ub in bounds[' + str(i) +'] ' + str(bounds[4]) xl = array( [ b[0] for b in bounds ] ) xu = array( [ b[1] for b in bounds ] ) # Initialize the iteration counter and the mode value mode = array(0,int) acc = array(acc,float) majiter = array(iter,int) majiter_prev = 0 # Print the header if iprint >= 2 if iprint >= 2: print "%5s %5s %16s %16s" % ("NIT","FC","OBJFUN","GNORM") while 1: if mode == 0 or mode == 1: # objective and constraint evaluation requird # Compute objective function fx = func(x) # Compute the constraints if f_eqcons: c_eq = f_eqcons(x) else: c_eq = array([ eqcons[i](x) for i in range(meq) ]) if f_ieqcons: c_ieq = f_ieqcons(x) else: c_ieq = array([ ieqcons[i](x) for i in range(len(ieqcons)) ]) # Now combine c_eq and c_ieq into a single matrix if m == 0: # no constraints c = zeros([la]) else: # constraints exist if meq > 0 and mieq == 0: # only equality constraints c = c_eq if meq == 0 and mieq > 0: # only inequality constraints c = c_ieq if meq > 0 and mieq > 0: # both equality and inequality constraints exist c = append(c_eq, c_ieq) if mode == 0 or mode == -1: # gradient evaluation required # Compute the derivatives of the objective function # For some reason SLSQP wants g dimensioned to n+1 g = append(fprime(x),0.0) # Compute the normals of the constraints if fprime_eqcons: a_eq = fprime_eqcons(x) else: a_eq = zeros([meq,n]) for i in range(meq): a_eq[i] = eqcons_prime[i](x) if fprime_ieqcons: a_ieq = fprime_ieqcons(x) else: a_ieq = zeros([mieq,n]) for i in range(mieq): a_ieq[i] = ieqcons_prime[i](x) # Now combine a_eq and a_ieq into a single a matrix if m == 0: # no constraints a = zeros([la,n]) elif meq > 0 and mieq == 0: # only equality constraints a = a_eq elif meq == 0 and mieq > 0: # only inequality constraints a = a_ieq elif meq > 0 and mieq > 0: # both equality and inequality constraints exist a = vstack((a_eq,a_ieq)) a = concatenate((a,zeros([la,1])),1) # Call SLSQP slsqp(m, meq, x, xl, xu, fx, c, g, a, acc, majiter, mode, w, jw) # Print the status of the current iterate if iprint > 2 and the # major iteration has incremented if iprint >= 2 and majiter > majiter_prev: print "%5i %5i % 16.6E % 16.6E" % (majiter,feval[0], fx,linalg.norm(g)) # If exit mode is not -1 or 1, slsqp has completed if abs(mode) != 1: break majiter_prev = int(majiter) # Optimization loop complete. 
Print status if requested if iprint >= 1: print exit_modes[int(mode)] + " (Exit mode " + str(mode) + ')' print " Current function value:", fx print " Iterations:", majiter print " Function evaluations:", feval[0] print " Gradient evaluations:", geval[0] if not full_output: return x else: return [list(x), float(fx), int(majiter), int(mode), exit_modes[int(mode)] ]
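    For what it's worth, in the wrapper above mode starts at array(0,int), is passed into the compiled slsqp(...) call on every pass and comes back changed; the Python code keeps looping only while abs(mode) == 1 (the "evaluation required" modes) and finally maps the value through exit_modes. So mode 0 is produced inside the underlying SLSQP routine, not by any Python statement in slsqp.py. A quick illustrative way to see the exit mode for a toy problem (the objective and constraint here are made up, not from the question):

        # Toy run of fmin_slsqp; with full_output=1 the 4th element of the result
        # is the integer exit mode and the 5th is its exit_modes description.
        from scipy.optimize import fmin_slsqp

        def objective(x):
            return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

        result = fmin_slsqp(objective,
                            x0=[2.0, 0.0],
                            ieqcons=[lambda x: x[0] - 0.5],   # enforce x[0] >= 0.5
                            bounds=[(0.0, 10.0), (0.0, 10.0)],
                            iprint=0,
                            full_output=1)

        x_opt, fx, its, imode, smode = result
        print(imode, smode)   # typically: 0 Optimization terminated successfully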

    Read the article

  • number of days in a period that fall within another period

    - by thomas
    I have 2 independent but contiguous date ranges. The first range is the start and end date for a project. Lets say start = 3/21/10 and end = 5/16/10. The second range is a month boundary (say 3/1/10 to 3/31/10, 4/1/10 to 4/30/10, etc.) I need to figure out how many days in each month fall into the first range. The answer to my example above is March = 10, April = 30, May = 16. I am trying to figure out an excel formula or VBA function that will give me this value. Any thoughts on an algorithm for this? I feel it should be rather easy but I can't seem to figure it out. I have a formula which will return TRUE/FALSE if ANY part of the month range is within the project start/end but not the number of days. That function is below. return month_start <= project_end And month_end >= project_start
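    The counting itself is just the intersection of two date ranges: clamp the month to the project window and count the days that remain, flooring at zero. Sketched here in Python only to pin down the arithmetic (the question wants an Excel formula or VBA); note that the +1 counts both endpoints, which gives 11 for March in the example - drop it on one side if the start day is not meant to count (the question's March = 10).

        # Days of each month that fall inside the project window.
        from datetime import date

        def days_in_common(month_start, month_end, project_start, project_end):
            start = max(month_start, project_start)
            end = min(month_end, project_end)
            return max(0, (end - start).days + 1)   # both endpoints counted

        project_start, project_end = date(2010, 3, 21), date(2010, 5, 16)
        months = [(date(2010, 3, 1), date(2010, 3, 31)),
                  (date(2010, 4, 1), date(2010, 4, 30)),
                  (date(2010, 5, 1), date(2010, 5, 31))]
        for m_start, m_end in months:
            print(m_start.strftime('%B'),
                  days_in_common(m_start, m_end, project_start, project_end))
        # March 11, April 30, May 16 (with both endpoints counted)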

    Read the article

  • How to catch a carp-warning?

    - by sid_com
    I tried to catch a carp-warning ( carp "$start is $end" if (warnings::enabled()); ) with eval but it didn't work, so I looked in the eval-documentation and I discovered, that eval catches only syntax-errors, run-time-errors or executed die-statements. How could I catch a carp-warning? #!/usr/bin/env perl use warnings; use strict; use 5.012; use List::Util qw(max min); use Number::Range; my @array; my $max = 20; print "Input (max $max): "; my $in = <>; $in =~ s/\s+//g; $in =~ s/(?<=\d)-/../g; eval { my $range = new Number::Range( $in ); @array = sort { $a <=> $b } $range->range; }; if ( $@ =~ /\d+ is > \d+/ ) { die $@ }; # catch the carp-warning doesn't work die "Input greater than $max not allowed $!" if defined $max and max( @array ) > $max; die "Input '0' or less not allowed $!" if min( @array ) < 1; say "@array";

    Read the article

  • Awk filtering values between two files when regions intersect (any solutions welcome)

    - by user964689
    This is building upon an earlier question, Awk conditional filter one file based on another (or other solutions). I have an awk program that outputs a column from rows in a text file 'refGene.txt' if values in that row match 2 out of 3 values in another text file. I need to include an additional criterion for finding a match between the two files: a row is included if the range of the 2 numerical values specified in each row of file 1 overlaps with the range of the two values in a row of refGene.txt.

    An example of lines in file 1:

        chr1 10 20
        chr2 10 20

    and an example line in file 2 (refGene.txt), showing the matching columns ($3, $5, $6):

        chr1 5 30

    Currently the awk program does not treat this as a match because, although the first column matches, neither the 2nd nor the 3rd column does. But I would like a way to treat this as a match, because the region 10-20 in file 1 is WITHIN the range 5-30 in refGene.txt. However, the second line in file 1 should NOT match, because the first column does not match, which is necessary. If there is a way to include cases where any of the ranges in file 1 overlap with any of the ranges in refGene.txt, that would be really helpful. It should also replace the conditional statements below, as it would also find all the cases currently described. Please let me know if my question is unclear. Any help is really appreciated, thanks in advance! (Solutions do not have to be in awk.) Rubal

        FILES=/files/*txt
        for f in $FILES ; do
            awk '
            BEGIN { FS = "\t"; }
            FILENAME == ARGV[1] { pair[ $1, $2, $3 ] = 1; next; }
            { if ( pair[ $3, $5, $6 ] == 1 ) { print $13; } }
            ' $(basename $f) /files/refGene.txt > /files/results/$(basename $f) ;
        done
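    Since non-awk answers are welcome: the rule being asked for is "same chromosome, and the two intervals intersect", and two intervals [s1,e1] and [s2,e2] intersect exactly when s1 <= e2 and s2 <= e1. Below is a rough Python sketch under the column layout used above (file 1: chrom/start/end in columns 1-3; refGene.txt: chrom/start/end in columns 3, 5, 6 and the printed field in column 13); the column numbers come from the awk program, the rest is illustrative.

        # Print refGene column 13 whenever its interval overlaps any file-1
        # interval on the same chromosome.
        import sys
        from collections import defaultdict

        def load_intervals(path):
            by_chrom = defaultdict(list)
            with open(path) as fh:
                for line in fh:
                    f = line.rstrip('\n').split('\t')
                    by_chrom[f[0]].append((int(f[1]), int(f[2])))
            return by_chrom

        def overlaps(a_start, a_end, b_start, b_end):
            return a_start <= b_end and b_start <= a_end

        def main(file1, refgene):
            wanted = load_intervals(file1)
            with open(refgene) as fh:
                for line in fh:
                    f = line.rstrip('\n').split('\t')
                    chrom, start, end = f[2], int(f[4]), int(f[5])   # columns 3, 5, 6
                    if any(overlaps(start, end, s, e) for s, e in wanted.get(chrom, ())):
                        print(f[12])                                 # column 13

        if __name__ == '__main__':
            main(sys.argv[1], sys.argv[2])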

    Read the article
