Search Results

Search found 9446 results on 378 pages for 'ssh keys'.

Page 366 of 378

  • ASA 5505 Vlan question

    - by Wayne
    I am setting up a Cisco ASA 5505 with the base license. I can communicate from inside-outside, outside-inside, and inside-home, which is my desired traffic security. I can get http, ssh, and other access from inside-home, but I can't ping from inside-home (192.168.110.0 host to 192.168.7.1 or 192.168.7.0 host). Can someone explain? My config is listed below:

        interface Vlan1
         nameif inside
         security-level 100
         ip address 192.168.110.254 255.255.255.0
        !
        interface Vlan2
         nameif outside
         security-level 0
         pppoe client vpdn group birdie
         ip address removedIP 255.255.255.255 pppoe
        !
        interface Vlan3
         no forward interface Vlan1
         nameif home
         security-level 50
         ip address 192.168.7.1 255.255.255.0
        !
        interface Ethernet0/0
         switchport access vlan 2
        !
        interface Ethernet0/1
        !
        interface Ethernet0/2
        !
        interface Ethernet0/3
        !
        interface Ethernet0/4
         switchport access vlan 3
        !
        interface Ethernet0/5
         shutdown
        !
        interface Ethernet0/6
         shutdown
        !
        interface Ethernet0/7
         shutdown
        !
        ftp mode passive
        clock timezone EST -5
        clock summer-time EDT recurring
        access-list Outside-In extended permit icmp any any
        access-list Outside-In extended permit tcp any any eq www
        access-list Outside-In extended permit tcp any any eq https
        access-list Outside-In extended permit tcp any any eq 5969
        access-list inside_nat0_outbound extended permit ip any 192.168.111.0 255.255.255.224
        access-list standardUser_splitTunnelAcl1 extended permit ip 192.168.111.0 255.255.255.0 any
        access-list standardUser_splitTunnelAcl1 extended permit ip 192.168.110.0 255.255.255.0 any
        access-list inside_in extended permit icmp any any
        access-list inside_in extended permit ip any any
        access-list home_in extended permit icmp any any
        access-list home_in extended permit ip any any
        pager lines 24
        logging enable
        logging asdm informational
        mtu inside 1492
        mtu outside 1492
        mtu home 1500
        ip local pool vpnuser 192.168.111.5-192.168.111.20
        icmp unreachable rate-limit 1 burst-size 1
        asdm image disk0:/asdm-524.bin
        no asdm history enable
        arp timeout 14400
        nat-control
        global (outside) 1 interface
        nat (inside) 0 access-list inside_nat0_outbound
        nat (inside) 1 0.0.0.0 0.0.0.0
        nat (home) 1 192.168.7.0 255.255.255.0
        static (inside,outside) tcp interface https 192.168.110.6 https netmask 255.255.255.255
        static (inside,outside) tcp interface www 192.168.110.6 www netmask 255.255.255.255
        static (inside,outside) tcp interface 5969 192.168.110.12 5969 netmask 255.255.255.255
        static (inside,home) 192.168.110.0 192.168.110.0 netmask 255.255.255.0
        access-group inside_in in interface inside
        access-group Outside-In in interface outside
        access-group home_in in interface home
        route outside 0.0.0.0 0.0.0.0 RemovedIP 1
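
    Editor's note, not from the original post: on ASA software of this vintage, pings between interfaces often fail unless ICMP inspection is enabled, because echo replies are not tracked statefully by default. A minimal sketch of that change, assuming the default global policy and class names:

        policy-map global_policy
         class inspection_default
          inspect icmp

    With icmp inspection on, return echo replies are handled like other stateful return traffic, which is usually the first thing checked when TCP works between interfaces but ping does not.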

    Read the article

  • Yii, Generate unique ids on each tr element of CGridView

    - by Snow_Mac
    I have a CDbActiveRecord setup and I have a instance of the CGridView class setup as a widget. Basically my end game is I need a table, but each row to contain the primary key of the row associated with the Active Record. Such as: <tr id="123"> <td> Column value 1 </td> <td> Col 2 </td> <td> Col 3 </td> </tr> That's the specific of the row that I'm looking for. Here's the code I've got so far to produce a table. (The json variable is set because this is inside a controller and the widget is returned as json.) // get the content id for the version list $contentID_v = Yii::app()->request->getParam("id"); // setup the criteria to fetch related items $versionCdbCriteria = new CDbCriteria; $versionCdbCriteria->compare("contentID",$contentID_v); // setting up the active data provider for the version $vActiveDP = new CActiveDataProvider("FactsheetcontentVersion", array( "criteria" => $versionCdbCriteria, 'pagination' => array('PageSize' => $this->paginationSize), 'keyAttribute'=>'vID', )); $json_data .= $this->widget('zii.widgets.grid.CGridView', array( 'dataProvider' => $vActiveDP, 'columns' => array( 'title', 'created', 'createdBy' ), 'showTableOnEmpty' => 'false', ),true); This is what it produces for my active record. <div class="grid-view" id="yw0"> <div class="summary">Displaying 1-1 of 1 result(s).</div> <table class="items"><thead> <tr><th id="yw0_c0">Factsheettitle</th> <th id="yw0_c1"><a href="jq/work/admin/index.php?r=factsheetManager/Editor &amp;id=25601&amp;getV=true&amp;_=1341694154760&amp;FactsheetcontentVersion_sort=created">Created</a> </th> <th id="yw0_c2"><a href="jq/work/admin/index.php?r=factsheetManager/ Editor&amp;id=25601&amp;getV=true&amp;_=1341694154760&amp;FactsheetcontentVersion_sort=createdBy">Created By</a> </th> </tr></thead> <tbody><tr class="odd"><td>Distribution</td><td>0000-00-00 00:00:00</td><td>NULL</td></tr></tbody> </table> <div title="jq/work/admin/index.php?r=factsheetManager/Editor&amp;id=12&amp;id=25601&amp;getV=true&amp;_=1341694154760" style="display:none" class="keys"><span>8</span></div> </div>
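
    One possible approach, not taken from the post: newer Yii 1.1 releases expose a rowHtmlOptionsExpression property on CGridView that is evaluated once per row, so the primary key can be written into each tr's id attribute. A rough sketch, assuming vID is the key attribute as in the data provider above:

        $json_data .= $this->widget('zii.widgets.grid.CGridView', array(
            'dataProvider' => $vActiveDP,
            // evaluated for every row; $data is the current FactsheetcontentVersion model
            'rowHtmlOptionsExpression' => 'array("id" => $data->vID)',
            'columns' => array('title', 'created', 'createdBy'),
            'showTableOnEmpty' => false,
        ), true);

    If the installed Yii version predates that property, the usual fallback is subclassing CGridView and overriding its row-rendering method to emit the id.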

    Read the article

  • Moving a Drupal between linux servers, best practice to avoid file-ownership problems

    - by zero
    I want to port over a Drupal commons 6x24 from a local LAMP-stack to a production webserver. Both systems run OpenSuse Linux. How do I do this, what are the most important steps. How should I handle file-ownership. It's important for me to have to have full control of the file ownership. If I use the wwwrun account, I frequently run into problems, due to a very strict webserver-admin. See for example the long history of looking for fixes and solutions see this thread and even more interesting see this very long and impressive thread here. All troubles I run into have to do with file-owernship and permissions. This is my current setup; Note: This was just a quick hacked installation - quick and dirty. Well my interest is after the general options i have in the port of a drupal from linux to linux linux-vi17:/srv/www/htdocs/com624 # ls -l insgesamt 224 -rwxrwxrwx 1 root www 45285 19. Jan 00:54 CHANGELOG.txt -rwxrwxrwx 1 root www 925 19. Jan 00:54 COPYRIGHT.txt -rwxrwxrwx 1 root www 206 19. Jan 00:54 cron.php drwxrwxrwx 2 root www 4096 19. Jan 00:54 includes -rwxrwxrwx 1 root www 923 19. Jan 00:54 index.php -rwxrwxrwx 1 root www 1244 19. Jan 00:54 INSTALL.mysql.txt -rwxrwxrwx 1 root www 1011 19. Jan 00:54 INSTALL.pgsql.txt -rwxrwxrwx 1 root www 47073 19. Jan 00:54 install.php -rwxrwxrwx 1 root www 15572 19. Jan 00:54 INSTALL.txt -rwxrwxrwx 1 root www 14940 19. Jan 00:54 LICENSE.txt -rwxrwxrwx 1 root www 1858 19. Jan 00:54 MAINTAINERS.txt drwxrwxrwx 3 root www 4096 19. Jan 00:54 misc drwxrwxrwx 35 root www 4096 19. Jan 00:54 modules drwxrwxrwx 4 root www 4096 19. Jan 00:54 profiles -rwxrwxrwx 1 root www 1470 19. Jan 00:54 robots.txt drwxrwxrwx 2 root www 4096 19. Jan 00:54 scripts drwxrwxrwx 4 root www 4096 19. Jan 00:54 sites drwxrwxrwx 7 root www 4096 19. Jan 00:54 themes -rwxrwxrwx 1 root www 26250 19. Jan 00:54 update.php -rwxrwxrwx 1 root www 4864 19. Jan 00:54 UPGRADE.txt -rwxrwxrwx 1 root www 294 19. Jan 00:54 xmlrpc.php linux-vi17:/srv/www/htdocs/com624 # thx to BetaRides answer here a quick overview on the drush functionality with rsync http://drush.ws/ core-rsync Rsync the Drupal tree to/from another server using ssh. Examples: drush rsync @dev @stage Rsync Drupal root from dev to stage (one of which must be local). drush rsync ./ @stage:%files/img Rsync all files in the current directory to the 'img' directory in the file storage folder on stage. Arguments: source May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php. destination May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php. Options: --mode The unary flags to pass to rsync; --mode=rultz implies rsync -rultz. Default is -az. --RSYNC-FLAG Most rsync flags passed to drush sync will be passed on to rsync. See rsync documentation. --exclude-conf Excludes settings.php from being rsynced. Default. --include-conf Allow settings.php to be rsynced --exclude-files Exclude the files directory. --exclude-sites Exclude all directories in "sites/" except for "sites/all". --exclude-other-sites Exclude all directories in "sites/" except for "sites/all" and the site directory for the site being synced. Note: if the site directory is different between the source and destination, use --exclude-sites followed by "drush rsync @from:%site @to:%site" --exclude-paths List of paths to exclude, seperated by : (Unix-based systems) or ; (Windows). --include-paths List of paths to include, seperated by : (Unix-based systems) or ; (Windows). 
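
    As a rough sketch (not from the question) of the ownership step after copying the tree to the production box, assuming a hypothetical deploy user and openSUSE's www group, and the document root shown in the listing above:

        # hypothetical user/group and path -- adjust for the production server
        chown -R deployuser:www /srv/www/htdocs/com624
        find /srv/www/htdocs/com624 -type d -exec chmod 755 {} \;
        find /srv/www/htdocs/com624 -type f -exec chmod 644 {} \;
        # only the files directory (and settings.php during install) needs to be writable by the web server
        chgrp -R www /srv/www/htdocs/com624/sites/default/files
        chmod -R 775 /srv/www/htdocs/com624/sites/default/files

    The world-writable rwxrwxrwx permissions in the listing above are what typically upsets a strict webserver admin; Drupal itself only needs the files directory writable by the web server account.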
Topics: docs-aliases Site aliases overview with examples Aliases: rsync

    Read the article

  • Perl MiniWebserver

    - by snikolov
    Hey guys, I have configured this mini webserver; however, I need it to serve a file from the local directory as a download and I am running into a problem. Can you please help me fix my issue? Thanks.

        #!/usr/bin/perl
        use strict;
        use Socket;
        use IO::Socket;

        my $buffer;
        my $file;
        my $length;
        my $output;

        # Simple web server in Perl
        # Serves out .html files, echos form data

        sub parse_form {
            my $data = $_[0];
            my %data;
            foreach (split /&/, $data) {
                my ($key, $val) = split /=/;
                $val =~ s/\+/ /g;
                $val =~ s/%(..)/chr(hex($1))/eg;
                $data{$key} = $val;
            }
            return %data;
        }

        # Setup and create socket
        my $port = shift;
        defined($port) or die "Usage: $0 portno\n";
        my $DOCUMENT_ROOT = $ENV{'HOME'} . "public";
        my $server = new IO::Socket::INET(Proto     => 'tcp',
                                          LocalPort => $port,
                                          Listen    => SOMAXCONN,
                                          Reuse     => 1);
        $server or die "Unable to create server socket: $!";

        # Await requests and handle them as they arrive
        while (my $client = $server->accept()) {
            $client->autoflush(1);
            my %request = ();
            my %data;
            {
                # -------- Read Request ---------------
                local $/ = Socket::CRLF;
                while (<$client>) {
                    chomp;
                    # Main http request
                    if (/\s*(\w+)\s*([^\s]+)\s*HTTP\/(\d.\d)/) {
                        $request{METHOD} = uc $1;
                        $request{URL} = $2;
                        $request{HTTP_VERSION} = $3;
                    }
                    # Standard headers
                    elsif (/:/) {
                        (my $type, my $val) = split /:/, $_, 2;
                        $type =~ s/^\s+//;
                        foreach ($type, $val) {
                            s/^\s+//;
                            s/\s+$//;
                        }
                        $request{lc $type} = $val;
                    }
                    # POST data
                    elsif (/^$/) {
                        read($client, $request{CONTENT}, $request{'content-length'})
                            if defined $request{'content-length'};
                        last;
                    }
                }
            }
            # -------- SORT OUT METHOD ---------------
            if ($request{METHOD} eq 'GET') {
                if ($request{URL} =~ /(.*)\?(.*)/) {
                    $request{URL} = $1;
                    $request{CONTENT} = $2;
                    %data = parse_form($request{CONTENT});
                } else {
                    %data = ();
                }
                $data{"_method"} = "GET";
            } elsif ($request{METHOD} eq 'POST') {
                %data = parse_form($request{CONTENT});
                $data{"_method"} = "POST";
            } else {
                $data{"_method"} = "ERROR";
            }
            # ------- Serve file ----------------------
            my $localfile = $DOCUMENT_ROOT.$request{URL};
            # Send Response
            if (open(FILE, "<$localfile")) {
                print $client "HTTP/1.0 200 OK", Socket::CRLF;
                print $client "Content-type: text/html", Socket::CRLF;
                print $client Socket::CRLF;
                my $buffer;
                while (read(FILE, $buffer, 4096)) {
                    print $client $buffer;
                }
                $data{"_status"} = "200";
            } else {
                $file = 'a.pl';
                open(INFILE, $file);
                while (<INFILE>) {
                    $output .= $_;   ## output of the file
                }
                $length = length($output);
                print $client "'HTTP/1.0 200 OK", Socket::CRLF;
                print $client "Content-type: application/octet-stream", Socket::CRLF;
                print $client "Content-Length:".$length, Socket::CRLF;
                print $client "Accept-Ranges: bytes", Socket::CRLF;
                print $client 'Content-Disposition: attachment; filename="test.zip"', Socket::CRLF;
                print $client $output, Socket::CRLF;
                print $client 'Content-Transfer-Encoding: binary"', Socket::CRLF;
                print $client "Connection: Keep-Alive", Socket::CRLF;
                # #$data{"_status"} = "404";
                #
            }
            close(FILE);
            # Log Request
            print ($DOCUMENT_ROOT.$request{URL},"\n");
            foreach (keys(%data)) {
                print (" $_ = $data{$_}\n");
            }
            # ----------- Close Connection and loop ------------------
            close $client;
        }
        # END
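
    A hedged sketch (not part of the post) of how the download branch could be structured: the header block has to end with a blank CRLF line before any body bytes are written, and headers such as Connection belong before the payload rather than after it. Assuming the same $client socket and the same a.pl file:

        # hypothetical rewrite of the "serve a download" branch
        open(INFILE, '<', 'a.pl') or die "can't open: $!";
        binmode INFILE;
        my $payload = do { local $/; <INFILE> };   # slurp the whole file
        close INFILE;
        print $client "HTTP/1.0 200 OK", Socket::CRLF;
        print $client "Content-Type: application/octet-stream", Socket::CRLF;
        print $client "Content-Length: " . length($payload), Socket::CRLF;
        print $client 'Content-Disposition: attachment; filename="a.pl"', Socket::CRLF;
        print $client Socket::CRLF;                # blank line ends the headers
        print $client $payload;                    # raw body, no trailing CRLF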

    Read the article

  • How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?

    - by Stu Thompson
    Prelude: I'm a code-monkey that's increasingly taken on SysAdmin duties for my small company. My code is our product, and increasingly we provide the same app as SaaS. About 18 months ago I moved our servers from a premium hosting-centric vendor to a barebones rack pusher in a tier IV data center. (Literally across the street.) This meant doing much more ourselves--things like networking, storage and monitoring. As part of the big move, to replace our leased direct attached storage from the hosting company, I built a 9TB two-node NAS based on SuperMicro chassis, 3ware RAID cards, Ubuntu 10.04, two dozen SATA disks, DRBD and . It's all lovingly documented in three blog posts: Building up & testing a new 9TB SATA RAID10 NFSv4 NAS: Part I, Part II and Part III. We also set up a Cacti monitoring system. Recently we've been adding more and more data points, like SMART values. I could not have done all this without the awesome boffins at ServerFault. It's been a fun and educational experience. My boss is happy (we saved bucket loads of $$$), our customers are happy (storage costs are down), I'm happy (fun, fun, fun). Until yesterday.
    Outage & Recovery: Some time after lunch we started getting reports of sluggish performance from our application, an on-demand streaming media CMS. About the same time our Cacti monitoring system sent a blizzard of emails. One of the more telling alerts was a graph of iostat await. Performance became so degraded that Pingdom began sending "server down" notifications. The overall load was moderate; there was no traffic spike. After logging onto the application servers, NFS clients of the NAS, I confirmed that just about everything was experiencing highly intermittent and insanely long IO wait times. And once I hopped onto the primary NAS node itself, the same delays were evident when trying to navigate the problem array's file system. Time to fail over; that went well. Within 20 minutes everything was confirmed to be back up and running perfectly.
    Post-Mortem: After any and all system failures I perform a post-mortem to determine the cause of the failure. First thing I did was ssh back into the box and start reviewing logs. It was offline, completely. Time for a trip to the data center. Hardware reset, back up and running. In /var/syslog I found this scary-looking entry:

        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_00], 6 Currently unreadable (pending) sectors
        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_07], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 171 to 170
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 16 Currently unreadable (pending) sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 4 Offline uncorrectable sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
        Nov 15 06:49:45 umbilo smartd[2827]: # 1 Short offline Completed: read failure 90% 6576 3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 2 Short offline Completed: read failure 90% 6087 3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 3 Short offline Completed: read failure 10% 5901 656821791
        Nov 15 06:49:45 umbilo smartd[2827]: # 4 Short offline Completed: read failure 90% 5818 651637856
        Nov 15 06:49:45 umbilo smartd[2827]:

    So I went to check the Cacti graphs for the disks in the array. Here we see that, yes, disk 7 is slipping away just like syslog says it is.
    But we also see that disk 8's SMART Read Errors are fluctuating. There are no messages about disk 8 in syslog. More interesting is that the fluctuating values for disk 8 directly correlate to the high IO wait times! My interpretation is that disk 8 is experiencing an odd hardware fault that results in intermittent, long operation times, and somehow this fault condition on the disk is locking up the entire array. Maybe there is a more accurate or correct description, but the net result has been that the one disk is impacting the performance of the whole array. The Question(s): How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt? Am I being naïve to think that the RAID card should have dealt with this? How can I prevent a single misbehaving disk from impacting the entire array? Am I missing something?
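
    Not from the original post, but a small sketch of how the suspect drive could be interrogated directly through the 3ware controller with smartmontools, assuming its port number matches the [3ware_disk_NN] naming in syslog:

        # hypothetical port number -- the "disk 8" seen in the Cacti graphs
        smartctl -a -d 3ware,8 /dev/twa0        # full SMART attributes and error log for that drive
        smartctl -t long -d 3ware,8 /dev/twa0   # kick off an extended self-test on it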

    Read the article

  • Nhibernate Migration from 1.0.2.0 to 2.1.2 and many-to-one save problems

    - by Meska
    Hi, we have an old, big asp.net application with nhibernate, which we are extending and upgrading some parts of it. NHibernate that was used was pretty old ( 1.0.2.0), so we decided to upgrade to ( 2.1.2) for the new features. HBM files are generated through custom template with MyGeneration. Everything went quite smoothly, except for one thing. Lets say we have to objects Blog and Post. Blog can have many posts, so Post will have many-to-one relationship. Due to the way that this application operates, relationship is done not through primary keys, but through Blog.Reference column. Sample mapings and .cs files: <?xml version="1.0" encoding="utf-8" ?> <id name="Id" column="Id" type="Guid"> <generator class="assigned"/> </id> <property column="Reference" type="Int32" name="Reference" not-null="true" /> <property column="Name" type="String" name="Name" length="250" /> </class> <?xml version="1.0" encoding="utf-8" ?> <id name="Id" column="Id" type="Guid"> <generator class="assigned"/> </id> <property column="Reference" type="Int32" name="Reference" not-null="true" /> <property column="Name" type="String" name="Name" length="250" /> <many-to-one name="Blog" column="BlogId" class="SampleNamespace.BlogEntity,SampleNamespace" property-ref="Reference" /> </class> And class files class BlogEntity { public Guid Id { get; set; } public int Reference { get; set; } public string Name { get; set; } } class PostEntity { public Guid Id { get; set; } public int Reference { get; set; } public string Name { get; set; } public BlogEntity Blog { get; set; } } Now lets say that i have a Blog with Id 1D270C7B-090D-47E2-8CC5-A3D145838D9C and with Reference 1 In old nhibernate such thing was possible: //this Blog already exists in database BlogEntity blog = new BlogEntity(); blog.Id = Guid.Empty; blog.Reference = 1; //Reference is unique, so we can distinguish Blog by this field blog.Name = "My blog"; //this is new Post, that we are trying to insert PostEntity post = new PostEntity(); post.Id = Guid.NewGuid(); post.Name = "New post"; post.Reference = 1234; post.Blog = blog; session.Save(post); However, in new version, i get an exception that cannot insert NULL into Post.BlogId. As i understand, in old version, for nhibernate it was enough to have Blog.Reference field, and it could retrieve entity by that field, and attach it to PostEntity, and when saving PostEntity, everything would work correctly. And as i understand, new NHibernate tries only to retrieve by Blog.Id. How to solve this? I cannot change DB design, nor can i assign an Id to BlogEntity, as objects are out of my control (they come prefilled as generic "ojbects" like this from external source)
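
    One point worth illustrating (an assumption on the editor's part, not from the post): with a property-ref association, NHibernate 2.1 resolves the foreign-key value from the associated object itself, so the Blog generally has to be a persistent instance loaded in the current session rather than a throw-away object carrying only the Reference value. A sketch of that, using the criteria API:

        // hypothetical: fetch the existing blog by its Reference before wiring up the post
        var blog = session.CreateCriteria(typeof(BlogEntity))
                          .Add(Restrictions.Eq("Reference", 1))
                          .UniqueResult<BlogEntity>();

        var post = new PostEntity
        {
            Id = Guid.NewGuid(),
            Name = "New post",
            Reference = 1234,
            Blog = blog          // a session-loaded instance, not a stand-in object
        };
        session.Save(post);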

    Read the article

  • System user authentication via web interface [closed]

    - by donodarazao
    Background: We have one pretty slow and expensive satellite Internet connection that is shared in a network with 5-50 users. To limit traffic, users shall pay a certain sum of money per hour. Routing and traffic accounting on user basis is done by a opensuse 10.3 server. Login is done via pppoe, and for each connection, username, bytes_sent, bytes_rcvd, start_time, end_time,etc are written into a mysql database. Now it was decided that we want to change from time-based to volume-based pricing. As the original developer who installed the system a couple of years ago isn't available, I'm trying to do the changes. Although I'm absolutely new to all this, there is some progress. However, there's one point I'm absolutely stuck. Up to now, only administrators can access connection details and billing information via a web interface. But as volume-based prices are less transparent to users than time-based prices, it is essential that users themselves can check their connections and how much they cost via the web interface. For this, we need some kind of user authentication. Actual question: How to develop such a user authentication? Every user has a linux system user account. With this user name and password, connection to the pppoe-server is made by the client machines. I thought about two possibles ways to authenticate users: First possibility: Users type username and password in a form. This is then somehow checked. We already have to possibilities to change passwords via the web interface. Here are parts of the code: Part of the Perl script the homepage is linked to: #!/usr/bin/perl use CGI; use CGI::Carp qw(fatalsToBrowser); use lib '../lib'; use own_perl_module; my @error; my $data; $query = new CGI; $username = $query->param('username') || ''; $oldpasswd = $query->param('oldpasswd') || ''; $passwd = $query->param('passwd') || ''; $passwd2 = $query->param('passwd2') || ''; own_perl_module::connect(); if ($query->param('submit')) { my $benutzer = own_perl_module::select_benutzer(username => $username) or push @error, "user not exists"; push @error, "your password?!?" unless $passwd; unless (@error) { own_perl_module::update_benutzer($benutzer->{id}, { oldpasswd => $oldpasswd, passwd => $passwd, passwd2 => $passwd2 }, error => \@error) and push @error, "Password changed."; } } Here's part of the sub update_benutzer in the own_perl_module: if ($dat-{passwd} ne '') { my $username = $dat-{username} || $select-{username}; my $system = "./chpasswd.pl '$username' '$dat-{passwd}'" . (defined($dat-{oldpasswd}) ? " '$dat-{oldpasswd}'" : undef); my $answer = $system; if ($? != 0) { chomp($answer); push @$error, $answer || "error changing password ($?)"; Here's chpasswd.pl: #!/usr/bin/perl use FileHandle; use IPC::Open3; local $username = shift; local $passwd = shift; local $oldpasswd = shift; local $chat = { 'Old Password: $' => sub { print POUT "$oldpasswd\n"; }, 'New password: $' => sub { print POUT "$passwd\n"; }, 'Re-enter new password: $' => sub { print POUT "$passwd\n"; }, '(.*)\n$' => sub { print "$1\n"; exit 1; } }; local $/ = \1; my $command; if (defined($oldpasswd)) { $command = "sudo -u '$username' /usr/bin/passwd"; } else { $command = "sudo /usr/bin/passwd '$username'"; } $pid = open3(\*POUT, \*PIN, \*PERR, $command) or die; my $buffer; LOOP: while($_ = <PERR>) { $buffer .= $_; foreach (keys(%$chat)) { if ($buffer =~ /$_/i) { $buffer = undef; &{$chat->{$_}}; } } } exit; Could this somehow be adjusted to verify users, but not changing user passwords? 
The second possibility I see: all pppoe connections are logged in the mysql database. If I could somehow retrieve the username (or uid) of the user connected by pppoe, this could be used to authenticate users. Users could only check their internet connections and costs when they are online (and thus paying money), but this could be tolerated. Here's a line of the script that inserts connections into the database: my $username = $ENV{PEERNAME}; I thought it would be easy to use this variable, but $username seems to be always empty in test-scripts (print $username). Any idea how to retrieve the user connected to the pppoe server? Sorry for the long question! Any help would be very much appreciated. :)
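
    For the first possibility, a minimal sketch (not from the question) of checking a system account's password without changing it, using the CPAN module Authen::Simple::PAM; it assumes PAM and that module are available on the openSUSE box and that 'login' is an acceptable PAM service name:

        use Authen::Simple::PAM;

        # hypothetical verification helper for the web interface
        my $pam = Authen::Simple::PAM->new(service => 'login');
        if ($pam->authenticate($username, $passwd)) {
            # show this user's own connection and billing records
        } else {
            push @error, "wrong username or password";
        }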

    Read the article

  • map<string, vector<string>> reassignment of vector value

    - by user2950936
    I am trying to write a program that takes lines from an input file, sorts the lines into 'signatures' for the purpose of combining all words that are anagrams of each other. I have to use a map, storing the 'signatures' as the keys and storing all words that match those signatures into a vector of strings. Afterwards I must print all words that are anagrams of each other on the same line. Here is what I have so far:

        #include <iostream>
        #include <string>
        #include <algorithm>
        #include <map>
        #include <fstream>
        using namespace std;

        string signature(const string&);
        void printMap(const map<string, vector<string>>&);

        int main(){
            string w1,sig1;
            vector<string> data;
            map<string, vector<string>> anagrams;
            map<string, vector<string>>::iterator it;
            ifstream myfile;
            myfile.open("words.txt");
            while(getline(myfile, w1))
            {
                sig1=signature(w1);
                anagrams[sig1]=data.push_back(w1); //to my understanding this should always work,
            }                                      //either by inserting a new element/key or
                                                   //by pushing back the new word into the vector<string> data
                                                   //variable at index sig1, being told that the assignment operator
                                                   //cannot be used in this way with these data types
            myfile.close();
            printMap(anagrams);
            return 0;
        }

        string signature(const string& w)
        {
            string sig;
            sig=sort(w.begin(), w.end());
            return sig;
        }

        void printMap(const map& m)
        {
            for(string s : m)
            {
                for(int i=0;i<m->second.size();i++)
                    cout << m->second.at();
                cout << endl;
            }
        }

    The first explanation is working, didn't know it was that simple! However now my print function is giving me:

        prob2.cc: In function 'void printMap(const std::map<std::basic_string<char>, std::vector<std::basic_string<char> > >&)':
        prob2.cc:43:36: error: cannot bind 'std::basic_ostream<char>::__ostream_type {aka std::basic_ostream<char>}' lvalue to 'std::basic_ostream<char>&&'
        In file included from /opt/centos/devtoolset-1.1/root/usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../include/c++/4.7.2/iostream:40:0,

    Tried many variations and they always complain about binding:

        void printMap(const map<string, vector<string>> &mymap)
        {
            for(auto &c : mymap)
                cout << c.first << endl << c.second << endl;
        }
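
    For reference, a small sketch (not from the post) of a printMap that compiles: the range-for variable is a key/value pair, and the inner vector has to be walked element by element rather than streamed whole:

        void printMap(const map<string, vector<string>>& m)
        {
            for (const auto& entry : m)              // entry.first = signature, entry.second = words
            {
                for (const string& word : entry.second)
                    cout << word << ' ';
                cout << endl;
            }
        }

    The "cannot bind lvalue to ... &&" message is what GCC 4.7 typically emits when something with no matching operator<< — here a whole vector<string> — is pushed into cout.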

    Read the article

  • algorithm q: Fuzzy matching of structured data

    - by user86432
    I have a fairly small corpus of structured records sitting in a database. Given a tiny fraction of the information contained in a single record, submitted via a web form (so structured in the same way as the table schema), (let us call it the test record) I need to quickly draw up a list of the records that are the most likely matches for the test record, as well as provide a confidence estimate of how closely the search terms match a record. The primary purpose of this search is to discover whether someone is attempting to input a record that is duplicate to one in the corpus. There is a reasonable chance that the test record will be a dupe, and a reasonable chance the test record will not be a dupe. The records are about 12000 bytes wide and the total count of records is about 150,000. There are 110 columns in the table schema and 95% of searches will be on the top 5% most commonly searched columns. The data is stuff like names, addresses, telephone numbers, and other industry specific numbers. In both the corpus and the test record it is entered by hand and is semistructured within an individual field. You might at first blush say "weight the columns by hand and match word tokens within them", but it's not so easy. I thought so too: if I get a telephone number I thought that would indicate a perfect match. The problem is that there isn't a single field in the form whose token frequency does not vary by orders of magnitude. A telephone number might appear 100 times in the corpus or 1 time in the corpus. The same goes for any other field. This makes weighting at the field level impractical. I need a more fine-grained approach to get decent matching. My initial plan was to create a hash of hashes, top level being the fieldname. Then I would select all of the information from the corpus for a given field, attempt to clean up the data contained in it, and tokenize the sanitized data, hashing the tokens at the second level, with the tokens as keys and frequency as value. I would use the frequency count as a weight: the higher the frequency of a token in the reference corpus, the less weight I attach to that token if it is found in the test record. My first question is for the statisticians in the room: how would I use the frequency as a weight? Is there a precise mathematical relationship between n, the number of records, f(t), the frequency with which a token t appeared in the corpus, the probability o that a record is an original and not a duplicate, and the probability p that the test record is really a record x given the test and x contain the same t in the same field? How about the relationship for multiple token matches across multiple fields? Since I sincerely doubt that there is, is there anything that gets me close but is better than a completely arbitrary hack full of magic factors? Barring that, has anyone got a way to do this? I'm especially keen on other suggestions that do not involve maintaining another table in the database, such as a token frequency lookup table :). This is my first post on StackOverflow, thanks in advance for any replies you may see fit to give.
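
    Not from the post, but the weighting the question circles around is essentially inverse document frequency: a token that appears in nearly every record carries little evidence, a rare one carries a lot. A tiny sketch of that hash-of-hashes weighting in Perl, assuming %freq holds the per-field token counts already described and $n is the record count:

        # hypothetical IDF-style weight per field/token
        my %weight;
        for my $field (keys %freq) {
            for my $token (keys %{ $freq{$field} }) {
                $weight{$field}{$token} = log( $n / $freq{$field}{$token} );
            }
        }
        # a candidate's score is then the sum of weights of tokens it shares with the test record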

    Read the article

  • Help with refactoring PHP code

    - by Richard Knop
    I had some troubles implementing Lawler's algorithm but thanks to SO and a bounty of 200 reputation I finally managed to write a working implementation: http://stackoverflow.com/questions/2466928/lawlers-algorithm-implementation-assistance I feel like I'm using too many variables and loops there though so I am trying to refactor the code. It should be simpler and shorter yet remain readable. Does it make sense to make a class for this? Any advice or even help with refactoring this piece of code is welcomed: <?php /* * @name Lawler's algorithm PHP implementation * @desc This algorithm calculates an optimal schedule of jobs to be * processed on a single machine (in reversed order) while taking * into consideration any precedence constraints. * @author Richard Knop * */ $jobs = array(1 => array('processingTime' => 2, 'dueDate' => 3), 2 => array('processingTime' => 3, 'dueDate' => 15), 3 => array('processingTime' => 4, 'dueDate' => 9), 4 => array('processingTime' => 3, 'dueDate' => 16), 5 => array('processingTime' => 5, 'dueDate' => 12), 6 => array('processingTime' => 7, 'dueDate' => 20), 7 => array('processingTime' => 5, 'dueDate' => 27), 8 => array('processingTime' => 6, 'dueDate' => 40), 9 => array('processingTime' => 3, 'dueDate' => 10)); // precedence constrainst, i.e job 2 must be completed before job 5 etc $successors = array(2=>5, 7=>9); $n = count($jobs); $optimalSchedule = array(); for ($i = $n; $i >= 1; $i--) { // jobs not required to precede any other job $arr = array(); foreach ($jobs as $k => $v) { if (false === array_key_exists($k, $successors)) { $arr[] = $k; } } // calculate total processing time $totalProcessingTime = 0; foreach ($jobs as $k => $v) { if (true === array_key_exists($k, $arr)) { $totalProcessingTime += $v['processingTime']; } } // find the job that will go to the end of the optimal schedule array $min = null; $x = 0; $lastKey = null; foreach($arr as $k) { $x = $totalProcessingTime - $jobs[$k]['dueDate']; if (null === $min || $x < $min) { $min = $x; $lastKey = $k; } } // add the job to the optimal schedule array $optimalSchedule[$lastKey] = $jobs[$lastKey]; // remove job from the jobs array unset($jobs[$lastKey]); // remove precedence constraint from the successors array if needed if (true === in_array($lastKey, $successors)) { foreach ($successors as $k => $v) { if ($lastKey === $v) { unset($successors[$k]); } } } } // reverse the optimal schedule array and preserve keys $optimalSchedule = array_reverse($optimalSchedule, true); // add tardiness to the array $i = 0; foreach ($optimalSchedule as $k => $v) { $optimalSchedule[$k]['tardiness'] = 0; $j = 0; foreach ($optimalSchedule as $k2 => $v2) { if ($j <= $i) { $optimalSchedule[$k]['tardiness'] += $v2['processingTime']; } $j++; } $i++; } echo '<pre>'; print_r($optimalSchedule); echo '</pre>';
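
    As a small illustration of the kind of tightening being asked for (a sketch, not a full refactor): the first two inner loops only select the jobs with no successor and sum their processing times, so they can collapse into array functions. Requires PHP 5.3+ for the closure:

        // jobs not required to precede any other job
        $arr = array_diff(array_keys($jobs), array_keys($successors));

        // calculate total processing time of those jobs
        $totalProcessingTime = array_sum(array_map(
            function ($k) use ($jobs) { return $jobs[$k]['processingTime']; },
            $arr
        ));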

    Read the article

  • curl can't verify cert using capath, but can with cacert option

    - by phylae
    I am trying to use curl to connect to a site using HTTPS. But curl is failing to verify the SSL cert. $ curl --verbose --capath ./certs/ --head https://example.com/ * About to connect() to example.com port 443 (#0) * Trying 1.1.1.1... connected * Connected to example.com (1.1.1.1) port 443 (#0) * successfully set certificate verify locations: * CAfile: none CApath: ./certs/ * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS alert, Server hello (2): * SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed * Closing connection #0 curl: (60) SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed More details here: http://curl.haxx.se/docs/sslcerts.html curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option. I know about the -k option. But I do actually want to verify the cert. The certs directory has been properly hashed with c_rehash . and it contains: A Verisign intermediate cert Two self-signed certs The above site should be verified with the Verisign intermediate cert. When I use the --cacert option instead (and point directly to the Verisign cert) curl is able to verify the SSL cert. $ curl --verbose --cacert ./certs/verisign-intermediate-ca.crt --head https://example.com/ * About to connect() to example.com port 443 (#0) * Trying 1.1.1.1... connected * Connected to example.com (1.1.1.1) port 443 (#0) * successfully set certificate verify locations: * CAfile: ./certs/verisign-intermediate-ca.crt CApath: /etc/ssl/certs * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Server finished (14): * SSLv3, TLS handshake, Client key exchange (16): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSL connection using RC4-SHA * Server certificate: * subject: C=US; ST=State; L=City; O=Company; OU=ou1; CN=example.com * start date: 2011-04-17 00:00:00 GMT * expire date: 2012-04-15 23:59:59 GMT * common name: example.com (matched) * issuer: C=US; O=VeriSign, Inc.; OU=VeriSign Trust Network; OU=Terms of use at https://www.verisign.com/rpa (c)10; CN=VeriSign Class 3 Secure Server CA - G3 * SSL certificate verify ok. 
> HEAD / HTTP/1.1 > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15 > Host: example.com > Accept: */* > < HTTP/1.1 404 Not Found HTTP/1.1 404 Not Found < Cache-Control: must-revalidate,no-cache,no-store Cache-Control: must-revalidate,no-cache,no-store < Content-Type: text/html;charset=ISO-8859-1 Content-Type: text/html;charset=ISO-8859-1 < Content-Length: 1267 Content-Length: 1267 < Server: Jetty(7.2.2.v20101205) Server: Jetty(7.2.2.v20101205) < * Connection #0 to host example.com left intact * Closing connection #0 * SSLv3, TLS alert, Client hello (1): In addition, if I try hitting one of the sites using a self signed cert and the --capath option, it also works. (Let me know if I should post an example of that.) This implies that curl is finding the cert directory, and it is properly hash. Finally, I am able to verify the SSL cert with openssl, using its -CApath option. $ openssl s_client -CApath ./certs/ -connect example.com:443 CONNECTED(00000003) depth=3 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority verify return:1 depth=2 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5 verify return:1 depth=1 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3 verify return:1 depth=0 /C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com verify return:1 --- Certificate chain 0 s:/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3 --- Server certificate -----BEGIN CERTIFICATE----- <cert removed> -----END CERTIFICATE----- subject=/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3 --- No client certificate CA names sent --- SSL handshake has read 1563 bytes and written 435 bytes --- New, TLSv1/SSLv3, Cipher is RC4-SHA Server public key is 2048 bit Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : RC4-SHA Session-ID: D65C4C6D52E183BF1E7543DA6D6A74EDD7D6E98EB7BD4D48450885188B127717 Session-ID-ctx: Master-Key: 253D4A3477FDED5FD1353D16C1F65CFCBFD78276B6DA1A078F19A51E9F79F7DAB4C7C98E5B8F308FC89C777519C887E2 Key-Arg : None Start Time: 1303258052 Timeout : 300 (sec) Verify return code: 0 (ok) --- QUIT DONE How can I get curl to verify this cert using the --capath option?
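
    One avenue worth sketching (not from the question): a --capath lookup only works if the hash symlinks in the directory were produced by the same OpenSSL generation that curl is linked against (0.9.8k per the verbose output); 0.9.8 and 1.0.x compute different subject hashes. Regenerating the link for the intermediate by hand, using the same openssl build curl uses, rules that out:

        cd ./certs/
        openssl x509 -noout -hash -in verisign-intermediate-ca.crt      # prints the expected hash value
        ln -s verisign-intermediate-ca.crt "$(openssl x509 -noout -hash -in verisign-intermediate-ca.crt).0"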

    Read the article

  • extensible database design: automatic ALTER TABLE or serialize() field BLOB ?

    - by mario
    I want an adaptable database scheme. But still use a simple table data gateway in my application, where I just pass an $data[] array for storing. The basic columns are settled in the initial table scheme. There will however arise a couple of meta fields later (ca 10-20). I want some flexibility there and not adapt the database manually each time, or -worse- change the application logic just because of new fields. So now there are two options which seem workable yet not overkill. But I'm not sure about the scalability or database drawbacks. (1) Automatic ALTER TABLE. Whenever the $data array is to be saved, the keys are compared against the current database columns. New columns get defined before the $data is INSERTed into the table. Actually seems simple enough in test code: function save($data, $table="forum") { // columns if ($new_fields = array_diff(array_keys($data), known_fields($table))) { extend_schema($table, $new_fields, $data); } // save $columns = implode("`, `", array_keys($data)); $qm = str_repeat(",?", count(array_keys($data)) - 1); echo ("INSERT INTO `$table` (`$columns`) VALUES (?$qm);"); function known_fields($table) { return unserialize(@file_get_contents("db:$table")) ?: array("id"); function extend_schema($table, $new_fields, $data) { foreach ($new_fields as $field) { echo("ALTER TABLE `$table` ADD COLUMN `$field` VARCHAR;"); Since it is mostly meta information fields, adding them just as VARCHAR seems sufficient. Nobody will query by them anyway. So the database is really just used as storage here. However, while I might want to add a lot of new $data fields on the go, they will not always be populated. (2) serialize() fields into BLOB. Any new/extraneous meta fields could be opaque to the database. Simply sorting out the virtual fields from the real database columns is simple. And the meta fields can just be serialize()d into a blob/text field then: function ext_save($data, $table="forum") { $db_fields = array("id", "content", "flags", "ext"); // disjoin foreach (array_diff(array_keys($data),$db_fields) as $key) { $data["ext"][$key] = $data[$key]; unset($data[$key]); } $data["ext"] = serialize($data["ext"]); Unserializing and unpacking this 'ext' column on read queries is a minor overhead. The advantage is that there won't be any sparsely filled columns in the database, so I guess it's compacter and faster than the AUTO ALTER TABLE approach. Of course, this method prevents ever using one of the new fields in a WHERE or GROUP BY clause. But I think none of the possible meta fields (user_agent, author_ip, author_img, votes, hits, last_modified, ..) would/should ever be used there anyway. So I currently prefer the 'ext' blob approach, even if it's a one-way ticket. How are such columns called usually? (looking for examples/doc) Would you use XML serialization for (very theoretical) in-database queries? The adapting table scheme seems a "cleaner" interface, even if most columns might remain empty then. How does that impact speed? How many such sparse VARCHAR fields can MySQL/innodb stomach? But most importantly: Is there any standard implementation for this? A pseudo ORM with automatic ALTER TABLE tricks? Storing a simple column list seems workable, but something like pdo::getColumnMeta would be more robust.
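
    For symmetry with ext_save(), a sketch (not in the question) of the read-side counterpart that folds the serialized blob back into the row array, so the rest of the table data gateway keeps seeing flat fields:

        function ext_load($row) {
            // hypothetical counterpart: unpack the 'ext' blob into ordinary fields
            if (isset($row['ext'])) {
                $meta = unserialize($row['ext']);
                unset($row['ext']);
                if (is_array($meta)) {
                    $row = array_merge($row, $meta);
                }
            }
            return $row;
        }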

    Read the article

  • Many-to-many mapping with LINQ

    - by Alexander
    I would like to perform LINQ to SQL mapping in C#, in a many-to-many relationship, but where data is not mandatory. To be clear: I have a news site/blog, and there's a table called Posts. A blog can relate to many categories at once, so there is a table called CategoriesPosts that links with foreign keys with the Posts table and with Categories table. I've made each table with an identity primary key, an id field in each one, if it matters in this case. In C# I defined a class for each table, defined each field as explicitly as possible. The Post class, as well as Category class, have a EntitySet to link to CategoryPost objects, and CategoryPost class has 2 EntityRef members to link to 2 objects of each other type. The problem is that a Post may relate or not to any category, as well as a category may have posts in it or not. I didn't find a way to make an EntitySet<CategoryPost?> or something like that. So when I added the first post, all went well with not a single SQL statement. Also, this post was present in the output. When I tried to add the second post I got an exception, Object reference not set to an instance of an object, regarding to the CategoryPost member. Post: [Table(Name="tm_posts")] public class Post : IDataErrorInfo { public Post() { //Initialization of NOT NULL fields with their default values } [Column(Name = "id", DbType = "int", CanBeNull = false, IsDbGenerated = true, IsPrimaryKey = true)] public int ID { get; set; } private EntitySet<CategoryPost> _categoryRef = new EntitySet<CategoryPost>(); [Association(Name = "tm_rel_categories_posts_fk2", IsForeignKey = true, Storage = "_categoryRef", ThisKey = "ID", OtherKey = "PostID")] public EntitySet<CategoryPost> CategoryRef { get { return _categoryRef; } set { _categoryRef.Assign(value); } } } CategoryPost [Table(Name = "tm_rel_categories_posts")] public class CategoryPost { [Column(Name = "id", DbType = "int", CanBeNull = false, IsDbGenerated = true, IsPrimaryKey = true)] public int ID { get; set; } [Column(Name = "fk_post", DbType = "int", CanBeNull = false)] public int PostID { get; set; } [Column(Name = "fk_category", DbType = "int", CanBeNull = false)] public int CategoryID { get; set; } private EntityRef<Post> _post = new EntityRef<Post>(); [Association(Name = "tm_rel_categories_posts_fk2", IsForeignKey = true, Storage = "_post", ThisKey = "PostID", OtherKey = "ID")] public Post Post { get { return _post.Entity; } set { _post.Entity = value; } } private EntityRef<Category> _category = new EntityRef<Category>(); [Association(Name = "tm_rel_categories_posts_fk", IsForeignKey = true, Storage = "_category", ThisKey = "CategoryID", OtherKey = "ID")] public Category Category { get { return _category.Entity; } set { _category.Entity = value; } } } Category [Table(Name="tm_categories")] public class Category { [Column(Name = "id", DbType = "int", CanBeNull = false, IsDbGenerated = true, IsPrimaryKey = true)] public int ID { get; set; } [Column(Name = "fk_parent", DbType = "int", CanBeNull = true)] public int ParentID { get; set; } private EntityRef<Category> _parent = new EntityRef<Category>(); [Association(Name = "tm_posts_fk2", IsForeignKey = true, Storage = "_parent", ThisKey = "ParentID", OtherKey = "ID")] public Category Parent { get { return _parent.Entity; } set { _parent.Entity = value; } } [Column(Name = "name", DbType = "varchar(100)", CanBeNull = false)] public string Name { get; set; } } So what am I doing wrong? How to make it possible to insert a post that doesn't belong to any category? 
How to insert categories with no posts?
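
    A detail worth spelling out with a sketch (an editor's observation, not from the post): with hand-written association properties like these, assigning post.Blog-style entity references does not copy the key into the PostID/CategoryID columns, so a post with no categories simply needs no CategoryPost rows, and a link row needs its foreign-key columns set explicitly. Assuming a DataContext named db and an existing category:

        // an uncategorized post: insert it with an empty CategoryRef set, no join rows needed
        var post = new Post();                 // fill the post's own columns here
        db.GetTable<Post>().InsertOnSubmit(post);
        db.SubmitChanges();                    // post.ID is populated by the database

        // to attach it to a category later, create the join row and set the FK columns directly
        var link = new CategoryPost { PostID = post.ID, CategoryID = category.ID };
        db.GetTable<CategoryPost>().InsertOnSubmit(link);
        db.SubmitChanges();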

    Read the article

  • Ubuntu cannot access internet, LAN is fine

    - by Kevin Southworth
    I have an Ubuntu 8.04 LTS server that is directly connected to our Comcast Business Gateway modem and I have configured it with 1 of our 5 allotted Static IPs. My other machines on our LAN can connect to this server (via ssh, web, ping, etc.) but I cannot access this server from outside our network, and this machine cannot get out to the internet either (ping google.com fails with unknown host). Here is my /etc/networking/interfaces file: auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 173.162.54.19 netmask 255.255.255.248 broadcast 173.162.54.23 gateway 173.162.54.22 and my /etc/resolv.conf: nameserver 68.87.77.130 nameserver 68.87.72.130 output from sudo route -n: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 173.162.54.16 0.0.0.0 255.255.255.248 U 0 0 0 eth0 0.0.0.0 173.162.54.22 0.0.0.0 UG 100 0 0 eth0 I have a Windows 2008 machine with an almost identical Static IP, static DNS setup and it works correctly, can access it within the LAN and also from public internet, the Windows machine and the Ubuntu machine are both directly connected to the Comcast Business Gateway. I have tried rebooting Ubuntu, rebooting my Comcast modem, but nothing seems to make it work. I'm an Ubuntu noob, is there some other config I need to apply to make this work? UPDATE: Yes I am able to ping my default gateway 173.162.54.22 output of iptables --list -n: Chain INPUT (policy DROP) target prot opt source destination ufw-before-input all -- 0.0.0.0/0 0.0.0.0/0 ufw-after-input all -- 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy DROP) target prot opt source destination ufw-before-forward all -- 0.0.0.0/0 0.0.0.0/0 ufw-after-forward all -- 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT) target prot opt source destination ufw-before-output all -- 0.0.0.0/0 0.0.0.0/0 ufw-after-output all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-after-forward (1 references) target prot opt source destination LOG all -- 0.0.0.0/0 0.0.0.0/0 limit: avg 3/min burst 10 LOG flags 0 level 4 prefix `[UFW BLOCK FORWARD]: ' RETURN all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-after-input (1 references) target prot opt source destination RETURN udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:137 RETURN udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:138 RETURN tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:139 RETURN tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:445 RETURN udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:67 RETURN udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:68 LOG all -- 0.0.0.0/0 0.0.0.0/0 limit: avg 3/min burst 10 LOG flags 0 level 4 prefix `[UFW BLOCK INPUT]: ' RETURN all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-after-output (1 references) target prot opt source destination RETURN all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-before-forward (1 references) target prot opt source destination ufw-user-forward all -- 0.0.0.0/0 0.0.0.0/0 RETURN all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-before-input (1 references) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED DROP all -- 0.0.0.0/0 0.0.0.0/0 ctstate INVALID ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 3 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 4 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 11 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 12 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 8 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:67 dpt:68 ufw-not-local all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 224.0.0.0/4 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 224.0.0.0/4 ufw-user-input all -- 0.0.0.0/0 0.0.0.0/0 RETURN all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-before-output 
(1 references) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW,RELATED,ESTABLISHED ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW,RELATED,ESTABLISHED ufw-user-output all -- 0.0.0.0/0 0.0.0.0/0 RETURN all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-not-local (1 references) target prot opt source destination RETURN all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL RETURN all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type MULTICAST RETURN all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type BROADCAST LOG all -- 0.0.0.0/0 0.0.0.0/0 limit: avg 3/min burst 10 LOG flags 0 level 4 prefix `[UFW BLOCK NOT-TO-ME]: ' DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-user-forward (1 references) target prot opt source destination RETURN all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-user-input (1 references) target prot opt source destination ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:80 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:22 RETURN all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-user-output (1 references) target prot opt source destination RETURN all -- 0.0.0.0/0 0.0.0.0/0
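
    Not part of the question, but a short sketch of what is usually checked next when the gateway answers and nothing beyond it does, to separate a DNS problem from a routing problem and to see whether packets actually leave eth0:

        ping -c 3 173.162.54.22          # gateway (already works per the update)
        ping -c 3 68.87.77.130           # one of the configured DNS servers, by IP, to rule DNS out
        sudo tcpdump -ni eth0 icmp       # watch whether echo requests go out and replies come back
        arp -n                           # confirm the gateway's MAC address was learned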

    Read the article

  • Scala: Recursively building all paths in a graph?

    - by DarqMoth
    Trying to build all existing paths for an udirected graph defined as a map of edges using the following algorithm: Start: with a given vertice A Find an edge (X.A, X.B) or (X.B, X.A), add this edge to path Find all edges Ys fpr which either (Y.C, Y.B) or (Y.B, Y.C) is true For each Ys: A=B, goto Start Providing edges are defined as the following map, where keys are tuples consisting of two vertices: val edges = Map( ("n1", "n2") -> "n1n2", ("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4", ("n5", "n1") -> "n5n1", ("n5", "n4") -> "n5n4") As an output I need to get a list of ALL pathes where each path is a list of adjecent edges like this: val allPaths = List( List(("n1", "n2") -> "n1n2"), List(("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4"), List(("n5", "n1") -> "n5n1"), List(("n5", "n4") -> "n5n4"), List(("n2", "n1") -> "n1n2", ("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4", ("n5", "n4") -> "n5n4")) //... //... more pathes to go } Note: Edge XY = (x,y) - "xy" and YX = (y,x) - "yx" exist as one instance only, either as XY or YX So far I have managed to implement code that duplicates edges in the path, which is wrong and I can not find the error: object Graph2 { type Vertice = String type Edge = ((String, String), String) type Path = List[((String, String), String)] val edges = Map( //(("v1", "v2") , "v1v2"), (("v1", "v3") , "v1v3"), (("v3", "v4") , "v3v4") //(("v5", "v1") , "v5v1"), //(("v5", "v4") , "v5v4") ) def main(args: Array[String]): Unit = { val processedVerticies: Map[Vertice, Vertice] = Map() val processedEdges: Map[(Vertice, Vertice), (Vertice, Vertice)] = Map() val path: Path = List() println(buildPath(path, "v1", processedVerticies, processedEdges)) } /** * Builds path from connected by edges vertices starting from given vertice * Input: map of edges * Output: list of connected edges like: List(("n1", "n2") -> "n1n2"), List(("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4"), List(("n5", "n1") -> "n5n1"), List(("n5", "n4") -> "n5n4"), List(("n2", "n1") -> "n1n2", ("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4", ("n5", "n4") -> "n5n4")) */ def buildPath(path: Path, vertice: Vertice, processedVerticies: Map[Vertice, Vertice], processedEdges: Map[(Vertice, Vertice), (Vertice, Vertice)]): List[Path] = { println("V: " + vertice + " VM: " + processedVerticies + " EM: " + processedEdges) if (!processedVerticies.contains(vertice)) { val edges = children(vertice) println("Edges: " + edges) val x = edges.map(edge => { if (!processedEdges.contains(edge._1)) { addToPath(vertice, processedVerticies.++(Map(vertice -> vertice)), processedEdges, path, edge) } else { println("ALready have edge: "+edge+" Return path:"+path) path } }) val y = x.toList y } else { List(path) } } def addToPath( vertice: Vertice, processedVerticies: Map[Vertice, Vertice], processedEdges: Map[(Vertice, Vertice), (Vertice, Vertice)], path: Path, edge: Edge): Path = { val newPath: Path = path ::: List(edge) val key = edge._1 val nextVertice = neighbor(vertice, key) val x = buildPath (newPath, nextVertice, processedVerticies, processedEdges ++ (Map((vertice, nextVertice) -> (vertice, nextVertice))) ).flatten // need define buidPath type x } def children(vertice: Vertice) = { edges.filter(p => (p._1)._1 == vertice || (p._1)._2 == vertice) } def containsPair(x: (Vertice, Vertice), m: Map[(Vertice, Vertice), (Vertice, Vertice)]): Boolean = { m.contains((x._1, x._2)) || m.contains((x._2, x._1)) } def neighbor(vertice: String, key: (String, String)): String = key match { case (`vertice`, x) => x case (x, `vertice`) => x } } 
Running this results in: List(List(((v1,v3),v1v3), ((v1,v3),v1v3), ((v3,v4),v3v4))) Why is that?
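
    Not from the post, but one small piece that often simplifies this kind of traversal: a direction-independent key, so an edge only ever has to be looked up one way no matter whether it was stored as (a, b) or (b, a). A sketch:

        // hypothetical helper: canonical key for an undirected edge
        def edgeKey(a: Vertice, b: Vertice): (Vertice, Vertice) =
          if (a <= b) (a, b) else (b, a)

        // lookups and the "already visited" bookkeeping can then use edgeKey(...) consistently,
        // e.g. processedEdges.contains(edgeKey(vertice, nextVertice))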

    Read the article

  • Problem with the output of Jquery function .offset in IE

    - by vandalk
    Hello! I'm new to jquery and javascript, and to web site developing overall, and I'm having a problem with the .offset function. I have the following code working fine on chrome and FF but not working on IE: $(document).keydown(function(k){ var keycode=k.which; var posk=$('html').offset(); var centeryk=screen.availHeight*0.4; var centerxk=screen.availWidth*0.4; $("span").text(k.which+","+posk.top+","+posk.left); if (keycode==37){ k.preventDefault(); $("html,body").stop().animate({scrollLeft:-1*posk.left-centerxk}) }; if (keycode==38){ k.preventDefault(); $("html,body").stop().animate({scrollTop:-1*posk.top-centeryk}) }; if (keycode==39){ k.preventDefault(); $("html,body").stop().animate({scrollLeft:-1*posk.left+centerxk}) }; if (keycode==40){ k.preventDefault(); $("html,body").stop().animate({scrollTop:-1*posk.top+centeryk}) }; }); hat I want it to do is to scroll the window a set percentage using the arrow keys, so my thought was to find the current coordinates of the top left corner of the document and add a percentage relative to the user screen to it and animate the scroll so that the content don't jump and the user looses focus from where he was. The $("span").text are just so I know what's happening and will be turned into comments when the code is complete. So here is what happens, on Chrome and Firefox the output of the $("span").text for the position variables is correct, starting at 0,0 and always showing how much of the content was scrolled in coordinates, but on IE it starts on -2,-2 and never gets out of it, even if I manually scroll the window until the end of it and try using the right arrow key it will still return the initial value of -2,-2 and scroll back to the beggining. I tried substituting the offset for document.body.scrollLetf and scrollTop but the result is the same, only this time the coordinates are 0,0. Am I doing something wrong? Or is this some IE bug? Is there a way around it or some other function I can use and achieve the same results? On another note, I did other two navigating options for the user in this section of the site, one is to click and drag anywhere on the screen to move it: $("html").mousedown(function(e) { var initx=e.pageX var inity=e.pageY $(document).mousemove(function(n) { var x_inc= initx-n.pageX; var y_inc= inity-n.pageY; window.scrollBy(x_inc*0.7,y_inc*0.7); initx=n.pageX; inity=n.pageY //$("span").text(initx+ "," +inity+ "," +x_inc+ "," +y_inc+ "," +e.pageX+ "," +e.pageY+ "," +n.pageX+ "," +n.pageY); // cancel out any text selections document.body.focus(); // prevent text selection in IE document.onselectstart = function () { return false; }; // prevent IE from trying to drag an image document.ondragstart = function() { return false; }; // prevent text selection (except IE) return false; }); }); $("html").mouseup(function() { $(document).unbind('mousemove'); }); The only part of this code I didn't write was the preventing text selection lines, these ones I found in a tutorial about clicking and draging objects, anyway, this code works fine on Chrome, FireFox and IE, though on Firefox and IE it's more often to happen some moviment glitches while you drag, sometimes it seems the "scrolling" is a litlle jagged, it's only a visual thing and not that much significant but if there's a way to prevent it I would like to know.
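
    A possible workaround, not from the post: instead of reading $('html').offset(), which IE reports here as a constant -2,-2, the current scroll position can be read directly with $(window).scrollTop() / $(window).scrollLeft(), which behave consistently across browsers. A sketch for one of the arrow-key branches, reusing the same variable names:

        $(document).keydown(function (k) {
            var centerxk = screen.availWidth * 0.4;
            var posLeft = $(window).scrollLeft();   // current horizontal scroll offset
            if (k.which === 39) {                   // right arrow
                k.preventDefault();
                $("html,body").stop().animate({ scrollLeft: posLeft + centerxk });
            }
        });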

    Read the article

  • Using VBA / Macro to highlight changes in excel

    - by Zaj
    I have a spread sheet that I send out to various locations to have information on it updated and then sent back to me. However, I had to put validation and lock the cells to force users to input accurate information. Then I can to use VBA to disable the work around of cut copy and paste functions. And additionally I inserted a VBA function to force users to open the excel file in Macros. Now I'm trying to track the changes so that I know what was updated when I recieve the sheet back. However everytime i do this I get an error when someone savesthe document and randomly it will lock me out of the document completely. I have my code pasted below, can some one help me create code in the VBA forum to highlight changes instead of through excel's share/track changes option? ThisWorkbook (Code): Option Explicit Const WelcomePage = "Macros" Private Sub Workbook_BeforeClose(Cancel As Boolean) Call ToggleCutCopyAndPaste(True) 'Turn off events to prevent unwanted loops Application.EnableEvents = False 'Evaluate if workbook is saved and emulate default propmts With ThisWorkbook If Not .Saved Then Select Case MsgBox("Do you want to save the changes you made to '" & .Name & "'?", _ vbYesNoCancel + vbExclamation) Case Is = vbYes 'Call customized save routine Call CustomSave Case Is = vbNo 'Do not save Case Is = vbCancel 'Set up procedure to cancel close Cancel = True End Select End If 'If Cancel was clicked, turn events back on and cancel close, 'otherwise close the workbook without saving further changes If Not Cancel = True Then .Saved = True Application.EnableEvents = True .Close savechanges:=False Else Application.EnableEvents = True End If End With End Sub Private Sub Workbook_BeforeSave(ByVal SaveAsUI As Boolean, Cancel As Boolean) 'Turn off events to prevent unwanted loops Application.EnableEvents = False 'Call customized save routine and set workbook's saved property to true '(To cancel regular saving) Call CustomSave(SaveAsUI) Cancel = True 'Turn events back on an set saved property to true Application.EnableEvents = True ThisWorkbook.Saved = True End Sub Private Sub Workbook_Open() Call ToggleCutCopyAndPaste(False) 'Unhide all worksheets Application.ScreenUpdating = False Call ShowAllSheets Application.ScreenUpdating = True End Sub Private Sub CustomSave(Optional SaveAs As Boolean) Dim ws As Worksheet, aWs As Worksheet, newFname As String 'Turn off screen flashing Application.ScreenUpdating = False 'Record active worksheet Set aWs = ActiveSheet 'Hide all sheets Call HideAllSheets 'Save workbook directly or prompt for saveas filename If SaveAs = True Then newFname = Application.GetSaveAsFilename( _ fileFilter:="Excel Files (*.xls), *.xls") If Not newFname = "False" Then ThisWorkbook.SaveAs newFname Else ThisWorkbook.Save End If 'Restore file to where user was Call ShowAllSheets aWs.Activate 'Restore screen updates Application.ScreenUpdating = True End Sub Private Sub HideAllSheets() 'Hide all worksheets except the macro welcome page Dim ws As Worksheet Worksheets(WelcomePage).Visible = xlSheetVisible For Each ws In ThisWorkbook.Worksheets If Not ws.Name = WelcomePage Then ws.Visible = xlSheetVeryHidden Next ws Worksheets(WelcomePage).Activate End Sub Private Sub ShowAllSheets() 'Show all worksheets except the macro welcome page Dim ws As Worksheet For Each ws In ThisWorkbook.Worksheets If Not ws.Name = WelcomePage Then ws.Visible = xlSheetVisible Next ws Worksheets(WelcomePage).Visible = xlSheetVeryHidden End Sub Private Sub Workbook_Activate() Call ToggleCutCopyAndPaste(False) End Sub 
Private Sub Workbook_Deactivate() Call ToggleCutCopyAndPaste(True) End Sub This is in my ModuleCode: Option Explicit Sub ToggleCutCopyAndPaste(Allow As Boolean) 'Activate/deactivate cut, copy, paste and pastespecial menu items Call EnableMenuItem(21, Allow) ' cut Call EnableMenuItem(19, Allow) ' copy Call EnableMenuItem(22, Allow) ' paste Call EnableMenuItem(755, Allow) ' pastespecial 'Activate/deactivate drag and drop ability Application.CellDragAndDrop = Allow 'Activate/deactivate cut, copy, paste and pastespecial shortcut keys With Application Select Case Allow Case Is = False .OnKey "^c", "CutCopyPasteDisabled" .OnKey "^v", "CutCopyPasteDisabled" .OnKey "^x", "CutCopyPasteDisabled" .OnKey "+{DEL}", "CutCopyPasteDisabled" .OnKey "^{INSERT}", "CutCopyPasteDisabled" Case Is = True .OnKey "^c" .OnKey "^v" .OnKey "^x" .OnKey "+{DEL}" .OnKey "^{INSERT}" End Select End With End Sub Sub EnableMenuItem(ctlId As Integer, Enabled As Boolean) 'Activate/Deactivate specific menu item Dim cBar As CommandBar Dim cBarCtrl As CommandBarControl For Each cBar In Application.CommandBars If cBar.Name <> "Clipboard" Then Set cBarCtrl = cBar.FindControl(ID:=ctlId, recursive:=True) If Not cBarCtrl Is Nothing Then cBarCtrl.Enabled = Enabled End If Next End Sub Sub CutCopyPasteDisabled() 'Inform user that the functions have been disabled MsgBox " Cutting, copying and pasting have been disabled in this workbook. Please hard key in data. " End Sub

    Read the article

  • OpenVPN IPv6 over IPv4 tunnel

    - by user66779
    Today I installed OpenVPN 2.3rc2 on both my windows 7 client machine and centos 6 server. This new version of OpenVPN provides full compatibility for IPv6. The Problem: I am currently able to connect to the server (through the IPv4 tunnel) and ping the IPv6 address which is assigned to my client and I can also ping the tun0 interface on the server. However, I cannot browse to any IPv6 websites. My vps provider has given me this: 2607:f840:0044:0022:0000:0000:0000:0000/64 is routed to this server (2607:f840:0:3f:0:0:0:eda). This is ifconfig after setup with OpenVPN running: eth0 Link encap:Ethernet HWaddr 00:16:3E:12:77:54 inet addr:208.111.39.160 Bcast:208.111.39.255 Mask:255.255.255.0 inet6 addr: 2607:f740:0:3f::eda/64 Scope:Global inet6 addr: fe80::216:3eff:fe12:7754/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2317253 errors:0 dropped:7263 overruns:0 frame:0 TX packets:1977414 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1696120096 (1.5 GiB) TX bytes:1735352992 (1.6 GiB) Interrupt:29 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 inet6 addr: 2607:f740:44:22::1/64 Scope:Global UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:739567 errors:0 dropped:0 overruns:0 frame:0 TX packets:1218240 errors:0 dropped:1542 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:46512557 (44.3 MiB) TX bytes:1559930874 (1.4 GiB) So OpenVPN is sucessfully creating a tun0 interface and assigning clients IPv6 addresses using 2607:f840:44:22::/64. The first client to connect is getting 2607:f840:44:22::1000 and the second 2607:f840:44:22::1001, and so on... plus 1 each time. After connecting as the first client, I can ping from my windows client machine 2607:f740:44:22::1 and 2607:f740:44:22::1000. However, I have no access to IPv6 websites. I believe the problem is that the tun0 IPv6 addressees are not being forwarded to the eth0 interface. 
This is the firewall running on the server: #!/bin/sh # # iptables configuration script # # Flush all current rules from iptables # iptables -F iptables -t nat -F # # Allow SSH connections on tcp port 22 # iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT iptables -A OUTPUT -o eth0 -p tcp --sport 22 -j ACCEPT # # Set access for localhost # iptables -A INPUT -i lo -j ACCEPT # # Accept connections on 1195 for vpn access from client # iptables -A INPUT -i eth0 -p udp --dport 1195 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -p udp --sport 1195 -m state --state ESTABLISHED -j ACCEPT # # Apply forwarding for OpenVPN Tunneling # iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to 209.111.39.160 iptables -A FORWARD -j REJECT # # Enable forwarding # echo 1 > /proc/sys/net/ipv4/ip_forward # # Set default policies for INPUT, FORWARD and OUTPUT chains # iptables -P INPUT ACCEPT iptables -P FORWARD ACCEPT iptables -P OUTPUT ACCEPT # # IPv6 # IP6TABLES=/sbin/ip6tables $IP6TABLES -F INPUT $IP6TABLES -F FORWARD $IP6TABLES -F OUTPUT echo -n "1" >/proc/sys/net/ipv6/conf/all/forwarding echo -n "1" >/proc/sys/net/ipv6/conf/all/proxy_ndp echo -n "0" >/proc/sys/net/ipv6/conf/all/autoconf echo -n "0" >/proc/sys/net/ipv6/conf/all/accept_ra $IP6TABLES -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT $IP6TABLES -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT $IP6TABLES -A INPUT -i eth0 -p icmpv6 -j ACCEPT $IP6TABLES -P INPUT ACCEPT $IP6TABLES -P FORWARD ACCEPT $IP6TABLES -P OUTPUT ACCEPT Server.conf: server-ipv6 2607:f840:44:22::/64 server 10.8.0.0 255.255.255.0 port 1195 proto udp dev tun ca ca.crt cert server.crt key server.key dh dh2048.pem ifconfig-pool-persist ipp.txt push "redirect-gateway def1 bypass-dhcp" push "dhcp-option DNS 208.67.222.222" push "dhcp-option DNS 208.67.220.220" keepalive 10 60 tls-auth ta.key 0 cipher AES-256-CBC comp-lzo user nobody group nobody persist-key persist-tun status openvpn-status.log log-append openvpn.log verb 5 Client.conf: client dev tun nobind keepalive 10 60 hand-window 15 remote 209.111.39.160 1195 udp persist-key persist-tun ca ca.crt key client1.key cert client1.crt remote-cert-tls server tls-auth ta.key 1 comp-lzo verb 3 cipher AES-256-CBC I'm not sure where I am going wrong, it could be the firewall, or something missing from server or client.conf. This version of OpenVPN was only released yesterday, and there's little info on the internet about how to setup an IPv6 over IPv4 vpn tunnel. I've read the manual for this new version of OpenVPN (parts pertaining to IPv6) and it provides very little info too. Thanks for any help.

    Read the article

  • many-to-many-to-many, incl alignment of data from diff sources

    - by JefeCoon
    Re-factoring dbase to support many:many:many. At the second and third levels we need to preserve end-user 'mapping' or aligning of data from different sources, e.g. Order 17 FirstpartyOrderID => aha LineItem_for_BigShinyThingy => AA-1 # maps to 77-a LineItem_for_BigShinyThingy => AA-2 # maps to 77-b, 77-c LineItem_for_LittleWidget => AA-x # maps to 77-zulu, 77-alpha, 99-foxtrot LineItem_for_LittleWidget => AA-y # maps to 77-zulu, 99-foxtrot LineItem_for_LittleWidget => AA-z # maps to 77-alpha ThirdpartyOrderID => foo LineItem_for_BigShinyThingy => 77-a LineItem_for_BigShinyThingy => 77-b LineItem_for_BigShinyThingy => 77-c LineItem_for_LittleWidget => 77-zulu LineItem_for_LittleWidget => 77-alpha ThirdpartyOrderID => bar LineItem_for_LittleWidget => 99-foxtrot Each LineItem has daily datapoints reported from its own source (Firstparty|Thirdparty). In our UI & app we provide tools to align these, then we'd like to save them into the cleanest possible schema for querying, enabling us to diff the reported daily datapoints, and perform other daily calculations (which we'll store in the dbase also, fortunately that should be cake once we've nailed this). We need to map related [firstparty|thirdparty]line_items which have their own respective datapoints. We'll be using the association to pull each line_items collection of datapoints for summary and discrepancy calculations. I'm considering two options, std has_many,through x2 --or-- possibly (scary) ubermasterjoin table OptionA: order<<-->> order_join_table[id,order_id,firstparty_order_id,thirdparty_order_id] <<-->>line_item order_join_table[firstparty_order_id]-->raw_order[id] order_join_table[thirdparty_order_id]-->raw_order[id] raw_order-->raw_line_items[raw_order_id] line_item<<-->> line_item_join[id,LI_join_id,firstparty_LI,thirdparty_LI <<-->>raw_line_items line_item_join[firstparty_LI]-->raw_line_item[id] line_item_join[thirdparty_LI]-->raw_line_item[id] raw_line_item<<-->>datapoints = we rely upon join to store all mappings of first|third orders & line_items = keys to raw_* enable lookup of these order & line_item details = concerns about circular references and/or lack of correct mapping logic, e.g order--line_item--raw_line_items vs. order--raw_order--raw_line_items OptionB: order<<-->> join_master[id,order_id,FP_order_id,TP_order_id,FP_line_item_id,TP_line_item_id] join_master[FP_order_id & TP_order_id]-->raw_order[id] join_master[FP_line_item_id & TP_line_item_id]-->raw_line_item[id] = every combo of FP_line_item + TP_line_item writes a record into the join_master table = "theoretically" queries easy/fast/flexible/sexy At long last, my questions: a) any learnings from painful firsthand experience about how best to implement/tune/optimize many-to-many-to-many relationships b) in rails? c) any painful gotchas (circular references, slow queries, spaghetti-monsters) to watch out for? d) any joy & goodness in Rails3 that makes this magically easy & joyful? e) anyone written the "how to do many-to-many-to-many schema in Rails and make it fast & sexy?" tutorial that I somehow haven't found? If not, I'll follow up with our learnings in the hope it's helpful.. Thanks in advance- --Jeff
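For what it's worth, Option A is easier to reason about when its tables are written out as plain records. The sketch below is not Rails code, just a language-neutral restatement of that option; every name and type in it is an illustrative assumption.

// Sketch of Option A's tables as plain records; names and types are assumptions.
case class RawOrder(id: Long, source: String)                       // "firstparty" or "thirdparty"
case class RawLineItem(id: Long, rawOrderId: Long, label: String)   // e.g. "AA-1", "77-a"
case class Datapoint(rawLineItemId: Long, day: String, value: BigDecimal)

case class Order(id: Long)
case class OrderJoin(id: Long, orderId: Long, firstpartyOrderId: Long, thirdpartyOrderId: Long)

// One row per user-made alignment of a firstparty line item to a thirdparty one.
case class LineItemJoin(id: Long, orderJoinId: Long, firstpartyLineItemId: Long, thirdpartyLineItemId: Long)

Seen this way, the daily diff is a join from LineItemJoin to the two RawLineItem rows and then to their Datapoint rows by day; Option B collapses OrderJoin and LineItemJoin into one wide table, which repeats the order-level columns for every aligned line-item pair.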

    Read the article

  • Create dropdown from multi array + PHP class

    - by chris
Hi there, Well, I am writing a class that creates a DOB selection dropdown. I am having trouble figuring out the dropdown(); it seems to work, but not exactly. The code just creates one drop down, and under this dropdown all the day, month and year data is in one selection, like:

<label> <sup>*</sup>DOB</label>
<select name="form_bod_year">
<option value=""/>
<option selected="" value="0">1</option>
<option value="1">2</option>
<option value="2">3</option>
<option value="3">4</option>
<option value="4">5</option>
<option value="5">6</option>
..
<option value="29">30</option>
<option value="30">31</option>
<option value="1">January</option>
<option value="2">February</option>
..
<option value="11">November</option>
<option value="12">December</option>
<option selected="" value="0">1910</option>
<option value="1">1911</option>
..
<option value="98">2008</option>
<option value="99">2009</option>
<option value="100">2010</option>
</select>

Here is my code. I wonder why all the data ends up in one selection; it has to be three selections - Day : Month : Year.

//dropdown connector
class DropDownConnector {
  var $dropDownsDatas;
  var $field_label;
  var $field_name;
  var $locale;

  function __construct($dropDownsDatas, $field_label, $field_name, $locale) {
    $this->dropDownsDatas = $dropDownsDatas;
    $this->field_label = $field_label;
    $this->field_name = $field_name;
    $this->locale = $locale;
  }

  function getValue(){
    return $_POST[$this->field_name];
  }

  function dropdown(){
    $selectedVal = $this->getValue($this->field_name);
    foreach($this->dropDownsDatas as $keys=>$values){
      foreach ($values as $key=>$value){
        $selected = ($key == $selectedVal ? "selected" : "" );
        $options .= sprintf('%s',$key,$value);
      };
    };
    return $select_start = "$this->field_desc".$options."";
  }

  function getLabel(){
    $non_req = $this->getNotRequiredData();
    $req = in_array($this->field_name, $non_req) ? '' : '*';
    return $this->field_label ? $req . $this->field_label : '';
  }

  function __toString() {
    $id = $this->field_name;
    $label = $this->getLabel();
    $field = $this->dropdown();
    return 'field_name.'"'.$label.''.$field.'';
  }
}

function generateForm ($lang,$country_list,$states_list,$days_of_month,$month_list,$years){
  $xx = array(
    'form_bod_day' => $days_of_month,
    'form_bod_month' => $month_list,
    'form_bod_year' => $years);
  echo $dropDownConnector = new DropDownConnector($xx,'DOB','bod','en-US');
}

// Call php class to use class on external functions.
$avInq = new formGenerator;
$lang='en-US';
echo generateForm ($lang,$country_list,$states_list,$days_of_month,$month_list,$years);
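A structural sketch of the intended output, one <select> per field instead of one element holding every field's options, is below. It is only an illustration of the markup shape, written in Scala rather than PHP to stay consistent with the other sketches on this page; the field names come from the question and the ranges are assumptions.

// Illustrative only: build one <select> per field and concatenate the three
// results, instead of folding every field's options into a single element.
object DobDropdowns {
  def select(name: String, options: Seq[(String, String)]): String =
    s"""<select name="$name">""" +
      options.map { case (value, label) => s"""<option value="$value">$label</option>""" }.mkString +
      "</select>"

  val months = Seq("January", "February", "March", "April", "May", "June",
    "July", "August", "September", "October", "November", "December")

  val dob: String =
    select("form_bod_day",   (1 to 31).map(d => (d.toString, d.toString))) +
    select("form_bod_month", months.zipWithIndex.map { case (m, i) => ((i + 1).toString, m) }) +
    select("form_bod_year",  (1910 to 2010).map(y => (y.toString, y.toString)))
}

Printing DobDropdowns.dob yields three sibling <select> elements, which is the shape a DOB control needs; in the class above that means dropdown() should emit a separate select for each entry of $dropDownsDatas rather than appending every option to one $options string.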

    Read the article

  • Linux/Apache performance very slow even on local network

    - by klausch
    I have an Ubuntu server machine running Apache and MYSQL. System and version info is as follows: Linux kernel 3.0.0.-12 Apache/2.2.20 MySQL Ver 14.14.Distrib 5.1.58 I am running a few websites on this server, some HTML only, some PHP/MySQL. THe [problem is that response time is very slow, both on static as well as the dynamic sites. Sometimes it takes more than 10 seconds before a response is given, this makes the sites very slow and almost unusable. The problem occurs even when requesting from the local network. I have added the involved subdomains to my /etc/hosts file, and abolve all the problem is not solved by using IP numbers instead of URL's. So there is no DNS lookup issue. I have modified the log format by showing the response times and sometimes a files takes 12 seconds to be served, see the jquery~.js file in the example screenshot. I have no explanation for this extremely long response time, but is is not even the only issue here, some other files takes a long time to be served too, but do not show a long response time in the log file. So probably different tissues are involved here. I cannot find a solution until now, any suggestions??? THanx in advance, Klaas link to screenshot picture from access logfile Some extra configuration info: apache2.conf (comment is removed) LockFile ${APACHE_LOCK_DIR}/accept.lock PidFile ${APACHE_PID_FILE} Timeout 300 KeepAlive On MaxKeepAliveRequests 100 KeepAliveTimeout 5 <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 </IfModule> <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> <IfModule mpm_event_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} AccessFileName .htaccess <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> DefaultType text/plain HostnameLookups Off ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn Include mods-enabled/*.load Include mods-enabled/*.conf Include httpd.conf Include ports.conf LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %T/%D" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent Include conf.d/ Include sites-enabled/ And the virtual hostfile for one of the slow sites, in fact it is pretty straightforward... <VirtualHost *:80> ServerAdmin [email protected] ServerSignature EMail ServerName toenjoy.drsklaus.nl DocumentRoot /var/www/toenjoy.drsklaus.nl <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/toenjoy.drsklaus.nl/> Options Indexes FollowSymLinks MultiViews AllowOverride AuthConfig AuthType Basic AuthName "To Enjoy" AuthUserFile /etc/.htpasswd Require user petraaa Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. 
LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> And the output of free -m: klaas@ubuntu-server:/etc/apache2$ free -m total used free shared buffers cached Mem: 1997 1401 595 0 144 1017 -/+ buffers/cache: 238 1758 Swap: 2035 0 2035 and I have no indication that swapping occurs at the moments the site is slow. I have run top and it does not appear to be a CPU issue. I have the impression that the spawning of an Apache thread could maybe be the bottleneck, but that is just a guess. Maybe this gives some extra information! EDIT: The problem seemed to be gone for some time but occurs again! And not only with Apache: connecting over SSH also takes a tremendously long time, sometimes up to 15 seconds before the passphrase is asked for. Also scp works very slowly. The behaviour is really unpredictable and makes the server very hard to use. Any ideas...?

    Read the article

  • Control XML serialization of Dictionary<K, T>

    - by Luca
    I'm investigating about XML serialization, and since I use lot of dictionary, I would like to serialize them as well. I found the following solution for that (I'm quite proud of it! :) ). [XmlInclude(typeof(Foo))] public class XmlDictionary<TKey, TValue> { /// <summary> /// Key/value pair. /// </summary> public struct DictionaryItem { /// <summary> /// Dictionary item key. /// </summary> public TKey Key; /// <summary> /// Dictionary item value. /// </summary> public TValue Value; } /// <summary> /// Dictionary items. /// </summary> public DictionaryItem[] Items { get { List<DictionaryItem> items = new List<DictionaryItem>(ItemsDictionary.Count); foreach (KeyValuePair<TKey, TValue> pair in ItemsDictionary) { DictionaryItem item; item.Key = pair.Key; item.Value = pair.Value; items.Add(item); } return (items.ToArray()); } set { ItemsDictionary = new Dictionary<TKey,TValue>(); foreach (DictionaryItem item in value) ItemsDictionary.Add(item.Key, item.Value); } } /// <summary> /// Indexer base on dictionary key. /// </summary> /// <param name="key"></param> /// <returns></returns> public TValue this[TKey key] { get { return (ItemsDictionary[key]); } set { Debug.Assert(value != null); ItemsDictionary[key] = value; } } /// <summary> /// Delegate for get key from a dictionary value. /// </summary> /// <param name="value"></param> /// <returns></returns> public delegate TKey GetItemKeyDelegate(TValue value); /// <summary> /// Add a range of values automatically determining the associated keys. /// </summary> /// <param name="values"></param> /// <param name="keygen"></param> public void AddRange(IEnumerable<TValue> values, GetItemKeyDelegate keygen) { foreach (TValue v in values) ItemsDictionary.Add(keygen(v), v); } /// <summary> /// Items dictionary. /// </summary> [XmlIgnore] public Dictionary<TKey, TValue> ItemsDictionary = new Dictionary<TKey,TValue>(); } The classes deriving from this class are serialized in the following way: <FooDictionary xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Items> <DictionaryItemOfInt32Foo> <Key/> <Value/> </DictionaryItemOfInt32XmlProcess> <Items> This give me a good solution, but: How can I control the name of the element DictionaryItemOfInt32Foo What happens if I define a Dictionary<FooInt32, Int32> and I have the classes Foo and FooInt32? Is it possible to optimize the class above? THank you very much!
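Setting the element-naming question aside for a moment, the core pattern the class implements, exposing the dictionary to the serializer as an array of key/value pairs and rebuilding the dictionary when the array is assigned back, can be stated in a few lines. The sketch below is only that generic restatement (not C#, and it does not answer how XmlSerializer derives names such as DictionaryItemOfInt32Foo); all names in it are illustrative.

// Generic restatement of the round trip: map -> serializable pair list -> map.
object DictionaryItems {
  case class Item[K, V](key: K, value: V)

  def toItems[K, V](m: Map[K, V]): Seq[Item[K, V]] =
    m.toSeq.map { case (k, v) => Item(k, v) }

  def fromItems[K, V](items: Seq[Item[K, V]]): Map[K, V] =
    items.map(i => i.key -> i.value).toMap
}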

    Read the article

  • Javascript style objects in Objective-C

    - by awolf
    Background: I use a ton of NSDictionary objects in my iPhone and iPad code. I'm sick of the verbose way of getting/setting keys to these state dictionaries. So a little bit of an experiment: I just created a class I call Remap. Remap will take any arbitrary set[VariableName]:(NSObject *) obj selector and forward that message to a function that will insert obj into an internal NSMutableDictionary under the key [vairableName]. Remap will also take any (zero argument) arbitrary [variableName] selector and return the NSObject mapped in the NSMutableDictionary under the key [variableName]. e.g. Remap * remap = [[Remap alloc] init]; NSNumber * testNumber = [NSNumber numberWithInt:46]; [remap setTestNumber:testNumber]; testNumber = [remap testNumber]; [remap setTestString:@"test string"]; NSString * testString = [remap testString]; NSMutableDictionary * testDict = [NSMutableDictionary dictionaryWithObject:testNumber forKey:@"testNumber"]; [remap setTestDict:testDict]; testDict = [remap testDict]; where none of the properties testNumber, testString, or testDict are actually defined in Remap. The crazy thing? It works... My only question is how can I disable the "may not respond to " warnings for JUST accesses to Remap? P.S. : I'll probably end up scrapping this and going with macros since message forwarding is quite inefficient... but aside from that does anyone see other problems with Remap? Here's Remap's .m for those who are curious: #import "Remap.h" @interface Remap () @property (nonatomic, retain) NSMutableDictionary * _data; @end @implementation Remap @synthesize _data; - (void) dealloc { relnil(_data); [super dealloc]; } - (id) init { self = [super init]; if (self != nil) { NSMutableDictionary * dict = [[NSMutableDictionary alloc] init]; [self set_data:dict]; relnil(dict); } return self; } - (void)forwardInvocation:(NSInvocation *)anInvocation { NSString * selectorName = [NSString stringWithUTF8String: sel_getName([anInvocation selector])]; NSRange range = [selectorName rangeOfString:@"set"]; NSInteger numArguments = [[anInvocation methodSignature] numberOfArguments]; if (range.location == 0 && numArguments == 4) { //setter [anInvocation setSelector:@selector(setData:withKey:)]; [anInvocation setArgument:&selectorName atIndex:3]; [anInvocation invokeWithTarget:self]; } else if (numArguments == 3) { [anInvocation setSelector:@selector(getDataWithKey:)]; [anInvocation setArgument:&selectorName atIndex:2]; [anInvocation invokeWithTarget:self]; } } - (NSMethodSignature *) methodSignatureForSelector:(SEL) aSelector { NSString * selectorName = [NSString stringWithUTF8String: sel_getName(aSelector)]; NSMethodSignature * sig = [super methodSignatureForSelector:aSelector]; if (sig == nil) { NSRange range = [selectorName rangeOfString:@"set"]; if (range.location == 0) { sig = [self methodSignatureForSelector:@selector(setData:withKey:)]; } else { sig = [self methodSignatureForSelector:@selector(getDataWithKey:)]; } } return sig; } - (NSObject *) getDataWithKey: (NSString *) key { NSObject * returnValue = [[self _data] objectForKey:key]; return returnValue; } - (void) setData: (NSObject *) data withKey:(NSString *)key { if (key && [key length] >= 5 && data) { NSRange range; range.length = 1; range.location = 3; NSString * firstChar = [key substringWithRange:range]; firstChar = [firstChar lowercaseString]; range.length = [key length] - 5; // the 4 we have processed plus the training : range.location = 4; NSString * adjustedKey = [NSString stringWithFormat:@"%@%@", firstChar, [key 
substringWithRange:range]]; [[self _data] setObject:data forKey:adjustedKey]; } else { //assert? } } @end
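For comparison, the same "JavaScript style object" idea exists in other statically typed languages as a designated dynamic-dispatch hook rather than message forwarding. The sketch below shows the parallel using Scala's Dynamic trait; it is purely a conceptual illustration, not a suggestion for the iPhone code, and the Remap name simply mirrors the question.

import scala.language.dynamics
import scala.collection.mutable

// Unknown member reads/writes are routed to a backing map, much as the
// Objective-C Remap forwards unhandled selectors to its NSMutableDictionary.
class Remap extends Dynamic {
  private val data = mutable.Map.empty[String, Any]

  def selectDynamic(name: String): Any = data(name)
  def updateDynamic(name: String)(value: Any): Unit = data(name) = value
}

object RemapDemo {
  def main(args: Array[String]): Unit = {
    val remap = new Remap
    remap.testNumber = 46        // compiles to updateDynamic("testNumber")(46)
    println(remap.testNumber)    // compiles to selectDynamic("testNumber")
  }
}

The trade-off is the one the question's P.S. already notes: every access becomes a map lookup and member names are no longer checked at compile time, which is why a macro or code-generation approach tends to win when performance matters.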

    Read the article

  • Control XML serialization of generic types

    - by Luca
    I'm investigating about XML serialization, and since I use lot of dictionary, I would like to serialize them as well. I found the following solution for that (I'm quite proud of it! :) ). [XmlInclude(typeof(Foo))] public class XmlDictionary<TKey, TValue> { /// <summary> /// Key/value pair. /// </summary> public struct DictionaryItem { /// <summary> /// Dictionary item key. /// </summary> public TKey Key; /// <summary> /// Dictionary item value. /// </summary> public TValue Value; } /// <summary> /// Dictionary items. /// </summary> public DictionaryItem[] Items { get { List<DictionaryItem> items = new List<DictionaryItem>(ItemsDictionary.Count); foreach (KeyValuePair<TKey, TValue> pair in ItemsDictionary) { DictionaryItem item; item.Key = pair.Key; item.Value = pair.Value; items.Add(item); } return (items.ToArray()); } set { ItemsDictionary = new Dictionary<TKey,TValue>(); foreach (DictionaryItem item in value) ItemsDictionary.Add(item.Key, item.Value); } } /// <summary> /// Indexer base on dictionary key. /// </summary> /// <param name="key"></param> /// <returns></returns> public TValue this[TKey key] { get { return (ItemsDictionary[key]); } set { Debug.Assert(value != null); ItemsDictionary[key] = value; } } /// <summary> /// Delegate for get key from a dictionary value. /// </summary> /// <param name="value"></param> /// <returns></returns> public delegate TKey GetItemKeyDelegate(TValue value); /// <summary> /// Add a range of values automatically determining the associated keys. /// </summary> /// <param name="values"></param> /// <param name="keygen"></param> public void AddRange(IEnumerable<TValue> values, GetItemKeyDelegate keygen) { foreach (TValue v in values) ItemsDictionary.Add(keygen(v), v); } /// <summary> /// Items dictionary. /// </summary> [XmlIgnore] public Dictionary<TKey, TValue> ItemsDictionary = new Dictionary<TKey,TValue>(); } The classes deriving from this class are serialized in the following way: <XmlProcessList xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Items> <DictionaryItemOfInt32Foo> <Key/> <Value/> </DictionaryItemOfInt32XmlProcess> <Items> This give me a good solution, but: How can I control the name of the element DictionaryItemOfInt32Foo What happens if I define a Dictionary<FooInt32, Int32> and I have the classes Foo and FooInt32? Is it possible to optimize the class above? THank you very much!

    Read the article

  • Same source, multiple targets with different resources (Visual Studio .Net 2008)

    - by Mike Bell
    A set of software products differ only by their resource strings, binary resources, and by the strings / graphics / product keys used by their Visual Studio Setup projects. What is the best way to create, organize, and maintain them? i.e. All the products essentially consist of the same core functionality customized by graphics, strings, and other resource data to form each product. Imagine you are creating a set of products like "Excel for Bankers", Excel for Gardeners", "Excel for CEOs", etc. Each product has the the same functionality, but differs in name, graphics, help files, included templates etc. The environment in which these are being built is: vanilla Windows.Forms / Visual Studio 2008 / C# / .Net. The ideal solution would be easy to maintain. e.g. If I introduce a new string / new resource projects I haven't added the resource to should fail at compile time, not run time. (And subsequent localization of the products should also be feasible). Hopefully I've missed the blindingly-obvious and easy way of doing all this. What is it? ============ Clarification(s) ================ By "product" I mean the package of software that gets installed by the installer and sold to the end user. Currently I have one solution, consisting of multiple projects, (including a Setup project), which builds a set of assemblies and create a single installer. What I need to produce are multiple products/installers, all with similar functionality, which are built from the same set of assemblies but differ in the set of resources used by one of the assemblies. What's the best way of doing this? ------------ The 95% Solution ----------------- Based upon Daminen_the_unbeliever's answer, a resource file per configuration can be achieved as follows: Create a class library project ("Satellite"). Delete the default .cs file and add a folder ("Default") Create a resource file in the folder "MyResources" Properties - set CustomToolNamespace to something appropriate (e.g. "XXX") Make sure the access modifier for the resources is "Public". Add the resources. Edit the source code. Refer to the resources in your code as XXX.MyResources.ResourceName) Create Configurations for each product variant ("ConfigN") For each product variant, create a folder ("VariantN") Copy and Paste the MyResources file into each VariantN folder Unload the "Satellite" project, and edit the .csproj file For each "VariantN/MyResources" <Compile> or <EmbeddedResource> tag, add a Condition="'$(Configuration)' == 'ConfigN'" attribute. Save, Reload the .csproj, and you're done... This creates a per-configuration resource file, which can (presumably) be further localized. Compile error messages are produced for any configuration that where a a resource is missing. The resource files can be localized using the standard method (create a second resources file (MyResources.fr.resx) and edit .csproj as before). The reason this is a 95% solution is that resources used to initialize forms (e.g. Form Titles, button texts) can't be easily handled in the same manner - the easiest approach seems to be to overwrite these with values from the satellite assembly.

    Read the article

< Previous Page | 362 363 364 365 366 367 368 369 370 371 372 373  | Next Page >