Search Results

Search found 8725 results on 349 pages for 'beginning steps'.


  • How to generate a Program template by generating an abstract class

    - by Byron-Lim Timothy Steffan
    I have the following problem. The first step is to implement a program which follows a specific protocol on startup. Functions such as onInit(), onConfigRequest(), etc. will therefore be necessary (these are triggered, e.g., by an incoming message on a TCP port). My goal is to write a class, for example an abstract one, which has abstract functions such as onInit(). A programmer should just inherit from this base class and merely override its abstract functions. The rest of the protocol handling should simply happen in the background (using the code of the base class) and should not need to appear in the programmer's code. What is the correct design strategy for such tasks? And how do I deal with the fact that the static Main method is not inheritable? What are the right search terms for this problem? (I have trouble searching for a solution since I lack clear terminology for it.) The goal is to create some sort of library/class which, included in one's code, results in executables that follow the protocol.

    EDIT (new explanation): Okay, let me try to explain in more detail. In this case the programs should be clients within a client-server architecture. We have a client-server connection via TCP/IP. Each program needs to follow a specific protocol upon program start: as soon as my program starts and gets connected to the server, it will receive an Init message (TcpClient); when this happens it should trigger the function onInit(). (Should this be implemented with an event system?) After onInit(), an acknowledgement message should be sent to the server. Afterwards there are some other steps, e.g. a config message from the server which triggers onConfig(), and so on.

    Let's concentrate on the onInit function. The idea is that onInit (and onConfig, and so on) should be the only functions the programmer edits, while the overall protocol messaging is hidden from him. Therefore I thought an abstract class with the abstract methods onInit() and onConfig() in it should be the right thing. The static Main method I would like to hide, since within it, e.g., there will be some part which connects to the TCP port, reacts to the Init message, and calls the onInit function. Two problems here: 1. the static Main method can't be inherited, can it? 2. I cannot call abstract functions from the Main method in the abstract master class. Let me give a pseudo-example of my idea:

        public abstract class MasterClass
        {
            static void Main(string[] args)
            {
                // 1. open TCP connection
                // 2. wait for Init message from server
                // 3. onInit();
                // 4. send acknowledgement that the Init routine ended successfully
                // 5. wait for Config message from server
                // 6. ...
            }

            public abstract void onInit();
            public abstract void onConfig();
        }

    I hope you get the idea now! The programmer should afterwards inherit from this master class and merely need to edit the functions onInit and so on. Is this possible? How? What else do you recommend for solving this? EDIT: The strategy idea provided below is a good one! Check out my comment on that.
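
    For illustration, here is a minimal Python sketch of the shape being described (the pattern is usually called "Template Method"). Python stands in for C# here, and every name in it (ProtocolClient, _wait_for, _ack) is hypothetical; in C# the equivalent trick is a non-virtual entry point on the base class, so Main constructs an instance of the concrete subclass and calls run(), and Main itself never needs to be inherited.

        # A minimal sketch, not the asker's code: the base class owns the
        # fixed protocol loop; subclasses only fill in the hooks.
        from abc import ABC, abstractmethod

        class ProtocolClient(ABC):
            def run(self):
                # plays the role of the hidden Main: the protocol skeleton
                self._wait_for("INIT")
                self.on_init()        # subclass hook
                self._ack("INIT")
                self._wait_for("CONFIG")
                self.on_config()      # subclass hook
                self._ack("CONFIG")

            def _wait_for(self, msg):   # placeholder for the TCP handshake
                print("waiting for", msg)

            def _ack(self, msg):        # placeholder for the acknowledgement
                print("acknowledging", msg)

            @abstractmethod
            def on_init(self): ...

            @abstractmethod
            def on_config(self): ...

        class MyClient(ProtocolClient):  # all a user of the library writes
            def on_init(self):
                print("custom init")

            def on_config(self):
                print("custom config")

        MyClient().run()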


  • Hidden UIView Orientation Change / Layout problems

    - by gargantaun
    The Problem: I have two view controllers loaded into a root view controller. Both sub-view layouts respond to orientation changes. I switch between the two views using [UIView transitionFromView:...]. Both sub-views work fine on their own, but if views are swapped, the orientation changes, and views are swapped again, then the view that was previously hidden has serious layout problems. The more I repeat these steps, the worse the problem gets.

    Implementation Details: I have three view controllers: MyAppViewController, A_ViewController, and B_ViewController. A and B ViewControllers have a background image each, and a UIWebView and an AQGridView respectively. To give you an example of how I'm setting it all up, here's the loadView method for A_ViewController...

        - (void)loadView {
            [super loadView];

            // Background image.
            // Should fill the screen and resize on orientation changes.
            UIImageView *bg = [[UIImageView alloc] initWithFrame:self.view.bounds];
            bg.contentMode = UIViewContentModeCenter;
            bg.autoresizingMask = UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleWidth;
            bg.image = [UIImage imageNamed:@"fuzzyhalo.png"];
            [self.view addSubview:bg];

            // Frame for webView.
            // Should have a margin of 34 on all sides and resize on orientation changes.
            CGRect webFrame = self.view.bounds;
            webFrame.origin.x = 34;
            webFrame.origin.y = 34;
            webFrame.size.width = webFrame.size.width - 68;
            webFrame.size.height = webFrame.size.height - 68;
            projectView = [[UIWebView alloc] initWithFrame:webFrame];
            projectView.autoresizingMask = UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleWidth;
            [self.view addSubview:projectView];
        }

    For the sake of brevity, the AQGridView in B_ViewController is set up pretty much the same way. Now, both these views work fine on their own. However, I use both of them in the AppViewController like this...

        - (void)loadView {
            [super loadView];
            self.view.autoresizesSubviews = YES;
            [self setWantsFullScreenLayout:YES];

            webView = [[WebProjectViewController alloc] init];
            [self.view addSubview:webView.view];

            mainMenu = [[GridViewController alloc] init];
            [self.view addSubview:mainMenu.view];
            activeView = mainMenu;

            [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(switchViews:) name:SWAPVIEWS object:nil];
        }

    ...and I switch between the two views using my own switchViews method, like this:

        - (void)switchViews:(NSNotification *)aNotification {
            NSString *type = [aNotification object];
            if ([type isEqualToString:MAINMENU]) {
                [UIView transitionFromView:activeView.view toView:mainMenu.view duration:0.75 options:UIViewAnimationOptionTransitionFlipFromRight completion:nil];
                activeView = mainMenu;
            }
            if ([type isEqualToString:WEBVIEW]) {
                [UIView transitionFromView:activeView.view toView:webView.view duration:0.75 options:UIViewAnimationOptionTransitionFlipFromLeft completion:nil];
                activeView = webView;
            }
            // These don't seem to do anything:
            //[mainMenu.view setNeedsLayout];
            //[webView.view setNeedsLayout];
        }

    I'm fumbling my way through this, and I suspect a lot of what I've done is implemented incorrectly, so please feel free to point out anything that should be done differently; I need the input. But my primary concern is to understand what's causing the layout problems. Here are two images which illustrate the nature of the layout issues (captioned "Switch to the other view... change orientation... switch back...").

    UPDATE: I just noticed that when the orientation is landscape, the transition flips vertically, when I would expect it to be horizontal. I don't know whether that's a clue as to what might be going wrong.


  • Resolve Wrong IP from Domain Name only on certain networks

    - by Godric Seer
    I host a personal website on an old desktop that is LAMP-based. There are several strange things about this problem, so I will break it down into steps. Since I have a dynamic IP, I use No-IP to make sure I have a working domain name at all times. I use the automatic update client, but I logged in and checked, and my No-IP domain has the proper IP tied to it. Here is a link to the homepage through the No-IP domain for reference. Also, I do a ping and a traceroute on the No-IP domain and get:

        [eckertzs@localhost ~]$ ping -c 1 endradil.noip.me
        PING endradil.noip.me (65.24.215.99) 56(84) bytes of data.
        64 bytes from endradil.noip.me (65.24.215.99): icmp_seq=1 ttl=64 time=2.23 ms

        --- endradil.noip.me ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 104ms
        rtt min/avg/max/mdev = 2.233/2.233/2.233/0.000 ms

        [eckertzs@localhost ~]$ traceroute endradil.noip.me
        traceroute to endradil.noip.me (65.24.215.99), 30 hops max, 60 byte packets
         1  . (192.168.2.1)  1.755 ms  5.409 ms  5.380 ms
         2  endradil.noip.me (65.24.215.99)  6.297 ms  9.543 ms  10.324 ms

    Using this domain, I can connect to my web server without issue or interruption (the https is required to avoid a server-side redirect, but it works). I also have a domain I bought on GoDaddy, where I have a CNAME record forwarding the www subdomain to my No-IP domain:

        CNAME Record
        Host: www
        Points to: endradil.noip.me
        TTL: 1 hour

    For the past several weeks I never had an issue using the GoDaddy domain to connect (ssh or https). As of the past few days, however, the GoDaddy domain has only worked intermittently, for a few minutes at a time, and then will go down for hours at a time. I get "server not found" errors most of the time. Also, if I happen to be using the GoDaddy domain for an ssh connection, the connection will freeze. I have run online tests of the DNS and have seen that the website is visible to external servers and resolves to the correct IP. I also contacted GoDaddy support, but they had no issues connecting to the website and therefore did not see any problem. My personal computers (Windows desktop, Linux laptop, Android phone) all fail to connect when on my personal wifi. If I disconnect my phone from the wifi and use my AT&T wireless data, it can connect with both domains without issue. When I attempt to use Google Webmaster Tools to crawl the site using the GoDaddy domain, Google cannot find the site. From my Linux laptop, I have found some interesting results when I ping or traceroute the domain:

        [eckertzs@localhost ~]$ ping -c 1 www.endradil.com
        PING www.endradil.com.Belkin (198.105.244.228) 56(84) bytes of data.

        --- www.endradil.com.Belkin ping statistics ---
        1 packets transmitted, 0 received, 100% packet loss, time 10000ms

        [eckertzs@localhost ~]$ traceroute www.endradil.com
        traceroute to www.endradil.com (198.105.244.228), 30 hops max, 60 byte packets
         1  . (192.168.2.1)  1.918 ms  2.806 ms  2.772 ms
         2  cpe-65-24-208-1.insight.res.rr.com (65.24.208.1)  29.247 ms  29.654 ms  30.094 ms
         3  cpe-69-23-24-117.new.res.rr.com (69.23.24.117)  15.597 ms  23.218 ms  23.581 ms
         4  agg24.clmcohib01r.midwest.rr.com (65.29.1.52)  30.581 ms  30.556 ms  31.192 ms
         5  be27.clevohek01r.midwest.rr.com (65.29.1.38)  30.580 ms  31.062 ms  31.038 ms
         6  bu-ether25.atlngamq47w-bcr01.tbone.rr.com (107.14.19.38)  37.863 ms  68.844 ms  43.773 ms
         7  107.14.17.178 (107.14.17.178)  51.866 ms  51.019 ms  50.989 ms
         8  ae0.pr1.dca10.tbone.rr.com (107.14.17.200)  48.467 ms  ae-4-0.a0.lax91.tbone.rr.com (66.109.1.113)  49.912 ms  *
         9  v413.core1.ash1.he.net (209.51.175.33)  60.270 ms  50.842 ms  50.819 ms
        10  100ge5-1.core1.nyc4.he.net (184.105.223.166)  55.597 ms  56.045 ms  56.020 ms
        11  xerocole-inc.10gigabitethernet12-4.core1.nyc4.he.net (216.66.41.242)  56.001 ms  55.969 ms  55.992 ms
        12  * * *

    Both show the incorrect IP. Also, the traceroute times out on hops 12 through 255 (output truncated above). The traceroute using Site24x7 works and shows reasonable results when run from their California server. From another Linux box on a different network but in the same city as me (10 miles away), I still get timeouts for the traceroute; however, the IP resolves correctly for the domain. From this I believe the DNS result is incorrectly cached in either my router/modem or perhaps even at my ISP level. My question is: first, how do I find out exactly what is wrong, and second, how do I resolve it?
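
    A quick way to test the stale-cache theory is to ask the local resolver chain and a public resolver for the same record and compare. A minimal sketch, assuming Python with the dnspython package installed (the hostname is the one from the output above):

        import socket
        import dns.resolver  # pip install dnspython

        HOST = "www.endradil.com"

        # whatever the local router/ISP resolver chain answers
        local_ip = socket.gethostbyname(HOST)

        # bypass the local cache and ask a public resolver directly
        r = dns.resolver.Resolver()
        r.nameservers = ["8.8.8.8"]
        remote_ip = r.resolve(HOST, "A")[0].to_text()  # .query() on dnspython < 2.0

        print("local resolver :", local_ip)
        print("8.8.8.8        :", remote_ip)
        if local_ip != remote_ip:
            print("mismatch: something between this host and the authoritative")
            print("servers (router or ISP resolver) is serving a stale record")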


  • How to restart RoR services after server has been rebooted

    - by Alan DeLonga
    Update: I have been searching around to see what services would possibly need to be restarted in my project after reboot. One of them was Thinking Sphinx, which I finally got to the point where it logs:

        [Fri Nov 16 19:34:29.820 2012] [29623] accepting connections

    But I still can't run searchd or searchd --stop, because there was no generated sphinx.conf file in /etc/sphinxsearch (for more info, refer to this open thread on thinking_sphinx after reboot). I then turned to looking into restarting Unicorn or Thin, based on some insight I got. The issue is, when I check my gems I see one for Thin AND Unicorn. But when I try to start either one of them, they have no file residing in /etc/init.d/, where the nginx and sphinxsearch files reside... Would rebooting totally erase the files for an app server like Thin or Unicorn? We are hosted on Rackspace, running:

        ruby 1.9.2p290
        rails (3.2.8, 3.2.7, 3.2.0)
        nginx/1.1.19
        thin 1.4.1
        unicorn 4.3.1

    Notice that there are gems for Unicorn and Thin, but there is no unicorn.rb or thin.rb in the config folder for my app... I am still super lost; if anyone can give me some insight on some steps to take to figure this out, I would really appreciate it. Anything would help, thanks for reading.

    When I run Unicorn I get the same issue as referenced here:

        > /usr/local/bin/unicorn start
        /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:610:in `parse_rackup_file': rackup file (start) not readable (ArgumentError)
            from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:76:in `reload'
            from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:67:in `initialize'
            from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:104:in `new'
            from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:104:in `initialize'
            from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/bin/unicorn:121:in `new'
            from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/bin/unicorn:121:in `<top (required)>'
            from /usr/local/bin/unicorn:19:in `load'
            from /usr/local/bin/unicorn:19:in `<main>'

    When I run Thin, it just opens a command-line prompt:

        /usr/local/bin/thin start
        >> Using rack adapter

    Other gems:

        *** LOCAL GEMS ***
        actionmailer (3.2.8, 3.2.7, 3.2.0) actionpack (3.2.8, 3.2.7, 3.2.0)
        activemodel (3.2.8, 3.2.7, 3.2.0) activerecord (3.2.8, 3.2.7, 3.2.0)
        activeresource (3.2.8, 3.2.7, 3.2.0) activesupport (3.2.8, 3.2.7, 3.2.0)
        arel (3.0.2) builder (3.0.0) bundler (1.1.5) carmen (1.0.0.beta2)
        carmen-rails (1.0.0.beta3) cocaine (0.2.1) coffee-rails (3.2.2)
        coffee-script (2.2.0) coffee-script-source (1.3.3) daemons (1.1.9)
        erubis (2.7.0) eventmachine (0.12.10) execjs (1.4.0) faraday (0.8.4)
        faraday_middleware (0.8.8) foursquare2 (1.8.2) geokit (1.6.5)
        hashie (1.2.0) hike (1.2.1) httparty (0.8.3) httpauth (0.1) i18n (0.6.0)
        journey (1.0.4) jquery-rails (2.0.2) json (1.7.4, 1.7.3) jwt (0.1.5)
        kgio (2.7.4) lastfm (1.8.0) libv8 (3.3.10.4 x86_64-linux) mail (2.4.4)
        mime-types (1.19, 1.18) minitest (1.6.0) multi_json (1.3.6)
        multi_xml (0.5.1) multipart-post (1.1.5) mysql2 (0.3.11) oauth2 (0.8.0)
        paperclip (3.1.1) polyglot (0.3.3) rack (1.4.1) rack-cache (1.2)
        rack-ssl (1.3.2) rack-test (0.6.1) rails (3.2.8, 3.2.7, 3.2.0)
        railties (3.2.8, 3.2.7, 3.2.0) raindrops (0.10.0, 0.9.0)
        rake (0.9.2.2, 0.8.7) rdoc (3.12, 2.5.8) riddle (1.5.3)
        sass (3.2.0, 3.1.19) sass-rails (3.2.5) sprockets (2.1.3)
        sqlite3 (1.3.6) sqlite3-ruby (1.3.3) therubyracer (0.10.2, 0.10.1)
        thin (1.4.1) thinking-sphinx (2.0.10) thor (0.16.0, 0.15.4, 0.14.6)
        tilt (1.3.3) treetop (1.4.10) tzinfo (0.3.33) uglifier (1.2.7, 1.2.4)
        unicorn (4.3.1) xml-simple (1.1.1)

    I am working on a project that was built by another group. I made some modifications to a constants file in the config folder (changing some values for arrays that populated some drop-down fields), but the app had to be rebooted before those changes would be recognized. The hosting is through Rackspace; we rebooted through the option on their site. I contacted them and checked the status of our server, and the port is open and operational. The problem is that the app is not running when you go to the address for the site. When I put in the IP address of the server, it just says "Welcome to nginx". But in the log files I see:

        [Thu Nov 15 02:34:37.945 2012] [15916] caught SIGTERM, shutting down
        [Thu Nov 15 02:34:37.996 2012] [15916] shutdown complete

    I am not very versed in server-side setup. I have also never worked on a Rails project that had to have specific services started before the application would start. Any insight as to how to figure out what services need to be restarted, and how to go about restarting them, would be greatly appreciated. I feel kind of dead in the water at this point... Thanks, Alan
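
    One low-level sanity check that doesn't depend on knowing which app server the previous group used: see whether anything is actually listening behind nginx. A minimal Python sketch; the port numbers are guesses (80 for nginx, 3000 for Thin's default, 8080 as a common Unicorn binding), not values taken from this app's config:

        import socket

        GUESSES = [("nginx", 80), ("thin default", 3000), ("unicorn (common)", 8080)]

        for name, port in GUESSES:
            s = socket.socket()
            s.settimeout(1.0)
            try:
                s.connect(("127.0.0.1", port))
                print("%-18s port %-5d open" % (name, port))
            except socket.error:
                print("%-18s port %-5d closed (nothing listening)" % (name, port))
            finally:
                s.close()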


  • How do I add the j2ee.jar to a Java2WSDL ant script programmatically?

    - by Marcus
    I am using IBM's Rational Application Developer. I have an Ant script that contains the Java2WSDL task. When I run it via the IBM tooling, it gives compiler errors unless I include the j2ee.jar file in the classpath via the run tool (it does not pick up the jar files in the classpath in the script). However, I need to be able to call this script programmatically, and it is giving me this error: "java.lang.NoClassDefFoundError: org.eclipse.core.runtime.CoreException". I'm not sure which jars need to be added, or where. Since a simple echo script runs, I assume that it is the j2ee.jar or another Ant jar that needs to be added. I've added it to the project's build path, but that doesn't help. (I also have ant.jar, wsanttasks.jar, all the Ant jars from the plugin, tools.jar, remoteAnt.jar, and the SWT jars, all of which are included in the build path when you run the script by itself.)

    Script:

        <?xml version="1.0" encoding="UTF-8"?>
        <project default="build" basedir=".">
            <path id="lib.path">
                <fileset dir="C:\Program Files\IBM\WebSphere\AppServer\lib" includes="*.jar"/>
                <!-- Adding these does not help.
                <fileset dir="C:\Program Files\IBM\SDP70Shared\plugins\org.apache.ant_1.6.5\lib" includes="*.jar"/>
                <fileset dir="C:\Program Files\IBM\SDP70\jdk\lib" includes="*.jar"/>
                <fileset dir="C:\Program Files\IBM\SDP70\configuration\org.eclipse.osgi\bundles\1139\1\.cp\lib" includes="*.jar"/>
                <fileset dir="C:\Program Files\IBM\SDP70Shared\plugins" includes="*.jar"/>
                -->
            </path>
            <taskdef name="java2wsdl" classname="com.ibm.websphere.ant.tasks.Java2WSDL">
                <classpath refid="lib.path"/>
            </taskdef>
            <target name="build">
                <echo message="Beginning build"/>
                <javac srcdir="C:\J2W_Test\Java2Wsdl_Example" destdir="C:\J2W_Test\Java2Wsdl_Example">
                    <classpath refid="lib.path"/>
                    <include name="WSExample.java"/>
                </javac>
                <echo message="Set up javac"/>
                <echo message="Running java2wsdl"/>
                <java2wsdl output="C:\J2W_Test\Java2Wsdl_Example\example\META-INF\wsdl\WSExample.wsdl"
                           classpath="C:\J2W_Test\Java2Wsdl_Example"
                           className="example.WSExample"
                           namespace="http://example"
                           namespaceImpl="http://example"
                           location="http://localhost:9080/example/services/WSExample"
                           style="document"
                           use="literal">
                    <mapping namespace="http://example" package="example"/>
                </java2wsdl>
                <echo message="Complete"/>
            </target>
        </project>

    Code:

        File buildFile = new File("build.xml");
        Project p = new Project();
        p.setUserProperty("ant.file", buildFile.getAbsolutePath());
        DefaultLogger consoleLogger = new DefaultLogger();
        consoleLogger.setErrorPrintStream(System.err);
        consoleLogger.setOutputPrintStream(System.out);
        consoleLogger.setMessageOutputLevel(Project.MSG_INFO);
        p.addBuildListener(consoleLogger);
        try {
            p.fireBuildStarted();
            p.init();
            ProjectHelper helper = ProjectHelper.getProjectHelper();
            p.addReference("ant.projectHelper", helper);
            helper.parse(p, buildFile);
            p.executeTarget(p.getDefaultTarget());
            p.fireBuildFinished(null);
        } catch (BuildException e) {
            p.fireBuildFinished(e);
        }

    Error:

        [java2wsdl] java.lang.NoClassDefFoundError: org.eclipse.core.runtime.CoreException
        [java2wsdl]     at java.lang.J9VMInternals.verifyImpl(Native Method)
        [java2wsdl]     at java.lang.J9VMInternals.verify(J9VMInternals.java:68)
        [java2wsdl]     at java.lang.J9VMInternals.initialize(J9VMInternals.java:129)
        [java2wsdl]     at com.ibm.ws.webservices.multiprotocol.discovery.ServiceProviderManager.getDiscoveredServiceProviders(ServiceProviderManager.java:378)
        [java2wsdl]     at com.ibm.ws.webservices.multiprotocol.discovery.ServiceProviderManager.getAllServiceProviders(ServiceProviderManager.java:214)
        [java2wsdl]     at com.ibm.ws.webservices.wsdl.fromJava.Emitter.initPluggableBindings(Emitter.java:2704)
        [java2wsdl]     at com.ibm.ws.webservices.wsdl.fromJava.Emitter.<init>(Emitter.java:389)
        [java2wsdl]     at com.ibm.ws.webservices.tools.ant.Java2WSDL.execute(Java2WSDL.java:122)
        [java2wsdl]     at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:275)
        [java2wsdl]     at org.apache.tools.ant.Task.perform(Task.java:364)
        [java2wsdl]     at org.apache.tools.ant.Target.execute(Target.java:341)
        [java2wsdl]     at org.apache.tools.ant.Target.performTasks(Target.java:369)
        [java2wsdl]     at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1216)
        [java2wsdl]     at org.apache.tools.ant.Project.executeTarget(Project.java:1185)
        [java2wsdl]     at att.ant.RunAnt.main(RunAnt.java:32)


  • Fibre channel long distance woes

    - by Marki
    I need a fresh pair of eyes. We're using a 15 km fibre-optic line across which Fibre Channel and 10GbE are multiplexed (passive optical CWDM). For FC we have long-distance lasers suitable for up to 40 km (Skylane SFCxx0404F0D). The multiplexer is limited by the SFPs, which can do at most 4Gb Fibre Channel. The FC switch is a Brocade 5000 series. The respective wavelengths are 1550, 1570, 1590 and 1610 nm for FC, and 1530 nm for 10GbE. The problem is that the 4GbFC fabrics are almost never clean. Sometimes they are for a while, even with a lot of traffic on them. Then they may suddenly start producing errors (RX CRC, RX encoding, RX disparity, ...) even with only marginal traffic on them. I am attaching some error and traffic graphs. Errors are currently in the order of 50-100 per 5 minutes with 1 Gb/s of traffic.

    Optics: Here is the power output of one port, summarized (collected using sfpshow on different switches), in uW (microwatts):

        FAB1  SW1 (SITE-A)  TX 1234.3  RX   49.1   SW3 (SITE-B)  RX   95.2  TX 1175.6   1550nm (ko)
        FAB2  SW2 (SITE-A)  TX 1422.0  RX  104.6   SW4 (SITE-B)  RX   54.3  TX 1468.4   1610nm (ok)

    What I find curious at this point is the asymmetry in the power levels. While SW2 transmits with 1422 uW, which SW4 receives at 104.6 uW, SW2 receives the SW4 signal (of similar original power) at only 54.3 uW. Vice versa for SW1-SW3. Anyway, the SFPs have RX sensitivity down to -18 dBm (ca. 20 uW), so in any case it should be fine... but nothing is. Some SFPs have been diagnosed as malfunctioning by the manufacturer (the 1550 nm ones shown above with "ko"). The 1610 nm ones apparently are OK; they have been tested using a traffic generator. The leased line has also been tested more than once. All is within tolerances. I'm awaiting the replacements, but for some reason I don't believe they will make things better, as the apparently good ones don't produce ZERO errors either. Earlier there was active equipment involved (some kind of 4GFC retimer) before putting the signal on the line; no idea why. That equipment was eliminated because of the problems, so we now only have: the long-distance laser in the switch, a (new) 10 m LC-SC monomode cable to the mux (for each fabric), the leased line, and the same thing reversed on the other side of the link.

    FC switches: Here is a port config from the Brocade portcfgshow (it's like that on both sides, obviously):

        Area Number:              0
        Speed Level:              4G
        Fill Word(On Active)      0(Idle-Idle)
        Fill Word(Current)        0(Idle-Idle)
        AL_PA Offset 13:          OFF
        Trunk Port                ON
        Long Distance             LS
        VC Link Init              OFF
        Desired Distance          32 Km
        Reserved Buffers          70
        Locked L_Port             OFF
        Locked G_Port             OFF
        Disabled E_Port           OFF
        Locked E_Port             OFF
        ISL R_RDY Mode            OFF
        RSCN Suppressed           OFF
        Persistent Disable        OFF
        LOS TOV enable            OFF
        NPIV capability           ON
        QOS E_Port                OFF
        Port Auto Disable:        OFF
        Rate Limit                OFF
        EX Port                   OFF
        Mirror Port               OFF
        Credit Recovery           ON
        F_Port Buffers            OFF
        Fault Delay:              0(R_A_TOV)
        NPIV PP Limit:            126
        CSCTL mode:               OFF

    Forcing the links to 2GbFC produces no errors, but we bought 4GbFC and we want 4GbFC. I don't know where to look anymore. Any ideas what to try next, or how to proceed? If we can't make 4GbFC work reliably, I wonder what the people working with 8 or 16 do... I don't assume that "a few errors here and there" are acceptable. Oh, and BTW, we are in contact with every one of the manufacturers (FC switch, MUX, SFPs, ...). Except for the SFPs to be changed (some have been changed before), nobody has a clue. Brocade SAN Health says the fabric is OK. The MUX, well, it's passive; it's only a prism, nature at its best. Any shots in the dark?

    APPENDIX: Answers to your questions

    @Chopper3: This is the second generation of Brocades exhibiting the problem. Before we had 5000s, now we have 5100s. In the beginning, when we still had the active MUX, we rented a long-distance laser once to put into the switch directly in order to run tests for a day; during that day, of course, it was clean. But as I said, sometimes it's clean just like that. And sometimes it's not. Alternative switches would mean rebuilding the entire SAN with those, only to test. Alternative SFPs, well, they're hard to come by just like that.

    @longneck: The line is rented. It's a dark fibre (9 um monomode), so there's no one else on it. Sure there are splices. I can't go and look, but I have to trust they have been done correctly. As I said, the line has been checked and rechecked (using an optical time-domain reflectometer). Obviously you don't have all this equipment yourself, because it's way too expensive.

    @mdpc: What would be the "wrong" type of cable, according to you? Up to the switch everything is monomode, yes. The connectors are the correct ones too. Yeah, I know there are the green ones where the fibre is cut off at a certain angle, etc. But we have the correct ones, for all that I know.

    Progress Report #1: We have had two fabrics (= 2x2 switches) with Brocade 5100s on FabricOS 6.4.1, and two fabrics (another 2x4 switches) on FabricOS 7.0.2. On the long-distance ISLs (one in each fabric), it turned out that FOS 6.4.1, when set to long distance, issues warnings about the VC Init setting and consequently the fill word; but those are only warnings. FOS 7.0.2 requires you to modify VCI and the fill word for long-distance links. Setting FOS 6.4.1 to the LS (long-distance, static distance) setting with the wrong VCI and fill-word settings made the whole fabric inoperative (stuck in an SCN loop; use fabriclog -s to see it, you don't see it anywhere else, no port error counters or anything increasing). Currently I'm giving the one fabric with the (IMHO) more correct settings a beating, and it seems to do fine, whereas the other one, without much traffic, still has errors here and there. In short: we have eliminated the active part of the MUX (the FC retimer). We are putting the long-distance SFPs into the end equipment itself. Just to be sure, we bought new monomode cables to connect the end equipment to the remaining passive part of the MUX. We are now trying out several long-distance configs. It's almost black magic. Everything that happens is mostly empirical; no one seems to have a clue about the exact reasons for doing something. ("We tried this and it didn't work, then we tried that and it worked, so we stuck with that." But no one really seems to know why.) I'll keep you updated.

    Progress Report #2: We got the new lasers for one of the fabrics on warranty. It's ultra clean, even at 4GbFC. They're transmitting with roughly 2 mW (3 dBm), whereas the others are only at 1.5 mW (1.5 dBm), although that should really be enough. The other fabric (where the lasers are apparently OK) still produces one or two CRCs infrequently. Using sfpshow, the SFP producing the actual RX errors shows:

        Status/Ctrl: 0x82
        Alarm flags[0,1] = 0x5, 0x40
        Warn Flags[0,1] = 0x5, 0x40

    Now I'll have to find out what that means; I'm not sure if it was there before. Well, I'll first clear my head with a week of vacation. 8-)
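
    Since the discussion keeps switching between uW and dBm, a tiny conversion helper is useful for sanity-checking the figures above. This is generic math (dBm is 10 times the base-10 log of the power in milliwatts), not tied to any vendor tool:

        import math

        def mw_to_dbm(p_mw):
            """Power in milliwatts -> dBm (decibels referenced to 1 mW)."""
            return 10 * math.log10(p_mw)

        def dbm_to_mw(p_dbm):
            """dBm -> power in milliwatts."""
            return 10 ** (p_dbm / 10.0)

        print(round(mw_to_dbm(1.4220), 1))      # SW2 TX, 1422.0 uW  -> ~1.5 dBm
        print(round(mw_to_dbm(0.0543), 1))      # SW2 RX,   54.3 uW  -> ~-12.7 dBm
        print(round(dbm_to_mw(-18) * 1000, 1))  # -18 dBm sensitivity -> ~15.8 uW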


  • Using a boost::fusion::map in boost::spirit::karma

    - by user1097105
    I am using Boost Spirit to parse some text files into a data structure, and now I am beginning to generate text from this data structure (using Spirit Karma). One attempt at a data structure is a boost::fusion::map (as suggested in an answer to this question). But although I can use boost::spirit::qi::parse() and get data into it easily, when I tried to generate text from it using Karma, I failed. Below is my attempt (look especially at the "map_data" type). After some reading and playing around with other Fusion types, I found boost::fusion::vector and BOOST_FUSION_DEFINE_ASSOC_STRUCT. I succeeded in generating output with both of them, but they don't seem ideal: in a vector you cannot access a member using a name (it is like a tuple), and in the other solution I don't think I need both ways (member name and key type) to access the members.

        #include <iostream>
        #include <string>
        #include <boost/spirit/include/karma.hpp>
        #include <boost/fusion/include/map.hpp>
        #include <boost/fusion/include/make_map.hpp>
        #include <boost/fusion/include/vector.hpp>
        #include <boost/fusion/include/as_vector.hpp>
        #include <boost/fusion/include/transform.hpp>

        struct sb_key;
        struct id_key;

        using boost::fusion::pair;

        typedef boost::fusion::map
        <
            pair<sb_key, int>,
            pair<id_key, unsigned long>
        > map_data;

        typedef boost::fusion::vector
        <
            int,
            unsigned long
        > vector_data;

        #include <boost/fusion/include/define_assoc_struct.hpp>
        BOOST_FUSION_DEFINE_ASSOC_STRUCT(
            (), assocstruct_data,
            (int, a, sb_key)
            (unsigned long, b, id_key))

        namespace karma = boost::spirit::karma;

        template <typename X>
        std::string to_string ( const X& data )
        {
            std::string generated;
            std::back_insert_iterator<std::string> sink(generated);
            karma::generate_delimited ( sink, karma::int_ << karma::ulong_, karma::space, data );
            return generated;
        }

        int main()
        {
            map_data d1(boost::fusion::make_map<sb_key, id_key>(234, 35314988526ul));
            vector_data d2(boost::fusion::make_vector(234, 35314988526ul));
            assocstruct_data d3(234, 35314988526ul);

            std::cout << "map_data as_vector: " << boost::fusion::as_vector(d1) << std::endl;
            //std::cout << "map_data to_string: " << to_string(d1) << std::endl; // *FAIL No 1*
            std::cout << "at_key (sb_key): " << boost::fusion::at_key<sb_key>(d1)
                      << boost::fusion::at_c<0>(d1) << std::endl << std::endl;

            std::cout << "vector_data: " << d2 << std::endl;
            std::cout << "vector_data to_string: " << to_string(d2) << std::endl << std::endl;

            std::cout << "assoc_struct as_vector: " << boost::fusion::as_vector(d3) << std::endl;
            std::cout << "assoc_struct to_string: " << to_string(d3) << std::endl;
            std::cout << "at_key (sb_key): " << boost::fusion::at_key<sb_key>(d3) << d3.a
                      << boost::fusion::at_c<0>(d3) << std::endl;
            return 0;
        }

    Including the commented line gives many pages of compilation errors, among which, notably, something like:

        no known conversion for argument 1 from 'boost::fusion::pair' to 'double'
        no known conversion for argument 1 from 'boost::fusion::pair' to 'float'

    Might it be that to_string needs the values of the map_data, and not the pairs? Though I am not good with templates, I tried to get a vector from a map using transform in the following way:

        template <typename P>
        struct take_second
        {
            typename P::second_type operator() (P p)
            {
                return p.second;
            }
        };

        // ... inside main()
        pair<char, int> ff(32);
        std::cout << "take_second (expect 32): "
                  << take_second<pair<char, int>>()(ff) << std::endl;
        std::cout << "transform map_data and to_string: "
                  << to_string(boost::fusion::transform(d1, take_second<>())); // *FAIL No 2*

    But I don't know what types I am supposed to give when instantiating take_second, and anyway I think there must be an easier way to get (iterate over) the values of a map (is there?). If you answer this question, please also give your opinion on whether using an ASSOC_STRUCT or a map is better.


  • Detecting Acceleration in a car (iPhone Accelerometer)

    - by TheGazzardian
    Hello, I am working on an iPhone app where we are trying to calculate the acceleration of a moving car. Similar apps have accomplished this (Dynolicious), but the difference is that this app is designed to be used during general city driving, not on a drag strip. This leads us to one big concern that Dynolicious was luckily able to avoid: hills. Yes, hills.

    There are two important stages to this: calibration, and actual driving. Our initial run was simple and suffered the consequences. During the calibration stage, I took the average force on the phone, and during running, I just subtracted the average force from the current force to get the current acceleration for that frame. The problem is that the typical car receives much more force than just the forward force; everything from turning to potholes was causing the values to go out of sync with what was really happening.

    The next run was to add the condition that the iPhone must be oriented so that the screen faces toward the back of the car. Using this method, I attempted to follow only the force on the z-axis, but this obviously led to problems unless the iPhone was oriented directly upright, because of gravity. Some trigonometry later, and I had managed to work gravity out of the equation, so that the car was actually being read very, very well by the iPhone. Until I hit a slope. As soon as the angle of the car changed, suddenly I was receiving accelerations and decelerations that didn't make sense, and we were once again going out of sync.

    Talking with someone a lot smarter than me at math led to a solution that I have been trying to implement for longer than I would like to admit. Its steps are as follows:

    1) During calibration, measure gravity as a vector instead of a magnitude. Store that vector.
    2) When the car initially moves forward, take the vector of motion and subtract gravity. Use this as the forward momentum. (Ignore, for now, the user cases where this will be difficult, and let's concentrate on the math. :)
    3) From the forward vector and the gravity vector, construct a plane.
    4) Whenever a force is received, project it onto said plane to get rid of sideways force, etc.
    5) Then, use that force, the known magnitude of gravity, and the known direction of forward motion to essentially solve a triangle to get the forward vector.

    The problem causing the most difficulty in this new system is not step 5, where I have gotten to the point that all the numbers look as they should. The difficult part is actually the detection of the forward vector. I am selecting vectors whose magnitude exceeds gravity and, from there, averaging them and subtracting gravity. (I am doing some error checking to make sure that I am not using a force just because the iPhone accelerometer was off by a bit, which happens more frequently than I would like.) But if I plot the vectors that I am using, they actually vary by an angle of about 20-30 degrees, which can lead to some strong inaccuracies. The end result is that the app is even more inaccurate now than before. So basically, all you math and iPhone brains out there: any glaring errors? Any potentially better solutions? Any experience that could be useful at all? Award: offering a bounty of $250 to the first answer that leads to a solution.
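
    For reference, steps 1-5 above can be written out compactly. This is a sketch of the algebra in my own notation, not taken from the original post: let g be the calibrated gravity vector, a_0 the first strong reading used to detect motion, and a an arbitrary later reading. Then:

        \hat{f} = \frac{a_0 - g}{\lVert a_0 - g \rVert}                      % forward direction (step 2)
        \hat{n} = \frac{g \times \hat{f}}{\lVert g \times \hat{f} \rVert}    % normal of the plane spanned by g and f (step 3)
        a_{\mathrm{plane}} = a - (a \cdot \hat{n})\,\hat{n}                  % projection onto the plane (step 4)
        a_{\mathrm{fwd}} = (a_{\mathrm{plane}} - g) \cdot \hat{f}            % signed forward acceleration (step 5)

    Note that an error of angle theta in the detected forward direction scales the measured forward component by cos(theta) and leaks in a sin(theta) fraction of the lateral force, which is consistent with the 20-30 degree spread producing strong inaccuracies.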


  • Trouble with copying dictionaries and using deepcopy on an SQLAlchemy ORM object

    - by Az
    Hi there, I'm doing a simulated annealing algorithm to optimise a given allocation of students and projects. This is language-agnostic pseudocode from Wikipedia:

        s ← s0; e ← E(s)                              // Initial state, energy.
        sbest ← s; ebest ← e                          // Initial "best" solution.
        k ← 0                                         // Energy evaluation count.
        while k < kmax and e > emax                   // While time left & not good enough:
            snew ← neighbour(s)                       //   Pick some neighbour.
            enew ← E(snew)                            //   Compute its energy.
            if enew < ebest then                      //   Is this a new best?
                sbest ← snew; ebest ← enew            //     Save 'new neighbour' to 'best found'.
            if P(e, enew, temp(k/kmax)) > random() then //  Should we move to it?
                s ← snew; e ← enew                    //     Yes, change state.
            k ← k + 1                                 //   One more evaluation done.
        return sbest                                  // Return the best solution found.

    The following is an adaptation of the technique. My supervisor said the idea is fine in theory. First I pick up some allocation (i.e. an entire dictionary of students and their allocated projects, including the ranks for the projects) from the entire set of randomised allocations, copy it, and pass it to my function. Let's call this allocation aOld (it is a dictionary). aOld has a weight related to it called wOld. The weighting is described below. The function does the following:

    1. Let this allocation, aOld, be the best_node.
    2. From all the students, pick a random number of students and stick them in a list.
    3. Strip (DEALLOCATE) them of their projects, and reflect the changes for projects (the allocated parameter is now False) and lecturers (free up slots if one or more of their projects are no longer allocated).
    4. Randomise that list.
    5. Try assigning (REALLOCATE) everyone in that list projects again.
    6. Calculate the weight (add up the ranks: rank 1 = 1, rank 2 = 2, ... and no project = rank 101).
    7. For this new allocation aNew, if the weight wNew is smaller than the allocation weight wOld I picked up at the beginning, then this is the best_node (as defined by the simulated annealing algorithm above). Apply the algorithm to aNew and continue. If wOld < wNew, then apply the algorithm to aOld again and continue.

    The allocations/data points are expressed as "nodes" such that a node = (weight, allocation_dict, projects_dict, lecturers_dict).

    Right now I can only perform this algorithm once, but I'll need to try it for a number N (denoted by kmax in the Wikipedia snippet) and make sure I always have with me the previous node and the best_node. So that I don't modify my original dictionaries (which I might want to reset to), I've done a shallow copy of the dictionaries. From what I've read in the docs, it seems that this only copies the references, and since my dictionaries contain objects, changing the copied dictionary ends up changing the objects anyway. So I tried to use copy.deepcopy(). These dictionaries refer to objects that have been mapped with SQLAlchemy.

    Questions: I've been given some solutions to the problems faced, but due to my über-greenness with Python they all sound rather cryptic to me.

    1. deepcopy isn't playing nicely with SQLAlchemy. I've been told that deepcopy on ORM objects probably has issues that prevent it from working as you'd expect. Apparently I'd be better off "building copy constructors, i.e. def copy(self): return FooBar(....)". Can someone please explain what that means? I checked and found out that deepcopy has issues because SQLAlchemy places extra information on your objects, i.e. an _sa_instance_state attribute, that I wouldn't want in the copy but that is necessary for the object to have.

    2. I've been told: "There are ways to manually blow away the old _sa_instance_state and put a new one on the object, but the most straightforward is to make a new object with __init__() and set up the attributes that are significant, instead of doing a full deep copy." What exactly does that mean? Do I create a new, unmapped class similar to the old, mapped one?

    3. An alternative solution is that I'd have to "implement __deepcopy__() on your objects and ensure that a new _sa_instance_state is set up; there are functions in sqlalchemy.orm.attributes which can help with that." Once again this is beyond me, so could someone kindly explain what it means?

    4. A more general question: given the above information, are there any suggestions on how I can maintain the information/state for the best_node (which must always persist through my while loop) and the previous_node, if my actual objects (referenced by the dictionaries, and therefore the nodes) are changing due to the deallocation/reallocation taking place? That is, without using copy?
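
    As a concrete illustration of the "make a new object with __init__()" advice, here is a minimal sketch. The class and attribute names are invented stand-ins for the real mapped classes; the point is that __deepcopy__ builds the clone through __init__, so on a real SQLAlchemy-mapped class the ORM would attach a fresh _sa_instance_state instead of the copied one:

        import copy

        class Project:
            """Stand-in for a SQLAlchemy-mapped class (names are hypothetical)."""

            def __init__(self, project_id, rank, allocated=False):
                self.project_id = project_id
                self.rank = rank
                self.allocated = allocated

            def __deepcopy__(self, memo):
                # Build the clone through __init__ and copy only the domain
                # attributes; never copy ORM bookkeeping like _sa_instance_state.
                clone = type(self)(
                    copy.deepcopy(self.project_id, memo),
                    copy.deepcopy(self.rank, memo),
                    copy.deepcopy(self.allocated, memo),
                )
                memo[id(self)] = clone
                return clone

        # A deepcopy of a whole node's dictionary now goes through __deepcopy__:
        allocation = {1: Project(42, rank=1, allocated=True)}
        snapshot = copy.deepcopy(allocation)
        snapshot[1].allocated = False
        print(allocation[1].allocated)  # True: the original is untouched

    This makes snapshotting best_node and previous_node cheap to reason about: each node holds its own clones, so deallocation/reallocation on the working dictionaries can no longer reach back into a saved node.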


  • One letter game problem?

    - by Alex K
    Recently at a job interview I was given the following problem:

    Write a script capable of running on the command line as python. It should take in two words on the command line (or, optionally, it can query the user to supply the two words via the console). Given those two words:
    a. Ensure they are of equal length.
    b. Ensure they are both present in the dictionary of valid English words that you downloaded.

    If so, compute whether you can reach the second word from the first by a series of steps as follows:
    a. You can change one letter at a time.
    b. Each time you change a letter, the resulting word must also exist in the dictionary.
    c. You cannot add or remove letters.

    If the two words are reachable, the script should print out the path which leads, as a single shortest path, from one word to the other. You can use /usr/share/dict/words for your dictionary of words.

    My solution consisted of using breadth-first search to find a shortest path between two words. But apparently that wasn't good enough to get the job :( Would you guys know what I could have done wrong? Thank you so much.

        import collections
        import functools
        import re

        def time_func(func):
            import time
            def wrapper(*args, **kwargs):
                start = time.time()
                res = func(*args, **kwargs)
                timed = time.time() - start
                setattr(wrapper, 'time_taken', timed)
                return res
            functools.update_wrapper(wrapper, func)
            return wrapper

        class OneLetterGame:
            def __init__(self, dict_path):
                self.dict_path = dict_path
                self.words = set()

            def run(self, start_word, end_word):
                '''Runs the one letter game with the given start and end words.'''
                assert len(start_word) == len(end_word), \
                    'Start word and end word must be of the same length.'
                self.read_dict(len(start_word))
                path = self.shortest_path(start_word, end_word)
                if not path:
                    print 'There is no path between %s and %s (took %.2f sec.)' % (
                        start_word, end_word, self.shortest_path.time_taken)
                else:
                    print 'The shortest path (found in %.2f sec.) is:\n=> %s' % (
                        self.shortest_path.time_taken, ' -- '.join(path))

            def _bfs(self, start):
                '''Implementation of breadth-first search as a generator.
                The portion of the graph to explore is given on demand using
                get_neighbours. Care was taken so that a vertex / node is
                explored only once.'''
                queue = collections.deque([(None, start)])
                inqueue = set([start])
                while queue:
                    parent, node = queue.popleft()
                    yield parent, node
                    new = set(self.get_neighbours(node)) - inqueue
                    inqueue = inqueue | new
                    queue.extend([(node, child) for child in new])

            @time_func
            def shortest_path(self, start, end):
                '''Returns the shortest path from start to end using bfs.'''
                assert start in self.words, 'Start word not in dictionary.'
                assert end in self.words, 'End word not in dictionary.'
                paths = {None: []}
                for parent, child in self._bfs(start):
                    paths[child] = paths[parent] + [child]
                    if child == end:
                        return paths[child]
                return None

            def get_neighbours(self, word):
                '''Gets every word one letter away from a given word. We do not
                keep these words in memory because bfs accesses a given vertex
                only once.'''
                neighbours = []
                p_word = ['^' + word[0:i] + '\w' + word[i+1:] + '$'
                          for i, w in enumerate(word)]
                p_word = '|'.join(p_word)
                for w in self.words:
                    if w != word and re.match(p_word, w, re.I|re.U):
                        neighbours += [w]
                return neighbours

            def read_dict(self, size):
                '''Loads every word of a specific size from the dictionary into
                memory.'''
                for l in open(self.dict_path):
                    l = l.decode('latin-1').strip().lower()
                    if len(l) == size:
                        self.words.add(l)

        if __name__ == '__main__':
            import sys
            if len(sys.argv) not in [3, 4]:
                print 'Usage: python one_letter_game.py start_word end_word'
            else:
                g = OneLetterGame(dict_path='/usr/share/dict/words')
                try:
                    g.run(*sys.argv[1:])
                except AssertionError, e:
                    print e
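
    One likely culprit, offered as a sketch rather than a verdict: get_neighbours() above rescans the entire dictionary with a regex for every dequeued word, making the whole search roughly O(V * N). Precomputing wildcard buckets (all words grouped by patterns like "c?t") makes neighbour lookup proportional to the word length instead. A minimal standalone version of the same search, written in Python 3 (unlike the Python 2 code above):

        from collections import defaultdict, deque

        def build_buckets(words):
            # map each wildcard pattern ("c?t") to the words matching it
            buckets = defaultdict(list)
            for w in words:
                for i in range(len(w)):
                    buckets[w[:i] + "?" + w[i+1:]].append(w)
            return buckets

        def shortest_path(start, end, words):
            buckets = build_buckets(words)
            parent = {start: None}      # also serves as the visited set
            queue = deque([start])
            while queue:
                word = queue.popleft()
                if word == end:         # reconstruct the path via parents
                    path = []
                    while word is not None:
                        path.append(word)
                        word = parent[word]
                    return path[::-1]
                for i in range(len(word)):
                    for nxt in buckets[word[:i] + "?" + word[i+1:]]:
                        if nxt not in parent:
                            parent[nxt] = word
                            queue.append(nxt)
            return None

        words = {"cat", "cot", "cog", "dog", "dot"}
        print(shortest_path("cat", "dog", words))  # one shortest path, e.g.
                                                   # ['cat', 'cot', 'cog', 'dog']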


  • Complex sound handling (I.E. pitch change while looping)

    - by Matthew
    Hi everyone. I've been meaning to learn Java for a while now (I usually keep myself in languages like C and Lua), but buying an Android phone seems like an excellent time to start. Now, after going through the lovely set of tutorials and a while spent buried in source code, I'm beginning to get the feel for it, so what's my next step? Well, to dive in with a fully featured application with graphics, sound, sensor use, touch response and a full menu. Hmm, now there's a slight conundrum, since I can continue to use cryptic references to my project or risk telling you what the application is, but at the same time it's going to make me look like a raving sci-fi nerd, so bear with me for the brief...

    A semi-working sonic screwdriver (oh yes!). My grand idea was to make an animated screwdriver where sliding the controls up and down modulates the frequency, and that frequency dictates the sensor data it returns. Now I have a semi-working sound system, but it's pretty poor for what it's designed to represent, and I just wouldn't be happy producing a sub-par end product, whether it's my first or not.

    The problem:
    - sound must begin looping when the user presses down on the control
    - the sound must stop when the user releases the control
    - when moving the control up or down, the sound effect must change pitch accordingly
    - if the user doesn't remove their finger before backing out of the application, it must plate the casing of their device with gold (Easter egg ;P)

    Now, I'm aware of how monolithic the first three look, and that's why I would really appreciate any help I can get. Sorry for how bad this code looks, but my general plan is to create the functional components and then refine the code later; no good painting the walls if the roof's not finished. Here's my user input; the setSlideY stuff is used in the graphics for the control:

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            // motion event for the screwdriver view
            if (event.getAction() == MotionEvent.ACTION_DOWN) {
                // make sure the user is at least trying to touch the slider
                if (event.getY() > SonicSlideYTop && event.getY() < SonicSlideYBottom) {
                    // power setup; I'm using 1.5 to help out the rate on SoundPool,
                    // since it likes 0.5 to 1.5
                    SonicPower = 1.5f - ((event.getY() - SonicSlideYTop) / SonicSlideLength);
                    // just goes into a method which sets a private variable in my sound pool class thing
                    mSonicAudio.setPower(1, SonicPower);
                    // this handles the slide's graphics
                    setSlideY((int) event.getY());
                    // this is from my latest attempt at loop pitch change; look for
                    // this in my SoundManager class
                    mSonicAudio.startLoopedSound();
                }
            }
            if (event.getAction() == MotionEvent.ACTION_MOVE) {
                if (event.getY() > SonicSlideYTop && event.getY() < SonicSlideYBottom) {
                    SonicPower = 1.5f - ((event.getY() - SonicSlideYTop) / SonicSlideLength);
                    mSonicAudio.setPower(1, SonicPower);
                    setSlideY((int) event.getY());
                }
            }
            if (event.getAction() == MotionEvent.ACTION_UP) {
                mSonicAudio.stopLoopedSound();
                SonicPower = 1.5f - ((event.getY() - SonicSlideYTop) / SonicSlideLength);
                mSonicAudio.setPower(1, SonicPower);
            }
            return true;
        }

    And here's where those methods end up in my sound pool class. It's horribly messy, but that's because I've been trying a ton of variants to get this to work; you will also notice that I begin to hard-code the index. Again, I was trying to get the methods to work before making them work well.

        package com.mattster.sonicscrewdriver;

        import java.util.HashMap;
        import android.content.Context;
        import android.media.AudioManager;
        import android.media.SoundPool;

        public class SoundManager {

            private float mPowerLvl = 1f;
            private SoundPool mSoundPool;
            private HashMap<Integer, Integer> mSoundPoolMap;
            private AudioManager mAudioManager;
            private Context mContext;
            private int streamVolume;
            private int LoopState;
            private long mLastTime;

            public SoundManager() {
            }

            public void initSounds(Context theContext) {
                mContext = theContext;
                mSoundPool = new SoundPool(2, AudioManager.STREAM_MUSIC, 0);
                mSoundPoolMap = new HashMap<Integer, Integer>();
                mAudioManager = (AudioManager) mContext.getSystemService(Context.AUDIO_SERVICE);
                streamVolume = mAudioManager.getStreamVolume(AudioManager.STREAM_MUSIC);
            }

            public void addSound(int index, int SoundID) {
                mSoundPoolMap.put(1, mSoundPool.load(mContext, SoundID, 1));
            }

            public void playUpdate(int index) {
                if (LoopState == 1) {
                    long now = System.currentTimeMillis();
                    if (now > mLastTime) {
                        mSoundPool.play(mSoundPoolMap.get(1), streamVolume, streamVolume, 1, 0, mPowerLvl);
                        mLastTime = System.currentTimeMillis() + 250;
                    }
                }
            }

            public void stopLoopedSound() {
                LoopState = 0;
                mSoundPool.setVolume(mSoundPoolMap.get(1), 0, 0);
                mSoundPool.stop(mSoundPoolMap.get(1));
            }

            public void startLoopedSound() {
                LoopState = 1;
            }

            public void setPower(int index, float mPower) {
                mPowerLvl = mPower;
                mSoundPool.setRate(mSoundPoolMap.get(1), mPowerLvl);
            }
        }

    Ah ha! I almost forgot: that looks pretty inefficient, but I omitted the thread which actually updates it. Nothing fancy; it just calls:

        mSonicAudio.playUpdate(1);

    Thanks in advance, Matthew


  • Bad disks in ancient server

    - by Joel Coel
    I have a 1998-era NetWare 3.12 server that runs everything on our campus: general ledger, purchasing, payroll, student information, grades, you name it. The server has an Adaptec RAID controller with two volumes:

    - RAID 1: 2 x 17 GB SCSI disks, Seagate ST318417W
    - RAID 5: 3 x 4 GB SCSI disks, 2 Seagate ST34573W and 1 ST34572W

    We are currently in the early stages of a project to replace this system, but you don't just jump into a new system like that, and so I need to keep this server running until at least November 2011. This week we had not one but two hard drives fail. Thankfully they are from different volumes and we're able to keep running for the moment, but given the close nature of these failures, I have serious doubts that I'll be able to avoid catastrophic failure of this server through the November target as-is, without restoring the RAID redundancy: it'll only take one more drive failure anywhere and I'm completely hosed. We are fortunate enough to have exact-match "spares" lying around for both drives, but the spares are in unknown condition. I tried swapping just them in, but the RAID controller isn't smart enough to handle this, and it renders the system unbootable.

    As for the RAID controller itself, there is a utility I can get into during POST via a Ctrl-A shortcut, but I can't do much useful from there. To actually manage volumes I must first boot into NetWare, at which point I can use CI/O Array Management Software Version 2.0 to actually look at volume information. I suspect that the normal way to manage things is to boot from a special floppy with the controller software on it, but that floppy is long gone. Going through the options in the RAID software, I think the only supported way to replace a disk in an existing RAID volume is to physically add the disk, boot up and configure it as a "spare" for a volume, force the volume to use the spare to replace an existing down disk (and at this point I'm only guessing) so that the down disk becomes the spare, repair the volume, remove the spare from the volume, and then shut down and remove the disk. Then start all over for the other failed disk. All this amounts to a lot of downtime, assuming I can even make it work and that my spares are any good.

    As for finding reliable spares, I have no clue where to even begin looking for a new 4 GB SCSI drive, or even which exact SCSI variant I'm looking for, as it has gone through a few different iterations over time. Another option is to migrate this to a virtual machine (Hyper-V), but all previous attempts we've made in this area have failed to get very far. When this machine was installed I was just graduating from high school, and so it requires lower-level knowledge of NetWare and DOS than I ever developed, or, if I did, have since forgotten (I'm not exactly a DOS neophyte, either). Part of my problem is that this is a high-use server, and taking it down for a few days to figure things out isn't gonna fly very well.

    As for the question, I'm looking for anything that might be helpful in this situation: a recommendation on a place to find good spares from this era, personal experience repairing RAID volumes using a similar controller or building a Hyper-V VM from an old NetWare server, a line on a floppy with better software for the RAID controller, a recommendation for a good Novell consultant in Nebraska who would be able to put things right, a whole other option I haven't considered yet, etc.

    Update: For backups, we have good (recently verified via restore) backups of the data only; nothing for the software that actually runs things.

    Update 2: Just a progress report: I currently have a working NetWare 3.12 install in VMware Server 2.0, thanks largely to the guide I found here: http://cerbulescubogdan.blogspot.com/2010/11/novell-netware-312-on-vmware.html. The next steps are preparing empty NetWare volumes to match the additional volumes on my existing server, taking a dump of everything on the C:\ drive and NetWare volumes on my existing server, figuring out from that information what modules need to be added to NetWare, installing my licenses (we do still have that disk, if it's any good), and moving data over. I have approval to bring the server down for a week after the first of the year (sadly not before), so, aside from creating empty volumes, the rest of the work will have to wait until then.

    Final Update (Jan 5, 2011): I was able to get spares working in both RAID arrays without data loss this week. Both are now listed by the controller as "FAULT TOLLERANT" (yay!). I was also able to build on the progress from my last update, and I now have a functional "spare" server in VMware Server 2.0. The spare can run and use our ERP software, but I can't put it into production because I can't (yet) print from that box (and I have no idea why). Even so, this VM will do in a pinch if I have no other choice, and between it and the repaired RAID arrays, I'm comfortable pushing on until I can junk the machine in November.


  • OS Missing? Messed up the MBR on Win7 64-bit

    - by hom3lesshom3boy
I have a Windows 7 machine with two hard drives: a 1TB C: drive and a 500GB J:. I had Windows XP installed on C: and Windows 7 installed on J:. I installed Windows 7 after Windows XP from an installer .exe I (legally) bought and downloaded. It, and all of my other files, are sitting on my J: drive intact.

While in my Windows 7 install a few days ago, I decided to use Piriform's CCleaner and its DriveWipe utility to wipe the C: drive. 1% into the process I cancelled, then attempted to use it again. It gave me an error saying it can't format the drive, so I poked around the Internet a bit, gave up, and restarted my computer. After the computer boots past the BIOS, I get an "OS is missing" error.

I downloaded and put UBCD on a bootable USB to use another drive-wiping tool to completely erase the C: drive, hoping it would take the problem with it. No luck. I tried to use TestDisk to make J: my primary active drive, but no luck. I still get the "OS is missing" error. Sometimes it hangs at "Verifying DMI Pool" instead, or I get an "NTLDR is missing" error.

I got hold of Hiren's BootCD and put it on another bootable USB. First I tried the "Boot Windows 7 from Hard Drive" option, and I get "Error 15: File Not Found". Then I tried the "Fix 'NTLDR is Missing'" option (I'm not quite sure why this even shows up, since I'm trying to boot a HDD with Windows 7 installed; I probably messed something up when I used TestDisk) and I get a list of options. Here are the error messages from each:

1st try - Windows could not start because the following file is missing or corrupt: \system32\hal.dll
2nd try - Windows could not start because the following file is missing or corrupt: \system32\ntoskrnl.exe
3rd try - Windows could not start because of a computer disk hardware configuration problem. Could not read from the selected boot disk. Check boot path and disk hardware.
4th-8th try - same as #3
9th try - I/O Error accessing boot sector file multi(0)disk(0)fdisk(0)\BOOTSEC.DOS, and the computer freezes.
10th try - the computer restarts

Needless to say, not a single one of those works. I then tried to run the Windows 7 .exe sitting on my J: from the Mini-XP OS on Hiren's, but it won't run because I'm trying to run a 64-bit file from a 32-bit OS. At least, that's the problem according to these guys: http://social.technet.microsoft.com/...-b2f54e9c7d18/

I then borrowed a 64-bit Windows Home Premium CD from a friend to get to the recovery options, but I get the error message: "This version of System Recovery Options is not compatible with the version of Windows you are trying to repair. Try using a recovery disc that is compatible with this version of Windows."

I pressed Shift + F10 to get to the Command Prompt directly. These are the exact steps I took from there (paraphrased a little):

X:\Sources> bootrec /Fixmbr
The operation completed successfully.
X:\Sources> bootrec /Fixboot
The operation completed successfully.

I restarted my computer, but it still didn't work. I unplugged the C: drive, then tried bootrec and DiskPart:

X:\Sources> bootrec.exe
X:\Sources> bootrec /RebuildBcd
Total identified Windows installations: 1
[1] \\?\GLOBALROOT\Device\HarddiskVolume1\Windows
Add installation to bootlist? Yes(Y)/No(N)/All(A): y
The requested system device cannot be found.

X:\Sources> DiskPart
DISKPART> List Disk
  Disk ###  Status   Size     Free  Dyn  Gpt
  Disk 0    Online   465 GB   0 B         *
  Disk 1    Online   1000 MB  0 B   (this is Hiren's on a bootable USB)
DISKPART> Select Disk 0
Disk 0 is now the selected disk.
DISKPART> List Partition
  Partition ###  Type     Size     Offset
  Partition 1    System   465 GB   31 KB
DISKPART> Select Partition 1
Partition 1 is now the selected partition.
DISKPART> Active
The selected disk is not a fixed MBR disk. The ACTIVE command can only be used on fixed MBR disks.
DISKPART> exit
Leaving DiskPart...

X:\Sources> bootrec /Fixmbr
The operation completed successfully.
X:\Sources> bootrec /Fixboot
The operation completed successfully.

Before I go any further, is there anything I'm overlooking or doing wrong? All I care about is making J: and Windows 7 bootable again.

SPECS:
Windows 7 Professional 64-bit
GIGABYTE motherboard, Socket 775, GA-P35-DS3R (rev. 2.1)
Crucial Ballistix 2048MB PC6400 DDR2 800MHz (2x2GB)
Intel Core 2 Duo E6700 processor (2.66 GHz, I think... not sure anymore)
C: HDD - SAMSUNG HD103UJ (1TB, not plugged in)
J: HDD - WDC WD5000AKS-00V1A0 (500GB)
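Two clues above point at one diagnosis: the asterisk under the Gpt column in List Disk, together with "The selected disk is not a fixed MBR disk", suggests the 500GB drive is (or at least looks) GPT-partitioned, and a BIOS-era board like the GA-P35-DS3R can only boot Windows from an MBR disk, which would also explain why bootrec /RebuildBcd cannot find the system device. A sketch of what one would typically try next from the recovery Command Prompt, assuming the Windows 7 volume mounts there as D: (the letter is an assumption; check with List Volume first):

DISKPART> List Volume        (find the letter of the 465GB Windows 7 volume)
DISKPART> exit

X:\Sources> bootsect /nt60 D: /mbr     (rewrite the boot sector and MBR boot code for BOOTMGR)
X:\Sources> bcdboot D:\Windows /s D:   (rebuild the BCD store on the Windows partition)

If DiskPart keeps insisting the disk is GPT, the partition table itself has to be converted back to MBR before any of this can work. DiskPart's own "convert mbr" is destructive, so a non-destructive tool (TestDisk can rewrite the partition table type) is the safer route, with the usual caveat of backing up J: first.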

    Read the article

  • Convert Decimal to ASCII

    - by Dan Snyder
I'm having difficulty using reinterpret_cast. Before I show you my code, I'll let you know what I'm trying to do. I'm trying to get a filename from a vector full of data being used by a MIPS I processor I designed. Basically, I compile a binary from a test program for my processor, dump all the hex values from the binary into a vector in my C++ program, convert all of those hex values to decimal integers, and store them in a DataMemory vector, which is the data memory unit for my processor. (I also have instruction memory.) So when my processor runs a SYSCALL instruction such as "Open File", my C++ operating system emulator receives a pointer to the beginning of the filename in my data memory. Keep in mind that data memory is full of ints, strings, globals, locals, all sorts of stuff.

When I'm told where the filename starts, I do the following: convert the whole decimal integer element being pointed to into its ASCII character representation, then search from left to right to see if the string terminates. If not, I load each character consecutively into a "filename" string. I do this until the string terminates in memory and then store filename in a table. My difficulty is generating filename from my memory. Here is an example of what I'm trying to do:

Index  Vector    NewVector  ASCII   filename
0      240faef0  128123792  'abc7'  'a'
0      240faef0  128123792  'abc7'  'ab'
0      240faef0  128123792  'abc7'  'abc'
0      240faef0  128123792  'abc7'  'abc7'
1      1234567a  243225     'k2s0'  'abc7k'
1      1234567a  243225     'k2s0'  'abc7k2'
1      1234567a  243225     'k2s0'  'abc7k2s'
//EXIT LOOP//
1      1234567a  243225     'k2s0'  'abc7k2s'

Here is the code I've written so far to get filename (I'm just applying it to element 1000 of my DataMemory vector to test functionality; 1000 is arbitrary):

int i = 0;
int step = 1000; //top->a0;
string filename;
char *temp = reinterpret_cast<char*>( DataMemory[1000] ); //convert to char
cout << "a0:" << top->a0 << endl;                  //pointer supplied
cout << "Data:" << DataMemory[top->a0] << endl;    //my vector at the pointed-to location
cout << "Data(1000):" << DataMemory[1000] << endl; //the element I'm testing
cout << "Characters:" << &temp << endl;            //my temporary char array

while(&temp[i]!=0)
{
    filename+=temp[i]; //add most recent non-terminated character to string
    i++;
    if(i==4) //when 4 characters have been added..
    {
        i=0;
        step+=1; //restart loop at the next element in DataMemory
        temp = reinterpret_cast<char*>( DataMemory[step] );
    }
}
cout << "Filename:" << filename << endl;

So the issue is that when I do the conversion of my decimal element to a char array, I assume that 8 hex digits will give me 4 characters. Why isn't that the case? Here is my output:

a0:0
Data:0
Data(1000):4428576
Characters:0x7fff5fbff128
Segmentation fault
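The crash comes from reinterpret_cast<char*>(DataMemory[1000]) interpreting the stored value 4428576 as a memory address and then dereferencing it. Casting the element's address instead (reinterpret_cast<char*>(&DataMemory[1000])) would at least read real bytes, but the characters then come out in host byte order, which on a little-endian x86 machine is the reverse of what the table above expects. Extracting the bytes with shifts sidesteps endianness entirely. A sketch of that approach, assuming DataMemory is a std::vector<uint32_t> (the question doesn't show its declaration):

#include <cstdint>
#include <string>
#include <vector>

// Extract the up-to-4 ASCII bytes packed into one 32-bit memory word,
// most significant byte first (the conventional MIPS layout).
std::string wordToChars(std::uint32_t word) {
    std::string out;
    for (int shift = 24; shift >= 0; shift -= 8) {
        char c = static_cast<char>((word >> shift) & 0xFF);
        if (c == '\0') break;          // stop at the string terminator
        out += c;
    }
    return out;
}

std::string readFilename(const std::vector<std::uint32_t>& DataMemory, std::size_t start) {
    std::string filename;
    for (std::size_t step = start; step < DataMemory.size(); ++step) {
        std::string chunk = wordToChars(DataMemory[step]);
        filename += chunk;
        if (chunk.size() < 4) break;   // a short chunk means we hit the '\0'
    }
    return filename;
}

The original loop condition while(&temp[i]!=0) also never terminates on its own, since the address of an element is never null; testing the character value, temp[i] != 0, is what the logic intends.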

    Read the article

  • Magento Admin Custom Sales Report

    - by Ela
Hi, I have to develop a module that exports a collection of combined product, order, and customer attributes. Rather than modifying the core sales report for this purpose, I thought it better to build the functionality separately. These are the steps I took, but I am not able to get it working. I used Magento 1.4.1 for this.

Under /var/www/magento141/app/code/core/Mage/Reports/etc/adminhtml.xml I added these lines for the menus:

<ereaders translate="title" module="reports">
    <title>EReader Sales Report</title>
    <children>
        <ereaders translate="title" module="reports">
            <title>Sales Report</title>
            <action>adminhtml/report_sales/ereaders</action>
        </ereaders>
    </children>
</ereaders>

Under /var/www/magento141/app/design/adminhtml/default/default/layout/sales.xml I added these lines for the filter condition:

<adminhtml_report_sales_ereaders>
    <update handle="report_sales"/>
    <reference name="content">
        <block type="adminhtml/report_sales_sales" template="report/grid/container.phtml" name="sales.report.grid.container">
            <block type="adminhtml/store_switcher" template="report/store/switcher/enhanced.phtml" name="store.switcher">
                <action method="setStoreVarName"><var_name>store_ids</var_name></action>
            </block>
            <block type="sales/adminhtml_report_filter_form_order" name="grid.filter.form">
                ----
            </block>
        </block>
    </reference>
</adminhtml_report_sales_ereaders>

I then copied the needed block and model files from sales and renamed them all to ereaders under /var/www/magento141/app/code/core/Mage/Adminhtml/. Then I placed an action for ereaders in /var/www/magento141/app/code/core/Mage/Adminhtml/controllers/Report/SalesController.php:

public function ereadersAction()
{
    $this->_title($this->__('Reports'))->_title($this->__('Sales'))->_title($this->__('EReaders Sales'));
    $this->_showLastExecutionTime(Mage_Reports_Model_Flag::REPORT_ORDER_FLAG_CODE, 'ereaders');
    $this->_initAction()
        ->_setActiveMenu('report/sales/ereaders')
        ->_addBreadcrumb(Mage::helper('adminhtml')->__('EReaders Sales Report'), Mage::helper('adminhtml')->__('EReaders Sales Report'));
    $gridBlock = $this->getLayout()->getBlock('report_sales_ereaders.grid');
    $filterFormBlock = $this->getLayout()->getBlock('grid.filter.form');
    $this->_initReportAction(array(
        $gridBlock,
        $filterFormBlock
    ));
    $this->renderLayout();
}

When I var_dump the block, with var_dump($this->getLayout()->getBlock('report_sales_ereaders.grid')); I get only bool(false). It does not load the ereaders grid; instead it still loads the blocks and grids from Sales only. I searched most of the files related to reports but am still not able to find the problem. I hope many of you have run into this sort of issue; can anyone tell me where I am making a mistake or missing something?
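One thing worth checking: ereadersAction() asks the layout for a block named report_sales_ereaders.grid, but the layout handle above never declares a block with that name (the container is named sales.report.grid.container, and no grid child is defined). A minimal sketch of what the handle might need; the grid block's name, alias, and class are assumptions based on standard Magento 1.x naming, not something confirmed by the question:

<adminhtml_report_sales_ereaders>
    <update handle="report_sales"/>
    <reference name="content">
        <block type="adminhtml/report_sales_sales" template="report/grid/container.phtml" name="sales.report.grid.container">
            <block type="adminhtml/report_sales_ereaders_grid" name="report_sales_ereaders.grid" as="grid"/>
            <block type="sales/adminhtml_report_filter_form_order" name="grid.filter.form"/>
        </block>
    </reference>
</adminhtml_report_sales_ereaders>

For type="adminhtml/report_sales_ereaders_grid" to resolve, a class named Mage_Adminhtml_Block_Report_Sales_Ereaders_Grid has to exist at app/code/core/Mage/Adminhtml/Block/Report/Sales/Ereaders/Grid.php. If the copied files were renamed on disk but the class names inside them were not updated to match, the autoloader fails silently and getBlock() returns exactly the bool(false) being seen.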

    Read the article

  • Sending multiple requests simultaneously to the Server using Selenium with Java

    - by gagneet
I wish to send multiple requests to the server simultaneously. The problem statement is:

1. Read a text file containing multiple URLs.
2. Open each URL in the web browser.
3. Collect the cookie information for each call, and store it to a file.
4. Send another call: http://myserver.com:1111/cookie?out=text
5. Store the output (body text) of the call made in 4 to a separate file for each URL.
6. Open the next URL from the text file given in 1 and repeat steps 2-5.

The above is to be run multi-threaded, so that I can send around 5-10 URL requests simultaneously. I have implemented something in Selenium using Java, but have not been able to do the multi-threading approach. The code is given below:

package com.cookie.selenium;

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import com.thoughtworks.selenium.*;

public class ReadURL extends SeleneseTestCase {
    public void setUp() throws Exception {
        setUp("http://www.myserver.com/", "*chrome");
    }

    public static void main(String args[]) {
        Selenium selenium = new DefaultSelenium("localhost", 4444, "*chrome", "http://myserver");
        selenium.start();
        selenium.setTimeout("30000000");
        try {
            BufferedReader inputfile = new BufferedReader(new FileReader("C:\\url.txt"));
            BufferedReader cookietextfile = new BufferedReader(new FileReader("C:\\text.txt"));
            BufferedWriter cookiefile = new BufferedWriter(new FileWriter("C:\\cookie.txt"));
            BufferedWriter outputfile = null;
            String str;
            String cookiestr = "http://myserver.com:1111/cookie?out=text";
            String filename = null;
            int i = 0;
            while ((str = inputfile.readLine()) != null) {
                selenium.createCookie("T=222redHyt345&f=5&r=fg&t=100", "");
                selenium.open(str);
                selenium.waitForPageToLoad("120000");
                String urlcookie = selenium.getCookie();
                System.out.println("URL :" + str);
                System.out.println("Cookie :" + urlcookie);
                cookiefile.write(urlcookie);
                cookiefile.newLine();
                selenium.open(cookiestr);
                selenium.waitForPageToLoad("120000");
                String bodytext = selenium.getBodyText();
                System.out.println("Body Text :" + bodytext);
                filename = "C:\\cookies\\" + i + ".txt";
                outputfile = new BufferedWriter(new FileWriter(filename));
                outputfile.write(bodytext);
                outputfile.newLine();
                i++;
            }
            inputfile.close();
            outputfile.close();
            cookiefile.close();
            selenium.stop();
        } catch (IOException e) {
        }
    }
}

What I am basically trying to do here is open the first URL from a text file (which lists all the URLs I wish to open). When I capture the cookie information and store it, I open another window to output all the cookie information for that server to my browser window. This works fine when I do it outside the Selenium code, but within the code above it opens a "Save As..." popup and my tests stop. :-( I wish to save the contents of that second call to a new file, but have not been able to do so. Also, if I have to send multiple such requests to the server, how would that be possible in Java using the Selenium framework? Currently, I am opening multiple instances of the framework and running them with different parameters. :-(
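A sketch of the multi-threading side, assuming the same Selenium RC setup as above (host, port, and file path are taken from the question; the per-thread file writing is left as a stub):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

public class ParallelUrlFetcher {
    public static void main(String[] args) throws Exception {
        // Read the URL list up front.
        List<String> urls = new ArrayList<String>();
        BufferedReader in = new BufferedReader(new FileReader("C:\\url.txt"));
        String line;
        while ((line = in.readLine()) != null) {
            urls.add(line);
        }
        in.close();

        // 5 workers; a single Selenium RC server can drive several
        // browser sessions concurrently.
        ExecutorService pool = Executors.newFixedThreadPool(5);
        for (final String url : urls) {
            pool.submit(new Runnable() {
                public void run() {
                    // Each thread owns its own browser session.
                    Selenium selenium = new DefaultSelenium("localhost", 4444, "*chrome", "http://myserver");
                    try {
                        selenium.start();
                        selenium.open(url);
                        selenium.waitForPageToLoad("120000");
                        String urlcookie = selenium.getCookie();
                        // ...write the cookie and the body text of the
                        // cookie?out=text call to per-thread files here...
                    } finally {
                        selenium.stop();
                    }
                }
            });
        }
        pool.shutdown();
    }
}

As for the "Save As..." popup: http://myserver.com:1111/cookie?out=text is presumably served with a Content-Type or Content-Disposition that makes the browser download rather than render it. Fetching that URL directly from Java with java.net.URL/URLConnection, sending the captured cookie in a Cookie request header, avoids driving the browser for that step entirely.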

    Read the article

  • Watching setTimeout loops so that only one is running at a time.

    - by DA
I'm creating a content rotator in jQuery. 5 items total. Item 1 fades in, pauses 10 seconds, fades out, then item 2 fades in. Repeat. Simple enough. Using setTimeout I can call a set of functions that create a loop and will repeat the process indefinitely.

I now want to add the ability to interrupt this rotator at any time by clicking a navigation element to jump directly to one of the content items. I originally started down the path of pinging a variable constantly (say every half second) that would check whether a navigation element was clicked and, if so, abandon the loop, then restart it based on the item clicked. The challenge I ran into was how to actually ping a variable via a timer. The solution is to dive into JavaScript closures... which are a little over my head but definitely something I need to delve into more. In the process, though, I came up with an alternative option that actually seems better performance-wise (theoretically, at least). I have a sample running here: http://jsbin.com/uxupi/14 (it's using console.log, so have Firebug running).

Sample script:

$(document).ready(function(){
    var loopCount = 0;

    $('p#hello').click(function(){
        loopCount++;
        doThatThing(loopCount);
    })

    function doThatOtherThing(currentLoopCount) {
        console.log('doThatOtherThing-'+currentLoopCount);
        if(currentLoopCount==loopCount){
            setTimeout(function(){doThatThing(currentLoopCount)},5000)
        }
    }

    function doThatThing(currentLoopCount) {
        console.log('doThatThing-'+currentLoopCount);
        if(currentLoopCount==loopCount){
            setTimeout(function(){doThatOtherThing(currentLoopCount)},5000);
        }
    }
})

The logic is that every click of the trigger element kicks off the loop, passing in a variable equal to the current value of the global variable. That variable gets passed back and forth between the functions in the loop. Each click of the trigger also increments the global variable, so subsequent calls of the loop have a unique local variable. Then, within the loop, before the next step is called, each function checks whether its variable still matches the global variable. If not, it knows a new loop has already been activated and just ends the existing one.

Thoughts on this? Valid solution? Better options? Caveats? Dangers?

UPDATE: I'm using John's suggestion below via the clearTimeout option. However, I can't quite get it to work. The logic is as follows:

var slideNumber = 0;
var timeout = null;

function startLoop(slideNumber) {
    ...do stuff here to set up the slide based on slideNumber...
    slideFadeIn()
}

function continueCheck(){
    if (timeout != null) {
        // cancel the scheduled task.
        clearTimeout(timeout);
        timeout = null;
        return false;
    }else{
        return true;
    }
};

function slideFadeIn() {
    if (continueCheck){
        // a new loop hasn't been called yet so proceed...
        // fade in the LI
        $currentListItem.fadeIn(fade, function() {
            if(multipleFeatures){
                timeout = setTimeout(slideFadeOut,display);
            }
        });
    };
}

function slideFadeOut() {
    if (continueLoop){
        // a new loop hasn't been called yet so proceed...
        slideNumber=slideNumber+1;
        if(slideNumber==features.length) {
            slideNumber = 0;
        };
        timeout = setTimeout(function(){startLoop(slideNumber)},100);
    };
}

startLoop(slideNumber);

The above kicks off the looping. I then have navigation items that, when clicked, should stop the loop and restart it with a new beginning slide:

$(myNav).click(function(){
    clearTimeout(timeout);
    timeout = null;
    startLoop(thisItem);
})

If I comment out 'startLoop...' from the click event, it does indeed stop the initial loop. However, if I leave that last line in, it doesn't actually stop the initial loop. Why? What happens is that both loops seem to run in parallel for a period.
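For what it's worth, two details in the update above would keep the old loop alive no matter how the timers are handled: slideFadeIn tests if (continueCheck), a function object that is always truthy, instead of calling continueCheck(), and slideFadeOut tests an undefined continueLoop. Beyond that, clearTimeout only cancels a pending timer; it cannot stop a fadeIn already in flight, whose completion callback will still schedule the next step. A minimal sketch of the single-timer pattern (showSlide, totalSlides, and the '.feature' selector are stand-ins for the question's setup code):

var timeout = null;
var totalSlides = 5; // the 5-item rotator described above

function showSlide(n) {
    // assumed: fades out the current item and fades in item n
}

function startLoop(slideNumber) {
    // Cancel whatever step was pending; at most one timer ever exists.
    if (timeout !== null) {
        clearTimeout(timeout);
        timeout = null;
    }
    showSlide(slideNumber);
    timeout = setTimeout(function () {
        startLoop((slideNumber + 1) % totalSlides);
    }, 10000);
}

$(myNav).click(function () {
    // jQuery's .stop() kills any in-flight animation so its completion
    // callback can't schedule a competing step.
    $('li.feature').stop(true, true);
    startLoop($(this).index());
});

startLoop(0);

The key difference from the version above is that the cancel-and-restart logic lives in one place (startLoop), so a navigation click can never leave a second chain of callbacks running.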

    Read the article

  • Is there a way to delay compilation of a stored procedure's execution plan?

    - by Ian Henry
(At first glance this may look like a duplicate of http://stackoverflow.com/questions/421275 or http://stackoverflow.com/questions/414336, but my actual question is a bit different.)

Alright, this one's had me stumped for a few hours. My example here is ridiculously abstracted, so I doubt it will be possible to recreate locally, but it provides context for my question. (Also, I'm running SQL Server 2005.)

I have a stored procedure with basically two steps: constructing a temp table and populating it with very few rows, then querying a very large table joining against that temp table. It has multiple parameters, but the most relevant is a datetime "@MinDate". Essentially:

create table #smallTable (ID int)

insert into #smallTable
select (a very small number of rows from some other table)

select *
from aGiantTable
inner join #smallTable on #smallTable.ID = aGiantTable.ID
inner join anotherTable on anotherTable.GiantID = aGiantTable.ID
where aGiantTable.SomeDateField > @MinDate

If I just execute this as a normal query, by declaring @MinDate as a local variable and running that, it produces an optimal execution plan that executes very quickly (it first joins on #smallTable and then only considers a very small subset of rows from aGiantTable while doing the other operations). It seems to realize that #smallTable is tiny, so it would be efficient to start with it. This is good. However, if I make that a stored procedure with @MinDate as a parameter, it produces a completely inefficient execution plan. (I am recompiling it each time, so it's not a bad cached plan... at least, I sure hope it's not.)

But here's where it gets weird. If I change the proc to the following:

declare @LocalMinDate datetime
set @LocalMinDate = @MinDate -- where @MinDate is still a parameter

create table #smallTable (ID int)

insert into #smallTable
select (a very small number of rows from some other table)

select *
from aGiantTable
inner join #smallTable on #smallTable.ID = aGiantTable.ID
inner join anotherTable on anotherTable.GiantID = aGiantTable.ID
where aGiantTable.SomeDateField > @LocalMinDate

then it gives me the efficient plan!

So my theory is this: when executing as a plain query (not as a stored procedure), it waits to construct the execution plan for the expensive query until the last minute, so the query optimizer knows that #smallTable is small and uses that information to give the efficient plan. But when executing as a stored procedure, it creates the entire execution plan at once, so it can't use this bit of information to optimize the plan. But why does using locally declared variables change this? Why does that delay the creation of the execution plan? Is that actually what's happening? If so, is there a way to force delayed compilation (if that indeed is what's going on here) even when not using local variables in this way?

More generally, does anyone have sources on when the execution plan is created for each step of a stored procedure? Googling hasn't provided any helpful information, but I don't think I'm looking for the right thing. Or is my theory just completely unfounded?

Edit: Since posting, I've learned of parameter sniffing, and I assume this is what's causing the execution plan to compile prematurely (unless stored procedures indeed compile all at once), so my question remains: can you force the delay? Or disable the sniffing entirely? The question is academic, since I can force a more efficient plan by replacing the select * from aGiantTable with

select *
from (select * from aGiantTable where ID in (select ID from #smallTable)) as aGiantTable

Or just sucking it up and masking the parameters, but still, this inconsistency has me pretty curious.
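This is parameter sniffing, and SQL Server 2005 has query hints for it that avoid the local-variable masking trick. A sketch against the query above (the literal date in the second option is a placeholder):

select *
from aGiantTable
inner join #smallTable on #smallTable.ID = aGiantTable.ID
inner join anotherTable on anotherTable.GiantID = aGiantTable.ID
where aGiantTable.SomeDateField > @MinDate
option (recompile) -- recompile just this statement at execution time

-- or, to plan for a representative value instead of the sniffed one:
-- option (optimize for (@MinDate = '2005-01-01'))

The statement-level OPTION (RECOMPILE) is effectively the "delayed compilation" being asked about: that one statement is compiled when it is reached, after #smallTable has been populated and with the real parameter value in hand. Procedure-level WITH RECOMPILE evidently didn't help here (the question notes it recompiles each time), which fits, since it recompiles the whole batch up front rather than deferring the expensive statement.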

    Read the article

  • Alternate way to create a clone of a UNIX System

    - by Spirit
THE STORY: (If you don't like to read much, the question is down below :) )

Where I work we have two HP RP2470 servers: same hardware, same number of hard drives, same everything :). One of them is a production server and runs HP-UX 11.00. The poor ba***rd hasn't been turned off for years, and now I have to make a clone of it on the other server, just in case, for redundancy. The problem is simple (or not so simple), as I have to make the other server exactly the same. However, the old OS version (HP-UX 11.00 is history now) and the old software running on it have made my task almost impossible.

On the production server there is also a cloning/recovery utility, Ignite-UX. I tried many times to create a recovery tape with it. When I load the tape on the backup server, the load succeeds (no errors, no warnings), but on the next restart it fails to load the OS :S and drops into HP's ISL prompt.

THE QUESTION: Is there an alternate way to create a clone of the UNIX system? The environment is:

1. 2x HP RP2470 servers (non-Intel), same hardware, same number of HDDs (two each), same everything.
2. OS running: HP-UX 11.00

The production server has to be cloned without downtime - sadly :( - I hope they will reconsider on this one.

For example (as on Windows platforms), if you copy an entire HDD with Windows on it to another HDD and then put that HDD in another PC, it will still work, as long as the hardware is the same. Can I do something like that with a UNIX system? Can I somehow copy the contents of the entire HDD, put them on another HDD, and then just load that HDD into the other server? (If you haven't read the story: the servers are exactly the same.) Will it work? Can it be done with ordinary commands like cp or dump or something like that? Does anyone have a similar experience?

UPDATE: 26.01.2012
NOTE: This update relates to "The Story". If you haven't read that part, you can skip this update.

This is just a short update with the recovery logs from the Ignite tape... someone with more experience might notice something:

--- READING CONTENTS OF THE IGNITE TAPE ---
--- OUTPUT OMITTED ---
...
x ./configure3, 413696 bytes, 808 tape blocks
x ./monitor_bpr, 20480 bytes, 40 tape blocks
* Download_mini-system: Complete
* Loading_software: Begin
* Installing boot area on disk.
* Enabling swap areas.
* Backing up LVM configuration for "vg00".
* Processing the archive source (Recovery Archive).
* Wed Jan 25 15:27:32 EST 2012: Starting archive load of the source (Recovery Archive).
* Positioning the tape (/dev/rmt/0mn).
* Archive extraction from tape is beginning. Please wait.
* Wed Jan 25 15:39:52 EST 2012: Completed archive load of the source (Recovery Archive).
* Executing user specified script: "/opt/ignite/data/scripts/os_arch_post_l".
* Running in recovery mode (os_arch_post_l).
* Running the ioinit command ("/sbin/ioinit -c")
* Creating device files via the insf command.
insf: Installing special files for sdisk instance 0 address 0/0/1/1.15.0
insf: Installing special files for sdisk instance 1 address 0/0/2/0.1.0
insf: Installing special files for sdisk instance 2 address 0/0/2/1.15.0
insf: Installing special files for stape instance 0 address 0/0/1/0.3.0
insf: Installing special files for btlan instance 0 address 0/0/0/0
insf: Installing special files for btlan instance 1 address 0/2/0/0
insf: Installing special files for pseudo driver dlpi
insf: Installing special files for pseudo driver kepd
insf: Installing special files for pseudo driver framebuf
insf: Installing special files for pseudo driver sad
* Running "/opt/upgrade/bin/tlinstall -v" and correcting transition link permissions.
* Constructing the bootconf file.
* Setting primary boot path to "0/0/1/1.15.0".
* Executing: "/var/adm/sw/products/PHSS_20146/pfiles/iux_postload".
* Executing: "/var/adm/sw/products/PHSS_25982/pfiles/iux_postload".
NOTE: tlinstall is searching filesystem - please be patient
NOTE: Successfully completed
* Loading_software: Complete
* Build_Kernel: Begin
NOTE: Since the /stand/vmunix kernel is already in place, the kernel will not be re-built. Note that no mod_kernel directives will be processed.
* Build_Kernel: Complete
* Boot_From_Client_Disk: Begin
* Rebooting machine as expected.
NOTE: Rebooting system.
sync'ing disks (0 buffers to flush): 0 buffers not flushed 0 buffers still dirty
Closing open logical volumes...
Done
Console reset done.
Boot device reset done.

********** VIRTUAL FRONT PANEL **********
System Boot detected
*****************************************
LEDs: RUN ATTENTION FAULT REMOTE POWER
      FLASH OFF OFF ON ON
LED State: Running non-OS code. (i.e. Boot or Diagnostics)

--- SERVER IS PERFORMING POST SEQUENCE HERE ---
--- OUTPUT OMITTED ---

*****************************************
************ EARLY BOOT VFP *************
End of early boot detected
*****************************************
Firmware Version 43.50
Duplex Console IO Dependent Code (IODC) revision 1
------------------------------------------------------------------------------
(c) Copyright 1995-2002, Hewlett-Packard Company, All rights reserved
------------------------------------------------------------------------------
Processor  Speed    State       CoProcessor State   Cache Size
Number                                              Inst     Data
---------  -------  ----------  ------------------  ----------------
0          650 MHz  Active      Functional          750 KB   1.5 MB
1          650 MHz  Idle        Functional          750 KB   1.5 MB

Central Bus Speed (in MHz) : 120
Available Memory           : 2097152 KB
Good Memory Required       : 16140 KB

Primary boot path:   0/0/1/1.15
Alternate boot path: 0/0/2/1.15
Console path:        0/0/4/1.643
Keyboard path:       0/0/4/0.0

Processor is starting autoboot process.
To discontinue, press any key within 10 seconds.
10 seconds expired.
Proceeding...

Trying Primary Boot Path
------------------------
Booting...
Boot IO Dependent Code (IODC) revision 1
HARD Booted.
ISL Revision A.00.38 OCT 26, 1994
ISL booting hpux
ISL>
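On the raw-copy question: yes, a block-level copy between truly identical PA-RISC boxes can work, just as it does between identical PCs, but only if the source data isn't changing underneath it. Copying a live, mounted vg00 with dd will almost certainly yield an inconsistent clone, which clashes with the no-downtime requirement. For completeness, a sketch of the idea (the device files are assumptions; check the real ones with ioscan -fnC disk, and this assumes remsh/ssh connectivity between the two boxes):

# On the production server: stream the raw boot disk to the backup server
dd if=/dev/rdsk/c1t15d0 bs=1024k | remsh backupserver "dd of=/dev/rdsk/c1t15d0 bs=1024k"

A route that tolerates a running source better is per-filesystem copying with fbackup/frecover (or vxdump/vxrestore) onto LVM volumes prepared on the target: pvcreate the disk, recreate vg00 with vgcreate/lvcreate, restore each filesystem, then run mkboot on the target disk and set the boot path with setboot. That avoids the bit-identical-disk requirement, though a strictly consistent clone still wants the applications quiesced while the backup runs. And since the Ignite archive above loads cleanly and only dies at ISL, it may be worth trying an explicit boot from the ISL prompt (e.g. ISL> hpux /stand/vmunix) before abandoning Ignite altogether.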

    Read the article

  • Moving a Drupal between linux servers, best practice to avoid file-ownership problems

    - by zero
I want to port a Drupal Commons 6x24 site from a local LAMP stack to a production web server. Both systems run openSUSE Linux. How do I do this, and what are the most important steps? How should I handle file ownership? It's important for me to have full control of the file ownership. If I use the wwwrun account, I frequently run into problems due to a very strict webserver admin. For the long history of looking for fixes and solutions, see this thread, and, even more interesting, this very long and impressive thread here. All the trouble I run into has to do with file ownership and permissions.

This is my current setup. Note: this was just a quick hacked installation, quick and dirty. My interest is in the general options I have for porting a Drupal site from Linux to Linux.

linux-vi17:/srv/www/htdocs/com624 # ls -l
total 224
-rwxrwxrwx  1 root www 45285 19. Jan 00:54 CHANGELOG.txt
-rwxrwxrwx  1 root www   925 19. Jan 00:54 COPYRIGHT.txt
-rwxrwxrwx  1 root www   206 19. Jan 00:54 cron.php
drwxrwxrwx  2 root www  4096 19. Jan 00:54 includes
-rwxrwxrwx  1 root www   923 19. Jan 00:54 index.php
-rwxrwxrwx  1 root www  1244 19. Jan 00:54 INSTALL.mysql.txt
-rwxrwxrwx  1 root www  1011 19. Jan 00:54 INSTALL.pgsql.txt
-rwxrwxrwx  1 root www 47073 19. Jan 00:54 install.php
-rwxrwxrwx  1 root www 15572 19. Jan 00:54 INSTALL.txt
-rwxrwxrwx  1 root www 14940 19. Jan 00:54 LICENSE.txt
-rwxrwxrwx  1 root www  1858 19. Jan 00:54 MAINTAINERS.txt
drwxrwxrwx  3 root www  4096 19. Jan 00:54 misc
drwxrwxrwx 35 root www  4096 19. Jan 00:54 modules
drwxrwxrwx  4 root www  4096 19. Jan 00:54 profiles
-rwxrwxrwx  1 root www  1470 19. Jan 00:54 robots.txt
drwxrwxrwx  2 root www  4096 19. Jan 00:54 scripts
drwxrwxrwx  4 root www  4096 19. Jan 00:54 sites
drwxrwxrwx  7 root www  4096 19. Jan 00:54 themes
-rwxrwxrwx  1 root www 26250 19. Jan 00:54 update.php
-rwxrwxrwx  1 root www  4864 19. Jan 00:54 UPGRADE.txt
-rwxrwxrwx  1 root www   294 19. Jan 00:54 xmlrpc.php
linux-vi17:/srv/www/htdocs/com624 #

Thanks to BetaRide's answer, here is a quick overview of the drush rsync functionality (http://drush.ws/):

core-rsync: Rsync the Drupal tree to/from another server using ssh.

Examples:
  drush rsync @dev @stage           Rsync Drupal root from dev to stage (one of which must be local).
  drush rsync ./ @stage:%files/img  Rsync all files in the current directory to the 'img' directory in the file storage folder on stage.

Arguments:
  source        May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php.
  destination   May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php.

Options:
  --mode                 The unary flags to pass to rsync; --mode=rultz implies rsync -rultz. Default is -az.
  --RSYNC-FLAG           Most rsync flags passed to drush sync will be passed on to rsync. See rsync documentation.
  --exclude-conf         Excludes settings.php from being rsynced. Default.
  --include-conf         Allow settings.php to be rsynced.
  --exclude-files        Exclude the files directory.
  --exclude-sites        Exclude all directories in "sites/" except for "sites/all".
  --exclude-other-sites  Exclude all directories in "sites/" except for "sites/all" and the site directory for the site being synced. Note: if the site directory is different between the source and destination, use --exclude-sites followed by "drush rsync @from:%site @to:%site".
  --exclude-paths        List of paths to exclude, seperated by : (Unix-based systems) or ; (Windows).
  --include-paths        List of paths to include, seperated by : (Unix-based systems) or ; (Windows).

Topics: docs-aliases (site aliases overview with examples)
Aliases: rsync
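A sketch of the full move using drush site aliases (the alias names @local and @prod are assumptions and would be defined in ~/.drush/aliases.drushrc.php; the path and the deploy account name are placeholders):

# 1. Push code and files, keeping settings.php out of the sync
drush rsync @local @prod --exclude-conf

# 2. Push the database (this drops and reloads the target database)
drush sql-sync @local @prod

# 3. Fix ownership on the target: the code tree belongs to a deploy user,
#    and only sites/default/files is writable by the web server
chown -R deployuser:www /srv/www/htdocs/com624
find /srv/www/htdocs/com624 -type d -exec chmod 755 {} \;
find /srv/www/htdocs/com624 -type f -exec chmod 644 {} \;
chown -R wwwrun:www /srv/www/htdocs/com624/sites/default/files
chmod -R 775 /srv/www/htdocs/com624/sites/default/files

The split in step 3 is the usual answer to the ownership headaches: you keep full control of the code (it is merely readable by wwwrun via the www group), and the web server only ever writes inside the files directory, so a strict admin has little to object to and the rwxrwxrwx free-for-all above goes away.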

    Read the article

  • How can I get the previous logged events when a particular logger is triggered?

    - by Ben Laan
I need to show the previous 10 events when a particular logger is triggered. The goal is to show which steps occurred immediately before NHibernate.SQL logging was issued. Currently, I am logging NHibernate SQL to a separate file; this is working correctly.

<appender name="NHibernateSqlAppender" type="log4net.Appender.RollingFileAppender">
    <file value="Logs\NHibernate.log" />
    <appendToFile value="true" />
    <rollingStyle value="Size" />
    <maxSizeRollBackups value="10" />
    <maximumFileSize value="10000KB" />
    <staticLogFileName value="true" />
    <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%d{dd/MM/yy HH:mm:ss,fff} [%t] %-5p %c - %m%n" />
    </layout>
</appender>

<logger name="NHibernate.SQL" additivity="false">
    <level value="ALL"/>
    <appender-ref ref="NHibernateSqlAppender"/>
</logger>
<logger name="NHibernate" additivity="false">
    <level value="WARN"/>
    <appender-ref ref="NHibernateSqlAppender"/>
</logger>

But this only outputs SQL, without context. I would like all previous logs within a specified namespace to also be logged, but only when the NHibernate.SQL appender is triggered. I have investigated using BufferingForwardingAppender as a means to collect all events and then filter them within the NHibernateSqlAppender, but this is not working. I have read about the LoggerMatchFilter class, which seems like it should help, but I'm not sure where to put it.

<appender name="BufferingForwardingAppender" type="log4net.Appender.BufferingForwardingAppender" >
    <bufferSize value="10" />
    <lossy value="true" />
    <evaluator type="log4net.Core.LevelEvaluator">
        <threshold value="ALL"/>
    </evaluator>
    <appender-ref ref="NHibernateSqlAppender" />
</appender>

<appender name="NHibernateSqlAppender" type="log4net.Appender.RollingFileAppender">
    <file value="Logs\NHibernate.log" />
    <appendToFile value="true" />
    <rollingStyle value="Size" />
    <maxSizeRollBackups value="10" />
    <maximumFileSize value="10000KB" />
    <staticLogFileName value="true" />
    <filter type="log4net.Filter.LoggerMatchFilter">
        <loggerToMatch value="NHibernate.SQL" />
        <loggerToMatch value="Laan" />
    </filter>
    <filter type="log4net.Filter.LoggerMatchFilter">
        <loggerToMatch value="NHibernate" />
        <acceptOnMatch value="false"/>
    </filter>
    <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%d{dd/MM/yy HH:mm:ss,fff} [%t] %-5p %c - %m%n" />
    </layout>
</appender>

<root>
    <level value="ALL" />
    <appender-ref ref="BufferingForwardingAppender"/>
</root>

The idea is that the buffering appender will store all events, but the NHibernateSqlAppender will only flush when an NHibernate.SQL event fires, plus it will flush the buffer (of the 10 previous items within the specified logger level, which in this example is Laan.*).
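One piece that seems to be missing: in a lossy BufferingForwardingAppender, it is the evaluator that decides when the buffer gets flushed, and LevelEvaluator with a threshold of ALL fires on every event, so nothing is ever held back as context. What this setup appears to need is an evaluator that fires only for NHibernate.SQL events, which log4net supports via the ITriggeringEventEvaluator interface. A sketch (the class, namespace, and assembly names are mine, not part of log4net):

using log4net.Core;

namespace MyApp.Logging
{
    // Flushes the buffered appender only when the event comes from the
    // NHibernate.SQL logger; everything else just accumulates in the buffer.
    public class LoggerNameEvaluator : ITriggeringEventEvaluator
    {
        private string loggerToMatch = "NHibernate.SQL";

        public string LoggerToMatch
        {
            get { return loggerToMatch; }
            set { loggerToMatch = value; }
        }

        public bool IsTriggeringEvent(LoggingEvent loggingEvent)
        {
            return loggingEvent != null
                && loggingEvent.LoggerName != null
                && loggingEvent.LoggerName.StartsWith(loggerToMatch);
        }
    }
}

wired into the config as:

<appender name="BufferingForwardingAppender" type="log4net.Appender.BufferingForwardingAppender">
    <bufferSize value="10" />
    <lossy value="true" />
    <evaluator type="MyApp.Logging.LoggerNameEvaluator, MyApp" />
    <appender-ref ref="NHibernateSqlAppender" />
</appender>

With lossy set to true, the appender keeps only the last bufferSize events and writes them out together with the triggering event, which matches the "previous 10 events for context" goal. Separately, a log4net filter chain accepts anything no filter rejects, so the LoggerMatchFilter whitelist above still passes unmatched loggers through; a closing <filter type="log4net.Filter.DenyAllFilter" /> after the match filters is needed for the whitelist to actually exclude the rest.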

    Read the article

  • swap a div to an embed form when user tweets using the @anywhere function box

    - by Jeff
I'm using the @anywhere Twitter function on the front page of my site (vocabbomb.com), and right now it works great, letting users tweet straight away. The problem is that when they click Tweet, it just reloads the Twitter box, empty. I want it to load something new, like a "thank you, now fill in this email form" message with a MailChimp form.

This is the @anywhere code currently working fine:

<div id="tbox"></div>
<script type="text/javascript">
twttr.anywhere(function (T) {
    T("#tbox").tweetBox({
        height: 100,
        width: 400,
        defaultContent: " #vocabbomb",
        label: "Use the word foo in a tweet:",
    });
});
</script>

This is fine, but when the user writes a tweet, it just re-displays the Twitter box. I understand there is a function that lets you specify things to happen after the tweet is made (example from http://dev.twitter.com/pages/anywhere_tweetbox):

<div id="example-ontweet"></div>
<script type="text/javascript">
twttr.anywhere(function (T) {
    T("#example-ontweet").tweetBox({
        onTweet : function(plaintext, html) {
            console.log(plaintext);
            alert(html);
        }
    });
});
</script>

Is this the best way to attempt this? Or do I need a function or something that changes what's in the div to something else? I want the MailChimp email form to show up after the tweet. It includes a ton of script and tags and starts like this:

<!-- Begin MailChimp Signup Form -->
<!--[if IE]>
<style type="text/css" media="screen">
    #mc_embed_signup fieldset {position: relative;}
    #mc_embed_signup legend {position: absolute; top: -1em; left: .2em;}
</style>
<![endif]-->
<!--[if IE 7]>
<style type="text/css" media="screen">
    .mc-field-group {overflow:visible;}
</style>
<![endif]-->
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>
<script type="text/javascript" src="http://downloads.mailchimp.com/js/jquery.validate.js"></script>
<script type="text/javascript" src="http://downloads.mailchimp.com/js/jquery.form.js"></script>
<div id="mc_embed_signup">
<form action="http://faresharenyc.us1.list-manage.com/subscribe/post?u=106b58b4751a007d826715754&amp;id=2fe6ba4e6a" method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form" class="validate" target="_blank" style="font: normal 100% Arial, sans-serif;font-size: 10px;">
...

So my attempt to just require it inside the function didn't work at all:

<div id="tbox"></div>
<script type="text/javascript">
twttr.anywhere(function (T) {
    T("#tbox").tweetBox({
        height: 100,
        width: 400,
        defaultContent: " #vocabbomb",
        label: "Use the word foo in a tweet:",
        onTweet: function(plain, html){
            <?php require_once('mailchimp.html'); ?>
        }
    });
});
</script>

Nettuts had a brief discussion here: http://net.tutsplus.com/tutorials/javascript-ajax/using-twitters-anywhere-service-in-6-steps/

I see now that the onTweet function simply adds some text above the Twitter box, but doesn't actually replace the entire Twitter box. How do I do that? Is it possible? Thanks!
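One note on why the last attempt can't work: PHP runs on the server before the page is sent, so a require_once inside onTweet doesn't execute at tweet time; it just dumps the MailChimp markup into the middle of a JavaScript function body. The usual pattern is to render the form into the page up front, hidden, and let onTweet swap the two divs. A sketch, assuming the MailChimp include keeps its #mc_embed_signup wrapper as shown above:

<div id="tbox"></div>
<div id="signup" style="display:none">
    <p>Thanks for tweeting! Now fill in this email form:</p>
    <?php require_once('mailchimp.html'); ?>
</div>

<script type="text/javascript">
twttr.anywhere(function (T) {
    T("#tbox").tweetBox({
        height: 100,
        width: 400,
        defaultContent: " #vocabbomb",
        label: "Use the word foo in a tweet:",
        onTweet: function (plaintext, html) {
            // Swap the tweet box for the signup form once the tweet is sent.
            document.getElementById('tbox').style.display = 'none';
            document.getElementById('signup').style.display = 'block';
        }
    });
});
</script>

Since jQuery is already loaded for the MailChimp validation scripts, $('#tbox').hide() and $('#signup').show() would do the same job.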

    Read the article

  • urllib2 misbehaving with dynamically loaded content

    - by Sheena
Some code:

import urllib.request
from urllib import error

headers = {}
headers['user-agent'] = 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0'
headers['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
headers['Accept-Language'] = 'en-gb,en;q=0.5'
#headers['Accept-Encoding'] = 'gzip, deflate'

request = urllib.request.Request(sURL, headers = headers)
try:
    response = urllib.request.urlopen(request)
except error.HTTPError as e:
    print('The server couldn\'t fulfill the request.')
    print('Error code: {0}'.format(e.code))
except error.URLError as e:
    print('We failed to reach a server.')
    print('Reason: {0}'.format(e.reason))
else:
    f = open('output/{0}.html'.format(sFileName),'w')
    f.write(response.read().decode('utf-8'))

A URL: http://groupon.cl/descuentos/santiago-centro

The situation. Here's what I did:

1. Enable JavaScript in the browser.
2. Open the URL above and keep an eye on the console.
3. Disable JavaScript.
4. Repeat step 2.
5. Use urllib2 to grab the webpage and save it to a file.
6. Enable JavaScript.
7. Open the file with the browser and observe the console.
8. Repeat step 7 with JavaScript off.

Results:

In step 2 I saw that a whole lot of the page content was loaded dynamically using Ajax. So the HTML that arrived was a sort of skeleton and Ajax was used to fill in the gaps. This is fine and not at all surprising. Since the page should be SEO-friendly, it should work fine without JS.

In step 4 nothing happens in the console and the skeleton page loads pre-populated, rendering the Ajax unnecessary. This is also completely not confusing.

In step 7 the Ajax calls are made but fail. This is also OK, since the URLs they use are not local; the calls are thus broken. The page looks like the skeleton. This is also great and expected.

In step 8, no Ajax calls are made and the skeleton is just a skeleton. I would have thought this should behave very much like step 4.

Question:

What I want to do is use urllib2 to grab the HTML from step 4, but I can't figure out how. What am I missing, and how could I pull this off? To paraphrase: if I were writing a spider, I would want to be able to grab plain ol' HTML (as in what resulted in step 4). I don't want to execute Ajax stuff or any JavaScript at all. I don't want to populate anything dynamically. I just want HTML. The SEO-friendly site wants me to get what I want, because that's what SEO is all about. How would one go about getting plain HTML content given the situation I outlined? To do it manually, I would turn off JS, navigate to the page, and copy the HTML. I want to automate this.

Stuff I've tried:

I used Wireshark to look at packet headers, and the GETs sent off from my PC in steps 2 and 4 have the same headers. Reading about SEO makes me think this is pretty normal; otherwise, techniques such as Hijax wouldn't be used.

Here are the headers my browser sends:

Host: groupon.cl
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-gb,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive

Here are the headers my script sends:

Accept-Encoding: identity
Host: groupon.cl
Accept-Language: en-gb,en;q=0.5
Connection: close
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0

The differences are:

- My script has Connection = close instead of keep-alive. I can't see how this would cause a problem.
- My script has Accept-Encoding = identity. This might be the cause of the problem. I can't really see why the host would use this field to determine the user agent, though. If I change the encoding to match the browser request headers, then I have trouble decoding it. I'm working on this now... watch this space; I'll update the question as new info comes up.
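One detail visible in the captures above: the script's User-Agent header literally reads "User-Agent: User-Agent: Mozilla/...", because the header name was duplicated into the value, and a server that sniffs user agents may react to that malformed string. A sketch with that fixed and the gzip response handled, so Accept-Encoding can match the browser too (Python 3, as in the snippet; sURL as defined above):

import gzip
import urllib.request

headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en-gb,en;q=0.5',
    'Accept-Encoding': 'gzip, deflate',
}

request = urllib.request.Request(sURL, headers=headers)
response = urllib.request.urlopen(request)
raw = response.read()

# If the server honoured Accept-Encoding, decompress before decoding.
if response.headers.get('Content-Encoding') == 'gzip':
    raw = gzip.decompress(raw)

html = raw.decode('utf-8')

(gzip.decompress needs Python 3.2+; on older versions, gzip.GzipFile over an io.BytesIO does the same.) If the skeleton still comes back unpopulated after this, the next thing to compare in Wireshark is cookies: the browser in step 4 may be carrying session cookies from the earlier JS-enabled visit, and the server may serve the pre-populated page only to sessions it has seen before.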

    Read the article

  • Why is Varnish not caching?

    - by Justin
I am troubleshooting the setup of Varnish 3.x on my Ubuntu server. I'm running Drupal 7 on two sites set up on the box, via name-based vhosts. Before trying to get Varnish to play nice with Drupal, I'm trying to just get Varnish to serve a PNG from cache. Here are the headers I get from a curl -I request of the PNG file:

HTTP/1.1 200 OK
Server: Apache/2.2.22 (Ubuntu)
Last-Modified: Sun, 07 Oct 2012 21:18:59 GMT
ETag: "a57c2-3850-4cb7ea73db6c0"
Accept-Ranges: bytes
Content-Length: 14416
Cache-Control: max-age=1209600
Expires: Thu, 25 Oct 2012 22:55:14 GMT
Content-Type: image/png
Accept-Ranges: bytes
Date: Thu, 11 Oct 2012 22:55:14 GMT
X-Varnish: 1766703058
Age: 0
Via: 1.1 varnish
Connection: keep-alive
X-Varnish-Cache: MISS

Here is the Varnish VCL file I'm using (it's a default VCL configuration designed for Drupal):

# Default backend definition. Set this to point to your content
# server.
backend default {
  .host = "127.0.0.1";
  .port = "8080";
}

# Respond to incoming requests.
sub vcl_recv {
  # Use anonymous, cached pages if all backends are down.
  if (!req.backend.healthy) {
    unset req.http.Cookie;
  }

  # Allow the backend to serve up stale content if it is responding slowly.
  set req.grace = 6h;

  # Pipe these paths directly to Apache for streaming.
  #if (req.url ~ "^/admin/content/backup_migrate/export") {
  #  return (pipe);
  #}

  # Do not cache these paths.
  if (req.url ~ "^/status\.php$" ||
      req.url ~ "^/update\.php$" ||
      req.url ~ "^/admin$" ||
      req.url ~ "^/admin/.*$" ||
      req.url ~ "^/flag/.*$" ||
      req.url ~ "^.*/ajax/.*$" ||
      req.url ~ "^.*/ahah/.*$") {
    return (pass);
  }

  # Do not allow outside access to cron.php or install.php.
  #if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) {
  #  # Have Varnish throw the error directly.
  #  error 404 "Page not found.";
  #  # Use a custom error page that you've defined in Drupal at the path "404".
  #  set req.url = "/404";
  #}

  # Always cache the following file types for all users. This list of extensions
  # appears twice, once here and again in vcl_fetch so make sure you edit both
  # and keep them equal.
  if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
    unset req.http.Cookie;
  }

  # Remove all cookies that Drupal doesn't need to know about. We explicitly
  # list the ones that Drupal does need, the SESS and NO_CACHE. If, after
  # running this code we find that either of these two cookies remains, we
  # will pass as the page cannot be cached.
  if (req.http.Cookie) {
    # 1. Append a semi-colon to the front of the cookie string.
    # 2. Remove all spaces that appear after semi-colons.
    # 3. Match the cookies we want to keep, adding the space we removed
    #    previously back. (\1) is first matching group in the regsuball.
    # 4. Remove all other cookies, identifying them by the fact that they have
    #    no space after the preceding semi-colon.
    # 5. Remove all spaces and semi-colons from the beginning and end of the
    #    cookie string.
    set req.http.Cookie = ";" + req.http.Cookie;
    set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
    set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1=");
    set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
    set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");

    if (req.http.Cookie == "") {
      # If there are no remaining cookies, remove the cookie header. If there
      # aren't any cookie headers, Varnish's default behavior will be to cache
      # the page.
      unset req.http.Cookie;
    }
    else {
      # If there is any cookies left (a session or NO_CACHE cookie), do not
      # cache the page. Pass it on to Apache directly.
      return (pass);
    }
  }
}

# Set a header to track a cache HIT/MISS.
sub vcl_deliver {
  if (obj.hits > 0) {
    set resp.http.X-Varnish-Cache = "HIT";
  }
  else {
    set resp.http.X-Varnish-Cache = "MISS";
  }
}

# Code determining what to do when serving items from the Apache servers.
# beresp == Back-end response from the web server.
sub vcl_fetch {
  # We need this to cache 404s, 301s, 500s. Otherwise, depending on backend but
  # definitely in Drupal's case these responses are not cacheable by default.
  if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) {
    set beresp.ttl = 10m;
  }

  # Don't allow static files to set cookies.
  # (?i) denotes case insensitive in PCRE (perl compatible regular expressions).
  # This list of extensions appears twice, once here and again in vcl_recv so
  # make sure you edit both and keep them equal.
  if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
    unset beresp.http.set-cookie;
  }

  # Allow items to be stale if needed.
  set beresp.grace = 6h;
}

# In the event of an error, show friendlier messages.
sub vcl_error {
  # Redirect to some other URL in the case of a homepage failure.
  #if (req.url ~ "^/?$") {
  #  set obj.status = 302;
  #  set obj.http.Location = "http://backup.example.com/";
  #}

  # Otherwise redirect to the homepage, which will likely be in the cache.
  set obj.http.Content-Type = "text/html; charset=utf-8";
  synthetic {"
<html>
<head>
  <title>Page Unavailable</title>
  <style>
    body { background: #303030; text-align: center; color: white; }
    #page { border: 1px solid #CCC; width: 500px; margin: 100px auto 0; padding: 30px; background: #323232; }
    a, a:link, a:visited { color: #CCC; }
    .error { color: #222; }
  </style>
</head>
<body onload="setTimeout(function() { window.location = '/' }, 5000)">
  <div id="page">
    <h1 class="title">Page Unavailable</h1>
    <p>The page you requested is temporarily unavailable.</p>
    <p>We're redirecting you to the <a href="/">homepage</a> in 5 seconds.</p>
    <div class="error">(Error "} + obj.status + " " + obj.response + {")</div>
  </div>
</body>
</html>
"};
  return (deliver);
}

I'm getting a MISS and Age: 0 every time. If I'm understanding correctly, this means the file isn't being returned from Varnish's cache. Is there a problem with my Varnish config?
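Before assuming the VCL is broken, two checks are worth making. First, X-Varnish: 1766703058 carries a single transaction ID; on a hit, Varnish reports two IDs (the current request plus the request that stored the object), so a single ID with Age: 0 may just mean this particular request was the one that populated the cache. Requesting the object twice in a row makes the distinction visible (the URL is a placeholder):

curl -sI http://example.com/sites/default/files/logo.png | grep -E 'X-Varnish|Age'
curl -sI http://example.com/sites/default/files/logo.png | grep -E 'X-Varnish|Age'

If the second response shows two IDs and a non-zero Age, caching is actually working. If it still misses, watching the cache decision live narrows things down:

varnishlog | grep -iE 'VCL_call|VCL_return|TTL'

Second, compare curl's behaviour with a browser's: curl sends no cookies, while a browser visiting a Drupal 7 site typically sends a session or has_js cookie with every request, including images. The extension regex above should strip those for PNGs, but only if the tested URL really matches the pattern; a mismatch there would make every browser request a pass while curl's cookieless requests cache fine.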

    Read the article

  • I have this broken php upload script

    - by Anders Kitson
I have this script for uploading an image and content from a form. It works in one project but not in another. I have spent a good few hours trying to debug it, and I am hoping someone can point out the issue. The comments mark where I have tried to debug. The first error I got was the "Invalid file" echo at the beginning of the last commented-out block. With these specific areas commented out, the upload name and type that I am supposed to grab from the form are not being echoed; I think this is where the error occurs, but I can't quite find it. Thanks.

<?php
include("../includes/connect.php");

/*
if ((($_FILES["file"]["type"] == "image/gif")
    || ($_FILES["file"]["type"] == "image/jpeg")
    || ($_FILES["file"]["type"] == "image/pjpeg"))
    && ($_FILES["file"]["size"] < 2000000))
{
*/
    if ($_FILES["file"]["error"] > 0) {
        echo "Return Code: " . $_FILES["file"]["error"] . "<br />";
    }
    else {
        echo "Upload: " . $_FILES["file"]["name"] . "<br />";
        echo "Type: " . $_FILES["file"]["type"] . "<br />";
        echo "Size: " . ($_FILES["file"]["size"] / 1024) . " Kb<br />";
        echo "Temp file: " . $_FILES["file"]["tmp_name"] . "<br />";

        /* GRAB FORM DATA */
        $title = $_POST['title'];
        $date = $_POST['date'];
        $content = $_POST['content'];
        $imageName1 = $_FILES["file"]["name"];

        echo $title;
        echo "<br/>";
        echo $date;
        echo "<br/>";
        echo $content;
        echo "<br/>";
        echo $imageName1;

        $sql = "INSERT INTO blog (title,date,content,image)VALUES(
            \"$title\",
            \"$date\",
            \"$content\",
            \"$imageName1\"
        )";
        $results = mysql_query($sql) or die(mysql_error());
        echo "<br/>";

        if (file_exists("../images/blog/" . $_FILES["file"]["name"])) {
            echo $_FILES["file"]["name"] . " already exists. ";
        }
        else {
            move_uploaded_file($_FILES["file"]["tmp_name"], "../images/blog/" . $_FILES["file"]["name"]);
            echo "Stored in: " . "../images/blog/" . $_FILES["file"]["name"];
        }
    }
/*
}
else {
    echo "Invalid file" . "<br/>";
    echo "Type: " . $_FILES["file"]["type"] . "<br />";
}
*/

//lets create a thumbnail of this uploaded image.
/*
$fileName = $_FILES["file"]["name"];
createThumb($fileName,310,"../images/blog/thumbs/");

function createThumb($thisFileName, $thisThumbWidth, $thisThumbDest){
    $thisOriginalFilePath = "../images/blog/". $thisFileName;
    list($width, $height) = getimagesize($thisOriginalFilePath);
    $imgRatio = $width/$height;
    $thisThumbHeight = $thisThumbWidth/$imgRatio;
    $thumb = imagecreatetruecolor($thisThumbWidth,$thisThumbHeight);
    $source = imagecreatefromjpeg($thisOriginalFilePath);
    imagecopyresampled($thumb, $source, 0, 0, 0, 0, $thisThumbWidth,$thisThumbHeight, $width, $height);
    $newFileName = $thisThumbDest.$thisFileName;
    imagejpeg($thumb,$newFileName, 80);
    echo "<p><img src=\"$newFileName\" /></p>";
    //header("location: http://www.google.ca");
}
*/
?>
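The "Invalid file" branch fires when either the MIME-type check or the size check fails, and $_FILES["file"]["type"] is supplied by the browser, so the same image can arrive as one type string in one project and a different one in another. Notably, the check above never allows image/png at all, only GIF and the two JPEG variants. Also, if the file exceeds the server's upload_max_filesize, "type" arrives empty and "error" is non-zero, which would explain the missing name and type echoes. A sketch of a server-side check that doesn't trust the browser (finfo needs PHP 5.3+ or the PECL fileinfo extension):

<?php
// Fail fast on transport-level upload errors (size limits, partial uploads).
if ($_FILES["file"]["error"] !== UPLOAD_ERR_OK) {
    die("Upload failed with error code " . $_FILES["file"]["error"]);
}

// Detect the real MIME type from the file contents, not the browser's claim.
$finfo = finfo_open(FILEINFO_MIME_TYPE);
$mime  = finfo_file($finfo, $_FILES["file"]["tmp_name"]);
finfo_close($finfo);

$allowed = array("image/gif", "image/jpeg", "image/png");
if (!in_array($mime, $allowed, true)) {
    die("Invalid file: detected type was " . $mime);
}
if ($_FILES["file"]["size"] > 2000000) {
    die("File too large");
}
?>

One more thing to watch once the upload passes: createThumb() calls imagecreatefromjpeg() unconditionally, so any GIF or PNG that gets through will break the thumbnail step; switching on the detected type (imagecreatefromgif or imagecreatefrompng) fixes that.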

    Read the article
