Search Results

Search found 5619 results on 225 pages for 'starts'.


  • How can I get Firefox to update background-color on a:hover *before* a javascript routine is run?

    - by Rob
    I'm having a Firefox-specific issue with a script I wrote to create 3d layouts. The correct behavior is that the script pulls the background-color from an element and then uses that color to draw on the canvas. When a user mouses over a link and the background-color changes to the :hover rule, the color being drawn changes on the canvas changes as well. When the user mouses out, the color should revert back to non-hover color. This works as expected in Webkit browsers and Opera, but it seems like Firefox doesn't update the background-color in CSS immediately after a mouseout event occurs, so the current background-color doesn't get drawn if a mouseout occurs and it isn't followed up by another event that calls the draw() routine. It works just fine in Opera, Chrome, and Safari. How can I get Firefox to cooperate? I'm including the code that I believe is most relevant to my problem. Any advice on how I fix this problem and get a consistent effect would be very helpful. function drawFace(coord, mid, popColor,gs,x1,x2,side) { /*Gradients in our case run either up/down or left right. We have two algorithms depending on whether or not it's a sideways facing piece. Rather than parse the "rgb(r,g,b)" string(popColor) retrieved from elsewhere, it is simply offset with the gs variable to give the illusion that it starts at a darker color.*/ var canvas = document.getElementById('depth'); //This is for excanvas.js var G_vmlCanvasManager; if (G_vmlCanvasManager != undefined) { // ie IE G_vmlCanvasManager.initElement(canvas); } //Init canvas if (canvas.getContext) { var ctx = canvas.getContext('2d'); if (side) var lineargradient=ctx.createLinearGradient(coord[x1][0]+gs,mid[1],mid[0],mid[1]); else var lineargradient=ctx.createLinearGradient(coord[0][0],coord[2][1]+gs,coord[0][0],mid[1]); lineargradient.addColorStop(0,popColor); lineargradient.addColorStop(1,'black'); ctx.fillStyle=lineargradient; ctx.beginPath(); //Draw from one corner to the midpoint, then to the other corner, //and apply a stroke and a fill. ctx.moveTo(coord[x1][0],coord[x1][1]); ctx.lineTo(mid[0],mid[1]); ctx.lineTo(coord[x2][0],coord[x2][1]); ctx.stroke(); ctx.fill(); } } function draw(e) { var arr = new Array() var i = 0; var mid = new Array(2); $(".pop").each(function() { mid[0]=Math.round($(document).width()/2); mid[1]=Math.round($(document).height()/2); arr[arr.length++]=new getElemProperties(this,mid); i++; }); arr.sort(sortByDistance); clearCanvas(); for (a=0;a<i;a++) { /*In the following conditional statements, we're testing to see which direction faces should be drawn, based on a 1-point perspective drawn from the midpoint. In the first statement, we're testing to see if the lower-left hand corner coord[3] is higher on the screen than the midpoint. If so, we set it's gradient starting position to start at a point in space 60pixels higher(-60) than the actual side, and we also declare which corners make up our face, in this case the lower two corners, coord[3], and coord[2].*/ if (arr[a].bottomFace) drawFace(arr[a].coord,mid,arr[a].popColor,-60,3,2); if (arr[a].topFace) drawFace(arr[a].coord,mid,arr[a].popColor,60,0,1); if (arr[a].leftFace) drawFace(arr[a].coord,mid,arr[a].popColor,60,0,3,true); if (arr[a].rightFace) drawFace(arr[a].coord,mid,arr[a].popColor,-60,1,2,true); } } $("a.pop").bind("mouseenter mouseleave focusin focusout",draw); If you need to see the effect in action, or if you want the full javascript code, you can check it out here: http://www.robnixondesigns.com/strangematter/
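    For illustration, here is a minimal sketch of one possible workaround (an assumption on my part, not something from the original post): defer the draw() call with a zero-delay timeout so Firefox can finish recomputing the link's style after the mouseout before the canvas code reads the background-color.

    ```javascript
    // Hypothetical sketch: run draw() after the pending style update instead of
    // synchronously inside the mouseout/mouseleave handler.
    $("a.pop").bind("mouseenter mouseleave focusin focusout", function (e) {
        var evt = e;
        setTimeout(function () {
            draw(evt); // by now the :hover background-color should no longer apply
        }, 0);
    });
    ```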

    Read the article

  • Win32: IProgressDialog will not disappear until you mouse over it.

    - by Ian Boyd
    i'm using the Win32 progress dialog. The damnest thing is that when i call: progressDialog.StopProgressDialog(); it doesn't disappear. It stays on screen until the user moves her mouse over it - then it suddenly disappers. The call to StopProgressDialog returns right away (i.e. it's not a synchronous call). i can prove this by doing things after the call has returned: private void button1_Click(object sender, EventArgs e) { //Force red background to prove we've started this.BackColor = Color.Red; this.Refresh(); //Start a progress dialog IProgressDialog pd = (IProgressDialog)new ProgressDialog(); pd.StartProgressDialog(this.Handle, null, PROGDLG.Normal, IntPtr.Zero); //The long running operation System.Threading.Thread.Sleep(10000); //Stop the progress dialog pd.SetLine(1, "Stopping Progress Dialog", false, IntPtr.Zero); pd.StopProgressDialog(); pd = null; //Return form to normal color to prove we've stopped. this.BackColor = SystemColors.Control; this.Refresh(); } The form: starts gray turns red to show we've stared turns back to gray color to show we've called stop So the call to StopProgressDialog has returned, except the progress dialog is still sitting there, mocking me, showing the message: Stopping Progress Dialog Doesn't Appear for 10 seconds Additionally, the progress dialog does not appear on screen until the System.Threading.Thread.Sleep(10000); ten second sleep is over. Not limited to .NET WinForms The same code also fails in Delphi, which is also an object wrapper around Window's windows: procedure TForm1.Button1Click(Sender: TObject); var pd: IProgressDialog; begin Self.Color := clRed; Self.Repaint; pd := CoProgressDialog.Create; pd.StartProgressDialog(Self.Handle, nil, PROGDLG_NORMAL, nil); Sleep(10000); pd.SetLine(1, StringToOleStr('Stopping Progress Dialog'), False, nil); pd.StopProgressDialog; pd := nil; Self.Color := clBtnFace; Self.Repaint; end; PreserveSig An exception would be thrown if StopProgressDialog was failing. Most of the methods in IProgressDialog, when translated into C# (or into Delphi), use the compiler's automatic mechanism of converting failed COM HRESULTS into a native language exception. In other words the following two signatures will throw an exception if the COM call returned an error HRESULT (i.e. a value less than zero): //C# void StopProgressDialog(); //Delphi procedure StopProgressDialog; safecall; Whereas the following lets you see the HRESULT's and react yourself: //C# [PreserveSig] int StopProgressDialog(); //Delphi function StopProgressDialog: HRESULT; stdcall; HRESULT is a 32-bit value. If the high-bit is set (or the value is negative) it is an error. i am using the former syntax. So if StopProgressDialog is returning an error it will be automatically converted to a language exception. Note: Just for SaG i used the [PreserveSig] syntax, the returned HRESULT is zero; MsgWait? The symptom is similar to what Raymond Chen described once, which has to do with the incorrect use of PeekMessage followed by MsgWaitForMultipleObjects: "Sometimes my program gets stuck and reports one fewer record than it should. I have to jiggle the mouse to get the value to update. After a while longer, it falls two behind, then three..." But that would mean that the failure is in IProgressDialog, since it fails equally well on CLR .NET WinForms and native Win32 code.
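    For what it's worth, a small diagnostic sketch (purely an assumption, not a confirmed fix): since the owner form's thread has been blocked for the entire ten-second sleep, pumping its message queue once after StopProgressDialog would show whether the hide is waiting on a window message that never gets dispatched.

    ```csharp
    // Hypothetical diagnostic: dispatch any window messages queued for this
    // (blocked) thread right after stopping the dialog.
    pd.SetLine(1, "Stopping Progress Dialog", false, IntPtr.Zero);
    pd.StopProgressDialog();
    pd = null;
    System.Windows.Forms.Application.DoEvents(); // pump pending messages once
    ```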

    Read the article

  • Can the STREAM and GUPS (single CPU) benchmarks use non-local memory in a NUMA machine?

    - by osgx
    Hello, I want to run some tests from HPCC: STREAM and GUPS. They test memory bandwidth, latency, and throughput (in terms of random accesses). Can I run the single-CPU STREAM or single-CPU GUPS test on a NUMA node with memory interleaving enabled? (Is that allowed by the rules of HPCC - the High Performance Computing Challenge?) Using non-local memory can increase GUPS results, because it increases 2- or 4-fold the number of memory banks available for random accesses. (GUPS is typically limited by the non-ideal memory subsystem and by slow memory bank opening/closing. With more banks it can update one bank while the other banks are opening/closing.) Thanks.

    UPDATE: The rules say you may not reorder the memory accesses that the program makes. But can the compiler reorder loop nesting? E.g. hpcc/RandomAccess.c:

        /* Perform updates to main table. The scalar equivalent is:
         *
         *   u64Int ran;
         *   ran = 1;
         *   for (i=0; i<NUPDATE; i++) {
         *     ran = (ran << 1) ^ (((s64Int) ran < 0) ? POLY : 0);
         *     table[ran & (TableSize-1)] ^= stable[ran >> (64-LSTSIZE)];
         *   }
         */
        for (j=0; j<128; j++)
          ran[j] = starts ((NUPDATE/128) * j);
        for (i=0; i<NUPDATE/128; i++) {
          /* #pragma ivdep */
          for (j=0; j<128; j++) {
            ran[j] = (ran[j] << 1) ^ ((s64Int) ran[j] < 0 ? POLY : 0);
            Table[ran[j] & (TableSize-1)] ^= stable[ran[j] >> (64-LSTSIZE)];
          }
        }

    The main loop here is for (i=0; i<NUPDATE/128; i++) { and the nested loop is for (j=0; j<128; j++) {. Using the 'loop interchange' optimization, the compiler can convert this code to:

        for (j=0; j<128; j++) {
          for (i=0; i<NUPDATE/128; i++) {
            ran[j] = (ran[j] << 1) ^ ((s64Int) ran[j] < 0 ? POLY : 0);
            Table[ran[j] & (TableSize-1)] ^= stable[ran[j] >> (64-LSTSIZE)];
          }
        }

    It can be done because this loop nest is a perfect loop nest. Is such an optimization prohibited by the rules of HPCC?
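    For context, memory interleaving is normally requested from outside the benchmark on Linux, for example with numactl; whether the HPCC run rules permit doing so is exactly the open question above. A sketch of the invocation (the binary name is a placeholder):

    ```bash
    # Interleave the process's pages round-robin across all NUMA nodes instead of
    # allocating them node-local, then run the benchmark binary.
    numactl --interleave=all ./hpcc
    ```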

    Read the article

  • mdadm raid5 recover double disk failure - with a twist (drive order)

    - by Peter Bos
    Let me acknowledge first off that I have made mistakes, and that I have a backup for most but not all of the data on this RAID. I still have hope of recovering the rest of the data. I don't have the kind of money to take the drives to a recovery expert company. Mistake #0, not having a 100% backup. I know. I have a mdadm RAID5 system of 4x3TB. Drives /dev/sd[b-e], all with one partition /dev/sd[b-e]1. I'm aware that RAID5 on very large drives is risky, yet I did it anyway. Recent events The RAID become degraded after a two drive failure. One drive [/dev/sdc] is really gone, the other [/dev/sde] came back up after a power cycle, but was not automatically re-added to the RAID. So I was left with a 4 device RAID with only 2 active drives [/dev/sdb and /dev/sdd]. Mistake #1, not using dd copies of the drives for restoring the RAID. I did not have the drives or the time. Mistake #2, not making a backup of the superblock and mdadm -E of the remaining drives. Recovery attempt I reassembled the RAID in degraded mode with mdadm --assemble --force /dev/md0, using /dev/sd[bde]1. I could then access my data. I replaced /dev/sdc with a spare; empty; identical drive. I removed the old /dev/sdc1 from the RAID mdadm --fail /dev/md0 /dev/sdc1 Mistake #3, not doing this before replacing the drive I then partitioned the new /dev/sdc and added it to the RAID. mdadm --add /dev/md0 /dev/sdc1 It then began to restore the RAID. ETA 300 mins. I followed the process via /proc/mdstat to 2% and then went to do other stuff. Checking the result Several hours (but less then 300 mins) later, I checked the process. It had stopped due to a read error on /dev/sde1. Here is where the trouble really starts I then removed /dev/sde1 from the RAID and re-added it. I can't remember why I did this; it was late. mdadm --manage /dev/md0 --remove /dev/sde1 mdadm --manage /dev/md0 --add /dev/sde1 However, /dev/sde1 was now marked as spare. So I decided to recreate the whole array using --assume-clean using what I thought was the right order, and with /dev/sdc1 missing. mdadm --create /dev/md0 --assume-clean -l5 -n4 /dev/sdb1 missing /dev/sdd1 /dev/sde1 That worked, but the filesystem was not recognized while trying to mount. (It should have been EXT4). Device order I then checked a recent backup I had of /proc/mdstat, and I found the drive order. md0 : active raid5 sdb1[0] sde1[4] sdd1[2] sdc1[1] 8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU] I then remembered this RAID had suffered a drive loss about a year ago, and recovered from it by replacing the faulty drive with a spare one. That may have scrambled the device order a bit...so there was no drive [3] but only [0],[1],[2], and [4]. I tried to find the drive order with the Permute_array script: https://raid.wiki.kernel.org/index.php/Permute_array.pl but that did not find the right order. Questions I now have two main questions: I screwed up all the superblocks on the drives, but only gave: mdadm --create --assume-clean commands (so I should not have overwritten the data itself on /dev/sd[bde]1. Am I right that in theory the RAID can be restored [assuming for a moment that /dev/sde1 is ok] if I just find the right device order? Is it important that /dev/sde1 be given the device number [4] in the RAID? When I create it with mdadm --create /dev/md0 --assume-clean -l5 -n4 \ /dev/sdb1 missing /dev/sdd1 /dev/sde1 it is assigned the number [3]. I wonder if that is relevant to the calculation of the parity blocks. 
If it turns out to be important, how can I recreate the array with /dev/sdb1[0] missing[1] /dev/sdd1[2] /dev/sde1[4]? If I could get that to work I could start it in degraded mode and add the new drive /dev/sdc1 and let it resync again. It's OK if you would like to point out to me that this may not have been the best course of action, but you'll find that I realized this. It would be great if anyone has any suggestions.
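    As a point of reference, one cautious way to test a candidate device order might look like the sketch below (assumptions: the 512k chunk and 1.2 metadata from the /proc/mdstat backup, and nothing mounted). It rewrites superblocks, as has already happened, but --assume-clean leaves the data area alone, and a report-only fsck then says whether the guess was plausible before anything else is done.

    ```bash
    # Hypothetical test of one candidate order; adjust the order of
    # sdb1/missing/sdd1/sde1 for each attempt.
    mdadm --stop /dev/md0
    mdadm --create /dev/md0 --assume-clean --metadata=1.2 --chunk=512 \
          --level=5 --raid-devices=4 /dev/sdb1 missing /dev/sdd1 /dev/sde1
    fsck.ext4 -n /dev/md0   # -n: report only, never modify the filesystem
    ```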

    Read the article

  • Nightmare: Upgrading Tomcat 5.5 to 6.0

    - by pavanlimo
    I'm trying to upgrade a perfectly running embedded Tomcat 5.5 to Tomcat 6.0. I understand that all I need to do is replace Tomcat 5.5 jars with 6.0. That's what I did. So I replaced the following jars: catalina-5.0.28.jar catalina-5.5.9.jar catalina-optional-5.5.9.jar commons-el.jar commons-modeler-1.1.0.jar jasper-compiler-jdt.jar jasper-compiler.jar jasper-runtime.jar jmx-5.0.28.jar jsp-api-2.0.jar naming-factory.jar naming-resources.jar servlet-api-2.4.jar servlets-default.jar tomcat-coyote.jar tomcat-http.jar tomcat-util.jar with: annotations-api.jar catalina.jar jasper.jar tomcat-dbcp.jar catalina-ant.jar el-api.jar jsp-api.jar tomcat-i18n-es.jar catalina-ha.jar jasper-el.jar servlet-api.jar tomcat-i18n-fr.jar catalina-tribes.jar jasper-jdt.jar tomcat-coyote.jar tomcat-i18n-ja.jar tomcat-juli.jar As soon as I start the server, I get the following message in the logs at INFO level: INFO: Starting Servlet Engine: Apache Tomcat/6.0.29 Dec 31, 2010 6:04:18 AM org.apache.catalina.loader.WebappClassLoader validateJarFile INFO: validateJarFile(/usr/local/blah/blue/./WEB-INF/lib/servlet-api.jar) - jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class Based on the this explanation, I need to remove a jar file which has a conflicting Servlet.class. I swear to God, there is no other conflicting jar file, I grepped system wide for Servlet.class, it matched only servlet-api.jar. I also downloaded javaee.jar and replaced it by servlet-api.jar, to same avail. Having tried lot of these stuff, I did not have much to look upto, so set the tomcat logging level to ALL. In the log I could see that it is trying to check for Servlet.class in each and every jar it is loading until it finds servlet-api.jar and throws "jar not loaded" message as soon as it finds servlet-api.jar. See below: FINE: Checking for javax/servlet/Servlet.class Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappLoader setRepositories FINE: Deploy JAR /WEB-INF/lib/servlet-api.jar to /usr/local/blah/blue/./WEB-INF/lib/servlet-api.jar Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappClassLoader addJar FINE: addJar(/WEB-INF/lib/servlet-api.jar) Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappClassLoader validateJarFile FINE: Checking for javax/servlet/Servlet.class Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappClassLoader validateJarFile INFO: validateJarFile(/usr/local/blah/blue/./WEB-INF/lib/servlet-api.jar) - jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class Jan 2, 2011 7:39:33 AM org.apache.catalina.loader.WebappLoader setRepositories Please note however, that Tomcat starts successfully! And as soon as I hit the URL on the browser, I get blank page(this may be in my case only, I guess 'cuz of my web.xml, sorta different from most. Other people on the internet have got Error 404 instead.) 
with following log statements(at finest level) Jan 2, 2011 9:40:01 AM org.apache.catalina.connector.CoyoteAdapter parseSessionCookiesId FINE: Requested cookie session id is 0FBA716E3F9B0147C3AF7ABAE3B1C27B Jan 2, 2011 9:40:01 AM org.apache.catalina.authenticator.AuthenticatorBase invoke FINE: Security checking request GET /login.jsp Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: Checking constraint 'SecurityConstraint[protected]' against GET /login.jsp --> false Jan 2, 2011 9:40:01 AM org.apache.catalina.realm.RealmBase findSecurityConstraints FINE: No applicable constraint located Jan 2, 2011 9:40:01 AM org.apache.catalina.authenticator.AuthenticatorBase invoke FINE: Not subject to any constraint Jan 2, 2011 9:40:01 AM org.apache.catalina.core.StandardWrapper allocate FINEST: Returning non-STM instance I'm not sure if the above log message is important, but I'm for all-out disclosure here. One interesting thing though, I manually created a dummy jsp file containing only "helloooo" just outside WEB-INF folder(no security constraints for this file). This file was accessible and could be displayed. But, all my jsp's and classes are inside WEB-INF(ofcourse). Sick and tired of this issue, please help me solve it. I've already spent 20-24 hours on this unsuccessfully. Any pointers directions hints leads?
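    For reference, the validateJarFile INFO message above is Tomcat declining to load a copy of the Servlet API bundled inside the webapp, because the container already provides that API. A minimal cleanup sketch using the path from the log (back the files up first if unsure):

    ```bash
    # The container supplies the Servlet/JSP APIs, so the webapp's own copies
    # should not sit in WEB-INF/lib.
    rm /usr/local/blah/blue/WEB-INF/lib/servlet-api.jar
    rm /usr/local/blah/blue/WEB-INF/lib/jsp-api.jar   # if present
    ```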

    Read the article

  • The counter doesn't seem to increase whenever the EditText changes

    - by Mabulhuda
    Im using Edit-texts and i need when ever an edit-text is changed to increment a counter by one but the counter isnt working , I mean the app starts and everything but the counter doesnt seem to change please help here is the code public class Numersys extends Activity implements TextWatcher { EditText mark1 ,mark2, mark3,mark4,mark5,mark6 , hr1 ,hr2,hr3,hr4,hr5,hr6; EditText passed, currentavg; TextView tvnewavg ; Button calculate; double marks , curAVG , NewAVG ; String newCumAVG; int counter , hrs , curHr , NewHr; @Override protected void onCreate(Bundle savedInstanceState) { // TODO Auto-generated method stub super.onCreate(savedInstanceState); setContentView(R.layout.numersys); mark1=(EditText)findViewById(R.id.mark1n); mark2=(EditText)findViewById(R.id.mark2n); mark3=(EditText)findViewById(R.id.mark3n); mark4=(EditText)findViewById(R.id.mark4n); mark5=(EditText)findViewById(R.id.mark5n); mark6=(EditText)findViewById(R.id.mark6n); hr1=(EditText)findViewById(R.id.ethour1); hr2=(EditText)findViewById(R.id.ethour2); hr3=(EditText)findViewById(R.id.ethour3); hr4=(EditText)findViewById(R.id.ethour4); hr5=(EditText)findViewById(R.id.ethour5); hr6=(EditText)findViewById(R.id.ethour6); passed=(EditText)findViewById(R.id.etPassCn); currentavg=(EditText)findViewById(R.id.etCavgn); tvnewavg=(TextView)findViewById(R.id.tvcAVGn); mark1.addTextChangedListener(this); mark2.addTextChangedListener(this); mark3.addTextChangedListener(this); mark4.addTextChangedListener(this); mark5.addTextChangedListener(this); mark6.addTextChangedListener(this); calculate=(Button)findViewById(R.id.bAvgCalcn); @Override public void onClick(View arg0) { // TODO Auto-generated method stub switch(arg0.getId()) { case 0: break; case 1: hrs=Integer.valueOf(hr1.getText().toString()); marks=Double.valueOf(mark1.getText().toString())*Integer.valueOf(hr1.getText().toString()); curHr=Integer.valueOf(passed.getText().toString()); curAVG=Double.valueOf(currentavg.getText().toString())*curHr; NewHr= curHr+hrs; NewAVG= (marks+curAVG)/NewHr; break; case 2: hrs=Integer.valueOf(hr1.getText().toString())+Integer.valueOf(hr2.getText().toString()); marks=Double.valueOf(mark1.getText().toString())*Integer.valueOf(hr1.getText().toString()) +Double.valueOf(mark2.getText().toString())*Integer.valueOf(hr2.getText().toString()); curHr=Integer.valueOf(passed.getText().toString()); curAVG=Double.valueOf(currentavg.getText().toString())*curHr; NewHr= curHr+hrs; NewAVG= (marks+curAVG)/NewHr; break; case 3: hrs=Integer.valueOf(hr1.getText().toString())+Integer.valueOf(hr2.getText().toString()) +Integer.valueOf(hr3.getText().toString()); marks=Double.valueOf(mark1.getText().toString())*Integer.valueOf(hr1.getText().toString()) +Double.valueOf(mark2.getText().toString())*Integer.valueOf(hr2.getText().toString()) +Double.valueOf(mark3.getText().toString())*Integer.valueOf(hr3.getText().toString()); curHr=Integer.valueOf(passed.getText().toString()); curAVG=Double.valueOf(currentavg.getText().toString())*curHr; NewHr= curHr+hrs; NewAVG= (marks+curAVG)/NewHr; break; case R.id.bAvgCalcn: newCumAVG=String.valueOf(NewAVG); tvnewavg.setText(newCumAVG); } } }); } @Override public void afterTextChanged(Editable arg0) { // TODO Auto-generated method stub } @Override public void beforeTextChanged(CharSequence arg0, int arg1, int arg2, int arg3) { // TODO Auto-generated method stub } @Override public void onTextChanged(CharSequence arg0, int arg1, int arg2, int arg3) { // TODO Auto-generated method stub if(mark1.hasFocus()) { counter = counter+1; } 
if(mark2.hasFocus()) { counter = counter+1; } if(mark3.hasFocus()) { counter = counter+1; } if(mark4.hasFocus()) { counter = counter+1; } if(mark5.hasFocus()) { counter = counter+1; } if(mark6.hasFocus()) { counter = counter+1; } }
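    For illustration, a minimal sketch of an alternative wiring (my own code, not from the post): attach one shared TextWatcher whose onTextChanged increments the counter directly, so the count no longer depends on which EditText happens to have focus.

    ```java
    // Hypothetical sketch: one watcher instance attached to every mark field.
    TextWatcher countingWatcher = new TextWatcher() {
        @Override public void beforeTextChanged(CharSequence s, int start, int count, int after) { }
        @Override public void onTextChanged(CharSequence s, int start, int before, int count) {
            counter = counter + 1;  // fires for every text change in any attached field
        }
        @Override public void afterTextChanged(Editable s) { }
    };
    mark1.addTextChangedListener(countingWatcher);
    mark2.addTextChangedListener(countingWatcher);
    // ...and likewise for mark3 through mark6
    ```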

    Read the article

  • WCF via SSL connectivity problems

    - by Brett Widmeier
    Hello, I am hosting a WCF service from inside a Windows service using WAS. When I set the service to listen on 127.0.0.1, I have connectivity from my local machine as well as from my network. However, when I set it to listen on my outbound interface port 443, I can no longer even see the wsdl by connecting with a browser. Strangely, I can connect to the service by using telnet. The cert I am using was generated for my interface by a CA, and I have successfully used this exact cert with this service before. When checking the application log, I see that the service starts without error and is listening on the correct interface. From this information, it seems to me that the config file is in a valid state, but somehow misconfigured for what I want. I have, however, previously deployed this same setup on other sites using this config file. In case it is helpful, below is my config file. Any thoughts? <!--<system.diagnostics> <sources> <source name="System.ServiceModel" switchValue="Warning, ActivityTracing" propagateActivity="true"> <listeners> <add type="System.Diagnostics.DefaultTraceListener" name="Default"> <filter type="" /> </add> <add name="ServiceModelTraceListener"> <filter type="" /> </add> </listeners> </source> </sources> <sharedListeners> <add initializeData="app_tracelog.svclog" type="System.Diagnostics.XmlWriterTraceListener, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="ServiceModelTraceListener" traceOutputOptions="Timestamp"> <filter type="" /> </add> </sharedListeners> </system.diagnostics>--> <appSettings/> <connectionStrings/> <system.serviceModel> <!--<diagnostics> <messageLogging logEntireMessage="true" logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" maxMessagesToLog ="1000" maxSizeOfMessageToLog="524288"/> </diagnostics>--> <bindings> <basicHttpBinding> <binding name="basicHttps"> <security mode="Transport"> <transport clientCredentialType="None"/> <message /> </security> </binding> </basicHttpBinding> </bindings> <services> <service behaviorConfiguration="ServiceBehavior" name="<fully qualified name of service>"> <endpoint address="" binding="basicHttpBinding" name="OrdersSoap" contract="<fully qualified name of contract>" bindingNamespace="http://emr.orders.com/WebServices" bindingConfiguration="basicHttps" /> <endpoint binding="mexHttpsBinding" address="mex" contract="IMetadataExchange" /> <host> <baseAddresses> <add baseAddress="https://<external IP>/<name of service>>/" /> </baseAddresses> </host> </service> </services> <behaviors> <serviceBehaviors> <behavior name="ServiceBehavior"> <serviceMetadata httpsGetEnabled="False"/> <serviceDebug includeExceptionDetailInFaults="True" /> <dataContractSerializer maxItemsInObjectGraph="2147483646"/> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel>
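    One thing that may be worth checking (an assumption on my part, given that telnet connects but a browser cannot negotiate HTTPS): whether the certificate is actually bound to the external IP and port 443 in HTTP.SYS. A sketch of the Server 2008 commands, with placeholders for the values:

    ```
    rem List existing SSL bindings; if <external IP>:443 is missing, add one.
    rem certhash is the certificate thumbprint; appid can be any GUID.
    netsh http show sslcert
    netsh http add sslcert ipport=<external IP>:443 certhash=<thumbprint> appid={12345678-1234-1234-1234-123456789012}
    ```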

    Read the article

  • Compiler issues on VC++ 2008 Express: seemingly correct code throws errors

    - by Anthony Clever
    Hi there, I've been trying to get back into coding for a while, so I figured I'd start with some simple SDL, now, without the file i/o, this compiles fine, but when I throw in the stdio code, it starts throwing errors. This I'm not sure about, I don't see any problem with the code itself, however, like I said, I might as well be a newbie, and figured I'd come here to get someone with a little more experience with this type of thing to look at it. I guess my question boils down to: "Why doesn't this compile under Microsoft's Visual C++ 2008 Express?" I've attached the error log at the bottom of the code snippet. Thanks in advance for any help. #include "SDL/SDL.h" #include "stdio.h" int main(int argc, char *argv[]) { FILE *stderr; FILE *stdout; stderr = fopen("stderr", "wb"); stdout = fopen("stdout", "wb"); SDL_Init(SDL_INIT_EVERYTHING); fprintf(stdout, "SDL INITIALIZED SUCCESSFULLY\n"); SDL_Quit(); fprintf(stderr, "SDL QUIT.\n"); fclose(stderr); fclose(stdout); return 0; } /* 1>------ Build started: Project: opengl_crap, Configuration: Debug Win32 ------ 1>Compiling... 1>main.cpp 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(6) : error C2090: function returns array 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(6) : error C2528: '__iob_func' : pointer to reference is illegal 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(6) : error C2556: 'FILE ***__iob_func(void)' : overloaded function differs only by return type from 'FILE *__iob_func(void)' 1> c:\program files\microsoft visual studio 9.0\vc\include\stdio.h(132) : see declaration of '__iob_func' 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(7) : error C2090: function returns array 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(7) : error C2528: '__iob_func' : pointer to reference is illegal 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(9) : error C2440: '=' : cannot convert from 'FILE *' to 'FILE ***' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(10) : error C2440: '=' : cannot convert from 'FILE *' to 'FILE ***' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(13) : error C2664: 'fprintf' : cannot convert parameter 1 from 'FILE ***' to 'FILE *' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(15) : error C2664: 'fprintf' : cannot convert parameter 1 from 'FILE ***' to 'FILE *' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(17) : error C2664: 'fclose' : cannot convert parameter 1 from 'FILE ***' to 'FILE *' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or 
function-style cast 1>c:\documents and settings\owner\my documents\visual studio 2008\projects\opengl_crap\opengl_crap\main.cpp(18) : error C2664: 'fclose' : cannot convert parameter 1 from 'FILE ***' to 'FILE *' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1>Build log was saved at "file://c:\Documents and Settings\Owner\My Documents\Visual Studio 2008\Projects\opengl_crap\opengl_crap\Debug\BuildLog.htm" 1>opengl_crap - 11 error(s), 0 warning(s) ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== */
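    For reference: under MSVC, stderr and stdout are macros defined by <stdio.h> (they expand to slots returned by __iob_func, which is exactly the name appearing in the errors above), so declaring local FILE* variables with those names is what breaks the build. A minimal sketch with the locals renamed (the log file names are placeholders of mine):

    ```c
    #include "SDL/SDL.h"
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        /* stderr/stdout are reserved by <stdio.h>; use different names. */
        FILE *errlog = fopen("stderr.log", "wb");
        FILE *outlog = fopen("stdout.log", "wb");

        SDL_Init(SDL_INIT_EVERYTHING);
        if (outlog) fprintf(outlog, "SDL INITIALIZED SUCCESSFULLY\n");

        SDL_Quit();
        if (errlog) fprintf(errlog, "SDL QUIT.\n");

        if (errlog) fclose(errlog);
        if (outlog) fclose(outlog);
        return 0;
    }
    ```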

    Read the article

  • Why are my USB 2.0 devices hanging Windows XP?

    - by BenAlabaster
    Background on the machine I'm having a problem with: The machine was inherited and appears to be circa 2003 (there's a date stamp on the power supply which leads me to this conclusion). I've got it set up as a Skype terminal for my 2 year old to keep in touch with her grandparents and other members of the family - which everyone loves. It has a DFI CM33-TL/G ATX (identified using SiSoft Sandra) motherboard hosting an Intel Celeron 1.3GHz CPU, 768Mb PC133 SDRAM, a D-LINK WDA-2320 54G Wi-Fi network card and a generic USB 2.0 expansion board based on the NEC uPD720102 chipset containing 3 external and 1 internal USB sockets. It's also hosting a 1.44Mb floppy drive on FDD0, a new 80Gb Western Digital hard drive running as master on IDE0 and a Panasonic DVD+/-RW running as master on IDE1. All this is sitting in a slimline case running off a Macron Power MPT-135 135W Flex power supply. The motherboard is running a version of Award BIOS 05/24/2002-601T-686B-6A6LID4AC-00. Could this be updated? If so, from where? I've raked through the manufacturer's website but can't find any hint of downloads for either drivers or BIOS updates. The hard disk is freshly formatted and built with Windows XP Professional/Service Pack 3 and is up to date with all current patches. In addition to Windows XP, the only other software it's running is Skype 4.1 (4.2 hangs the whole machine as soon as it starts up, requiring a hard boot to recover). It's got a Daytek MV150 15" touch screen hooked up to the on board VGA and COM1 sockets with the most current drivers from the Daytek website and the most current version of ELO-Touchsystems drivers for the touch component. The webcam is a Logitech Webcam C200 with the latest drivers from the Logitech website. The problem: If I hook any devices to the USB 2.0 sockets, it hangs the whole machine and I have to hard boot it to get it back up. If I have any devices attached to the USB 2.0 sockets when I boot up, it hangs before Windows gets to the login prompt and I have to hard boot it to recover. Workarounds found: I can plug the same devices into the on board USB 1.0 sockets and everything works fine, albeit at reduced performance. I've tried 3 different kinds of USB thumb drives, 3 different makes/models of webcams and my iPhone all with the same effect. They're recognized and don't hang the machine when I hook them to the USB 1.0 but if I hook them to the USB 2.0 ports, the machine hangs within a couple of seconds of recognizing the devices were connected. Attempted solutions: I've seen suggestions that this could be a power problem - that the PSU just doesn't have the wattage to drive these ports. While I'm doubtful this is the problem [after all the motherboard has the same standard connector regardless of the PSU wattage], I tried disabling all the on board devices that I'm not using - on board LAN, the second COM port, the AGP connector etc. through the BIOS in what I'm sure is a futile attempt to reduce the power consumption... I also modified the ACPI and power management settings. It didn't have any noticeable affect, although it didn't do any harm either. Could the wattage of the PSU really cause this problem? If it can, is there anything I need to be aware of when replacing it or do I just need to make sure it's got a higher wattage than the current one? My interpretation was that the wattage only affected the number of drives you could hook up to the power connectors, is that right? 
I've installed the USB card in another machine and it works without issue, so it's not a problem with the USB card itself, and Windows says the card is installed and working correctly... right up until I connect a device to it. The only thing I haven't done which I only just thought of while writing this essay is trying the USB 2.0 card in a different PCI slot, or re-ordering the wi-fi and USB cards in the slots... although I'm not sure if this will make any difference - does anyone have any experience that would suggest this might work? Other thoughts/questions: Perhaps this is an incompatibility between the USB 2.0 card and the BIOS, would re-flashing the BIOS with a newer version help? Do I need to be able to identify the manufacturer of the motherboard in order to be able to find a BIOS edition specific for this motherboard or will any version of Award BIOS function in its place? Question: Does anyone have any ideas that could help me get my USB 2.0 devices hooked up to this machine?

    Read the article

  • Qt, MSVC, and /Zc:wchar_t- == I want to blow up the world

    - by Noah Roberts
    So Qt is compiled with /Zc:wchar_t- on windows. What this means is that instead of wchar_t being a typedef for some internal type (__wchar_t I think) it becomes a typedef for unsigned short. The really cool thing about this is that the default for MSVC is the opposite, which of course means that the libraries you're using are likely compiled with wchar_t being a different type than Qt's wchar_t. This doesn't become an issue of course until you try to use something like std::wstring in your code; especially when one or more libraries have functions that accept it as parameters. What effectively happens is that your code happily compiles but then fails to link because it's looking for definitions using std::wstring<unsigned short...> but they only contain definitions expecting std::wstring<__wchar_t...> (or whatever). So I did some web searching and ran into this link: http://bugreports.qt.nokia.com/browse/QTBUG-6345 Based on the statement by Thiago Macieira, "Sorry, we will not support building Qt like this," I've been worried that fixing Qt to work like everything else might cause some problem and have been trying to avoid it. We recompiled all of our support libraries with the /Zc:wchar_t- flag and have been fairly content with that until a couple days ago when we started trying to port over (we're in the process of switching from Wx to Qt) some serialization code. Because of how win32 works, and because Wx just wraps win32, we've been using std::wstring to represent string data with the intent of making our product as i18n ready as possible. We did some testing and Wx did not work with multibyte characters when trying to print special stuff (even not so special stuff like the degree symbol was an issue). I'm not so sure that Qt has this problem since QString isn't just a wrapper to the underlying _TCHAR type but is a Unicode monster of some sort. At any rate, the serialization library in boost has compiled parts. We've attempted to recompile boost with /Zc:wchar_t- but so far our attempts to tell bjam to do this have gone unheeded. We're at an impasse. From where I'm sitting I have three options: Recompile Qt and hope it works with /Zc:wchar_t. There's some evidence around the web that others have done this but I have no way of predicting what will happen. All attempts to ask Qt people on forums and such have gone unanswered. Hell, even in that very bug report someone asks why and it just sat there for a year. Keep fighting with bjam until it listens. Right now I've got someone under me doing that and I have more experience fighting with things to get what I want but I do have to admit to getting rather tired of it. I'm also concerned that I'll KEEP running into this issue just because Qt wants to be a c**t. Stop using wchar_t for anything. Unfortunately my i18n experience is pretty much 0 but it seems to me that I just need to find the right to/from function in QString (it has a BUNCH) to encode the Unicode into 8-bytes and visa-versa. UTF8 functions look promising but I really want to be sure that no data will be lost if someone from Zimbabfuckegypt starts writing in their own language and the documentation in QString frightens me a little into thinking that could happen. Of course, I could always run into some library that insists I use wchar_t and then I'm back to 1 or 2 but I rather doubt that would happen. So, what's my question... Which of these options is my best bet? 
Is Qt going to eventually cause me to gouge out my own eyes because I decided to compile it with /Zc:wchar_t anyway? What's the magic incantation to get boost to build with /Zc:wchar_t- and will THAT cause permanent mental damage? Can I get away with just using the standard 8-bit (well, 'common' anyway) character classes and be i18n compliant/ready? How do other Qt developers deal with this mess?
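    Regarding option 3, a rough sketch of what converting at the Qt boundary could look like (the helper names are my own; QString::fromUtf8 and QString::toUtf8 are documented members, and a round trip through UTF-8 is lossless for any Unicode code point):

    ```cpp
    #include <QString>
    #include <QByteArray>
    #include <string>

    // Keep the application's own 8-bit strings and convert only at the Qt edge,
    // sidestepping the wchar_t ABI question entirely.
    std::string toUtf8Bytes(const QString &s)
    {
        const QByteArray bytes = s.toUtf8();   // UTF-8 encoding of the string
        return std::string(bytes.constData(), bytes.size());
    }

    QString fromUtf8Bytes(const std::string &s)
    {
        return QString::fromUtf8(s.data(), static_cast<int>(s.size()));
    }
    ```

    For option 2, passing cxxflags=/Zc:wchar_t- on the bjam command line is the usual way to forward an extra compiler switch when building Boost, though I have not verified it against this particular Boost version.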

    Read the article

  • Linux server: Apache httpd processes drive I/O wait close to 100% and lock up the server

    - by user3682065
    For about 5 days now, and seemingly out of the blue, my linux server has started locking up from time to time. The pattern is always the same as far as I can tell from top and iotop commands around the time it starts happening: One or more httpd processes (usually one) hang and start using up 100% of CPU power, the %wa goes close to 100% and in the iotop I see several httpd processes with 99.99% in the IO column. I'm also running an SVN server on this machine through apache and the one way that I've been consistently able to reproduce this is to do an SVN commit of new files or an SVN update from the repository on this server (I am the only one using this SVN repository). This will always reproduce this scenario successfully, but until very recently I had no problems at all checking in/out of SVN. But sometimes it just happens for no detectable reason at all it seems. So it seems like there is some issue with my Apache that leads it to have processes use up a lot of read/write upon certain triggers. I was wondering if anyone could help me uncover that issue. EDIT: OK now it's happening again: This is top: [root@server ~]# top top - 10:56:54 up 2:59, 5 users, load average: 171.46, 70.35, 27.01 Tasks: 328 total, 2 running, 326 sleeping, 0 stopped, 0 zombie Cpu(s): 1.9%us, 2.0%sy, 0.0%ni, 0.0%id, 96.1%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 2021144k total, 1968192k used, 52952k free, 2500k buffers Swap: 4194288k total, 2938584k used, 1255704k free, 39008k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 10390 apache 20 0 2774m 936m 6200 D 2.0 47.4 1:52.27 httpd 2149 root 20 0 927m 13m 1040 S 0.7 0.7 1:50.46 namecoind 11 root 20 0 0 0 0 R 0.3 0.0 0:30.10 events/0 23 root 20 0 0 0 0 S 0.3 0.0 0:17.88 kblockd/1 2049 root 20 0 382m 4932 2880 D 0.3 0.2 0:03.67 httpd 2144 root 20 0 1702m 69m 1164 S 0.3 3.5 5:19.68 bitcoind 6325 root 20 0 15164 1100 656 R 0.3 0.1 0:11.09 top 10311 apache 20 0 387m 9496 7320 D 0.3 0.5 0:01.89 httpd 10313 apache 20 0 391m 10m 7364 D 0.3 0.5 0:02.40 httpd 10466 apache 20 0 399m 12m 7392 D 0.3 0.7 0:02.41 httpd 10599 apache 20 0 391m 9324 7340 D 0.3 0.5 0:00.15 httpd 10628 apache 20 0 384m 7620 4052 D 0.3 0.4 0:00.01 httpd 10633 apache 20 0 384m 7048 3504 D 0.3 0.3 0:00.01 httpd 10634 apache 20 0 384m 8012 4048 D 0.3 0.4 0:00.02 httpd 10638 apache 20 0 400m 22m 9.8m D 0.3 1.1 0:01.93 httpd 10640 apache 20 0 385m 8288 4028 D 0.3 0.4 0:00.03 httpd 10641 apache 20 0 401m 21m 6376 D 0.3 1.1 0:01.45 httpd 10759 apache 20 0 385m 8816 3480 D 0.3 0.4 0:01.45 httpd 10773 apache 20 0 384m 8044 3464 D 0.3 0.4 0:00.02 httpd This is an iotop snapshot: Total DISK READ: 5.93 M/s | Total DISK WRITE: 0.00 B/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND 10732 be/4 apache 3.76 K/s 0.00 B/s 0.00 % 58.48 % httpd 876 be/3 root 0.00 B/s 52.68 K/s 0.00 % 52.98 % [jbd2/dm-1-8] 10906 be/4 root 124.17 K/s 0.00 B/s 0.00 % 23.03 % sh -c [ -x /usr/local/psa/admin/sbin/backupmng ] && /usr/local/psa/admin/sbin/backupmng >/dev/null 2>&1 2156 be/4 root 206.94 K/s 0.00 B/s 0.00 % 21.15 % bitcoind 10904 be/4 mysql 0.00 B/s 0.00 B/s 0.00 % 18.94 % mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock 10773 be/4 apache 7.53 K/s 0.00 B/s 0.00 % 14.77 % httpd 10641 be/4 apache 15.05 K/s 0.00 B/s 0.00 % 11.57 % httpd 10399 be/4 apache 1057.29 K/s 0.00 B/s 43.16 % 10.56 % httpd 10682 be/4 sw-cp-se 158.03 K/s 0.00 B/s 0.00 % 7.45 % sw-engine-cgi -c /usr/local/psa/admin/conf/php.ini -d 
auto_prepend_file=auth.php3 -u psaadm 10774 be/4 apache 3.76 K/s 0.00 B/s 0.00 % 6.53 % httpd 10624 be/4 apache 0.00 B/s 0.00 B/s 0.00 % 5.53 % httpd 10356 be/4 apache 899.26 K/s 0.00 B/s 35.52 % 4.01 % httpd 10795 be/4 apache 0.00 B/s 0.00 B/s 0.00 % 3.93 % httpd 10804 be/4 apache 7.53 K/s 0.00 B/s 0.00 % 3.08 % httpd 4379 be/4 root 2.89 M/s 0.00 B/s 99.99 % 0.00 % namecoind 10619 be/4 apache 462.80 K/s 0.00 B/s 7.80 % 0.00 % httpd 10636 be/4 apache 3.76 K/s 0.00 B/s 0.00 % 0.00 % httpd 10716 be/4 mysql 105.35 K/s 0.00 B/s 5.92 % 0.00 % mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock 1988 be/4 root 18.81 K/s 0.00 B/s 0.00 % 0.00 % spamd_full.sock I also ran lsof -p for pid 10390 which was way up top under the top command and this is the bottom line where I can sort of see what request this was and it says CLOSE_WAIT: httpd 10390 apache 34u IPv6 315879 0t0 TCP default-domain.com:https->crawl-66-249-65-91.googlebot.com:42907 (CLOSE_WAIT) I'm still not sure what exactly is causing this all to happen though? I killed that service but %wa and load average remain high, I also stopped mysqld and other services. It really only goes down once I stop httpd altogether, and even then I can't start it without finding remaining hanging httpd processes via "netstat -tulpn", killing those or doing "killall -9 httpd" and after waiting a while for it to cycle through all those then doing /etc/init.d/httpd start
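    As a next diagnostic step (assuming strace is installed), attaching to one of the httpd processes stuck in the D state and checking the kernel ring buffer would show which file, socket, or device the blocked I/O is actually waiting on:

    ```bash
    # Attach to one of the hung httpd PIDs seen in top/iotop (10390 above) and
    # watch its file-related syscalls, then look for disk/filesystem errors.
    strace -p 10390 -f -tt -e trace=file,read,write
    dmesg | tail -50
    ```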

    Read the article

  • Set up a Linux box for hosting, A-Z

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via vpn only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers, and document my progress/pitfalls. Hopefully someday this will help someone down the line. The details: CentOS 5.5 x86_64 httpd: Apache/2.2.3 mysql: 5.0.77 (to be upgraded) php: 5.1 (to be upgraded) The requirements: SECURITY!! Secure file transfer Secure client access (SSL Certs and CA) Secure data storage Virtualhosts/multiple subdomains Local email would be nice, but not critical The Steps: Download latest CentOS DVD-iso (torrent worked great for me). Install CentOS: While going through the install, I checked the Server Components option thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea. Basic config: Setup users, networking/ip address etc. Yum update/upgrade. Upgrade PHP/MySQL: To upgrade PHP and MySQL to the latest versions, I had to look to another repo outside CentOS. IUS looks great and I'm happy I found it! Add IUS repository to our package manager cd /tmp wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm yum list | grep -w \.ius\. # list all the packages in the IUS repository; use this to find PHP/MySQL version and libraries you want to install Remove old version of PHP and install newer version from IUS rpm -qa | grep php # to list all of the installed php packages we want to remove yum shell # open an interactive yum shell remove php-common php-mysql php-cli #remove installed PHP components install php53 php53-mysql php53-cli php53-common #add packages you want transaction solve #important!! checks for dependencies transaction run #important!! does the actual installation of packages. [control+d] #exit yum shell php -v PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45) Upgrade MySQL from IUS repository /etc/init.d/mysqld stop rpm -qa | grep mysql # to see installed mysql packages yum shell remove mysql mysql-server #remove installed MySQL components install mysql51 mysql51-server mysql51-devel transaction solve #important!! checks for dependencies transaction run #important!! does the actual installation of packages. 
[control+d] #exit yum shell service mysqld start mysql -v Server version: 5.1.42-ius Distributed by The IUS Community Project Upgrade instructions courtesy of IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide Install rssh (restricted shell) to provide scp and sftp access, without allowing ssh login cd /tmp wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm useradd -m -d /home/dev -s /usr/bin/rssh dev passwd dev Edit /etc/rssh.conf to grant access to SFTP to rssh users. vi /etc/rssh.conf Uncomment or add: allowscp allowsftp This allows me to connect to the machine via SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). rssh instructions appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html Set up virtual interfaces ifconfig eth1:1 192.168.1.3 up #start up the virtual interface cd /etc/sysconfig/network-scripts/ cp ifcfg-eth1 ifcfg-eth1:1 #copy default script and match name to our virtual interface vi ifcfg-eth1:1 #modify eth1:1 script #ifcfg-eth1:1 | modify so it looks like this: DEVICE=eth1:1 IPADDR=192.168.1.3 NETMASK=255.255.255.0 NETWORK=192.168.1.0 ONBOOT=yes NAME=eth1:1 Add more Virtual interfaces as needed by repeating. Because of the ONBOOT=yes line in the ifcfg-eth1:1 file, this interface will be brought up when the system boots, or the network starts/restarts. service network restart Shutting down interface eth0: [ OK ] Shutting down interface eth1: [ OK ] Shutting down loopback interface: [ OK ] Bringing up loopback interface: [ OK ] Bringing up interface eth0: [ OK ] Bringing up interface eth1: [ OK ] ping 192.168.1.3 64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.105 ms And this is where I'm at. I will keep editing this as I make progress. Any tips on how to Configure virtual interfaces/ip based virtual hosts for SSL, setting up a CA, or anything else would be appreciated.
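    For the SSL / virtual host items still on the list, a rough sketch of an IP-based SSL vhost tied to one of the virtual interfaces configured above (the server name and certificate paths are placeholders of mine, and mod_ssl must be installed):

    ```apache
    # One SSL virtual host per virtual interface/IP.
    <VirtualHost 192.168.1.3:443>
        ServerName app1.example.internal
        DocumentRoot /var/www/app1
        SSLEngine on
        SSLCertificateFile    /etc/pki/tls/certs/app1.crt
        SSLCertificateKeyFile /etc/pki/tls/private/app1.key
        SSLCACertificateFile  /etc/pki/tls/certs/internal-ca.crt
    </VirtualHost>
    ```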

    Read the article

  • Passing a variable from Excel 2007 Custom Task Pane to Hosted PowerShell

    - by Uros Calakovic
    I am testing PowerShell hosting using C#. Here is a console application that works: using System; using System.Collections; using System.Collections.Generic; using System.Collections.ObjectModel; using System.Management.Automation; using System.Management.Automation.Runspaces; using Microsoft.Office.Interop.Excel; namespace ConsoleApplication3 { class Program { static void Main() { Application app = new Application(); app.Visible = true; app.Workbooks.Add(XlWBATemplate.xlWBATWorksheet); Runspace runspace = RunspaceFactory.CreateRunspace(); runspace.Open(); runspace.SessionStateProxy.SetVariable("Application", app); Pipeline pipeline = runspace.CreatePipeline("$Application"); Collection<PSObject> results = null; try { results = pipeline.Invoke(); foreach (PSObject pob in results) { Console.WriteLine(pob); } } catch (RuntimeException re) { Console.WriteLine(re.GetType().Name); Console.WriteLine(re.Message); } } } } I first create an Excel.Application instance and pass it to the hosted PowerShell instance as a varible named $Application. This works and I can use this variable as if Excel.Application was created from within PowerShell. I next created an Excel addin using VS 2008 and added a user control with two text boxes and a button to the addin (the user control appears as a custom task pane when Excel starts). The idea was this: when I click the button a hosted PowerShell instance is created and I can pass to it the current Excel.Application instance as a variable, just like in the first sample, so I can use this variable to automate Excel from PowerShell (one text box would be used for input and the other one for output. Here is the code: using System; using System.Windows.Forms; using System.Management.Automation; using System.Management.Automation.Runspaces; using System.Collections.ObjectModel; using Microsoft.Office.Interop.Excel; namespace POSHAddin { public partial class POSHControl : UserControl { public POSHControl() { InitializeComponent(); } private void btnRun_Click(object sender, EventArgs e) { txtOutput.Clear(); Microsoft.Office.Interop.Excel.Application app = Globals.ThisAddIn.Application; Runspace runspace = RunspaceFactory.CreateRunspace(); runspace.Open(); runspace.SessionStateProxy.SetVariable("Application", app); Pipeline pipeline = runspace.CreatePipeline( "$Application | Get-Member | Out-String"); app.ActiveCell.Value2 = "Test"; Collection<PSObject> results = null; try { results = pipeline.Invoke(); foreach (PSObject pob in results) { txtOutput.Text += pob.ToString() + "-"; } } catch (RuntimeException re) { txtOutput.Text += re.GetType().Name; txtOutput.Text += re.Message; } } } } The code is similar to the first sample, except that the current Excel.Application instance is available to the addin via Globals.ThisAddIn.Application (VSTO generated) and I can see that it is really a Microsoft.Office.Interop.Excel.Application instance because I can use things like app.ActiveCell.Value2 = "Test" (this actually puts the text into the active cell). But when I pass the Excel.Application instance to the PowerShell instance what gets there is an instance of System.__ComObject and I can't figure out how to cast it to Excel.Application. When I examine the variable from PowerShell using $Application | Get-Member this is the output I get in the second text box: TypeName: System.__ComObject Name MemberType Definition ---- ---------- ---------- CreateObjRef Method System.Runtime.Remoting.ObjRef CreateObj... 
Equals Method System.Boolean Equals(Object obj) GetHashCode Method System.Int32 GetHashCode() GetLifetimeService Method System.Object GetLifetimeService() GetType Method System.Type GetType() InitializeLifetimeService Method System.Object InitializeLifetimeService() ToString Method System.String ToString() My question is how can I pass an instance of Microsoft.Office.Interop.Excel.Application from a VSTO generated Excel 2007 addin to a hosted PowerShell instance, so I can manipulate it from PowerShell? (I have previously posted the question in the Microsoft C# forum without an answer)
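    One thing that may be worth ruling out (an assumption, not a confirmed fix): Office COM servers live in a single-threaded apartment, and PowerShell 2.0 allows a hosted runspace to be pinned to an STA thread before it is opened. A sketch of that change to the button handler:

    ```csharp
    // Hypothetical sketch: give the hosted runspace an STA thread of its own
    // before handing it the Excel Application object.
    Runspace runspace = RunspaceFactory.CreateRunspace();
    runspace.ApartmentState = System.Threading.ApartmentState.STA;
    runspace.ThreadOptions = PSThreadOptions.UseNewThread;
    runspace.Open();
    runspace.SessionStateProxy.SetVariable("Application", app);
    ```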

    Read the article

  • FullCalendar displaying last week's events below this week's events

    - by Brian
    I am developing an application to manage an On-Call calendar, using FullCalendar to render it. Most events are 1 week long, starting at 8:00 AM Tuesday and ending the following Tuesday at 8:00 AM. Another event, presumably with a different person on-call, will follow that event. During a hallway usability test, someone commented that the month calendar view was difficult to read because the previous weeks event is not at the top of the stack, instead rendering below the event that starts during that week. When being viewed, the eye perceives that it should go down 1 line to view the remaining timeline because the event from last week is there, instead of moving down to the following week. I investigated what I believe to be the problem: function segCmp(a, b) { return (b.msLength - a.msLength) * 100 + (a.event.start - b.event.start); } sorts the events for a row, but uses the length of the event in the calculation. Since the current week's event will have a longer duration, it always get sorted to the top. To test, I changed the start dates to Wednesday so the durations are closer. This cause the events to render how I would expect, with last weeks events at the top and this weeks at the bottom. I thought that if one of the events in the compare doesn't start that week, then it should only be compared based on the start times. I modified the function to be: function segCmp(a, b) { if (a.isStart == false || b.isStart == false) { return (a.event.start - b.event.start); } return (b.msLength - a.msLength) * 100 + (a.event.start - b.event.start); } This solved my problem, and the rendering now looks good - and passes the hallway test. What I don't know is if this would have an impact in other areas. I have taken a look at the other views (month, week, day) and they all seem to be rendering properly as well. I am just not familiar with FullCalendar enough to file a bug or feature request on this, or if this would even be considered a bug. I am wondering if what I modified is correct, or if it is not what a better modification would be to fix this issue. Thanks! 
Below I have the json results for what should be displayed: [{"title":"Person 1 - OnCall (OSS On Call)","id":12,"allDay":false,"start":"2010-11-30T15:00:00.0000000Z","end":"2010-12-07T15:00:00.0000000Z","editable":false,"className":"fc-event-title-calendar","url":"/TimeManagement/Edit/12"}, {"title":"Person 2 - OnCall (OSS On Call)","id":13,"allDay":false,"start":"2010-12-07T15:00:00.0000000Z","end":"2010-12-14T15:00:00.0000000Z","editable":false,"className":"fc-event-title-calendar","url":"/TimeManagement/Edit/13"}, {"title":"Person 3 - OnCall (OSS On Call)","id":14,"allDay":false,"start":"2010-12-14T15:00:00.0000000Z","end":"2010-12-21T15:00:00.0000000Z","editable":false,"className":"fc-event-title-calendar","url":"/TimeManagement/Edit/14"}, {"title":"Person 4 - OnCall (OSS On Call)","id":15,"allDay":false,"start":"2010-12-21T15:00:00.0000000Z","end":"2010-12-28T15:00:00.0000000Z","editable":false,"className":"fc-event-title-calendar","url":"/TimeManagement/Edit/15"}, {"title":"Person 5 - OnCall (OSS On Call)","id":16,"allDay":false,"start":"2010-12-28T15:00:00.0000000Z","end":"2011-01-04T15:00:00.0000000Z","editable":false,"className":"fc-event-title-calendar","url":"/TimeManagement/Edit/16"}, {"title":"Person 6 - OnCall (OSS On Call)","id":17,"allDay":false,"start":"2011-01-04T15:00:00.0000000Z","end":"2011-01-11T15:00:00.0000000Z","editable":false,"className":"fc-event-title-calendar","url":"/TimeManagement/Edit/17"}, {"title":"Christmas","id":2,"allDay":true,"start":"2010-12-25T07:00:00.0000000Z","end":null,"editable":false,"className":"fc-event-title-calendar","url":"/TimeManagement/Edit/2"}, {"title":"New Years Eve","id":3,"allDay":true,"start":"2010-12-31T07:00:00.0000000Z","end":null,"editable":false,"className":"fc-event-title-calendar","url":"/TimeManagement/Edit/3"}, {"title":"New Years Day","id":4,"allDay":true,"start":"2011-01-01T07:00:00.0000000Z","end":null,"editable":false,"className":"fc-event-title-calendar","url":"/TimeManagement/Edit/4"}]

    Read the article

  • Java Refuses to Start - Could not reserve enough space for object heap

    - by Randyaa
    Background

    We have a pool of approximately 20 Linux blades. Some are running SuSE, some are running RedHat. ALL share NAS space which contains the following 3 folders:

    /NAS/app/java - a symlink that points to an installation of a Java JDK, currently version 1.5.0_10
    /NAS/app/lib - a symlink that points to a version of our application
    /NAS/data - directory where our output is written

    All our machines have 2 processors (hyperthreaded) with 4GB of physical memory and 4GB of swap space. We limit the number of 'jobs' each machine can process at a given time to 6 (this number likely needs to change, but that does not enter into the current problem, so please ignore it for the time being). Some of our jobs set a max heap size of 512MB, some others reserve a max heap size of 2048MB. Again, we realize we could go over our available memory if 6 jobs started on the same machine with the heap size set to 2048, but to our knowledge this has not yet occurred.

    The Problem

    Once in a while a job will fail immediately with the following message:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    We used to chalk this up to too many jobs running at the same time on the same machine. The problem happened infrequently enough (MAYBE once a month) that we'd just restart it and everything would be fine. The problem has recently gotten much worse. All of our jobs which request a max heap size of 2048m fail immediately almost every time and need to get restarted several times before completing. We've gone out to individual machines and tried executing them manually with the same result.

    Debugging

    It turns out that the problem only exists for our SuSE boxes. The reason it has been happening more frequently is because we've been adding more machines, and the new ones are SuSE. 'cat /proc/version' on the SuSE boxes gives us:

        Linux version 2.6.5-7.244-bigsmp (geeko@buildhost) (gcc version 3.3.3 (SuSE Linux)) #1 SMP Mon Dec 12 18:32:25 UTC 2005

    'cat /proc/version' on the RedHat boxes gives us:

        Linux version 2.4.21-32.0.1.ELsmp ([email protected]) (gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-52)) #1 SMP Tue May 17 17:52:23 EDT 2005

    'uname -a' gives us the following on BOTH types of machines:

        UTC 2005 i686 i686 i386 GNU/Linux

    No jobs are running on the machine, and no other processes are utilizing much memory. All of the processes currently running might be using 100MB total. 'top' currently shows the following:

        Mem: 4146528k total, 3536360k used, 610168k free, 132136k buffers
        Swap: 4194288k total, 0k used, 4194288k free, 3283908k cached

    'vmstat' currently shows the following:

        procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
         r b swpd free buff cache si so bi bo in cs us sy id wa
         0 0 0 610292 132136 3283908 0 0 0 2 26 15 0 0 100 0

    If we kick off a job with the following command line (max heap of 1850MB) it starts fine:

        java/bin/java -Xmx1850M -cp helloworld.jar HelloWorld
        Hello World

    If we bump up the max heap size to 1875MB it fails:

        java/bin/java -Xmx1875M -cp helloworld.jar HelloWorld
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    It's quite clear that the memory currently being used is for buffering/caching, and that's why so little is being displayed as 'free'. What isn't clear is why there is a magical 1850MB line where anything higher means Java can't start. Any explanations would be greatly appreciated.
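    A quick check that narrows this down (standard commands; the path follows the /NAS/app/java symlink described above): confirm whether the shared JDK is a 32-bit build. A 32-bit JVM has to reserve the whole -Xmx heap as one contiguous block of the process's roughly 3GB virtual address space, so the exact ceiling depends on where the kernel and loader place their mappings, not on free physical memory, which would explain why the two distributions hit different limits.

    ```bash
    /NAS/app/java/bin/java -version   # vendor and build information
    file /NAS/app/java/bin/java       # 32-bit vs 64-bit ELF executable
    ulimit -v                         # any per-process virtual memory cap
    ```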

    Read the article

  • Cluster failover and strange gratuitous arp behavior

    - by lazerpld
    I am experiencing a strange Windows 2008 R2 cluster-related issue that is bothering me. I feel that I have come close to what the issue is, but I still don't fully understand what is happening. I have a two-node Exchange 2007 cluster running on two 2008 R2 servers. The Exchange cluster application works fine when running on the "primary" cluster node. The problem occurs when failing over the cluster resource to the secondary node. When failing over the cluster to the "secondary" node, which is on the same subnet as the "primary", the failover initially works OK and the cluster resource continues to work for a couple of minutes on the new node. That means the receiving node does send out a gratuitous ARP reply packet that updates the ARP tables on the network. But after some amount of time (typically within 5 minutes) something updates the ARP tables again, because all of a sudden the cluster service stops answering pings. So basically I start a ping to the Exchange cluster address while it is running on the "primary" node. It works just great. I fail the cluster resource group over to the "secondary" node and I only lose one ping, which is acceptable. The cluster resource still answers for some time after being failed over, and then all of a sudden the ping starts timing out. This tells me that the ARP table is initially updated by the secondary node, but then something (which I haven't found yet) wrongfully updates it again, probably with the primary node's MAC. Why does this happen - has anyone experienced the same problem? The cluster is NOT running NLB, and the problem stops immediately after failing back to the primary node, where there are no problems. Each node is using NIC teaming (Intel) with ALB. Each node is on the same subnet and has its gateway and so on entered correctly as far as I am concerned.
    Edit: I was wondering if it could be related to network binding order, maybe? I have noticed that the only difference I can see from node to node is in the local ARP table: on the "primary" node the ARP table is generated with the cluster address as the source, while on the "secondary" it is generated from the node's own network card. Any input on this?
    Edit: OK, here is the connection layout. Cluster address: A.B.6.208/25. Exchange application address: A.B.6.212/25. Node A: 3 physical NICs; two teamed using Intel's teaming with the address A.B.6.210/25, called public; the last one used for cluster traffic, called private, with 10.0.0.138/24. Node B: 3 physical NICs; two teamed using Intel's teaming with the address A.B.6.211/25, called public; the last one used for cluster traffic, called private, with 10.0.0.139/24. Each node sits in a separate datacenter, connected together; the end switches are Cisco in DC1 and Nexus 5000/2000 in DC2.
    Edit: I have been testing a little more. I have now created an empty application on the same cluster and given it another IP address on the same subnet as the Exchange application. After failing this empty application over, I see exactly the same problem occurring. After one or two minutes, clients on other subnets cannot ping the virtual IP of the application. But while clients on other subnets cannot, another server from another cluster on the same subnet has no trouble pinging it. If I then fail back to the original state, the situation is reversed: now clients on the same subnet cannot ping it, and clients on other subnets can.
    We have another cluster set up the same way and on the same subnet, with the same Intel network cards, the same drivers and the same teaming settings. There we are not seeing this, so it is somewhat confusing.
    Edit: OK, done some more research. I removed the NIC teaming on the secondary node, since it didn't work anyway. After some standard problems following that, I finally managed to get it up and running again with the old NIC teaming settings on one single physical network card. Now I am not able to reproduce the problem described above. So it is somehow related to the teaming - maybe some kind of bug?
    Edit: Did some more failing over without being able to make it fail, so removing the NIC team looks like it was a workaround. I then tried to re-establish the Intel NIC teaming with ALB (as it was before) and I still cannot make it fail. This is annoying, because now I actually cannot pinpoint the root of the problem; it just seems to be some kind of MS/Intel hiccup - which is hard to accept, because what if the problem reoccurs in 14 days? One strange thing did happen, though: after recreating the NIC team I was not able to rename the team to "PUBLIC", which is what the old team was called. So something has not been cleaned up in Windows - even though the server HAS been restarted!
    Edit: OK, after re-establishing the ALB teaming the error came back. So I am now going to do some thorough testing and I will get back with my observations. One thing is for sure: it is related to Intel 82575EB NICs, ALB and gratuitous ARP.
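    To see exactly when and how the ARP entry flips during a failover, one low-tech check is to log the ARP table for the application address from an observer machine while the failover happens. This is only an illustrative sketch - the address is the one quoted above, and a packet capture (e.g. Wireshark with an "arp" filter on a mirrored switch port) would additionally show which MAC is sending the gratuitous ARPs.
        @echo off
        rem Log the ARP entry for the clustered application address roughly every 10 seconds.
        rem A.B.6.212 is the Exchange application address from the post; replace as needed.
        :loop
        echo %date% %time% >> arpwatch.log
        arp -a A.B.6.212 >> arpwatch.log
        ping -n 11 127.0.0.1 > nul
        goto loop
    Correlating the timestamps in arpwatch.log with the moment the pings start timing out would show whether the entry reverts to the primary node's MAC, ages out, or disappears entirely.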

    Read the article

  • How to generate a Program template by generating an abstract class

    - by Byron-Lim Timothy Steffan
    I have the following problem. The first step is to implement a program which follows a specific protocol on startup. Therefore, functions such as onInit(), onConfigRequest(), etc. will be necessary. (These are triggered, e.g., by an incoming message on a TCP port.) My goal is to write a class, for example an abstract one, which has abstract functions such as onInit(), etc. A programmer should just inherit from this base class and merely override these abstract functions of the base class. The rest of the protocol handling should simply happen in the background (using the code of the base class) and should not need to appear in the programmer's code. What is the correct design strategy for such tasks? And how do I deal with the fact that the static Main method is not inheritable? What are the key terms for this problem? (I have trouble searching for a solution since I lack clear terminology for it.) The goal is to create some sort of library/class which - included in one's code - results in executables that follow the protocol.
    EDIT (new explanation): Okay, let me try to explain in more detail. In this case the programs are clients within a client-server architecture; we have a client-server connection via TCP/IP. Each program needs to follow a specific protocol upon program start: as soon as my program starts and gets connected to the server, it receives an Init message (TcpClient); when this happens it should trigger the function onInit(). (Should this be implemented with an event system?) After onInit(), an acknowledgement message should be sent to the server. Afterwards there are some other steps, e.g. a config message from the server which triggers onConfig(), and so on. Let's concentrate on the onInit function. The idea is that onInit (and onConfig and so on) should be the only functions the programmer has to edit, while the overall protocol messaging stays hidden from him. Therefore, I thought an abstract class with the abstract methods onInit() and onConfig() should be the right thing. I would like to hide the static Main method, since within it there will, for example, be the part which connects to the TCP port, reacts to the Init message and calls the onInit function. Two problems here: 1. the static Main method can't be inherited, can it? 2. I cannot call the abstract functions from Main in the abstract master class. Let me give a pseudo-example of my idea:
    public abstract class MasterClass { static void Main(string[] args){ 1. open TCP connection 2. waiting for Init Message from server 3. onInit(); 4. Send Acknowledgement, that Init Routine has ended successfully 5. waiting for Config message from server 6..... } public abstract void onInit(); public abstract void onConfig(); }
    I hope you get the idea now! The programmer should afterwards inherit from this master class and merely need to edit the functions onInit and so on. Is this possible? How? What else do you recommend for solving this? EDIT: The strategy idea provided below is a good one! Check out my comment on that.
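    For illustration, here is a minimal sketch of the template-method shape being described. Everything except the onInit/onConfig hooks is made up for the example (class names, host, port), and the protocol steps are reduced to placeholder comments: the base class owns a non-abstract Run() that drives the protocol, and the derived class supplies the hooks plus a one-line Main.
        // Sketch only: the protocol loop is a placeholder, not a real implementation.
        public abstract class ProtocolClientBase
        {
            // The programmer never touches this; it drives the startup protocol.
            public void Run(string host, int port)
            {
                using (var client = new System.Net.Sockets.TcpClient(host, port))
                {
                    // ... wait for the Init message here ...
                    OnInit();
                    // ... send the acknowledgement here ...
                    // ... wait for the Config message here ...
                    OnConfig();
                    // ... rest of the protocol ...
                }
            }

            protected abstract void OnInit();
            protected abstract void OnConfig();
        }

        // What the programmer writes: only the hooks plus a trivial entry point.
        public class MyProgram : ProtocolClientBase
        {
            protected override void OnInit()   { /* program-specific init */ }
            protected override void OnConfig() { /* program-specific config */ }

            static void Main(string[] args)
            {
                new MyProgram().Run("server.example", 4000);
            }
        }
    Main cannot be inherited, but it does not need to be: it only has to construct the concrete class and call Run(), and Run() can call the abstract methods because it is an instance method on the base class.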

    Read the article

  • Android launches system settings instead of my app

    - by jsundin
    Hi, For some reason whenever I (try to) start my app the phone decides to launch system settings instead of my "main activity". And yes, I am referring to the "Android system settings", and not something from my app. This only happens on my phone, and I suppose it probably could be related to the fact that my app had just opened system settings when I decided to re-launch with a new version from Eclipse. It is possible to start the app from within Eclipse, but when I navigate back from the app it returns to the system settings rather than the home screen, as if the settings activity was started first and then my activity. If I then start the app from the phone all I get is system settings yet again. The app is listening to the VIEW-action for a specific URL substring, and when I start the app using a matching URL I get the same result as when I start it from Eclipse, app starts, but when I return I return to settings. I have tried googling for this problem, and all I could find was something about Android saving state when an app gets killed, but without any information on how to reset this state. I have tried uninstalling the app, killing system settings, rebooting the phone, reinstalling, clearing application data.. no luck.. For what it's worth, here's the definition of my main activity from the manifest, <activity android:name=".HomeActivity" android:label="@string/app_name" android:screenOrientation="portrait" android:clearTaskOnLaunch="true" android:launchMode="singleTop"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> <intent-filter> <action android:name="android.intent.action.VIEW"></action> <category android:name="android.intent.category.DEFAULT"></category> <category android:name="android.intent.category.BROWSABLE"></category> <data android:pathPrefix="/isak-web-mobile/smart/" android:scheme="http" android:host="*"></data> </intent-filter> </activity> And here is the logcat-line from when I try to start my app, nothing about any settings anywhere. I/ActivityManager( 1301): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10200000 cmp=se.opencare.isak/.HomeActivity } When I launch from Eclipse I also get this line (as one would expect), I/ActivityManager( 1301): Start proc se.opencare.isak for activity se.opencare.isak/.HomeActivity: pid=23068 uid=10163 gids={3003, 1007, 1015} If it matters the phone is a HTC Desire Z running 2.2.1. 
Currently, this is my HomeActivity, public class HomeActivity extends Activity { public static final String TAG = "HomeActivity"; @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { Log.d(TAG, "onActivityResult(" + requestCode + ", " + resultCode + ", " + data + ")"); super.onActivityResult(requestCode, resultCode, data); } @Override protected void onCreate(Bundle savedInstanceState) { Log.d(TAG, "onCreate(" + savedInstanceState + ")"); super.onCreate(savedInstanceState); } @Override protected void onDestroy() { Log.d(TAG, "onDestroy()"); super.onDestroy(); } @Override protected void onPause() { Log.d(TAG, "onPause()"); super.onPause(); } @Override protected void onPostCreate(Bundle savedInstanceState) { Log.d(TAG, "onPostCreate(" + savedInstanceState + ")"); super.onPostCreate(savedInstanceState); } @Override protected void onPostResume() { Log.d(TAG, "onPostResume()"); super.onPostResume(); } @Override protected void onRestart() { Log.d(TAG, "onRestart()"); super.onRestart(); } @Override protected void onRestoreInstanceState(Bundle savedInstanceState) { Log.d(TAG, "onRestoreInstanceState(" + savedInstanceState + ")"); super.onRestoreInstanceState(savedInstanceState); } @Override protected void onResume() { Log.d(TAG, "onResume()"); super.onResume(); } @Override protected void onStart() { Log.d(TAG, "onStart()"); super.onStart(); } @Override protected void onStop() { Log.d(TAG, "onStop()"); super.onStop(); } @Override protected void onUserLeaveHint() { Log.d(TAG, "onUserLeaveHint()"); super.onUserLeaveHint(); } } Nothing (of the above) is written to the log.
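    One diagnostic that might help narrow this down - just a suggestion using standard adb commands, with the component name taken from the logcat line above - is to launch the activity explicitly with the same launcher intent and then dump the activity stack, to see whether a stale Settings task is sitting underneath yours.
        # Launch HomeActivity directly with the same intent the launcher uses.
        adb shell am start -a android.intent.action.MAIN -c android.intent.category.LAUNCHER \
            -n se.opencare.isak/.HomeActivity

        # Dump activity/task state; look for which task holds the Settings activity
        # and whether your HomeActivity was placed into that task.
        adb shell dumpsys activity
    If the dump shows HomeActivity living inside the Settings task, the problem is task affinity/reparenting state on the device rather than anything in the manifest above.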

    Read the article

  • Variable mysteriously changing value

    - by Eitan
    I am making a simple tcp/ip chat program for practicing threads and tcp/ip. I was using asynchronous methods but had a problem with concurrency so I went to threads and blocking methods (not asynchronous). I have two private variables defined in the class, not static: string amessage = string.Empty; int MessageLength; and a Thread private Thread BeginRead; Ok so I call a function called Listen ONCE when the client starts: public virtual void Listen(int byteLength) { var state = new StateObject {Buffer = new byte[byteLength]}; BeginRead = new Thread(ReadThread); BeginRead.Start(state); } and finally the function to receive commands and process them, I'm going to shorten it because it is really long: private void ReadThread(object objectState) { var state = (StateObject)objectState; int byteLength = state.Buffer.Length; while (true) { var buffer = new byte[byteLength]; int len = MySocket.Receive(buffer); if (len <= 0) return; string content = Encoding.ASCII.GetString(buffer, 0, len); amessage += cleanMessage.Substring(0, MessageLength); if (OnRead != null) { var e = new CommandEventArgs(amessage); OnRead(this, e); } } } Now, as I understand it only one thread at a time will enter BeginRead, I call Receive, it blocks until I get data, and then I process it. The problem: the variable amessage will change it's value between statements that do not touch or alter the variable at all, for example at the bottom of the function at: if (OnRead != null) "amessage" will be equal to 'asdf' and at if (OnRead != null) "amessage" will be equal to qwert. As I understand it this is indicative of another thread changing the value/running asynchronously. I only spawn one thread to do the receiving and the Receive function is blocking, how could there be two threads in this function and if there is only one thread how does amessage's value change between statements that don't affect it's value. As a side note sorry for spamming the site with these questions but I'm just getting a hang of this threading story and it's making me want to sip cyanide. Thanks in advance. EDIT: Here is my code that calls the Listen Method in the client: public void ConnectClient(string ip,int port) { client.Connect(ip,port); client.Listen(5); } and in the server: private void Accept(IAsyncResult result) { var client = new AbstractClient(MySocket.EndAccept(result)); var e = new CommandEventArgs(client, null); Clients.Add(client); client.Listen(5); if (OnClientAdded != null) { var target = (Control) OnClientAdded.Target; if (target != null && target.InvokeRequired) target.Invoke(OnClientAdded, this, e); else OnClientAdded(this, e); } client.OnRead += OnRead; MySocket.BeginAccept(new AsyncCallback(Accept), null); } All this code is in a class called AbstractClient. The client inherits the Abstract client and when the server accepts a socket it create's it's own local AbstractClient, in this case both modules access the functions above however they are different instances and I couldn't imagine threads from different instances combining especially as no variable is static.
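    One way to confirm whether a second instance (the server-side AbstractClient versus the client-side one) or a second thread is really involved is to tag every trace line with the object's identity and the managed thread id. This is only a diagnostic sketch to drop into ReadThread, not a fix; LogState is a made-up helper name.
        // Diagnostic sketch: log which instance and which thread touched amessage.
        // Requires: using System.Threading; using System.Runtime.CompilerServices;
        private void LogState(string where)
        {
            Console.WriteLine(string.Format(
                "[{0}] instance={1} thread={2} amessage='{3}'",
                where,
                RuntimeHelpers.GetHashCode(this),        // stable per-object identity
                Thread.CurrentThread.ManagedThreadId,    // which thread is running this code
                amessage));
        }

        // Inside ReadThread, around the OnRead check:
        //   LogState("before OnRead");
        //   if (OnRead != null) { ... }
        //   LogState("after OnRead");
    If the two differing values of amessage show up against different instance ids, the value is not "mysteriously changing" at all - two separate AbstractClient objects are each raising OnRead into the same handler.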

    Read the article

  • Background custom button animation using WPF

    - by ajtp
    Hi, I am using Resources dictionaries to customize my controls and apply them as themes to my WPF application so I have implemented one for the button control. A code snippet for my custom Button.xaml is (its namespace is MyWPFApp.Themes): <ResourceDictionary ...> ... <LinearGradientBrush x:Key="NormalBackground" EndPoint="0,1" StartPoint="0,0"> <GradientStop Color="sc#1.000000, 0.250141, 0.333404, 0.884413" Offset="0"/> <GradientStop Color="#ccffffff" Offset="1"/> </LinearGradientBrush> <LinearGradientBrush x:Key="OverBackground" EndPoint="0,1" StartPoint="0,0"> <GradientStop Color="#da5e69" Offset="0"/> <GradientStop Color="#d12e27" Offset="1"/> </LinearGradientBrush> <LinearGradientBrush x:Key="ClickBackground" EndPoint="0,1" StartPoint="0,0"> <GradientStop Color="#d22828" Offset="1"/> <GradientStop Color="#b00000" Offset="0"/> </LinearGradientBrush> ... <Style TargetType="{x:Type Button}"> ... <Setter Property="Background" Value="{StaticResource NormalBackground}"/> ... </Style> </ResourceDictionary> and I apply it by doing the following from my main Application.xaml: <Application ...> <Application.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> <ResourceDictionary Source="Themes/Button.xaml"/> </ResourceDictionary.MergedDictionaries> </ResourceDictionary> </Application.Resources> </Application> What I try to do is to change background color, for example, from White to Lime for 6 seconds by doing this from one of my pages, MyPage1.xaml, using StoryBoard: <Page x:Class="MyWPFApp.Pages.MyPage1" ...> <Page.Resources> ... <Storyboard x:Key="sbBtnResetHC" Storyboard.TargetName="BtnResetHC" Storyboard.TargetProperty="(Background). (SolidColorBrush.Color)"> <ColorAnimation From="Pink" To="Green" By="Blue" Duration="0:0:6" RepeatBehavior="3x" AutoReverse="True" /> </Storyboard> ... </Page.Resources> ... <Button x:Name="BtnResetHC" Click="BtnResetHC_Click" Width="90" Visibility="Collapsed" /> ... </Page> and from code behind MyPage1.xaml.cs I start animation by doing this: Storyboard sb = (Storyboard)FindResource("sbBtnResetHC"); sb.Begin(); when the button is visible, but it doesn't work for me. Any ideas what's wrong? Maybe another possibility, as I want the animation starts on button visible is to do a trigger for the button over Visibility property, Is it a better solution? Thanks a lot!
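    One observation, offered as a guess since only the snippets above are visible: the theme style sets Background to a LinearGradientBrush, so a storyboard path of (Background).(SolidColorBrush.Color) has nothing to resolve against. A minimal sketch of one way around that - names reused from the post, colors illustrative - is to give this particular button a local SolidColorBrush (a local value overrides the style setter) and animate that brush's Color.
        <!-- Sketch: a local SolidColorBrush overrides the themed gradient for this button only. -->
        <Button x:Name="BtnResetHC" Click="BtnResetHC_Click" Width="90" Visibility="Collapsed">
            <Button.Background>
                <SolidColorBrush Color="White"/>
            </Button.Background>
        </Button>

        <!-- In Page.Resources: animate the brush's Color; adjust From/To/Duration as desired. -->
        <Storyboard x:Key="sbBtnResetHC"
                    Storyboard.TargetName="BtnResetHC"
                    Storyboard.TargetProperty="(Button.Background).(SolidColorBrush.Color)">
            <ColorAnimation From="White" To="Lime" Duration="0:0:6"
                            RepeatBehavior="3x" AutoReverse="True"/>
        </Storyboard>
    If the themed gradient look has to be preserved, the alternative is to target one of the gradient stops instead, e.g. (Background).(GradientBrush.GradientStops)[0].(GradientStop.Color).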

    Read the article

  • filling in the holes in the result of a query

    - by ????? ????????
    my query is returning: +------+------+------+------+------+------+------+-------+------+------+------+------+-----+ | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | Bla | +------+------+------+------+------+------+------+-------+------+------+------+------+-----+ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 13 | | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 14 | | 0 | 0 | 0 | 0 | 0 | 9 | 0 | 0 | 0 | 0 | 8 | 37 | 29 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 374 | 30 | | 0 | 0 | 1 | 0 | 78 | 2 | 4 | 8 | 57 | 169 | 116 | 602 | 31 | | 156 | 255 | 79 | 75 | 684 | 325 | 289 | 194 | 407 | 171 | 584 | 443 | 32 | | 1561 | 2852 | 2056 | 796 | 2004 | 1755 | 879 | 1052 | 1490 | 1683 | 2532 | 2381 | 33 | | 4167 | 3841 | 4798 | 3399 | 4132 | 5849 | 3157 | 4381 | 4424 | 4487 | 4178 | 5343 | 34 | | 5472 | 5939 | 5768 | 4150 | 7483 | 6836 | 6346 | 6288 | 6850 | 7155 | 5706 | 5231 | 35 | | 5749 | 4741 | 5264 | 4045 | 6544 | 7405 | 7524 | 6625 | 6344 | 5508 | 6513 | 3854 | 36 | | 5464 | 6323 | 7074 | 4861 | 7244 | 6768 | 6632 | 7389 | 8077 | 8745 | 6738 | 5039 | 37 | | 5731 | 7205 | 7476 | 5734 | 9103 | 9244 | 7339 | 8970 | 9726 | 9089 | 6328 | 5512 | 38 | | 7262 | 6149 | 8231 | 6654 | 9886 | 9834 | 9306 | 10065 | 9983 | 9984 | 6738 | 5806 | 39 | | 5886 | 6934 | 7137 | 6978 | 9034 | 9155 | 7389 | 9437 | 9711 | 8665 | 6593 | 5337 | 40 | +------+------+------+------+------+------+------+-------+------+------+------+------+-----+ as you can see the BLA column starts from 13. i want it to start from 1, then 2, then 3 etc......I do not want any gaps in the data. The reason there are gaps is because all of the months are 0 for that specific bla how do i get the result set to include ALL values for BLA, even ones that will yield 0 for the months? 
here are the desired results: +-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+ | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | Bla | +-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 13 | | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 14 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 15 | | … | … | … | … | … | … | … | … | … | … | … | … | … | +-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+ here's my query: WITH CTE AS ( select sum(case when datepart(month,[datetime entered]) = 1 then 1 end) as Jan, sum(case when datepart(month,[datetime entered]) = 2 then 1 end) as Feb, sum(case when datepart(month,[datetime entered]) = 3 then 1 end) as Mar, sum(case when datepart(month,[datetime entered]) = 4 then 1 end) as Apr, sum(case when datepart(month,[datetime entered]) = 5 then 1 end) as May, sum(case when datepart(month,[datetime entered]) = 6 then 1 end) as Jun, sum(case when datepart(month,[datetime entered]) = 7 then 1 end) as Jul, sum(case when datepart(month,[datetime entered]) = 8 then 1 end) as Aug, sum(case when datepart(month,[datetime entered]) = 9 then 1 end) as Sep, sum(case when datepart(month,[datetime entered]) = 10 then 1 end) as Oct, sum(case when datepart(month,[datetime entered]) = 11 then 1 end) as Nov, sum(case when datepart(month,[datetime entered]) = 12 then 1 end) as Dec, DATEPART(yyyy,[datetime entered]) as [Year], bla= CASE WHEN datediff(d, CAST([datetime entered] as DATE), CAST([datetime completed] as DATE))*24 + CONVERT(CHAR(2),[datetime completed],108) >191 THEN 192 ELSE datediff(d, CAST([datetime entered] as DATE), CAST([datetime completed] as DATE))*24 + CONVERT(CHAR(2),[datetime completed],108) END --,datediff(d, CAST([datetime entered] as DATE), CAST([datetime completed] as DATE)) AS Sort_Days, --DATEPART(hour, [datetime completed] ) AS Sort_Hours from [TurnAround] group by datediff(d, CAST([datetime entered] as DATE), CAST([datetime completed] as DATE))*24 + CONVERT(CHAR(2),[datetime completed],108), DATEPART(yyyy,[datetime entered]) , [datetime entered] --[DateTime Completed] ) SELECT ISNULL(SUM(Jan),0) Jan, ISNULL(SUM(Feb),0) Feb, ISNULL(SUM(Mar),0) Mar, ISNULL(SUM(Apr),0) Apr, ISNULL(SUM(May),0) May, ISNULL(SUM(Jun),0) Jun, ISNULL(SUM(Jul),0) Jul, ISNULL(SUM(Aug),0) Aug, ISNULL(SUM(Sep),0) Sep, ISNULL(SUM(Oct),0) Oct, ISNULL(SUM(Nov),0) Nov, ISNULL(SUM(Dec),0) Dec, [year], --,Sort_Hours, --Sort_Days, A.RN Bla FROM ( SELECT *, RN=ROW_NUMBER() OVER(ORDER BY object_id) FROM sys.all_objects) A LEFT JOIN CTE B ON A.RN = CASE WHEN B.Bla > 191 THEN 192 ELSE B.Bla END WHERE A.RN BETWEEN 1 AND 192 GROUP BY A.RN,[year]
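    Reduced to its essence, the pattern the query above is reaching for is: generate the full 1..192 sequence first, then LEFT JOIN the aggregated data onto it and wrap the sums in ISNULL so empty rows come back as zeros. A stripped-down sketch of just that skeleton (SQL Server syntax assumed; the Agg source is a placeholder standing in for the real CTE):
        ;WITH Numbers AS (
            -- one row per desired Bla value, 1 through 192
            SELECT TOP (192) ROW_NUMBER() OVER (ORDER BY object_id) AS Bla
            FROM sys.all_objects
        ),
        Agg AS (
            -- whatever per-Bla, per-month aggregation the real query produces
            SELECT Bla, Jan, Feb /* , ... other months ... */ FROM dbo.SomeAggregatedSource
        )
        SELECT n.Bla,
               ISNULL(a.Jan, 0) AS Jan,
               ISNULL(a.Feb, 0) AS Feb /* , ... other months ... */
        FROM Numbers n
        LEFT JOIN Agg a ON a.Bla = n.Bla   -- keep every Bla even when no data exists
        ORDER BY n.Bla;
    The key point is that nothing on the Numbers side may appear in the WHERE clause or GROUP BY in a way that filters out unmatched rows, otherwise the LEFT JOIN silently turns back into an inner join and the gaps reappear.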

    Read the article

  • Software: Launching League of Legends spectator mode from Command Line (Mac)

    - by Alex Popov
    Background: tl;dr at the end League of Legends has a spectator mode, in which you can watch someone else's game (essentially a replay) with a 3 minute delay. Popular LoL website OP.GG has figured out a clever way of hosting these spectator games on their own servers, thereby making them replayable, as opposed to only being available while the game is on (as Riot does it). If you request a replay from OP.GG, it sends a batch file which looks for where the League is situated and then the magic happens: @start "" "League of Legends.exe" "8394" "LoLLauncher.exe" "" "spectator fspectate.op.gg:4081 tjJbtRLQ/HMV7HuAxWV0XsXoRB4OmFBr 1391881421 NA1" This works fine on Windows. I'm trying to get it to work on Mac (which has an official client). First I tried running the same command by hand, (split for convenience) /Applications/ ... /LeagueOfLegends.app/ ... /LeagueofLegends 8393 LoLLauncher \ /Applications/ ... /LolClient spectator fspectate.op.gg:4081 tjJbtRLQ/HMV7HuAxWV0XsXoRB4OmFBr 1391881421 NA1 Running this, however, just starts the LoLLauncher, which closes all the active League processes. The exactly same thing happens if I just call /Applications/ ... /LeagueOfLegends.app/ ... /LeagueofLegends Next I tried seeing what actually happens when Spectator mode is initiated so I ran $ ps -axf | grep -i lol which showed UID PID PPID C STIME TTY TIME CMD 503 3085 1 0 Wed02pm ?? 0:00.00 (LolClient) 503 24607 1 0 9:19am ?? 0:00.98 /Applications/League of Legends.app/Contents/LOL/RADS/system/UserKernel.app/Contents/MacOS/UserKernel updateandrun lol_launcher LoLLauncher.app 503 24610 24607 0 9:19am ?? 1:08.76 /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_launcher/releases/0.0.0.122/deploy/LoLLauncher.app/Contents/MacOS/LoLLauncher 503 24611 24610 0 9:19am ?? 1:23.02 /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_air_client/releases/0.0.0.127/deploy/bin/LolClient -runtime .\ -nodebug META-INF\AIR\application.xml .\ -- 8393 503 24927 24610 0 9:44am ?? 0:03.37 /Applications/League of Legends.app/Contents/LoL/RADS/solutions/lol_game_client_sln/releases/0.0.0.117/deploy/LeagueOfLegends.app/Contents/MacOS/LeagueofLegends 8394 LoLLauncher /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_air_client/releases/0.0.0.127/deploy/bin/LolClient spectator 216.133.234.17:8088 Yn1oMX/n3LpXNebibzUa1i3Z+s2HV0ul 1400781241 NA1 Of Interest: there is (LolClient) which I cannot kill by it's PID. UserKernel updateandrun lol_launcher LoLLauncher.app is launched first. LoLLauncher is launched by the UserKernel (as we can see from the PPID) The very long command (PID: 24927) is how Spectator mode is launched, and is also launched by UserKernel. Spectator mode is launched in exactly the same way that the OP.GG .bat wanted to, with the only difference that Spectator mode connects to Riot instead of OP.GG's spectate server. I tried attaching GDB to the LolClient, but I couldn't get anything meaningful from it since it's an Adobe AIR application (and I've never used GDB with code other than mine own). Next I ran dtruss -a -b 100m -f -p $PID on everything I could think of: the LolClient, the LolLauncher and the UserKernel and skimmed the half a million lines produced. I found stuff like the GET request used to get the information of the game to spectate, but I could not see any launch of the equivalent of League of Legends.exe with spectator options. 
Finally, I ran lsof | grep -i lol to see if anything else was opened in the process, but didn't find anything that seemed appropriate. Open were UserKernel, LolLauncher, LolClient, Adobe AIR, LeagueofLegends and then Bugsplat, all of which are expected. None of this seemed especially relevant to figuring out how LeagueofLegends was opened into spectator mode. It obviously can be done, since Spectator mode is accessible from within the client. It seems likely that it can be done from the CLI, since Windows can do it and the clients are supposed to equals. Unless I'm missing something in the difference between how UNIX and Windows handle CLI application launches. My question is if there are any other things I can try to figure out how to launch Spectator mode myself. tl;dr: Trying to get into spectator mode from the CLI. It's possible on Windows (see first code block) but it just restarts League on Mac. What else can I try to find what call is made, and how to reproduce it? PS: Please let me know how I can improve this question or its formatting, I'd love to use StackOverflow/SuperUser, but as the guys said on the podcast this week (Ep. 59) it's very intimidating. Sorry for posting this on StackOverflow the first time :(
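    One more thing that may be worth a try, offered only as a sketch: ps truncates long argument lists by default, so capturing the full command line with -ww and then replaying it from the game binary's own directory would rule out both truncation and working-directory effects. The paths and arguments below are simply the ones quoted earlier in the post.
        # While a spectate session started from the client is running:
        ps -axww -o pid,command | grep -i "LeagueofLegends" | grep -v grep

        # Then replay those arguments from the game binary's own directory.
        # Note: ps output does not show quoting, so it is unclear whether the
        # "spectator host:port token gameId NA1" part is one argument (as in the
        # Windows .bat) or several separate ones - worth trying both forms.
        cd "/Applications/League of Legends.app/Contents/LoL/RADS/solutions/lol_game_client_sln/releases/0.0.0.117/deploy/LeagueOfLegends.app/Contents/MacOS"
        ./LeagueofLegends 8394 LoLLauncher \
            "/Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_air_client/releases/0.0.0.127/deploy/bin/LolClient" \
            "spectator 216.133.234.17:8088 Yn1oMX/n3LpXNebibzUa1i3Z+s2HV0ul 1400781241 NA1"
    If the direct invocation still bounces back into the launcher, that would suggest the game client checks how it was started (parent process or environment) rather than just its arguments.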

    Read the article

  • Selecting and inserting text at cursor location in textfield with JS/jQuery

    - by IceCreamYou
    Hello. I have developed a system in PHP which processes #hashtags like on Twitter. Now I'm trying to build a system that will suggest tags as I type. When a user starts writing a tag, a drop-down list should appear beneath the textarea with other tags that begin with the same string. Right now, I have it working where if a user types the hash key (#) the list will show up with the most popular #hashtags. When a tag is clicked, it is inserted at the end of the text in the textarea. I need the tag to be inserted at the cursor location instead. Here's my code; it operates on a textarea with class "facebook_status_text" and a div with class "fbssts_floating_suggestions" that contains an unordered list of links. (Also note that the syntax [#my tag] is used to handle tags with spaces.) maxlength = 140; var dest = $('.facebook_status_text:first'); var fbssts_box = $('.fbssts_floating_suggestions'); var fbssts_box_orig = fbssts_box.html(); dest.keyup(function(fbss_key) { if (fbss_key.which == 51) { fbssts_box.html(fbssts_box_orig); $('.fbssts_floating_suggestions .fbssts_suggestions a').click(function() { var tag = $(this).html(); //This part is not well-optimized. if (tag.match(/W/)) { tag = '[#'+ tag +']'; } else { tag = '#'+ tag; } var orig = dest.val(); orig = orig.substring(0, orig.length - 1); var last = orig.substring(orig.length - 1); if (last == '[') { orig = orig.substring(0, orig.length - 1); } //End of particularly poorly optimized code. dest.val(orig + tag); fbssts_box.hide(); dest.focus(); return false; }); fbssts_box.show(); fbssts_box.css('left', dest.offset().left); fbssts_box.css('top', dest.offset().top + dest.outerHeight() + 1); } else if (fbss_key.which != 16) { fbssts_box.hide(); } }); dest.blur(function() { var t = setTimeout(function() { fbssts_box.hide(); }, 250); }); When the user types, I also need get the 100 characters in the textarea before the cursor, and pass it (presumably via POST) to /fbssts/load/tags. The PHP back-end will process this, figure out what tags to suggest, and print the relevant HTML. Then I need to load that HTML into the .fbssts_floating_suggestions div at the cursor location. Ideally, I'd like to be able to do this: var newSuggestions = load('/fbssts/load/tags', {text: dest.getTextBeforeCursor()}); fbssts_box.html(fbssts_box_orig); $('.fbssts_floating_suggestions .fbssts_suggestions a').click(function() { var tag = $(this).html(); if (tag.match(/W/)) { tag = tag +']'; } dest.insertAtCursor(tag); fbssts_box.hide(); dest.focus(); return false; }); And here's the regex I'm using to identify tags (and @mentions) in the PHP back-end, FWIW. %(\A(#|@)(\w|(\p{L}\p{M}?))+\b)|((?<=\s)(#|@)(\w|(\p{L}\p{M}?))+\b)|(\[(#|@).+?\])%u Right now, my main hold-up is dealing with the cursor location. I've researched for the last two hours, and just ended up more confused. I would prefer a jQuery solution, but beggars can't be choosers. Thanks!
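    For the two missing pieces - reading the text before the caret and inserting at the caret - a small sketch along these lines could serve as the getTextBeforeCursor/insertAtCursor helpers imagined above. It relies on the selectionStart/selectionEnd properties of the textarea, which current browsers support; older IE needs the document.selection API instead, so treat this as the non-IE path.
        // Sketch: caret helpers for a plain <textarea>; $el is a jQuery wrapper around it.
        function getTextBeforeCursor($el, maxLen) {
          var el = $el[0];
          var pos = el.selectionStart;
          return el.value.substring(Math.max(0, pos - maxLen), pos);
        }

        function insertAtCursor($el, text) {
          var el = $el[0];
          var start = el.selectionStart, end = el.selectionEnd;
          el.value = el.value.substring(0, start) + text + el.value.substring(end);
          // put the caret right after the inserted tag
          el.selectionStart = el.selectionEnd = start + text.length;
          $el.focus();
        }

        // e.g. getTextBeforeCursor(dest, 100) for the POST to /fbssts/load/tags,
        // and insertAtCursor(dest, tag) in place of dest.val(orig + tag).
    With those in place, the "remove the last typed # or [" cleanup in the click handler can work on the text immediately before the caret instead of the end of the whole value.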

    Read the article

  • Monorail - Form submission using GET instead of POST

    - by Septih
    Hello, I'm writing some additions to a Castle MonoRail based site involving an Add and an Edit form. The add form works fine and uses POST but the edit form uses GET. The only major difference I can see is that the edit method is called with the Id of the object being edited in the query string. When the submit button is pressed on the edit form, the only argument passed is this object Id again. Here is the code for the edit form: <form action="edit.ashx" method="post"> <h3>Coupon Description</h3> <textarea name="comments" width="200">$comments</textarea> <br/><br/> <h3>Coupon Actions</h3> <br/> <div>Give Stories</div> <ul class="checklist" style="overflow:auto;height:144px;width:100%"> #foreach ($story in $stories.Values) <li> <label> #set ($associated = "") #foreach($storyId in $storyIds) #if($story.Id == $storyId) #set($associated = " checked='true'") #end #end <input type="checkbox" name="chk_${story.Id}" id="chk_${story.Id}" value="true" class="checkbox" $associated/> $story.Name </label> </li> #end </ul> <br/><br/> <div>Give Credit Amount</div> <input type="text" name="credit" value="$credit" /> <br/><br/> <h3>Coupon Uses</h3> <input type="checkbox" name="multi" #if($multi) checked="true" #end /> Multi-Use Coupon?<br/><br/> <b>OR</b> <br/> <br/> Number of Uses per Coupon: <input type="text" name="uses" value="$uses" /> <br/> <input type="submit" name="Save" /> </form> The differences between this and the add form is the velocity stuff to do with association and the values of the inputs being from the PropertyBag. The Method dealing with this on the controller starts like this: public void Edit(int id) { Coupon coupon = Coupon.GetRepository(User.Site.Id).FindById(id).Value; if(coupon == null) { RedirectToReferrer(); return; } IFutureQueryOfList<Story> stories = Story.GetRepository(User.Site.Id).OnlyReturnEnabled().FindAll("Name", true); if (Params["Save"] == null) { ... } } It reliably gets called but a breakpoint on the Params["Save"] lets me see that the HttpMethod is "GET" and the only arguments passed (In the Form and the Request) are the object Id and additional HTTP headers. At the end of the day I'm not that familiar with MonoRail and this may be a stupid mistake on my behalf, but I would very much appreciate being made a fool out of if it fixes the problem! :) Thanks
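    Not an answer, but one quick check that has caught this symptom before: if the rendered edit form ends up nested inside another form emitted by the layout or a master template, browsers drop or merge the inner form tag and the submit falls back to the outer form's method and fields - which would explain why only the id from the query string comes through. A plain-JavaScript snippet to paste into the browser console on the edit page (the Save name is taken from the markup above):
        // Count the forms on the page and report the method of the form
        // that actually contains the Save button.
        var forms = document.getElementsByTagName('form');
        console.log('forms on page: ' + forms.length);
        var btn = document.getElementsByName('Save')[0];
        var node = btn;
        while (node && node.nodeName.toLowerCase() !== 'form') { node = node.parentNode; }
        console.log('enclosing form method: ' + (node ? node.method : 'none found'));
    If the enclosing form reports "get", the POST form markup is being swallowed by an outer form somewhere in the layout, not by MonoRail itself.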

    Read the article

  • Web browsing is fast, but downloads are slow

    - by Ricket
    I work for a company on my university's campus, helping with general IT problems and some web development. But lately there has been a problem that has me and my boss completely stumped. We, plus one contractor, make up the entire IT department, so I'm reaching out to you for help. All around the office, we have wall jacks. These collect in a closet down the hall and all plug into a switch. This switch, along with our individual server jacks, plugs into another switch, and that switch plugs into our firewall hardware. Then the firewall is connected out to our campus network. Our campus internet is, well, very fast. I don't know exactly the terms, tiers, etc., but we have thousands of students and downloads can run as fast as 10 MB/s at night; uploads are sometimes even faster. I think we're practically ISP level. In short, I have a lot of faith that it is not the campus side of things that is causing a problem, combined with other evidence I'll mention in a moment. So our symptoms: web browsing is fast. Web pages, images, etc. load instantly. No problems there. But then when I go to download something, the download starts fast but very quickly (a matter of seconds) drops to nearly 0. Often it will actually drop to 0 and time out. This happens with even very small files, 1 MB or less. It smells to me like a QoS sort of thing. I'm not entirely sure, and I wanted to get your opinions first. My boss is hesitant to touch our firewall, much less let me touch it, and it was set up and is managed by a consultant remotely. These problems don't seem tied to a time of the day. I've tried downloads after 5:00 and still the same thing happens. From my desk, I can turn on my wireless adapter and pick up the campus wireless access point. If I unplug ethernet and connect to it, downloads are fast. This adds to my suspicion that it's limited to our company network. Also, a number of weeks ago the consultant upgraded our firewall firmware. Suddenly everything was very fast. I tested with downloads from Sun and speedtest.net and things were blazing fast, as they should be with our campus internet! It was wonderful, and I figured the slow speeds were an old firmware bug. In a matter of days, things steadily declined until they were back to the old symptoms. Oh, and we have antivirus installed on every computer, and we keep it up to date. Though I suppose the possibility is still there that someone could have spyware which is bogging down our internet, in which case what is the easiest/best way to find this out? (maybe this should go in a separate question) Thank you for your patience in reading all of this. Do you have any ideas as to what I can try? Is this something that you've experienced before? What sort of tools or methods can I use to try and diagnose the problem? P.S. everything here is Windows. Windows Server 2003 and 2008 on our servers, and Windows XP on employees' machines. Update: We are submitting a ticket to the university to just take a look and see if they see anything unusual and/or can suggestion methods for us to try and pinpoint our problem. Hopefully they'll be helpful! I'll update this to let you know what goes on. Update again: We found a hub (yes, a HUB) right between our campus connection and our firewall. It had only those two ethernet cables plugged into it, nothing else. After removing the hub, our speeds have jumped up to several mbps. However in talking with the campus, we got them to run a gigabit line to our firewall in place of the 100mbps line. 
As of Friday, we are at about 65 Mbps up and down (according to speedtest.net at 8am)!! Go NC State!!
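    For anyone hitting similar symptoms, the kind of test that would have isolated the internal path earlier is a raw TCP throughput run between a machine behind the switch/firewall chain and one outside it, which takes HTTP, per-port QoS and antivirus scanning out of the picture. A sketch with iperf (the hostname is a placeholder):
        # On a host outside the office network (e.g. a campus box you control):
        iperf -s

        # On a workstation inside, behind the wall-jack switch and firewall:
        iperf -c campus-host.example.edu -t 30 -i 5
    Steady near-wire-speed numbers here alongside stalling HTTP downloads would point at something protocol-specific, while throughput that collapses after a few seconds mirrors the download symptom and points at the path itself - which in our case turned out to be the stray hub.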

    Read the article

< Previous Page | 209 210 211 212 213 214 215 216 217 218 219 220  | Next Page >