Search Results

Search found 28300 results on 1132 pages for 'david back'.

Page 681/1132 | < Previous Page | 677 678 679 680 681 682 683 684 685 686 687 688  | Next Page >

  • In Ubuntu, my apps no longer have the system menu bar (not the panels!)

    - by user25522
    So, I booted up my box today after the weekend, and my apps no longer have the System Menu Bar at the top. It kinda sucks 'cause that's an easy way to maximize windows. How do I get it back? And I'm not talking about the panels at the top/bottom of the screen. This is what my terminal looks like. It's really missing the bar at the top. I have no rep so can't post pics, but here's a link to the image
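
    A possible first step (a guess, not something from the post): if the window decorator has crashed, restarting it from a terminal sometimes brings the title bars back. Which command applies depends on the session:

        # if the session runs Compiz (the standard Unity/"Normal" session)
        compiz --replace &
        # if the session runs Metacity (the Classic session)
        metacity --replace &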

    Read the article

  • After installing Windows 7, there are lines on the screen

    - by user22589
    Hi, I've just installed Windows 7 on my Dell machine. Everything works fine except that there are one or two lines on the screen. The lines appear at random, and I am sure it's not a monitor problem. The graphics card is an NVidia GeForce 7600. I replaced the default driver with the driver from the manufacturer and it didn't help. When I change the screen settings, the line goes away for a while and then comes back in a different location. Sometimes an app window has the line, and if I move the window the line follows it. What can I do to fix it? Thanks. Sam

    Read the article

  • What are the reasons why Outlook loses configurations?

    - by jnroche
    Can't seem to establish any logic for why Outlook suddenly loses its profile configuration settings intermittently. I work for an IT contact centre, but it hurts when someone asks me why they lose their Outlook profiles suddenly, some of them most of the time. I know there are lots of reasons, but I'm not sure which ones are the most likely. Could it be that a PC in a huge corporate organization isn't connected to the network properly, so the Outlook profile gets corrupted? But they don't usually shut down the PCs after office hours, since it's a 24-hour operation. On top of that, when users are migrated to Windows 7 / Office 2010, log on to the new PC, open Outlook, log off, then go back to a Windows XP PC and open Outlook 2003, the profile is again lost. Again, why is this? Is there anyone out there who's facing the same issue of Outlook profiles getting lost for no apparent reason?

    Read the article

  • What differences should I know? I just upgraded to 13.10 from 10.10 [on hold]

    - by test
    I ran Ubuntu 10.10 for a long time because I liked the menu style. The change in GUI with the upgrades drove me nuts but I finally gave in and downloaded Saucy Salamander 13.10 x64. It's a fresh install running as a virtual machine guest in VMWare Workstation 9 on a Windows 7 x64 host. Well it looks like all those icons are still there on the side which I would be OK with if there were some way to bring back my menus. I have no organized way of accessing things now, or do I? That is the purpose for this question, maybe there is some functionality I just can't find but is there. Also all my fine tuning was gone. I used to be able to change DPI but that's gone. I went ahead and installed Unity Tweak Tool via sudo apt-get install unity-tweak-tool but I couldn't find an icon for it after I installed it.. because again no menu. So I did a search for it and found it there. I've changed the font and which side the window buttons appear on which is good enough for now. Anyway... any suggestions you may have for me I'm game. I'm a Windows 7 user primarily but I use Ubuntu every once in a while. I really liked the old style where everything was categorized like for Applications there was Accessories, Games, Graphics, Internet, Office, Sound & Video, Wine.
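
    One possible route back to the categorized Applications menu (the package name is an assumption for 13.10, not something from the post):

        # install the GNOME Flashback session, which keeps the old Applications/Places menus
        sudo apt-get install gnome-session-flashback
        # then log out and pick "GNOME Flashback (Metacity)" from the session menu on the login screen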

    Read the article

  • IP Helper service uses 50-60% CPU every 1 minute for 3-4 seconds on Windows 7

    - by Sarveshwar
    I have checked everything - all the processes in Process Explorer. The IP Helper service is causing CPU usage of over 50% every minute for 3-4 seconds, then it comes back to normal. In Task Manager, the process is svchost.exe and the service is iphlpsvc. Here's the result of the "ipconfig /all" command: (screenshot in the original post; link lost). Here's a screenshot of the "netsh interface teredo show state" command: (screenshot in the original post; link lost). Please help me resolve this. Also, uTorrent is not showing the Teredo address.
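
    A hedged workaround (not confirmed for this case): iphlpsvc mostly runs the IPv6 transition services (Teredo/ISATAP/6to4), so turning those off, or disabling the service outright, often stops the periodic spikes. From an elevated command prompt:

        netsh interface teredo set state disabled
        netsh interface isatap set state disabled
        netsh interface 6to4 set state disabled
        :: or, more drastically, stop and disable the service itself
        net stop iphlpsvc
        sc config iphlpsvc start= disabled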

    Read the article

  • Server not responding after Ubuntu upgrade

    - by aaa
    I have a server that was running Ubuntu 8.04. I upgraded Ubuntu via the command line and got no warning messages. After it was done, the server restarted, but now I am unable to SSH back in. My site is also not resolving. My guess is I overwrote something important. What are some troubleshooting steps I can take? I am able to execute commands through my VPS provider's admin interface. Honestly, I'm not sure what version it upgraded me to.
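
    Some first checks to run from the provider's console, since SSH is down (suggestions only, not from the post):

        lsb_release -a                    # confirm which release the upgrade actually landed on
        sudo service ssh status           # is the SSH daemon running at all?
        sudo netstat -tlnp | grep ':22'   # is anything listening on port 22?
        tail -n 50 /var/log/auth.log      # recent SSH/auth errors
        df -h                             # a full disk can break an upgrade halfway through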

    Read the article

  • perfmon.exe itself taking 52.71% of cpu on windows 7 after chrome dies?

    - by jamesmoorecode
    On my Windows 7 machine (build 7100, x64, Dell XPS M1710 laptop), I'm getting horrible performance after chrome crashes. I kill the chrome process from the Resource Monitor, but after that perfmon.exe itself is shown as taking about 50% of the cpu (52.31% right now). Quitting Performance Monitor, then starting it again, shows perfmon starting out with a reasonable CPU, but it quickly (ten seconds) shoots right back up. Suggestions? So far a reboot seems to be the only way to solve the problem. I'm assuming that the perfmon issue is just a symptom of the real problem. (Update, much later: this never got resolved. I'm not seeing the problem in the RTM Win7 + latest Chrome. Yes, it was a core 2 duo, so presumably Chrome was running full blast on one cpu.)

    Read the article

  • How can I edit my admission form, which I filled in wrong? There is no other form available now. What do I do? [closed]

    - by user60065
    Hi, I am a 2nd-year student in graduation. Recently I filled in an admission form for final-year admission, but it came back to me after 2 days because I had entered wrong information. I want to edit the wrong information, and I have scanned the form. I am looking for a good online site where I can upload the scanned document and convert it into an editable format. I don't mind paying. If anyone can point me to a good site, that would be great. Thanks in advance.

    Read the article

  • IE8 losing session cookies in popup windows.

    - by HackedByChinese
    We have an ASP.NET application that uses Forms Auth. When users log in, a session ID cookie and a Forms Auth ticket (stored as a cookie) are generated. These are session cookies, not permanent cookies. It is intentional and desirable that when the browser closes, the user is effectively logged out. Once a user logs in, a new window is popped up using window.open('location here');. The page that is opened is effectively the workspace the user works in throughout the rest of their session. From this page, other pop-ups are also used.

    Lately, we've had a number of customers (all using the latest version of IE8) complaining that when they log in, the initial pop-up takes them back to the log in screen rather than their homepage. Alternately, users can sometimes log in, get to the homepage (which again, is in a new pop-up window), and it all seems fine, until any additional pop-ups are created, where it starts redirecting them to the log in screen again.

    In attempting to troubleshoot the issue, I've used good old Fiddler. When the problem starts manifesting, I've noticed that the browser is not sending up the ASP.NET session ID session cookie OR the Forms Auth ticket session cookie, even though the response to the log in POST clearly pushes down those cookies. What's more strange is if I CTRL+N to open a new window from the popped-up window that is missing the session cookies, then manually type in the URL to the home page, those cookies magically appear again. However, subsequent window.open(); calls will continue to be broken, not sending the session cookies and taking the user to the log in screen. It's important to note that sometimes, for seemingly no good reason, those same users can suddenly log in and work normally for a while, then it goes back to broken.

    Now, I've ensured that there are no browser add-ons, plug-ins, toolbars, etc. running. I've added our site as a trusted site and dropped the security settings to Low, I've modified the Cookie Privacy policy to "accept all" and even disabled automatic policy settings, manually forcing it to accept everything and include session cookies. Nothing appears to affect it. Also note the web application resides on a single server. There is no load balancing, web gardens, server farms, clusters, etc. The server does reside behind an ISA server, but other than that it's pretty straightforward.

    I've been searching around for days and haven't found anything actionable. Heck, sometimes I can't even reproduce it reliably. I have found a few references to people having this same problem, but they seem to be referencing an issue that was allegedly fixed in a beta or RC release (example: http://stackoverflow.com/questions/179260/ie8-loses-cookies-when-opening-a-new-window-after-a-redirect). These are release versions of IE, with up-to-date patches. I'm aware that I can try to set permanent cookies instead of session cookies. However, this has drastic security implications for our application.

    Update: It seems that the problem automagically goes away when the user is added as a Local Administrator on the machine. Only time will tell if this change permanently (and positively) affects this problem. Time to bust out ProcMon and see if there is a resource access problem.
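
    One speculative mitigation, not mentioned in the thread: IE8's cookie privacy rules sometimes discard session cookies in scripted pop-ups when the site sends no P3P compact policy, so emitting one from Global.asax.cs is a cheap thing to try (the policy value below is only a placeholder):

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // placeholder compact policy; a real deployment should declare an accurate one
            HttpContext.Current.Response.AddHeader("P3P", "CP=\"CAO PSA OUR\"");
        }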

    Read the article

  • WCF MustUnderstand headers are not understood

    - by raghur
    Hello everyone, I am using a Java Web Service which is developed by one of our vendor which I really do not have any control over it. I have written a WCF router which the client application calls it and the router sends the message to the Java Web Service and returns the data back to the client. The issue what I am encountering is, I am successfully able to call the Java web service from the WCF router, but, I am getting the following exceptions back. Router config file is as follows: <customBinding> <binding name="SimpleWSPortBinding"> <!--<reliableSession maxPendingChannels="4" maxRetryCount="8" ordered="true" />--> <!--<mtomMessageEncoding messageVersion ="Soap12WSAddressing10" ></mtomMessageEncoding>--> <textMessageEncoding maxReadPoolSize="64" maxWritePoolSize="16" messageVersion="Soap12WSAddressing10" writeEncoding="utf-8" /> <httpTransport manualAddressing="false" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" allowCookies="false" authenticationScheme="Anonymous" bypassProxyOnLocal="true" keepAliveEnabled="true" maxBufferSize="65536" transferMode="Buffered" unsafeConnectionNtlmAuthentication="false"/> </binding> </customBinding> Test client config file <customBinding> <binding name="DocumentRepository_Binding_Soap12"> <!--<reliableSession maxPendingChannels="4" maxRetryCount="8" ordered="true" />--> <!--<mtomMessageEncoding messageVersion ="Soap12WSAddressing10" ></mtomMessageEncoding>--> <textMessageEncoding maxReadPoolSize="64" maxWritePoolSize="16" messageVersion="Soap12WSAddressing10" writeEncoding="utf-8"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> </textMessageEncoding> <httpTransport manualAddressing="false" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" allowCookies="false" authenticationScheme="Anonymous" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" keepAliveEnabled="true" maxBufferSize="65536" proxyAuthenticationScheme="Anonymous" realm="" transferMode="Buffered" unsafeConnectionNtlmAuthentication="false" useDefaultWebProxy="true" /> </binding> </customBinding> If I use the textMessageEncoding I am getting <soap:Text xml:lang="en">MustUnderstand headers: [{http://www.w3.org/2005/08/addressing}To, {http://www.w3.org/2005/08/addressing}Action] are not understood.</soap:Text> If I use mtomMessageEncoding I am getting The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error. 
My Router class is as follows: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple, AddressFilterMode = AddressFilterMode.Any, ValidateMustUnderstand = false)] public class EmployeeService : IEmployeeService { public System.ServiceModel.Channels.Message ProcessMessage(System.ServiceModel.Channels.Message requestMessage) { ChannelFactory<IEmployeeService> factory = new ChannelFactory<IEmployeeService>("client"); factory.Endpoint.Behaviors.Add(new MustUnderstandBehavior(false)); IEmployeeService proxy = factory.CreateChannel(); Message responseMessage = proxy.ProcessMessage(requestMessage); return responseMessage; } } The "client" in the above code under ChannelFactory is defined in the config file as: <client> <endpoint address="http://JavaWS/EmployeeService" binding="wsHttpBinding" bindingConfiguration="wsHttp" contract="EmployeeService.IEmployeeService" name="client" behaviorConfiguration="clientBehavior"> <headers> </headers> </endpoint> </client> Really appreciate your kind help. Thanks in advance, Raghu
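
    A sketch of how the router might avoid the fault (an assumption, not a confirmed fix): since the router never processes the WS-Addressing headers itself, it can either mark them as understood before forwarding, or strip them if it is the Java service that rejects them:

        public Message ProcessMessage(Message requestMessage)
        {
            // option 1: the fault comes from WCF - mark every incoming header as understood
            foreach (MessageHeaderInfo header in requestMessage.Headers)
            {
                requestMessage.Headers.UnderstoodHeaders.Add(header);
            }

            // option 2: the fault comes from the Java service - remove the addressing headers instead
            // int toIndex = requestMessage.Headers.FindHeader("To", "http://www.w3.org/2005/08/addressing");
            // if (toIndex >= 0) requestMessage.Headers.RemoveAt(toIndex);

            var factory = new ChannelFactory<IEmployeeService>("client");
            factory.Endpoint.Behaviors.Add(new MustUnderstandBehavior(false));
            IEmployeeService proxy = factory.CreateChannel();
            return proxy.ProcessMessage(requestMessage);
        }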

    Read the article

  • Perl Scripts Seem to be Caching?

    - by Ben Liyanage
    I'm running a perl script via mod_perl with Apache. I am trying to parse parameters in a restful fashion (aka GET www.domain.com/rest.pl/Object/ID). If I specify the ID like this: GET www.domain.com/rest.pl/Object/1234 I will get object 1234 back as a result as expected. However, if I specify an incorrect hacked url like this GET www.domain.com/rest.pl/Object/123 I will also get back object 1234. I am pretty sure that the issue in this question is happening so I packaged up my method and invoked it from my core script out of the package. Even after that I am still seeing the threads returning seemingly cached data. One of the recommendations in the aforementioned article is to replace my with our. My impression from reading up on our vs my was that our is global and my is local. My assumption is the the local variable gets reinitialized each time. That said, like in my code example below I am resetting the variables each time with new values. Apache Perl Configuration is set up like this: PerlModule ModPerl::Registry <Directory /var/www/html/perl/> SetHandler perl-script PerlHandler ModPerl::Registry Options +ExecCGI </Directory> Here is the perl script getting invoked directly: #!/usr/bin/perl -w use strict; use warnings; use Rest; Rest::init(); Here is my package Rest. This file contains various functions for handling the rest requests. #!/usr/bin/perl -w package Rest; use strict; # Other useful packages declared here. our @EXPORT = qw( init ); our $q = CGI->new; sub GET($$) { my ($path, $code) = @_; return unless $q->request_method eq 'GET' or $q->request_method eq 'HEAD'; return unless $q->path_info =~ $path; $code->(); exit; } sub init() { eval { GET qr{^/ZonesByCustomer/(.+)$} => sub { Rest::Zone->ZonesByCustomer($1); } } } # packages must return 1 1; As a side note, I don't completely understand how this eval statement works, or what string is getting parsed by the qr command. This script was largely taken from this example for setting up a rest based web service with perl. I suspect it is getting pulled from that magical $_ variable, but I'm not sure if it is, or what it is getting populated with (clearly the query string or url, but it only seems to be part of it?). Here is my package Rest::Zone, which will contain the meat of the functional actions. I'm leaving out the specifics of how I'm manipulating the data because it is working and leaving this as an abstract stub. #!/usr/bin/perl -w package Rest::Zone; sub ZonesByCustomer($) { my ($className, $ObjectID) = @_; my $q = CGI->new; print $q->header(-status=>200, -type=>'text/xml',expires => 'now', -Cache_control => 'no-cache', -Pragma => 'no-cache'); my $objectXML = "<xml><object>" . $ObjectID . "</object></xml>"; print $objectXML; } # packages must return 1 1; What is going on here? Why do I keep getting stale or cached values?
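
    A guess at the cause (not confirmed in the post): under ModPerl::Registry the package-level "our $q = CGI->new" in Rest.pm runs once per interpreter and is then reused, so path_info() can keep reporting an earlier request's URL. Building the CGI object on every request sidesteps that:

        sub init {
            our $q = CGI->new;   # refresh the request object on every call instead of at module load
            eval {
                GET qr{^/ZonesByCustomer/(.+)$} => sub {
                    Rest::Zone->ZonesByCustomer($1);
                };
            };
        }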

    Read the article

  • How can I dynamically change auto complete entries in a C# combobox or textbox?

    - by Sam Hopkins
    I have a combobox in C# and I want to use auto complete suggestions with it, however I want to be able to change the auto complete entries as the user types, because the possible valid entries are far too numerous to populate the AutoCompleteStringCollection at startup. As an example, suppose I'm letting the user type in a name. I have a list of possible first names ("Joe", "John") and a list of surnames ("Bloggs", "Smith"), but if I have a thousand of each, then that would be a million possible strings - too many to put in the auto complete entries. So initially I want to have just the first names as suggestions ("Joe", "John") , and then once the user has typed the first name, ("Joe"), I want to remove the existing auto complete entries and replace them with a new set consisting of the chosen first name followed by the possible surnames ("Joe Bloggs", "Joe Smith"). In order to do this, I tried the following code: void InitializeComboBox() { ComboName.AutoCompleteMode = AutoCompleteMode.SuggestAppend; ComboName.AutoCompleteSource = AutoCompleteSource.CustomSource; ComboName.AutoCompleteCustomSource = new AutoCompleteStringCollection(); ComboName.TextChanged += new EventHandler( ComboName_TextChanged ); } void ComboName_TextChanged( object sender, EventArgs e ) { string text = this.ComboName.Text; string[] suggestions = GetNameSuggestions( text ); this.ComboQuery.AutoCompleteCustomSource.Clear(); this.ComboQuery.AutoCompleteCustomSource.AddRange( suggestions ); } However, this does not work properly. It seems that the call to Clear() causes the auto complete mechanism to "turn off" until the next character appears in the combo box, but of course when the next character appears the above code calls Clear() again, so the user never actually sees the auto complete functionality. It also causes the entire contents of the combo box to become selected, so between every keypress you have to deselect the existing text, which makes it unusable. If I remove the call to Clear() then the auto complete works, but it seems that then the AddRange() call has no effect, because the new suggestions that I add do not appear in the auto complete dropdown. I have been searching for a solution to this, and seen various things suggested, but I cannot get any of them to work - either the auto complete functionality appears disabled, or new strings do not appear. Here is a list of things I have tried: Calling BeginUpdate() before changing the strings and EndUpdate() afterwards. Calling Remove() on all the existing strings instead of Clear(). Clearing the text from the combobox while I update the strings, and adding it back afterwards. Setting the AutoCompleteMode to "None" while I change the strings, and setting it back to "SuggestAppend" afterwards. Hooking the TextUpdate or KeyPress event instead of TextChanged. Replacing the existing AutoCompleteCustomSource with a new AutoCompleteStringCollection each time. None of these helped, even in various combinations. Spence suggested that I try overriding the ComboBox function that gets the list of strings to use in auto complete. Using a reflector I found a couple of methods in the ComboBox class that look promising - GetStringsForAutoComplete() and SetAutoComplete(), but they are both private so I can't access them from a derived class. I couldn't take that any further. I tried replacing the ComboBox with a TextBox, because the auto complete interface is the same, and I found that the behaviour is slightly different. 
With the TextBox it appears to work better, in that the Append part of the auto complete works properly, but the Suggest part doesn't - the suggestion box briefly flashes to life but then immediately disappears. So I thought "Okay, I'll
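
    For what it's worth, one workaround that sidesteps AutoCompleteCustomSource entirely is to drive the combo box's own drop-down list from TextUpdate (a sketch only; GetNameSuggestions is the lookup from the question and is assumed to return a string[]):

        void ComboName_TextUpdate(object sender, EventArgs e)
        {
            string text = ComboName.Text;
            ComboName.Items.Clear();
            ComboName.Items.AddRange(GetNameSuggestions(text));   // repopulate the suggestions
            ComboName.Text = text;                                 // Items.Clear() can blank the text
            ComboName.SelectionStart = text.Length;                // keep the caret at the end
            ComboName.DroppedDown = true;                          // show the list
            Cursor.Current = Cursors.Default;                      // DroppedDown hides the cursor; restore it
        }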

    Read the article

  • git local master branch stopped tracking remotes/origin/master, can't push

    - by Paul Smith
    Just when I thought I'd got the hang of the git checkout -b newbranch - commit/commit/commit - git checkout master - git merge newbranch - git rebase -i master - git push workflow in git, something blew up, and I can't see any reason for it. Here's the general workflow, which has worked for me in the past: # make sure I'm up to date on master: $ git checkout master $ git pull # k, no conflicts # start my new feature $ git checkout -b FEATURE9 # master @ 2f93e34 Switched to a new branch 'FEATURE9' ... work, commit, work, commit, work, commit... $ git commit -a $ git checkout master $ git merge FEATURE9 $ git rebase -i master # squash some of the FEATURE9 ugliness Ok so far; now what I expect to see -- and normally do see -- is this: $ git status # On branch master # Your branch is ahead of 'origin/master' by 1 commit. # nothing to commit (working directory clean) But instead, I only see "nothing to commit (working directory clean)", no "Your branch is ahead of 'origin/master' by 1 commit.", and git pull shows this weirdness: $ git pull From . # unexpected * branch master -> FETCH_HEAD # unexpected Already up-to-date. # expected And git branch -a -v shows this: $ git branch -a -v FEATURE9 3eaf059 started feature 9 * master 3eaf059 started feature 9 remotes/origin/HEAD -> origin/master remotes/origin/master 2f93e34 some boring previous commit # should=3eaf059 git branch clearly shows that I'm currently on * master, and git log clearly shows that master (local) is at 3eaf059, while remotes/origin/HEAD - remotes/origin/master is stuck back at the fork. Ideally I'd like to know the semantics of how I might have gotten into this, but I would settle for a way to get my working copy tracking the remote master again & get the two back in sync without losing history. Thanks! (Note: I re-cloned the repo in a new directory and manually re-applied the changes, and everything worked fine, but I don't want that to be the standard workaround.) Addendum: The title says "can't push", but there's no error message. I just get the "already up to date" response even though git branch -a -v shows that local master is ahead of /remotes/origin/master. Here's the output from git pull and git remote -v, respectively: $ git pull From . * branch master -> FETCH_HEAD Already up-to-date. $ git remote -v origin [email protected]:proj.git (fetch) origin [email protected]:proj.git (push) Addendum 2: It looks as if my local master is configured to push to the remote, but not to pull from it. After doing for remote in 'git branch -r | grep -v master '; do git checkout --track $remote ; done, here's what I have. It seems I just need to get master pulling from remotes/origin/master again, no? $ git remote show origin * remote origin Fetch URL: [email protected]:proj.git Push URL: [email protected]:proj.git HEAD branch: master Remote branches: experiment_f tracked master tracked Local branches configured for 'git pull': experiment_f merges with remote experiment_f Local refs configured for 'git push': experiment_f pushes to experiment_f (up to date) master pushes to master (local out of date)
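
    A hedged repair (commands not from the thread): re-point the local master branch at the remote one, which restores the "ahead of 'origin/master'" tracking info and makes git pull fetch from origin again instead of from '.':

        git config branch.master.remote origin
        git config branch.master.merge refs/heads/master
        # recent git versions can do the same in one step:
        git branch --set-upstream-to=origin/master master
        # then verify
        git pull
        git status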

    Read the article

  • How to make sliding button sidebar in Flex

    - by Tam
    Hi, I'm fairly new to Flex. I want to make a button (icon) on the far right in the middle of the page that display a sliding side bar with multiple buttons in it when you hover over it. I want when the user hover out of the button bar it slides back again. Conceptually I got the basics of that to work. The issue I'm having is that when the user moves the mouse between the buttons in the sidebar it kicks in changing state and side bar slides back again. I tried using different types of containers and I got the same results. Any Advice? Thanks, Tam Here is the code: <?xml version="1.0" encoding="utf-8"?> <s:VGroup xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/halo" xmlns:vld ="com.lal.validators.*" xmlns:effect="com.lal.effects.*" width="150" horizontalAlign="right" gap="0"> <fx:Script> <![CDATA[ import com.lal.model.LalModelLocator; var _model:LalModelLocator = LalModelLocator.getInstance(); ]]> </fx:Script> <fx:Declarations> <mx:ArrayCollection id="someData"> </mx:ArrayCollection> </fx:Declarations> <s:states> <s:State name="normal" /> <s:State name="expanded" /> </s:states> <fx:Style source="/styles.css"/> <s:transitions> <s:Transition fromState="normal" toState="expanded" > <s:Sequence> <s:Wipe direction="left" duration="250" target="{buttonsGroup}" /> </s:Sequence> </s:Transition> <s:Transition fromState="expanded" toState="normal" > <s:Sequence> <s:Wipe direction="right" duration="250" target="{buttonsGroup}" /> </s:Sequence> </s:Transition> </s:transitions> <s:Button skinClass="com.lal.skins.CalendarButtonSkin" id="calendarIconButton" includeIn="normal" verticalCenter="0" mouseOver="currentState = 'expanded'"/> <s:Panel includeIn="expanded" id="buttonsGroup" mouseOut="currentState = 'normal' " width="150" height="490" > <s:layout> <s:VerticalLayout gap="0" paddingRight="0" /> </s:layout> <s:Button id="mondayButton" width="120" height="70" mouseOver="currentState = 'expanded'"/> <s:Button id="tuesdayButton" width="120" height="70" mouseOver="currentState = 'expanded'"/> <s:Button id="wednesdayButton" width="120" height="70" mouseOver="currentState = 'expanded'"/> <s:Button id="thursdayButton" width="120" height="70" mouseOver="currentState = 'expanded'"/> <s:Button id="fridayButton" width="120" height="70" mouseOver="currentState = 'expanded'"/> <s:Button id="saturdayButton" width="120" height="70" mouseOver="currentState = 'expanded'"/> <s:Button id="sundayButton" width="120" height="70" mouseOver="currentState = 'expanded'"/> </s:Panel> </s:VGroup>
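
    One hedged tweak (not from the post): mouseOut fires every time the pointer moves onto a child button, while rollOut only fires when the pointer leaves the Panel and all of its children, so switching the state change to rollOut should stop the bar from collapsing between buttons:

        <s:Panel includeIn="expanded" id="buttonsGroup"
                 rollOut="currentState = 'normal'"
                 width="150" height="490">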

    Read the article

  • MVC 2 Ajax.Beginform passes returned Html to javascript function

    - by Joe
    Hi, I have a small partial Create Person form in a page above a table of results. I want to be able to post the form to the server, which I can do no problem with ajax.Beginform. <% using (Ajax.BeginForm("Create", new AjaxOptions { OnComplete = "ProcessResponse" })) {%> <fieldset> <legend>Fields</legend> <div class="editor-label"> <%=Html.LabelFor(model => model.FirstName)%> </div> <div class="editor-field"> <%=Html.TextBoxFor(model => model.FirstName)%> <%=Html.ValidationMessageFor(model => model.FirstName)%> </div> <div class="editor-label"> <%=Html.LabelFor(model => model.LastName)%> </div> <div class="editor-field"> <%=Html.TextBoxFor(model => model.LastName)%> <%=Html.ValidationMessageFor(model => model.LastName)%> </div> <p> <input type="submit" /> </p> </fieldset> <% } %> Then in my controller I want to be able to post back a partial which is just a table row if the create is successful and append it to the table, which I can do easily with jquery. $('#personTable tr:last').after(data); However, if server validation fails I want to pass back my partial create person form with the validation errors and replace the existing Create Person form. I have tried returning a Json array Controller: return Json(new { Success = true, Html= this.RenderViewToString("PersonSubform",person) }); Javascript: var json_data = response.get_response().get_object(); with a pass/fail flag and the partial rendered as a string using the solition below but that doesnt render the mvc validation controls when the form fails. SO RenderPartialToString So, is there any way I can hand my javascript the out of the box PartialView("PersonForm") as its returned from my ajax.form? Can I pass some addition info as a Json array so I can tell if its pass or fail and maybe add a message? UPDATE I can now pass the HTML of a PartialView to my javascript but I need to pass some additional data pairs like ServerValidation : true/false and ActionMessage : "you have just created a Person Bill". Ideally I would pass a Json array rather than hidden fields in my partial. function ProcessResponse(response) { var html = response.get_data(); $("#campaignSubform").html(html); } Many thanks in advance
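
    On the closing update, a sketch of one way to carry a success flag and a message alongside the rendered partial in a single JSON payload (names are illustrative; RenderViewToString is the helper already referenced above, and this does not fix the separate problem of validation messages not rendering from a string):

        // controller (C# sketch)
        if (!ModelState.IsValid)
            return Json(new { Success = false, Message = "Please correct the errors.",
                              Html = this.RenderViewToString("PersonForm", person) });
        return Json(new { Success = true, Message = "Person created.",
                          Html = this.RenderViewToString("PersonSubform", person) });

        // client (JavaScript sketch)
        function ProcessResponse(response) {
            var data = response.get_response().get_object();   // { Success, Message, Html }
            if (data.Success) { $('#personTable tr:last').after(data.Html); }
            else { $('#campaignSubform').html(data.Html); }
            if (data.Message) { alert(data.Message); }
        }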

    Read the article

  • Getting values from Dynamic elements.

    - by nCdy
    I'm adding some dynamic elements to my WebApp this way : (Language used is Nemerele (It has a simple C#-like syntax)) unless (GridView1.Rows.Count==0) { foreach(index with row = GridView1.Rows[index] in [0..GridView1.Rows.Count-1]) { row.Cells[0].Controls.Add ({ def TB = TextBox(); TB.EnableViewState = false; unless(row.Cells[0].Text == "&nbsp;") { TB.Text = row.Cells[0].Text; row.Cells[0].Text = ""; } TB.ID=TB.ClientID; TB.Width = 60; TB }); row.Cells[0].Controls.Add ({ def B = Button(); B.EnableViewState = false; B.Width = 80; B.Text = "?????????"; B.UseSubmitBehavior=false; // Makes no sense //B.OnClientClick="select(5);"; // HERE I CAN KNOW ABOUT TB.ID //B.Click+=EventHandler(fun(_,_) : void { }); // POST BACK KILL THAT ALL B }); } } This textboxes must make first field of GridView editable so ... but now I need to save a values. I can't do it on server side because any postback will Destroy all dynamic elements so I must do it without Post Back. So I try ... <script type="text/javascript" src="Scripts/jquery-1.4.1.min.js"></script> <script type="text/javascript"> function CallPageMethod(methodName, onSuccess, onFail) { var args = ''; var l = arguments.length; if (l > 3) { for (var i = 3; i < l - 1; i += 2) { if (args.length != 0) args += ','; args += '"' + arguments[i] + '":"' + arguments[i + 1] + '"'; } } var loc = window.location.href; loc = (loc.substr(loc.length - 1, 1) == "/") ? loc + "Report.aspx" : loc; $.ajax({ type: "POST", url: loc + "/" + methodName, data: "{" + args + "}", contentType: "application/json; charset=utf-8", dataType: "json", success: onSuccess, fail: onFail }); } function select(index) { var id = $("#id" + index).html(); CallPageMethod("SelectBook", success, fail, "id",id); } function success(response) { alert(response.d); } function fail(response) { alert("&#1054;&#1096;&#1080;&#1073;&#1082;&#1072;."); } </script> So... here is a trouble string : var id = $("#id" + index).html(); I know what is ID here : TB.ID=TB.ClientID; (when I add it) but I have no idea how to send it on Web Form. If I can add something like this div : <div id="Result" onclick="select(<%= " TB.ID " %>);"> Click here. </div> from the code it will be really goal, but I can't add this element as from CodeBehind as a dynamic element. So how can I transfer TB.ID or TB.ClientID to some static div Or how can I add some clickable dynamic element without PostBack to not destroy all my dynamic elements. Thank you.
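
    A sketch of one way to hand the textbox's id to the client without a postback (written in C# for brevity; the post uses Nemerle, and the ids and caption below are illustrative): because the button is created right next to the textbox, its OnClientClick can embed that textbox's ClientID, and returning false suppresses the postback, so the dynamically created controls survive:

        var tb = new TextBox();
        tb.ID = "tb_" + index;                                   // deterministic, illustrative id
        row.Cells[0].Controls.Add(tb);

        var b = new Button();
        b.Text = "Save";                                         // illustrative caption
        b.UseSubmitBehavior = false;
        b.OnClientClick = "select('" + tb.ClientID + "'); return false;";   // no postback
        row.Cells[0].Controls.Add(b);

    The select(id) function can then read the value with $('#' + id).val() and send it through the existing CallPageMethod helper.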

    Read the article

  • Why wont a simple socket to the localhost connect?

    - by Jake M
    I am following a tutorial that teaches me how to use win32 sockets(winsock2). I am attempting to create a simple socket that connects to the "localhost" but my program is failing when I attempt to connect to the local host(at the function connect()). Do I need admin privileges to connect to the localhost? Maybe thats why it fails? Maybe theres a problem with my code? I have tried the ports 8888 & 8000 & they both fail. Also if I change the port to 80 & connect to www.google.com I can connect BUT I get no response back. Is that because I haven't sent a HTTP request or am I meant to get some response back? Here's my code (with the includes removed): // Constants & Globals // typedef unsigned long IPNumber; // IP number typedef for IPv4 const int SOCK_VER = 2; const int SERVER_PORT = 8888; // 8888 SOCKET mSocket = INVALID_SOCKET; SOCKADDR_IN sockAddr = {0}; WSADATA wsaData; HOSTENT* hostent; int _tmain(int argc, _TCHAR* argv[]) { // Initialise winsock version 2.2 if (WSAStartup(MAKEWORD(SOCK_VER,2), &wsaData) != 0) { printf("Failed to initialise winsock\n"); WSACleanup(); system("PAUSE"); return 0; } if (LOBYTE(wsaData.wVersion) != SOCK_VER || HIBYTE(wsaData.wVersion) != 2) { printf("Failed to load the correct winsock version\n"); WSACleanup(); system("PAUSE"); return 0; } // Create socket mSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP); if (mSocket == INVALID_SOCKET) { printf("Failed to create TCP socket\n"); WSACleanup(); system("PAUSE"); return 0; } // Get IP Address of website by the domain name, we do this by contacting(??) the Domain Name Server if ((hostent = gethostbyname("localhost")) == NULL) // "localhost" www.google.com { printf("Failed to resolve website name to an ip address\n"); WSACleanup(); system("PAUSE"); return 0; } sockAddr.sin_port = htons(SERVER_PORT); sockAddr.sin_family = AF_INET; sockAddr.sin_addr.S_un.S_addr = (*reinterpret_cast <IPNumber*> (hostent->h_addr_list[0])); // sockAddr.sin_addr.s_addr=*((unsigned long*)hostent->h_addr); // Can also do this // ERROR OCCURS ON NEXT LINE: Connect to server if (connect(mSocket, (SOCKADDR*)(&sockAddr), sizeof(sockAddr)) != 0) { printf("Failed to connect to server\n"); WSACleanup(); system("PAUSE"); return 0; } printf("Got to here\r\n"); // Display message from server char buffer[1000]; memset(buffer,0,999); int inDataLength=recv(mSocket,buffer,1000,0); printf("Response: %s\r\n", buffer); // Shutdown our socket shutdown(mSocket, SD_SEND); // Close our socket entirely closesocket(mSocket); // Cleanup Winsock WSACleanup(); system("pause"); return 0; }
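
    Two hedged notes, not from the thread: connect() to localhost:8888 can only succeed if some server is already listening on that port (WSAGetLastError() returning 10061, connection refused, would confirm nothing is); and in the port-80 test against www.google.com the server stays silent until a request is sent, for example:

        // send a minimal HTTP request before calling recv()
        const char* request = "GET / HTTP/1.1\r\nHost: www.google.com\r\nConnection: close\r\n\r\n";
        send(mSocket, request, (int)strlen(request), 0);

        char buffer[1000];
        memset(buffer, 0, sizeof(buffer));
        int inDataLength = recv(mSocket, buffer, sizeof(buffer) - 1, 0);
        printf("Response: %s\r\n", buffer);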

    Read the article

  • Samplegrabber works fine on AVI/MPEG files but choppy with WMV

    - by jomtois
    I have been using the latest version of the WPFMediaKit. What I am trying to do is write a sample application that will use the Samplegrabber to capture the video frames of video files so I can have them as individual Bitmaps. So far, I have had good luck with the following code when constructing and rendering my graph. However, when I use this code to play back a .wmv video file, when the samplegrabber is attached, it will play back jumpy or choppy. If I comment out the line where I add the samplegrabber filter, it works fine. Again, it works with the samplegrabber correctly with AVI/MPEG, etc. protected virtual void OpenSource() { FrameCount = 0; /* Make sure we clean up any remaining mess */ FreeResources(); if (m_sourceUri == null) return; string fileSource = m_sourceUri.OriginalString; if (string.IsNullOrEmpty(fileSource)) return; try { /* Creates the GraphBuilder COM object */ m_graph = new FilterGraphNoThread() as IGraphBuilder; if (m_graph == null) throw new Exception("Could not create a graph"); /* Add our prefered audio renderer */ InsertAudioRenderer(AudioRenderer); var filterGraph = m_graph as IFilterGraph2; if (filterGraph == null) throw new Exception("Could not QueryInterface for the IFilterGraph2"); IBaseFilter renderer = CreateVideoMixingRenderer9(m_graph, 1); IBaseFilter sourceFilter; /* Have DirectShow find the correct source filter for the Uri */ var hr = filterGraph.AddSourceFilter(fileSource, fileSource, out sourceFilter); DsError.ThrowExceptionForHR(hr); /* We will want to enum all the pins on the source filter */ IEnumPins pinEnum; hr = sourceFilter.EnumPins(out pinEnum); DsError.ThrowExceptionForHR(hr); IntPtr fetched = IntPtr.Zero; IPin[] pins = { null }; /* Counter for how many pins successfully rendered */ int pinsRendered = 0; m_sampleGrabber = (ISampleGrabber)new SampleGrabber(); SetupSampleGrabber(m_sampleGrabber); hr = m_graph.AddFilter(m_sampleGrabber as IBaseFilter, "SampleGrabber"); DsError.ThrowExceptionForHR(hr); /* Loop over each pin of the source filter */ while (pinEnum.Next(pins.Length, pins, fetched) == 0) { if (filterGraph.RenderEx(pins[0], AMRenderExFlags.RenderToExistingRenderers, IntPtr.Zero) >= 0) pinsRendered++; Marshal.ReleaseComObject(pins[0]); } Marshal.ReleaseComObject(pinEnum); Marshal.ReleaseComObject(sourceFilter); if (pinsRendered == 0) throw new Exception("Could not render any streams from the source Uri"); /* Configure the graph in the base class */ SetupFilterGraph(m_graph); HasVideo = true; /* Sets the NaturalVideoWidth/Height */ //SetNativePixelSizes(renderer); } catch (Exception ex) { /* This exection will happen usually if the media does * not exist or could not open due to not having the * proper filters installed */ FreeResources(); /* Fire our failed event */ InvokeMediaFailed(new MediaFailedEventArgs(ex.Message, ex)); } InvokeMediaOpened(); } And: private void SetupSampleGrabber(ISampleGrabber sampleGrabber) { FrameCount = 0; var mediaType = new AMMediaType { majorType = MediaType.Video, subType = MediaSubType.RGB24, formatType = FormatType.VideoInfo }; int hr = sampleGrabber.SetMediaType(mediaType); DsUtils.FreeAMMediaType(mediaType); DsError.ThrowExceptionForHR(hr); hr = sampleGrabber.SetCallback(this, 0); DsError.ThrowExceptionForHR(hr); } I have read a few things saying the the .wmv or .asf formats are asynchronous or something. I have attempted inserting a WMAsfReader to decode which works, but once it goes to the VMR9 it gives the same behavior. 
Also, I have gotten it to work correctly when I comment out the IBaseFilter renderer = CreateVideoMixingRenderer9(m_graph, 1); line and use filterGraph.Render(pins[0]); -- the only drawback is that it now renders in an ActiveMovie window of its own instead of in my control; however, the sample grabber functions correctly and without any skipping. So I am thinking the bug is in the VMR9 / sample-grabbing somewhere. Any help? I am new to this.

    Read the article

  • Reversing page navigation on PHP

    - by ilnur777
    Can anyone help me with reversing this PHP page navigation? Current script setting shows this format: [0] | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 ... 14 • Forward • End But I really need it to reverse for this format: [14] | 13 | 12 | 11| 10 | 9 | 8 | 7 | 6 ... 0 • Back • Start Here is the PHP code: <? $onpage = 10; // on page function page(){ if(empty($_GET["page"])){ $page = 0; } else { if(!is_numeric($_GET["page"])) die("Bad page number!"); $page = $_GET["page"]; } return $page; } function navigation($onpage, $page){ //---------------- $countt = 150; $cnt=$countt; // total amount of entries $rpp=$onpage; // total entries per page $rad=4; // amount of links to show near current page (2 left + 2 right + current page = total 5) $links=$rad*2+1; $pages=ceil($cnt/$rpp); if ($page>0) { echo "<a href=\"?page=0\"><<< Start</a> <font color='#CCCCCC'>•</font> <a href=\"?page=".($page-1)."\">< Back</a> <font color='#CCCCCC'>•</font>"; } $start=$page-$rad; if ($start>$pages-$links) { $start=$pages-$links; } if ($start<0) { $start=0; } $end=$start+$links; if ($end>$pages) { $end=$pages; } for ($i=$start; $i<$end; $i++) { echo " "; if ($i==$page) { echo "["; } else { echo "<a href=\"?page=$i\">"; } echo $i; if ($i==$page) { echo "]"; } else { echo "</a>"; } if ($i!=($end-1)) { echo " <font color='#CCCCCC'>|</font>"; } } if ($pages>$links&&$page<($pages-$rad-1)) { echo " ... <a href=\"?page=".($pages-1)."\">".($pages-1)."</a>"; } if ($page<$pages-1) { echo " <font color='#CCCCCC'>•</font> <a href=\"?page=".($page+1)."\">Forward ></a> <font color='#CCCCCC'>•</font> <a href=\"?page=".($pages-1)."\">End >>></a>"; } } $page = page(); // detect page $navigation = navigation($onpage, $page); // detect navigation ?>
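
    A minimal sketch of the descending page loop (it assumes the same $start, $end, $page and $pages variables as the navigation() function above):

        for ($i = $end - 1; $i >= $start; $i--) {
            echo ($i == $page) ? "[$i]" : "<a href=\"?page=$i\">$i</a>";
            if ($i != $start) {
                echo " <font color='#CCCCCC'>|</font> ";
            }
        }
        // "Back" then points to the next-higher page and "Start" to the last page
        if ($page < $pages - 1) {
            echo " <font color='#CCCCCC'>&bull;</font> <a href=\"?page=" . ($page + 1) . "\">Back</a>";
            echo " <font color='#CCCCCC'>&bull;</font> <a href=\"?page=" . ($pages - 1) . "\">Start</a>";
        }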

    Read the article

  • ASMX Web Services with SOAP Extension

    - by digitall
    I am in the process of setting up a web service for an external client to connect to my client's application and update some information. I went the ASMX route (the rest of the application runs on WCF) because I knew the external client could be very difficult to deal with and I was trying to keep everything as simple as possible. They also aren't a .Net shop which makes things worse. After getting the service setup for them I provided the ASMX URL for them to see how to format the SOAP headers/message content. They have since come back and told me their tool is unable to send them in the format required by .Net and I can either come up with a different way to accept messages or we have to go with FTP. Based on this I have been researching how to intercept their message, reformat it the way my service requires it (which means adding two lines), and then let it process. This path led me to SOAP Extensions which I have been trying to work with but can't seem to figure out. When I have my sample application call the web service from the generated code Add Web Reference provides everything works great until I add in my extension. All it currently does is override ChainStream setting an internal stream equal to the one being passed in and returning a new stream, like this: private Stream newStream = null; private Stream oldStream = null; public override Stream ChainStream(Stream stream) { this.oldStream = stream; this.newStream = new MemoryStream(); return newStream; } I also override ProcessMessage and in it take the contents of oldStream and set newStream equal to that and then write to a different stream with an XmlWriter. I take that new stream and using a StreamReader read it into a string, with the end goal being manipulating it here and setting newStream (which is being used by ChainStream) equal to the contents of this. Here is that piece: public override void ProcessMessage(SoapMessage message) { switch (message.Stage) { case SoapMessageStage.BeforeDeserialize: this.Process(); break; default: break; } } private void Process() { this.newStream.Position = 0L; XmlTextReader reader = new XmlTextReader(this.oldStream); MemoryStream outStream = new MemoryStream(); using (XmlWriter writer = XmlWriter.Create(outStream)) { do { writer.WriteNode(reader, true); } while (reader.Read()); writer.Flush(); } outStream.Seek(0, SeekOrigin.Begin); StreamReader streamReader = new StreamReader(outStream); string message = streamReader.ReadToEnd(); newStream = outStream; newStream.Seek(0, SeekOrigin.Begin); streamReader.Close(); } By running this, which seems to me like it would be fine, my test application gets a 400 Bad Request back from the service. If I either don't use the extension or I don't do anything in ProcessMessage everything seems fine. Any suggestions to this? As a side note, once I get this working with the generated code from the WSDL I will be moving to a WebRequest to try sending the message to the service. Currently that bombs out with a 415 Unsupported Media Type response. I'm trying to keep this post to one question but if anyone has any tips with using a WebRequest to connect to an ASMX service it would be much appreciated!
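
    A hedged guess at the 400: ChainStream has already handed the original newStream object to the runtime, so reassigning the field ("newStream = outStream") leaves the runtime deserializing an empty MemoryStream. Writing the modified text into that same stream instance is the usual pattern:

        private void Process()
        {
            // read the raw incoming message from the stream the runtime gave us
            string soap = new StreamReader(this.oldStream).ReadToEnd();

            // ... insert the two extra lines / reformat the message here ...

            // write the result into the SAME stream object returned from ChainStream
            var writer = new StreamWriter(this.newStream);
            writer.Write(soap);
            writer.Flush();                  // do not dispose: that would close newStream
            this.newStream.Position = 0;
        }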

    Read the article

  • What git branching models actually work - the final question

    - by UncleCJ
    In our company we have successfully deployed git and we are currently using a simple trunk/release/hotfixes branching model. However, this has it's problems, I have some key issues of confusion in the community which would be awesome to have answered here. Maybe my hopes for an Alexander stroke are too great, quite possibly I'll decompose this question into more manageable issues, but here's my first shot. Workflows / branching models - below are the three main descriptions of this I have seen, but they are partially contradicting each other or don't go far enough to sort out the subsequent issues we've run into (as described below). Thus our team so far defaults to not so great solutions. Are you doing something better? gitworkflows(7) Manual Page (nvie) A successful Git branching model (reinh) A Git Workflow for Agile Teams Merging vs rebasing (tangled vs sequential history) - the bids on this are as confusing as it gets. Should one pull --rebase or wait with merging back to the mainline until your task is finished? Personally I lean towards merging since this preserves a visual illustration of on which base a task was started and finished, and I even prefer merge --no-ff for this purpose. It has other drawbacks however. Also many haven't realized the useful property of merging - that it isn't commutative (merging a topic branch into master does not mean merging master into the topic branch). I am looking for a natural workflow - sometimes mistakes happen because our procedures don't capture a specific situation with simple rules. For example a fix needed for earlier releases should of course be based sufficiently downstream to be possible to merge upstream into all branches necessary (is the usage of these terms clear enough?). However it happens that a fix makes it into the master before the developer realizes it should have been placed further downstream, and if that is already pushed (even worse, merged or something based on it) then the option remaining is cherry-picking, with it's associated perils... What simple rules like such do you use? Also in this is included the awkwardness of one topic branch necessarily excluding other topic branches (assuming they are branched from a common baseline). Developers don't want to finish a feature to start another one feeling like the code they just wrote is not there anymore How to avoid creating merge conflicts (due to cherry-pick)? What seems like a sure way to create a merge conflict is to cherry-pick between branches, they can never be merged again? Would applying the same commit in revert (how to do this?) in either branch possibly solve this situation? This is one reason I do not dare to push for a largely merge-based workflow. How to decompose into topical branches? - We realize that it would be awesome to assemble a finished integration from topic branches, but often work by our developers is not clearly defined (sometimes as simple as "poking around") and if some code has already gone into a "misc" topic, it can not be taken out of there again, according to the question above? How do you work with defining/approving/graduating/releasing your topic branches? Proper procedures like code review and graduating would of course be lovely, but we simply cannot keep things untangled enough to manage this - any suggestions? integration branches, illustration please? Vote and comment as much as you'd like, I'll try to keep the issue page clear and informative enough. Thanks! 
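
    As a concrete illustration of the merge --no-ff preference mentioned above (commands are illustrative, not from the post):

        git checkout master
        git pull
        git merge --no-ff FEATURE9    # forces a merge commit, so the topic branch stays visible in history
        git push origin master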
Below is a list of related topics on stackoverflow I have checked out:
  - What are some good strategies to allow deployed applications to be hotfixable?
  - Workflow description for git usage for in-house development
  - Git workflow for corporate Linux kernel development
  - How do you maintain development code and production code? (thanks for this PDF!)
  - git releases management
  - Git Cherry-pick vs Merge Workflow
  - How to cherry-pick multiple commits
  - How do you merge selective files with git-merge?
  - How to cherry pick a range of commits and merge into another branch
  - ReinH Git Workflow
  - git workflow for making modifications you'll never push back to origin
  - Cherry-pick a merge
  - Proper Git workflow for combined OS and Private code?
  - Maintaining Project with Git
  - Why cant Git merge file changes with a modified parent/master.
  - Git branching / rebasing good practices
  - When will "git pull --rebase" get me in to trouble?

    Read the article

  • Using ASP.NET MVC 2 with Ninject 2 from scratch

    - by Rune Jacobsen
    I just did File - New Project last night on a new project. Ah, the smell of green fields. I am using the just released ASP.NET MVC 2 (i.e. no preview or release candidate, the real thing), and thought I'd get off to a good start using Ninject 2 (also released version) with the MVC extensions. I downloaded the MVC extensions project, opened it in VS2008Sp1, built it in release mode, and then went into the mvc2\build\release folder and copied Ninject.dll and Ninject.Web.Mvc.dll from there to the Libraries folder on my project (so that I can lug them around in source control and always have the right version everywhere). I didn't include the corresponding .xml files - should I? Do they just provide intellisense, or some other function? Not a big deal I believe. Anyhoo, I followed the most up-to-date advice I could find; I referenced the DLLs in my MVC2 project, then went to work on Global.asax.cs. First I made it inherit from NinjectHttpApplication. I removed the Application_Start() method, and overrode OnApplicationStarted() instead. Here is that method: protected override void OnApplicationStarted() { base.OnApplicationStarted(); AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes); // RegisterAllControllersIn(Assembly.GetExecutingAssembly()); } And I also followed the advice of VS and implemented the CreateKernel method: protected override Ninject.IKernel CreateKernel() { // RegisterAllControllersIn(Assembly.GetExecutingAssembly()); return new StandardKernel(); } That is all. No other modifications to the project. You'll notice that the RegisterAllControllersIn() method is commented out in two places above. I've figured I can run it in three different combinations, all with their funky side effects; Running it like above. I am then presented with the standard "Welcome to ASP.NET MVC" page in all its' glory. However, after this page is displayed correctly in the browser, VS shows me an exception that was thrown. It throws in NinjectControllerFactory.GetControllerInstance(), which was called with a NULL value in the controllerType parameter. Notice that this happens after the /Home page is rendered - I have no idea why it is called again, and by using breakpoints I've already determined that GetControllerInstance() has been successfully called for the HomeController. Why this new call with controllerType as null? I really have no idea. Pressing F5 at this time takes me back to the browser, no complaints there. Uncommenting the RegisterAllControllersIn() method in CreateKernel() This is where stuff is really starting to get funky. Now I get a 404 error. Some times I have also gotten an ArgumentNullException on the RegisterAllControllersIn() line, but that is pretty rare, and I have not been able to reproduce it. Uncommenting the RegisterAllControllers() method in OnApplicationStarted() (And putting the comment back on the one in CreateKernel()) Results in behavior that seems exactly like that in point 1. So to keep from going on forever - is there an exact step-by-step guide on how to set up an MVC 2 project with Ninject 2 (both non-beta release versions) to get the controllers provided by Ninject? Of course I will then start providing some actual stuff for injection (like ISession objects and repositories, loggers etc), but I thought I'd get this working first. Any help will be highly appreciated! (Also posted to the Ninject Google Group)
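
    A hedged sketch of the wiring usually shown for Ninject 2 + MVC 2, plus a guess at the stray null controllerType (browsers asking for /favicon.ico can fall through the default route and reach the controller factory with no controller type):

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
            routes.IgnoreRoute("{*favicon}", new { favicon = @"(.*/)?favicon\.ico(/.*)?" });
            // ... the project's MapRoute calls ...
        }

        protected override void OnApplicationStarted()
        {
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);
            RegisterAllControllersIn(Assembly.GetExecutingAssembly());
        }

        protected override IKernel CreateKernel()
        {
            return new StandardKernel();   // bindings/modules go here
        }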

    Read the article

  • .NET 4.0 Dynamic object used statically?

    - by Kevin Won
    I've gotten quite sick of XML configuration files in .NET and want to replace them with a format that is more sane. Therefore, I'm writing a config file parser for C# applications that will take a custom config file format, parse it, and create a Python source string that I can then execute in C# and use as a static object (yes that's right--I want a static (not the static type dyanamic) object in the end). Here's an example of what my config file looks like: // my custom config file format GlobalName: ExampleApp Properties { ExternalServiceTimeout: "120" } Python { // this allows for straight python code to be added to handle custom config def MyCustomPython: return "cool" } Using ANTLR I've created a Lexer/Parser that will convert this format to a Python script. So assume I have that all right and can take the .config above and run my Lexer/Parser on it to get a Python script out the back (this has the added benefit of giving me a validation tool for my config). By running the resultant script in C# // simplified example of getting the dynamic python object in C# // (not how I really do it) ScriptRuntime py = Python.CreateRuntime(); dynamic conf = py.UseFile("conftest.py"); dynamic t = conf.GetConfTest("test"); I can get a dynamic object that has my configuration settings. I can now get my config file settings in C# by invoking a dynamic method on that object: //C# calling a method on the dynamic python object var timeout = t.GetProperty("ExternalServiceTimeout"); //the config also allows for straight Python scripting (via the Python block) var special = t.MyCustonPython(); of course, I have no type safety here and no intellisense support. I have a dynamic representation of my config file, but I want a static one. I know what my Python object's type is--it is actually newing up in instance of a C# class. But since it's happening in python, it's type is not the C# type, but dynamic instead. What I want to do is then cast the object back to the C# type that I know the object is: // doesn't work--can't cast a dynamic to a static type (nulls out) IConfigSettings staticTypeConfig = t as IConfigSettings Is there any way to figure out how to cast the object to the static type? I'm rather doubtful that there is... so doubtful that I took another approach of which I'm not entirely sure about. I'm wondering if someone has a better way... So here's my current tactic: since I know the type of the python object, I am creating a C# wrapper class: public class ConfigSettings : IConfigSettings that takes in a dynamic object in the ctor: public ConfigSettings(dynamic settings) { this.DynamicProxy = settings; } public dynamic DynamicProxy { get; private set; } Now I have a reference to the Python dynamic object of which I know the type. 
So I can then just put wrappers around the Python methods that I know are there: // wrapper access to the underlying dynamic object // this makes my dynamic object appear 'static' public string GetSetting(string key) { return this.DynamicProxy.GetProperty(key).ToString(); } Now the dynamic object is accessed through this static proxy and thus can obviously be passed around in the static C# world via interface, etc: // dependency inject the dynamic object around IBusinessLogic logic = new BusinessLogic(IConfigSettings config); This solution has the benefits of all the static typing stuff we know and love while at the same time giving me the option of 'bailing out' to dynamic too: // the DynamicProxy property give direct access to the dynamic object var result = config.DynamicProxy.MyCustomPython(); but, man, this seems rather convoluted way of getting to an object that is a static type in the first place! Since the whole dynamic/static interaction world is new to me, I'm really questioning if my solution is optimal or if I'm missing something (i.e. some way of casting that dynamic object to a known static type) about how to bridge the chasm between these two universes.

    Read the article

  • OpenGL antialiasing not working

    - by user146780
    I'v been trying to anti alias with OGL. I found a code chunk that is supposed to do this but I see no antialiasing. I also reset my settings in Nvidia Control Panel but no luck. Does this code in fact antialias the cube? GLboolean polySmooth = GL_TRUE; static void init(void) { glCullFace (GL_BACK); glEnable (GL_CULL_FACE); glBlendFunc (GL_SRC_ALPHA_SATURATE, GL_ONE); glClearColor (0.0, 0.0, 0.0, 0.0); } #define NFACE 6 #define NVERT 8 void drawCube(GLdouble x0, GLdouble x1, GLdouble y0, GLdouble y1, GLdouble z0, GLdouble z1) { static GLfloat v[8][3]; static GLfloat c[8][4] = { {0.0, 0.0, 0.0, 1.0}, {1.0, 0.0, 0.0, 1.0}, {0.0, 1.0, 0.0, 1.0}, {1.0, 1.0, 0.0, 1.0}, {0.0, 0.0, 1.0, 1.0}, {1.0, 0.0, 1.0, 1.0}, {0.0, 1.0, 1.0, 1.0}, {1.0, 1.0, 1.0, 1.0} }; /* indices of front, top, left, bottom, right, back faces */ static GLubyte indices[NFACE][4] = { {4, 5, 6, 7}, {2, 3, 7, 6}, {0, 4, 7, 3}, {0, 1, 5, 4}, {1, 5, 6, 2}, {0, 3, 2, 1} }; v[0][0] = v[3][0] = v[4][0] = v[7][0] = x0; v[1][0] = v[2][0] = v[5][0] = v[6][0] = x1; v[0][1] = v[1][1] = v[4][1] = v[5][1] = y0; v[2][1] = v[3][1] = v[6][1] = v[7][1] = y1; v[0][2] = v[1][2] = v[2][2] = v[3][2] = z0; v[4][2] = v[5][2] = v[6][2] = v[7][2] = z1; #ifdef GL_VERSION_1_1 glEnableClientState (GL_VERTEX_ARRAY); glEnableClientState (GL_COLOR_ARRAY); glVertexPointer (3, GL_FLOAT, 0, v); glColorPointer (4, GL_FLOAT, 0, c); glDrawElements(GL_QUADS, NFACE*4, GL_UNSIGNED_BYTE, indices); glDisableClientState (GL_VERTEX_ARRAY); glDisableClientState (GL_COLOR_ARRAY); #else printf ("If this is GL Version 1.0, "); printf ("vertex arrays are not supported.\n"); exit(1); #endif } /* Note: polygons must be drawn from front to back * for proper blending. */ void display(void) { if (polySmooth) { glClear (GL_COLOR_BUFFER_BIT); glEnable (GL_BLEND); glEnable (GL_POLYGON_SMOOTH); glDisable (GL_DEPTH_TEST); } else { glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glDisable (GL_BLEND); glDisable (GL_POLYGON_SMOOTH); glEnable (GL_DEPTH_TEST); } glPushMatrix (); glTranslatef (0.0, 0.0, -8.0); glRotatef (30.0, 1.0, 0.0, 0.0); glRotatef (60.0, 0.0, 1.0, 0.0); drawCube(-0.5, 0.5, -0.5, 0.5, -0.5, 0.5); glPopMatrix (); glFlush (); } void reshape(int w, int h) { glViewport(0, 0, (GLsizei) w, (GLsizei) h); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(30.0, (GLfloat) w/(GLfloat) h, 1.0, 20.0); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); } void keyboard(unsigned char key, int x, int y) { switch (key) { case `t': case `T': polySmooth = !polySmooth; glutPostRedisplay(); break; case 27: exit(0); /* Escape key */ break; default: break; } } int main(int argc, char** argv) { glutInit(&argc, argv); glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB | GLUT_ALPHA | GLUT_DEPTH); glutInitWindowSize(200, 200); glutCreateWindow(argv[0]); init (); glutReshapeFunc (reshape); glutKeyboardFunc (keyboard); glutDisplayFunc (display); glutMainLoop(); return 0; } Thanks
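
    As a hedged alternative (not part of the original code): GL_POLYGON_SMOOTH is unreliable on current drivers, and requesting a multisampled framebuffer is the more usual way to get visible antialiasing, e.g.:

        /* request a multisampled visual and enable multisampling */
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_ALPHA | GLUT_DEPTH | GLUT_MULTISAMPLE);
        glutInitWindowSize(200, 200);
        glutCreateWindow(argv[0]);
        glEnable(GL_MULTISAMPLE);   /* needs OpenGL 1.3 or the ARB_multisample extension */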

    Read the article

  • Taking screenshots in Windows Vista, Windows 7, with transparent areas outside the app region

    - by Steve Sheldon
    Hey Folks, I am trying to take a screenshot of an application and I would like to make the parts of the rectangle that are not part of the applications region be transparent. So for instance on a standard windows application I would like to make the rounded corners transparent. I wrote a quick test application which works on on XP (or vista/windows 7 with aero turned off): protected override void OnPaint(PaintEventArgs e) { base.OnPaint(e); Graphics g = e.Graphics; // Just find a window to test with IntPtr hwnd = FindWindowByCaption(IntPtr.Zero, "Calculator"); WINDOWINFO info = new WINDOWINFO(); info.cbSize = (uint)Marshal.SizeOf(info); GetWindowInfo(hwnd, ref info); Rectangle r = Rectangle.FromLTRB(info.rcWindow.Left, info.rcWindow.Top, info.rcWindow.Right, info.rcWindow.Bottom); IntPtr hrgn = CreateRectRgn(info.rcWindow.Left, info.rcWindow.Top, info.rcWindow.Right, info.rcWindow.Bottom); GetWindowRgn(hwnd, hrgn); // fill a rectangle which would be where I would probably // write some mask color g.FillRectangle(Brushes.Red, r); // fill the region over the top, all I am trying to do here // is show the contrast between the applications region and // the rectangle that the region would be placed in Region region = Region.FromHrgn(hrgn); region.Translate(info.rcWindow.Left, info.rcWindow.Top); g.FillRegion(Brushes.Blue, region); } Quick side note: before commenting that this code is doing stuff it shouldn't (or should do, like dispose) in a paint function, I know, it's not going anywhere but this post and is designed purely as a way to quickly show the problem, and that it does... OK, back to the problem ;) When I run this test app on XP (or Vista/Windows 7 with Aero off), I get something like this, which is great because I can eek an xor mask out of this that can be used later with BitBlt. Here is the problem, on Vista or Windows 7 with Aero enabled, there isn't necessarily a region on the window, in fact in most cases there isn't. Can anybody help me figure out how to get the region of the application like this on these platforms. Here are some of the approaches I have already tried... 1. Using the PrintWindow function: This doesn't work because it gives back a screenshot taken of the window with Aero off and this window is a different shape from the window returned with Aero on 2 Using the Desktop Window Manager API to get a full size thumbnail: This didn't work because it draws directly to the screen and from what I can tell you can't get a screenshot directly out of this api. Yeah, I could open a window with a pink background, show the thumbnail, take a screenshot then hide this temporary window but thats a horrible user experience and a complete hack I would rather not have my name on. 3. Using Graphics.CopyFromScreen or some other pinvoke variant of this: This doesn't work because I can't assume that the window I need information from is at the top of the z-order on the screen. Right now, the best solution I can think of is to special case Aero on Windows 7 and Vista to manually rub out the corners by hard coding some graphics paths I paint out but this solution would suck since any application that performs custom skinning will break this. Can you think of another or better solution? If you are here, thanks for taking time to read this post, I appreciate any help or direction that you can offer!

    Read the article
