Search Results

Search found 10634 results on 426 pages for 'pass'.

Page 320/426

  • How to use Guice in a Swing application

    - by Gerco Dries
    I have a Swing application that I would like to convert from spaghetti to using dependency injection with Guice. Using Guice to provide services like configuration and task queues is going great but I'm now starting on the GUI of the app and am unsure of how to proceed. The application is basically a JFrame with a bunch of tabs in a JTabbedPane. Each of the tabs is a separate JPanel subclass that lays out the various components and needs services to perform actions when certain buttons are pressed. In the current application, this looks somewhat like this: @Inject public MainFrame(SomeService service, Executor ex, Configuration config) { tabsPane = new JTabbedPane(); // Create the panels for each tab and add them to the tabbedpane somePanel = new SomeTabPanel(service, ex, config); tabsPane.addTab("Panel 1", somePanel); someOtherPanel = new SomeOtherTabPanel(service, ex, config); tabsPane.addTab("Panel 2", someOtherPanel); ... do more stuff } Obviously, this doesn't exactly follow DI best practices. I don't want to have to @Inject the tabs because that would get me a constructor with dozens of parameters. I do want to use Guice to inject the required dependencies into whatever tab objects I need without me having to pass all of those dependencies to the tab constructors. All of the dependencies for the tab objects are services that my Module knows about, so basically all I think I want to do is to ask Guice for the required objects and have them constructed for me.

    Read the article

  • Reading a BMP file for steganography

    - by Shantanu Gupta
    I am trying to read a BMP file in C++ (Turbo C++), but I am not able to print the binary stream. I want to encode a text file into it and decrypt it later. How can I do this? I read that the BMP file header is 54 bytes, but how and where should I append the text file inside the BMP file? I only know Turbo C++, so it would be helpful if any solution or suggestion stays within it. int main() { ifstream fr; //reads ofstream fw; // writes to file char c; int random; clrscr(); char file[2][100]={"s.bmp","s.txt"}; fr.open(file[0],ios::binary);//file name, mode of open, here input mode i.e. read only if(!fr) cout<<"File can not be opened."; fw.open(file[1],ios::app);//file will be appended if(!fw) cout<<"File can not be opened"; while(!fr) cout<<fr.get(); // the error should be here, but I cannot work out what it is fr.close(); fw.close(); getch(); } This code runs fine when I pass a text file in binary mode.

    Read the article

  • Static factory pattern with EJB3/JBoss

    - by purecharger
    I'm fairly new to EJBs and full-blown application servers like JBoss, having written and worked with special-purpose standalone Java applications for most of my career, with limited use of JEE. I'm wondering about the best way to adapt a commonly used design pattern to EJB3 and JBoss: the static factory pattern. In fact this is Item #1 in Joshua Bloch's Effective Java (2nd edition). I'm currently working with the following factory: public class CredentialsProcessorFactory { private static final Log log = LogFactory.getLog(CredentialsProcessorFactory.class); private static Map<CredentialsType, CredentialsProcessor> PROCESSORS = new HashMap<CredentialsType, CredentialsProcessor>(); static { PROCESSORS.put(CredentialsType.CSV, new CSVCredentialsProcessor()); } private CredentialsProcessorFactory() {} public static CredentialsProcessor getProcessor(CredentialsType type) { CredentialsProcessor p = PROCESSORS.get(type); if(p == null) throw new IllegalArgumentException("No CredentialsProcessor registered for type " + type.toString()); return p; } } However, in the implementation classes of CredentialsProcessor I require injected resources such as a PersistenceContext, so I have made the CredentialsProcessor interface a @Local interface and marked each of the implementations with @Stateless. Now I can look them up in JNDI and use the injected resources. But now I have a disconnect, because I am not using the factory anymore. My first thought was to change the getProcessor(CredentialsType) method to do a JNDI lookup and return the SLSB instance that is required, but then I need to configure and pass the proper qualified JNDI name. Before I go down that path, I wanted to do more research on accepted practices. How is this design pattern treated in EJB3 / JEE?

    Read the article

  • Uploadify: Passing a form's ID as a parameter with scriptData

    - by Matt
    I need the ability to have multiple upload inputs on one page (potentially hundreds) using Uploadify. The upload PHP file will be renaming the uploaded file based on the ID of the input button used to submit it, so it will need that ID. Since I will be having hundreds of upload buttons on one page, I wanted to create a single universal instantiation, so I did this using the class of the inputs rather than their individual IDs. However, when one of the inputs is clicked, I would like to pass the ID of that input as scriptData to the PHP. This is not working; PHP says 'formId' is undefined. Is there a good way to get the ID attribute of the form input being used and pass it to the upload PHP? Or is there a completely different and better method of accomplishing this? Thank you in advance! <script type="text/javascript"> $(document).ready(function() { $('.uploady').uploadify({ 'uploader' : '/uploadify/uploadify.swf', 'script' : '/uploadify/uploadify.php', 'cancelImg' : '/uploadify/cancel.png', 'folder' : '/uploadify', 'auto' : true, // LINE IN QUESTION 'scriptData' : {'formId':$(this).attr('id')} }); }); </script> </head> The inputs look like this: <input id="file_upload1" class="uploady" name="file_upload" type="file" /> <input id="file_upload2" class="uploady" name="file_upload" type="file" /> <input id="file_upload3" class="uploady" name="file_upload" type="file" />

    Read the article

  • Selected Item in Dropdown Lists from Enum in ASP.NET MVC

    - by AlexCuse
    Sorry if this is a dup; my searching turned up nothing. I am using the following method to generate drop-down lists for enum types (lifted from here: http://addinit.com/?q=node/54): public static string DropDownList(this HtmlHelper helper, string name, Type type, object selected) { if (!type.IsEnum) throw new ArgumentException("Type is not an enum."); if(selected != null && selected.GetType() != type) throw new ArgumentException("Selected object is not " + type.ToString()); var enums = new List<SelectListItem>(); foreach (int value in Enum.GetValues(type)) { var item = new SelectListItem(); item.Value = value.ToString(); item.Text = Enum.GetName(type, value); if(selected != null) item.Selected = (int)selected == value; enums.Add(item); } return System.Web.Mvc.Html.SelectExtensions.DropDownList(helper, name, enums, "--Select--"); } It is working fine, except for one thing: if I give the dropdown list the same name as the property on my model, the selected value is not set properly. Meaning this works: <%= Html.DropDownList("fam", typeof(EnumFamily), Model.Family)%> But this doesn't: <%= Html.DropDownList("family", typeof(EnumFamily), Model.Family)%> Because I'm trying to pass an entire object directly to the controller method I am posting to, I would really like to have the dropdown list named for the property on the model. When using the "right" name, the dropdown does post correctly; I just can't seem to set the selected value. I don't think this matters, but I am running MVC 1 on Mono 2.6.
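    One workaround, sketched below under an assumption worth verifying against the MVC 1 source: when the element name matches a model property, the built-in helper resolves the selected value itself (from ModelState/ViewData, falling back to the model) and compares its string form against each option's Value. The model's enum renders as its name, while the options carry numeric values, so nothing matches. Emitting the enum names as the option values sidesteps that mismatch.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Web.Mvc;

    public static class EnumDropDownExtensions
    {
        // Sketch of an alternative helper: option values are the enum *names*,
        // so the framework's own selected-value lookup (evaluating the model's
        // Family property to its name) finds a matching option even when the
        // element is named after the model property.
        public static string DropDownList(this HtmlHelper helper, string name, Type type, object selected)
        {
            if (!type.IsEnum)
                throw new ArgumentException("Type is not an enum.");

            var items = new List<SelectListItem>();
            foreach (object value in Enum.GetValues(type))
            {
                string text = Enum.GetName(type, value);
                items.Add(new SelectListItem
                {
                    Value = text,                                   // name, not the underlying int
                    Text = text,
                    Selected = selected != null && selected.Equals(value)
                });
            }
            return System.Web.Mvc.Html.SelectExtensions.DropDownList(helper, name, items, "--Select--");
        }
    }
    ```

    The trade-off is that the form now posts the enum name rather than its number, which the default model binder can still convert when it binds back to the enum property.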

    Read the article

  • RhinoMocks Testing callback method

    - by joblot
    I have a service proxy class that makes asynchronous calls to a service operation. I use a callback method to pass results back to my view model. When functionally testing the view model, I can mock the service proxy to ensure methods are called on it, but how can I ensure that the callback method is called as well? With RhinoMocks I can test that events are handled and even raise events on the mocked object, but how can I test callbacks? ViewModel: public class MyViewModel { public void GetDataAsync() { // Use DI framework to get the object IMyServiceClient myServiceClient = IoC.Resolve<IMyServiceClient>(); myServiceClient.GetData(GetDataAsyncCallback); } private void GetDataAsyncCallback(Entity entity, ServiceError error) { // do something here... } } ServiceProxy: public class MyService : ClientBase, IMyServiceClient { // Constructor public NertiAdminServiceClient(string endpointConfigurationName, string remoteAddress) : base(endpointConfigurationName, remoteAddress) { } // IMyServiceClient member. public void GetData(Action<Entity, ServiceError> callback) { Channel.BeginGetData(EndGetData, callback); } private void EndGetData(IAsyncResult result) { Action<Entity, ServiceError> callback = result.AsyncState as Action<Entity, ServiceError>; ServiceError error; Entity results = Channel.EndGetData(out error, result); if (callback != null) callback(results, error); } } Thanks
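    One way to exercise the callback from a test is to stub GetData and invoke whatever delegate the view model handed in. A sketch, assuming Rhino Mocks 3.5+ AAA syntax and NUnit, and assuming the test can arrange the IoC container to hand back the stub; the registration call and the asserted property are invented:

    ```csharp
    using System;
    using NUnit.Framework;
    using Rhino.Mocks;

    [TestFixture]
    public class MyViewModelTests
    {
        [Test]
        public void GetDataAsync_PassesServiceResultToCallback()
        {
            var client = MockRepository.GenerateStub<IMyServiceClient>();
            var expected = new Entity();

            // Capture the Action<Entity, ServiceError> the view model passes in
            // and invoke it immediately, simulating the async completion.
            client.Stub(c => c.GetData(Arg<Action<Entity, ServiceError>>.Is.Anything))
                  .WhenCalled(invocation =>
                  {
                      var callback = (Action<Entity, ServiceError>)invocation.Arguments[0];
                      callback(expected, null);
                  });

            // Arrange the container (or a constructor seam) so the view model
            // resolves this stub instead of the real proxy.
            IoC.RegisterInstance<IMyServiceClient>(client);   // hypothetical registration call

            var viewModel = new MyViewModel();
            viewModel.GetDataAsync();

            // Assert on whatever GetDataAsyncCallback changes; the property name is made up.
            // Assert.AreSame(expected, viewModel.LoadedEntity);
        }
    }
    ```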

    Read the article

  • Best way for a remote web app to authenticate users in my current web app?

    - by jklp
    A bit of background: I'm working on an existing web application which has a set of users who are able to log in via a traditional login screen with a user name and password. Recently we've managed to score a client (who have their own intranet site) who want their users to log into their intranet, then click a link on it that redirects to our application and logs them in automatically. I've had two suggestions on how to implement this so far: Create a URL which takes 2 parameters ("username" and "password") and have the intranet site pass those parameters to us (our connection is via TLS so it's all encrypted). This would work fine, but it seems a little "hacky", and it also means that the logins and passwords have to be the same on both systems (and that we have to write some kind of web service which can update the passwords for users - which also seems a bit insecure). Provide a token to the intranet, so when the client clicks on a link on the intranet, it sends the token to us along with the user name (and no password), which means they're authenticated. Again, this sounds a bit hacky, as isn't that essentially the same as providing everyone with the same password to log in? So to summarise, I'm after the following things: A way for users who are already authenticated on the intranet to log into our system without too much messing around, and without using an external system to authenticate, i.e. LDAP / Kerberos. Something which isn't too specific to this client and can easily be implemented by other intranets.
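    For what it's worth, the second suggestion stops being "the same password for everyone" if the token is a signed, expiring value rather than a shared constant. A minimal sketch (the class and parameter names, and the shared-secret arrangement, are assumptions, not anything from the original setup):

    ```csharp
    using System;
    using System.Security.Cryptography;
    using System.Text;

    public static class SsoToken
    {
        // The intranet and our application share one secret. The intranet builds
        // the link token; we verify the signature and reject stale timestamps,
        // so a captured link cannot be replayed indefinitely.
        public static string Build(string username, string secret)
        {
            string payload = username + "|" + DateTime.UtcNow.Ticks;
            return payload + "|" + Sign(payload, secret);
        }

        public static bool Validate(string token, string secret, TimeSpan maxAge)
        {
            string[] parts = token.Split('|');
            if (parts.Length != 3) return false;               // username must not contain '|'

            string payload = parts[0] + "|" + parts[1];
            if (Sign(payload, secret) != parts[2]) return false;

            var issued = new DateTime(long.Parse(parts[1]), DateTimeKind.Utc);
            return DateTime.UtcNow - issued <= maxAge;
        }

        private static string Sign(string payload, string secret)
        {
            using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret)))
                return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
        }
    }
    ```

    The intranet appends the username and token to the link; our login endpoint validates them and then issues its own session, so the two user databases never need matching passwords.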

    Read the article

  • VS2008 EF and non-CRUD SP usage

    - by SteveO
    Using an edmx version of EF. My returned data is a join between tables that has a COMPOUND filter on the primary table. In essence this query is going to return a SEGMENT of law codes and descriptions that a user can tie to a sex-offender report. I have a complex SP because Linq2SQL cannot pass in a BETWEEN statement, or at least that is how I understand the error. The code itself is broken up by '-' marks: 39-13-504 "Aggravated Sexual Battery". The user wants to run a query with 4 params: 39, 13, 500, 599 - get all codes from Title 39 and Chapter 13 with parts between 500 and 599. I have the SP in place to do the work; is there a way to consume the SP within the EF? I find many blogs about SPs used for CRUD operations, but that doesn't fit this need at all. I do not have a single table but a join to the "prior selections" table that maps the key for the code. Any pointers on how to get a READ with an SP? TIA
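    One route (a sketch; every name below is invented, and it assumes the SP's result columns can be mapped onto an entity, since EF 1 in VS2008 only supports function imports that return entity types) is to add the stored procedure to the model and expose it through "Add Function Import..." in the .edmx designer, which generates a method on the ObjectContext:

    ```csharp
    using System;
    using System.Linq;

    class LawCodeLookup
    {
        static void Main()
        {
            // OffenderEntities, GetLawCodesInRange and LawCode are all assumed names:
            // the context and method are generated by the designer once the SP is
            // imported, and LawCode is an entity shaped like the SP's result set.
            using (var context = new OffenderEntities())
            {
                var codes = context.GetLawCodesInRange(39, 13, 500, 599).ToList();

                foreach (var code in codes)
                    Console.WriteLine("{0}-{1}-{2}  {3}",
                                      code.Title, code.Chapter, code.Part, code.Description);
            }
        }
    }
    ```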

    Read the article

  • How to force two processes to run on the same CPU?

    - by kovan
    Context: I'm programming a software system that consists of multiple processes. It is programmed in C++ under Linux, and the processes communicate among themselves using Linux shared memory. Usually in software development, performance optimization is done in the final stage, and here I ran into a big problem. The software has high performance requirements, but on machines with 4 or 8 CPU cores (usually with more than one CPU), it was only able to use 3 cores, thus wasting 25% of the CPU power on the former and more than 60% on the latter. After much research, and having discarded mutex and lock contention, I found out that the time was being wasted on shmdt/shmat calls (detach and attach to shared memory segments). After some more research, I found out that these CPUs, which usually are AMD Opteron and Intel Xeon, use a memory system called NUMA, which basically means that each processor has its own fast "local memory", and accessing memory from other CPUs is expensive. After doing some tests, the problem seems to be that the software is designed so that, basically, any process can pass shared memory segments to any other process, and to any thread in them. This seems to kill performance, as processes are constantly accessing memory from other processes. Question: Now, the question is, is there any way to force pairs of processes to execute on the same CPU? I don't mean forcing them to always execute on one particular processor, as I don't care which one they are executed on, although that would do the job. Ideally, there would be a way to tell the kernel: if you schedule this process on one processor, you must also schedule this "brother" process (which is the process with which it communicates through shared memory) on that same processor, so that performance is not penalized.

    Read the article

  • NUnit relative path failing

    - by levi.siebens
    I'm having an issue with NUnit where it cannot find an image file when I run my tests: each time it looks for images, it looks in the NUnit folder instead of the folder where the binary resides. Below is a detailed description of what's happening. I'm building a binary that is under test, which contains the definition of some game elements and the PNG files that define the sprites I'm using (for sanity's sake call it Binary1). NUnit runs tests from a separate binary (Binary1Test), executing test methods against the first binary (Binary1). All tests pass, unless the test executes code in Binary1 which then requires Binary1 to use one of the image files (which are defined via a relative path). When the method is called, NUnit throws a file-not-found exception stating that it cannot find the file and that it is looking inside the Program Files\Nunit.net 2.0 folder. So I have no idea why the code is doing this, and to make matters more confusing, when I pull up Environment.CurrentDirectory it gives me the correct path (the path to my debug folder) and not the path to NUnit. Also, if I use this value instead of the relative path, my tests run without issue. So my question is: does anyone know why, in the case of loading relative paths from within my binary, NUnit decides to use its directory instead of the directory where the binary is located and where the images are stored? Thanks.
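    One fix that usually survives whichever host happens to be running the tests (a sketch; the helper and the sprite-loading call are invented) is to resolve relative resources against the assembly's own location instead of the current directory or the hosting process:

    ```csharp
    using System;
    using System.IO;
    using System.Reflection;

    public static class ResourcePath
    {
        // Turns "Images/tree_clipart_pine_tree.png" into an absolute path anchored
        // at the folder containing the executing assembly. Using CodeBase rather
        // than Location also behaves when the test runner shadow-copies assemblies.
        public static string Resolve(string relativePath)
        {
            string baseDir = Path.GetDirectoryName(
                new Uri(Assembly.GetExecutingAssembly().CodeBase).LocalPath);
            return Path.Combine(baseDir, relativePath);
        }
    }

    // Inside Binary1 (sketch):
    //   var sprite = LoadSprite(ResourcePath.Resolve(@"Images\tree_clipart_pine_tree.png"));
    ```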

    Read the article

  • Trouble understanding SSL certificate chain verification

    - by Josh K
    My app uses SSL to communicate securely with a server, and it's having trouble verifying the certificate chain. The chain looks like this: Entrust.net Secure Server Certification Authority - DigiCert Global CA - *.ourdomain.com We are using a certificate store pulled from Mozilla. It contains the Entrust.net certificate, but not the DigiCert Global CA one. My understanding is that an intermediate authority doesn't have to be trusted as long as the root authority is, but the verification fails: % openssl verify -CAfile mozilla-root-certs.crt ourdomain.com.crt error 20 at 0 depth lookup:unable to get local issuer certificate So do I need to explicitly trust the DigiCert Global CA in order for verification to pass? That seems wrong. But you tell me! EDIT: I now understand that the certificate file needs to be available to OpenSSL up front. Something like this works: % openssl verify -CAfile mozilla-root-certs.crt -untrusted digicert.crt ourdomain.com.crt ourdomain.com.crt: OK This allows me to provide a copy of the DigiCert CA without explicitly saying "I trust it"; the whole chain still needs to be verified. But surely browsers like Firefox won't always ship with a copy of every single certificate they'll ever need. There are always going to be new CAs, and the point is to use the security of the root certificate to make sure all intermediate CAs are valid. Right? So how does this work? Is it really as silly as it looks?

    Read the article

  • how to escape white space in bash loop list

    - by MCS
    I have a bash shell script that loops through all child directories (but not files) of a certain directory. The problem is that some of the directory names contain spaces. Here are the contents of my test directory: $ls -F test Baltimore/ Cherry Hill/ Edison/ New York City/ Philadelphia/ cities.txt And the code that loops through the directories: for f in `find test/* -type d`; do echo $f done Here's the output: test/Baltimore test/Cherry Hill test/Edison test/New York City test/Philadelphia Cherry Hill and New York City are treated as 2 or 3 separate entries. I tried quoting the filenames, like so: for f in `find test/* -type d | sed -e 's/^/\"/' | sed -e 's/$/\"/'`; do echo $f done but to no avail. There's got to be a simple way to do this. Any ideas? The answers below are great. But to make this more complicated - I don't always want to use the directories listed in my test directory. Sometimes I want to pass in the directory names as command-line parameters instead. I took Charles' suggestion of setting the IFS and came up with the following: dirlist="${@}" ( [[ -z "$dirlist" ]] && dirlist=`find test -mindepth 1 -type d` && IFS=$'\n' for d in $dirlist; do echo $d done ) and this works just fine unless there are spaces in the command line arguments (even if those arguments are quoted). For example, calling the script like this: test.sh "Cherry Hill" "New York City" produces the following output: Cherry Hill New York City Again, I know there must be a way to do this - I just don't know what it is...

    Read the article

  • Excel VBA - get the ActiveX checkbox control when the event handler is triggered

    - by danoran
    I have an Excel spreadsheet that is separated into different sections with named ranges. I want to hide a named range when a checkbox is clicked. I can do this for one checkbox, but I would like to have a single function that can hide the appropriate section based on the calling checkbox. I was planning on calling that function from the event handlers for when the checkboxes are clicked, and passing the checkbox as an argument. Is there a way to access the checkbox object that calls the event handler? This works: Sub chkDogsInContest_Click() ActiveSheet.Names("DogsInContest").RefersToRange.EntireRow.Hidden = Not chkMemberData.Value End Sub But this is what I would like to do: Sub chkDogsInContest_Click() Module1.Show_Hide_Section (<calling checkbox>) End Sub These functions are defined in a different module: 'The format for the names of the checkbox controls is 'CHECKBOX_NAME_PREFIX + <name> 'where "name" is also the name of the associated Named Range Public Const CHECKBOX_NAME_PREFIX As String = "chk" 'The format for the names of the checkbox controls is 'CHECKBOX_NAME_PREFIX + <name> 'where "name" is also the name of the associated Named Range Public Function CheckName_To_SectionName(ByRef strCheckName As String) CheckName_To_SectionName = Mid(strCheckName, CHECKBOX_NAME_PREFIX.Length() + 1) End Function Public Sub Show_Hide_Section(ByRef chkBox As CheckBox) ActiveSheet.Names(CheckName_To_SectionName(chkBox.Name())).RefersTo.EntireRow.Hidden = True End Sub

    Read the article

  • Mocking WebResponses from a WebRequest

    - by Rob Cooper
    I have finally started messing around with creating some apps that work with RESTful web interfaces; however, I am concerned that I am hammering their servers every time I hit F5 to run a series of tests. Basically, I need to get a series of web responses so I can test that I am parsing the varying responses correctly. Rather than hit their servers every time, I thought I could do this once, save the XML, and then work locally. However, I don't see how I can "mock" a WebResponse, since (AFAIK) they can only be instantiated by WebRequest.GetResponse. How do you guys go about mocking this sort of thing? Do you? I just really don't like the fact that I am hammering their servers :S I don't want to change the code too much, but I expect there is an elegant way of doing this. Update following accept: Will's answer was the slap in the face I needed; I knew I was missing a fundamental point! Create an interface that will return a proxy object which represents the XML. Implement the interface twice: one implementation that uses WebRequest, the other that returns static "responses". The interface implementation then instantiates the return type either from the live response or from the static XML. You can then pass the required class to the service layer when testing or in production. Once I have the code knocked up, I'll paste some samples. Thanks Will :)
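    A minimal sketch of that arrangement (the type and member names are invented): the parsing code depends only on an interface, one implementation goes over the wire with WebRequest, and the test implementation replays XML that was captured once and saved to disk.

    ```csharp
    using System.IO;
    using System.Net;

    // The parsing layer depends only on this interface, never on WebRequest/WebResponse.
    public interface IRestClient
    {
        string GetXml(string url);
    }

    // Production implementation: a thin wrapper over WebRequest.
    public class HttpRestClient : IRestClient
    {
        public string GetXml(string url)
        {
            var request = WebRequest.Create(url);
            using (var response = request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
                return reader.ReadToEnd();
        }
    }

    // Test implementation: replays XML captured once and saved to disk,
    // so test runs never hit the remote servers.
    public class CannedRestClient : IRestClient
    {
        private readonly string folder;

        public CannedRestClient(string folder)
        {
            this.folder = folder;
        }

        public string GetXml(string url)
        {
            // Naive file-per-URL mapping; good enough for canned test data.
            string fileName = new System.Uri(url).AbsolutePath.Trim('/').Replace('/', '_') + ".xml";
            return File.ReadAllText(Path.Combine(folder, fileName));
        }
    }
    ```

    The service layer takes the interface in its constructor, so tests construct it with the canned implementation and production code passes the live one.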

    Read the article

  • Django inlineformset validation and delete

    - by Andrew Gee
    Can someone tell me whether a form in an inline formset should go through validation if its DELETE field is checked? I have a form that uses an inline formset, and when I check the DELETE box it fails because the required fields are blank. If I put data in the fields it will pass validation and then be deleted. Is that how it is supposed to work? I would have thought that if a form is marked for deletion it would bypass validation. Regards, Andrew. Follow-up - but I would still appreciate some other opinions/help: What I have figured out is that for validation to work, a formset form must either be empty or complete (valid), otherwise it will have errors when it is created and will not be deleted. As I have a couple of hidden fields in my formset forms, and they are pre-populated via JavaScript when the page loads, the form fails validation on the other required fields, which might still be blank. The way I have gotten around this is by adding a check in add_fields that tests whether the DELETE input is True; if it is, it makes all fields on the form not required, which means the form passes validation and will then be deleted. def add_fields(self, form, index): #add other fields that are required.... deleteValue = form.fields['DELETE'].widget.value_from_datadict(form.data, form.files, form.add_prefix('DELETE')) if bool(deleteValue) or deleteValue == '': for name, field in form.fields.items(): form.fields[name].required = False This seems to be an odd way to do things, but I cannot figure out another way. Is there a simpler way that I am missing? I have also noticed that when I add a new form to my page and check the Delete box, no value is passed back in the request; however, an existing form (one loaded from the database) has a value of 'on' when the Delete box is checked. If the box is not checked then the input is not in the request at all. Thanks, Andrew

    Read the article

  • How to clean-up an Entity Framework object context?

    - by Daniel Brückner
    I am adding several entities to an object context. try { foreach (var document in documents) { this.Validate(document); // May throw a ValidationException. this.objectContext.AddToDocuments(document); } this.objectContext.SaveChanges(); } catch { // How to clean-up the object context here? throw; } If some of the documents pass the validation and one fails, all documents that passed the validation remain added to the object context. I have to clean up the object context because it may be reused, and the following can happen. var documentA = new Document { Id = 1, Data = "ValidData" }; var documentB = new Document { Id = 2, Data = "InvalidData" }; var documentC = new Document { Id = 3, Data = "ValidData" }; try { // Adding document B will cause a ValidationException but only // after document A is added to the object context. this.DocumentStore.AddDocuments(new[] { documentA, documentB, documentC }); } catch (ValidationException) { } // Try again without the invalid document B. this.DocumentStore.AddDocuments(new[] { documentA, documentC }); This will again add document A to the object context, and in consequence SaveChanges() will throw an exception because of a duplicate primary key. So I have to remove all already-added documents in the case of a validation error. I could of course perform the validation first and only add the documents after they have all been successfully validated. But sadly this does not solve the whole problem - if SaveChanges() fails, all documents still remain added but unsaved. I tried to detach all objects returned by this.objectContext.ObjectStateManager.GetObjectStateEntries(EntityState.Added), but I am getting an exception stating that the object is not attached. So how do I get rid of all added but unsaved objects?
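    A sketch of one way to roll the context back by hand. Note that GetObjectStateEntries(EntityState.Added) also returns relationship entries, which have no Entity to detach, so they are filtered out here; whether that is the exact source of the "not attached" exception would need checking against the real code.

    ```csharp
    using System.Data;              // EntityState
    using System.Data.Objects;      // ObjectContext, ObjectStateEntry
    using System.Linq;

    public static class ObjectContextCleanup
    {
        // Detaches every entity that was added but not yet saved, leaving the
        // context reusable after a failed validation or a failed SaveChanges.
        public static void DetachAddedEntities(ObjectContext context)
        {
            var added = context.ObjectStateManager
                               .GetObjectStateEntries(EntityState.Added)
                               .Where(e => !e.IsRelationship && e.Entity != null)
                               .Select(e => e.Entity)
                               .ToList();   // snapshot before detaching starts

            foreach (var entity in added)
                context.Detach(entity);
        }
    }
    ```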

    Read the article

  • Possible to create an implicit cast for an anonymous type to a dictionary?

    - by Ralph
    I wrote a method like this: using AttrDict = System.Collections.Generic.Dictionary<string, object>; using IAttrDict = System.Collections.Generic.IEnumerable<System.Collections.Generic.KeyValuePair<string, object>>; static string HtmlTag(string tagName, string content = null, IAttrDict attrs = null) { var sb = new StringBuilder("<"); sb.Append(tagName); if(attrs != null) foreach (var attr in attrs) sb.AppendFormat(" {0}=\"{1}\"", attr.Key, attr.Value.ToString().EscapeQuotes()); if (content != null) sb.AppendFormat(">{0}</{1}>", content, tagName); else sb.Append(" />"); return sb.ToString(); } You can call it like HtmlTag("div", "hello world", new AttrDict{{"class","green"}}); Not too bad. But what if I wanted to allow users to pass an anonymous type in place of the dict? Like HtmlTag("div", "hello world", new {@class="green"}); Even better! I could write the overload easily, but the problem is I'm going to have about 50 functions like this, and I don't want to overload each one of them. I was hoping I could just write an implicit cast to do the work for me... public class AttrDict : Dictionary<string, object> { public static implicit operator AttrDict(object obj) { // conversion from anonymous type to AttrDict here } } But C# simply won't allow it: user-defined conversions to or from a base class are not allowed. So what can I do?
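    Since the language rules out that implicit operator, one workaround (a sketch, not the only option) is to keep the IAttrDict parameter and add a tiny adapter that turns any object's public properties into key/value pairs via reflection, much like ASP.NET's RouteValueDictionary does:

    ```csharp
    using System.Collections.Generic;
    using System.ComponentModel;

    public static class AnonymousAttrs
    {
        // Turns new { @class = "green", id = "x" } into the key/value pairs
        // the existing IAttrDict-based methods already accept.
        public static IEnumerable<KeyValuePair<string, object>> ToAttrs(object anon)
        {
            foreach (PropertyDescriptor prop in TypeDescriptor.GetProperties(anon))
                yield return new KeyValuePair<string, object>(prop.Name, prop.GetValue(anon));
        }
    }

    // Usage (sketch):
    //   HtmlTag("div", "hello world", AnonymousAttrs.ToAttrs(new { @class = "green" }));
    ```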

    Read the article

  • Rewriting URLs in CodeIgniter with url_title()?

    - by Craig Ward
    I am rewriting my website with CodeIgniter and there is something I want to do but am not sure is possible. I have a gallery on my site powered by the Flickr API. Here is an example of the code I use to display the landscape pictures: <?php foreach ($landscapes->photoset->photo as $l->photoset->photo) : ?> <a >photoset->photo->farm ?>/<?php echo $l->photoset->photo->server ?>/<?php echo $l->photoset->photo->id ?>/<?php echo $l->photoset->photo->secret ?>/<?php echo $l->photoset->photo->title ?>'> <img class='f_thumb'>photoset->photo->farm ?>.static.flickr.com/<?php echo $l->photoset->photo->server ?>/<?php echo $l->photoset->photo->id ?>_<?php echo $l->photoset->photo->secret ?>_s.jpg' title='<?php echo $l->photoset->photo->title ?>' alt='<?php echo $l->photoset->photo->title ?>' /></a> <?php endforeach; ?> As you can see, when a user clicks on a picture I pass over the Farm, Server, ID, Secret and Title elements using URI segments and build the page in the controller using $data['farm'] = $this->uri->segment(3); $data['server'] = $this->uri->segment(4); $data['id'] = $this->uri->segment(5); $data['secret'] = $this->uri->segment(6); $data['title'] = $this->uri->segment(7); Everything works fine, but the URLs are a tad long, for example "http://localhost:8888/wip/index.php/gallery/focus/3/2682/4368875046/e8f97f61d9/Old Mill House in Donegal". Is there a way to rewrite the URL so it's more like "http://localhost:8888/wip/index.php/gallery/focus/Old_Mill_House_in_Donegal"? I was looking at using: $url_title = $this->uri->segment(7); $url_title = url_title($url_title, 'underscore', TRUE); But I don't seem to be able to get it to work. Any ideas?

    Read the article

  • Searching for a flexible way to specify the connection string used by ASP.NET membership

    - by bzamfir
    I am developing an ASP.NET web application and I use the ASP.NET membership provider. The application uses the membership schema and all required objects inside the main application database. However, during development I have to switch between various databases for different development and testing scenarios. For this I have an external connection-strings section file and also an external appSettings section, which allow me to switch the database easily by changing a setting only in the appSettings section, without changing the main web.config. My files are as below: <connectionStrings configSource="connections.config"> </connectionStrings> <appSettings file="local.config"> .... ConnectionStrings looks as usual: <connectionStrings> <add name="MyDB1" connectionString="..." ... /> <add name="MyDB2" connectionString="..." ... /> .... </connectionStrings> And local.config as below: <appSettings> <add key="ConnectionString" value="MyDB2" /> My code takes this into account and uses that connection string, but the membership settings in web.config contain the connection string name directly in the setting, like <add name="MembershipProvider" connectionStringName="MyDB2" ...> .... <add name="RoleProvider" connectionStringName="MyDB2" ...> Because of this, I have to edit them too every time I switch to a new database. Is there any way to configure the membership provider to use an appSetting to select the database connection for the membership db? Or to "redirect" it to read the connection setting from somewhere else, or at least to have this in some external file (like local.config)? Maybe there is some easy way to wrap the ASP.NET membership provider in my own provider which just reads the connection string from where I want, passes it to the original membership provider, and then delegates the whole membership functionality to the ASP.NET membership provider. Thanks.
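    One common workaround is a thin wrapper provider, a sketch of which follows (it assumes the "ConnectionString" appSetting from local.config names the connection to use): it swaps connectionStringName before SqlMembershipProvider reads it, so web.config only ever references the wrapper. The same trick applies to SqlRoleProvider for the role provider.

    ```csharp
    using System.Collections.Specialized;
    using System.Configuration;
    using System.Web.Security;

    // Registered in <membership><providers> in place of the stock provider; the
    // connectionStringName attribute in web.config becomes a default that this
    // class overrides from appSettings before the base provider initializes.
    public class SwitchableMembershipProvider : SqlMembershipProvider
    {
        public override void Initialize(string name, NameValueCollection config)
        {
            string selected = ConfigurationManager.AppSettings["ConnectionString"];
            if (!string.IsNullOrEmpty(selected))
                config["connectionStringName"] = selected;   // e.g. "MyDB2"

            base.Initialize(name, config);
        }
    }
    ```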

    Read the article

  • using wget against protected site with NTLM

    - by Joey V.
    Trying to mirror a local intranet site and have found previous questions using 'wget'. It works great with sites that are anonymous, but I have not been able to use it against a site that is expecting username\password (IIS with Integrated Windows Authentication). Here is what I pass in: wget -c --http-user='domain\user' --http-password=pwd http://local/site -dv Here is the debug output (note I replaced some with dummy values obviously): Setting --verbose (verbose) to 1 DEBUG output created by Wget 1.11.4 on Windows-MSVC. --2009-07-14 09:39:04-- http://local/site Host `local' has not issued a general basic challenge. Resolving local... seconds 0.00, x.x.x.x Caching local = x.x.x.x Connecting to local|x.x.x.x|:80... seconds 0.00, connected. Created socket 1896. Releasing 0x003e32b0 (new refcount 1). ---request begin--- GET /site/ HTTP/1.0 User-Agent: Wget/1.11.4 Accept: */* Host: local Connection: Keep-Alive ---request end--- HTTP request sent, awaiting response... ---response begin--- HTTP/1.1 401 Access Denied Server: Microsoft-IIS/5.1 Date: Tue, 14 Jul 2009 13:39:04 GMT WWW-Authenticate: Negotiate WWW-Authenticate: NTLM Content-Length: 4431 Content-Type: text/html ---response end--- 401 Access Denied Closed fd 1896 Unknown authentication scheme. Authorization failed.

    Read the article

  • Did I find a bug in WriteableBitmap when using string literals?

    - by liserdarts
    For performance reasons I'm converting a large list of images into a single image. This code does exactly what I want: Private Function FlattenControl(Control As UIElement) As Image Control.Measure(New Size(1000, 1000)) Control.Arrange(New Rect(0, 0, 1000, 1000)) Dim ImgSource As New Imaging.WriteableBitmap(1000, 1000) ImgSource.Render(Control, New TranslateTransform) ImgSource.Invalidate Dim Img As New Image Img.Source = ImgSource Return Img End Function I can add all the images to a canvas, pass the canvas to this function, and get back one image. My code to load all the images looks like this: Public Function BuildTextures(Layer As GLEED2D.Layer) As FrameworkElement Dim Container As New Canvas For Each Item In Layer.Items If TypeOf Item Is GLEED2D.TextureItem Then Dim Texture = CType(Item, GLEED2D.TextureItem) Dim Url As New Uri(Texture.texture_filename, UriKind.Relative) Dim Img As New Image Img.Source = New Imaging.BitmapImage(Url) Container.Children.Add(Img) End If Next Return FlattenControl(Container) End Function The GLEED2D.Layer and GLEED2D.TextureItem classes are from the free level editor GLEED2D (http://www.gleed2d.de/). The texture_filename on every TextureItem is "Images/tree_clipart_pine_tree.png". This works just fine, but it's just a proof of concept. What I really need to do (among other things) is have the path to the image hard coded. If I replace Texture.texture_filename in the code above with the string literal "Images/tree_clipart_pine_tree.png", the images do not appear in the final merged image. I can add a breakpoint and see that the WriteableBitmap has all of its pixels as 0 after the call to Invalidate. I have no idea how this could cause any sort of difference, but it gets stranger: if I remove the call to FlattenControl and just return the Canvas instead, the images are visible. It's only when I use the string literal with the WriteableBitmap that the images do not appear. I promise you that the value in the texture_filename property is exactly "Images/tree_clipart_pine_tree.png". I'm using Silverlight 3 and I've also reproduced this in Silverlight 4. Any ideas?

    Read the article

  • Command-Line Parsing API from the TestAPI library - type-safe commands how-to

    - by MicMit
    Library at http://testapi.codeplex.com/ Excerpt of usage from http://blogs.msdn.com/ivo_manolov/archive/2008/12/17/9230331.aspx A third common approach is forming strongly-typed commands from the command-line parameters. This is common for cases when the command line looks as follows: some-exe COMMAND parameters-to-the-command The parsing in this case is a little bit more involved: Create one class for every supported command, which derives from the Command abstract base class and implements an expected Execute method. Pass an expected command along with the command-line arguments to CommandLineParser.ParseCommand - the method will return a strongly-typed Command instance that can be Execute()-d. // EXAMPLE #3: // Sample for parsing the following command-line: // Test.exe run /runId=10 /verbose // In this particular case we have an actual command on the command-line ("run"), which we want to effectively de-serialize and execute. public class RunCommand : Command { bool? Verbose { get; set; } int? RunId { get; set; } public override void Execute() { // Implement your "run" execution logic here. } } Command c = new RunCommand(); CommandLineParser.ParseArguments(c, args); c.Execute(); ============================ What I don't get is: if we instantiate the specific class before parsing the arguments, what is the point of the very first command-line argument, "run"? I thought the idea was to instantiate and execute a command class based on a command-line parameter (the "run" parameter becomes an instance of the RunCommand class, "walk" becomes a WalkCommand class, and so on). Can this be done with the latest version?
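    The dispatch the question is asking about can be layered on top of the call shown in the excerpt. A sketch (the registry and the argument slicing are assumptions; only CommandLineParser.ParseArguments, Command and RunCommand come from the excerpt, and WalkCommand is hypothetical):

    ```csharp
    using System;
    using System.Collections.Generic;

    // Maps the first command-line token to a Command type, instantiates it,
    // then lets the library bind the remaining /name=value arguments.
    static class CommandDispatcher
    {
        private static readonly Dictionary<string, Func<Command>> Registry =
            new Dictionary<string, Func<Command>>(StringComparer.OrdinalIgnoreCase)
            {
                { "run",  () => new RunCommand()  },
                { "walk", () => new WalkCommand() },   // hypothetical command
            };

        public static void Dispatch(string[] args)
        {
            if (args.Length == 0 || !Registry.ContainsKey(args[0]))
                throw new ArgumentException("Unknown command: " + (args.Length > 0 ? args[0] : "<none>"));

            Command command = Registry[args[0]]();

            // Hand the remaining tokens ("/runId=10 /verbose") to the parser,
            // exactly as in the excerpt above.
            string[] rest = new string[args.Length - 1];
            Array.Copy(args, 1, rest, 0, rest.Length);
            CommandLineParser.ParseArguments(command, rest);

            command.Execute();
        }
    }
    ```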

    Read the article

  • Problem with JSONResult

    - by vikitor
    I'm still new to this, so I'll try to explain what I'm doing. Basically what I want is to load a dropdown list depending on the value of a previous one, and I want it to load the data and appear when the other one is changed. This is the code I've written in my controller: public ActionResult GetClassesSearch(bool ajax, string phylumID, string kingdom){ IList<TaxClass> lists = null; int _phylumID = int.Parse(phylumID); int _kingdom = int.Parse(kingdom); lists = _taxon.getClassByPhylumSearch(_phylumID, _kingdom); return Json(lists.count); } and this is how I call the method from the javascript function: function loadClasses(_phylum) { var phylum = _phylum.value; $.getJSON("/Suspension/GetClassesSearch/", { ajax: true, phylumID: phylum, kingdom: kingdom }, function(data) { alert(data); alert('no fallo') document.getElementById("pClass").style.display = "block"; document.getElementById("sClass").options[0] = new Option("-select-", "0", true, true); //for (i = 0; i < data.length; i++) { // $('#sClass').addOption(data[i].classID, data[i].className); //} }); } The thing is that it works just like this: I pass the function the number of classes within the selected phylum and it displays the pClass element. The problem comes when I try to populate the select list with the data itself (the objects retrieved from the database): when records come back and I change return Json(lists.count) to return Json(lists), I keep getting the same error: A circular reference was detected while serializing an object of type 'SubSonic.Schema.DatabaseColumn'. I've been going round and round debugging and making tests but I can't make it work; it is supposed to be a simple thing, but I'm missing something. I have commented out the for loop because I'm not quite sure that's the way to access the data, as I've not been able to make it work when it finds records. Can anyone help me? Thanks in advance, Victor
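    One way around that serializer error is to project the entities into plain objects before handing them to Json(), so the serializer never walks SubSonic's schema metadata. A sketch; the ClassID/ClassName property casing on TaxClass is assumed, with the JSON member names taken from the commented-out JavaScript loop:

    ```csharp
    using System.Linq;
    using System.Web.Mvc;

    // Sketch of the action reshaped to return plain data (a drop-in for the method above).
    public ActionResult GetClassesSearch(bool ajax, string phylumID, string kingdom)
    {
        int _phylumID = int.Parse(phylumID);
        int _kingdom = int.Parse(kingdom);
        var lists = _taxon.getClassByPhylumSearch(_phylumID, _kingdom);

        // Anonymous objects carry no SubSonic column metadata, so the
        // circular-reference check never trips.
        var payload = lists.Select(c => new { classID = c.ClassID, className = c.ClassName })
                           .ToList();

        return Json(payload);
    }
    ```

    With that shape coming back, the commented-out loop can read data[i].classID and data[i].className directly.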

    Read the article

  • Getting a GridView column value as a parameter for a JavaScript function

    - by newName
    I have a GridView with a certain number of columns, and the ID column I want to get the value from is currently set to visible = false. The other column is a TemplateField column (LinkButton); as the user clicks on the button, it should grab the value from the ID column and pass it to one of my JavaScript functions. WebForm: <script language=javascript> function openContent(contentID) { window.open('myContentPage.aspx?contentID=' + contentID , 'View Content','left=300,top=300,toolbar=no,scrollbars=yes,width=1000,height=500'); return false; } </script> <asp:GridView ID="gvCourse" runat="server" AutoGenerateColumns="False" OnRowCommand="gvCourse_RowCommand" OnRowDataBound="gvCourse_RowDataBound" BorderStyle="None" GridLines="None"> <RowStyle BorderStyle="None" /> <Columns> <asp:TemplateField> <ItemTemplate> <asp:LinkButton ID="lnkContent" runat="server" CommandName="View" CommandArgument='<%#Eval("contentID") %>' Text='<%#Eval("contentName") %>'> </asp:LinkButton> </ItemTemplate> </asp:TemplateField> <asp:BoundField DataField="contentID" HeaderText="contentID" ReadOnly="True" SortExpression="contentID" Visible="False" /> <asp:BoundField DataField="ContentName" HeaderText="ContentName" ReadOnly="True" SortExpression="ContentName" Visible="False" /> </Columns> <AlternatingRowStyle BorderStyle="None" /> Code behind: protected void gvCourse_RowCommand(object sender, GridViewCommandEventArgs e) { if (e.CommandName == "View") { intID = Convert.ToInt32(e.CommandArgument); } } protected void gvCourse_RowDataBound(object sender, System.Web.UI.WebControls.GridViewRowEventArgs e) { if (e.Row.RowType == DataControlRowType.DataRow) { ((LinkButton)e.Row.FindControl("lnkContent")).Attributes.Add("onclick", "return openContent('" + intID + "');"); } } Right now I'm trying to get the intID based on the user's selected item, so when the user clicks on the LinkButton it will open a popup window using JavaScript and the ID will be used as a query string.
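    A sketch of one way to wire the popup without relying on intID (which RowCommand only sets during a later postback, after the onclick markup has already been rendered): read each row's ID from its own DataItem while the grid is binding. It assumes the data source exposes the contentID field, as the bound column suggests.

    ```csharp
    // In the page's code-behind, replacing the body of the existing RowDataBound handler (sketch).
    protected void gvCourse_RowDataBound(object sender, GridViewRowEventArgs e)
    {
        if (e.Row.RowType == DataControlRowType.DataRow)
        {
            // The ID for *this* row, straight from the object being bound to it.
            string contentId = DataBinder.Eval(e.Row.DataItem, "contentID").ToString();

            var lnkContent = (LinkButton)e.Row.FindControl("lnkContent");
            // "return openContent(...)" returns false, which cancels the anchor's
            // default action, so only the popup opens and no postback fires.
            lnkContent.Attributes.Add("onclick", "return openContent('" + contentId + "');");
        }
    }
    ```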

    Read the article

  • Handling Mach exceptions in a 64-bit OS X application

    - by Brad S
    I have been able to register my own Mach port to capture Mach exceptions in my applications, and it works beautifully when I target 32-bit. However, when I target 64-bit, my exception handler catch_exception_raise() gets called, but the array of exception codes that is passed to the handler is 32 bits wide. This is expected in a 32-bit build but not in a 64-bit one. In the case where I catch EXC_BAD_ACCESS, the first code is the error number and the second code should be the address of the fault. Since the second code is 32 bits wide, the high 32 bits of the 64-bit fault address are truncated. I found a flag in <mach/exception_types.h> called MACH_EXCEPTION_CODES that I can pass to task_set_exception_ports(), which, from looking at the Darwin sources, appears to control the size of the codes passed to the handler. It looks like it is meant to be ORed with the behavior passed in to task_set_exception_ports(). However, when I do this and trigger an exception, my Mach port gets notified and I call exc_server(), but my handler never gets called, and when the reply message is sent back to the kernel I get the default exception behavior. I am targeting the 10.6 SDK. I really wish Apple would document this stuff better. Anyone have any ideas?

    Read the article
