Search Results

Search found 45466 results on 1819 pages for 'config files'.


  • Parse JSON into a ListView friendly output

    - by Thomas McDonald
    So I have this JSON, which my activity then retrieves into a string:

        {"popular": {
            "authors_last_month": [
                { "url":"http://activeden.net/user/OXYLUS", "item":"OXYLUS", "sales":"1148", "image":"http://s3.envato.com/files/15599.jpg" },
                { "url":"http://activeden.net/user/digitalscience", "item":"digitalscience", "sales":"681", "image":"http://s3.envato.com/files/232005.jpg" },
                { ... }
            ],
            "items_last_week": [
                { "cost":"4.00", "thumbnail":"http://s3.envato.com/files/227943.jpg", "url":"http://activeden.net/item/christmas-decoration-balls/75682", "sales":"43", "item":"Christmas Decoration Balls", "rating":"3", "id":"75682" },
                { "cost":"30.00", "thumbnail":"http://s3.envato.com/files/226221.jpg", "url":"http://activeden.net/item/xml-flip-book-as3/63869", "sales":"27", "item":"XML Flip Book / AS3", "rating":"5", "id":"63869" },
                { ... }
            ],
            "items_last_three_months": [
                { "cost":"5.00", "thumbnail":"http://s3.envato.com/files/195638.jpg", "url":"http://activeden.net/item/image-logo-shiner-effect/55085", "sales":"641", "item":"image logo shiner effect", "rating":"5", "id":"55085" },
                { "cost":"15.00", "thumbnail":"http://s3.envato.com/files/180749.png", "url":"http://activeden.net/item/banner-rotator-with-auto-delay-time/22243", "sales":"533", "item":"BANNER ROTATOR with Auto Delay Time", "rating":"5", "id":"22243" },
                { ... }
            ]
        }}

    Because it's quite a long string, I've trimmed the above down to what is needed. Basically, I want to access the items from "items_last_week" and create a list of them. Originally my plan was to have the 'thumbnail' on the left with the 'item' next to it, but from playing around with the SDK today that appears too difficult or impossible, so I would be more than happy with just having the 'item' data from 'items_last_week' in the list. Coming from PHP, I'm struggling to use any of the JSON libraries available to Java: deserializing (I think that's the right word) appears to take much more than a line of code, and they all seem to require some additional class, apart from the JSONArray/JSONObject code I have, which doesn't like the fact that items_last_week is nested (again, I think that's the JSON terminology) and takes an awfully long time to run on the Android emulator. So, in effect, I need a (preferably simple) way to pass the items_last_week data to a ListView. I understand I will need a custom adapter, which I can probably get my head around, but I cannot understand, no matter how much of the day I've just spent trying to figure it out, how to access certain parts of a JSON string.
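
    A minimal sketch of one way to reach the nested array with Android's bundled org.json classes and hand the 'item' names to a ListView via an ArrayAdapter; the variable names and the R.id.list view are assumptions, not from the post:

        // sketch: `json` holds the string above; `this` is the Activity
        try {
            JSONObject root = new JSONObject(json);
            JSONArray week = root.getJSONObject("popular")
                                 .getJSONArray("items_last_week");

            List<String> names = new ArrayList<String>();
            for (int i = 0; i < week.length(); i++) {
                JSONObject entry = week.getJSONObject(i);
                names.add(entry.getString("item")); // e.g. "Christmas Decoration Balls"
            }

            ListView list = (ListView) findViewById(R.id.list); // hypothetical id
            list.setAdapter(new ArrayAdapter<String>(this,
                    android.R.layout.simple_list_item_1, names));
        } catch (JSONException e) {
            Log.e("JsonDemo", "malformed feed", e);
        }

    Imports assumed: org.json.*, android.widget.*, android.util.Log, java.util.*.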

    Read the article

  • Debugging Actionmailer

    - by Trip
    I have ActionMailer set up. Emails are not being sent, and there are no errors. Where can I start my search to debug this?

        class Notifier < ActionMailer::Base
          default_url_options[:host] = APP_DOMAIN

          def email_blast(user, subject, message)
            subject    subject
            from       NOTIFIER_EMAIL
            recipients user.email
            sent_on    Time.zone.now
            body       :user => user.first_name + ' ' + user.last_name, :message => message
          end
        end

    I do get a line in my log saying the email was sent; just no actual email goes through. The reason this stopped working is that I switched from a cluster to a solo box and some server settings were overwritten. I suspect that is probably why it's broken. Does anyone know what specific server settings I would have to look at?

    UPDATE: I found this in my production.rb. This code was there originally, when it worked:

        ActionMailer::Base.delivery_method = :sendmail
        config.action_mailer.default_url_options = { :host => "75.101.153.93" }

    Again, I believe something must be missing on my server. I did a `which sendmail` and it returned /usr/bin/sendmail, so I added this:

        config.action_mailer.raise_delivery_errors = false
        config.action_mailer.perform_deliveries = true
        config.action_mailer.sendmail_settings = {
          :location  => '/usr/bin/sendmail',
          :arguments => '-i -t'
        }

    Redeployed, restarted the server, and tested it. No emails were sent. The production.log still says something was sent:

        Processing MediaController#create_a_video (for 173.161.167.41 at 2010-06-03 11:58:13) [GET]
          Parameters: {"action"=>"create_a_video", "controller"=>"media", "organization_id"=>"470", "_"=>"1275591493194"}
        Sent mail to [email protected]
        Rendering media/create_a_video
        Completed in 128ms (View: 51, DB: 1) | 200 OK [http://invent.hqchannel.com/organizations/470/media/create_a_video?_=1275591493194]
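
    One debugging sketch, assuming the settings above live in config/environments/production.rb: with raise_delivery_errors set to false, any sendmail failure is silently swallowed, so flipping it to true should surface the underlying exception in the log:

        # sketch: make delivery failures visible instead of swallowing them
        config.action_mailer.raise_delivery_errors = true
        config.action_mailer.perform_deliveries = true
        config.action_mailer.delivery_method = :sendmail
        config.action_mailer.sendmail_settings = {
          :location  => '/usr/bin/sendmail',
          :arguments => '-i -t'
        }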

    Read the article

  • How did this happen?? Git error? Some other fluke?

    - by marfarma
    Every file in this Rails project is duplicated with a -e and again with a -e-e tacked onto the end of it, like the following. It's that way in my GitHub repository too. But I can't figure out how it happened. Any clue? Google searching comes up empty.

        -rw-r--r--@ 1 usrname staff  959 Jan 7 02:13 Gemfile
        -rw-r--r--  1 usrname staff  958 Jan 5 01:10 Gemfile-e
        -rw-r--r--  1 usrname staff  958 Jan 5 01:09 Gemfile-e-e
        -rw-r--r--  1 usrname staff 6650 Jan 7 02:13 Gemfile.lock
        -rw-r--r--  1 usrname staff 6650 Jan 5 01:10 Gemfile.lock-e
        -rw-r--r--  1 usrname staff 6650 Jan 5 01:09 Gemfile.lock-e-e
        lrwxr-xr-x  1 usrname staff   18 Jan 5 00:37 README.rdoc -> doc/README_FOR_APP
        -rw-r--r--  1 usrname staff  283 Jan 5 01:10 Rakefile
        -rw-r--r--  1 usrname staff  283 Jan 5 01:10 Rakefile-e
        -rw-r--r--  1 usrname staff  283 Jan 5 01:09 Rakefile-e-e
        drwxr-xr-x  6 usrname staff  204 Jan 5 00:37 app
        drwxr-xr-x  5 usrname staff  170 Jan 5 01:10 autotest
        drwxr-xr-x 28 usrname staff  952 Jan 5 01:15 config
        -rw-r--r--  1 usrname staff  173 Jan 5 01:10 config.ru
        -rw-r--r--  1 usrname staff  173 Jan 5 01:10 config.ru-e
        -rw-r--r--  1 usrname staff  173 Jan 5 01:09 config.ru-e-e
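
    If the immediate goal is just to clean the copies out, a hedged shell sketch (review the first listing before deleting anything; '*-e' also matches the '-e-e' files):

        # list the suspect files first
        find . -path ./.git -prune -o -type f -name '*-e' -print

        # then remove them from the working tree and the index
        find . -path ./.git -prune -o -type f -name '*-e' -print0 | xargs -0 git rm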

    Read the article

  • Unity Configuration and Same Assembly

    - by tyndall
    I'm currently getting an error trying to resolve my IDataAccess class:

        The value of the property 'type' cannot be parsed. The error is: Could not load file or assembly 'TestProject' or one of its dependencies. The system cannot find the file specified. (C:\Source\TestIoC\src\TestIoC\TestProject\bin\Debug\TestProject.vshost.exe.config line 14)

    This is inside a WPF Application project. What is the correct syntax to refer to the assembly you are currently in? Is there a way to do this? I know that in a larger solution I would be pulling types from separate assemblies, so this might not be an issue, but what is the right way to do this for a small, self-contained test project? Note: I'm only interested in doing the XML config at this time, not the C# (in-code) config.

    UPDATE: see all comments. My XML config:

        <configuration>
          <configSections>
            <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" />
          </configSections>
          <unity>
            <typeAliases>
              <!-- Lifetime manager types -->
              <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity" />
              <typeAlias alias="external" type="Microsoft.Practices.Unity.ExternallyControlledLifetimeManager, Microsoft.Practices.Unity" />
              <typeAlias alias="IDataAccess" type="TestProject.IDataAccess, TestProject" />
              <typeAlias alias="DataAccess" type="TestProject.DataAccess, TestProject" />
            </typeAliases>
            <containers>
              <container name="Services">
                <types>
                  <type type="IDataAccess" mapTo="DataAccess" />
                </types>
              </container>
            </containers>
          </unity>
        </configuration>

    Read the article

  • Removing related elements using XSLT 1.0

    - by pmdarrow
    I'm attempting to remove Component elements from the XML below that have File children with the extension "config". I've managed to do this part, but I also need to remove the matching ComponentRef elements that have the same "Id" values as these Components.

        <Fragment>
          <DirectoryRef Id="MyWebsite">
            <Component Id="Comp1">
              <File Source="Web.config" />
            </Component>
            <Component Id="Comp2">
              <File Source="Default.aspx" />
            </Component>
          </DirectoryRef>
        </Fragment>
        <Fragment>
          <ComponentGroup Id="MyWebsite">
            <ComponentRef Id="Comp1" />
            <ComponentRef Id="Comp2" />
          </ComponentGroup>
        </Fragment>

    Based on other SO answers, I've come up with the following XSLT to remove these Component elements:

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="xml" indent="yes" />
          <xsl:template match="Component[File[substring(@Source, string-length(@Source) - string-length('config') + 1) = 'config']]" />
          <xsl:template match="@*|node()">
            <xsl:copy>
              <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
          </xsl:template>
        </xsl:stylesheet>

    Unfortunately, this doesn't remove the matching ComponentRef elements (i.e. those that have the same "Id" values). The XSLT will remove the Component with the Id "Comp1" but not the ComponentRef with Id "Comp1". How do I achieve this using XSLT 1.0?
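
    One hedged sketch of a fix: declare an XSLT 1.0 key over the Components, then give ComponentRef an empty template whenever its Id resolves to a Component carrying such a File child (the key name is made up here):

        <xsl:key name="comp-by-id" match="Component" use="@Id" />

        <!-- drop ComponentRefs whose Id points at a Component with a .config File -->
        <xsl:template match="ComponentRef[key('comp-by-id', @Id)[File[
            substring(@Source, string-length(@Source) - string-length('config') + 1) = 'config']]]" />

    Added next to the existing empty Component template, the identity template then copies everything else through untouched.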

    Read the article

  • asp.net 4.0 webforms - how to keep ContentPlaceHolder1_ out of client id's in a simple way?

    - by James Manning
    I'm attempting to introduce master pages to an existing WebForms site that's avoided using them because of client id mangling in the past (and me not wanting to deal with the mangling and doing <%= foo.ClientID %> everywhere :)

    GOAL: use 'static' id values (whatever is in the server control's id attribute) except for data-bound / repeating controls, which would break in those cases and therefore need suffixes or whatever to differentiate (basically, Predictable).

    Now that the site has migrated to ASP.NET 4.0, I first attempted to use a ClientIDMode of Static (in the web.config), but that broke too many places with repeating controls (checkboxes inside gridviews, for instance), since they all resulted in the same id. So I then tried Predictable (again, just in the web.config) so that the repeating controls wouldn't have conflicting id's, and it works well except that the master page content placeholder (which is indeed a naming container) still shows up in the resulting client id's (for instance, ContentPlaceHolder1_someCheckbox).

    Certainly I could leave the web.config setting as Static and then go through all the databound/repeating controls switching them to Predictable, but I'm hoping there's some easier/simpler way to get that effect without having to scatter ClientIDMode attributes in those N number of places (or extend all those databound controls with my own usercontrol that just sets ClientIDMode, or whatever). I even thought of leaving web.config set to Static and having a master or base page handler (PreInit? not sure whether that would work) walk Controls.OfType<INamingContainer>() (there might be a better type to check, but that seems like a good starting choice looking at Repeater and GridView) and set those to Predictable, so I'd get Static for all my 'normal' things outside of repeating controls but not have to deal with Static inside things like gridview/repeater/etc. I don't see any way to mark the content placeholder such that it 'opts out' of being included in child id's - setting the ID of the placeholder to empty/blank doesn't work, as it's a required attribute :)

    At that point I figured there was a better/simpler way that I was missing and decided to ask on SO :)

    Edit: I thought about changing all my 'fetch by id' jQuery calls from $('#foo') to fetch_by_id('foo') and having that function return the 'right one' by checking $('#foo').length and then $('#ContentPlaceHolder1_foo').length (and maybe other patterns), or even just having it return $('#foo, #ContentPlaceHolder1_foo') (again, potentially other patterns), but changing all the places I fetch elements by id seemed pretty ugly too, and I'd like to avoid that abstraction layer if possible :)
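
    A sketch of the base-page idea from the question: leave web.config on Static and walk the control tree once, flipping only the repeating/databound controls back to Predictable. The event choice and the type checks are assumptions on my part, not a tested recipe:

        // hypothetical base page; web.config keeps <pages clientIDMode="Static">
        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public class StaticIdPage : Page
        {
            protected override void OnInitComplete(EventArgs e)
            {
                base.OnInitComplete(e);
                MakeRepeatersPredictable(this);
            }

            private static void MakeRepeatersPredictable(Control root)
            {
                foreach (Control child in root.Controls)
                {
                    // GridView, ListView etc. derive from DataBoundControl;
                    // Repeater does not, so it gets its own check
                    if (child is DataBoundControl || child is Repeater)
                        child.ClientIDMode = ClientIDMode.Predictable;

                    MakeRepeatersPredictable(child);
                }
            }
        }

    Children created later by databinding inherit the mode from their container, which is why setting it on the databound control itself should be enough.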

    Read the article

  • How to automatically split git commits to separate changes to a single file

    - by Hercynium
    I'm just plain stuck as to how to accomplish this, or whether it's even possible. Even if it can be done, I wonder if it could be setting us up for a messed-up, unmanageable repository.

    I have set up two branches of the code-base. One is "master" and the other is "prod". The HEAD of prod is always the latest code in production, and master is the main development branch. Here's the problem, though: we're converting from CVS here at $work and most of the developers are still getting used to git. Their CVS workflow involved tagging versions of individual files for production, then updating the servers using the tag. Unfortunately, this has led to sloppy practices like committing unrelated changes together and then tagging the files after the fact... and the devs want to know how they can do the following:

    In their local repos, they hack and commit to their hearts' delight, then at the end of the day run a command that takes a list of files whose commits over the day get merged into their local prod - and only those files - even if those commits combine changes to other files.

    I know how to split commits with git rebase --interactive, but I have no clue how I would automate splitting commits at all, never mind the way I want to. I do realize the simplest thing would be to just tell them to switch to their prod branches, check the files out from their master branches into the working tree, then commit to prod. My problem with that is losing the history of their commits over the day.
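
    For reference, a sketch of that simplest approach, promoting given files from master onto prod as one new commit (it does not, as noted, preserve the day's individual commit history):

        git checkout prod
        git checkout master -- path/to/file1 path/to/file2  # copy those files from master into index and worktree
        git commit -m "promote file1 and file2 from master"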

    Read the article

  • Is there a reason to use the XML::LibXML::Number-object in my XML::LibXML-example?

    - by sid_com
    In this example I get '96' two times. Is there a possible case where I would need an XML::LibXML::Number object to achieve the goal?

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.012;
        use XML::LibXML;

        my $xml_string = <<EOF;
        <?xml version="1.0" encoding="UTF-8"?>
        <filesystem>
          <path>
            <dirname>/var</dirname>
            <files>
              <action>delete</action>
              <age units="hours">10</age>
            </files>
            <files>
              <action>delete</action>
              <age units="hours">96</age>
            </files>
          </path>
        </filesystem>
        EOF

        my $doc  = XML::LibXML->load_xml( string => $xml_string );
        my $root = $doc->documentElement;

        my $result = $root->find( '//files/age[@units="hours"]' );
        $result = $result->get_node( 1 );
        say ref $result;          # XML::LibXML::Element
        say $result->textContent; # 96

        $result = $root->find( 'number( //files/age[@units="hours"] )' );
        say ref $result; # XML::LibXML::Number
        say $result;     # 96
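
    One case where the Number object is the only option (a sketch, not from the post): when the XPath expression computes a value that exists in no node, such as a sum, find() has nothing but an XML::LibXML::Number to hand back:

        # sketch: sum() yields a computed value, so no Element can be returned
        my $total = $root->find( 'sum( //files/age[@units="hours"] )' );
        say ref $total;    # XML::LibXML::Number
        say $total->value; # 106 (10 + 96)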

    Read the article

  • swfupload session problem destroy session

    - by saquib
    Hello friends, I have a problem with SWFUpload. I am passing session_id() like this: /upload-file.php?s=189477fcfa1ec7f630e70a09e1e84cae, but it's not maintaining the session; it destroys my current session (logging me out). Here is the code in upload-file.php:

        <?php
        if (isset($_GET['s'])) {
            session_id($_GET['s']);
            session_start();

            require_once 'admin/class/user.php';
            $u = new User();

            // Check for user logged in
            if ($u->islogged() == FALSE) {
                header("location: index.php");
                exit();
            }
            // code continues ...
        }

    Because I am not logged in, the server redirects me to index.php. This is the SWFUpload debugger window output:

        SWF DEBUG: ----- END SWF DEBUG OUTPUT ----
        SWF DEBUG:
        SWF DEBUG: Event: fileDialogStart : Browsing files. Multi Select. Allowed file types: *.jpg
        SWF DEBUG: Select Handler: Received the files selected from the dialog. Processing the file list...
        SWF DEBUG: Event: fileQueued : File ID: SWFUpload_0_0
        SWF DEBUG: Event: fileDialogComplete : Finished processing selected files. Files selected: 1. Files Queued: 1
        SWF DEBUG: StartUpload: First file in queue
        SWF DEBUG: Event: uploadStart : File ID: SWFUpload_0_0
        SWF DEBUG: ReturnUploadStart(): File accepted by startUpload event and readied for upload. Starting upload to /upload-file.php?s='189477fcfa1ec7f630e70a09e1e84cae' for File ID: SWFUpload_0_0
        SWF DEBUG: Event: uploadProgress (OPEN): File ID: SWFUpload_0_0
        SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_0_0. Bytes: 317793. Total: 317793
        SWF DEBUG: Event: uploadError: HTTP ERROR : File ID: SWFUpload_0_0. HTTP Status: 302.
        SWF DEBUG: Event: uploadComplete : Upload cycle complete.
        SWF DEBUG: StartUpload: First file in queue
        SWF DEBUG: StartUpload(): No files found in the queue.
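
    Worth noticing in the debug output: the upload URL carries the id wrapped in literal quotes (s='1894...'), so session_id() would receive a different value than the cookie-based session. A hedged sketch that strips stray quotes and validates the id before restoring the session (the validation pattern is an assumption; adapt it to your session id format):

        <?php
        // sketch: sanitize the session id passed by the Flash uploader
        if (isset($_GET['s'])) {
            $sid = trim($_GET['s'], "'\"");  // drop accidental surrounding quotes
            if (preg_match('/^[A-Za-z0-9,-]{22,40}$/', $sid)) {
                session_id($sid);
            }
        }
        session_start();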

    Read the article

  • Docs for auto-generated methods in Ruby on Rails

    - by macek
    Rails has all sorts of auto-generated methods that I've often struggled to find documentation for. For example, in routes.rb, if I have:

        map.resources :projects do |p|
          p.resources :tasks
        end

    this will generate a plethora of path and url helpers. Where can I find documentation for how to work with these paths? I generally understand how to work with them, but more explicit docs might help me understand some of the magic that happens behind the scenes.

        # compare
        project_path(@project)
        project_task_path(@project, @task)
        # to
        project_path(:id => @project.id)
        project_task_path(:project_id => @project.id, :id => @task.id)

    Also, when I change an attribute on a model, @post.foo_changed? will be true. Where can I find documentation for this and all the other magical methods that are created like this? If the magic is there, I'd love to take advantage of it.

    And finally: is there a complete resource for config.___ statements in environment.rb? I was able to find docs for Configuration#gem, but what attributes can I set within stubs like config.active_record.___, config.action_mailer.___, config.action_controller.___, etc.? Again, I'm looking for a complete resource here, not just settings for the examples I provided.

    Even if you can only answer one of these questions, please chime in. These things seem to have been hiding from me, and it's my goal to get them some more exposure, so I'll be upvoting all links to docs that point me to what I'm looking for. Thanks!

    ps, If they're not called auto-generated methods, I apologize. Someone can teach me a lesson here, too :)

    Edit: I'm not looking for tutorials here, folks. I have a fair amount of experience with Rails; I'm just looking for complete docs. E.g., I understand how routing works, I just want docs where I can read about all of the usage options.

    Read the article

  • AnkhSVN: Cannot checkout Subsolution due to existing "versioned" folder

    - by lostiniceland
    Hello everyone. I have been using Subversion for quite some time for Java development, and I have set up a repository on my local NAS. Since I have an MSDN subscription via my company, I recently installed Visual Studio 2010 to do a small project with .NET. Following some "best practices", my project folder looks like this:

        MySolution
            main.sln
            Services
                services.sln
                Service A            (files)
                Service A Test       (files)
            View
                (project files)
            Persistence
                persistence.sln
                PersistenceXml       (files)
                PersistenceXml Test  (files)
                PersistenceDB        (files)
                PersistenceDB Test   (files)

    The idea is that main.sln only contains the projects for the application, meaning no test projects. The sub-solutions contain the project(s) and their corresponding test projects. The repository's trunk has this same structure, and I was able to put all the projects under version control with AnkhSVN; committing changes was also no problem.

    Now I would like to check this out on another machine. I was able to check out main.sln, which downloaded everything inside that solution. It skipped services.sln, persistence.sln and all the test projects. Up to here, everything is fine.

    Now, here comes the problem: when I try to check out a sub-solution (e.g. services.sln) I get an error; I think it was UnsupportedOperation. I guess this happens because AnkhSVN is trying to download the folder Service A again and create its hidden .svn folder, which is already present. The only workaround I can think of for now is installing TortoiseSVN and checking out the whole thing at once. It would be nicer, though, to have everything from within VS. Does anyone know how I can solve this? Is another client the only solution?

    Read the article

  • GNU Make - Dependencies on non program code

    - by Tim Post
    A requirement for a program I am writing is that it must be able to trust a configuration file. To accomplish this, I am using several kinds of hashing algorithms to generate a hash of the file at compile time, producing a header with the hashes as constants. Dependencies for this are pretty straightforward: my program depends on config_hash.h, which has a target that produces it. The makefile looks something like this:

        config_hash.h:
            $(SH) genhash config/config_file.cfg > $(srcdir)/config_hash.h

        $(PROGRAM): config_hash.h $(PROGRAM_DEPS)
            $(CC) ... ... ...

    I'm using the -M option to gcc, which is great for dealing with dependencies: if my header changes, my program is rebuilt. My problem is that I need to be able to tell if the config file has changed, so that config_hash.h is regenerated. I'm not quite sure how to explain that kind of dependency to GNU make. I've tried listing config/config_file.cfg as a dependency for config_hash.h, and providing a .PHONY target for config_file.cfg, without success. Obviously, I can't rely on the -M switch to gcc to help me here, since the config file is not part of any object code. Any suggestions? Unfortunately, I can't post much of the Makefile, or I would have just posted the whole thing.
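
    One hedged observation, sketched below: the rule's target is config_hash.h, but the recipe writes to $(srcdir)/config_hash.h. If make runs with a $(srcdir) other than ".", the file whose timestamp make checks is never the file being written, so adding the .cfg prerequisite appears not to work. Making the target name match the output (and dropping any .PHONY on the config file, which would force regeneration on every run) is the usual shape:

        # sketch: header depends on the config file; target matches the written path
        $(srcdir)/config_hash.h: config/config_file.cfg
            $(SH) genhash config/config_file.cfg > $@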

    Read the article

  • Python Tkinter comparing PhotoImage objects

    - by Kyle Schmidt
    In a simple LightsOut game, when I click on a light I need to toggle the image on a button. I'm doing this with Tkinter, so I thought I'd just check which image is currently on the button (either 'on.gif' or 'off.gif') and set it to the other one, like this:

        def click(self, x, y):
            if self.buttons[x][y].image == self.on:
                self.buttons[x][y].config(image=self.off)
                self.buttons[x][y].image == self.off
            else:
                self.buttons[x][y].config(image=self.on)
                self.buttons[x][y].image == self.on

    This ends up always being True - I can turn a light off, but never turn it back on. I did some research and realized that I should probably be using cmp:

        def click(self, x, y):
            if cmp(self.buttons[x][y].image, self.on) == 0:
                self.buttons[x][y].config(image=self.off)
                self.buttons[x][y].image == self.off
            else:
                self.buttons[x][y].config(image=self.on)
                self.buttons[x][y].image == self.on

    But that gave me the exact same result. Both self.on and self.off are PhotoImage objects. Aside from keeping a separate set of lists which tracks what type of light is in each position and redrawing them every click, is there a way to directly compare two PhotoImage objects like this?
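
    A hedged sketch of the toggle; note that the `image == self.off` lines in the code above only compare and discard the result (a single '=' assigns), which would leave `image` stuck at its first value and explain the one-way behavior:

        # sketch: compare PhotoImage objects by identity, and assign the new state
        def click(self, x, y):
            btn = self.buttons[x][y]
            new_image = self.off if btn.image is self.on else self.on
            btn.config(image=new_image)
            btn.image = new_image  # '=' assigns; '==' merely compares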

    Read the article

  • Removing HttpModule for specific path in ASP.NET / IIS 7 application?

    - by soccerdad
    Most succinctly, my question is whether an ASP.NET 4.0 app running under IIS 7 integrated mode should be able to honor this portion of my Web.config file:

        <location path="auth/windows">
          <system.webServer>
            <modules>
              <remove name="FormsAuthentication"/>
            </modules>
          </system.webServer>
        </location>

    I'm experimenting with mixed-mode authentication (Windows and Forms - I know there are other questions on S.O. about the topic). Using IIS Manager, I've disabled Anonymous authentication for auth/windows/winauth.aspx, which is within the location path above. I have Failed Request Tracing set up to trace various HTTP status codes, including 302s. When I request the winauth.aspx page, a 302 HTTP status code is returned. If I look at the request trace, I can see that a 401 (unauthorized) was originally generated by the AnonymousAuthenticationModule. However, the FormsAuthenticationModule converts that to a 302, which is what the browser sees.

    So it seems as though my attempt to remove that module from the pipeline for pages in that path isn't working, but I'm not seeing any complaints anywhere (event viewer, yellow pages of death, etc.) that would indicate it's an invalid configuration. I want the 401 returned to the browser, which presumably would include an appropriate WWW-Authenticate header.

    A couple of other points: a) I do have <authentication mode="Forms"> in my Web.config, and that is what the 302 redirects to; b) I got the "name" of the module I'm trying to remove from the inetsrv\config\applicationHost.config file.

    Anyone had any luck removing modules in this fashion? Thanks much, Donnie

    Read the article

  • Git: changes not reflecting on other checkouts - huh?

    - by Chad Johnson
    Okay, so, I have my branches (git branch -a):

        * chat
          master
          remotes/origin/HEAD -> origin/master
          remotes/origin/chat

    I make changes (still with the 'chat' branch checked out), commit, and push. I go to my server, on which I have a clone of the repository, and I do a fetch:

        git fetch

    then I switch to the chat branch:

        git checkout --track -b chat origin/chat

    and I even do a pull, just to make sure everything is up to date:

        git pull

    and my changes from my other computer are NOT. THERE. What the heck am I doing wrong? If I had hair, I would have pulled it out. Thankfully I am bald. When I try a 'git commit' again, I get this:

        # On branch chat
        # Changed but not updated:
        #   (use "git add/rm <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   app/controllers/chat_controller.rb
        #       modified:   app/views/dashboard/index.html.erb
        #       modified:   app/views/dashboard/layout.js.erb
        #       modified:   app/views/layouts/dashboard.html.erb
        #       deleted:    app/views/project/.tmp_edit.html.erb.55742~
        #       deleted:    app/views/project/.tmp_edit.html.erb.83482~
        #       modified:   public/stylesheets/dashboard/layout.css
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       .loadpath
        #       .project
        #       config/database.yml
        #       config/environments/development.yml
        #       config/environments/production.yml
        #       config/environments/test.yml
        #       log/
        no changes added to commit (use "git add" and/or "git commit -a")
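
    For reference, a sketch of the round trip that would carry those changes across; the status output above shows modified files that were never staged, so the earlier commit may have gone out without them (branch names are from the post, everything else is assumed):

        # on the workstation, with chat checked out
        git add -A                    # stage the modified/deleted files git status listed
        git commit -m "chat work"
        git push origin chat

        # on the server
        git checkout chat
        git pull origin chat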

    Read the article

  • How to implement jsf validator?

    - by Krishna
    Hi, I want to know how to implement a Validator in JSF. What are the advantages of declaring the validator-id? When will it be called in the life cycle? I have implemented the following code; please find out what is wrong in it. I am not seeing it called anywhere in the life cycle.

        <?xml version="1.0"?>
        <!DOCTYPE faces-config PUBLIC
          "-//Sun Microsystems, Inc.//DTD JavaServer Faces Config 1.1//EN"
          "http://java.sun.com/dtd/web-facesconfig_1_1.dtd">
        <faces-config>
          <lifecycle>
            <phase-listener>javabeat.net.jsf.JsfPhaseListener</phase-listener>
          </lifecycle>
          <validator>
            <validator-id>JsfValidator</validator-id>
            <validator-class>javabeat.net.jsf.JsfValidator</validator-class>
          </validator>
          <managed-bean>
            <managed-bean-name>jsfBean</managed-bean-name>
            <managed-bean-class>javabeat.net.beans.ManagedBean</managed-bean-class>
            <managed-bean-scope>request</managed-bean-scope>
          </managed-bean>
          <navigation-rule>
            <navigation-case>
              <from-outcome>success</from-outcome>
              <to-view-id>success.jsp</to-view-id>
            </navigation-case>
          </navigation-rule>
        </faces-config>

        public class JsfValidator implements Validator {
            public JsfValidator() {
                System.out.println("Inside JsfValidator Constructor");
            }

            @Override
            public void validate(FacesContext facesContext, UIComponent uiComponent, Object object)
                    throws ValidatorException {
                System.out.println("Inside Validator");
            }
        }
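
    One thing the faces-config entry alone never does is run the validator: registering a validator-id only gives it a name. It executes during the Process Validations phase, and only for components it is attached to, e.g. (a sketch, with a made-up input field):

        <!-- sketch: attach the registered validator to an input component -->
        <h:inputText id="name" value="#{jsfBean.name}">
            <f:validator validatorId="JsfValidator" />
        </h:inputText>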

    Read the article

  • VS 2008 debugger: How does it decide what Cassini port to run a web service under?

    - by BDW
    I have a VS 2008 solution that includes a web site and a web service. I'm developing both at once, and it's helpful to be able to debug from one into the other. It occasionally can't find the web service. If I look in the web.config, I find the port number it's looking at is not the port number it auto-runs the service on when I use the debugger. For example, the web.config reference says something like:

        <add key="mynamespace.mywebservice" value="http://localhost:55765/mywebservice.asmx" />

    When I hover over the Cassini port icon, I find that the web service is running on port 55382 (or some other non-55765 port). No wonder it can't find it. Is there a way to enforce that the port number it runs under is the one specified in the web.config? And if it's not using the web.config port number to figure out where to run it, where does it decide? I know in VS 2005 there was a way to specify the port number to use when debugging, but I can't find that anywhere in the web service project in VS 2008. This is really going to cause problems as more developers come onto this project - how can I fix it? Deleting and re-adding the web services to the project fixes it, but I'd literally have to do that a couple of times a day; not an ideal solution.

    Read the article

  • Excel automation using C#

    - by tecnodude
    Hi, I have a folder with close to 400 Excel files. I need to copy the worksheets in all these Excel files to a single Excel file. Using the Interop and Reflection namespaces, here is what I have so far: I use a FolderBrowserDialog to browse to the folder and select it, which lets me get the names of the files within the folder and iterate through them. This is as far as I got; any help would be appreciated.

        if (result == DialogResult.OK)
        {
            string path = fbd1.SelectedPath; // get the path
            int pathLength = path.Length + 1;
            string[] files = Directory.GetFiles(fbd1.SelectedPath); // names of the files in that folder

            foreach (string i in files)
            {
                MessageBox.Show("1 " + i);
                myExcel.Application excelApp = new myExcel.ApplicationClass();
                excelApp.Visible = false;
                MessageBox.Show("2 " + i);

                myExcel.Workbook excelWorkbook = excelApp.Workbooks.Add(
                    excelApp.Workbooks._Open(i, 0, false, 5, "", "", false,
                        myExcel.XlPlatform.xlWindows, "", true, false, 0, true));
                myExcel.Sheets excelSheets = excelWorkbook.Worksheets;
                MessageBox.Show("3 " + i);

                excelApp.Workbooks.Close();
                excelApp.Quit();
            }
            MessageBox.Show("Done!");
        }

    How do I append the copied sheets to the destination file? I hope the question is clear. Thanks.
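
    A hedged sketch of the appending step using Worksheet.Copy, which can place a copy after a sheet in another workbook; destWorkbook is an assumed, already-open destination workbook, and saving/cleanup is omitted:

        // sketch: copy every sheet of the opened source workbook to the end of destWorkbook
        foreach (myExcel.Worksheet sheet in excelWorkbook.Worksheets)
        {
            sheet.Copy(Type.Missing,
                       destWorkbook.Sheets[destWorkbook.Sheets.Count]); // append after the last sheet
        }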

    Read the article

  • Web API Getting Http 500 error : Issue Solved See Below

    - by Joe Grasso
    Here is my MVC controller, and everything is fine:

        private UnitOfWork UOW;

        public InventoryController()
        {
            UOW = new UnitOfWork();
        }

        // GET: /Inventory/
        public ActionResult Index()
        {
            var products = UOW.ProductRepository.GetAll().ToList();
            return View(products);
        }

    The same method call in an API controller gives me an HTTP 500 error:

        private UnitOfWork _unitOfWork;

        public TestController()
        {
            _unitOfWork = new UnitOfWork();
        }

        public IEnumerable<Product> Get()
        {
            var products = _unitOfWork.ProductRepository.GetAll().ToList();
            return products;
        }

    Debugging shows that there is indeed data being returned by both controllers' UOW calls. I then added a custom configuration in Global:

        public static void CustomizeConfig(HttpConfiguration config)
        {
            config.Formatters.Remove(config.Formatters.XmlFormatter);

            var json = config.Formatters.JsonFormatter;
            json.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
        }

    I am still receiving an HTTP 500 in the API controller ONLY, and am at a loss as to why. Any ideas?

    UPDATE: It appears using lazy loading caused the problem. When I set the associated properties to NON-VIRTUAL, the Test API provided the necessary JSON string. However, whereas before I had the Vendor class included, I only have VendorId. I really wanted to include the associated classes. Any ideas? I know there are a lot of smart people out there. Anyone?
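
    A hedged sketch for keeping the virtual navigation properties: lazy-loading proxies commonly trip the JSON serializer with circular references, which surfaces as a bare 500. Telling Json.NET to skip reference loops (and, if Entity Framework is in play, turning proxy creation off for serialized queries) is one way out; whether it fits this model is an assumption:

        // in CustomizeConfig -- sketch: tolerate circular references from lazy-loaded graphs
        json.SerializerSettings.ReferenceLoopHandling =
            Newtonsoft.Json.ReferenceLoopHandling.Ignore;

        // optionally, before querying (hypothetical path to the EF context):
        // dbContext.Configuration.ProxyCreationEnabled = false;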

    Read the article

  • Report Viewer Configuration error - In View Source of webpage

    - by sarada
    I found the following error message when I checked the View Source of the web page, but the web page works just fine. Our test lead found the error while performing assertion tests.

        Report Viewer Configuration Error

        The Report Viewer Web Control HTTP Handler has not been registered in the application's web.config file. Add

            <add verb="*" path="Reserved.ReportViewerWebControl.axd" type="Microsoft.Reporting.WebForms.HttpHandler, Microsoft.ReportViewer.WebForms, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />

        to the system.web/httpHandlers section of the web.config file, or add

            <add name="ReportViewerWebControlHandler" preCondition="integratedMode" verb="*" path="Reserved.ReportViewerWebControl.axd" type="Microsoft.Reporting.WebForms.HttpHandler, Microsoft.ReportViewer.WebForms, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />

        to the system.webServer/handlers section for Internet Information Services 7 or later.

    Why is this error message coming up in the view source? Note: there is a div tag around this error message which has style="display:none". I am trying to find out why, but everybody has only discussed this error message as one that is thrown on the web page itself. The changes suggested for web.config are already present in our config file.

    Read the article

  • ASP.NET - Accessing copied content

    - by James Kolpack
    I have a class library project which contains some content files configured with the "Copy if newer" copy build action. This results in the files being copied to a folder under ...\bin\ for every project in the solution. In this same solution, I've got an ASP.NET web project (which is MVC, by the way). In the library, a static constructor loads the files into data structures accessible by the web project. Previously I've been including the content as an embedded resource; I now need to be able to replace the files without recompiling.

    I want to access the data in three different contexts:

        1. Unit testing the library assembly
        2. Debugging the web application
        3. Hosting the site in IIS

    For unit testing, Environment.CurrentDirectory points to a path containing the copied content. When debugging, however, it points to C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE. I've also looked at Assembly.GetExecutingAssembly().Location, which points to C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\c44f9da4\9238ccc\assembly\dl3\eb4c23b4\9bd39460_f7d4ca01\.

    What I need is the physical location of the web root's \bin folder, but since I'm in a static constructor in the library project, I don't have access to a Request.PhysicalApplicationPath. Is there some other environment variable or structure where I can always find my "Copy if newer" files?
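
    A hedged sketch that covers all three contexts without touching Request: under ASP.NET, the AppDomain's relative search path points at the web root's bin folder, and it is null for console/test runs, where BaseDirectory is already the output folder (worth verifying in each environment; the file name below is hypothetical):

        // sketch: locate the folder the "Copy if newer" content lands in
        string contentDir = AppDomain.CurrentDomain.RelativeSearchPath  // "<webroot>\bin" under ASP.NET
                            ?? AppDomain.CurrentDomain.BaseDirectory;   // bin\Debug for tests/console
        string dataPath = System.IO.Path.Combine(contentDir, "content.xml"); // hypothetical file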

    Read the article

  • A problem regarding dll inheritance

    - by Adam
    Hello all. I have created a dll that will be used by multiple applications, and have created an installer package that installs it to Program Files as well as adding it to the Global Assembly Cache. The dll itself uses log4net and requires an xml file for the logging definitions. Therefore, when the installer runs, the following files get copied to the install directory within Program Files: the main dll that I developed, the log4net.dll, and the log4net.xml file.

    I am now experiencing a problem. I have created a test console application for experimentation. I have added my dll as a reference and set its 'Copy Local' flag to false. When I compile the test console exe, however, I noticed that it copied the log4net.dll and log4net.xml files to the bin directory. And when running the test console, it appears that it will only work if the log4net.dll is in the same directory as the exe. This is despite the fact that the test console application does not use log4net; only the dll that was added as a reference does.

    Is there some way to have the log4net.dll and xml files that were installed to Program Files be the ones used, rather than every application needing to copy over local copies? The applications that will be using my dll will not use log4net themselves; only the dll they reference uses it. Many thanks.

    Read the article

  • implementing a Intelligent File Transfer Software in java over TCP/IP

    - by whyjava
    Hello, I am working on a proposal where we have to implement software which can move files from a source to a destination. The overall goal of this project is to create intelligent file transfer. This software will have three components:

        1. Broker: communicates with other brokers, monitors files, moves files, retrieves configurations from the Configuration Manager, supplies process information for the Monitor, archives files, writes all process data to log files, and escalates issues if necessary.
        2. Configuration Manager: a web-based application used to configure and deploy the configuration to all brokers.
        3. Monitor: a web-based application used to monitor each Broker in the environment.

    This project has to be built in Java, with the file transfer protocol over TCP/IP; the client does not want to use FTP. File transfer seems very easy, until there are several processes waiting to pick the file up automatically. Several problems arise:

        - How can we guarantee the file is received at the destination?
        - If a file isn't received the first time, how do we retry (even after a restart or power failure)?
        - How does the receiver know the file it received is complete?
        - How can we transfer multiple files synchronously?
        - How can we protect the bandwidth, so file transfer isn't blocking other processes?
        - How does one interoperate between multiple OS platforms?
        - What about authentication?
        - How can we monitor the workflow?
        - Auditing / logging
        - Archiving

    Can you please provide answers to some of these? Thanks.
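
    On the "is the file complete?" point, one common approach is a sketch like the following, under the assumption that the sender transmits a digest alongside the file, which the receiver recomputes and compares:

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        // sketch: receiver-side digest of a transferred file
        static String sha256(String path) throws IOException, NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            try (InputStream in = new FileInputStream(path)) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    md.update(buf, 0, n);
                }
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest()) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString(); // compare with the digest the sender announced
        }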

    Read the article

  • Fontconfig finds no fonts

    - by user1858080
    I'm trying to make my C++ program detect the fonts installed on my Win32 machine. I tried fontconfig, taking the library from the GTK+ bundle. I use the following test code:

        #include <fontconfig.h>

        FcBool success = FcInit();
        if (!success) {
            return false;
        }

        FcConfig *config = FcInitLoadConfigAndFonts();
        if (!config) {
            return false;
        }

        FcChar8 *s, *file;
        FcPattern *p = FcPatternCreate();
        FcObjectSet *os = FcObjectSetBuild(FC_FAMILY, NULL);
        FcFontSet *fs = FcFontList(config, p, os);

        LOG("Total fonts: %d\n", fs->nfont);
        for (int i = 0; fs && i < fs->nfont; i++) {
            FcPattern *font = fs->fonts[i];
            s = FcNameUnparse(font);
            LOG("Font: %s\n", s);
            free(s);
            if (FcPatternGetString(font, FC_FILE, 0, &file) == FcResultMatch) {
                LOG("Filename: %s\n", file);
            }
        }
        // destroy objects here ...

    Unfortunately, this test application only prints "Total fonts: 0". I know there are fonts installed on my machine, and I know that Gimp 2.0 detects them, so there must be something wrong with my test code. Does anyone have any idea? Besides linking fontconfig-1.dll I did nothing special. I haven't created any config files or anything, because I couldn't read anywhere about having to do that. Any suggestions are welcome, thanks!
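
    Given the note about never creating any config files: fontconfig typically reports zero fonts on Windows until a fonts.conf tells it where to scan. A hedged sketch of a minimal fonts.conf (pointed to via the FONTCONFIG_FILE environment variable; both paths are assumptions for this machine):

        <?xml version="1.0"?>
        <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
        <fontconfig>
          <!-- scan the standard Windows font directory -->
          <dir>C:\Windows\Fonts</dir>
          <!-- a writable location for fontconfig's cache -->
          <cachedir>C:\Temp\fontconfig-cache</cachedir>
        </fontconfig>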

    Read the article

  • how to use a PHP Constant that gets pulled from a database

    - by Ronedog
    Can you read out the name of a PHP constant from a database and use it inside a PHP variable, to display the value of the constant for use in a menu? For example, here's what I'm trying to accomplish. In SQL:

        select menu_name AS php_CONSTANT where menu_id=1

    The value returned would be L_HOME, which is the name of a constant in a PHP config page. The config page looks like this and gets loaded before the database call:

        define('L_HOME', 'Home');

    The PHP usage would be $db_returned_constant, which holds the value L_HOME that came from the db call; I would then place this into a string such as:

        $string = '<ul><li>' . $db_returned_constant . '</li></ul>';

    and thus return a string that looks like:

        $string = '<ul><li><a href="#" onclick="path_from_db">Home</a></li></ul>';

    To sum up what I'm trying to do:

        1. Load a config file based on the language preference.
        2. Query the db to return the menu name, which is the name of a constant in the config file loaded in step one, and also retrieve the menu_link, which is used in the "onclick" event.
        3. Use a PHP variable to hold the name of the constant.
        4. Place the variable into a string that gets echoed out to create the menu, displaying the value of the constant.

    I hope this makes enough sense... is it even possible to use a constant like this? Thanks.
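
    PHP's constant() function does exactly this lookup: it takes the constant's name as a string and returns its defined value. A minimal sketch (the $row array and its column names are assumptions about the query result):

        <?php
        define('L_HOME', 'Home'); // from the language config file, loaded earlier

        // sketch: $row is one fetched menu record; column names are assumed
        $label  = constant($row['menu_name']); // 'L_HOME' -> 'Home'
        $string = '<ul><li><a href="#" onclick="' . $row['menu_link'] . '">'
                . $label . '</a></li></ul>';
        echo $string;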

    Read the article
