Search Results

Search found 15209 results on 609 pages for 'configuration'.


  • Glassfish, railo and coldbox - messed up links?

    - by mrt181
    I am new to ColdFusion and ColdBox (and programming). I tried to set up ColdBox, but some of the links in the sample applications are broken. My configuration is a GlassFish v3 installation with the current Railo OSS, and I access my site through Apache 2.2.14, so instead of http://127.0.0.1:8080/railo/ I access my environment through http://railo/. In Railo I have a webroot mapping from / to C:/webapps/myproject/. I have copied the current ColdBox 3M4 to C:/webapps/myproject/coldbox. I can access the dashboard through http://railo/coldbox/dashboard/index.cfm and have access to all options.

    My problems start the moment I try to open the sample gallery:

        HTTP Status 500 -
        type: Exception report
        message:
        description: The server encountered an internal error () that prevented it from fulfilling this request.
        exception: java.io.FileNotFoundException: C:\webapps\viss-dev\coldbox\samples (Zugriff verweigert / access denied)
        note: The full stack traces of the exception and its root causes are available in the GlassFish v3 logs.
        GlassFish v3

    OK, no problem, just enter the link directly: http://railo/coldbox/samples/index.cfm. The site looks plain, who cares - BUT all local links look like this: http://127.0.0.1:8080/coldbox/samples/applications/helloworld/index.cfm (railo is replaced with 127.0.0.1:8080). Looks like trouble. To make my confusion perfect: when I try to access the login app at http://railo/coldbox/samples/applications/sampleloginapp/index.cfm and hit the submit button, I am redirected to this address: http://railo/railo/coldbox/samples/applications/sampleloginapp/index.cfm.

    I believe this is not really ColdBox-related, but it manifests itself when I try to use ColdBox, so here I am. P.S.: amazon.de takes too long to ship the ColdBox book :(

    Read the article

  • Netbeans (PHP) catching syntax error on xml declaration

    - by Mike Valeriano
    Hello. I've just installed and configured NetBeans to work with PHP (including Xdebug), and almost everything is working as intended, except that I've been getting "errors" in the IDE after I edited the default webpage template to comply with XHTML 1.1. The template is this:

        <?xml version="1.0" encoding="${project.encoding}" ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
            <head>
                <meta http-equiv="Content-Type" content="text/html; charset=${project.encoding}" />
                <title></title>
            </head>
            <body>
            </body>
        </html>

    These are the errors I receive (I can't post images yet). The page is created OK, and the character encoding is inherited from the project's settings as expected. It's probably something to do with the XML declaration on top of the document, but I don't really know how to "tune" NetBeans to ignore it and not show the 3 errors on every page I create. The warning is there because NetBeans does not recognize the XHTML 1.1 DTD, so it falls back to HTML 4.01, which does not support the xmlns attribute in the html tag - and that's the only thing I could find searching around. It will be fixed in the next version, so I'm not worried about it.

    I know there's nothing wrong with the markup, but there's probably something I'm missing in the NetBeans configuration, and I would like to get rid of those messages because they pretty much take up all the space I reserve for errors/warnings/tasks. So is there any way I can either make NetBeans recognize this XML declaration or make it ignore these specific "errors"? Thanks.

    Read the article

  • Optional Member Objects

    - by David Relihan
    Okay, so you have a load of methods sprinkled around your system's main class. So you do the right thing and refactor by creating a new class and performing "move method(s)" into the new class. The new class has a single responsibility and all is right with the world again:

        class Feature
        {
        public:
            Feature(){};
            void doSomething();
            void doSomething1();
            void doSomething2();
        };

    So now your original class has a member variable of type Feature:

        Feature _feature;

    which you will call in the main class. Now if you do this many times, you will have many member objects in your main class. These features may or may not be required based on configuration, so in a way it's costly having all these objects that may or may not be needed.

    Can anyone suggest a way of improving this? At the moment I plan to test in the newly created class whether the feature is enabled - so when a call is made to a method, I will return if it is not enabled. I could have a pointer to the object and then only call new if the feature is enabled - but this means I will have to test before I call a method on it, which would be potentially dangerous and not very readable. Would having an auto_ptr to the object improve things:

        auto_ptr<Feature> feature;

    or am I still paying the cost of object invocation even though the object may or may not be required? BTW - I don't think this is premature optimisation - I just want to consider the possibilities.

    Read the article

  • How to send arbitrary ftp commands in C#

    - by cchampion
    I have implemented the ability to upload, download, delete, etc. using the FtpWebRequest class in C#. That is fairly straightforward. What I need to do now is support sending arbitrary FTP commands such as:

        quote SITE LRECL=132 RECFM=FB

    or

        quote SYST

    Here's an example configuration straight from our app.config:

        <!-- The following commands will be executed before any uploads occur -->
        <extraCommands>
          <command>quote SITE LRECL=132 RECFM=FB</command>
        </extraCommands>

    I'm still researching how to do this using FtpWebRequest. I'll probably try the WebClient class next. Can anyone point me in the right direction quicker? Thanks!

    UPDATE: I've come to that same conclusion; as of .NET Framework 3.5, FtpWebRequest doesn't support anything except what's in WebRequestMethods.Ftp.*. I'll try a third-party app recommended by some of the other posts. Thanks for the help!
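
    Since FtpWebRequest only accepts the verbs exposed through WebRequestMethods.Ftp, one workaround is to talk to the FTP control channel directly. Below is a minimal sketch, not a production implementation: it assumes plain (non-TLS) FTP on port 21, does no multi-line reply or error handling, and the host and credentials are placeholders.

        using System;
        using System.IO;
        using System.Net.Sockets;

        class RawFtpCommand
        {
            // Logs in and sends a single raw command (e.g. "SITE LRECL=132 RECFM=FB"),
            // returning the first reply line from the server.
            static string SendCommand(string host, string user, string pass, string command)
            {
                using (var client = new TcpClient(host, 21))
                using (var stream = client.GetStream())
                using (var reader = new StreamReader(stream))
                using (var writer = new StreamWriter(stream) { AutoFlush = true, NewLine = "\r\n" })
                {
                    reader.ReadLine();                 // 220 welcome banner
                    writer.WriteLine("USER " + user);
                    reader.ReadLine();                 // 331 need password
                    writer.WriteLine("PASS " + pass);
                    reader.ReadLine();                 // 230 logged in
                    writer.WriteLine(command);         // the arbitrary command
                    string reply = reader.ReadLine();  // e.g. 200 SITE command ok
                    writer.WriteLine("QUIT");
                    return reply;
                }
            }

            static void Main()
            {
                // Placeholder host/credentials for illustration only.
                Console.WriteLine(SendCommand("ftp.example.com", "user", "pass", "SITE LRECL=132 RECFM=FB"));
            }
        }

    This bypasses FtpWebRequest entirely, so anything beyond the happy path (TLS, proxies, multi-line replies) would still need to be added or handled by a third-party FTP library.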

    Read the article

  • java logging nightmare and log4j not behaving as expected with spring + tomcat6

    - by maverick
    I have a Spring application with log4j configured (via XML) that runs on Tomcat 6. It was working fine until we added a bunch of dependencies via Maven. At some point the whole application just started logging only part of what it was supposed to according to what is declared in log4j.xml.

    A small rant here: why does logging have to be this hard in the Java world? Why does an application that was just fine suddenly start behaving so weirdly, and why is it so freaking hard to debug? I've been reading and trying to solve this issue for days but so far no luck; hopefully some expert here can give me some insights.

    I've added the log4j debug option to check whether log4j is reading the config file and its values, and this is part of what it shows:

        log4j: Level value for org.springframework.web is [debug].
        log4j: org.springframework.web level set to DEBUG
        log4j: Retreiving an instance of org.apache.log4j.Logger.
        log4j: Setting [org.compass] additivity to [true].
        log4j: Level value for org.compass is [debug].
        log4j: org.compass level set to DEBUG

    As you can see, debug is enabled for Compass and spring.web, but it only shows "INFO" level for both packages. My log4j config file has nothing out of the ordinary, just a plain ConsoleAppender:

        <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
          <!-- Appenders -->
          <appender name="console" class="org.apache.log4j.ConsoleAppender">
            <param name="Target" value="System.out" />
            <layout class="org.apache.log4j.PatternLayout">
              <param name="ConversionPattern" value="%-5p: %c - %m%n" />
            </layout>
          </appender>

    What's the trick to make this work? What is my misunderstanding here? Can someone point me in the right direction and explain how I can make this logging mess more bulletproof?

    Read the article

  • What ORM for .NET should I use?

    - by eKek0
    I'm relatively new to .NET and have been using Linq2Sql for almost a year, but it lacks some of the features I'm looking for now. I'm going to start a new project in which I want to use an ORM with the following characteristics:

    - It has to be very productive. I don't want to be dealing with the access layer to save or retrieve objects from or to the database, but it should allow me to easily tweak any object before actually committing it to the database; it should also allow me to work easily with a changing database schema.
    - It should allow me to extend the objects mapped from the database, for example to add virtual attributes to them (virtual columns to a table).
    - It has to be (at least almost) database agnostic; it should allow me to work with different databases in a transparent way.
    - It should need little configuration, or be based on conventions to make it work.
    - It should allow me to work with Linq.

    So, do you know any ORM that I could use? Thank you for your help.

    EDIT: I know that one option is to use NHibernate. It appears to be the de facto standard for enterprise-level applications, but it also seems that it is not very productive because of its steep learning curve. On the other hand, I have read in some other posts here on SO that it doesn't integrate well with Linq. Is all of that true?

    Read the article

  • After rich:extendedDataTable sortBy, other actions are not getting executed

    - by user118802
    I have a RichFaces UI with a sidebar menu, and the sidebar has 8 links. I am using Seam @DataModel and @Factory and Hibernate criteria to populate all 8 pages. All the pages have sortBy functionality, which is working fine. I am able to get all the data in all 8 pages and I can freely navigate around all the links/xhtmls. But if I do sorting or groupBy in one of the xhtml pages, after that I am unable to navigate to the other pages: if I select any other link, the same last query that was executed for the sorting gets executed again. Is this an issue, or do I need to add some configuration? Please help me in solving this issue. Below is the code snippet of one of the 8 xhtmls:

        <rich:column sortable="true" sortBy="#{p.regionid}" width="100px" label="Region Id">
          <f:facet name="header">
            <h:outputText value="Region Id" />
          </f:facet>
          <h:outputText value="#{p.regionid}" />
        </rich:column>
        <rich:column sortable="true" sortBy="#{p.region}" width="100px" label="Region Name">
          <f:facet name="header">
            <h:outputText value="Region Name" />
          </f:facet>
          <h:outputText value="#{p.region}" />
        </rich:column>

    Sidebar action:

        @DataModel("regions")
        private List<CoreRegion> listRegions;

        @Factory("regions")
        public void getRegions() {
            System.out.println("Inside get Regions");
            Session userDatabase = HibernateUtil.getSession();
            Criteria crit = userDatabase.createCriteria(CoreRegion.class);
            listRegions = crit.list();

    Read the article

  • Can I force Apache 2.2 connection close from inside a C module?

    - by Amos Shapira
    Hello, we'd like to have more fine-grained control over the connections we serve in a C++ Apache 2.2 module (on CentOS 5). One of the connections needs to stay alive across multiple requests, so we set "KeepAlive" to "On" and set a short keep-alive period. But for every such connection we have a few more connections from the browser which we don't need to keep around, and instead want to force to close after a single request. Some of these connections are on different ports (so we can distinguish them by port, since KeepAlive can be set per virtual host) and some request a different URL (so we can tell from the path and parameters that we don't want to leave them behind). Also, for the one we do want to keep alive, we know that after a certain request we'd like to close it too.

    But so far the only way we found to "cancel" the keep-alive is to send a polite "Connection: close" header to the client. If the client is not well behaved, or malicious, it can keep the connection open and waste our resources. Is there a way to tell Apache to close the connection from the server side? The documentation advises against a plain close(2) call on the socket, since Apache needs to do some cleanup before that's done. But is there some API or a trick to "override" the static "KeepAlive On" configuration dynamically (and convince Apache to call close(2))? Thanks.

    Read the article

  • Using WebMatrix and Visual Studio 2010 on a Razor project

    - by Terrence Koehn
    I am having problems using both WebMatrix and VS on a Razor project. I have downloaded and installed all updates from the official ASP.NET web site. After getting the project to compile in VS I get the following error: "The type of page you have requested is not served because it has been explicitly forbidden. The extension '.cshtml' may be incorrect." Now when I open the project in WebMatrix I receive the same error. I can open/run other projects in WebMatrix without errors, so apparently VS changed some configuration in my project?

    Fortunately I have found a workaround, but the problem is still not solved:

    1) Create a new empty folder for the site.
    2) Copy the contents of the folder from the failing site into it.
    3) In WebMatrix use the option "Site From Folder".

    Once I have the site up and running with the above steps, I can delete the original folder, then rename the new folder (which is now working) to the original name, and the site will stop working again. There is some setting on my system tied to the original folder name that is stopping cshtml files from being served. What/where is that setting? Thanks, Terrence Koehn

    Read the article

  • Can I stop the dbml designer from adding a connection string to the dbml file?

    - by drs9222
    We have a custom function AppSettings.GetConnectionString() which is always called to determine the connection string that should be used. How this function works is unimportant to the discussion; it suffices to say that it returns a connection string and I have to use it. I want my LINQ to SQL DataContext to use this, so I removed all connection string information from the dbml file and created a partial class with a default constructor like this:

        public partial class SampleDataContext
        {
            public SampleDataContext()
                : base(AppSettings.GetConnectionString())
            {
            }
        }

    This works fine until I use the designer to drag and drop a table into the diagram. The act of dragging a table into the diagram will do several unwanted things:

    - A Settings file will be created
    - An app.config file will be created
    - My dbml file will have the connection string embedded in it

    All of this is done before I even save the file! When I save the diagram, the designer file is recreated and it will contain its own default constructor which uses the wrong connection string. Of course this means my DataContext now has two default constructors and I can't build anymore! I can undo all of these bad things, but it is annoying: I have to manually remove the connection string and the new files after each change. Is there any way I can stop the designer from making these changes without asking?

    EDIT: The requirement to use the AppSettings.GetConnectionString() method was imposed on me rather late in the game; I used to use something very similar to what the designer generates for me. There are quite a few places that call the default constructor. I am aware that I could change them all to create the data context in another way (using a different constructor, static method, factory, etc.). That kind of change would only be slightly annoying since it would only have to be done once. However, I feel that it is sidestepping the real issue: the dbml file and configuration files would still contain an incorrect, if unused, connection string, which at best could confuse other developers.
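
    For reference, a minimal sketch of the factory approach mentioned in the EDIT (the Create helper name is made up here); it avoids the duplicate default constructor because the partial class no longer defines one, and it relies on the connection-string constructor that the designer generates:

        // The partial class defines no default constructor, so it cannot collide
        // with the one the designer regenerates on every save.
        public partial class SampleDataContext
        {
            // Hypothetical factory helper; callers use SampleDataContext.Create()
            // instead of new SampleDataContext().
            public static SampleDataContext Create()
            {
                return new SampleDataContext(AppSettings.GetConnectionString());
            }
        }

    This does not stop the designer from re-adding the connection string to the dbml and config files, so the clean-up-after-each-change concern in the question still stands.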

    Read the article

  • problem configuring JBoss to work with JNDI

    - by Spiderman
    I am trying to bind a connection to the DB using JNDI in my application that runs on JBoss. I did the following:

    1. I created the datasource file oracle-ds.xml, filled it with the relevant XML elements:

        <datasources>
          <local-tx-datasource>
            <jndi-name>bilby</jndi-name>
            ...
          </local-tx-datasource>
        </datasources>

    and put it in the folder \server\default\deploy.

    2. Added the relevant Oracle jar file.

    3. Then in my application I performed:

        JndiObjectFactoryBean factory = new JndiObjectFactoryBean();
        factory.setJndiName("bilby");
        try {
            factory.afterPropertiesSet();
            dataSource = factory.getObject();
        } catch(NamingException ne) {
            ne.printStackTrace();
        }

    and this causes the error:

        javax.naming.NameNotFoundException: bilby not bound

    Then in the output, after this error occurred, I saw the line:

        18:37:56,560 INFO [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=DataSourceBinding,name=bilby' to JNDI name 'java:bilby'

    So what is my configuration problem? I think it may be that JBoss first loads and runs the .war file of my application and only then loads the oracle-ds.xml that contains my data-source definition. The problem is that they are both located in the same folder. Is there a way to define the priority of loading them, or maybe this is not the problem at all? Any idea?

    Read the article

  • Rails routing problem

    - by Steve
    I am new to Rails routing and I currently have a problem, and I hope someone can explain it to me. I am using Rails 2.3.5.

    Firstly, let me describe my working code. I have an example with a controller (cars_controller) that has an update action (along with some other actions). The update action needs the :id parameter. The edit.html.erb has a form:

        <% form_for :car, :url => { :controller => 'cars', :action => 'update' } %>
        ... # rest of the form content

    In config/routes.rb, I have a self-defined routing rule for update:

        map.connect 'car/update/:id', :controller => 'cars', :action => 'update'

    This works fine.

    Secondly, I change the code. All I change is the self-defined routing rule, to:

        map.connect 'car/:action/:id', :controller => 'cars'

    To me, this rule covers the previous self-written routing rule. Of course, this rule is also used by other actions such as edit. But the edit.html.erb doesn't work anymore: it complains that the update action is missing the :id parameter. I have to change the form_for helper to:

        <% form_for :car, :url => { :controller => 'cars', :action => 'update', :id => @car } %>
        ... # @car is the instance passed to the edit view

    I know that if the :id parameter is missing, the update action will complain. What I don't understand is why my first version works (with my self-defined routing rule) but my second version fails. It seems to me that I didn't provide the :id parameter in my self-defined routing rule either. Anyone has an idea?

    Read the article

  • TeamCity output artifacts not published to IIS7 folder

    - by clausas
    I am trying to set up TeamCity to build and deploy an ASP.NET MVC application. I have the setup running successfully on other servers using TeamCity 4.5, but the new server is running TeamCity 6, and I am having trouble getting it to work as expected. TeamCity manages to get the files from source control, and the project (Visual Studio Solution 2008 set to "Build") builds and outputs the necessary files as expected. The problem seems to be with my artifact paths, as the output files are not copied to the website folder.

    My solution consists of a dozen projects, of which the "Web" project is the interesting one in this case. The build checkout directory is C:\TeamCity\buildAgent\work\7da320cebf0ee541, and the "Web" project is found in C:\TeamCity\buildAgent\work\7da320cebf0ee541\Web.

    I have set up my build configuration with the following artifact paths (relative from the checkout directory to the folder containing the website):

        Web/bin=>../../../../inetpub/wwwroot/staging/bin
        Web/Content=>../../../../inetpub/wwwroot/staging/Content
        Web/Views=>../../../../inetpub/wwwroot/staging/Views
        Web/Media=>../../../../inetpub/wwwroot/staging/Media
        Web/*.aspx=>../../../../inetpub/wwwroot/staging
        Web/*.asax=>../../../../inetpub/wwwroot/staging

    (I've tried with more ../ just in case, but it didn't make a difference.) This is the output I get from the log:

        [19:35:29]: Publishing artifacts (1s)
        [19:35:29]: [Publishing artifacts] Paths to publish: [Web/bin=../../../../inetpub/wwwroot/staging/bin, Web/Content=../../../../inetpub/wwwroot/staging/Content, Web/obj=../../../../inetpub/wwwroot/staging/obj, Web/Views=../../../../inetpub/wwwroot/staging/Views, Web/Media=../../../../inetpub/wwwroot/staging/Media, Web/*.aspx=../../../../inetpub/wwwroot/staging, Web/*.asax=../../../../inetpub/wwwroot/staging, teamcity-info.xml]
        [19:35:30]: [Publishing artifacts] Sending files
        [19:35:32]: Build finished

    Logs from some of the other servers running TeamCity 4.5 use a different format, with a line for each of the artifacts being published; I'm not sure if this is relevant or only due to a different logging format. Everything seems to be working, but no files are put in my website folder after a build. Am I missing something here? Any help will be much appreciated :)

    Read the article

  • CentOS: make python 2.6 see django

    - by NP
    In a harrowing attempt to get mod_wsgi to run on CentOS 5.4, I've added Python 2.6 as an optional library, following the instructions here. The configuration seems fine, except that when trying to ping the server the Apache log prints this error:

        mod_wsgi (pid=20033, process='otalo', application='127.0.0.1|'): Loading WSGI script '...django.wsgi'.
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218] mod_wsgi (pid=20033): Target WSGI script '...django.wsgi' cannot be loaded as Python module.
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218] mod_wsgi (pid=20033): Exception occurred processing WSGI script '...django.wsgi'.
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218] Traceback (most recent call last):
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218]   File "...django.wsgi", line 8, in <module>
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218]     import django.core.handlers.wsgi
        [Sat Mar 27 16:11:45 2010] [error] [client 171.66.52.218] ImportError: No module named django.core.handlers.wsgi

    When I go to my Python 2.6 install's command line and try 'import django', the module is not found (ImportError). However, my default Python 2.4 installation (still working fine) is able to import it successfully. How do I point Python 2.6 to Django? Thanks in advance.

    Read the article

  • Auto-resolving a hostname in WCF Metadata Publishing

    - by Mike C
    I am running a self-hosted WCF service. In the service configuration, I am using localhost in the BaseAddresses that I hook my endpoints to. When connecting to an endpoint using the WCF Test Client, I have no problem reaching the endpoint and getting the metadata using the machine's name. The problem is that the client generated from the metadata uses localhost in the endpoint URLs it wants to connect to. I'm assuming this is because localhost is the endpoint URL published by the metadata. As a result, any calls to the methods on the service will fail, since localhost on the calling machine isn't running the service.

    What I would like to figure out is whether it is possible for the service metadata to publish the proper URL to a client depending on the client who is calling it. For example, if I was requesting the service metadata from a machine on the same network as the server, the endpoint should be net.tcp://MYSERVER:1234/MyEndpoint. If I was requesting it from a machine outside the network, the URL should be net.tcp://MYSERVER.mydomain.com:1234/MyEndpoint. And obviously if the client was on the same machine, THEN the URL could be net.tcp://localhost:1234/MyEndpoint.

    Is this just a flaw in the default IMetadataExchange contract? Is there some reason the metadata needs to publish the information in a non-contextual way? Is there another way I should be configuring my BaseAddresses in order to get the functionality I want? Thanks, Mike
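
    One common workaround (not a per-client answer, just a way to avoid publishing localhost in the metadata) is to build the base address from the machine's host name at startup. A minimal self-hosting sketch follows; the IMyService/MyService contract, the port and the path are placeholder names, not part of the question:

        using System;
        using System.Net;
        using System.ServiceModel;

        [ServiceContract]
        public interface IMyService
        {
            [OperationContract]
            string Ping();
        }

        public class MyService : IMyService
        {
            public string Ping() { return "pong"; }
        }

        class Program
        {
            static void Main()
            {
                // Use the machine's DNS host name instead of hard-coding localhost,
                // so clients generated from the metadata see a resolvable address.
                string host = Dns.GetHostName();
                var baseAddress = new Uri(string.Format("net.tcp://{0}:1234/MyEndpoint", host));

                using (var serviceHost = new ServiceHost(typeof(MyService), baseAddress))
                {
                    serviceHost.AddServiceEndpoint(typeof(IMyService), new NetTcpBinding(), "");
                    serviceHost.Open();
                    Console.WriteLine("Listening at " + baseAddress);
                    Console.ReadLine();
                }
            }
        }

    External clients would still need the published name (or a DNS alias) to resolve, so this does not give the per-network behaviour the question asks about; it only removes the localhost address from what gets published.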

    Read the article

  • generating and unmarshalling Java classes when the unmarshalling input contains a DTD

    - by Hans Westerbeek
    Hi, for a Spring-based project I have the following situation to solve: I have XML files coming in whose contents I will have to parse at runtime, and those XML files come with a DTD reference. I need to generate the classes that the unmarshaller churns out, using the right schema at build time, via the Maven2 plugin for the unmarshalling library of choice. This is not very hard to do once I have generated an XSD from the DTD. I want to use spring-oxm's Unmarshaller interface to do the unmarshalling at runtime; this I understand how to do.

    The XML files come in with a DTD reference, yet all the unmarshalling libraries out there want to do unmarshalling based on an XSD. Now, as described in the Castor documentation, I can convert the DTD to an XSD and keep it on the classpath. However, when an actual XML file comes into the system it will still have that DTD reference at the top, and there's nothing I can really do about that (except for string replacing, which feels hacky in this case).

    Will this cause the unmarshaller, like Castor, to fail? Am I right in suspecting that this DTD reference will cause the unmarshalling to fail? Could I do pure DTD-based unmarshalling? Or can this somehow be prevented by providing detailed configuration to the unmarshaller? Until now I have tried Castor, XMLBeans and XStream. Which would fit my purposes best? Has anyone else been in this situation? Did you also end up just doing manual DOM or SAX parsing?

    Read the article

  • Problems with ASP.NET, machine-level web.config, and the location element

    - by Daniel Schaffer
    I've got a server running Windows Web Server 2008 R2. The machine-level web.config has the following entries:

        <location path="Preview">
          <appSettings>
            <add key="Environment" value="Preview" />
          </appSettings>
        </location>
        <location path="Staging">
          <appSettings>
            <add key="Environment" value="Staging" />
          </appSettings>
        </location>
        <location path="Production">
          <appSettings>
            <add key="Environment" value="Production" />
          </appSettings>
        </location>

    I have a website that I'd set up in the directory D:\Sites\Preview\, so the full path would be D:\Sites\Preview\WebSite1. If I put up a simple aspx file that just outputs the value of ConfigurationManager.AppSettings["Environment"], it displays the value Preview. I'm not clear on exactly how that works, but it does.

    I'd set this up several weeks ago, and just now tried to duplicate it - I put a second site in the D:\Sites\Preview\ directory, expecting that it would automatically pick up the appropriate appSettings entries, but for some reason it hasn't - the same aspx page doesn't show anything. Additionally, when I go into the IIS manager and open the Configuration Editor, there are no settings listed in there, whereas there are settings listed for the first site. Any ideas as to what I could be missing? Is the location element intended to work like this, or did I just find some magical fluke with my first site?
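
    For context, a minimal sketch of the kind of test page described above (an ASP.NET page whose only job is to echo the effective app setting); the file and class names here are made up:

        // Environment.aspx.cs - hypothetical code-behind for the test page
        using System;
        using System.Configuration;

        public partial class EnvironmentPage : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Writes whichever value the effective <appSettings> supplies,
                // e.g. "Preview" when a matching <location> override applies.
                Response.Write(ConfigurationManager.AppSettings["Environment"]);
            }
        }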

    Read the article

  • Intellisense for custom config section problem with namespaces

    - by Quick Joe Smith
    I have just rolled a custom configuration section, created an accompanying schema document for IntelliSense, and added it to the Web.config's Schemas property as per Michael Stum's answer to another similar question. Unfortunately, and possibly due to me creating the XSD by hand with limited knowledge, the IntelliSense relies on an xmlns attribute pointing to my XSD file's namespace being present in the custom config element. However, when running the project I get an "Unrecognized attribute 'xmlns'. Note that attribute names are case-sensitive" error.

    I could probably just modify my XSD file to define the xmlns attribute for that element, but I am wondering if this is just a band-aid fix for a larger problem. I must confess I don't have a very good understanding of XML namespaces, so this might be an opportunity to set me straight on a few things. Here are the attributes for my XSD file's root xs:schema element:

        <xs:schema id="awesomeConfig"
                   targetNamespace="http://awesome.com/schemas"
                   xmlns="http://awesome.com/schemas"
                   elementFormDefault="qualified"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema">
          ...
        </xs:schema>

    And on creating the element in the Web.config file, Visual Studio 2008 automatically appends:

        <awesomeConfig xmlns="http://awesome.com/schemas"></awesomeConfig>

    So have I misunderstood the meaning of the xs:schema attributes at all, or is the proper solution as simple as it seems?
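
    One common way to make the runtime tolerate the xmlns attribute that the editor adds is to swallow it in the section class itself. A minimal sketch, assuming the section is implemented in code as a ConfigurationSection (the class name below is made up):

        using System.Configuration;

        public class AwesomeConfigSection : ConfigurationSection
        {
            // Let the editor-added xmlns attribute pass without triggering the
            // "Unrecognized attribute 'xmlns'" error; everything else is deferred
            // to the base implementation.
            protected override bool OnDeserializeUnrecognizedAttribute(string name, string value)
            {
                if (name == "xmlns")
                {
                    return true; // report it as handled (i.e. ignore it)
                }
                return base.OnDeserializeUnrecognizedAttribute(name, value);
            }
        }

    This keeps the schema-driven IntelliSense working while the runtime simply ignores the namespace declaration on the section element.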

    Read the article

  • Is this a good job description? What title would you give this position?

    - by Zack Peterson
    Department: Information Technology
    Reports To: Chief Information Officer

    Purpose: Company's ________________ is specifically engaged in the development of World Wide Web applications and distributed network applications. This person is concerned with all facets of the software development process and specializes in software product management. He or she contributes to projects in an application architect role and also performs individual programming tasks.

    Essential Duties & Responsibilities: This person is involved in all aspects of the software development process, such as:

    - Participation in software product definitions, including requirements analysis and specification
    - Development and refinement of simulations or prototypes to confirm requirements
    - Feasibility and cost-benefit analysis, including the choice of architecture and framework
    - Application and database design
    - Implementation (e.g. installation, configuration, customization, integration, data migration)
    - Authoring of documentation needed by users and partners
    - Testing, including defining/supporting acceptance testing and gathering feedback from pre-release testers
    - Participation in software release and post-release activities, including support for product launch evangelism (e.g. developing demonstrations and/or samples) and subsequent product build/release cycles
    - Maintenance

    Qualifications:

    - Bachelor's degree in computer science or software engineering
    - Several years of professional programming experience
    - Proficiency in the general technology of the World Wide Web: Hypertext Transfer Protocol (HTTP), Hypertext Markup Language (HTML), JavaScript, Cascading Style Sheets (CSS)
    - Proficiency in the following principles, practices, and techniques: accessibility, interoperability, usability, security (especially prevention of SQL injection and cross-site scripting (XSS) attacks), object-oriented programming (e.g. encapsulation, inheritance, modularity, polymorphism, etc.), relational database design (e.g. normalization, orthogonality), search engine optimization (SEO), Asynchronous JavaScript and XML (AJAX)
    - Proficiency in the following specific technologies utilized by Company: C# or Visual Basic .NET, ADO.NET (including ADO.NET Entity Framework), ASP.NET (including ASP.NET MVC Framework), Windows Presentation Foundation (WPF), Language Integrated Query (LINQ), Extensible Application Markup Language (XAML), jQuery, Transact-SQL (T-SQL), Microsoft Visual Studio, Microsoft Internet Information Services (IIS), Microsoft SQL Server, Adobe Photoshop

    Read the article

  • C# webservice async callback not getting called on HTTP 407 error.

    - by Ben
    Hi, I am trying to test the use case of a customer behind a proxy with login credentials trying to use our web service from our client. If the request is synchronous, my job is easy: catch the WebException, check for the 407 code, and prompt the user for the login credentials. However, for async requests I seem to be running into a problem: the callback is never getting called! I ran a Wireshark trace and did indeed see that the HTTP 407 error was being passed back, so I am bewildered as to what to do. Here is the code that sets up the callback and starts the request:

        TravelService.TravelServiceImplService svc = new TravelService.TravelServiceImplService();
        svc.Url = svcUrl;
        svc.CreateEventCompleted += CbkCreateEventCompleted;
        svc.CreateEventAsync(crReq, req);

    And the code that was generated when I consumed the WSDL:

        public void CreateEventAsync(TravelServiceCreateEventRequest CreateEventRequest, object userState) {
            if ((this.CreateEventOperationCompleted == null)) {
                this.CreateEventOperationCompleted = new System.Threading.SendOrPostCallback(this.OnCreateEventOperationCompleted);
            }
            this.InvokeAsync("CreateEvent", new object[] { CreateEventRequest }, this.CreateEventOperationCompleted, userState);
        }

        private void OnCreateEventOperationCompleted(object arg) {
            if ((this.CreateEventCompleted != null)) {
                System.Web.Services.Protocols.InvokeCompletedEventArgs invokeArgs = ((System.Web.Services.Protocols.InvokeCompletedEventArgs)(arg));
                this.CreateEventCompleted(this, new CreateEventCompletedEventArgs(invokeArgs.Results, invokeArgs.Error, invokeArgs.Cancelled, invokeArgs.UserState));
            }
        }

    Debugging the web service code, I found that even the SoapHttpClientProtocol.InvokeAsync method was not calling its callback. Am I missing some sort of configuration?
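
    One thing worth ruling out is whether supplying the proxy credentials on the generated proxy up front changes the behaviour; HttpWebClientProtocol (which the generated class derives from) exposes a Proxy property for this. A sketch only, slotted in before the async call from the question, with a made-up proxy address and credentials:

        using System.Net;

        // svc, crReq and req are the variables from the snippet above.
        // The address and credentials below are placeholders.
        var proxy = new WebProxy("http://corporate-proxy:8080")
        {
            Credentials = new NetworkCredential("user", "password", "DOMAIN")
        };
        svc.Proxy = proxy;

        // With credentials supplied, the 407 round-trip is handled by the stack,
        // so CreateEventCompleted should fire with either Results or Error set.
        svc.CreateEventAsync(crReq, req);

    That doesn't explain why the completed event never fires on a bare 407, but it narrows the problem to the unauthenticated case.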

    Read the article

  • ffmpeg mp3 conversion

    - by Alex
    I was trying to convert several thousand MP3 files, and I get this for some of them:

        ffmpeg -t 45 -i "public/system/musics/files/2009/original/03_Memphis.mp3" -y "memphis.mp3"
        FFmpeg version 0.5-svn17737+3:0.svn20090303-1ubuntu6, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --extra-version=svn17737+3:0.svn20090303-1ubuntu6 --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --disable-stripping --disable-vhook --enable-libdc1394 --disable-armv5te --disable-armv6 --disable-armv6t2 --disable-armvfp --disable-neon --disable-altivec --disable-vis --enable-shared --disable-static
          libavutil     49.15. 0 / 49.15. 0
          libavcodec    52.20. 0 / 52.20. 0
          libavformat   52.31. 0 / 52.31. 0
          libavdevice   52. 1. 0 / 52. 1. 0
          libavfilter    0. 4. 0 /  0. 4. 0
          libswscale     0. 7. 1 /  0. 7. 1
          libpostproc   51. 2. 0 / 51. 2. 0
          built on Apr 10 2009 23:18:41, gcc: 4.3.3
        public/system/musics/files/2009/original/03_Memphis.mp3: could not find codec parameters

    What could be the problem?

    Read the article

  • Problems compiling libjingle/gtk+-2.0 for Mac OS X

    - by mindthief
    Hi all, I'm trying to compile libjingle on Mac OS X Snow Leopard. The INSTALL file said to './configure', 'make' and 'make install', as usual, but make fails for me. Initially it gave some messages indicating that I didn't have pkg-config installed (I guess OS X doesn't come with it installed?), so I downloaded pkg-config from http://pkgconfig.freedesktop.org/releases/. Now I get this message:

        Package gtk+-2.0 was not found in the pkg-config search path.
        Perhaps you should add the directory containing `gtk+-2.0.pc'
        to the PKG_CONFIG_PATH environment variable
        No package 'gtk+-2.0' found

    I tried to install GTK+ by using the script at SourceForge: http://sourceforge.net/projects/gtk-osx/ (this is the website pointed to by the GTK+ website). Running the script didn't really seem to do anything; here is the output:

        $ ./gtk-osx-build-setup.sh
        Checking out jhbuild (2.27.3) from git...
        From git://git.gnome.org/jhbuild
         * tag 2.27.3 -> FETCH_HEAD
        Installing jhbuild...
        Installing jhbuild configuration...
        Installing gtk-osx moduleset files...
        Done.
        $

    And I still get that error message about "Package gtk+-2.0 not found" while make-ing libjingle. Help will be appreciated, thanks!

    Read the article

  • Fast CGI, Lighttpd, Ubuntu

    - by Gosh
    Is this log file familiar to any Ubuntu users? Lighttpd log file:

        2009-08-30 21:37:45: (log.c.75) server started
        2009-08-30 21:37:45: (mod_fastcgi.c.1029) the fastcgi-backend php5-cgi failed to start:
        2009-08-30 21:37:45: (mod_fastcgi.c.1033) child exited with status 9 php5-cgi
        2009-08-30 21:37:45: (mod_fastcgi.c.1036) If you're trying to run PHP as a FastCGI backend, make sure you're using the FastCGI-enabled version.
        You can find out if it is the right one by executing 'php -v' and it should display '(cgi-fcgi)' in the output, NOT '(cgi)' NOR '(cli)'.
        For more information, check http://trac.lighttpd.net/trac/wiki/Docs%3AModFastCGI#preparing-php-as-a-fastcgi-program
        If this is PHP on Gentoo, add 'fastcgi' to the USE flags.
        2009-08-30 21:37:45: (mod_fastcgi.c.1340) [ERROR]: spawning fcgi failed.
        2009-08-30 21:37:45: (server.c.908) Configuration of plugins failed. Going down.

    Please share how you solved the FastCGI problem and got lighttpd to start, if you did. Thx, Gosh.

    Read the article

  • Setting processor affinity on CSC.exe launched by CoreCompile MSBuild Task

    - by Hardy
    I am wondering if there is a simple way to ensure that when a C# project is compiled, the csc.exe that gets launched inherits the parent's processor affinity settings, or perhaps a way I can supply this. I have been trying to accomplish it by launching a bat file from a VS.NET command prompt like this:

        start /affinity 01 custombuild.cmd

    and inside my custombuild.cmd I have:

        @echo off
        msbuild Libraries.sln /t:rebuild /p:Configuration=Release;platform=x64 /m:1
        :END

    The command line call to csc.exe this generates looks like the following (ignoring the rest for brevity):

        C:\Windows\Microsoft.NET\Framework\v4.0.30319\Csc.exe ...

    What I'd like to see is csc.exe inheriting the processor affinity, or a simple way to override how the csc.exe call is generated, so I can turn it into:

        start /affinity 01 C:\Windows\Microsoft.NET\Framework\v4.0.30319\Csc.exe ...

    I also noticed that the CoreCompile target is defined in Microsoft.CSharp.targets; should I be considering overriding the MSBuildToolsPath variable so I can sneak in my own version? This feels rather hacky. Any help would be much appreciated.
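
    As a blunt workaround sketch (not the MSBuild-level answer the question is after), a small watcher process can pin any csc.exe instances it finds to a given CPU mask via Process.ProcessorAffinity. The mask and polling duration below are placeholders:

        using System;
        using System.Diagnostics;
        using System.Threading;

        class CscAffinityWatcher
        {
            static void Main()
            {
                IntPtr mask = (IntPtr)0x1; // CPU 0 only - placeholder mask

                // Poll for compiler processes while the build runs and pin them.
                for (int i = 0; i < 600; i++) // roughly 10 minutes at 1s intervals
                {
                    foreach (Process p in Process.GetProcessesByName("csc"))
                    {
                        try
                        {
                            p.ProcessorAffinity = mask;
                        }
                        catch (InvalidOperationException)
                        {
                            // The process exited between enumeration and assignment.
                        }
                    }
                    Thread.Sleep(1000);
                }
            }
        }

    It is racy (a short-lived csc.exe can finish before being pinned), so it is only a stopgap compared to controlling how the CoreCompile target launches the compiler.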

    Read the article

  • Silverlight 3 + Java WebService

    - by Heko
    Hello! I have a Silverlight 3 project, and I need to call a Java web service. The bindings are OK (SOAP 1.1 and basicHttpBinding). ClientConfig file:

        <configuration>
          <system.serviceModel>
            <bindings>
              <basicHttpBinding>
                <binding name="SkyinfoTestInterfaceExport2_SkyinfoTestInterfaceHttpBinding" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647">
                  <security mode="None">
                    <transport>
                      <extendedProtectionPolicy policyEnforcement="Never" />
                    </transport>
                  </security>
                </binding>
              </basicHttpBinding>
            </bindings>
            <client>
              <endpoint address="myAddress" binding="basicHttpBinding" bindingConfiguration="SkyinfoTestInterfaceExport2_SkyinfoTestInterfaceHttpBinding" contract="SkyInfoServiceReference.SkyinfoTestInterface" name="SkyinfoTestInterfaceExport2_SkyinfoTestInterfaceHttpPort" />
            </client>
          </system.serviceModel>

    When I call a method on the client I get this policy error:

        An error occurred while trying to make a request to URI '...'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute. Please see the inner exception for more details.

    I know about those 2 policy XML files, but the Java EE service which I'm trying to call is hosted on an IBM WebSphere Process Server to which I don't have access. Does anybody know how to work around this policy exception?

    Read the article
