Search Results

Search found 15439 results on 618 pages for 'wls configuration'.

Page 537 of 618

  • How should I secure my webapp written using Wicket, Spring, and JPA?

    - by Martin
    So, I have a web-based application that uses the Wicket 1.4 framework, Spring beans, the Java Persistence API (JPA), and the OpenSessionInView pattern. I'm hoping to find a security model that is declarative but doesn't require gobs of XML configuration -- I'd prefer annotations. Here are the options so far:

      - Spring Security (guide) - looks complete, but every guide I find that combines it with Wicket still calls it Acegi Security, which makes me think it must be old.
      - Wicket-Auth-Roles (guide 1 and guide 2) - most guides recommend mixing this with Spring Security, and I love the declarative style of @Authorize("ROLE1", "ROLE2", etc.) (a sketch of this style follows below). I'm concerned about having to extend AuthenticatedWebApplication, since I'm already extending org.apache.wicket.protocol.http.WebApplication, and Spring is already proxying that behind org.apache.wicket.spring.SpringWebApplicationFactory.
      - SWARM / WASP (guide) - this looks the newest (though the main contributor passed away years ago), but I hate all of the JAAS-styled text files that declare permissions for principals. I also don't like the idea of making an Action class for every single thing a user might want to do. Secure models also aren't immediately obvious to me. Plus, there isn't an authentication example.

    Additionally, it looks like lots of folks recommend mixing the first and second options. I can't tell what the best practice is at all, though.
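    For reference, the declarative style praised above looks roughly like this with Wicket-Auth-Roles (a hedged sketch, not taken from the question; the annotation is @AuthorizeInstantiation, and the package names below are from the Wicket 1.4-era wicket-auth-roles module, so they may differ in other Wicket versions):

        import org.apache.wicket.authorization.strategies.role.Roles;
        import org.apache.wicket.authorization.strategies.role.annotations.AuthorizeInstantiation;
        import org.apache.wicket.markup.html.WebPage;

        // Only users carrying the ADMIN role may instantiate this page; the
        // annotation-based authorization strategy turns everyone else away.
        @AuthorizeInstantiation(Roles.ADMIN)
        public class AdminDashboardPage extends WebPage {
            public AdminDashboardPage() {
                // build components here
            }
        }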

    Read the article

  • Online file storage similar to Amazon S3

    - by Joel G
    I am looking to code a file storage application in Perl, similar to Amazon S3. I already have an Amazon S3 clone that I found online called ParkPlace, but it's in Ruby, it's old, and it isn't built for high loads. I am not really sure what modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple and then add more once I get it going):

      - Easy API implementation for client-side apps (maybe RESTful, but with extras like mkdir and cp?)
      - Centralized database server for the USERDB (maybe PostgreSQL?)
      - Logging of all connections, bandwidth used -- well, pretty much everything -- to a centralized server (maybe PostgreSQL again?)
      - Easy server-side configuration (config file(s) stored on the servers)
      - Web-based control panel for admin(s) and user(s) to show logs (could work just by running queries against the databases)
      - Fast
      - High uptime
      - Low memory usage
      - Some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?)
      - Maybe a cache of some sort (memcached or Perlbal or something else?)

    Thanks in advance

    Read the article

  • Why do WCF clients depend on the app.config file?

    - by routeNpingme
    Like a lot of things, I'm sure there's a good reason for this, so please help me understand... Why, by default, do WCF services store settings in app.config? This has been so frustrating trying to work with multiple Silverlight class libraries. These class libraries are supposed to be completely independent from each other, and this dependency on app.config seems to cause the following headaches:

      - Single Responsibility Principle - I should be able to add a reference to a class library and go. If that class library uses a service reference, this idea is shot before I even start coding against it.
      - Muddy configuration - To get other libraries to work, I have to copy and paste the service configurations into the "main" application configs. If an endpoint changes in any way, I can't just worry about a new version of that class DLL - I have to worry about anything that uses it, too.
      - Complex alternatives - Programmatically creating the endpoint isn't pretty. Period.

    There has to be a better way. Why doesn't WCF at least separate the service configurations into a ServiceName.config or something that gets copied to an output directory? What am I missing? How do you deal with this?

    Read the article

  • What makes Tomcat 5.5 unable to be "aware" of new Java web applications?

    - by Michael Mao
    This is for uni homework, but I reckon it is more of a generic problem with the Tomcat server (version 5.5.27) at my uni.

    The problem is this: I first built a skeleton Java web application (just a simple servlet and a welcome-file, nothing complicated, no libs included) using NetBeans 6.8 with the bundled Tomcat 6.0.20 (localhost:8084/WSD). Then, to test and prove it is "portable" and "auto-deploy-able", I cleaned and built a WSD.war file and dropped it onto my XAMPP Tomcat (localhost:8080/WSD). The war extracted everything accordingly and I can see identical output from this Tomcat. So far, so good.

    However, after I dropped the war onto the uni server, a funny thing happened: even though I've changed the war's permissions to 755, it is simply not "responding". I then copied the extracted files to the uni server, but the MainServlet cannot be recognized from within its context path "/WSD"; basically nothing works except the static index.jsp. I tried several times to stop and restart the uni Tomcat, but it doesn't help.

    I wonder what makes this happen? Is there anything I did wrong in my approach? To be frank, I paid no attention to a server not under my control, and I am unfortunately not a really active day-to-day Java programmer now. I understand the fundamentals of MVC, Servlets, JSPs and JavaBeans, but I really feel frustrated by this, as I cannot see why... Or, should I ask: is a Java web application, after being cleaned and built by NetBeans 6.8, self-contained and self-configured, so it is ready to be deployed to any Java web container? I know I can certainly program everything in plain old JSP, but that is soooo... unacceptable to myself...

    Update: I am now wondering if there is any free Tomcat hosting, as I would like to see if my war file and/or my web app can go with them without any configuration at all.

    Read the article

  • Beginner problem/error with ANSI C and GCC under Ubuntu

    - by Framester
    Hi, I am just starting to program ANSI C with GCC under Ubuntu (9.04). I get the following error messages:

        main.c:6: error: expected identifier or ‘(’ before ‘/’ token
        In file included from /usr/include/stdio.h:75,
                         from main.c:9:
        /usr/include/libio.h:332: error: expected specifier-qualifier-list before ‘size_t’
        /usr/include/libio.h:364: error: expected declaration specifiers or ‘...’ before ‘size_t’
        /usr/include/libio.h:373: error: expected declaration specifiers or ‘...’ before ‘size_t’
        /usr/include/libio.h:493: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘_IO_sgetn’
        In file included from main.c:9:
        /usr/include/stdio.h:314: error: expected declaration specifiers or ‘...’ before ‘size_t’
        /usr/include/stdio.h:682: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘fread’
        /usr/include/stdio.h:688: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘fwrite’
        main.c:12: error: expected identifier or ‘(’ before ‘/’ token

    I assume it is a very simple problem, maybe in the configuration of Ubuntu or GCC. I am new to programming under Linux as well. I googled for help and went through a tutorial but could not find an answer. Thank you!

    Code:

        #include <stdio.h>
        #include <math.h>

        int main()
        {
            printf("TestOutput\n");
            return (0);
        }

    Command:

        ~/Documents/projects/Trials$ gcc -Wall -ansi main.c

    Read the article

  • Zend_Auth and database session SaveHandler

    - by takeshin
    I have created a Zend_Auth adapter implementing Zend_Auth_Adapter_Interface (similar to Pádraic's adapter) and created a simple ACL plugin. Everything works fine with the default session handler. So far, so good.

    As a next step I have created a custom session SaveHandler to persist session data in the database. My implementation is very similar to the one from the parables-demo. It seems that everything is working fine: session data are properly saved to the database and session objects are serialized, but authentication does not work when I enable this custom SaveHandler. I have debugged the authentication and everything works fine up until the next request, when the authentication data are lost.

    I suspected that it has something to do with the fact that I use $adapter->write($object) instead of $adapter->write($string), but the same happens with strings. I'm bootstrapping Zend_Application_Resource_Session in the first Bootstrap method, as early as possible.

    Does Zend_Auth need any extra configuration to persist data in the database? Why is the identity being lost?

    Read the article

  • Application Design: Single vs. Multiple Hits to the DB

    - by shyneman
    I'm building a service that performs a set of configured activities based on the type of request that it receives. Each activity involves going to the database and retrieving/updating some kind of information. The logic for each activity can be generalized and re-used across different request types. The activities may need to participate in a transaction for the duration of servicing the request.

    One option I'm considering is having each activity maintain its own access to the DAL/database. This fully encapsulates the activity into a stand-alone, re-usable piece, but hitting the database multiple times for one request doesn't seem like a viable option, and I don't really know how to easily implement the concept of a transaction across the multiple activities here either. The second option is to encapsulate ALL the activities into one big activity and hit the database once, but this does not allow re-use and configuration of these activities for different requests.

    Does anyone have any suggestions or input about the best way to approach my problem? Thanks for any help.
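    As a loose illustration of the first option with a single shared transaction (a minimal Java/JDBC sketch, not from the question; the Activity interface, class names and wiring are hypothetical), each activity can receive the same connection so that all of the database work for one request commits or rolls back together:

        import java.sql.Connection;
        import java.sql.SQLException;
        import java.util.List;
        import javax.sql.DataSource;

        // Hypothetical contract: each configured activity does its piece of
        // database work against a connection supplied by the coordinator.
        interface Activity {
            void execute(Connection connection) throws SQLException;
        }

        class RequestProcessor {
            private final DataSource dataSource;

            RequestProcessor(DataSource dataSource) {
                this.dataSource = dataSource;
            }

            // Runs every activity configured for a request inside one transaction:
            // one connection, one commit, and a rollback if any activity fails.
            void process(List<Activity> activities) throws SQLException {
                Connection connection = dataSource.getConnection();
                try {
                    connection.setAutoCommit(false);
                    for (Activity activity : activities) {
                        activity.execute(connection);
                    }
                    connection.commit();
                } catch (SQLException e) {
                    connection.rollback();
                    throw e;
                } finally {
                    connection.close();
                }
            }
        }

    The activities stay independently reusable, while the request still touches the database within a single unit of work.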

    Read the article

  • To what extent should code try to explain fatal exceptions?

    - by Andrzej Doyle
    I suspect that all non-trivial software is likely to experience situations where it hits an external problem it cannot work around and thus needs to fail. This might be due to bad configuration, an external server being down, a full disk, etc. In these situations, especially if the software is running in non-interactive mode, I expect that all one can really do is log an error and wait for the admin to read the logs and fix the problem. If someone happens to interact with the software in the meantime, e.g. a request comes in to a server that failed to initialize properly, then perhaps an appropriate hint can be given to check the logs and maybe even the error can be echoed (depending on whether you can tell if they're a technical guy as opposed to a business user). For the moment, though, let's not think too hard about this part.

    My question is: to what extent should the software be responsible for trying to explain the meaning of the fatal error? In general, how much competence/knowledge are you allowed to presume on the administrators of the software, and how much should you include troubleshooting information and potential resolution steps when logging fatal errors?

    Of course if there's something that's unique to the runtime context, this should definitely be logged. But let's assume your software needs to talk to Active Directory via LDAP and gets back an error "[LDAP: error code 49 - 80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 525, vece]". Is it reasonable to assume that the maintainers will be able to Google the error code and work out what it means, or should the software try to parse the error code and log that this is caused by an incorrect user DN in the LDAP config?

    I don't know if there is a definitive best-practices answer for this, so I'm keen to hear a variety of views.
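    For the LDAP example above, the "explain it for the admin" approach might look roughly like this (a hedged Java sketch, not from the question; it maps the common Active Directory sub-codes carried in the "data NNN" part of an error 49 message, e.g. 525 = user not found, 52e = invalid credentials, 533 = account disabled):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        // Translates the "data NNN" sub-code of an Active Directory
        // "error code 49" message into a human-readable hint for the log.
        final class LdapErrorHints {
            private static final Pattern DATA_CODE = Pattern.compile("data (\\w+)");
            private static final Map<String, String> HINTS = new HashMap<String, String>();
            static {
                HINTS.put("525", "user not found - check the user DN in the LDAP config");
                HINTS.put("52e", "invalid credentials - check the bind password");
                HINTS.put("530", "user not permitted to log on at this time");
                HINTS.put("532", "password expired");
                HINTS.put("533", "account disabled");
                HINTS.put("701", "account expired");
                HINTS.put("773", "user must reset password");
                HINTS.put("775", "account locked out");
            }

            private LdapErrorHints() {
            }

            static String hintFor(String ldapErrorMessage) {
                Matcher matcher = DATA_CODE.matcher(ldapErrorMessage);
                if (matcher.find() && HINTS.containsKey(matcher.group(1))) {
                    return "LDAP bind failed: " + HINTS.get(matcher.group(1));
                }
                return "LDAP bind failed; raw error: " + ldapErrorMessage;
            }
        }

    Whether this kind of translation belongs in the application or in a runbook is exactly the judgment call the question is asking about.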

    Read the article

  • GNU Make - Dependencies on non program code

    - by Tim Post
    A requirement for a program I am writing is that it must be able to trust a configuration file. To accomplish this, I am using several kinds of hashing algorithms to generate a hash of the file at compile time; this produces a header with the hashes as constants. Dependencies for this are pretty straightforward: my program depends on config_hash.h, which has a target that produces it. The makefile looks something like this:

        config_hash.h:
            $(SH) genhash config/config_file.cfg > $(srcdir)/config_hash.h

        $(PROGRAM): config_hash.h $(PROGRAM_DEPS)
            $(CC) ... ... ...

    I'm using the -M option to gcc, which is great for dealing with dependencies. If my header changes, my program is rebuilt. My problem is, I need to be able to tell if the config file has changed, so that config_hash.h is re-generated. I'm not quite sure how to explain that kind of dependency to GNU make. I've tried listing config/config_file.cfg as a dependency for config_hash.h, and providing a .PHONY target for config_file.cfg, without success. Obviously, I can't rely on the -M switch to gcc to help me here, since the config file is not part of any object code. Any suggestions? Unfortunately, I can't post much of the Makefile, or I would have just posted the whole thing.

    Read the article

  • How to debug an external library (OpenCV) in Visual C++?

    - by neuviemeporte
    I am developing a project in VC++ 2008. The project uses the OpenCV library (but I guess this applies to any other library). I am working with the Debug configuration, and the linker properties include the debug versions of the library .libs as additional dependencies. In VC++ Directories under Tools|Options I set up the include directory, the .lib directory, and the source directories for the library as well.

    I get an error while calling one of the functions from the library and I'd like to see exactly what that function is doing. The line that produces the error is:

        double error = cvStereoCalibrate(&calObjPointsM, &img1PointsM, &img2PointsM, &pointCountsM,
                                         &cam1M, &dist1M, &cam2M, &dist2M, imgSize, &rotM, &transM,
                                         NULL, NULL,
                                         cvTermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 100, 1e-5));

    I set up a breakpoint at this line to see how the cvStereoCalibrate() function fails. Unfortunately the debugger won't show the source code for this function when I hit "Step Into". It skips immediately to cvTermCriteria() (which is a simple, inline, macro-like function) and shows its contents. Is there anything else I need to do to be able to step into the external library's functions in the debugger?

    EDIT: I think the cvTermCriteria() function shows in the debugger because it's defined in a header file and is therefore immediately accessible to the project.

    Read the article

  • (500) Internal Server Error with C# and Web Dev 2008 Express

    - by user32848
    The code below is generic, found in a variety of places, including a book I have. I have used it as a base for a working program in VS 2005. Now I've resurrected it with my current Visual Web Developer 2008 Express Edition and I seem to have problems connecting it to my default development server (I don't have IIS on my XP machine). The error is: (500) Internal Server Error. Is this saying what I thought it did (above), or something else, and how do I solve this problem?

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.Configuration;
        using System.IO;
        using System.Net;
        using System.Linq;
        using System.Web;
        using System.Web.Security;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Web.UI.WebControls.WebParts;
        using System.Web.UI.HtmlControls;
        using System.Text.RegularExpressions;

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string strResult = "";
                string url = "http://weather.unisys.com";
                WebResponse objResponse;
                WebRequest objRequest;
                try
                {
                    objRequest = System.Net.HttpWebRequest.Create(url);
                }
                catch
                {
                    objRequest = System.Net.HttpWebRequest.Create("http://" + url);
                }
                objResponse = objRequest.GetResponse();
                using (StreamReader sr = new StreamReader(objResponse.GetResponseStream()))
                {
                    strResult = sr.ReadToEnd();
                    sr.Close();
                }
            }
        }

    Read the article

  • Getting an "Internal Server Error"

    - by Rishi2686
    Hi there, when I try to run my site it gives an Internal Server Error; when I refresh the page I get my result properly. The error page looks like this:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to complete your request.
        Please contact the server administrator, [email protected] and inform them of the time the error occurred,
        and anything you might have done that may have caused the error.
        More information about this error may be available in the server error log.
        Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.

    I also checked the error_log file on my server; it gives errors such as:

        [Sat Jun 12 01:21:55 2010] [error] [client 117.195.6.76] File does not exist: /home/rohit25/public_html/test/500.shtml, referer: http://www.test.mysite.com/home.php

    Sometimes the error can be:

        [Sat May 29 19:35:12 2010] [error] [client 97.85.189.208] File does not exist: /home2/carlton/public_html/test/favicon.ico

    Are there any changes required in the configuration file? I also tried to include this error code in a custom error page; it shows the error page, but that did not resolve the issue. Your urgent help will be greatly appreciated.

    Read the article

  • Access JRun JNDI environment variables from ColdFusion (Java)

    - by jake
    I want to put some instance-specific configuration information in JNDI. I looked at the information here: http://www.adobe.com/support/jrun/working_jrun/jrun4_jndi_and_j2ee_enc/jrun4_jndi_and_j2ee_enc03.html

    I have added this node to the web.xml:

        <env-entry>
            <description>Administrator e-mail address</description>
            <env-entry-name>adminemail</env-entry-name>
            <env-entry-value>[email protected]</env-entry-value>
            <env-entry-type>java.lang.String</env-entry-type>
        </env-entry>

    In ColdFusion I have tried several different approaches to querying the data:

        <cfset ctx = createobject("java","javax.naming.InitialContext") >
        <cfset val = ctx.lookup("java:comp/env") >

    That lookup returns a jrun.naming.JRunNamingContext. If I perform a lookup on ctx for the specific binding I am adding, I get an error:

        <cfset val = ctx.lookup("java:comp/env/adminemail") >
        No such binding: adminemail

    Performing a listBindings returns an empty jrun.naming.JRunNamingEnumeration:

        <cfset val = ctx.listBindings("java:comp/env") >

    I only want to put a string value (probably several, eventually) into the ENC (or any JNDI directory at this point).
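    For comparison, this is roughly how the same env-entry would be read from plain Java inside the web application (a hedged sketch, not from the question; the class name is illustrative):

        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        // Reads the <env-entry> named "adminemail" from the component's
        // JNDI environment naming context (ENC).
        public class AdminEmailLookup {

            public static String adminEmail() throws NamingException {
                Context initial = new InitialContext();
                // The ENC for the calling component lives under java:comp/env.
                Context env = (Context) initial.lookup("java:comp/env");
                return (String) env.lookup("adminemail");
            }
        }

    If a lookup like this also fails, the entry is not being bound by the container at all, which would point at deployment rather than at the ColdFusion side.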

    Read the article

  • Castle, sharing a transient component between a decorator and a decorated component

    - by Marius
    Consider the following example:

        public interface ITask
        {
            void Execute();
        }

        public class LoggingTaskRunner : ITask
        {
            private readonly ITask _taskToDecorate;
            private readonly MessageBuffer _messageBuffer;

            public LoggingTaskRunner(ITask taskToDecorate, MessageBuffer messageBuffer)
            {
                _taskToDecorate = taskToDecorate;
                _messageBuffer = messageBuffer;
            }

            public void Execute()
            {
                _taskToDecorate.Execute();
                Log(_messageBuffer);
            }

            private void Log(MessageBuffer messageBuffer) {}
        }

        public class TaskRunner : ITask
        {
            public TaskRunner(MessageBuffer messageBuffer) { }
            public void Execute() { }
        }

        public class MessageBuffer { }

        public class Configuration
        {
            public void Configure()
            {
                IWindsorContainer container = null;
                container.Register(
                    Component.For<MessageBuffer>()
                        .LifeStyle.Transient);
                container.Register(
                    Component.For<ITask>()
                        .ImplementedBy<LoggingTaskRunner>()
                        .ServiceOverrides(ServiceOverride.ForKey("taskToDecorate").Eq("task.to.decorate")));
                container.Register(
                    Component.For<ITask>()
                        .ImplementedBy<TaskRunner>()
                        .Named("task.to.decorate"));
            }
        }

    How can I make Windsor instantiate the "shared" transient component so that both "Decorator" and "Decorated" get the same instance?

    Edit: since the design is being critiqued, I am posting something closer to what is being done in the app. Maybe someone can suggest a better solution (if sharing the transient resource between a logger and the true task is considered a bad design).

    Read the article

  • Why does a change of Session State provider lead to an ASPx page yielding garbage?

    - by Rory Becker
    I have an ASP.NET web app which has worked very well up until now. I was recently asked to explore ways of making it scale better. I found that separating the database and the web app would help. Further, I was told that if I changed my session-providing mechanism to SQL Server, I would be able to duplicate the web stack to several machines which could each call back to the state server, allowing the load to be distributed better. This sounds logical.

    So I created an ASPState database using ASPNet_RegSQL.exe, as detailed in many locations across the web, and changed the web.config on my app from:

        <sessionState mode="InProc" cookieless="false" timeout="20" />

    to:

        <sessionState mode="SQLServer"
                      sqlConnectionString="Server=SomeSQLServer;user=SomeUser;password=SomePassword"
                      cookieless="false" timeout="20" />

    Then I addressed my app, which presented me with its logon screen, and I duly logged in. Once in, I was presented not with the page I was expecting, but with a page of garbage (screenshot not reproduced here).

    I can change the session state back and forth; this problem goes away and then comes back based on which set of configuration I use. Why is this happening?

    Read the article

  • Spring: Bean fails to read off values from external Properties file when using @Value annotation

    - by daydreamer
    XML configuration:

        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:util="http://www.springframework.org/schema/util"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                   http://www.springframework.org/schema/util
                   http://www.springframework.org/schema/util/spring-util.xsd">

            <util:properties id="mongoProperties" location="file:///storage/local.properties" />

            <bean id="mongoService" class="com.business.persist.MongoService"></bean>
        </beans>

    and MongoService looks like:

        @Service
        public class MongoService {
            @Value("#{mongoProperties[host]}")
            private String host;

            @Value("#{mongoProperties[port]}")
            private int port;

            @Value("#{mongoProperties[database]}")
            private String database;

            private Mongo mongo;

            private static final Logger LOGGER = LoggerFactory.getLogger(MongoService.class);

            public MongoService() throws UnknownHostException {
                LOGGER.info("host=" + host + ", port=" + port + ", database=" + database);
                mongo = new Mongo(host, port);
            }

            public void putDocument(@Nonnull final DBObject document) {
                LOGGER.info("inserting document - " + document.toString());
                mongo.getDB(database).getCollection(getCollectionName(document)).insert(document, WriteConcern.SAFE);
            }

    I write my MongoServiceTest as:

        public class MongoServiceTest {
            @Autowired
            private MongoService mongoService;

            public MongoServiceTest() throws UnknownHostException {
                mongoRule = new MongoRule();
            }

            @Test
            public void testMongoService() {
                final DBObject document = DBContract.getUniqueQuery("001");
                document.put(DBContract.RVARIABLES, "values");
                document.put(DBContract.PVARIABLES, "values");
                mongoService.putDocument(document);
            }

    and I see failures in the tests as:

        12:37:25.224 [main] INFO c.s.business.persist.MongoService - host=null, port=0, database=null
        java.lang.NullPointerException
            at com.business.persist.MongoServiceTest.testMongoService(MongoServiceTest.java:40)

    which means the bean was not able to read the values from local.properties.

    local.properties:

        ### === MongoDB interaction === ###
        host="127.0.0.1"
        port=27017
        database=contract

    How do I fix this?

    Update: It doesn't seem to read the values even after creating setters/getters for the fields. I am really clueless now. How can I even debug this issue? Thanks much!
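    One general point worth illustrating (a hedged sketch, not the poster's code; it assumes annotation processing, e.g. <context:annotation-config/> or component scanning, is enabled): Spring injects @Value fields only after the constructor has run, so values read inside the constructor are still null/0, while a @PostConstruct method sees the injected values. The class below is a minimal stand-in, not the real MongoService:

        import javax.annotation.PostConstruct;
        import org.springframework.beans.factory.annotation.Value;
        import org.springframework.stereotype.Service;

        @Service
        public class ValueTimingDemo {

            @Value("#{mongoProperties['host']}")
            private String host;

            @Value("#{mongoProperties['port']}")
            private int port;

            public ValueTimingDemo() {
                // Field injection has not happened yet: host is null and port is 0 here.
                System.out.println("constructor: host=" + host + ", port=" + port);
            }

            @PostConstruct
            public void init() {
                // Injection is complete by the time @PostConstruct runs, so this is
                // a safer place to use the injected values or open connections.
                System.out.println("postConstruct: host=" + host + ", port=" + port);
            }
        }

    Separately, a plain JUnit test needs a Spring test context (or manual wiring) before an @Autowired field is populated; outside the container the field simply stays null.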

    Read the article

  • Active Directory Membership Provider - how to expand on this?

    - by Jaxidian
    I'm working on getting an MVC app up and running via the Active Directory membership provider and I'm having some issues figuring this out. I have a base configuration set up and working when I log in as [email protected] + password.

        <connectionStrings>
            <add name="MyConnString"
                 connectionString="LDAP://domaincontroller/OU=Product Users,DC=my,DC=domain,DC=com" />
        </connectionStrings>

        <membership defaultProvider="MyProvider">
            <providers>
                <clear />
                <add name="MyProvider"
                     connectionStringName="MyConnString"
                     connectionUsername="my.domain.com\service_account"
                     connectionPassword="biguglypassword"
                     type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
            </providers>
        </membership>

    However, I'd LIKE to do some other things and I'm not sure how to go about them:

      - Log in without typing the domain (i.e. the "@my.domain.com"). I realize that this could only work if I limit myself to just one domain - that's fine.
      - Organize users in up to N different OUs within a single OU. As you can tell from my current connection string, I'm authenticating users in my Product Users OU. I would LIKE to create OUs for various companies within this OU and put the users into those OUs. How can I authenticate across all of these different OUs?
      - I'm trying to figure out how the Active Directory membership provider ties in with the profile and role providers. Are there AD versions of those too, or am I stuck with SQL, home-grown, or finding something somebody else has coded up?

    Many thanks!!

    Read the article

  • EF Many-to-many dbset.Include in DAL on GenericRepository

    - by Bryant
    I can't get the QueryObjectGraph to add INCLUDE child tables if my life depended on it... what am I missing? Stuck for a third day on something that should be simple :-/

    DAL:

        public abstract class RepositoryBase<T> where T : class
        {
            private MyLPL2Context dataContext;
            private readonly IDbSet<T> dbset;

            protected RepositoryBase(IDatabaseFactory databaseFactory)
            {
                DatabaseFactory = databaseFactory;
                dbset = DataContext.Set<T>();
                DataContext.Configuration.LazyLoadingEnabled = true;
            }

            protected IDatabaseFactory DatabaseFactory { get; private set; }

            protected MyLPL2Context DataContext
            {
                get { return dataContext ?? (dataContext = DatabaseFactory.Get()); }
            }

            public IQueryable<T> QueryObjectGraph(Expression<Func<T, bool>> filter, params string[] children)
            {
                foreach (var child in children)
                {
                    dbset.Include(child);
                }
                return dbset.Where(filter);
            }
            ...

    DAL repositories:

        public interface IBreed_TranslatedSqlRepository : ISqlRepository<Breed_Translated> { }

        public class Breed_TranslatedSqlRepository : RepositoryBase<Breed_Translated>, IBreed_TranslatedSqlRepository
        {
            public Breed_TranslatedSqlRepository(IDatabaseFactory databaseFactory) : base(databaseFactory) {}
        }

    BLL repo:

        public IQueryable<Breed_Translated> QueryObjectGraph(Expression<Func<Breed_Translated, bool>> filter, params string[] children)
        {
            return _r.QueryObjectGraph(filter, children);
        }

    Controller:

        var breeds1 = _breedTranslatedRepository
            .QueryObjectGraph(b => b.Culture == culture, new string[] { "AnimalType_Breed" })
            .ToList();

    I can't get to Breed.AnimalType_Breed.AnimalTypeId. I can drill as far as Breed.AnimalType_Breed, then IntelliSense expects an expression? Clues, if any, from the DB tables (bold is many-to-many): Breed, Breed_Translated, AnimalType_Breed, AnimalType, ...

    Read the article

  • Web development scheme for staging and production servers using Git Push

    - by ServAce85
    I am using git to manage a dynamic website (PHP + MySQL) and I want to send my files from my localhost to my staging and development servers in the most efficient and hassle-free way. I am currently convinced that the best way for me to approach this problem is to use this git branching model to organize my local git repo. From there, I will use the release branches to push to my staging server for testing. Once I am happy that the release code works on the staging server, I can then merge with my master branch and push that to my production server.

    Pushing to the staging server: As noted in many introductory git posts, I could run into problems pushing into a non-bare repo, so, as suggested in this response, I plan to push the release branch to a bare repo on the server and have a post-receive hook that clones the bare repo to a non-bare repo that also acts as the web-hosted directory.

    Pushing to the production server: Here's my newest source of confusion... In the response that I cited above, it made me curious as to why @Paul states that it's a completely different story when pushing to a live, development server. I guess I don't see the problem. Would it be safe and hassle-free to follow the same steps as above, but for the master branch? Where are the potential pitfalls?

    Config files: With respect to configuration files that are unique to each environment (.htaccess, config.php, etc.), it seems simplest to .gitignore each of those files in their respective repos on their respective servers. Can you see anything immediately wrong with this? Better solutions?

    Accessing data: Finally, as I initially stated, the site uses MySQL databases to store data. How would you suggest I access that data (for testing purposes) from the staging server and localhost?

    I realize that I may have asked way too many questions for a single post, but since they're all related to the best way to set up this development scheme, I thought it was necessary.

    Read the article

  • Drupal administration theme doesn't apply to Blocks pages (admin/build/block)

    - by hfidgen
    A site I'm creating for a customer in D6 has various images overlaying parts of the main content area. It looks very pretty, and they have to be there for the general effect. The problem is, if you use this theme on the administration pages, the images get in the way of everything. My solution was to create a custom admin theme, based on the default one, which has these image areas disabled in the output template file page.tpl.php.

    The problem is that when you try to edit the Blocks page, it uses the default theme, and half the block admin settings are unclickable behind the images. I KNOW this is by design in Drupal, but it's annoying the hell out of me and is edging towards "bug" rather than "feature" in my mind. It also appears that there is no way of getting around it. You can edit /modules/blocks/block.admin.inc to force Drupal to show the blocks page in the chosen admin theme, BUT whichever changes you then make will not be transferred to the default theme, as Drupal treats each theme separately and each theme can have different block layouts. :x

        function block_admin_display($theme = NULL) {
          global $custom_theme;
          // If non-default theme configuration has been selected, set the custom theme.
          // $custom_theme = isset($theme) ? $theme : variable_get('theme_default', 'garland');
          // Display admin theme
          $custom_theme = variable_get('admin_theme', '0');

          // Fetch and sort blocks
          $blocks = _block_rehash();
          usort($blocks, '_block_compare');

          return drupal_get_form('block_admin_display_form', $blocks, $theme);
        }

    Can anyone help? The only thing I can think of is to push the $content area well below the areas where the images appear and use blocks only for content display. Thanks!

    Read the article

  • Web API getting HTTP 500 error (issue solved, see below)

    - by Joe Grasso
    Here is my MVC controller, and everything is fine:

        private UnitOfWork UOW;

        public InventoryController()
        {
            UOW = new UnitOfWork();
        }

        // GET: /Inventory/
        public ActionResult Index()
        {
            var products = UOW.ProductRepository.GetAll().ToList();
            return View(products);
        }

    The same method call in an API controller gives me an HTTP 500 error:

        private UnitOfWork _unitOfWork;

        public TestController()
        {
            _unitOfWork = new UnitOfWork();
        }

        public IEnumerable<Product> Get()
        {
            var products = _unitOfWork.ProductRepository.GetAll().ToList();
            return products;
        }

    Debugging shows that there is indeed data being returned in both controllers' UOW calls. I then added a custom configuration in Global:

        public static void CustomizeConfig(HttpConfiguration config)
        {
            config.Formatters.Remove(config.Formatters.XmlFormatter);
            var json = config.Formatters.JsonFormatter;
            json.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
        }

    I am still receiving an HTTP 500 in the API controller ONLY and am at a loss as to why. Any ideas?

    UPDATE: It appears using lazy loading caused the problem. When I set the associated properties to non-virtual, the Test API provided the necessary JSON string. However, whereas before I had the Vendor class included, I now only have VendorId. I really wanted to include the associated classes. Any ideas? I know there are a lot of smart people out there. Anyone?

    Read the article

  • No buffer space available (maximum connections reached?) from Postgres EDB driver

    - by Listening.Platform
    We are facing an exception while connecting to the database through our Java application. The stack trace is as follows:

        com.edb.util.PSQLException: The connection attempt failed.
            at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:189)
            at com.edb.core.ConnectionFactory.openConnection(ConnectionFactory.java:64)
            at com.edb.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:161)
            at com.edb.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30)
            at com.edb.jdbc3.Jdbc3Connection.<init>(Jdbc3Connection.java:24)
            at com.edb.Driver.makeConnection(Driver.java:391)
            at com.edb.Driver.connect(Driver.java:266)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            ... 12 more
        Caused by: java.net.SocketException: No buffer space available (maximum connections reached?): connect
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(Unknown Source)
            at java.net.PlainSocketImpl.connectToAddress(Unknown Source)
            at java.net.PlainSocketImpl.connect(Unknown Source)
            at java.net.SocksSocketImpl.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at java.net.Socket.<init>(Unknown Source)
            at java.net.Socket.<init>(Unknown Source)
            at com.edb.core.PGStream.<init>(PGStream.java:70)
            at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:115)
            ... 20 more

    When the error occurred we were not able to connect to the internet or the DB and had to reboot the system. The error occurred again after 3 days at the same code, i.e. while connecting to the DB. We checked TCP connections using netstat, but there were not many TCP connections, i.e. it had not reached the max limit.

    Our application has multiple long-running Java processes that pool the DB connections (not more than 60) and keep them alive for firing the next query (as it has to poll the DB every 2 seconds). Some of the queries in our application join large tables (10 million records) to get the related data.

    We are using the following systems and applications:

      - Windows 2003 Server SP2
      - Java 1.6
      - Postgres Plus Advanced Server 8.4
      - The edb-jdbc14.jar driver for connecting to the DB from Java

    We have used the default configuration of the Postgres DB, except for increasing the connection limit from 100 to 120. Has anybody encountered the same error with the Postgres EDB driver? Can anybody help us find the solution?

    Read the article

  • Can a page opt out of IIS 7 compression?

    - by Glen Little
    My pages are automatically being compressed by IIS 7 with GZIP. That is great... but, for one particular page, I need to stream it to the user, using Response.Flush() when needed. But when the output is being compressed, the IIS server seems to collect all my output until the page is done before compressing and sending it to the client. That nullifies my attempt to flush the content out to the user.

    Is there a way that I can have this one page opt out of the compression?

    One possible option: I've determined that if I manually set the content type to one that does not match the IIS configuration at c:\windows\system32\inetsrv\config\applicationhost.config, then IIS will not compress it, e.g. Response.ContentType = "x-text/html". This works okay with IE8, as it falls back to displaying the HTML, but Firefox will ask the user what to do with the unknown file type. This could work if there were another MIME type I could use that browsers would accept as HTML and that is not matched in applicationhost.config. For reference, these are the MIME types that will be compressed:

        <add mimeType="text/*" enabled="true" />
        <add mimeType="message/*" enabled="true" />
        <add mimeType="application/x-javascript" enabled="true" />
        <add mimeType="application/atom+xml" enabled="true" />
        <add mimeType="application/xaml+xml" enabled="true" />

    Other options? Are there other ways to opt out of compression?

    Read the article

  • Unsupported sampling rate in Flex/ActionScript

    - by Rajeev
    In ActionScript I need help with the following errors:

        Loading configuration file /opt/flex/frameworks/flex-config.xml
        t3.mxml(10): Error: unsupported sampling rate (24000Hz)
        [Embed(source="music.mp3")]
        t3.mxml(10): Error: Unable to transcode music.mp3.
        [Embed(source="music.mp3")]

    The code is:

        <?xml version="1.0"?>
        <!-- embed/EmbedSound.mxml -->
        <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
            <mx:Script>
                <![CDATA[
                    import flash.media.*;

                    [Embed(source="sample.mp3")]
                    [Bindable]
                    public var sndCls:Class;

                    public var snd:Sound = new sndCls() as Sound;
                    public var sndChannel:SoundChannel;

                    public function playSound():void {
                        sndChannel = snd.play();
                    }

                    public function stopSound():void {
                        sndChannel.stop();
                    }
                ]]>
            </mx:Script>

            <mx:HBox>
                <mx:Button label="play" click="playSound();"/>
                <mx:Button label="stop" click="stopSound();"/>
            </mx:HBox>
        </mx:Application>

    Read the article

  • Will this class cause memory leaks, and does it need a dispose method? (asp.net vb)

    - by Phil
    Here is the class to export a GridView to an Excel sheet:

        Imports System
        Imports System.Data
        Imports System.Configuration
        Imports System.IO
        Imports System.Web
        Imports System.Web.Security
        Imports System.Web.UI
        Imports System.Web.UI.WebControls
        Imports System.Web.UI.WebControls.WebParts
        Imports System.Web.UI.HtmlControls

        Namespace ExcelExport
            Public NotInheritable Class GVExportUtil

                Private Sub New()
                End Sub

                Public Shared Sub Export(ByVal fileName As String, ByVal gv As GridView)
                    HttpContext.Current.Response.Clear()
                    HttpContext.Current.Response.AddHeader("content-disposition", String.Format("attachment; filename={0}", fileName))
                    HttpContext.Current.Response.ContentType = "application/ms-excel"
                    Dim sw As StringWriter = New StringWriter
                    Dim htw As HtmlTextWriter = New HtmlTextWriter(sw)
                    Dim table As Table = New Table
                    table.GridLines = GridLines.Vertical
                    If (Not (gv.HeaderRow) Is Nothing) Then
                        GVExportUtil.PrepareControlForExport(gv.HeaderRow)
                        table.Rows.Add(gv.HeaderRow)
                    End If
                    For Each row As GridViewRow In gv.Rows
                        GVExportUtil.PrepareControlForExport(row)
                        table.Rows.Add(row)
                    Next
                    If (Not (gv.FooterRow) Is Nothing) Then
                        GVExportUtil.PrepareControlForExport(gv.FooterRow)
                        table.Rows.Add(gv.FooterRow)
                    End If
                    table.RenderControl(htw)
                    HttpContext.Current.Response.Write(sw.ToString)
                    HttpContext.Current.Response.End()
                End Sub

                Private Shared Sub PrepareControlForExport(ByVal control As Control)
                    Dim i As Integer = 0
                    Do While (i < control.Controls.Count)
                        Dim current As Control = control.Controls(i)
                        If (TypeOf current Is LinkButton) Then
                            control.Controls.Remove(current)
                            control.Controls.AddAt(i, New LiteralControl(CType(current, LinkButton).Text))
                        ElseIf (TypeOf current Is ImageButton) Then
                            control.Controls.Remove(current)
                            control.Controls.AddAt(i, New LiteralControl(CType(current, ImageButton).AlternateText))
                        ElseIf (TypeOf current Is HyperLink) Then
                            control.Controls.Remove(current)
                            control.Controls.AddAt(i, New LiteralControl(CType(current, HyperLink).Text))
                        ElseIf (TypeOf current Is DropDownList) Then
                            control.Controls.Remove(current)
                            control.Controls.AddAt(i, New LiteralControl(CType(current, DropDownList).SelectedItem.Text))
                        ElseIf (TypeOf current Is CheckBox) Then
                            control.Controls.Remove(current)
                            control.Controls.AddAt(i, New LiteralControl(CType(current, CheckBox).Checked))
                        End If
                        If current.HasControls Then
                            GVExportUtil.PrepareControlForExport(current)
                        End If
                        i = (i + 1)
                    Loop
                End Sub
            End Class
        End Namespace

    Will this class cause memory leaks? And does anything here need to be disposed of? The code is working, but I am getting frequent crashes of the app pool when it is in use. Thanks.

    Read the article
