Search Results

Search found 7776 results on 312 pages for 'configure in'.


  • Squid handling of concurrent cache misses

    - by Oliver H-H
    We're using a Squid cache to off-load traffic from our web servers, i.e. it's set up as a reverse proxy responding to inbound requests before they hit our web servers. When we get blitzed with concurrent requests for the same resource that's not in the cache, Squid proxies all the requests through to our web ("origin") servers. For us, this behavior isn't ideal: our origin servers get bogged down trying to fulfill N identical requests concurrently. Instead, we'd like the first request to proxy through to the origin server, the rest of the requests to queue at the Squid layer, and then all be fulfilled by Squid when the origin server has responded to that first request. Does anyone know how to configure Squid to do this? We've read through the documentation multiple times and thoroughly web-searched the topic, but can't figure out how to do it. We use Akamai too and, interestingly, this is its default behavior. (However, Akamai has so many nodes that we still see lots of concurrent requests in certain traffic spike scenarios, even with Akamai's super-node feature enabled.) This behavior is clearly configurable for some other caches, e.g. the Ehcache documentation offers the option "Concurrent Cache Misses: A cache miss will cause the filter chain, upstream of the caching filter to be processed. To avoid threads requesting the same key to do useless duplicate work, these threads block behind the first thread." Some folks call this behavior a "blocking cache," since the subsequent concurrent requests block behind the first request until it's fulfilled or timed out. Thanks for looking over my noob question! Oliver
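    For reference, a minimal sketch of the kind of configuration we're after. It assumes a Squid release that has the collapsed_forwarding directive (2.6/2.7, and again in 3.5 and later); the hostname, IP and ports below are placeholders, not our real setup:

        # squid.conf sketch -- reverse proxy in front of a single origin server
        http_port 80 accel defaultsite=www.example.com
        cache_peer 10.0.0.10 parent 8080 0 no-query originserver name=origin

        # Queue concurrent misses for the same URL behind one origin request
        # instead of forwarding N identical requests
        collapsed_forwarding on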


  • Problem with GWT behind a reverse proxy - either nginx or apache

    - by Don Branson
    I'm having this problem with GWT when it's behind a reverse proxy. The backend app is deployed within a context - let's call it /context. The GWT app works fine when I hit it directly: http://host:8080/context/ I can configure a reverse proxy in front of it. Here's my nginx example: upstream backend { server 127.0.0.1:8080; } ... location / { proxy_pass http://backend/context/; } But, when I run through the reverse proxy, GWT gets confused, saying: 2009-10-04 14:05:41.140:/:WARN: Login: ERROR: The serialization policy file '/C7F5ECA5E3C10B453290DE47D3BE0F0E.gwt.rpc' was not found; did you forget to include it in this deployment? 2009-10-04 14:05:41.140:/:WARN: Login: WARNING: Failed to get the SerializationPolicy 'C7F5ECA5E3C10B453290DE47D3BE0F0E' for module 'https://hostname:444/'; a legacy, 1.3.3 compatible, serialization policy will be used. You may experience SerializationExceptions as a result. 2009-10-04 14:05:41.292:/:WARN: StoryService: ERROR: The serialization policy file '/0445C2D48AEF2FB8CB70C4D4A7849D88.gwt.rpc' was not found; did you forget to include it in this deployment? 2009-10-04 14:05:41.292:/:WARN: StoryService: WARNING: Failed to get the SerializationPolicy '0445C2D48AEF2FB8CB70C4D4A7849D88' for module 'https://hostname:444/'; a legacy, 1.3.3 compatible, serialization policy will be used. You may experience SerializationExceptions as a result. In other words, GWT isn't getting the word that it needs to prepend /context/ when it looks for C7F5ECA5E3C10B453290DE47D3BE0F0E.gwt.rpc, but only when the request comes through the proxy. A workaround is to add the context to the url for the web site: location /context/ { proxy_pass http://backend/context/; } but that means the context is now part of the url that the user sees, and that's ugly. Anybody know how to make GWT happy in this case? Software versions: GWT - 1.7.0 (same problem with 1.7.1) Jetty - 6.1.21 (but the same problem existed under tomcat) nginx - 0.7.62 (same problem under apache 2.x) I've looked at the traffic between the proxy and the backend using DonsProxy, but there's nothing noteworthy there.
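    Laid out more readably, the two proxy configurations described above are roughly as follows (host and port are the placeholder values from the example):

        # What I want (but which confuses GWT's serialization policy lookup)
        upstream backend { server 127.0.0.1:8080; }
        location / {
            proxy_pass http://backend/context/;
        }

        # Workaround that works, but leaks /context/ into the user-visible URL
        location /context/ {
            proxy_pass http://backend/context/;
        }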


  • How do I lookup a JNDI Datasource from outside a web container?

    - by masotime
    I have the following environment set up: Java 1.5 Sun Application Server 8.2 Oracle 10 XE Struts 2 Hibernate I'm interested to know how I can write code for a Java client (i.e. outside of a web application) that can reference the JNDI datasource provided by the application server. The ports for the Sun Application Server are all at their defaults. There is a JNDI datasource named jdbc/xxxx in the server configuration, but I noticed that the Hibernate configuration for the web application uses the name java:comp/env/jdbc/xxxx instead. Most of the examples I've seen so far involve code like Context ctx = new InitialContext(); ctx.lookup("jdbc/xxxx"); But it seems I'm either using the wrong JNDI name, or I need to configure a jndi.properties or other configuration file to correctly point to a listener? I have appserv-rt.jar from the Sun Application Server which has a jndi.properties inside of it, but it does not seem to help. There's a similar question here, but it doesn't give any code / refers to having iBatis obtain the JNDI Datasource automatically: http://stackoverflow.com/questions/39053/accessing-datasource-from-outside-a-web-container-through-jndi
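    For what it's worth, here is a minimal sketch of the standalone lookup I'm attempting. The initial-context factory class, ORB host/port and global JNDI name are assumptions based on Sun Application Server / Glassfish defaults, not something I've verified against 8.2:

        import java.util.Properties;
        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class JndiClient {
            public static void main(String[] args) throws Exception {
                Properties env = new Properties();
                // Assumed Sun AS naming provider settings -- adjust to the actual install
                env.put(Context.INITIAL_CONTEXT_FACTORY,
                        "com.sun.enterprise.naming.SerialInitContextFactory");
                env.put("org.omg.CORBA.ORBInitialHost", "localhost");
                env.put("org.omg.CORBA.ORBInitialPort", "3700"); // default ORB port
                Context ctx = new InitialContext(env);
                // Outside the web container the global name is "jdbc/xxxx",
                // not the java:comp/env/jdbc/xxxx alias used inside the web app
                DataSource ds = (DataSource) ctx.lookup("jdbc/xxxx");
                System.out.println(ds.getConnection().getMetaData().getDatabaseProductName());
            }
        }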


  • Why are changes to coffeescript files not being compiled when my Rails 3.2.0 app is in development mode?

    - by ben
    Normally, any changes I make to .js.coffee files in my Rails 3.2.0 app in development mode take effect when I refresh the page. All of a sudden, this is not happening. If I do rake assets:precompile, then the changes are shown, but then if I do rake assets:clean they go back to not being shown. What is causing this? Edit: Restarting the server makes the changes show. Why isn't this happening automatically as before? Edit: Here is my development.rb Myapp::Application.configure do # Settings specified here will take precedence over those in config/application.rb # In the development environment your application's code is reloaded on # every request. This slows down response time but is perfect for development # since you don't have to restart the web server when you make code changes. config.cache_classes = false # Log error messages when you accidentally call methods on nil. config.whiny_nils = true # Show full error reports and disable caching config.consider_all_requests_local = true config.action_controller.perform_caching = false # Don't care if the mailer can't send config.action_mailer.raise_delivery_errors = false # Print deprecation notices to the Rails logger config.active_support.deprecation = :log # Only use best-standards-support built into browsers config.action_dispatch.best_standards_support = :builtin # Raise exception on mass assignment protection for Active Record models config.active_record.mass_assignment_sanitizer = :strict # Log the query plan for queries taking more than this (works # with SQLite, MySQL, and PostgreSQL) config.active_record.auto_explain_threshold_in_seconds = 0.5 # Do not compress assets config.assets.compress = false # Expands the lines which load the assets config.assets.debug = true config.action_mailer.default_url_options = { :host => 'localhost:3000' } config.log_level = :warn end


  • Dependency Injection Question - ASP.NET

    - by Paul
    I'm starting a web application that contains the following projects: Booking.Web Booking.Services Booking.DataObjects Booking.Data I'm using the repository pattern in my data project only. All services will be the same, no matter what happens. However, if a customer wants to use Access, it will use a different data repository than if the customer wants to use SQL Server. I have StructureMap, and want to be able to do the following: Web project is unaffected. It's a web forms application that will only know about the services project and the dataobjects project. When a service is called, it will use StructureMap (by looking up the bootstrapper.cs file) to see which data repository to use. An example of a services class is the error logging class: public class ErrorLog : IErrorLog { ILogging logger; public ErrorLog() { } public ErrorLog(ILogging logger) { this.logger = logger; } public void AddToLog(string errorMessage) { try { AddToDatabaseLog(errorMessage); } catch (Exception ex) { AddToFileLog(ex.Message); } finally { AddToFileLog(errorMessage); } } private void AddToDatabaseLog(string errorMessage) { ErrorObject error = new ErrorObject { ErrorDateTime = DateTime.Now, ErrorMessage = errorMessage }; logger.Insert(error); } private void AddToFileLog(string errorMessage) { // TODO: Take this value from the web.config instead of hard coding it TextWriter writer = new StreamWriter(@"E:\Work\Booking\Booking\Booking.Web\Logs\ErrorLog.txt", true); writer.WriteLine(DateTime.Now.ToString() + " ---------- " + errorMessage); writer.Close(); } } I want to be able to call this service from my web project, without defining which repository to use for the data access. My boostrapper.cs file in the services project is defined as: public class Bootstrapper { public static void ConfigureStructureMap() { ObjectFactory.Initialize(x => { x.AddRegistry(new ServiceRegistry()); } ); } public class ServiceRegistry : Registry { protected override void configure() { ForRequestedType<IErrorLog>().TheDefaultIsConcreteType<Booking.Services.Logging.ErrorLog>(); ForRequestedType<ILogging>().TheDefaultIsConcreteType<SqlServerLoggingProvider>(); } } } What else do I need to get this to work? When I defined a test, the ILogger object was null. Thanks,


  • Replace System.Net.Mail.MailMessage with manually created message and send it

    - by DEH
    I am trying to send emails that will bounce to a known mailbox. I plan to use VERP. Unfortunately the System.Net.Mail.MailMessage object does not allow me to precisely set the From: and Sender: headers within my email - it forces the values so that the resulting email contains the phrase 'on behalf of', and does not allow me fine control over the relevant MIME headers. I therefore plan to manually write MIME email messages directly to the pickup directory so that I can independently control the From and Sender headers. My dev box is a Vista box and therefore does not have an SMTP server. I would like to configure the dev box so that I have an SMTP server running on it. I can then turn off the SMTP server, write messages to the pickup dir, then turn on the SMTP server and see how the individual emails that I have written will behave (some delivered, some bounced to a bounce handler on a different email domain, as dictated by the Sender). Two questions: 1. Can anyone recommend an SMTP server that will monitor a pickup directory? 2. If I set headers as follows: From:[email protected]; Sender:[email protected], then the recipient will see the email as having come from [email protected] (and won't see any reference to [email protected]), but if the mail bounces then the NDR will be sent to [email protected]. It's a real pain to have to do this, but I can't see any way of using System.Net.Mail.MailMessage without it messing up my headers.
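    Just to make the pickup-directory idea concrete, this is the rough shape of what I'm planning; the directory path and the addresses are placeholders, and I haven't confirmed how different SMTP servers treat files dropped in this way:

        using System;
        using System.IO;
        using System.Text;

        class PickupDirectorySketch
        {
            static void Main()
            {
                // Placeholder pickup directory (IIS SMTP-style layout assumed)
                string pickupDir = @"C:\inetpub\mailroot\Pickup";
                string file = Path.Combine(pickupDir, Guid.NewGuid().ToString("N") + ".eml");

                var msg = new StringBuilder();
                msg.AppendLine("Sender: bounce-user123@bounces.example.net"); // where the NDR should go (VERP)
                msg.AppendLine("From: newsletter@example.com");               // what the recipient sees
                msg.AppendLine("To: recipient@example.org");
                msg.AppendLine("Subject: VERP test");
                msg.AppendLine("MIME-Version: 1.0");
                msg.AppendLine("Content-Type: text/plain; charset=us-ascii");
                msg.AppendLine();                                             // blank line ends the headers
                msg.AppendLine("Test body.");
                File.WriteAllText(file, msg.ToString());
            }
        }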


  • Filling combobox from database by using hibernate in Java

    - by denny
    Hey, I am developing a small Swing-based application with Hibernate in Java, and I want to fill a combo box from a database column. How can I do that? I also don't know where (under initComponents, buttonActionPerformed) I need to do it. For saving I'm using a JButton, and its code is here: private void jButton1ActionPerformed(java.awt.event.ActionEvent evt) { int idd=Integer.parseInt(jTextField1.getText()); String name=jTextField2.getText(); String description=jTextField3.getText(); Session session = null; SessionFactory sessionFactory = new Configuration().configure() .buildSessionFactory(); session = sessionFactory.openSession(); Transaction transaction = session.getTransaction(); try { ContactGroup con = new ContactGroup(); con.setId(idd); con.setGroupName(name); con.setGroupDescription(description); transaction.begin(); session.save(con); transaction.commit(); } catch (Exception e) { e.printStackTrace(); } finally{ session.close(); } }
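    To frame what I'm after, a rough sketch of the kind of method I imagine calling from the form's constructor (after initComponents()). It assumes the ContactGroup entity above has a getGroupName() getter and reuses the same SessionFactory setup as the save button:

        // Sketch: load all ContactGroup rows and show their names in the combo box
        private void fillGroupCombo() {
            SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory();
            Session session = sessionFactory.openSession();
            try {
                @SuppressWarnings("unchecked")
                java.util.List<ContactGroup> groups =
                        session.createQuery("from ContactGroup").list();
                javax.swing.DefaultComboBoxModel model = new javax.swing.DefaultComboBoxModel();
                for (ContactGroup g : groups) {
                    model.addElement(g.getGroupName()); // or add the entity itself
                }
                jComboBox1.setModel(model);
            } finally {
                session.close();
            }
        }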


  • SSL certificate pre-fetch .NET

    - by Wil P
    I am writing a utility that would allow me to monitor the health of our websites. This consists of a series of validation tasks that I can run against a web application. One of the tests is to anticipate the expiration of a particular SSL certificate. I am looking for a way to pre-fetch the SSL certificate installed on a web site using .NET or a WINAPI so that I can validate the expiration date of the certificate associated with a particular website. One way I could do this is to cache the certificates when they are validated in the ServicePointManager.ServerCertificateValidationCallback handler and then match them up with configured web sites, but this seems a bit hackish. Another would be to configure the application with the certificate for the website, but I'd rather avoid this if I can in order to minimize configuration. What would be the easiest way for me to download an SSL certificate associated with a website using .NET so that I can inspect the information the certificate contains to validate it? EDIT: To extend on the answer below there is no need to manually create the ServicePoint prior to creating the request. It is generated on the request object as part of executing the request. private static string GetSSLExpiration(string url) { HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest; using (WebResponse response = request.GetResponse()) { } if (request.ServicePoint.Certificate != null) { return request.ServicePoint.Certificate.GetExpirationDateString(); } else { return string.Empty; } }
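    A small variation on the snippet above, in case a strongly typed expiry date is more convenient than the string form; it wraps the same ServicePoint certificate in an X509Certificate2 (System.Security.Cryptography.X509Certificates), and the helper name is just for illustration:

        // Sketch: same pre-fetch, but exposing the expiry as a DateTime
        private static DateTime? GetSslExpirationUtc(string url)
        {
            HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;
            using (WebResponse response = request.GetResponse()) { }
            if (request.ServicePoint.Certificate == null)
            {
                return null;
            }
            var cert = new X509Certificate2(request.ServicePoint.Certificate);
            return cert.NotAfter.ToUniversalTime();
        }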


  • How to upload files?

    - by Brian Roisentul
    I just wanted to know how to configure FCKeditor to upload files and images to the server where the website is hosted. The relevant part of its config file (I think) looks like this: FCKConfig.LinkUpload = true ; FCKConfig.LinkUploadURL = FCKConfig.BasePath + 'filemanager/connectors/' + _QuickUploadLanguage + '/upload.' + _QuickUploadExtension ; FCKConfig.LinkUploadAllowedExtensions = ".(7z|aiff|asf|avi|bmp|csv|doc|fla|flv|gif|gz|gzip|jpeg|jpg|mid|mov|mp3|mp4|mpc|mpeg|mpg|ods|odt|pdf|png|ppt|pxd|qt|ram|rar|rm|rmi|rmvb|rtf|sdc|sitd|swf|sxc|sxw|tar|tgz|tif|tiff|txt|vsd|wav|wma|wmv|xls|xml|zip)$" ; // empty for all FCKConfig.LinkUploadDeniedExtensions = "" ; // empty for no one FCKConfig.ImageUpload = true ; FCKConfig.ImageUploadURL = FCKConfig.BasePath + 'filemanager/connectors/' + _QuickUploadLanguage + '/upload.' + _QuickUploadExtension + '?Type=Image' ; FCKConfig.ImageUploadAllowedExtensions = ".(jpg|gif|jpeg|png|bmp)$" ; // empty for all FCKConfig.ImageUploadDeniedExtensions = "" ; // empty for no one Could it be a folder permission problem? Is this part of the config.js alright?


  • How to allow anonymous login in org.apache.ftpserver?

    - by ablmf
    I wrote a little code like this to start an FTP server embedded in my application. It's based on Apache FtpServer. I found that the anonymous user could not log in; the client keeps getting 530. Do I have to add a configuration file for FTP? I cannot find any API to create a User to add to the UserManager. private void start_ftp() throws FtpException { FtpServerFactory serverFactory = new FtpServerFactory(); ListenerFactory factory = new ListenerFactory(); // set the port of the listener factory.setPort(DEF_FTP_PORT); // replace the default listener serverFactory.addListener("default", factory.createListener()); Ftplet fl = new MyFtplet(); Map<String, Ftplet> map_ftplest = new LinkedHashMap<String, Ftplet>(); map_ftplest.put("default", fl); serverFactory.setFtplets(map_ftplest); UserManagerFactory u_factory = new PropertiesUserManagerFactory(); UserManager u_manager = u_factory.createUserManager(); //u_manager. Boolean b = u_manager.doesExist("anonymous"); serverFactory.setUserManager(u_manager); // start the server server = serverFactory.createServer(); server.start(); }
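    A sketch of the direction I've been trying, in case it helps; the package layout is assumed from ftpserver 1.0.x (BaseUser lives in org.apache.ftpserver.usermanager.impl) and I haven't confirmed this is the intended way to enable anonymous access:

        // Sketch: explicitly create and save an "anonymous" user before starting the server
        UserManagerFactory u_factory = new PropertiesUserManagerFactory();
        UserManager u_manager = u_factory.createUserManager();

        if (!u_manager.doesExist("anonymous")) {
            BaseUser anon = new BaseUser();      // org.apache.ftpserver.usermanager.impl.BaseUser
            anon.setName("anonymous");
            anon.setHomeDirectory("ftproot");    // placeholder home directory
            anon.setEnabled(true);
            List<Authority> authorities = new ArrayList<Authority>();
            authorities.add(new WritePermission()); // drop this line for read-only access
            anon.setAuthorities(authorities);
            u_manager.save(anon);
        }
        serverFactory.setUserManager(u_manager);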


  • PubSubHubBub Hubs

    - by PartlyCloudy
    Hi, I'm currently building a live web application based upon the PubSubHubBub protocol. However, I encountered several issues. First, I'm in search of a hub application that I can run on my server. There are several applications, but most of them are not mature yet, or they don't support the 0.3 spec. The official Google hub runs on the Google App Engine and can even be executed locally. Unfortunately, "Tasks will not run automatically. Push the 'Run' button to execute each task." This behaviour is useful for debugging and understanding the workflow, but in some live tests, it would be nice not to invoke all tasks manually. Is there a way to tweak the local App Engine to run tasks automatically? Next, I have a question concerning the spec itself. The Google reference implementation provides the initial publish method bound to the endpoint URI + /publish. But this is not reflected in the specs. So are there any mature hubs that can be run locally for debugging? Or are there ways to configure the official Google App Engine hub to run locally and to execute tasks directly? Thanks in advance


  • Serializing a part of object graph

    - by Felix
    Hi all, I have a problem regarding Java custom serialization. I have a graph of objects and want to configure where to stop when I serialize a root object from client to server. Let's make it a bit more concrete by giving a sample scenario. I have classes of type: Company, Employee (abstract), Manager extends Employee, Secretary extends Employee, Analyst extends Employee, Project. Here are the relations: Company(1)---(n)Employee, Manager(1)---(n)Project, Analyst(1)---(n)Project. Imagine I'm on the client side and I want to create a new company, assign it 10 employees (new or some existing) and send this new company to the server. What I expect in this scenario is to serialize the company and all associated employees to the server side, because I'll save the relations on the database. So far no problem, since the default Java serialization mechanism serializes the whole object graph, excluding the fields which are static or transient. My goal concerns the following scenario. Imagine I loaded a company and its 1000 employees from the server to the client side. Now I only want to rename the company's name (or some other field that directly belongs to the company) and update this record. This time, I want to send only the company object to the server side and not the whole list of employees (I just update the name; the employees are irrelevant in this use case). My aim also includes the configurability of saying: transfer the company AND the employees, but not the Project relations; you must stop there. Do you know any possibility of achieving this in a generic way, without implementing writeObject, readObject for every single entity object? What would be your suggestions? I would really appreciate your answers. I'm open to any ideas and am ready to answer your questions in case something is not clear.
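    To make the "stop here" idea concrete, this is roughly what the per-class hook looks like; the question is whether the same effect can be achieved generically instead of repeating it in every entity (sketch only, field names assumed):

        // Per-class hook: Company's own fields always travel; the employees list
        // is written only when the sender asks for it.
        public class Company implements java.io.Serializable {
            private String name;
            // kept out of default serialization so writeObject can decide per call
            private transient java.util.List<Employee> employees =
                    new java.util.ArrayList<Employee>();
            // set by the sender before serializing: company only vs. company + employees
            private transient boolean includeEmployees = true;

            private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
                out.defaultWriteObject();
                out.writeObject(includeEmployees ? employees : null);
            }

            @SuppressWarnings("unchecked")
            private void readObject(java.io.ObjectInputStream in)
                    throws java.io.IOException, ClassNotFoundException {
                in.defaultReadObject();
                employees = (java.util.List<Employee>) in.readObject();
                if (employees == null) {
                    employees = new java.util.ArrayList<Employee>();
                }
            }
        }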


  • Best terminal environment for Cygwin/Windows?

    - by Anders Sandvig
    Today I run Cygwin with rxvt using the following startup line: rxvt -bg black -sl 8192 -fg white -sr -g 150x56 -fn "Fixedsys" -e /usr/bin/bash --login -i This gives me a resizable native Windows window which is much better than the standard "DOS box" the default cygwin.bat provides. However, the current configuration does have a couple of issues: I am not able to enter non-ASCII characters into the terminal window (i.e. æ, ø, å and Æ, Ø, Å), which I use semi-frequently. In fact, the terminal will not even accept them when I paste them into the window. If I paste a string like "bølle" (Norwegian for "bully"), all I get is "blle". I am not able to render UTF-8 characters; they only show as ?, even if they are supported by the font (i.e. when rendering the same characters in ISO-8859-1 they show just fine). I am running English Windows Vista with locale and keyboard layout set to Norwegian (ISO-8859-1 character set?), but I've had the exact same issue on Windows 2000 and XP. Does anyone know how to fix this (i.e. a better way to configure rxvt)? Apart from the issues mentioned above, I'm very happy with rxvt, so if I find a way to resolve them I'd like to continue using it. However, if the issues are not (easily) solvable, are there any other good terminal solutions for Cygwin? Update: The solution provided by Andy and Mattias (editing the .inputrc file) did solve the input problem, but output rendering is still an issue. Output is fine when I render in ISO-8859-1, but when using UTF-8 I only get ? for non-ASCII characters. This behavior is consistent between rxvt, urxvt (under Cygwin XFree X Server), mintty and PuttyCyg. Is there a similar configuration file where output encoding can be set (i.e. the equivalent of setting output locale on a Linux system)?
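    For anyone finding this later, the .inputrc change that fixed the input side was along these lines (readline settings, so they only affect bash and other readline-based programs, not rxvt itself):

        # ~/.inputrc -- let readline accept and echo 8-bit characters
        set meta-flag on
        set input-meta on
        set convert-meta off
        set output-meta on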


  • How to determine which source files are required for an Eclipse run configuration

    - by isme
    When writing code in an Eclipse project, I'm usually quite messy and undisciplined in how I create and organize my classes, at least in the early hacky and experimental stages. In particular, I create more than one class with a main method for testing different ideas that share most of the same classes. If I come up with something like a useful app, I can export it to a runnable jar so I can share it with friends. But this simply packs up the whole project, which can become several megabytes big if I'm relying on a large library such as httpclient. Also, if I decide to refactor my lump of code into several projects once I work out what works, and I can't remember which source files are used in a particular run configuration, all I can do is copy the main class to a new project and then keep copying missing types till the new project compiles. Is there a way in Eclipse to determine which classes are actually used in a particular run configuration? EDIT: Here's an example. Say I'm experimenting with web scraping, and so far I've tried to scrape the search-result pages of both youtube.com and wrzuta.pl. I have a bunch of classes that implement scraping in general, a few that are specific to each of youtube and wrzuta. On top of this I have a basic GUI common to both scrapers, but a few wrzuta- and youtube-specific buttons and options. The WrzutaGuiMain and YoutubeGuiMain classes each contain a main method to configure and show the GUI for each respective website. Can Eclipse look at each of these to determine which types are referenced?


  • C++ error: expected initializer before ‘&’ token

    - by Werner
    Hi, the following piece of C++ code compiled two years ago in a suse 10.1 Linux machine. #ifndef DATA_H #define DATA_H #include <iostream> #include <iomanip> inline double sqr(double x) { return x*x; } enum Direction { X,Y,Z }; inline Direction next(const Direction d) { switch(d) { case X: return Y; case Y: return Z; case Z: return X; } } inline ostream& operator<<(ostream& os,const Direction d) { switch(d) { case X: return os << "X"; case Y: return os << "Y"; case Z: return os << "Z"; } } ... ... Now, I am trying to compile it on Ubuntu 9.10 and I get the error: data.h:20: error: expected initializer before ‘&’ token which is referred to the line of: inline ostream& operator<<(ostream& os,const Direction d) the g++ used on this machine is: Using built-in specs. Target: x86_64-linux-gnu Configured with: ../src/configure -v --with-pkgversion='Ubuntu 4.4.1-4ubuntu9' --with-bugurl=file:///usr/share/doc/gcc-4.4/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --enable-shared --enable-multiarch --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.4 --program-suffix=-4.4 --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --disable-werror --with-arch-32=i486 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu Thread model: posix gcc version 4.4.1 (Ubuntu 4.4.1-4ubuntu9) Could you give me some hint about this error? Thanks
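    For reference, the error disappears here once ostream is qualified; newer libstdc++ headers no longer pull the std names into the global namespace the way older releases effectively did, so an unqualified ostream is simply not declared at that point. A sketch of the fix, reusing the Direction enum from the header above:

        #include <iostream>

        // Either qualify std:: explicitly...
        inline std::ostream& operator<<(std::ostream& os, const Direction d)
        {
            switch (d) {
                case X: return os << "X";
                case Y: return os << "Y";
                case Z: return os << "Z";
            }
            return os; // unreachable for valid enum values; silences -Wreturn-type
        }

        // ...or add, after the includes:
        // using std::ostream;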


  • JDBC/OSGi and how to dynamically load drivers without explicitly stating dependencies in the bundle?

    - by Chris
    Hi, This is a biggie. I have a well-structured yet monolithic code base that has a primitive modular architecture (all modules implement interfaces yet share the same classpath). I realize the folly of this approach and the problems it represents when I go to deploy on application servers that may have different conflicting versions of my library. I'm dependent on around 30 jars right now and am mid-way though bnding them up. Now some of my modules are easy to declare the versioned dependencies of, such as my networking components. They statically reference classes within the JRE and other BNDded libraries but my JDBC related components instantiate via Class.forName(...) and can use one of any number of drivers. I am breaking everything up into OSGi bundles by service area. My core classes/interfaces. Reporting related components. Database access related components (via JDBC). etc.... I wish for my code to be able to still be used without OSGi via single jar file with all my dependencies and without OSGi at all (via JARJAR) and also to be modular via the OSGi meta-data and granular bundles with dependency information. How do I configure my bundle and my code so that it can dynamically utilize any driver on the classpath and/or within the OSGi container environment (Felix/Equinox/etc.)? Is there a run-time method to detect if I am running in an OSGi container that is compatible across containers (Felix/Equinox/etc.) ? Do I need to use a different class loading mechanism if I am in a OSGi container? Am I required to import OSGi classes into my project to be able to load an at-bundle-time-unknown JDBC driver via my database module? I also have a second method of obtaining a driver (via JNDI, which is only really applicable when running in an app server), do I need to change my JNDI access code for OSGi-aware app servers?


  • Problem with reusing UITableViewCell's

    - by Sheehan Alam
    I have a UITableView that is re-using cells when the user scrolls. Everything appears and scrolls fine, except when the user clicks on an actual row, the highlighted cell displays some text from another cell. I'm not exactly sure why. #define IMAGE_TAG 1111 #define LOGIN_TAG 2222 #define FULL_NAME_TAG 3333 // Customize the appearance of table view cells. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; STUser *mySTUser = [[[STUser alloc]init]autorelease]; mySTUser = [items objectAtIndex:indexPath.row]; AsyncImageView* asyncImage = nil; UILabel* loginLabel = nil; UILabel* fullNameLabel = nil; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:CellIdentifier] autorelease]; } else { asyncImage = (AsyncImageView *) [cell.contentView viewWithTag:IMAGE_TAG]; loginLabel = (UILabel *) [cell.contentView viewWithTag:LOGIN_TAG]; fullNameLabel = (UILabel *) [cell.contentView viewWithTag:FULL_NAME_TAG]; } // Configure the cell... cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator; CGRect frame = CGRectMake(0, 0, 44, 44); asyncImage = [[[AsyncImageView alloc]initWithFrame:frame] autorelease]; asyncImage.tag = IMAGE_TAG; NSURL* url = [NSURL URLWithString:mySTUser.avatar_url_large]; [asyncImage loadImageFromURL:url]; [cell.contentView addSubview:asyncImage]; loginLabel.tag = LOGIN_TAG; CGRect loginLabelFrame = CGRectMake(60, 0, 200, 10); loginLabel = [[[UILabel alloc] initWithFrame:loginLabelFrame] autorelease]; loginLabel.text = [NSString stringWithFormat:@"%@",mySTUser.login]; [cell.contentView addSubview:loginLabel]; fullNameLabel.tag = FULL_NAME_TAG; CGRect fullNameLabelFrame = CGRectMake(60, 20, 200, 10); fullNameLabel = [[[UILabel alloc] initWithFrame:fullNameLabelFrame] autorelease]; fullNameLabel.text = [NSString stringWithFormat:@"%@ %@",mySTUser.first_name, mySTUser.last_name]; //[NSString stringWithFormat:@"%@",mySTUser.login]; [cell.contentView addSubview:fullNameLabel]; return cell; }


  • Trouble compiling C/C++ project in NetBeans 6.8 with MinGW on Windows

    - by dontoo
    I am learning C, and because VC++ 2008 doesn't support C99 features I have just installed NetBeans and configured it to work with MinGW. I can compile a single-file project (main.c) and use the debugger, but when I add a new file to the project I get the error "undefined reference to ... function(code) in that file..". Obviously MinGW doesn't link my files, or I don't know how to properly add them to my project (C standard library files work fine). /bin/make -f nbproject/Makefile-Debug.mk SUBPROJECTS= .build-conf make[1]: Entering directory `/c/Users/don/Documents/NetBeansProjects/CppApplication_7' /bin/make -f nbproject/Makefile-Debug.mk dist/Debug/MinGW-Windows/cppapplication_7.exe make[2]: Entering directory `/c/Users/don/Documents/NetBeansProjects/CppApplication_7' mkdir -p dist/Debug/MinGW-Windows gcc.exe -o dist/Debug/MinGW-Windows/cppapplication_7 build/Debug/MinGW-Windows/main.o build/Debug/MinGW-Windows/main.o: In function `main': C:/Users/don/Documents/NetBeansProjects/CppApplication_7/main.c:5: undefined reference to `X' collect2: ld returned 1 exit status make[2]: *** [dist/Debug/MinGW-Windows/cppapplication_7.exe] Error 1 make[2]: Leaving directory `/c/Users/don/Documents/NetBeansProjects/CppApplication_7' make[1]: *** [.build-conf] Error 2 make[1]: Leaving directory `/c/Users/don/Documents/NetBeansProjects/CppApplication_7' make: *** [.build-impl] Error 2 BUILD FAILED (exit value 2, total time: 1s) main.c #include "header.h" int main(int argc, char** argv) { X(); return (EXIT_SUCCESS); } header.h #ifndef _HEADER_H #define _HEADER_H #include <stdio.h> #include <stdlib.h> void X(void); #endif source.c #include "header.h" void X(void) { printf("dsfdas"); }
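    The generated link line above only mentions main.o, which matches the symptom: source.c is never compiled or handed to the linker. For comparison, a build of the same three files done by hand from a shell would look something like this (NetBeans should produce the equivalent once source.c is actually part of the project's source files):

        # compile each translation unit, then link both object files
        gcc -c main.c -o main.o
        gcc -c source.c -o source.o     # this object file is missing from the IDE's link line
        gcc -o cppapplication_7 main.o source.o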


  • StructureMap problems with bidirectional/circular dependencies

    - by leozilla
    I am currently integrating StructureMap within our business layer but have problems because of bidirectional dependencies. The layer contains multiple manager where each manager can call methods on each other, there are no restrictions or rules for communication. This also includes possible circular dependencies like in the example below. I know the design itself is questionable but currently we just want StructureMap to work and will focus on further refactoring in the future. Every manager implements the IManager interface internal interface IManager { bool IsStarted { get; } void Start(); void Stop(); } And does also have his own specific interface. internal interface IManagerA : IManager { void ALogic(); } internal interface IManagerB : IManager { void BLogic(); } Here are to dummy manager implementations. internal class ManagerA : IManagerA { public IManagerB ManagerB { get; set; } public void ALogic() { } public bool IsStarted { get; private set; } public void Start() { } public void Stop() { } } internal class ManagerB : IManagerB { public IManagerA ManagerA { get; set; } public void BLogic() { } public bool IsStarted { get; private set; } public void Start() { } public void Stop() { } } Here is the StructureMap configuration i use atm. I am still not sure how i should register the managers so currently i use a manual registration. Maybee someone could help me with this too. For<IManagerA>().Singleton().Use<ManagerA>(); For<IManagerB>().Singleton().Use<ManagerB>(); SetAllProperties(convention => { // configure the property injection for all managers convention.Matching(prop => typeof(IManager).IsAssignableFrom(prop.PropertyType)); }); After all i cannot create IManagerA because StructureMap complians about the circular dependency between ManagerA and ManagerB. Is there an easy and clean solution to solve this problem but keep to current design? br David


  • Using authsmtp from a Grails server

    - by Simon
    This is quite a specific question, and I have had no luck on the grails nabble forum, so I thought I would post here. I am using the grails mail plug-in, but I think my question is a general one about using authsmtp as an email gateway from my server. I am having trouble sending mail from my app using authsmtp. I have installed and configured the mail plugin and was originally using my ISP's SMTP server to send mails. However when I deployed to AWS EC2 this failed because my elastic IP was blocked by the SMTP host. So I bought myself an authsmtp account and set up my server email address as an accepted one at authsmtp. I then changed my configuration in SecurityConfig.groovy to point to the authsmtp server that I had been designated... mailHost = "mail.authsmtp.com" mailUsername = "myusername" mailPassword = "mypassword" mailProtocol = "smtp" mailFrom = "[email protected]" mailPort = 2525 ...and I'm just trying to get this to work locally before I deploy back up to AWS. Sending mail fails and in my log I have this exception: 2010-02-13 10:59:44,218 [http-8080-1] ERROR service.EmailerService - Failed to send emails: Failed messages: com.sun.mail.smtp.SMTPSendFailedException: 513 5.0.0 Your email system must authenticate before sending mail. org.springframework.mail.MailSendException; nested exception details (1) are: Failed message 1: com.sun.mail.smtp.SMTPSendFailedException: 513 5.0.0 Your email system must authenticate before sending mail. at com.sun.mail.smtp.SMTPTransport.issueSendCommand(SMTPTransport.java:1388) at com.sun.mail.smtp.SMTPTransport.mailFrom(SMTPTransport.java:959) at com.sun.mail.smtp.SMTPTransport.sendMessage(SMTPTransport.java:583) I'm a bit lost since the username and password I provide in the configuration are definitely correct. A terse and not very helpful conversation with authsmtp support suggests that I need to MD5 and/or base64 encode my credentials before sending, so my question is in three parts... 1) any idea what's going on with the failure and why that message is appearing? 2) how would I encode the credentials to pass to authsmtp and how would I configure that for the mail plugin 3) has anyone successfully connected and sent mail through authsmtp from the mail plugin and specifically from AWS EC2?


  • Multiple database with NHibernate

    - by Flint
    Hi, I have two databases: one is Oracle 10g, the other MySQL. I have configured my web application with NHibernate for Oracle, and now I need to use the MySQL database as well. So how can I configure hibernate.cfg.xml so that I can use both databases in the same application? My current hibernate.cfg.xml is: <?xml version="1.0" encoding="utf-8" ?> <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2"> <session-factory> <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property> <property name="connection.driver_class">NHibernate.Driver.OracleClientDriver</property> <property name="connection.connection_string">Data Source=xe;Persist Security Info=True;User ID=hr;Password=hr;Unicode=True</property> <property name="show_sql">false</property> <property name="dialect">NHibernate.Dialect.Oracle9Dialect</property> <!-- mapping files --> <mapping assembly="DataTransfer" /> </session-factory> </hibernate-configuration>
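    From what I've read so far, the usual pattern is one Configuration (and one session factory) per database rather than a single cfg.xml serving both; a sketch of what I'm considering, where the second file name is hypothetical:

        // Sketch: build a separate session factory per database.
        // Configure(string) loads a named cfg.xml instead of the default hibernate.cfg.xml.
        var oracleFactory = new Configuration()
            .Configure("hibernate.cfg.xml")         // existing Oracle configuration
            .BuildSessionFactory();

        var mysqlFactory = new Configuration()
            .Configure("hibernate.mysql.cfg.xml")   // hypothetical MySQL configuration file
            .BuildSessionFactory();

        using (var oracleSession = oracleFactory.OpenSession())
        using (var mysqlSession = mysqlFactory.OpenSession())
        {
            // each session talks to its own database
        }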


  • Grails UrlMappings with .html

    - by Glennn
    I'm developing a Grails web application (mainly as a learning exercise). I have previously written some standard Grails apps, but in this case I wanted to try creating a controller that would intercept all requests (including static html) of the form: <a href="/testApp/testJsp.jsp">test 1</a> <a href="/testApp/testGsp.gsp">test 2</a> <a href="/testApp/testHtm.htm">test 3</a> <a href="/testApp/testHtml.html">test 4</a> The intent is to do some simple business logic (auditing) each time a user clicks a link. I know I could do this using a Filter (or a range of other methods), however I thought this should work too and wanted to do this using a Grails framework. I set up the Grail UrlMappings.groovy file to map all URLs of that form (/$myPathParam?) to a single controller: class UrlMappings { static mappings = { "/$controller/$action?/$id?"{ constraints { } } "/$path?" (controller: 'auditRecord', action: 'showPage') "500"(view:'/error') } } In that controller (in the appropriate "showPage" action) I've been printing out the path information, for example: def showPage = { println "params.path = " + params.path ... render(view: resultingView) } The results of the println in the showPage action for each of my four links are testJsp.jsp testGsp.gsp testHtm.htm testHtml Why is the last one "testHtml", not "testHtml.html"? In a previous (Stack Overflow query) Olexandr encountered this issue and was advised to simply concatenate the value of request.format - which, indeed, does return "html". However request.format also returns "html" for all four links. I'm interested in gaining an understanding of what Grails is doing and why. Is there some way to configure Grails so the params.path variable in the controller shows "testHtml.html" rather than stripping off the "html" extension? It doesn't seem to remove the extension for any other file type (including .htm). Is there a good reason it's doing this? I know that it is a bit unusual to use a controller for static html, but still would like to understand what's going on.


  • Alternatives to using web.config to store settings (for complex solutions)

    - by Brian MacKay
    In our web applications, we separate our Data Access Layers out into their own projects. This creates some problems related to settings. Because the DAL will eventually need to be consumed from perhaps more than one application, web.config does not seem like a good place to keep the connection strings and some of the other DAL-related settings. To solve this, on some of our recent projects we introduced a third project just for settings. We put the settings in a system of .settings files... With a simple wrapper, the ability to have different settings for various environments (Dev, QA, Staging, Production, etc) was easy to achieve. The only problem there is that the settings project (including the .Settings class) compiles into an assembly, so you can't change it without doing a build/deployment, and some of our customers want to be able to configure their projects without Visual Studio. So, is there a best practice for this? I have the sense that I'm reinventing the wheel. Some solutions such as storing settings in a fixed directory on the server in, say, our own XML format occurred to us. But again, I would rather avoid having to re-create encryption for sensitive values and so on. And I would rather keep the solution self-contained if possible. EDIT: The original question did not contain the really penetrating reason that we can't (I think) use web.config ... That puts a few (very good) answers out of context, my bad.
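    One halfway option we have looked at is keeping web.config (or app.config) as the entry point but pushing individual sections out to plain files via configSource, which customers could edit without Visual Studio or a rebuild; a sketch, with made-up file and key names:

        <!-- web.config: the section body lives in an external file -->
        <connectionStrings configSource="connectionStrings.config" />

        <!-- connectionStrings.config, deployed alongside and editable on its own -->
        <connectionStrings>
          <add name="Main" connectionString="Data Source=...;Initial Catalog=...;Integrated Security=True" />
        </connectionStrings>

    As far as I can tell, sections referenced this way can still be protected with the standard ASP.NET protected-configuration (encryption) tooling, which is part of why it appealed to us.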


  • NHibernate stored procedure problem

    - by Calvin
    I'm having a hard time trying to get my stored procedure to work with NHibernate. The data returned from the SP does not correspond to any database table. This is my mapping file: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="DomainModel" namespace="DomainModel.Entities"> <sql-query name="DoSomething"> <return class="SomeClass"> <return-property name="ID" column="ID"/> </return> exec [dbo].[sp_doSomething] </sql-query> </hibernate-mapping> Here is my domain class: namespace DomainModel.Entities { public class SomeClass { public SomeClass() { } public virtual Guid ID { get; set; } } } When I run the code, it fails with Exception Details: NHibernate.HibernateException: Errors in named queries: {DoSomething} at line 80 Line 78: config.Configure(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "NHibernate.config")); Line 79: Line 80: g_sessionFactory = config.BuildSessionFactory(); When I debug into NHibernate code, it seems that SomeClass is not added to the persister dictionary because there isn't a class mapping (only sql-query) defined in the hbm.xml. And later on, in the CheckNamedQueries function, it is not able to find the persister for SomeClass. I've checked all the obvious things (e.g. making the hbm an embedded resource) and my code isn't too much different from other samples I found on the web, but somehow I just can't get it working. Any idea how I can resolve this issue?
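    As far as I can tell from debugging, <return class="SomeClass"> needs SomeClass to have a persister, i.e. a class mapping of its own, which is exactly what's missing here. One direction I'm experimenting with (not verified, and it bypasses the named query entirely) is projecting the stored procedure's result set straight into the unmapped class with a result transformer:

        // Sketch: no hbm mapping for SomeClass; map the "ID" column by alias instead
        // (AliasToBeanResultTransformer lives in NHibernate.Transform)
        var results = session
            .CreateSQLQuery("exec [dbo].[sp_doSomething]")
            .AddScalar("ID", NHibernateUtil.Guid)
            .SetResultTransformer(new NHibernate.Transform.AliasToBeanResultTransformer(typeof(SomeClass)))
            .List<SomeClass>();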


  • Is it possible to write an IIS URL Rewrite Rule that examines content of HTTP Post?

    - by JohnRudolfLewis
    I need to split a portion of functionality away from a legacy ISAPI dll onto another solution (ASP.NET MVC most likely). IIS7's URL Rewrite sounded like a perfect candidate for the job, but it turns out I cannot find a way to configure the rules the way I need. I need to write a rule that examines the content of the HTTP post for a particular value. i.e. <form method="post" action="legacy_isapi.dll"> <input name="foo" /> </form> if (Request.Form["foo"] == "bar") Context.RewritePath("/some_other_url/on_the_same_machine/foo/bar"); As a proof of concept, I was able to create an IHttpModule that examines context.Request.Form collection and performs a rewrite when certain parameters are present. I installed this module in my website, and it works. Rather than a custom module, however, I'd rather extend the existing URL Rewrite module to support examining the content of the HTTP Post as one of its rules. Is this possible?
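    For context, the proof-of-concept module mentioned above is essentially this shape (the form field, value and rewrite target are the placeholder ones from the example):

        using System;
        using System.Web;

        // Sketch of the proof-of-concept: inspect the posted form and rewrite the path
        public class FormRewriteModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    var context = ((HttpApplication)sender).Context;
                    // Reading Request.Form here buffers the request body
                    if (context.Request.HttpMethod == "POST" &&
                        context.Request.Form["foo"] == "bar")
                    {
                        context.RewritePath("/some_other_url/on_the_same_machine/foo/bar");
                    }
                };
            }

            public void Dispose() { }
        }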

