Search Results

Search found 18142 results on 726 pages for 'wcf configuration'.

Page 340/726

  • PowerPoint PlugIn does not read defaults from .dll.config file

    - by Nick T
    I'm working on a very simple PowerPoint plugin, and I'm quite a bit stumped. In my settings.settings file, I have configured a setting "StartPath", which references where the PowerPoint plugin will navigate to using a Browser component. After I compile the application and run the installer generated by the Setup project, the application is installed and uses the default value in the settings file. However, if I edit the application.dll.config file, the plugin still uses the old values. How can I set things up such that the plugin references the .dll.config file and not its default settings? The code to access the settings is listed below, including the other variants I have tried:

        //Attempt 1
        string location = MyApplication.Properties.Settings.Default.StartPath;

        //Attempt 2
        string location = ConfigurationManager.AppSettings["StartPath"];

        //Attempt 3: Configuration element is inaccessible due to its protection level
        string applicationName = Environment.GetCommandLineArgs()[0] + ".exe";
        string exePath = System.IO.Path.Combine(Environment.CurrentDirectory, applicationName);
        Configuration config = ConfigurationManager.OpenExeConfiguration(exePath);
        string location = config.AppSettings["StartPath"];
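    One approach worth trying (a sketch, not taken from the original post): an Office add-in runs inside PowerPoint.exe, whose own configuration is what the runtime loads, so the add-in's .dll.config is never read automatically; opening the file explicitly sidesteps that. The section name "MyAddIn.Properties.Settings" below is an assumption standing in for whatever section settings.settings actually generated in the project:

        using System.Configuration;
        using System.Reflection;

        public static class AddInSettings
        {
            // Sketch: explicitly open the .dll.config that sits next to the loaded assembly.
            public static string ReadStartPath()
            {
                string assemblyPath = Assembly.GetExecutingAssembly().Location;
                var map = new ExeConfigurationFileMap { ExeConfigFilename = assemblyPath + ".config" };
                Configuration config =
                    ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None);

                // Designer-generated settings live under <applicationSettings>/<userSettings>,
                // not <appSettings>, so read the generated section rather than AppSettings.
                var section = (ClientSettingsSection)
                    config.GetSection("applicationSettings/MyAddIn.Properties.Settings");
                SettingElement element = section == null ? null : section.Settings.Get("StartPath");
                return element == null ? null : element.Value.ValueXml.InnerText;
            }
        }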

    Read the article

  • Exclude subdirectory from rewrite rule in web.config

    - by Clog
    This question comes up often, but I can only find solutions for PHP, Apache, .htaccess etc., not for web.config. I would like my pages to be served over HTTP rather than HTTPS, except for forms within certain subdirectories. I have created the following web.config file, but how do I exclude a subdirectory called forms?

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Force all to HTTP" stopProcessing="true">
                  <match url="(.*)" />
                  <conditions>
                    <add input="{HTTPS}" pattern="on" ignoreCase="true" />
                  </conditions>
                  <action type="Redirect" redirectType="Found" url="http://www.mysite.com/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    Many thanks, all you clever clogs.
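    One possible tweak (a sketch, assuming the subdirectory is /forms directly under the site root): adding a negated condition makes the HTTPS-to-HTTP rule skip anything under /forms, so those requests are left on HTTPS instead of being redirected.

        <conditions>
          <add input="{HTTPS}" pattern="on" ignoreCase="true" />
          <add input="{REQUEST_URI}" pattern="^/forms/" negate="true" ignoreCase="true" />
        </conditions>

    If requests to the forms must also be forced onto HTTPS rather than merely left alone, a second rule matching the forms path with a redirect to https could be placed above this one.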

    Read the article

  • NHibernate auditing in disconnected mode

    - by Ciaran
    I'm developing an app with a Silverlight UI, transferring my domain objects over WCF and persisting them via NHibernate. I'm therefore working with NHibernate in a disconnected mode. I'm already using the NHibernate PreUpdate and PreInsert EventListeners to perform some metadata operations (updating Create/Update date, created/updated by etc) and they are working fine. I now have a requirement to perform data logging on some of my domain objects. So I will need to have an audit table that has a before-save and after-save state of certain entities. I had wanted to use the @event.Persister.OldState and @event.Persister.NewState to perform this logging, but because I am in a disconnected scenario (using different Sessions from when data is retrieved to when it is persisted), @event.Persister.OldState is null when I am saving my changes back to the database. How is anyone else doing data logging in a disconnected scenario with NHibernate?
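    One direction that may work (a sketch, not the approach from the original post; AuditLogger is a made-up helper): when OldState is null because the entity arrived detached over WCF, load the row that is currently in the database through a second, short-lived session and use that as the "before" image.

        using NHibernate;
        using NHibernate.Event;

        public class AuditUpdateListener : IPreUpdateEventListener
        {
            public bool OnPreUpdate(PreUpdateEvent @event)
            {
                object[] oldState = @event.OldState;
                if (oldState == null)
                {
                    // Disconnected update: fetch the persisted copy ourselves.
                    using (ISession session = @event.Session.Factory.OpenSession())
                    {
                        object dbCopy = session.Get(@event.Entity.GetType(), @event.Id);
                        if (dbCopy != null)
                        {
                            oldState = @event.Persister.GetPropertyValues(dbCopy, EntityMode.Poco);
                        }
                    }
                }

                AuditLogger.Write(@event.Entity, oldState, @event.State);
                return false; // do not veto the update
            }
        }

    The extra SELECT per audited update is the price of the disconnected model; if that becomes noticeable, capturing the "before" image on the client and sending it along with the entity is the usual alternative.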

    Read the article

  • Workflow Foundation: Asynchronous operations (lengthy network I/O)

    - by StormianRootSolver
    I have to create an application that will be started a few times per day (it's non-interactive). To operate, it needs LARGE amounts of data from the Internet (megabytes) via a rather slow connection, so the WCF service calls take quite some time. At the same time, it needs to perform local calculations and has a sophisticated initialization process. So, what I want to do is create a workflow that asynchronously fetches the data (which takes a few minutes) while already initializing and calculating locally. Is there a way to accomplish this?
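    One way this is commonly done in WF4 (a sketch, not a definitive design; DataServiceClient.FetchAll is a hypothetical WCF proxy call): wrap the slow call in an AsyncCodeActivity and place it in one branch of a Parallel activity, with the local initialization in the other branch, so the workflow scheduler keeps working while the download is in flight.

        using System;
        using System.Activities;

        public sealed class FetchDataActivity : AsyncCodeActivity<byte[]>
        {
            protected override IAsyncResult BeginExecute(
                AsyncCodeActivityContext context, AsyncCallback callback, object state)
            {
                // Run the long WCF call off the workflow thread.
                Func<byte[]> download = () => new DataServiceClient().FetchAll();
                context.UserState = download;
                return download.BeginInvoke(callback, state);
            }

            protected override byte[] EndExecute(AsyncCodeActivityContext context, IAsyncResult result)
            {
                var download = (Func<byte[]>)context.UserState;
                return download.EndInvoke(result);
            }
        }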

    Read the article

  • Maven + AspectJ - all steps to configure it

    - by Alice
    I have a problem with applying aspects to my Maven project. Probably I am missing something, so I've made a list of steps - could you please check whether it is correct? Let's say ProjectA contains an aspect class and ProjectB contains the classes that should be changed by the aspects.

    1. Create Maven project ProjectA with the AspectJ class; add the AspectJ plugin and dependency.
    2. Add ProjectA as a dependency to ProjectB's pom.xml.
    3. Add the following plugin to ProjectB's pom.xml:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>aspectj-maven-plugin</artifactId>
          <version>1.4</version>
          <executions>
            <execution>
              <goals>
                <goal>compile</goal>
                <goal>test-compile</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <source>${maven.compiler.source}</source>
            <target>${maven.compiler.target}</target>
            <aspectLibraries>
              <aspectLibrary>
                <groupId>ProjectA</groupId>
                <artifactId>ProjectA</artifactId>
              </aspectLibrary>
            </aspectLibraries>
          </configuration>
        </plugin>

    4. Add the AspectJ dependency.

    After all these steps my problem is that during compilation I get:

        [WARNING] advice defined in AspectE has not been applied [Xlint:adviceDidNotMatch]

    And then when I run my program:

        Exception in thread "FeatureExcutionThread" java.lang.NoClassDefFoundError: AspectE

    Read the article

  • Is multithreading the right way to go for my case?

    - by Julien Lebosquain
    Hello, I'm currently designing a multi-client / server application. I'm using plain good old sockets because WCF or similar technology is not what I need. Let me explain: it isn't the classical case of a client simply calling a service; all clients can 'interact' with each other by sending a packet to the server, which will then do some action, and possible re-dispatch an answer message to one or more clients. Although doable with WCF, the application will get pretty complex with hundreds of different messages. For each connected client, I'm of course using asynchronous methods to send and receive bytes. I've got the messages fully working, everything's fine. Except that for each line of code I'm writing, my head just burns because of multithreading issues. Since there could be around 200 clients connected at the same time, I chose to go the fully multithreaded way: each received message on a socket is immediately processed on the thread pool thread it was received, not on a single consumer thread. Since each client can interact with other clients, and indirectly with shared objects on the server, I must protect almost every object that is mutable. I first went with a ReaderWriterLockSlim for each resource that must be protected, but quickly noticed that there are more writes overall than reads in the server application, and switched to the well-known Monitor to simplify the code. So far, so good. Each resource is protected, I have helper classes that I must use to get a lock and its protected resource, so I can't use an object without getting a lock. Moreover, each client has its own lock that is entered as soon as a packet is received from its socket. It's done to prevent other clients from making changes to the state of this client while it has some messages being processed, which is something that will happen frequently. Now, I don't just need to protect resources from concurrent accesses. I must keep every client in sync with the server for some collections I have. One tricky part that I'm currently struggling with is the following: I have a collection of clients. Each client has its own unique ID. When a client connects, it must receive the IDs of every connected client, and each one of them must be notified of the newcomer's ID. When a client disconnects, every other client must know it so that its ID is no longer valid for them. Every client must always have, at a given time, the same clients collection as the server so that I can assume that everybody knows everybody. This way if I'm sending a message to client #1 telling "Client #2 has done something", I know that it will always be correctly interpreted: Client 1 will never wonder "but who is Client 2 anyway?". 
    My first attempt at handling the connection of a new client (let's call it X) was this pseudo-code (remember that newClient is already locked here):

        lock (clients)
        {
            foreach (var client in clients)
            {
                lock (client)
                {
                    client.Send("newClient with id X has connected");
                }
            }
            clients.Add(newClient);
            newClient.Send("the list of other clients");
        }

    Now imagine that at the same time another client has sent a packet that translates into a message that must be broadcast to every connected client. The pseudo-code will be something like this (remember that the current client - let's call it Y - is already locked here):

        lock (clients)
        {
            foreach (var client in clients)
            {
                lock (client)
                {
                    client.Send("something");
                }
            }
        }

    An obvious deadlock occurs here: on one thread X is locked, the clients lock has been entered and the loop through the clients has started, and at some moment it must get Y's lock... which is already acquired on the second thread, itself waiting for the clients collection lock to be released! This is not the only case like this in the server application. There are other collections which must be kept in sync with the clients, some properties on a client can be changed by another one, etc. I tried other types of locks, lock-free mechanisms and a bunch of other things. Either there were obvious deadlocks when I used too many locks for safety, or obvious race conditions otherwise. When I finally found a good middle ground between the two, it usually came with very subtle race conditions, deadlocks and other multi-threading issues... my head hurts very quickly, since for every single line of code I write I have to review almost the whole application to ensure everything will behave correctly with any number of threads. So here's my final question: how would you resolve this specific case, and the general case, and more importantly: am I going the wrong way here? I have few problems with the .NET framework, C#, simple concurrency or algorithms in general. Still, I'm lost here. I know I could use only one thread to process the incoming requests and everything would be fine. However, that won't scale well at all with more clients... but I'm thinking more and more about going this simple way. What do you think? Thanks in advance to you, StackOverflow people who have taken the time to read this huge question. I really had to explain the whole context if I want to get some help.
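    One pattern that sidesteps this particular lock-ordering trap (a sketch, not something from the original post; the Client type, Send method and field names are assumptions): never hold the collection lock while taking an individual client's lock - snapshot the collection under its own lock, then talk to clients one at a time. It trades the "everyone is notified atomically" guarantee for freedom from the deadlock, so it only fits if that relaxation is acceptable; a single dispatcher thread draining a message queue is the other common answer.

        // Sketch: copy the collection under its lock, then lock clients one by one.
        List<Client> snapshot;
        lock (clients)
        {
            clients.Add(newClient);
            snapshot = new List<Client>(clients);
        }

        foreach (var client in snapshot)
        {
            if (ReferenceEquals(client, newClient)) continue;
            lock (client)                 // only one client lock is ever held here
            {
                client.Send("newClient with id X has connected");
            }
        }

        lock (newClient)
        {
            newClient.Send("the list of other clients");
        }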

    Read the article

  • Dynamic form creation from XSD in ASP.NET

    - by CitadelCSAlum
    I know there is a lot of documentation on the internet about going from XSD to forms, but I have not been able to come across anything that is straightforward enough for my situation. I am working with a WCF web service that is going to fetch an .xsd XML schema, and must return the HTML of a form based on that schema. Are there any third-party tools that can help out with this, and if so, what are they? If not, do you have any suggestions, better methods, etc. for how this can be done?
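    As a starting point (a minimal sketch, not a complete generator): the schema can be walked with System.Xml.Schema and turned into labelled inputs; a real implementation would recurse into complex types and choose controls based on each element's simple type.

        using System.Text;
        using System.Xml;
        using System.Xml.Schema;

        public static class XsdFormBuilder
        {
            public static string BuildFormHtml(string xsdPath)
            {
                var schemaSet = new XmlSchemaSet();
                using (XmlReader reader = XmlReader.Create(xsdPath))
                {
                    schemaSet.Add(XmlSchema.Read(reader, null));
                }
                schemaSet.Compile();

                // Emit one text input per global element; nested structure is ignored in this sketch.
                var html = new StringBuilder("<form method=\"post\">");
                foreach (XmlSchemaElement element in schemaSet.GlobalElements.Values)
                {
                    html.AppendFormat("<label>{0}</label><input name=\"{0}\" type=\"text\" /><br/>",
                                      element.Name);
                }
                html.Append("<input type=\"submit\" /></form>");
                return html.ToString();
            }
        }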

    Read the article

  • After assembling a jar - No Persistence provider for EntityManager named ...

    - by alabalaa
    I'm developing a standalone application and it works fine when I start it from my IDE (IntelliJ IDEA), but after creating an uberjar and starting the application from it, an exception is thrown saying "No Persistence provider for EntityManager named testPU". Here is my persistence.xml, which is placed under the META-INF directory:

        <persistence-unit name="testPU" transaction-type="RESOURCE_LOCAL">
          <provider>org.hibernate.ejb.HibernatePersistence</provider>
          <class>test.model.Configuration</class>
          <properties>
            <property name="hibernate.connection.username" value="root"/>
            <property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver"/>
            <property name="hibernate.connection.password" value="root"/>
            <property name="hibernate.connection.url" value="jdbc:mysql://localhost:3306/test"/>
            <property name="hibernate.show_sql" value="true"/>
            <property name="hibernate.dialect" value="org.hibernate.dialect.MySQLInnoDBDialect"/>
            <property name="hibernate.c3p0.timeout" value="300"/>
            <property name="hibernate.hbm2ddl.auto" value="update"/>
          </properties>
        </persistence-unit>

    And here is how I'm creating the entity manager factory:

        emf = Persistence.createEntityManagerFactory("testPU");

    I'm using Maven and tried the assembly plugin with its default configuration. I don't have much experience with assembling jars and I don't know if I'm missing something, so if you have any ideas I'll be glad to hear them.

    Read the article

  • What Windows Form control would be a good fit for this use case?

    - by Sergio Tapia
    I'm going to create an open source help desk solution, free of charge, for small to medium businesses to use. I'm currently working on the client application. I want to have a list of tickets that have been opened by the user, so it would be like a table TicketsByUser:

        Ticket Number | Type     | Description    | Date       | Handled?
        123456        | Hardware | My mouse broke | 10/20/2010 | No
        123456        | Hardware | My mouse broke | 10/20/2010 | Yes

    I was thinking of using ListView because of its name, but I have zero experience with it, so maybe it's not what I'm looking for. I'm going to be pulling the data from a WCF service, which in turn pulls it from an MS SQL database.
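    For that table shape, a ListView in Details view (or a DataGridView bound to the service result, if in-place sorting and editing are wanted) is a reasonable fit. A sketch of the ListView route follows; the control names, the ticket properties and the WCF proxy call are all assumptions:

        using System.Windows.Forms;

        // Sketch: build the ticket list in the form's initialization code.
        // helpDeskClient.GetTicketsByUser is a hypothetical WCF proxy call.
        private void LoadTickets(string userName)
        {
            var ticketList = new ListView { View = View.Details, FullRowSelect = true, Dock = DockStyle.Fill };
            ticketList.Columns.Add("Ticket Number", 100);
            ticketList.Columns.Add("Type", 80);
            ticketList.Columns.Add("Description", 220);
            ticketList.Columns.Add("Date", 90);
            ticketList.Columns.Add("Handled?", 70);

            foreach (var ticket in helpDeskClient.GetTicketsByUser(userName))
            {
                var row = new ListViewItem(ticket.Number.ToString());
                row.SubItems.Add(ticket.Type);
                row.SubItems.Add(ticket.Description);
                row.SubItems.Add(ticket.Date.ToShortDateString());
                row.SubItems.Add(ticket.Handled ? "Yes" : "No");
                ticketList.Items.Add(row);
            }

            Controls.Add(ticketList);
        }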

    Read the article

  • public class ImageHandler : IHttpHandler

    - by Ken
    cmd.Parameters.AddWithValue("@id", new system.Guid (imageid)); What using System reference would this require? Here is the handler: using System; using System.Collections.Specialized; using System.Web; using System.Web.Configuration; using System.Web.Security; using System.Globalization; using System.Configuration; using System.Data.SqlClient; using System.Data; using System.IO; using System.Web.Profile; using System.Drawing; public class ImageHandler : IHttpHandler { public void ProcessRequest(HttpContext context) { string imageid; if (context.Request.QueryString["id"] != null) imageid = (context.Request.QueryString["id"]); else throw new ArgumentException("No parameter specified"); context.Response.ContentType = "image/jpeg"; Stream strm = ShowProfileImage(imageid.ToString()); byte[] buffer = new byte[8192]; int byteSeq = strm.Read(buffer, 0, 8192); while (byteSeq > 0) { context.Response.OutputStream.Write(buffer, 0, byteSeq); byteSeq = strm.Read(buffer, 0, 8192); } //context.Response.BinaryWrite(buffer); } public Stream ShowProfileImage(String imageid) { string conn = ConfigurationManager.ConnectionStrings["MyConnectionString1"].ConnectionString; SqlConnection connection = new SqlConnection(conn); string sql = "SELECT image FROM Profile WHERE UserId = @id"; SqlCommand cmd = new SqlCommand(sql, connection); cmd.CommandType = CommandType.Text; cmd.Parameters.AddWithValue("@id", new system.Guid (imageid));//Failing Here!!!! connection.Open(); object img = cmd.ExecuteScalar(); try { return new MemoryStream((byte[])img); } catch { return null; } finally { connection.Close(); } } public bool IsReusable { get { return false; } } }
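    For what it's worth, no additional using directive should be needed here - the type is System.Guid, and "using System;" is already at the top of the file. The failing line most likely just needs the namespace capitalised, since C# is case-sensitive:

        cmd.Parameters.AddWithValue("@id", new System.Guid(imageid));
        // or, because System is already imported:
        cmd.Parameters.AddWithValue("@id", new Guid(imageid));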

    Read the article

  • Advice: Python Framework Server/Worker Queue management (not Website)

    - by Muppet Geoff
    I am looking for some advice/opinions of which Python Framework to use in an implementation of multiple 'Worker' PCs co-ordinated from a central Queue Manager. For completeness, the 'Worker' PCs will be running Audio Conversion routines (which I do not need advice on, and have standalone code that works). The Audio conversion takes a long time, and I need to co-ordinate an arbitrary number of the 'Workers' from a central location, handing them conversion tasks (such as where to get the source files, or where to ask for the job configuration) with them reporting back some additional info, such as the runtime of the converted audio etc. At present, I have a script that makes a webservice call to get the 'configuration' for a conversion task, based on source files located on the worker already (we manually copy the source files to the worker, and that triggers a conversion routine). I want to change this, so that we can distribute conversion tasks ("Oy you, process this: xxx") based on availability, and in an ideal world, based on pending tasks too. There is a chance that Workers can go offline mid-conversion (but this is not likely). All the workers are Windows based, the co-ordinator can be WIndows or Linux. I have (in my initial searches) come across the following - and I know that some are cross-dependent: Celery (with RabbitMQ) Twisted Django Using a framework, rather than home-brewing, seems to make more sense to me right now. I have a limited timeframe in which to develop this functional extension. An additional consideration would be using a Framework that is compatible with PyQT/PySide so that I can write a simple UI to display Queue status etc. I appreciate that the specifics above are a little vague, and I hope that someone can offer me a pointer or two. Again: I am looking for general advice on which Python framework to investigate further, for developing a Server/Worker 'Queue management' solution, for non-web activities (this is why DJango didn't seem the right fit).

    Read the article

  • Doxygen including methods twice in doc files

    - by Maarek
    I'm having an issue where Doxygen is adding a method twice in the documentation file. Is there a setting that stops auto-generation of documentation for methods within the .m file? For example, in the documentation I'll see something like what's below, where the first definition of + (Status *)registerUser is from the header XXXXXX.h file while the second is from XXXXXX.m.

    Header documentation:

        /**
         @brief Test Yada Yada
         @return <#(description)#>
         */
        + (Status *)registerUser;

    Output:

        + (Status *) registerUser
        Test Yada Yada.
        Returns: <#(description)#>

        + (Status *) registerUser
        <#(brief description)#> <#(comprehensive description)#>
        registerUser
        Returns: <#(description)#>
        Definition at line 24 of file XXXXXX.m.

    Here are the build-related configuration options. I've tried playing with them: EXTRACT_ALL with YES and NO, hiding undocumented members and classes.

        #---------------------------------------------------------------------------
        # Build related configuration options
        #---------------------------------------------------------------------------
        EXTRACT_ALL            = NO
        EXTRACT_PRIVATE        = NO
        EXTRACT_STATIC         = NO
        EXTRACT_LOCAL_CLASSES  = YES
        EXTRACT_LOCAL_METHODS  = NO
        EXTRACT_ANON_NSPACES   = NO
        HIDE_UNDOC_MEMBERS     = YES
        HIDE_UNDOC_CLASSES     = YES
        HIDE_FRIEND_COMPOUNDS  = NO
        HIDE_IN_BODY_DOCS      = NO
        INTERNAL_DOCS          = NO
        CASE_SENSE_NAMES       = NO
        HIDE_SCOPE_NAMES       = NO
        SHOW_INCLUDE_FILES     = YES
        FORCE_LOCAL_INCLUDES   = NO
        INLINE_INFO            = YES
        SORT_MEMBER_DOCS       = YES
        SORT_BRIEF_DOCS        = NO
        SORT_MEMBERS_CTORS_1ST = NO
        SORT_GROUP_NAMES       = NO
        SORT_BY_SCOPE_NAME     = NO

    Read the article

  • How can I manage building library projects that produce both a static lib and a dll?

    - by Scott Langham
    I've got a large Visual Studio solution with ~50 projects. There are configurations for StaticDebug, StaticRelease, Debug and Release. Some libraries are needed in both dll and static lib form; to get them, we rebuild the solution with a different configuration. The Configuration Manager window is used to set up which projects need to build in which flavours: static lib, dynamic dll or both. This can be quite tricky to manage, and it's a bit annoying to have to build the solution multiple times and select the configurations in the right order (static versions need building before non-static versions). I'm wondering whether, instead of this current scheme, it might be simpler to manage if, for the projects that need to produce both a static lib and a dynamic dll, I created two projects, e.g. CoreLib and CoreDll. I could either make both of these projects reference all the same files and build them twice, or, I'm wondering, would it be possible to build CoreLib and then get CoreDll to link against it to generate the dll? I guess my question is: do you have any advice on how to structure your projects in this kind of situation? Thanks.

    Read the article

  • Why does the Maven goal "package" include the resources in the jar, but the goal "jar:jar" doesnt?

    - by Bernhard V
    Hi, when I package my project with the Maven goal "package", the resources are included as well. They are originally located in the directory "src/main/resources". Because I want to create an executable jar and add the classpath to the manifest, I'm using maven-jar-plugin. I've configured it as follows:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-jar-plugin</artifactId>
          <version>2.2</version>
          <configuration>
            <archive>
              <manifest>
                <addClasspath>true</addClasspath>
                <mainClass>at.sozvers.stp.zpv.ekvkumsetzer.Main</mainClass>
              </manifest>
            </archive>
          </configuration>
        </plugin>

    Why won't the jar file created with "jar:jar" include my resources as well? As far as I can tell it should use the same directories as the "package" goal (which in my case are inherited from the Maven Super POM).

    Read the article

  • Forms authentication: how do you store username password in web.config?

    - by Nick G
    I'm used to using Forms Authentication with a database, but I'm writing a little internal utility and the app doesn't have a database so I want to store the username and password in web.config. However for some reason, forms authentication is still trying to access SQL Server and I can't see how to stop it doing this and pick up the credentials from web.config. What am I doing wrong? I just get the error "Failed to generate a user instance of SQL Server due to a failure in impersonating the client. The connection will be closed." Here are the relevant sections of my web.config:

        <configuration>
          <system.web>
            <authentication mode="Forms">
              <forms loginUrl="~/Login.aspx" timeout="60" name=".LoginCookie" path="/" >
                <credentials passwordFormat="Clear">
                  <user name="user1" password="[pass]" />
                  <user name="user2" password="[pass]" />
                </credentials>
              </forms>
            </authentication>
            <authorization>
              <deny users="?" />
            </authorization>
          </system.web>
        </configuration>
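    A likely culprit (an educated guess, not something stated in the question): validating the login through the Membership API or the built-in Login control, which falls back to the default SQL Server (ASPNETDB) providers and produces exactly that "user instance" error. With <credentials> in web.config, the check has to go through FormsAuthentication.Authenticate instead, roughly like this (control names are assumptions):

        protected void btnLogin_Click(object sender, EventArgs e)
        {
            // Validates against the <credentials> section, not the membership database.
            if (FormsAuthentication.Authenticate(txtUser.Text, txtPassword.Text))
            {
                FormsAuthentication.RedirectFromLoginPage(txtUser.Text, chkRemember.Checked);
            }
            else
            {
                lblError.Text = "Invalid user name or password.";
            }
        }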

    Read the article

  • Hadoop File Read

    - by user3684584
    Hadoop Distributed Cache Wordcount example in hadoop 2.2.0. Copied file into hdfs filesystem to be used inside setup of mapper class. protected void setup(Context context) throws IOException,InterruptedException { Path[] uris = DistributedCache.getLocalCacheFiles(context.getConfiguration()); cacheData=new HashMap(); for(Path urifile: uris) { try { BufferedReader readBuffer1 = new BufferedReader(new FileReader(urifile.toString())); String line; while ((line=readBuffer1.readLine())!=null) { System.out.println("**************"+line); cacheData.put(line,line); } readBuffer1.close(); } catch (Exception e) { System.out.println(e.toString()); } } } Inside Driver Main class Configuration conf = new Configuration(); String[] otherArgs = new GenericOptionsParser(conf,args).getRemainingArgs(); if (otherArgs.length != 3) { System.err.println("Usage: wordcount <in> <out>"); System.exit(2); } Job job = new Job(conf, "word_count"); job.setJarByClass(WordCount.class); job.setMapperClass(Map.class); job.setReducerClass(Reduce.class); job.setOutputKeyClass(Text.class); job.setOutputValueClass(IntWritable.class); FileInputFormat.addInputPath(job, new Path(otherArgs[0])); Path outputpath=new Path(otherArgs[1]); outputpath.getFileSystem(conf).delete(outputpath,true); FileOutputFormat.setOutputPath(job,outputpath); System.out.println("CachePath****************"+otherArgs[2]); DistributedCache.addCacheFile(new URI(otherArgs[2]),job.getConfiguration()); System.exit(job.waitForCompletion(true) ? 0 : 1); But getting exception java.io.FileNotFoundException: file:/home/user12/tmp/mapred/local/1408960542382/cache (No such file or directory) So Cache functionality not working properly. Any Idea ?

    Read the article

  • Proper structure for dependency injection (using Guice)

    - by David B.
    I would like some suggestions and feedback on the best way to structure dependency injection for a system with the structure described below. I'm using Guice and thus would prefer solutions centered around it's annotation-based declarations, not XML-heavy Spring-style configuration. Consider a set of similar objects, Ball, Box, and Tube, each dependent on a Logger, supplied via the constructor. (This might not be important, but all four classes happen to be singletons --- of the application, not Gang-of-Four, variety.) A ToyChest class is responsible for creating and managing the three shape objects. ToyChest itself is not dependent on Logger, aside from creating the shape objects which are. The ToyChest class is instantiated as an application singleton in a Main class. I'm confused about the best way to construct the shapes in ToyChest. I either (1) need access to a Guice Injector instance already attached to a Module binding Logger to an implementation or (2) need to create a new Injector attached to the right Module. (1) is accomplished by adding an @Inject Injector injectorfield to ToyChest, but this feels weird because ToyChest doesn't actually have any direct dependencies --- only those of the children it instantiates. For (2), I'm not sure how to pass in the appropriate Module. Am I on the right track? Is there a better way to structure this? The answers to this question mention passing in a Provider instead of using the Injector directly, but I'm not sure how that is supposed to work. EDIT: Perhaps a more simple question is: when using Guice, where is the proper place to construct the shapes objects? ToyChest will do some configuration with them, but I suppose they could be constructed elsewhere. ToyChest (as the container managing them), and not Main, just seems to me like the appropriate place to construct them.

    Read the article

  • Spring.NET custom namespace parser

    - by ListenToRick
    I have a custom parser which looks like this:

        [NamespaceParser(
            Namespace = "http://mysite/schema/cache",
            SchemaLocationAssemblyHint = typeof(CacheNamespaceParser),
            SchemaLocation = "/cache.xsd"
        )]
        public class CacheNamespaceParser : NamespaceParserSupport
        {
            public override void Init()
            {
                RegisterObjectDefinitionParser("cache", new CacheParser());
            }
        }

        public class CacheParser : AbstractSimpleObjectDefinitionParser
        {
            protected override Type GetObjectType(XmlElement element)
            {
                return typeof(CacheDefinition);
            }

            protected override void DoParse(XmlElement element, ObjectDefinitionBuilder builder)
            {
            }

            protected override bool ShouldGenerateIdAsFallback
            {
                get { return true; }
            }
        }

    In the web.config I have the following configuration:

        <spring>
          <parsers>
            <parser type="Spring.Data.Config.DatabaseNamespaceParser, Spring.Data"/>
            <parser type="App.Web.CacheNamespaceParser, WebApp" />
          </parsers>

    When I run the project I get the following error: "An error occurred creating the configuration section handler for spring/parsers: Invalid resource name. Name has to be in 'assembly:<assemblyName>/<namespace>/<resourceName>' format." I put a breakpoint in the CacheNamespaceParser Init method and it is called. If I remove my parser from the web.config, all is well! Any ideas what's wrong?

    Read the article

  • NHibernate SessionFactory instantiated more than once in a web service

    - by Manuel
    Hello, I have a web service that uses NHibernate. I have a singleton pattern in the repository library, but on each call to the service it creates a new instance of the session factory, which is very expensive. What can I do?

        #region Attributes
        /// <summary>
        /// Session
        /// </summary>
        private ISession miSession;

        /// <summary>
        /// Session Factory
        /// </summary>
        private ISessionFactory miSessionFactory;

        private Configuration miConfiguration = new Configuration();

        private static readonly ILog log = LogManager.GetLogger(typeof(NHibernatePersistencia).Name);

        private static IRepositorio Repositorio;
        #endregion

        #region Constructor
        private NHibernatePersistencia()
        {
            //miConfiguration.Configure("hibernate.cfg.xml");
            try
            {
                miConfiguration.Configure();
                this.miSessionFactory = miConfiguration.BuildSessionFactory();
                this.miSession = this.SessionFactory.OpenSession();
                log.Debug("Se carga NHibernate");
            }
            catch (Exception ex)
            {
                log.Error("No se pudo cargar Nhibernate " + ex.Message);
                throw ex;
            }
        }

        public static IRepositorio Instancia
        {
            get
            {
                if (Repositorio == null)
                {
                    Repositorio = new NHibernatePersistencia();
                }
                return Repositorio;
            }
        }
        #endregion

        #region Properties
        /// <summary>
        /// NHibernate session
        /// </summary>
        public ISession Session
        {
            get { return miSession.SessionFactory.GetCurrentSession(); }
        }

        /// <summary>
        /// NHibernate session
        /// </summary>
        public ISessionFactory SessionFactory
        {
            get { return this.miSessionFactory; }
        }
        #endregion

    How can I create a single instance for all services?
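    For comparison, a common shape for this (a sketch, not the original code; NHibernateHelper is a made-up name): keep the ISessionFactory in a static field so it is built exactly once per application domain, and hand out only ISessions per call. If the factory is still rebuilt on every request after that, the usual suspects are application pool recycling or the helper living inside a per-request object.

        using NHibernate;
        using NHibernate.Cfg;

        public static class NHibernateHelper
        {
            private static readonly ISessionFactory factory;

            static NHibernateHelper()
            {
                var cfg = new Configuration();
                cfg.Configure();                     // reads hibernate.cfg.xml once
                factory = cfg.BuildSessionFactory(); // expensive; done a single time
            }

            public static ISession OpenSession()
            {
                return factory.OpenSession();        // cheap; one per service call
            }
        }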

    Read the article

  • Webservice parameter value encoding

    - by dnolan
    We have a web service created in WCF, presenting itself over a basicHttpBinding. One of the parameters is a string which takes an XML string. Looking at the SOAP the client generates to send to the web service, the XML is encoded, with all < and > characters swapped into &lt; and &gt;. My question is: is this all that is encoded, or has the parameter been run through HtmlEncode so that other entities would be swapped as well? The reason I ask is that we are now submitting with our client to a 3rd-party web service and they would like to know the details of what is encoded.
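    For reference, basicHttpBinding writes string parameters through an XmlWriter, so the transformation is essentially plain XML escaping of element content (&amp;, &lt; and &gt;), not HtmlEncode; other characters, including accented ones and quotes inside text, go through untouched. A small sketch to see the behaviour locally:

        using System;
        using System.Text;
        using System.Xml;

        class EscapingDemo
        {
            static void Main()
            {
                var sb = new StringBuilder();
                using (XmlWriter writer = XmlWriter.Create(sb))
                {
                    writer.WriteElementString("payload", "<order id=\"1\">café & crème</order>");
                }
                Console.WriteLine(sb.ToString());
                // roughly: <payload>&lt;order id="1"&gt;café &amp; crème</payload>
            }
        }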

    Read the article

  • C#: why does my unit test have this strange behaviour?

    - by 5YrsLaterDBA
    I have a class to encrypt the connection string:

        public class SKM
        {
            private string connStrName = "AndeDBEntities";

            internal void encryptConnStr()
            {
                if (isConnStrEncrypted()) return;
                ...
            }

            private bool isConnStrEncrypted()
            {
                bool status = false;
                // Open app.config of executable.
                System.Configuration.Configuration config =
                    ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
                // Get the connection string from the app.config file.
                string connStr = config.ConnectionStrings.ConnectionStrings[connStrName].ConnectionString;
                status = !(connStr.Contains("provider"));
                Log.logItem(LogType.DebugDevelopment, "isConnStrEncrypted", "SKM::isConnStrEncrypted()",
                            "isConnStrEncrypted=" + status);
                return status;
            }
        }

    The code above works fine in my application, but not in my unit test project. In the unit test project I test the encryptConnStr() method, which calls the isConnStrEncrypted() method. Then a null reference exception is thrown at this line:

        string connStr = config.ConnectionStrings.ConnectionStrings[connStrName].ConnectionString;

    I have to use an index like this to make the unit test pass:

        string connStr = config.ConnectionStrings.ConnectionStrings[0].ConnectionString;

    I remember it worked several days ago when I added the unit test, but now it gives me an error. The unit test is not integrated with our daily automated build yet. We only have ONE connection string. It works in the product but not in the unit test, and I don't know why. Can anybody explain this to me?
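    One explanation worth checking (an assumption based on the symptoms, not something stated above): under a test runner, OpenExeConfiguration(ConfigurationUserLevel.None) opens the test host's configuration rather than the application's app.config, so the "AndeDBEntities" entry isn't there and the named indexer returns null, while index 0 happens to hit whatever connection string the host does define. A sketch of one workaround is to point at the application's config file explicitly; the file name below is hypothetical and the file would need to be deployed alongside the tests:

        var map = new ExeConfigurationFileMap { ExeConfigFilename = "AndeDB.exe.config" };
        System.Configuration.Configuration config =
            ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None);

        ConnectionStringSettings entry = config.ConnectionStrings.ConnectionStrings["AndeDBEntities"];
        string connStr = entry != null ? entry.ConnectionString : null;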

    Read the article

  • Why is there so much XML in Java these days?

    - by BD at Rivenhill
    This is really more of a philosophy/design issue. I did some work in Java back in the middle 90's and again in the early 2000's and now I'm coming back to it after spending a lot of time in C/C++ and it seems like there was an explosion of XML dependency while I was gone. Major build system tools like ant and maven depend on XML for their configuration, but I'm actually more concerned with all the frameworks, such as Spring, Hibernate, etc. My experience is that powerful supporting libraries like these are where a developer can really get leverage for building programs with lots of features without writing a lot of code, but it really seems like I'm getting one language for the price of two here. I write a bunch of Java classes, but then I also write a bunch of XML files to glue them together. The things that get done in the XML are things that I can see reasonable ways of doing in straight code without the middleman, and they don't really seem to be treated exactly like configuration files: they change rarely and they end up getting committed to source code control like the Java code itself, but they are distributed with the resulting application and need to be unpacked and installed in the classpath in order to get the application to work. I'm working with server applications that are not web-based, so maybe the domain is a bit different from what most people are doing, but I just can't help feeling that I must be doing something wrong here. Can someone point me to a good source of information for why these design choices were made and what problems they are meant to solve so that I can analyze my own experiences in this context?

    Read the article

  • Is the Subversion 'stack' a realistic alternative to Team Foundation Server?

    - by Robert S.
    I'm evaluating Microsoft Team Foundation Server for my customer, who currently uses Visual SourceSafe and nothing else. They have explicitly expressed a desire to implement a more rigid and process-driven environment as their application is in production and they have future releases to consider. The particular areas I'm trying to cover are: Configuration management (e.g., source control) Change management (workflow and doco for change requests, tasks) Release management (builds and deployments) Incident and problem management (issues and bugs) Document management (similar to source control, but available via web) Code analysis constraints on check-ins A testing framework Reporting Visual Studio 2008 integration TFS does all of these things quite well, but it's expensive and complex to maintain, and the inexpensive Workgroup edition doesn't scale. We don't get TFS as part of our MSDN subscription. Those problems can be overcome, but before I tell my customer to go the TFS route, which in itself isn't a terrible thing, I wanted to evaluate the alternatives. I know Subversion is often suggested for its configuration management/source control, but what about the other areas? Would a combination of Subversion/NUnit/Wiki/CruiseControl/NAnt/something else satisfy all of these requirements? What tools do I need to include in my evaluation? Or should I just bite the bullet and go with TFS since we're already invested in the Microsoft stack?

    Read the article

  • LINQ EF not saving to database...

    - by Keith Barrows
    I guess this is a continuation of the last question I asked: http://stackoverflow.com/questions/2587542/bulk-insert-and-update-with-ado-net-entity-framework. I am not getting any errors while doing inserts yet no data is actually going into my DB. My DB is a SDF file (SQL CE). Any ideas what to check? My app.config looks like: <?xml version="1.0" encoding="utf-8"?> <configuration> <configSections> </configSections> <connectionStrings> <add name="Lab_Use_Billing.Properties.Settings.LabUseConnectionString" connectionString="Data Source=|DataDirectory|\Models\LabUse.sdf" providerName="Microsoft.SqlServerCe.Client.3.5" /> <add name="LabUseEntities" connectionString="metadata=res://*/Models.LabUseEntities.csdl|res://*/Models.LabUseEntities.ssdl|res://*/Models.LabUseEntities.msl; provider=System.Data.SqlServerCe.3.5; provider connection string=&quot;Data Source=|DataDirectory|\Models\LabUse.sdf&quot;" providerName="System.Data.EntityClient" /> </connectionStrings> </configuration> TIA

    Read the article

  • .NET "Timer" would block other method calls?

    - by Ricky
    Hi guys: in ASP.NET 3.5, we suspect that a delegate triggered by a "Timer" will block other method calls. From the logs, some function calls wait for the delegate to finish before continuing. Is this true? If so, what workaround can I use? PS: the delegate contains code that uses WCF to retrieve data, plus the following:

        private void Replace<T>(ref IList<T> src, IList<T> des)
        {
            lock (src)
            {
                while (src.Count > 0)
                {
                    GC.SuppressFinalize(src.ElementAt(0));
                    src.RemoveAt(0);
                }
                GC.SuppressFinalize(src);
                src = des;
            }
        }

    Thanks a lot.
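    For what it's worth, the timer callback itself runs on a thread-pool thread and does not block other calls by itself; what serialises the other calls is contention on the lock. Note also that locking on src and then reassigning the ref parameter means later callers lock a different object than earlier ones. A sketch of a smaller critical section, using a dedicated lock object and a straight reference swap (the class and type names are placeholders):

        using System.Collections.Generic;

        public class DataCache<T>
        {
            private readonly object syncRoot = new object();
            private IList<T> items = new List<T>();

            public void Replace(IList<T> newItems)
            {
                lock (syncRoot)
                {
                    items = newItems;   // swap the reference; no per-element cleanup is needed
                }
            }
        }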

    Read the article
