Search Results

Search found 18347 results on 734 pages for 'generate password'.

  • Error after generating Entity Framework classes with the EdmGen tool

    - by loviji
    Hello. First I read this question, but that knowledge did not help solve my problem. Initially I created an edmx file in Visual Studio. It generated files named uqsModel.Designer.cs and uqsModel.edmx, located in the App_Code folder, and my web app worked normally. A connection string was generated automatically in Web.config:

        <add name="uqsEntities"
             connectionString="metadata=res://*/App_Code.uqsModel.csdl|res://*/App_Code.uqsModel.ssdl|res://*/App_Code.uqsModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=aemloviji\sqlexpress;Initial Catalog=uqs;Integrated Security=True;MultipleActiveResultSets=True&quot;"
             providerName="System.Data.EntityClient" />

    Then I generated classes with the EdmGen tool (full generation mode). It produced new files named uqsModel.cs, uqsModel.csdl, uqsModel.msl, uqsModel.ssdl and uqsViews.cs, saved them to the folder where the edmx files were located before, and removed the existing edmx files. Now, whenever the site redirects to any web page, the server-side code fails with:

        Unable to load the specified metadata resource.

    Any ideas, please?
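
    Since EdmGen emits the .csdl/.ssdl/.msl as loose files instead of embedding them as resources, the old res://*/App_Code.* paths in that connection string no longer resolve, which is a common cause of this exact error. Below is a hedged sketch of pointing the metadata keyword at the generated files instead; the folder path is an assumption (an absolute path is the safer first test), not a verified fix for this app:

        // sketch only, not a confirmed fix: build an EntityClient connection
        // string whose metadata keyword names the folder holding the loose
        // uqsModel.csdl/.ssdl/.msl files written by EdmGen
        var builder = new System.Data.EntityClient.EntityConnectionStringBuilder
        {
            Provider = "System.Data.SqlClient",
            Metadata = @"C:\inetpub\wwwroot\uqs\App_Code",   // assumed location
            ProviderConnectionString = @"Data Source=aemloviji\sqlexpress;" +
                "Initial Catalog=uqs;Integrated Security=True;MultipleActiveResultSets=True"
        };
        string entityConnectionString = builder.ToString();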

  • WCF Service Impersonation

    - by robalot
    Good day, everyone. Apparently I'm not setting up impersonation correctly for my WCF service. I do NOT want to set security on a method-by-method basis (in the actual code-behind). The service (at the moment) is open to be called by everyone on the intranet. So my questions are:

    Q: What web.config tags am I missing?
    Q: What do I need to change in the web.config to make impersonation work?

    The service web.config looks like this:

        <configuration>
          <system.web>
            <authorization>
              <allow users="?"/>
            </authorization>
            <authentication mode="Windows"/>
            <identity impersonate="true" userName="MyDomain\MyUser" password="MyPassword"/>
          </system.web>
          <system.serviceModel>
            <services>
              <service behaviorConfiguration="wcfFISH.DataServiceBehavior" name="wcfFISH.DataService">
                <endpoint address="" binding="wsHttpBinding" contract="wcfFISH.IFishData">
                  <identity>
                    <dns value="localhost"/>
                  </identity>
                </endpoint>
                <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
              </service>
            </services>
            <behaviors>
              <serviceBehaviors>
                <behavior name="wcfFISH.DataServiceBehavior">
                  <serviceMetadata httpGetEnabled="false"/>
                  <serviceDebug includeExceptionDetailInFaults="false"/>
                </behavior>
              </serviceBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>
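
    For what it's worth, WCF has a service-wide impersonation switch on the service authorization behavior, which avoids decorating methods one by one; note also that <identity impersonate="true"> in system.web governs the ASP.NET pipeline, not WCF operations. A hedged sketch, assuming a self-hosted or programmatically configured service (the config-file equivalent is a serviceAuthorization element with impersonateCallerForAllOperations="true" inside the service behavior):

        // sketch only: impersonate the caller on every operation instead of
        // per-method [OperationBehavior(Impersonation = ...)] attributes
        var host = new System.ServiceModel.ServiceHost(typeof(DataService));
        host.Authorization.ImpersonateCallerForAllOperations = true;
        host.Open();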

  • Creating A Single Generic Handler For Agatha?

    - by David
    I'm using the Agatha request/response library (and StructureMap, as utilized by Agatha 1.0.5.0) for a service layer that I'm prototyping, and one thing I've noticed is the large number of handlers that need to be created. It generally makes sense that any request/response type pair would need its own handler. However, as this scales to a large enterprise environment, that's going to be A LOT of handlers.

    What I've started doing is dividing up the enterprise domain into logical processor classes (dozens of processors instead of many hundreds or eventually thousands of handlers). The convention is that each request/response type (all of which inherit from a domain base request/response pair, which inherit from Agatha's) gets exactly one function in a processor somewhere. The generic handler (which inherits from Agatha's RequestHandler) then uses reflection in the Handle method to find the method for the given TREQUEST/TRESPONSE and invoke it. If it can't find one, or if it finds more than one, it returns a TRESPONSE containing an error message (messages are standardized in the domain's base response class).

    The goal here is to allow developers across the enterprise to just concern themselves with writing their request/response types and processor functions in the domain, and not have to spend additional overhead creating handler classes which would all do exactly the same thing (pass control to a processor function). However, it seems that I still need to define a handler class (albeit empty, since the base handler takes care of everything) for each request/response type pair. Otherwise, the following exception is thrown when dispatching a request to the service:

        StructureMap Exception Code: 202
        No Default Instance defined for PluginFamily Agatha.ServiceLayer.IRequestHandler`1[[TSFG.Domain.DTO.Actions.HelloWorldRequest, TSFG.Domain.DTO, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], Agatha.ServiceLayer, Version=1.0.5.0, Culture=neutral, PublicKeyToken=6f21cf452a4ffa13

    Is there a way that I'm not seeing to tell StructureMap and/or Agatha to always use the base handler class for all request/response type pairs? Or maybe to use Reflection.Emit to generate empty handlers in memory at application start just to satisfy the requirement? I'm not 100% familiar with these libraries and am learning as I go along, but so far my attempts at both of those possible approaches have been unsuccessful. Can anybody offer some advice on solving this, or perhaps offer another approach entirely?
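
    Since the exception is StructureMap failing to resolve a closed IRequestHandler`1, one avenue worth sketching is StructureMap's open-generics support, which maps every closed request type onto a single open generic handler with no per-type registrations. This is a hedged sketch assuming StructureMap 2.6-style configuration and a hypothetical GenericProcessorHandler<TRequest> that implements Agatha's IRequestHandler<TRequest>; whether it cooperates with Agatha's own container bootstrapping is untested here:

        // sketch only: route every IRequestHandler<T> request to one open
        // generic handler; GenericProcessorHandler<> is a hypothetical name
        ObjectFactory.Configure(x =>
        {
            x.For(typeof(IRequestHandler<>))
             .Use(typeof(GenericProcessorHandler<>));
        });

    If this runs after Agatha's own registration, explicit handlers for concrete pairs would still take precedence where they exist.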

  • JNI problem when calling a native library that loads another native library

    - by TheEnemyOfQuality
    I've got a bit of an odd problem. I have a project in C++ that's basically a wrapper for a third-party DLL, like this:

        MyLibrary
          --loads DLL_A
            --loads DLL_B

    I load DLL_A with LoadLibrary(), wrap several of its functions, and generate my own DLL. I've tested this in a C++ project and a C# project. Both do everything they're supposed to do: load DLL_A, make a couple of function calls, and indirectly load DLL_B.

    The problem is when I build a DLL for Java and make the calls through JNI. Everything runs like it should (no java.lang.UnsatisfiedLinkError), but when it comes time for DLL_A to load DLL_B, it doesn't work. From debugging, the loading of DLL_B happens on a function call in DLL_A that takes a callback. When called from Java, this function call seems to fail (the function pointer is fine and the actual call goes off without a hitch), I get an odd pop-up window saying DLL_B failed to load, and my program is left waiting for a callback that never happens.

    I can explicitly load DLL_B just fine (both from Java and from C++), and I've checked every possible path and path variable, and tried placing the DLLs everywhere to see if it could be looking somewhere funny. I'm pretty sure it's not a path problem. Ultimately I don't know how DLL_A is loading DLL_B, and I can't figure out why everything works fine in C++ and C# but not in Java. I'm absolutely flummoxed. It could still be something specific to my setup (although I've looked as hard as I can look), but I'm throwing this scenario out there to see if anyone has run into a similar problem. -Dave

  • ASP.NET Projects with Two Versions of AjaxControlToolkit

    - by Chris
    In my solution I have three projects. Project A is a web app and uses version 1.0.10618.0 of the AjaxControlToolkit. I would love to upgrade it to the latest, but unfortunately any newer release completely breaks a portion of my site. Project B is also a web app, but it is a completely new software product, so it uses (and relies on) the latest version of the AjaxControlToolkit. Everything works great. Though A and B are totally different products, they use the same DB and rely on the same ClassLibrary. Project C is a small web app that ties A and B together with certain functionality, like forgot-password pages. The pages in this app reside in a virtual directory of both A and B.

    Project C currently uses v1.0.10618.0 of the toolkit, so it works with Project A but fails with Project B because the manifest definitions of the DLLs don't match (to be expected). What I've done is built a new DLL of the toolkit and changed the assembly and namespace to AjaxControlToolkit_v1, then changed all v1 references to this new DLL, so the old and new versions can sit side by side in the same bin folder and nobody complains. I then changed my web.config controls tag to look like this:

        <add tagPrefix="ajaxToolkit" namespace="AjaxControlToolkit_v1" assembly="AjaxControlToolkit_v1, Version=1.0.10618.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e"/>

    This all works except that I get a runtime error of:

        Unknown server tag 'ajaxToolkit:AnimationExtender'.

    I can't figure out why this is; any ideas on how to remedy it?

  • Which key:value store to use with Python?

    - by Kurt
    So I'm looking at various key:value stores (where the value is either strictly a single value or possibly an object) for use with Python, and have found a few promising ones. I have no specific requirements as of yet because I am in the evaluation phase. I'm looking for what's good, what's bad, what corner cases these things handle well or don't, etc. I'm sure some of you have already tried them out, so I'd love to hear your findings/problems/etc. with the various key:value stores and Python. I'm looking primarily at:

        memcached - http://www.danga.com/memcached/
            python clients:
                http://pypi.python.org/pypi/python-memcached/1.40
                http://www.tummy.com/Community/software/python-memcached/
        CouchDB - http://couchdb.apache.org/
            python clients:
                http://code.google.com/p/couchdb-python/
        Tokyo Tyrant - http://1978th.net/tokyotyrant/
            python clients:
                http://code.google.com/p/pytyrant/
        Lightcloud - http://opensource.plurk.com/LightCloud/
            based on Tokyo Tyrant, written in Python
        Redis - http://code.google.com/p/redis/
            python clients:
                http://pypi.python.org/pypi/txredis/0.1.1
        MemcacheDB - http://memcachedb.org/

    So I started benchmarking (simply inserting keys and reading them), using a simple count to generate numeric keys and a value of "A short string of text":

    memcached: CentOS 5.3 / python-2.4.3-24.el5_3.6, libevent 1.4.12-stable, memcached 1.4.2 with default settings, 1 GB memory: 14,000 inserts per second, 16,000 reads per second. No real optimization; nice. MemcacheDB claims on the order of 17,000 to 23,000 inserts per second and 44,000 to 64,000 reads per second.

    I'm also wondering how the others stack up speed-wise.

  • How to find out about MySQL 'refused connections'

    - by celalo
    Hello, I am using MONyog to monitor my two MySQL servers. I get alert emails from MONyog when something goes wrong. There is an error whose cause I could not find. It says:

        Connection History: Percentage of refused connections) - 66.67%

    The percentage is not important; this is just about having refused connections at all. I get this email every half an hour, so this is like a constant situation. This must be my mistake, because I just set up those servers and there is no chance somebody else could be interfering with them. MONyog advises me:

        Try to isolate users/applications that are using an incorrect password or
        trying to connect from unauthorized hosts. A client will be disallowed to
        connect if it takes more than connect_timeout seconds to connect. Set the
        value of the log_warnings system variable to 2. This will force the MySQL
        server to log further information about the error.

    I added log_warnings=2 to my.cnf and enabled logging like this:

        [mysqld_safe]
        .
        .
        log_warnings=2
        log-error = /var/log/mysql/error.log
        .
        .

    and:

        .
        .
        [mysqld_safe]
        .
        log-error=/var/log/mysqld.log
        .
        .

    I cannot see any warnings in /var/log/mysql/error.log. I can see some warnings in /var/log/mysqld.log, but they are about something else. In sum, my question is: how can I detect refused connections? Please let me know if any more info is required. Thanks in advance.

  • MVC2 and jquery.validate.js

    - by Will I Am
    I am experiencing some confusion with jquery.validate.js. First of all, what is MicrosoftMvcJqueryValidation.js? It is referenced in snippets on the web but appears to have disappeared from the RTM MVC2 and is now in Futures. Do I need it? The reason I'm asking is that I'm trying to use the validator with MVC and I can't get it to work. I defined my JS as:

        $(document).ready(function () {
            $("#myForm").validate({
                rules: {
                    f_fullname: { required: true },
                    f_email: { required: true, email: true },
                    f_email_c: { equalTo: "#f_email" },
                    f_password: { required: true, minlength: 6 },
                    f_password_c: { equalTo: "#f_password" }
                },
                messages: {
                    f_fullname: { required: "* required" },
                    f_email: { required: "* required", email: "* not a valid email address" },
                    f_email_c: { equalTo: "* does not match the other email" },
                    f_password: { required: "* required", minlength: "password must be at least 6 characters long" },
                    f_password_c: { equalTo: "* does not match the other email" }
                }
            });
        });

    and my form on the view:

        <% using (Html.BeginForm("CreateNew", "Account", FormMethod.Post, new { id = "myForm" })) { %>
            <fieldset>
                <label for="f_fullname">name:</label><input id="f_fullname"/>
                <label for="f_email"><input id="f_email"/>
                ...etc...
                <input type="submit" value="Create" id="f_submit"/>
            </fieldset>
        <% } %>

    The validation method gets called on .ready() with no errors in Firebug. However, when I submit the form, nothing gets validated and the form gets submitted. If I create a submitHandler() it gets called, but the plugin doesn't detect any validation errors (.valid() == true). What am I missing?

  • - (void)keyboardWasShown not called when switching to another UITextField

    - by Shawn
    I'm having a strange problem that I don't understand. I have a UIScrollView with several UITextField objects. When I switch to the view, I set the first UITextField as first responder, and the keyboardWasShown method gets called due to the UIKeyboardDidShowNotification that the view is registered for. The weird thing is, when I touch the next UITextField, the keyboardWasShown method is not called. I don't understand this, since Apple's documentation says: "If your interface has multiple text fields, the user can tap between them to edit the values in each one. When that happens, however, the keyboard does not disappear but the system does still generate UIKeyboardDidShowNotification notifications each time editing begins in a new text field."

    My code I've copied directly from Apple's documentation as well, and it works properly, but it only gets called the first time. What am I missing?

        - (void)registerForKeyboardNotifications {
            [[NSNotificationCenter defaultCenter] addObserver:self
                                                     selector:@selector(keyboardWasShown:)
                                                         name:UIKeyboardDidShowNotification
                                                       object:nil];
            [[NSNotificationCenter defaultCenter] addObserver:self
                                                     selector:@selector(keyboardWasHidden:)
                                                         name:UIKeyboardDidHideNotification
                                                       object:nil];
        }

        - (void)keyboardWasShown:(NSNotification *)aNotification {
            //if (keyboardShown) return;
            NSDictionary *info = [aNotification userInfo];
            CGSize kbSize = [[info objectForKey:UIKeyboardBoundsUserInfoKey] CGRectValue].size;
            CGRect bkgndRect = activeField.superview.frame;
            bkgndRect.size.height += kbSize.height;
            [activeField.superview setFrame:bkgndRect];
            [scrollView setContentOffset:CGPointMake(0.0, activeField.frame.origin.y) animated:YES];
            keyboardShown = YES;
            UIBarButtonItem *doneButton = [[UIBarButtonItem alloc]
                initWithBarButtonSystemItem:UIBarButtonSystemItemDone
                                     target:self
                                     action:@selector(done)];
            self.navigationItem.rightBarButtonItem = doneButton;
            [doneButton release];
        }

  • fabric deploy problem

    - by alexarsh
    Hi, I'm trying to deploy a django app with fabric and get the following error:

        Alexs-MacBook:fabric alex$ fab config:instance=peergw deploy -H <ip> -u <username> -p <password>
        [192.168.2.93] run: cat /etc/issue
        Traceback (most recent call last):
          File "build/bdist.macosx-10.6-universal/egg/fabric/main.py", line 419, in main
          File "/Users/alex/Rabota/server/mx30/scripts/fabric/fab/commands.py", line 37, in deploy
            checkup()
          File "/Users/alex/Rabota/server/mx30/scripts/fabric/fab/commands.py", line 140, in checkup
            if not 'Ubuntu' in run('cat /etc/issue'):
          File "build/bdist.macosx-10.6-universal/egg/fabric/network.py", line 382, in host_prompting_wrapper
          File "build/bdist.macosx-10.6-universal/egg/fabric/operations.py", line 414, in run
          File "build/bdist.macosx-10.6-universal/egg/fabric/network.py", line 65, in __getitem__
          File "build/bdist.macosx-10.6-universal/egg/fabric/network.py", line 140, in connect
          File "build/bdist.macosx-10.6-universal/egg/paramiko/client.py", line 149, in load_system_host_keys
          File "build/bdist.macosx-10.6-universal/egg/paramiko/hostkeys.py", line 154, in load
          File "build/bdist.macosx-10.6-universal/egg/paramiko/hostkeys.py", line 66, in from_line
          File "build/bdist.macosx-10.6-universal/egg/paramiko/rsakey.py", line 61, in __init__
        paramiko.SSHException: Invalid key
        Alexs-MacBook:fabric alex$

    I can't connect to the server via ssh. What can be my problem? Regards, Arshavski Alexander.

  • Pass table as parameter to SQLCLR TV-UDF

    - by Skeolan
    We have a third-party DLL that can operate on a DataTable of source information and generate some useful values, and we're trying to hook it up through SQLCLR to be callable as a table-valued UDF in SQL Server 2008. Taking the concept here one step further, I would like to program a CLR table-valued function that operates on a table of source data from the DB. I'm pretty sure I understand what needs to happen on the T-SQL side of things, but what should the method signature look like in the .NET (C#) code? What would be the parameter datatype for "table data from SQL Server"? For example:

        /* Setup */
        CREATE TYPE InTableType AS TABLE (LocationName VARCHAR(50), Lat FLOAT, Lon FLOAT)
        GO
        CREATE TYPE OutTableType AS TABLE (LocationName VARCHAR(50), NeighborName VARCHAR(50), Distance FLOAT)
        GO
        CREATE ASSEMBLY myCLRAssembly FROM 'D:\assemblies\myCLR_UDFs.dll'
        WITH PERMISSION_SET = EXTERNAL_ACCESS
        GO
        CREATE FUNCTION GetDistances(@locations InTableType) RETURNS OutTableType
        AS EXTERNAL NAME myCLRAssembly.GeoDistance.SQLCLRInitMethod
        GO

        /* Execution */
        DECLARE @myTable InTableType
        INSERT INTO @myTable(LocationName, Lat, Lon) VALUES('aaa', -50.0, -20.0)
        INSERT INTO @myTable(LocationName, Lat, Lon) VALUES('bbb', -20.0, -50.0)
        SELECT * FROM @myTable

        DECLARE @myResult OutTableType
        INSERT INTO @myResult
        MyCLRTVFunction @myTable -- returns a table result calculated using the input

    The lat/lon-distance thing is a silly example that should of course be better handled entirely in SQL; I hope it illustrates the general intent of table-in, table-out through a table-valued UDF tied to a SQLCLR assembly. I am not certain this is possible; what would the SQLCLRInitMethod method signature look like in the C#?

        public class GeoDistance
        {
            [SqlFunction(FillRowMethodName = "FillRow")]
            public static IEnumerable SQLCLRInitMethod(<appropriateType> myInputData)
            {
                //...
            }

            public static void FillRow(...)
            {
                //...
            }
        }

    If it's not possible, I know I can use a "context connection=true" SQL connection within the C# code to have the CLR component query for the necessary data given the relevant keys, but that's sensitive to changes in the DB schema. So I hope to just have SQL bundle up all the source data and pass it to the function. Bonus question: assuming this works at all, would it also work with more than one input table?
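
    As far as I know, SQL Server 2008 does not accept table-valued parameters (like InTableType above) as arguments to SQLCLR routines, so there is no .NET signature that binds @locations directly. Below is a minimal sketch of the "context connection" fallback the question already mentions, with the input staged in a temp table; the #Locations table name and column layout are assumptions, not part of the original design:

        // sketch only: read staged input over the context connection in the
        // init method (data access is allowed there, not in FillRow), then
        // stream rows back out; #Locations is a hypothetical staging table
        using System.Collections;
        using System.Collections.Generic;
        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        public class GeoDistance
        {
            [SqlFunction(FillRowMethodName = "FillRow", DataAccess = DataAccessKind.Read)]
            public static IEnumerable SQLCLRInitMethod()
            {
                var rows = new List<object[]>();
                using (var conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    using (var cmd = new SqlCommand("SELECT LocationName, Lat, Lon FROM #Locations", conn))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            rows.Add(new object[] { reader.GetString(0), reader.GetDouble(1), reader.GetDouble(2) });
                    }
                }
                return rows;
            }

            public static void FillRow(object row, out string locationName, out double lat, out double lon)
            {
                object[] cols = (object[])row;
                locationName = (string)cols[0];
                lat = (double)cols[1];
                lon = (double)cols[2];
            }
        }

    The same pattern would extend to the bonus question: stage each input table and read them all over the context connection.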

  • How to generate comments in hbm2java-created POJOs?

    - by jschoen
    My current setup using Hibernate uses the hibernate.reveng.xml file to generate the various hbm.xml files, which are then turned into POJOs using hbm2java. We spent some time while designing our schema to place some pretty decent descriptions on the tables and their columns. I am able to pull these descriptions into the hbm.xml files when generating them using hbm2hbmxml. So I get something similar to this:

        <class name="test.Person" table="PERSONS">
          <comment>The comment about the PERSONS table.</comment>
          <property name="firstName" type="string">
            <column name="FIRST_NAME" length="100" not-null="true">
              <comment>The first name of this person.</comment>
            </column>
          </property>
          <property name="middleInitial" type="string">
            <column name="MIDDLE_INITIAL" length="1">
              <comment>The middle initial of this person.</comment>
            </column>
          </property>
          <property name="lastName" type="string">
            <column name="LAST_NAME" length="100">
              <comment>The last name of this person.</comment>
            </column>
          </property>
        </class>

    So how do I tell hbm2java to pull these comments in and place them in the created Java files? I have read over this about editing the FreeMarker templates to change the way code is generated. I understand the concept, but it was not too detailed about what else you can do with it, beyond their example of pre- and post-conditions.

  • Using MSADO15.DLL and C++ with MinGW/GCC on Windows Vista

    - by Eugen Mihailescu
    INTRODUCTION

    Hi, I am very new to C++ (that is my first statement). I started out with VC++ 2008 Express. I've noticed that GCC is becoming kind of a standard, so I am trying to make the right steps even from the beginning. I have written a piece of code that connects to MS SQL Server via ADO; in VC++ it works like a charm by importing msado15.dll:

        #import "msado15.dll" no_namespace rename("EOF", "EndOfFile")

    Because I am going to move away from VC++, I was looking for an alternative (eventually multi-platform) IDE, so I am sticking (for now) with Code::Blocks (I'm using the last nightly build, SVN 6181). As the compiler I chose GCC 3.4.5 (ported via MinGW 5.1.6), under Vista.

    I was trying to compile a simple "hello world" application with GCC that uses/imports the same msado15.dll:

        #import "c:\Program Files\Common Files\System\ADO\msado15.dll" no_namespace rename("EOF", "EndOfFile")

    and I was surprised to see a lot of compile-time errors. I expected the #import compiler directive to generate a library from msado15.dll so it could be linked against later (at link-edit time or whatever). Instead, it was trying to read the DLL as a normal file (like a header file, if you like), interpreting each line of it (the file has an MZ signature):

        Compiling: main.cpp
        E:\MyPath\main.cpp:2:64: warning: extra tokens at end of #import directive
        In file included from E:\MyPath\main.cpp:2:
        c:\Program Files\Common Files\System\ADO\msado15.dll:1: error: stray '\144' in program
        In file included from E:\MyPath\main.cpp:2:
        c:\Program Files\Common Files\System\ADO\msado15.dll:1:4: warning: null character(s) ignored
        c:\Program Files\Common Files\System\ADO\msado15.dll:1: error: stray '\3' in program
        c:\Program Files\Common Files\System\ADO\msado15.dll:1:6: warning: null character(s) ignored
        c:\Program Files\Common Files\System\ADO\msado15.dll:1: error: stray '\4' in program

    ... and so on.

    MY QUESTION

    Well, it is obvious that under this version of GCC the #import directive does not do the expected job (perhaps #import is no longer supported by GCC), so finally my question: how can I use ADO to access an MS SQL database in a C++ program compiled with GCC (v3.4.5)?

  • Linq to SQL: 'DynamicInvoke(System.Object[])' has no supported translation to SQL

    - by ewwwyn
    I have a class, Users. Users has a UserId property. I have a method that looks something like this:

        static IQueryable<User> FilterById(this IQueryable<User> p, Func<int, bool> sel)
        {
            return p.Where(m => sel(m));
        }

    Inevitably, when I call the function:

        var users = Users.FilterById(m => m > 10);

    I get the following exception:

        Method 'System.Object DynamicInvoke(System.Object[])' has no supported translation to SQL.

    Is there any solution to this problem? How far down the rabbit hole of Expression.KillMeAndMyFamily() might I have to go? To clarify why I'm doing this: I'm using T4 templates to autogenerate a simple repository and a system of pipes. Within the pipes, instead of writing:

        new UserPipe().Where(m => m.UserId > 10 && m.UserName.Contains("oo") && m.LastName == "Wee");

    I'd like to generate something like:

        new UserPipe()
            .UserId(m => m > 10)
            .UserName(m => m.Contains("oo"))
            .LastName("Wee");
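
    One way out, consistent with the error: LINQ to SQL cannot translate a compiled Func delegate, but it can translate an expression tree, and (unlike Entity Framework) it does handle InvocationExpression. A hedged sketch of FilterById rewritten to take an expression over the key and graft it onto UserId; untested against the T4-generated pipes:

        // sketch only: requires using System.Linq.Expressions;
        static IQueryable<User> FilterById(this IQueryable<User> p,
                                           Expression<Func<int, bool>> sel)
        {
            // build m => sel(m.UserId) as a single translatable expression tree
            ParameterExpression m = Expression.Parameter(typeof(User), "m");
            InvocationExpression body = Expression.Invoke(sel, Expression.Property(m, "UserId"));
            var predicate = Expression.Lambda<Func<User, bool>>(body, m);
            return p.Where(predicate);
        }

        // usage is unchanged: var users = Users.FilterById(m => m > 10);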

  • How do I configure a C# web service client to send HTTP request header and body in parallel?

    - by Christopher
    Hi, I am using a traditional C# web service client generated in VS2008 / .NET 3.5, inheriting from SoapHttpClientProtocol. It connects to a remote web service written in Java. All configuration is done in code during client initialization, as seen below:

        ServicePointManager.Expect100Continue = false;
        ServicePointManager.DefaultConnectionLimit = 10;

        var client = new APIService
        {
            EnableDecompression = true,
            Url = _url + "?guid=" + Guid.NewGuid(),
            Credentials = new NetworkCredential(user, password, null),
            PreAuthenticate = true,
            Timeout = 5000 // 5 sec
        };

    It all works fine, but the time taken to execute the simplest method call is almost double the network ping time, whereas a Java test client takes roughly the same as the network ping time:

        C# client    ~ 550ms
        Java client  ~ 340ms
        Network ping ~ 300ms

    After analyzing the TCP traffic for a session, I discovered the following. The C# client sent TCP packets in this sequence:

        Client sends HTTP headers in one packet.
        Client waits for TCP ACK from server.
        Client sends HTTP body in one packet.
        Client waits for TCP ACK from server.

    The Java client sent TCP packets in this sequence:

        Client sends HTTP headers in one packet.
        Client sends HTTP body in one packet.
        Client receives ACK for first packet.
        Client receives ACK for second packet.

    Is there any way to configure the C# web service client to send the header/body in parallel, as the Java client appears to? Any help or pointers much appreciated.
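
    For what it's worth, the send-header, wait-for-ACK, send-body pattern is the classic signature of Nagle's algorithm interacting with delayed ACKs: the second small write is held back until the first is acknowledged. .NET exposes a switch for this on ServicePointManager; whether it closes the full 550ms-vs-340ms gap here is an assumption, but it is cheap to test:

        // hedged sketch: disable Nagle so the body write isn't held waiting
        // for the ACK of the header packet; set once before any requests
        System.Net.ServicePointManager.UseNagleAlgorithm = false;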

  • OleDb database to DataSet and back in C#?

    - by Troy
    I'm writing a program that lets a user:

    - connect to an (arbitrary) database
    - view all of the tables in that database in separate DataGridViews
    - edit them in the program, generate random data, and see the results
    - choose to commit those changes or revert

    So I discovered the DataSet class, which looks like it's capable of holding everything that a database would, and I decided that the best thing to do here would be to load everything into one DataSet, let the user edit it, and then save the DataSet back to the database. The problem is that the only way I can find to load the database tables is this:

        set = new DataSet();
        DataTable schema = connection.GetOleDbSchemaTable(
            OleDbSchemaGuid.Tables,
            new string[] { null, null, null, "TABLE" });

        foreach (DataRow row in schema.Rows)
        {
            string tableName = row.Field<string>("TABLE_NAME");
            DataTable dTable = new DataTable();
            new OleDbDataAdapter("SELECT * FROM " + tableName, connection).Fill(dTable);
            dTable.TableName = tableName;
            set.Tables.Add(dTable);
        }

    while it seems like there should be a simpler way, given that DataSets appear to be designed for exactly this purpose. The real problem, though, is when I try saving these things. In order to use the OleDbDataAdapter.Update() method, I'm told that I have to provide valid INSERT queries. Doesn't that kind of negate the whole point of having a class to handle this stuff for me? Anyway, I'm hoping somebody can either explain how to load and save a database into a DataSet, or maybe give me a better idea of how to do what I'm trying to do. I could always parse the commands together myself, but that doesn't seem like the best solution.
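
    On the saving side, ADO.NET can build those INSERT/UPDATE/DELETE commands for you: attach an OleDbCommandBuilder to each adapter and it derives them from the SELECT, provided the table has a primary key. A minimal sketch; it assumes you keep one adapter per table around instead of discarding it after Fill:

        // sketch: the command builder supplies Insert/Update/DeleteCommand,
        // so adapter.Update() no longer needs hand-written SQL
        var adapter = new OleDbDataAdapter("SELECT * FROM " + tableName, connection);
        var builder = new OleDbCommandBuilder(adapter);   // must stay alive with the adapter
        adapter.Fill(dTable);
        // ... user edits the DataSet ...
        adapter.Update(set, tableName);   // commit; set.RejectChanges() would revert instead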

  • Getting mail from GMail into Java application using IMAP

    - by Dave
    I want to access messages in GMail from a Java application using JavaMail and IMAP. Why am I getting a SocketTimeoutException? Here is my code:

        Properties props = System.getProperties();
        props.setProperty("mail.imap.host", "imap.gmail.com");
        props.setProperty("mail.imap.port", "993");
        props.setProperty("mail.imap.connectiontimeout", "5000");
        props.setProperty("mail.imap.timeout", "5000");

        try {
            Session session = Session.getDefaultInstance(props, new MyAuthenticator());
            URLName urlName = new URLName("imap://[email protected]:[email protected]");
            Store store = session.getStore(urlName);
            if (!store.isConnected()) {
                store.connect();
            }
        } catch (NoSuchProviderException e) {
            e.printStackTrace();
            System.exit(1);
        } catch (MessagingException e) {
            e.printStackTrace();
            System.exit(2);
        }

    I set the timeout values so that it wouldn't take "forever" to time out. Also, MyAuthenticator has the username and password, which seems redundant with the URL. Is there another way to specify the protocol? (I didn't see it in the JavaDoc for IMAP.)

  • Can YAML have inheritance?

    - by Jason
    This question involves a lot of symfony, but it should be easy enough to follow for someone who only knows YAML and not symfony. My symfony models come from a three-step process:

    - First, I create the tables in MySQL.
    - Second, I run a symfony command (symfony doctrine:build-schema) to convert my table structure into a YAML file.
    - Third, I run another symfony command (symfony doctrine:build-model) to convert the YAML file into PHP code.

    Here's the problem: there are some tables in the database that I don't want to end up in my symfony code. For example, let's say I have two tables: one called my_table and another called wordpress. The YAML file I end up with might look like this:

        MyTable:
          connection: doctrine
          tableName: my_table
        Wordpress:
          connection: doctrine
          tableName: wordpress

    That's great, except the wordpress table has nothing to do with my symfony models. The result is that every single time I make a change to my database and generate this YAML file, I have to manually remove wordpress. It's annoying! I'd like to be able to create a file called baseConfig.php or something that looks like this:

        $config = array(
            'MyTable' => array(
                'connection' => 'doctrine',
                'tableName' => 'my_table',
            ),
            'Wordpress' => array(
                'connection' => 'doctrine',
                'tableName' => 'wordpress',
            ),
        );

    And then I could have a separate file called config.php or something where I could make modifications to the base config:

        unset($config['Wordpress']);

    So my question is: is there any way to convert YAML into executable PHP code (as opposed to loading YAML into PHP code like sfYaml::load() does) to achieve this sort of thing? Or is there maybe some other way to achieve YAML inheritance?

    Thanks, Jason

  • Proper way to cleanup dynamic engines and can they be loaded twice?

    - by Becky
    Hello - I am having problems loading Engine PKCS #11 as a dynamic engine using Python and M2Crypto. I am trying to access an Aladdin USB eToken. Here are the important steps from my Python code:

        dynamic = Engine.load_dynamic_engine("pkcs11", "/usr/local/ssl/lib/engines/engine_pkcs11.so")
        pkcs11 = Engine.Engine("pkcs11")
        pkcs11.ctrl_cmd_string("MODULE_PATH", "/usr/lib/libeTPkcs11.so")
        pkcs11.engine_init_custom()  # initialize engine with custom M2Crypto patch
        # next few steps (deleted here) pass the password and grab key & cert off the token
        Engine.cleanup()

    This works fine the first time this method gets run. The second time, it fails when loading the dynamic engine (see error below):

        Traceback (most recent call last):
          File "", line 1, in ?
          File "/usr/local/lib/python2.4/site-packages/M2Crypto/Engine.py", line 98, in load_dynamic_engine
            e.ctrl_cmd_string("LOAD", None)
          File "/usr/local/lib/python2.4/site-packages/M2Crypto/Engine.py", line 38, in ctrl_cmd_string
            raise EngineError(Err.get_error())
        M2Crypto.Engine.EngineError: 4002:error:260B606D:engine routines:DYNAMIC_LOAD:init failed:eng_dyn.c:521:

    Is it impossible to load engines twice in a Python session? Am I missing some kind of engine cleanup/deletion? The OpenSSL docs talk about engine_finish(), but I don't think M2Crypto offers that. Is there a method to tell if the engine is already loaded? Thanks!

  • playframework auto-test Jenkins CI wait for completion?

    - by notbrain
    I am trying to set up Jenkins CI for a playframework.org application, but am having trouble properly launching play after the auto-test command is run. The tests all run fine, but it seems as though my script launches both play auto-test and play start --%ci at the same time. When the play start --%ci command runs, it gets a pid and everything, but it's not running.

    FILE: auto-test.sh (Jenkins runs this with an "execute shell" build step):

        #!/bin/bash
        # pwd is the jenkins workspace dir
        # change into the approot dir
        cd customer-portal;

        # kill any previous play launches
        if [ -e "server.pid" ]
        then
            kill `cat server.pid`;
            rm -rf server.pid;
        fi

        # drop and re-create the DB
        mysql --user=USER --password=PASS --host=HOSTNAME < ../setupdb.sql

        # auto-test the most recent build
        /usr/local/lib/play/play auto-test;

        # this is inadequate for waiting for auto-test to complete?
        # how to wait for actual process completion?
        # sleep 60;
        wait;

        # Conditional start based on tests:
        # launch normal on pass, test on fail
        if [ -e "./test-result/result.passed" ]
        then
            /usr/local/lib/play/play start --%ci;
            exit 0;
        else
            /usr/local/lib/play/play test;
            exit 1;
        fi

  • Java/Maven: How to manually add a library to the Maven repository?

    - by Aaron
    I'm trying to generate a JasperReport, but I receive this:

        net.sf.jasperreports.engine.util.JRFontNotFoundException: Font 'Times New Roman'
        is not available to the JVM. See the Javadoc for more details.

    After searching on the net, I found that I need to add a jar with the font to the classpath. So I created a jar file with the ttf files, and now I want to add it as a dependency to my pom file. I installed the file:

        mvn install:install-file -Dfile=tf.jar -DgroupId=tf -DartifactId=tf -Dversion=1.0.0 -Dpackaging=jar

    and in my pom I added these lines:

        <dependency>
            <groupId>tf</groupId>
            <artifactId>tf</artifactId>
            <version>1.0.0</version>
        </dependency>

    but I receive this:

        Dependency 'tf:tf:1.0.0' not found

    I checked the repository folder and the jar file is there, in ... tf\tf\1.0.0\. What am I doing wrong?

  • FindByIdentity in System.DirectoryServices.AccountManagement Memory Issues

    - by MVC Fanatic
    I'm working on an Active Directory management application. In addition to the typical create-a-new-user, enable/disable-an-account, reset-my-password features, etc., it also manages application permissions for all of the client's web applications. Application management is handled by thousands of AD groups, built from 3-letter codes for the application, section and site; there are also hundreds of AD groups which determine which applications and locations a coordinator can grant rights to. All of these groups in turn belong to other groups, so I typically filter the groups list with the MemberOf property to find the groups that a user directly belongs to (or everyone has rights to do everything).

    I've made extensive use of the System.DirectoryServices.AccountManagement namespace, using the FindByIdentity method in 31 places throughout the application. This method calls a private method, FindPrincipalByIdentRefHelper, on the internal ADStoreCtx class. A SearchResultCollection is created but not disposed, so the resources used by the COM objects are never released; eventually, typically once or twice a day, the web server runs out of memory and all of the applications on the web server stop responding until IIS is reset.

    There are places where I fall back to the underlying directory objects, but there are a lot of places where I'm using the properties on the Principal - it's a vast improvement over using the esoteric AD property names in the .NET 2.0 directory services code. I've contacted Microsoft about the problem; it's been fixed in .NET 4.0, but they don't currently have plans to fix it in 3.5 unless there is interest in the community about it. I only found information about it in a couple of places: the MSDN documentation's community content states, at the bottom, that there is a memory leak (guess I should have read that before using the method), and the class in question is internal and doesn't expose SearchResultCollection outside the offending method, so I can't get at the results to dispose them or inherit from the class and override the method. So my questions are:

    - Has anyone else encountered this problem? If so, were you able to work around it?
    - Do I have any option besides rewriting the application to avoid all of the .NET 3.5 Active Directory code?

    Thanks
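
    Until a fix is available, one workaround in line with the "fall back to the underlying directory objects" approach already mentioned: do the hot lookup paths with DirectorySearcher and dispose the SearchResultCollection yourself, since that collection is exactly what the 3.5 helper leaks. A minimal sketch; the LDAP path and filter are hypothetical placeholders:

        // sketch only: the LDAP path and filter below are placeholders
        using (var root = new DirectoryEntry("LDAP://DC=example,DC=com"))
        using (var searcher = new DirectorySearcher(root, "(sAMAccountName=jdoe)"))
        using (SearchResultCollection results = searcher.FindAll())
        {
            foreach (SearchResult result in results)
            {
                // read result.Properties here; disposing 'results' releases
                // the unmanaged handles that FindByIdentity never frees
            }
        }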

  • Why is my code signing (MS authenticode) verification failing?

    - by Tim
    I posted this question and have a freshly minted code signing cert from Thawte. I followed the instructions (or so I thought) and the code signing claims to be done right; however, when I try to verify, the tool shows an error. I have no idea what it means and no idea how to fix this. Any comments would be appreciated.

    Command line to sign the exe:

        signtool sign /f mdt.pfx /p password /t http://timestamp.verisign.com/scripts/timstamp.dll test.exe

    Results:

        The following certificate was selected:
            Issued to: [my company]
            Issued by: Thawte Code Signing CA
            Expires: 4/23/2011 7:59:59 PM
            SHA1 hash: 7D1A42364765F8969E83BC00AB77F901118F3601

        Done Adding Additional Store
        Attempting to sign: test.exe
        Successfully signed and timestamped: test.exe

        Number of files successfully Signed: 1
        Number of warnings: 0
        Number of errors: 0

    Note that there are no errors or warnings. Now, when I try to verify, imagine my surprise:

        signtool verify /v test.exe

    results in:

        Verifying: test.exe
        SHA1 hash of file: 490BA0656517D3A322D19F432F1C6D40695CAD22
        Signing Certificate Chain:
            Issued to: Thawte Premium Server CA
            Issued by: Thawte Premium Server CA
            Expires: 12/31/2020 7:59:59 PM
            SHA1 hash: 627F8D7827656399D27D7F9044C9FEB3F33EFA9A

            Issued to: Thawte Code Signing CA
            Issued by: Thawte Premium Server CA
            Expires: 8/5/2013 7:59:59 PM
            SHA1 hash: A706BA1ECAB6A2AB18699FC0D7DD8C7DE36F290F

            Issued to: [my company]
            Issued by: Thawte Code Signing CA
            Expires: 4/23/2011 7:59:59 PM
            SHA1 hash: 7D1A42364765F8969E83BC00AB77F901118F3601

        The signature is timestamped: 4/27/2010 10:19:19 AM
        Timestamp Verified by:
            Issued to: Thawte Timestamping CA
            Issued by: Thawte Timestamping CA
            Expires: 12/31/2020 7:59:59 PM
            SHA1 hash: BE36A4562FB2EE05DBB3D32323ADF445084ED656

            Issued to: VeriSign Time Stamping Services CA
            Issued by: Thawte Timestamping CA
            Expires: 12/3/2013 7:59:59 PM
            SHA1 hash: F46AC0C6EFBB8C6A14F55F09E2D37DF4C0DE012D

            Issued to: VeriSign Time Stamping Services Signer - G2
            Issued by: VeriSign Time Stamping Services CA
            Expires: 6/14/2012 7:59:59 PM
            SHA1 hash: ADA8AAA643FF7DC38DD40FA4C97AD559FF4846DE

        Number of files successfully Verified: 0
        Number of warnings: 0
        Number of errors: 1
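
    One well-known gotcha that produces exactly this pattern (the chain prints fine, but the file still fails to verify): without a policy switch, signtool verify checks the signature against the Windows driver-signing policy rather than the default Authenticode policy. Trying the Authenticode policy explicitly is worth a shot (a hedged suggestion, not a confirmed diagnosis for this certificate):

        signtool verify /pa /v test.exe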

  • Fast screen capture and lost Vsync

    - by user338759
    Hi, I'd like to generate a movie in real time with a self-made application doing fast screen captures, with part of the screen occupied by a running 3D application. I'm aware that several applications already exist for this (like FRAPS or Taksi), as do dedicated DirectShow filters (like UScreenCapture), but I really need to do this with my own external application.

    When correctly set up (UScreenCapture + ffdshow), capturing and compressing a full screen does not consume as much CPU as you would expect (about 15%), and does not impair the performance of the 3D app. The problem with doing a capture from an external application is that the 3D application loses its Vsync and becomes a jerky, difficult-to-use 3D application (the 3D app is only presented on a small part of the screen, the rest being GDI, DirectX).

    FRAPS solves this problem by allowing you to capture only one application at a time (the one with focus). Depending on the technology used (OpenGL, DirectX, GDI), it hooks the Vsync and does its capture (with glReadPixels, ...) without perturbing it. Doing this does not solve my problem, since I want the full composed screen image (including 3D and the rest) AND a smooth 3D app.

    UScreenCapture seems to use a fast DirectX call to capture the whole screen, but the OpenGL 3D app is still out of sync. Doing a BitBlt is too slow and CPU-consuming for real-time 30 fps acquisition (at least under Windows XP; not sure with 7).

    My question is whether there is a way to achieve my goal with Windows 7 and its brand new DirectX compositing engine. Windows 7 manages to show live Vsynced duplicated previews of every app (in the taskbar), so there must be a way to access the currently displayed screen buffer without perturbing the rendering of the 3D OpenGL app?

    Any other suggestion or technology? Thank you.

  • svcutil, WSDL, and the generated interfaces not being sufficient for implementation

    - by chtmd
    I have a WSDL file defining a service that I have to implement in WCF. I had read that I could generate the proxy using svcutil from the WSDL file, and that I could then use the generated interfaces to implement the service. Unfortunately, I can't quite seem to find a way to have the interfaces contain the correct attributes to expose the contracts. All operations have the "OperationContractAttribute" attribute, but it appears as though for the service to be exposed, I require the "OperationContract" for each one. The same applies to "ServiceContractAttribute" and "ServiceContract", and I imagine DataContract, but I haven't gotten that far.

    I could manually make these changes, but I would much prefer a technique where the existing code could be easily used, or better code could be generated for my purposes. Is there some way this can be done? Thanks.

    EDIT: Command used:

        svcutil ObjectManagerService.wsdl /n:*,Sample /o:ObjectManagerServiceProxy.cs /nologo

    Code sample:

        public interface ObjectManagerSyncPortType {
            // CODEGEN: Generating message contract since the operation createObject is neither RPC nor document wrapped.
            [System.ServiceModel.OperationContractAttribute(Action="http://www.sample.com/createObject", ReplyAction="*")]
            [System.ServiceModel.XmlSerializerFormatAttribute()]
            Sample.createObjectResponse1 createObject(Sample.createObjectRequest1 request);

            // ... (remainder of the generated interface)
        }

    As best as I can tell/see, the WSDL file is entirely self-contained and requires no additional XSD files.
