Search Results

Search found 5371 results on 215 pages for 'keys'.


  • Vim script to compile TeX source and launch PDF only if no errors

    - by Jeet
    Hi, I am switching to using Vim for my LaTeX editing environment. I would like to be able to compile the TeX source file from within Vim, and launch an external viewer if the compile was successful. I know about the Vim-LaTeX suite but, if possible, would prefer to avoid using it: it is pretty heavy-weight, hijacks a lot of my keys, and clutters up my vim runtime with a lot of files. Here is what I have now:

        if exists('b:tex_build_mapped')
            finish
        endif
        " use maparg or mapcheck to see if key is free
        command! -buffer -nargs=* BuildTex call BuildTex(0, <f-args>)
        command! -buffer -nargs=* BuildAndViewTex call BuildTex(1, <f-args>)
        noremap <buffer> <silent> <F9> <Esc>:call BuildTex(0)<CR>
        noremap <buffer> <silent> <S-F9> <Esc>:call BuildTex(1)<CR>
        let b:tex_build_mapped = 1
        if exists('g:tex_build_loaded')
            finish
        endif
        let g:tex_build_loaded = 1

        function! BuildTex(view_results, ...)
            write
            if filereadable("Makefile")
                " If a Makefile is available in the current working directory, run 'make' with arguments
                echo "(using Makefile)"
                let l:cmd = "!make ".join(a:000, ' ')
                echo l:cmd
                execute l:cmd
                if a:view_results && v:shell_error == 0
                    call ViewTexResults()
                endif
            else
                let b:tex_flavor = 'pdflatex'
                compiler tex
                make %
                if a:view_results && v:shell_error == 0
                    call ViewTexResults()
                endif
            endif
        endfunction

        function! ViewTexResults(...)
            if a:0 == 0
                let l:target = expand("%:p:r") . ".pdf"
            else
                let l:target = a:1
            endif
            if has('mac')
                execute "! open -a Preview ".l:target
            endif
        endfunction

    The problem is that v:shell_error is not set, even if there are compile errors. Any suggestions or insight on how to detect whether a compile was successful would be greatly appreciated! Thanks!


  • Paperclip and Amazon S3 Issue

    - by Jimmy
    Hey everyone, I have a rails app running on Heroku. I am using paperclip for some simple image uploads for user avatars and some other things, I have S3 set as my backend and everything seems to be working fine except when trying to push to S3 I get the following error: The AWS Access Key Id you provided does not exist in our records. Thinking I mis-pasted my access key and secret key, I tried again, still no luck. Thinking maybe it was just a buggy key I deactivated it and generated a new one. Still no luck. Now for both keys I have used the S3 browser app on OS X and have been able to connect to each and view my current buckets and add/delete buckets. Is there something I should be looking out for? I have my application's S3 and paperclip setup like so development: bucket: (unique name) access_key_id: ENV['S3_KEY'] secret_access_key: ENV['S3_SECRET'] test: bucket: (unique name) access_key_id: ENV['S3_KEY'] secret_access_key: ENV['S3_SECRET'] production: bucket: (unique_name) access_key_id: ENV['S3_KEY'] secret_access_key: ENV['S3_SECRET'] has_attached_file :cover, :styles => { :thumb => "50x50" }, :storage => :s3, :s3_credentials => "#{RAILS_ROOT}/config/s3.yml", :path => ":class/:id/:style/:filename" Note: I just added the (unique name) bits, those aren't actually there--I have also verified bucket names, but I don't even think this is getting that far. I also have my heroku environment vars setup correctly and have them setup on dev


  • Ruby on Rails export to csv - maintain mysql select statement order

    - by zekial
    Exporting some data from mysql to a csv file using FasterCSV. I'd like the columns in the outputted CSV to be in the same order as the select statement in my query. Example: rows = Data.find( :all, :select=>'name, age, height, weight' ) headers = rows[0].attributes.keys FasterCSV.generate do |csv| csv << headers rows.each do |r| csv << r.attributes.values end end CSV Output: height,weight,name,age 74,212,bob,23 70,201,fred,24 . . . I want the CSV columns in the same order as my select statement. Obviously the attributes method is not going to work. Any ideas on the best way to ensure that the columns in my csv file will be in the same order as the select statement? Got a lot of data and performance is an issue. The select statement is not static. I realize I could loop through column names within the rows.each loop but it seems kinda dirty.


  • File not found on RSACryptoServiceProvider, service account permissions?

    - by Ben Scheirman
    Our web service wraps around a third-party library that contains the following code. We are using an Active Directory service account in the IIS 6 app pool (no interactive login abilities). Our service fails with the error "The system cannot find the file specified". We have traced the error to the line "RSACryptoServiceProvider provider = new RSACryptoServiceProvider();". The third-party assembly depends on a file-based x509 certificate for its encryption process, and the service account has Read/Write access to the keys folder. Additionally, the service account has Read, Write, and Modify rights to "C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys". Code:

        StringBuilder builder = new StringBuilder(publicKeyData);
        builder.Replace("-----BEGIN CERTIFICATE-----", "");
        builder.Replace("-----END CERTIFICATE-----", "");
        X509Certificate2 certificate = new X509Certificate2(
            Convert.FromBase64String(builder.ToString()));
        string xmlString = certificate.PublicKey.Key.ToXmlString(false);
        RSACryptoServiceProvider provider = new RSACryptoServiceProvider(); //BOOM
        CspKeyContainerInfo containerInfo = provider.CspKeyContainerInfo;
        provider.PersistKeyInCsp = false;
        provider.FromXmlString(xmlString);
        loadedKeys.Add(key, provider);
        provider2 = provider;

    We cracked open FileMon and noticed that there is a FILE NOT FOUND for that app pool, followed by another SUCCESS for the same exact file. I'm out of my element here; does anybody have an idea as to why we're seeing this?
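    A possible direction (an untested sketch, not the third-party vendor's code): the parameterless RSACryptoServiceProvider constructor tries to create a fresh key container for the account running the app pool, which is known to fail with "The system cannot find the file specified" when that account has no loaded user profile. Pointing the provider at the machine key store sidesteps the per-user store; alternatively, since the XML comes from the certificate anyway, certificate.PublicKey.Key can often be cast to an RSACryptoServiceProvider directly.

        using System.Security.Cryptography;

        // Sketch: use the machine-level key store so no user profile is required.
        CspParameters cspParams = new CspParameters();
        cspParams.Flags = CspProviderFlags.UseMachineKeyStore;

        RSACryptoServiceProvider provider = new RSACryptoServiceProvider(cspParams);
        provider.PersistKeyInCsp = false;      // do not leave a key container behind
        provider.FromXmlString(xmlString);     // xmlString as built in the question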


  • drupal using node.save with XMLRPC call to another site. "Access Denied" message

    - by EricP
    I have a piece of code on one Drupal site to create a node on another Drupal site in a multi-site setup. It looks like I'm getting the sessid and logging in just fine, but when trying to create a "page" node, I get "Access denied". Under Services - Settings I have "Key Authentication", "Use keys" is unchecked, and "Use sessid" is checked. Below is my code:

        <p>Test Page 1</p>
        <? $url = 'http://drupal2.dev/xmlrpc.php'; ?>
        <? $conn = xmlrpc($url, 'system.connect');
           print_r($conn); ?>
        <p>--</p>
        <? $login = xmlrpc($url, 'user.login', $conn['sessid'], 'superuser_name', 'superuser_password');
           print_r($login); ?>
        <p>--</p>
        <? $data = array('type' => 'page', 'title' => 'Test', 'body' => 'test');
           $data_s = serialize($data);
           $result = xmlrpc($url, 'node.save', $login['sessid'], $data_s);
           echo $result;
           //echo $data_s; ?>
        <? if ($error = xmlrpc_error()) {
               if ($error->code > 0) {
                   $error->message = t('Outgoing HTTP request failed because the socket could not be opened.');
               }
               drupal_set_message(t('Operation failed because the remote site gave an error: %message (@code).',
                   array('%message' => $error->message, '@code' => $error->code)));
           } ?>

    Thanks for the help.


  • MySQL Relational Database Foreign Key

    - by user623879
    To learn databases, I am creating a movie database. To associate multiple directors with a movie, I have the following schema:

        movie(m_ID, ....)
        m_director(dirID, dirName)   // dirID is an autoincrement primary key
        m_directs(dirID, m_ID)       // dirID and m_ID are set as foreign keys in the MySQL database (InnoDB engine)

    I have a program that connects to the db and needs to add a movie to the database. I can easily add a new entry to the movie table and the m_director table, but I am having trouble adding an entry to the m_directs table.

        INSERT INTO m_director (dirName) VALUES("Jason Reitman");
        INSERT INTO m_directs (dirID, m_ID) VALUES(LAST_INSERT_ID(), "tt0467406");

    I am using this SQL to insert a new director and add the association to the movie. I know the primary key of the movie, but I don't know the dirID, so I use LAST_INSERT_ID() to get the id of the director just inserted. The problem I am having is that I get the following error:

        MySql.Data.MySqlClient.MySqlException (0x80004005): Cannot add or update a child row:
        a foreign key constraint fails (`siteproducts`.`m_directs`, CONSTRAINT `m_directs_ibfk_2`
        FOREIGN KEY (`dirID`) REFERENCES `m_directs` (`dirID`) ON DELETE CASCADE ON UPDATE CASCADE)

    Any ideas?
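    Two things stand out (a hedged reading, not a definitive diagnosis): the constraint dumped in the error says FOREIGN KEY (dirID) REFERENCES m_directs (dirID) — that is, m_directs referencing itself rather than m_director, which is worth re-checking in the table definition — and LAST_INSERT_ID() is tracked per connection, so both INSERTs must run on the same connection. A minimal C# sketch of that pattern (the connection string is a placeholder; table and column names are taken from the question):

        using MySql.Data.MySqlClient;

        using (var conn = new MySqlConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                var insertDirector = new MySqlCommand(
                    "INSERT INTO m_director (dirName) VALUES (@name);", conn, tx);
                insertDirector.Parameters.AddWithValue("@name", "Jason Reitman");
                insertDirector.ExecuteNonQuery();

                // Same connection, so this is the id of the director just inserted.
                long dirId = insertDirector.LastInsertedId;

                var insertDirects = new MySqlCommand(
                    "INSERT INTO m_directs (dirID, m_ID) VALUES (@dirId, @movieId);", conn, tx);
                insertDirects.Parameters.AddWithValue("@dirId", dirId);
                insertDirects.Parameters.AddWithValue("@movieId", "tt0467406");
                insertDirects.ExecuteNonQuery();

                tx.Commit();
            }
        }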


  • Cannot update a single field using Linq to Sql

    - by KallDrexx
    I am having a hard time attempting to update a single field without having to retrieve the whole record prior to saving. For example, in my web application I have an in-place editor for the Name and Description fields of an object. Once you edit either field, it sends the new value (with the object's ID) to the web server. What I want is for the web server to take that value and ID and only update the one field. There are only two ways Google tells me to do this:

    1) When I get the value I want to change, along with the ID, retrieve the record from the database, update the field in the C# object, and then save it back. I don't like this method because it includes a completely unnecessary database read (which touches two tables, due to the way my schema is laid out).

    2) Set UpdateCheck for all the fields (except the primary keys) to UpdateCheck.Never. This doesn't work for me (I think) due to my mapping layer between the LINQ to SQL classes and my Entity/ViewModel layer. When I convert my entity into the LINQ to SQL db object it seems to update those fields regardless of the UpdateCheck setting. This might be just because of integers, since not setting an int means it is zero (and no, I can't use int? instead).

    Are there any other options?
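    If the goal is simply "set one column for a known primary key", one low-ceremony option (a sketch, with illustrative table and column names) is to let the DataContext issue the UPDATE itself via ExecuteCommand, which avoids both the read and the optimistic-concurrency check:

        using (var db = new MyDataContext())   // your LINQ to SQL DataContext
        {
            // The placeholders are turned into SqlParameters, not concatenated into the text.
            db.ExecuteCommand(
                "UPDATE dbo.Widgets SET Name = {0} WHERE WidgetId = {1}",
                newName, widgetId);
        }

    The trade-off is that this bypasses the object model entirely; the attach-a-stub approach can also work, but it generally needs exactly the UpdateCheck settings described above.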


  • How do I eliminate Error 3002?

    - by Andrew
    Say I have the following table definitions in SQL Server 2008: CREATE TABLE Person (PersonId INT IDENTITY NOT NULL PRIMARY KEY, Name VARCHAR(50) NOT NULL, ManyMoreIrrelevantColumns VARCHAR(MAX) NOT NULL) CREATE TABLE Model (ModelId INT IDENTITY NOT NULL PRIMARY KEY, ModelName VARCHAR(50) NOT NULL, Description VARCHAR(200) NULL) CREATE TABLE ModelScore (ModelId INT NOT NULL REFERENCES Model (ModelId), Score INT NOT NULL, Definition VARCHAR(100) NULL, PRIMARY KEY (ModelId, Score)) CREATE TABLE PersonModelScore (PersonId INT NOT NULL REFERENCES Person (PersonId), ModelId INT NOT NULL, Score INT NOT NULL, PRIMARY KEY (PersonId, ModelId), FOREIGN KEY (ModelId, Score) REFERENCES ModelScore (ModelId, Score)) The idea here is that each Person may have only one ModelScore per Model, but each Person may have a score for any number of defined Models. As far as I can tell, this SQL should enforce these constraints naturally. The ModelScore has a particular "meaning," which is contained in the Definition. Nothing earth-shattering there. Now, I try translating this into Entity Framework using the designer. After updating the model from the database and doing some editing, I have a Person object, a Model object, and a ModelScore object. PersonModelScore, being a join table, is not an object but rather is included as an association with some other name (let's say ModelScorePersonAssociation). The mapping details for the association are as follows: - Association - Maps to PersonModelScore - ModelScore ModelId : Int32 <=> ModelId : int Score : Int32 <=> Score : int - Person PersonId : Int32 <=> PersonId : int On the right-hand side, the ModelId and PersonId values have primary key symbols, but the Score value does not. Upon compilation, I get: Error 3002: Problem in Mapping Fragment starting at line 5190: Potential runtime violation of table PersonModelScore's keys (PersonModelScore.ModelId, PersonModelScore.PersonId): Columns (PersonModelScore.PersonId, PersonModelScore.ModelId) are mapped to EntitySet ModelScorePersonAssociation's properties (ModelScorePersonAssociation.Person.PersonId, ModelScorePersonAssociation.ModelScore.ModelId) on the conceptual side but they do not form the EntitySet's key properties (ModelScorePersonAssociation.ModelScore.ModelId, ModelScorePersonAssociation.ModelScore.Score, ModelScorePersonAssociation.Person.PersonId). What have I done wrong in the designer or otherwise, and how can I fix the error? Many thanks!


  • LINQ(2 SQL) Insert Multiple Tables Question

    - by Refracted Paladin
    I have 3 tables. A primary EmploymentPlan table with PK GUID EmploymentPlanID and 2 FK's GUID PrevocServicesID & GUID JobDevelopmentServicesID. There are of course other fields, almost exclusively varchar(). Then the 2 secondary tables with the corresponding PK to the primary's FK's. I am trying to write the LINQ INSERT Method and am struggling with the creation of the keys. Say I have a method like below. Is that correct? Will that even work? Should I have seperate methods for each? Also, when INSERTING I didn't think I needed to provide the PK for a table. It is auto-generated, no? Thanks, public static void InsertEmploymentPlan(int planID, Guid employmentQuestionnaireID, string user, bool communityJob, bool jobDevelopmentServices, bool prevocServices, bool transitionedPrevocIntegrated, bool empServiceMatchPref) { using (var context = MatrixDataContext.Create()) { var empPrevocID = Guid.NewGuid(); var prevocPlan = new tblEmploymentPrevocService { EmploymentPrevocID = empPrevocID }; context.tblEmploymentPrevocServices.InsertOnSubmit(prevocPlan); var empJobDevID = Guid.NewGuid(); var jobDevPlan = new tblEmploymentJobDevelopmetService() { JobDevelopmentServicesID = empJobDevID }; context.tblEmploymentJobDevelopmetServices.InsertOnSubmit(jobDevPlan); var empPlan = new tblEmploymentQuestionnaire { CommunityJob = communityJob, EmploymentQuestionnaireID = Guid.NewGuid(), InsertDate = DateTime.Now, InsertUser = user, JobDevelopmentServices = jobDevelopmentServices, JobDevelopmentServicesID =empJobDevID, PrevocServices = prevocServices, PrevocServicesID =empPrevocID, TransitionedPrevocToIntegrated =transitionedPrevocIntegrated, EmploymentServiceMatchPref = empServiceMatchPref }; context.tblEmploymentQuestionnaires.InsertOnSubmit(empPlan); context.SubmitChanges(); } } I understand I can use more then 1 InsertOnSubmit, See SO ? HERE, I just don't understand how that would apply to my situation and the PK/FK creation.
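    A sketch of the association-based alternative (untested, and it assumes the .dbml defines the two relationships so the generated tblEmploymentQuestionnaire class exposes association properties — the property names below are guesses): assign the related objects instead of copying GUIDs by hand, insert only the top-level object, and let a single SubmitChanges order the three INSERTs in one transaction. Note that GUID primary keys are not auto-generated unless the column has a NEWID() default and is marked IsDbGenerated, so supplying Guid.NewGuid() as the question does is still needed.

        using (var context = MatrixDataContext.Create())
        {
            var prevocPlan = new tblEmploymentPrevocService
            {
                EmploymentPrevocID = Guid.NewGuid()
            };
            var jobDevPlan = new tblEmploymentJobDevelopmetService
            {
                JobDevelopmentServicesID = Guid.NewGuid()
            };

            var empPlan = new tblEmploymentQuestionnaire
            {
                EmploymentQuestionnaireID = Guid.NewGuid(),
                InsertDate = DateTime.Now,
                InsertUser = user,
                CommunityJob = communityJob,
                JobDevelopmentServices = jobDevelopmentServices,
                PrevocServices = prevocServices,
                TransitionedPrevocToIntegrated = transitionedPrevocIntegrated,
                EmploymentServiceMatchPref = empServiceMatchPref,
                // Setting the association properties lets LINQ to SQL fill in the
                // foreign-key columns and sequence the INSERTs itself.
                tblEmploymentPrevocService = prevocPlan,          // hypothetical property name
                tblEmploymentJobDevelopmetService = jobDevPlan    // hypothetical property name
            };

            context.tblEmploymentQuestionnaires.InsertOnSubmit(empPlan);
            context.SubmitChanges(); // all three rows in one go
        }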


  • Starter question of declarative style SQLAlchemy relation()

    - by jfding
    I am quite new to SQLAlchemy, or even database programming, maybe my question is too simple. Now I have two class/table: class User(Base): __tablename__ = 'users' id = Column(Integer, primary_key=True) name = Column(String(40)) ... class Computer(Base): __tablename__ = 'comps' id = Column(Integer, primary_key=True) buyer_id = Column(None, ForeignKey('users.id')) user_id = Column(None, ForeignKey('users.id')) buyer = relation(User, backref=backref('buys', order_by=id)) user = relation(User, backref=backref('usings', order_by=id)) Of course, it cannot run. This is the backtrace: File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/state.py", line 71, in initialize_instance fn(self, instance, args, kwargs) File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/mapper.py", line 1829, in _event_on_init instrumenting_mapper.compile() File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/mapper.py", line 687, in compile mapper._post_configure_properties() File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/mapper.py", line 716, in _post_configure_properties prop.init() File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/interfaces.py", line 408, in init self.do_init() File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/properties.py", line 716, in do_init self._determine_joins() File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/properties.py", line 806, in _determine_joins "many-to-many relation, 'secondaryjoin' is needed as well." % (self)) sqlalchemy.exc.ArgumentError: Could not determine join condition between parent/child tables on relation Package.maintainer. Specify a 'primaryjoin' expression. If this is a many-to-many relation, 'secondaryjoin' is needed as well. There's two foreign keys in class Computer, so the relation() callings cannot determine which one should be used. I think I must use extra arguments to specify it, right? And howto? Thanks


  • Change find method in database search so that it isn't case sensitive in Rails app

    - by Ryan
    Hello, I am learning Rails and have created a work-in-progress app that does one-word searches on a database of shortcut keys for various programs (http://keyboardcuts.heroku.com/shortcuts/home). The search method in the Shortcut model is the following: def self.search(search) search_condition = "%" + search + "%" find(:all, :conditions => ['action LIKE ? OR application LIKE ?', search_condition, search_condition]) end ...where 'action' and 'application' are columns in a SQLite table. (source: https://we.riseup.net/rails/simple-search-tutorial) For some reason, the search seems to be case sensitive (you can see this by searching 'Paste' vs. 'paste'). Can anyone help me figure out why and what I can do to make it not case sensitive? If not, can you at least point me in the right direction? Database creation: I first copied shortcuts from various website into Excel and saved it as a CSV. Then I migrated the database and filled it with the data using db:seed and a small script I wrote (I viewed the database and it looked fine). To get the SQLite database to the server, I used Taps as outline by the Heroku website (http://blog.heroku.com/archives/2009/3/18/push_and_pull_databases_to_and_from_heroku/). I am using Ubuntu. Please let me know if you need more information. Thanks in advance for you help, very much appreciated! Ryan


  • Hidden features of Perl?

    - by Adam Bellaire
    What are some really useful but esoteric language features in Perl that you've actually been able to employ to do useful work? Guidelines: Try to limit answers to the Perl core and not CPAN Please give an example and a short description Hidden Features also found in other languages' Hidden Features: (These are all from Corion's answer) C# Duff's Device Portability and Standardness Quotes for whitespace delimited lists and strings Aliasable namespaces Java Static Initalizers JavaScript Functions are First Class citizens Block scope and closure Calling methods and accessors indirectly through a variable Ruby Defining methods through code PHP Pervasive online documentation Magic methods Symbolic references Python One line value swapping Ability to replace even core functions with your own functionality Other Hidden Features: Operators: The bool quasi-operator The flip-flop operator Also used for list construction The ++ and unary - operators work on strings The repetition operator The spaceship operator The || operator (and // operator) to select from a set of choices The diamond operator Special cases of the m// operator The tilde-tilde "operator" Quoting constructs: The qw operator Letters can be used as quote delimiters in q{}-like constructs Quoting mechanisms Syntax and Names: There can be a space after a sigil You can give subs numeric names with symbolic references Legal trailing commas Grouped Integer Literals hash slices Populating keys of a hash from an array Modules, Pragmas, and command-line options: use strict and use warnings Taint checking Esoteric use of -n and -p CPAN overload::constant IO::Handle module Safe compartments Attributes Variables: Autovivification The $[ variable tie Dynamic Scoping Variable swapping with a single statement Loops and flow control: Magic goto for on a single variable continue clause Desperation mode Regular expressions: The \G anchor (?{}) and '(??{})` in regexes Other features: The debugger Special code blocks such as BEGIN, CHECK, and END The DATA block New Block Operations Source Filters Signal Hooks map (twice) Wrapping built-in functions The eof function The dbmopen function Turning warnings into errors Other tricks, and meta-answers: cat files, decompressing gzips if needed Perl Tips See Also: Hidden features of C Hidden features of C# Hidden features of C++ Hidden features of Java Hidden features of JavaScript Hidden features of Ruby Hidden features of PHP Hidden features of Python


  • Page.User.Identity.Name is blank on pages of subdomains

    - by sparks
    I have multiple subdomains trying to use a single subdomain for authentication, using forms authentication, all running on Windows Server 2008 R2. All of the forms authentication pages are set up to use the same name, and on the authentication page the cookie is added with the following snippet:

        FormsAuthentication.SetAuthCookie(txtUserName.Text, false);
        System.Web.HttpCookie MyCookie =
            System.Web.Security.FormsAuthentication.GetAuthCookie(User.Identity.Name.ToString(), false);
        MyCookie.Domain = ConfigurationManager.AppSettings["domainName"];
        Response.AppendCookie(MyCookie);

    When I am logged in to signon.mysite.com, the Page.User.Identity.IsAuthenticated and Page.User.Identity.Name properties both work fine. When I navigate to subdomain.mysite.com, Page.User.Identity.IsAuthenticated returns true, but the name is empty. I tried to retrieve it from the cookie using the following, but it also was blank:

        HttpCookie cookie = Request.Cookies[".ASPXAUTH"];
        FormsAuthenticationTicket fat = FormsAuthentication.Decrypt(cookie.Value);
        user2_lbl.Text = fat.Name;

    When googling the issue I found some people saying something must be added to global.asax and others saying it wasn't necessary. The goal is to be able to log in on the authentication subdomain and have the user identity accessible from the root site and the other subdomains. Machine keys match in all the web.config files, and AppSettings["domainName"] is currently set to "mysite.com". Does anyone know what is preventing me from accessing the user information?
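    One likely culprit (a hedged reading of the snippet above, not a confirmed fix): at the moment that code runs the user is not authenticated yet, so User.Identity.Name is still an empty string — which means the domain-wide cookie appended to the response carries an empty user name, and that is the cookie the other subdomains decrypt. Building the domain cookie from the typed user name avoids that:

        FormsAuthentication.SetAuthCookie(txtUserName.Text, false);

        // Sketch: use the known user name, not User.Identity.Name (still empty during the login request).
        HttpCookie authCookie = FormsAuthentication.GetAuthCookie(txtUserName.Text, false);
        authCookie.Domain = ConfigurationManager.AppSettings["domainName"]; // e.g. ".mysite.com"
        Response.Cookies.Add(authCookie);

    Combined with matching machineKey sections and the same forms name across the sites (which the question already has), that is usually enough; giving domainName a leading dot is a common convention to make the cookie explicit for every subdomain.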


  • Error in Implementing WS Security web service in WebLogic 10.3

    - by Chris
    Hi, I am trying to develop a JAX WS web service with WS-Security features in WebLogic 10.3. I have used the ant tasks WSDLC, JWSC and ClientGen to generate skeleton/stub for this web service. I have two keystores namely WSIdentity.jks and WSTrust.jks which contains the keys and certificates. One of the alias of WSIdentity.jks is "ws02p". The test client has the following code to invoke the web service: SecureSimpleService service = new SecureSimpleService(); SecureSimplePortType port = service.getSecureSimplePortType(); List credProviders = new ArrayList(); CredentialProvider cp = new ClientBSTCredentialProvider( "E:\\workspace\\SecureServiceWL103\\keystores\\WSIdentity.jks", "webservice", "ws01p","webservice"); credProviders.add(cp); string endpointURL="http://localhost:7001/SecureSimpleService/SecureSimpleService"; BindingProvider bp = (BindingProvider)port; Map requestContext = bp.getRequestContext(); requestContext.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, endpointURL); requestContext.put(WSSecurityContext.CREDENTIAL_PROVIDER_LIST,credProviders); requestContext.put(WSSecurityContext.TRUST_MANAGER, new TrustManager() { public boolean certificateCallback(X509Certificate[] chain, int validateErr) { // Put some custom validation code in here. // Just return true for now return true; } }); SignResponse resp1 = new SignResponse(); resp1 = port.echoSignOnlyMessage("hello sign"); System.out.println("Result: " + resp1.getMessage()); When I trying to invoke this web servcie using this test client I am getting the error "Invalid signing policy" with the following stack trace: *[java] weblogic.wsee.security.wss.policy.SecurityPolicyArchitectureException: Invalid signing policy [java] at weblogic.wsee.security.wss.plan.SecurityPolicyBlueprintDesigner.verifyPolicy(SecurityPolicyBlueprintDesigner.java:786) [java] at weblogic.wsee.security.wss.plan.SecurityPolicyBlueprintDesigner.designOutboundBlueprint(SecurityPolicyBlueprintDesigner.java:136) Am I missing any configuration settings in WebLogic admin console or is it do with something else. Thanks in advance.


  • android keylistener losing key taps

    - by miannelle
    I am using a keylistener to get key taps. The problem is that once you tap the delete key, the next key tap is not registering. The key tap after that keeps working. If I tap 2 deletes in a row, they work, just no other keys. They just disappear. I put in a log test before the "if (keycode" section and it shows nothing after the first delete is pressed, unless it is another delete. I am using the following code (Thanks Shawn).: itemPrice.setKeyListener(new CalculatorKeyListener()); itemPrice.setRawInputType(Configuration.KEYBOARD_12KEY); class CalculatorKeyListener extends NumberKeyListener { public int getInputType() { return InputType.TYPE_CLASS_NUMBER; } @Override public boolean onKeyDown(View view, Editable content, int keyCode, KeyEvent event) { if (keyCode >= KeyEvent.KEYCODE_0 && keyCode <= KeyEvent.KEYCODE_9) { digitPressed(keyCode - KeyEvent.KEYCODE_0); } else if (keyCode == KeyEvent.KEYCODE_DEL) { deletePressed(); } return true; } @Override protected char[] getAcceptedChars() { return new char[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 }; } } With this problem the keylistener provides no value to me. There must be something that I am missing. Thanks,


  • UniqueConstraint in EmbeddedConfiguration

    - by LantisGaius
    I just started using db4o on C#, and I'm having trouble setting the UniqueConstraint on the DB.. here's the db4o configuration static IObjectContainer db = Db4oEmbedded.OpenFile(dbase.Configuration(), "data.db4o"); static IEmbeddedConfiguration Configuration() { IEmbeddedConfiguration dbConfig = Db4oEmbedded.NewConfiguration(); // Initialize Replication dbConfig.File.GenerateUUIDs = ConfigScope.Globally; dbConfig.File.GenerateVersionNumbers = ConfigScope.Globally; // Initialize Indexes dbConfig.Common.ObjectClass(typeof(DAObs.Environment)).ObjectField("Key").Indexed(true); dbConfig.Common.Add(new Db4objects.Db4o.Constraints.UniqueFieldValueConstraint(typeof(DAObs.Environment), "Key")); return dbConfig; } and the object to serialize: class Environment { public string Key { get; set; } public string Value { get; set; } } everytime I get to commiting some values, an "Object reference not set to an instance of an object." Exception pops up, with a stack trace pointing to the UniqueFieldValueConstraint. Also, when I comment out the two lines after the "Initialize Indexes" comment, everything runs fine (Except you can save non-unique keys, which is a problem)~ Commit code (In case I'm doing something wrong in this part too:) public static void Create(string key, string value) { try { db.Store(new DAObs.Environment() { Key = key, Value = value }); db.Commit(); } catch (Db4objects.Db4o.Events.EventException ex) { System.Console.WriteLine (DateTime.Now + " :: Environment.Create\n" + ex.InnerException.Message +"\n" + ex.InnerException.StackTrace); db.Rollback(); } } Help please? Thanks in advance~
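    One thing worth checking (an assumption, not a confirmed diagnosis): db4o indexes and constrains stored fields, and with a C# auto-implemented property the stored field is the compiler-generated backing field rather than "Key", so the unique-constraint check may be looking for field metadata that does not exist — which would explain the NullReferenceException at commit time. A sketch using plain fields so the configured name matches what is actually stored:

        // In the DAObs namespace, store plain fields instead of auto-properties:
        class Environment
        {
            public string Key;
            public string Value;
        }

        static IEmbeddedConfiguration Configuration()
        {
            IEmbeddedConfiguration cfg = Db4oEmbedded.NewConfiguration();
            cfg.File.GenerateUUIDs = ConfigScope.Globally;
            cfg.File.GenerateVersionNumbers = ConfigScope.Globally;
            // "Key" now names a real stored field, so the index and the constraint can bind to it.
            cfg.Common.ObjectClass(typeof(DAObs.Environment)).ObjectField("Key").Indexed(true);
            cfg.Common.Add(new Db4objects.Db4o.Constraints.UniqueFieldValueConstraint(
                typeof(DAObs.Environment), "Key"));
            return cfg;
        }

    If the auto-property has to stay, configuring the backing-field name ("<Key>k__BackingField") is the other route, at the cost of tying the configuration to a compiler detail.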


  • relational data from xml

    - by Beta033
    Our problem is this, we have a relational database to store objects in tables. As any relational database could, several of these tables have multiple foreign keys pointing to other tables (all pretty normal stuff) We've been trying to identify a solution to allow export of this relational data, ideally, only 1 of the objects in the model, to some sort of file (xml, text, ??). So it wouldn't be simple enough to just export 1 table as data stored in other tables would contribute to the complete model of the object. Something like the following picture: Toward this, i've written a routine to export the structure by following the foreign key paths which exports something similar to the following. <Tables> <TableA PK="1", val1, val2, val3> <TableC PK="1", FK_A="1", Val1, val2, val3> <TableC PK="2", FK_A="1", val1, val2, val3> <TableB PK="1", FK_A="1", FK_C="1", val1, val2, val3> <TableB PK="2", FK_A="1", FK_C="2", val1, val2, val3> <TableD PK="1", FK_B="1", FK_C="1", val1> <TableD PK="2", FK_B="2", FK_C="1", val1> </Tables> However, given this structure, it cannot be placed into a heirarchial format (ie D2 is a child of C1 and B2; and B2 is a child of C2) Which in turn, makes my life very difficult when trying to identify a methodology to reimport (and reKey) these objects. Has anybody done anything like this? how do you do it? are there tools or documentation on how this is best accomplished? Thanks for your help.


  • Pass table as parameter to SQLCLR TV-UDF

    - by Skeolan
    We have a third-party DLL that can operate on a DataTable of source information and generate some useful values, and we're trying to hook it up through SQLCLR to be callable as a table-valued UDF in SQL Server 2008. Taking the concept here one step further, I would like to program a CLR Table-Valued Function that operates on a table of source data from the DB. I'm pretty sure I understand what needs to happen on the T-SQL side of things; but, what should the method signature look like in the .NET (C#) code? What would be the parameter datatype for "table data from SQL Server?" e.g. /* Setup */ CREATE TYPE InTableType AS TABLE (LocationName VARCHAR(50), Lat FLOAT, Lon FLOAT) GO CREATE TYPE OutTableType AS TABLE (LocationName VARCHAR(50), NeighborName VARCHAR(50), Distance FLOAT) GO CREATE ASSEMBLY myCLRAssembly FROM 'D:\assemblies\myCLR_UDFs.dll' WITH PERMISSION_SET = EXTERNAL_ACCESS GO CREATE FUNCTION GetDistances(@locations InTableType) RETURNS OutTableType AS EXTERNAL NAME myCLRAssembly.GeoDistance.SQLCLRInitMethod GO /* Execution */ DECLARE @myTable InTableType INSERT INTO @myTable(LocationName, Lat, Lon) VALUES('aaa', -50.0, -20.0) INSERT INTO @myTable(LocationName, Lat, Lon) VALUES('bbb', -20.0, -50.0) SELECT * FROM @myTable DECLARE @myResult OutTableType INSERT INTO @myResult MyCLRTVFunction @myTable --returns a table result calculated using the input The lat/lon - distance thing is a silly example that should of course be better handled entirely in SQL; but I hope it illustrates the general intent of table-in - table-out through a table-valued UDF tied to a SQLCLR assembly. I am not certain this is possible; what would the SQLCLRInitMethod method signature look like in the C#? public class GeoDistance { [SqlFunction(FillRowMethodName = "FillRow")] public static IEnumerable SQLCLRInitMethod(<appropriateType> myInputData) { //... } public static void FillRow(...) { //... } } If it's not possible, I know I can use a "context connection=true" SQL connection within the C# code to have the CLR component query for the necessary data given the relevant keys; but that's sensitive to changes in the DB schema. So I hope to just have SQL bundle up all the source data and pass it to the function. Bonus question - assuming this works at all, would it also work with more than one input table?
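    As far as I know, SQL Server 2008 will not pass a user-defined table type into a SQLCLR function, so the usual workarounds are either to read the rows over the context connection inside the function (the schema-sensitive option already mentioned above) or to hand the rows in as XML and shred them in the CLR. A sketch of the XML flavour — element and attribute names are invented, and the pass-through body stands in for the real third-party calculation:

        using System;
        using System.Collections;
        using System.Data.SqlTypes;
        using System.Globalization;
        using System.Xml;
        using Microsoft.SqlServer.Server;

        public class GeoDistance
        {
            private class Row { public string Name; public double Lat; public double Lon; }

            [SqlFunction(FillRowMethodName = "FillRow",
                TableDefinition = "LocationName NVARCHAR(50), Lat FLOAT, Lon FLOAT")]
            public static IEnumerable SQLCLRInitMethod(SqlXml locations)
            {
                // The T-SQL side would send: SELECT ... FROM @myTable FOR XML RAW('Location'), ROOT('Locations')
                var rows = new ArrayList();
                using (XmlReader reader = locations.CreateReader())
                {
                    while (reader.ReadToFollowing("Location"))
                    {
                        rows.Add(new Row
                        {
                            Name = reader.GetAttribute("LocationName"),
                            Lat = double.Parse(reader.GetAttribute("Lat"), CultureInfo.InvariantCulture),
                            Lon = double.Parse(reader.GetAttribute("Lon"), CultureInfo.InvariantCulture)
                        });
                    }
                }
                return rows; // the third-party DataTable computation would go here instead
            }

            public static void FillRow(object obj, out SqlString locationName, out SqlDouble lat, out SqlDouble lon)
            {
                var r = (Row)obj;
                locationName = r.Name;
                lat = r.Lat;
                lon = r.Lon;
            }
        }

    The same trick extends to the bonus question: more than one input table just means more than one XML parameter (or one document with two distinct element names).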


  • Problem disposing a class stored in a Dictionary: it is still in heap memory even after GC.Collect

    - by Bahgat Mashaly
    Hello i have a problem disposing class in Dictionary this is my code private Dictionary<string, MyProcessor> Processors = new Dictionary<string, MyProcessor>(); private void button1_Click(object sender, EventArgs e) { if (!Processors.ContainsKey(textBox1.Text)) { Processors.Add(textBox1.Text, new MyProcessor()); } } private void button2_Click(object sender, EventArgs e) { MyProcessor currnt_processor = Processors[textBox1.Text]; Processors.Remove(textBox2.Text); currnt_processor.Dispose(); currnt_processor = null; GC.Collect(); } public class MyProcessor: IDisposable { private bool isDisposed = false; string x = ""; public MyProcessor() { for (int i = 0; i < 20000; i++) { //this line only to increase the memory usage to know if the class is dispose or not x = x + "gggggggggggg"; } this.Dispose(); GC.SuppressFinalize(this); } public void Dispose() { this.Dispose(true); GC.SuppressFinalize(this); } public void Dispose(bool disposing) { if (!this.isDisposed) { isDisposed = true; this.Dispose(); } } ~MyProcessor() { Dispose(false); } } i use "ANTS Memory Profiler" to monitor heap memory the disposing work only when i remove all keys from dictionary how can i destroy the class from heap memory ? thanks in advance
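    A hedged observation on the code above: Dispose() never releases managed memory — the large string only leaves the heap once nothing references the object and the garbage collector runs — and the handler reads the key from textBox1 but removes the key from textBox2, so the dictionary may well still be holding the very instance the profiler shows. A sketch of the handler with both points addressed:

        private void button2_Click(object sender, EventArgs e)
        {
            string key = textBox1.Text;
            MyProcessor processor;
            if (Processors.TryGetValue(key, out processor))
            {
                Processors.Remove(key);   // drop the only long-lived reference
                processor.Dispose();      // for unmanaged resources; it does not free the string
            }
            // No GC.Collect() is needed: the instance becomes collectable as soon as no live
            // reference (dictionary entry, local variable, closure, ...) points to it.
        }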


  • Exception thrown on secondary workflow

    - by nav
    Hi All, I am running a workflow fired when a new list item is created, the workflow creates a new task item and sends an email. This works fine. I have another workflow fired when a task item is modified, when I try and modify the task item and click complete an exception: I have checked the recycle bin and the item asscociated with the workflow is not there, i have deleted all items for the recycle bin but still get the error... Any help appreciated! Exception details: Server Error in '/' Application. The workflow's parent item associated with this task is currently in the recycle bin, which prevents the task from being completed. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: Microsoft.SharePoint.SPException: The workflow's parent item associated with this task is currently in the recycle bin, which prevents the task from being completed. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [SPException: The workflow's parent item associated with this task is currently in the recycle bin, which prevents the task from being completed.] Microsoft.SharePoint.WebPartPages.DataFormWebPart.UpdateCallback(Int32 affectedRecords, Exception ex) +198 System.Web.UI.DataSourceView.Update(IDictionary keys, IDictionary values, IDictionary oldValues, DataSourceViewOperationCallback callback) +4432739 Microsoft.SharePoint.WebPartPages.DataFormWebPart.FlatCommit() +724 Microsoft.SharePoint.WebPartPages.DataFormWebPart.PerformCommit() +92 Microsoft.SharePoint.WebPartPages.DataFormWebPart.HandleOnSave(Object sender, EventArgs e) +61 Microsoft.SharePoint.WebPartPages.DataFormWebPart.RaisePostBackEvent(String eventArgument) +3651 System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +39 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +3215


  • How to use BeanTreeView?

    - by joseph
    Hello. I have read through whole platform.netbeans.org and I do not find how to add some instance to existing children. I can add static array of diagrams(created in addNotify()), but how to add new Diagram object to the Children? I want to add it from an action that I will create(not included in code). See the code: public class ExploirerTopComponent extends TopComponent implements ExplorerManager.Provider { ... ... ... public ExploirerTopComponent() { ... ... associateLookup (ExplorerUtils.createLookup(mgr, getActionMap())); mgr.setRootContext(new MyProjects());// setDisplayName ("Projects explorer"); } public class MyProjects extends AbstractNode { public MyProjects(MainProject obj) { super (new ProjectsChildren(), Lookups.singleton(obj)); setDisplayName ( obj.getName()); } public MyProjects() { super (new ProjectsChildren()); setDisplayName ("My projects"); } } . public class ProjectsChildren extends Children.Keys{ public ProjectsChildren() { } @Override protected Node[] createNodes(Object o) { MainProject obj = (MainProject) o; return new Node[] { new MyDiagrams() }; } @Override protected void addNotify() { Diagram[] d = new Diagram[3]; for (int i = 0; i < pr.length; i++) { pr[i] = new Diagram("digram"); } setKeys (pr); } }


  • Update table columns bound to NSArrayController

    - by Loz
    Hi, I'm fairly new to the world of bindings in cocoa, and I'm having some troubles (perhaps/probably due to a misunderstanding). I have a singleton that contains an NSMutableArray called plugins, containing objects of class Plugin. It has a method called loadPlugins which adds objects to the plugins array. This may be called at any point. It's been added as an instance in Interface Builder. Also in IB is an NSObjectController, whose content outlet is connected to the singleton. There is also an NSArrayController, whose contentArray is bound to the NSObjectController (controller key is 'selection', model key path is 'plugins', object class name is 'Plugin'). And finally I have a table view with 2 columns, the values of which are bound to the NSArrayController's arrangedObjects, using keys of properties in the Plugin class. So far so standard (as far as I can tell from tutorials at least). My trouble is that when the loadPlugins method is called in the singleton, and objects are added to the plugins array, the table doesn't update to show the objects (unless loadPlugins is called before the nib is loaded). -reloadData called on the tableView doesn't do anything. Is there a way to tell the NSArrayController that the referenced array has been updated? I understand there is the -add: method for NSArrayController, which could be used in the loadPlugins, but this isn't desirable as I want to keep the singleton totally separate from the display aspect. This seems related to: http://stackoverflow.com/questions/1623396/refresh-cocoa-binding-nsarraycontroller-combobox The line: "editing the array behind the controller's back" seems to perhaps pinpoint the problem, but I would hope that it would be possible to have the singleton not know about the controller. Thanks in advance.


  • Suggest a good method with least lookup time complexity

    - by Amrish
    I have a structure which has three identifier fields and one value field, and I have a list of these objects. To give an analogy, the identifier fields are like a primary key: these three fields uniquely identify an object.

        Class
        {
            int a1;
            int a2;
            int a3;
            int value;
        };

    I will have a list of, say, 1000 objects of this data type. I need to check for specific values of these identity fields by passing values of a1, a2 and a3 to a lookup function, which checks whether any object with those specific values of a1, a2 and a3 is present and returns its value. What is the most effective way to implement this to achieve the best lookup time? One solution I could think of is to have a 3-dimensional matrix of length, say, 1000 and populate the value in it. This has a lookup time of O(1). But the disadvantages are: 1. I need to know the length of the array up front. 2. For a higher number of identity fields (say 20), I would need a 20-dimensional matrix, which would be an overkill on memory. For my actual implementation, I have 23 identity fields. Can you suggest a good way to store this data which would give me the best lookup time?
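    The usual answer is a hash table keyed on the composite identifier: average O(1) lookups, no need to know the count up front, and it scales to 23 identifier fields without a 23-dimensional array. The struct above looks like C++ (where std::unordered_map with a custom hash plays the same role); the sketch below illustrates the idea in C# for consistency with the other examples on this page:

        using System;
        using System.Collections.Generic;

        struct Key : IEquatable<Key>
        {
            public readonly int A1, A2, A3;
            public Key(int a1, int a2, int a3) { A1 = a1; A2 = a2; A3 = a3; }

            public bool Equals(Key other) { return A1 == other.A1 && A2 == other.A2 && A3 == other.A3; }
            public override bool Equals(object obj) { return obj is Key && Equals((Key)obj); }
            public override int GetHashCode() { return ((A1 * 397) ^ A2) * 397 ^ A3; }
        }

        class Lookup
        {
            private readonly Dictionary<Key, int> table = new Dictionary<Key, int>();

            public void Add(int a1, int a2, int a3, int value)
            {
                table[new Key(a1, a2, a3)] = value;
            }

            public bool TryGetValue(int a1, int a2, int a3, out int value)
            {
                return table.TryGetValue(new Key(a1, a2, a3), out value); // average O(1)
            }
        }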


  • Generic wrapper for System.Web.Caching.Cache functions

    - by David Neale
    I've created a generic wrapper for using the Cache object: public class Cache<T> where T : class { public Cache Cache {get;set;} public CachedKeys Key {get;set;} public Cache(Cache cache, CachedKeys key){ Cache = cache; Key = key; } public void AddToCache(T obj){ Cache.Add(Key.ToString(), obj, null, DateTime.Now.AddMinutes(5), System.Web.Caching.Cache.NoSlidingExpiration, System.Web.Caching.CacheItemPriority.Normal, null); } public bool TryGetFromCache(out T cachedData) { cachedData = Cache[Key.ToString()] as T; return cachedData != null; } public void RemoveFromCache() { Cache.Remove(Key.ToString()); } } The CachedKeys enumeration is just a list of keys that can be used to cache data. The trouble is, to call it is quite convuluted: var cache = new Cache<MyObject>(Page.Cache, CachedKeys.MyKey); MyObject myObject = null; if(!cache.TryGetFromCache(out myObject)){ //get data... cache.AddToCache(data); //add to cache return data; } return myObject; I only store one instance of each of my objects in the cache. Therefore, is there any way that I can create an extension method that accepts the type of object to Cache and uses (via Reflection) its Name as the cache key? public static Cache<T> GetCache(this Cache cache, Type cacheType){ Cache<cacheType> Cache = new Cache<cacheType>(cache, cacheType.Name); } Of course, there's two errors here: Extension methods must be defined in a non-generic static class The type or namespace name 'cacheType' could not be found This is clearly not the right approach but I thought I'd show my working. Could somebody guide me in the right direction?
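    The extension method becomes straightforward once it is generic rather than Type-based; the sketch below assumes Cache<T> is given a constructor (or overload) that accepts a string key, since a type name cannot be a member of the CachedKeys enum:

        public static class CacheExtensions
        {
            public static Cache<T> GetCache<T>(this System.Web.Caching.Cache cache) where T : class
            {
                // One well-known key per cached type; nothing beyond typeof(T).Name is needed.
                return new Cache<T>(cache, typeof(T).Name);
            }
        }

        // Usage:
        // var cache = Page.Cache.GetCache<MyObject>();
        // MyObject myObject;
        // if (!cache.TryGetFromCache(out myObject)) { /* load data, then cache.AddToCache(data); */ }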


  • MSTest Test Context Exception Handling

    - by Flip
    Is there a way that I can get to the exception that was handled by the MSTest framework using the TestContext or some other method on a base test class? If an unhandled exception occurs in one of my tests, I'd like to spin through all the items in the exception.Data dictionary and display them to the test result to help me figure out why the test failed (we usually add data to the exception to help us debug in the production env, so I'd like to do the same for testing). Note: I am not testing that an exception was SUPPOSED TO HAPPEN (I have other tests for that), I am testing a valid case, I just need to see the exception data. Here is a code example of what I'm talking about. [TestMethod] public void IsFinanceDeadlineDateValid() { var target = new BusinessObject(); SetupBusinessObject(target); //How can I capture this in the text context so I can display all the data //in the exception in the test result... var expected = 100; try { Assert.AreEqual(expected, target.PerformSomeCalculationThatMayDivideByZero()); } catch (Exception ex) { ex.Data.Add("SomethingImportant", "I want to see this in the test result, as its important"); ex.Data.Add("Expected", expected); throw ex; } } I understand there are issues around why I probably shouldn't have such an encapsulating method, but we also have sub tests to test all the functionality of PerformSomeCalculation... However, if the test fails, 99% of the time, I rerun it passes, so I can't debug anything without this information. I would also like to do this on a GLOBAL level, so that if any test fails, I get the information in the test results, as opposed to doing it for each individual test. Here is the code that would put the exception info in the test results. public void AddDataFromExceptionToResults(Exception ex) { StringBuilder whereAmI = new StringBuilder(); var holdException = ex; while (holdException != null) { Console.WriteLine(whereAmI.ToString() + "--" + holdException.Message); foreach (var item in holdException.Data.Keys) { Console.WriteLine(whereAmI.ToString() + "--Data--" + item + ":" + holdException.Data[item]); } holdException = holdException.InnerException; } }
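    MSTest does not hand the failing exception to TestContext, so a common workaround (sketched below, and it assumes the tests can inherit a shared base class) is to route each test body through a helper that logs Exception.Data and rethrows — the AddDataFromExceptionToResults idea applied in one place instead of per test:

        using System;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        public class LoggingTestBase
        {
            public TestContext TestContext { get; set; }

            // Wrap a test body so any escaping exception dumps its Data entries
            // into the test output before MSTest records the failure.
            protected void RunLogged(Action testBody)
            {
                try
                {
                    testBody();
                }
                catch (Exception ex)
                {
                    for (Exception e = ex; e != null; e = e.InnerException)
                    {
                        foreach (var key in e.Data.Keys)
                        {
                            Console.WriteLine("{0} -- Data -- {1}: {2}", e.Message, key, e.Data[key]);
                        }
                    }
                    throw;   // rethrow so the test still fails exactly as before
                }
            }
        }

        // Usage inside a test class that derives from LoggingTestBase:
        // [TestMethod]
        // public void IsFinanceDeadlineDateValid()
        // {
        //     RunLogged(() =>
        //     {
        //         var target = new BusinessObject();
        //         SetupBusinessObject(target);
        //         Assert.AreEqual(100, target.PerformSomeCalculationThatMayDivideByZero());
        //     });
        // }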

