Search Results

Search found 12714 results on 509 pages for 'db schema'.


  • How to fix "could not find a base address that matches scheme http"... in WCF

    - by Craig Shearer
    I'm trying to deploy a WCF service to my server, hosted in IIS. Naturally it works on my machine :) But when I deploy it, I get the following error:

        This collection already contains an address with scheme http. There can be at most one address per scheme in this collection.

    Googling on this, I find that I have to put a serviceHostingEnvironment element into the web.config file:

        <serviceHostingEnvironment>
          <baseAddressPrefixFilters>
            <add prefix="http://mywebsiteurl"/>
          </baseAddressPrefixFilters>
        </serviceHostingEnvironment>

    But once I have done this, I get the following:

        Could not find a base address that matches scheme http for the endpoint with binding BasicHttpBinding. Registered base address schemes are [https].

    It seems it doesn't know what the base address is, but how do I specify it? Here's the relevant section of my web.config file:

        <system.serviceModel>
          <serviceHostingEnvironment>
            <baseAddressPrefixFilters>
              <add prefix="http://mywebsiteurl"/>
            </baseAddressPrefixFilters>
          </serviceHostingEnvironment>
          <behaviors>
            <serviceBehaviors>
              <behavior name="WcfPortalBehavior">
                <serviceMetadata httpGetEnabled="true"/>
                <serviceDebug includeExceptionDetailInFaults="true"/>
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <bindings>
            <basicHttpBinding>
              <binding name="BasicHttpBinding_IWcfPortal"
                       maxBufferSize="2147483647" maxReceivedMessageSize="2147483647"
                       receiveTimeout="00:10:00" sendTimeout="00:10:00"
                       openTimeout="00:10:00" closeTimeout="00:10:00">
                <readerQuotas maxBytesPerRead="2147483647" maxArrayLength="2147483647"
                              maxStringContentLength="2147483647"/>
              </binding>
            </basicHttpBinding>
          </bindings>
          <services>
            <service behaviorConfiguration="WcfPortalBehavior"
                     name="Csla.Server.Hosts.Silverlight.WcfPortal">
              <endpoint address="" binding="basicHttpBinding"
                        contract="Csla.Server.Hosts.Silverlight.IWcfPortal"
                        bindingConfiguration="BasicHttpBinding_IWcfPortal">
              </endpoint>
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
            </service>
          </services>
        </system.serviceModel>

    Can anybody shed some light on what's going on and how to fix it? Thanks! Craig

    Read the article

  • Microsoft Sync Framework - How to reprovision a table (or entire scope) after schema changes?

    - by Rabbi
    B"H I have already setup Syncing with Microsoft Sync Framework, and now I need to add fields to a table. How do I re-provision the databases? The setup is exceedingly simple: Two sql express 2008 servers The scope includes the entire database Using Microsoft Sync Framework 2.0 Synchronizing by direct access. Using the standard new SqlSyncProvider Do I make the structural changes at both ends? Or do I only change one Server and let Sync Framework somehow propagate the change? Do I need to delete the _tracking tables and/or the stored procedures? How about the triggers? Has anyone been using the Sync Framework? Please help.

    Read the article

  • ‘Empty’ results from MQL Query. Freebase Schema: /film/film/starring & /film/actor/film

    - by user1879631
    First post here, so I hope this is enough detail. I started using freebase-python today to get film information for a program that I'm working on. One thing that I need to grab is a list of actors that starred in a film. I've followed some tutorials and guides on the way to do this, and can get a list of films for a director or the director of a film, but when I try to do the same with an actor or a film's cast, I get 'null' results. I have the same problem in both Python and the Freebase MQL Query Editor, and you can see what I've tried below. Links to all of the examples below written in the editor can be found here, as Stack Overflow wouldn't let me post links underneath each example on my first post!

    Working director query in Python:

        import freebase
        fb = freebase.mqlread
        q = {'type':'/film/film', 'name':'Inception', 'directed_by':[]}
        fb(q)

    Working director's filmography query in Python:

        import freebase
        fb = freebase.mqlread
        q = {'type':'/film/director', 'name':'Christopher Nolan', 'film':[]}
        fb(q)

    Based on these tests, I tried to do the same with actors, but with odd results.

    Not working cast list query in Python:

        import freebase
        fb = freebase.mqlread
        q = {'type':'/film/film', 'name':'Inception', 'starring':[]}
        fb(q)

    Not working actor's filmography query in Python:

        import freebase
        fb = freebase.mqlread
        q = {'type':'/film/actor', 'name':'Leonardo DiCaprio', 'film':[]}
        fb(q)

    Strangely, I get an accurate number of actors/films back, but no names. Does anyone have any idea what the problem might be? Thanks a lot, I'd appreciate any advice.

    Read the article

  • Is there a rake task for advancing or retreating your schema version by exactly one?

    - by user30997
    Back when migration version numbers were simply incremented as you created migrations, it was easy enough to do:

        rake migrate VERSION=097
        rake migrate VERSION=098
        rake migrate VERSION=099
        rake migrate VERSION=100

    ...but we now have migration numbers that are something like YYYYMMDDtimeofday. Not that this is a bad thing - it keeps migration version collisions to a minimum - but when I have 50 migrations and want to step through them one at a time, it is a hassle:

        rake migrate VERSION=20090129215142
        rake migrate VERSION=20090129219783
        ...etc.

    I have to have a list of all the migrations open in front of me, typing out the version numbers to advance by one. Is there anything that would have an easier syntax, like:

        rake migrate VERSION=NEXT

    or

        rake migrate VERSION=PREV

    ?

    Read the article

  • Is there a GUI that I can use to create XML documents based on my schema?

    - by David Conlisk
    Hi all, I want to create a simple graphical user interface to allow non-technical users to create an XML file without having to manually edit the XML source. Ideally I'd like a drag-and-drop interface, but failing that, anything really. The contents of the XML file are similar to an encoded flow chart of a binary tree, so maybe something like Visio, with a save-as-XML option? Here's a quick sample of the XML output that is required:

        <?xml version="1.0" encoding="utf-8"?>
        <steps>
          <step id="1" type="prompt">
            <prompt>
              Welcome.
            </prompt>
            <next>1.1</next>
          </step>
          <step id="1.1" type="question">
            <prompt>
              Do you have what you need?
            </prompt>
            <yes>1.2</yes>
            <no>1.1.1</no>
          </step>
          ...
        </steps>

    Are there any existing tools out there that you can recommend for this purpose? Ideally open-source or with a free personal license, but I'm interested in hearing about all options. Thanks, David

    Read the article

  • Resultant of a polynomial with x^n - 1

    - by devin.omalley
    Resultant of a polynomial with x^n - 1 (mod p): I am implementing the NTRUSign algorithm as described in http://grouper.ieee.org/groups/1363/lattPK/submissions/EESS1v2.pdf , section 2.2.7.1, which involves computing the resultant of a polynomial. I keep getting a zero vector for the resultant, which is obviously incorrect.

        private static CompResResult compResMod(IntegerPolynomial f, int p) {
            int N = f.coeffs.length;
            IntegerPolynomial a = new IntegerPolynomial(N);
            a.coeffs[0] = -1;
            a.coeffs[N-1] = 1;
            IntegerPolynomial b = new IntegerPolynomial(f.coeffs);
            IntegerPolynomial v1 = new IntegerPolynomial(N);
            IntegerPolynomial v2 = new IntegerPolynomial(N);
            v2.coeffs[0] = 1;
            int da = a.degree();
            int db = b.degree();
            int ta = da;
            int c = 0;
            int r = 1;
            while (db > 0) {
                c = invert(b.coeffs[db], p);
                c = (c * a.coeffs[da]) % p;
                IntegerPolynomial cb = b.clone();
                cb.mult(c);
                cb.shift(da - db);
                a.sub(cb, p);
                IntegerPolynomial v2c = v2.clone();
                v2c.mult(c);
                v2c.shift(da - db);
                v1.sub(v2c, p);
                if (a.degree() < db) {
                    r *= (int)Math.pow(b.coeffs[db], ta - a.degree());
                    r %= p;
                    if (ta % 2 == 1 && db % 2 == 1)
                        r = (-r) % p;
                    IntegerPolynomial temp = a;
                    a = b;
                    b = temp;
                    temp = v1;
                    v1 = v2;
                    v2 = temp;
                    ta = db;
                }
                da = a.degree();
                db = b.degree();
            }
            r *= (int)Math.pow(b.coeffs[0], da);
            r %= p;
            c = invert(b.coeffs[0], p);
            v2.mult(c);
            v2.mult(r);
            v2.mod(p);
            return new CompResResult(v2, r);
        }

    There is pseudocode in http://www.crypto.rub.de/imperia/md/content/texte/theses/da_driessen.pdf which looks very similar. Why is my code not working? Are there any intermediate results I can check? I am not posting the IntegerPolynomial code because it isn't too interesting and I have unit tests for it that pass. CompResResult is just a simple "Java struct".

    Read the article

  • Storing Icons and Sql with PHP

    - by Ole Jak
    So I have a simple Apache with MySQL setup and I am developing a PHP app. I have a Users table in my DB, and I want to let users store icons. My question is: what's the best way of attaching data such as icons (100-250 KB each) to the DB - is it better to store them inside the DB, or to store them as files and somehow attach links to the icons in the DB? What's the best way? Are there any classes that automate this process (of attaching such data to the DB)?
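    For the file-on-disk approach mentioned above, a minimal sketch (table and column names are hypothetical, not from the post) keeps the bytes on disk and only a path in MySQL:

        CREATE TABLE user_icons (
            icon_id    INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            user_id    INT NOT NULL,               -- references the Users table
            file_path  VARCHAR(255) NOT NULL,      -- e.g. 'uploads/icons/42.png'
            mime_type  VARCHAR(64)  NOT NULL,
            created_at DATETIME     NOT NULL
        );

    PHP then writes each uploaded file under uploads/icons/ and inserts one row per icon; at 100-250 KB per image this keeps the table small and lets Apache serve the files directly.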

    Read the article

  • Importing PKCS#12 (.p12) files into Firefox From the Command Line

    - by user11165
    I’ve posted this question up on the #Ubuntu and #Firefox forums, and really could do with some help. Anyone know where I could look, or can anyone help with the answer? I’m hoping the power of social media will come through...

    I have a need to perform the following action in Firefox 3.6.x:

        open Edit - Preferences - Advanced - Encryption - View Certificates - Your Certificates - Import

    However I need the same functionality from the bash command line. So far I’ve established that the following command is supposed to be used:

        certutil -A -t "u,u,u" -d /home/df001/.mozilla/firefox/qe5y5lht.tc.default/ -n "mycert" -i client.p12

    This executes with no issues; however, the certificate doesn’t show up in any Firefox certificate store. I have noted that prior to running this command, I have a cert8.db, key3.db and secmod.db file in the above folder. After running the command, certutil seems to have created a cert9.db, key4.db and pkcs12.txt file. Listing the contents using the command:

        certutil -L -d sql:/home/df001/.mozilla/firefox/qe5y5lht.tc.default/

    does seem to confirm that my attempts at importing files into a certificate folder of some kind have worked, because I get:

        Certificate Nickname                       Trust Attributes
                                                   SSL,S/MIME,JAR/XPI
        Thawte SSL CA                              ,,
        Go Daddy Secure Certification Authority    ,,
        Thawte SGC CA                              ,,
        Entrust Certification Authority - L1C      ,,
        My Nero                                    CT,C,c
        mynero                                     P,,
        davidfield - Internet Widgits Pty Ltd      u,u,u

    So, having tried this, and heading back over to the web, I came across this command:

        pk12util -d /home/df001/.mozilla/firefox/qe5y5lht.tc.default/ -i client.p12 -n "David Field" -P "cert8.db"

    This again appears to be importing something somewhere; however, again, viewing certs from the Firefox interface doesn’t show the imported cert. I’m surmising, on reading, that certutil and pk12util are creating a new NSS database which Firefox isn’t reading. So my question is: how can I import the p12 cert from the command line so it displays in the Firefox certificate manager interface?

    Why have I posted this here? Why not post on the Firefox forum? Well, I will copy and post the same question there as well; however, the ability to use the command line to do this is important, as I have potentially 2000 machines which will need a user cert imported into Firefox via a p12 file. I need to do this in the form of a script. I thought the hard part was going to be making the p12 file from the Microsoft 2003 CA; turns out that’s easy. I can’t just import via the GUI and copy over cert8.db x 2000, and I can’t ask users to use the CA web interface, as it’s for VPN access: the users are off site, and they need the VPN to get to the cert server. Is there any person out there who can help? By the way, I don’t have the Tor button installed.

    Read the article

  • How can I dump my MS SQL Server Database Schema to a human readable & printable format?

    - by Kyle West
    I want to generate something like the following:

        LineItems
            Id
            ItemId
            OrderId
            Price
        Orders
            Id
            CustomerId
            DateCreated
        Customers
            Id
            FirstName
            LastName
            Email

    I don't need all the relationships, the diagram that will never print correctly, the metadata, anything. Just a list of the tables and their columns in a simple text format. Has anyone done this before? Is there a simple solution? Thanks, Kyle
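    For what it's worth, a catalog query along these lines (a sketch, not from the post) produces exactly that kind of flat listing on SQL Server:

        SELECT t.TABLE_NAME, c.COLUMN_NAME
        FROM INFORMATION_SCHEMA.TABLES AS t
        JOIN INFORMATION_SCHEMA.COLUMNS AS c
          ON c.TABLE_SCHEMA = t.TABLE_SCHEMA AND c.TABLE_NAME = t.TABLE_NAME
        WHERE t.TABLE_TYPE = 'BASE TABLE'          -- skip views
        ORDER BY t.TABLE_NAME, c.ORDINAL_POSITION; -- columns in declared order

    Indenting the column names under each table name is then a simple text-processing step.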

    Read the article

  • How to optimize my PostgreSQL DB for prefix search?

    - by asmaier
    I have a table called "nodes" with roughly 1.7 million rows in my PostgreSQL db:

        =# \d nodes
               Table "public.nodes"
         Column |          Type          | Modifiers
        --------+------------------------+-----------
         id     | integer                | not null
         title  | character varying(256) |
         score  | double precision       |
        Indexes:
            "nodes_pkey" PRIMARY KEY, btree (id)

    I want to use information from that table for autocompletion of a search field, showing the user a list of the ten titles having the highest score fitting his input. So I used this query (here searching for all titles starting with "s"):

        =# explain analyze select title,score from nodes where title ilike 's%' order by score desc;
                                                              QUERY PLAN
        -----------------------------------------------------------------------------------------------------------------------
         Sort  (cost=64177.92..64581.38 rows=161385 width=25) (actual time=4930.334..5047.321 rows=161264 loops=1)
           Sort Key: score
           Sort Method:  external merge  Disk: 5712kB
           ->  Seq Scan on nodes  (cost=0.00..46630.50 rows=161385 width=25) (actual time=0.611..4464.413 rows=161264 loops=1)
                 Filter: ((title)::text ~~* 's%'::text)
         Total runtime: 5260.791 ms
        (6 rows)

    This was much too slow for using it with autocomplete. With some information from "Using PostgreSQL in Web 2.0 Applications" I was able to improve that with a special index:

        =# create index title_idx on nodes using btree(lower(title) text_pattern_ops);
        =# explain analyze select title,score from nodes where lower(title) like lower('s%') order by score desc limit 10;
                                                                      QUERY PLAN
        ------------------------------------------------------------------------------------------------------------------------------------------
         Limit  (cost=18122.41..18122.43 rows=10 width=25) (actual time=1324.703..1324.708 rows=10 loops=1)
           ->  Sort  (cost=18122.41..18144.60 rows=8876 width=25) (actual time=1324.700..1324.702 rows=10 loops=1)
                 Sort Key: score
                 Sort Method:  top-N heapsort  Memory: 17kB
                 ->  Bitmap Heap Scan on nodes  (cost=243.53..17930.60 rows=8876 width=25) (actual time=96.124..1227.203 rows=161264 loops=1)
                       Filter: (lower((title)::text) ~~ 's%'::text)
                       ->  Bitmap Index Scan on title_idx  (cost=0.00..241.31 rows=8876 width=0) (actual time=90.059..90.059 rows=161264 loops=1)
                             Index Cond: ((lower((title)::text) ~>=~ 's'::text) AND (lower((title)::text) ~<~ 't'::text))
         Total runtime: 1325.085 ms
        (9 rows)

    So this gave me a speedup of a factor of 4. But can this be further improved? What if I want to use '%s%' instead of 's%'? Do I have any chance of getting decent performance with PostgreSQL in that case, too? Or should I rather try a different solution (Lucene? Sphinx?) for implementing my autocomplete feature?
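    For the '%s%' (infix) case, one option worth noting - my suggestion, not something tried in the post - is a trigram index from the pg_trgm module; on reasonably recent PostgreSQL versions (9.1+, where trigram indexes accelerate LIKE), a sketch looks like:

        CREATE EXTENSION pg_trgm;  -- one-time setup per database

        CREATE INDEX title_trgm_idx ON nodes USING gin (lower(title) gin_trgm_ops);

        -- infix matching can now use the index instead of a sequential scan
        SELECT title, score FROM nodes
        WHERE lower(title) LIKE '%s%'
        ORDER BY score DESC LIMIT 10;

    On older versions, pg_trgm only supports its similarity operators rather than LIKE, so an external engine such as Lucene or Sphinx may indeed be the better fit there.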

    Read the article

  • ASP.NET LINQ to Delete Rows

    - by Tyler
    Dim db As New SQLDataContext
        Try
            Dim deleteBoatPics = (From boat In db.Photos Where boat.boatid = id)
            db.Photos.DeleteOnSubmit(deleteBoatPics)
            db.SubmitChanges()
        Catch ex As Exception
        End Try

    I'm getting an error that says:

        Unable to cast object of type 'System.Data.Linq.DataQuery`1[WhiteWaterPhotos.Photo]' to type 'WhiteWaterPhotos.Photo'.

    I have two separate db.SubmitChanges() because when the button is pressed, I have it delete the records from 1 table, and then the next. I'm lost, can someone help me out?

    Read the article

  • What to do after a servicing fails on TFS 2010

    - by Martin Hinshelwood
    What do you do if you run a couple of hotfixes against your TFS 2010 server and you start to see some odd behaviour? A customer of mine encountered that very problem, but they could not just, or at least not easily, go back a version.

    You see, around the time of the TFS 2010 launch this company decided to upgrade their entire 250+ developer team from TFS 2008 to TFS 2010. They encountered a few problems, owing mainly to the size of their TFS deployment and the way they were using TFS. They were not doing anything wrong, but when you have the largest deployment of TFS outside of Microsoft you tend to run into problems that most people will never encounter. We are talking half a terabyte of source control in TFS with over 80 proxy servers. It's certainly the largest deployment I have ever heard of. When they did their upgrade way back in April, they found two major flaws in the product that meant that they had to back out of the upgrade and wait for a couple of hotfixes:

        KB983504  - Hotfix
        KB983578  - Patch
        KB2401992 - Hotfix

    In the time since they got the hotfixes they have run 6 successful trial migrations, but we are not talking minutes or hours here. When you have 400+ GB of data it takes time to copy it around. It takes time to do the upgrade and it takes time to do a backup. Well, last week it was crunch time: with their developers off for Christmas they had a window of opportunity to complete the upgrade. Now these guys are good, but they wanted Northwest Cadence to be available "just in case". They did not expect any problems, as they already had 6 successful trial upgrades.

    The problems surfaced around 20 hours in, after the first set of hotfixes had been applied. The new Team Project Collection, the only thing of importance, had disappeared from the Team Foundation Server Administration console. The collection would not reattach either. It would not even list the new collection as attachable!

    Figure: We know there is a database there, but it does not

    This was a dire situation, as a 20+ hour repeat would have run the customer past their window, with 250+ developers sitting around doing nothing. We tried everything, and then we stumbled upon the command of last resort:

        TFSConfig Recover /ConfigurationDB:SQLServer\InstanceName;TFS_ConfigurationDBName /CollectionDB:SQLServer\instanceName;"Collection Name"

        http://msdn.microsoft.com/en-us/library/ff407077.aspx

    WARNING: Never run this command!

    Now this command does something a little nasty. It assumes that there really should not be anything wrong and sets about fixing it. It ignores any servicing levels in the Team Project Collection database and forcibly applies the latest version of the schema. I am sure you can imagine the types of problems this may cause when the schema is updated leaving the data behind. That said, as far as we could see this collection looked good, and we were even able to find and attach the team project collection to the Configuration database.

    Figure: After attaching the TPC it enters a servicing mode

    After reattaching the team project collection we found the message "Re-Attaching". Well, fair enough, that sounds like something that may need to happen, and after checking that there was disk IO we left it to it. 14+ hours later it was still not done, so the customer raised a priority support call with MSFT and an engineer helped them out.

    Figure: Everything looks good, it is just offline.

    Tip: Did you know that these logs are not represented in the ~/Logs/* folder until they are opened once?

    The engineer dug around a bit and listened to our situation. He knew that we had run the dreaded "TFSConfig Recover", but was not fazed.

    Figure: This message looks suspiciously like the wrong servicing version

    As it turns out, the servicing version was slightly out of sync with the schema:

        KB          Schema   Successful
        KB983504    341      Yes
        KB983578    344      sort of
        KB2401992   360      nope

    Figure: KB, Schema table with notation to its success

    The Schema version above represents the final end-of-run version for that hotfix or patch.

    The only way forward: the problem was that the version was somewhere between 341 and 344. This is not a nice place to be, and the engineer gave us the only way forward: removing the servicing number from the database so that the re-attach process would apply the latest schema. If this sounds a little like the "TFSConfig Recover" command, then you are exactly right.

    Figure: Sneakily changing that 3 to a 1 should do the trick

    Figure: Changing the status and dropping the version should do it

    Now that we had done that, we were able to safely reattach and enable the Team Project Collection.

    Figure: The TPC is now all attached and running

    You may think that this is the end of the story, but it is not. After a while of mulling and seeking expert advice, we came to the opinion that the database was, for want of a better term, "hosed". There could well be orphaned data in there, and the likelihood that we would have problems later down the line is pretty high. We contacted the customer back and made them aware that in all likelihood the repaired database was more like a "cut and shut" than anything else, and at the first sign of trouble later down the line it was likely to split in two. So with 40+ hours invested in getting this new database ready, the customer threw it away and started again. What would you do? Would you take the "cut and shut" to production and hope for the best?

    Read the article

  • When doing a Schema Export with hbm2ddl, is there a way to specify that you DO NOT want Nullable Foreign Keys?

    - by Jon Erickson
    The DDL that is being created is putting all of my many-to-many associations into 1 table, but I actually want each many-to-many association in its own table (for other reasons). Right now hbm2ddl is creating this table (only Table1Key OR Table2Key OR Table3Key should be filled out for any given record, causing this table to have nullable foreign keys):

        +-----------+
        | xRef      |
        +-----------+
        | Table1Key |
        | Table2Key |
        | Table3Key |
        | RiskKey   |
        +-----------+

    I want hbm2ddl to create the following 3 tables, so that there are no nullable foreign keys:

        +-----------+   +-----------+   +-----------+
        | xRef1     |   | xRef2     |   | xRef3     |
        +-----------+   +-----------+   +-----------+
        | Table1Key |   | Table2Key |   | Table3Key |
        | RiskKey   |   | RiskKey   |   | RiskKey   |
        +-----------+   +-----------+   +-----------+
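    For reference, the DDL being asked for, written by hand as a sketch (key types and referenced table names are assumptions, not hbm2ddl output), would look like:

        CREATE TABLE xRef1 (
            Table1Key INT NOT NULL,
            RiskKey   INT NOT NULL,
            PRIMARY KEY (Table1Key, RiskKey),
            FOREIGN KEY (Table1Key) REFERENCES Table1 (Table1Key),
            FOREIGN KEY (RiskKey)   REFERENCES Risk (RiskKey)
        );
        -- xRef2 and xRef3 are identical apart from the referenced table,
        -- and neither column is nullable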

    Read the article

  • Good object/DB set-up for CMS-esque app for managing content and user permissions?

    - by sah302
    Hi all, so I am writing a big CMS-esque app to allow users to manage web content through web applications. I've got a pretty good db-driven user permission system going, but am having trouble coming up with a good way to handle content groups and pages; I've got a couple of options and am not sure which one to take. Furthermore, I am not sure how to handle static page updates that have no 'widgets' in them.

    My current set-up for permissions is this:

        Objects: User, UserGroup, UserUserGroup, UserGroupType
        Standard many-to-many relationship: User -> UserUserGroup <- UserGroup

    Each UserGroup has a UserGroupType, which could be anything from Title or Department to PermissionGroup. PermissionGroup manages the permissions. Right now, on a per-page basis, I check permissions based on their PermissionGroups. So for a page which has CMS features for a news widget, I check for permission groups of "Site Admin" and "News Admin".

    Now the issue I am coming to is that the site has many different departments involved. No problem, I think: I can just have an EntityContentGroup so any widget app can be used for any department. So for my HR department, each of their news items would be in the EntityContentGroup with the news item ID and a content group of "HR" or "HR News". But maybe this isn't the most efficient way to go about it? I don't want to put the content group simply as a NewsItemType, because some news items could apply to multiple areas, so I want to be able to assign them to as many areas as I want. Likewise, all of my widget apps have this, which is why I chose EntityContentGroup and not just NewsItemContentGroup.

    I was also thinking that instead of doing a content group I could do a Page object that says which page some entity should be on. It seems almost like the same thing, but would I want to use Page for something else? I was thinking Page would be used for static pages with no widgets: a simple rich text editor can edit the content of that page, and I save that item to a page??

    And then, instead of doing a page-level check for UserGroup permissions, would it be better to associate a UserGroup to a ContentGroup, and then, depending on which ContentGroup's content is displayed on the page, determine the permissions through that relationship? Is that better? I am not sure at this point. I guess I am just getting a tad overwhelmed, as this is the largest app in scope and size that I have ever written. What is the best approach for this based on my current user permission set-up?
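    To make the tagging idea concrete, here is a minimal sketch of the generic link table described above (all column names and types are my assumptions, not the poster's schema):

        CREATE TABLE EntityContentGroup (
            EntityType     VARCHAR(50) NOT NULL,  -- e.g. 'News', 'Event': which widget the row tags
            EntityId       INT         NOT NULL,  -- id of the news item, event, etc.
            ContentGroupId INT         NOT NULL,  -- e.g. the 'HR' or 'HR News' group
            PRIMARY KEY (EntityType, EntityId, ContentGroupId)
        );

    One row per (entity, group) pair lets a single news item belong to any number of departments.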

    Read the article

  • Can Entity Framework be used for the purpose of entity/schema definition at application runtime?

    - by Kabeer
    Hello. Can 'Entity Framework' be used for the purpose of entity definition at application runtime? Ok, to make it simple, here is what I want to achieve: My application is a product. I should be able to define entities at runtime on the basis of inputs gathered from an 'authoring' user (in effect this means 'model first' approach). These entities are of course persistable. Further, after having defined the entities and their relationships, I should be able to make complex queries across them for many reasons, including reports. Is the above possible and how? So far what I have realized is that there is a dependency on Visual Studio.

    Read the article

  • How to drop null values with a native Oracle XML DB Web Service?

    - by gfjr
    I am using native Oracle XML DB Web Services (using a PL/SQL function with a web service). I want to drop null values (put nothing in the output: no XML element). It's working with Oracle 11.2.0.1.0 but not with Oracle 11.2.0.3.0. Just to clarify... I don't want to consume a web service with PL/SQL, I want to publish my PL/SQL packages/procedures/functions as a web service! Hope someone can help me. Thank you. In this example the column "country" is null.

    Oracle 11.2.0.1.0 (this is what I want):

        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <GET_PERSONOutput xmlns="http://xmlns.oracle.com/orawsv/TESTSTUFF/GET_PERSON">
              <RETURN>
                <PERSON>
                  <PERSON_ID>3</PERSON_ID>
                  <FIRST_NAME>Harry</FIRST_NAME>
                  <LAST_NAME>Potter</LAST_NAME>
                </PERSON>
              </RETURN>
            </GET_PERSONOutput>
          </soap:Body>
        </soap:Envelope>

    Oracle 11.2.0.3.0:

        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <GET_PERSONOutput xmlns="http://xmlns.oracle.com/orawsv/TESTSTUFF/GET_PERSON">
              <RETURN>
                <PERSON>
                  <PERSON_ID>3</PERSON_ID>
                  <FIRST_NAME>Harry</FIRST_NAME>
                  <LAST_NAME>Potter</LAST_NAME>
                  <COUNTRY/>
                </PERSON>
              </RETURN>
            </GET_PERSONOutput>
          </soap:Body>
        </soap:Envelope>

    Read the article

  • Preventing Netbeans JAXB generation trashing classes

    - by Mac
    I'm developing a SOAP service using JAX-WS and JAXB under Netbeans 6.8, and getting a little frustrated with Netbeans trashing my work every time the XSD schema my JAXB bindings are based upon changes. To elaborate, the IDE automatically generates classes bound to the schema, which can then be (un)marshalled from/to XML using JAXB. To these classes I've added extra methods to (for example) convert to and from separate classes designed to be persisted to database with JPA. The problem is that whenever the schema changes and I rebuild, these classes are regenerated, and all my custom methods are deleted. I can manually replace them by copy-pasting from a backup file, but that is rather time-consuming and tedious. As I'm using an iterative design approach, the schema is changing rather frequently and I'm wasting an awful lot of time whenever it does, simply to reinstate my previous code. While the IDE automatically regenerating the JAXB-bound classes is entirely reasonable and I don't mean to imply otherwise, I was wondering if anyone had any bright ideas as to how to prevent my extra work having to be manually reinstated every time my schema changes?

    Read the article

  • Long-running Database Query

    - by JamesMLV
    I have a long-running SQL Server 2005 query that I have been hoping to optimize. When I look at the actual execution plan, it says a Clustered Index Seek has 66% of the cost. Execution plan snippet:

        <RelOp AvgRowSize="31" EstimateCPU="0.0113754" EstimateIO="0.0609028"
               EstimateRebinds="0" EstimateRewinds="0" EstimateRows="10198.5"
               LogicalOp="Clustered Index Seek" NodeId="16" Parallel="false"
               PhysicalOp="Clustered Index Seek" EstimatedTotalSubtreeCost="0.0722782">
          <OutputList>
            <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="quoteDate" />
            <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="price" />
            <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
          </OutputList>
          <RunTimeInformation>
            <RunTimeCountersPerThread Thread="0" ActualRows="1067" ActualEndOfScans="1" ActualExecutions="1" />
          </RunTimeInformation>
          <IndexScan Ordered="true" ScanDirection="FORWARD" ForcedIndex="false" NoExpandHint="false">
            <DefinedValues>
              <DefinedValue>
                <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="quoteDate" />
              </DefinedValue>
              <DefinedValue>
                <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="price" />
              </DefinedValue>
              <DefinedValue>
                <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
              </DefinedValue>
            </DefinedValues>
            <Object Database="[wf_1]" Schema="[dbo]" Table="[Indices]"
                    Index="[_dta_index_Indices_14_320720195__K5_K2_K1_3]" Alias="[I]" />
            <SeekPredicates>
              <SeekPredicate>
                <Prefix ScanType="EQ">
                  <RangeColumns>
                    <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]"
                                     Column="HedgeProduct" ComputedColumn="true" />
                  </RangeColumns>
                  <RangeExpressions>
                    <ScalarOperator ScalarString="(1)">
                      <Const ConstValue="(1)" />
                    </ScalarOperator>
                  </RangeExpressions>
                </Prefix>
                <StartRange ScanType="GE">
                  <RangeColumns>
                    <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
                  </RangeColumns>
                  <RangeExpressions>
                    <ScalarOperator ScalarString="[@StartMonth]">
                      <Identifier>
                        <ColumnReference Column="@StartMonth" />
                      </Identifier>
                    </ScalarOperator>
                  </RangeExpressions>
                </StartRange>
                <EndRange ScanType="LE">
                  <RangeColumns>
                    <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
                  </RangeColumns>
                  <RangeExpressions>
                    <ScalarOperator ScalarString="[@EndMonth]">
                      <Identifier>
                        <ColumnReference Column="@EndMonth" />
                      </Identifier>
                    </ScalarOperator>
                  </RangeExpressions>
                </EndRange>
              </SeekPredicate>
            </SeekPredicates>
          </IndexScan>
        </RelOp>

    From this, does anyone see an obvious problem that would be causing this to take so long? Here is the query:

        (SELECT quotedate, tenure, price, ActualVolume, HedgePortfolioValue,
                Price AS UnhedgedPrice,
                ((ActualVolume*Price - HedgePortfolioValue)/ActualVolume) AS HedgedPrice
         FROM (
             SELECT [quoteDate], [price], tenure,
                    isnull(wf_1.[Risks].[HedgePortValueAsOfDate2](1, tenureMonth, quotedate, price), 0) as HedgePortfolioValue,
                    [TotalOperatingGasVolume] as ActualVolume
             FROM [wf_1].[dbo].[Indices] I
             inner join (
                 SELECT DISTINCT tenureMonth
                 FROM [wf_1].[Risks].[KnowRiskTrades]
                 WHERE HedgeProduct = 1
                   AND portfolio <> 'Natural Gas Hedge Transactions'
             ) B ON I.tenure = B.tenureMonth
             inner join (
                 SELECT [Month], [TotalOperatingGasVolume]
                 FROM [wf_1].[Risks].[ActualGasVolumes]
             ) C ON C.[Month] = B.tenureMonth
             WHERE HedgeProduct = 1
               AND quoteDate >= dateadd(day, -3*365, tenureMonth)
               AND quoteDate <= dateadd(day, -3, tenureMonth)
         ) A
        )
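    Not part of the original post, but one hedged observation: scalar UDFs such as HedgePortValueAsOfDate2 above run once per row, and their cost is largely invisible in SQL Server 2005 execution plans, so a cheap-looking plan can hide most of the runtime. A quick way to measure where the time really goes:

        SET STATISTICS IO ON;    -- logical reads per table
        SET STATISTICS TIME ON;  -- CPU vs elapsed time
        -- run the query, then check the Messages tab; heavy CPU with modest
        -- reads usually points at the scalar function, not the index seek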

    Read the article

  • [EF + Oracle] Intro

    - by JTorrecilla
    Prologue: Work and personal life have kept me busy, and now that I am starting to get some free time back, I have decided to start a series about Entity Framework with Oracle. A while ago I had my first experience with EF and Oracle, trying both Oracle 10g Express and Oracle 10g, with the same result each time: it didn't work. Now I have downloaded Oracle 11g to test again.

    Tools: To start using EF with Oracle we need the following:

        1. Visual Studio 2010 (not the Express Edition)
        2. Oracle 11g
        3. Oracle Driver for EF (ODAC)

    Intro: For people who are starting with EF development, I recommend taking a look at Unai Zorrilla's blog; the posts are written in Spanish, but they are great! For this series, we are going to define the DB from the Oracle administrator. For that we need to follow these steps:

        1. Create a user with a password. In my example the user will be Jtorrecilla.
        2. Create a tablespace.
        3. Define some example tables.

    (Image 1)

    When we have created the DB, we are going to start a new project in VS 2010. I will start a C# project. To start with EF, we need to add a new object to our project: an "ADO.NET Entity Data Model" (Image 2). The next step will be to indicate that our model will be based on an existing DB, and to indicate the connection string (Images 3 and 4). Once we have selected the connection string, we will need to indicate that "sensitive" data will be saved in the connection (Image 5), and in the next step we are going to select the DB objects to use in the project (Image 6). At the end of the selection, we press the Finish button, and an EDMX file is generated and added to our solution; in the IDE the DB schema will appear with the selected tables and relations (Image 7).

    An Entity is composed of a set of properties (each matching a column of the table in the DB) and navigation properties that represent any relations with other Entities.

    Finally: With this chapter we have installed the environment, defined a DB and configured the solution to start using EF with Oracle. In the next chapter we are going to see what an Entity is and how it works. I hope you enjoy this series!
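    As an aside, steps 1 and 2 can be scripted; a minimal sketch in Oracle SQL (the tablespace name, datafile name, size and password are all my own placeholder choices, not from the post):

        CREATE TABLESPACE jtorrecilla_data
            DATAFILE 'jtorrecilla_data.dbf' SIZE 50M AUTOEXTEND ON;

        CREATE USER jtorrecilla IDENTIFIED BY "changeMe1"
            DEFAULT TABLESPACE jtorrecilla_data
            QUOTA UNLIMITED ON jtorrecilla_data;

        GRANT CONNECT, RESOURCE TO jtorrecilla;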

    Read the article

  • XMLHttpRequest not working, trying to test database connection [closed]

    - by Frederick Marcoux
    I'm currently creating my own CMS for personal use, but I'm blocked at a piece of code. I'm trying to write an installation script, but the AJAX request to test whether the database connection works doesn't work... Here's my JS code:

        function testDB() {
            "use strict";
            var host = document.getElementById('host').value;
            var username = document.getElementById('username').value;
            var password = document.getElementById('password').value;
            var db = document.getElementById('db_name').value;
            var xmlhttp = new XMLHttpRequest();
            var url = "test_db.php";
            var params = "host="+host+"&username="+username+"&password="+password+"&db="+db;
            xmlhttp.open("POST", url, true);
            xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
            xmlhttp.setRequestHeader("Content-length", params.length);
            xmlhttp.setRequestHeader("Connection", "close");
            xmlhttp.send(params);
            $('#loader').removeAttr('style');
            if (xmlhttp.responseText !== '') {
                if (xmlhttp.readyState===4 && xmlhttp.status===200) {
                    $('#next').removeAttr('disabled');
                    $('#test').attr('disabled', 'disabled');
                    $('#test').text('Connection Successful!');
                    $('#test').addClass('btn-success');
                    $('#login').addClass('success');
                    $('#login1').addClass('success');
                    $('#db').addClass('success');
                    $('#loader').attr('style', 'display: none;');
                } else {
                    $('#next').attr('disabled', 'disabled');
                    $('#test').removeClass('btn-success');
                    $('#test').removeAttr('disabled');
                    $('#test').text('Test Connection');
                    $('#login').removeClass('success');
                    $('#login1').removeClass('success');
                    $('#db').removeClass('success');
                    $('#loader').attr('style', 'display: none;');
                }
            } else {
                $('#next').attr('disabled', 'disabled');
                $('#next').attr('disabled', 'disabled');
                $('#test').removeClass('btn-success');
                $('#test').removeAttr('disabled');
                $('#test').text('Test Connection');
                $('#login').removeClass('success');
                $('#login1').removeClass('success');
                $('#db').removeClass('success');
                $('#loader').attr('style', 'display: none;');
            }
        }

    And here's my PHP code:

        <?php
        $link = mysql_connect($_POST['host'], $_POST['username'], $_POST['password']);
        if (!$link) {
            echo '';
        } else {
            if (mysql_select_db($_POST['db'])) {
                echo 'Connection Successful!';
            } else {
                echo '';
            }
        }
        mysql_close($link);
        ?>

    I don't know why it doesn't work, but I've tried with jQuery $.ajax, $.get and $.post and nothing works...

    Read the article

  • Problem in generation of custom classes at web service client

    - by user443324
    I have a web service which receives a custom object and returns another custom object. It can be deployed successfully on GlassFish or JBoss.

        @WebMethod(operationName = "providerRQ")
        @WebResult(name = "BookingInfoResponse",
                   targetNamespace = "http://tlonewayresprovidrs.jaxbutil.rakes.nhst.com/")
        public com.nhst.rakes.jaxbutil.tlonewayresprovidrs.BookingInfoResponse providerRQ(
                @WebParam(name = "BookingInfoRequest",
                          targetNamespace = "http://tlonewayresprovidrq.jaxbutil.rakes.nhst.com/")
                com.nhst.rakes.jaxbutil.tlonewayresprovidrq.BookingInfoRequest BookingInfoRequest) {
            com.nhst.rakes.jaxbutil.tlonewayresprovidrs.BookingInfoResponse BookingInfoResponse =
                new com.nhst.rakes.jaxbutil.tlonewayresprovidrs.BookingInfoResponse();
            return BookingInfoResponse;
        }

    But when I create a client for this web service, two copies each of BookingInfoRequest and BookingInfoResponse are generated, even though I need only one of each. An error is then returned saying that multiple classes with the same name are not possible. Here is the WSDL:

        <?xml version='1.0' encoding='UTF-8'?>
        <!-- Published by JAX-WS RI at http://jax-ws.dev.java.net. RI's version is JAX-WS RI 2.2.1-hudson-28-. -->
        <!-- Generated by JAX-WS RI at http://jax-ws.dev.java.net. RI's version is JAX-WS RI 2.2.1-hudson-28-. -->
        <definitions xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
                     xmlns:wsp="http://www.w3.org/ns/ws-policy"
                     xmlns:wsp1_2="http://schemas.xmlsoap.org/ws/2004/09/policy"
                     xmlns:wsam="http://www.w3.org/2007/05/addressing/metadata"
                     xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
                     xmlns:tns="http://demo/"
                     xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                     xmlns="http://schemas.xmlsoap.org/wsdl/"
                     targetNamespace="http://demo/" name="DemoJAXBParamService">
          <wsp:Policy wsu:Id="DemoJAXBParamPortBindingPolicy">
            <ns1:OptimizedMimeSerialization xmlns:ns1="http://schemas.xmlsoap.org/ws/2004/09/policy/optimizedmimeserialization" />
          </wsp:Policy>
          <types>
            <xsd:schema>
              <xsd:import namespace="http://tlonewayresprovidrs.jaxbutil.rakes.nhst.com/"
                          schemaLocation="http://localhost:31133/DemoJAXBParamService/DemoJAXBParamService?xsd=1" />
            </xsd:schema>
            <xsd:schema>
              <xsd:import namespace="http://tlonewayresprovidrs.jaxbutil.rakes.nhst.com"
                          schemaLocation="http://localhost:31133/DemoJAXBParamService/DemoJAXBParamService?xsd=2" />
            </xsd:schema>
            <xsd:schema>
              <xsd:import namespace="http://tlonewayresprovidrq.jaxbutil.rakes.nhst.com/"
                          schemaLocation="http://localhost:31133/DemoJAXBParamService/DemoJAXBParamService?xsd=3" />
            </xsd:schema>
            <xsd:schema>
              <xsd:import namespace="http://tlonewayresprovidrq.jaxbutil.rakes.nhst.com"
                          schemaLocation="http://localhost:31133/DemoJAXBParamService/DemoJAXBParamService?xsd=4" />
            </xsd:schema>
            <xsd:schema>
              <xsd:import namespace="http://demo/"
                          schemaLocation="http://localhost:31133/DemoJAXBParamService/DemoJAXBParamService?xsd=5" />
            </xsd:schema>
          </types>
          <message name="providerRQ">
            <part name="parameters" element="tns:providerRQ" />
          </message>
          <message name="providerRQResponse">
            <part name="parameters" element="tns:providerRQResponse" />
          </message>
          <portType name="DemoJAXBParam">
            <operation name="providerRQ">
              <input wsam:Action="http://demo/DemoJAXBParam/providerRQRequest" message="tns:providerRQ" />
              <output wsam:Action="http://demo/DemoJAXBParam/providerRQResponse" message="tns:providerRQResponse" />
            </operation>
          </portType>
          <binding name="DemoJAXBParamPortBinding" type="tns:DemoJAXBParam">
            <wsp:PolicyReference URI="#DemoJAXBParamPortBindingPolicy" />
            <soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="document" />
            <operation name="providerRQ">
              <soap:operation soapAction="" />
              <input>
                <soap:body use="literal" />
              </input>
              <output>
                <soap:body use="literal" />
              </output>
            </operation>
          </binding>
          <service name="DemoJAXBParamService">
            <port name="DemoJAXBParamPort" binding="tns:DemoJAXBParamPortBinding">
              <soap:address location="http://localhost:31133/DemoJAXBParamService/DemoJAXBParamService" />
            </port>
          </service>
        </definitions>

    So, I want to know how to generate only one copy of each class (I don't know why two are generated at the client side?). Please help me to move in the right direction.

    Read the article

  • DB Design Pattern - Many to many classification / categorised tagging.

    - by Robin Day
    I have an existing database design that stores job vacancies. The Vacancy table has a number of fixed fields across all clients, such as "Title", "Description" and "Salary range".

    There is an EAV design for "custom" fields that the clients can set up themselves, such as "Manager Name" or "Working Hours". The field names are stored in a ClientText table and the data is stored in a VacancyClientText table with VacancyId, ClientTextId and Value.

    Lastly there is a many-to-many EAV design for custom tagging / categorising the vacancies with things such as the locations/offices the vacancy is in, or a list of skills required. This is stored as a ClientCategory table listing the types of tag ("Locations", "Skills"), a ClientCategoryItem table listing the valid values for each category (e.g. "London, Paris, New York, Rome" or "C#, VB, PHP, Python"), and finally a VacancyClientCategoryItem table with VacancyId and ClientCategoryItemId for each of the selected items for the vacancy. There are no limits to the number of custom fields or custom categories that a client can add.

    I am now designing a new system that is very similar to the existing system; however, I have the ability to restrict the number of custom fields a client can have, and it's being built from scratch so I have no legacy issues to deal with. For the custom fields my solution is simple: I have 5 additional columns on the Vacancy table called CustomField1-5. This removes one of the EAV designs.

    It is the tagging / categorising design that I am struggling with. If I limit a client to having 5 categories / types of tag, should I create 5 tables listing the possible values, CustomCategoryItems1-5, and then an additional 5 many-to-many tables, VacancyCustomCategoryItem1-5? This would result in 10 tables performing the same storage as the three tables in the existing system. Also, should (heaven forbid) the requirements change so that I need 6 custom categories rather than 5, this would result in a lot of code change.

    Therefore, can anyone suggest any DB design patterns that would be more suitable for storing such data? I'm happy to stick with the EAV approach; however, the existing system has come across all the usual performance issues and complex queries associated with such a design. Any advice / suggestions are much appreciated. The DBMS used is SQL Server 2005; however, 2008 is an option if required for any particular pattern.
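    One middle ground (my sketch, not part of the question) keeps the three-table shape while enforcing the five-slot limit with a constraint rather than with a table per category, so adding a sixth category later is a one-line change:

        CREATE TABLE CustomCategoryItem (
            CustomCategoryItemId INT IDENTITY(1,1) PRIMARY KEY,
            CategorySlot         TINYINT NOT NULL
                CHECK (CategorySlot BETWEEN 1 AND 5),  -- the per-client limit lives here
            ItemValue            NVARCHAR(100) NOT NULL
        );

        CREATE TABLE VacancyCustomCategoryItem (
            VacancyId            INT NOT NULL,
            CustomCategoryItemId INT NOT NULL,
            PRIMARY KEY (VacancyId, CustomCategoryItemId)
        );

    Both tables use SQL Server 2005 syntax; the names and types are illustrative only.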

    Read the article
