Search Results

Search found 17194 results on 688 pages for 'document databases'.

  • What is the best way to build a database from a MS Word document?

    - by Jayron Soares
    Please advise me on how to approach this problem: I have a sequential list of metadata in an MS Word document. The basic idea is to write a Python algorithm that iterates over the records and extracts each field (starting with the name of the process) so the data can be queried from a database. Example record:

    Process: Process Walker (1965)
    Exact reference: Walker Process Equipment, Inc. v. Food Machinery Corp.
    Link: http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=US&vol=382&invol=
    Type of procedure: Certiorari to the United States Court of Appeals for the Seventh Circuit.
    Parties: Walker Process Equipment, Inc.
    Sector: Systems is ...
    Start Date: Argued October 12-13, 1965
    Summary: Food Machinery Company initiated a process to stop or slow the entry of competitors through the use of a patent obtained by fraud. The case concerned a patent on "knee action swing diffusers" used in aeration equipment for sewage treatment systems, and the question was whether "the maintenance and enforcement of a patent obtained by fraud before the patent office" may be a basis for antitrust punishment.
    Report of the evolution process: petitioner, in answer to respond...
    Importance: a) First case which established an analysis for the diagnosis of dispute…

    There are about 200 pages of records like this one. My plan is to implement a Python algorithm that breaks this sequence into individual records and stores them in a web-accessible database (an open source application that I'm looking for) so that they can be consulted freely.
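
    A minimal sketch of the kind of parser this could start from, assuming the Word file has first been exported to plain text (python-docx could read the .docx directly) and that every record begins with a "Process:" label; the field labels come from the example record above, and the filename is hypothetical:

    import re

    # Field labels as they appear in the example record above.
    FIELDS = ["Process", "Exact reference", "Link", "Type of procedure", "Parties",
              "Sector", "Start Date", "Summary", "Report of the evolution process",
              "Importance"]

    LABEL = re.compile(r'(' + '|'.join(re.escape(f) for f in FIELDS) + r'):\s*')

    def parse_records(text):
        """Split the exported text into one dict per record."""
        records = []
        # Each record is assumed to start with a 'Process:' line.
        for chunk in re.split(r'(?=^Process:)', text, flags=re.MULTILINE):
            if not chunk.strip():
                continue
            parts = LABEL.split(chunk)[1:]   # [label, value, label, value, ...]
            records.append({lab: val.strip() for lab, val in zip(parts[::2], parts[1::2])})
        return records

    if __name__ == "__main__":
        with open("metadata.txt", encoding="utf-8") as fh:   # hypothetical export
            for rec in parse_records(fh.read()):
                print(rec.get("Process"))

    Each resulting dict can then be inserted into whichever open-source database is chosen; SQLite would be the lightest way to prototype before moving to a web-facing one.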

    Read the article

  • SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

    - by Pinal Dave
    These days, data integration is often a key aspect of business success. Business analysts need integrated data from various sources, such as relational databases and cloud CRMs, to make correct and successful decisions. There are various data integration solutions on the market, and today I will tell you about one of them – Skyvia.

    Skyvia is a cloud data integration service that integrates data across cloud CRMs and different relational databases. It is a completely online solution and requires nothing except a browser. Skyvia provides ETL tools for data import, export, replication, and synchronization for SQL Server, other databases, and cloud CRMs. You can use Skyvia's data import tools to load data from various sources into SQL Server (and SQL Azure). Skyvia supports cloud CRMs such as Salesforce and Microsoft Dynamics CRM, and databases such as MySQL and PostgreSQL. You can even migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally, Skyvia supports importing CSV files, either uploaded manually or stored on cloud file storage services such as Dropbox, Box, and Google Drive, or on FTP servers.

    When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool, you can synchronize SQL Server data with other databases and cloud CRMs. After the first synchronization, Skyvia tracks data changes in the synchronized data storages; in SQL Server (and other relational) databases it creates additional tracking tables and triggers, which allows it to synchronize only the changed data. Skyvia maps records to each other by their primary key values, so it does not require different sources to have the same primary key structure; it can match the corresponding records without adding columns or changing the data structure. The only requirement for synchronization is that primary keys must be autogenerated.

    With Skyvia it is not necessary for data to have the same structure in the integrated data storages. Skyvia's mapping mechanisms allow synchronizing data with completely different structures, with support for complex mathematical and string expressions, lookups, and so on. You may use data splitting – loading data from a single CSV file or source table into multiple related target tables – or load data from several source CSV files or tables into several related target tables. In each case Skyvia preserves data relations and builds the corresponding relations between the target data automatically.

    When you work with cloud CRM data often, the native CRM reporting and analysis tools may not be enough, while a vast set of professional data analysis and reporting tools is available for SQL Server. With Skyvia you can quickly copy your cloud CRM data into a SQL Server database and apply those SQL Server tools to it. For this you can use Skyvia's data replication tools, which copy cloud CRM data to SQL Server or other databases without any mapping customization: you just specify the columns to copy data from, and the target database tables are created automatically. Skyvia offers powerful filtering settings to replicate only the records you need.

    Skyvia also provides the capability to export data from SQL Server (including SQL Azure), other databases, and cloud CRMs to CSV files. These files can be downloaded manually or loaded to cloud file storages or an FTP server. You can use export, for example, to back up SQL Azure data to Dropbox. Any data integration operation can be scheduled for automatic execution, so you can automate your SQL Azure data backup or data synchronization: configure it once, schedule it, and benefit from automatic data integration with Skyvia.

    Currently, registering for and using Skyvia is completely free, so you can try it yourself and find out whether its data migration and integration tools suit you. Visit this link to register on Skyvia: https://app.skyvia.com/register

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
    Tagged: Cloud Computing

    Read the article

  • Details of 5GB and 50GB SQL Azure databases have now been released, along with new price points

    - by Eric Nelson
    Like many others signed up to the Windows Azure Platform, I received an email overnight detailing the upcoming database size changes for SQL Azure. I know from our work with early adopters over the last 12 months that the 1 GB and 10 GB limits were sometimes seen as blockers, especially when migrating existing applications to SQL Azure. On June 28th 2010, the size limits will be increased:

    SQL Azure Web Edition database: from 1 GB to 5 GB
    SQL Azure Business Edition database: from 10 GB to 50 GB

    Along with these changes come new price points, including the option to increase in increments of 10 GB:

    Web Edition:
    Up to 1 GB relational database = $9.99 / month
    Up to 5 GB relational database = $49.95 / month

    Business Edition:
    Up to 10 GB relational database = $99.99 / month
    Up to 20 GB relational database = $199.98 / month
    Up to 30 GB relational database = $299.97 / month
    Up to 40 GB relational database = $399.96 / month
    Up to 50 GB relational database = $499.95 / month

    Check out the full SQL Azure pricing.

    Related Links:
    http://ukazure.ning.com - UK community site
    Getting started with the Windows Azure Platform

    Read the article

  • What is the best way to work with large databases in Java depending on context?

    - by Singletony
    Hi guys. We are trying to figure out the best practice for working with very large DBs in Java. What we do is a kind of BI, i.e. analyzing very large DBs and using them to create intermediate DBs that represent intelligent knowledge of the originals. We are currently using JDBC and just performing queries using a ResultSet. As more and more data is created, we are wondering whether more appropriate ways exist for parsing and manipulating these large DBs:

    - We need to support 'chunk' manipulation rather than an entire DB at once (e.g. LIMIT through JDBC gives very poor performance).
    - We do not need to be constantly connected, since we are just pulling results and creating new tables of our own.
    - We want to understand JDBC alternatives, with respect to advantages and disadvantages.

    Whether you think JDBC is the way to go or not, what are the best practices to follow depending on context (e.g. for large DBs queried in chunks)? If my question is not clear, I will gladly elaborate! THANK YOU SO MUCH!
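
    Not a definitive answer, but the standard JDBC pattern for the 'chunk' requirement is a forward-only, read-only statement with a fetch-size hint, so the driver streams rows instead of materializing the whole result set. A minimal sketch; the connection details and table are made up, and exactly how the hint is honored is driver-dependent:

    import java.sql.*;

    public class ChunkedRead {
        public static void main(String[] args) throws SQLException {
            // Placeholder URL and credentials.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:postgresql://host/bigdb", "user", "pass")) {
                con.setAutoCommit(false);              // some drivers require this for streaming
                try (Statement st = con.createStatement(
                        ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
                    st.setFetchSize(10_000);           // hint: fetch rows in chunks, not all at once
                    try (ResultSet rs = st.executeQuery("SELECT id, payload FROM big_table")) {
                        while (rs.next()) {
                            process(rs.getLong("id"), rs.getString("payload"));
                        }
                    }
                }
            }
        }

        static void process(long id, String payload) { /* build the intermediate DB here */ }
    }

    When chunking has to be explicit (for resumable, disconnected jobs), keyset pagination (WHERE id > ? ORDER BY id LIMIT n, carrying the last seen id between queries) usually scales far better than LIMIT with a growing OFFSET.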

    Read the article

  • Does the deprecation of mysql_* functions in PHP carry over to other databases (MSSQL)?

    - by MobyD
    I'm not talking about MySQL; I'm talking about Microsoft SQL Server. I've been aware of PDO for quite some time now; the standard mysql_* functions are dangerous and should be avoided: http://php.net/manual/en/function.mysql-connect.php But what about the mssql_* functions in PHP? They are, for most purposes, an identical set of functions, but the PHP page describing mssql_* carries no deprecation warning: http://us.php.net/manual/en/function.mssql-connect.php There are PDO drivers available for MSSQL, but they aren't as readily available or as widely used as the MySQL drivers. Ideally, it looks like I should get them working and move from mssql_* to PDO as I have with MySQL, but is it as big a priority? Is there some hidden safety to MSSQL that makes it exempt from all of the mysql_* hatred as of late? Or is its obscurity as a backend the only reason there hasn't been more PDO encouragement?
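
    For what it's worth, a PDO connection to SQL Server is only a DSN away once a driver is installed. A minimal sketch assuming Microsoft's pdo_sqlsrv extension (pdo_dblib is the older alternative on Linux); the server, database, credentials, and table are illustrative:

    <?php
    // Hypothetical connection details; requires the pdo_sqlsrv extension.
    $pdo = new PDO("sqlsrv:Server=myhost;Database=mydb", "user", "secret");
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    // Prepared statements give the injection safety the old mssql_*/mysql_* style lacked.
    $stmt = $pdo->prepare("SELECT id, name FROM customers WHERE name = ?");
    $stmt->execute(["Smith"]);
    foreach ($stmt as $row) {
        echo $row['name'], "\n";
    }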

    Read the article

  • How should I handle using two databases with a legacy PHP application?

    - by Toby Allen
    I have a legacy PHP application that was written in 2004 and uses MSSQL as its database backend. At this stage MSSQL is still supported by PHP, but only just, via a Microsoft driver. I have looked at converting to MySQL via automated tools, which work quite well, but I have quite complex views that would need a lot of individual work to convert, and I don't have a great deal of time for that. Many tools I wish to use, and frameworks I would like to move the application to, don't support MSSQL, so I was considering adding new features using a new MySQL database. Does anyone have opinions on the pros and cons of using two separate database backends in a single application?
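
    Mechanically, running two backends side by side is mostly a matter of keeping two connection handles. A minimal sketch; the DSNs, credentials, and table names are made up:

    <?php
    // Legacy data stays in MSSQL, new features go to MySQL.
    $mssql = new PDO("sqlsrv:Server=legacyhost;Database=legacy", "user", "pass");
    $mysql = new PDO("mysql:host=newhost;dbname=features;charset=utf8mb4", "user", "pass");

    // Reads against the legacy schema...
    $order = $mssql->query("SELECT TOP 1 * FROM orders ORDER BY id DESC")->fetch();

    // ...feed writes on the new side. Note that no join or transaction can
    // span the two handles; that is the main cost of this kind of setup.
    $stmt = $mysql->prepare("INSERT INTO order_audit (order_id, note) VALUES (?, ?)");
    $stmt->execute([$order['id'], 'mirrored from legacy']);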

    Read the article

  • Oracle B2B - Synchronous Request Reply

    - by cdwright
    Introduction

    So first off, let me say I didn't create this demo (although I did modify it some); I got it from a member of the B2B development technical staff. Since it came with only a simple readme file, I thought I would take some time and write a more detailed explanation of how it works. Beginning with Oracle SOA Suite PS5 (11.1.1.6), B2B supports synchronous request/reply over HTTP using the b2b/syncreceiver servlet. I'm attaching the demo to this blog. It includes a SOA composite archive that needs to be deployed using JDeveloper, a B2B repository with two agreements that need to be deployed using the B2B console, and a test XML file that gets sent to the b2b/syncreceiver servlet using your favorite SOAP test tool (I'm using Firefox Poster here). You can download the zip file containing the demo here.

    The demo works by sending the sample XML request file (req.xml) to http://<b2bhost>:8001/b2b/syncreceiver using the SOAP test tool. The syncreceiver servlet keeps the socket connection open between itself and the test tool so that it can synchronously send the reply message back. When B2B receives the inbound request message, it is passed to the SOA composite through the default B2B Fabric binding. A simple reply is created in BPEL and returned to B2B, which then sends the message back to the test tool over that same socket connection. I'll show you the B2B configuration first, then we'll look at the SOA composite.

    Configuring B2B

    No additional configuration is necessary in order to use the syncreceiver servlet; it is already running when you start SOA. After importing the GC_SyncReqRep.zip repository file into B2B, you'll have the typical GlobalChips host trading partner and the Acme remote trading partner.

    Document Management

    The repository contains two very simple custom XML document definitions called Orders and OrdersResponse. In order to determine the trading partner agreement needed to process the inbound Orders document, B2B needs to know two things about it: what it is and where it came from. So let's look at how B2B identifies the appropriate document definition for the message. The XSDs for these two document definitions are not themselves particularly interesting. Whenever you're dealing with custom XML documents, B2B identifies the appropriate document definition for each XML message using an XPath identification expression, entered for each document definition under the Document Administration tab in the B2B console. The full XPath expression for the Orders document is //*[local-name()='shiporder']/*[local-name()='shipto']/*[local-name()='name']/text(). You can see this path in the XSD diagram below and how it uniquely identifies this message. The OrdersResponse document is identified in the same way; its XPath expression is //*[local-name()='Response']/*[local-name()='Status']/text(). You can see how its path differs, uniquely distinguishing the reply from the request.

    Trading Partner Profile

    The trading partner profiles are very simple too. For GlobalChips, a generic identifier is used to identify the sender of the response document using the host trading partner name. For Acme, a generic identifier is also used to identify the sender of the inbound request using the remote trading partner name. The document types are added for the remote trading partner as usual. So the remote trading partner Acme is the sender of the Orders document, and it is the receiver of the OrdersResponse document.

    For the remote trading partner only, there needs to be a dummy channel, which gets used in the outbound response agreement. The channel is not actually used; it is just a necessary placeholder that needs to be there when creating the agreement.

    Trading Partner Agreement

    The agreements are equally simple. There is no validation, and translation is not an option for a custom XML document type. For the InboundAgreement (request), the document definition is set to OrdersDef, and in the Agreement Parameters section the generic identifiers have been added for the host and remote trading partners. That's all that is needed for the inbound transaction. For the OutboundAgreement (response), the document definition is set to OrdersResponseDef and the generic identifiers for the two trading partners are added. The remote trading partner's dummy delivery channel is also added to the agreement.

    SOA Composite

    Import the SOA composite archive into JDeveloper as an EJB JAR file. Open the composite and you should have a project that looks like this. In the composite, open the b2bInboundSyncSvc exposed service and advance through the setup wizard. Select your Application Server Connection and advance to the Operations window. Notice here that the B2B binding is set to Receive; it is not set for Synchronous Request Reply. Continue advancing through the wizard as you normally would and select Finish at the end. Now open BPELProcess1 in the composite. The BPEL process is set up as a synchronous request/reply, as you can see below. The while loop is there just to give the process something to do. The actual reply message is prepared in the assignResponseValues assignment, followed by an Invoke of the B2B binding. Open the replyResponse Invoke and go to the Properties tab. You'll see that the fromTradingPartnerId, toTradingPartner, documentTypeName, and documentProtocolRevision properties have been set.

    Testing the Configuration

    To test the configuration, I used Firefox Poster. Enter the URL for the b2b/syncreceiver servlet and browse for the req.xml file that contains the test request message. In the Headers tab, add the property 'from' and give it the value 'Acme'. This is how B2B knows where the message is coming from; it uses that information, along with the document type name, to find the right trading partner agreement. Now post the message. You should get back a response with a status of '200 OK'. That's all there is to it.
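
    As a closing illustration, this is the shape of a request document the Orders identification XPath above would match; only the shiporder/shipto/name path is taken from the expression, the element content is invented:

    <shiporder>
      <shipto>
        <name>Acme test order</name>  <!-- the text() node the XPath selects -->
      </shipto>
    </shiporder>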

    Read the article

  • Working with documents and SharePoint - Best practices

    - by KunaalKapoor
    Follow these simple guidelines to make collaboration using SharePoint easier:

    1. File name: While you are allowed to use spaces in your filename (and it may even seem logical to do so), don't use them if your file will end up on (or is born on) SharePoint. When you use the "download a copy" functionality, SharePoint will replace the spaces with an "_". This might (will) result in inconsistency when you upload the "same" file again, since SharePoint will see it as a different file (the filename is different). I recommend using a capitalization-style naming convention: for instance, the document "Overall governance model.docx" would be named "OverallGovernanceModel.docx". Use the TITLE field in the Office applications to give your document a title (and subtitle, keywords, etc.); the Title column can be used in a view in a library. You can get to the document properties by clicking on Office Button/Prepare/Properties (Office 2007). This is metadata that is stored with the document and remains with it (even if you exchange the document via e-mail or an external hard drive). The filename cannot be longer than 128 characters (and that is IMHO far beyond reasonable), and you cannot use any of these characters: ” # % & * : < > ? \ / { | } ~

    2. Versioning: SharePoint has a built-in versioning system. You can work with major (published) versions and minor (draft) versions, and for each of these two version types you can configure the number of versions that are kept. Watch out: each version is saved in full, not only the delta between two versions, and this counts toward your Site Collection quota. (Example: you have a Word document with a size of 2 MB; keeping 5 drafts results in storing, and consuming, 10 MB.) So don't call your document "NewUserAccountProcessDRAFTv1.docx"; call it "NewUserAccountProcess.docx" and use the versioning settings in your library. You can enable views on your library to display the version number, and you can have the version number displayed in a Word document.

    3. Use metadata: Use metadata to assign other properties to documents, so they can easily be identified, sorted, or grouped by.

    Read the article

  • Calling a webservice via JavaScript

    - by jeroenb
    If you want to consume a webservice, it's not always necessary to do a postback. It's not even that hard!

    1. Webservice

    Add the ScriptService attribute to the webservice:

    [System.Web.Script.Services.ScriptService]
    public class PersonsInCompany : System.Web.Services.WebService
    {

    Create a WebMethod:

    [WebMethod]
    public Person GetPersonByFirstName(string name)
    {
        List<Person> personSelect = persons.Where(p => p.FirstName.ToLower().StartsWith(name.ToLower())).ToList();
        if (personSelect.Count > 0)
            return personSelect.First();
        else
            return null;
    }

    2. Webpage

    Add a reference to your service to your ScriptManager. Then add some JavaScript: first call your webservice via ClassName.WebMethod (here PersonsInCompany.GetPersonByFirstName), and pass a callback to catch the result from the webservice and use it to update your page:

    <script type="text/javascript">
        function GetPersonInCompany() {
            var val = document.getElementById("MainContent_TextBoxPersonName");
            PersonsInCompany.GetPersonByFirstName(val.value, FinishCallback);
        }
        function FinishCallback(result) {
            document.getElementById("MainContent_LabelFirstName").innerHTML = result.FirstName;
            document.getElementById("MainContent_LabelName").innerHTML = result.Name;
            document.getElementById("MainContent_LabelAge").innerHTML = result.Age;
            document.getElementById("MainContent_LabelCompany").innerHTML = result.Company;
        }
    </script>

    If you have any questions, feel free to contact me! You can download the code here.

    Read the article

  • Should I keep separate client codebases and databases for a software-as-a-service application?

    - by John
    My question is about the architecture of my application. I have a Rails application where companies can administer everything related to their clients. Companies buy a subscription, and their users access the application online. Hopefully I will get multiple companies subscribing to my application/service. What should I do with my code and database?

    1. Separate app codebase and database per company
    2. One app codebase but a separate database per company
    3. One app codebase and one database

    The decision involves security (e.g. a user from company X should not see any data from company Y), performance (supposing it becomes successful, it should perform well), and scalability (if successful, it should still perform well while remaining easy for me to manage across all the companies, code changes, etc.). For the sake of maintainability I tend to opt for the single codebase, but for the database I really don't know. What do you think is the best option?
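
    If the single-database option wins, the usual Rails safeguard is to route every query through the tenant rather than trusting each controller to filter. A minimal sketch: the model names follow the question, and current_company is a hypothetical helper resolved from the logged-in user:

    class Company < ActiveRecord::Base
      has_many :clients
    end

    class Client < ActiveRecord::Base
      belongs_to :company
    end

    class ClientsController < ApplicationController
      def index
        # Always query through the tenant association, never Client.all,
        # so a user from company X can never see company Y's rows.
        @clients = current_company.clients
      end
    end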

    Read the article

  • Can I create two databases, each for different directories?

    - by Tim
    I have run Recoll, which created a database for the data partition on my internal hard drive; the database is stored under my home partition on the same internal drive. I now want to run Recoll to create a database for a directory on my external hard drive, and to store this new database on that external hard drive, because my internal hard drive doesn't have enough space to hold it. I was wondering how to do that in Recoll? Note: my current Recoll was installed from the software center of Ubuntu 12.04. Thanks!
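
    I can't speak for the Ubuntu software-center build specifically, but Recoll generally supports separate configuration directories, each with its own index location, selected with the -c option. A sketch with example paths:

    # Create a second configuration directory on the external drive:
    mkdir -p /media/external/recoll-conf

    # Point it at the external data and keep the index there too
    # (topdirs and dbdir are standard recoll.conf settings):
    cat > /media/external/recoll-conf/recoll.conf <<'EOF'
    topdirs = /media/external/data
    dbdir = /media/external/recoll-conf/xapiandb
    EOF

    # Index and search against that configuration explicitly:
    recollindex -c /media/external/recoll-conf
    recoll -c /media/external/recoll-conf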

    Read the article

  • TinyMCE include crashes IE8

    - by dkris
    I am trying to open a popup onclick using a function call, as shown below:

    <a id="forgotPasswordLink" href="#"
       onclick="openSupportPage(document.getElementById('forgotPasswordLink').innerHTML);">
       Some Text
    </a>

    I am creating the HTML for the popup page on the fly and also including the TinyMCE source file there. The code is as shown below:

    <script type="text/javascript">
    <!--
    function openSupportPage(unsafeSupportText) {
        var features = "width=700,height=400,status=yes,toolbar=no,menubar=no,location=no,scrollbars=yes";
        var winId = window.open('', 'Test', features);
        winId.focus();
        winId.document.open('text/html', 'replace');
        winId.document.write('<html><head><title>' + document.title + '</title><link rel="stylesheet" href="./css/default.css" type="text/css">\n');
        winId.document.write('<script src="./js/tiny_mce/tiny_mce.js" type="text/javascript" language="javascript">Script_1</script>\n');
        winId.document.write('<script src="./js/support_page.js" type="text/javascript" language="javascript">Script_2</script>\n');
        winId.document.write('</head><body onload="inittextarea()">\n'); /* function call which will use the TinyMCE source file */
        winId.document.write(' \n');
        winId.document.write('<p>&#160;</p>');
        var hiddenFrameHTML = document.getElementById("HiddenFrame").innerHTML;
        hiddenFrameHTML = hiddenFrameHTML.replace(/&amp;/gi, "&");
        hiddenFrameHTML = hiddenFrameHTML.replace(/&lt;/gi, "<");
        hiddenFrameHTML = hiddenFrameHTML.replace(/&gt;/gi, ">");
        winId.document.write(hiddenFrameHTML);
        winId.document.write('<textarea id="content" rows="10" style="width:100%">\n');
        winId.document.write(document.getElementById(top.document.forms[0].id + ":supportStuff").innerHTML);
        winId.document.write('</textArea>\n');
        var hiddenFrameHTML2 = document.getElementById("HiddenFrame2").innerHTML;
        hiddenFrameHTML2 = hiddenFrameHTML2.replace(/&amp;/gi, "&").replace(/&lt;/gi, "<").replace(/&gt;/gi, ">");
        winId.document.write(hiddenFrameHTML2);
        winId.document.write('</body></html>\n');
        winId.document.close();
    }
    // -->
    </script>

    The support_page.js file contains the following:

    function inittextarea() {
        tinyMCE.init({
            elements : "content",
            mode : "exact",
            theme : "advanced",
            readonly : true,
            setup : function(ed) {
                ed.onInit.add(function() {
                    tinyMCE.activeEditor.execCommand("mceToggleVisualAid");
                });
            }
        });
    }

    The problem arises when the onclick event is fired: the popup opens, and then IE8 stops responding and seems to hang. It works fine on Chrome, Firefox, and Safari. I believe the issue is caused by the TinyMCE script inclusion, because when I comment out the lines that include tiny_mce.js and support_page.js, the popup renders with no issues. I am using the latest TinyMCE release. Please let me know why this is happening and what the resolution could be.

    Read the article

  • How do I add a version number field to an Office 2007 docx document?

    - by Jon Cage
    I've been having a crack at using fields in Word 2007 and have hit a slight stumbling block. I want to add a field that I can use in several parts of the document to represent the current version (something of the form v0.1), but I can't see an obvious way to do it. The only provision I've found is something called RevNum, but that gets updated every time I save the document. Is there a field I've missed, or a way of adding custom fields, or something similar?

    Read the article

  • Do database snapshots of mirrored databases affect the performance of the principal database?

    - by yrushka
    I have two servers set up in high-safety mirroring: one is the principal and the other is the mirror. Currently I have two snapshots of a production database (100 GB in size) created on the principal server (to serve heavy SELECT workloads without locking) and two snapshots of the same database on the mirror server for reporting purposes. I know snapshots reduce the performance of their source databases, but I am not sure whether snapshots on the mirror server have any impact on the principal server's performance. Thanks!

    Read the article

  • How to determine which image(s) in a Word document are unusually large?

    - by Brian
    I have a Word 2007 document, now 7 MB in size, that is being edited by many folks. I would like to figure out which of the many images in the document is the culprit; my hunch is that one or two of them are bitmaps or some other large image format. In smaller documents, when this has happened, I have found it by trial and error:

    1. Remove an image
    2. Save the file
    3. Check the file size
    4. Repeat

    Is there a more elegant solution to this issue?

    Read the article
