Search Results

Search found 22170 results on 887 pages for 'multiple schema'.


  • MySQL and GROUP_CONCAT() maximum length

    - by zewaren
    Hello everyone, I'm using GROUP_CONCAT() in a MySQL query to convert multiple rows into a single string. However, the maximum length of the result of this function is 1024 characters. I'm well aware that I can raise this limit with the group_concat_max_len parameter:

        SET SESSION group_concat_max_len = 1000000;

    However, on the server I'm using, I can't change any parameters: neither by running the statement above nor by editing any configuration file. So my question is: is there any other way to get the output of a multiple-row query into a single string? Thank you for your answers.
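    If the server parameter really is out of reach, the usual fallback is to do the concatenation client-side. A minimal sketch in Python (the connector, table, and column names are placeholders, not from the question):

        # Fetch the rows individually and join them in the application,
        # sidestepping group_concat_max_len entirely.
        import mysql.connector  # assumes the MySQL Connector/Python package

        conn = mysql.connector.connect(user="u", password="p", database="d")
        cur = conn.cursor()
        cur.execute("SELECT col FROM t WHERE grp = %s", ("some_group",))
        result = ",".join(row[0] for row in cur)  # same output as GROUP_CONCAT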

    Read the article

  • PHP File Upload second file does not upload, first file does without error

    - by Curtis
    So I have a script I have been using and it generally works well with multiple files... When I upload a very large file in a multiple-file upload, only the first file is uploaded, and I am not seeing any errors as to why. I figure this is related to a timeout setting but can not figure it out - any ideas? I have the following set in my htaccess file:

        php_value post_max_size 1024M
        php_value upload_max_filesize 1024M
        php_value memory_limit 600M
        php_value output_buffering on
        php_value max_execution_time 259200
        php_value max_input_time 259200
        php_value session.cookie_lifetime 0
        php_value session.gc_maxlifetime 259200
        php_value default_socket_timeout 259200
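    One way to make the failure visible is to check the per-file error code PHP records for every upload. A minimal sketch (the field name "files" is a placeholder for the form's input name):

        <?php
        // Each entry in $_FILES carries an error constant per uploaded file.
        foreach ($_FILES['files']['error'] as $i => $code) {
            if ($code === UPLOAD_ERR_OK) {
                continue;                       // this file arrived intact
            }
            // UPLOAD_ERR_INI_SIZE / UPLOAD_ERR_FORM_SIZE: size limits exceeded,
            // UPLOAD_ERR_PARTIAL: the upload was cut off (e.g. a timeout)
            error_log("Upload $i failed with error code $code");
        }

    Note that post_max_size caps the whole request, so two near-limit files in one POST can each pass individually but fail together.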

    Read the article

  • Open source embedded filesystem (or single file virtual filesystem, or structured storage) library f

    - by Ioan
    I'm not sure what the "general" name of something like this might be. I'm looking for a library that gives me a file format to store different types of binary data in an expanding single file. Requirements:

        - open source, non-GPL (LGPL ok)
        - C interface
        - the file format is a single file; multiple files within using a POSIX-like file API (or multiple "blobs" within using some other API)
        - file/structure editing is done in-place
        - reliable first, performant second

    Examples include the virtual drives of a virtual machine, whefs, HDF, CDF and NetCDF. Problems with the above: whefs doesn't appear to be very mature, but best describes what I'm after; HDF, CDF and NetCDF are usable (also very reliable and fast), but they're rather complicated and I'm not entirely convinced of their support for opaque binary "blobs".

    Edit: Forgot to mention, one other relevant question: http://stackoverflow.com/questions/1361560/simple-virtual-filesystem-in-c-c

    Another similar question: http://stackoverflow.com/questions/374417/is-there-an-open-source-alternative-to-windows-compound-files

    Edit: Added condition of in-place editing.

    Read the article

  • Python: split files using multiple split delimiters

    - by donalmg
    Hi, I have multiple CSV files which I need to parse in a loop to gather information. The problem is that while they are the same format, some are delimited by '\t' and others by ','. After this, I want to remove the double quotes from around each string. Can Python split via multiple possible delimiters? At the moment, I can split each line on one of them by using:

        f = open(filename, "r")
        fields = f.readlines()
        for fs in fields:
            sf = fs.split('\t')
            tf = [fi.strip('"') for fi in sf]

    Any suggestions are welcome.
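    The standard-library answer is a regular expression with a character class, which splits on either delimiter in one pass. A minimal sketch:

        import re

        with open(filename, "r") as f:
            for line in f:
                # split on tab OR comma, then strip surrounding double quotes
                fields = [part.strip('"')
                          for part in re.split(r'[\t,]', line.rstrip('\n'))]

    For well-formed CSV (where a quoted field may itself contain a comma), the csv module is safer; csv.Sniffer can even detect which delimiter a given file uses.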

    Read the article

  • Facebook FQL Question

    - by Michael
    I'm trying to use the Facebook JavaScript API to run FQL queries, and it works fine if I try to get users by username or uid, but it doesn't work when I'm searching by name.

        function get_username() {
            var name = prompt("Enter name: ")
            FB.api(
                {
                    method: 'fql.query',
                    query: 'SELECT username FROM user WHERE name in "' + name + '"'
                },
                function(response) {
                    var x = response[0].username
                    alert('Username is ' + x);
                }
            );
        }

    I realize that this will probably return multiple users, but I can't figure out how to tell if it's returning multiple users or no users at all; it seems to freeze after trying to get response[0].username. I'm probably making a beginner mistake, but any ideas?
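    Guarding the callback before indexing into the result avoids the apparent freeze: an empty result makes response[0] undefined, so response[0].username throws. A minimal sketch of a defensive callback (the error_code check reflects how the old REST-style API reported failures; treat it as an assumption):

        function(response) {
            if (!response || response.error_code) {
                alert('Query failed');              // FQL reported an error
            } else if (response.length === 0) {
                alert('No users found');
            } else {
                // possibly several matches - handle each one
                for (var i = 0; i < response.length; i++) {
                    alert('Username is ' + response[i].username);
                }
            }
        }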

    Read the article

  • Rails, searchlogic choose categories with checkboxes

    - by atmorell
    Hello, I am using searchlogic to search some paintings. Each painting belongs to a single category. What I would like to do is add multiple checkboxes to my search form, so that users can mark multiple categories (joined with OR). Is this possible with searchlogic? The query I am looking for is something like this:

        SELECT * FROM paintings WHERE category LIKE "white" OR category LIKE "red" ...

        f.check_box :category (white)
        f.check_box :category (black)
        f.check_box :category (red)
        f.check_box :category (green)

    etc.
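    Searchlogic generates "_any" variants of its named scopes for exactly this OR-style matching. A minimal sketch, assuming the gem's usual scope naming applies to this model:

        # matches paintings whose category is LIKE any of the checked values
        checked = params[:categories] || []          # e.g. ["white", "red"]
        @paintings = Painting.category_like_any(checked)

    For exact matches rather than LIKE, category_equals_any should do the same job.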

    Read the article

  • JSF2 - what scope for f:ajax elements?

    - by Konrad Garus
    I have this form:

        <h:form>
            <h:outputText value="Tag:" />
            <h:inputText value="#{entryRecorder.tag}">
                <f:ajax render="category" />
            </h:inputText>
            <h:outputText value="Category:" />
            <h:inputText value="#{entryRecorder.category}" id="category" />
        </h:form>

    What I'm trying to achieve: when you type in the "tag" field, the entryRecorder.tag field is updated with what was typed. By some logic upon this action the bean also updates its category field, and this change should be reflected in the form. Questions: What scope shall I use for EntryRecorder? Request may not be satisfactory for multiple AJAX requests, while session will not work with multiple browser windows per one session. How can I register my updateCategory() action in EntryRecorder so that it is triggered when the bean is updated?
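    JSF2 added the view scope for exactly this gap between request and session: one bean instance per view, surviving AJAX postbacks on the same page but independent per browser window. A minimal sketch (class and method names are illustrative):

        import java.io.Serializable;
        import javax.faces.bean.ManagedBean;
        import javax.faces.bean.ViewScoped;

        @ManagedBean
        @ViewScoped
        public class EntryRecorder implements Serializable {
            private String tag;
            private String category;

            public void setTag(String tag) {
                this.tag = tag;
                updateCategory();   // derive the category whenever the tag changes
            }
            private void updateCategory() { /* lookup logic */ }
            // getters and remaining setters omitted
        }

    Alternatively, wire the logic explicitly with <f:ajax listener="#{entryRecorder.updateCategory}" render="category" /> instead of calling it from the setter.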

    Read the article

  • LLBLGen Pro v3.1 released!

    - by FransBouma
    Yesterday we released LLBLGen Pro v3.1! Version 3.1 comes with new features and enhancements, which I'll describe briefly below. v3.1 is a free upgrade for v3.x licensees.

    What's new / changed?

    Designer

        - Extensible import system. An extensible import system has been added to the designer to import project data from external sources. Importers are plug-ins which import project meta-data (like entity definitions, mappings and relational model data) from an external source into the loaded project. In v3.1, an importer plug-in for importing project elements from existing LLBLGen Pro v3.x project files has been included. You can use this importer to create source projects from which you import parts of models to build your actual project with.
        - Model-only relationships. In v3.1, relationships of the type 1:1, m:1 and 1:n can be marked as model-only. A model-only relationship isn't required to have a backing foreign key constraint in the relational model data. They're ideal for projects which have to work with relational databases where changes can't always be made, or where some relationships can't be added (e.g. ones which are important for the entity model, but are not allowed to be added to the relational model for some reason).
        - Custom field ordering. Although fields in an entity definition don't really have an ordering, it can be important in some situations to have the entity fields in a given order, e.g. when you use compound primary keys. Field ordering can be defined using a pop-up dialog which can be opened in various ways, e.g. inside the project explorer, model view and entity editor. It can also be set automatically during refreshes based on new settings.
        - Command line relational model data refresher tool, CliRefresher.exe. The command line refresh tool shipped with v2.6 is now available for v3.1 as well.
        - Navigation enhancements in various designer elements. It's now easier to find elements like entities and typed views in the project explorer from editors, to navigate to related entities in the project explorer by right-clicking a relationship, to navigate to the super-type in the project explorer when right-clicking an entity, and to navigate to the sub-type in the project explorer when right-clicking a sub-type node.
        - Minor visual enhancements / tweaks.

    LLBLGen Pro Runtime Framework

        - Entity creation is now up to 30% faster and takes 5% less memory. Creating an entity object has been optimized further by tweaks inside the framework to make instantiating an entity object up to 30% faster. It now also takes up to 5% less memory than in v3.0.
        - Prefetch Path node merging is now up to 20-25% faster. Setting entity references required the creation of a new relationship object. As this relationship object is only used internally (for syncing), it could be cached. This increases performance by 20-25% in the merging functionality.
        - Entity fetches are now up to 20% faster. A large number of tweaks have been applied to make entity fetches up to 20% faster than in v3.0.
        - Full WCF RIA support. It's now possible to use your LLBLGen Pro runtime framework powered domain layer in a WCF RIA application using the VS.NET tools for WCF RIA Services. WCF RIA Services is a Microsoft technology for .NET 4, typically used within Silverlight applications.
        - SQL Server DQE compatibility level is now per instance (usable in Adapter). It's now possible to set the compatibility level of the SQL Server Dynamic Query Engine (DQE) per instance of the DQE, instead of only through the global setting as before. The global setting is still available and is used as the default value for the per-instance compatibility level. You can use this to switch between CE Desktop and normal SQL Server compatibility per DataAccessAdapter instance.
        - Support for the COUNT_BIG aggregate function (SQL Server specific). The aggregate function COUNT_BIG has been added to the list of available aggregate functions in the framework.
        - Minor changes / tweaks.

    I'm especially pleased with the import system, as it makes working with entity models a lot easier. The import system lets you import from another LLBLGen Pro v3 project any entity definition, mapping and/or meta-data like table definitions. This way you can build repository projects where you store model fragments, e.g. the building blocks for a customer-order system, a user credential model etc. - any model you can think of. In most projects, you'll recognize that some parts of your new model look familiar. In these cases it would have been easier if you had been able to import these parts from projects you had pre-created. With LLBLGen Pro v3.1 you can.

    For example, say you have an Oracle schema called CRM which contains the bread 'n' butter customer-order-product kind of model. You create an entity model from that schema and save it in a project file. Now you start working on another project for another customer and you have to use SQL Server. You also start using model-first development, so you develop the entity model from scratch as there's no existing database. As this customer also requires some CRM-like entity model, you import the entities from your saved Oracle project into this new SQL Server-targeting project. Because you don't work with Oracle this time, you don't import the relational meta-data, just the entities, their relationships and possibly their inheritance hierarchies, if any. As they're now entities in your project, you can change them a bit to match the new customer's requirements. This can save you a lot of time, because you can re-use pre-fab model fragments in new projects. In the example above there are no tables yet (as you work model-first), so the forward mapping capabilities of LLBLGen Pro v3 create the tables, PK constraints, unique constraints and FK constraints for you. This way you can build a nice repository of model fragments which you can re-use in new projects.

    Read the article

  • How To AutoScroll a DataGridView during Drag and Drop

    - by Mason
    One of the forms in my C# .NET application has multiple DataGridViews that implement drag and drop to move the rows around. The drag and drop mostly works right, but I've been having a hard time getting the DataGridViews to auto-scroll: when a row is dragged near the top or bottom of the grid, it should scroll in that direction. So far, I've tried implementing a version of this solution. I have a ScrollingGridView class inheriting from DataGridView that implements the described timer, and according to the debugger, the timer is firing appropriately, but the timer code:

        SendMessage(Handle, WM_VSCROLL, (IntPtr)scrollDirectionInt, IntPtr.Zero);

    doesn't do anything as far as I can tell, possibly because I have multiple DataGridViews in the form. I also tried modifying the AutoScrollOffset property, but that didn't do anything either. Investigation of the DataGridView and ScrollBar classes doesn't seem to suggest any other commands or functions that will actually make the DataGridView scroll. Can anyone help me with a function that will actually scroll the DataGridView, or some other way to solve the problem?
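    Instead of window messages, DataGridView exposes its scroll position directly through FirstDisplayedScrollingRowIndex, which sidesteps the question of which control the message actually reaches. A minimal sketch of a timer tick handler (field names are illustrative):

        // Scroll the grid one row towards wherever the cursor is hovering.
        private void scrollTimer_Tick(object sender, EventArgs e)
        {
            Point p = grid.PointToClient(Cursor.Position);
            int first = grid.FirstDisplayedScrollingRowIndex;
            const int margin = 20;                        // hot zone, in pixels

            if (p.Y < margin && first > 0)
                grid.FirstDisplayedScrollingRowIndex = first - 1;
            else if (p.Y > grid.Height - margin && first < grid.RowCount - 1)
                grid.FirstDisplayedScrollingRowIndex = first + 1;
        }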

    Read the article

  • Visual SourceSafe: Architecture/Management

    - by Nic
    I was looking for information on how other people with larger teams manage SourceSafe currently. I was looking for recommendations and advice for a new project I was setting up that will allow for a few key things:

        - scalability
        - managing multiple overlapping releases
        - geared more around .NET, however it allows for legacy applications (VB, ASP and VBS)

    I am really looking for any lessons learned from other teams. I come from a StarTeam background and we used view labels and release labels to manage multiple overlapping projects. View labels were geared more towards compiled code and SQL, and the revision labels were used for VB/ASP projects. Thank you for any advice and for sharing your experience and frustrations with other companies you might have worked with in the past.

    Read the article

  • Best practices for cross platform git config?

    - by Bas Bossink
    Context: A number of my application user configuration files are kept in a git repository for easy sharing across multiple machines and multiple platforms. Amongst these configuration files is .gitconfig, which contains the following settings for handling the carriage return / linefeed characters:

        [core]
            autocrlf = true
            safecrlf = false

    Problem: These settings also get applied on a GNU/Linux platform, which causes obscure errors.

    Question: What are some best practices for handling these platform-specific differences in configuration files?

    Proposed solution: I realize this problem could be solved by having a branch for each platform, keeping the common stuff in master and merging with the platform branch when master moves forward. I'm wondering if there are any easier solutions to this problem?
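    One branch-free alternative is to keep only the shared settings under version control and pull in the platform-specific part from an untracked file. A minimal sketch, assuming a git version with include support (git 1.7.10 or later):

        # ~/.gitconfig - tracked in the repo, shared everywhere
        [include]
            path = ~/.gitconfig-local       # untracked, differs per machine

        # ~/.gitconfig-local on the Windows box only
        [core]
            autocrlf = true
            safecrlf = false

    Since the include target is never committed, each platform keeps its own line-ending policy without branching.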

    Read the article

  • Sharepoint - connectable receiving, XSLT editable web part?

    - by Corey O.
    You can use the "Data View" web part to take data from a database call, and then edit the XSLT manually to make it look and do whatever you want, within the scope of that data and XSLT's capabilities. Is there a web part that allows me to do the same thing, but with data that is received from a connected web part source rather than a database set? For example: I'd like to be able to pull in a Data View web part that queries a bunch of data and makes it available all over the page. Then, I would like to hide that Data View. Once it is hidden, I'd like to be able to take another customizable web part and pull a field (or multiple fields if possible) from the Data View web part via a web part connection. This would allow me to display various fields in creative formats without having to call the same query multiple times on the same page from different web parts. Is there a built-in web part that will allow me to do this?

    Read the article

  • Django templates onchange data

    - by Hulk
    In the following code, I have a drop-down box and a multi-select box. My question: using JavaScript and Django, how can I change the designations shown when the selected name changes in the drop-down box?

        <tr><td>name:</td><td>
            <select id="name" name="name">
                {% for name in names %}
                <option value="{{name.id}}" {% for selected_id in names %}{% ifequal name.id selected_id %} {{ selected }} {% endifequal %}{% endfor %}>{{name.name}}</option>
                {% endfor %}
            </select>
        </td></tr>
        {% for desg in designation %}
        <tr><td><p>Topics:</td><td>
            <select id="desg" name="desg" multiple="multiple">
                <option value="{{desg.id}}">{{desg.desg}}</option>
            </select></p></td></tr>
        {% endfor %}

    Thanks..
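    The usual pattern is to catch the drop-down's change event on the client and fetch the matching designations from a Django view that returns JSON. A minimal sketch (jQuery assumed; the URL and response shape are illustrative, not from the question):

        // Repopulate the #desg box whenever a different name is selected.
        $('#name').change(function () {
            $.getJSON('/designations/' + $(this).val() + '/', function (data) {
                var $desg = $('#desg').empty();
                $.each(data, function (i, d) {
                    $desg.append($('<option>').val(d.id).text(d.desg));
                });
            });
        });

    The corresponding view would filter the designation objects by the submitted name id and serialize them to JSON.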

    Read the article

  • IoC container configuration

    - by nivlam
    How should the configuration for an IoC container be organized? I know that registering by code should be placed at the highest level in an application, but what if an application has hundreds of dependencies that need to be registered? The same goes for XML configurations: I know that you can split an XML configuration into multiple files, but that seems like it would become a maintenance hassle if someone had to dig through multiple XML files. Are there any best practices for organizing the registration of dependencies? In all the videos and tutorials that I've seen, the code used in the demo was simple enough to place in a single location. I have yet to come across a sample application that utilizes a large number of dependencies.
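    A common answer is to group registrations into per-subsystem modules that the composition root merely assembles. A minimal sketch in the style of Autofac's modules (type names are illustrative):

        using Autofac;

        // One module per functional area keeps hundreds of registrations navigable.
        public class PersistenceModule : Module
        {
            protected override void Load(ContainerBuilder builder)
            {
                builder.RegisterType<SqlOrderRepository>().As<IOrderRepository>();
                builder.RegisterType<SqlCustomerRepository>().As<ICustomerRepository>();
            }
        }

        // Composition root: the only place that knows about all the modules.
        var builder = new ContainerBuilder();
        builder.RegisterModule(new PersistenceModule());
        builder.RegisterModule(new MessagingModule());
        var container = builder.Build();

    Most containers offer an equivalent (Ninject modules, StructureMap registries, Windsor installers), plus assembly scanning to register by convention instead of one line per type.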

    Read the article

  • Django exclude(**kwargs) help

    - by shawnjan
    Hey guys/gals! I had a question for you, something that I can't seem to find the solution for... Basically, I have a model called Environment, and I am passing all of them to a view, and there are particular environments that I would like to exclude. Now, I know there is an exclude function on a queryset, but I can't seem to figure out how to use it for multiple options... For example, I tried this but it didn't work:

        kwargs = {"name": "env1", "name": "env2"}
        envs = Environment.objects.exclude(**kwargs)

    But the only thing that it will exclude is the last "name" value in the dict of kwargs. I understand why it does that now, but I still can't seem to exclude multiple objects with one command. Any help is much appreciated! Shawn
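    A dict can only hold one value per key, so the second "name" silently overwrites the first. The __in lookup expresses the whole set in one keyword argument; a minimal sketch:

        # excludes every Environment whose name is in the list
        envs = Environment.objects.exclude(name__in=["env1", "env2"])

    Chaining also works, since each call returns a new queryset: Environment.objects.exclude(name="env1").exclude(name="env2").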

    Read the article

  • Building a list of favorites from Core Data

    - by Jim
    I'm building an app with two tabs. The first tab has a main table view that connects to a detail view of the row. The second tab will display a table view based on the user adding content to it by tapping a button on the detail view. My question is this: what is the correct design pattern to do this? Do I create a second ManagedObjectContext/ManagedObjectContextID and then save that context to a new persistent store, or can the MOC be saved to the existing store without affecting the original table view? I've looked at Core Data Recipes and Core Data Books, and neither deals with multiple stores, although Books does deal with multiple MOCs. Any reference would be great.

    Read the article

  • C#: Pass a user-defined type to an Oracle stored procedure

    - by pistacchio
    With reference to http://stackoverflow.com/questions/980324/oracle-variable-number-of-parameters-to-a-stored-procedure I have a stored procedure to insert multiple Users into a User table. The table is defined like:

        CREATE TABLE "USER" (
            "Name" VARCHAR2(50),
            "Surname" VARCHAR2(50),
            "Dt_Birth" DATE
        )

    The stored procedure to insert multiple Users is:

        type userType is record (
            name varchar2(100),
            ...
        );
        type userList is table of userType index by binary_integer;

        procedure array_insert (p_userList in userList) is
        begin
            forall i in p_userList.first..p_userList.last
                insert into users (username) values (p_userList(i));
        end array_insert;

    How can I call the stored procedure from C#, passing a userList of userType? Thanks
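    ODP.NET can bind PL/SQL associative arrays of scalar types, but not arrays of records, so the usual workaround is one array parameter per record field. A minimal sketch, assuming the procedure is reshaped to take parallel arrays (names are illustrative):

        // using Oracle.DataAccess.Client; (ODP.NET)
        OracleCommand cmd = new OracleCommand("pkg.array_insert", conn);
        cmd.CommandType = CommandType.StoredProcedure;

        OracleParameter names = new OracleParameter("p_names", OracleDbType.Varchar2);
        names.CollectionType = OracleCollectionType.PLSQLAssociativeArray;
        names.Value = new string[] { "Tom", "Ann" };   // one entry per user
        names.Size = 2;                                // element count
        cmd.Parameters.Add(names);
        // repeat for surnames and birth dates, then:
        cmd.ExecuteNonQuery();

    On the PL/SQL side each parameter becomes its own "table of varchar2 index by binary_integer" type, and the forall loop reads the arrays by the same index.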

    Read the article

  • C++ Winsock non-blocking/async UDP socket

    - by Ragnagard
    Hi all! I'm developing a little data processor in C++ over UDP sockets, and have a thread (just one, apart from the sockets) that processes the info received from them. My problem happens when I need to receive info from multiple clients on the socket at the same time. How could I do something like:

        Socket foo;
        /* init socket vars and attribs */
        while (serving) {
            thread_processing(foo_info);
        }

    for multiple clients (many concurrent accesses) in C++? I'm using Winsock at the moment on Win32, but have only got standard blocking UDP sockets working. No GUI, it's a console app. I'd appreciate an example or a pointer to one ;). Thanks in advance.
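    The classic single-threaded answer is select(), which blocks until any of the watched sockets has a datagram ready, so one loop can serve many clients without extra threads. A minimal Winsock sketch (error handling omitted; sock1 and sock2 are assumed to be bound UDP sockets):

        #include <winsock2.h>

        char buf[2048];
        sockaddr_in from;
        int fromLen = sizeof(from);

        fd_set readSet;
        FD_ZERO(&readSet);
        FD_SET(sock1, &readSet);
        FD_SET(sock2, &readSet);

        timeval tv = { 1, 0 };                          // 1-second timeout
        int n = select(0, &readSet, NULL, NULL, &tv);   // first arg is ignored by Winsock
        if (n > 0 && FD_ISSET(sock1, &readSet)) {
            recvfrom(sock1, buf, sizeof(buf), 0,
                     (sockaddr*)&from, &fromLen);       // guaranteed not to block now
        }

    Alternatively, ioctlsocket(sock, FIONBIO, &one) puts a socket into non-blocking mode, and WSAEventSelect scales the same idea to event handles.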

    Read the article

  • Complex type support in process flow - XMLTYPE

    - by shawn
    Before the OWB 11.2 release, only 5 simple data types were supported in process flow: DATE, BOOLEAN, INTEGER, FLOAT and STRING. A new complex data type, XMLTYPE, was added in 11.2 in order to support complex data being passed between the process flow activities. In this article we will give a simple example to illustrate the usage of the new type and some related editors.

    Suppose there is a bookstore that uses XML format orders as shown below (we use the simplest form for illustration purposes). We can then create a process flow to handle the order: take the order as input, extract the necessary information, and generate a confirmation email to the customer automatically.

        <order id="0001">
            <customer>
                <name>Tom</name>
                <email>[email protected]</email>
            </customer>
            <book id="Java_001">
                <quantity>3</quantity>
            </book>
        </order>

    Consider a simple use case: we use an input parameter/variable with XMLTYPE to hold the XML content of the order; then we use an Assign activity to retrieve the email info from the order; after that, we create an email activity to send the email (other activities might be added in a practical case, but will not be described here).

    1) Set the XML content value

    For testing purposes, we will create a variable to hold the sample order, which will then be used among the process flow activities. When the variable is of XMLTYPE and the "Literal" value is set to true, the advanced editor is enabled. Click the "Advanced Editor" shown as above, and a simple XML editor will pop up. The editor has basic features like syntax highlighting and checking as shown below. We can also do basic validation, or validation against a schema, with the editor by selecting the normalized schema. With this, it is easier to provide the value for XMLTYPE variables.

    2) Extract information from the XML content

    After setting the value, we need to extract the email information with the Assign activity. In process flow, an enhanced expression builder is used to help users construct the XPath for extracting values from XML content. When the variable's literal value is set to false, the advanced editor is enabled. Click the button and the advanced editor will pop up, as shown below. The editor is based on the expression builder (which is often used in mappings etc.); an XPath lib panel is appended which provides some help information on how to write the XPath. The expression used here is:

        XMLTYPE.EXTRACT(XML_ORDER, '/order/customer/email/text()').getStringVal()

    which uses '/order/customer/email/text()' as the XPath to extract the email info from the XML document.

    A variable called "EMAIL_ADDR" is created with the String data type to hold the extracted value. Then we bind the "VARIABLE" parameter of the Assign activity to the "EMAIL_ADDR" variable, which means the value of the "EMAIL_ADDR" variable will be set to the result of the "VALUE" parameter of the Assign activity.

    3) Use the extracted information in the Email activity

    We bind the "TO_ADDRESS" parameter of the email activity to the "EMAIL_ADDR" variable created in the step above. We can also extract other information from the XML order directly through the expression; for example, we can set the "MESSAGE_BODY" with the value:

        'Dear '||XMLTYPE.EXTRACT(XML_ORDER,'/order/customer/name/text()').getStringVal()
        ||chr(13)||chr(10)||'   You have ordered '
        ||XMLTYPE.EXTRACT(XML_ORDER,'/order/book/quantity/text()').getStringVal()
        ||' '||XMLTYPE.EXTRACT(XML_ORDER,'/order/book/@id').getStringVal()

    This expression will extract the customer name, the quantity and the book id from the order to compose the message body.

    To make the email activity work, we need to provide some other necessary information, such as "SMTP_SERVER" (the SMTP server used to send the emails, like "mail.bookstore.com"; the default PORT number is set to 25, so change the value accordingly), "FROM_ADDRESS" and "SUBJECT". Then the process flow is ready to go. After deploying the process flow package, we can simply run the process flow to check if the result is as expected (an email will be sent to the specified email address with the proper subject and message body).

    Note: In Oracle 11g, there is an enhanced security feature, ACL (Access Control List), which restricts network access within the db, so we need to edit the list to allow UTL_SMTP to work if you are using Oracle 11g. Refer to the chapters "Access Control Lists for UTL_TCP/HTTP/SMTP" and "Managing Fine-Grained Access to External Network Services" for more details.

    In previous releases, XMLTYPE already existed in other OWB objects, like mappings/transformations etc. When a mapping/transformation is dragged into a process flow, the parameters with XMLTYPE are mapped to STRING. Now with the XMLTYPE support in process flow, XMLTYPE maps to XMLTYPE in a more natural way, and we can leverage the new data type for the design.

    Read the article

  • Replication Services as ETL extraction tool

    - by jorg
    In my last blog post I explained the principles of Replication Services and the possibilities it offers in a BI environment. One of the possibilities I described was the use of snapshot replication as an ETL extraction tool:

    "Snapshot Replication can also be useful in BI environments. If you don't need a near real-time copy of the database, you can choose to use this form of replication. Next to being an alternative to Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that serves as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records and it can even apply schema changes automatically, so I think it offers enough features here. I don't know how the performance will be and if it really works as well for this purpose as I expect, but I want to try this out soon!"

    Well, I have tried it out and I must say it worked well. I was able to let Replication Services do the work in a fraction of the time it would cost me to do the same in SSIS. What I did was the following:

        - Configure snapshot replication for some Adventure Works tables; this was quite simple and straightforward.
        - Create an SSIS package that executes the snapshot replication on demand and waits for its completion. This is something that you can't do with out-of-the-box functionality.

    While configuring the snapshot replication, two SQL Agent jobs are created: one for the creation of the snapshot and one for the distribution of the snapshot. Unfortunately these jobs are asynchronous, which means that if you execute them they immediately report back whether the job started successfully or not; they do not wait for completion and report the result afterwards. So I had to create an SSIS package that executes the jobs and waits for their completion before the rest of the ETL process continues. Fortunately I was able to create the SSIS package with the desired functionality. I have made a step-by-step guide that will help you configure the snapshot replication, and I have uploaded the SSIS package you need to execute it.

    Configure snapshot replication

    The first step is to create a publication on the database you want to replicate:

        - Connect to SQL Server Management Studio, right-click Replication and choose New > Publication... The New Publication Wizard appears; click Next.
        - Choose your "source" database and click Next.
        - Choose Snapshot publication and click Next.
        - You can now select tables and other objects that you want to publish. Expand Tables and select the tables that are needed in your ETL process.
        - In the next screen you can add filters on the selected tables, which can be very useful; think about selecting only the last x days of data, for example. It's possible to filter out rows and/or columns. In this example I did not apply any filters.
        - Schedule the Snapshot Agent to run at a desired time. By doing this a SQL Agent job is created, which we need to execute from an SSIS package later on.
        - Next you need to set the security settings for the Snapshot Agent. Click on the Security Settings button. In this example I ran the agent under the SQL Server Agent service account. This is not recommended as a security best practice. Fortunately there is an excellent article on TechNet which tells you exactly how to set up the security for Replication Services. Read it here and make sure you follow the guidelines!
        - On the next screen choose to create the publication at the end of the wizard. Give the publication a name (SnapshotTest) and complete the wizard. The publication is created and the articles (tables in this case) are added.

    Now the publication is created successfully, it's time to create a new subscription for this publication:

        - Expand the Replication folder in SSMS, right-click Local Subscriptions and choose New Subscriptions. The New Subscription Wizard appears.
        - Select the publisher on which you just created your publication, and select the database and publication (SnapshotTest).
        - You can now choose where the Distribution Agent should run. If it runs at the distributor (push subscriptions), it causes extra processing overhead. If you use a separate server for your ETL process and databases, choose to run each agent at its subscriber (pull subscriptions) to reduce the processing overhead at the distributor.
        - Of course we need a database for the subscription, and fortunately the wizard can create it for you. Choose New database, give the database the desired name, set the desired options and click OK.
        - You can now add multiple SQL Server subscribers, which is not necessary in this case but can be very useful.
        - You now need to set the security settings for the Distribution Agent. Click on the ... button. Again, in this example I ran the agent under the SQL Server Agent service account. Read the security best practices here. Click Next.
        - Make sure you create a synchronization job schedule again. This job is also needed in the SSIS package later on. Initialize the subscription at first synchronization.
        - Select the first box to create the subscription when finishing this wizard, and complete the wizard by clicking Finish. The subscription will be created.

    In SSMS you see a new database is created: the subscriber. There are no tables or other objects available in the database yet, because the replication jobs have not run yet. Now expand the SQL Server Agent, go to Jobs and search for the job that creates the snapshot; rename this job to "CreateSnapshot". Then search for the job that distributes the snapshot and rename it to "DistributeSnapshot".

    Create an SSIS package that executes the snapshot replication

    We now need an SSIS package that will take care of the execution of both jobs. The CreateSnapshot job needs to execute and finish before the DistributeSnapshot job runs. After the DistributeSnapshot job has started, the package needs to wait until it has finished before the package execution completes. The Execute SQL Server Agent Job Task is designed to execute SQL Agent jobs from SSIS. Unfortunately this SSIS task only executes the job and reports back whether the job started successfully or not; it does not report whether the job actually completed with success or failure. This is because these jobs are asynchronous. The SSIS package I've created does the following:

        - It runs the CreateSnapshot job.
        - It checks every 5 seconds, with a for loop, whether the job is completed.
        - When the CreateSnapshot job is completed, it starts the DistributeSnapshot job.
        - And again it waits until the snapshot is delivered before the package finishes successfully.

    Quite simple, and the package is ready to use as a standalone extract mechanism. After executing the package, the replicated tables are added to the subscriber database and are filled with data.

    Download the SSIS package here (SSIS 2008)

    Conclusion

    In this example I only replicated 5 tables; I could create an SSIS package that does the same in approximately the same amount of time. But if I replicated all the 70+ AdventureWorks tables, I would save a lot of time and boring work! With Replication Services you also benefit from the feature that schema changes are applied automatically, which means your entire extract phase won't break. Because a snapshot is created using the bcp utility (bulk copy), it's also quite fast, so the performance will be quite good. A disadvantage of using snapshot replication as an extraction tool is the limitation on source systems: you can only choose SQL Server or Oracle databases to act as a publisher. So if you plan to build an extract phase for your ETL process that will involve a lot of tables, think about Replication Services; it could save you a lot of time, and thanks to the Extract SSIS package I've created you can fit it perfectly into your usual SSIS ETL process.
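    The poll-until-finished logic can also be expressed in plain T-SQL against msdb, which is essentially what the package's for loop does. A minimal sketch (assumes rights to run the msdb job procedures):

        -- start the snapshot job, then poll its execution status
        EXEC msdb.dbo.sp_start_job @job_name = N'CreateSnapshot';

        -- sp_help_job returns current_execution_status:
        -- 1 = executing, 4 = idle (i.e. finished); poll until it leaves 1
        EXEC msdb.dbo.sp_help_job @job_name = N'CreateSnapshot', @job_aspect = N'JOB';

        WAITFOR DELAY '00:00:05';   -- the package re-checks every 5 seconds

    In SSIS the same check typically lives in an Execute SQL Task inside a For Loop container, with the loop condition driven by the returned status value.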

    Read the article

  • Using TextboxList events and callbacks

    - by Wraith
    Has anyone gotten callbacks working with Guillermo Rauch's TextboxList Autocomplete? I've tried multiple ways to bind and multiple events (e.g. hover) - nothing seems to register.

        $('#entSearch').textboxlist({
            unique: true,
            plugins: {
                autocomplete: {
                    minLength: 3,
                    queryRemote: true,
                    placeholder: false,
                    remote: {
                        url: "{{=URL(r=request, f='call/json/suggest')}}",
                        extraParams: {type: "", guid: ""}
                    }
                }
            },
            onHover: function(token) { alert('hover 1'); }
        });

        $('#entSearch').hover(function() { alert('hover 2'); });

        $('#entSearch').bind('hover', function() { alert('hover 3'); });

    Read the article

  • Reconstructing Position in the Original Array from the Position in a Stripped Down Array

    - by aronchick
    I have a text file that contains a number of entries of the following form, repeated over and over:

        <ID>
        <Time 1> --> <Time 2>
        <Quote (potentially multiple lines)>
        <New Line Separator>

    I have a very simple regex for stripping these out into a constant block, so it's just:

        <Quote>
        <Quote>
        <Quote>

    What I'd like to do is present the quotes as a block to the user, have them select some text (using jQuery.fieldSelection), and then use the selected content to map back to the original array, so I can get timings and IDs. Because this has to go out to HTML, and the user has to be able to select the text on the screen, I can't do anything like hidden divs or hidden input fields. The only data I will have is the character range selected on screen. To be specific, this is what it looks like:

        1
        0:00 --> 0:05
        He was bored. So bored. His great intellect, seemingly inexhaustible, was hungry for new challenges but he was the last of the great innovators

        2
        0:05 --> 0:10
        - society's problems had all been solved.

        3
        0:11 --> 0:20
        All seemingly unconnected disciplines had long since been found to be related in horrifically elusive and contrived ways and he had mastered them all.

    And this is what I'd like to present to the user for selection:

        He was bored. So bored. His great intellect, seemingly inexhaustible, was hungry for new challenges but he was the last of the great innovators - society's problems had all been solved. All seemingly unconnected disciplines had long since been found to be related in horrifically elusive and contrived ways and he had mastered them all.

    Has anyone come across something like this before? Any ideas on how to take the selected text, or selection position, and go backwards to the original meta-data?
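    One approach is to record each quote's character span while building the displayed block, then map the selection offset back to its entry. A minimal JavaScript sketch (entries is assumed to be the already-parsed list of {id, time1, time2, quote} objects):

        // Build the display text and remember where each quote starts and ends.
        var offsets = [], parts = [], pos = 0;
        entries.forEach(function (e) {
            parts.push(e.quote);
            offsets.push({ from: pos, to: pos + e.quote.length, entry: e });
            pos += e.quote.length + 1;          // +1 for the joining space
        });
        var displayed = parts.join(" ");

        // Map a selection offset (from jQuery.fieldSelection) to its entry.
        function entryAt(charIndex) {
            for (var i = 0; i < offsets.length; i++)
                if (charIndex >= offsets[i].from && charIndex < offsets[i].to)
                    return offsets[i].entry;
            return null;
        }

    A selection spanning several quotes maps its start and end offsets to the first and last entries, giving the full ID and timing range.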

    Read the article

  • CUDA compare arrays

    - by user315511
    Hello. I'm trying to make an app that will compare one reference bitmap against multiple other bitmaps. The result of each comparison should be a new bitmap containing the diffs. Maybe the bitmaps should be compared as textures rather than arrays? My biggest problem is making the kernel accept more than one input pointer, and figuring out how to compare the data. This works:

        extern "C" __global__ void compare(float *odata, float *idata, int width, int height)

    and the following does not (I call the function with enough parameters):

        extern "C" __global__ void compare(float *odata, float *idata, float *idata2, int width, int height)
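    A kernel can take any number of pointer parameters; what usually breaks is the host-side argument setup, which must match the signature exactly (on the driver API, an offset and size per argument; with the runtime API's <<<...>>> launch they are passed automatically). A minimal sketch of a two-input diff kernel:

        // Per-pixel absolute difference of two images; 0 where they match.
        extern "C" __global__ void compare(float *odata, const float *idata,
                                           const float *idata2, int width, int height)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x < width && y < height) {
                int i = y * width + x;
                odata[i] = fabsf(idata[i] - idata2[i]);
            }
        }

    Textures mainly help when the access pattern is non-coalesced or needs filtering; for a straight 1:1 pixel comparison, plain global-memory reads like the above are fine.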

    Read the article

  • A PHP design pattern for the model part [PHP Zend Framework]

    - by Matthieu
    I have a PHP MVC application using Zend Framework. As presented in the quickstart, I use 3 layers for the model part:

        - Model (business logic)
        - Data mapper
        - Table data gateway (or data access object, i.e. one class per SQL table)

    The model is UML-designed and totally independent of the DB. My problem is: I can't have multiple instances of the same "instance/record". For example: if I get the user "Chuck Norris" with id=5, this will create a new model instance whose members will be filled by the data mapper (the data mapper queries the table data gateway, which queries the DB). Then, if I change the name to "Duck Norras", don't save it in the DB right away, and re-load the same user into another variable, I have "synchronisation" problems... (different instances for the same "record") Right now, I use the Multiton pattern: like Singleton, but with multiple instances indexed by a key (which is the user ID in our example). But this is complicating my development a lot, and my testing too. How do I do it right?
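    What's described here is usually solved with the Identity Map pattern (Fowler, PoEAA): the mapper, not the model, keeps one instance per record, so the lookup logic stays out of the domain classes. A minimal sketch (class and method names are illustrative):

        <?php
        class UserMapper
        {
            private $identityMap = array();
            private $gateway;   // table data gateway instance

            public function find($id)
            {
                // Same id => same object, for the lifetime of this mapper.
                if (isset($this->identityMap[$id])) {
                    return $this->identityMap[$id];
                }
                $row  = $this->gateway->find($id);
                $user = $this->createUserFromRow($row);
                $this->identityMap[$id] = $user;
                return $user;
            }
        }

    Unlike a Multiton, the map lives in the mapper instance rather than in static state, which keeps tests isolated: each test builds its own mapper and gets an empty map.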

    Read the article

  • Best Fit Scheduling Algorithm

    - by Teegijee
    I'm writing a scheduling program and have run into a difficult programming problem. There are several events, each with multiple meeting times. I need to find an arrangement of meeting times such that each schedule contains any given event exactly once, using one of each event's multiple meeting times. Obviously I could use brute force, but that's rarely the best solution. I'm guessing this is a relatively basic computer science problem, which I'll learn about once I am able to start taking computer science classes. In the meantime, I'd prefer any links where I could read up on this, or even just a name I could Google.
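    Under the usual reading - pick one time per event so that no two picks conflict - this is a constraint-satisfaction problem, and backtracking search prunes most of the brute-force space. A minimal Python sketch (assumes meeting times are half-open (start, end) intervals and "conflict" means overlap):

        def overlaps(a, b):
            return a[0] < b[1] and b[0] < a[1]

        def schedule(events, chosen=()):
            """events: one list per event of its possible (start, end) times."""
            if len(chosen) == len(events):
                return list(chosen)               # one time chosen per event
            for slot in events[len(chosen)]:      # try each time of the next event
                if all(not overlaps(slot, c) for c in chosen):
                    result = schedule(events, chosen + (slot,))
                    if result is not None:
                        return result
            return None                           # dead end: backtrack

    Good search terms are "constraint satisfaction", "backtracking" and "interval scheduling"; with a suitable encoding it can also be framed as exact cover.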

    Read the article
