Search Results

Search found 17990 results on 720 pages for 'virtualization option'.

Page 163/720

  • MFC Combo-Box Control is not showing the full list of items when I click the drop-down menu...

    - by shan23
    I'm coding an app in MSVS 2008, which has a ComboBox control that I initialize through code as below: static char* OptionString[4] = {"Opt1", "Opt2", "Opt3", "Opt4"}; BOOL CMyAppDlg::OnInitDialog() { CDialog::OnInitDialog(); // Set the icon for this dialog. The framework does this automatically // when the application's main window is not a dialog SetIcon(m_hIcon, TRUE); // Set big icon SetIcon(m_hIcon, FALSE); // Set small icon // TODO: Add extra initialization here m_Option.AddString(OptionString[0]); m_Option.AddString(OptionString[1]); m_Option.AddString(OptionString[2]); m_Option.AddString(OptionString[3]); m_Option.SetCurSel(0); return TRUE; // return TRUE unless you set the focus to a control } Now, when I build the app and click the down-arrow, the drop-down box shows the first option ONLY (since I've selected that through my code). But if I press the down-arrow key on the keyboard, it cycles through the options in the order I've inserted them, yet it never shows more than one option in the box. So if a user wants to select option 3, he has to cycle through options 1 and 2! Once I select any option using the keyboard the appropriate event handlers are fired, but I'm understandably miffed by this behaviour. I'm listing the properties of the combo-box control as well - only the properties that are true (the rest are set to false): Type - Dropdown, Vertical Scrollbar, Visible, Tabstop. This has bugged me for weeks now. Can anyone please enlighten me?
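
    The usual culprit is the drop-down rectangle set in the dialog resource editor: with the default size the open list only has room for one row, so clicking the arrow shows a single item. Dragging the combo's drop-arrow selection taller in the editor fixes it; a hedged code alternative (assuming ComCtl32 v6 on XP or later, so the CB_SETMINVISIBLE message is available) is sketched below.

        // Hedged sketch: ask the open list to show at least as many rows as
        // were added, right after the strings are inserted.
        m_Option.AddString(OptionString[3]);
        m_Option.SetMinVisibleItems(4);                  // wraps CB_SETMINVISIBLE
        // or, without the MFC wrapper:
        // m_Option.SendMessage(CB_SETMINVISIBLE, 4, 0);
        m_Option.SetCurSel(0);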

    Read the article

  • What is the best anti-crack scheme for your trial or subscription software?

    - by gmatt
    Writing code takes time and effort, and just like any other human being we need to make an income to live (save for the few who are actually self-sustainable). Here are three general schemes for making a living: independent developers can offer a trial-then-purchase scheme. An alternative is an open source base application with paid extensions. A last (probably least popular with customers) scheme is to enforce some kind of subscription, where the price of the software pales in comparison to the long-term subscription fees. So my question is a hypothetical one. Suppose that you invest thousands of hours into developing an application. Now suppose you can choose any one of the three options to make a living off this application--or any other option you want--and suppose you have a very real fear of losing 80% of your revenue to a cracked version if one can be made. To be clear, this application does not require the internet to perform all its useful functions; that is, your application is a prime candidate to be a cracked release on some website. Which option would you feel most comfortable with for defending yourself against this possible situation, and briefly, why would that option be the best?

    Read the article

  • Best practice - When to evaluate conditionals of function execution

    - by Tesserex
    If I have a function called from a few places, and it requires some condition to be met for anything it does to execute, where should that condition be checked? In my case, it's for drawing - if the mouse button is held down, then execute the drawing logic (this is being done in the mouse movement handler for when you drag.) Option one says put it in the function so that it's guaranteed to be checked. Abstracted, if you will. public function Foo() { DoThing(); } private function DoThing() { if (!condition) return; // do stuff } The problem I have with this is that when reading the code of Foo, which may be far away from DoThing, it looks like a bug. The first thought is that the condition isn't being checked. Option two, then, is to check before calling. public function Foo() { if (condition) DoThing(); } This reads better, but now you have to worry about checking from everywhere you call it. Option three is to rename the function to be more descriptive. public function Foo() { DoThingOnlyIfCondition(); } private function DoThingOnlyIfCondition() { if (!condition) return; // do stuff } Is this the "correct" solution? Or is this going a bit too far? I feel like if everything were like this function names would start to duplicate their code. About this being subjective: of course it is, and there may not be a right answer, but I think it's still perfectly at home here. Getting advice from better programmers than I is the second best way to learn. Subjective questions are exactly the kind of thing Google can't answer.

    Read the article

  • How to properly close a UDT server in Netty 4

    - by Steffen
    I'm trying to close my UDT server (Netty 4.0.5.Final) with shutDownGracefully() and reopen it on the same port. Unfortunately, I always get the socket exception below although it waits until the future has completed. I also added the socket option SO_REUSEADDR. What is the proper way to do this? Exception in thread "main" com.barchart.udt.ExceptionUDT: UDT Error : 5011 : another socket is already listening on the same UDP port : listen0:listen [id: 0x323d3939] at com.barchart.udt.SocketUDT.listen0(Native Method) at com.barchart.udt.SocketUDT.listen(SocketUDT.java:1136) at com.barchart.udt.net.NetServerSocketUDT.bind(NetServerSocketUDT.java:66) at io.netty.channel.udt.nio.NioUdtAcceptorChannel.doBind(NioUdtAcceptorChannel.java:71) at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:471) at io.netty.channel.DefaultChannelPipeline$HeadHandler.bind(DefaultChannelPipeline.java:1006) at io.netty.channel.DefaultChannelHandlerContext.invokeBind(DefaultChannelHandlerContext.java:504) at io.netty.channel.DefaultChannelHandlerContext.bind(DefaultChannelHandlerContext.java:487) at io.netty.channel.ChannelDuplexHandler.bind(ChannelDuplexHandler.java:38) at io.netty.handler.logging.LoggingHandler.bind(LoggingHandler.java:254) at io.netty.channel.DefaultChannelHandlerContext.invokeBind(DefaultChannelHandlerContext.java:504) at io.netty.channel.DefaultChannelHandlerContext.bind(DefaultChannelHandlerContext.java:487) at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:848) at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:193) at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:321) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:354) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:366) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:101) at java.lang.Thread.run(Thread.java:724) A small test program demonstrating the problem: public class MsgEchoServer { public static class MsgEchoServerHandler extends ChannelInboundHandlerAdapter { } public void run() throws Exception { final ThreadFactory acceptFactory = new UtilThreadFactory("accept"); final ThreadFactory connectFactory = new UtilThreadFactory("connect"); final NioEventLoopGroup acceptGroup = new NioEventLoopGroup(1, acceptFactory, NioUdtProvider.MESSAGE_PROVIDER); final NioEventLoopGroup connectGroup = new NioEventLoopGroup(1, connectFactory, NioUdtProvider.MESSAGE_PROVIDER); try { final ServerBootstrap boot = new ServerBootstrap(); boot.group(acceptGroup, connectGroup) .channelFactory(NioUdtProvider.MESSAGE_ACCEPTOR) .option(ChannelOption.SO_BACKLOG, 10) .option(ChannelOption.SO_REUSEADDR, true) .handler(new LoggingHandler(LogLevel.INFO)) .childHandler(new ChannelInitializer<UdtChannel>() { @Override public void initChannel(final UdtChannel ch) throws Exception { ch.pipeline().addLast(new MsgEchoServerHandler()); } }); final ChannelFuture future = boot.bind(1234).sync(); } finally { acceptGroup.shutdownGracefully().syncUninterruptibly(); connectGroup.shutdownGracefully().syncUninterruptibly(); } new MsgEchoServer().run(); } public static void main(final String[] args) throws Exception { new MsgEchoServer().run(); } }
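
    One thing the test program never does is close the channel returned by bind(); shutting the event-loop groups down does not by itself guarantee the UDT listening socket is released before the recursive run() tries to bind again. A minimal sketch of the sequence with an explicit close (an assumption about the cause, not a confirmed fix):

        // Hedged sketch: close the bound server channel and wait for the close
        // to finish before shutting the groups down and binding the port again.
        final ChannelFuture future = boot.bind(1234).sync();
        // ... serve traffic ...
        future.channel().close().sync();
        acceptGroup.shutdownGracefully().syncUninterruptibly();
        connectGroup.shutdownGracefully().syncUninterruptibly();
        // only after this point is a fresh bootstrap bound to port 1234 again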

    Read the article

  • Is it possible to virtualize war file execution without separate J2EE container deployments?

    - by Smith
    Let's say I want to allow my developers to upload their war files to a web app (not the application server itself) running on our intranet and that web app would then run those wars as if they were separate apps deployed individually in our J2EE container. In other words, we are not actually deploying the wars as separate apps in the container - they are simply running side-by-side inside this one web app that acts like a J2EE container. Is that possible? Something like a war virtualization app?

    Read the article

  • How to create refresh statements for TableAdapter objects in Visual Studio?

    - by Mark Wilkins
    I am working on developing an ADO.NET data provider and an associated DDEX provider. I am unable to convince the Visual Studio TableAdapter Configuration Wizard to generate SQL statements to refresh the data table after inserts and updates. It generates the insert and delete statements but will not produce the select statements to do the refresh. The functionality referred to can be accessed by dropping a table from the Server Explorer (inside Visual Studio) onto a DataSet (e.g., DataSet1.xsd). It creates a TableAdapter object and configures SELECT, UPDATE, DELETE, and INSERT statements. If you right-click on the TableAdapter object, the context menu has a “Configure” option that starts the “TableAdapter Configuration Wizard”. The first dialog of that wizard has an Advanced Options button, which leads to an option titled “Refresh the data table”. When used with SQL Server tables, that option causes a statement of the form “select field1, field2, …” to be added on to the end of the commands for the TableAdapter’s InsertCommand and UpdateCommand. Do you have any idea what type of property or interface might need to be exposed from the DDEX provider (or maybe the ADO.NET data provider) in order to make Visual Studio add those refresh statements to the update/insert commands? The MSDN documentation for the Advanced SQL Generation Options Dialog Box has a note stating, “Refreshing the data table is only supported on databases that support batching of SQL statements.” This seems to imply that a .NET data provider might need to expose some property indicating such behavior is supported. But I cannot find it. Any ideas?

    Read the article

  • How to reuse results with a schema for end-of-day stock data

    - by Vishalrix
    I am creating a database schema to be used for technical analysis like top-volume gainers, top-price gainers, etc. I have checked answers to questions here, like the design question. Having taken the hint from boe100's answer there, I have a schema modeled pretty much on it, thus: Symbol - char 6 //primary Date - date //primary Open - decimal 18, 4 High - decimal 18, 4 Low - decimal 18, 4 Close - decimal 18, 4 Volume - int Right now this table containing End Of Day (EOD) data will be about 3 million rows for 3 years. Later, when I get/need more data, it could be 20 million rows. The front end will be making requests like "give me the top price gainers on date X over Y days". That request is one of the simpler ones, and as such is not too costly time-wise, I assume. But a request like "give me the top volume gainers for the last 10 days, with the previous 100 days acting as baseline" could prove 10-100 times costlier. The result of such a request would be a float which signifies how many times the volume has grown, etc. One option I have is adding a column for each such result. And if the user asks for volume gain in 10 days over 20 days, that would require another table. The total number of such tables could easily cross 100, especially if I start using other results as tables, like MACD-10 and MACD-100, each of which would require its own column. Is this a feasible solution? Another option is to keep the results in cached HTML files and present them to the user. I don't have much experience in web development, so to me it looks messy; but I could be wrong (of course!). Is that an option too? Let me add that I am/will be using mod_perl to present the response to the user, with much of the work on the MySQL database being done using Perl. I would like to have a response time of 1-2 seconds.
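
    For the baseline-style request, computing the ratio on the fly is usually feasible with a single grouped scan over the date window rather than one precomputed column per combination. A rough sketch (MySQL; the table name eod and the literal dates are placeholders):

        -- Hedged sketch: average volume over the last 10 days divided by the
        -- average over the 100 days before that, per symbol.
        SELECT symbol,
               AVG(CASE WHEN `date` >  '2010-03-21' THEN volume END) /
               AVG(CASE WHEN `date` <= '2010-03-21' THEN volume END) AS volume_gain
        FROM eod
        WHERE `date` > '2009-12-11'     -- 110-day window ending on the query date
        GROUP BY symbol
        ORDER BY volume_gain DESC
        LIMIT 20;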

    Read the article

  • PHP: Retrieving JSON via jQuery ajax help

    - by iamjonesy
    Hey, I have a script that is creating and echoing a JSON encoded array of Magento products. I have a script that calls this script using jQuery's ajax function, but I'm not getting a proper response. This is the script that creates the array: $collection = Mage::getModel('catalog/product')->getCollection(); $collection->addAttributeToSelect('name'); $collection->addAttributeToSelect('price'); $products = array(); foreach ($collection as $product){ $products[$product->getPrice()] = $product->getName(); } header('Content-Type: application/x-json; charset=utf-8'); echo(json_encode($products)); Here is my jQuery: <select id="products"> <option value="#">Select</option> </select> <script type="text/javascript"> jQuery.noConflict(); jQuery(document).ready(function(){ jQuery.ajax({ type: "GET", url: "http://localhost.com/magento/modules/products/get.php", dataType: "json", success: function(products) { jQuery.each(products,function(price,name) { var opt = jQuery('<option />'); opt.val(name); opt.text(price); jQuery('#products').append(opt); }); } }); }); </script> I'm getting a response from this but I'm not seeing any JSON. I'm using Firebug. I can see there has been a JSON encoded response, but the response tab is empty and my select box has no options. Can anyone see any problems with my code? Here is the response I should get: {"82.9230":"Dummy","177.0098":"Dummy 2","76.0208":"Dummy 3","470.6054":"Dummy 4","357.0083":"Dummy Product 5"} Thanks, Billy
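
    Two things worth ruling out (assumptions, since the listing does not show where the calling page is served from): the non-standard application/x-json content type, and the absolute http://localhost.com/... URL, which runs into the same-origin policy if the page itself is not served from that host. A minimal sketch of the last lines of the producing script:

        // Hedged sketch: serve the standard JSON MIME type so jQuery's
        // dataType:"json" handling (and Firebug's JSON view) recognize it.
        header('Content-Type: application/json; charset=utf-8');
        echo json_encode($products);

    In the jQuery call, a relative url such as "/magento/modules/products/get.php" keeps the request on the same origin.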

    Read the article

  • Data Annotations on ViewModels or Domain Objects

    - by Ahmad
    Where would data annotations be more suitable: ViewModels, Domain Objects, or both? I am struggling to decide where they are better suited. I have not yet fully utilized them, but this question came to mind. From most of the examples I have seen, they are generally placed on models, simply using the required attributes for validation with ModelState.IsValid. I have also seen another question on SO arguing that data annotations alone are not sufficient. Option 1 - I will still need to validate again in my service layer. (I think that my service layer should be complete, and this includes validation, since it's planned to be used elsewhere.) Option 2 - How will I then get the benefits of the built-in validation on both the client and server side? Option 3 - There will be repetition of validation logic; however, I was wondering if one could use a MetaData class approach that works for both ViewModels and Domain Objects. (This is completely off the top of my head, so it may be nonsensical.) I wonder if this question even makes sense. If not, can someone please help me understand this better? Have I completely misunderstood the use of data annotations?
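
    For option 3, a rough sketch of the buddy-class idea (names are illustrative; this assumes the ASP.NET MVC data-annotations providers, which honour [MetadataType] on classes whose properties match):

        // Hedged sketch: one annotation set, attached to both the domain
        // object and the view model via MetadataTypeAttribute.
        using System.ComponentModel.DataAnnotations;

        public class CustomerValidation
        {
            [Required, StringLength(50)]
            public string Name { get; set; }
        }

        [MetadataType(typeof(CustomerValidation))]
        public partial class Customer { }          // domain object (other half declares Name)

        [MetadataType(typeof(CustomerValidation))]
        public class CustomerViewModel
        {
            public string Name { get; set; }
        }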

    Read the article

  • Common one-to-many table for multiple entities

    - by Ben V
    Suppose I have two tables, Customer and Vendor. I want to have a common address table for customer and vendor addresses. Customers and Vendors can both have one to many addresses. Option 1 Add columns for the AddressID to the Customer and Vendor tables. This just doesn't seem like a clean solution to me. Customer Vendor Address -------- --------- --------- CustomerID VendorID AddressID AddressID1 AddressID1 Street AddressID2 AddressID2 City... Option 2 Move the foreign key to the Address table. For a Customer, Address.CustomerID will be populated. For a Vendor, Address.VendorID will be populated. I don't like this either - I shouldn't need to modify the address table every time I want to use it for another entity. Customer Vendor Address -------- --------- --------- CustomerID VendorID AddressID CustomerID VendorID Option 3 I've also seen this - only 1 foreign key column on the Address table with another column to identify which foreign key table the address belongs to. I don't like this one because it requires all the foreign key tables to have the same type of ID. It also seems messy once you start coding against it. Customer Vendor Address -------- --------- --------- CustomerID VendorID AddressID FKTable FKID So, am I just too picky, or is there something I haven't thought of?
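
    For reference, option 2 as described comes out roughly like the following DDL (names and types are illustrative); the trade-off is that every new owning entity means another nullable foreign key column on Address:

        -- Hedged sketch of option 2: one nullable FK per owning entity,
        -- with exactly one of them populated per address row.
        CREATE TABLE Address (
            AddressID  INT PRIMARY KEY,
            CustomerID INT NULL REFERENCES Customer(CustomerID),
            VendorID   INT NULL REFERENCES Vendor(VendorID),
            Street     VARCHAR(100),
            City       VARCHAR(50)
        );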

    Read the article

  • TFS: Choose which Team Project to add a solution to.

    - by Patricker
    I have a solution which I developed in VS2008 and which I am trying to add to Source Control (TFS 2010, though the issue happened in TFS 2008 as well). I have several TFS workspaces on my computer and I have access to several Team Projects. When I right-click the solution in my Solution Explorer and choose the "Add Solution to Source Control" option, I am never given an option of choosing which Workspace or which Team Project to add the existing solution to. VS2008 then proceeds to add it to the same team project every time. I have tried selecting an alternate workspace/team project in every window where I can see an option for it, but it always adds it back to the same one. I even tried changing the name of my new workspace so that alphabetically it was the first, thinking that it might be somehow related to that... no luck. I then tried going to the Change Source Control window where you can add/remove bindings on a solution/project, but that window also defaults to the same Team Project as trying to add the solution directly does... Any help would be greatly appreciated; maybe I'm just missing something?

    Read the article

  • How can I prevent ListBox from selecting an item when I right-click it?

    - by chaiguy
    The tricky part is that each item has a ContextMenu that I still want to open when it is right-clicked (I just don't want it selecting it). In fact, if it makes things any easier, I don't want any automatic selection at all, so if there's some way I can disable it entirely that would be just fine. I'm thinking of just switching to an ItemsControl actually, so long as I can get virtualization and scrolling to work with it.
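
    One approach that usually fits here (a sketch, assuming the items are standard ListBoxItems in WPF) is to swallow the right-button-down event at the item level: selection happens on the down event, while the ContextMenu opens on the corresponding up event, so marking the down event handled stops the selection without losing the menu. The handler below would be attached through an EventSetter for PreviewMouseRightButtonDown in the ListBox's ItemContainerStyle.

        // Hedged sketch: blocks right-click selection but leaves the item's
        // ContextMenu (opened on mouse-up) working.
        private void Item_PreviewMouseRightButtonDown(object sender, System.Windows.Input.MouseButtonEventArgs e)
        {
            e.Handled = true;
        }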

    Read the article

  • Application Design: Single vs. Multiple Hits to the DB

    - by shyneman
    I'm building a service that performs a set of configured activities based on the type of request that it receives. Each activity involves going to the database and retrieving/updating some kind of information. The logic for each activity can be generalized and re-used across different request types. The activities may need to participate in a transaction for the duration of the servicing the request. One option, I'm considering is having each activity maintain its own access to DAL/database. This fully encapsulates the activity into a stand-alone re-usable piece, but hitting the database multiple times for one request doesn't seem like a viable option. I don't really know how to easily implement the concept of a transaction across the multiple activities here either. The second option is to encapsulate ALL the activities into one big activity and hit the database once. But this does not allow re-use and configuration of these activities for different requests. Does anyone have any suggestions and input about what should be the best way to approach my problem? Thanks for any help.
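
    On the transaction question, one common way to keep the first option (each activity owning its DAL access) while still committing or rolling back the whole request together is an ambient transaction. A sketch under the assumption this is .NET and the connections can enlist in System.Transactions (Request, IActivity and configuredActivities are hypothetical names):

        // Hedged sketch: every activity keeps its own database access, but all
        // of them run inside one ambient transaction per request.
        public void HandleRequest(Request request)
        {
            using (var scope = new System.Transactions.TransactionScope())
            {
                foreach (IActivity activity in configuredActivities)
                {
                    activity.Execute(request);   // each activity opens/uses its own connection
                }
                scope.Complete();                // commit only if every activity succeeded
            }
        }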

    Read the article

  • Assign Multiple Custom User Roles to a Custom Post Type

    - by OUHSD Webmaster
    Okay, here's the situation... I'm working on my business website. There will be a work/portfolio area. "Work" is a custom post type. "Designer" is a custom user role. "Client" is a custom user role. In creating a new "Work" post I would like to be able to select both a "Designer" and a "Client" to assign to the piece of work, as I would assign an author to a regular ol' post. I tried the method from this answer but it did not work for me. I placed it in my functions.php file: add_filter('wp_dropdown_users', 'test'); function test($output) { global $post; //Doing it only for the custom post type if($post->post_type == 'work') { $users = get_users(array('role'=>'designer')); //We're forming a new select with our values, you can add an option //with value 1, and text as 'admin' if you want the admin to be listed as well, //optionally you can use a simple string replace trick to insert your options, //if you don't want to override the defaults $output .= "<select id='post_author_override' name='post_author_override' class=''>"; foreach($users as $user) { $output .= "<option value='".$user->id."'>".$user->user_login."</option>"; } $output .= "</select>"; } return $output; } Any help would be extremely appreciated!

    Read the article

  • Inserting an element within jQuery Validation plugin's error template

    - by simshaun
    I'm utilizing the jQuery Validation plugin for my form. It lets you change the errorElement and wrap the errorElement using the wrapper option. But I want to insert an element within errorElement like this: <label class="error"><em></em>Error message goes here</label> Is there an easy way to accomplish inserting the em tag? I've tried prepending the em tag using the errorPlacement option (see below), but it seems the plugin is replacing the contents of errorElement afterwards. $.validator.setDefaults({ errorPlacement: function(error, element) { error.prepend('<em/>'); error.insertBefore(element); } }); I've also tried prepending the em tag using the showErrors option (see below). Again, it seems the plugin is replacing the contents of errorElement afterwards. $.validator.setDefaults({ showErrors: function(errorMap, errorList) { for (var i = 0; i < errorList.length; i++) { var error = errorList[i], $label = this.errorsFor(error.element), $element = $(error.element); if ($label.length && $label.find('em').length == 0) { $label.prepend('<em/>'); } } this.defaultShowErrors(); } }); I've also tried modifying the plugin so that when the error element is generated, the <em> tag is prepended. That works until I focus on a form element that has an error, after which the em tag is removed. (It's doing this because jQuery validation is constantly updating the contents of the error element as I focus and/or type in the field, therefore erasing my em tag added at error-element creation.)

    Read the article

  • Javascript Function for related select elements onSubmit

    - by Livingston
    I am trying to create four Select elements within a form element. Each select element has a different number of options. So if a user clicks one option on Select #1 and then clicks another option on Select #2, after hitting submit they will be taken to the www.blah.com/option1/option2 page, which will display the filtered results. Or they can choose an option from all 4 select menus and be taken to the option1/option2/option3/option4 page. The categories are all related. Select #1 is a category, Select #2 is a subcategory of #1, Select #3 is a subcategory of #2, and Select #4 is a subcategory of #3. A great example would be http://www.safavieh.com/rugs, except with only four Select elements. I would also like to add a "Reset" button next to "Submit." I know I need to construct a function in my header and use the onSubmit attribute within the form, but other than that I am unsure of what's involved, so I'm hoping someone could point me in the right direction. It's important I learn most of this for myself. Thanks for your time, Livingston
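
    A rough sketch of the kind of onSubmit handler involved (the /results/ base path and treating each select's first option as a "not chosen" placeholder are assumptions):

        // Hedged sketch: build /results/option1/option2/... from whichever
        // selects actually have a choice, then navigate instead of submitting.
        function buildFilterUrl(form) {
            var parts = [];
            for (var i = 0; i < form.elements.length; i++) {
                var el = form.elements[i];
                if (el.tagName === 'SELECT' && el.selectedIndex > 0) {
                    parts.push(el.options[el.selectedIndex].value);
                }
            }
            window.location.href = '/results/' + parts.join('/');
            return false;   // stop the normal form submission
        }
        // <form onsubmit="return buildFilterUrl(this);"> ... </form>
        // A plain <input type="reset" value="Reset"> next to Submit restores the defaults.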

    Read the article

  • How to generate unique numbers less than 8 characters long.

    - by loudiyimo
    Hi, I want to generate unique ids every time I call the method generateCustumerId(). The generated id must be 8 characters long or less. This requirement is necessary because I need to store the id in a data file, and the schema fixes it at 8 characters. Option 1 works fine. Instead of option 1, I want to use UUID. The problem is that UUID generates an id which has too many characters. Does someone know how to generate a unique id which is less than 99999999? Option 1: import java.util.HashSet; import java.util.Random; import java.util.Set; public class CustomerIdGenerator { private static Set<String> customerIds = new HashSet<String>(); private static Random random = new Random(); // XXX: replace with java.util.UUID public static String generateCustumerId() { String customerId = null; while (customerId == null || customerIds.contains(customerId)) { customerId = String.valueOf(random.nextInt(89999999) + 10000000); } customerIds.add(customerId); return customerId; } } Option 2 generates a unique id which is too long: public static String generateCustumerId() { String ownerId = UUID.randomUUID().toString(); System.out.println("ownerId " + ownerId); return ownerId; }
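
    A sketch of one more approach, assuming the ids only need to be unique within this generator's lifetime (or that the last value gets persisted somewhere outside the sketch): a plain counter stays inside the 8-digit schema for ninety million ids and never needs a collision check.

        // Hedged sketch: sequential 8-digit ids from an atomic counter;
        // the starting value 10000000 is illustrative.
        import java.util.concurrent.atomic.AtomicLong;

        public class CustomerIdGenerator {
            private static final AtomicLong counter = new AtomicLong(10000000L);

            public static String generateCustumerId() {
                long next = counter.getAndIncrement();
                if (next > 99999999L) {
                    throw new IllegalStateException("8-digit id space exhausted");
                }
                return String.valueOf(next);
            }
        }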

    Read the article

  • WinCE and PC USB communication

    - by sebeksd
    We are developing a device and we need to find a good solution for one piece of needed functionality. The thing is that we need to communicate between WinCE 6.0 (ARM) and Windows on a PC. The easiest way is of course a COM port, but in our case it is impossible (all serial ports are used on WinCE and we don't want to add one more). The second option is LAN, but for us it is not the best option for a few reasons. So there is a third option we could use: USB to USB communication, but how to do that? Of course WinCE is the USB Device and the PC is the USB Host, so all the hardware basics are met. We could use ActiveSync, but there are a few problems with it: - WinCE 6.0 is not working with WMDC (drivers on the device just crash after connecting the device to the PC) and I didn't find any solution for it, so in this case we need to use WinXP on the PC side (old ActiveSync) - we need to restrict communication with ActiveSync to only our application; no other non-authorized software should be allowed (as far as I know this is impossible to obtain). So probably the best way to do what we need is to communicate through USB like a standard COM (serial communication). The question is how this could be done: do we need to write a driver on WinCE and also a driver on Windows (PC), or is there a better solution? Maybe some driver for WinCE 6.0 that would emulate a virtual COM port on the PC side (and of course allow standard Read/Write to it on the WinCE side)? Could someone tell me if something like that exists?

    Read the article

  • Windows Mobile 6.0 network settings

    - by Gauls
    Hi, I am using Windows Mobile 6.0 Classic PDAs. I want to use wireless to access my service on the server (Win 2008); what should my settings be in the PDA network management? Here are my settings, which work for some PDAs, work for a while and then stop for others, and do not work at all for some PDAs with similar settings: "Programs that connect to the internet..." = ISP "Programs that connect to a private network..." = My Work Network BTW, what is this My Work Network? I don't understand; since my PDAs use wireless (with a proxy), I would guess ISP for both should work, right? But when I change the second option to ISP as well, the PDA that can connect to the internet and the server through PDA IE (but not through the .NET CF application) will not connect at all, yet it works fine (apart from the .NET CF application accessing the server) if I change the second option back to My Work Network. So basically the PDA is not using the first ISP option at all? All I want to know is: 1) the correct settings for WM 6.0 Classic wirelessly accessing Server 2008 with a proxy, and 2) an explanation of what My Work Network is. Thanks, gauls

    Read the article

  • Is DB logging more secure than file logging for my PHP web app?

    - by iama
    I would like to log error, informational, and warning messages from within my web application to a log. I was initially thinking of logging all of these to a text file. However, my PHP web app would need write access to the log file, and the folder housing it may also need write access if log rotation is desired - access my web app currently does not have. The alternative is to log the messages to the MySQL database, since my web app is already using MySQL for all its data storage needs. This got me thinking that the MySQL option is much better than the file option, since I already have a configuration file with the database access information protected using file system permissions. If I go with the log file option I need to tinker with the file and folder access permissions, and this will only make my application less secure, defeating the whole purpose of logging. Is this correct? I am using XAMPP for development and am a newbie to LAMP. Please let me know your recommendations for logging. Thanks.
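
    If the database route wins out, a log write is a single prepared insert that reuses the credentials the app already has; a minimal sketch (the app_log table and the $pdo connection handle are assumptions):

        // Hedged sketch: no log file or folder needs to be made writable,
        // because the existing database connection does the writing.
        $stmt = $pdo->prepare(
            'INSERT INTO app_log (level, message, created_at) VALUES (?, ?, NOW())'
        );
        $stmt->execute(array('warning', 'disk quota almost reached'));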

    Read the article

  • PHP/Javascript: Need help removing line break from code

    - by Josh K
    I am trying to get names from a .txt file. I am using file('filename.txt'), but if I use a PHP for loop to try and trim the names, the output comes out on multiple lines. This is the actual line that prints the JavaScript (NOTE: I am going to be putting these names into a select box; if you can fix the problem at hand the rest of the code should work, so unless you see a big problem with it you don't need to comment): <?php for($i=1; $i < 27; $i++){ echo "selbox.options[selbox.options.length] = new Option('".lines[$i]."','".$lines[$i]."');\n"; } ?> Here's how it comes out in the browser: selbox.options[selbox.options.length] = new Option('Djamal ABDOUN ','First1 Last1 '); selbox.options[selbox.options.length] = new Option('Chadli AMRI ','First2 Last2 '); I need those to be on one line so I don't get an unterminated string literal JS error. Any ideas on what I can do here? EDIT: Oh, I should probably mention the $lines var is initialized like this: $lines = file('filename.txt'); and inside that file I have it formatted like this: First1 Last1 First2 Last2 First3 Last3 etc. I hit delete after each last name until the next first name is touching, then hit enter to put it on a new line (editor is Notepad++).
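
    The newline that file() keeps on every element is the likely cause of the extra line breaks; a sketch of the same loop with it stripped (keeping the 1..26 indexing from the original):

        <?php
        // Hedged sketch: FILE_IGNORE_NEW_LINES (or trim() per element) removes
        // the trailing newline that pushes the generated JavaScript onto
        // multiple lines; the $ missing from lines[$i] is also restored here.
        $lines = file('filename.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
        for ($i = 1; $i < 27; $i++) {
            $name = trim($lines[$i]);
            echo "selbox.options[selbox.options.length] = new Option('" . $name . "','" . $name . "');\n";
        }
        ?>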

    Read the article

  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (avg row length ~ 100 bytes) from an Oracle 10G database table into SQL Server (over a WAN/VLAN with 6 MBit/s capacity) on a regular basis. So far, these are the options that I have tried, with a quick summary. Has anyone tried this before? Are there other, better options? Which option would be the best in terms of performance and reliability? The time taken has been calculated using tests on smaller amounts of data and then extrapolating to estimate the time required. Using the data import wizard on the SQL Server or SSIS packages to import the data: it will take around 150 hours to complete the task. Using an Oracle batch job to spool data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file: the issue here is the size of the flat file, which is expected to run into GBs. Although this option is drastically different, I am even considering using a Linked Server to query the Oracle data directly at run-time to avoid bringing in data; performance is a big problem and I have limited control over the Oracle database in terms of creating table indexes. Regards, Uniball

    Read the article
