Search Results

Search found 59278 results on 2372 pages for 'time estimation'.


  • How to "bubble" a control's features when placed in a custom UserControl

    - by Ted
    I'll try to explain what I'm after; I don't know the technical term for it, so here goes. Example 1: if I place a ListView on a Form and add some columns, I am able, at design time, to click and drag the column headers to resize them. Example 2: now I place a ListView in a UserControl and name it "MyCustomListView" (and perhaps add some methods to enhance it somehow). If I now place "MyCustomListView" on a Form, I am unable to click and drag the column headers to resize them at design time. Is there any easy way to make that happen? Some way to pass the click-and-drag event to the underlying control and let that control do its magic. I'm not really looking to recode anything, just to pass on the mouse click (or whatever it is) and let the ListView react as it did in the first example above. Regards
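
    A minimal sketch of the usual design-time fix for this, assuming the UserControl exposes its nested ListView through a property and ships a custom designer (MyCustomListView, InnerList and MyCustomListViewDesigner are illustrative names, and a reference to System.Design is required):

        using System.ComponentModel;
        using System.Windows.Forms;
        using System.Windows.Forms.Design;   // from the System.Design assembly

        [Designer(typeof(MyCustomListViewDesigner))]
        public class MyCustomListView : UserControl
        {
            private readonly ListView inner = new ListView { Dock = DockStyle.Fill, View = View.Details };

            public MyCustomListView()
            {
                Controls.Add(inner);
            }

            // Expose the nested ListView so the designer (and the property grid) can reach it.
            [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
            public ListView InnerList
            {
                get { return inner; }
            }
        }

        public class MyCustomListViewDesigner : ControlDesigner
        {
            public override void Initialize(IComponent component)
            {
                base.Initialize(component);
                // Turn design-time behaviour (column drag-resizing etc.) back on for the nested control.
                EnableDesignMode(((MyCustomListView)component).InnerList, "InnerList");
            }
        }

    Treat this as a starting point rather than a drop-in fix; the point is that EnableDesignMode is what re-enables the nested control's own designer behaviour.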

    Read the article

  • php imap_search SINCE date problem

    - by Daniel
    Hey, I'm trying to show only the emails received in the last 5 minutes, for example:

        $since = date('d-M-Y G\:i', time() - 300); // actual time - 5 min
        $mb = imap_search($m_mail, 'UNSEEN SINCE ' . $since);

    Is there an easy way to do this? I did find one way: take all the unseen mails for the current day (e.g. 9 May 2010), loop over them and, for each one, check whether it was sent in the last X minutes and echo it (using imap_headerinfo()->udate). BUT when I looked at gmail.com one email was received at 17:08, while on my server it appears the email was received at 14:08. Dan
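
    A sketch of the "fetch the day, then filter" approach described above, comparing the server timestamp in Unix time so the 3-hour offset between Gmail's display and the server's local zone drops out (variable names follow the question):

        <?php
        // SINCE only has day granularity, so search today's unseen mail first...
        $since  = date('j-M-Y');                                  // e.g. "9-May-2010"
        $msgnos = imap_search($m_mail, 'UNSEEN SINCE "' . $since . '"');
        $cutoff = time() - 300;                                   // 5 minutes ago

        // ...then keep only the messages whose udate falls inside the window.
        if ($msgnos !== false) {
            foreach ($msgnos as $msgno) {
                $header = imap_headerinfo($m_mail, $msgno);
                // udate is a Unix timestamp, so this comparison is timezone independent.
                if ($header->udate >= $cutoff) {
                    echo $header->subject . "\n";
                }
            }
        }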

    Read the article

  • How do I increase timeout for a cronjob/crontab?

    - by Mohit Ranka
    I have written a script that gets data from Solr for dates within a specified period, and I run it as a daily cron job. The problem is that the cron job does not complete the task. If I run the script manually (for the same time period) it works fine, and if I reduce the time period the script also runs fine from cron. So my guess is that the cron job is timing out while running the script when there is too much data to process. How do I increase the timeout for a cron job? PS: the script I am running from cron is a bash script which runs a Python script.
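
    For what it's worth, standard cron does not impose a timeout of its own, so a first step is usually to capture the job's output and stop overlapping runs while diagnosing why it dies; a crontab sketch along these lines (paths and schedule are illustrative):

        # Run at 02:00; log stdout/stderr; skip the run if the previous one still
        # holds the lock (flock -n fails immediately instead of queuing behind it).
        0 2 * * * /usr/bin/flock -n /tmp/solr_export.lock /home/user/export_from_solr.sh >> /var/log/solr_export.log 2>&1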

    Read the article

  • Optimizing performance of large ASP.NET applications

    - by NLV
    Hello, I'm building an ASP.NET web application with lots and lots of controls and huge volumes of data. My application is very slow, and it takes a long time to load the data into the .NET controls like the grid, tree view, etc. I also have some AJAX-enabled pages and controls in the application. I want to reduce the page load time on each postback. What are the standards/best practices to follow when developing large ASP.NET applications? Thank you. NLV

    Read the article

  • glTexImage2D behavior on iPhone and other OpenGL ES platforms

    - by spurserh
    Hello, I am doing some work which involves drawing video frames in real time in OpenGL ES. Right now I am using glTexImage2D to transfer the data, in the absence of Pixel Buffer Objects and the like. I suspect that the use of glTexImage2D with one or two frames of look-ahead, that is, using several textures so that the glTexImage2D call can be initiated a frame or two ahead, will allow for sufficient parallelism to play in real time if the system is capable of it at all. Is my assumption true that the driver will handle the actual data transfer to the hardware asynchronously after glTexImage2D returns, assuming I don't try to use the texture or call glFinish/glFlush? Is there a better way to do this with OpenGL ES? Thank you very much, Sean
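
    A minimal sketch of the look-ahead scheme described above - a small ring of texture names so the upload for frame N+1 can be issued while frame N is drawn (GLES 1.x calls; RGBA is used here purely for illustration):

        #include <OpenGLES/ES1/gl.h>

        #define RING 3
        static GLuint tex[RING];

        void init_ring(int w, int h) {
            glGenTextures(RING, tex);
            for (int i = 0; i < RING; i++) {
                glBindTexture(GL_TEXTURE_2D, tex[i]);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                /* Allocate storage once; later frames only replace the contents. */
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
            }
        }

        void upload_frame(long frame, int w, int h, const void *pixels) {
            /* Write into the texture that will be drawn a frame or two from now. */
            glBindTexture(GL_TEXTURE_2D, tex[(frame + 1) % RING]);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                            GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        }

        void draw_frame(long frame) {
            glBindTexture(GL_TEXTURE_2D, tex[frame % RING]);
            /* ...draw the textured quad as usual... */
        }

    Whether the driver actually copies asynchronously is implementation specific, so this only buys parallelism when it does; at minimum, glTexSubImage2D into pre-allocated storage avoids reallocating the texture every frame.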

    Read the article

  • Hard disk DRDY error: is it a crash?

    - by pranjal
    I am using an IBM ThinkPad, 1.7 GHz, 512 MB RAM, with Linux Mint 9 installed. I have two partitions in addition to root. One of the partitions became read-only yesterday, after which I rebooted my system. It is now extremely slow, and I am seeing DRDY errors: has my hard disk crashed? Error log while booting:

        Differences between boot sector and its backup.
        failed command : READ DMA
        BMDMA : stat 0X25
        ata 1.00 : status : { DRDY ERR }
        ata 1.00 : status : { UNC }
        Buffer I/O error on logical device, logical block 65467

    smartctl output for the partition:

        mint mint # smartctl -a /dev/sda1
        smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF INFORMATION SECTION ===
        Device Model:     TOSHIBA MK4026GAX RoHS
        Serial Number:    X5LY1623T
        Firmware Version: PA107E
        User Capacity:    40,007,761,920 bytes
        Device is:        Not in smartctl database [for details use: -P showall]
        ATA Version is:   6
        ATA Standard is:  Exact ATA specification draft version not indicated
        Local Time is:    Thu Feb 17 06:48:25 2011 UTC
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        === START OF READ SMART DATA SECTION ===
        SMART overall-health self-assessment test result: PASSED

        General SMART Values:
        Offline data collection status:  (0x84) Offline data collection activity was suspended
                                                by an interrupting command from host.
                                                Auto Offline Data Collection: Enabled.
        Self-test execution status:      (   0) The previous self-test routine completed without
                                                error or no self-test has ever been run.
        Total time to complete Offline data collection: ( 153) seconds.
        Offline data collection capabilities:    (0x1b) SMART execute Offline immediate.
                                                Auto Offline data collection on/off support.
                                                Suspend Offline collection upon new command.
                                                Offline surface scan supported.
                                                Self-test supported.
                                                No Conveyance Self-test supported.
                                                No Selective Self-test supported.
        SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                                Supports SMART auto save timer.
        Error logging capability:        (0x01) Error logging supported.
                                                No General Purpose Logging support.
        Short self-test routine recommended polling time:    (   2) minutes.
        Extended self-test routine recommended polling time: (  30) minutes.

        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x000b 100   100   050    Pre-fail Always  -           0
          2 Throughput_Performance  0x0005 100   100   050    Pre-fail Offline -           0
          3 Spin_Up_Time            0x0027 100   100   001    Pre-fail Always  -           310
          4 Start_Stop_Count        0x0032 100   100   000    Old_age  Always  -           3968
          5 Reallocated_Sector_Ct   0x0033 100   100   050    Pre-fail Always  -           40
          7 Seek_Error_Rate         0x000b 100   100   050    Pre-fail Always  -           0
          8 Seek_Time_Performance   0x0005 100   100   050    Pre-fail Offline -           0
          9 Power_On_Hours          0x0032 082   082   000    Old_age  Always  -           7257
         10 Spin_Retry_Count        0x0033 179   100   030    Pre-fail Always  -           0
         12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           3484
        192 Power-Off_Retract_Count 0x0032 100   100   000    Old_age  Always  -           489
        193 Load_Cycle_Count        0x0032 064   064   000    Old_age  Always  -           367150
        194 Temperature_Celsius     0x0022 100   100   000    Old_age  Always  -           36 (Lifetime Min/Max 14/57)
        196 Reallocated_Event_Count 0x0032 100   100   000    Old_age  Always  -           33
        197 Current_Pending_Sector  0x0032 100   100   000    Old_age  Always  -           82
        198 Offline_Uncorrectable   0x0030 100   100   000    Old_age  Offline -           1
        199 UDMA_CRC_Error_Count    0x0032 200   253   000    Old_age  Always  -           0
        220 Disk_Shift              0x0002 100   100   000    Old_age  Always  -           101
        222 Loaded_Hours            0x0032 085   085   000    Old_age  Always  -           6146
        223 Load_Retry_Count        0x0032 100   100   000    Old_age  Always  -           0
        224 Load_Friction           0x0022 100   100   000    Old_age  Always  -           0
        226 Load-in_Time            0x0026 100   100   000    Old_age  Always  -           227
        240 Head_Flying_Hours       0x0001 100   100   001    Pre-fail Offline -           0

        SMART Error Log Version: 1
        ATA Error Count: 2371 (device log contains only the most recent five errors)
          CR = Command Register [HEX]
          FR = Features Register [HEX]
          SC = Sector Count Register [HEX]
          SN = Sector Number Register [HEX]
          CL = Cylinder Low Register [HEX]
          CH = Cylinder High Register [HEX]
          DH = Device/Head Register [HEX]
          DC = Device Command Register [HEX]
          ER = Error register [HEX]
          ST = Status register [HEX]
        Powered_Up_Time is measured from power on, and printed as
        DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
        SS=sec, and sss=millisec. It "wraps" after 49.710 days.

        Error 2371 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours)
          When the command that caused the error occurred, the device was active or idle.
          After command completion occurred, registers were:
          ER ST SC SN CL CH DH
          -- -- -- -- -- -- --
          40 51 05 1a 1b 00 e0  Error: UNC 5 sectors at LBA = 0x00001b1a = 6938
          Commands leading to the command that caused the error were:
          CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
          -- -- -- -- -- -- -- --  ----------------  --------------------
          c8 00 05 1a 1b 00 e0 00      00:03:10.061  READ DMA
          f8 00 00 00 00 00 e0 00      00:03:10.061  READ NATIVE MAX ADDRESS
          ec 00 00 00 00 00 a0 02      00:03:10.053  IDENTIFY DEVICE
          ef 03 45 00 00 00 a0 02      00:03:10.053  SET FEATURES [Set transfer mode]
          f8 00 00 00 00 00 e0 00      00:03:10.053  READ NATIVE MAX ADDRESS

        Error 2370 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours)
          When the command that caused the error occurred, the device was active or idle.
          After command completion occurred, registers were:
          ER ST SC SN CL CH DH
          -- -- -- -- -- -- --
          40 51 05 1a 1b 00 e0  Error: UNC 5 sectors at LBA = 0x00001b1a = 6938
          Commands leading to the command that caused the error were:
          CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
          -- -- -- -- -- -- -- --  ----------------  --------------------
          c8 00 05 1a 1b 00 e0 00      00:03:03.328  READ DMA
          f8 00 00 00 00 00 e0 00      00:03:03.327  READ NATIVE MAX ADDRESS
          ec 00 00 00 00 00 a0 02      00:03:03.320  IDENTIFY DEVICE
          ef 03 45 00 00 00 a0 02      00:03:03.319  SET FEATURES [Set transfer mode]
          f8 00 00 00 00 00 e0 00      00:03:03.319  READ NATIVE MAX ADDRESS

        Error 2369 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours)
          When the command that caused the error occurred, the device was active or idle.
          After command completion occurred, registers were:
          ER ST SC SN CL CH DH
          -- -- -- -- -- -- --
          40 51 05 1a 1b 00 e0  Error: UNC 5 sectors at LBA = 0x00001b1a = 6938
          Commands leading to the command that caused the error were:
          CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
          -- -- -- -- -- -- -- --  ----------------  --------------------
          c8 00 05 1a 1b 00 e0 00      00:02:56.582  READ DMA
          f8 00 00 00 00 00 e0 00      00:02:56.582  READ NATIVE MAX ADDRESS
          ec 00 00 00 00 00 a0 02      00:02:56.574  IDENTIFY DEVICE
          ef 03 45 00 00 00 a0 02      00:02:56.574  SET FEATURES [Set transfer mode]
          f8 00 00 00 00 00 e0 00      00:02:56.574  READ NATIVE MAX ADDRESS

        Error 2368 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours)
          When the command that caused the error occurred, the device was active or idle.
          After command completion occurred, registers were:
          ER ST SC SN CL CH DH
          -- -- -- -- -- -- --
          40 51 05 1a 1b 00 e0  Error: UNC 5 sectors at LBA = 0x00001b1a = 6938
          Commands leading to the command that caused the error were:
          CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
          -- -- -- -- -- -- -- --  ----------------  --------------------
          c8 00 05 1a 1b 00 e0 00      00:02:49.809  READ DMA
          f8 00 00 00 00 00 e0 00      00:02:49.809  READ NATIVE MAX ADDRESS
          ec 00 00 00 00 00 a0 02      00:02:49.801  IDENTIFY DEVICE
          ef 03 45 00 00 00 a0 02      00:02:49.801  SET FEATURES [Set transfer mode]
          f8 00 00 00 00 00 e0 00      00:02:49.801  READ NATIVE MAX ADDRESS

        Error 2367 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours)
          When the command that caused the error occurred, the device was active or idle.
          After command completion occurred, registers were:
          ER ST SC SN CL CH DH
          -- -- -- -- -- -- --
          40 51 05 1a 1b 00 e0  Error: UNC 5 sectors at LBA = 0x00001b1a = 6938
          Commands leading to the command that caused the error were:
          CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
          -- -- -- -- -- -- -- --  ----------------  --------------------
          c8 00 05 1a 1b 00 e0 00      00:02:43.056  READ DMA
          f8 00 00 00 00 00 e0 00      00:02:43.056  READ NATIVE MAX ADDRESS
          ec 00 00 00 00 00 a0 02      00:02:43.048  IDENTIFY DEVICE
          ef 03 45 00 00 00 a0 02      00:02:43.048  SET FEATURES [Set transfer mode]
          f8 00 00 00 00 00 e0 00      00:02:43.047  READ NATIVE MAX ADDRESS

        SMART Self-test log structure revision number 1
        No self-tests have been logged.  [To run self-tests, use: smartctl -t]
        Device does not support Selective Self Tests/Logging

    Do I need to get a new hard disk for my PC?
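
    For context, errors like the pending sectors and UNC reads above are normally followed up with commands along these lines (run against the whole disk, /dev/sda, rather than the partition; smartmontools and GNU ddrescue are assumed to be installed):

        # Query SMART data for the whole disk, not a single partition.
        sudo smartctl -a /dev/sda

        # Run a short (and, if it passes, a long) self-test, then read the result log.
        sudo smartctl -t short /dev/sda
        sudo smartctl -l selftest /dev/sda

        # Before anything else, image the disk to a known-good drive; ddrescue keeps
        # going past unreadable sectors and records its progress in a log file.
        sudo ddrescue -f -n /dev/sda /dev/sdb rescue.log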

    Read the article

  • WCF performance improvements

    - by Burt
    I am developing a WPF application that talks to a server via WCF services over the internet. After profiling the application, I noticed a lot of time is being taken up by creating the appropriate WCF client proxy and making the call to the server. The code on the server is optimised and doesn't take any time to run, yet I am still seeing a 1.5-second delay from when a service is invoked to when it returns to the client. A few points to give a bit of background: I am using ASP.NET membership for security; I will eventually hook into the same server-side code through a website; I would eventually like to have offline support in the application. I really need to nail the performance early, though, as if the app takes a couple of seconds to come back it is too long for what I am trying to do. Can anyone suggest performance tips that will help me please?
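
    One commonly suggested optimisation for the proxy-creation cost described above is to build the ChannelFactory once and reuse it, since creating channels from an existing factory is cheap (IOrderService and the endpoint name are illustrative):

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IOrderService
        {
            [OperationContract]
            int GetOrderCount();
        }

        public static class OrderServiceClient
        {
            // Building the factory is the expensive part (contract reflection,
            // channel stack construction); keep one per contract/endpoint.
            private static readonly ChannelFactory<IOrderService> Factory =
                new ChannelFactory<IOrderService>("OrderServiceEndpoint");

            public static TResult Call<TResult>(Func<IOrderService, TResult> action)
            {
                IOrderService channel = Factory.CreateChannel();
                try
                {
                    return action(channel);
                }
                finally
                {
                    var client = (IClientChannel)channel;
                    if (client.State == CommunicationState.Faulted) client.Abort();
                    else client.Close();
                }
            }
        }

    This is a sketch, not a drop-in fix; if most of the 1.5 seconds turns out to be the security handshake for the membership credentials, reusing the channel itself across calls (not just the factory) is the next lever to look at.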

    Read the article

  • Increase column widths in a Silverlight DataGrid to fill the whole grid width

    - by Henrik P. Hessel
    Hello, I have a DataGrid control which is bound to a SQL table. The XAML code is:

        <data:DataGrid x:Name="dg_sql_data" Grid.Row="1" Visibility="Collapsed" Height="auto"
                       Margin="0,5,5,5" AutoGenerateColumns="false"
                       AlternatingRowBackground="Aqua" Opacity="80">
            <data:DataGrid.Columns>
                <data:DataGridTextColumn Header="Latitude"  Binding="{Binding lat}" />
                <data:DataGridTextColumn Header="Longitude" Binding="{Binding long}" />
                <data:DataGridTextColumn Header="Time"      Binding="{Binding time}" />
            </data:DataGrid.Columns>
        </data:DataGrid>

    Is it possible to increase the individual column widths so that they fill out the complete width of the DataGrid? thx!, rAyt Edit: columns with "*" as the width are coming with the Silverlight SDK 4.
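
    For reference, the star sizing mentioned in the edit looks like this once the Silverlight 4 SDK DataGrid is in use; each starred column takes a proportional share of the leftover grid width (the 1*/1*/2* split is just an example):

        <data:DataGrid.Columns>
            <data:DataGridTextColumn Header="Latitude"  Width="*"  Binding="{Binding lat}" />
            <data:DataGridTextColumn Header="Longitude" Width="*"  Binding="{Binding long}" />
            <data:DataGridTextColumn Header="Time"      Width="2*" Binding="{Binding time}" />
        </data:DataGrid.Columns>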

    Read the article

  • Need to compare blobs in firebird

    - by Michael Beck
    Hey guys, I need to check on the contents of blobs in my databases (yes, plural, but one problem at a time). In one database, I have about 900 images of potentially varying sizes. I need to check to see if the versioning system that's built into our application is actually correctly replicating the image data from the previous version to the new version of a record. How do I compare values en masse so I don't have to pick through each record one at a time and open up the blob using FlameRobin or Firebird Maestro and visually compare these images? Thanks for any assistance.

    Read the article

  • Fastest way to call a COM object's methods without using an RCW

    - by Nathan W
    I'm trying to find the cleanest and fastest way to call a COM object's methods. I was using an RCW for the object, but every time a new version of the third-party COM object comes out its GUID changes, which renders the RCW useless, so I had to change and start using Type mytype = Type.GetTypeFromProgID("MyCOMApp.Application"); so that every time a new version of the COM object comes out I don't have to recompile and redeploy my app. At the moment I am using reflection like mytype.InvokeMember, but I feel it is slow compared to just calling the RCW. How does everyone else tackle the problem of changing third-party COM object versions while still maintaining the speed of an RCW?
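
    One option that keeps the late binding but avoids hand-written InvokeMember calls is C# 4's dynamic, which caches its call sites after the first invocation (the DoWork call is purely illustrative; the ProgID is the one from the question):

        using System;

        class ComCaller
        {
            static void Main()
            {
                Type comType = Type.GetTypeFromProgID("MyCOMApp.Application");
                dynamic app = Activator.CreateInstance(comType);

                // Late bound, but the DLR caches the binding, so repeated calls are
                // considerably cheaper than Type.InvokeMember with string names.
                app.DoWork("some argument");
            }
        }

    This assumes the project can target .NET 4; on earlier frameworks the usual fallback is to keep InvokeMember but cache whatever can be cached (the Type, argument arrays, etc.).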

    Read the article

  • SQL query optimization

    - by nvtthang
    I have a problem with a SQL query that takes a long time to return all records from the database. Can anybody help me? Below is a sample of the database:

        order(order_id, order_nm)
        customer(customer_id, customer_nm)
        orderDetail(orderDetail_id, order_id, orderDate, customer_id, Comment)

    I want to get the latest customer and order detail information. Here is my solution: I've created a function GetLatestOrderByCustomer(CusID) to get the latest information per customer.

        CREATE FUNCTION [dbo].[GetLatestOrderByCustomer]
        (
            @cus_id int
        )
        RETURNS varchar(255)
        AS
        BEGIN
            DECLARE @ResultVar varchar(255)

            SELECT @ResultVar = tmp.comment
            FROM (
                SELECT TOP 1 orderDate, comment
                FROM orderDetail
                WHERE orderDetail.customer_id = @cus_id   -- parameter name matched to the declaration
                ORDER BY orderDate DESC                   -- TOP 1 needs an ORDER BY to mean "latest"
            ) tmp

            -- Return the result of the function
            RETURN @ResultVar
        END

    Below is my SQL query:

        SELECT customer.customer_id
             , customer.customer_nm
             , dbo.GetLatestOrderByCustomer(customer.customer_id)
        FROM Customer
        LEFT JOIN orderDetail ON orderDetail.customer_id = customer.customer_id

    It takes a long time to run the function. Could anybody suggest any solutions to make it better? Thanks in advance.
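
    A sketch of the usual rewrite for this pattern: drop the scalar function so the optimiser can see the whole query, and fetch the latest row per customer with OUTER APPLY (SQL Server 2005+); an index on orderDetail(customer_id, orderDate DESC) is assumed to keep the TOP 1 lookup cheap:

        SELECT c.customer_id,
               c.customer_nm,
               latest.comment,
               latest.orderDate
        FROM customer AS c
        OUTER APPLY (
            SELECT TOP 1 od.orderDate, od.comment
            FROM orderDetail AS od
            WHERE od.customer_id = c.customer_id
            ORDER BY od.orderDate DESC
        ) AS latest;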

    Read the article

  • Validation plugin hijacks click function?

    - by timkl
    Absolute newbie question, any help is highly appreciated :) I am using curvycorners (http://www.curvycorners.net/) in combination with the jQuery validation plugin (http://bassistance.de/jquery-plugins/jquery-plugin-validation/), and I'm having problems getting the div to redraw the rounded corners when I do this:

        $("input[type='submit']").click(function(e) {
            curvyCorners.redraw();
        });

    When I click the submit button the first time, the form validates, the validation error message pops up and expands the div, causing the layout to go ugly. However, when I click it a second time the rounded corners redraw nicely. Could it be that the validation plugin hijacks my initial click? How do I go about this? Any hint is very much appreciated.
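
    A sketch of hooking the redraw into the validation plugin's own callbacks rather than the raw click handler, assuming the form is initialised with .validate() (the #myform selector is illustrative):

        $("#myform").validate({
            // Fires when a submit attempt fails validation, i.e. right after the
            // error messages have been shown and the div has grown.
            invalidHandler: function(event, validator) {
                setTimeout(function() { curvyCorners.redraw(); }, 0);
            },
            // Fires each time a field becomes valid and its message is removed.
            success: function(label) {
                curvyCorners.redraw();
            }
        });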

    Read the article

  • How do you do HTML form testing without simulating real user input?

    - by justjoe
    This question is like this one, except it's about PHP testing via the browser - specifically, testing your form input. Right now I have a form on a single page with 12 input boxes, and every time I test the form I have to fill in those 12 input boxes in my browser. I know it's not a specific coding question; it's more about how to do direct testing of your form. So, how do you do repeated testing without it consuming too much of your time?
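
    One low-tech sketch of "testing without typing": post the twelve fields straight at the form's action URL with curl, so the handler can be exercised from a script (the URL and field names are made up for illustration):

        curl http://localhost/myapp/form_handler.php \
             -d "first_name=Test" \
             -d "last_name=User" \
             -d "email=test@example.com" \
             -d "age=30"
             # ...the remaining fields follow the same -d pattern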

    Read the article

  • JIRA JQL searching by date - is there a way of getting Today() (Date) instead of Now() (DateTime)

    - by Shevek
    I am trying to create some issue filters in JIRA based on CreateDate. The only date/time function I can find is Now(), together with searches relative to it, i.e. "-1d", "-4d" etc. The problem with this is that Now() is time specific, so there is no way of getting a particular day's created issues. For example, with Created < Now() AND Created >= "-1d": run at 2pm today, it will show all issues created from 2pm yesterday to 2pm today; run at 9am tomorrow, it will show all issues created from 9am today to 9am tomorrow. What I want is to be able to search for all issues created from 00:00 to 23:59 on any day. Is this possible?
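
    For what it's worth, a whole-day window can be expressed with plain date literals, since a bare date in JQL is treated as midnight; newer JIRA versions also add a startOfDay() function, but that is version dependent, so treat it as an assumption:

        created >= "2010-05-09" AND created < "2010-05-10"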

    Read the article

  • Covariance and Contravariance on the same type argument

    - by William Edmondson
    The C# spec states that a generic type parameter cannot be both covariant and contravariant at the same time. This is apparent when creating a covariant or contravariant interface: you decorate your type parameters with "out" or "in" respectively, and there is no option that allows both at once ("outin"). Is this limitation simply a language-specific constraint, or are there deeper, more fundamental reasons based in category theory that would make you not want your type to be both covariant and contravariant? Edit: my understanding was that arrays are actually both covariant and contravariant:

        public class Pet {}
        public class Cat : Pet {}
        public class Siamese : Cat {}

        Cat[] cats = new Cat[10];
        Pet[] pets = new Pet[10];
        Siamese[] siameseCats = new Siamese[10];

        // Cat array is covariant
        pets = cats;

        // Cat array is also contravariant since it accepts conversions from wider types
        cats = siameseCats;
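
    A small sketch of why a single type parameter cannot carry both annotations: as soon as T appears in both an input and an output position, neither "in" nor "out" is sound (IBox is an invented name):

        public interface IBox<T>        // neither "in T" nor "out T" would compile here
        {
            T Get();                    // T in an output position -> would need "out"
            void Put(T item);           // T in an input position  -> would need "in"
        }

        // If IBox<Cat> were usable as IBox<Pet> (covariance), Put could smuggle any Pet
        // into a box of Cats; if IBox<Pet> were usable as IBox<Cat> (contravariance),
        // Get could hand back a Pet that is not a Cat. Arrays permit the first
        // conversion anyway, which is exactly why array element stores are checked at
        // run time (ArrayTypeMismatchException).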

    Read the article

  • Issue using SqlBulkCopy in a multithreaded scenario with the ThreadPool

    - by Ruben F.
    Hi. I'm facing a dilemma (!). In a first scenario, I implemented a solution that replicates data from one database to another using SqlBulkCopy synchronously, and I had no problems at all. Now, using the ThreadPool, I implemented the same thing asynchronously, one thread per table, and everything works fine, but after some time (usually one hour, because the copy operations take about that long), the operations sent to the ThreadPool stop being executed. There is a different SqlBulkCopy using a different SqlConnection per thread. I already check the number of free threads, and they are all free at the beginning of the invocation. I have one AutoResetEvent to wait for the threads to finish their job before launching again, and a FIFO semaphore that holds the count of active threads. Is there some issue I have forgotten or that I should evaluate when using SqlBulkCopy? I'd appreciate some help, because I'm out of ideas ;) Thanks
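
    A minimal sketch of the per-table work item under the setup described above - one connection and one SqlBulkCopy per thread - with everything in using blocks and exceptions logged rather than lost (.NET 4's CountdownEvent stands in for the AutoResetEvent/semaphore pair; table names, connection strings and the logging target are illustrative):

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Threading;

        static class TableCopier
        {
            public static void QueueTableCopy(string table, string sourceConn, string targetConn, CountdownEvent pending)
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    try
                    {
                        using (var source = new SqlConnection(sourceConn))
                        using (var target = new SqlConnection(targetConn))
                        using (var cmd = new SqlCommand("SELECT * FROM " + table, source))
                        {
                            source.Open();
                            target.Open();
                            using (IDataReader reader = cmd.ExecuteReader())
                            using (var bulk = new SqlBulkCopy(target) { DestinationTableName = table, BulkCopyTimeout = 0 })
                            {
                                // BulkCopyTimeout = 0 removes the 30-second default,
                                // which matters for copies that run for an hour.
                                bulk.WriteToServer(reader);
                            }
                        }
                    }
                    catch (Exception ex)
                    {
                        Console.Error.WriteLine("Copy of {0} failed: {1}", table, ex);
                    }
                    finally
                    {
                        pending.Signal();   // always signal, even on failure
                    }
                });
            }
        }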

    Read the article

  • TortoiseSVN: how to set up projects on an existing directory structure of source code

    - by Steve
    I have an old pet project I want to revive (I haven't had enough time for it in the last year - small kid - you know), so I restored an old copy of my dev folder from an archive, but since I have rebuilt my machine since then, I can't remember what needs to be done now. I installed the latest version of TortoiseSVN, and the existing directory structure from my old dev machine looks like:

        ProjectName
            SubProject1
                branches
                    1.1
                    1.2
                tags
                trunk
            SubProject2
                branches
                    1.0.3
                    1.0.4
                    1.0.5
                tags
                trunk

    I tried "import project" but it asks for a URL, and I don't know what to specify there... Can someone post a link to a good TortoiseSVN tutorial so I can set up my projects quickly (I guess I need to set up SubProject1 and SubProject2)? Then I'll install AnkhSVN for VS2008 and spend this Sunday coding like crazy while I still have some time ;-)
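
    For orientation, the URL the import dialog asks for is a repository URL, not a web page; a command-line sketch of creating a local repository and importing the existing tree (paths are illustrative, and the same file:/// URLs work in TortoiseSVN's dialogs):

        # Create an empty repository somewhere outside the working folders.
        svnadmin create C:\svnrepo

        # Import the existing tree, then check out a fresh working copy to work from.
        svn import C:\dev\ProjectName file:///C:/svnrepo/ProjectName -m "Initial import"
        svn checkout file:///C:/svnrepo/ProjectName C:\work\ProjectName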

    Read the article

  • Paypal IPN Handler for Cart

    - by kimmothy16
    Hey everyone! I'm using a PayPal add-to-cart button for a simple ecommerce website. I had a PayPal IPN handler written in PHP that was working successfully for 'Buy it Now' buttons, purchasing one item at a time; I would run a database query each time to update the store's inventory to reflect the purchase. Now I'm upgrading the store to a 'cart' version so people can check out with multiple items at a time. Would anyone be able to tell me, in general, how my IPN handler needs to be altered to accommodate this? I'm unsure of what a response from PayPal looks like for a cart purchase as opposed to a Buy it Now purchase. Thanks, any help or examples of working IPN cart scripts would be very much appreciated! My current code is below:

        // PayPal POSTs HTML form variables to this page.
        // We must post all the variables back to PayPal exactly unchanged and
        // add an extra parameter cmd with value _notify-validate.

        // initialise a variable with the required cmd parameter
        $req = 'cmd=_notify-validate';

        // go through each of the POSTed vars and add them to the variable
        foreach ($_POST as $key => $value) {
            $value = urlencode(stripslashes($value));
            $req .= "&$key=$value";
        }

        // post back to the PayPal system to validate
        $header .= "POST /cgi-bin/webscr HTTP/1.0\r\n";
        $header .= "Content-Type: application/x-www-form-urlencoded\r\n";
        $header .= "Content-Length: " . strlen($req) . "\r\n\r\n";

        // In a live application send it back to www.paypal.com,
        // but during development you will want to use the PayPal sandbox;
        // comment out one of the following lines.
        $fp = fsockopen('www.sandbox.paypal.com', 80, $errno, $errstr, 30);
        //$fp = fsockopen('www.paypal.com', 80, $errno, $errstr, 30);
        // or use port 443 for an SSL connection
        //$fp = fsockopen('ssl://www.paypal.com', 443, $errno, $errstr, 30);

        if (!$fp) {
            // HTTP ERROR
        } else {
            fputs($fp, $header . $req);
            while (!feof($fp)) {
                $res = fgets($fp, 1024);
                if (strcmp($res, "VERIFIED") == 0) {
                    $item_name        = stripslashes($_POST['item_name']);
                    $item_number      = $_POST['item_number'];
                    $item_id          = $_POST['custom'];
                    $payment_status   = $_POST['payment_status'];
                    $payment_amount   = $_POST['mc_gross'];     // full amount of payment; payment_gross in US
                    $payment_currency = $_POST['mc_currency'];
                    $txn_id           = $_POST['txn_id'];       // unique transaction id
                    $receiver_email   = $_POST['receiver_email'];
                    $payer_email      = $_POST['payer_email'];
                    $size             = $_POST['option_selection1'];
                    $item_id          = $_POST['item_id'];
                    $business         = $_POST['business'];

                    if ($payment_status == 'Completed') {
                        // UPDATE THE DATABASE
                    }
                }
            }
        }
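
    For a cart checkout, the IPN post reports line items as numbered variables (item_name1, item_number1, quantity1, ... plus num_cart_items) instead of a single item_name; a sketch of looping over them inside the VERIFIED/Completed branch, assuming those standard cart variables are present (update_inventory is a placeholder for the existing database query):

        <?php
        $num_items = isset($_POST['num_cart_items']) ? (int)$_POST['num_cart_items'] : 0;

        for ($i = 1; $i <= $num_items; $i++) {
            $name     = isset($_POST["item_name$i"])   ? stripslashes($_POST["item_name$i"]) : '';
            $number   = isset($_POST["item_number$i"]) ? $_POST["item_number$i"] : '';
            $quantity = isset($_POST["quantity$i"])    ? (int)$_POST["quantity$i"] : 1;

            // Decrement inventory once per line item instead of once per transaction.
            update_inventory($number, $quantity);
        }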

    Read the article

  • XAML serialization object not using ASP.NET shadow copy

    - by mrwayne
    Hi, I'm having a problem where I use the XAML serializer/deserializer for a configuration-type file that I have. The problem I'm getting is that the XAML serializer returns objects from the assembly in the /Bin directory, while the rest of the web application uses assemblies stored in the ..../Temporary Files/.. directory. Is there any way to prevent this from happening? Is this a bug in the XAML serializer/assembly-loading routines? Every time I compile I need to stop and start the ASP.NET application so the shadow copy and the bin are exactly the same file. Even recompiling without making a change to the DLL still causes the problem. Any thoughts on how to get around this problem? Currently I've tried turning shadow copy off, but then I have the same problem of needing to shut down and start up the web app every time I compile. Help!

    Read the article

  • Synchronizing two SQL Server databases using MS Sync Framework

    - by Immortal
    I have one central SQL Server database which can be offline from time to time. I have a desktop application using a Local DB Cache (SQL CE) to synchronize with the central database, and I also have a web application with its own SQL Server that I'd also like to keep synchronized. All synchronization must be bidirectional. Is there a way to synchronize my central database with the web application's database in the same way as I synchronize the central database with the desktop client? I know about collaboration scenarios and peer-to-peer synchronization, but I would like to avoid manual provisioning of databases. I'd like to use the integrated SQL Server 2008 change tracking, just like in the SQL CE <-> SQL Server scenario.

    Read the article

  • Using PostRequestHandlerExecute, Flush and Close to clean up after a request - why is this bad?

    - by Erwin
    After some requests I need to clean up after the user - by calling a remote web service to release some resources that I guess the user doesn't need anymore. It is OK to leave them hanging and let them time out on the other server, but the polite thing to do is to inform it that I no longer need them. I do not want to waste the user's time waiting for the cleanup, so I tried to find a place to put it. First I tried Application_EndRequest, but I needed something later. Then I found PostRequestHandlerExecute, which seemed like a nice place, but I still need a Flush and Close to release the connection to the user.

        Protected Sub Application_PostRequestHandlerExecute(ByVal sender As Object, ByVal e As System.EventArgs)
            Response.Flush()
            Response.Close()
            ' Simulation of clean up activity:
            System.Threading.Thread.Sleep(4000) ' Really a couple of web service calls
        End Sub

    Is there some other place I could put these lengthy clean-up routines?

    Read the article

  • [OpenGL] Loading a new texture into an already defined texture name

    - by Dan Vogel
    I have an OpenGL program in which I am displaying an image using textures. I want to be able to load a new image to be displayed. In my Init function I call:

        Gl.glGenTextures(1, mTextures);

    Since only one image will be displayed at a time, I am using the same texture name for each image. Each time a new image is loaded I call the following:

        Gl.glBindTexture(Gl.GL_TEXTURE_2D, mTexture[0]);
        Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_LUMINANCE, mTexSizeX, mTexSizeY, 0,
                        Gl.GL_LUMINANCE, Gl.GL_UNSIGNED_SHORT, mTexBuffer);
        Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MIN_FILTER, Gl.GL_LINEAR);
        Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MAG_FILTER, Gl.GL_LINEAR);

    The first image displays as expected. However, all images loaded after the first display as all black. What am I doing wrong?

    Read the article

  • Long-running transactions structured approach

    - by disown
    I'm looking for a structured approach to long-running (hours or more) transactions. As mentioned here, these types of interactions are usually handled with optimistic locking and manual merge strategies. It would be very handy to have a more structured approach to this type of problem using standard transactions. Various long-running interactions such as user registration, order confirmation etc. all have transaction-like semantics, and it is both error-prone and tedious to invent your own fragile manual roll-back and/or time-out/clean-up strategies. Taking an RDBMS as an example, I realize that there would be a major performance cost associated with keeping all the transactions open. As an alternative, I could imagine a database supporting two isolation levels/strategies simultaneously, one for short-running and one for long-running conversations. Long-running conversations could then, for instance, have stricter limitations on data access to allow them to take more time (read-only semantics on some data, optimistic locking semantics, etc.). Are there any solutions which can do something similar?

    Read the article

  • .NET and P2P - writing a P2P messenger

    - by brovar
    Hi there, does anyone have any advice on how to write such an app? Or maybe know a nice tutorial? I would like to use the System.Net.PeerToPeer namespace, but everything I can find about it is MSDN, which I can't read without going mad. Or maybe using "old-school" TCP/IP would be more efficient? I will appreciate every piece of advice. Every sample code I will shower with gold ;) And please don't send me back to Google, for I have searched for a long time for something useful - maybe inaccurately, but time is running out and I really need some help.
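
    A very small sketch of what getting started with System.Net.PeerToPeer can look like - publishing a peer name so other instances can find this one (PNRP must be enabled on the machine; the name and port are illustrative, and the actual chat traffic still travels over your own sockets or WCF once the endpoints are known):

        using System;
        using System.Net.PeerToPeer;

        class PeerBootstrap
        {
            static void Main()
            {
                // Publish this instance under an unsecured peer name on port 9000.
                var name = new PeerName("p2p-messenger", PeerNameType.Unsecured);
                var registration = new PeerNameRegistration(name, 9000);
                registration.Start();

                // Another instance can discover the registered endpoints like this:
                var resolver = new PeerNameResolver();
                foreach (PeerNameRecord record in resolver.Resolve(name))
                    foreach (var endpoint in record.EndPointCollection)
                        Console.WriteLine(endpoint);

                Console.ReadLine();      // keep the registration alive while running
                registration.Stop();
            }
        }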

    Read the article

  • Need advice on speeding up tableViewCell photo loading performance

    - by ambertch
    I have around 20 table view cells that each contain a number (2-5) of thumbnail-sized pictures (they are VERY small Facebook profile pictures, e.g. http://profile.ak.fbcdn.net/hprofile-ak-sf2p/hs254.snc3/23133_201668_2989_q.jpg). Each picture is a UIImageView added to the cell's content view. Scrolling performance is poor, and measuring the draw time I've found that the UIImage rendering is the bottleneck. I've researched/thought of some solutions, but as I am new to iPhone development I am not sure which strategy to pursue: 1. preload all the images and retrieve them from disk instead of from the URL when drawing cells (I'm not sure if cell drawing will still be slow, so I want to hold off on the time investment here); 2. have the cells display a placeholder image from disk while the picture is asynchronously loaded (this seems to be the best solution, but I'm currently not sure exactly how best to do this); 3. there's the fast drawing recommendation from Tweetie, but I don't know that it will have much effect if it turns out my overhead is in network loading (http://blog.atebits.com/2008/12/fast-scrolling-in-tweetie-with-uitableview/). Thoughts/implementation advice? Thanks!
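
    A sketch of option 2 above (placeholder immediately, real image loaded off the main thread) using GCD on iOS 4 or later; caching, cancellation on cell reuse and error handling are left out, and photoURL/imageView are stand-ins for the cell's own properties:

        // In tableView:cellForRowAtIndexPath:
        imageView.image = [UIImage imageNamed:@"placeholder.png"];

        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            NSData *data = [NSData dataWithContentsOfURL:photoURL];   // tiny Facebook thumbnail
            UIImage *image = [UIImage imageWithData:data];

            dispatch_async(dispatch_get_main_queue(), ^{
                // UIKit must only be touched on the main thread.
                imageView.image = image;
            });
        });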

    Read the article
