Search Results

Search found 748 results on 30 pages for 'periodically'.

Page 24/30 | < Previous Page | 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Finding latency issues (stalls) in embedded Linux systems

    - by camh
    I have an embedded Linux system running on an Atmel AT91SAM9260EK board on which I have two processes running at real-time priority. A manager process periodically "pings" a worker process using POSIX message queues to check the health of the worker process. Usually the round-trip ping takes about 1ms, but very occasionally it takes much longer - about 800ms. There are no other processes that run at a higher priority. It appears the stall may be related to logging (syslog). If I stop logging the problem seems to go away. However it makes no difference if the log file is on JFFS2 or NFS. No other processes are writing to the "disk" - just syslog. What tools are available to me to help me track down why these stalls are occurring? I am aware of latencytop and will be using that. Are there some other tools that may be more useful?

    Some details:
    - Kernel version: 2.6.32.8
    - libc (syslog functions): uClibc 0.9.30.1
    - syslog: busybox 1.15.2

    Read the article

  • How to keep my topmost window on top?

    - by Misko Mare
    I will first explain why I need it, because I anticipate that the first response will be "Why do you need it?". I want to detect when the mouse cursor is on an edge of the screen and I don't want to use hooks. Hence, I created a one-pixel-wide TOPMOST invisible window. I am using C++ on Win XP, so when the window is created (CreateWindowEx(WS_EX_TOPMOST | WS_EX_TRANSPARENT ...)) everything works fine. Unfortunately, if a user moves another topmost window, for example the taskbar, over my window, I don't get mouse movements. I tried to solve this similarly to approaches suggested in: How To Keep an MDI Window Always on Top. I first tried to check the Z-order of my topmost window in WM_WINDOWPOSCHANGED:

        case WM_WINDOWPOSCHANGED:
            WINDOWPOS* pWP = (WINDOWPOS*)lParam;

    yet pWP->hwnd points to my window and pWP->hwndInsertAfter is 0, which should mean that my window is at the top of the Z-order, even though it is covered by the taskbar. Then I tried:

        case WM_WINDOWPOSCHANGED:
            HWND topWndHndl = GetNextWindow(myHandle, GW_HWNDPREV);
            GetWindowText(topWndHndl, pszMem, cTxtLen + 1);

    and I always get that the "Default IME" window is on top of my window. Even if I try to bring my window to the top with SetWindowPos() or BringWindowToTop(), "Default IME" stays on top. I don't know what "Default IME" is or how to detect whether the taskbar is on top of my window. So my question is: how do I detect that my topmost window is no longer the top topmost window, and how do I keep it on top? P.S. I know that a "brute force" approach of periodically bringing my window to the top works, yet it is ugly and could have some unwanted interference with, for example, the notification window. (Bringing my window to the top will hide the notification window.) Thank you for your time and suggestions!

    Read the article

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database and I'm looking for ways in which to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk time performance counter spends a lot of time at 100%, so it is clear that disk IO is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I: drop the primary key while I am doing the inserting and recreate it later; do inserts into a temporary table with the same schema and periodically transfer them into the main table to keep the size of the table where insertions are happening small; anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access the data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts as opposed to dropping the constraint entirely for import?

    Chopeen: The data is being generated remotely on many other machines (my SQL Server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps. ~ Andrew
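
    For reference, a minimal SqlBulkCopy sketch along the lines described above (the LoadChunk helper, connection string handling, and DataTable shape are assumptions for the example, not taken from the question):

        using System.Data;
        using System.Data.SqlClient;

        // Bulk-load one pre-sorted chunk (a DataTable whose columns match BulkData).
        static void LoadChunk(string connectionString, DataTable chunkTable)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open();
                // TableLock plus a batch size equal to the chunk size is a common
                // starting point for append-style loads like the one described.
                using (SqlBulkCopy bulk = new SqlBulkCopy(conn, SqlBulkCopyOptions.TableLock, null))
                {
                    bulk.DestinationTableName = "BulkData";
                    bulk.BatchSize = 300;
                    bulk.BulkCopyTimeout = 0;        // no command timeout for large loads
                    bulk.WriteToServer(chunkTable);
                }
            }
        }

    Whether dropping the clustered PK during the load helps is easiest to settle by measuring both ways on a representative chunk.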

    Read the article

  • How can I sync files in two different git repositories (not clones) and maintain history?

    - by brian d foy
    I maintain two different git repos that need to share some files, and I'd like the commits in one repo to show up in the other. What's a good way to do that for ongoing maintenance? I've been one of the maintainers of the perlfaq (Github), and recently I fell into the role of maintaining the Perl core documentation, which is also in git. Long before I started maintaining the perlfaq, it lived in a separate source control repository. I recently converted that to git. Periodically, one of the perl5-porters would sync the shared files in the perlfaq repo and the perl repo. Since we've switched to git, we've been a bit lazy converting the tools, and I'm now the one who does that. For the time being, the two repos are going to stay separate. Currently, to sync the FAQ for a new (monthly) release of perl, I'm almost ashamed to say that I merely copy the perlfaq*.pod files in the perlfaq repo and overlay them in the perl repo. That loses history, etc. Additionally, sometimes someone makes a change to those files in the perl repo and I end up overwriting it (yes, check git diff you idiot!). The files do not have the same paths in the repo, but that's something that I could change, I think. What I'd like to do, in the magical universe of rainbows and ponies, is pull the objects from the perlfaq repo and apply them in the perl repo, and vice-versa, so the history and commit ids correspond in each.

    - Creating patches works, but it's also a lot of work to manage.
    - Git submodules seem to only work to pull in the entire external repo.
    - I haven't found something like svn's file externals, but that would work in both directions anyway.
    - I'd love to just fetch objects from one and cherry-pick them in the other.

    What's a good way to manage this?

    Read the article

  • "Work stealing" vs. "Work shrugging"?

    - by John
    Why is it that I can find lots of information on "work stealing" and nothing on "work shrugging" as a dynamic load-balancing strategy? By "work-shrugging" I mean busy processors pushing excessive work towards less loaded neighbours rather than idle processors pulling work from busy neighbours ("work-stealing"). I think the general scalability should be the same for both strategies. However I believe that it is much more efficient for busy processors to wake idle processors if and when there is definitely work for them to do than having idle processors spinning or waking periodically to speculatively poll all neighbours for possible work. Anyway a quick google didn't show up anything under the heading of "Work Shrugging" or similar so any pointers to prior-art and the jargon for this strategy would be welcome. Clarification/Confession In more detail:- By "Work Shrugging" I actually envisage the work submitting processor (which may or may not be the target processor) being responsible for looking around the immediate locality of the preferred target processor (based on data/code locality) to decide if a near neighbour should be given the new work instead because they don't have as much work to do. I am talking about an atomic read of the immediate (typically 2 to 4) neighbours' estimated q length here. I do not think this is any more coupling than implied by the thieves polling & stealing from their neighbours - just much less often - or rather - only when it makes economic sense to do so. (I am assuming "lock-free, almost wait-free" queue structures in both strategies). Thanks.

    Read the article

  • C# COM Cross Thread problem

    - by user364676
    Hi, we're developing software to control a scientific measuring device. It provides a COM interface that defines several functions to set measurement parameters and fires an event when it has measured data. In order to test our software, I'm implementing a simulation of that device. The COM object runs a loop which periodically fires the event. Another loop in the client app should then set up the COM simulator using the given functions. I created a class for measurement parameters which will be instantiated when setting up a new measurement.

        // COM object
        public class MeasurementParams
        {
            public double Param1;
            public double Param2;
        }

        public class COM_Sim : ICOMDevice
        {
            public MeasurementParams newMeasurement;
            IClient client;

            public int NewMeasurement()
            {
                newMeasurement = new MeasurementParams();
            }

            public int SetParam1(double val)
            {
                // why is newMeasurement null when this method is called from the client loop?
                newMeasurement.Param1 = val;
            }

            void loop()
            {
                while (true)
                {
                    // fire event
                    client.HandleEvent();
                }
            }
        }

        public class Client : IClient
        {
            ICOMDevice server;

            public int HandleEvent()
            {
                // handle this event
                server.NewMeasurement();
                server.SetParam1(0.0);
            }

            void loop()
            {
                while (true)
                {
                    // do some stuff...
                    server.NewMeasurement();
                    server.SetParam1(0.0);
                }
            }
        }

    Both of the loops run in independent threads. When server.NewMeasurement() is called, the object on the server is set to a new instance, but in the next call the object is null again. When I do the same while handling the server event, it works perfectly, because the method runs in the server's thread. How do I make it work from the client thread as well? As the client is meant to work with the real device, I cannot modify the interfaces given by the manufacturer. Also, I need to set up measurements independently of the event handler, which will not fire regularly. I assume this problem is related to multithreaded COM behavior, but I found nothing on this topic.

    Read the article

  • Web-based game in Python + Django and client browser polling

    - by ty
    I am creating a text-based game that implements a basic model in which multiple (10+) players interact with data and one moderator watches them and sets certain environmental statistics that affect gameplay. Recently I have begun to familiarize myself with Django. It seems to me that it would be an excellent tool for creating a game quickly, particularly because the nature of my game depends largely on sets of data (which lends itself quite well to a database). I am wondering how to "push" changes made by the game moderator to the players (for example, the moderator can decide to display an image to all players). The game is turn-based, not real-time, but certain messages need to be pushed out in roughly real-time. My thoughts: I could have each player's browser poll a status periodically (say, every 30 seconds) to see if there is a message from a moderator. But this forces a lag and means different players might receive it at different times. And reducing this interval to <10 seems like a bad idea for the server. Is there a better way to inform clients of changes? Would you suggest something other than using a web framework like Django? Thanks!

    Read the article

  • Messaging pattern question

    - by Al Bundy
    Process A is calculating values for objects a1, a2, a3 etc. and is sending results to the middleware queue (RabbitMQ). Consumers read the queue and process these results further. Periodically process A has to send a snapshot of these values, so consumers can do some other calculations. Values for these objects might change independently. The queue might look like this: a1, a1, a2, a1, a2, a2, a3... Consumers process each item in the queue. The snapshot has to contain all objects, and consumers will process this message for all objects in one go. So the requirement is to have a queue like this: a1, a1, a3, a2, a2, [snapshot, a1, a2, a3], a3, a1 ... The problem is that these items are of different types: one type for objects like a1 and a2, and another for a snapshot. This means that they should be processed in different queues, but in that case there is a race condition: consumers might process objects before processing a snapshot. Is there any pattern to solve this (quite common) problem? We are using RabbitMQ for message queueing.
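
    One way to sidestep the race, sketched below, is to publish both kinds of payload to the same queue wrapped in a common envelope, so the snapshot cannot be overtaken by per-object messages that were published before it. This is only an illustration; the type and member names (Envelope, ObjectValue, Consumer, and so on) are made up for the sketch and are not part of RabbitMQ or the question.

        using System.Collections.Generic;

        public enum MessageKind { ObjectUpdate, Snapshot }

        // One envelope type for both kinds of payload, published to a single queue
        // so publication order is preserved end to end.
        public class Envelope
        {
            public MessageKind Kind;
            public ObjectValue Update;              // set when Kind == ObjectUpdate
            public List<ObjectValue> Snapshot;      // set when Kind == Snapshot
        }

        public class ObjectValue
        {
            public string Id;                       // "a1", "a2", ...
            public double Value;
        }

        public class Consumer
        {
            // A single handler dispatching on Kind keeps the ordering guarantee intact.
            public void Handle(Envelope msg)
            {
                if (msg.Kind == MessageKind.ObjectUpdate)
                    ProcessSingle(msg.Update);
                else
                    ProcessSnapshot(msg.Snapshot);  // all objects in one go
            }

            void ProcessSingle(ObjectValue v) { /* per-object calculation */ }
            void ProcessSnapshot(List<ObjectValue> all) { /* whole-set calculation */ }
        }

    The envelope would be serialized however the existing messages are (JSON, protobuf, .NET serialization); the key point is a single queue with a discriminator, rather than two queues that can race.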

    Read the article

  • Copy SQL Server data from one server to another on a schedule

    - by rwmnau
    I have a pair of SQL Servers at different webhosts, and I'm looking for a way to periodically update the one server using the other. Here's what I'm looking for:

    - As automated as possible - ideally, without any involvement on my part once it's set up.
    - Pushes a number of databases, in their entirety (including any schema changes), from one server to the other.
    - Freely allows changes on the source server without breaking my process. For this reason, I don't want to use replication, as I'd have to break it every time there's an update on the source, and then recreate the publication and subscription.
    - One database is about 4GB in size and contains binary data. I'm not sure if there's a way to export this to a script, but it would be a mammoth file if I did.

    Originally, I was thinking of writing something that takes a scheduled full backup of each database, FTPs the backups from one server to the other once they're done, and then the new server picks them up and restores them. The only downside I can see to this is that there's no way to know that the backups are done before starting to transfer them - can these backups be done synchronously? Also, the server being refreshed is our test server, so if there's some downtime involved in moving the data, that's fine. Does anybody out there have a better idea, or is what I'm currently considering the best non-replication way to go? Thanks for your help, everybody.

    UPDATE: I ended up designing a custom solution to get this done using BAT files, 7-Zip, command-line FTP, and OSQL, so it runs in a completely automatic way and aggregates the data from a dozen servers across the country. I've detailed the steps in a blog entry. Thanks for all your input!
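
    On the "can these backups be done synchronously?" point: a BACKUP DATABASE statement issued through ADO.NET blocks until SQL Server has finished writing the backup file, so the FTP step can simply start when the call returns. A minimal sketch, assuming the helper name, connection handling, and backup path come from your own configuration:

        using System.Data.SqlClient;

        // Runs a full backup and returns only once SQL Server reports it complete,
        // so a following FTP/copy step never sees a half-written .bak file.
        static void BackupDatabase(string connectionString, string dbName, string backupPath)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open();
                // dbName and backupPath come from trusted config here, not user input.
                string sql = "BACKUP DATABASE [" + dbName + "] TO DISK = N'" + backupPath + "' WITH INIT";
                using (SqlCommand cmd = new SqlCommand(sql, conn))
                {
                    cmd.CommandTimeout = 0;   // a multi-GB backup can easily exceed the default 30s
                    cmd.ExecuteNonQuery();    // blocks until the backup finishes
                }
            }
        }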

    Read the article

  • Is it possible to store pointers in shared memory without using offsets?

    - by Joseph Garvin
    When using shared memory, each process may mmap the shared region into a different area of its address space. This means that when storing pointers within the shared region, you need to store them as offsets from the start of the shared region. Unfortunately, this complicates the use of atomic instructions (e.g. if you're trying to write a lock-free algorithm). For example, say you have a bunch of reference-counted nodes in shared memory, created by a single writer. The writer periodically atomically updates a pointer 'p' to point to a valid node with a positive reference count. Readers want to atomically increment the reference count of the node 'p' points to, because 'p' points to the beginning of a node (a struct) whose first element is a reference count. Since p always points to a valid node, incrementing the ref count is safe, and makes it safe to dereference 'p' and access other members. However, this all only works when everything is in the same address space. If the nodes and the 'p' pointer are stored in shared memory, then clients suffer a race condition:

    1. x = read p
    2. y = x + offset
    3. Increment refcount at y

    During step 2, p may change and x may no longer point to a valid node. The only workaround I can think of is somehow forcing all processes to agree on where to map the shared memory, so that real pointers rather than offsets can be stored in the mmap'd region. Is there any way to do that? I see MAP_FIXED in the mmap documentation, but I don't know how I could pick an address that would be safe.

    Read the article

  • ASP.NET output caching - dynamically update dependencies

    - by ColinE
    Hi All, I have an ASP.NET application which requires output caching. I need to invalidate the cached items when the data returned from a web service changes, so a simple duration is not good enough. I have been doing a bit of reading about cache dependencies and think I have the right idea. It looks like I will need to create a cache dependency on my web service. To associate the page output with this dependency I think I should use the following method:

        Response.AddCacheItemDependency(cacheKey);

    The thing I am struggling with is what I should add to the cache. The dependency my page has is on a single value returned by the web service. My current thinking is that I should create a custom cache dependency by subclassing CacheDependency, and store the current value in the cache. I can then use Response.AddCacheItemDependency to form the dependency. I can then periodically check the value and call NotifyDependencyChanged in order to invalidate my cached HTTP response. The problem is, I need to ensure that the cache is flushed immediately, so a periodic check is not good enough. How can I ensure that my dependent object in the cache, which represents the value returned by the web service, is re-evaluated before the HTTP response is fetched from the cache? Regards, Colin E.
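
    For reference, a minimal sketch of the custom-dependency half of this idea (the class name and Invalidate method are made up here; whether it fits depends on how the web-service change is detected). CacheDependency exposes a protected NotifyDependencyChanged, so the usual pattern is to wrap it in a public method that the change-detection code calls the moment the value is known to have changed, flushing the cached response immediately rather than waiting for the next periodic check:

        using System;
        using System.Web.Caching;

        // Signalled explicitly when the web-service value changes.
        public class WebServiceValueDependency : CacheDependency
        {
            // Call from whatever code detects the change (callback, polling thread, etc.).
            public void Invalidate()
            {
                NotifyDependencyChanged(this, EventArgs.Empty);
            }
        }

    The instance has to be kept somewhere reachable (a static field, or the cache itself) so the detection code can find it, and it would be attached to the response with Response.AddCacheDependency rather than AddCacheItemDependency, since there is no cache key involved.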

    Read the article

  • Uninterruptible Windows Process

    - by Nullstr1ng
    Hi Guys, We're starting a new custom project right now for a client, and one of the requirements is that the process cannot be terminated unless the system is shutting down, restarting, or logging off. This application monitors the USB interface. We will be using WMI to query the device periodically. The client wants to run the application on the Windows XP operating system and doesn't like installing .NET, so we targeted Visual Basic 6 as our language. My main concern is that this application cannot be terminated. Our project adviser talks about anti-virus software, and yes, some anti-virus processes cannot be terminated. I was thinking about how to do the same in Visual Basic 6. I know there will be APIs involved in the project, but where should I go? Using APIs is fine with me. I saw some articles that convert the EXE to a SERVICE, create a Windows service in Visual Basic 6, etc. So please share your thoughts.

    Read the article

  • C# reference collection for storing reference types

    - by ivo s
    I'd like to implement a collection (something like List<T>) which would hold all the objects that I have created in the entire life span of my application, as if it's an array of pointers in C++. The idea is that when my process starts I can use a central factory to create all objects and then periodically validate/invalidate their state. Basically I want to make sure that my process only deals with valid instances and I don't re-fetch information I already fetched from the database. So all my objects will basically be in one place - my collection. A cool thing I can do with this is avoid database calls to get data from the database if I already got it (even if I updated it after retrieval it's still up-to-date if, of course, some other process didn't update it, but that's a different concern). I don't want to be calling new Customer("James Thomas"); again if I initialized James Thomas already sometime in the past. Currently I will end up with multiple copies of the same object across the appdomain - some out of sync, others in sync - and even though I deal with this using a timestamp field on the MSSQL server, I'd like to keep only one copy per customer in my appdomain (per process would be even better). I can't use regular collections like List or ArrayList, for example, because I cannot pass parameters by their real local reference to their existing Add() methods using ref, so that's not too good, I think. So how can this be implemented - can it be implemented at all? A 'linked list' type of class with all methods working with ref & out params is what I'm thinking of now, but it may get ugly pretty quickly. Is there another way to implement such a collection, like RefList<T>.Add(ref T obj)? So the bottom line is: I don't want to re-create an object if I've already created it before during the entire application life, unless I decide to re-create it explicitly (maybe it's out-of-date or something, so I have to fetch it again from the db). Are there alternatives, maybe?
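
    What is described here is essentially an identity map. Since class instances in C# are reference types, an ordinary Dictionary keyed by the entity's ID already stores references - no ref-parameter collection is needed; every caller that asks for the same key gets back the very same instance. A minimal sketch (the Customer type, its Id key, and LoadFromDatabase are illustrative stand-ins, not from the question):

        using System.Collections.Generic;

        // One instance per key for the life of the process, so callers never
        // end up holding two out-of-sync copies of the same database row.
        public class CustomerRepository
        {
            private readonly Dictionary<int, Customer> _map = new Dictionary<int, Customer>();
            private readonly object _sync = new object();

            public Customer Get(int id)
            {
                lock (_sync)
                {
                    Customer c;
                    if (!_map.TryGetValue(id, out c))
                    {
                        c = LoadFromDatabase(id);    // hit the database only on first request
                        _map.Add(id, c);
                    }
                    return c;                        // same reference for every caller
                }
            }

            public void Invalidate(int id)           // force a re-fetch next time
            {
                lock (_sync) { _map.Remove(id); }
            }

            private Customer LoadFromDatabase(int id)
            {
                return new Customer { Id = id };     // placeholder for the real data access
            }
        }

        public class Customer { public int Id; public string Name; }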

    Read the article

  • How to handle dates that repeat indefinitely

    - by Addsy
    I am implementing a fairly simple calendar on a website using PHP and MySQL. I want to be able to handle dates that repeat indefinitely and am not sure of the best way to do it. For a time-limited repeating event it seems to make sense to just add each event within the timeframe into my db table and group them with some form of recursion id. But when there is no limit to how often the event repeats, is it better to:

    a) put records in the db for a specific time frame (e.g. the next 2 years) and then periodically check and add new records as time goes by - the problem with this is that if someone is looking 3 years ahead, the event won't show up; or

    b) not actually have records for each event but instead, when I check in my PHP code for events within a specified time period, calculate whether a repeated event will occur within this time period - the problem with this is that it means there isn't a specific record for each event, which I can see being a pain when I then want to associate other info (attendance etc.) with that event. It also seems like it might be a bit slow.

    Has anyone tried either of these methods? If so, how did it work out? Or is there some other ingenious crafty method I'm missing?

    Read the article

  • What can cause Bonjour to not call me back during browsing?

    - by millenomi
    I have a rather popular Bonjour-based application in App Store. It works perfectly, but around 0.2% of my users report a bizarre bug: "no arrows appear on the edges of the screen, so I can't share stuff with other people!". Needless to say, displaying these arrows is tied to the browsing of a particular Bonjour service on the local domain. The problem is, the Apple review team seems to intermittently happen to be in this 0.2%. This isn't good for review results, as you might imagine. No matter how much I try, I cannot reproduce this bug. From the few logs I have, it looks like my app is running correctly, just not receiving NSNetServiceBrowser delegate calls. What can cause this? Things I've tried: Having a shorter service name < 14 chars in length to be in spec. Publishing on @"local." rather than @"" (aka Go Look For The Default Registration Domain). My app is rather useless on a wide-area network anyway. Things I haven't tried: restarting the browsing machinery periodically. (I have two browsers, though, one looking for the legacy longer name, one for the new shorter one.) What to do?

    Read the article

  • Long running operations (threads) in a web (asp.net) environment

    - by rrejc
    I have an ASP.NET (MVC) web site. As part of its functions I will have to support some long-running operations, for example:

    Initiated by a user: a user can upload an (xml) file to the server. On the server I need to extract the file, do some manipulation (insert into the db), etc. This can take from one minute to ten minutes (or even more - it depends on the file size). Of course I don't want to block the request while the import is running; I want to redirect the user to some progress page where he will have a chance to watch the status, see errors or even cancel the import. This operation will not be used frequently, but it may happen that two users try to import data at the same time, and it would be nice to run the imports in parallel. At the beginning I was thinking of creating a new thread in IIS (in the controller action) and running the import on that new thread, but I am not sure if this is a good idea (creating worker threads on a web server). Should I use Windows services or some other approach?

    Initiated by the system: I will have to periodically update a Lucene index with new data, and I will have to send mass emails (in the future). Should I implement this as a job in the site and run the job via Quartz.NET, or should I also create a Windows service or something?

    What are the best practices when it comes to running site "jobs"? Thanks!
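
    For the in-process option, a minimal sketch of the usual pattern is below (type and member names are illustrative): the controller action only queues a work item and redirects to a progress page, and one dedicated background thread drains the queue, so requests are never blocked and concurrent uploads simply queue up behind each other. The standard caveat applies and is the usual argument for a Windows service instead: IIS can recycle the application pool at any time and take the worker thread with it.

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        public static class ImportQueue
        {
            private static readonly BlockingCollection<Guid> _queue = new BlockingCollection<Guid>();
            private static readonly ConcurrentDictionary<Guid, string> _status =
                new ConcurrentDictionary<Guid, string>();

            static ImportQueue()
            {
                var worker = new Thread(Run) { IsBackground = true };
                worker.Start();
            }

            // Called from the controller action after saving the uploaded file.
            public static Guid Enqueue()
            {
                var id = Guid.NewGuid();
                _status[id] = "Queued";
                _queue.Add(id);
                return id;                           // redirect to a progress page for this id
            }

            // Polled by the progress page.
            public static string GetStatus(Guid id)
            {
                string s;
                return _status.TryGetValue(id, out s) ? s : "Unknown";
            }

            private static void Run()
            {
                foreach (var id in _queue.GetConsumingEnumerable())
                {
                    _status[id] = "Running";
                    // ... extract the uploaded file and insert into the database here ...
                    _status[id] = "Done";
                }
            }
        }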

    Read the article

  • Need a way to determine if a file is done being written to.

    - by Khorkrak
    The situation I'm in is this - there's a process that's writing to a file, and sometimes the file is rather large, say 400 - 500MB. I need to know when it's done writing. How can I determine this? If I look in the directory I'll see it there, but it might not be done being written. Plus this needs to be done remotely - as in on the same internal LAN but not on the same computer - and typically the process that wants to know when the file writing is done is running on a Linux box, with the process that's writing the file and the file itself on a Windows box. No, Samba isn't an option. xmlrpc communication to a service on that Windows box is an option, as well as using snmp to check, if that's viable.

    Ideally: works on either Linux or Windows - meaning the solution is OS independent - and works for any type of file.

    Good enough: works just on Windows but can be done through some library or whatever that can be accessed with Python, and works only for PDF files.

    The current best idea is to periodically open the file in question from some process on the Windows box and look at the last bytes, checking for the PDF end tag and accounting for the eol differences, because the file may have been created on Linux or Windows.

    Read the article

  • Which ORM to use?

    - by Paja
    I'm developing an application which will have these classes:

        class Shortcut
        {
            public string Name { get; }
            public IList<Trigger> Triggers { get; }
            public IList<Action> Actions { get; }
        }

        class Trigger
        {
            public string Name { get; }
        }

        class Action
        {
            public string Name { get; }
        }

    And I will have 20+ more classes, which will derive from Trigger or Action, so in the end, I will have one Shortcut class, 15 Action-derived classes and 5 Trigger-derived classes. My question is, which ORM will best suit this application? EF, NH, SubSonic, or maybe something else (Linq2SQL)? I will be periodically releasing new application versions, adding more triggers and actions (or changing current triggers/actions), so I will have to update the database schema as well. I don't know if EF or NH provides any good methods to easily update the schema. Or if they do, is there any tutorial how to do that? I've already found this article about NH schema updating, quoting: "Fortunately NHibernate provides us the possibility to update an existing schema, that is NHibernate creates an update script which can then be applied to the database." I've never found how to actually generate the update script, so I can't tell NH to update the schema. Maybe I've misread something, I just didn't find it. Last note: If you suggest EF, will EF 1.0 be suitable as well? I would rather use some older .NET than 4.0.
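
    On the NHibernate side specifically, the piece the quoted article is alluding to is the hbm2ddl SchemaUpdate tool. A minimal sketch, assuming a normal hibernate.cfg.xml / mapping setup (this uses the standard NHibernate.Tool.hbm2ddl API rather than anything project-specific):

        using NHibernate.Cfg;
        using NHibernate.Tool.hbm2ddl;

        static class SchemaUpdater
        {
            static void Main()
            {
                var cfg = new Configuration();
                cfg.Configure();                    // reads hibernate.cfg.xml and the mappings

                var update = new SchemaUpdate(cfg);
                // First argument: write the generated script to standard output.
                // Second argument: actually apply the changes to the database.
                update.Execute(true, false);        // preview the script only
                // update.Execute(false, true);     // or apply it directly
            }
        }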

    Read the article

  • Is there a standard pattern for scanning a job table executing some actions?

    - by Howiecamp
    (I realize that my title is poor. If after reading the question you have an improvement in mind, please either edit it or tell me and I'll change it.) I have the relatively common scenario of a job table which has one row for each thing that needs to be done. For example, it could be a list of emails to be sent. The table looks something like this:

        ID    Completed    TimeCompleted    anything else...
        ----  ---------    -------------    ----------------
        1     No                            blabla
        2     No                            blabla
        3     Yes          01:04:22         ...

    I'm looking for either a standard practice/pattern (or code - C#/SQL Server preferred) for periodically "scanning" (I use the term "scanning" very loosely) this table, finding the not-completed items, doing the action and then marking them completed once done successfully. In addition to the basic process for accomplishing the above, I'm considering the following requirements:

    - I'd like some means of "scaling linearly", e.g. running multiple "worker processes" simultaneously or threading or whatever. (Just a specific technical thought - I'm assuming that as a result of this requirement, I need some method of marking an item as "in progress" to avoid attempting the action multiple times.)
    - Each item in the table should only be executed once.

    Some other thoughts:

    - I'm not particularly concerned with whether the implementation is done in the database (e.g. in T-SQL or PL/SQL code) or in some external program code (e.g. a standalone executable or some action triggered by a web page) which is executed against the database.
    - Whether the "doing the action" part is done synchronously or asynchronously is not something I'm considering as part of this question.
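
    For the multiple-worker / "execute once" requirements, the usual SQL Server approach is to claim rows atomically with UPDATE ... OUTPUT and the READPAST hint, so concurrent workers skip each other's claimed rows instead of blocking or double-processing. A sketch is below; the JobQueue table name and the InProgress flag are assumptions for the example (the question itself notes some in-progress marker is needed), not something prescribed by SQL Server.

        using System.Collections.Generic;
        using System.Data.SqlClient;

        // Atomically claim up to batchSize unprocessed jobs for this worker.
        static List<int> ClaimBatch(SqlConnection conn, int batchSize)
        {
            const string sql = @"
                UPDATE TOP (@n) dbo.JobQueue WITH (ROWLOCK, READPAST)
                SET InProgress = 1
                OUTPUT inserted.ID
                WHERE Completed = 0 AND InProgress = 0;";

            var ids = new List<int>();
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@n", batchSize);
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        ids.Add(reader.GetInt32(0));
            }
            // Do the work for each id, then set Completed = 1 and TimeCompleted,
            // or clear InProgress again if the action fails so it can be retried.
            return ids;
        }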

    Read the article

  • What's a good Java-based Master-Slave communication mechanism?

    - by plecong
    I'm creating a Java application that requires master-slave communication between JVMs, possibly residing on the same physical machine. There will be a "master" server running inside a JEE application server (i.e. JBoss) that will have "slave" clients connect to it and dynamically register themselves for communication (that is, the master will not know the IP addresses/ports of the slaves, so it cannot be configured in advance). The master server acts as a controller that will dole work out to the slaves, and the slaves will periodically respond with notifications, so there would be bi-directional communication. I was originally thinking of RPC-based systems where each side would be a server, but it could get complicated, so I'd prefer a mechanism where there's an open socket and they talk back and forth. I'm looking for a communication mechanism that would be low-latency where the messages would be mostly primitive types, so no serious serialization is necessary. Here's what I've looked at:

    - RMI
    - JMS: Built in to Java; the "slave" clients would connect to the existing ConnectionFactory in the application server.
    - JAX-WS/RS: Both master and slave would be servers exposing an RPC interface for bi-directional communication.
    - JGroups/Hazelcast: Use shared distributed data structures to facilitate communication.
    - Memcached/MongoDB: Use these as "queues" to facilitate communication, though the clients would have to poll, so there would be some latency.
    - Thrift: This does seem to keep a persistent connection, but I'm not sure how to integrate/embed a Thrift server into JBoss.
    - WebSocket/Raw Socket: This would work, but would require a lot more custom code than I'd like.

    Is there any technology I'm missing?

    Edit: Also looked at:

    - JMX: Have the client connect to JBoss' JMX server and receive JMX notifications for bidirectional comms.

    Read the article

  • Core Data: Deleting causes 'NSObjectInaccessibleException' from NSOperation with a reference to a deleted object

    - by Bryan Irace
    My application has NSOperation subclasses that fetch and operate on managed objects. My application also periodically purges rows from the database, which can result in the following race condition:

    1. A background operation fetches a bunch of objects (from a thread-specific context). It will iterate over these objects and do something with their properties.
    2. A bunch of rows are deleted in the main managed object context.
    3. The background operation accesses a property on an object that was deleted from the main context. This results in an 'NSObjectInaccessibleException', reason: 'CoreData could not fulfill a fault'.

    Ideally, the objects that are fetched by the NSOperation can be operated on even if one is deleted in the main context. The best way I can think of to achieve this is either to:

    - Call [request setReturnsObjectsAsFaults:NO] to ensure that Core Data won't try to fulfill a fault for an object that no longer exists in the main context. The problem here is I may need to access the object's relationships, which (to my understanding) will still be faulted.
    - Iterate through the managed objects up front and copy the properties I will need into separate non-managed objects. The problem here is that (I think) I will need to synchronize/lock this part, in case an object is deleted in the main context before I can finish copying.

    Am I missing something obvious? It doesn't seem like what I'm trying to accomplish is too out of the ordinary. Thanks for your help.

    Read the article

  • Strange Sql Server 2005 behavior

    - by Justin C
    Background: I have a site built in ASP.NET with SQL Server 2005 as its database. The site is the only site on a Windows Server 2003 box sitting in my client's server room. The client is a local school district, so for data security reasons there is no remote desktop access and no remote SQL Server connection, so if I have to service the database I have to be at the terminal. I do have FTP access to update ASP code.

    Problem: I was contacted yesterday about an issue with the system. When I looked into it, it seems a bug that I had solved nearly a year ago had returned. I have a stored procedure that used to take an int as a parameter, but a year ago we changed the structure of the system and updated the stored procedure to take an nvarchar(10). The stored procedure somehow changed back to taking an int instead of an nvarchar. There is an external hard drive connected to the server that copies data periodically and has the ability to restore the server in case of failure. I would have assumed that somehow an older version of the database had been restored, but data that I know was inserted 7 days and 1 day before the bug occurred is still in the database.

    Question: Is there any way that the structure of a SQL Server 2005 database can revert to a previous version, or be restored to a previous version, without touching the actual data? No one else should have access to the server, so I'm going a little insane trying to figure out how this even happened. Any ideas?

    Read the article

  • Easiest way to programmatically create email accounts for use by web app?

    - by cadwag
    I have a web app that creates groups. Each group gets its own discussion board. I would like to add the feature of allowing users to send emails to their "group" within the web app to start a new discussion, or reply to an email from the "group" to make a new post in an already ongoing discussion. For example, to start a new discussion a user would send:

        From: [email protected]
        To: [email protected]
        Subject: Hey guys! Meet up on Tuesday?
        Body: Yes? No?

    All members of the group would receive an email:

        From: [email protected]
        Subject: Hey guys! Meet up on Tuesday?
        Body: Yes? No?
        Reply-To: group1@example.com

    And the app would start a new discussion with:

        Author: Bill Fake
        Subject: Hey guys! Meet up on Tuesday?
        Body: Yes? No?

    This is a pretty standard feature for Google Groups and other big sites. So how do us mere mortals go about implementing this? Is there an easy way? Or do I:

    1. Install postfix
    2. Write scripts to create new accounts for each new group
    3. Access the server periodically via pop3 (or imap?) to retrieve the email messages sent to each account
    4. Parse the message for content

    If it's the latter, did I miss a step?

    Read the article

  • WPF ProgressBar - TargetParameterCountException

    - by Dr_Asik
    I am making my first WPF application, where I use the YouTube .NET API to upload a video to YouTube using the ResumableUploader. This ResumableUploader works asynchronously and provides an event, AsyncOperationProgress, to periodically report its progress percentage. I want a ProgressBar that will display this progress percentage. Here is some of the code I have for that:

        void BtnUpload_Click(object sender, RoutedEventArgs e)
        {
            // generate video
            uploader = new ResumableUploader();
            uploader.AsyncOperationCompleted += OnDone;
            uploader.AsyncOperationProgress += OnProgress;
            uploader.InsertAsync(authenticator, newVideo.YouTubeEntry, new UserState());
        }

        void OnProgress(object sender, AsyncOperationProgressEventArgs e)
        {
            Dispatcher.BeginInvoke((SendOrPostCallback)delegate
            {
                PgbUpload.Value = e.ProgressPercentage;
            }, DispatcherPriority.Background, null);
        }

    where PgbUpload is my progress bar and the other identifiers are not important for the purpose of this question. When I run this, OnProgress will be hit a few times, and then I will get a TargetParameterCountException. I have tried several different syntaxes for invoking the method asynchronously, none of which worked. I am sure the problem is the delegate, because if I comment it out the code works fine (but the ProgressBar isn't updated, of course). Thanks for any help.
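
    A likely cause, sketched below: TargetParameterCountException means the argument list handed to BeginInvoke doesn't match the delegate's parameter list (SendOrPostCallback expects exactly one object argument, and the call above supplies none). One common way around it, assuming .NET 3.5 or later for the parameterless Action and lambda syntax, is to capture the value first and invoke a delegate that takes no arguments at all:

        void OnProgress(object sender, AsyncOperationProgressEventArgs e)
        {
            int percent = e.ProgressPercentage;          // capture before marshalling to the UI thread
            Dispatcher.BeginInvoke(
                new Action(() => PgbUpload.Value = percent),
                DispatcherPriority.Background);
        }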

    Read the article

  • Update database every 30 minutes once

    - by Ravi
    Hi, I am working on a C#.NET Windows application with SQL Server 2005. In this project I am using an ADO.NET data service for database maintenance. I work in the industrial automation domain, where data is read from a device continuously for more than 8 hours. After reading data from the device, I have to periodically update the database based on a trigger. For example, reading starts at 9.00AM and the trigger fires at 9.50AM; once the trigger fires, the last 30 minutes of data (9.20AM to 9.50AM) have to be stored in the database. After the trigger fires, data keeps being read from the device and stored in the database. At 10.00AM the trigger goes off again, at which point storing data to the database has to stop. The trigger fires again at 11.00AM; once it fires, the last 30 minutes of data (10.30AM to 11.00AM) have to be stored in the database, and after that, data again keeps being read from the device and stored in the database. After 10.00AM, while the trigger is not firing, data is only kept locally. Here I don't know where and how to keep the data temporarily until the trigger fires, or how, once the trigger fires, to bring the last 30 minutes of data in and store it in the database. I don't know how to achieve this. It would be great if anyone could suggest an idea. Thanks
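
    One way to hold the data between triggers, sketched below with illustrative names, is a rolling in-memory buffer: readings are appended as they arrive and anything older than 30 minutes is trimmed, so the moment the trigger fires the buffer contains exactly the last 30 minutes, ready to be written to the database in one batch.

        using System;
        using System.Collections.Generic;

        public class RollingBuffer
        {
            private readonly Queue<KeyValuePair<DateTime, double>> _items =
                new Queue<KeyValuePair<DateTime, double>>();
            private readonly TimeSpan _window = TimeSpan.FromMinutes(30);

            // Called for every reading coming from the device.
            public void Add(DateTime timestamp, double value)
            {
                _items.Enqueue(new KeyValuePair<DateTime, double>(timestamp, value));
                Trim(timestamp);
            }

            // Called when the trigger fires: returns the last 30 minutes for insertion.
            public List<KeyValuePair<DateTime, double>> TakeSnapshot(DateTime now)
            {
                Trim(now);
                return new List<KeyValuePair<DateTime, double>>(_items);
            }

            private void Trim(DateTime now)
            {
                while (_items.Count > 0 && now - _items.Peek().Key > _window)
                    _items.Dequeue();
            }
        }

    If the data must survive an application restart, the same idea works with a small local table or file instead of an in-memory queue; the trigger handler then reads back the last 30 minutes and bulk-inserts it into the main database.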

    Read the article
