Search Results

Search found 13068 results on 523 pages for 'copy and paste'.

Page 365/523

  • Outlook Macro not executing

    - by Tim
    Hello all, we are using an Outlook macro for incoming emails to unzip the attachments, log them in SQL Server and copy them to a special working folder where a Windows service processes them. The problems are: the user must be logged in at the server, otherwise the macro won't run (I think there is no workaround for this), and whenever a modal dialog pops up in Outlook and waits for user input, the macro won't execute either (e.g. AutoArchive). I have disabled the AutoArchive-every-x-days option. Because it's a production environment, are there other possible windows preventing the macro from executing that I must now disable? Our Outlook macro solution surely is not the best because of the above problems. Are there better alternatives to read emails, unzip attachments and move them to a working folder? My language is VB.Net and the server OS is Windows 2008. Regards, Tim

    Read the article

  • Query size of block device file in Python

    - by ??O?????
    Hello. I have a Python script that reads a file (typically from optical media), marking the unreadable sectors to allow a re-attempt to read them on a different optical reader. I discovered that my script does not work with block devices (e.g. /dev/sr0), which I read in order to create a copy of the contained ISO9660/UDF filesystem, because os.stat().st_size is zero. The algorithm currently needs to know the file size in advance; I can change that, but the issue (of knowing the block device size) remains, and it's not answered here, so I open this question. I am aware of the following two related SO questions: "Determine the size of a block device" (/proc/partitions, ioctl through ctypes) and "How to check file size in Python?" (about non-special files). Therefore, I'm asking: in Python, how can I get the file size of a block device file?
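
    A minimal sketch of two ways to do this on Linux (the device path below is only a placeholder): seeking to the end of the opened device, which needs no magic constants, or the BLKGETSIZE64 ioctl mentioned in the linked question.

        import fcntl
        import os
        import struct

        def device_size(path):
            # Seek to the end: returns the size in bytes, and works for block devices.
            fd = os.open(path, os.O_RDONLY)
            try:
                return os.lseek(fd, 0, os.SEEK_END)
            finally:
                os.close(fd)

        def device_size_ioctl(path):
            BLKGETSIZE64 = 0x80081272          # Linux-specific request code (64-bit size_t variant)
            with open(path, "rb") as dev:
                buf = fcntl.ioctl(dev.fileno(), BLKGETSIZE64, b"\0" * 8)
            return struct.unpack("Q", buf)[0]

        print(device_size("/dev/sr0"))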

    Read the article

  • How to implement Cocoa copyWithZone on derived object in MonoMac C#?

    - by Justin Aquadro
    I'm currently porting a small WinForms-based .NET application to use a native Mac front-end with MonoMac. The application has a tree control with icons and text, which does not exist out of the box in Cocoa. So far, I've ported almost all of the ImageAndTextCell code in Apple's DragNDrop example (https://developer.apple.com/library/mac/#samplecode/DragNDropOutlineView/Listings/ImageAndTextCell_m.html#//apple_ref/doc/uid/DTS40008831-ImageAndTextCell_m-DontLinkElementID_6), which is assigned to an NSOutlineView as a custom cell. It seems to be working almost perfectly, except that I have not figured out how to properly port the copyWithZone method. Unfortunately, this means the internal copies that NSOutlineView makes do not have the image field, which leads to the images briefly vanishing during expand and collapse operations. The Objective-C code in question is:

        - (id)copyWithZone:(NSZone *)zone {
            ImageAndTextCell *cell = (ImageAndTextCell *)[super copyWithZone:zone];
            // The image ivar will be directly copied; we need to retain or copy it.
            cell->image = [image retain];
            return cell;
        }

    The first line is what's tripping me up, as MonoMac does not expose a copyWithZone method, and I don't know how to otherwise call it.

    Update: Based on current answers and additional research and testing, I've come up with a variety of models for copying an object.

        static List<ImageAndTextCell> _refPool = new List<ImageAndTextCell>();

        // Method 1
        static IntPtr selRetain = Selector.GetHandle("retain");

        [Export("copyWithZone:")]
        public virtual NSObject CopyWithZone(IntPtr zone) {
            ImageAndTextCell cell = new ImageAndTextCell() { Title = Title, Image = Image };
            Messaging.void_objc_msgSend(cell.Handle, selRetain);
            return cell;
        }

        // Method 2
        [Export("copyWithZone:")]
        public virtual NSObject CopyWithZone(IntPtr zone) {
            ImageAndTextCell cell = new ImageAndTextCell() { Title = Title, Image = Image };
            _refPool.Add(cell);
            return cell;
        }

        [Export("dealloc")]
        public void Dealloc() {
            _refPool.Remove(this);
            this.Dispose();
        }

        // Method 3
        static IntPtr selRetain = Selector.GetHandle("retain");

        [Export("copyWithZone:")]
        public virtual NSObject CopyWithZone(IntPtr zone) {
            ImageAndTextCell cell = new ImageAndTextCell() { Title = Title, Image = Image };
            _refPool.Add(cell);
            Messaging.void_objc_msgSend(cell.Handle, selRetain);
            return cell;
        }

        // Method 4
        static IntPtr selRetain = Selector.GetHandle("retain");
        static IntPtr selRetainCount = Selector.GetHandle("retainCount");

        [Export("copyWithZone:")]
        public virtual NSObject CopyWithZone(IntPtr zone) {
            ImageAndTextCell cell = new ImageAndTextCell() { Title = Title, Image = Image };
            _refPool.Add(cell);
            Messaging.void_objc_msgSend(cell.Handle, selRetain);
            return cell;
        }

        public void PeriodicCleanup() {
            List<ImageAndTextCell> markedForDelete = new List<ImageAndTextCell>();
            foreach (ImageAndTextCell cell in _refPool) {
                uint count = Messaging.UInt32_objc_msgSend(cell.Handle, selRetainCount);
                if (count == 1)
                    markedForDelete.Add(cell);
            }
            foreach (ImageAndTextCell cell in markedForDelete) {
                _refPool.Remove(cell);
                cell.Dispose();
            }
        }

        // Method 5
        static IntPtr selCopyWithZone = Selector.GetHandle("copyWithZone:");

        [Export("copyWithZone:")]
        public virtual NSObject CopyWithZone(IntPtr zone) {
            IntPtr copyHandle = Messaging.IntPtr_objc_msgSendSuper_IntPtr(SuperHandle, selCopyWithZone, zone);
            ImageAndTextCell cell = new ImageAndTextCell(copyHandle) { Image = Image };
            _refPool.Add(cell);
            return cell;
        }

    Method 1: Increases the retain count of the unmanaged object. The unmanaged object will persist forever (I think? dealloc is never called), and the managed object will be harvested early. Seems to be lose-lose all around, but it runs in practice.

    Method 2: Saves a reference to the managed object. The unmanaged object is left alone, and dealloc appears to be invoked at a reasonable time by the caller. At that point the managed object is released and disposed. This seems reasonable, but on the downside the base type's dealloc won't be run (I think?).

    Method 3: Increases the retain count and saves a reference. Unmanaged and managed objects leak forever.

    Method 4: Extends Method 3 by adding a cleanup function that is run periodically (e.g. during Init of each new ImageAndTextCell object). The cleanup function checks the retain counts of the stored objects. A retain count of 1 means the caller has released it, so we should as well. Should eliminate the leaking, in theory.

    Method 5: Attempts to invoke copyWithZone on the base type, then constructs a new ImageAndTextCell object with the resulting handle. Seems to do the right thing (the base data is cloned). Internally, NSObject bumps the retain count on objects constructed like this, so we also use the PeriodicCleanup function to release these objects when they're no longer used.

    Based on the above, I believe Method 5 is the best approach since it should be the only one that results in a truly correct copy of the base type data, but I don't know if the approach is inherently dangerous (I am also making some assumptions about the underlying implementation of NSObject). So far nothing bad has happened "yet", but if anyone is able to vet my analysis I would be more confident going forward.

    Read the article

  • Should I trust Redis for data integrity?

    - by Jiaji
    In my current project, I have PostgreSQL as my master DB and Redis as kind of a slave. For example, when a user adds another as a friend, the relationship is first stored in PostgreSQL and then a friend list in Redis is updated. When a user's friend list is requested, it is pulled out of Redis instead of PostgreSQL. The question is: when I update the friend list in Redis, should I get a fresh copy out of PostgreSQL and replace the old list in Redis with the new one, or should I keep the old list and simply SADD the user id into it? The latter is of course better for performance, but intuitively the former does a better job of keeping the data consistent. And if something like Celery is used, is the second method worth the risk?
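
    A minimal sketch of the two options, assuming redis-py; fetch_friend_ids() is a hypothetical helper that queries PostgreSQL for the authoritative list.

        import redis

        r = redis.Redis()

        def rebuild_friend_list(user_id):
            # Option 1: rebuild the whole set from PostgreSQL (safer, more work per update).
            fresh = fetch_friend_ids(user_id)      # hypothetical PostgreSQL query
            key = "friends:%d" % user_id
            pipe = r.pipeline(transaction=True)    # delete + refill as one atomic step
            pipe.delete(key)
            if fresh:
                pipe.sadd(key, *fresh)
            pipe.execute()

        def add_friend(user_id, friend_id):
            # Option 2: incremental update (cheap, but the set drifts if any write is lost).
            r.sadd("friends:%d" % user_id, friend_id)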

    Read the article

  • How do I pipe the Java console output to a file?

    - by Ced
    I found a bug in an application that completely freezes the JVM. The produced stack trace would provide valuable information for the developers and I would like to retrieve it from the Java console. When the JVM crashes, the console is frozen and I cannot copy the contained text anymore. Is there a way to pipe the Java console directly to a file, or some other means of accessing the console output of a Java application? Update: I forgot to mention, this has to be done without changing the code; I am a manual tester. Update 2: This is under Windows XP and it's actually a Web Start application. Piping the output of javaws jnlp-url does not work (empty file).

    Read the article

  • Sql Server copying table information between databases

    - by Andrew
    Hi, I have a script that I am using to copy data from a table in one database to a table in another database on the same SQL Server instance. The script works great when I am connected to the SQL Server instance as myself, as I have dbo access to both databases. The problem is that this won't be the case on the client's SQL Server. They have separate logins for each database (SQL authentication logins). Does anyone know if there is a way to run a script under these circumstances? The script would be doing something like:

        use sourceDB
        Insert targetDB.dbo.tblTest (id, test_name)
        Select id, test_name from dbo.tblTest

    Thanks
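
    If the two logins can't be granted cross-database access, one workaround is to do the copy client-side with two separate connections. A minimal sketch assuming Python and pyodbc; the server name, credentials and driver string below are placeholders.

        import pyodbc

        src = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=server;"
                             "DATABASE=sourceDB;UID=source_user;PWD=source_pwd")
        dst = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=server;"
                             "DATABASE=targetDB;UID=target_user;PWD=target_pwd")

        rows = src.cursor().execute("SELECT id, test_name FROM dbo.tblTest").fetchall()

        cur = dst.cursor()
        cur.fast_executemany = True        # send the inserts in batches
        cur.executemany("INSERT INTO dbo.tblTest (id, test_name) VALUES (?, ?)",
                        [tuple(r) for r in rows])
        dst.commit()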

    Read the article

  • Recursive wildcards in GNU make?

    - by Roger Lipscombe
    It's been a while since I've used make, so bear with me... I've got a directory, flac, containing .FLAC files, and a corresponding directory, mp3, containing MP3 files. If a FLAC file is newer than the corresponding MP3 file (or the corresponding MP3 file doesn't exist), then I want to run a bunch of commands to convert the FLAC file to an MP3 file and copy the tags across. The kicker: I need to search the flac directory recursively and create corresponding subdirectories in the mp3 directory. And I want to use make to drive this.

    Read the article

  • macdeployqt and third party libraries

    - by user338170
    I've got an application project that depends on a couple of shared libraries that I have created myself. According to the Qt 4.6 documentation "Deploying an Application on Mac OS X": "Note: If you want a 3rd party library to be included in your application bundle, then you must add an explicit lib entry for that library to your application's .pro file. Otherwise, the macdeployqt tool will not copy the 3rd party .dylib into the bundle." I have added lib entries to my application's .pro file, but the libraries that I have written do not get copied into the bundle when I execute macdeployqt. I have the following in my .pro file:

        LIBS += -L../Libraries -lMyLib

    Everything builds okay; it's just when I try to run from the bundle that I run into problems, i.e. "image not found" errors. Is there a bug in macdeployqt or do I have to add something more to my .pro file?

    Read the article

  • SSIS - Update flag of selected rows from more than one table

    - by Rob Bowman
    Hi, I have an SSIS package that copies data from table A to table B and sets a flag in table A so that the same data is not copied subsequently. This works great by using the following as the SQL command text on the ADO.NET Source object:

        update transfer
        set ProcessDateTimeStamp = GetDate(), LastUpdatedBy = 'legacy processed'
        output inserted.*
        where LastUpdatedBy = 'legacy' and ProcessDateTimeStamp is not null

    The problem I have is that I need to run a similar data copy but from two source tables joined on a primary/foreign key: select from table A joined to table B, then update the flag in table A. I don't think I can use the technique above because I don't know where I'd put the join! Is there another way around this problem? Thanks, Rob.

    Read the article

  • How Do I "open folder in explorer" in Netbeans PHP

    - by Hippyjim
    I'm attempting the conversion from Eclipse PDT to NetBeans PHP, and so far I'm really impressed. What would make things perfect is being able to open up a file structure in the standard Windows Explorer. Say I want to copy in a data file I've just tweaked in another app, or downloaded from a remote server: I have to navigate manually to the folder structure that I already have open in the IDE. It's just about the only thing from Eclipse that I miss in NetBeans. Any suggestions how I can reproduce the functionality?

    Read the article

  • LINQ-to-SQL and SQL Compact - database file sharing problem

    - by Eye of Hell
    Hello. I'm learning LINQ-to-SQL right now and I have written a simple application that defines SQL data:

        [Table( Name = "items" )]
        public class Item {
            [ Column( IsPrimaryKey = true, IsDbGenerated = true ) ]
            public int Id;
            [ Column ]
            public string Name;
        }

    I have launched two copies of the application connected to the same .sdf file and tested whether all database modifications in one application affect the other application. But a strange thing arises. If I use InsertOnSubmit() and DeleteOnSubmit() in one application, added/removed items are instantly visible in the other application via a 'select' LINQ query. But if I try to modify the 'Name' field in one application, it is NOT visible in the other application until it reconnects to the database :(. The test code I use:

        var Items = from c in db.Items where Id == c.Id select c;
        foreach( var Item in Items ) {
            Item.Name = "new name";
            break;
        }
        db.SubmitChanges();

    Can anyone suggest what I'm doing wrong, and why InsertOnSubmit()/DeleteOnSubmit() work but SubmitChanges() doesn't?

    Read the article

  • Synchronising local and remote DB

    - by nico
    Hi everyone, I have a general question about DB synchronisation. I'm developing a website locally (PHP + MySQL) and I would like to be able to synchronise at least the structure (and maybe the contents) of the two DBs when one of the two is changed (normally I would change the local copy). Right now what I do is use mysqldump to dump the modified tables and then import them into the remote DB, or do it by hand if the changes are minimal. However, I find this tedious and error-prone. For the PHP I'm currently using Quanta+, which has the handy feature of finding files that have changed and uploading just those. Is there something similar for MySQL? Otherwise, how do you keep your DBs synchronised? Thanks, nico. PS: I'm sorry if this was already asked; I saw other questions that deal with similar topics, but couldn't really find an answer.
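
    One way to take the tedium out of the manual step is to script it. A minimal sketch assuming Python, mysqldump/mysql on the PATH, SSH access to the remote host, and credentials in ~/.my.cnf on both sides; all host and database names are placeholders.

        import subprocess

        def push_tables(tables, local_db="mysite", remote="user@example.com", remote_db="mysite"):
            # Dump only the tables that changed...
            dump = subprocess.run(["mysqldump", local_db, *tables],
                                  check=True, capture_output=True)
            # ...and feed the dump straight into mysql on the remote host.
            subprocess.run(["ssh", remote, "mysql " + remote_db],
                           input=dump.stdout, check=True)

        # push_tables(["users", "posts"])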

    Read the article

  • inserting std::strings in to a std::map

    - by PaulH
    I have a program that reads data from a file line by line. I would like to copy some substring of that line into a map as below:

        std::map< DWORD, std::string > my_map;
        DWORD index;            // populated with some data
        char buffer[ 1024 ];    // populated with some data
        char* element_begin;    // points to some location in buffer
        char* element_end;      // points to some location in buffer > element_begin

        my_map.insert( std::make_pair( index, std::string( element_begin, element_end ) ) );

    This std::map<>::insert() operation takes a long time (it doubles the file parsing time). Is there a way to make this a less expensive operation? Thanks, PaulH

    Read the article

  • Mercurial clone operation works, but I don't have write access

    - by normalocity
    Somewhere I did something silly. I was deploying my Rails app by cloning the Mercurial repo down onto my Ubuntu server. It worked the first time, and then... well, I made a small change on my dev machine, pushed the changes to the repo, then deleted the copy on the Ubuntu server and re-cloned from the repo. The clone operation (the second, third, and nth time) works without error, but I don't have write access to the files that were cloned. When I try to start up my Mongrel, it can't create the /tmp folder, and because of the missing write access it fails to start the Rails app.

    Read the article

  • Reading chunked data from HttpEntity

    - by Gagan
    I have the following code:

        HttpClient FETCHER;
        HttpResponse response = FETCHER.execute(host, httpMethod);

    I'm trying to read its contents into a string like this:

        HttpEntity entity = response.getEntity();
        InputStream st = entity.getContent();
        StringWriter writer = new StringWriter();
        IOUtils.copy(st, writer);
        String content = writer.toString();

    The problem is, when I fetch the http://www.google.co.in/ page, the transfer encoding is chunked, and I get only the first chunk. It fetches up to the first "". How do I get all the chunks at once so I can dump the complete output and do some processing on it?

    Read the article

  • Using device variable by multiple threads on CUDA

    - by ashagi
    I am playing around with CUDA. At the moment I have a problem. I am testing a large array for particular responses, and when I get a response, I have to copy the data into another array. For example, my test array of 5 elements looks like this: [ ][ ][v1][ ][ ][v2]. The result must look like this: [v1][v2]. The problem is, how do I calculate the address in the second array at which to store the result? All elements of the first array are checked in parallel. I am thinking of declaring a device variable int addr = 0. Every time I find a response, I will increment addr. But I am not sure about that, because it means that addr may be accessed by multiple threads at the same time. Will that cause problems? Or will a thread wait until another thread finishes using that variable?
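
    The usual pattern here is an atomic counter: threads don't wait on the variable, but the increment itself is serialised by the hardware, so each thread gets back a unique slot. A minimal sketch, written with Numba's CUDA support purely for illustration (the src[i] != 0 test is a stand-in for the real response check):

        import numpy as np
        from numba import cuda

        @cuda.jit
        def compact(src, dst, counter):
            i = cuda.grid(1)
            if i < src.size and src[i] != 0:
                # atomic.add returns the previous value of the counter,
                # which serves as this thread's unique output index.
                slot = cuda.atomic.add(counter, 0, 1)
                dst[slot] = src[i]

        src = np.array([0, 0, 7, 0, 0, 9], dtype=np.int32)
        d_src = cuda.to_device(src)
        d_dst = cuda.to_device(np.zeros_like(src))
        d_count = cuda.to_device(np.zeros(1, dtype=np.int32))

        compact[1, 32](d_src, d_dst, d_count)
        n = d_count.copy_to_host()[0]          # number of responses found
        print(d_dst.copy_to_host()[:n])        # e.g. [7 9] (order is not guaranteed)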

    Read the article

  • Modify Emdeded String in C# compiled exe

    - by nitefrog
    I have a compiled exe (.NET 3.5, C#) that I will make copies of to distribute, and a key in it, for example, needs to change before each copy is sent out. I cannot compile a new exe each time one is needed. This is a thin client that will be used as part of a registration process. Is it possible to add an entry to a resource file with a blank value, and then, when a request comes in, have another application grab the blank default thin client, copy it, and populate the blank value with the data needed? If yes, how? If no, do you have any ideas? I have been scratching my head for a few days now; the limitation is due to the boundaries I am required to work in. The other idea I had was to inject the value into a method, which I have no idea how I would even attempt. Thanks.
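
    One way this is sometimes done is to compile a fixed-length placeholder into the exe (e.g. as an embedded text resource, whose bytes are stored verbatim in the assembly) and have the stamping application patch those bytes in each copy. A minimal sketch in Python under that assumption: the marker string is hypothetical, the stamped value is space-padded so the file length stays unchanged (the client should trim it when reading), and note that byte-patching invalidates any strong-name or Authenticode signature.

        import shutil

        PLACEHOLDER = b"@@KEY_PLACEHOLDER_0000000000000000@@"

        def stamp_client(template_exe, output_exe, key):
            value = key.encode("ascii")
            if len(value) > len(PLACEHOLDER):
                raise ValueError("key is longer than the placeholder")
            value = value.ljust(len(PLACEHOLDER), b" ")   # keep the byte length identical
            shutil.copyfile(template_exe, output_exe)
            with open(output_exe, "rb") as f:
                data = f.read()
            if PLACEHOLDER not in data:
                raise ValueError("placeholder not found in the template exe")
            with open(output_exe, "wb") as f:
                f.write(data.replace(PLACEHOLDER, value, 1))

        # stamp_client("ThinClient.exe", "ThinClient-customer42.exe", "ABC-123")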

    Read the article

  • Upload a file to SharePoint through the built-in web services

    - by Andy McCluggage
    What is the best way to upload a file to a Document Library on a SharePoint server through the built-in web services that WSS 3.0 exposes? Following the two initial answers: we definitely need to use the web service layer, as we will be making these calls from remote client applications. The WebDAV method would work for us, but we would prefer to be consistent with the web service integration method. "There is additionally a web service to upload files, painful but works all the time." Are you referring to the "Copy" service? We have been successful with this service's CopyIntoItems method. Would this be the recommended way to upload a file to Document Libraries using only the WSS web service API? I have posted our code as a suggested answer.

    Read the article

  • Why does GANTracker output an error that "GANTracker.m" was not found?

    - by favo
    Hi, I have used the Google Analytics Tracker in a previous iPhone OS project. Everything was working fine, and I copy-and-pasted the GANTracker library and the tracker initialization. When starting my new project it tells me: "Xcode could not locate source file: GANTracker.m (line: 177)". To tell the truth, I don't know where to start to debug this one. I have included the library using #import "GANTracker.h". The error message occurs right within application didFinishLaunchingWithOptions and does not seem to have any connection to what is really going on. If I set a breakpoint at [window makeKeyAndVisible]; for example and wait a second, it occurs right after that. That makes it look like there is something going on in the background with GANTracker. The tracker itself is created a little later by:

        [[GANTracker sharedTracker] startTrackerWithAccountID:@"xx"
                                               dispatchPeriod:10
                                                     delegate:nil];
        [[GANTracker sharedTracker] trackPageview:@"pageview" withError:nil];

    Thank you all in advance for your help!

    Read the article

  • Update Nexus repository with local artifacts

    - by mamuesstack
    Hi, I recently downloaded some Maven artifacts directly to my local repository (.m2/repository). Now I have installed the Nexus Repository Manager and need to fill its storage without downloading all the artifacts again. Is there a way to update the Nexus repository from the local one? I don't want to simply copy the files, because Nexus separates artifacts according to their public source servers (central, codehaus, etc.) and the local repository structure doesn't. Update: Meanwhile I copied the artifacts from the local repository to the Nexus storage (public repository). I can browse to the artifacts via the Nexus webapp, but Maven somehow can't resolve the artifacts from Nexus. Do I need to register them explicitly? I re-indexed the public repository and restarted Nexus multiple times, with no change.
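
    Rather than copying files into Nexus's storage directory, artifacts are normally pushed into a hosted repository (not a group such as "public") over the same HTTP PUT mechanism that mvn deploy uses; Nexus then indexes them itself. A minimal sketch in Python with the requests library; the Nexus URL, repository name and deployment credentials are placeholders.

        import os
        import requests

        LOCAL_REPO = os.path.expanduser("~/.m2/repository")
        NEXUS = "http://localhost:8081/nexus/content/repositories/thirdparty"
        AUTH = ("deployment", "deployment123")

        for root, _dirs, files in os.walk(LOCAL_REPO):
            for name in files:
                if name.endswith((".pom", ".jar", ".war")):
                    path = os.path.join(root, name)
                    rel = os.path.relpath(path, LOCAL_REPO).replace(os.sep, "/")
                    with open(path, "rb") as fh:
                        resp = requests.put(NEXUS + "/" + rel, data=fh, auth=AUTH)
                    resp.raise_for_status()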

    Read the article

  • SQL select statement

    - by kwokwai
    Hi all, I have a table with two fields, Point and Level, with some sample data as follows:

        Point | Level
        ------+--------
           10 | Level 1
           20 | Level 2
           30 | Level 3
           40 | Level 4

    Suppose there is a user who has 25 points. To find the Level this user is in, the Select statement I used was:

        Select Level from Table where Point < 30 AND Point > 20;

    But this Select statement is hard-coded: the points 30 and 20 are fixed. I want to alter the Select statement so that it can be applied to all users with different points, but I don't know how to do it.
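
    A minimal sketch of one way to parameterise it, assuming Point is the minimum score needed to reach a Level (so 25 points lands in Level 2) and using a hypothetical Levels table name; shown with Python's sqlite3 DB-API purely for illustration, but the SQL is the interesting part.

        import sqlite3

        def level_for(conn, user_points):
            # Highest threshold the user's points reach, with the points as a bound parameter.
            cur = conn.execute(
                "SELECT Level FROM Levels WHERE Point <= ? ORDER BY Point DESC LIMIT 1",
                (user_points,))
            row = cur.fetchone()
            return row[0] if row else None

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE Levels (Point INTEGER, Level TEXT)")
        conn.executemany("INSERT INTO Levels VALUES (?, ?)",
                         [(10, "Level 1"), (20, "Level 2"), (30, "Level 3"), (40, "Level 4")])
        print(level_for(conn, 25))   # -> Level 2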

    Read the article

  • Addressing "Access Denied" Exception with WMI Calls

    - by Joe
    I'm getting an exception with a message of "Access Denied" when executing a WMI request. Some WMI requests appear to require higher security privileges than others. Ultimately my goal is to monitor process launches within the system and log them. Regardless of whether there is a better approach, it has now become a vendetta to get this WMI approach working. I've attempted the code at "Security Tools - WMI Programming Using C#.Net" and still receive the exception. If you copy the code found in the blog entry you can reproduce my issue. Another post on a similar topic can be found at link text but again, try the code and you'll see the same security exception. How do I permit my code to execute these WMI requests? I'm running on Windows 7 Pro and VS 2010 in a new C# command-line project.

    Read the article

  • MySqlDataAdapter or MySqlDataReader for bulk transfer?

    - by Jeff Meatball Yang
    I'm using the MySQL connector for .NET to copy data from MySQL servers to SQL Server 2008. Has anyone experienced better performance using one of the following versus the other?

        1. DataAdapter, calling Fill to a DataTable in chunks of 500
        2. DataReader.Read to a DataTable in a loop of 500

    I am then using SqlBulkCopy to load the 500 DataTable rows, and continue looping until the MySQL record set is completely transferred. I am primarily concerned with using a reasonable amount of memory and completing in a short amount of time. Any help would be appreciated!

    Read the article

  • How to remove svn folders over FTP

    - by Loftx
    Hi there, I've accidentally copied a large part of a folder tree from my SVN working copy to my shared Windows web host via FTP. The site is now littered with .svn directories and I need some way of cleaning them up. The only access I have to the server is via FTP, or by running a script on the server. Does anyone have a script that can be run remotely to remove the files over FTP (any language, Windows or Linux, is fine), or a script in ASP, ASP.NET or PHP that I can run directly on the server to remove these directories? Thanks, Tom
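
    A minimal sketch of the FTP route using Python's ftplib, run from a local machine; the host, credentials and start path are placeholders. It assumes NLST returns full paths when given one and that dotted directories show up in listings (both vary by server), so try it against a test folder first.

        from ftplib import FTP, error_perm

        def is_dir(ftp, path):
            try:
                ftp.cwd(path)
                ftp.cwd("/")
                return True
            except error_perm:
                return False

        def remove_tree(ftp, path):
            # Delete a directory and everything underneath it.
            for entry in ftp.nlst(path):
                name = entry.rsplit("/", 1)[-1]
                if name in (".", ".."):
                    continue
                full = entry if "/" in entry else path + "/" + entry
                if is_dir(ftp, full):
                    remove_tree(ftp, full)
                else:
                    ftp.delete(full)
            ftp.rmd(path)

        def purge_svn(ftp, path):
            # Walk the tree and remove every .svn directory found.
            for entry in ftp.nlst(path):
                name = entry.rsplit("/", 1)[-1]
                if name in (".", ".."):
                    continue
                full = entry if "/" in entry else path + "/" + entry
                if not is_dir(ftp, full):
                    continue
                if name == ".svn":
                    remove_tree(ftp, full)
                else:
                    purge_svn(ftp, full)

        ftp = FTP("ftp.example.com")
        ftp.login("username", "password")
        purge_svn(ftp, "/httpdocs")
        ftp.quit()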

    Read the article

  • question about frequency of updating access

    - by I__
    I have a table in an Access database. This database is used on a regular basis, basically from 9 to 5. Someone else has a copy of this exact table; sometimes records are added, sometimes deleted, and sometimes data within the records is updated. I need to update the Access table with the offsite table every hour or so. What is the best algorithm for updating the data? There are about 5000 records. Would it severely lock up the table for a few seconds every hour? I would like to publicly apologize for my rude comment to David Fenton.

    Read the article
