Search Results

Search found 1657 results on 67 pages for 'writes on'.

Page 33/67 | < Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >

  • How to set Atomikos to not write to console logs?

    - by peter
    Atomikos is quite verbose when used. There seem to be lots of INFO messages (mostly irrelevant for me) that the transaction manager writes out to the console. The setting in the transaction.properties file that is supposed to control the level of messaging, com.atomikos.icatch.console_log_level, does not seem to have any effect: even when it is set to WARN (or ERROR), the INFO messages are still logged. The log4j settings for com.atomikos and atomikos also seem to be ignored. Has anyone managed to turn off the INFO logs on the console with Atomikos? How? Thanks, Peter
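
    A minimal sketch of the two settings the question is poking at, assuming a log4j backend is in play (the file names follow the usual conventions and are not confirmed by the question):

      # transactions.properties (or jta.properties) - the Atomikos console logger itself
      com.atomikos.icatch.console_log_level=WARN

      # log4j.properties - silence the Atomikos categories if log4j is doing the logging
      log4j.logger.com.atomikos=ERROR
      log4j.logger.atomikos=ERROR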

    Read the article

  • How to prevent traffic to/from a slow Cassandra node using Python

    - by Sergio Ayestarán
    Intro: I have a Python application using a Cassandra 1.2.4 cluster with a replication factor of 3; all reads and writes are done with a consistency level of 2. To access the cluster I use the CQL library. The Cassandra cluster is running on Rackspace's virtual servers. The problem: from time to time one of the nodes can become slower than usual. In this case I want to be able to detect the situation and avoid making requests to the slow node, and if possible stop using it entirely (this should theoretically be possible, since the RF is 3 and the CL is 2 for every single request). The questions: What's the best way of detecting the slow node from a Python application? Is there a way to stop using one of the Cassandra nodes from Python in this scenario without human intervention? Thanks in advance!
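
    A rough client-side sketch of one way to do the detection: time each request and keep a decaying per-node latency score, only routing to nodes under a threshold. The host list, port, keyspace and threshold below are invented for illustration, and the per-request cql connection handling is deliberately simplistic:

      import time
      import cql

      HOSTS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
      SLOW_SECONDS = 0.5
      latency = dict((h, 0.0) for h in HOSTS)

      def healthy_hosts():
          # fall back to every node if all of them currently look slow
          return [h for h in HOSTS if latency[h] < SLOW_SECONDS] or HOSTS

      def timed_query(statement):
          host = healthy_hosts()[0]
          conn = cql.connect(host, 9160, "my_keyspace", cql_version="3.0.0")
          cursor = conn.cursor()
          start = time.time()
          cursor.execute(statement)
          rows = cursor.fetchall()
          # exponentially weighted average, so one slow reply does not blacklist a node forever
          latency[host] = 0.8 * latency[host] + 0.2 * (time.time() - start)
          conn.close()
          return rows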

    Read the article

  • How to write stored procedures to separate files with mysqldump?

    - by Jader Dias
    The mysqldump option --tab=path writes the creation script of each table to a separate file. But I can't find the stored procedures, except in the screen dump. I need to have the stored procedures in separate files as well. The current solution I am working on is to split the screen dump programmatically. Is there an easier way? The code I am using so far is: mysqldump -p$PASSWORD --routines --skip-dump-date --no-create-info --no-data --skip-opt $DATABASE > $BACKUP_PATH/$DATABASE.sql mysqldump -p$PASSWORD --tab=$BACKUP_PATH --skip-dump-date --no-data --skip-opt $DATABASE
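
    One possible alternative, sketched and untested: list the routines from information_schema and dump each one on its own via SHOW CREATE. The output still needs light post-processing (DELIMITER handling) before it can be replayed, so treat this as a starting point only:

      mysql -p$PASSWORD -N -B -e "SELECT ROUTINE_TYPE, ROUTINE_NAME FROM information_schema.ROUTINES WHERE ROUTINE_SCHEMA='$DATABASE'" |
      while read TYPE NAME; do
          mysql -p$PASSWORD -e "SHOW CREATE $TYPE \`$DATABASE\`.\`$NAME\`\G" > $BACKUP_PATH/$NAME.sql
      done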

    Read the article

  • Licensing your code in Mono

    - by Jerry
    I'm working with some code in Visual Studio. My partner-in-crime fellow developer has suggested that the code should also work under Mono. I'm impressed with the work that has already been done in Mono, but I'm very new to it, so I don't know what it can and cannot do. I've already written a class in C# using the .NET LicenseManager object. It writes to the Windows registry, so I know I'll have to modify it to use compiler flags like #if WIN32 or #if MONO. My question is two-fold: 1) Does Mono implement the same LicenseManager class structure? 2) If so, how do you lock down your code using LicenseManager on Linux? (e.g. write to files, use a hardware dongle, compare against hardware serials, etc.?)
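
    A small sketch of the compile-flag split mentioned above. Note that neither WIN32 nor MONO is predefined by the compiler; the symbol has to come from your own build configuration, and the paths below are placeholders:

      using System;
      using System.IO;

      static class LicenseStore
      {
          public static string LicenseLocation()
          {
      #if MONO
              // Linux/Mono build: fall back to a file under the user's application-data folder.
              return Path.Combine(
                  Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                  "myapp.license");
      #else
              // Windows build: keep the existing registry-based logic.
              return @"HKEY_CURRENT_USER\Software\MyApp\License";
      #endif
          }
      }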

    Read the article

  • Is there a utility that will let me write to an Excel 2007 .xlsm file with macros enabled?

    - by Mike Webb
    I am writing a program that writes to Excel files. I am restricted to writing to Excel 2007, which is fine, and I'm using EPPlus, which is a great utility. The thing is that I need macros and VBA enabled for an update function in the sheet, but EPPlus will only write to .xlsx files, not macro-enabled .xlsm files. If I try to write to a .xlsm file, it won't open. Is there another code library that lets me accomplish what I need (again, that's writing to Excel 2007 macro-enabled workbooks)?

    Read the article

  • How to generate Doctrine models/classes that extend a custom record class

    - by Shane O'Grady
    When I use Doctrine to generate classes from Yaml/db each Base class (which includes the table definition) extends the Doctrine_Record class. Since my app uses a master and (multiple) slave db servers I need to be able to make the Base classes extend my custom record class to force writes to go to the master db server (as described here). However if I change the base class manually I lose it again when I regenerate my classes from Yaml/db using Doctrine. I need to find a way of telling Doctrine to extend my own Base class, or find a different solution to a master/slave db setup using Doctrine. Example generated model: abstract class My_Base_User extends Doctrine_Record { However I need it to be automatically generated as: abstract class My_Base_User extends My_Record { I am using Doctrine 1.2.1 in a new Zend Framework 1.9.6 application if it makes any difference.
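
    One sketch worth trying when regenerating from YAML is the builder's baseClassName option; whether your Doctrine 1.2.x build honours it should be verified against Doctrine_Import_Builder, so treat the call below as an assumption rather than a confirmed fix:

      // My_Record itself extends Doctrine_Record and carries the master/slave routing logic.
      Doctrine_Core::generateModelsFromYaml('schema.yml', 'models', array(
          'baseClassName' => 'My_Record',
      ));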

    Read the article

  • How to extract text from a file where date-time is the index

    - by Soham
    I have got around 800 files of 55KB-100KB each, where the data is in this format: Date/Time/Float1/Float2/Float3/Float4/Integer. Date is in DD/MM/YYYY format and Time is in the format HH:MM. The date ranges from, say, 1 May to 1 June, and each day the Time varies from 09:00 to 15:30. I want to run a program so that, for each file, it extracts the data pertaining to a particular given date and writes it to a file. The directory operations are not a problem for me; what I am trying to work out is how to structure the search-and-extract step. I don't know how to do it and would like some ideas. Thanks, Soham
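
    A minimal sketch of the extract step, assuming the date is the first field on each line; the directory, file pattern, target date and output name are all made up for illustration:

      import glob

      target_date = "14/05/2010"  # DD/MM/YYYY, as described in the question

      with open("extracted.txt", "w") as out:
          for path in glob.glob("data/*.txt"):
              with open(path) as source:
                  for line in source:
                      if line.startswith(target_date):
                          out.write(line)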

    Read the article

  • Safari ignores input type="file" on server post

    - by Jon
    I have a real problem with a classic ASP page. The page allows the user to upload a document and save it to the database. The initial page posts to another ASP page which saves it to the db. This works in IE and Firefox, but on Safari it fails. I've debugged the problem and it boils down to the fact that, of all the controls the server page has access to, only one control is missing. That happens to be this: <input type="file" size="40" id="myfile" name="myfile" /> So I'm wondering why Safari would decide not to give me access to this control (using ASP's Request("")) and why it works in FF and IE. I have some debug code which writes out all controls, and it doesn't see this control. P.S. I hate web development

    Read the article

  • Creating an SQL Compact file: Template or script?

    - by David Veeneman
    I am writing an application that writes to SQL Compact files that have a specific schema, and I am now implementing the New File use case. The simplest approach seems to be to use a Template pattern: first, create a template file that lives in the application directory. Then, when the user selects New File, the template is copied to the name and destination specified by the user in a New File dialog. The alternative is a scripted approach: Use the same New File dialog, but dispense with the template file. Instead, create an empty SQL Compact file using the name/destination specified by the user, and then execute a T-SQL script on it from managed code. At this point, I am leaning toward the Template approach, because it is simpler. Is there any reason I should not use that approach? Thanks for your help.
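
    A sketch of the template approach in C# (the template file name and destination variable are assumptions): ship an .sdf with the schema already created next to the executable and copy it when the user confirms the New File dialog:

      using System;
      using System.IO;

      static class NewFileCommand
      {
          // Template.sdf is assumed to ship alongside the executable with the schema pre-created.
          public static void Execute(string destinationChosenInDialog)
          {
              string template = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Template.sdf");
              File.Copy(template, destinationChosenInDialog, false);   // false: never overwrite silently
          }
      }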

    Read the article

  • Modify strings in Rails?

    - by Daniel O'Connor
    Hey everyone, So I'm new to Rails (teaching myself as a senior project in high school), and I'm trying to figure out how to modify these strings. Let's say someone writes the following string in a form: "you know you are a geek when" How can I automatically change it to this: "You know you are a geek when..."? I need Rails to check the case of the first letter and check for the three dots then modify the string as necessary. I've looked here, but I can't find anything that would work. Thanks a lot!
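
    A minimal Ruby sketch of that normalisation (the method name is invented; in a Rails app it would probably live in a model or helper):

      def normalize_prompt(str)
        str = str.strip
        str = str.sub(/\A./) { |c| c.upcase }      # upcase only the first character
        str << "..." unless str.end_with?("...")
        str
      end

      normalize_prompt("you know you are a geek when")
      # => "You know you are a geek when..."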

    Read the article

  • How can I easily make a java application invisible to the user?

    - by Pedro Bellora
    Hello! I have developed a Java application that is currently run by double-clicking on a ".bat" file that does something like "java -jar proy.jar". This application just listens on a port and writes to a database, so it does not have any user interface (such as a window). I need this application to run in background mode, or as if it were a service, but I don't really need anything more than that. It's enough if the application runs in a way that is not noticeable by the user, so that the user is not bothered and the application cannot be mistakenly closed. By the way, this will run on a specific computer, so it's okay if I have to do some manual configuration to make this work. Also, I need this application to run on startup. Any help/tips regarding this? In advance, thank you very much for your help! Regards, Pedro
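
    A sketch of a console-less launch for the Windows .bat described above (the empty quotes are just the window title that start expects); running it at startup is then a matter of pointing the Startup folder or Task Scheduler at this .bat:

      rem javaw is the JVM launcher that runs without attaching a console window
      start "" javaw -jar proy.jar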

    Read the article

  • What might cause the SQL error "Cannot find the object dbo.InspectionEvents because it does not exist or you do not have permissions"?

    - by mrtrombone
    Hi, my ASP.NET app is periodically getting the error 'Cannot find the object dbo."XXXX" because it does not exist or you do not have permissions' when it tries to execute a specific stored procedure that writes to the database. I have seen a few forum posts about this issue, but the strange thing is that the method works fine almost all of the time; just every now and then I see the error in my logs. Can anyone tell me why this might work OK most of the time but occasionally fire the error? The application is C# using Enterprise Library 4.1 Data Access; the database is SQL Server 2005. Cheers

    Read the article

  • Three out of five file streams won't open; I believe it's a problem with my ifstreams

    - by user320950
    #include<iostream>
    #include<fstream>
    #include<cstdlib>
    #include<iomanip>
    using namespace std;

    int main()
    {
        ifstream in_stream;   // reads itemlist.txt
        ofstream out_stream1; // writes in items.txt
        ifstream in_stream2;  // reads pricelist.txt
        ofstream out_stream3; // writes in plist.txt
        ifstream in_stream4;  // read recipt.txt
        ofstream out_stream5; // write display.txt
        int wrong=0;

        in_stream.open("ITEMLIST.txt", ios::in); // list of avaliable items
        if( in_stream.fail() ) // check to see if itemlist.txt is open
        {
            wrong++;
            cout << " the error occured here0, you have " << wrong++ << " errors" << endl;
            cout << "Error opening the file\n" << endl;
            exit(1);
        }
        else{
            cout << " System ran correctly " << endl;

            out_stream1.open("ITEMLIST.txt", ios::out); // list of avaliable items
            if(out_stream1.fail() ) // check to see if itemlist.txt is open
            {
                wrong++;
                cout << " the error occured here1, you have " << wrong++ << " errors" << endl;
                cout << "Error opening the file\n";
                exit(1);
            }
            else{
                cout << " System ran correctly " << endl;
            }

            in_stream2.open("PRICELIST.txt", ios::in);
            if( in_stream2.fail() )
            {
                wrong++;
                cout << " the error occured here2, you have " << wrong++ << " errors" << endl;
                cout << "Error opening the file\n";
                exit (1);
            }
            else{
                cout << " System ran correctly " << endl;
            }

            out_stream3.open("PRICELIST.txt", ios::out);
            if(out_stream3.fail() )
            {
                wrong++;
                cout << " the error occured here3, you have " << wrong++ << " errors" << endl;
                cout << "Error opening the file\n";
                exit (1);
            }
            else{
                cout << " System ran correctly " << endl;
            }

            in_stream4.open("display.txt", ios::in);
            if( in_stream4.fail() )
            {
                wrong++;
                cout << " the error occured here4, you have " << wrong++ << " errors" << endl;
                cout << "Error opening the file\n";
                exit (1);
            }
            else{
                cout << " System ran correctly " << endl;
            }

            out_stream5.open("display.txt", ios::out);
            if( out_stream5.fail() )
            {
                wrong++;
                cout << " the error occured here5, you have " << wrong++ << " errors" << endl;
                cout << "Error opening the file\n";
                exit (1);
            }
            else{
                cout << " System ran correctly " << endl;
            }

    Read the article

  • Why is Read-Modify-Write necessary for registers on embedded systems?

    - by Adam Shiemke
    I was reading http://embeddedgurus.com/embedded-bridge/2010/03/different-bit-types-in-different-registers/, which said: "With read/write bits, firmware sets and clears bits when needed. It typically first reads the register, modifies the desired bit, then writes the modified value back out," and I have run into that construct while maintaining some production code written by the old-salt embedded guys here. I don't understand why this is necessary. When I want to set or clear a bit, I always just OR (or AND with the complement of) a bitmask. To my mind this avoids any thread-safety problems, since I assume setting a register (either by assignment or by ORing with a mask) only takes one cycle. On the other hand, if you first read the register, then modify, then write, an interrupt happening between the read and the write may result in writing an old value back to the register. So why read-modify-write? Is it still necessary?
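
    A small C sketch of the two constructs being compared (the register address and bit position are made up). The key point is that the compound-assignment form is shorter but typically still compiles to a load, an OR and a store on a RISC target, so it can have the same interrupt window as the explicit version unless the hardware offers atomic bit-set/clear registers:

      #define TX_ENABLE  (1u << 3)
      #define UART_CTRL  (*(volatile unsigned int *)0x40001000u)   /* hypothetical register */

      void set_bit_explicit_rmw(void)
      {
          unsigned int v = UART_CTRL;   /* read                                   */
          v |= TX_ENABLE;               /* modify                                 */
          UART_CTRL = v;                /* write back: an interrupt that changed
                                           the register in between is undone      */
      }

      void set_bit_with_mask(void)
      {
          UART_CTRL |= TX_ENABLE;       /* shorter, but usually still load/OR/store */
      }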

    Read the article

  • How can I send an automated reply to the sender and all recipients with Procmail?

    - by jchong
    I'd like to create a procmail recipe or Perl or shell script that will send an auto response to the original sender as well as anybody that was copied (either To: or cc:) on the original email. Example: [email protected] writes an email to [email protected] and [email protected] (in the To: field). Copies are sent via cc: to [email protected] and [email protected]. I'd like the script to send an auto response to the original sender ([email protected]) and everybody else that was sent a copy of the email ([email protected], [email protected], [email protected] and [email protected]). Thanks

    Read the article

  • OpenCV to use in memory buffers or file pointers

    - by The Unknown
    The two OpenCV functions cvLoadImage and cvSaveImage accept file paths as arguments. For example, when saving an image it's cvSaveImage("/tmp/output.jpg", dstIpl) and it writes to the disk. Is there any way to feed these a buffer already in memory, so that instead of a disk write the output image ends up in memory? I would like to know this for both cvSaveImage and cvLoadImage (read from and write to memory buffers). Thanks! My goal is to store the encoded (JPEG) version of the file in memory. The same goes for cvLoadImage: I want to load a JPEG that's in memory into the IplImage format.
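
    If the installed build has them, the C API's in-memory codecs are the obvious candidates; a sketch follows (check your highgui headers, since older 1.x-era releases may not ship these functions):

      #include <opencv/highgui.h>

      CvMat    *jpegBuf   = cvEncodeImage(".jpg", dstIpl, 0);             /* JPEG bytes kept in memory   */
      IplImage *roundTrip = cvDecodeImage(jpegBuf, CV_LOAD_IMAGE_COLOR);  /* decode straight from memory */
      cvReleaseMat(&jpegBuf);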

    Read the article

  • 500 error on function call in Codeigniter

    - by Ilia Lev
    I am using a freshly installed CI 2.1.3. Following a phpacademy tutorial, I wrote in routes.php: $route['default_controller'] = "site"; (instead of $route['default_controller'] = "welcome";) and in controllers/site.php: <?php if ( ! defined('BASEPATH')) exit('No direct script access allowed'); class Site extends CI_Controller { public function index() { echo "default function started.<br/>"; } public function hello(){ echo "hello function started.<br/>"; } } After uploading it to the server and going to www.mydomain.ext it works OK (it prints "default function started."), but if I add $this->hello(); to the index() function it throws a 500 error. Why does this happen and how can I resolve it? Thank you in advance.

    Read the article

  • How to prune a data set?

    - by sakura90
    The MovieLens data set provides a table with columns: userid | movieid | tag | timestamp. I have trouble reproducing the way they pruned the MovieLens data set used in http://www.cse.ust.hk/~yzhen/papers/tagicofi-recsys09-zhen.pdf. In Section 4.1 (Data Set) of the paper, the authors write: "For the tagging information, we only keep those tags which are added on at least 3 distinct movies. As for the users, we only keep those users who used at least 3 distinct tags in their tagging history. For movies, we only keep those movies that are annotated by at least 3 distinct tags." I tried to query the database: select TMP.userid, count(*) as tagnum from (select distinct T.userid as userid, T.tag as tag from tags T) AS TMP group by TMP.userid having tagnum = 3; I got a list of 1760 users who labeled 3 distinct tags. However, some of the tags are not added on at least 3 distinct movies. Any help is appreciated.
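
    Note that the query above uses having tagnum = 3, which keeps users with exactly three distinct tags rather than at least three. The paper's tag filter, by contrast, counts movies per tag; a sketch of that step (table and column names taken from the question):

      SELECT T.tag
      FROM tags T
      GROUP BY T.tag
      HAVING COUNT(DISTINCT T.movieid) >= 3;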

    Read the article

  • How to "defragment" MongoDB index effectively in production?

    - by dfrankow
    I've been looking at MongoDB. Feels good. I added some indexes to a collection, uploaded a bunch of data, then removed all the data, and I noticed the indexes did not change size, similar to the behavior reported here. If I call db.repairDatabase(), the indexes are then squashed to near-zero. Similarly, if I don't remove all the data but call repairDatabase(), the indexes are squashed somewhat (perhaps because unused extents are truncated?). I am getting the index size from the "totalIndexSize" field of db.collection.stats(). However, repairDatabase() takes a long time (I've read it could be hours on a large database), and it's unclear to me how available the database is for reads or writes while it is running. I am guessing not very available. Since I want to run as few mongod instances as possible, I want to understand more about how indexes are managed after deletes. Can anyone point me to anything or give any advice?
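
    For reference, the before/after check described above looks like this in the mongo shell (the collection name is a placeholder):

      db.mycollection.stats().totalIndexSize;   // index size before
      db.repairDatabase();                      // rewrites data files and rebuilds indexes
      db.mycollection.stats().totalIndexSize;   // index size after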

    Read the article

  • Every 3rd Insert Is Slow On Ms Sql 2008

    - by Chris
    I have a function that writes 3 rows into an empty table like so: INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (1, 8, 1) INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (2, 8, 4) INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (3, 8, 3) For some reason only the third query takes a long time to execute, and with each insert it grows longer (Profiler screenshot omitted). I have tried disabling all constraints on the table, with the same result. I just can't figure out why the first two would run so fast and the last one would take so long. Any help would be greatly appreciated. Here is the query I ran in SSMS for the statistics: ALTER TABLE [dbo].[yaf_ForumAccess] NOCHECK CONSTRAINT ALL INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (1, 9, 1) INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (2, 9, 4) INSERT [dbo].[yaf_ForumAccess] ([GroupID], [ForumID], [AccessMaskID]) VALUES (3, 9, 3) ALTER TABLE [dbo].[yaf_ForumAccess] CHECK CONSTRAINT ALL (Statistics screenshot omitted.)

    Read the article

  • Building highly scalable web services

    - by christopher-mccann
    My team and I are in the middle of developing an application which needs to be able to handle pretty heavy traffic. Not Facebook level, but in the future I would like to be able to scale to that without massive code rewrites. My thought was to modularise everything out into separate services with their own interfaces. So, for example, messaging would have a messaging interface that might have send() and getMessages() as methods, and the PHP web app would simply query this interface through SOAP or curl or something like that. The messaging application could then be any kind of application (a Java application or Python or whatever is suitable for that particular functionality) with its own separate database shard. Is this a good approach?
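
    A sketch of the thin client the PHP front end would use for such a service (the endpoint URL and the JSON payload shape are assumptions, not part of the question):

      function get_messages($userId) {
          $ch = curl_init('http://messaging.internal/getMessages?user=' . urlencode($userId));
          curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
          $body = curl_exec($ch);
          curl_close($ch);
          return json_decode($body, true);
      }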

    Read the article

  • MongoDB read/write performance and Mongo hosting in the cloud

    - by z3cko
    We are currently developing a high-traffic Rails application with Facebooker (a Facebook game). Since Amazon SimpleDB (AWS SDB) is really slow, we are thinking of using a dedicated MongoDB server as offered by MongoHQ, for example. Questions: What is the peak read/write throughput for a MongoDB server running on an Amazon EC2 instance? What would be a recommended setup for an EC2-hosted app with MongoDB: a master on Amazon EBS and replicas on the EC2 instances? Any examples or experiences? Is there a company that offers MongoDB hosting in the cloud? Thanks, mz

    Read the article

  • How can I find a package?

    - by Roman
    In my code I have the following statement: import com.apple.dnssd.*; and the compiler (javac) complains about this line. It reports that the package does not exist. But I think it could be that javac is searching for the package in the wrong place (directory). In this respect I have two questions: How can I find out where javac searches for packages? I think it is very likely that I have the above-mentioned package, but I do not know where it is located. And what are the typical places to look for packages?
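
    javac resolves packages from the classpath rather than from a fixed directory, so the usual fix is to point -classpath at whatever jar provides com.apple.dnssd (the jar name and path below are assumptions):

      javac -classpath /path/to/dns_sd.jar:. MyApp.java

      # to check whether a given jar actually contains the package, list its contents:
      jar tf /path/to/dns_sd.jar | grep com/apple/dnssd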

    Read the article

  • What are all the disadvantages of using files as a means of communicating between two processes?

    - by Manny
    I have legacy code which I need to improve for performance reasons. My application comprises two executables that need to exchange certain information. In the legacy code, one exe writes to a file (the file name is passed as an argument to the exe) and the second executable first checks whether such a file exists; if it does not exist it checks again, and when it finds the file it goes on to read its contents. This is how information is transferred between the two executables. The way the code is structured, the second executable is successful on the first try. Now I have to clean this code up, and I was wondering what the disadvantages are of using files as a means of communication rather than some inter-process communication mechanism like pipes. Is opening and reading a file more expensive than pipes? Are there any other disadvantages? And how significant do you think the performance degradation would be? The legacy code runs on both Windows and Linux.

    Read the article

  • Write directly to a remote Git repository, without adding objects to a local index/repo?

    - by Ryan B. Lynch
    Does Git support any commands that would allow me to commit directly from a local/working tree into a remote repository? The normal workflow requires a "git add", at least, to populate the object database with copies of the file contents, etc. I understand that this is NOT the normal, expected Git workflow. But I noticed that Git already supports downloading directly from the repository, with no local repo ("git archive"), so it seems reasonable that there might be a similar uploading operation. Alternatively, if there isn't such a command in the core Git itself, does any 3rd-party software support direct remote writes?

    Read the article
