Hi
In a multithreaded application, I have a number of functions that loop through a collection to read its contents, and a number of other functions that modify that same collection.
I'm looking for a way to keep the reads and the writes isolated from each other: I don't want a write to happen while a read is in progress. I was thinking of using SyncLock on the collection object, but that would also block multiple reads trying to run in parallel.
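What is being described is the classic reader-writer lock: any number of readers may run concurrently, but a writer needs exclusive access. In .NET that role is played by ReaderWriterLockSlim rather than SyncLock; purely as a language-neutral illustration, here is a minimal sketch of the pattern in Java using ReentrantReadWriteLock, with a hypothetical shared list standing in for the collection.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedCollection {
    private final List<String> items = new ArrayList<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Readers take the read lock: any number of them may hold it at once.
    public int countStartingWith(String prefix) {
        lock.readLock().lock();
        try {
            int count = 0;
            for (String item : items) {
                if (item.startsWith(prefix)) {
                    count++;
                }
            }
            return count;
        } finally {
            lock.readLock().unlock();
        }
    }

    // Writers take the write lock: it waits for in-progress readers to
    // finish and keeps new readers out until the modification is done.
    public void add(String item) {
        lock.writeLock().lock();
        try {
            items.add(item);
        } finally {
            lock.writeLock().unlock();
        }
    }
}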
Hi all,
I have an application which takes live data from the internet.
I want to develop another application which reads data from the internet and writes it
to an XML file (i.e. I want to save the state in an XML file).
The only thing I need is how to write this data to an XML file.
For example, say I have a combo box which holds the top 10 FIFA World Cup watching sites;
I want to write this information (i.e. whatever data this combo box takes as input)
into an XML file.
I want the answer for Flex only, not for AIR.
Thank you in advance.
I need to perform some activity when an AppointmentItem (or, specifically, a meeting) is saved.
What I want is that once the user has filled in the info and clicks 'Send', Outlook does its stuff and my code executes once.
However, what I'm finding is that the Write event occurs multiple times - at least twice, sometimes more (e.g. on updates).
Where this is an issue for me is that I have an object that needs to be updated before it's serialized, and I don't want to be doing the update and serialization multiple times.
Has anyone come across this issue before, and is there a better way to do this than using AppointmentItem.Write?
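The Outlook object model itself isn't shown here, but one common way to cope with an event that fires more than once per save is to guard the expensive work behind a flag so it only runs the first time. This is only a rough, language-neutral sketch in Java (the class and method names are invented for illustration):

import java.util.concurrent.atomic.AtomicBoolean;

// Wraps the "on write" work so that repeated firings of the save event
// perform the update + serialization only once per editing session.
public class OnceOnlySaveHandler {
    private final AtomicBoolean handled = new AtomicBoolean(false);
    private final Runnable work;

    public OnceOnlySaveHandler(Runnable work) {
        this.work = work;
    }

    // Called from the Write event handler; only the first call does the work.
    public void onWrite() {
        if (handled.compareAndSet(false, true)) {
            work.run();
        }
    }

    // Call when a new editing session starts (e.g. the item is opened again),
    // so the next save is processed.
    public void reset() {
        handled.set(false);
    }
}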
I need to write a tree class in Java where each level has a unique object type. The way it is written below does not take advantage of generics and causes a lot of duplicate code. Is there a way to write this with generics?
public class NodeB {
    private String nodeValue;
    //private List<NodeB> childNodes;
    // constructors
    // getters/setters
}

public class NodeA {
    private String value;
    private List<NodeB> childNodes;
    // constructors
    // getters/setters
}

public class Tree {
    private String value;
    private List<NodeA> childNodes;
    // constructors
    // tree methods
}
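For comparison, one way to cut the duplication is a single node class parameterized by the type of its children. This is only a rough sketch (the class and accessor names are invented), and it assumes every level is just a value plus a list of child nodes:

import java.util.ArrayList;
import java.util.List;

// One node class for every level; C is the type of this node's children.
public class Node<C> {
    private final String value;
    private final List<C> childNodes = new ArrayList<>();

    public Node(String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }

    public List<C> getChildNodes() {
        return childNodes;
    }
}

A three-level tree could then be declared as Node<Node<Node<Void>>> (tree -> A level -> B level, with Void marking the leaf level), or each level could be a thin subclass such as class NodeA extends Node<NodeB> if level-specific behaviour is needed.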
Hi, I am a student and have just started learning low-level C programming. I tried to understand the read() and write() system calls with this program.
#include <unistd.h>
#include <stdlib.h>

int main(void)
{
    char *st;
    st = calloc(2, sizeof(char)); // allocate memory for 2 chars
    read(0, st, 2);               // read up to 2 bytes from stdin
    write(1, st, 2);              // write the 2 bytes to stdout
    return 0;
}
I was expecting it to give a segmentation fault when I typed more than 2 input characters. But when I run the program and enter "asdf", it prints "as" as output and then executes "df" as a command.
I want to know why it doesn't give a segmentation fault when more than 2 characters are entered for a buffer of size 2, and why the rest of the input (after the first 2 characters) is executed as a command instead of just being printed.
Also, from reading the man page of read() I thought read() should give an EFAULT error, but it doesn't.
I am using Linux.
I am developing a C++ banking system.
I am able to get the float value newbal correctly, but when I try to write it to the file, no data ends up in the file.
else
{
    file >> firstname >> lastname;
    cout << endl << firstname << " " << lastname << endl;
    cout << "-----------------------------------\n";

    string line;
    while (getline(file, line))
    {
        // parse each remaining line of the file with a stringstream
        istringstream iss(line);
        if (iss >> date >> amount)
        {
            cout << date << "\t\t$" << showpoint << fixed << setprecision(2) << amount << endl;
            famount += amount;
        }
    }

    cout << "Your balance is $" << famount << endl;
    cout << "How much would you like to deposit today: $";
    cin >> amountinput;

    float newbal = 0;
    newbal = (famount += amountinput);
    cout << "\nYour new balance is: $" << newbal << ".\n";

    file << date << "\t\t" << newbal; // ***This should be writing to file, but it doesn't.
    file.close();
The text file looks like this:
Tony Gaddis
05/24/12 100
05/30/12 300
07/01/12 -300
The console output looks like this:
Tony Gaddis
05/24/12 100
05/30/12 300
07/01/12 -300
Your balance is: #1
How much would you like to deposit: #2
Your new balance is: #1 + #2
write to file
close file.
//exits to main loop
How can I make it write to the file and save it, and why is this happening?
I tried doing it with an ostringstream as well, since I had used an istringstream for the input, but that didn't work either:
float newbal = 0;
newbal = (famount += amountinput);
ostringstream oss;
oss << date << "\t\t" << newbal;
I am trying to teach myself C++, so any relevant information would be much appreciated.
As the title points out, I'm getting this error when trying to connect to a PostgreSQL database from the command line with psql.
The client machine is an Ubuntu 11.10 x86_64 box and the PostgreSQL client libraries are version 9.1.
The server is PostgreSQL 8.3.
This is the command that I executed:
psql -U postgres -d my_database -h 192.168.0.161 -p 5432 -c "select * from xxyy"
I get the same results when I use sudo or su postgres.
The sad thing is that I can connect without problems using pgAdmin.
Any hint?
I am using Nginx and PHP-FPM on Linux. I am not sure whether the issue is that PHP is not writing to the location specified in php.ini, or whether it just isn't logging at all.
Some of the logs produced by Nginx and PHP-FPM contain the PHP errors, but they are mixed in with other Nginx log output. When I run phpinfo(), the error_log value is set to a folder in my home directory, but nothing is ever created there.
I understand that values in the Nginx conf and the PHP-FPM conf can override those set in php.ini, but surely running phpinfo() would show the final config values?
I would like to have one folder with separate files for the Nginx access and error logs as well as the PHP errors.
Thanks.
I'm a software developer troubleshooting a sticky problem on a client's production server.
They have a virtual server running Windows Server 2008, SQL Server 2008 R1 and IIS7. It was provisioned with two partitions: one that has the OS (~15 Gig), and the other has IIS' web sites (another ~15 Gig).
My application running on this server had been working perfectly well up until about an hour ago, when it started throwing System.IO.IOException: "There is not enough space on disk".
As soon as my client notified me, I cleared up some space on C:\, emptied the recycle bin, and restarted SQL Server and IIS. The web server came back up and the application was running, but it no longer saves information to the database. No error message is coming up, the application can get information out of the DB, but it can no longer save data back to it. I rebooted the server, to no effect.
I spoke with a sys admin at the hosting company, and he says SQL Server appears to have come up fine and the database is not in read-only mode. I confirmed that, as I can add records to tables from SQL Server Management Studio.
I looked at the event log immediately after trying to save an edited record in the app, and no new events appear in there that I can tell.
I'm assuming this is related to having run out of space, as it was all working fine prior to that, but I'm at a bit of a loss as to what exactly needs a kick in the pants to get going again.
Can anyone help me out? What the heck is going on here?
I have CentOS 6.0 and installed vsftpd with yum. I added a user with the Webmin panel, set its home dir to "/var/www/html" and its shell to "/bin/sh"; the user id is 500 and the user's group has the same name as the user: "adrian_ftp".
When I connect with an FTP program it logs in, but the remote folder always shows as empty.
I set the directory owner and group to adrian_ftp:adrian_ftp with no change; I also set the permissions to 0777, still no change.
Any ideas? I have been trying for over 3-4 hours :|
I have 3 blade servers that are blue-screening with a 0xC2 error, as far as we can tell at random. When it started happening I found that the servers weren't set up to provide a dump, because they each have 16GB of RAM and a 16GB swap file divided over 4 partitions in 4GB files.
I set them to provide a small dump file (64K minidump), but the dump files aren't being written. On startup the server event log reports both Event ID 45, "The system could not sucessfully load the crash dump driver.", and Event ID 49, "Configuring the Page file for crash dump failed. Make sure there is a page file on the boot partition and that is large enough to contain all physical memory."
My understanding is that the small dump shouldn't need a page file large enough to hold all physical memory, but the error seems to point to this not being the case. The issue, of course, is that the maximum size of a swap file is 4GB, so this seems to be impossible.
Can anyone point me where to go from here?
I would like to start writing an email with Mail.app from Terminal, and add an attachment. Something like this:
macbook:~ me$ /Applications/Mail.app/Contents/MacOS/Mail -s the_subject -to [email protected] < ~/Downloads/file.zip
Okay, here's a weird problem -- my wife just bought a 2014 Nissan Altima. So I took her iTunes library and converted the .m4a files to .mp3, since the car audio system only supports .mp3 and .wma. So far so good. Then I copied the files to a FAT32-formatted USB thumb drive and connected the drive to the car's USB port, only to find all of the tracks were out of sequence. All tracks begin with a two-digit numeric prefix (01, 02, 03, etc.), so you would think they would be in order. So I called Nissan Connect support, and the rep told me that there is a known problem with reading files in the correct order: the files are read in the same order they were written. So I manually copied a few albums with the tracks in a predetermined order, and sure enough he was correct.
So I copied about 6 albums for testing, then changed to the top-level directory and ran "find . > music.txt". Then I passed this file to rsync like this:
rsync -av --files-from=music.txt . ../Marys\ Music\ Sequenced/
The files looked like they were copied in order, but when I listed the files in order of modified time, they were in the same sequence as the original files (ls -1rt in ../Marys Music Sequenced/Air Supply/Air Supply Greatest Hits):
01 Lost In Love.mp3
04 Every Woman In The World.mp3
03 Chances.mp3
02 All Out Of Love.mp3
06 Here I Am (Just When I Thought I Was Over You).mp3
05 The One That You Love.mp3
08 I Want To Give It All.mp3
07 Sweet Dreams.mp3
11 Young Love.mp3
So the question is: how can I copy the files listed in music.txt to a destination and ensure that the modification times end up in the same sequence in which the files are listed?
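For what it's worth, rsync -a preserves each file's original modification time, which is why the copies keep the old ordering. One workaround is to copy the files one at a time in list order and re-stamp their timestamps as you go. Below is a rough sketch in Java (the paths are hypothetical, and it assumes music.txt lists paths relative to the current directory); the same idea could of course be done with a small shell loop instead.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.FileTime;
import java.util.List;

public class CopyInListedOrder {
    public static void main(String[] args) throws IOException {
        Path srcRoot = Paths.get(".");                           // where the paths in music.txt are relative to
        Path destRoot = Paths.get("../Marys Music Sequenced");   // destination root
        List<String> names = Files.readAllLines(Paths.get("music.txt"));

        long stamp = System.currentTimeMillis();
        for (String name : names) {
            if (name.trim().isEmpty()) {
                continue; // skip blank lines
            }
            Path src = srcRoot.resolve(name);
            Path dest = destRoot.resolve(name);
            if (Files.isDirectory(src)) {
                Files.createDirectories(dest);
                continue; // directories are just created, not stamped
            }
            Files.createDirectories(dest.getParent());
            Files.copy(src, dest, StandardCopyOption.REPLACE_EXISTING);
            // Give each file a later timestamp than the previous one, so the
            // order on the destination matches the order in music.txt.
            Files.setLastModifiedTime(dest, FileTime.fromMillis(stamp));
            stamp += 2000; // 2-second steps, since FAT32 timestamps are coarse
        }
    }
}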
I have 6 Windows computers (XP, Vista, 7) that need to access a Samba share (Ubuntu 12.04). I am trying to make it so that only one client can open a file at a given time. I thought this was pretty standard behavior for file locks, but I can't get it to work.
The way it is right now, a file can be opened by two users and changed and saved by either one of them; the last file saved overwrites whatever changes the other user made. At first I thought this was a Samba configuration problem, but I get this behavior even between two Windows machines. So far I have only tested:
Windows XP <-> Windows Vista
Windows XP -> Samba <- Windows Vista
and both give the same behavior.
When I tested the Samba configuration, I had set strict locking = yes and got errors logged like this:
close_remove_share_mode: Could not get share mode lock for file _prod/part_number_list_COPY.xlsx
Eventually all of the files are going to be moved onto the Samba share, so that is the configuration I am most concerned about fixing. Any ideas? Thanks in advance.
EDIT:
I tested an Excel file again, and it is now working properly in both of the cases mentioned above; I am also no longer getting the error above. I don't know what happened; perhaps a restart fixed it? (It also works with strict locking = no.)
However, I still need to find a solution for the CAD/CAM files we use; the software is Vector and it does not seem to use file locks.
Is there any software that I can use to manage these files so that two people can't open/edit them at a time? Maybe a Windows application that forces file locks? Or a dirt-simple version control system? (The only ones I have seen are too complicated for our needs.)
Hi!
I installed Samba on my Linux server for public file sharing on the LAN. It works great currently, but I would like to add some security:
people on the LAN should be able to read the existing files and add new ones, but not delete files. I want to keep that privilege for myself ;-)
How should I do this? I have set up an "admin" account that has full access, including deletion; all that is left is to configure the "guest" account. Google isn't helping much right now...
I am having trouble writing to a few files on an external HD. I am using it to store media files as well as my Time Machine backup. The drive is formatted as HFS+ (Journaled), and other files on the drive can be written to successfully. Additionally, the Time Machine backup is working perfectly.
Permissions for the file:
$ ls -le -@ Parks\ and\ Recreation\ -\ S01E01.avi
-rw-rw-rw-@ 1 evantandersen staff 182950496 22 May 2009 Parks and Recreation - S01E01.avi
com.apple.FinderInfo 32
Things I have already tried:
sudo chflags -N
sudo chown myusername
sudo chown 666
sudo chgrp staff
Checked that the file is not locked (get info in finder)
Why can't I modify that file? Even with sudo I can't modify it at all.
Sirs, a conundrum. I have two packages that each create /usr/bin/ffprobe. One of them is ffmpeg from the Deb Multimedia repository, while the other is ffmbc 0.7-rc5 built from source. The hand-rolled one is business-critical, and we used to just install it from source wherever it was necessary. I can only assume it would clobber the ffmpeg file, and there were never any ill effects.
In theory, it should be acceptable for our ffmbc package to overwrite the file from the ffmpeg package. Is there any easy way to reconcile this?
Running Windows 7 x64. The DVD drive is a BenQ DC DQ60 ATA dual-layer DVD-RW. Everything functions correctly in Linux, and I can boot from CDs/DVDs, so the drive itself does work.
Symptom: when I insert any CD or DVD (burned or retail), the drive spins up the disk, and (usually) displays the disk title in My Computer, but just continues to spin indefinitely. I cannot browse the disk in the drive, install from it, or read anything.
Just this month, we have started getting reports from a number of very stable clients that MrxSmb Event ID 50 errors keep appearing in their system event logs. Otherwise, they do not appear to have any networking problems, except that there is a critical legacy application which seems either to be generating the MrxSmb errors or to be having errors occur because of them. The legacy application is comprised of 16-bit and 32-bit code and has not been changed or recompiled in many years. It has always been stable on Windows XP systems. The customers that have the problem usually have a small (5 clients or fewer) peer-to-peer network with all Windows XP systems. All service packs are loaded on the XP machines.
Note: the only thing that seems to correct the problem is disabling opportunistic locking. I don't like this solution because it seems to slow down the network and sometimes causes record-locking issues between users (on some networks). Also, this seems to have just started happening - as if a Windows update for XP has caused it? However, I have removed recent updates and it did not correct the issue.
Thanks in advance for any help you can provide.
We have an interesting issue with one of our server shares, or possibly with our Win 7 desktops.
When our users try to save files to a subfolder of a mapped drive on our DC, either via copy/paste or through an application, they receive an error saying "Path not found". They can, however, browse this folder and open files from it, which is where the "Path not found" error doesn't seem to stack up, in my opinion.
Users can save files fine in the root folder of the mapped drive, though; it appears to affect only subfolders.
Which users and machines are affected seems to be random. An affected user can log on to a different machine and save to subfolders on the same mapped drive without any problem. Event Viewer hasn't been much help either.
Currently, the only solution we have found is to re-image the affected machines, which resolves the issue.
Our servers are Server 2008 R2 with Win 7 Pro desktops.
Any help/pointers/suggestions would greatly be appreciated.
I have a Samba share that works fine for PCs, but we have a Mac user who seems to be able only to edit and rename existing files; he cannot add new files.
Any ideas?
Here is the share setup:
path = /media/freeagent/officeshare
read only = No
guest ok = Yes
writeable = yes
public = yes
When copying large files or testing write speed with dd, the maximum write speed I can get is about 12-15 MB/s on drives using the NTFS filesystem. I tested multiple drives (all connected using SATA), and they all get write speeds of 100 MB/s+ on Windows or when formatted with ext4, so it's not an alignment or drive issue.
top shows high cpu usage for the mount.ntfs process.
AMD dual core processor (2.2 GHz)
Kernel version: 3.5.0-23-generic
Ubuntu 12.04
ntfs-3g version: both 2012.1.15AR.1 (Ubuntu default version) and 2013.1.13AR.2
How can I fix the write speed?
I have never used a Linux system in an AD environment before, and I am trying to join my laptop running Ubuntu to our Active Directory (the DC is a Windows Server 2008 machine) using Likewise Open.
Using the GUI wizard, I have joined the domain.
I can mount network shares using CIFS.
Problem: I only have read access to our file server. What more is needed to get AD to recognize me as a user who has the appropriate rights?
Any help is appreciated.