Search Results

Search found 72370 results on 2895 pages for 'file moving'.

  • Getting Error When Opening Files

    - by Nathan Campos
    I'm developing a simple text editor to get to know the PocketC language better, so I've written this:

        #include "\\Storage Card\\My Documents\\PocketC\\Parrot\\defines.pc"

        int filehandle;
        int file_len;
        string file_mode;

        initComponents() {
            createctrl("EDIT", "test", 2, 1, 0, 24, 70, 25, TEXTBOX);
            wndshow(TEXTBOX, SW_SHOW);
            guigetfocus();
        }

        main() {
            filehandle = fileopen(OpenFileDlg("Plain Text Files (*.txt)|*.txt; All Files (*.*)|*.*"), 0, FILE_READWRITE);
            file_len = filegetlen(filehandle);
            if(filehandle = -1) {
                MessageBox("File Could Not Be Found!", "Error", 3, 1);
            }
            initComponents();
            editset(TEXTBOX, fileread(filehandle, file_len));
        }

    When I run the application it opens the Open File dialog, I select a file (at \test.txt) that I created with Notepad, and then I get my MessageBox saying the file wasn't found. Why am I getting this if the file is correct? PS: After I dismiss the MessageBox, I see that the textbox displays where the file is (I've tested with many other files, and I get the error with all of them).
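
    A likely fix, assuming PocketC follows C's assignment semantics: the condition uses = where == was intended, so it assigns -1 to the handle (clobbering it) and the branch is always taken. A hedged sketch, reusing the names from the question:

        filehandle = fileopen(OpenFileDlg("Plain Text Files (*.txt)|*.txt; All Files (*.*)|*.*"), 0, FILE_READWRITE);
        if (filehandle == -1) {       // == : test the handle, don't overwrite it
            MessageBox("File Could Not Be Found!", "Error", 3, 1);
            return;                   // don't read from a bad handle
        }
        file_len = filegetlen(filehandle);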

  • C++: Is there any good way to read/write without specifically stating character type in function name

    - by Mark L.
    I'm having a problem getting a program to read from a file based on a template, for example:

        bool parse(basic_ifstream<T> &file) {
            T ch;
            locale loc = file.getloc();
            basic_string<T> buf;
            file.unsetf(ios_base::skipws);
            if (file.is_open()) {
                while (file >> ch) {
                    if (isalnum(ch, loc)) {
                        buf += ch;
                    } else if (!buf.empty()) {
                        addWord(buf);
                        buf.clear();
                    }
                }
                if (!buf.empty()) {
                    addWord(buf);
                }
                return true;
            }
            return false;
        }

    This works when I instantiate the class with <char>, but it has problems when I use <wchar_t> (clearly). Outside of the class, I'm using:

        for (iter = mp.begin(); iter != mp.end(); ++iter) {
            cout << iter->first << setw(textwidth - iter->first.length() + 1);
            cout << " " << iter->second << endl;
        }

    to write all of the information from this data structure (it's a map<basic_string<T>, int>), and as predicted, cout explodes if iter->first isn't a char array. I've looked online and the consensus is to use wcout, but unfortunately, since this program requires that the template can be changed at compile time (<char> vs. <wchar_t>), I'm not sure how I could get away with simply choosing cout or wcout. That is, unless there were a way to read/write wide characters without changing lots of code. If this explanation sounds awkwardly complicated, let me know and I'll address it as best I can.
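
    One common way out, sketched here as a minimal example (the stream_for trait and print_pair helper are names made up for illustration, not from the question): select the matching stream at compile time with a small trait specialized per character type.

        #include <iostream>
        #include <string>

        // Pick the ostream that matches the character type at compile time.
        template <typename C> struct stream_for;
        template <> struct stream_for<char> {
            static std::ostream&  out() { return std::cout; }
        };
        template <> struct stream_for<wchar_t> {
            static std::wostream& out() { return std::wcout; }
        };

        template <typename C>
        void print_pair(const std::basic_string<C>& word, int count) {
            stream_for<C>::out() << word << ' ' << count << std::endl;
        }

    With that in place, the write loop stays a single template: print_pair<T>(iter->first, iter->second); and no call site ever names cout or wcout directly.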

  • warning: assignment makes pointer from integer without a cast

    - by FILIaS
    I'm new to programming in C with arrays and files. I'm just trying to run the following code, but I get warnings like this:

        warning: assignment makes pointer from integer without a cast

    Any help? It might be silly... but I can't find what's wrong.

        FILE *fp;
        FILE *cw;
        char filename_game[40], filename_words[40];

        int main()
        {
            while(1)
            {
                /* Input filenames. */
                printf("\n Enter the name of the file with the cryptwords array: \n");
                gets(filename_game);
                printf("\n Give the name of the file with crypted words:\n");
                gets(filename_words);

                /* Try to open the file with the game. */
                if (fp=fopen("crypt.txt","r")!=NULL)
                {
                    printf("\n Successful opening %s \n",filename_game);
                    fclose(fp);
                    puts("\n Enter x to exit,any other to continue! \n ");
                    if ( (getc(stdin))=='x') break;
                    else continue;
                }
                else
                {
                    fprintf(stderr,"ERROR!%s \n",filename_game);
                    puts("\n Enter x to exit,any other to continue! \n");
                    if (getc(stdin)=='x') break;
                    else continue;
                }

                /* Try to open the file with the names. */
                if (cw=fopen("words.txt","r")!=NULL)
                {
                    printf("\n Successful opening %s \n",filename_words);
                    fclose(cw);
                    puts("\n Enter x to exit,any other to continue \n ");
                    if ( (getc(stdin))=='x') break;
                    else continue;
                }
                else
                {
                    fprintf(stderr,"ERROR!%s \n",filename_words);
                    puts("\n Enter x to exit,any other to continue! \n");
                    if (getc(stdin)=='x') break;
                    else continue;
                }
            }
            return 0;
        }
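
    The warning points at the two fopen lines: != binds more tightly than =, so fp is assigned the int result of the comparison rather than the FILE pointer. Parentheses around the assignment fix it; a sketch of the first case:

        /* Evaluate (fp = fopen(...)) first, then compare the pointer to NULL. */
        if ((fp = fopen("crypt.txt", "r")) != NULL)
        {
            printf("\n Successful opening %s \n", filename_game);
            fclose(fp);
        }

    The same change applies to the cw line. Unrelated to the warning, but worth knowing: gets() cannot be used safely because it has no bounds check; fgets(filename_game, sizeof filename_game, stdin) is the usual replacement.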

  • Multi-threaded .NET application blocks during file I/O when protected by Themida

    - by Erik Jensen
    As the title says, I have a .NET application GUI that uses multiple threads to perform separate file I/O, and I notice that the threads occasionally block when the application is protected by Themida. One thread is devoted to reading from a serial COM port and another thread is devoted to copying files. What I experience is that occasionally, when the file-copy thread encounters a network delay, it blocks the other thread that is reading from the serial port. In addition to a slow network (which can be transient), I can cause the problem to happen more frequently by making a PathFileExists call to a bad path, e.g.:

        PathFileExists("\\\\BadPath\\file.txt");

    The COM port reading function will block during the call to ReadFile. This only happens when the application is protected by Themida. I have tried under WinXP, Win7, and Server 2012. In a streamlined test project, if I replace the .NET application with an MFC unmanaged application and still utilize the same threads, I see no issue even when protected with Themida. I have contacted Oreans support and here is their response:

        The way that a .NET application is protected is very different from a native application. To protect a .NET application, we need to hook most of the file access APIs in order to "cheat" the .NET Framework that the application is protected. I guess that those special hooks (on CreateFile, ReadFile...) are delaying a bit the execution in your application and the problem appears. We did a test making those hooks as light as possible (with minimum code on them) but the problem still appeared in your application. The rest of software protectors that we tried (like Enigma, Molebox...) also use a similar hooking approach as it's the only way to make the .NET packed file to work. If those hooks are not present, the .NET Framework will abort execution as it will see that the original file was tampered (due to all Microsoft checks on .NET files). Those hooks are not present in a native application, that's why it should be working fine on your native application.

    Oreans support tried other software protectors such as Enigma Protector, Enigma VirtualBox, and Molebox, and all exhibit the exact same problem. What I have found as a workaround is to separate out the file-copy logic (where the file-exists call is being made) into a completely separate process. I have experimented with converting the thread functions from unmanaged C++ to VB.NET equivalents (PathFileExists to System.IO.File.Exists, and CreateFile/ReadFile to System.IO.Ports.SerialPort.Open/Read) and still see the same serial-port read blocked when the file check or copy call is delayed. I have also tried setting the ReadFile to work asynchronously, but that had no effect. I believe I am dealing with some low-level Windows layer that, no matter the language, exhibits a block on a shared resource -- and only when the application is executing under a single .NET process protected by Themida, which evidently installs some hooks to allow .NET execution.

    At this time, converting the entire application away from .NET is not an option, nor is separating out the file-copy logic into a separate process. I am wondering if anyone else has more knowledge of how a file operation can block another thread reading from a system port. I have included here example applications that show the problem:

        https://db.tt/cNMYfEIg - VB.NET
        https://db.tt/Y2lnTqw7 - MFC

    They are Visual Studio 2010 solutions. When running the Themida-protected exe, you can see that when the FileThread counter pauses (executing the File.Exists call), the ReadThread counter also pauses. When running the non-protected Visual Studio output exe, the ReadThread counter does not pause, which is how we expect it to function. Thanks!

  • Emacs: Often switching between Emacs and my IDE's editor, how to automatically 'synch' the files?

    - by WizardOfOdds
    I very often need to do some Emacs magic on some files, and I need to go back and forth between my IDE (IntelliJ IDEA) and Emacs. When a change is made under Emacs (and after I've saved the file) and I go back to IntelliJ, the change appears immediately (if I recall correctly, I configured IntelliJ to "always reload file when a modification is detected on disk" or something like that). I don't even need to reload: as soon as IntelliJ IDEA gains focus, it instantly reloads the file (and I hence have immediate access to the modifications I made from Emacs). So far, so very good. However, "the other way round" it doesn't work yet. Can I configure Emacs so that every time a file is changed on disk it reloads it? Or make Emacs, every time it "gains focus", verify whether any currently opened file has been modified on disk? I know I can start modifying the buffer under Emacs and it shall instantly warn me that the file has been modified, but I'd rather have it do so immediately (for example, if I used my IDE to make some big change, when I come back to Emacs what I see may no longer be what the file actually contains, and it's a bit weird).
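
    Assuming a stock GNU Emacs, auto-revert-mode is built for exactly this: it polls files on disk (every few seconds by default) and reloads any buffer whose file changed, as long as the buffer has no unsaved edits. In your init file:

        ;; Reload every file-visiting buffer when its file changes on disk.
        (global-auto-revert-mode 1)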

  • Segmentation fault

    - by darkie15
    #include <stdio.h>
        #include <zlib.h>
        #include <unistd.h>
        #include <string.h>

        int main(int argc, char *argv[])
        {
            char *path = NULL;
            size_t size;
            int index;

            printf("\nArgument count is = %d", argc);
            printf("\nThe 0th argument to the file is %s", argv[0]);
            path = getcwd(path, size);
            printf("\nThe current working directory is = %s", path);

            if (argc <= 1)
            {
                printf("\nUsage: ./output filename1 filename2 ...");
            }
            else if (argc > 1)
            {
                for (index = 1; index <= argc; index++)
                {
                    printf("\n File name entered is = %s", argv[index]);
                    strcat(path, argv[index]);
                    printf("\n The complete path of the file name is = %s", path);
                }
            }
            return 0;
        }

    In the above code, here is the output that I get while running it:

        $ ./output test.txt
        Argument count is = 2
        The 0th argument to the file is ./output
        The current working directory is = /home/welcomeuser
        File name entered is = test.txt
        The complete path of the file name is = /home/welcomeusertest.txt
        Segmentation fault (core dumped)

    Can anyone please help me understand why I am getting a core dump? Regards, darkie
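
    Two bugs conspire here. First, the loop runs index <= argc, so the last iteration reads argv[argc], which is NULL; printf("%s", NULL) and strcat(path, NULL) are undefined behaviour. Second, size is uninitialized and strcat appends past the end of whatever buffer getcwd returned. A hedged sketch that avoids both (PATH_MAX from <limits.h>):

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <limits.h>

        int main(int argc, char *argv[])
        {
            char cwd[PATH_MAX];
            char full[PATH_MAX * 2];

            if (getcwd(cwd, sizeof cwd) == NULL) {
                perror("getcwd");
                return 1;
            }
            /* index < argc: argv[argc] is the terminating NULL, not a file name */
            for (int index = 1; index < argc; index++) {
                /* Build each path in its own buffer instead of growing cwd */
                snprintf(full, sizeof full, "%s/%s", cwd, argv[index]);
                printf("The complete path of the file name is = %s\n", full);
            }
            return 0;
        }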

  • What is the fastest way for reading huge files in Delphi?

    - by dummzeuch
    My program needs to read chunks from a huge binary file with random access. I have got a list of offsets and lengths which may have several thousand entries. The user selects an entry, and the program seeks to the offset and reads length bytes. The program internally uses a TMemoryStream to store and process the chunks read from the file. Reading the data is done via a TFileStream like this:

        FileStream.Position := Offset;
        MemoryStream.CopyFrom(FileStream, Size);

    This works fine, but unfortunately it becomes increasingly slower as the files get larger. The file size starts at a few megabytes but frequently reaches several tens of gigabytes. The chunks read are around 100 KB in size. The file's content is only read by my program, and it is the only program accessing the file at the time. Also, the files are stored locally, so this is not a network issue. I am using Delphi 2007 on a Windows XP box. What can I do to speed up this file access?
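
    One avenue worth benchmarking, sketched in C++/Win32 below because that is what I can verify here (Delphi can call the same APIs): map a window of the file instead of seeking and copying through a stream, so the OS cache serves repeated chunk reads directly. The offsets passed to MapViewOfFile must be multiples of the allocation granularity (64 KB on typical systems); this sketch rounds down and compensates. ReadChunk is an invented name, and hMap is assumed to come from a one-time CreateFile + CreateFileMapping(hFile, NULL, PAGE_READONLY, 0, 0, NULL).

        #include <windows.h>
        #include <cstring>

        // Read `length` bytes at `offset` via a transient file mapping.
        // Sketch only: error handling trimmed.
        void ReadChunk(HANDLE hMap, unsigned long long offset,
                       size_t length, void *dest)
        {
            SYSTEM_INFO si;
            GetSystemInfo(&si);
            unsigned long long base =
                offset - (offset % si.dwAllocationGranularity);
            size_t slack = (size_t)(offset - base);

            const char *view = (const char *)MapViewOfFile(
                hMap, FILE_MAP_READ,
                (DWORD)(base >> 32), (DWORD)(base & 0xFFFFFFFF),
                slack + length);
            if (view) {
                memcpy(dest, view + slack, length);
                UnmapViewOfFile(view);
            }
        }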

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Two of the most useful predefined compiler macros in C/C++ are __FILE__ and __LINE__, which expand to the compilation unit's file name and the line number where the value is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a simple compiler-expandable macro, it is an attribute, but it serves exactly the same purpose. Now we get:

        CallerLineNumberAttribute  == __LINE__
        CallerFilePathAttribute    == __FILE__
        CallerMemberNameAttribute  == __FUNCTION__ (MSVC extension)

    The most important one is CallerMemberNameAttribute, which is very useful to implement the INotifyPropertyChanged interface without the need to hard-code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute and you get the property name as a string inserted directly by the C# compiler at compile time.

        public string UserName
        {
            get { return _userName; }
            set
            {
                _userName = value;
                RaisePropertyChanged(); // no more RaisePropertyChanged("UserName")!
            }
        }

        protected void RaisePropertyChanged([CallerMemberName] string member = "")
        {
            var copy = PropertyChanged;
            if (copy != null)
            {
                copy(this, new PropertyChangedEventArgs(member));
            }
        }

    Nice and handy. This was obviously the prime reason to implement this feature in the C# 5.0 compiler. You can repurpose it for tracing, to get your hands on the method name of your caller (along with other stuff) very quickly now. All infos are added at compile time, which is much faster than other approaches like walking the stack. The example on MSDN shows the usage of this attribute:

        public static void TraceMessage(string message,
                        [CallerMemberName] string memberName = "",
                        [CallerFilePath] string sourceFilePath = "",
                        [CallerLineNumber] int sourceLineNumber = 0)
        {
            Console.WriteLine("Hi {0} {1} {2}({3})",
                message, memberName, sourceFilePath, sourceLineNumber);
        }

    When I think of tracing, I usually want an API which allows me to:

        - Trace method enter and leave
        - Trace messages with a severity like Info, Warning, Error

    When I print a trace message, it is very useful to print out the method and type name as well. So your API must either be able to pass the method and type name as strings, or extract them automatically by walking back one stack frame and fetching the infos from there. The first glaring deficiency is that there is no CallerTypeAttribute yet, because the C# compiler team was not satisfied with its performance.
    A usable trace API might therefore look like this:

        enum TraceTypes
        {
            None = 0,
            EnterLeave = 1 << 0,
            Info = 1 << 1,
            Warn = 1 << 2,
            Error = 1 << 3
        }

        class Tracer : IDisposable
        {
            string Type;
            string Method;

            public Tracer(string type, string method)
            {
                Type = type;
                Method = method;
                if (IsEnabled(TraceTypes.EnterLeave, Type, Method))
                {
                }
            }

            private bool IsEnabled(TraceTypes traceTypes, string Type, string Method)
            {
                // Do checking here if tracing is enabled
                return false;
            }

            public void Info(string fmt, params object[] args) { }
            public void Warn(string fmt, params object[] args) { }
            public void Error(string fmt, params object[] args) { }

            public static void Info(string type, string method, string fmt, params object[] args) { }
            public static void Warn(string type, string method, string fmt, params object[] args) { }
            public static void Error(string type, string method, string fmt, params object[] args) { }

            public void Dispose()
            {
                // trace method leave
            }
        }

    This minimal trace API is very fast but hard to maintain, since you need to pass in the type and method name as hard-coded strings which can change from time to time. But now we have at least CallerMemberName to get rid of the explicit method parameter, right? Not really. Since any acceptably usable trace API should have a method signature like

        Tracexxx(... string fmt, params object[] args)

    we are not able to add additional optional parameters after the args array. If we put them before the format string, we would need to make them optional as well, which would mean the compiler would have to figure out which arguments are our trace message and arguments (not likely), or we would need to specify everything explicitly, just like before. There are ways around this by providing a myriad of overloads which in the end are routed to the very same method, but that is ugly. I am not sure whether nobody inside MS agrees that the above API is reasonable to have, or (more likely) the whole talk about using this feature for diagnostic purposes was not about a core feature at all, but a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow, after the params keyword, another set of optional arguments which are always filled in by the compiler, but I do not know if this is an easy one.

    The thing I am missing much more is the not-provided CallerType attribute, but not in the way you would think of. In the API above I added some filtering based on method and type to stay as fast as possible for types where tracing is not enabled at all. It should be no more expensive than an additional method call and a bool variable check if tracing for this type is not enabled at all. The data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I expect to happen, I have come up with an alternative approach which basically allows me to attach runtime data to any existing type object in a super fast way. The key to success is the usage of generics:

        class Tracer<T> : IDisposable
        {
            string Method;

            public Tracer(string method)
            {
                if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
                {
                }
            }

            public void Dispose()
            {
                if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
                {
                }
            }

            public static void Info(string fmt, params object[] args) { }

            /// <summary>
            /// Every type gets its own instance with a fresh set of variables
            /// to describe the current filter status.
            /// </summary>
            /// <typeparam name="UsingType"></typeparam>
            internal class TraceData<UsingType>
            {
                internal static TraceData<UsingType> Instance = new TraceData<UsingType>();

                // flag if we need to reinit the trace data in case of
                // reconfigured trace settings at runtime
                public bool IsInitialized = false;

                // Enabled trace levels for this type
                public TraceTypes Enabled = TraceTypes.None;
            }
        }

    We do not need to pass the type as a string or Type object to the trace API. Instead we define a generic API that accepts the using type as a generic parameter. Then we can create a TraceData static instance which is, due to the nature of generics, a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing, you do not want to bring the app down by simply enabling tracing for one special, rarely used type. The trace filter for types which are not enabled must therefore be the fastest code path. This approach has the nice side effect that if you store the TraceData instances in one global list, you can reconfigure tracing at runtime safely by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type,TraceData> object, but big hash tables have random memory access semantics, which is bad for cache locality, and you always pay for the lookup, which involves hash code generation, an equality check and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing API with minimal perf overhead.

    But it is cumbersome to write the generic type argument explicitly every time, and worse, if you refactor code and move parts of it to other classes, it might be that you cannot configure tracing correctly anymore. I would therefore like to decorate my type with an attribute

        [CallerType]
        class Tracer<T> : IDisposable

    to tell the compiler to fill in the generic type argument automatically:

        class Program
        {
            static void Main(string[] args)
            {
                using (var t = new Tracer()) // equivalent to new Tracer<Program>()
                {

    That would be really useful and super fast, since you do not need to pass any type object around, yet you have full type infos at hand. This change would be breaking if another non-generic type existed in the same namespace where now the generic counterpart would be preferred. But this is an acceptable risk in my opinion, since today you can already get conflicts if two generic types of the same name are defined in different namespaces; this would only be a variation of that issue. When you think about this further, you can add more features, like tracing the exception in your Dispose method if the method is left with an exception, with that little trick I wrote about some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can, e.g.:

        - Reconfigure tracing at run time.
        - Take a memory dump when a specific method is left with a specific exception.
        - Throw an exception when a specific trace statement is hit (useful for testing error conditions).
        - Execute a passed delegate which e.g. dumps additional state when enabled.
        - Write data to an in-memory ring buffer and dump it when specific events occur (e.g. a method is left with an exception, triggered from outside).
        - Write data to an output device.
        - ...
    This stuff is really useful to have when your code is in production on a mission-critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (with .NET 4 not anymore, except if you enable a compatibility flag) where you would like to have a minidump, or you have reached, after two weeks of operation, a state where you need a full memory dump at a specific point in time in the middle of a transaction. On my older machine I get, with this super fast approach, 50 million traces/s when tracing is disabled. When I know that tracing is enabled for this type, I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check further whether a specific action or output device is configured for this method, which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s, so performance is not so important anymore, since at that point I actually want to do something. The CallerMemberName feature of the C# 5 compiler is nice, but I would have preferred to get direct access to the MethodHandle and not to the stringified version of it. And I really would like to see a CallerType attribute implemented, to fill in the generic type argument of the call site and augment the static CLR type data with runtime data.

  • Generic/Text Printer on Windows 7 not prompting for file name

    - by FantaFan
    Guys & gals, hope someone can shed some light on this. I am downloading reports from an AIX-based system by directing them to a TT printer which the terminal emulator (MultiView 2000) intercepts and directs to the default printer on the local system. This local printer is configured as a vanilla Generic/Text printer attached to a FILE port. When I print from AIX, the output is spooled down and the local printer prompts for a file name into which to save the file... but not under Windows 7.

    This has worked fine for many years, on both Win2K and WinXP. However, on Windows 7 the output gets spooled as a file into spool\PRINTERS (and looks as expected), but the print job then hangs with a status of "Error - Printing" and never prompts for a file name. I have to cancel the job. The Generic/Text printer works as expected with other applications.

    I have tried setting the printer to print directly rather than spooling, but this only serves to hang the terminal session too. I've also tried running the emulator in Windows 2000 Compatibility Mode and as Administrator in case it was something like that, but with no luck. As you might expect, it does work fine in XP Mode (as long as I print to a printer defined therein and not the host's printer), but operationally this isn't going to be an option. Obviously this emulation software is a decade old (at least) and I could just cross/upgrade all the users (at a cost) but, before I do so, has anyone seen this sort of behaviour before and found some sort of fix?

        Remote OS: AIX 5
        Client OS: Windows 7 Pro (32-bit)
        Printer: Generic/Text on a FILE port
        TE Software: MultiView 2000 (32-bit)

    Thanks in advance.

  • "Unable to open MRTG log file" error with nagios and mrtg

    - by Simone Magnaschi
    We have a strange issue with our setup of Icinga/Nagios and MRTG. Icinga is working great and has no problems; it can monitor basically everything without issues. We set up MRTG to gather bandwidth data from our routers and switches. MRTG is working fine: it stores the log data in the /var/www/mrtg/ directory and displays the graph data via web. So we assume MRTG is doing great. We tried to set up bandwidth checks in Nagios:

        define service{
            use                  generic-service   ; Inherit values from a template
            host_name            zywall-agora
            service_description  ZYWALL AGORA TRAFFICO
            check_command        check_local_mrtgtraf!/var/www/mrtg/x.x.x.x_2.log!AVG!1000000,2000000!5000000,5000000!1000
            check_interval       1   ; Check the service every 1 minute under normal conditions
            retry_interval       1   ; Re-check every minute until its final/hard state is determined
        }

    where /var/www/mrtg/x.x.x.x_2.log is the correct log file path. We keep getting an "Unable to open MRTG log file" error in the test result in the Icinga web interface. We tried everything: giving ownership of the log file to user nagios or icinga, giving it chmod 777, copying the file to another directory and giving it full permissions. Same error. The strange thing is that if we run the command that Nagios generates in a bash session, it works like a charm:

        /usr/lib64/nagios/plugins/check_mrtgtraf -F /var/www/mrtg/x.x.x.x_2.log -a AVG -w 10,20 -c 5000000,5000000 -e 10

    Result:

        Traffic WARNING - Avg. In = 17.9 KB/s, Avg. Out = 5.0 KB/s|in=17.877930KB/s;10.000000;5000000.000000;0.000000 out=5.000000KB/s;20.000000;5000000.000000;0.000000

    We ran that command line as root, as user nagios and as user icinga, and all three worked OK. We thought that the command Nagios performs might have something wrong in it, so we debugged Nagios, but we found that the command it generates is the same as above. Searching Google for this kind of problem returns only issues on systems where MRTG is not installed, or issues with a wrong path to the log file, but neither seems to be our case. We are stuck. Can somebody help?

  • qemu-img: Could not open $FILE

    - by HTTP500
    I received a single-file VMDK from a vendor that has a virtual appliance for a particular product I'm interested in evaluating. We run a KVM solution (Proxmox), so I tried converting the file, but on that system qemu-img blows up. (I was able to convert multipart VMDK files from Bitnami without error.) So I figured I'd just yum install qemu-img on a RHEL 6.3 VM and do it there. But despite the fact that file identifies the file just fine, when I run qemu-img on it I get an error saying it can't open the file:

        [root@host dir]# file 1.vmdk
        1.vmdk: VMware4 disk image
        [root@host dir]# qemu-img info 1.vmdk
        qemu-img: Could not open 'vmdk'

    I've seen some other people post on the interwebs that they've had this problem, but none of them seem to have a resolution. Does anyone have any ideas? I have checked the MD5SUM already.

    EDIT 1:

        [root@host dir]# qemu-img info -f vmdk 1.vmdk
        qemu-img: Could not open '1.vmdk'

    EDIT 2: Ran strace per suggestion. Not sure what to look for... Here is a possible:

        ioctl(3, CDROM_DRIVE_STATUS, 0x7fffffff) = -1 ENOTTY (Inappropriate ioctl for device)

  • Samba share - MS Excel error when saving ("cannot access the file, there are several possible reasons")

    - by brain90
    Dear fellow ServerFaulters, I have a weird problem with my Samba share. I have one share definition for 3 clients (A, B, C). This share contains some Excel files with a lot of formulas, linked to each other. Client A accesses the files with LibreOffice (Ubuntu); client B accesses them with WinXP & MS Office 2003. Reading and writing work successfully on both of them. The problem occurs when client C accesses the same file with MS Excel 2003 (Windows XP). This message box appears when he saves the file:

        Microsoft Office Excel cannot access the file '\\192.168.1.23\myshare\'.
        There are several possible reasons:
        - The file or path does not exist.
        - The file is being used by another program.
        - The workbook you are trying to save has the same name as a currently open workbook.

    I tried http://support.microsoft.com/kb/291204 but it didn't work. Below is my share definition:

        [brainshare]
        comment = brainshare
        path = /opt/brainshare/
        valid users = @brainshare
        force group = brainshare
        read only = No
        create mask = 0775
        veto files = /*.scr/*.eml/thumbs.com/

    Help me please... Thanks in advance! Server: Ubuntu 10.10, Samba version 3.5.4

  • CouchDB crashes at startup when path to config file has space(s)

    - by Barry Wark
    I'm hoping to run CouchDB as a per-user Launch Agent on OS X, using the couchdbx-core folder from the CouchDB Server.app as the base of my CouchDB deployment. I'd like each user to have their own couch instance (on a different port), necessitating separate config files for each instance. The logical place to put these files is in ~/Library/Application Support/ for each user. I can put the entire distribution in ~/Library/Application Support/my-app/couchdbx, put the .ini at ~/Library/Application Support/my-app/local.ini, and start CouchDB as

        bin/couchdb -a ../local.ini

    (from ~/Library/Application Support/my-app/couchdbx); that works great. But I'd like to save every user the ~50MB couchdbx and install the couchdbx-core in a shared location (e.g. within my app's .app bundle). When I do this, the path to the per-user config file contains a space, and I get the following error when starting CouchDB:

        $ bin/couchdb -n -a ~/Library/Application\ Support/us.physion.ovation/default.ini
        {"init terminating in do_boot",{{badmatch,{error,{bad_return,{{couch_app,start,[normal,["/Users/hs/prj/build-couchdb/build/etc/couchdb/default.ini","/Users/hs/prj/build-couchdb/build/etc/couchdb/local.ini"]]},{'EXIT',{{badmatch,{error,{error,enoent}}},[{couch_server_sup,start_server,1,[{file,"/Users/hs/prj/build-couchdb/dependencies/couchdb/src/couchdb/couch_server_sup.erl"},{line,56}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,274}]}]}}}}}},[{couch,start,0,[{file,"/Users/hs/prj/build-couchdb/dependencies/couchdb/src/couchdb/couch.erl"},{line,18}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}

    Is there any way to provide a config file on the command line if that config file's path includes space(s)? Despite my best efforts in the mailing list archives, wiki and Google, I haven't been able to find a solution or a definitive "it can't work". Any help greatly appreciated.

  • Create a special folder within an outlook PST file

    - by Tony Dallimore
    Original question: I have two problems caused by missing special folders.

    1. I added a second email address, for which Outlook created a new PST file with an Inbox to which emails are successfully imported. But there is no Deleted Items folder: if I attempt to delete an unwanted email, it is struck out; if I move an email to a different PST file, it is copied.

    2. I created a new PST file using Data File Management. This PST file has no Drafts folder. This is not important, but I fail to see why I cannot have a Drafts folder if I want one.

    Any suggestions for solving these problems, particularly the first, gratefully received.

    Update: Thanks to Ramhound and Dave Rook for their helpful responses to my original question. I assumed that not having a Drafts folder in an archive PST file and not having a Deleted Items folder associated with an Inbox were parts of the same problem, or I would not have mentioned the Drafts folder issue, since I have an easy workaround for it. Perhaps my question should have been: how do I load emails from an IMAP account and still be able to delete the spam?

  • cURL hangs trying to upload file from stdin

    - by SidneySM
    I'm trying to PUT a file with cURL. This hangs:

        curl -vvv --digest -u user -T - https://example.com/file.txt < file

    This does not:

        curl -vvv --digest -u user -T file https://example.com/file.txt

    What's going on?

        * About to connect() to example.com port 443 (#0)
        *   Trying 0.0.0.0... connected
        * Connected to example.com (0.0.0.0) port 443 (#0)
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Server key exchange (12):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using DHE-RSA-AES256-SHA
        * Server certificate:
        *    subject: serialNumber=jJakwdOewDicmqzIorLkKSiwuqfnzxF/, C=US, O=*.example.com, OU=GT01234567, OU=See www.example.com/resources/cps (c)10, OU=Domain Control Validated - ExampleSSL(R), CN=*.example.com
        *    start date: 2010-01-26 07:06:33 GMT
        *    expire date: 2011-01-28 11:22:07 GMT
        *    common name: *.example.com (matched)
        *    issuer: C=US, O=Equifax, OU=Equifax Secure Certificate Authority
        * SSL certificate verify ok.
        * Server auth using Digest with user 'user'
        > PUT /file.txt HTTP/1.1
        > User-Agent: curl/7.19.4 (universal-apple-darwin10.0) libcurl/7.19.4 OpenSSL/0.9.8l zlib/1.2.3
        > Host: example.com
        > Accept: */*
        > Transfer-Encoding: chunked
        > Expect: 100-continue
        >
        < HTTP/1.1 100 Continue

  • how to pass an xml file to lxml to parse?

    - by BeeBand
    I'm trying to parse an XML file using lxml. xml.etree allowed me to simply pass the file name as a parameter to the parse function, so I attempted to do the same with lxml. My code:

        from lxml import etree
        from lxml import objectify

        file = "C:\Projects\python\cb.xml"
        tree = etree.parse(file)

    but I get the error:

        Traceback (most recent call last):
          File "cb.py", line 5, in <module>
            tree = etree.parse(file)
          File "lxml.etree.pyx", line 2698, in lxml.etree.parse (src/lxml/lxml.etree.c:49590)
          File "parser.pxi", line 1491, in lxml.etree._parseDocument (src/lxml/lxml.etree.c:71205)
          File "parser.pxi", line 1520, in lxml.etree._parseDocumentFromURL (src/lxml/lxml.etree.c:71488)
          File "parser.pxi", line 1420, in lxml.etree._parseDocFromFile (src/lxml/lxml.etree.c:70583)
          File "parser.pxi", line 975, in lxml.etree._BaseParser._parseDocFromFile (src/lxml/lxml.etree.c:67736)
          File "parser.pxi", line 539, in lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:63820)
          File "parser.pxi", line 625, in lxml.etree._handleParseResult (src/lxml/lxml.etree.c:64741)
          File "parser.pxi", line 565, in lxml.etree._raiseParseError (src/lxml/lxml.etree.c:64084)
        lxml.etree.XMLSyntaxError: AttValue: " or ' expected, line 2, column 26

    What am I doing wrong?

  • How can I ensure my programmatic uploads are done in the correct order?

    - by ccomet
    In our application, we store two copies of a file - an approved one and an unapproved one. Both track their versions separately. When the unapproved one is then approved, all of its versions are added as new versions to the approved file. To do this properly, my code has to upload each version separately into the approved folder and update the item each time with that version's information. For some reason, though, this doesn't always work properly. In my latest scenario, the latest version was uploaded first, and then all of the remaining versions were uploaded afterwards. However, my code is explicitly supposed to upload the other versions first; that's the order I wrote it in. Why is this happening? And if it is possible, how do I ensure that the versions are uploaded in the correct order?

    Clarification - It's not a problem with the enumeration; I'm getting the previous versions in the correct order. What is happening is that the final version, which is written after the loop, is being uploaded before the loop. Which really doesn't make any sense to me. Here's a condensed version of the relevant code:

        // These three are initialized earlier in the code.
        SPList list;      // The document library
        SPListItem item;  // The list item in the Unapproved folder
        int AID;          // The item id of the corresponding item in the Approved folder
        byte[] contents;  // Not initialized.

        /* These uploads happen second when they should happen first. */
        if (item.File.Versions.Count > 0)
        {
            // This loop is actually a separate method call, if that matters.
            // For simplicity I expanded it here.
            foreach (SPFileVersion fVer in item.File.Versions)
            {
                if (!fVer.IsCurrentVersion)
                {
                    contents = fVer.OpenBinary();
                    SPFile fSub = aFolder.Files.Add(fVer.File.Name, contents, u1, fVer.CreatedBy, dt1, fVer.Created);
                    SPListItem subItem = list.GetItemById(AID);

                    // This method updates the newly uploaded version with the field data of that version.
                    UpdateFields(item.Versions.GetVersionFromLabel(fVer.VersionLabel), subItem);
                }
            }
        }

        /* This upload happens first when it should happen last. */
        // Does the same as the earlier loop, but for the final version.
        contents = item.File.OpenBinary();
        SPFile f = aFolder.Files.Add(item.File.Name, contents, u1, u2, dt1, dt2);
        SPListItem finalItem = list.GetItemById(AID);
        UpdateFields(item.Versions[0], finalItem);
        item.Delete();

  • Fread binary file dynamic size string [C]

    - by Blackbinary
    I've been working on this assignment where I need to read in "records" and write them to a file, and then have the ability to read/find them later. On each run of the program, the user can decide to write a new record or read an old record (either by name or #). The file is binary; here is its definition:

        typedef struct {
            char *name;
            char *address;
            short addressLength, nameLength;
            int phoneNumber;
        } employeeRecord;

        employeeRecord record;

    The program stores the structure, then the name, then the address. Name and address are dynamically allocated, which is why it is necessary to read the structure first to find the sizes of the name and address, allocate memory for them, then read them into that memory. For debugging purposes I have two programs at the moment: a file-writing program and a file-reading one.

    My actual problem is this: when I read a file I have written, I read in the structure, print out the phone # to make sure it works (which works fine), and then fread the name (now being able to use record.nameLength, which reports the proper value too). fread, however, does not return a usable name; it returns blank. I see two possibilities: either I haven't written the name to the file correctly, or I haven't read it in correctly.

    Here is how I write to the file, where fp is the file pointer. record.name is a proper value, and so is record.nameLength. I am writing the name including the null terminator (e.g. 'Jack\0'):

        fwrite(&record, sizeof record, 1, fp);
        fwrite(record.name, sizeof(char), record.nameLength, fp);
        fwrite(record.address, sizeof(char), record.addressLength, fp);

    And I then close the file. Here is how I read the file:

        fp = fopen("employeeRecord", "r");
        fread(&record, sizeof record, 1, fp);
        printf("Number: %d\n", record.phoneNumber);

        char *nameString = malloc(sizeof(char) * record.nameLength);
        printf("\nName Length: %d", record.nameLength);
        fread(nameString, sizeof(char), record.nameLength, fp);
        printf("\nName: %s", nameString);

    Notice there is some debug stuff in there (name length and number, both of which are correct). So I know the file opened properly, and I can use the name length fine. Why then is my output blank, or a newline, or something like that? (The output is just "Name:" with nothing after it, and the program finishes just fine.) Thanks for the help.
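
    Two things worth checking, sketched below under the assumption that this runs on Windows (where it matters most). First, open binary files in binary mode ("wb"/"rb"); with plain "r"/"w" the CRT translates line endings, which shifts every offset after the struct. Second, avoid writing the raw struct at all: it contains pointer values that mean nothing on disk. A length-prefixed layout is the usual shape (fragments reusing record and fp from the question; error checks omitted, malloc from <stdlib.h>):

        /* Write: fixed-size fields first, then the variable-length bytes.  */
        /* fp = fopen("employeeRecord", "wb");   -- note the b              */
        fwrite(&record.nameLength,    sizeof record.nameLength,    1, fp);
        fwrite(&record.addressLength, sizeof record.addressLength, 1, fp);
        fwrite(&record.phoneNumber,   sizeof record.phoneNumber,   1, fp);
        fwrite(record.name,    1, record.nameLength,    fp);
        fwrite(record.address, 1, record.addressLength, fp);

        /* Read: mirror the exact same order, with fopen(..., "rb"). */
        fread(&record.nameLength,    sizeof record.nameLength,    1, fp);
        fread(&record.addressLength, sizeof record.addressLength, 1, fp);
        fread(&record.phoneNumber,   sizeof record.phoneNumber,   1, fp);
        record.name = (char *)malloc(record.nameLength);
        fread(record.name, 1, record.nameLength, fp);

    Since you write the terminating '\0' as part of nameLength, the string read back this way is directly printable with %s.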

  • properties profile when writing Word file

    - by avani-nature
    Hi friends, I am new to PHP and I am having the following problems in my code:

    1. I am opening a Word document with a COM object and storing its content in a textarea.
    2. When the content is opened in the textarea, I edit it and save the document.
    3. After I have edited the file and saved it, if I open the Word document and look at File > Properties > Custom, the old properties have been removed. I want to retain them even after editing the Word document. Please help.

    I am using the code below:

        <?php
        $filename = 'C:/xampp/htdocs/mts/sites/default/files/a.doc';

        if (isset($_REQUEST['Save'])) {
            $somecontent = stripslashes($_POST['somecontent']);

            // Let's make sure the file exists and is writable first.
            if (is_writable($filename)) {
                if (!$handle = fopen($filename, 'w')) {
                    echo "Cannot open file ($filename)";
                    exit;
                }
                // Write $somecontent to our opened file.
                if (fwrite($handle, $somecontent) === FALSE) {
                    echo "Cannot write to file ($filename)";
                    exit;
                }
                echo "Success, wrote ($somecontent) to file ($filename) <a href=" . $_SERVER['PHP_SELF'] . "> - Continue - ";
                fclose($handle);
            } else {
                echo "The file $filename is not writable";
            }
        } else {
            // Get the contents of the file into a string.
            $handle = fopen($filename, "r");
            $somecontent = fread($handle, filesize($filename));

            $word = new COM("word.application") or die("Could not initialise MS Word object.");
            $word->Documents->Open(realpath("$filename"));

            // Extract content.
            $somecontent = (string) $word->ActiveDocument->Content;
            $word->ActiveDocument->Close(false);
            $word->Quit();
            $word = null;
            unset($word);
            fclose($handle);
        }
        ?>
        <h6>Edit file --------> <? $filenam = explode("/", $filename); echo $filenam[7]; ?></h6>
        <form name="form1" method="post" action="">
            <p>
                <textarea name="somecontent" cols="100" rows="20"><? echo $somecontent; ?></textarea>
            </p>
            <div style='padding-left:250px;'><input type="submit" name="Save" value="Save"></div>
        </form>

  • Extract a section of a tgz file

    - by TRiG
    I have a 28.5 GB .tgz file which was created on the command line of a Linux computer, compressing one folder and all its many, many subfolders. I now want to extract a single sub-sub-folder from that .tgz file, using 7-Zip on Windows Vista, and I can't see a way to do it. Opening the .tgz file in 7-Zip just shows the .tar file inside it, and there doesn't seem to be any way to browse that .tar file and extract the section I want. I assume there is a way to do this, but I can't see it. Simply double-clicking on the .tar file brings up a progress bar which runs slowly until my computer complains it's running out of space; I imagine it's trying to extract the whole thing. Searching for "extract section of tgz" and "extract tgz subfolder" and similar found me a way to do it on the Linux command line, but no obvious way to do it on Windows. (Most results found were about extracting into a subfolder, not extracting a subfolder out of the archive.)
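
    If a small program is an acceptable fallback, libarchive (which builds on Windows) can stream the .tgz once and extract only the entries under a given prefix, without unpacking the inner .tar to disk first. A sketch assuming libarchive 3; the archive name and folder prefix are made up for illustration:

        #include <archive.h>
        #include <archive_entry.h>
        #include <string.h>

        int main() {
            const char *prefix = "top/sub/subsub/";   // hypothetical target folder
            struct archive *a = archive_read_new();
            archive_read_support_filter_gzip(a);      // the .tgz layer
            archive_read_support_format_tar(a);       // the .tar inside
            if (archive_read_open_filename(a, "big.tgz", 10240) != ARCHIVE_OK)
                return 1;

            struct archive_entry *entry;
            while (archive_read_next_header(a, &entry) == ARCHIVE_OK) {
                if (strncmp(archive_entry_pathname(entry), prefix, strlen(prefix)) == 0)
                    archive_read_extract(a, entry, ARCHIVE_EXTRACT_TIME);
                else
                    archive_read_data_skip(a);        // skipped entries never hit disk
            }
            archive_read_free(a);
            return 0;
        }

    The stream still has to be decompressed end to end (gzip has no random access), but nothing outside the prefix is written out, so the disk-space problem goes away.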

  • get renamed file names of multiple upload form [js array] in codeigniter

    - by artmania
    Hi friends, I use CodeIgniter. I have a multiple-image upload form. The code below works well for uploading, but I also need to save the file names to the database. How can I get the names here? I spent hours and hours on it but could not sort it out. I appreciate any help!

    uploadform.php:

        echo form_open_multipart('gallery/upload');
        <input type="file" name="photo" size="50" />
        <input type="file" name="thumb" size="50" />
        <input type="submit" value="Upload" />
        </form>

    I have a controller between the form view and the model that loads the model (of course); I didn't post it here because there is no need.

    gallery_model.php:

        function multiple_upload($upload_dir = 'uploads/', $config = array())
        {
            /* Upload */
            $CI =& get_instance();
            $files = array();

            if (empty($config)) {
                $config['upload_path'] = realpath($upload_dir);
                $config['allowed_types'] = 'gif|jpg|jpeg|jpe|png';
                $config['max_size'] = '2048';
            }

            $CI->load->library('upload', $config);
            $errors = FALSE;

            foreach ($_FILES as $key => $value) {
                if (!empty($value['name'])) {
                    if (!$CI->upload->do_upload($key)) {
                        // ERR_OPEN and ERR_CLOSE are error delimiters defined in a config file
                        $data['upload_message'] = $CI->upload->display_errors(ERR_OPEN, ERR_CLOSE);
                        $CI->load->vars($data);
                        $errors = TRUE;
                    } else {
                        // Build a file array from all uploaded files
                        $files[] = $CI->upload->data();
                    }
                }
            }

            // There were errors; we have to delete the uploaded files
            if ($errors) {
                foreach ($files as $key => $file) {
                    @unlink($file['full_path']);
                }
            } elseif (empty($files) AND empty($data['upload_message'])) {
                $CI->lang->load('upload');
                $data['upload_message'] = ERR_OPEN . $CI->lang->line('upload_no_file_selected') . ERR_CLOSE;
                $CI->load->vars($data);
            } else {
                return $files;
            }

            /* ------------------------------- Insert to database */
            // The problem is here: I need the file names to add to the DB.
            // If a file with the same name already exists in the folder, the
            // upload renames the file itself, so in that case I need the
            // renamed file name.
        }

  • How to repair a corrupted Excel file

    - by SjoerdV
    I'm trying to restore some files for a friend who messed up a USB stick by pulling it out of the computer too soon. By searching invisible partitions I was able to restore the files. All files are intact except one Excel file, which seems to be corrupted. I've googled ways to repair an Excel file, because the default repair options in Excel 2003-2010 do not work for this file. I've come across numerous third-party apps, one of which seems able to do the job: Recovery Toolbox for Excel. Of course, this and all the other apps I found require a purchase to actually repair the file. It seems a bit silly to pay 30 dollars for a one-time thing, and something that is not for myself, so I would like to keep that as a last-resort option. Is there anything I can try to fix this? I've attached a screenshot of how the file looks when I open it on my OS X machine after converting it to an HTML file (to view the code), in case it is a character-set problem.

  • Setting up ssh config file with id_rsa through tunnel

    - by Rubens
    I've been struggling to set up a valid configuration to open a connection to a third machine, passing through a second one, using an id_rsa file (which requests a password) to connect to that third machine. I've asked this question in another forum, but I've received no answer that could be considered very helpful. The problem, better described, goes as follows:

        Local machine:        user1@localhost
        Intermediary machine: user1@inter
        Remote target:        user2@final

    I'm able to make the entire connection using a pseudo-tty:

        ssh -t inter ssh user2@final

    (this asks me for the password for the id_rsa file I have on machine "inter"). However, to speed things up, I'd like to set up my .ssh/config file so that I can connect to machine "final" simply using:

        ssh final

    What I've got so far -- which does not work -- is, in my .ssh/config file:

        Host inter
            User user1
            HostName inter.com
            IdentityFile ~/.ssh/id_rsa

        Host final
            User user2
            HostName final.com
            IdentityFile ~/.ssh/id_rsa_2
            ProxyCommand ssh inter nc %h %p

    The id_rsa file is used to connect to the middle machine (this requires no password typing), and the id_rsa_2 file is used to connect to machine "final" (this one requests a password). I've tried mixing in some LocalForward and/or RemoteForward fields, and putting the id_rsa files on both the first and second machines, but I could not seem to succeed with any configuration whatsoever. Hope somebody can help me here! Regards!

    P.S.: the thread where I've tried to get some help: http://www.linuxquestions.org/questions/linux-general-1/proxycommand-on-ssh-config-file-4175433750/
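
    One thing worth trying, assuming the local OpenSSH is 5.4 or newer: let ssh itself do the forwarding with -W instead of relying on nc being installed on "inter" (a missing nc is a common reason this exact setup fails):

        Host final
            User user2
            HostName final.com
            IdentityFile ~/.ssh/id_rsa_2
            ProxyCommand ssh -W %h:%p inter

    Key placement matters here too: with ProxyCommand, id_rsa_2 is offered by your local ssh over the tunnelled connection, so it has to exist (with its passphrase known) on the local machine, not on inter.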

  • Recover file from NTFS after it was formatted twice

    - by Phil
    I'm running Linux Mint and have a 2TB drive that I formatted as NTFS. I copied ~120GB of files from another computer to the 2TB drive, removing the files from the other computer as I did so. When they were all on the 2TB drive, I zipped them up as the file "Gold.tar.gz". Then I reformatted the 2TB drive as ext3 in a moment of absolute stupidity. I formatted the 2TB drive back to NTFS, but of course everything is gone. Here is what I have tried:

        - TestDisk: won't find any lost partitions or undelete files, just the current empty one.
        - PhotoRec: seems to only find some broken text files and misidentify their extensions. It never finds the hundreds of AVI files I had (before the 120GB copy, I already had 750GB on the drive, full of AVI files) or anything else that would show me it's working properly.
        - Using dd, I recovered the first 512MB of the drive and went hunting through it. I found all of the files as MFT entries, including the file "Gold.tar.gz" in a 2048-byte MFT record.

    I'm looking now for some way of either (1) telling PhotoRec to look at that record, or (2) analyzing the MFT record myself to discover the sectors holding the data; I can piece it all together using dd and join the binary output if it's fragmented. One last thing: from the moment I got this drive a few days ago until the incident, only file copies were made to it, with no deletes. I formatted as NTFS, then copied thousands of files, then made a tar.gz, then reformatted to ext3, then reformatted to NTFS again. I'm hoping that the size of the drive and the fact that there was no file modification or deletion happening make for minimal file fragmentation.
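
    For option (2): in an MFT record, a large file's $DATA attribute is non-resident and stores a "runlist" of (cluster count, cluster offset) pairs. A sketch of a decoder, assuming the standard NTFS runlist encoding (low nibble of each header byte gives the size of the length field, high nibble the size of the signed LCN delta; fields of up to 7 bytes handled here). Multiply each LCN by the cluster size from the boot sector to get a byte offset to feed dd:

        #include <stdint.h>
        #include <stdio.h>

        // Decode an NTFS runlist starting at `runs`; prints one line per run.
        void decode_runs(const unsigned char *runs, size_t max)
        {
            int64_t lcn = 0;   // current logical cluster number (deltas accumulate)
            size_t i = 0;
            while (i < max && runs[i] != 0) {
                int len_sz = runs[i] & 0x0F;         // bytes holding the run length
                int off_sz = (runs[i] >> 4) & 0x0F;  // bytes holding the LCN delta
                i++;
                uint64_t length = 0;
                for (int b = 0; b < len_sz; b++)
                    length |= (uint64_t)runs[i + b] << (8 * b);
                i += len_sz;
                int64_t delta = 0;
                for (int b = 0; b < off_sz; b++)
                    delta |= (int64_t)runs[i + b] << (8 * b);
                if (off_sz && (runs[i + off_sz - 1] & 0x80))  // sign-extend
                    delta -= (int64_t)1 << (8 * off_sz);
                i += off_sz;
                lcn += delta;
                printf("%llu clusters starting at LCN %lld\n",
                       (unsigned long long)length, (long long)lcn);
            }
        }

    The runlist begins at the offset stored in the attribute header (a 2-byte field at offset 0x20 of a non-resident attribute); sparse or compressed files complicate matters, but a freshly written tar.gz on a barely used volume is very likely a handful of plain runs.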
