Search Results

Search found 77950 results on 3118 pages for 'large file upload'.


  • How do private BitTorrent trackers monitor how much users upload/download?

    - by Jon-Eric
    From Wikipedia: "Most private trackers monitor how much users upload or download, and in most situations, enforce a minimum upload-to-download ratio." How exactly can a tracker figure out how much data was uploaded and downloaded by each user? My understanding is that a BitTorrent tracker is merely a registry of the users currently downloading/seeding, and that peers, once connected, transfer data directly. So I wouldn't think the tracker would know anything about the amount of data transferred, much less where it came from.
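
    For what it's worth, the tracker does not observe the transfers at all: each client self-reports its running totals in the periodic announce request it sends to the tracker (the uploaded, downloaded and left query parameters of the HTTP announce), and private trackers tie those numbers to a user through a personal passkey embedded in the announce URL. A minimal C# sketch of what such a request looks like; the URL, passkey and counters are all made-up values:

        using System;

        class AnnounceSketch
        {
            static void Main()
            {
                // Private trackers hand each user an announce URL containing a
                // personal passkey; the byte counters below are self-reported.
                string announce = "https://tracker.example.org/a1b2c3d4/announce"; // hypothetical
                long uploaded = 52428800;     // bytes sent to peers since the "started" event
                long downloaded = 104857600;  // bytes received from peers
                long left = 0;                // bytes still missing from the torrent

                string url = announce
                    + "?info_hash=<url-encoded 20-byte SHA-1 of the info dict>"
                    + "&peer_id=-XX0001-abcdefghijkl" // made-up client id
                    + "&port=6881"
                    + "&uploaded=" + uploaded
                    + "&downloaded=" + downloaded
                    + "&left=" + left;

                // The tracker credits these totals to the passkey's account on every announce.
                Console.WriteLine(url);
            }
        }

    Since the numbers are client-reported, ratio enforcement ultimately trusts the client, which is one reason private trackers whitelist specific client versions.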

    Read the article

  • FTP upload for a PHP file hosting site, how to connect ProFTPD to a MySQL database?

    - by Igor
    I'm running a file upload service, and users have requested an FTP upload feature. Basically, I need to allow users to log in, via FTP, to an FTP daemon (say, ProFTPD), and they should be able to use their username and password (stored in a MySQL database) to log in there. After logging in, I'll take care of the files with a cron job. I'm stuck on how to make ProFTPD get users and passwords from my database... any ideas?
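
    In case it helps the next reader: ProFTPD's mod_sql module (with the MySQL backend) is the usual route for this. A minimal proftpd.conf sketch, assuming a users table with username, passwd, uid, gid, homedir and shell columns; all table, column and credential names here are placeholders:

        # mod_sql and mod_sql_mysql must be built in or loaded as modules
        SQLBackend        mysql
        # database@host, then the MySQL user and password ProFTPD connects with
        SQLConnectInfo    ftpdb@localhost proftpd_db_user proftpd_db_pass
        # how the passwd column is hashed; Plaintext also exists but avoid it
        SQLAuthTypes      OpenSSL Crypt
        SQLAuthenticate   users
        # table name, then: username, password, uid, gid, home directory, shell columns
        SQLUserInfo       users username passwd uid gid homedir shell
        # FTP-only accounts usually have no real shell
        RequireValidShell off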

    Read the article

  • NSIS takes ownership of IIS system files

    - by Lucas
    I recently encountered an issue with NSIS that I believe is related to an interaction with UAC, but I am at a loss to explain it and I do not know how to prevent it in the future. I have an installer that creates and removes IIS virtual directories using the NsisIIS plugin. The installer appeared to work correctly on my Windows 7 workstation. When the installer was run on a Windows 2008 R2 server it installed properly, but the uninstaller removed all of the virtual directories and put IIS in an unusable state, to the point that I had to remove the Default Web Site and re-add it.

    What I eventually found was that all of the IIS configuration files under C:\Windows\System32\inetsrv\config had a lock icon on them. Some investigation seems to indicate that this means a user account has taken ownership of the file; however, all the files listed SYSTEM as the file owner. I did check a different server that I have not run the installer on, and it does not have the lock icon applied to the IIS files. I have also seen the same lock icon appear on other files that the NSIS installer creates. For instance, I have a Web.Config.tpl file that is processed using the NSIS ReplaceInFile macro, which also appears with the lock icon after the installer finishes. After I explicitly grant another user account access to the file, the lock icon goes away. I run the installer under the local Administrator account on the 2008 R2 server, so I do not get the UAC prompt.

    Here is the relevant code from the install.nsi file:

        RequestExecutionLevel admin

        Section "Application" APP_SECTION
          SectionIn RO
          Call InstallApp
        SectionEnd

        Section "un.Uninstaller Section"
          Delete "$PROGRAMFILES\${PROGRAMFILESDIR}\Uninstall.exe"
          Call un.InstallApp
        SectionEnd

        Function InstallApp
          File /oname=Web.Config Web.Config.tpl
          !insertmacro ReplaceInFile Web.Config %CONNECTION_STRING% $CONNECTION_STRING
        FunctionEnd

        Function un.InstallApp
          ReadRegStr $0 HKLM "Software\${REGKEY}" "VirtualDir"
          NsisIIS::DeleteVDir "$0"
          Pop $0
        FunctionEnd

    I have three questions stemming from this incident: How did this happen? How can I fix my installer to prevent it from happening again? And how can I repair the permissions on the IIS config files?

    Read the article

  • C# FileStream position is off after calling ReadLine()

    - by Cristi Diaconescu
    I'm trying to read a (small-ish) file in chunks of a few lines at a time, and I need to return to the beginning of particular chunks. The problem is, after the very first call to streamReader.ReadLine(), the streamReader.BaseStream.Position property is set to the end of the file! Now I assume some caching is done behind the scenes, but I was expecting this property to reflect the number of bytes I had consumed from the file. For instance, calling ReadLine() again will (naturally) return the next line in the file, which does not start at the position previously reported by streamReader.BaseStream.Position.

    My question is: how can I find the actual position where the 1st line ends, so I can return there later? I can only think of doing the bookkeeping manually, by adding the lengths of the strings returned by ReadLine(), but even here there are a couple of caveats: ReadLine() strips the newline character(s), which may have a variable length (is it '\n'? is it "\r\n"? etc.), and I'm not sure this would work with variable-length character encodings. So right now it seems like my only option is to rethink how I parse the file, so I don't have to rewind.

    If it helps, I open my file like this:

        using (var reader = new StreamReader(
            new FileStream(m_path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite)))
        { ... }

    Any suggestions?
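
    One workaround, sketched here under the assumption that the file is ASCII or UTF-8 (where the byte 0x0A never occurs inside a multi-byte sequence): index the byte offsets of the line starts on the raw FileStream first, then seek back to any recorded offset later. A hypothetical helper, not tuned for speed:

        using System;
        using System.Collections.Generic;
        using System.IO;

        static class LineIndex
        {
            // Returns the byte offset at which each line begins.
            // Valid for ASCII/UTF-8; not for UTF-16, where '\n' is two bytes.
            public static List<long> Build(string path)
            {
                var offsets = new List<long> { 0L };
                using (var fs = new FileStream(path, FileMode.Open,
                                               FileAccess.Read, FileShare.ReadWrite))
                {
                    int b;
                    while ((b = fs.ReadByte()) != -1)
                    {
                        if (b == '\n')
                            offsets.Add(fs.Position); // the next line starts right after '\n'
                    }
                }
                return offsets;
            }
        }

    With the offsets in hand, seeking the FileStream to offsets[n] and wrapping it in a fresh StreamReader re-reads chunk n without fighting the reader's internal buffer.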

    Read the article

  • Async file uploads in Firefox reset on any DOM change

    - by Vibhu
    I'm pretty sure this is a Firefox or Flash-related bug, but I just want to check if anyone has run into this problem or knows how to fix it. Basically, we have a multi-file upload widget for our highly dynamic web app (think Gmail). We've tried both Uploadify for jQuery and YUI Uploader. We've also tried taking those out of our app interface and putting them in an iframe. What happens is that on any DOM manipulation, even if the uploader is in an iframe (be it a tab change in our web app that covers the iframe temporarily, a block being shown or hidden, etc.), the uploader stops its current upload. In the case of YUI Uploader, it fires the "contentReady" event again. This ONLY happens in Firefox; IE and Chrome are fine. In case you are wondering, we really don't have any custom needs here. We just need multi-file upload support, and we need to give people free rein to tab around in our interface while an upload is in progress. It seems like Yahoo! and Gmail have both solved this problem. How? What are we doing wrong?

    Read the article

  • Setup.exe files downloading without cab files over poor connections

    - by Colin
    We have customers who are trying to download a setup.exe file over mobile connections that appear to be very slow. They have reported that when they click on the downloaded setup.exe, the install wizard starts up, but part way through the wizard they get an error message indicating that a cab file is corrupt or missing. They couriered a problem tablet to us, and we downloaded the file without issue, but I could replicate the failure by using https to download the file (https is normally used to access the rest of the site, although it is not necessary for the download). When I did this, the downloaded file was 2.8 MB; it should be 8 MB. I don't think https is the root cause of the problem, because I can see the download link in the browser history using http, so I know the customer tried to download over http. I think the issue is that the poor connection is cutting the download short, but the browser is acting as if it is complete. Is there a way to ensure the file is downloaded fully, or not at all? And why does the browser not indicate that the download is incomplete?

    Read the article

  • "The binary you uploaded was invalid. the file was not a valid zip file" Error message uploading app

    - by Keith Loughnane
    Hi, I'm trying to upload an iPhone app binary to iTunes Connect and keep getting the following error message: "The binary you uploaded was invalid. The file was not a valid zip file." I had an app upload OK recently, but this app is having problems. So I carefully went through the following steps trying to make sure everything was OK. Any help is appreciated.

    The steps:

    1. Renamed the project (Project > Rename..., enter name into "Rename project to:") to the release name, making sure the name has no spaces.
    2. Cleaned the project.
    3. Made sure references in build settings reflect the new app name.
    4. Created a new app ID matching the project name in the iPhone Provisioning Portal.
    5. Destroyed the old developer and distributor provisioning profiles in the Provisioning Portal, in Xcode and on the iPhone.
    6. Created a new development provisioning profile using the new app name.
    7. Installed the development provisioning profile into Xcode.
    8. Built (Release) for iPhone OS 3.1.3 (the highest my phone will upgrade to; I'm assuming the current released version).
    9. It builds, installs and runs on the actual iPhone, which to me implies the app and developer IDs are OK.
    10. Created a distribution provisioning profile using the existing distributor ID.
    11. Installed the distribution profile into Xcode.
    12. Cleaned.
    13. Checked that the "Code Signing Identity" and "Any iPhone OS Device" lines in build settings are set to the distributor ID.
    14. Built for release for OS 3.1.3.
    15. Checked the build results to make sure the code is signed with the distributor profile.
    16. Revealed the .app file and compressed it (alt-click, Compress "appName.app").
    17. Uploaded to iTunes Connect. Gives "The binary you uploaded was invalid. The file was not a valid zip file."

    Read the article

  • In Java, how do I iterate through lines in a text file from back to front?

    - by rogue780
    Basically I need to take a text file such as:

        Fred
        Bernie
        Henry

    and be able to read the lines from the file in the order:

        Henry
        Bernie
        Fred

    The actual file I'm reading from is 30 MB, and it would be a less-than-perfect solution to read the whole file, split it into an array and reverse the array before going from there. It takes way too long. My specific goal is to find the first occurrence (searching from the end) of a string (in this case, "InitGame") and then return the position of the beginning of that line.

    I did something like this in Python before. My method was to seek to the end of the file minus 1024, read lines until I got to the end, then seek another 1024 back from my previous starting point; using tell(), I would stop when I reached the previous starting point. So I would read those blocks backwards from the end of the file until I found the text I was looking for. So far, I'm having a heck of a time doing this in Java. Any help would be greatly appreciated, and if you live near Baltimore it may even end up with you getting some fresh-baked cookies. Thanks!
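
    For reference, here is a sketch of exactly that block-wise backward scan, shown in C# (Java's RandomAccessFile exposes the same seek/read primitives, so the arithmetic carries over one-to-one). It assumes the marker is plain single-byte ASCII text and returns the offset of the last occurrence in the file, i.e. the first one found when walking backwards; scanning back from that offset to the preceding '\n' then yields the line start:

        using System;
        using System.IO;
        using System.Text;

        static class BackwardSearch
        {
            public static long FindLastOccurrence(string path, string marker)
            {
                const int BlockSize = 1024;
                byte[] m = Encoding.ASCII.GetBytes(marker);
                using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
                {
                    // Each window overlaps the previous one by marker.Length - 1
                    // bytes so a match straddling a block boundary is not missed.
                    var buf = new byte[BlockSize + m.Length - 1];
                    long end = fs.Length;
                    while (end > 0)
                    {
                        long start = Math.Max(0, end - BlockSize);
                        int len = (int)Math.Min(buf.Length, fs.Length - start);
                        fs.Position = start;
                        int read = 0;
                        while (read < len)
                            read += fs.Read(buf, read, len - read);

                        // Search this window from back to front.
                        for (int i = len - m.Length; i >= 0; i--)
                        {
                            int j = 0;
                            while (j < m.Length && buf[i + j] == m[j]) j++;
                            if (j == m.Length)
                                return start + i; // byte offset of the match
                        }
                        end = start;
                    }
                }
                return -1; // not found
            }
        }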

    Read the article

  • Getting Error When Opening Files

    - by Nathan Campos
    I'm developing a simple text editor to better understand the PocketC language, and I've done this:

        #include "\\Storage Card\\My Documents\\PocketC\\Parrot\\defines.pc"

        int filehandle;
        int file_len;
        string file_mode;

        initComponents() {
            createctrl("EDIT", "test", 2, 1, 0, 24, 70, 25, TEXTBOX);
            wndshow(TEXTBOX, SW_SHOW);
            guigetfocus();
        }

        main() {
            filehandle = fileopen(OpenFileDlg("Plain Text Files (*.txt)|*.txt; All Files (*.*)|*.*"), 0, FILE_READWRITE);
            file_len = filegetlen(filehandle);
            if(filehandle = -1) {
                MessageBox("File Could Not Be Found!", "Error", 3, 1);
            }
            initComponents();
            editset(TEXTBOX, fileread(filehandle, file_len));
        }

    Then I tried to run the application: it opens the Open File dialog, I select a file (which is at \test.txt) that I created with Notepad, and then I get the MessageBox saying that the file wasn't found. I want to know why I'm getting this if the file is all correct. PS: When I dismiss the MessageBox, I see that the TextBox is displaying where the file is (I've tested with many other files, and I got the error and this same behavior with all of them).
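
    One detail worth flagging in the listing above (an observation, assuming PocketC follows C's assignment semantics here): if(filehandle = -1) assigns -1 to filehandle rather than comparing, so the condition is always true, the error box shows even when the open succeeded, and the now-clobbered handle is what fileread later receives. The comparison form would be:

        if (filehandle == -1) {
            MessageBox("File Could Not Be Found!", "Error", 3, 1);
        }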

    Read the article

  • C++: Is there any good way to read/write without specifically stating character type in function names?

    - by Mark L.
    I'm having a problem getting a program to read from a file based on a template. For example:

        bool parse(basic_ifstream<T> &file)
        {
            T ch;
            locale loc = file.getloc();
            basic_string<T> buf;
            file.unsetf(ios_base::skipws);
            if (file.is_open())
            {
                while (file >> ch)
                {
                    if (isalnum(ch, loc))
                    {
                        buf += ch;
                    }
                    else if (!buf.empty())
                    {
                        addWord(buf);
                        buf.clear();
                    }
                }
                if (!buf.empty())
                {
                    addWord(buf);
                }
                return true;
            }
            return false;
        }

    This works when I instantiate the class with <char>, but has problems when I use <wchar_t> (clearly). Outside of the class, I'm using:

        for (iter = mp.begin(); iter != mp.end(); ++iter)
        {
            cout << iter->first << setw(textwidth - iter->first.length() + 1);
            cout << " " << iter->second << endl;
        }

    to write all of the information from this data structure (it's a map<basic_string<T>, int>), and as predicted, cout explodes if iter->first isn't a char array. I've looked online and the consensus is to use wcout, but unfortunately, since this program requires that the template parameter can be changed at compile time (<char> vs. <wchar_t>), I'm not sure how I could get away with simply choosing cout or wcout. That is, unless there is a way to read/write wide characters without changing lots of code. If this explanation sounds awkwardly complicated, let me know and I'll address it as best I can.

    Read the article

  • warning: assignment makes pointer from integer without a cast

    - by FILIaS
    I'm new to programming C with arrays and files. I'm just trying to run the following code, but I get warnings like this:

        warning: assignment makes pointer from integer without a cast

    Any help? It might be silly... but I can't find what's wrong.

        #include <stdio.h>

        FILE *fp;
        FILE *cw;
        char filename_game[40], filename_words[40];

        int main()
        {
            while (1)
            {
                /* Input filenames. */
                printf("\n Enter the name of the file with the cryptwords array: \n");
                gets(filename_game);
                printf("\n Give the name of the file with crypted words:\n");
                gets(filename_words);

                /* Try to open the file with the game. */
                if (fp = fopen("crypt.txt", "r") != NULL)
                {
                    printf("\n Successful opening %s \n", filename_game);
                    fclose(fp);
                    puts("\n Enter x to exit, any other to continue! \n ");
                    if ((getc(stdin)) == 'x') break;
                    else continue;
                }
                else
                {
                    fprintf(stderr, "ERROR!%s \n", filename_game);
                    puts("\n Enter x to exit, any other to continue! \n");
                    if (getc(stdin) == 'x') break;
                    else continue;
                }

                /* Try to open the file with the names. */
                if (cw = fopen("words.txt", "r") != NULL)
                {
                    printf("\n Successful opening %s \n", filename_words);
                    fclose(cw);
                    puts("\n Enter x to exit, any other to continue \n ");
                    if ((getc(stdin)) == 'x') break;
                    else continue;
                }
                else
                {
                    fprintf(stderr, "ERROR!%s \n", filename_words);
                    puts("\n Enter x to exit, any other to continue! \n");
                    if (getc(stdin) == 'x') break;
                    else continue;
                }
            }
            return 0;
        }
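
    The warning is coming from the two fopen tests: in C, != binds more tightly than =, so fp = fopen("crypt.txt", "r") != NULL parses as fp = (fopen(...) != NULL), assigning the int result of the comparison to a FILE pointer (and losing the handle fopen returned). Parenthesizing the assignment is the usual fix:

        if ((fp = fopen("crypt.txt", "r")) != NULL)

    (Separately, gets has no bounds check and is worth replacing with fgets.)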

    Read the article

  • Multi-threaded .NET application blocks during file I/O when protected by Themida

    - by Erik Jensen
    As the title says, I have a .NET application (the GUI) that uses multiple threads to perform separate file I/O, and I notice that the threads occasionally block when the application is protected by Themida. One thread is devoted to reading from a serial COM port and another thread is devoted to copying files. What I experience is that occasionally, when the file copy thread encounters a network delay, it blocks the other thread that is reading from the serial port. In addition to a slow network (which can be transient), I can cause the problem to happen more frequently by making a PathFileExists call to a bad path, e.g.:

        PathFileExists("\\\\BadPath\\file.txt");

    The COM port reading function will block during the call to ReadFile. This only happens when the application is protected by Themida. I have tried under WinXP, Win7, and Server 2012. In a streamlined test project, if I replace the .NET application with an MFC unmanaged application and still utilize the same threads, I see no issue even when protected with Themida.

    I have contacted Oreans support and here is their response:

        The way that a .NET application is protected is very different from a native application. To protect a .NET application, we need to hook most of the file access APIs in order to "cheat" the .NET Framework that the application is protected. I guess that those special hooks (on CreateFile, ReadFile...) are delaying a bit the execution in your application and the problem appears. We did a test making those hooks as light as possible (with minimum code on them) but the problem still appeared in your application. The rest of software protectors that we tried (like Enigma, Molebox...) also use a similar hooking approach as it's the only way to make the .NET packed file work. If those hooks are not present, the .NET Framework will abort execution as it will see that the original file was tampered with (due to all Microsoft checks on .NET files). Those hooks are not present in a native application; that's why it should be working fine in your native application.

    Oreans support tried other software protectors such as Enigma Protector, Enigma VirtualBox, and Molebox, and all exhibit the exact same problem. What I have found as a workaround is to separate out the file copy logic (where the file-exists call is being made) into a completely separate process. I have experimented with converting the thread functions from unmanaged C++ to VB.NET equivalents (PathFileExists to System.IO.File.Exists, and CreateFile/ReadFile to System.IO.Ports.SerialPort.Open/Read) and still see the serial port read blocked when the file check or copy call is delayed. I have also tried setting the ReadFile to work asynchronously, but that had no effect. I believe I am dealing with some low-level Windows layer that, no matter the language, exhibits a block on a shared resource, and only when the application is executing as a single .NET process protected by Themida, which evidently installs some hooks to allow .NET execution.

    At this time converting the entire application away from .NET is not an option, nor is separating out the file copy logic into a separate task. I am wondering if anyone else has more knowledge of how a file operation can block another thread reading from a system port. I have included example applications that show the problem:

        https://db.tt/cNMYfEIg - VB.NET
        https://db.tt/Y2lnTqw7 - MFC

    They are Visual Studio 2010 solutions. When running the Themida-protected exe, you can see that when the FileThread counter pauses (executing the File.Exists call), the ReadThread counter also pauses. When running the non-protected Visual Studio output exe, the ReadThread counter does not pause, which is how we expect it to function. Thanks!

    Read the article

  • Emacs: Often switching between Emacs and my IDE's editor, how to automatically 'synch' the files?

    - by WizardOfOdds
    I very often need to do some Emacs magic on some files, and I need to go back and forth between my IDE (IntelliJ IDEA) and Emacs. When a change is made under Emacs (and after I've saved the file) and I go back to IntelliJ, the change appears immediately (if I recall correctly, I configured IntelliJ to "always reload file when a modification is detected on disk" or something like that). I don't even need to reload: as soon as IntelliJ IDEA gains focus, it instantly reloads the file (and I hence have immediate access to the modifications I made from Emacs). So far, so very good.

    However, the other way round it doesn't work yet. Can I configure Emacs so that every time a file is changed on disk it reloads it? Or make Emacs, every time it gains focus, verify whether any currently opened file has been modified on disk? I know that if I start modifying the buffer under Emacs it will instantly warn me that the file has been modified, but I'd rather have it happen immediately (for example, if I used my IDE to make some big change, when I come back to Emacs what I see may no longer be what the file contains, and that's a bit weird).
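
    For the disk-to-Emacs direction there is a built-in answer, assuming a reasonably recent Emacs: auto-revert-mode, which polls files and reverts any unmodified buffer whose file changed on disk. A minimal init-file sketch:

        ;; Optional: poll more often than the default 5 seconds
        ;; (set before enabling so the timer picks it up).
        (setq auto-revert-interval 1)
        ;; Reload any buffer whose file changes on disk
        ;; (buffers with unsaved edits are left alone).
        (global-auto-revert-mode 1)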

    Read the article

  • Segmentation fault

    - by darkie15
        #include <stdio.h>
        #include <zlib.h>
        #include <unistd.h>
        #include <string.h>

        int main(int argc, char *argv[])
        {
            char *path = NULL;
            size_t size;
            int index;

            printf("\nArgument count is = %d", argc);
            printf("\nThe 0th argument to the file is %s", argv[0]);
            path = getcwd(path, size);
            printf("\nThe current working directory is = %s", path);

            if (argc <= 1)
            {
                printf("\nUsage: ./output filename1 filename2 ...");
            }
            else if (argc > 1)
            {
                for (index = 1; index <= argc; index++)
                {
                    printf("\n File name entered is = %s", argv[index]);
                    strcat(path, argv[index]);
                    printf("\n The complete path of the file name is = %s", path);
                }
            }
            return 0;
        }

    For the above code, here is the output that I get when running it:

        $ ./output test.txt
        Argument count is = 2
        The 0th argument to the file is ./output
        The current working directory is = /home/welcomeuser
        File name entered is = test.txt
        The complete path of the file name is = /home/welcomeusertest.txt
        Segmentation fault (core dumped)

    Can anyone please help me understand why I am getting a core dumped error? Regards, darkie
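
    Two things in the listing above can each produce exactly this crash, so flagging both: the loop condition index <= argc walks one slot past the last argument (argv[argc] is the NULL terminator, so the printf and strcat calls end up dereferencing NULL), and strcat appends into the getcwd buffer with no guarantee of spare room (size is also used uninitialized). The loop bound alone explains a core dump right after the last file name is processed; the corrected condition would be:

        for (index = 1; index < argc; index++)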

    Read the article

  • What is the fastest way for reading huge files in Delphi?

    - by dummzeuch
    My program needs to read chunks from a huge binary file with random access. I have got a list of offsets and lengths which may have several thousand entries. The user selects an entry, and the program seeks to the offset and reads length bytes. The program internally uses a TMemoryStream to store and process the chunks read from the file. Reading the data is done via a TFileStream like this:

        FileStream.Position := Offset;
        MemoryStream.CopyFrom(FileStream, Size);

    This works fine, but unfortunately it becomes increasingly slower as the files get larger. The file size starts at a few megabytes but frequently reaches several tens of gigabytes. The chunks read are around 100 KB in size. The file's content is only read by my program, and it is the only program accessing the file at the time. Also, the files are stored locally, so this is not a network issue. I am using Delphi 2007 on a Windows XP box. What can I do to speed up this file access?

    Read the article

  • S3 file uploading from Mac app through PHP?

    - by Ilija Tovilo
    I have asked this question before, but it was deleted due to too little information, so I'll try to be more concrete this time.

    I have an Objective-C Mac application which should allow users to upload files to S3 storage. The S3 storage is mine; the users don't have an Amazon account. Until now, the files were uploaded directly to the Amazon servers. After thinking some more about it, that wasn't really a great design, regarding security and flexibility, so I want to add a server in between. The user should authenticate with my server, the server would open a session if the authentication was successful, and the file sharing could begin.

    Now my question. I want to upload the files to S3. One option would be to make a POST request to my server and wait until it receives the whole file. The problems here are that there would be a delay while the file is uploaded from my server to the S3 servers, and it would double the uploading time. Best would be if I could validate the request and then redirect it, so the client uploads directly to S3 storage. I'm not sure if this is possible somehow. Having the client upload directly to S3 on its own doesn't seem very smart, and after looking into other apps like Droplr and Dropmark, it looks like they don't do this. (Btw, I checked this using Little Snitch: they have their API on their own web server, and that's it.) Could someone clear things up for me?

    EDIT: How should I transmit my files to S3? Is there a way to "forward" them, or do I have to upload them to my server and then upload them from there to S3? Like I said, other apps can do this efficiently and without communicating with S3 directly.
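
    One pattern that fits what's described here (and lets the bytes bypass the middle server entirely): the server authenticates the user and returns a short-lived pre-signed S3 URL; the client then PUTs the file straight to S3, so nothing is uploaded twice. A server-side sketch of generating such a URL with the AWS SDK for .NET; the bucket, key and expiry are placeholders, and the same facility exists in the PHP and other AWS SDKs:

        using System;
        using Amazon.S3;
        using Amazon.S3.Model;

        class Presign
        {
            static void Main()
            {
                // Credentials come from the environment / instance profile.
                var s3 = new AmazonS3Client();
                var request = new GetPreSignedUrlRequest
                {
                    BucketName = "my-upload-bucket",     // placeholder
                    Key = "uploads/user42/file.bin",     // placeholder
                    Verb = HttpVerb.PUT,
                    Expires = DateTime.UtcNow.AddMinutes(15)
                };
                // Hand this URL to the authenticated client; an HTTP PUT of the
                // file body to it uploads directly to S3.
                Console.WriteLine(s3.GetPreSignedURL(request));
            }
        }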

    Read the article

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Two of the most useful predefined compiler macros in C/C++ are __FILE__ and __LINE__, which expand to the compilation unit's file name and the line number where the value is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a simple compiler-expandable macro, it is an attribute, but it serves exactly the same purpose. Now we get:

        CallerLineNumberAttribute == __LINE__
        CallerFilePathAttribute   == __FILE__
        CallerMemberNameAttribute == __FUNCTION__ (MSVC extension)

    The most important one is CallerMemberNameAttribute, which is very useful for implementing the INotifyPropertyChanged interface without the need to hard-code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute and the property name is inserted as a string by the C# compiler at compile time:

        public string UserName
        {
            get { return _userName; }
            set
            {
                _userName = value;
                RaisePropertyChanged();   // no more RaisePropertyChanged("UserName")!
            }
        }

        protected void RaisePropertyChanged([CallerMemberName] string member = "")
        {
            var copy = PropertyChanged;
            if (copy != null)
            {
                copy(this, new PropertyChangedEventArgs(member));
            }
        }

    Nice and handy. This was obviously the prime reason to implement the feature in the C# 5.0 compiler. You can repurpose it for tracing, to get your hands on the method name of your caller (along with other stuff) very fast now. All the information is filled in at compile time, which is much faster than other approaches like walking the stack. MSDN shows the usage of this attribute with an example:

        public static void TraceMessage(string message,
                                        [CallerMemberName] string memberName = "",
                                        [CallerFilePath] string sourceFilePath = "",
                                        [CallerLineNumber] int sourceLineNumber = 0)
        {
            Console.WriteLine("Hi {0} {1} {2}({3})", message, memberName, sourceFilePath, sourceLineNumber);
        }

    When I think of tracing, I usually want an API which allows me to:

    - Trace method enter and leave.
    - Trace messages with a severity like Info, Warning, Error.

    When I print a trace message it is very useful to print the method and type name as well. So your API must either be able to pass the method and type name as strings, or extract them automatically by walking back one stack frame and fetching the information from there. The first glaring deficiency is that there is no CallerTypeAttribute yet, because the C# compiler team was not satisfied with its performance.

    A usable trace API might therefore look like:

        enum TraceTypes
        {
            None = 0,
            EnterLeave = 1 << 0,
            Info = 1 << 1,
            Warn = 1 << 2,
            Error = 1 << 3
        }

        class Tracer : IDisposable
        {
            string Type;
            string Method;

            public Tracer(string type, string method)
            {
                Type = type;
                Method = method;
                if (IsEnabled(TraceTypes.EnterLeave, Type, Method))
                {
                    // trace method enter
                }
            }

            private bool IsEnabled(TraceTypes traceTypes, string type, string method)
            {
                // Do checking here if tracing is enabled
                return false;
            }

            public void Info(string fmt, params object[] args) { }
            public void Warn(string fmt, params object[] args) { }
            public void Error(string fmt, params object[] args) { }

            public static void Info(string type, string method, string fmt, params object[] args) { }
            public static void Warn(string type, string method, string fmt, params object[] args) { }
            public static void Error(string type, string method, string fmt, params object[] args) { }

            public void Dispose()
            {
                // trace method leave
            }
        }

    This minimal trace API is very fast, but hard to maintain, since you need to pass in the type and method name as hard-coded strings which can change from time to time. But now we have at least CallerMemberName to get rid of the explicit method parameter, right? Not really. Since any acceptably usable trace API should have a method signature like Tracexxx(string fmt, params object[] args), we are not able to add additional optional parameters after the args array. If we put them before the format string, we would need to make them optional as well, which would mean the compiler would have to figure out which arguments are the trace message and its arguments (not likely), or we would need to specify everything explicitly just like before. There are ways around this by providing a myriad of overloads which in the end are routed to the very same method, but that is ugly. I am not sure whether anybody inside MS agrees that the above API is reasonable to have, or (more likely) the whole talk about using this feature for diagnostic purposes was not a core feature at all, but a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow, after the params keyword, another set of optional arguments which are always filled by the compiler, but I do not know if that is an easy one.

    The thing I am missing much more is the not-provided CallerType attribute, but not in the way you would think of. In the API above I added some filtering based on method and type, to stay as fast as possible for types where tracing is not enabled at all. It should be no more expensive than an additional method call and a bool variable check to see whether tracing for this type is enabled at all. The data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I expect to happen, I have come up with an alternative approach which basically allows me to attach run-time data to any existing type object in a super fast way. The key to success is the usage of generics:

        class Tracer<T> : IDisposable
        {
            string Method;

            public Tracer(string method)
            {
                Method = method;
                if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
                {
                    // trace method enter
                }
            }

            public void Dispose()
            {
                if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
                {
                    // trace method leave
                }
            }

            public static void Info(string fmt, params object[] args) { }

            /// <summary>
            /// Every type gets its own instance with a fresh set of variables to describe the
            /// current filter status.
            /// </summary>
            /// <typeparam name="UsingType"></typeparam>
            internal class TraceData<UsingType>
            {
                internal static TraceData<UsingType> Instance = new TraceData<UsingType>();

                // flag if we need to reinit the trace data in case of reconfigured trace settings at runtime
                public bool IsInitialized = false;

                // enabled trace levels for this type
                public TraceTypes Enabled = TraceTypes.None;
            }
        }

    We do not need to pass the type as a string or Type object to the trace API. Instead we define a generic API that accepts the using type as a generic parameter. Then we can create a TraceData static instance which, due to the nature of generics, is a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing, you do not want to bring the app down by simply enabling tracing for one special, rarely used type; the trace filter performance for the types which are not enabled must therefore be the fastest code path. This approach has the nice side effect that, if you store the TraceData instances in one global list, you can reconfigure tracing at runtime safely by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type, TraceData> object, but big hash tables have random memory access semantics, which is bad for cache locality, and you always need to pay for the lookup, which involves hash code generation, an equality check and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing API with minimal perf overhead.

    But it is cumbersome to write the generic type argument explicitly every time, and worse, if you refactor code and move parts of it to other classes, it might be that you cannot configure tracing correctly anymore. I would therefore like to decorate my type with an attribute:

        [CallerType]
        class Tracer<T> : IDisposable

    to tell the compiler to fill in the generic type argument automatically:

        class Program
        {
            static void Main(string[] args)
            {
                using (var t = new Tracer())   // equivalent to new Tracer<Program>()
                {
                }
            }
        }

    That would be really useful and super fast, since you do not need to pass any type object around, yet you have full type information at hand. This change would be breaking if another non-generic type of the same name existed in the same namespace, where now the generic counterpart would be preferred. But this is an acceptable risk in my opinion, since today you can already get conflicts if two generic types of the same name are defined in different namespaces; this would only be a variation of that issue.

    When you think about this further, you can add more features, like tracing the exception in your Dispose method if the method is left with an exception, with that little trick I wrote about some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can, e.g.:

    - Reconfigure tracing at run time.
    - Take a memory dump when a specific method is left with a specific exception.
    - Throw an exception when a specific trace statement is hit (useful for testing error conditions).
    - Execute a passed delegate which e.g. dumps additional state when enabled.
    - Write data to an in-memory ring buffer and dump it when specific events occur (e.g. a method is left with an exception, triggered from outside).
    - Write data to an output device.

    This stuff is really useful to have when your code is in production on a mission-critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (OK, with .NET 4 not anymore, except if you enable a compatibility flag), where you would like to have a minidump, or you have reached, after two weeks of operation, a state where you need a full memory dump at a specific point in time in the middle of a transaction.

    On my older machine I get, with this super fast approach, 50 million traces/s when tracing is disabled. When I know that tracing is enabled for a type, I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check further whether a specific action or output device is configured for the method, which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s, so performance is not so important anymore, since at that point I actually want to do something. The CallerMemberName feature of the C# 5 compiler is nice, but I would have preferred direct access to the MethodHandle rather than a stringified version of it. And I really would like to see a CallerType attribute implemented, to fill in the generic type argument of the call site and augment the static CLR type data with run-time data.

    Read the article

  • Generic/Text Printer on Windows 7 not prompting for file name

    - by FantaFan
    Guys & gals, Hope someone can shed some light on this. I am downloading reports from an AIX-based system by directing them to a TT printer which the terminal emulator (MultiView 2000) intercepts and directs to the default printer on the local system. This local printer is configured as a vanilla Generic/Text printer attached to a FILE port. When I print from AIX, the output is spooled down and the local printer prompts for a file name into which to save the file...but not under Windows 7. This has worked fine for many years, on both Win2K and WinXP. However, on Windows 7 the output gets spooled as a file into spool\PRINTERS (and looks as expected) but the print job then hangs with a status of "Error - Printing" and never prompts for a file name. I have to cancel the job. The Generic/Text printer works as expected with other applications. I have tried setting the printer to print directly rather than spooling but this only serves to hang the terminal session too. I've also tried to run the emulator in Windows 2000 Compatibility Mode and as Administrator in case it was something like that but with no luck. As you might expect, it does work fine in XP Mode (as long as I print to a printer defined therein and not the host's printer) but operationally this isn't going to be an option. Obviously this emulation software is a decade old (at least) and I could just cross/upgrade all the users (at a cost) but, before I do so, has anyone seen this sort of behaviour before and found some sort of fix? Remote OS: AIX 5 Client OS: Windows 7 Pro (32-bit) Printer: Generic/Text on a FILE port TE Software: MultiView 2000 (32-bit) Thanks in advance.

    Read the article

  • "Unable to open MRTG log file" error with nagios and mrtg

    - by Simone Magnaschi
    We have a strange issue with our setup of Icinga / Nagios and MRTG. Icinga is working great and has no problems; it can monitor basically everything without issues.

    We set up MRTG to gather bandwidth data from our routers and switches. MRTG is working fine: it stores the log data in the /var/www/mrtg/ directory and displays the graph data via web. So we assume MRTG is doing great.

    We tried to set up bandwidth checks in Nagios:

        define service{
            use                     generic-service   ; Inherit values from a template
            host_name               zywall-agora
            service_description     ZYWALL AGORA TRAFFICO
            check_command           check_local_mrtgtraf!/var/www/mrtg/x.x.x.x_2.log!AVG!1000000,2000000!5000000,5000000!1000
            check_interval          1                 ; Check the service every 1 minute under normal conditions
            retry_interval          1                 ; Re-check every minute until its final/hard state is determined
        }

    where /var/www/mrtg/x.x.x.x_2.log is the correct log file path. We keep on getting an "Unable to open MRTG log file" error in the test result in the Icinga web interface. We tried everything: giving ownership of the log file to the nagios or icinga user, chmod 777 on the file, and copying the file to another directory and giving it full permissions. Same error.

    The strange thing is that if we run the command that Nagios generates in a bash session, it works like a charm:

        /usr/lib64/nagios/plugins/check_mrtgtraf -F /var/www/mrtg/x.x.x.x_2.log -a AVG -w 10,20 -c 5000000,5000000 -e 10

    Result:

        Traffic WARNING - Avg. In = 17.9 KB/s, Avg. Out = 5.0 KB/s|in=17.877930KB/s;10.000000;5000000.000000;0.000000 out=5.000000KB/s;20.000000;5000000.000000;0.000000

    We ran that command line as root, as user nagios and as user icinga, and all three worked OK. We thought that the command Nagios generates might have something wrong in it, so we debugged Nagios, but we found that the generated command is the same as above. Searching on Google for this kind of problem returns only issues on systems where MRTG is not installed, or issues with a wrong path to the log file, but neither seems to be our case. We are stuck; can somebody help?

    Read the article

  • qemu-img: Could not open $FILE

    - by HTTP500
    I received a single-file VMDK from a vendor that has a virtual appliance for a particular product I'm interested in evaluating. We run a KVM solution (Proxmox), so I tried converting the file, but on that system qemu-img blows up. (I was able to convert multipart VMDK files from Bitnami without error.) So I figured I'd just yum install qemu-img on a RHEL 6.3 VM and do it there. But despite the fact that file identifies the image just fine, when I run qemu-img on it I get an error that it can't open the file:

        [root@host dir]# file 1.vmdk
        1.vmdk: VMware4 disk image
        [root@host dir]# qemu-img info 1.vmdk
        qemu-img: Could not open 'vmdk'

    I've seen some other people post on the interwebs that they've had this problem, but none of them seem to have a resolution. Does anyone have any ideas? I have checked the MD5SUM already.

    EDIT 1:

        [root@host dir]# qemu-img info -f vmdk 1.vmdk
        qemu-img: Could not open '1.vmdk'

    EDIT 2: Ran strace per suggestion. Not sure what to look for... Here is a possible:

        ioctl(3, CDROM_DRIVE_STATUS, 0x7fffffff) = -1 ENOTTY (Inappropriate ioctl for device)

    Read the article

  • Samba Share - MS Excel when saved (can't access the file, there are several possible reasons)

    - by brain90
    Dear fellow ServerFaulters, I have a weird problem with my Samba share. I have one share definition for 3 clients (A, B, C). The share contains some Excel files with a lot of formulas, linked to each other. Client A accesses the files with LibreOffice (Ubuntu); client B accesses them with WinXP & MS Office 2003. Reading and writing work successfully for both of them.

    The problem occurs when client C accesses the same file with MS Excel 2003 (Windows XP). This message box appears when he saves the file:

        Microsoft Office Excel cannot access the file '\\192.168.1.23\myshare\'.
        There are several possible reasons:
        - The file name or path does not exist.
        - The file is being used by another program.
        - The workbook you are trying to save has the same name as a currently open workbook.

    I tried http://support.microsoft.com/kb/291204 but it didn't work. Below is my share definition:

        [brainshare]
        comment = brainshare
        path = /opt/brainshare/
        valid users = @brainshare
        force group = brainshare
        read only = No
        create mask = 0775
        veto files = /*.scr/*.eml/thumbs.com/

    Help me please... Thanks in advance!

    Server: Ubuntu 10.10, Samba version 3.5.4

    Read the article

  • CouchDB crashes at startup when path to config file has space(s)

    - by Barry Wark
    I'm hoping to run CouchDB as a per-user Launch Agent on OS X, using the couchdbx-core folder from the CouchDB Server.app as the base of my CouchDB deployment. I'd like each user to have their own couch instance (on a different port), necessitating separate config files for each instance. The logical place to put these files is in ~/Library/Application Support/ for each user.

    I can put the entire distribution in ~/Library/Application Support/my-app/couchdbx and put the .ini at ~/Library/Application Support/my-app/local.ini. Starting CouchDB as bin/couchdb -a ../local.ini (from ~/Library/Application Support/my-app/couchdbx) works great. But I'd like to save every user the ~50 MB couchdbx and install the couchdbx-core in a shared location (e.g. within my app's .app bundle). When I do this, the path to the per-user config file contains a space, and I get the following error when starting CouchDB:

        $ bin/couchdb -n -a ~/Library/Application\ Support/us.physion.ovation/default.ini
        {"init terminating in do_boot",{{badmatch,{error,{bad_return,{{couch_app,start,[normal,["/Users/hs/prj/build-couchdb/build/etc/couchdb/default.ini","/Users/hs/prj/build-couchdb/build/etc/couchdb/local.ini"]]},{'EXIT',{{badmatch,{error,{error,enoent}}},[{couch_server_sup,start_server,1,[{file,"/Users/hs/prj/build-couchdb/dependencies/couchdb/src/couchdb/couch_server_sup.erl"},{line,56}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,274}]}]}}}}}},[{couch,start,0,[{file,"/Users/hs/prj/build-couchdb/dependencies/couchdb/src/couchdb/couch.erl"},{line,18}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}

    Is there any way to provide a config file at the command line if that config file's path includes space(s)? Despite my best efforts in the mailing list archives, wiki and Google, I haven't been able to find a solution or a definitive "it can't work". Any help greatly appreciated.

    Read the article

  • Create a special folder within an Outlook PST file

    - by Tony Dallimore
    Original question: I have two problems caused by missing special folders.

    1. I added a second email address, for which Outlook created a new PST file with an Inbox to which emails are successfully imported. But there is no Deleted Items folder: if I attempt to delete an unwanted email, it is struck out, and if I move an email to a different PST file, it is copied.

    2. I created a new PST file using Data File Management. This PST file has no Drafts folder. This is not important, but I fail to see why I cannot have a Drafts folder if I want one.

    Any suggestions for solving these problems, particularly the first, gratefully received.

    Update: Thanks to Ramhound and Dave Rook for their helpful responses to my original question. I assumed that not having a Drafts folder in an archive PST file and not having a Deleted Items folder associated with an Inbox were parts of the same problem, or I would not have mentioned the Drafts folder issue, since I have an easy workaround for it. Perhaps my question should have been: how do I load emails from an IMAP account and still be able to delete the spam?

    Read the article
