Search Results

Search found 96184 results on 3848 pages for 'recent file list'.


  • unable to copy file to folder, permission denied without explanation

    - by ValekHalfHeart
    Recently Norton Internet Security deleted ml.exe (an assembler I use to program in masm32) off of my computer, thinking that one of the programs I had written with it was a virus (it was most certainly not). Fortunately, I had a copy of ml.exe backed up on an external hard drive, and tried to copy it over to my computer. The old ml.exe was located in C:\masm32\bin, so I tried to copy the new one to that location. After disabling Norton (which had opened the folder and was preventing me from accessing it), I am still unable to copy the new file to C:\masm32\bin. When I tried, Windows announced that I would need administrator permission to copy the file. Since I'm an admin, I figured this wouldn't be a problem, although it was unexpected, as I have never had to provide administrator permission to access this folder before. However, instead of prompting me to enter my password, Windows simply refuses to copy the file: I repeat, I was not asked to provide a password. It simply says that I do not have permission. Does anyone know what's happening and how to fix it? Is Norton still causing problems, or is it something else?
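
    A first step (not from the post itself, just a sketch of the usual diagnosis) is to inspect the folder's ACL from an elevated command prompt and, if ownership or permissions really have been mangled, reset them. The path is the masm32 location mentioned above; the commands are standard Windows tools.

        rem Run from an elevated cmd.exe ("Run as administrator").
        rem Show the current ACL on the folder:
        icacls C:\masm32\bin

        rem If access was lost, take ownership of the tree and
        rem re-grant the Administrators group full control:
        takeown /f C:\masm32\bin /r /d y
        icacls C:\masm32\bin /grant Administrators:F /t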

    Read the article

  • File sharing problem on Windows Server 2003 x64

    - by O. Askari
    Hi, we have a customer that hosts our .NET application server on Windows Server 2003 x64. The problem is, its file sharing gets totally disabled after about 10-30 minutes. The only way to re-enable it is to restart the server, but the same thing happens again after each restart. This server contains SQL Server 2005 Enterprise, .NET Framework 3.5 and our .NET-based application server. We haven't had such a problem with any other customer before, so we asked them to prepare another server to deploy our application on. We installed our application server on the new machine and let SQL Server remain on the old one. Unfortunately the same problem happened to the new machine too. Now the old machine works only as a database server and the new one works as the application server, but both of them have the same file sharing problem. File sharing on both machines doesn't get disabled at the same time, but it eventually happens to both of them. I wonder why this is happening and how to find the reason for it. Any suggestion or solution is much appreciated.

    Read the article

  • Windows 7 Paging file apparently not being used

    - by Daniel F.
    I'm running Windows 7 Home Premium 32-bit on a motherboard with 24GB RAM. Of those 24GB, 20GB are assigned as a RAMDISK via ASRock XFastRAM. This RAMDISK has the drive letter X assigned to it. On X:\ I'm storing the temporary files folder, as well as pagefile.sys. Pagefile.sys has 6GB of size. The X:\ drive usually has around 14GB free, so the temporary files are negligible; it's mostly the browsers which are storing their caches on there. Now my issue is that Firefox is crashing a lot on me; no error message pops up, but I know that this is because it's out of memory. I could kind of live with that, but now that I switched from Eclipse to Android Studio, I know that I'm in trouble, because Java can't allocate the memory it asks for, and Android Studio, together with the Java instances it launches, is quite a memory hog. So I tried to figure out what's wrong, and apparently Windows isn't swapping out memory onto the paging file. While my applications are crashing (Firefox) / not starting (Java VMs), the paging file is only using constantly around 15% of its size (checked with the performance monitor). 15% equals roughly 1GB. I know that the correct solution would be to switch to 64-bit Windows, but I had to use the 32-bit version because of driver issues which I had about two years ago, and I guess that I'll have them again if I reformat and install the 64-bit version. Also, the machine is running quite stably; the only issue is the memory, so I'd like to use it as it is (as the apps are installed and configured). Is there a way to make Windows use the paging file more efficiently? None of my processes require more than 1GB; I'd just like it to swap out some seldom-used stuff, like GoogleCrashHandler.exe and things like that, in order to have "more physical memory available". Is that possible?

    Read the article

  • File permissions on web server

    - by plua
    I have just read this useful article on file permissions, and I am about to implement an as-strict-as-possible file permissions policy on our web server. Our situation: we have a web server accessed through SFTP by different users from within our company, and we have the general public accessing Apache, sometimes uploading files through PHP. I distinguish folders and files by their use. So based on this reading, here is my plan. All people who need to upload files will have separate users, but all of those users will belong to two groups: uploaders and webserver. Apache will belong to the group webserver.

      Directories
        Permission: 771, owner: user:uploaders
        Explanation: to access files in the folder, everybody needs to have execute permission. Only uploaders will be adding/removing files, so they also get r+w permission.

      Files within the web root
        Permission: 664, owner: user:uploaders
        Explanation: they will be uploaded and changed by different users, so both owner and group need w+r permissions. The web server only needs to read the files, so r permission only.

      Upload directories
        Permission: 771, owner: user:webserver
        Explanation: when files need to be uploaded, Apache needs to be able to write to this directory. But I figure it is safer to change the group to webserver, thus giving Apache sufficient privileges (and all uploaders also belong to this group and will have the same permissions), while safeguarding against "others" writing to this folder.

      Uploaded files
        Permission: 664, owner: user:webserver
        Explanation: after uploading, Apache might need to delete files, but this is no problem because it has w+r permission on the folder. So no need to make the file itself any more accessible than r access for the group.

    Being no expert on file permissions, my question is whether or not this is the best possible policy for our situation? Any suggestions welcome.
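
    A sketch of how that policy could be applied on the server (the user name, the Apache account and the web-root path below are placeholders; the group names are the ones described above):

        # Create the two groups and put the right accounts in them.
        groupadd uploaders
        groupadd webserver
        usermod -aG uploaders,webserver alice     # repeat for each uploading user
        usermod -aG webserver www-data            # the Apache user (name varies by distro)

        # Web root: owned by an uploader account, group uploaders.
        chown -R alice:uploaders /var/www/site
        find /var/www/site -type d -exec chmod 771 {} \;
        find /var/www/site -type f -exec chmod 664 {} \;

        # Upload directory: group webserver so Apache can write into it.
        chown -R alice:webserver /var/www/site/uploads
        find /var/www/site/uploads -type d -exec chmod 771 {} \;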

    Read the article

  • List of recent motherboards with BIOS / without UEFI [on hold]

    - by jmn
    I am building a new desktop PC and I want to have full disk encryption on it. TrueCrypt doesn't support UEFI as of now. Are there still recent motherboards out there without UEFI? I didn't find any list, and I am afraid that I will have to study each potential candidate's technical sheet before purchase. I want to buy 2 or 3 of the same model to be future proof. Newegg links will not help; I don't live in the USA ... this means that this post is a legitimate target for PRISM ;-) Thanks for your help.

    Read the article

  • Input multiple file names in windows open file dialog box

    - by goodiet
    Windows 7 allows you to select multiple files to open at once by using ctrl or shift key. The "File Name" input field at the bottom of the dialog box would auto populate with the following sample: "aaa.txt" "bbb.txt" "ccc.txt" "ddd.txt" I have 14,000 files in a folder and I only need a range of files (approx 500). When I use the shift key to select a range of files, the "File Name" field auto populates all 500 file names. Windows would cut me off at the 260th character when I try to paste in a pre-generated string into the "File Name" field. Is there a way to bypass the 260 character limit so it would accept my entire string with 500 file names?
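
    If the goal is simply to act on those ~500 files rather than to lift the dialog's character limit (which, as far as I know, cannot be raised from the application side), one workaround is to copy the range into a scratch folder driven by the pre-generated list, then point the Open dialog at that folder and press Ctrl+A. A sketch, where files.txt, C:\data and C:\scratch are placeholders:

        rem files.txt holds one file name per line (the pre-generated list).
        mkdir C:\scratch
        for /f "usebackq delims=" %%f in ("files.txt") do copy "C:\data\%%f" "C:\scratch\"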

    Read the article

  • Text file cannot be read by batch file

    - by Typowarrior
    I have a problem where a TXT file created by one batch file can't be read by another; the result just turns into "ECHO is off.". Here is the first script that needs to be run:

        echo.
        wmic /output:huhu.txt Path CIM_DataFile WHERE Name='C:\\Users\\uJaNbaTus\\Desktop\\HyperTerminal.exe' Get Version
        echo.

    Then I create another .bat file with this code and run it:

        echo.
        setlocal ENABLEDELAYEDEXPANSION
        set revision=
        for /f "delims=" %%a in (huhu.txt) do (
            set line=%%a
            if "x!line:~0,8!"=="xVersion " (
                set revision=!line:~8!
            )
        )
        echo !revision!
        echo.
        endlocal

    When I run this .bat file, the result shows "ECHO is off.". By the way, if I create the text file with Notepad instead and point the script at it (replacing huhu.txt), I don't get any error and the output comes from the txt file as expected.
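
    A likely cause (an inference, not something stated in the post): wmic /output writes UTF-16 text, which for /f cannot parse, so revision stays empty and echo !revision! falls back to printing the "ECHO is off." status line. A minimal sketch of a workaround, assuming the encoding really is the culprit, is to re-encode the file as ANSI before reading it:

        rem cmd /a forces internal commands to write ANSI output, so this
        rem re-encodes the UTF-16 wmic output into something for /f can read.
        cmd /a /c "type huhu.txt > huhu_ansi.txt"

        setlocal ENABLEDELAYEDEXPANSION
        set revision=
        for /f "delims=" %%a in (huhu_ansi.txt) do (
            set line=%%a
            if "x!line:~0,8!"=="xVersion " set revision=!line:~8!
        )
        echo !revision!
        endlocal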

    Read the article

  • Online FTP or file sharing service [on hold]

    - by Frede
    We need to share large files with clients: a client uploads a large file, we modify it and later make it available for download. Up until now we've used FTP, but this has a number of drawbacks: a lot of management of files, setting up accounts, etc. We are therefore considering online alternatives. Requirements:

      - Cheap 8-)
      - Easy to use, ideally just requiring a web browser, but also possible for power users to connect e.g. via FTPS/SFTP
      - No registration required for users to upload/download files. We ourselves of course need to be able to log in, view uploaded files and upload new files
      - No per-user fee
      - High bandwidth. As files may be GBs in size, neither upload nor download speed can be too slow
      - Secure: encryption during upload/download, and no way for users to access uploaded files. Once a user has uploaded a file, they (and anyone else besides us) should not be able to access the file. To download files users get a link with a password; ideally the link expires after a set time
      - No software installation

    We do NOT need any sync features, backup, versioning etc. Just a quick, easy, secure way for us to share files with our clients. Services like JustCloud, DriveHQ etc. seem bloated and "too much" for what we need. What other alternatives exist? Thanks!

    Read the article

  • /dev/null file became regular file

    - by user197719
    On our production server, /dev/null suddenly became a regular file; because of this the sshd service stopped and we are not able to log in to the server. We tried the steps below to turn it back into a character device file:

        rm -rf /dev/null
        mknod /dev/null c 1 3

    As soon as we run the rm command, /dev/null is re-created as a regular file before mknod can run. We can't figure out how this is happening or which component is creating the file. So until we solve this, we are unable to re-create /dev/null as a character device file.
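
    Two things that may help (a sketch, not from the post; it assumes a root shell and that auditd is available): recreate the device node in a single command chain so the window for another process to grab the path is as small as possible, and put an audit watch on the path to identify whichever process keeps re-creating it.

        # Recreate the device node immediately after removing the regular file.
        rm -f /dev/null && mknod /dev/null c 1 3 && chmod 666 /dev/null

        # Record which executable creates/writes the path (requires auditd).
        auditctl -w /dev/null -p wa -k devnull-watch
        # ...wait until the file flips back to a regular file, then:
        ausearch -k devnull-watch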

    Read the article

  • C++ linked list based tree structure. Sanely copy nodes between lists.

    - by krunk
    edit Clarification: The intention is not to remove the node from the original list, but to create an identical node (data- and children-wise) to the original and insert that into the new list. In other words, a "move" does not imply a "remove" from the original. endedit

    The requirements:

      - Each Node in the list must contain a reference to its previous sibling
      - Each Node in the list must contain a reference to its next sibling
      - Each Node may have a list of child nodes
      - Each child Node must have a reference to its parent node

    Basically what we have is a tree structure of arbitrary depth and length. Something like:

        -root(NULL)
        --Node1
        ----ChildNode1
        ------ChildOfChild
        --------AnotherChild
        ----ChildNode2
        --Node2
        ----ChildNode1
        ------ChildOfChild
        ----ChildNode2
        ------ChildOfChild
        --Node3
        ----ChildNode1
        ----ChildNode2

    Given any individual node, you need to be able to traverse its siblings, its children, or up the tree to the root node. A Node ends up looking something like this:

        class Node {
            Node* previous;
            Node* next;
            Node* child;
            Node* parent;
        }

    I have a container class that stores these and provides STL iterators. It performs your typical linked list accessors. So insertAfter looks like:

        void insertAfter(Node* after, Node* newNode) {
            Node* next = after->next;
            after->next = newNode;
            newNode->previous = after;
            next->previous = newNode;
            newNode->next = next;
            newNode->parent = after->parent;
        }

    That's the setup, now for the question. How would one move a node (and its children etc.) to another list without leaving the previous list dangling? For example, if Node* myNode exists in listOne and I want to append it to listTwo. Using pointers, listOne is left with a hole in its list since the next and previous pointers are changed. One solution is to pass the appended Node by value, so our insertAfter method would become:

        void insertAfter(Node* after, Node newNode);

    This seems like an awkward syntax. Another option is doing the copying internally, so you'd have:

        void insertAfter(Node* after, const Node* newNode) {
            Node* new_node = new Node(*newNode);
            Node* next = after->next;
            after->next = new_node;
            new_node->previous = after;
            next->previous = new_node;
            new_node->next = next;
            new_node->parent = after->parent;
        }

    Finally, you might create a moveNode method for moving, and prevent raw insertion or appending of a node that has already been assigned siblings and parents.

        // default pointer value is 0 in constructor and a operator bool(..)
        // is defined for the Node
        bool isInList(const Node* node) const {
            return (node->previous || node->next || node->parent);
        }

        // then in insertAfter and friends
        if (isInList(newNode))
            // throw some error and bail

    I thought I'd toss this out there and see what folks came up with.
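
    Since the edit says a "move" is really a copy, one approach (a sketch using the member names from the post; it assumes the routine has access to Node's members, e.g. as a member or friend of the container) is a recursive clone that duplicates the node and its child subtree, clears the sibling links on the copy, and then goes through the normal insertAfter:

        // Deep-copy a node and its children. Siblings of 'src' itself are not
        // copied; only the subtree hanging off 'child' is.
        Node* cloneSubtree(const Node* src, Node* newParent) {
            if (!src) return 0;
            Node* copy = new Node(*src);   // copies the data members
            copy->previous = 0;
            copy->next = 0;
            copy->parent = newParent;
            copy->child = 0;

            Node* lastChild = 0;
            for (const Node* c = src->child; c != 0; c = c->next) {
                Node* childCopy = cloneSubtree(c, copy);
                if (lastChild == 0) {
                    copy->child = childCopy;
                } else {
                    lastChild->next = childCopy;
                    childCopy->previous = lastChild;
                }
                lastChild = childCopy;
            }
            return copy;
        }

        // Usage: insert an independent copy into the second list, leaving the
        // original list untouched.
        // listTwo.insertAfter(afterNode, cloneSubtree(myNode, afterNode->parent));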

    Read the article

  • Windows Batch Script to Replace Environment Variables in a File

    - by skb
    Hi. I want to write a batch file that will take the contents of a file, and replace any environment variable references inside the file with the actual environment variable values. Is this possible? Basically, if a file had this:

        %PROGRAM FILES%\Microsoft SQL Server\

    then I would want the file contents to become:

        C:\Program Files\Microsoft SQL Server\

    after the batch script ran. This is just one example, but I want ALL environment variables to be expanded. Thanks in advance for any help!
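
    A minimal sketch of one way to do this (with two caveats: for /f drops blank lines, and the variable name in the example would need to be a real one such as %ProgramFiles%): read the file line by line and let call re-parse each line, which expands any %VAR% references it contains.

        @echo off
        rem Expand %VAR% references in input.txt and write the result to output.txt.
        rem input.txt / output.txt are placeholder names.
        rem Lines containing special characters (& | < >) may need extra escaping.
        setlocal
        (for /f "usebackq delims=" %%L in ("input.txt") do call echo %%L) > output.txt
        endlocal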

    Read the article

  • List of header file locations for the Havok Physics Engine

    - by QAH
    Hello everyone! I am trying to integrate the Havok physics engine into my small game. It is a really nice SDK, but the header files are all over the place. Many headers are deeply nested in multiple directories. That gets confusing when you are trying to include headers for different important objects. I would like to know if there is a nice guide that will let you know where certain objects are and what headers they are in. I have already looked at Havok's documentation, and I also looked at the reference manual, but they don't give great detail as to where certain classes are located (header location). Also, are there any programs out there that can scan header files and create a list of where objects can be found? Thanks again
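
    For the last question, one low-tech option (a sketch, assuming the SDK's classes use Havok's usual hk prefix and that a Unix-style shell such as Git Bash or Cygwin is available) is to index the headers yourself with grep and search the resulting file whenever you need an include:

        # Build a "class declaration -> header file" index for the SDK.
        # Adjust the path and the prefix pattern to match your SDK layout.
        grep -rnE --include='*.h' '^[[:space:]]*class[[:space:]]+hk[A-Za-z0-9_]+' \
            /path/to/havok/Source > havok_class_index.txt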

    Read the article

  • Is there a way to circumvent Python list.append() becoming progressively slower in a loop as the list grows?

    - by Deniz
    I have a big file I'm reading from, and I convert every few lines to an instance of an object. Since I'm looping through the file, I stash the instance in a list using list.append(instance), and then continue looping. This is a file that's around 100MB, so it isn't too large, but as the list grows larger, the looping slows down progressively. (I print the time for each lap in the loop.) This is not intrinsic to the loop: when I print every new instance as I loop through the file, the program progresses at constant speed; it is only when I append them to a list that it gets slow. My friend suggested disabling garbage collection before the loop and enabling it afterward, then making a garbage collection call. Did anyone else observe a similar problem with list.append getting slower? Is there any other way to circumvent this?
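
    The friend's suggestion is easy to try and targets exactly this symptom (the collector repeatedly scanning an ever-growing set of container objects). A minimal sketch, where the file path and parse_line are placeholders:

        import gc

        def load_instances(path):
            items = []
            gc.disable()          # skip GC passes while the list is being built
            try:
                with open(path) as f:
                    for line in f:
                        items.append(parse_line(line))   # parse_line is a placeholder
            finally:
                gc.enable()
                gc.collect()      # one collection once loading is done
            return items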

    Read the article

  • Implement Semi-Round-Robin file which can be expanded and saved on demand

    - by ircmaxell
    Ok, that title is going to be a little bit confusing. Let me try to explain it a little bit better. I am building a logging program. The program will have 3 main states:

      1. Write to a round-robin buffer file, keeping only the last 10 minutes of data.
      2. Write to a buffer file, ignoring the time (record all data).
      3. Rename the entire buffer file, and start a new one with the past 10 minutes of data (and change state to 1).

    Now, the use case is this. I have been experiencing some network bottlenecks from time to time in our network, so I want to build a system to record TCP traffic when it detects the bottleneck (detection via Nagios). However, by the time it detects the bottlenecking, most of the useful data has already been transmitted. So, what I'd like is to have a daemon that runs something like dumpcap all the time. In normal mode, it'll only keep the past 10 minutes of data (since there's no point in keeping a boatload of data if it's not needed). But when Nagios alerts, I will send a signal to the daemon to store everything. Then, when Nagios recovers, it will send another signal to stop storing and flush the buffer to a save file. Now, the problem is that I can't see how to cleanly store a rotating 10 minutes of data. I could store a new file every 10 minutes and delete the old ones if in mode 1, but that seems a bit dirty to me (especially when it comes to figuring out when the alert happened in the file). Ideally, the file that was saved should be such that the alert is always at the 10:00 mark in the file. While that is possible with new files every 10 minutes, it seems a bit dirty to "repair" the files to that point. Any ideas? Should I just do a rotating file system and combine them into 1 at the end (doing quite a bit of post-processing)? Is there a way to implement the semi-round-robin file cleanly so that there is no need for any post-processing? Thanks. Oh, and the language doesn't matter as much at this stage (I'm leaning towards Python, but have no objection to any other language; it's less of an issue than the overall design)...
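
    Since Python is on the table, here is a minimal sketch of the ring-buffer part (the chunk source, the signal handling and the file format are left out or are placeholders): keep timestamped chunks in a deque, evict anything older than ten minutes while in normal mode, and simply stop evicting when the alert arrives. Because the buffer always starts roughly ten minutes before the alert, the alert naturally sits near the 10:00 mark of whatever gets flushed.

        import time
        from collections import deque

        WINDOW = 10 * 60            # seconds of history to keep in normal mode

        buffer = deque()            # (timestamp, chunk) pairs
        keep_everything = False     # flipped to True by the "Nagios alert" signal

        def add_chunk(chunk):
            now = time.time()
            buffer.append((now, chunk))
            if not keep_everything:
                # round-robin mode: drop anything older than the window
                while buffer and buffer[0][0] < now - WINDOW:
                    buffer.popleft()

        def flush(path):
            # called on "Nagios recovered": save everything, then go back to normal mode
            with open(path, "wb") as f:
                for _, chunk in buffer:
                    f.write(chunk)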

    Read the article

  • \n not working in my fwrite()

    - by brett
    Not sure what could be the problem. I'm dumping data from an array $theArray into theFile.txt, each array item on a separate line.

        $file = fopen("theFile.txt", "w");
        foreach ($theArray as $arrayItem){
            fwrite($file, $arrayItem . '\n');
        }
        fclose($file);

    The problem is that when I open theFile.txt, I see the \n being output literally. Also, if I try to programmatically read the file line by line (just in case the lines are there), it shows them as one line, meaning the \n is really not having its desired effect.
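
    The literal \n in the output is the giveaway: in PHP, escape sequences are only interpreted inside double-quoted strings, not single-quoted ones, so a minimal fix is:

        fwrite($file, $arrayItem . "\n");       // double quotes: \n becomes a newline
        // or, for a platform-appropriate line ending:
        fwrite($file, $arrayItem . PHP_EOL);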

    Read the article

  • Android ACTION_SEND Attached File

    - by Sean
    When you attach a file to an e-mail using the ACTION_SEND intent (with the extra EXTRA_STREAM) does the e-mail app copy that attached file to its own location? My app creates a file and attaches it to an email, but this can happen many times and I would like to be able to delete this file when it is no longer needed (so it doesn't flood the user's storage with junk data). Is the file safe to delete after the e-mail intent has started?

    Read the article

  • I am confused -- Will this code always work?

    - by Shekhar
    Hello, I have written this piece of code:

        public class Test {
            public static void main(String[] args) {
                List<Integer> list = new ArrayList<Integer>();
                for (int i = 1; i <= 4; i++) {
                    new Thread(new TestTask(i, list)).start();
                }
                while (list.size() != 4) {
                    // this while loop required so that all threads complete their work
                }
                System.out.println("List " + list);
            }
        }

        class TestTask implements Runnable {
            private int sequence;
            private List<Integer> list;

            public TestTask(int sequence, List<Integer> list) {
                this.sequence = sequence;
                this.list = list;
            }

            @Override
            public void run() {
                list.add(sequence);
            }
        }

    This code works and prints all four elements of the list on my machine. My question is: will this code always work? I think there might be an issue in this code when two or more threads add an element to this list at the same point. In that case the while loop will never end and the code will fail. Can anybody suggest a better way to do this? I am not very good at multithreading and don't know which concurrent collection I can use. Thanks, Shekhar
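
    One common pattern for this (a sketch, not the only option): give the threads a thread-safe list and wait on a CountDownLatch instead of spinning on list.size().

        import java.util.*;
        import java.util.concurrent.CountDownLatch;

        public class Test {
            public static void main(String[] args) throws InterruptedException {
                final List<Integer> list =
                        Collections.synchronizedList(new ArrayList<Integer>());
                final CountDownLatch done = new CountDownLatch(4);
                for (int i = 1; i <= 4; i++) {
                    final int seq = i;
                    new Thread(new Runnable() {
                        public void run() {
                            list.add(seq);
                            done.countDown();   // signal completion instead of busy-waiting
                        }
                    }).start();
                }
                done.await();                   // blocks until all four threads have counted down
                System.out.println("List " + list);
            }
        }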

    Read the article

  • VB.net: Is my Thread Safe List Solution actually safe?

    - by Shiftbit
    I've added the following extensions to my project in order to create a thread-safe list.

    Extensions. If I want to conduct a simple operation on my list:

        <Extension()> _
        Public Sub Action(Of T)(ByVal list As List(Of T), ByVal action As Action(Of List(Of T)))
            SyncLock (list)
                action(list)
            End SyncLock
        End Sub

    If I want to pass it more than one parameter I could simply extend it with more items...

        <Extension()> _
        Public Sub Action(Of T)(ByVal list As List(Of T), ByVal action As Action(Of List(Of T), T), ByVal item As T)
            SyncLock (list)
                Action(list, item)
            End SyncLock
        End Sub

    Actions. I have created the following Action examples:

        Private Sub Read(Of T)(ByVal list As List(Of T))
            Console.WriteLine("Read")
            For Each item As T In list
                Console.WriteLine(item.ToString)
                Thread.Sleep(10)
            Next
        End Sub

    and also one that takes a parameter:

        Private Sub Write(Of T)(ByVal list As List(Of T), ByVal item As T)
            Thread.Sleep(100)
            list.Add(item)
            Console.WriteLine("Write")
        End Sub

    Initiating. Then in my various threads I will call my Actions with:

        list.Action(AddressOf Read)

    or

        list.Action(AddressOf Write2, 10)

    Are these Extension methods thread safe, or do you have other recommendations?

    Read the article

  • Why doesn't this list comprehension do what I expect it to do?

    - by Az
    The original list,

        project_keys = sorted(projects.keys())

    is [101, 102, 103, 104, 105, 106, 107, 108, 109, 110], where the following projects were deemed invalid this year: 108, 109, 110. Thus:

        for project in projects.itervalues():    # the projects dictionary is mapped to the Project class
            if project.invalid:                  # invalid is a Bool parameter in the Project class
                project_keys.remove(project.proj_id)
        print project_keys

    This will return a list of integers (which are project ids) as such:

        [101, 102, 103, 104, 105, 106, 107]

    Sweet. Now, I wanted to try the same thing using a list comprehension.

        project_keys = [project_keys.remove(project.proj_id) for project in projects.itervalues() if project.invalid]
        print project_keys

    This returns:

        [None, None, None]

    So I'm populating a list with the same number of elements as the removed ones, but they're Nones? Can someone point out what I'm doing wrong? Additionally, why would I use a list comprehension over the for-if block at the top? Conciseness? Looks nicer?
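
    list.remove() mutates the list in place and returns None, which is why the comprehension collects three Nones (one per invalid project). A comprehension works better when it builds the wanted list directly instead of mutating another one; a sketch using the attribute names from the post:

        # Keep the ids of valid projects rather than removing the invalid ones.
        project_keys = sorted(p.proj_id for p in projects.itervalues() if not p.invalid)
        print project_keys    # [101, 102, 103, 104, 105, 106, 107]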

    Read the article

  • MFC: Reading entire file to buffer...

    - by deostroll
    I've meddled with some code but I am unable to read the entire file properly... a lot of junk gets appended to the output. How do I fix this?

        // wmfParser.cpp : Defines the entry point for the console application.
        //
        #include "stdafx.h"
        #include "wmfParser.h"
        #include <cstring>

        #ifdef _DEBUG
        #define new DEBUG_NEW
        #endif

        // The one and only application object
        CWinApp theApp;

        using namespace std;

        int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
        {
            int nRetCode = 0;

            // initialize MFC and print and error on failure
            if (!AfxWinInit(::GetModuleHandle(NULL), NULL, ::GetCommandLine(), 0))
            {
                // TODO: change error code to suit your needs
                _tprintf(_T("Fatal Error: MFC initialization failed\n"));
                nRetCode = 1;
            }
            else
            {
                // TODO: code your application's behavior here.
                CFile file;
                CFileException exp;
                if( !file.Open( _T("c:\\sample.txt"), CFile::modeRead, &exp ) ){
                    exp.ReportError();
                    cout<<'\n';
                    cout<<"Aborting...";
                    system("pause");
                    return 0;
                }

                ULONGLONG dwLength = file.GetLength();
                cout<<"Length of file to read = " << dwLength << '\n';

                /*
                BYTE* buffer;
                buffer=(BYTE*)calloc(dwLength, sizeof(BYTE));
                file.Read(buffer, 25);
                char* str = (char*)buffer;
                cout<<"length of string : " << strlen(str) << '\n';
                cout<<"string from file: " << str << '\n';
                */

                char str[100];
                file.Read(str, sizeof(str));
                cout << "Data : " << str <<'\n';

                file.Close();
                cout<<"File was closed\n";

                //AfxMessageBox(_T("This is a test message box"));
                system("pause");
            }

            return nRetCode;
        }
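
    The junk is almost certainly the unterminated buffer: CFile::Read returns the number of bytes it actually read, and streaming a char* keeps printing until it happens to hit a NUL byte. A minimal sketch of reading the whole file, sticking to the MFC calls already used above (it needs #include <vector>, reuses the dwLength already computed, and would replace the char str[100] block inside the else branch):

        // one extra byte so the buffer can be NUL-terminated
        std::vector<char> buffer(static_cast<size_t>(dwLength) + 1, '\0');

        UINT bytesRead = file.Read(&buffer[0], static_cast<UINT>(dwLength));
        buffer[bytesRead] = '\0';   // terminate exactly where the data ends

        cout << "Read " << bytesRead << " bytes\n";
        cout << "Data : " << &buffer[0] << '\n';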

    Read the article

  • How do I get the file size of a large (> 4 GB) file?

    - by endeavormac
    How can I get the file size of a file in C when the file size is greater than 4 GB? ftell returns a 4-byte signed long, limiting it to 2 GB. stat has a field of type off_t, which is also 4 bytes (not sure of the sign), so at most it can tell me the size of a 4 GB file. What if the file is larger than 4 GB?
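
    On platforms with large-file support, the usual approach (a sketch; on Windows the equivalents are _stati64 or GetFileSizeEx) is to ask for a 64-bit off_t before including any headers, after which stat reports sizes above 4 GB correctly:

        /* Must appear before any #include to get a 64-bit off_t. */
        #define _FILE_OFFSET_BITS 64

        #include <stdio.h>
        #include <inttypes.h>
        #include <sys/stat.h>

        int main(int argc, char **argv)
        {
            struct stat st;
            if (argc < 2 || stat(argv[1], &st) != 0) {
                perror("stat");
                return 1;
            }
            /* st_size is now 64-bit even for files larger than 4 GB. */
            printf("%" PRIdMAX " bytes\n", (intmax_t)st.st_size);
            return 0;
        }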

    Read the article

  • trying to prune down a list of files

    - by romunov
    I have a list of files and I'm trying to extract all layer1_*.grd files. Is there a way of doing this in one grep expression?

        lof <- c("layer1_1.grd", "layer1_1.gri", "layer1_2.grd", "layer1_2.gri",
                 "layer1_3.grd", "layer1_3.gri", "layer1_4.grd", "layer1_4.gri",
                 "layer1_5.grd", "layer1_5.gri", "layer2_1.grd", "layer2_1.gri",
                 "layer2_2.grd", "layer2_2.gri", "layer2_3.grd", "layer2_3.gri",
                 "layer2_4.grd", "layer2_4.gri", "layer2_5.grd", "layer2_5.gri",
                 "layer3_1.grd", "layer3_1.gri", "layer3_2.grd", "layer3_2.gri",
                 "layer3_3.grd", "layer3_3.gri", "layer3_4.grd", "layer3_4.gri",
                 "layer3_5.grd", "layer3_5.gri", "layer4_1.grd", "layer4_1.gri",
                 "layer4_2.grd", "layer4_2.gri", "layer4_3.grd", "layer4_3.gri",
                 "layer4_4.grd", "layer4_4.gri", "layer4_5.grd", "layer4_5.gri")

    I tried doing this in two steps:

        list.of.files <- list.files(pattern = c("1_"))
        list.of.files <- list.of.files[grep(".grd", list.of.files)]

    Can someone enlighten me how to do this with grep in one step? I naively tried passing list() and c() to grep but, as you can imagine, it doesn't work.

        list.of.files <- list.files()
        list.of.files <- list.of.files[grep(list("1_", ".grd"), list.of.files)]
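
    A single regular expression can encode both conditions, so one grep call is enough (a sketch; the pattern anchors on the layer1_ prefix and the .grd extension):

        lof[grep("^layer1_.*\\.grd$", lof)]
        # or equivalently
        grep("^layer1_.*\\.grd$", lof, value = TRUE)
        # or straight from the directory listing
        list.files(pattern = "^layer1_.*\\.grd$")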

    Read the article
