Search Results

Search found 40479 results on 1620 pages for 'binary files'.

Page 17/1620 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • Conflicts with MS Office temporary files when using Offline Folders on Vista

    - by Tambet
    We are using the Offline Folders feature of Windows Vista to make files on network shares available when out of the office. Mostly it works, but every time I sync I get a lot of errors like this: D500E7B8.tmp - A file was deleted on this computer and changed on the server while this computer was offline. There are hundreds of them. I always select all of them and choose the resolution "Delete from both locations". But what is causing this, and how can I avoid it? I suspect the reason is that we are using Debian and Samba (3.4.7) on our file server. I've been looking for a Samba option that would cure this, but with no success. I learned that the probable cause is that both Word and Excel use a specific pattern when changing files: they never modify the original file, but instead always write a new temporary file and rename it to the original file name when you click Save. This is documented here: http://support.microsoft.com/kb/211632/?FR=1.

    Read the article

  • Unable to delete files in Temporary Internet Files folder

    - by Johnny
    I'm on Win7. I have a large number of large .bin files, totaling 183 GB, in my Temporary Internet Files folder. They all seem to come from video sharing sites like YouTube. The files are invisible in Explorer even after enabling the viewing of hidden files; the only way I can see them is by issuing "dir /fs" on the command line. When I try to delete them from the command line, nothing happens. Trying to delete the whole folder from Explorer results in "access denied" because another process is using a file in the folder (IE is not running while I'm doing this). Trying to clear the folder using IE is also unsuccessful. How do I delete these files? And how did they end up there without being deleted by IE?
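
    One thing worth trying (a speculative sketch, not a confirmed fix; the path assumes IE's default Win7 cache location): delete from an elevated command prompt with IE fully closed, matching files regardless of their hidden/system attributes.

        :: make sure no IE process is holding the cache open
        taskkill /f /im iexplore.exe

        :: /a matches files with any attributes (hidden/system included),
        :: /s recurses into the subfolders where IE keeps the cache
        del /a /f /s /q "%LocalAppData%\Microsoft\Windows\Temporary Internet Files\*"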

    Read the article

  • Storing source files outside project file directory in Visual Studio C++ 2009

    - by Skurmedel
    Visual Studio projects assume all files belonging to the project are situated in the same directory as the project file, or in one underneath it. For a particular project (in the non-Visual-Studio sense) this is not what I want. I want to store the MSVC-specific files in another folder, because there might be other ways to build the application as well, for example with SCons. Also, all the stuff MSVC spits out clutters the source directory. Example:

        /source
        /scons
        /msvc   <- here is where I want my MSVC-specific stuff

    I can add the files, in Explorer, to the source directory manually, and then link them in Visual Studio with the project. It's not the end of the world, but it annoys me a bit that Visual Studio tries to dictate the folder structure of my project. I was looking through the schemas for the project files, but realized that this annoying assumption is in the IDE, not in the format of the project files. Does anyone know a neater way to solve this than manually linking files into the project from the source directory?

    Read the article

  • How can I open binary image files? (.img)

    - by Simon Cahill
    I'm a Windows/Mac/Ubuntu and Android user, so I know what I'm talking about when I say: how do I open binary image files (.img)? They just won't open, on any OS... I'm an Android dev, and I'm currently working on a ROM (I also program using Windows), but I need to extract files from .img files. I've converted them to .ext4.img, but they just aren't recognized by Linux (definitely not by Android), by Mac OS, or by Windows. In other words, I can't open, extract, or mount them. Can anyone help me? I'm kinda confused...
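
    One possible route on Linux (a sketch that assumes the files are Android system images in the sparse ext4 format, which the question doesn't confirm): convert them with simg2img from AOSP's ext4_utils, then loop-mount the raw result.

        # convert Android's sparse image format into a raw ext4 image
        simg2img system.img system.raw.img

        # loop-mount the raw image read-only and copy files out of it
        sudo mkdir -p /mnt/img
        sudo mount -o loop,ro system.raw.img /mnt/img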

    Read the article

  • Expire Files In A Folder: Delete Files After x Days

    - by Brett G
    I'm looking to make a "Drop Folder" on a Windows shared drive that is accessible to everyone. I'd like files to be deleted automagically if they sit in the folder for more than X days. However, all the methods I've found to do this use the last-modified date, last-access time, or creation date of a file. I'm trying to make this a folder that a user can drop files into to share with somebody, so if someone copies or moves files into it, I'd like the clock to start ticking at that point. But the last-modified date and creation date of a file are not updated unless someone actually modifies it, and the last-access time is updated too frequently: it seems that just opening a directory in Windows Explorer will update it. Anyone know of a solution to this? I'm thinking that cataloging the hashes of the files on a daily basis and then expiring files whose hashes are older than a certain date might work... but taking hashes of files can be time-consuming. Any ideas would be greatly appreciated! Note: I've already looked at quite a lot of answers on here - File Server Resource Manager, PowerShell scripts, batch scripts, etc. They still use the last-access time, last-modified time, or creation time, which, as described, do not fit the above needs.
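
    A cheaper cousin of the hashing idea (a sketch under my own assumptions; the share path and X are hypothetical): record the first time each file name is seen in a small catalog, and delete files once their first-seen time is more than X days old. Run daily as a scheduled task, this starts each file's clock at drop time regardless of its timestamps.

        import json, os, time

        DROP = r"\\server\share\DropFolder"          # hypothetical share path
        CATALOG = os.path.join(DROP, ".first_seen.json")
        MAX_AGE = 14 * 86400                         # X = 14 days, in seconds

        seen = {}
        if os.path.exists(CATALOG):
            with open(CATALOG) as f:
                seen = json.load(f)

        now = time.time()
        for name in os.listdir(DROP):
            path = os.path.join(DROP, name)
            if not os.path.isfile(path) or name == os.path.basename(CATALOG):
                continue
            first = seen.setdefault(name, now)       # clock starts when first seen
            if now - first > MAX_AGE:
                os.remove(path)
                del seen[name]

        # forget catalog entries for files their owners already removed
        seen = {n: t for n, t in seen.items()
                if os.path.exists(os.path.join(DROP, n))}
        with open(CATALOG, "w") as f:
            json.dump(seen, f)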

    Read the article

  • undelete big files - mission impossible?

    - by johnrembo
    Hi, I've accidentally deleted an outlook.pst file (6.7 GB) while there was only 400 MB of free space left on the primary NTFS partition (WinXP). I've tried several recovery tools to get this file back. "Ontrack Easy Recovery Pro" found 0 pst files (complete scan mode), while "Recover My Files" in sector scan mode found 5 pst's, but 4 of them were between 3 and 28 KB, and the 5th was 1 GB. I managed to successfully recover the 1 GB pst file, which turned out to be a year-old copy (the one in use after the latest Windows reinstall). Now I'm frustrated and confused. Why was a year-old file successfully recovered if there was only 400 MB left on the primary partition? Where has the 6.7 GB file gone? I did some reading (i.e. here), and it seems there's almost no chance of retrieving the file I'm looking for. But wait - none of the recovery tools I've used found a zero-sized pst file; moreover, if the file were corrupted due to fragmentation, we could use scanpst.exe to fix some errors and survive with 10 or 100 emails missing - whatever. Could you please recommend some more sophisticated recovery tools for this particular task? Appreciate your help - thanks in advance.

    Read the article

  • Disk fragmentation when dealing with many small files

    - by Zorlack
    On a daily basis we generate about 3.4 million small JPEG files, and we delete about 3.4 million 90-day-old images. To date, we've dealt with this content by storing the images in a hierarchical manner, something like this: /Year/Month/Day/Source/. This hierarchy allows us to effectively delete a day's worth of content across all sources. The files are stored on a Windows 2003 server connected to a 14-disk SATA RAID6. We've started having significant performance issues when writing to and reading from the disks. This may be due to the performance of the hardware, but I suspect that disk fragmentation may be a culprit as well. Some people have recommended storing the data in a database, but I've been hesitant to do this. Another thought was to use some sort of container file, like a VHD or something. Does anyone have any advice for mitigating this kind of fragmentation? Additional info: the average file size is 8-14 KB. Format information from fsutil:

        NTFS Volume Serial Number :       0x2ae2ea00e2e9d05d
        Version :                         3.1
        Number Sectors :                  0x00000001e847ffff
        Total Clusters :                  0x000000003d08ffff
        Free Clusters :                   0x000000001c1a4df0
        Total Reserved :                  0x0000000000000000
        Bytes Per Sector :                512
        Bytes Per Cluster :               4096
        Bytes Per FileRecord Segment :    1024
        Clusters Per FileRecord Segment : 0
        Mft Valid Data Length :           0x000000208f020000
        Mft Start Lcn :                   0x00000000000c0000
        Mft2 Start Lcn :                  0x000000001e847fff
        Mft Zone Start :                  0x0000000002163b20
        Mft Zone End :                    0x0000000007ad2000

    Read the article

  • How to generate and encode (for use in GA), random, strict, binary rooted trees with N leaves?

    - by Peter Simon
    First, I am an engineer, not a computer scientist, so I apologize in advance for any misuse of nomenclature and general ignorance of CS background. Here is the motivational background for my question: I am contemplating writing a genetic algorithm optimizer to aid in designing a power divider network (also called a beam forming network, or BFN for short). The BFN is intended to distribute power to each of N radiating elements in an array of antennas; the fraction of the total input power to be delivered to each radiating element has been specified. Topologically speaking, a BFN is a strictly binary, rooted tree. Each of the (N-1) interior nodes of the tree represents the input port of an unequal, binary power splitter, and the N leaves of the tree are the power divider outputs. Given a particular power divider topology, one is still free to map the power divider outputs to the array inputs in an arbitrary order; there are N! such permutations of the outputs. There are several considerations in choosing the desired ordering: 1) the power ratio for each binary coupler should be within a specified range of values, and 2) the ordering should be chosen to simplify the mechanical routing of the transmission lines connecting the power divider. The number of outputs N of the BFN may range from, say, 6 to 22. I have already written a genetic algorithm optimizer that, given a particular BFN topology and desired array input power distribution, will search through the N! permutations of the BFN outputs to generate a design with compliant power ratios and good mechanical routing. I would now like to generalize my program to automatically generate and search through the space of possible BFN topologies. As I understand it, for N outputs (leaves of the binary tree), there are $C_{N-1}$ different topologies that can be constructed, where $C_N$ is the Catalan number. I would like to know how to encode an arbitrary tree having N leaves in a way that is consistent with a chromosomal description for use in a genetic algorithm. Also associated with this is the need to generate random instances for filling the initial population, and to implement crossover and mutation operators for this type of chromosome. Any suggestions will be welcome. Please minimize the amount of CS lingo in your reply, since I am not likely to be acquainted with it. Thanks in advance, Peter
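
    One workable genotype (a sketch under my own assumptions, not from the article): grow the tree by repeated leaf splits, letting gene k record which of the k+1 current leaves gets split. The chromosome then has fixed length N-1, every integer vector decodes to a valid strictly binary tree with N leaves, and random generation, mutation, and one-point crossover all stay in-range by construction. (Note this encoding can reach the same topology through different chromosomes, so the genotype space is larger than the $C_{N-1}$ distinct topologies.)

        import random

        class Node:
            def __init__(self):
                self.left = None
                self.right = None

        def decode(chromosome):
            """Replay a sequence of leaf splits; gene k picks one of the
            k+1 leaves present at step k (the modulo keeps any int valid)."""
            root = Node()
            leaves = [root]
            for k, gene in enumerate(chromosome):
                leaf = leaves.pop(gene % (k + 1))
                leaf.left, leaf.right = Node(), Node()
                leaves.extend([leaf.left, leaf.right])
            return root, leaves          # the leaves map to the N BFN outputs

        def random_chromosome(n_leaves):
            # n_leaves - 1 splits yield a strictly binary tree with n_leaves leaves
            return [random.randint(0, k) for k in range(n_leaves - 1)]

        def mutate(chromosome):
            k = random.randrange(len(chromosome))
            chromosome[k] = random.randint(0, k)

    The splitter power ratios and the output-to-element permutation already being optimized could ride along as separate genes on the same chromosome.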

    Read the article

  • How to create a complete binary tree of height 'h' using Python?

    - by Jack
    Here is the node structure:

        class Node:
            def __init__(self, data):
                # initializes the data members
                self.left = None
                self.right = None
                self.parent = None
                self.data = data

    Complete binary tree definition: "A binary tree in which every level, except possibly the deepest, is completely filled. At depth n, the height of the tree, all nodes must be as far left as possible." -- http://www.itl.nist.gov/div897/sqg/dads/HTML/completeBinaryTree.html I am looking for an efficient algorithm.
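
    One efficient construction (a sketch of my own, not from the article), reusing the Node class above: build all n nodes in an array and wire children with the heap-index trick, where the children of node i are nodes 2i+1 and 2i+2. This runs in O(n), and any n between 2^h and 2^(h+1)-1 yields a complete tree of height h.

        def complete_tree(n):
            """Build a complete binary tree with n nodes; the children of
            node i are nodes 2*i+1 and 2*i+2, as in a binary heap."""
            if n <= 0:
                return None
            nodes = [Node(i) for i in range(n)]
            for i, node in enumerate(nodes):
                if 2 * i + 1 < n:
                    node.left = nodes[2 * i + 1]
                    node.left.parent = node
                if 2 * i + 2 < n:
                    node.right = nodes[2 * i + 2]
                    node.right.parent = node
            return nodes[0]

        h = 4
        root = complete_tree(2 ** h)   # the minimum node count for height 4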

    Read the article

  • Can I use a binary literal in C or C++?

    - by hamza
    I need to work with a binary number. I tried writing const x = 00010000; but it didn't work. I know that I can use a hexadecimal number that has the same value as 00010000, but I want to know if there is a type in C++ for binary numbers, and if there isn't, is there another solution to my problem?
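
    A note the question invites (my addition, not from the article): there is no separate binary type, only binary literals. C++14 added the 0b prefix; before that, hex constants or shifts were the usual workarounds. Also, a leading zero such as 00010000 makes the constant octal in C and C++, which is probably not what was intended.

        #include <cstdint>

        // C++14 and later: a true binary literal
        constexpr std::uint8_t mask_binary = 0b00010000;   // == 16

        // portable pre-C++14 spellings of the same bit pattern
        constexpr std::uint8_t mask_hex   = 0x10;
        constexpr std::uint8_t mask_shift = 1u << 4;       // bit 4 set

        static_assert(mask_binary == mask_hex && mask_hex == mask_shift,
                      "all three name the same value");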

    Read the article

  • Part 7: EBS Modifications and Flagged Files in R12

    - by volker.eckardt(at)oracle.com
    Let me, building on my previous blog, explain the flagged-files procedure a bit better, illustrated with screenshots. Flagged files are a concept within Oracle E-Business Suite (EBS) Release 12: you flag a standard deployment file - say a Forms file, a package, or a Java class file - and when you run the patch analysis, the list of flagged files is checked; if one of these files would be patched, the analysis report tells you. Note: this functionality is also available in Release 11, where it is implemented and known as "applcust.txt".

    You can flag as many files as you want, in whatever relationship they have to your customizations. In addition to the flag itself you can add a comment; use this comment to point to your customization reference (here XXAR_RPT_066 or XXAP_CUST_030). Consider the following two cases:

    You have created your own report, based on a standard report. In this case you flag the report file itself and the key views it uses. When a patch updates one of these files, you are informed and can initiate a proper review and test. (Example: the first line, for ARXCTA.rdf.)

    You have created an extensive personalization, and because it is business critical you want to be informed if the page definition gets updated. In this case you register the PG.xml file as a flagged file. (Example: the second line below, for CreateExtBankAcctPG.xml.)

    The menu path to register flagged files is: (R) System Administrator > (M) Oracle Applications Manager > Site Map > Maintenance > Register Flagged Files.

    Your DBA should now run the patch analysis every time he is going to apply a new patch: (R) System Administrator > (M) Oracle Applications Manager > Patch Wizard > Task "Recommend/Analyze Patches".

    The screenshot above shows the impact summary. For this blog entry the number "2", titled "Flagged Files Changed", is our focus. When you click the "2" you get a screen similar to the first one in this blog, showing exactly which files will be patched if you continue and apply this patch in this environment right now. Note: it is also shown that just 20% of all patch files would be applied. This situation might differ if your environments are on a different patch level, and the customization impact might then differ as well.

    The flagging step can be done directly in Oracle Applications Manager; our developers are responsible for it. To transport such a flag and comment we use an FNDLOAD script. It is suggested to put the flagged-files data file directly into your CEMLI patch, so that the flagged-files registration is executed at the same time the patch is applied.

    Process steps:

    Developer:
    - Builds the CEMLI
    - Reviews the code and identifies the key standard objects referenced
    - Determines the standard object files and flags them
    - Creates the FNDLOAD file and adds it to the CEMLI patch

    DBA:
    - Executes the patch analysis for every new Oracle standard patch in a representative environment
    - Checks and retrieves the flagged files and comments
    - Sends the flagged-file list back to the development team for analysis and retest

    Developer:
    - Analyzes, updates, and retests the affected CEMLIs

    Prerequisite: the patch analysis has to be executed in an environment where the flagged files have been registered. (If you run the patch analysis in a vanilla or outdated environment compared to your PROD, the analysis will not be very helpful!)

    When to start with flagged files? Start right now - it is an investment that improves production stability and helps fulfil your SLA.

    Summary: Flagged files are a very helpful EBS R12 technique when analyzing patches. Implement a procedure within your development process to maintain such flags, and let the DBA run the patch analysis in an environment with a patch and customization level similar to your current production.

    Related links: EBS Patching Procedures - Chapter 2-13 - Registered Flagged Files

    Read the article

  • Deserialization error using Runtime Serialization with the Binary Formatter

    - by Lily
    When I deserialize a hierarchy I get the following error: "The input stream is not a valid binary format. The starting contents (in bytes) are: 20-01-20-20-20-FF-FF-FF-FF-01-20-20-20-20-20-20-20 ..." Any help, please? Extra info:

        public void Serialize(ISyntacticNode person)
        {
            Stream stream = File.Open(fileName, FileMode.OpenOrCreate);
            try
            {
                BinaryFormatter binary = new BinaryFormatter();
                pList.Add(person);
                binary.Serialize(stream, pList);
                stream.Close();
            }
            catch
            {
                stream.Close();
            }
        }

        public List<ISyntacticNode> Deserialize()
        {
            Stream stream = File.Open(fileName, FileMode.OpenOrCreate);
            BinaryFormatter binary = new BinaryFormatter();
            try
            {
                pList = (List<ISyntacticNode>)binary.Deserialize(stream);
                binary.Serialize(stream, pList);
                stream.Close();
            }
            catch
            {
                pList = new List<ISyntacticNode>();
                binary.Serialize(stream, pList);
                stream.Close();
            }
            return pList;
        }

    I am serializing a hierarchy of type Proxem.Antelope.Parsing.ISyntacticNode. Using a different instance, I have also gotten this error: "System.Runtime.Serialization.SerializationException: Binary stream '116' does not contain a valid BinaryHeader. Possible causes are invalid stream or object version change between serialization and deserialization." How can I avoid these errors?
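
    A guess at the cause (my reading, not confirmed by the article): FileMode.OpenOrCreate never truncates, so when a shorter payload is written over a longer one, stale bytes survive at the end of the file and the formatter later reads garbage; serializing into the same stream right after deserializing compounds this. A sketch that sidesteps both, keeping the question's pList and fileName fields:

        public void Serialize(ISyntacticNode person)
        {
            pList.Add(person);
            // FileMode.Create truncates, so no stale bytes survive a shorter payload
            using (Stream stream = File.Open(fileName, FileMode.Create))
            {
                new BinaryFormatter().Serialize(stream, pList);
            }
        }

        public List<ISyntacticNode> Deserialize()
        {
            if (!File.Exists(fileName))
                return pList = new List<ISyntacticNode>();

            using (Stream stream = File.Open(fileName, FileMode.Open, FileAccess.Read))
            {
                pList = (List<ISyntacticNode>)new BinaryFormatter().Deserialize(stream);
            }
            return pList;
        }

    The second exception ("object version change between serialization and deserialization") typically means the serialized types changed between runs, which a file-handling fix alone will not cure.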

    Read the article

  • 'Binary XML' for game data?

    - by bluescrn
    I'm working on a level editing tool that saves its data as XML. This is ideal during development, as it's painless to make small changes to the data format, and it works nicely with tree-like data. The downside, though, is that the XML files are rather bloated, mostly due to duplication of tag and attribute names, and due to numeric data taking significantly more space than native datatypes. A small level could easily end up as 1 MB+. I want to get these sizes down significantly, especially if the system is to be used for a game on the iPhone or other devices with relatively limited memory.

    The optimal solution, for memory and performance, would be to convert the XML to a binary level format. But I don't want to do that; I want to keep the format fairly flexible. XML makes it very easy to add new attributes to objects and to give them a default value when an old version of the data is loaded. So I want to keep the hierarchy of nodes, with attributes as name-value pairs, but store it in a more compact format - one that removes the massive duplication of tag/attribute names, and perhaps gives attributes native types, so that, for example, floating-point data is stored as 4 bytes per float rather than as a text string.

    Google/Wikipedia reveal that 'binary XML' is hardly a new problem - it's been solved a number of times already. Has anyone here got experience with any of the existing systems/standards? Are any of them ideal for games use, with a free, lightweight, cross-platform parser/loader library (C/C++) available? Or should I reinvent this wheel myself? Or am I better off forgetting the ideal, compressing my raw .xml data (it should pack well with zip-like compression), and just taking the memory/performance hit on load?
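
    To make the intern-the-names idea concrete, here is a sketch of such a packer (my own illustration, in Python for brevity; the node layout and header are invented for the example): tag and attribute names are stored once in a string table, and each node stores only indices into that table plus natively typed values.

        import struct

        def pack_doc(root):
            names = {}                            # name -> string-table index
            def intern(s):
                return names.setdefault(s, len(names))

            body = bytearray()
            def pack_node(node):                  # node: (tag, attrs, children)
                tag, attrs, children = node
                body.extend(struct.pack("<HH", intern(tag), len(attrs)))
                for key, value in attrs.items():  # floats only, for brevity
                    body.extend(struct.pack("<Hf", intern(key), value))
                body.extend(struct.pack("<H", len(children)))
                for child in children:
                    pack_node(child)

            pack_node(root)
            table = b"".join(s.encode() + b"\0" for s in names)
            return struct.pack("<I", len(table)) + table + bytes(body)

        level = ("level", {"gravity": -9.8},
                 [("spawn", {"x": 1.0, "y": 2.0}, [])])
        blob = pack_doc(level)                    # each name stored exactly once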

    Read the article

  • Shell not finding binary when attempting to execute it (it's _definitely_ there)

    - by eegg
    I have a specific set of binaries installed at ~/.GutenMark/binary/<binaries...>. These were previously working correctly, but for seemingly no reason, when I attempt to execute them the shell claims not to find them:

        james@anubis:~/.GutenMark/binary$ ls -al
        ...
        -rwxr-xr-x 1 james james 2979036 2009-05-10 13:34 GUItenMark
        ...
        -rwxrwxrwx 1 james james   76952 2009-05-10 13:34 GutenMark
        ...
        -rwxr-xr-x 1 james james   10156 2009-05-10 13:34 GutenSplit
        ...
        james@anubis:~/.GutenMark/binary$ ./GutenMark
        bash: ./GutenMark: No such file or directory
        james@anubis:~/.GutenMark/binary$

    I've tried to isolate the cause of this, with no success. The same happens with zsh, bash, and sh (each giving its appropriate "file not found" error - this is definitely not strange output from the binary itself). The same happens as user james or as root. Nor is it directory specific: if I move the whole directory installation, or just a single binary, anywhere else, the same happens when attempting to execute it. The same even happens when I put the directory in $PATH and just execute "GutenMark". It also happens when I execute it from a script (I've tried Python's commands module, though this appears to just call sh). The problem appears to be specific to the binaries themselves, yet they never actually get executed. Any ideas?
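
    A hedged diagnostic (this cause is an assumption, not something the question confirms): "No such file or directory" for an executable that plainly exists is the classic symptom of a missing ELF interpreter - for example, a 32-bit binary on a 64-bit system without the 32-bit runtime libraries. A few commands that narrow it down:

        # what architecture is the binary, and which loader does it request?
        file ./GutenMark
        readelf -l ./GutenMark | grep interpreter

        # does that interpreter actually exist, and do its libraries resolve?
        ls -l /lib/ld-linux.so.2
        ldd ./GutenMark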

    Read the article

  • VMWare Server - Writing files to virtual hard drive performance

    - by Ardman
    We have just moved our infrastructure from physical servers to virtual machines. Everything is running great and we are happy with the result of the move. We have identified one problem, and that is reading/writing performance. We have an application that compiles files and writes to disk. This is considerably slower on the new virtual machines compared to the physical machines. Is there a performance bottleneck when writing to a virtual hard drive compared to a physical hard drive?

    Read the article

  • Where can I find WebSphere configuration files?

    - by Nicholas Key
    Hi there, I would like to know where the WebSphere configuration details are saved - specifically, the configuration details that are shown in the Administrative Console (from the web) or from the console using wsadmin. Some examples would be: Java and Process Management (class loader, process definition, process execution) and Container Settings (session management, SIP container settings, web container settings, portlet container settings). Are there XML files that persist these configuration details? Nicholas

    Read the article

  • Apache - Serving static files from different subdomain + machine

    - by rubayeet
    Here's the scenario: a site is running on the domain www.someserver.com, and I'm going to host subdomain.someserver.com on my machine. Let's say all the image files are under the directory 'img'. I don't want to copy all their images to my machine, so what should the Apache directive(s) be that map a request for an image like http://subdomain.someserver.com/img/image.png to http://www.someserver.com/img/image.png?
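
    Two common ways to do this (a hedged sketch; the DocumentRoot is hypothetical): a redirect via mod_alias, which the browser sees as a new URL, or a transparent fetch via mod_proxy, which it doesn't.

        <VirtualHost *:80>
            ServerName subdomain.someserver.com
            DocumentRoot /var/www/subdomain

            # Option 1: redirect image requests to the main host (mod_alias)
            RedirectMatch ^/img/(.*)$ http://www.someserver.com/img/$1

            # Option 2: proxy them transparently instead (requires mod_proxy)
            # ProxyPass        /img/ http://www.someserver.com/img/
            # ProxyPassReverse /img/ http://www.someserver.com/img/
        </VirtualHost>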

    Read the article

< Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >