Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • How should I compile boost library in a small project?

    - by Vincenzo
    I have a small project where I need just part of the Boost library, boost::regex in particular. This is what I've done so far: /include/boost/regex, /include/boost/math, .. (189 dirs, files, etc.); /lib/boost-regex with c_regex_traits.cpp, cpp_regex_traits.cpp, .. (~20 .cpp files); and myprog.cpp. In my Makefile I compile all the boost-regex .cpp files one by one, producing .obj files. Next, I build my project by compiling myprog.cpp together with all those .obj files from /lib/boost/regex. The question is whether I'm doing everything correctly. The size of my output file is rather big (~3.5 MB), while my code is extremely small (10 lines).

    Read the article

  • Migrating from CVS to Mercurial - how to handle cross-repo symbolic links?

    - by NVRAM
    I have a project that is stored in CVS as numerous modules/repositories. In several of the modules the CVS tree has symbolic links to the files in another tree. For example, the internal support tools have links to binary files (DLL, EXE) that are created and stored in the C# module. In all cases, the files are modified only in the module where the files exist and are treated as read-only in the tree where the symbolic link exists. More often than not, the files are pulled to machines running MS Windows, so the use of symbolic links on the developer machine is not an option. My question is this: is there a mechanism in Mercurial that can provide the same capabilities?

    Read the article

  • Restoring using SyncBack without profiles

    - by Thomas Matthews
    I backed up my internal hard drive (C:) using SyncBack onto an external (USB) hard drive with maximum compression. I then performed a clean install of Windows Vista onto the computer. I forgot to copy the SyncBack logs before the clean install, and now, whenever I try to restore a directory, the RAR/ZIP files are copied to the system hard drive instead of extracting their contents to the hard drive. Also, SyncBack is not traversing the folders during the Restore process. How can I tell SyncBack to expand the compressed files? I am running the freeware version of SyncBack. I have to create new log files (unless SyncBack put them somewhere on the external drive). My alternative is to write a program that traverses the folders on the external drive and extracts files from the RAR/ZIP files. I am using Windows Vista, Service Pack 2, and the data size prior to backup was about 200 GB. (The backup process took over 72 hours due to "hiccups".)
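
    A minimal sketch of the fallback approach mentioned above (walking the external drive and extracting the archives); the source and destination paths are placeholders, and RAR archives would need an external extractor, which this sketch only flags:

        import os
        import zipfile

        SOURCE_ROOT = "E:\\backup"    # external drive (placeholder)
        DEST_ROOT = "C:\\restore"     # where extracted contents should go (placeholder)

        for root, dirs, files in os.walk(SOURCE_ROOT):
            for name in files:
                src = os.path.join(root, name)
                # Mirror the backup's folder layout under DEST_ROOT.
                rel = os.path.relpath(root, SOURCE_ROOT)
                dest = os.path.join(DEST_ROOT, rel)
                os.makedirs(dest, exist_ok=True)
                if name.lower().endswith(".zip"):
                    with zipfile.ZipFile(src) as zf:
                        zf.extractall(dest)
                elif name.lower().endswith(".rar"):
                    # zipfile cannot read RAR; an external tool is needed here.
                    print("RAR archive needs an external extractor:", src)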

    Read the article

  • Is a yobibit really a meaningful unit? [closed]

    - by Joe
    Wikipedia helpfully explains: The yobibit is a multiple of the bit, a unit of digital information storage, prefixed by the standards-based multiplier yobi (symbol Yi), a binary prefix meaning 2^80. The unit symbol of the yobibit is Yibit or Yib.[1][2] 1 yobibit = 2^80 bits = 1208925819614629174706176 bits = 1024 zebibits[3] The zebi and yobi prefixes were originally not part of the system of binary prefixes, but were added by the International Electrotechnical Commission in August 2005.[4] Now, what in the world actually takes up 1,208,925,819,614,629,174,706,176 bits? The information content of the known universe? I guess this is forward thinking -- maybe astrophysics or nanotech, or even DNA analysis really will require these orders of magnitude. How far off do you think all this is? Are these really meaningful units?

    Read the article

  • Do Nvidia's drivers work in 12.10?

    - by Pratyush Nalam
    I recently upgraded to 12.10 and was just checking for proprietary drivers to be installed. I found this in Software Sources. When I enabled the NVIDIA binary Xorg driver, Unity disappears along with the launcher and everything; I just have the wallpaper. I did manage to open the terminal, open Software Sources and select the nouveau display driver, which fixed it. The question is how to enable the NVIDIA binary Xorg driver and have everything working. Hardware: Apple MacBook Pro 9,1 Mid 2012 15 inch non-retina, NVIDIA GeForce GT650M. EDIT: I tried the solution here, and now, when I boot into Ubuntu, I get a blinking cursor. That's all! No tty1, nothing. Help!

    Read the article

  • SQL: simultaneous aggregate from two tables

    - by Ash
    I have two tables: a Files table, which includes the file type, and a File Properties table, which references the file table via a foreign key. Sample Files table:

        | id | name  | type |
        ----------------------
        | 1  | file1 | zip  |
        | 2  | file2 | zip  |
        | 3  | file3 | zip  |
        | 4  | file4 | jpg  |

    And the Properties table:

        | file_id | property |
        -----------------------
        | 1       | x        |
        | 2       | x        |

    I want to make a query, which shows the count of each file type, and how many files of that type have a property. So in the example, the result would be:

        | type | filecount | prop count |
        ----------------------------------
        | zip  | 3         | 2          |
        | jpg  | 1         | 0          |

    I could accomplish this by:

        select f.type,
               (select count(id) from files where type = f.type),
               count(fp.id)
        from files as f, file_properties as fp
        where f.id = fp.file_id
        group by f.type;

    But this seems very suboptimal and is very slow. Any better way to do this?
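
    A sketch of the single-pass alternative: a LEFT JOIN keeps file types that have no properties, and counting the joined property rows gives the second column. Table and column names follow the question; sqlite3 here only stands in for whatever database is actually in use:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE files (id INTEGER PRIMARY KEY, name TEXT, type TEXT);
            CREATE TABLE file_properties (file_id INTEGER, property TEXT);
            INSERT INTO files VALUES (1,'file1','zip'),(2,'file2','zip'),
                                     (3,'file3','zip'),(4,'file4','jpg');
            INSERT INTO file_properties VALUES (1,'x'),(2,'x');
        """)

        # One pass over files; rows without properties join to NULL, which
        # COUNT(DISTINCT fp.file_id) simply ignores.
        query = """
            SELECT f.type,
                   COUNT(DISTINCT f.id)       AS filecount,
                   COUNT(DISTINCT fp.file_id) AS propcount
            FROM files f
            LEFT JOIN file_properties fp ON fp.file_id = f.id
            GROUP BY f.type
        """
        for row in conn.execute(query):
            print(row)   # expect ('zip', 3, 2) and ('jpg', 1, 0)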

    Read the article

  • Determining whether a file is a duplicate

    - by Todd R
    Is there a reliable way to determine whether or not two files are the same? For example, two files with the same size and type may or may not be the same binarily (yeah, I know it's not really a word). I assume that comparing one or two checksums of the files will help, but I wonder: How reliable are checksums at determining whether two files are different; what are the chances of two different files having the same checksum? Would reliability increase by applying additional checksum comparisons? Which checksum algorithm(s) would be the most efficient and/or reliable? Any ideas, suggestions or thoughts are appreciated! P.S. The code for this is being written in Java running on a *nix system, but generic or platform-agnostic input is most helpful.
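
    The question targets Java, but the idea is language-agnostic; a hedged sketch in Python: compare sizes first (free and conclusive when they differ), then a strong hash such as SHA-256, whose accidental collision probability is negligible for this purpose:

        import hashlib
        import os

        def sha256_of(path, chunk_size=1 << 20):
            """Hash the file in chunks so large files never sit in memory."""
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        def probably_same(path_a, path_b):
            # Different sizes can never be byte-identical.
            if os.path.getsize(path_a) != os.path.getsize(path_b):
                return False
            # Matching SHA-256 digests make a false positive astronomically unlikely;
            # a byte-by-byte comparison can follow if absolute certainty is required.
            return sha256_of(path_a) == sha256_of(path_b)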

    Read the article

  • How to cross-compile programs for the Raspberry Pi with gcc?

    - by InkBlend
    I am fond of using gcc to compile small little C and C++ programs on my main computer. However, I also have a Raspberry Pi, and, being a 700-MHz single-core computer, I would prefer to not have to do my development work on it every time I want to create a binary for it. How (for I know that there's a way) do I cross-compile my program for the Raspberry Pi using my x86 laptop? And is there a way that I may compile C(++) programs on the Pi but produce an x86 binary? If it's any help, "The SoC is a Broadcom BCM2835. This contains an ARM1176JZFS, with floating point..." (according to the official Raspberry Pi FAQ).
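
    Not an answer by itself, but a sketch of what the cross-compile step looks like once a toolchain is installed on the x86 machine; the arm-linux-gnueabihf-gcc name is an assumption (the usual prefix for hard-float ARM cross compilers of the kind the Pi needs) and the file names are placeholders:

        import subprocess

        # Hypothetical source and output names: the cross compiler runs on the
        # x86 host but emits an ARM binary, which is then copied to the Pi.
        subprocess.run(
            ["arm-linux-gnueabihf-gcc", "-O2", "-o", "hello-armhf", "hello.c"],
            check=True,
        )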

    Read the article

  • RewriteRule and PHP download counter

    - by rcourtna
    (1) I have a site that serves up MP3 files: http://domain/files/1234567890.mp3 (2) I have a PHP script that tracks file download counts: http://domain/modules/download_counter.php?file=/files/1234567890.mp3 After download_counter.php records the download, it redirects to the original file: Header("Location: $FQDN_url"); (3) I'd like all my public links to be presented as the direct file URLs from (1). I'm trying to use Apache to redirect the requests to download_counter.php: RewriteRule ^files/(.+\.mp3)$ /modules/download_counter.php?file=/files/$1 [L] I'm currently stuck on (3), as it results in a redirect loop, since download_counter.php simply redirects the request back to the original file (rather than streaming the file contents). I'm also motivated to use download_counter.php as-is (without modifying its redirect behaviour). This is because the script is part of a larger CMS module, and I'd like to avoid complicating my upgrade path. Perhaps there is no solution to my problem (other than modifying the download_counter script). WDYT?
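
    For what it's worth, the loop only exists because the counter answers with a redirect back to the rewritten URL; if modifying the counter ever becomes acceptable, having it serve the file bytes itself removes the loop entirely. The real counter is PHP, so this WSGI-style Python sketch (placeholder paths, hypothetical names) only illustrates that idea:

        import os
        from urllib.parse import parse_qs
        from wsgiref.simple_server import make_server

        FILES_ROOT = "/var/www/files"   # placeholder root holding the MP3s

        def app(environ, start_response):
            qs = parse_qs(environ.get("QUERY_STRING", ""))
            name = os.path.basename(qs.get("file", [""])[0])  # strip any path tricks
            path = os.path.join(FILES_ROOT, name)
            if not os.path.isfile(path):
                start_response("404 Not Found", [("Content-Type", "text/plain")])
                return [b"not found"]
            # ... record the download in the counter's store here ...
            start_response("200 OK", [
                ("Content-Type", "audio/mpeg"),
                ("Content-Length", str(os.path.getsize(path))),
            ])
            return [open(path, "rb").read()]  # serve the bytes, no redirect, no loop

        if __name__ == "__main__":
            make_server("", 8000, app).serve_forever()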

    Read the article

  • Failed to download repository information

    - by Bob Van Elst
    When I click Check, this error message comes up, but it does not come up when Update Manager starts automatically; it only appears when I open Update Manager myself. Any ideas on how to fix it? Details: W: GPG error: http://ppa.launchpad.net precise Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY FC8CA6FE7B1FEC7C, W: Failed to fetch http://ppa.launchpad.net/jonabeck/ppa/ubuntu/dists/precise/main/binary-amd64/Packages 404 Not Found, W: Failed to fetch http://ppa.launchpad.net/jonabeck/ppa/ubuntu/dists/precise/main/binary-i386/Packages 404 Not Found, W: Failed to fetch http://ppa.launchpad.net/jonabeck/ppa/ubuntu/dists/precise/main/source/Sources 404 Not Found, E: Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • Will there ever be a version of Java which does not perform Type Erasure?

    - by user63904
    "Type erasure enables Java applications that use generics to maintain binary compatibility with Java libraries and applications that were created before generics." Generics were introduced in Java 1.5, so presumably the statement "applications that were created before generics" is referring to Java 1.4? Given that Java 1.4 entered its End of Life around 2006 and was officially End-of-Life'd around 2008, why is type erasure still being performed in Java 7, etc.? Has the statement now become self-referential, i.e. "Type erasure enables Java applications that use generics to maintain binary compatibility with Java libraries and applications that were created with Java versions that perform type erasure", meaning therefore that there will never be a version of Java that doesn't perform type erasure?

    Read the article

  • Getting a 404 when using the Nexus 7 installer PPA, how do I fix this? [duplicate]

    - by Vitaliy
    This question already has an answer here: How can I fix a 404 Error when using a PPA? 2 answers. Ubuntu 13.10. sudo add-apt-repository ppa:ubuntu-nexus7/ubuntu-nexus7-installer completes OK, but sudo apt-get update gives: W: Failed to fetch http://ppa.launchpad.net/ubuntu-nexus7/ubuntu-nexus7-installer/ubuntu/dists/saucy/main/binary-amd64/Packages 404 Not Found W: Failed to fetch http://ppa.launchpad.net/ubuntu-nexus7/ubuntu-nexus7-installer/ubuntu/dists/saucy/main/binary-i386/Packages 404 Not Found Thanks for the answer.

    Read the article

  • 11.10 - Update Manager Not working

    - by Mattlinux1
    W:Failed to fetch cdrom://Ubuntu 11.10 Oneiric Ocelot - Release i386 (20111012)/dists/oneiric/main/binary-i386/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs , W:Failed to fetch cdrom://Ubuntu 11.10 Oneiric Ocelot - Release i386 (20111012)/dists/oneiric/restricted/binary-i386/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs , E:Some index files failed to download. They have been ignored, or old ones used instead. This happens when I hit the Check button, and the updates were working before.

    Read the article

  • How to implement a B+ tree for file systems?

    - by user312544
    I have a text file which contains some info on the extents of all the files in the file system, like below: C:\Program Files\abcd.txt 12345 100 23456 200 C:\Program Files\bcde.txt 56789 50 26746 300 ... Now I have another binary which tries to find out about the extents for all the files. Currently I am using a linear search to find extent info for the files in the above-mentioned text file. This is a time-consuming process. Is there a better way of coding this, like implementing a good data structure such as a B-tree? If a B+ tree is used, what key and branching factor do I need to use?
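
    Before committing to a B+ tree, note that if the whole extent list fits in memory, one linear pass that builds a hash map keyed on the path already turns every later lookup into O(1); a sketch, with the file layout treated as an assumption (a path line starts each record, followed by one "block length" pair per line):

        from collections import defaultdict

        def load_extent_index(extents_path):
            """Build {file path: [(block, length), ...]} from the extents text file."""
            index = defaultdict(list)
            current = None
            with open(extents_path) as f:
                for line in f:
                    line = line.strip()
                    if not line:
                        continue
                    parts = line.split()
                    if len(parts) == 2 and all(p.isdigit() for p in parts):
                        if current is not None:
                            index[current].append((int(parts[0]), int(parts[1])))
                    else:
                        current = line   # a new file path starts a new record
            return index

        if __name__ == "__main__":
            # "extents.txt" is a placeholder for the real extents file.
            extents = load_extent_index("extents.txt")
            print(extents.get(r"C:\Program Files\abcd.txt"))

    A B+ tree (or a sorted file searched with bisect) only becomes necessary once the index itself is too large for memory or must support fast range scans on disk.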

    Read the article

  • Ubuntu 14.04 compiz thumbnails preview flicker

    - by HockeyBum
    Ubuntu 14.04.1 x64 (upgraded from 12.04) with Unity desktop Dell Latitude with NVIDIA GF119M (NVS 4200M) - Optimus is DISABLED Using NVIDIA legacy binary driver 304.117 When I set the CCSM option to allow thumbnail previews, the thumbnail 'flickers' for about 1 second before the actual thumbnail image is displayed. I don't think I had this enabled under 12.04 so not sure if it's version-specific. Should I be using a different driver (there are newer ones - Ubu shows NVIDIA binary driver 331.38 and the non-proprietary nouveau, plus NVIDIA has newer ones at the site)? TIA!

    Read the article

  • What's the Python way to recursively set file permissions?

    - by Geoff
    What's the "python way" to recursively set the owner and group to files in a directory? I could just pass a 'chown -R' command to shell, but I feel like I'm missing something obvious. I'm mucking about with this: import os path = "/tmp/foo" for root, dirs, files in os.walk(path): for momo in dirs: os.chown(momo, 502, 20) This seems to work for setting the directory, but fails when applied to files. I suspect the files are not getting the whole path, so chown fails since it can't find the files. The error is: 'OSError: [Errno 2] No such file or directory: 'foo.html' What am I overlooking here?

    Read the article

  • Why Skype doesn't work in Quantal

    - by Lionthinker
    Since updating to Quantal Skype doesn't work and I get this error when updating: W:GPG error: http://ppa.launchpad.net quantal Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY F6A071ABEE5D5BA2 W:Failed to fetch http://archive.canonical.com/commercial-ppa-uploaders/skype/ubuntu/dists/quantal/main/source/Sources 404 Not Found [IP: 91.189.92.191 80] W:Failed to fetch http://archive.canonical.com/commercial-ppa-uploaders/skype/ubuntu/dists/quantal/main/binary-amd64/Packages 404 Not Found [IP: 91.189.92.191 80] W:Failed to fetch http://archive.canonical.com/commercial-ppa-uploaders/skype/ubuntu/dists/quantal/main/binary-i386/Packages 404 Not Found [IP: 91.189.92.191 80] E:Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • Update information outdated, "Failed to fetch cdrom"

    - by user285603
    I have a warning triangle on the top of my screen. When I click on it, it says that my update information is outdated. When I type sudo apt-get update && sudo apt-get upgrade into a terminal, I get this message: W: Failed to fetch cdrom://Ubuntu 14.04 LTS _Trusty Tahr_ - Release i386 (20140417)/dists/trusty/main/binary-i386/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs W: Failed to fetch cdrom://Ubuntu 14.04 LTS _Trusty Tahr_ - Release i386 (20140417)/dists/trusty/restricted/binary-i386/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs E: Some index files failed to download. They have been ignored, or old ones used instead. Any ideas?

    Read the article

  • Django admin proper urls inside listview

    - by hinnye
    Hi, my current target is to give users the chance to download CSV files from the admin site of my application. I successfully managed to create an additional column in the model's list view this way:

        def doc_link(self):
            return '<a href="files/%s">%s</a>' % (self.output, self.output)
        doc_link.allow_tags = True

    This shows the file name and creates the link, but sadly - because it's inside my 'searches' view - it has a URL like my_site/my_app/searches/files/13.csv. This is my problem: I would like to have my files stored in the admin media directory, like this: http://my_site/media/files/13.csv. Does somebody know how to build a URL which points "outside" the model's directory? Maybe somehow tell Django to use the ADMIN_MEDIA_PREFIX in the link? I'd really appreciate any help, thanks!
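
    If the CSVs live under the project's media root, the link can be built as an absolute URL from settings.MEDIA_URL instead of a path relative to the current admin page; a sketch, assuming MEDIA_URL is "/media/" and the files sit in a files/ subfolder of MEDIA_ROOT:

        from django.conf import settings

        def doc_link(self):
            # "/media/files/13.csv" is resolved by the media handler, not
            # relative to the admin changelist URL.
            url = "%sfiles/%s" % (settings.MEDIA_URL, self.output)
            return '<a href="%s">%s</a>' % (url, self.output)
        doc_link.allow_tags = True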

    Read the article

  • Changing Wallpaper on 12.04 Login Screen

    - by userIsAMonkey
    I'm using this link but it doesn't seem to work on 12.04; the terminal message is below. Are there other software/tips for changing the login screen? I'm also using this link but it seems outdated. Failed to fetch http://ppa.launchpad.net/claudiocn/slm/ubuntu/dists/precise/main/source/Sources 404 Not Found W: Failed to fetch http://ppa.launchpad.net/claudiocn/slm/ubuntu/dists/precise/main/binary-amd64/Packages 404 Not Found W: Failed to fetch http://ppa.launchpad.net/claudiocn/slm/ubuntu/dists/precise/main/binary-i386/Packages 404 Not Found E: Some index files failed to download. They have been ignored, or old ones used instead. law@ubuntu:~$ sudo apt-get install simple-lightdm-manager Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package simple-lightdm-manager

    Read the article

  • A tool to determine jar dependencies based on existing code?

    - by geoffeg
    Is there a tool that can determine .jar dependencies given a directory of .jar files and a separate directory of Java source code? I need to generate Eclipse .classpath files based on an existing code base that doesn't have any dependencies defined. To be more specific, I've been given a large codebase consisting of a dozen or so J2EE-style projects and a single directory of jar files. My client uses a custom development and build framework that is just too arcane for me to use and get any real work done. The projects do not have any information about their dependencies, either between projects or to jar libraries. I would expect this tool would have to spin through each jar file, indexing the classes available in that file, and then go through each file in the project source code tree and match up the dependencies, possibly writing out a .classpath file with the required jar files. I realize this is a rather simplistic view of the operation, as duplicate classes among the jar files and such might make things more difficult.
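
    A rough sketch of how such a tool could work, along the lines described above: list the classes each jar provides with zipfile, scan the source tree for import statements, and map imports back to the jars that satisfy them. Wildcard imports, duplicate classes across jars and reflection are deliberately ignored here:

        import os
        import re
        import zipfile

        def classes_in_jar(jar_path):
            """Return the fully qualified class names packaged in a jar."""
            with zipfile.ZipFile(jar_path) as jar:
                return {
                    name[:-len(".class")].replace("/", ".")
                    for name in jar.namelist()
                    if name.endswith(".class") and "$" not in name
                }

        def imports_in_source(src_root):
            """Collect the class names imported anywhere under a source tree."""
            pattern = re.compile(r"^\s*import\s+([\w.]+)\s*;", re.MULTILINE)
            found = set()
            for root, _dirs, files in os.walk(src_root):
                for name in files:
                    if name.endswith(".java"):
                        with open(os.path.join(root, name),
                                  encoding="utf-8", errors="replace") as f:
                            found.update(pattern.findall(f.read()))
            return found

        def required_jars(jar_dir, src_root):
            needed = imports_in_source(src_root)
            hits = {}
            for entry in os.listdir(jar_dir):
                if entry.endswith(".jar"):
                    provided = classes_in_jar(os.path.join(jar_dir, entry))
                    if provided & needed:
                        hits[entry] = sorted(provided & needed)
            return hits   # jar file name -> the imports it satisfies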

    Read the article

  • Reverse engineering a custom data file

    - by kerchingo
    At my place of work we have a legacy document management system that for various reasons is now unsupported by the developers. I have been asked to look into extracting the documents contained in this system to eventually be imported into a new 3rd party system. From tracing and process monitoring I have determined that the document images (mainly tiff files) are stored in a number of 1.5GB files. These files seem to be read from a specific offset and then written to a tmp file that is then served via a web app to the client, and then deleted. I guess I am looking for suggestions as to how I can inspect these large files that contain the tiff images, and eventually extract and write them to individual files.
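
    TIFF files start with a recognizable four-byte signature (II*\0 little-endian or MM\0* big-endian), so one blunt but often workable first pass is to scan the 1.5GB containers for those signatures and treat each hit as a candidate image start; a sketch, with the caveat that the legacy system's own index, if it can be decoded, would give exact offsets and lengths instead:

        TIFF_MAGICS = (b"II*\x00", b"MM\x00*")

        def tiff_offsets(container_path, chunk_size=1 << 20):
            """Return byte offsets where a TIFF header signature appears."""
            offsets = []
            pos = 0
            tail = b""
            with open(container_path, "rb") as f:
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    data = tail + chunk
                    for magic in TIFF_MAGICS:
                        i = data.find(magic)
                        while i != -1:
                            offsets.append(pos - len(tail) + i)
                            i = data.find(magic, i + 1)
                    tail = data[-3:]   # keep enough bytes to catch a split signature
                    pos += len(chunk)
            return sorted(set(offsets))

        # Each offset is a candidate image start; a follow-up pass can slice from one
        # offset to the next and check whether an image library accepts the bytes.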

    Read the article

  • Recursive CTE with alternating tables

    - by SOfanatic
    I've created a SQL fiddle here. Basically, I have 3 tables: BaseTable, Files, and LinkingTable. The Files table has 3 columns: PK, BaseTableId, RecursiveId (ChildId). What I want to do is find all the children given a BaseTableId (i.e., ParentId). The tricky part is the way the children are found: take ParentId 1 and use that to look up a FileId in the Files table, then use that FileId to look for a ChildId in the LinkingTable; if that record exists, use the RecursiveId in the LinkingTable to look for the next FileId in the Files table, and so on. This is my CTE so far:

        with CTE as (
            select lt.FileId, lt.RecursiveId, 0 as [level], bt.BaseTableId
            from BaseTable bt
            join Files f on bt.BaseTableId = f.BaseTableId
            join LinkingTable lt on f.FileId = lt.FileId
            where bt.BaseTableId = @Id
            UNION ALL
            select rlt.FileId, rlt.RecursiveId, [level] + 1 as [level], CTE.BaseTableId
            from CTE --??? and this is where I get lost ...
        )

    A correct output for BaseTableId = 1 should be:

        FileId | RecursiveId | level | BaseTableId
        1      | 1           | 0     | 1
        3      | 2           | 1     | 1
        4      | 3           | 2     | 1
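
    The fiddle's exact schema is hard to pin down from the excerpt, so the sketch below only shows the general shape of the missing recursive member on a simplified link table: the second SELECT reads from the CTE itself, joins back to the linking data, and carries level + 1, stopping when no child row matches. sqlite3 stands in for the real database:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            -- Simplified stand-in for Files/LinkingTable: each row links a
            -- parent file id to the child file id it points at.
            CREATE TABLE links (parent_id INTEGER, child_id INTEGER);
            INSERT INTO links VALUES (1, 3), (3, 4);
        """)

        rows = conn.execute("""
            WITH RECURSIVE chain(file_id, level) AS (
                SELECT 1, 0                        -- anchor: the starting file
                UNION ALL
                SELECT l.child_id, c.level + 1     -- recursive member
                FROM chain c
                JOIN links l ON l.parent_id = c.file_id
            )
            SELECT file_id, level FROM chain
        """).fetchall()
        print(rows)   # typically [(1, 0), (3, 1), (4, 2)]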

    Read the article

  • How to use SOCI C++ Database library?

    - by NeDark
    I'm trying to use SOCI in my program but I don't know how. I'm using C++ on Linux, in a NetBeans project. I have followed the steps in http://soci.sourceforge.net/doc/structure.html to install it, and I tried to copy the files soci.h from /src/core and soci-mysql.h from /src/backends/mysql into my project, but it gives a compilation error (these files include other SOCI files, and it's illogical to copy all the files into the directory...). I have read the guide several times but I don't understand what I'm doing wrong; the examples only include these files. Thanks. Edit: I have given more information in a comment below the answer. I don't know what steps I have to take to use SOCI.

    Read the article

  • Linking to a file (e.g. PDF) within a CakePHP view.

    - by Hobonium
    I'd like to link to some PDFs in one of my controller views. What's the best practice for accomplishing this? The CakePHP webroot folder contains a ./files/ subfolder, but I am confounded by trying to link to it without using "magic" pathnames in my href (e.g. "/path/to/my/webroot/files/myfile.pdf"). What are my options? EDIT: I didn't adequately describe my question. I was attempting to link to files in /app/webroot/files/ in a platform-agnostic (i.e. no mod_rewrite) way. I've since worked around this issue by storing such files outside the CakePHP directory structure.

    Read the article
