Search Results

Search found 3296 results on 132 pages for 'executable compression'.

Page 10/132 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • Style sheet compression and .less add-in...updated with source

    Design-time minification and .NET .less support for style sheets. Read my previous post on this subject: http://blog.waynebrantley.com/2009/12/ultimate-automatic-stylesheet-combining.html Known issues: it has been reported that this does not work in a 'web site project'. I do not use those anymore, not since they brought back our 'web application project'. If anyone wants to try and make it work, the...

    Read the article

  • HTTP compression - can I configure a client to compress the data sent to a server?

    - by lgomide
    Hello, I'm using IIS 7 as the web server for my application. If I enable dynamic content compression on the server, will this also enable clients to send compressed data to the server, if they can? I mean, my application uses SOAP web services, and clients usually send large chunks of data to the server. The clients are written in C#/.NET. Is there any kind of configuration I can do in a web reference / service reference in order to tell them to compress the content before they send it to IIS? And do I have to do any kind of configuration in IIS in order for this to work? Thanks in advance
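
    A rough sketch of the client-side idea, shown in Python rather than C# just to keep it short (the URL, payload, and values below are placeholders): compress the request body yourself and label it with a Content-Encoding: gzip header. Note that IIS dynamic compression applies to responses, so the service side still needs something of its own that can decompress request bodies.

        import gzip
        import urllib.request

        # Placeholder endpoint and payload; in the real application these come from
        # the generated web/service reference.
        url = "http://example.com/MyService.asmx"
        soap_body = b"<soap:Envelope>...large request...</soap:Envelope>"

        compressed = gzip.compress(soap_body)          # shrink the body before sending
        request = urllib.request.Request(
            url,
            data=compressed,
            headers={
                "Content-Type": "text/xml; charset=utf-8",
                "Content-Encoding": "gzip",            # declare the body encoding
            },
        )
        with urllib.request.urlopen(request) as response:
            print(response.status, len(response.read()))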

    Read the article

  • How to make 7zip faster

    - by user34463
    I normally use WinRAR over 7zip simply because it's faster and only a little less efficient with compression. I did a few tests on different file types and sizes, comparing the 7zip and WinRAR default settings at their normal compression and their best compression, and in a lot of cases WinRAR was 50% faster and in some it was actually 100% faster. But I do like FOSS more. So here are my questions: Is there a way to speed 7zip up? I'd like it to at least be on par with WinRAR's speed. Is there a way to make recovery segments in 7zip like you can in WinRAR? I didn't see any, but I guess it could be a command-line thing. I tested WinRAR and 7zip using the latest stable version of each (4.something with 7zip). Is the 9.x beta release noticeably faster at compression? I'm talking about faster at a comparable setting in WinRAR, not just lowering to bare minimum compression. If it matters, I use a quad-core Intel i7 720 (1.6 GHz/2.8 GHz) with 4 GB DDR3 RAM, and the 64-bit version of 7zip.
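
    Not an authoritative answer, but the usual speed knobs on the 7z command line are a lower compression level (-mx) and multithreading (-mmt). A minimal sketch, wrapping the CLI from Python; the archive name and input folder are made up:

        import subprocess

        # Assumes the 7z command-line tool is on PATH; the paths below are placeholders.
        cmd = [
            "7z", "a", "archive.7z",   # 'a' = add files to an archive
            "-mx=1",                   # low compression level: much faster, larger output
            "-mmt=on",                 # use multiple CPU cores
            "bigfolder/",
        ]
        subprocess.run(cmd, check=True)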

    Read the article

  • How to make 7-Zip faster

    - by Matt
    I normally use WinRAR over 7-Zip simply because it's faster and only a little less efficient with compression. I did a few tests on different file types and sizes, comparing the 7-Zip and WinRAR default settings at their normal compression and their best compression, and in a lot of cases WinRAR was 50% faster and in some it was actually 100% faster. But I do like FOSS more. So here are my questions: Is there a way to speed 7-Zip up? I'd like it to at least be on par with WinRAR's speed. Is there a way to make recovery segments in 7-Zip like you can in WinRAR? I didn't see any, but I guess it could be a command-line thing. I tested WinRAR and 7-Zip using the latest stable version of each (4-dot-something with 7-Zip). Is the 9.x beta release noticeably faster at compression? I'm talking about faster at a comparable setting in WinRAR, not just lowering to bare minimum compression. If it matters, I use a quad-core Intel i7 720 (1.6 GHz/2.8 GHz) with 4 GB DDR3 RAM, and the 64-bit version of 7-Zip, and dual-boot Debian x64 5.0.4 and Windows 7 Home.

    Read the article

  • Use ImageMagick to convert TIFF to PNGs, how to improve the speed?

    - by Woo
    I am using "convert" from IM to get PNGs from multi-page TIFF files, everything is good except the speed. From "convert" documentation, I found: For the MNG and PNG image formats, the quality value sets the zlib compression level (quality / 10) and filter-type (quality % 10). For compression level 0, the Huffman-only strategy is used, which is fastest but not necessarily the worst compression. The default PNG compression is 75. So I tried "-quality 0", but almost no changes with the spreed. Anyone can share the ideas of how to improve the spreed? Here are my command: convert 100Pages.tif[0,1,2,3,4,5] -quality 0 100Pages.png Thanks!

    Read the article

  • Why does compressing and decompressing my SSD hard drive free up space?

    - by Paperflyer
    I bought an SSD (SandForce 2), created a tiny 25GB partition on it for Windows and installed Windows 7 64-bit. In order to free disk space, I enabled compression on the drive using the Properties entry in the context menu for the drive in Explorer. Prior to compressing I had around 5GB of free space. After compression I had 4GB, so compression was not working for me. I figured this might have happened because of the built-in data compression of the SSD. I decompressed the files again - after decompression, it left me with 7GB of free space! Better yet, after restarting, I had 10GB. What is happening here?

    Read the article

  • What is the recommended minimum object size for gzip performance benefits?

    - by utt73
    I'm working on improving page display speed, and one of the methods is to gzip content from the web server. Google recommends: Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger. We serve our content through Akamai, using their network for a proxy and CDN. What they've told me: Following up on your question regarding what is the minimum size Akamai will compress the requested object when sending it to the end user: The minimum size is 860 bytes. My reply: What is the reason that Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? (see below) Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are <860 bytes. Akamai's response: The reason 860 bytes is the minimum size for compression is twofold: (1) The overhead of compressing an object under 860 bytes outweighs the performance gain. (2) Objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them. So I'm here for some fact checking. Is packet size really the end of the reasoning behind the 860-byte limit? Why would high-traffic sites push this down to the 150-byte limit... just to save on bandwidth costs (since CDNs base their charges on bandwidth offloaded from origin), or is there a performance gain in doing so?
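
    One way to sanity-check the 150-vs-860-byte debate is simply to gzip a few real response bodies and compare sizes. A small sketch, with made-up payloads standing in for captured AJAX responses (the exact numbers will differ with real data):

        import gzip
        import json

        # Placeholder payloads; substitute bodies captured from the actual site.
        small = json.dumps({"ok": True, "id": 12345}).encode()
        medium = json.dumps([{"id": i, "name": "item-%d" % i} for i in range(30)]).encode()

        for body in (small, medium):
            packed = gzip.compress(body)
            verdict = "larger" if len(packed) > len(body) else "smaller"
            print("%d bytes -> %d bytes gzipped (%s)" % (len(body), len(packed), verdict))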

    Read the article

  • Oracle Solaris 11 ZFS Lab for Openworld 2012

    - by user12626122
    Preface This is the content from the Oracle Openworld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - that's probably because in writing it I became very time-conscious after the ASM/ACFS on Solaris extravaganza I ran last year, which was almost too long for mortal man to finish in the 1-hour session. Enjoy. Table of Contents Exercise Z.1: ZFS Pools Exercise Z.2: ZFS File Systems Exercise Z.3: ZFS Compression Exercise Z.4: ZFS Deduplication Exercise Z.5: ZFS Encryption Exercise Z.6: Solaris 11 Shadow Migration Introduction This set of exercises is designed to briefly demonstrate new features in the Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included is the creation of zpools and zfs file systems - the basic building blocks of the technology - and also Compression, which is the complement of Deduplication. The exercises are just introductions - you are referred to the ZFS Administration Manual for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M) with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in a VirtualBox VM running Solaris 11 with six virtual 3 GB disks to play with. Exercise Z.1: ZFS Pools Task: You have several disks to use for your new file system. Create a new zpool and a file system within it. Lab: You will check the status of existing zpools, create your own pool and expand it. Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this: root@solaris:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 15.9G 6.62G 9.25G 41% 1.00x ONLINE - root@solaris:~# zpool status pool: rpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 c3t0d0s0 ONLINE 0 0 0 errors: No known data errors Note the disk device the root pool is on - c3t0d0s0. Now you will create your own ZFS pool. First you will check what disks are available: root@solaris:~# echo | format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63> /pci@0,0/pci8086,2829@d/disk@0,0 1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@2,0 2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@3,0 3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@4,0 4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@5,0 5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@6,0 6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@7,0 Specify disk (enter its number): Specify disk (enter its number): The root disk is numbered 0. The others are free for use.
    Try creating a simple pool and observe the error message: root@solaris:~# zpool create mypool c3t2d0 c3t3d0 'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool So destroy that pool and create a mirrored pool instead: root@solaris:~# zpool destroy mypool root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0 root@solaris:~# zpool status mypool pool: mypool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM mypool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 c3t2d0 ONLINE 0 0 0 c3t3d0 ONLINE 0 0 0 errors: No known data errors Back to top Exercise Z.2: ZFS File Systems Task: You have to create file systems for later exercises. You can see that when a pool is created, a file system of the same name is created: root@solaris:~# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 86.5K 2.94G 31K /mypool Create your filesystems and mountpoints as follows: root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1 The -o option sets the mount point and automatically creates the necessary directory. root@solaris:~# zfs list mypool/mydata1 NAME USED AVAIL REFER MOUNTPOINT mypool/mydata1 31K 2.94G 31K /data1 Back to top Exercise Z.3: ZFS Compression Task: Try out different forms of compression available in ZFS. Lab: Create a 2nd filesystem with compression, fill both file systems with the same data, and observe the results. You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax: compression=on | off | lzjb | gzip | gzip-N | zle Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)). Create a second filesystem with compression turned on. Note how you set and get your values separately: root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2 root@solaris:~# zfs set compression=gzip-9 mypool/mydata2 root@solaris:~# zfs get compression mypool/mydata1 NAME PROPERTY VALUE SOURCE mypool/mydata1 compression off default root@solaris:~# zfs get compression mypool/mydata2 NAME PROPERTY VALUE SOURCE mypool/mydata2 compression gzip-9 local Now you can copy the contents of /usr/lib into both your normal and compressing filesystem and observe the results. Don't forget the dot or period (".") in the find(1) command below: root@solaris:~# cd /usr/lib root@solaris:/usr/lib# find . -print | cpio -pdv /data1 root@solaris:/usr/lib# find . -print | cpio -pdv /data2 The copy into the compressing file system takes longer - as it has to perform the compression - but the results show the effect: root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.35G 1.59G 31K /mypool mypool/mydata1 1.01G 1.59G 1.01G /data1 mypool/mydata2 341M 1.59G 341M /data2 Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations, which are not covered in this lab but are covered extensively in the ZFS Administration Guide. Back to top Exercise Z.4: ZFS Deduplication The deduplication property is used to remove redundant data from a ZFS file system.
    With the property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared. Task: See how to implement deduplication and its effects. Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib. root@solaris:/usr/lib# zfs destroy mypool/mydata2 root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1 root@solaris:/usr/lib# rm -rf /data1/* root@solaris:/usr/lib# mkdir /data1/2nd-copy root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02M 2.94G 31K /mypool mypool/mydata1 43K 2.94G 43K /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1 2142768 blocks root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02G 1.99G 31K /mypool mypool/mydata1 1.01G 1.99G 1.01G /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy 2142768 blocks root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.99G 1.96G 31K /mypool mypool/mydata1 1.98G 1.96G 1.98G /data1 You could go on creating copies for quite a while...but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool and there is a zpool-wide property dedupratio: root@solaris:/usr/lib# zpool get dedupratio mypool NAME PROPERTY VALUE SOURCE mypool dedupratio 4.30x - Deduplication can also be checked using "zpool list": root@solaris:/usr/lib# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT mypool 2.98G 1001M 2.01G 32% 4.30x ONLINE - rpool 15.9G 6.66G 9.21G 41% 1.00x ONLINE - Before moving on to the next topic, destroy that dataset and free up some space: root@solaris:~# zfs destroy mypool/mydata1 Back to top Exercise Z.5: ZFS Encryption Task: Encrypt sensitive data. Lab: Explore basic ZFS encryption. This lab only covers the basics of ZFS Encryption. In particular it does not cover various aspects of key management. Please see the ZFS Administration Manual and the zfs_encrypt(1M) manual page for more detail on this functionality. root@solaris:~# zfs create -o encryption=on mypool/data2 Enter passphrase for 'mypool/data2': ******** Enter again: ******** root@solaris:~# Creation of a descendant dataset shows that encryption is inherited from the parent: root@solaris:~# zfs create mypool/data2/data3 root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2 NAME PROPERTY VALUE SOURCE mypool/data2 encryption on local mypool/data2 keysource passphrase,prompt local mypool/data2 keystatus available - mypool/data2 checksum sha256-mac local mypool/data2/data3 encryption on inherited from mypool/data2 mypool/data2/data3 keysource passphrase,prompt inherited from mypool/data2 mypool/data2/data3 keystatus available - mypool/data2/data3 checksum sha256-mac inherited from mypool/data2 You will find the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session you may wish to explore the changing of a key using "zfs key -c mypool/data2". Exercise Z.6: Shadow Migration Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification to the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system.
    Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while maintaining access to it. Lab: Create the infrastructure for shadow migration and transfer one file system into another. First create the file system you want to migrate: root@solaris:~# zpool create oldstuff c3t4d0 root@solaris:~# zfs create oldstuff/forgotten Then populate it with some files: root@solaris:~# cd /var/adm root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten You need the shadow-migration package installed: root@solaris:~# pkg install shadow-migration Packages to install: 1 Create boot environment: No Create backup boot environment: No Services to change: 1 DOWNLOAD PKGS FILES XFER (MB) Completed 1/1 14/14 0.2/0.2 PHASE ACTIONS Install Phase 39/39 PHASE ITEMS Package State Update Phase 1/1 Image State Update Phase 2/2 You then enable the shadowd service: root@solaris:~# svcadm enable shadowd root@solaris:~# svcs shadowd STATE STIME FMRI online 7:16:09 svc:/system/filesystem/shadowd:default Set the filesystem to be migrated to read-only: root@solaris:~# zfs set readonly=on oldstuff/forgotten Create a new zfs file system with the shadow property set to the file system to be migrated: root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered Use the shadowstat(1M) command to see the progress of the migration: root@solaris:~# shadowstat EST BYTES BYTES ELAPSED DATASET XFRD LEFT ERRORS TIME mypool/remembered 92.5M - - 00:00:59 mypool/remembered 99.1M 302M - 00:01:09 mypool/remembered 109M 260M - 00:01:19 mypool/remembered 133M 304M - 00:01:29 mypool/remembered 149M 339M - 00:01:39 mypool/remembered 156M 86.4M - 00:01:49 mypool/remembered 156M 8E 29 (completed) Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar - see the ZFS Administration Manual. That concludes this lab session.

    Read the article

  • What is the recommended minimum object size for gzip benefits?

    - by utt73
    I'm working on improving page display speed, and one of the methods is to gzip content from the web server. Google recommends: Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger. We serve our content through Akamai, using their network for a proxy and CDN. What they've told me: Following up on your question regarding what is the minimum size Akamai will compress the requested object when sending it to the end user: The minimum size is 860 bytes. My reply: What is the reason that Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? (see below) Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are <860 bytes. Akamai's response: The reason 860 bytes is the minimum size for compression is twofold: (1) The overhead of compressing an object under 860 bytes outweighs the performance gain. (2) Objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them. So I'm here for some fact checking. Is packet size really the end of the reasoning behind the 860-byte limit? Why would high-traffic sites push this lower, closer to the 150-byte limit... just to save on bandwidth costs, or is there a performance gain in doing so?

    Read the article

  • Compressing/compacting messages over websocket on Node.js

    - by icelava
    We have a websocket implementation (Node.js/Sock.js) that exchanges data as JSON strings. As our use cases grow, so has the size of the data transmitted across the wire. The websocket protocol does not natively offer any compression feature, so in order to reduce the size of our messages we'd have to manually do something about the serialisation. There appear to be a variety of LZW implementations in Javascript, some of which confuse me as to their suitability for in-browser use only versus transmission across the wire, due to my lack of understanding of low-level encodings. More importantly, all of them seem to impose a noticeable performance drag when Javascript is the engine doing the compression/decompression work, which is not desirable for mobile devices. Looking instead at other forms of compact serialisation, MessagePack does not appear to have any active support in Javascript itself; BSON does not have any Javascript implementation; and an alternative BISON project that I tested does not deserialise everything back to the original values (large numbers), and it does not look like any further development will happen either. What are some other options others have explored for Node.js?
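
    For scale, here is the basic deflate-then-encode idea sketched in Python, purely to show the size arithmetic (on Node.js the equivalent would use the built-in zlib module, and base64 is one way to keep the result inside a text-only frame at roughly 33% overhead). The message below is a placeholder:

        import base64
        import json
        import zlib

        # Placeholder message; in practice this is the JSON already being sent.
        message = json.dumps({"rows": [{"id": i, "status": "pending"} for i in range(200)]})
        raw = message.encode("utf-8")

        deflated = zlib.compress(raw, 6)       # DEFLATE, the same algorithm Node's zlib exposes
        as_text = base64.b64encode(deflated)   # safe for a text-only websocket frame

        print(len(raw), "raw,", len(deflated), "deflated,", len(as_text), "base64")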

    Read the article

  • Why does Ubuntu refuse to execute files from an NTFS partition?

    - by Ivan
    I mount an NTFS partition (where I've got some Linux binaries and scripts alongside Win32 binaries and data files) with the following fstab line: /dev/sda5 /mnt/dat ntfs-3g rw,dev,exec,auto,async,users,umask=000,uid=1000,gid=1000,locale=en_US.utf8,errors=remount-ro 0 0 All files then seem to have the executable attribute set, but if I try to actually execute them, I get a "Permission denied" error. Even with sudo. Even though execute (as well as read and write) permission is granted to everyone and the owner of all the files is set to the user. So how do I set the system up to be able to run Linux binaries from NTFS?
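
    One frequently cited explanation (an assumption here, not a confirmed answer): the users option itself implies noexec, and mount applies options left to right, so an exec that appears before users gets switched back off. A hedged sketch of the reordered line, keeping the same device and ids; after editing fstab, remount the partition and retry the binary:

        /dev/sda5  /mnt/dat  ntfs-3g  rw,auto,async,users,exec,dev,umask=000,uid=1000,gid=1000,locale=en_US.utf8,errors=remount-ro  0 0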

    Read the article

  • Where can I find the application executables in the filesystem?

    - by richzilla
    Where are executables for programs stored in Ubuntu? An application (Komodo Edit) is asking me to identify an application to be used as a web browser. I've become used to just entering the application name as a command for situations such as these, but this scenario got me thinking. I know in Windows it would just be the relevant application folder in the 'Program Files' folder, but I'm assuming things are a bit different on Linux? I thought somewhere like bin would be logical, but this appears to contain only standard Linux/Unix applications. Where would I find the binary executable for applications stored on my system?
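
    Most packaged programs end up in /usr/bin (with /bin, /sbin, /usr/sbin and /usr/local/bin covering the rest), which is why entering the bare command name usually works. To resolve the actual path from a script, a small sketch (the command names are just examples):

        import shutil

        # PATH lookup, the same resolution the shell performs for a bare command name.
        for name in ("firefox", "gedit"):
            print(name, "->", shutil.which(name))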

    Read the article

  • genisoimage and exec bit preservation

    - by user92187
    Maybe I'm just not doing it right, but I can't seem to get genisoimage to produce a UDF image and preserve the exec bit. $ genisoimage --version genisoimage 1.1.11 (Linux) $ echo "echo 'Hello world'" > script.sh $ chmod +x script.sh $ ./script.sh Hello world $ genisoimage -input-charset utf-8 -r -udf -volid minimal -o minimal.iso script.sh Total translation table size: 0 Total rockridge attributes bytes: 250 Total directory bytes: 0 Path table size(bytes): 10 Max brk space used 0 420 extents written (0 MB) $ mkdir mount $ sudo mount minimal.iso $PWD/mount -o ro,loop -t udf $ ls -l script.sh mount/script.sh -r--r--r-- 1 root root 19 Sep 21 18:40 mount/script.sh -rwxrwxr-x 1 kip kip 19 Sep 21 18:40 script.sh You'll note from the last command that script.sh was executable at the time it was injected into the image, but does not appear to be executable inside the mounted image. Is this a bug in genisoimage, a problem with the way I am mounting the image, or a problem in my usage of genisoimage?

    Read the article

  • How to execute a script just by double-clicking, like .EXE files in Windows?

    - by maythux
    How can I make a bash script executable by double-clicking, just like .exe files in Windows? I tried creating a launcher and assigning the script to it, but there are two problems: the terminal flashes, disappears, and nothing is done; and you must specify 'run in terminal' in order for it to work. I have a script that installs Tomcat on an offline PC, with all of Tomcat's dependencies included in the script. I need to make the script work on double-click, like Windows, since most who use the script will not be familiar with Ubuntu. Forget the above explanation. I want to make a script that can be run by double-clicking on it, without using the terminal. Does anybody know how?
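
    One approach, sketched as a hedged example (file names and paths are placeholders): ship a .desktop launcher next to the script. With Terminal=false no window opens, so the script must not expect console input (log to a file instead); with Terminal=true you get the flash-and-vanish behaviour described above. Both the launcher and the script need the execute bit set.

        [Desktop Entry]
        Type=Application
        Name=Install Tomcat
        Exec=/home/user/install-tomcat.sh
        Terminal=false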

    Read the article

  • How to make a call to an executable from a Python script?

    - by fx
    I need to execute this script from my Python script. Is it possible? The script generates some output, with some files being written. How do I access these files? I have tried the subprocess call function, but without success. fx@fx-ubuntu:~/Documents/projects/foo$ bin/bar -c somefile.xml -d text.txt -r aString -f anotherString >output The application "bar" also references some libraries, and it also creates some files besides the output. How do I get access to these files? Just by using open()? Thank you,
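
    A minimal sketch of that exact shell line using the standard subprocess module (paths copied from the prompt above; adjust as needed). Files that bar writes alongside its inputs can then be read with plain open():

        import subprocess

        workdir = "/home/fx/Documents/projects/foo"

        # Equivalent of: bin/bar -c somefile.xml -d text.txt -r aString -f anotherString >output
        with open(workdir + "/output", "w") as out:
            code = subprocess.call(
                ["bin/bar", "-c", "somefile.xml", "-d", "text.txt",
                 "-r", "aString", "-f", "anotherString"],
                stdout=out,      # the >output redirection
                cwd=workdir,     # run where bar expects its input files
            )
        print("exit code:", code)

        # Any file bar produced in that directory is then readable as usual.
        with open(workdir + "/output") as f:
            print(f.read()[:200])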

    Read the article

  • Problem with Mono and .exe file

    - by Vere Nicolson
    I have purchased a piece of software to configure programmable radio control transmitters. It says it will run on Linux, see below: Digital Radio runs on: Microsoft Windows 2000/2003/XP, Microsoft Windows Vista/Seven/2008, Linux Ubuntu or a distribution with Mono, 32 or 64 bit, also in a virtual machine. Linux requires the Mono package installed, along with the Visual Basic 2005 runtime library. The Linux version is the same executable file as the Windows platform, and can be executed using Mono. You don't need Wine. All the tests have been done on Ubuntu Desktop 10.10. I have tried for weeks to get the drivers for the cable to work in XP or Win7 and I admit defeat. It looks like Ubuntu can run the cable effortlessly, but now I can't get the software going. I tried to run it in Ubuntu 10.04 with Mono; the GUI failed and I got the following message in the terminal: $ mono ~/Desktop/GigRadioLinux/DigitalRadio/DigitalRadio.exe The entry point method could not be loaded Windows installation requires using a 30-odd-character passkey and a 4.24k text file as a "license" to be entered while running the exe file. Can someone tell me how I enter the passkey and license into the terminal, or is that not my primary problem? I don't understand "entry point method". Tried Wine and that didn't work either. The developer responded to my earlier emails re the cable drivers, but hasn't replied to questions regarding this. If I have left out anything important let me know and I will try to supply more information.

    Read the article

  • Bash can't start a programme that's there and has all the right permissions

    - by Rory
    This is a gentoo server. There's a programme prog that can't execute. (Yes the execute permission is set) About the file $ ls prog $ ./prog bash: ./prog: No such file or directory $ file prog prog: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.2.5, dynamically linked (uses shared libs), not stripped $ pwd /usr/local/bin $ /usr/local/bin/prog bash: /usr/local/bin/prog: No such file or directory $ less prog | head ELF Header: Magic: 7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00 Class: ELF32 Data: 2's complement, little endian Version: 1 (current) OS/ABI: UNIX - System V ABI Version: 0 Type: EXEC (Executable file) Machine: Intel 80386 Version: 0x1 I have a fancy less, to show that it's an actual executable, here's some more data: $ xxd prog |head 0000000: 7f45 4c46 0101 0100 0000 0000 0000 0000 .ELF............ 0000010: 0200 0300 0100 0000 c092 0408 3400 0000 ............4... 0000020: 0401 0a00 0000 0000 3400 2000 0700 2800 ........4. ...(. 0000030: 2600 2300 0600 0000 3400 0000 3480 0408 &.#.....4...4... 0000040: 3480 0408 e000 0000 e000 0000 0500 0000 4............... 0000050: 0400 0000 0300 0000 1401 0000 1481 0408 ................ 0000060: 1481 0408 1300 0000 1300 0000 0400 0000 ................ 0000070: 0100 0000 0100 0000 0000 0000 0080 0408 ................ 0000080: 0080 0408 21f1 0500 21f1 0500 0500 0000 ....!...!....... 0000090: 0010 0000 0100 0000 40f1 0500 4081 0a08 ........@...@... and $ ls -l prog -rwxrwxr-x 1 1000 devs 725706 Aug 6 2007 prog $ ldd prog not a dynamic executable $ strace ./prog 1249403877.639076 execve("./prog", ["./prog"], [/* 27 vars */]) = -1 ENOENT (No such file or directory) 1249403877.640645 dup(2) = 3 1249403877.640875 fcntl(3, F_GETFL) = 0x8002 (flags O_RDWR|O_LARGEFILE) 1249403877.641143 fstat(3, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 0), ...}) = 0 1249403877.641484 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b3b8954a000 1249403877.641747 lseek(3, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek) 1249403877.642045 write(3, "strace: exec: No such file or dir"..., 40strace: exec: No such file or directory ) = 40 1249403877.642324 close(3) = 0 1249403877.642531 munmap(0x2b3b8954a000, 4096) = 0 1249403877.642735 exit_group(1) = ? About the server FTR the server is a xen domU, and the programme is a closed source linux application. This VM is a copy of another VM that has the same root filesystem (including this programme), that works fine. I've tried all the above as root and same problem. Did I mention the root filesystem is mounted over NFS. However it's mounted 'defaults,nosuid', which should include execute. Also I am able to run many other programmes from that mounted drive /proc/cpuinfo: processor : 0 vendor_id : GenuineIntel cpu family : 15 model : 4 model name : Intel(R) Xeon(TM) CPU 3.00GHz stepping : 1 cpu MHz : 2992.692 cache size : 1024 KB fpu : yes fpu_exception : yes cpuid level : 5 wp : yes flags : fpu tsc msr pae mce cx8 apic mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl cid cx16 xtpr bogmips : 5989.55 clflush size : 64 cache_alignment : 128 address sizes : 36 bits physical, 48 bits virtual power management: Example of a file that I can run I can run other programmes on that mounted filesystem on that server. 
For example: $ ls -l ls -rwxr-xr-x 1 root root 105576 Jul 25 17:14 ls $ file ls ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), stripped $ ./ls attr cat cut echo getfacl ln more ... (you get the idea) ... rmdir sort tty $ less ls | head ELF Header: Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 Class: ELF64 Data: 2's complement, little endian Version: 1 (current) OS/ABI: UNIX - System V ABI Version: 0 Type: EXEC (Executable file) Machine: Advanced Micro Devices X86-64 Version: 0x1
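
    A hedged observation rather than a definitive answer: file calls prog a 32-bit, dynamically linked ELF, ldd says "not a dynamic executable", and execve returns ENOENT; together these often mean the kernel cannot find the 32-bit dynamic loader the binary asks for, not that the file itself is missing. One way to check, sketched in Python (assumes readelf is installed and uses the path above):

        import os
        import re
        import subprocess

        # Ask readelf which program interpreter (dynamic loader) the binary requests,
        # then check whether that loader actually exists on this system.
        out = subprocess.run(["readelf", "-l", "/usr/local/bin/prog"],
                             capture_output=True, text=True).stdout
        match = re.search(r"interpreter: (\S+?)\]", out)
        if match:
            loader = match.group(1)
            state = "present" if os.path.exists(loader) else "MISSING"
            print("requested loader:", loader, "-", state)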

    Read the article

  • ClickOnce Deployment of an Application with more than one executable file

    - by Ahron Levi
    Hi, I am trying to deploy an application with two executable files, one of which is the application itself. I used the publish tab in VS 2008 and also tried to publish manually using MageUI.exe. In both cases I get the "Reference in the manifest does not match the identity of the downloaded assembly" error in regard to the second executable file. Does anyone know how to publish an application with two executable files? Thanks, Ahron

    Read the article

  • How to link C# and C++ assemblies into a single executable

    - by swingkid
    I have a VS2008 solution containing a project that generates a C# executable that references a project that generates a DLL containing both C++/CLI and unmanaged C++. I would like to merge these into a single executable, as the C++ DLL contains security code that I want to embed in the main executable. I cannot use ILMerge, as the DLL contains both managed and unmanaged code. The suggested solution seems to be to use link.exe to link the C# assembly with the C++ object files. This is what I am trying to do. I manually edited the project file for the C# executable to generate a netmodule. I added a post-build step to the executable project to run link.exe to link the C# netmodule and the compiled C++ object files together, then run mt.exe to merge the assembly manifests created by both projects. This runs successfully, but the exe still contains a reference to and uses the C++ types defined in the DLL generated by the normal build process for the C++ project. I then specified /NOASSEMBLY in the project settings for the C++ DLL, so it also generates a netmodule. In the C# project, I removed the reference to the C++ project, but added a project dependency in the solution. I manually edited the C# project file to include something similar to: <ItemGroup> <AddModules Include="..\Debug\librarycode.netmodule" /> </ItemGroup> i.e. to reference the C++ netmodule that is now generated by the C++ project. However, now the linker step in my post-build event fails with: error LNK2027: unresolved module reference 'librarycode.netmodule' fatal error LNK1311: 1 unresolved module references: This is entirely understandable, as I am not linking in the librarycode netmodule; I am linking in the C++ object files used to generate the netmodule instead. So in short, how do I merge a C# executable and C++ object files into a single assembly? What have I missed? My sources of reference so far (apart from the link.exe command-line reference etc. on MSDN) are the following two articles: http://blogs.msdn.com/texblog/archive/2007/04/05/linking-native-c-into-c-applications.aspx http://www.hanselman.com/blog/MixingLanguagesInASingleAssemblyInVisualStudioSeamlesslyWithILMergeAndMSBuild.aspx I have a demonstration solution that shows my workings so far, if this helps. Thank you very much in advance.

    Read the article

  • Configuring CruiseControl.NET with SourceSafe - Unable to load array item 'executable'

    - by albert
    Hi all, I'm trying to create a continuous integration environment. To do so, I've used a guide that can be found at http://www.15seconds.com/issue/040621.htm. In this step-by-step guide, the goal is to create a CI setup with CCNet, NAnt, NUnit, NDoc, FxCop and SourceSafe. I've been able to create my build by using the command prompt (despite the different version issues). The problem has come with the configuration of ccnet.config. I've made some changes because of the new versions, but I'm still getting errors when starting the CCNet server. Can anyone help me fix this issue, or point me to a guide for this scenario? The error that I'm getting: Unable to instantiate CruiseControl projects from configuration document. Configuration document is likely missing Xml nodes required for properly populating CruiseControl configuration. Unable to load array item 'executable' - Cannot convert from type System.String to ThoughtWorks.CruiseControl.Core.ITask for object with value: "\DevTools\nant\bin\NAnt.exe" Xml: E:\DevTools\nant\bin\NAnt.exe My CCNet config file below: <cruisecontrol> <project name="BuildingSolution"> <webURL>http://localhost/ccnet</webURL> <modificationDelaySeconds>10</modificationDelaySeconds> <triggers> <intervaltrigger name="continuous" seconds="60" /> </triggers> <sourcecontrol type="vss" autoGetSource="true"> <ssdir>E:\VSS\</ssdir> <executable>C:\Program Files\Microsoft Visual SourceSafe\SS.EXE</executable> <project>$/CCNet/slnCCNet.root/slnCCNet</project> <username>Albert</username> <password></password> </sourcecontrol> <prebuild type="nant"> <executable>E:\DevTools\nant\bin\NAnt.exe</executable> <buildFile>E:\Builds\buildingsolution\WebForm.build</buildFile> <logger>NAnt.Core.XmlLogger</logger> <buildTimeoutSeconds>300</buildTimeoutSeconds> </prebuild> <tasks> <nant> <executable>E:\DevTools\nant\bin\nant.exe</executable> <nologo>true</nologo> <buildFile>E:\Builds\buildingsolution\WebForm.build</buildFile> <logger>NAnt.Core.XmlLogger</logger> <targetList> <target>build</target> </targetList> <buildTimeoutSeconds>6000</buildTimeoutSeconds> </nant> </tasks> <publishers> <merge> <files> <file>E:\Builds\buildingsolution\latest\*-results.xml</file> </files> </merge> <xmllogger /> </publishers> </project> </cruisecontrol>
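
    A guess at the cause, offered only as an assumption: in later CCNet releases <prebuild> is a task list just like <tasks>, so it expects child task elements rather than bare <executable>/<buildFile> settings, which would explain the "Cannot convert from type System.String to ...ITask" message. A sketch of that shape, reusing the values from the config above:

        <prebuild>
          <nant>
            <executable>E:\DevTools\nant\bin\NAnt.exe</executable>
            <buildFile>E:\Builds\buildingsolution\WebForm.build</buildFile>
            <logger>NAnt.Core.XmlLogger</logger>
            <buildTimeoutSeconds>300</buildTimeoutSeconds>
          </nant>
        </prebuild>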

    Read the article

  • Optimize video filesize without quality loss

    - by user12015
    Is there a simple way (on the command line - I want to write a script which compresses all videos in a folder) to reduce the filesize of a video (almost) without quality loss? Is there a method which works equally well for different video formats (mp4, flv, m4v, mpg, mov, avi)? I should mention that most of the videos I would like to compress are downloaded web videos (mp4, flv), so it's not clear if there is much room for further compression.
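
    A sketch of such a folder script around ffmpeg, hedged: CRF re-encoding is "hard to tell apart visually" rather than mathematically lossless, already-compressed web downloads may not shrink much, and it assumes ffmpeg with libx264 is installed. The folder name and settings are placeholders:

        import pathlib
        import subprocess

        SRC = pathlib.Path("videos")                     # folder to process
        EXTS = {".mp4", ".flv", ".m4v", ".mpg", ".mov", ".avi"}

        for f in SRC.iterdir():
            if f.suffix.lower() in EXTS:
                out = f.with_name(f.stem + "_small.mp4")
                # -crf targets constant quality (lower = better/larger); audio is copied as-is.
                subprocess.run(["ffmpeg", "-i", str(f), "-c:v", "libx264",
                                "-crf", "23", "-preset", "slow",
                                "-c:a", "copy", str(out)], check=True)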

    Read the article

  • ClickOnce Deployment of an Application with more than one executable file

    - by Ahron Levi
    Hi, I am trying to deploy an application with two executable files, one of which is the application itself. I used the publish tab in VS 2008 and also tried to publish manually using MageUI.exe. In both cases I get the "Reference in the manifest does not match the identity of the downloaded assembly" error in regard to the second executable file. Does anyone know how to publish an application with two executable files? Thanks, Ahron

    Read the article

  • Tomcat deploy: make included scripts executable

    - by AlexS
    I'm developing a web application (for Tomcat) using NetBeans on Windows 7. For the web application to run I need to run an install script once. This script (*.bat for Windows and *.sh for Linux) is included in my WAR file (WEB-INF). Now every time I deploy the WAR file and want to run the script on Linux, I have to call chmod +x install.sh first. Is there a way that this script can be made executable by default? I don't want to have to execute some extra commands after the deploy to make the script executable. For clarification: I'm not new to Linux and I know how to set executable rights on files. That's not the problem. My problem is: What do I have to do so that this script is executable right after Tomcat has deployed (unpacked) my *.war file? If I were using Linux for development as well, I would try to set the rights accordingly in my sources (maybe I'll try it when I have a little more spare time). But I am using Windows and NetBeans. Are there any attributes I can set to achieve my goal, or is it possible to achieve this using Ant? By the way: Are there security-related issues with this approach? The script looks for the Java executable and calls a Java-based GUI installer...
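
    Since Ant is mentioned as an option: Ant has a chmod task, but it only has an effect when the step actually runs on the Linux machine (it is silently ignored on Windows), so it would belong in a deploy script executed on the server rather than in the Windows build. A hedged sketch, with the path expressed through a placeholder property:

        <target name="fix-permissions">
          <!-- Only does anything on Unix; a no-op on Windows. -->
          <chmod file="${catalina.base}/webapps/myapp/WEB-INF/install.sh" perm="ugo+rx"/>
        </target>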

    Read the article

  • Anyone have any opinions about Chilkatsoft? [closed]

    - by Joe Enos
    I'm considering purchasing the Chilkatsoft bundle, which includes a bunch of libraries on lots of technologies. Specifically, I care about .NET compression, encryption, FTP, and mail libraries, but I'm interested in looking at the rest of their stuff as well. Does anyone have any experience using these libraries, or opinions on the company or product in general? The price is right, and the content seems good, so I just want to make sure I do my homework before purchasing. Thanks

    Read the article

  • link dll to executable

    - by user353707
    How can I link the .dll file to an executable? I do not have the source for the DLL or the executable. The two files operate on a 64-bit system. When the executable is ported from another system, I get "The application failed to initialize properly (0xc0150002). Click OK to terminate the program."

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >